By: Steve Presley, Product Manager, NetDocuments
There are many facets to a successful Zero Trust Security implementation, and the process continues long after deployment. In this ILTA roundtable series, our goal is to discuss some of the major topics to consider when starting this important journey. The companion blog posts, such as this one, augment the roundtable content by covering topics that were not addressed in a particular session, or by expanding on points that were discussed but that we did not have time to explore in depth. Additional topics may also be linked for reference or further investigation as you proceed. Some may apply to your organization and some may not, and further research is needed on your part to identify areas the sessions and blog posts do not cover.
In the first roundtable, we discussed the logical components of an implementation: the definition of Zero Trust; the certifications and validations available from industry standards bodies and certification authorities; the most important parts of the process; key elements as they pertain to law firms; and the shift in thinking at the organizational level required not only to implement but to maintain a successful Zero Trust governance process after deployment.
The discussion in the first roundtable was a great overview of topics to consider for your own implementation, so please review the video and take notes as to items that may apply to your situation. I’d like to give a quick overview of what Zero Trust is at a high level, then focus on two topics.
Briefly, Zero Trust is the idea that ONLY the person(s) authorized to access data for a particular body of work have access, and everyone else in the organization does not. Zero Trust can be described as an Inclusionary Ethical Wall. It can also be referred to as pessimistic security or a “need to know” security model.
It is the exact opposite of the idea of an Exclusionary Ethical Wall, where one or more people are explicitly DENIED access to a body of work, but everyone else in the firm can see it.
Along with Ethical Walls, there are broader ethical considerations to Zero Trust that go beyond a security model applied only to documents on certain parts of the network. NIST defines Zero Trust as focusing on “protecting resources (assets, services, workflows, network accounts, etc.), not network segments, as the network location is no longer seen as the prime component to the security posture of the resource”.
There is a fundamental shift in thinking that must happen to implement a true Zero Trust model, and the first step is to determine which rules your organization needs to adhere to. Most of these rules are driven by client requests; others are jurisdictional or regulatory requirements based on your industry sector (finance, government, etc.) or the geographic region(s) your firm does business in. The requirements (and the governance policies that follow) should consider the contexts in which information subject to the rules can be stored, viewed, or even discussed. For example, should you be discussing a Swiss matter with a colleague in a public elevator in your New York City office building? Or, while working on a plane's Wi-Fi, should you be viewing staff compensation or contracts with a potential customer where your seatmates can see your laptop screen? Again, this information is “outside” any location in your firm's network, so it should be governed whether it is on your laptop or iPad, printed out while you work remotely from a café, or even spoken aloud as part of a phone or in-person conversation.
Once the requirements and restrictions are identified, the next two steps are to consider Architecture and Identity Governance.
Architecture is a broad topic, and rightly so, as a firm's data is typically spread far and wide, both metaphorically and geographically. Data is stored in any manner of places: network shares, desktops, mobile devices, CDs/DVDs/USB drives, tape backups, records closets, microfiche libraries, off-site deep storage, enterprise systems and databases, and cloud storage or application providers, often scattered across the globe or even across someone's desk. Frequently, data storage is distributed intentionally for disaster recovery and data redundancy as part of business continuity plans, but thought must also be given to the unintentional dispersion of data as it comes to rest near the people who perform the firm's day-to-day work.
Brick-and-mortar offices, home offices, cafés, airports, taxis, and trains are all locations where people now have internet access and are working from. Even the new Starship rockets have satellite Wi-Fi embedded as part of their own architecture, and in one live stream the crew inadvertently published the IP addresses of the onboard cameras. While the intent was to provide live footage of a launch, nefarious actors attempted to flood those IPs and gain access within minutes.
Many breaches leverage data that is not kept under lock and key by default, so an innocuous gesture to share something of value often carries unanticipated or hidden data embedded within it that third parties can use for competitive, criminal, or other purposes. While an IP address is an example of a network asset accidentally published to the world, the idea that data exists only on “Earth” is also worth reconsidering. As part of the renewed space initiatives, within the next few years you could be doing business with data created or edited in space, on the Moon, or even on Mars. Data at rest in server racks is no longer the extent of a “security policy”: data in transit, on end-user devices, in public and private clouds, on the internet, and literally on other worlds is all now fair game.
Identifying the various data sources and repositories in your organization as well as those in the public or private cloud is the first step in understanding your firm’s corpus of work. The second step is to compile the rules and regulations you need to adhere to.
Based on client requests, you may be required to limit permissions so that only the authorized teams in your organization are allowed to access data related to a particular client. Once access is granted, you may further need to govern which actions are allowed with the data. This is typically done in concert with a Data Loss Prevention (DLP) initiative, where policy-based rule sets and classification labels can be applied based on the restrictions dictated. For instance, even if an attorney can access an email received from a client, rules may be in place preventing the attorney from printing or forwarding that email once received.
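To make the idea of a policy-based rule set concrete, here is a minimal Python sketch. The labels, actions, and the default-deny behavior are illustrative assumptions, not the API of any particular DLP product:

```python
# Minimal sketch of a policy-based DLP rule set.
# The classification labels and actions below are hypothetical examples;
# each label maps to the set of actions it permits.

POLICY = {
    "public":       {"view", "print", "forward", "copy"},
    "internal":     {"view", "print", "forward"},
    "confidential": {"view", "print"},
    "restricted":   {"view"},  # e.g. client-mandated: no print, no forward
}

def is_allowed(label: str, action: str) -> bool:
    """Return True if the action is permitted for a document carrying this
    classification label. Unknown labels deny everything (default-deny,
    in keeping with Zero Trust)."""
    return action in POLICY.get(label, set())

# An attorney may view a 'restricted' client email but not forward it:
print(is_allowed("restricted", "view"))     # True
print(is_allowed("restricted", "forward"))  # False
```

Note that the lookup defaults to an empty set: a document with a missing or unrecognized label is denied every action, which mirrors the "need to know" posture described above.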
Based on the location of the client, or the jurisdiction in which a matter or case is filed, documents may need to be stored in a specific geographic region, and physical access to the storage of those documents must be tightly controlled. Zero Trust expands beyond technological access to data to include physical access, whether to virtual or physical IT systems or to paper records in boxes in a warehouse. If doing business with clients in Switzerland or Saudi Arabia, those documents must be stored at rest in those countries, so having a geographic storage rule for those situations is paramount, not only for access limitations but also for compliance.
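A geographic storage rule of this kind can be sketched as a simple residency check. The rule table below is an illustrative assumption (the Swiss and Saudi entries follow the example above; the US entry is invented for contrast), not legal advice or a real product configuration:

```python
# Hypothetical sketch: enforce data-residency rules per client jurisdiction.
# Keys are client jurisdictions; values are the storage regions permitted
# for that client's documents at rest.

RESIDENCY_RULES = {
    "CH": {"CH"},        # Swiss matters must be stored in Switzerland
    "SA": {"SA"},        # Saudi matters must be stored in Saudi Arabia
    "US": {"US", "EU"},  # invented example: US matters may also reside in the EU
}

def storage_region_ok(client_jurisdiction: str, storage_region: str) -> bool:
    """Default-deny: if no rule is defined for the jurisdiction, refuse
    the placement rather than assume it is allowed."""
    allowed = RESIDENCY_RULES.get(client_jurisdiction)
    return allowed is not None and storage_region in allowed

print(storage_region_ok("CH", "CH"))  # True
print(storage_region_ok("CH", "US"))  # False
```

As with the DLP sketch, an unknown jurisdiction fails closed; in a Zero Trust posture, the absence of a rule is a reason to block storage, not to permit it.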
The examples above cover defining your rules; the third step, which is intertwined with the first two, is to classify the data according to which rule set(s) — there may be one or more — apply to it.
Without understanding first where the data currently resides, both within and outside your enterprise, and second which rules you must adhere to due to governmental, industry, and/or client requirements, it is impossible to classify the data and become compliant in a Zero Trust scenario. Applying those classifications and rules is the bare minimum to think about when starting a Zero Trust implementation. Physically moving paper records, and provisioning the bandwidth and hardware needed to store computerized documents, is another logistical feat with its own planning challenges. Continually auditing the known storage endpoints in the architecture, as well as newly discovered or previously unknown endpoints, is also an ongoing task throughout the process. Ensuring that data is classified properly is an often-overlooked piece of the implementation, as people tend to store data in whatever fashion is easiest and quickest for them. This usually means it is classified incorrectly: filed in a sub-folder under a personal workspace instead of the correct client's workspace, or kept in a desk drawer instead of being sent to the Records department.
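The kind of ongoing audit described above — flagging documents saved in the wrong workspace — can be sketched in a few lines. The path convention (`clients/<id>/...`) and the `Document` fields are assumptions invented for illustration; a real audit would draw on your document management system's metadata:

```python
# Hypothetical sketch: flag documents that appear misfiled, e.g. a client
# document saved under a personal workspace instead of the client workspace.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Document:
    path: str                 # where the document was actually saved
    client_id: Optional[str]  # client the content relates to (None = no client)

def flag_misfiled(docs):
    """Return documents whose storage location does not match their client.
    The 'clients/<id>/...' path convention is an assumption for this sketch."""
    return [
        d for d in docs
        if d.client_id and not d.path.startswith(f"clients/{d.client_id}/")
    ]

docs = [
    Document("clients/acme/contracts/nda.docx", "acme"),
    Document("personal/steve/drafts/nda.docx", "acme"),  # misfiled
]
print([d.path for d in flag_misfiled(docs)])  # ['personal/steve/drafts/nda.docx']
```

Running a check like this on a schedule turns classification from a one-time cleanup into the continual audit the process requires.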
As we progress through the series, feel free to send us feedback and comments around challenges that you are facing and would like guidance on, or if you can share your solution to challenging aspects to help others in their Zero Trust journey, it would be greatly appreciated.
About the Author:
Steve Presley is a Product Manager at NetDocuments, responsible for the PROTECT Security & Governance suite of products, Mobile Applications, and Admin/Migration Tools. In his 25 years of experience in web/mobile/application development, database management, desktop deployments and enterprise platform management and architecture, a common thread weaves through his various experiences – securing applications and platforms based on organizational needs and external compliance rules.
He is a member of the ILTA Security & Compliance CCT team as well as a charter member of the ASPInsiders. He is also a speaker and organizer for community development User Groups, CodeCamps and national development conferences.