The data loss prevention (DLP) market has had to overcome significant challenges since the introduction of the concept. DLP is now undoubtedly a necessary and business-critical part of a modern company's IT infrastructure; however, misconceptions about how to implement a successful DLP programme are still plentiful.
Businesses choose to implement a DLP solution for a variety of reasons, but mainly to protect customer data and/or their intellectual property. While DLP solutions can achieve these business objectives, many DLP programmes fail because companies try to implement large, overarching systems that encompass too much information, too soon. This is a perennial issue for DLP programmes: without realistic targets, costs spiral, the project demands more resources than anticipated, and it takes too long to show value.
People assume they cannot get value from their DLP deployment without answering the following questions: What is my sensitive data? Who owns this data? Where is my sensitive data?
While these questions are indeed important, businesses should be aware that treating them as prerequisites for a DLP project is a mistake: it leads to a lengthy and costly process of data classification, ownership assignment and discovery, without delivering any real value or risk reduction on the DLP side.
One customer in the financial industry in Europe indicated it had taken more than two-and-a-half years to determine what technologies it needed to answer the above questions, when the actual business requirements were to protect intellectual property in unstructured data (documents), which could have been done without that process.
Let's look at those three processes in more detail.
Most people implementing a DLP solution assume they need to classify data to find out what portion is deemed sensitive. Data classification is a common procedure, but manually applied labels cannot keep pace with the constant creation of new data, so sensitive information can still leak out.
A better approach to this question is data identification, where the "classification" is determined automatically by the content of the file, rather than manually by someone who works on the file.
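The content-based identification idea can be sketched as follows. This is a minimal illustration, not a real detection engine: the two patterns below are illustrative assumptions, and commercial DLP products use far richer techniques (document fingerprinting, exact data matching, machine learning).

```python
import re

# Illustrative patterns only - assumptions for this sketch, not real DLP rules.
PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}

def identify(text):
    """Return the sensitivity categories found in a piece of content.
    The category comes from the content itself, not from a manual label."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

print(identify("Card on file: 4111 1111 1111 1111"))  # ['payment_card']
print(identify("quarterly minutes, nothing sensitive"))  # []
```

The key design point is that the file carries its own "classification": whoever created or last edited it never has to remember to tag it.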
If customers still want to use data classification, it is best to start with small steps and only on those projects that require absolute protection, such as prototypes. Securing this small but high-profile first step will not only generate a real and quick return on investment, but also build the momentum necessary to move on to the harder aspects of the programme.
Many believe in order to be able to protect data without impacting the business, data ownership must be determined in advance. However, if planned correctly, data ownership can be determined by using the DLP capabilities of data identification/categorisation.
The process should identify owners from various business units simply based on the data type itself: financial data owned by finance, HR data owned by HR, etc. Most systems that offer data ownership marking look only at file and folder attributes and ignore the data itself. But how do you decide who owns a miscellaneous file? Is it the creator? The person who last modified it? The person who accesses the file the most?
Most document management systems and file servers support the ability to investigate a file's attributes and help in determining data ownership. This is especially important after an incident, when you need to know where the data originated and find out how valuable it is. However, most people assume you must assign an owner to everything for it to be useful.
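The attribute-based approach those systems take can be sketched with standard filesystem metadata. This is a heuristic sketch under stated assumptions: it surfaces the account that owns the file on disk, which, as the article notes, is not necessarily the business owner of the data inside it.

```python
from datetime import datetime, timezone
from pathlib import Path

def file_attributes(path):
    """Collect the metadata most ownership-marking systems rely on.
    Note: Path.owner() reports the filesystem account (POSIX; it may
    raise on older Windows Pythons), not the business data owner."""
    p = Path(path)
    st = p.stat()
    return {
        "owner": p.owner(),
        "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
        "size_bytes": st.st_size,
    }
```

In practice, this kind of lookup is most useful after an incident, to trace where a file came from, rather than as an up-front requirement to label every file with an owner.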
To answer the question of where sensitive data is located, many believe they need to launch a large-scale discovery programme. However, the process of discovery across networks can be compared to running an anti-virus program for the first time: the number of events/alerts generated by the system will be enormous. Customers are usually concerned with how much time it will take the discovery system to scan all their data; collecting that much data might be achieved in a short period, but processing it all could take much longer. Customers should also understand the value this process will bring; merely knowing that certain data sits in specific places is not enough. The investigation process also needs to answer the questions: Why is it there? Who can access it? Can it be moved? Additionally, if remediation is required, a second scan is needed to confirm the actions were taken, which adds yet more time.
A better approach is to focus on a specific data set or a specific file server/storage system: identify what data sits there and why, and, if needed, identify the owner of the data. You can then recommend the necessary actions (move, encrypt or archive the data), check whether those actions were taken, and then move on to another critical folder without tackling irrelevant data.
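The targeted approach above can be sketched as a scan of a single critical folder rather than the whole estate. This is a minimal sketch with an assumed, illustrative detection pattern; real discovery tools handle binary formats, permissions and far richer detection.

```python
import re
from pathlib import Path

# Illustrative card-number pattern - an assumption for this sketch.
SENSITIVE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_folder(folder):
    """Scan one critical folder and list files whose content matches the
    sensitive pattern - candidates to move, encrypt or archive. Re-running
    the scan after remediation checks whether the actions were taken."""
    hits = []
    for f in Path(folder).rglob("*"):
        if f.is_file():
            try:
                if SENSITIVE.search(f.read_text(errors="ignore")):
                    hits.append(str(f))
            except OSError:
                continue  # skip unreadable files rather than abort the scan
    return sorted(hits)
```

Scoping the scan to one folder keeps the alert volume manageable and gives a clear before/after comparison when the same scan is run again to verify remediation.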
These misconceptions lead many organisations to make the mistake of seeing a DLP programme as an "all-in-one" deal. The right approach to launching a DLP programme is one that is focused and targeted. Once the business-critical data is categorised/identified, the DLP programme can be rolled out to new data sets, while continuing to manage the programme for the existing policies. Company directors, after all, are more interested in the results of the projects and the financial returns.
The sensitive nature of the information being handled by DLP programmes often means companies prefer to handle the process in-house. While there are many talented IT staff working within companies, DLP often requires a specialist security consultant or partner to support, and properly manage, the system.
Choosing the right partner for your DLP programme will help it achieve a successful ROI, ensure the core management tools are implemented to monitor data activity, and enable proactive steps to keep information properly monitored and secured.
While DLP will not solve every problem within your IT systems, it is a fundamental part of a modern IT infrastructure and will provide tangible results to your business if the right steps are taken.
Lior Arbel is the chief technical officer of Performanta (UK). Performanta specialises in information security and risk management, offering enterprise clients end-to-end products, services and consulting capabilities.
Performanta will be participating in the forthcoming ITWeb Security Summit, which will be staged at the Sandton Convention Centre from 27-29 May. In over 30 sessions presented in tracks for either senior business management or IT security professionals, information security professionals will examine the risks facing enterprise information systems today, and the strategies and technologies needed to counter them. In-depth workshops will also be presented on day three of the event, offering practical training on security status reporting and testing Web applications for security vulnerabilities.
For more information, go to www.securitysummit.co.za. Join the conversation on Twitter at #itwebsec