Saturday, 2 June 2007

Humblest pie - part 3: Data-classification

I'm almost done with my preaching, and with my parallel apologising to Rory. This should be short and sweet, so I'll finish with a summary and say goodnight.

Data-classification can be done in a number of ways. I'm not 100% sure of the process now I come to think about it, but it basically involves assigning attributes to every single piece of data on your network. This is obviously a massive job, and once it's done, you need to control access to that data. That requires a device in line with the user accessing the data, plus a massive initial effort of data processing. But this is being done as we speak in a number of very large companies: EMC offers a data-classification service, NetApp has OEMed Kazeon, and other solutions are attracting huge investment. This is a "right now" opportunity.
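To make the tagging step concrete, here's a minimal sketch of what "assigning attributes to every piece of data" might look like. The label names and fields are hypothetical, not taken from any vendor's product:

```python
# Hypothetical classification attributes attached to each item of data.
SENSITIVITY_LEVELS = ["public", "internal", "confidential", "secret"]

def classify(path, owner, sensitivity):
    """Attach classification attributes to a single item of data."""
    if sensitivity not in SENSITIVITY_LEVELS:
        raise ValueError("unknown sensitivity: %s" % sensitivity)
    return {"path": path, "owner": owner, "sensitivity": sensitivity}

# Building the catalogue is the "massive initial effort": one record per item.
catalogue = [
    classify("/finance/q1-results.xls", "finance", "confidential"),
    classify("/marketing/brochure.pdf", "marketing", "public"),
]
```

In a real deployment the catalogue would be built by crawling storage and applying classification rules, but the output is essentially this: data plus attributes.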

The data-classification companies are selling this service as a "de-duping" activity, removing duplicate data from systems to save on storage costs. This is a good way to sell to short-sighted management, but the long term benefits are far greater.
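The de-duping part is the easiest piece to illustrate: once everything has been crawled, duplicate content can be found by hashing it. A toy sketch (function and file names are made up):

```python
import hashlib

def dedupe(blobs):
    """Keep one copy of each distinct content, keyed by its SHA-256 digest."""
    seen = {}
    for name, content in blobs:
        digest = hashlib.sha256(content).hexdigest()
        seen.setdefault(digest, (name, content))  # first copy wins
    return list(seen.values())

files = [
    ("report-v1.doc", b"quarterly figures"),
    ("copy of report.doc", b"quarterly figures"),  # duplicate content
    ("memo.txt", b"all-hands on friday"),
]
unique = dedupe(files)  # two distinct items survive
```

The storage saving is what management sees; the classification metadata gathered along the way is the longer-term win.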

Data-classification allows security to be applied directly to data. Access decisions can be made based on the data's sensitivity level and the user's clearance level. Data is always kept at an appropriate level of security in storage, and presented to applications with an appropriate level of control.
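The access decision itself can be as simple as comparing clearance against sensitivity. Here's a sketch of one common model (read access allowed only when clearance dominates sensitivity); the level names are illustrative:

```python
# Ordered sensitivity/clearance levels, lowest to highest (illustrative only).
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def may_read(user_clearance, data_sensitivity):
    """Allow reading only when the user's clearance is at or above
    the data's sensitivity level (no reading up)."""
    return LEVELS[user_clearance] >= LEVELS[data_sensitivity]

may_read("confidential", "internal")  # allowed
may_read("internal", "secret")        # denied
```

An in-line device enforcing this sits between user and storage, consulting the classification catalogue for each request.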


Data-classification allows the appropriate level of control in data-centric security systems. Data-centric security allows freer movement of data between user and storage, because a large number of network controls can be removed or consolidated. DRM remains an endpoint issue: on a network it can be largely solved with application control inside a network context, and with physical controls outside of that context. Outside of a network, it is still a very difficult topic to address.

Thanks to Rory McCune for his patience and sticking to his guns. I get what you were talking about now...
