Monday, February 16, 2009

Why Centricity Doesn't Support Privacy

Every now and then I see people putting forward the notion that they would like to physically control all their identity information and have it held in a single place of their choosing. But does that really solve the problem of privacy? It certainly seems like it does.

The problem is that this idea ignores the nature of relationships we establish with service providers and agencies on the Internet. Bob Blakley puts it nicely here:
Identity 2.0 mistook the symptom (inaccurate identities, and the damage to privacy and ability of users to transact effectively) for the cause and took up the banner of “user centricity.” User centricity will not solve the Identity 1.0 problem; indeed, it will make the problem worse by creating antagonism between individuals and businesses. This antagonism will undermine already weak relationships and thereby make it even more difficult for businesses to get the identity information they need.
Part of the antagonism that Bob Blakley is referring to is the assumption that service providers are doing bad things, and that users must therefore have control through a centralized identity provider of their choosing. Does centralizing information in a place of our choosing improve our privacy? Some have argued yes, because eventually vendors won't need to retain information: we can give it to them every time we communicate with them and audit when they use it. That would be a cool thing if it covered the entire issue.

We may want a vendor to never hold our identity information, but what about when we expect the vendor to actually do something with the information we provide, such as ship a book or offer a subscription? Often, delivery of the service itself requires the use of our personal information. Even using a provider's services may in fact generate new personal information. That is the nature of having a strong relationship. As Bob indicates, relationship is something Identity 2.0 does not take into account.

When we use services, new information about ourselves is often generated. Privacy is not just about controlling and consenting to the use of information we give to a vendor, but also about what that vendor does with what it learns about us through the use of their service. For example, if I sell a bunch of items on a fictional electronic marketplace called eStuff, I will gain a reputation with eStuff. Reputation is not something self-asserted. In this case, "reputation" is a piece of my persona that gets generated as a result of using the eStuff service. The more transactions I successfully complete and the higher my trading partners rank me, the better my "reputation". In this case, "reputation" is a result of my relationship with eStuff.
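To make the idea concrete, here is a minimal sketch of how a marketplace like the fictional eStuff might derive a reputation score from transaction history. The data model and scoring formula are purely illustrative assumptions, not anything eStuff (or any real marketplace) actually does; the point is simply that the score is computed by the provider from relationship data the user never self-asserted.

```python
# Hypothetical sketch: a marketplace-side reputation score derived from
# transaction outcomes and partner ratings. The formula is an illustrative
# assumption; real services use their own (usually proprietary) models.

def reputation(transactions):
    """Each transaction is a (completed, rating) pair, rating in 1..5.

    Reputation rises with both the average partner rating and the share
    of transactions successfully completed.
    """
    completed = [rating for done, rating in transactions if done]
    if not completed:
        return 0.0
    avg_rating = sum(completed) / len(completed)
    completion_share = len(completed) / len(transactions)
    return round(avg_rating * completion_share, 2)

history = [(True, 5), (True, 4), (False, 0), (True, 5)]
print(reputation(history))  # prints 3.5
```

Note that nothing in this computation comes from the user directly: both inputs (completion and partner ratings) are generated inside the eStuff relationship.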

Because "reputation" has value, we could conclude that there is value in making that assertion portable. Yet clearly, this personal information is information generated by eStuff based on my relationship with eStuff and its other customers. Because of my relationship with eStuff, the eStuff service could be said to be an "authoritative" source for reputation. eStuff, having invested in its reputation service, has a business interest in how the information is used.

Of course, we would like to feel we have control over our reputation and over how others get to see our reputation information. eStuff, the generator of this information, also has its own interests. It is for this reason that moving the storage of this data to a third-party provider who has no relationship with eStuff becomes problematic. We should have a say in the use of our own information, but the asserting party (eStuff) should also have a say. It is part of a healthy relationship! If this personal information is to be given to another party, both the person and the asserting provider should have an opportunity to agree.
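The dual-agreement idea can be sketched as a simple gate: an asserted attribute is released to a third party only if both the subject and the asserting provider consent. The function, party names, and consent model below are all hypothetical, just a way of showing that release is a conjunction of two decisions rather than one.

```python
# Hypothetical sketch of dual consent: an asserted attribute (e.g. a
# reputation score) is released to a requesting party only if BOTH the
# user (the subject) and the asserting provider (e.g. eStuff) agree.

def may_release(attribute, requester, user_consents, provider_consents):
    """Consent maps take (attribute, requester) -> bool; absent means no."""
    key = (attribute, requester)
    return user_consents.get(key, False) and provider_consents.get(key, False)

user = {("reputation", "bookstore.example"): True}
provider = {("reputation", "bookstore.example"): False}  # eStuff declines

print(may_release("reputation", "bookstore.example", user, provider))  # False
```

Either party withholding agreement blocks the release, which is exactly why parking the data with a third party that has no relationship with the asserting provider breaks the model.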

In practice, eStuff asserts its rights by being the issuer of claims, while the user asserts rights by consenting. User-centric systems enhance this process by allowing the user to become the vehicle of exchange, generating implied consent.

eStuff is just a hypothetical example, but there are many other places where applications generate information about us. An employer's HR system generates information about who is an employee and knows our job roles. A travel agent knows our travel preferences and travel plans. The Department of Motor Vehicles knows about our driving record. Facebook, LinkedIn, and other social networks know about our relationships and our interests. Each of these sources can be considered "authoritative" because we have a relationship with these sources. Used with our consent, authoritative personal information can improve our relationships with other service providers.

The challenge today is not how we effect the transfer of information; there are plenty of protocols for that. The challenge is defining the programmatic policy that describes when information should be shared, or for that matter updated, and what constraints or consent users have stipulated. Oracle has proposed that XACML is the best tool for the job and, in particular, has proposed AAPML as a profile of XACML for use in relation to identity services. As XACML matures, watch this space for more information on the progress of AAPML towards standardization.
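AAPML and XACML are XML policy languages, so their actual syntax is more verbose than this, but the kind of decision such a policy encodes can be illustrated in a few lines. The sketch below mimics only the decision logic: a request is matched against release rules and yields Permit or Deny, with a default-deny when nothing applies. The rule structure and attribute names are illustrative assumptions, not AAPML syntax.

```python
# Rough illustration of the kind of decision an XACML/AAPML-style
# attribute-release policy encodes. This is NOT AAPML or XACML syntax,
# just the shape of the decision: match a request against rules and
# return the first applicable rule's effect, defaulting to Deny.

PERMIT, DENY = "Permit", "Deny"

def evaluate(request, rules):
    """First-applicable combining: return the first matching rule's effect."""
    for rule in rules:
        if all(request.get(key) == value
               for key, value in rule["target"].items()):
            return rule["effect"]
    return DENY  # default-deny when no rule applies

rules = [
    # Release reputation to partners only when the user has consented.
    {"target": {"attribute": "reputation",
                "requester_role": "partner",
                "user_consent": True},
     "effect": PERMIT},
]

request = {"attribute": "reputation", "requester_role": "partner",
           "user_consent": True}
print(evaluate(request, rules))  # prints Permit
```

The value of a standard policy language is that both the user's consent constraints and the asserting provider's business constraints can be expressed in the same rules, evaluated wherever the attribute is requested.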
