Tuesday, April 21, 2009

Big Changes!

Yesterday my plan was to write a post announcing some changes at Project Liberty, but I was distracted by the announcement that Oracle has entered into an agreement to acquire Sun Microsystems! Some other interesting coverage can be found here.

Now, the other big news!

The Liberty Alliance Project announced the formation of a new organization known as the Kantara Initiative. Kantara takes a much more accessible approach to membership, and it carries a much more flexible intellectual property structure that should make it easier to bridge between industry communities working on identity services. Brett McDowell, Executive Director of the Liberty Alliance Project, gives an overview of the Kantara Initiative here:


One of the first big differences between Project Liberty and Kantara is that Kantara will not be setting standards. Instead, work groups will define recommendations to share with other standards-setting organizations (SSOs). For example, the work on IGF, AAPML, and the various protocol profiles for IGF will likely each be referred to the SSO responsible for the parent specification.

For those following IGF and Project Aristotle, the work continues under the Kantara Initiative. One of the cool new features of the Kantara Initiative is the ability to support multiple open source projects with different licenses. This means it will be a lot easier to support a more diverse open source community. As an example, for Project Aristotle, it will make it a lot easier to work with the Higgins community now that we have a way to bridge between EPL and Apache licensing.

It seems that the themes of bridging and harmonization are in the air!

Tuesday, September 30, 2008

Time For Customers to Shout!

The attempt to merge various identity standards appears to be on the rocks, with Johannes and Kaliya commenting here about the withdrawal by the Identity Commons folks.

Johannes is on the money about the culture clash. But in the end, the real losers are the customers, as the identity version of the HD-DVD vs. Blu-ray debate over competing standards and features will continue.

Customers, it is your time to shout! Contact your vendors (including my employer) and make yourselves heard! The vendors and open source communities need to hear from you!

Monday, June 23, 2008

Liberty Announces First Release of IGF and IAF Specifications

Great news! The Liberty Alliance announced the release of the first drafts of the Identity Governance Framework (IGF) and the Identity Assurance Framework (IAF).

The current IGF draft has 3 major components:
  • Privacy Constraints - This document describes a small set of atomic privacy constraints, based on WS-Policy, that can be used in other IGF specifications. Privacy constraints are atomic constraints on the use, display, retention, storage, and propagation of identity data. When combined with policy frameworks such as WS-Policy, these assertions can be used to describe composite constraints on identity data.
  • Client Attribute Requirements Markup Language (CARML) - This document describes an XML declaration format for describing an application's identity-related data usage.
  • CARML Profile for Privacy Constraints - This document profiles the use of privacy constraints within CARML.
The complete specifications page for IGF can be found here. I should also point out that this is just the first release in an ongoing series of specifications around identity governance. Next steps will likely include profiles of IGF for various communication protocols, as well as the Attribute Authority Policy Markup Language (AAPML), which is currently proposed as a profile of XACML.
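
To make these declaration formats a bit more concrete, here is a rough sketch of what a CARML-style declaration with an attached privacy constraint might look like. To be clear, the element and attribute names below are invented for illustration and are not taken from the draft schemas:

  <!-- Illustrative sketch only: element and attribute names are invented,
       not taken from the CARML or Privacy Constraints drafts -->
  <CarmlDeclaration application="OrderEntryApp">
    <AttributeRequirements>
      <Attribute name="givenName" usage="display"/>
      <Attribute name="mail" usage="order-notification"/>
      <Attribute name="postalAddress" usage="shipping"/>
    </AttributeRequirements>
    <!-- An atomic constraint in the spirit of the Privacy Constraints
         document: limit how long the consumer may retain the data -->
    <PrivacyConstraint type="retention" maxDays="30"/>
  </CarmlDeclaration>

The idea is simply that an application declares up front which attributes it needs and for what purpose, and privacy constraints travel with that declaration.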

The Identity Assurance Framework is a new specification that defines four levels of assurance that federated providers can use to express the trustworthiness of exchanged identity information.
The four identity assurance levels outlined in the Liberty Identity Assurance Framework are based on a comprehensive set of process and policy criteria organizations must meet to participate in IAF-based federations. The IAF details authentication requirements to allow federation operators and federating organizations to address cross-industry business, policy and privacy requirements related to applications and services built using any federation protocol and falling into each identity assurance level. The first version of the Liberty Alliance Identity Assurance Framework released today is available for download.
For those of you wondering at this point whether these specifications define new protocols: the answer is no. These specifications are information-level policy declarations describing how and when to use identity-related information and its level of assurance. The declarations are intended to be used with any protocol used to exchange information, whether LDAP, ID-WSF, or WS-*. The diagram below should help show the relationship between IAF, IGF, and the various identity protocols.

Many thanks to my colleagues at the Liberty Alliance who worked so hard to provide their input and contributions to these specifications. Without such excellent attention, this work would not have been possible!

Wednesday, April 23, 2008

IGF Webcast Slides

Thanks to all who attended the webcast today!

The slide-deck for the IGF presentation and the others in the "Privacy in Perspective" series are available at: http://www.projectliberty.org/liberty/resource_center/presentations_webcasts

Monday, April 21, 2008

IGF Webcast - Privacy In Perspective Series

In case you have not seen the press release, I will be doing another webcast on IGF this Wednesday at 8 Pacific. It is part of a series called "Privacy in Perspective" that Liberty is presenting.

The series began with a presentation by Robin Wilton of Sun Microsystems on April 16 and ends with a webcast on May 7, in which Byron Acohido and Jon Swartz, technology editors at USA Today, will talk about highlights from their new book "Zero Day Threat". It looks like it is going to be an interesting book indeed!

To register for these webcasts, click here.

Sunday, April 13, 2008

OASIS XACML Interop At RSA

Last week, members of the OASIS consortium participated in an interoperability demonstration of XACML. My co-worker, Rich Levinson, was there leading Oracle's participation, along with participants from BEA, IBM, Sun, Axiomatics, Cisco, and the US Department of Veterans Affairs [correction: and Redhat/JBoss too!].

For me, the cool thing was the scenario put forth by Veterans Affairs, which dealt with patient health records and privacy (for more info, see Anil Saldhana's write-up). The really cool part was when Rich showed me how a patient could block access to a specific doctor, or conversely, how a doctor in an emergency-room situation could be granted access to patient records. This particular scenario has been one of the primary examples put forward by many government organizations I have spoken with, and it was also discussed widely by participants in the business requirements review of IGF at Project Liberty.
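
For readers who have not seen XACML before, here is a trimmed-down sketch of what the "patient blocks a specific doctor" rule could look like as a XACML 2.0 policy. The subject and resource identifier values (dr.jones, patient-123-record) are made up for illustration; a real deployment would use its own attribute vocabulary:

  <Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
          PolicyId="patient-123-privacy"
          RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:deny-overrides">
    <Target/>
    <!-- The patient has blocked one specific doctor from this record -->
    <Rule RuleId="block-dr-jones" Effect="Deny">
      <Target>
        <Subjects>
          <Subject>
            <SubjectMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">dr.jones</AttributeValue>
              <SubjectAttributeDesignator
                  AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
                  DataType="http://www.w3.org/2001/XMLSchema#string"/>
            </SubjectMatch>
          </Subject>
        </Subjects>
        <Resources>
          <Resource>
            <ResourceMatch MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
              <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">patient-123-record</AttributeValue>
              <ResourceAttributeDesignator
                  AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                  DataType="http://www.w3.org/2001/XMLSchema#string"/>
            </ResourceMatch>
          </Resource>
        </Resources>
      </Target>
    </Rule>
    <!-- An emergency "break-glass" Permit rule for ER staff could sit
         alongside this one; the rule-combining algorithm then decides
         which rule wins when the two conflict -->
  </Policy>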

In fact, when we first talked (back in 2006) about a policy language for attribute authorities to decide how to release personal information, we quickly gravitated towards XACML at Rich Levinson's suggestion. Now, with an open interop demonstrating these requirements in an application context, Rich's initial recommendation to base the Attribute Authority Policy Markup Language (AAPML) on XACML as a profile looks right on target!

Thursday, April 10, 2008

Standards and Implementations

Kim Cameron posted a response today to my post yesterday about requirements for his next-generation meta-directory and how IGF aims to address exactly those requirements. It seems Kim and I are in total agreement on these points, and developer appeal is definitely key. I do want to clarify one paragraph from his response:
I haven’t seen CARML - perhaps it is still a private proposal? [UPDATE: I’ve been advised that CARML and the IGF Attribute Services API are the same thing.] I think having a richer common representation for people will be the most important ingredient for success. I’m a little bit skeptical about confining developers to a single API - is this likely to fly in a world where people want to innovate? But details aside, it sounds like CARML will be a helpful input to an important industry discussion. Above all, this needs to be a wide-ranging and inclusive discussion, where we take lots of input. To get “as many applications as possible” involved we need to win the participation and support of application developers - this is not just an “infrastructure” problem.

IGF is a collection of specifications (e.g. CARML, AAPML) that Oracle and others announced in November of 2006. With the positive response and encouragement from other vendors, we quickly took the initial proposal for IGF to the Liberty Alliance on its way to making IGF an important identity information standard.

The AttributeServices API I spoke of is an Apache 2.0-licensed open source project under openLiberty.org that demonstrates just one possible implementation of the IGF specifications. In other words, I totally agree with Kim's skepticism about a single API - the last thing we want to do is confine developers to one API implementation. This was one reason we chose the Apache License: we want to make it easy for other developers to take what we've done and adapt the implementation for their own purposes. From my perspective, the key is that all such APIs implement IGF as a common future standard.

To expand on Kim's point, there are many communities that need to buy in: developers, privacy officers, deployment managers, and infrastructure managers, for a start. But it goes without saying that if developers don't use it, none of this matters. Developers won't adopt it simply because the other parties (like infrastructure managers) want it. The key benefit I see for developers is the ability to write applications without worrying about deployment issues or protocol implementation details, thanks to powerful development tooling.

Kim is correct about needing a wide-ranging, inclusive discussion. We're doing that by developing the standard inside the Liberty Alliance and by simultaneously developing the first reference implementation under openLiberty.org. I know there may be issues, particularly for small vendors, in joining the Liberty Alliance, but feel free to check out the mailing lists and, by all means, join OpenLiberty, which in contrast has wide-open membership.

Wednesday, February 27, 2008

2 Articles on Why Liberty's IGF Is So Important

In the last couple of days, two blog posts came to my attention that may be of interest.

The first is from Felix Gaehtgens of Kuppinger Cole. Felix talks about why IGF is becoming more important in today's increasingly global society...
[...] For a starter, many enterprises still have private identity data stored in many different data stores. Even though the trend is to minimise the number of “data silos” (places where identity data is stored), the reality is still that data can be found in many places. This creates a problem in our globalising society, where the HR department might be run in one country, and the support desk in another, and a myriad of services being outsourced yet to other locations. How can one ensure that the flow of data is controlled in such a way to ensure that all privacy laws are being complied with? Another example could be a federated environment of several suppliers working together in order to process an order. The order is received by company A, which then sends out several orders for parts to companies B1, B2 and B3, who then ship everything to company C that assembles everything and uses company D to ship out the finalised order to the customer.
Felix goes on to accurately describe CARML and AAPML and concludes:
So why another set of protocols? Isn’t this already addressed in some other standards? Liberty’s ID-WSF springs to mind, or SAML 2.0’s AttributeQuery, SPML, or even - to a certain extent - WS-Trust's Security Token Service. However, CARML and AAPML bridge a very important gap that is not addressed anywhere else: not how to request and receive attributes, but to express the need and purpose of identity data, and on the other side the allowed use and conditions for its consumption. IGF’s framework conceptually fits seamlessly into architectures harnessing today’s frameworks and picks up where CardSpace, Higgins, Bandit and WS-Trust leave off.
The second blog post is from Sajjad, who writes:
IGF's framework conceptually fits seamlessly into architectures harnessing today's frameworks and picks up where CardSpace, Higgins, Bandit and WS-Trust leave off.
I like to sum it up this way: most of today's protocols deal with "how" and "what" information should be transferred, that is, the technical mechanisms for transferring information. For this, there are indeed many mechanisms, each stressing different security objectives. IGF's role is to address the "why", "when", "where", and for "whom" information should be consumed, shared, used, and updated.
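
Since AAPML is proposed as a profile of XACML, one way to picture the "why" and "for whom" side is a XACML rule at the attribute authority that releases an attribute only for a declared purpose and attaches an obligation governing its later use. The following is a hand-drawn sketch under that assumption, not text from the AAPML draft; the urn:example identifiers are invented:

  <Policy xmlns="urn:oasis:names:tc:xacml:2.0:policy:schema:os"
          PolicyId="release-mail"
          RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:deny-overrides">
    <Target/>
    <Rule RuleId="mail-for-shipping-only" Effect="Permit">
      <!-- Release the mail attribute only when the requesting application
           declared the purpose "shipping" (e.g. in its CARML declaration).
           The purpose attribute id is invented for illustration. -->
      <Condition>
        <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
            <ActionAttributeDesignator
                AttributeId="urn:example:aapml:purpose"
                DataType="http://www.w3.org/2001/XMLSchema#string"/>
          </Apply>
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">shipping</AttributeValue>
        </Apply>
      </Condition>
    </Rule>
    <!-- The "when / how long" side: the consumer must discard the value
         after 30 days. The obligation id is likewise invented. -->
    <Obligations>
      <Obligation ObligationId="urn:example:aapml:retain-max-days" FulfillOn="Permit">
        <AttributeAssignment AttributeId="urn:example:aapml:days"
            DataType="http://www.w3.org/2001/XMLSchema#integer">30</AttributeAssignment>
      </Obligation>
    </Obligations>
  </Policy>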

Thursday, January 24, 2008

SOX Compliance Journal: Identity Governance Framework

An article on the Liberty Alliance's IGF initiative, addressing privacy and SOX, written by Marco Casassa Mont of HP and me, is featured in the Sarbanes-Oxley Compliance Journal.

The article is a good introduction to the problems of privacy and compliance as they relate to personal information, and to how IGF is intended to make compliance much easier to achieve for applications and the businesses that deploy them.

Enjoy

Thursday, December 6, 2007

Copy and Sync Bad for Privacy

I read an article by Rosie Lombardi in InterGovWorld, "Secret identity: Solving the privacy puzzle in a federated model", which turned out not to be about what the title first led me to expect.

The article turned out to be a discussion not of classic web federation, but of different approaches to using LDAP in a federated government setting. In it, Rosie lays out the case for the copy-and-sync meta-directory approach versus the case for dynamic access via virtual directories. While the article is not about classic web federation using SAML or InfoCards, it makes for a very interesting case study in federation, because the author is comparing two very different approaches that use the same protocol.

Note: for those who don't know, I came to Oracle as the head of development for OctetString, a virtual directory vendor. I am obviously biased, but I hope you will see that my observations are much more general than LDAP alone.

As I read the case for copy-and-sync, another article came to mind, from Robin Wilton at Sun. He writes about the recent HMRC security breach in the UK, where government entities were copying citizen data between departments and in the process lost one of the copies. As it turned out, the approach of copying information created huge exposure for the UK government.

Any time entire data sets are being copied, eyebrows should be raised. Instead of minimizing information usage, information was being propagated. Control was being distributed, raising the possibility of mistakes as more systems and more hands gained access to valuable personal information. In fact, the people with the least control are usually the persons identified within the data -- the very people whose privacy should be protected!

On the other hand, Rosie makes a good case that when you take the minimal approach of federating information on the fly (such as with a virtual directory), your security may be reduced to that of the lowest-security provider in the federation. In response, I would contend that bad data is bad data, whether it is obtained through copy-and-sync or through dynamic querying. The fault lies not with the approach but with the data itself.

The positive news is that obtaining data dynamically from a provider of personal information means the data is the most current available, not dependent on when the last update ran. Control is maintained by the information provider, and each use is auditable. Consent is also more easily verified, as it is possible to check each specific use of information and whether consent was needed and obtained.

Whether the protocol used for federation is LDAP, SAML, or WS-Trust, the issues remain the same. Those building federated applications need to be able to trust their providers. They have to be able to assess the quality of their sources. There are no easy answers right now. Just as with PKI trust in the past, trusting transferred information comes down to assessing the quality of the information and procedures, and the quality and stability of the physical infrastructure. The Liberty Alliance has launched a new initiative called the Identity Assurance Framework (IAF) that aims to begin solving this problem. Check it out.

Monday, November 19, 2007

Consent, Control, and Minimal Disclosure

Last Saturday I posted some screenshots of help text included in the Microsoft Cardspace client. The text indicates that what is displayed (the display token) is not necessarily what is transmitted to a web site, and further that Cardspace has no control over what the web site does with the information once it is submitted.

Interestingly, Kim wrote a response to a thread on a similar topic, an exchange between Citi's Francis Shanahan, Microsoft's Vittorio Bertocci, and the University of Wisconsin's Eric Norman. The exchange discusses Cardspace and whether or not it violates Kim's Law of Identity #1, User Control and Consent: "Technical identity systems must only reveal information identifying a user with the user's consent."

Aside: If you haven't already read Kim's 7 Laws of Identity, I recommend reading them. My opinion is that they set an excellent starting point for minimum requirements - though I believe many other observations will be made in the future (e.g. Madsen's Lemma of Dubious Attributes).

While I agree with Kim that some level of compromise is needed to make the identity meta-system work, I also think this thread indicates there is more work to do.

With regard to the issue of concealed or coded information that is not displayed in the display token, one justification I can think of for coded information is reference pointers. These pointers might be there because of Law #2 - minimal disclosure.

For example, when going to a book merchant web site I would normally provide my shipping address so that I can receive my order. However, if I had an existing relationship with a courier firm, then that firm could act as a managed card provider with the web merchant. When ordering a book, I could elect to use my courier's managed card which would include a shipping reference token. The web merchant would use that token to mark my package and notify the courier. When the courier picks up the package, the courier is able to translate the shipping reference token into my confidential mailing address. This allows me to avoid giving my shipping address to the web merchant.
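
In token terms, the claim the merchant receives might be nothing more than an opaque handle that only the courier can resolve back to a street address. Purely as an illustration, expressed as a SAML 2.0-style attribute (the claim URI and value are invented, and managed-card tokens are not limited to this form):

  <saml:Attribute xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                  Name="urn:example:courier:shipping-ref">
    <!-- Opaque reference the merchant prints on the package;
         only the courier can map it back to my real address -->
    <saml:AttributeValue>XK72-99Q4</saml:AttributeValue>
  </saml:Attribute>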

What we have here is a case of Law #2, minimal disclosure, conflicting with Law #1. The shipping reference isn't meaningful to me, but it does work on my behalf: it allows me to avoid giving out my address.

There are of course many other cases where encoded information is not in the user's best interests. And so, the Cardspace client should be able to test a token from a managed card provider to see whether the display text matches the encoded text.
While we can’t then prevent evil, we can detect and punish it. The claims in the token are cryptographically bound to the claims in the display token. The binding is auditable.
In this case, though, the audit should be done by the Cardspace client itself in order to verify the token. This would allow Cardspace to warn the user about encoded data or additional data not originally requested.

I also agree strongly that identity providers need to be trusted. This is why the Liberty Alliance has begun work on the Identity Governance Framework and the Identity Assurance Framework. By the way, both IGF and IAF are intended to work in a multi-protocol environment, including WS-Trust (Law #5 - Pluralism of Operators and Technologies). ;-)

Sunday, November 11, 2007

Desktop-Centricity, Service-Centricity, and User-Centricity

Paul Madsen comments on Dale Olds' remark that "user-centric" should just mean the user participates in the flow of personal information.

Hmmm. I prefer to interpret user-centric to mean that the user has control. Control is what users care about. Conflating the idea of user control with a specific implementation or architecture is a mistake.

I mention this because a desktop-centric model such as Cardspace (an example of a user-agent) raises several problems:
  • Sometimes desktops are shared (e.g. in manufacturing, service, or medical areas) and a single "windows" login does not match an identity. How would multiple doctors share the same desktop in an ER for example?
  • Sometimes we users use multiple desktops at home, at work, and travelling. How do we stop the proliferation of personal information (e.g. InfoCards) and avoid leaving traces of ourselves?
  • Sometimes we users are offline or not present during an information exchange. When an online retailer ships product, it happens when we are not present. Our personal information is shared with the shipper. How can we control the exchange of information when we do not have access to our normal desktop?
  • Social networks and multi-party transactions create new challenges. What if my wife wants to look up my travel itinerary? Do I manage this dynamically, or pre-arrange it by policy (consent)? When you consider multi-party scenarios, the likelihood of one or more parties being away from their desktop or "offline" becomes problematic. I suspect this is why consent in social networks is most often handled by e-mail.
There have been some interesting discussions of late in the telco community about cell phones. Should they be the platform for a user-agent? Do they represent a better user-agent than a desktop agent? Could you associate a mobile user-agent with a user in control of a desktop browser? For example, a doctor with a cell phone could associate it with a browser session on a hospital workstation or kiosk. Further, what happens when the person is in an environment, such as a hospital, where cell phones are banned?

SAMLv2's and ID-WSF's more service-centric implementation of user-centricity is also an excellent design. The supported use cases for user control and relationship handling are very broad indeed, and they depend little on a user-agent while still offering user control. The thin user-agent has been both their key advantage and their key disadvantage. Microsoft's work on Cardspace shows how important a strong visual interface can be in improving a user's understanding of information flow.

It seems that, as in life, no single implementation model of user-centricity will fit all use cases (at least nobody has proposed one yet). I'm not sure what a combined solution would look like, but I think this thread highlights why more of this type of conversation is needed at Project Concordia (an open effort to look at how we can achieve better interoperability across protocols).


Aside: It also occurs to me that smartcards are a nice, portable form of identity. The use of the card is under the control of the user. But are they a good implementation of user-centricity? The problem with smartcards is that they are fixed in time and tend to be created for a single purpose. Is it appropriate to use a smartcard for reasons other than its intended purpose? If a card maps to a single identity or set of assertions, that may create problems for privacy. The beauty of SAMLv2 and InfoCard systems is the potential for dynamic relationships that allow only specific information to be provided to each web site - kind of like having many, many cards in your wallet. Smartcards remain an important format for credentials, but as with driver's licenses and passports, I'm skeptical about the use of a smartcard as a universal credential.

Saturday, November 10, 2007

Thinking About An Identity Services API

My colleague at Oracle, Nishant Kaushik, asked me whether I knew of any effort in the open source community to develop a high-level API for identity services. I realized that, in a way, my work at Open Liberty on the implementation of the CARML API for the Identity Governance Framework was turning out to be the start of exactly that API. From the developer's perspective, we're not building an IGF API; we're building an attribute/identity services API, of which identity policy is only one of the many services needed.

The focus of the Open Liberty IGF project has been to demonstrate and provide libraries for using IGF. The issue I have run into is that the project also needs a complete set of identity services. It needs to work with all the popular protocols (which is why a protocol-specific API isn't good enough, and why we're building on top of Higgins IdAS). It needs to deal with a wide variety of web application needs and deployment environments. What started out as simply an implementation of IGF is definitely much bigger.

I would like to invite anybody interested in identity services to join the IGF development discussion group and add your thoughts. Should we broaden the direction of the IGF development to be more generic? Should we even rename or relocate the project? What do you think should be included in identity services? I know opinions vary widely on what identity services are, but that is exactly why collaborative input is needed now. Are you interested in getting involved? If so, feel free to respond to this post, or add your thoughts to the official development list!

Monday, October 22, 2007

Liberty Tokyo Meeting Day 1

The Liberty Alliance meeting got off to a great start today here in Tokyo. After some great discussions and hard work this morning, Paul Madsen, our outgoing Technical Expert Group chair, got things going by arranging a beach football (soccer) match...
Have some ill-feeling towards a TEG colleague? Didn't appreciate their constantly changing feedback on a spec you edited? Resent travelling to Dulles every second meeting? Believe their blog numbers are mostly driven by Irish relatives? Bring all such tensions to the game, I know I will.

I have been preparing for the game by practicing:

- rolling around on the floor clutching my shins
- raising arms in mock disbelief at an offside call
- running away from team mates after scoring
- dating vacuous swimsuit models
Some photographic evidence of the game:

A shot from last night (I thought I was in the wrong city!):