Showing posts with label LDAP.
Tuesday, December 16, 2014
Standards Corner: IETF SCIM Working Group Reaches Consensus
On the Oracle Fusion blog, I blog about the recent SCIM working group consensus, SCIM 2's advantages, and its position relative to LDAP.
Thursday, February 27, 2014
Standards Corner: SCIM and the Shifting Enterprise Identity Center of Gravity
My latest blog post on SCIM is available over on the Oracle Fusion Middleware blog.
Monday, May 5, 2008
Identity Bus Discussion at European Identity Conference
Felix Gaehtgens leads an interesting round-table discussion with Dave Kearns, Kim Cameron, Jackson Shaw, and Dale Olds on the "Identity Bus" and the need for a higher-level interface to identity.
Felix...sorry I couldn't make it. I wish I could have been there!
Wednesday, April 16, 2008
Should Developers Move Away from LDAP APIs
While Jeff Bohren's previous post pointed out the need to justify the business case (which I responded to here), yesterday afternoon Jeff added that developers also need to be convinced.
"As I said, I wasn’t suggesting that IGF replaces AD. But if you expect developers to migrate to a new way for developing client applications you need to give them a compelling business case."
Well, that's actually easily answered. If you are a developer who only needs to talk to one type of directory (e.g. Active Directory) and only one instance of it, you SHOULD continue to use your favorite LDAP API (e.g. ADSI).
However, many developers have encountered lots of issues building great interoperable LDAP applications. Clayton Donley has some good examples here. For many of these developers, virtual directory has been the solution.
Putting the business value question aside (we discussed that in previous posts), the developer audience for the IGF-enabled Attribute Service API is all of the other developers working on applications that don't fit the above criteria (probably about 90%). Up until now, those developers have been stuck building their own "silos" of identity information. After all, if you can't figure out how to plug into an unknown environment, why not create your own?
With the advent of new user-centric protocols, the demand to use Identity Services is growing. But if the LDAP community was bad at tooling, user-centric systems are worse (if only because it is still early days for these protocols). I think the point of having an open source attribute service project is to begin the process of creating an API that is relevant to application developers, supports the newer protocols, and, most importantly, makes it worth moving away from the silo architectures of the past and towards an Identity Network approach that inherently makes the application better in compelling ways.
Jeff is correct: building this kind of tooling and a community of support is not going to be easy - but we have to start somewhere. To that end, your input is indeed welcomed and wanted! Please check out the IGF Attribute Service project at openLiberty.
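To make the contrast concrete, here is a minimal sketch (using the standard JNDI API) of what a "plain LDAP" lookup looks like to a developer today. The host, base DN, filter, and attribute names are illustrative; the point is that every one of them is specific to a single deployment, which is exactly what makes interoperable LDAP applications so hard to write.

    // Standard JNDI lookup against one specific LDAP deployment. The URL, base DN,
    // filter syntax, and attribute names below are illustrative and would all change
    // from one environment to the next.
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class RawLdapLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389"); // deployment-specific

            InitialDirContext ctx = new InitialDirContext(env);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "mail", "telephoneNumber" });

            // The base DN, filter, and attribute names are also deployment-specific.
            NamingEnumeration<SearchResult> results =
                    ctx.search("ou=people,dc=example,dc=com", "(uid=jsmith)", controls);
            while (results.hasMore()) {
                Attributes attrs = results.next().getAttributes();
                System.out.println(attrs.get("mail"));
            }
            ctx.close();
        }
    }

Notice that none of this code says anything about what the application actually needs the data for; that is the gap an attribute-service layer is meant to fill.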
What About IGF and Existing LDAP Systems?
Jeff Bohren responds in his latest blog post to a post from Clayton Donley. The gist of Jeff's post is the suggestion that IGF replaces or changes AD. That is not the case.

"But the hardest thing is getting adoption of these standards. The point of my post was not to suggest that standards for identity services other than LDAP aren’t a good thing. The point was that to drive adoption you have to accept the reality that AD and other LDAPs have the predominant mind-share today."
To be clear, enterprise LDAP is a key part of what we are thinking about for IGF. The plan for IGF (and its components CARML and AAPML) is to develop a profile against multiple protocols (LDAP, ID-WSF, WS-*) used for identity information. Each profile will explain how IGF is used in the context of a particular protocol. For LDAP, the big challenge is what to do about existing applications. After all, these applications aren't going to change for a long time - and probably do not need to. IGF is not a replacement for these protocols but is instead a layer that runs on top of them.
"To many enterprises, LDAP is their one identity hammer. And they see all their identity problems as nails. If we want them to put down the LDAP hammer and pick up the IGF pneumatic impact wrench, we have to explain to them in real world business cases why it’s better. Because they know the LDAP hammer will work and they already have it in their tool box. The IGF pneumatic impact wrench is a strange new tool to them that they must first understand and second justify purchasing."

The nice thing about CARML is that it is just a declaration. There is nothing saying a CARML declaration cannot be created by hand for an existing application. Though we are working on an open source implementation, it does not have to be used in order for applications and infrastructure managers to receive benefits from IGF. The new API is really about creating appeal for developers. Developers want something very different from enterprises: they want to be able to write flexible applications without having to spend 90% of their time writing code to support varied deployment scenarios and varied protocols.
For business, the benefits of IGF are going to be mainly around risk management and privacy as demand to use personal information increases beyond current traditional enterprise directory content. Enterprises wanting to use identity-related information from HR systems or CRM systems already have to worry about legislative and regulatory issues. While manageable today, the processes are largely manual and forensic in nature. It's a situation that cries out for standardization.
Finally, we should be careful not to focus exclusively on classic enterprise identity and ignore other business systems that use identity-related information. Many businesses have customer relationship systems that hold and retain customer information, and those systems have most often not been Active Directory or LDAP based. This is why IGF can't focus exclusively on LDAP or any other single protocol, and why it must function at a higher level.
Tuesday, April 8, 2008
Kim Cameron On The New Generation of Metadirectory
As you may know, there has been an ongoing discussion about what the next generation of meta-directory looks like. Kim Cameron's latest post elaborates on what he thinks is needed for the next generation of "metadirectory".
These are actually some of the key reasons I have been advocating a new approach to developing identity services APIs for developers. We are very close in our thinking. Here are my thoughts:
- By “next generation application” I mean applications based on web service protocols. Our directories need to integrate completely into the web services fabric, and application developers must be able to interact with them without knowing LDAP.
- Developers and users need places they can go to query for “core attributes”. They must be able to use those attributes to “locate” object metadata. Having done so, applications need to be able to understand what the known information content of the object is, and how they can reach it.
- Applications need to be able to register the information fields they can serve up.
- There should be a new generation of APIs that de-couple developers from dependence on particular vendor implementations, protocols, and potentially even data schemas when it comes to accessing identity information. Applications should be able to define their requirements for data and simply let the infrastructure deal with how to deliver it.
- Instead of thinking of core attributes as those attributes that are used in common (e.g., surname is likely the same everywhere), I would like to propose we alter the definition slightly in terms of "authoritativeness". Application developers should think about what data is core to their application. What data is the application authoritative for? If an application isn't authoritative for an attribute, it probably shouldn't be storing or managing that attribute. Instead, this "non-core" attribute should be obtained from the "identity network" (or metaverse, as Kim calls it). An application's "core" data should only be the data for which the application is authoritative. In that sense, I guess I may be saying the opposite of Kim. But the idea is the same: an application should have a sense of what is core and not core.
- Applications need to register the identity data they consume, use, and update. Additionally, applications need to register the transactions they intend to perform with that data. This enables identity services to be built around an application that can be performant to the application's requirements.
But, while CARML was cool in itself, the business benefit to CARML was that knowing how an application consumes and uses identity data would not only help the identity network but it would also greatly improve the ability of auditors to perform privacy impact assessments.
We've recently begun an open source project at OpenLiberty called the IGF Attribute Services API that does exactly what Kim is talking about (by the way, I'm looking for nominations for a cool project name - let me know your thoughts). The Attribute Services API is still in the early development stages - we are only at milestone 0.3. But that said, now is a great time for broader input. I think we are beginning to show that a fully de-coupled API that meets the requirements above is possible, dramatically easier to use, and at the same time much more privacy-centric in its approach.
The key to all of this is to get as many applications as possible in the future to support CARML as a standard form of declaration. CARML makes it possible for identity infrastructure product vendors and service providers to build the identity network or next generation of metadirectory as described by Kim.
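For illustration only, here is a rough Java sketch of the shape such a de-coupled API could take. These interfaces and names are hypothetical - they are not the actual openLiberty IGF Attribute Services API - but they show the idea: the application states what attributes it needs (as its CARML declaration would record), and the infrastructure decides where and how to get them.

    // Hypothetical sketch only; not the real IGF Attribute Services API.
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    interface AttributeService {
        // The application asks for attributes by name for a subject; it never sees
        // LDAP filters, base DNs, connection handling, or the underlying protocol.
        Map<String, String> getAttributes(String subjectId, List<String> attributeNames);
    }

    class PortalPage {
        private final AttributeService attributes;

        PortalPage(AttributeService attributes) {
            this.attributes = attributes;
        }

        String greeting(String subjectId) {
            // Whether this resolves to LDAP, a web service, or an HR system is the
            // infrastructure's concern, and the declared usage can be checked per call.
            Map<String, String> attrs =
                    attributes.getAttributes(subjectId, Arrays.asList("displayName", "mail"));
            return "Welcome, " + attrs.get("displayName");
        }
    }

In a sketch like this, the CARML declaration would act as the contract behind getAttributes, recording ahead of time which attributes the application consumes and why.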
Monday, April 7, 2008
Oh, I see now. Virtual *IS* Meta!
In a post today, Dave Kearns quotes Kim Cameron on the original definitions of meta-directory.
This quote from Kim caught my eye:
"In my world, a metadirectory is one that holds metadata - not actual objects, but descriptions of objects and their locations in other physical directories."
LOL! This is *EXACTLY* the definition of a Virtual Directory. The virtual directory normally does not (with the exception of Radiant Logic which does both) hold actual data. It simply holds the metadata and knows where to get information. It presents information to client applications as if it held the data itself - hence the term "virtual".
It was unfortunate that, as Kim said, folks didn't like the term "uberdirectory". If Zoomit had been called an uberdirectory, there might not have been as much confusion about the difference between Virtual Directory and Meta-Directory!
In the end, for me, virtual and meta/uber are two very different tools that solve different problems in enterprises. There is a need for both. In Oracle's case, we chose not to build an uber-directory. Instead, Oracle offers another variation on the meta-directory concept known as provisioning, where data is moved between the authoritative sources that already exist. Thus we have Oracle Identity Manager covering the provisioning side, and Oracle Virtual Directory covering the virtualization side.
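To illustrate the distinction, here is a deliberately simplified, hypothetical sketch of the virtual-directory idea (class and method names are mine, not any vendor's API): the "directory" holds only routing metadata, proxies each query to the authoritative backend at request time, and presents the result as if it held the data itself.

    // Hypothetical sketch of the virtual-directory pattern; not a product API.
    import java.util.HashMap;
    import java.util.Map;

    interface BackendStore {
        Map<String, String> lookup(String subjectId); // e.g. AD, an HR database, a CRM
    }

    class VirtualDirectory {
        // Metadata only: which namespace is served by which backend. No entries are copied here.
        private final Map<String, BackendStore> routes = new HashMap<String, BackendStore>();

        void mount(String namespace, BackendStore store) {
            routes.put(namespace, store);
        }

        Map<String, String> search(String namespace, String subjectId) {
            BackendStore store = routes.get(namespace);
            if (store == null) {
                throw new IllegalArgumentException("No backend mounted for " + namespace);
            }
            // Answered at query time from the authoritative source;
            // the caller sees one logical directory.
            return store.lookup(subjectId);
        }
    }

A meta-directory or provisioning approach would instead copy or move entries between those backends; as noted above, both patterns have their place.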
Sunday, March 30, 2008
Welcome Clayton Donley to the Blogosphere!
A big welcome to Clayton Donley who has just decided to join the blogosphere! Of course, Clayton and I go way back to OctetString days. Clayton now is top dog for directory services at Oracle and you can expect some interesting posts from now on!
His first post responds to Dave Kearns' recent article about the two-billion user benchmark achieved with Oracle Internet Directory.
Thursday, December 6, 2007
Copy and Sync Bad for Privacy
I read an article by Rosie Lombardi in InterGovWorld that turned out not to be what I expected when I first read the title, "Secret identity: Solving the privacy puzzle in a federated model".
The article turned out to be a discussion not of classic web federation, but of different approaches to using LDAP in a federated government setting. In the article, Rosie lays out the case for the copy-and-sync meta-directory approach versus the case for dynamic access via virtual directories. While the article was not about classic web federation using SAML or InfoCards, it makes for a very interesting case study in federation, because the author is talking about two very different approaches using the same protocol.
Note: for those that don't know, I came to Oracle as the head of development for OctetString--a virtual directory vendor. I am obviously biased, but I hope you will see my observations are much more general than just about LDAP.
As I read the case for copy-and-sync, another article came to mind from Robin Wilton at Sun. He writes about the recent HMRC security breach in the UK where government entities were copying citizen data between departments and in the process lost one of the copies. As it turned out, their approach of copying information created huge exposure for the UK Government.
Any time entire data sets are being copied, eyebrows should be raised. Instead of minimizing information usage, information was being propagated. Control was being distributed, enabling the possibility of mistakes as more systems and hands have access to valuable personal information. In fact, the people with the least control are usually the persons identified within the data -- the persons whose privacy should be protected!
On the other hand, Rosie makes a good case that when you take the minimal approach of federating information on the fly (such as with Virtual Directory), your security may be reduced to that of the lowest-level security provider in the federation. In response, I would contend that bad data is still bad data, whether it is obtained through copy-and-sync or through dynamic querying. The fault lies not with the approach but with the data itself. The protocol and approach matter little at this point; bad data is always bad data.
The positive news is that obtaining data dynamically from a provider of personal information means that the data is the most current available and not dependent on the frequency of the last update. Control is maintained by the information provider, and each usage is auditable. Consent is also more easily verified, as it is possible to check each specific use of information and whether consent is needed and obtained.
Whether the protocol used for federation is LDAP, SAML, or WS-Trust, the issues remain the same. Those building federated applications need to be able to trust their providers. They have to be able to assess the quality of their sources. There are no easy answers right now. Just as with PKI trust in the past, trusting transferred information comes down to assessing the quality of the information and procedures, and the quality and stability of the physical infrastructures. Liberty Alliance has launched a new initiative called the Identity Assurance Framework (IAF) where they hope to begin to solve this problem. Check it out.
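As a purely illustrative sketch (the interfaces and names below are hypothetical, not any particular product or protocol binding), this is the property that dynamic access buys you: every individual use of an attribute passes through the provider, so each request can be consent-checked and audited, and there is no bulk extract sitting around waiting to be lost.

    // Hypothetical sketch of per-request consent checking and auditing.
    import java.time.Instant;

    interface ConsentRegistry {
        boolean permits(String subjectId, String attribute, String purpose);
    }

    interface AttributeProvider {
        String fetch(String subjectId, String attribute); // current value, straight from the source
    }

    class AuditedAttributeGateway {
        private final ConsentRegistry consent;
        private final AttributeProvider provider;

        AuditedAttributeGateway(ConsentRegistry consent, AttributeProvider provider) {
            this.consent = consent;
            this.provider = provider;
        }

        String get(String subjectId, String attribute, String purpose) {
            if (!consent.permits(subjectId, attribute, purpose)) {
                throw new SecurityException("No consent for " + attribute + " / " + purpose);
            }
            // Each individual use is logged at the point of release.
            System.out.printf("%s AUDIT subject=%s attr=%s purpose=%s%n",
                    Instant.now(), subjectId, attribute, purpose);
            return provider.fetch(subjectId, attribute);
        }
    }

Of course, as noted above, this does nothing about bad data at the source; it only changes where control and accountability sit.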