Friday, July 23, 2021

Standards: SCIM Birds of a Feather Meeting July 29

 

SCIM (RFC7643 and RFC7644) was published back in September 2015. SCIM has over 65 published implementations, including the new open source project i2scim.io, and deployments range from small IoT systems all the way to large-scale systems handling billions of identities. SCIM's primary benefit has been to serve as an industry-standard way to provision and manage identities at service providers through a RESTful API that exchanges JSON documents.
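To make that concrete, here is a minimal sketch of what SCIM provisioning looks like on the wire, assuming a hypothetical service provider at scim.example.com and a placeholder bearer token:

```python
# A minimal sketch of creating (provisioning) a user via SCIM (RFC 7644).
# The host and token are hypothetical placeholders.
import requests

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jsmith",
    "name": {"givenName": "Jane", "familyName": "Smith"},
    "emails": [{"value": "jsmith@example.com", "type": "work"}],
}
resp = requests.post(
    "https://scim.example.com/v2/Users",
    json=new_user,
    headers={
        "Authorization": "Bearer <token>",
        "Content-Type": "application/scim+json",
    },
)
print(resp.status_code)  # 201 Created on success
```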

Recently members of the SCIM mailing list have been discussing next steps and the re-chartering of the SCIM Working Group.

As part of the IETF process, there will be a "SINS" (SCIM Industry Next Steps) meeting at IETF 111 to discuss possible upcoming work and working group formation.

The goal of the meeting is simply to engage the broader IETF community in a birds-of-a-feather session and let people know about planned work. A partial list of items being discussed:

  • Paging proposals
    • Stateful paging
    • Filtering and paging of multi-valued attributes (e.g., group members)
  • New schema proposals
    • HR Data
    • Enterprise Groups
    • Privileged Access Management
  • Soft deletes - enabling accounts to be resurrected
  • Profiling SCIM with SSO protocols such as OpenID Connect
  • Asynchronous events, including cross-domain signal coordination and synchronization
  • Best practices, including evolution of externalId usage
  • More?
The BoF meeting will be next Thursday, July 29, from 1:30 to 2:30 PM Pacific Time. If you can make it, please attend! Register here.

Saturday, July 10, 2021

Launching i2scim.io and a new Independent Identity

It has been a while since I last blogged. I had found myself writing on Oracle's platform and this blog took a back seat. Now that I am truly independent, I am blogging again! 

To date, the focus of this blog has been an independent view on standards and the protocols that make Identity systems work. Going forward, I will continue to comment on standards but I will also bring new content around open source.

During the COVID shutdown, I got motivated to get back into development and started working on a new SCIM server built from the ground up. I was fascinated by Kubernetes and the idea of building my own bare-metal cluster. I settled on a half dozen Raspberry Pis and bought a PicoCluster kit. This became the motivator to do more...

Independent Identity is incorporated with the goal of building independent open source projects in support of SCIM and other identity-related standards. Rather than just helping enterprise customers, I hope to work with application developers who are trying to figure out how best to adopt support for protocols like SCIM.

Introducing i2scim!

i2scim.io

The first Independent Identity project is "i2scim". i2scim is my take on a full-featured, modern SCIM server. Years ago, SCIM was envisioned by SFDC, Oracle, Ping Identity, SailPoint and many others as the answer to handling identity management in an open RESTful way. After years of collaboration, in September 2015, as the editor, I was very happy to see RFC7643 and RFC7644 published, signalling that a common method for provisioning accounts was now available. Since then, there has been wide adoption by service providers. However, application developers looking to implement SCIM provisioning have been slow to support it. SCIM's database-like features have been seen as both powerful and intimidating. My goal is to make i2scim more accessible: a configure-and-go approach where the core engine simply becomes part of an application's data platform.

i2scim.io is a high-performance, lightweight SCIMv2 server for Kubernetes environments, packaged as a small Docker image (178MB in 0.5.0-Alpha). i2scim supports configurable schema and resource types: no more hard-coded Java classes for resources. i2scim defines a provider interface which allows it to be adapted to different databases and APIs. The current version includes support for MongoDB clusters and an in-memory database.


Key i2scim features:

  • Full SCIMv2 protocol support with Bulk support coming.
  • Supports HTTP HEAD and HTTP conditional requests (RFC7232); a usage sketch follows this list
  • Uses the Quarkus (1.13.6 Final) platform for fast startups, lightweight stack, and performance
    • Quarkus SmallRye JWT (Jose4J) support 
    • Quarkus SmallRye JWT / OAuth support
    • Quarkus SmallRye Health DevOps support 
    • Jackson Serializer
  • MongoDB support exploits Mongo's native JSON document engine, providing full SCIM support for Mongo clusters.
  • Extension points
    • IScimProvider interface allows i2scim to be enhanced to work against other databases and APIs
    • IEventHandler enables the ability to send and receive asynchronous events
    • IVirtualValue enables values that are mapped or calculated
  • Configurable schema using JSON configuration files (supplied via K8S configMap) to define the object types (ResourceTypes) and their composition (schema).
  • Security
    • JSON-based RBAC access control configuration evolved from the LDAP ACI model
    • Bearer token support with Quarkus SmallRye JWT (RFC7523)
    • Basic Auth (RFC7617) authenticating against SCIM users
    • Secure password storage using PBKDF2 (Password-Based Key Derivation Function 2) with salt-and-pepper hashing for FIPS 140 compliance.
  • Docker images (amd64 and arm64) available with K8S Yaml deployment templates.
  • More to come!
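As promised above, here is a minimal sketch of the HTTP conditional request support (RFC7232) in action. The host, resource id, and token are hypothetical placeholders, not taken from i2scim documentation:

```python
# A sketch of SCIM resource versioning using HTTP conditional requests
# (RFC 7232). In SCIM, the ETag mirrors the resource's meta.version.
import requests

url = "https://scim.example.com/Users/2819c223-7f76-453a-919d-413861904646"
headers = {"Authorization": "Bearer <token>"}

# Fetch the resource and remember its current version.
resp = requests.get(url, headers=headers)
etag = resp.headers.get("ETag")

# Replace the resource only if no one else has changed it in the meantime.
user = resp.json()
user["displayName"] = "J. Smith"
update = requests.put(url, json=user, headers={**headers, "If-Match": etag})
print(update.status_code)  # 412 Precondition Failed on a version conflict
```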
The current i2scim.io release is 0.5.0-Alpha, a preview release intended to gather interest and begin building community. Pull requests welcome!

 

Tuesday, February 24, 2015

A 'Robust' Schema Approach for SCIM

This article was originally posted on the Oracle Fusion Blog, Feb 24, 2015.

Last week, I had a question about SCIM (System for Cross-domain Identity Management) and its approach to schema. How does the working group recommend handling message validation? Doesn't SCIM have a formal schema?

To answer that question, I first realized that it was about a different style of schema than SCIM supports. The question assumed that "schema" means what XML Schema means: a way to validate documents.

Rather than focusing on validation, SCIM's model for schema is closer to what one would describe as a database schema, much like many other identity management directory systems of the past. Yet SCIM isn't simply a new web protocol for accessing a directory server; it is also meant to give web applications an easy way to do provisioning. The SCIM schema model is "behavioural": it defines the attributes, and the associated attribute qualities, that a particular server supports. Do clients need to discover schema? Generally speaking, they do not. Let's take a closer look at schema in general and how SCIM's approach supports cross-domain schema issues.
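To make "behavioural" concrete, here is a sketch of how a single attribute might be described in SCIM schema (in the style of RFC7643). The qualities tell clients how the server will behave with the attribute, rather than providing a validation grammar:

```python
# A sketch of a SCIM attribute definition in the style of RFC 7643,
# shown as a Python dict. Qualities such as mutability and returned
# describe server behaviour, not document validity.
user_name_attr = {
    "name": "userName",
    "type": "string",
    "multiValued": False,
    "required": True,
    "caseExact": False,
    "mutability": "readWrite",  # clients may read and write this attribute
    "returned": "default",      # returned in responses unless excluded
    "uniqueness": "server",     # unique across the service provider
}
```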

Many Definitions of Schema and Many Schema Practices

Looking at the definition in Wikipedia, schema is a very broadly defined term. It can define a software interface, a document format (such as XML Schema), a database, a protocol, or even a template. There is even a newer proposal called JSON Schema. This too is very different from XML Schema: it has some elements that describe data objects, but JSON Schema focuses much more on defining a service, and more closely resembles another schema format, WADL.

With XML Schema, the bias seems to be about "enforcement" and "validation" of documents or messages. Yet, for many years, the REST/JSON community has been proud of resisting a formalized "schema". Maybe it just hasn't happened yet. This does appear to be an old debate, with two camps claiming the key to interoperability is either strict definition and validation, or strict adherence to flexibility, "robustness", or Jon Postel's law [from RFC 793]:

“Be conservative in what you do, be liberal in what you accept from others.” 

Twelve years ago or so, Aaron Swartz blogged "Postel's law has no exceptions!". I found Tim Bray's post from 2004 to be enlightening: "On Postel, Again". So, what is the right approach for SCIM?


The Identity Use Case

How does SCIM balance "robustness" against "verifiability" to achieve interoperability in a practical and secure sense? Consider that:

  • There is often a cross-domain governance requirement by client enterprises that information be reasonably accurate and up-to-date across domains.
  • Because the mix of applications and users in each domain is different, the schema in one domain will never be exactly the same as in another domain.
  • Different domains may have different authentication methods and data to support those methods, and may even support federated authentication from another domain.
  • A domain or application that respects privacy tends to keep and use only the information it has a legitimate need for, rather than a standard set of attributes.
  • An identifier that is unique in one domain may not be unique in another. Each domain may need to generate its own local identifier(s) for a user.
  • A domain may have value-added attributes that other domains may or may not be interested in.

SCIM’s Approach

SCIM's approach is to allow a certain amount of "specified" robustness that enables each domain to accept what it needs, while providing some level of assurance that information is exchanged properly. This means that a service provider is free to drop attributes it doesn't care about when being provisioned from another domain, while the client can be assured that the service provider has accepted its provisioning request. Another example is a simple user-interface requirement where a client retrieves a record, changes an attribute, and puts it back. In this case, the SCIM service provider sorts out whether some attributes are to be ignored because they are read-only, and updates the modifiable attributes. The client is not required to ask what data is modifiable and what isn't. This isn't a general free-for-all in which the server can do whatever it wants; instead, the SCIM specifications state how this robust behaviour is to work.
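As a concrete sketch of that user-interface example (hypothetical host and resource id), note that the client sends back read-only attributes such as id and meta untouched, and relies on the service provider to ignore them:

```python
# A sketch of the retrieve-modify-replace pattern described above.
# The client does not need to know which attributes are read-only;
# the service provider ignores read-only attributes on PUT and
# applies only the modifiable ones.
import requests

url = "https://scim.example.com/v2/Users/1234"  # hypothetical resource
headers = {"Authorization": "Bearer <token>"}

user = requests.get(url, headers=headers).json()
user["title"] = "Senior Engineer"  # change just one attribute

# PUT the whole representation back, read-only attributes included.
resp = requests.put(url, json=user, headers=headers)
print(resp.json()["meta"]["version"])  # response reflects the final state
```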

With that said, SCIM still depends largely on compliance with the HTTP protocol and the exchange of valid, JSON-parsable messages. Where SCIM does draw the line is at abstract "validation" of information content in the way XML Schema performs it.
Does SCIM completely favour simplicity for SCIM clients? Not exactly. Just as a service provider needs to be flexible in what it accepts, so too must SCIM clients be when a service provider responds. When a SCIM service provider responds to a client request, the client must be prepared to accept some variability in SCIM responses. For example, if a service provider returns a copy of a resource that has been updated, the representation always reflects the final state of the resource on the service provider. It does not reflect back exactly what the client requested. Rather, the intent is that the service provider informs the client about the final state of a resource after a SCIM request is completed.

Is this the right model?

Let’s look at some key identity technologies of the past, their weak points and their strong points:

  • X.500 was a series of specifications developed by the ITU in 1988. X.500 had a strict schema model that required enforcement. One of the chief frustrations for X.500 developers (at least for me) was that while each server had its own schema configuration, clients were expected to alter their requests to match. This became particularly painful if you were trying to code a query filter that would work against multiple server deployments. If you didn't first "discover" the server configuration and adjust your code, your calls were doomed to fail. Searching became infuriating when common attributes weren't supported by a particular server deployment, since the call would be rejected as non-conformant. Any deviation was cause for failure. In my experience, X.500 services seemed extremely brittle and difficult to use in practice.
  • LDAP, developed by the IETF in 1996, was based on X.500 but loosened things up somewhat. Aside from LDAP being built for TCP/IP, LDAP took the progressive step of simply assuming that if a client specified an undefined attribute in a search filter, there was no match. This tiny change meant that developers did not have to adjust code on the fly, but could instead build queries with "or" clauses profiling common server deployments such as Sun Directory Server vs. Microsoft Active Directory and Oracle Directory (see the filter sketch after this list). Yet LDAP still carried too many constraints and ended up with some of the same brittleness as X.500. In practice, the more applications that integrated with LDAP, the less able a deployer was to change schema over time. Changing schema meant updating clients and doing a lot of staged production testing. In short, LDAP clients still expected LDAP servers to conform to standard profiles.
  • In contrast to directory or provisioning protocols, SAML is actually a message format for sending secure assertions. To be successful, SAML had to allow a lot of optionality, depending on "profile" specifications to clearly define how and when assertions could be used. A core element of its success has been the clear definition of MUST understand vs. MUST ignore: in many cases, if you don't understand an assertion value, you are free to ignore it. This opens the door to extensibility. On the other hand, if as a relying party you do understand an attribute assertion, then it must conform to its specification (schema).
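As a sketch of that LDAP behaviour (using the ldap3 library and a hypothetical directory), a single "or" filter can cover several vendor schemas at once, because a clause naming an attribute the server doesn't define is simply treated as a non-match:

```python
# A sketch of an LDAP search whose filter profiles multiple vendor
# deployments: uid for Sun/Oracle directories, sAMAccountName for
# Active Directory. Undefined attributes evaluate to "no match"
# rather than causing the query to fail. Host and base DN are
# hypothetical.
from ldap3 import Connection, Server

server = Server("ldap.example.com")
conn = Connection(server, auto_bind=True)

conn.search(
    search_base="dc=example,dc=com",
    search_filter="(|(uid=jsmith)(sAMAccountName=jsmith))",
    attributes=["cn", "mail"],
)
print(conn.entries)
```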

In our industry, we tend to write security protocols in strict forms in order to assure security. Yet we've often achieved only brittleness and lack of usability. Because the information relationships around identity and the attributes consumed are constantly variable, history appears to show that identity protocols with robust features are incrementally more successful. I think SCIM, as a REST protocol, moves the ball forward by embracing a specified robust schema model, bringing significant usability improvements over the traditional use of LDAP.

Post-note: I mentioned in my last blog post that SCIM had reached 'last call'. The working group has felt that this issue is worth more attention and is currently discussing clarifications to the specifications along the lines discussed above.

Tuesday, December 16, 2014

Standards Corner: IETF SCIM Working Group Reaches Consensus

On the Oracle Fusion blog, I blog about the recent SCIM working group consensus, SCIM 2's advantages, and its position relative to LDAP.

Friday, May 30, 2014

Standards Corner: Preventing Pervasive Monitoring

On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's 'gotofail' bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others. Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet. This did not go unnoticed in the standards community, and in particular the IETF.
Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of "Internet hardening" was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response of 'Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions that were each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.
Since the meeting, the IETF has followed through with the recent publication of a new "best practices" document on pervasive monitoring (RFC 7258). The document is measured in its approach and separates the politics of monitoring from the technical issues. From the RFC:
Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.
The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.
The document goes on to further qualify what it means by "attack", clarifying that
The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.
The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: implementations of security and protocol specifications have to be of high quality and undergo superior testing. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.

Cross-posted from Oracle Fusion Blog.

Monday, May 12, 2014

Draft 05 of IETF SCIM Specifications

I am happy to announce that draft 05 of the SCIM specifications has been published at the IETF. We are down to a handful of issues (8) to sort out.


Major changes:

  • Clarifications on case preservation and exact match filter processing
  • Added IANA considerations
  • Formalized internationalization and encoding (UTF-8)
  • Added security considerations for using GET with confidential attributes
  • General editing and clarifications

Wednesday, April 9, 2014

Standards Corner: Basic Auth MUST Die!

Basic Authentication (part of RFC2617) was developed along with HTTP/1.1 (RFC2616) when the web was relatively new. The specification envisioned that user-agents (browsers) would ask users for their user-id and password and then pass the encoded information to the web server via the HTTP Authorization header.
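To see why this is risky, here is a sketch of what a Basic Auth Authorization header actually contains; the credentials are merely base64-encoded, not encrypted, and are replayed on every request:

```python
# A sketch of constructing a Basic Auth header (RFC 2617).
# The user-id and password are hypothetical.
import base64

user_id, password = "alice", "s3cret"
token = base64.b64encode(f"{user_id}:{password}".encode("utf-8")).decode("ascii")
print(f"Authorization: Basic {token}")
# Authorization: Basic YWxpY2U6czNjcmV0  -- trivially decoded by anyone
```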

The Basic Auth approach quickly died in popularity in favour of form-based login, where browser cookies were used to maintain user session rather than repeated re-transmission of the user-id and password with each web request. Basic Auth was clinically dead and ceased being the "state-of-the-art" method for authentication.

These days, now that non-browser-based applications are increasing in popularity, one of the first asks by architects is support for Basic Authentication. It seems the Basic Authentication "zombie" lives on. Why is this? Is it just for testing purposes?

Why should Basic Authentication die?

Well, for one, Basic Auth requires that web servers have access to "passwords", which have continually been shown to be one of the weakest elements of security architecture. Further, it requires that the client application ask users directly for their user-id and password, greatly increasing the points of attack available to a hacker. A user giving an application (whether a mobile application or a web site) their user-id and password is giving that application the ability to impersonate the user. Further, we now know that password re-use continues to undermine this simple form of authentication.

There are better alternatives.

A better alternative uses "tokens", such as the cookies I mentioned above, to track client/user login state. An even better solution, not easily done with Basic Auth, is to use an adaptive authentication service whose job is to evaluate not only a user's id and password but also multiple factors for authentication. This goes beyond something you know to include something you are and something you have. Many service providers are even beginning to evaluate network factors as well, such as: has the user logged in from this IP address and geographical location before?

In order to take advantage of such an approach, the far better solution is to demand OAuth2 as a key part of your application security architecture for non-browser applications and APIs. Just as form-based authentication dramatically improved browser authentication in the 2000s, OAuth2 (RFC6749 and RFC6750), and its predecessor Kerberos, provide a much better way for client applications to obtain tokens that can be used for authenticated access to web services.
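As a sketch of the token-based alternative (hypothetical authorization server and API endpoints), the client exchanges its own credentials for a scoped access token and never handles or retransmits a user's password to the API:

```python
# A sketch of the OAuth2 client_credentials grant (RFC 6749) followed
# by a Bearer-token API call (RFC 6750). Hosts, client id/secret, and
# scope are hypothetical placeholders.
import requests

token_resp = requests.post(
    "https://auth.example.com/oauth2/token",
    data={"grant_type": "client_credentials", "scope": "users.read"},
    auth=("my-client-id", "my-client-secret"),  # client, not user, credentials
)
access_token = token_resp.json()["access_token"]

api_resp = requests.get(
    "https://api.example.com/Users",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(api_resp.status_code)
```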

Token authentication is far superior because:
  • Tokens cleanly separate user authentication and delegation from the application's activities with web services.
  • Tokens do not require that clients impersonate users. They can be highly scoped and restrictive in nature.
  • The loss of a token means only a single service is compromised, whereas the loss of a password compromises every site where that user-id and password are used.
  • Tokens can be issued by multi-factor authentication systems.
  • Tokens do not require access to a password data store for validation.
  • Tokens can be cryptographically generated and thus can be validated by web services in a "stateless" fashion (not requiring access to a central security database).
  • Tokens can be easily expired and re-issued.
RFC 2617 Basic Authentication is not only dead. It needs to be buried. Stop using it. You can do it!

Cross-posted from Oracle Fusion Blog.

Thursday, March 13, 2014

Standards Corner: Maturing REST Specifications and the Internet of Things

Cross-posted from the Oracle Fusion Middleware Blog.
As many of you know, much of today's standards around REST center around IETF based specifications. As such, I thought I would share some RESTful services related news coming from last week's IETF meetings. Many working groups are now in the final stages of moving key specifications into standard status…

Friday, February 14, 2014

New IETF SCIM drafts - Revision 03 Details

Yesterday, the IETF SCIM (System for Cross Domain Identity Management) Working Group published new draft specification revisions.

These drafts were essentially a clean-up of the specification text into IETF format, as well as a series of clarifications and fixes that will greatly improve the maturity and interoperability of the SCIM drafts. SCIM has had a number of outstanding issues to resolve, and in these drafts we managed to knock off a whole bunch of them - 27 in all! More change log details are available in the appendix of each draft.

Key updates include:

  • New attribute characteristics: 
    • returned - When are attributes returned in response to queries
    • mutability - Are attributes readOnly, immutable, readWrite, or writeOnly
    • readOnly - this boolean has been replaced by mutability
  • Filters
    • A new "not" negation operator added
    • A missing ends with (ew) filter was added
    • Filters can now handle complex attributes, allowing multiple conditions to be applied to the same value in a multi-valued complex attribute (a usage sketch follows this list). For example:
      • filter=userType eq "Employee" and emails[type eq "work" and value co "@example.com"]
  • HTTP
    • Clarified the response to an HTTP DELETE
    • Clarified support for HTTP Redirects
    • Clarified impact of attribute mutability on HTTP PUT requests
  • General
    • Made server root level queries optional
    • Updated examples to use '/v2' paths rather than '/v1'
    • Added complete JSON Schema representation for Users, Groups, and EnterpriseUser.
    • Reformatting of documents to fit normal IETF editorial practice
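As promised above, here is a sketch of that complex-attribute filter in use on a SCIM search, assuming a hypothetical service provider; the filter travels as a URL-encoded query parameter:

```python
# A sketch of issuing the draft-03 complex-attribute filter shown
# above against a hypothetical SCIM service provider.
import requests

filter_expr = (
    'userType eq "Employee" and '
    'emails[type eq "work" and value co "@example.com"]'
)
resp = requests.get(
    "https://scim.example.com/v2/Users",
    params={"filter": filter_expr},  # requests URL-encodes the expression
    headers={"Accept": "application/scim+json"},
)
for resource in resp.json().get("Resources", []):
    print(resource["id"], resource.get("userName"))
```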
Thanks to everyone in the working group for their help in getting this one out!