This has been replaced by the wiki page: Keystone/DistributedSigning and this page will be repurposed to talk about User to User Delegation.


Keystone currently supports a single identity management store. This document describes an extension to Keystone that will allow multiple user databases to interoperate in a single cloud environment.

With PKI authentication, the remote services no longer need direct access to a Keystone server for the majority of their operations. Instead, Keystone will primarily serve them as a place to distribute the signing certificates for tokens. Most Keystone data will come into the systems in the form of a signed document attached to the request. While the first implementation has this information coming from the same Keystone server, there is no reason that it has to in the future. (Keystone is changing the name of its grouping mechanism from Tenants to Projects; this document uses the term projects.)

It will be possible to set up a system where the Keystone server used to sign the token is behind a firewall and is not accessible to the Nova server. The user goes to the Keystone server, then requests and receives a token whose signature can be verified with a signing certificate. All the Nova instance needs is the ability to fetch that signing certificate from Keystone.

This ability to separate the Keystone servers provides the mechanism to federate the authentication and authorization mechanisms used in OpenStack. With domains, we have a natural boundary for responsibility. An OpenStack deployment that has two domains could have two Keystone servers, one for each domain. So long as users only seek access to resources in their own domain, they can perform all operations through their own Keystone server. This Keystone server could potentially be hosted at a different data center than the one that hosts the rest of the OpenStack deployment. What if a user from Domain A wants access to a resource managed by Domain B? The Keystone server for Domain B has to be willing to accept tokens generated by the Keystone server that manages Domain A.

UPDATE. 20 Feb 2013

Using the federated identity management infrastructure (see https://blueprints.launchpad.net/keystone/+spec/federation) that has been implemented by the University of Kent, it is now possible to integrate multiple Keystones together to provide the functionality that Adam requires, by simply configuring them in the correct way. One Keystone (the identity provider) does not need any new software, and can be Keystone V2. It is simply configured to trust/know about the other Keystone (the service provider) as if it were just another of its cloud services such as Swift or Glance. The Keystone service provider on the other hand does need the new federated identity management capabilities, and it is configured to trust the other Keystone to be its identity provider. Here's how it works from a user perspective.

  1. The user's client contacts the Keystone SP saying it wants to use federated identity management (-f option)
  2. Keystone SP returns the list of trusted IdPs, which will include the Keystone IdP.
  3. The user chooses the Keystone IdP, and the client contacts the Keystone IdP.
  4. The user enters his username/password for the Keystone IdP, and it returns an unscoped token and a list of tenants (projects).
  5. The user chooses the tenant that will give him access to the Keystone SP, and the Keystone IdP produces a scoped token for the Keystone SP service.
  6. The user's client returns the scoped token to Keystone SP, and it validates this against the Keystone IdP and if successful it returns the user an unscoped token for its services and the list of local tenants that are available to the user.
  7. The user chooses the correct tenant and continues to access the Keystone SP services as if he had logged in locally to it.
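The seven steps above can be sketched as a toy simulation. All class and method names here are hypothetical stand-ins; in reality the exchange happens over the Keystone HTTP APIs, and tokens are signed documents rather than plain identifiers:

```python
# Toy sketch of the federated login flow between a Keystone IdP and a
# Keystone SP. Purely illustrative; not real Keystone code.
import uuid

class KeystoneIdP:
    """Stands in for the Keystone identity provider."""
    def __init__(self, users):
        self.users = users            # {username: (password, [tenants])}
        self.issued = {}              # token -> (user, scope)

    def authenticate(self, user, password):
        pw, tenants = self.users[user]
        assert pw == password, "bad credentials"
        token = uuid.uuid4().hex
        self.issued[token] = (user, None)        # unscoped token (step 4)
        return token, tenants

    def scope(self, token, tenant):
        user, _ = self.issued[token]
        scoped = uuid.uuid4().hex
        self.issued[scoped] = (user, tenant)     # scoped token (step 5)
        return scoped

    def validate(self, token):
        return self.issued.get(token)            # called by the SP in step 6

class KeystoneSP:
    """Stands in for the Keystone service provider."""
    def __init__(self, idps, local_tenants):
        self.idps = idps
        self.local_tenants = local_tenants
        self.issued = {}

    def list_idps(self):
        return list(self.idps)                   # step 2

    def login_federated(self, idp_name, scoped_token):
        user_scope = self.idps[idp_name].validate(scoped_token)
        assert user_scope is not None, "IdP rejected token"
        local = uuid.uuid4().hex
        self.issued[local] = user_scope[0]       # local unscoped token (step 6)
        return local, self.local_tenants

idp = KeystoneIdP({"ayoung": ("secret", ["canton-access"])})
sp = KeystoneSP({"keystone-idp": idp}, ["turing"])

assert "keystone-idp" in sp.list_idps()                   # steps 1-3
token, tenants = idp.authenticate("ayoung", "secret")     # step 4
scoped = idp.scope(token, tenants[0])                     # step 5
local_token, local_tenants = sp.login_federated("keystone-idp", scoped)  # step 6
print(local_tenants)                                      # step 7: pick a tenant
```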


Here is an example scenario to demonstrate the process.

  • Cumulus, Inc. is a cloud provider running OpenStack. It will have domains for two towns: Stoughton and Canton. A resident of Stoughton has a username composed of their first initial and last name: ayoung. The usernames used in Keystone are then username@stoughton. A resident of Canton might also have the same username, but it will be qualified with their domain name, so ayoung@canton is a different user from ayoung@stoughton. The Canton domain has a project deployed at Cumulus; we'll call this Project Turing. The administrator of the Canton domain grants the role *user* to ayoung@stoughton in their Keystone. Here is the sequence of events.

Sequence of Events

In the following scenario, I'll use ayoung to refer to the real person, and ayoung@stoughton as his userid.

  • ayoung performs a keystone token-get against the Stoughton Keystone instance, passing in his userId and password. To limit the scope of this token, he requests it for the canton project, a special project set up only for access to Canton resources.
  • Stoughton Keystone issues a new token for the Stoughton domain.
  • ayoung performs a keystone token-get against the Canton Keystone, passing in the Stoughton token. He requests a token for the Canton Turing Project.
  • The Canton Keystone server uses the Stoughton Certificate to validate the Stoughton token.
  • The Canton Keystone server issues a new token for ayoung@stoughton for the project turing@canton. I'll refer to this token as ayoung@stoughton/turing@canton.
  • ayoung sends a request to a Nova server running at Cumulus to start a virtual machine inside the turing@canton project, passing on the token for ayoung@stoughton/turing@canton.
  • The Nova server sees that the token was issued by the Canton Keystone server, and uses the Canton signing certificate to validate it.
  • Nova sends the start command and returns a success code to ayoung.
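The validation steps in this sequence can be sketched as follows. Real Keystone PKI tokens are documents signed with per-domain X.509 certificates; here an HMAC keyed per domain is a deliberately simplified stand-in for those certificates, just to show how a verifier picks the right key by looking at the token's issuer:

```python
# Illustrative sketch only: HMAC stands in for PKI signatures, and the
# SIGNING_KEYS table stands in for the fetched per-domain signing certificates.
import hashlib
import hmac
import json

SIGNING_KEYS = {
    "stoughton": b"stoughton-signing-key",
    "canton": b"canton-signing-key",
}

def sign_token(domain, payload):
    """What a domain's Keystone does when it issues a token."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEYS[domain], body, hashlib.sha256).hexdigest()
    return {"issuer": domain, "body": payload, "sig": sig}

def validate_token(token):
    """What Nova (or Canton Keystone) does: look at the issuer, use that
    domain's signing key, and check the signature."""
    body = json.dumps(token["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEYS[token["issuer"]], body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

# Canton Keystone issues the cross-domain token for the federated user:
canton_token = sign_token("canton", {"user": "ayoung@stoughton",
                                     "project": "turing@canton"})
assert validate_token(canton_token)

# A tampered token fails validation:
forged = dict(canton_token, body={"user": "mallory",
                                  "project": "turing@canton"})
assert not validate_token(forged)
```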

The mechanism described is very similar to how Kerberos does cross-domain trusts.

Security concerns: The python-keystoneclient should enforce that the initial token is only used for requesting services from another domain's Keystone server. If the user requests a general-purpose token and then hands that off to the Canton Keystone server, someone with access to that server could impersonate ayoung@stoughton by reusing his token. To compare with Kerberos, this would be like handing over a TGT to an untrusted service. It is also essential that an internal Keystone server not allow token re-authentication for signed tokens from the delegated servers. If a user with access to the Canton server then passes the initial ayoung@stoughton token to an internal Keystone, the internal Keystone server should deny it. Tokens are currently implemented such that they are verified against an internal data store. The only exception to this rule should be for explicit trust relationships like the one described above.

Comment by David

Hi Adam

I think you are mixing up two issues

i) access to services from foreign domains (the main thrust of this blueprint), and

ii) copying and using tokens belonging to someone else

The second issue is one that needs addressing for all users of Keystone, and is not restricted to multiple keystones and access from foreign domains. The issue arises because the tokens issued by keystone are "bearer" tokens, and there is nothing in them to link the token to the client that is submitting it. This issue has been dealt with in SAML through the use of the "holder of key" feature, in which the client has a key pair, and its public key is included in the token. The client can then prove that the token is his by signing the message with his private key. In this way the token is protected from copying and replaying by someone else.
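To make the holder-of-key idea concrete, here is a toy sketch using textbook RSA with tiny, deliberately insecure parameters. A real deployment would use a proper cryptographic library and certificate-bound keys; everything below is illustrative only:

```python
# Toy demonstration of "holder of key": the token embeds the client's public
# key, and the client proves possession by signing each request with the
# matching private key. Insecure textbook RSA, for illustration only.
import hashlib

# Tiny RSA key pair.
p, q = 61, 53
n = p * q                 # modulus (3233)
e = 17                    # public exponent
d = pow(e, -1, 780)       # private exponent; 780 = lcm(p-1, q-1)

def digest(msg: bytes) -> int:
    return int(hashlib.sha256(msg).hexdigest(), 16) % n

def sign(msg, priv_d, modulus):
    return pow(digest(msg), priv_d, modulus)

def verify(msg, sig, pub_e, modulus):
    return pow(sig, pub_e, modulus) == digest(msg)

# The token carries the client's public key, so it is no longer a bearer token:
token = {"user": "ayoung", "holder_of_key": (e, n)}

# The genuine holder signs the request with the private key:
request = b"POST /servers"
sig = sign(request, d, n)
pub_e, modulus = token["holder_of_key"]
assert verify(request, sig, pub_e, modulus)

# Someone who merely copied the token, but lacks the private key,
# cannot produce a valid signature for their own request:
assert not verify(b"DELETE /servers", sig, pub_e, modulus)
```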

The first issue, I would say, is one of federation, not delegation (although there is an aspect of delegation in federation, in that the resource or service provider delegates the act of authentication to the identity provider; however, this is not usually termed delegation, but rather trust: we normally say that the SP trusts the IdP to authenticate its users, and we don't normally say that the SP has delegated authority to the IdP to authenticate its users). Delegation is actually an abbreviation for delegation of authority, so it is primarily to do with authorisation, not authentication. So I delegate my role to someone, since this authorises them to act as me.

Back to your problem scenario. I would say that it is best addressed by considering that the two OpenStack installations, Stoughton and Canton, federate together, so that each trusts the other to authenticate the users of its own domain, and they will honour that. So, the administrator of the Canton domain grants the role *user* to ayoung@stoughton in their Keystone. This is simple role assignment as now, the only difference being that the administrator is granting the role to a remote user rather than a local user. Next, ayoung@stoughton authenticates to his local Keystone server and is given an authentication assertion, signed by Stoughton Keystone, to say "I have authenticated this user, believe me". Ayoung presents this to Keystone at Canton, which validates the assertion (because it trusts Stoughton Keystone) and returns him a scoped token for Project Turing and role user. It's that simple. So I would re-title this blueprint "OpenStack/Keystone Federation".

Comment by Matt Joyce

Hi Adam

So, I agree with David in so far as this post seems to be blending two issues. I feel like the solution suggested here is one specific to transitive trusts between domains. I am not sure we want to solve this issue yet.

Here is my concern. We're still laying out RBAC, and what the token will end up looking like for PKI and other enhancements. We cannot address how to transit those tokens and tie them back to credential checking until we know what the credentials are.

Going a little further, there is going to be a need (ESPECIALLY with Keystone domains providing transitive trusts) for a more fine-grained ability to be certain who signed a token and for what explicit purview.

In short, I think tokens will need to be able to be tied to:

  • specific instance(s)
  • specific user(s)
  • specific role(s)?
  • specific tenant/project(s)
  • domain.

In short, when a system is evaluating the validity of a token it will need to be able to say :

  • was this token created by ayoung?
  • is it a token intended for the host it is coming from ( wild cards allowed ? )?
  • does the request match the acl of the role ( or something else? security group? =/ )
  • Is this token specific to a tenancy?
  • Is this token specific to a domain ( probably hard locked to a domain it was created within ).
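The checklist above could be sketched as a token structure plus a validation routine. The field names below are invented for illustration; the real token format is exactly what was still being debated at the time:

```python
# Hypothetical token structure answering Matt's five questions. Illustrative
# only; not the actual Keystone token format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Token:
    user: str
    domain: str                                      # locked to issuing domain
    tenant: Optional[str] = None                     # None = not tenant-scoped
    roles: List[str] = field(default_factory=list)
    hosts: List[str] = field(default_factory=list)   # "*" acts as a wildcard

def token_permits(token, *, user, host, role, tenant, domain):
    """Evaluate one request against the five checks from the list above."""
    return all([
        token.user == user,                              # created by this user?
        "*" in token.hosts or host in token.hosts,       # intended for this host?
        role in token.roles,                             # does the role match?
        token.tenant is None or token.tenant == tenant,  # tenant-specific?
        token.domain == domain,                          # locked to its domain
    ])

tok = Token(user="ayoung", domain="canton", tenant="turing",
            roles=["user"], hosts=["*"])
assert token_permits(tok, user="ayoung", host="nova-1", role="user",
                     tenant="turing", domain="canton")
# The same token is rejected outside the domain it was created within:
assert not token_permits(tok, user="ayoung", host="nova-1", role="user",
                         tenant="turing", domain="stoughton")
```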

What I don't see is the specific breakdown of how the token is structured to meet today's needs as well as tomorrow's. And, consequently, how are we going to pass that data between domains properly?

I'd rather we locked down the token at the very least as a good baseline from which we can grow. However, I do caution against using existing standards as a drop-in solution. Cloud IS different. Our target instances (in cloud systems) will be entirely under the control of potentially unfriendly users. That means we need to be sure that any token we create can be checked without having more access than it needs to do so.

The other question is, how are tokens sent in from external sites treated? Long term token handling, token authentication when keystone is unreachable, etc.

I know there are other blueprints that address some of this, but I want to see how all of this works into trust transiting between multiple keystone endpoints. What makes it? What doesn't? What is coming in the future? What isn't?

Reply by David

Hi Matt, I have been saying for a while that what we need is a fully documented model for what a Keystone token needs to be in the long term, and what it currently is. Only when we know precisely what the functionality of existing tokens is, and what we want it to be in the long term, can we make any sensible progress towards our desired destination. I think the core group should initiate a design team to work on the conceptual model for the ideal future Keystone token, and then present this to the community for comment. Once the conceptual model has been agreed upon, then and only then can the actual encoding (or multiple encodings) be determined.

Comment by Pramod

Hi Adam, I think what you are talking about is described exactly in this paper.


David and Matt, please take a look at this.

Reply by David

Hi Pramod, I have read your paper. Thanks for the pointer. However, I think that your paper is primarily concerned with authorisation policies and authz decision making, whilst this particular blueprint is concerned with federating Keystones together. Can I suggest that we move the discussion of your paper to the appropriate location that is discussing the policy API?

Sharing of resources between multiple tenants is what is aimed at here. David, I understand that what you are talking about is federation management between more than one cloud service provider/instance, i.e. n > 1 OpenStacks. But access to resources between tenants within the same OpenStack installation is what is aimed at here, which is not implemented in the current Keystone.

Reply by David

Adam's example does not have one Keystone; it has two. So I don't think Adam is addressing the issue of sharing resources between multiple tenants. If he was, he would not have multiple Keystones, would he?

If a federated service is implemented, this would take care of the issue Adam is talking about here. But what if I don't want a federated system, and am only concerned with one central OpenStack installation that has multiple tenants (e.g. one in Stoughton and one in Canton), where the sharing of resources and access management between them has to be managed? I wouldn't be able to accomplish this, as it is not possible with the current implementation of Keystone.

Your thoughts?

Reply by David

I would say that the solution should be very simple. The policy that the cloud service provider sets up should be able to allow multiple tenants to access it. In fact, it should be able to allow anyone to access it, including the public! After all, the cloud service is in charge of its own resources and so should be able to fully determine who has access to them. It should not be controlled by Keystone or anyone else. There is currently a bug in the Keystone policy implementation (which Henry Nash has called a clever feature) whereby what should be a policy rule, and therefore editable, is in fact hardcoded into the Keystone implementation so that it cannot be changed. The code does something like check that the tenant of the token matches the tenant of the resource. This should not be there; no policy rules should be hardcoded into the Keystone code. If you can get the Keystone core team to remove this bug, you would be halfway towards a solution.
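David's suggestion can be sketched as follows: the tenant-match check becomes an ordinary, editable policy rule rather than hardcoded logic. The rule syntax below loosely mimics Keystone's policy.json style, but the evaluator is a deliberately simplified stand-in, not the real policy engine:

```python
# Minimal sketch of an editable policy rule. The "tenant:%(tenant_id)s"
# syntax imitates Keystone policy files; check_rule is a toy evaluator.
def check_rule(rule, creds, target):
    if rule == "":                       # empty rule: allow anyone, even the public
        return True
    key, _, match = rule.partition(":")
    return str(creds.get(key)) == (match % target)

# Today's hardcoded behaviour, expressed as just one editable rule among many:
policy = {"compute:start": "tenant:%(tenant_id)s"}

creds = {"tenant": "turing"}
assert check_rule(policy["compute:start"], creds, {"tenant_id": "turing"})
assert not check_rule(policy["compute:start"], creds, {"tenant_id": "other"})

# The provider can open the service to other tenants, or to everyone,
# by editing the rule instead of the code:
policy["compute:start"] = ""
assert check_rule(policy["compute:start"], creds, {"tenant_id": "other"})
```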