Working with identity and access is more of a journey than a destination. Many of the areas above have dependencies in both directions, and it is important to improve each one gradually; with every iteration, the system matures. One method of evolving is to continuously ask five questions:

  • Who works here? 
  • What should they do and why? 
  • How do we make sure and improve? 
  • What can go wrong? 
  • Are we playing by the rules? 

Who works here? 

It might look like an easy question to answer, but it is usually more complex than one would think. The easy answer is “the employees”, but looking at how IT is used in an organization, there is usually a wide variety of users: consultants, partners, suppliers, customers and more. This question is answered by work in the area of Identity. Within Identity there are two major areas: one is the connection from an Identity Manager to other applications and directories; the other is how identities should be “proven” as users access the IT infrastructure.

What should they do and why? 

Besides knowing who the users are, it is important to understand what they should have access to and why. This is both to make sure they get the access they need to do their job, and to make sure they do not get access to information they are not privileged to. Most organizations try to define groups, attributes and/or roles that users belong to, which in turn forms the basis for what access they should get: the authorization. There are many models for this, but in reality it most often becomes a combination of them. The highest-level decision is whether the user should have any access to an application at all; the second level is typically what role the user should have. Sometimes there is also a deeper level of authorization where the IAM solution decides on individual “permissions”. Enforcement is all about how the IAM solution shares this information with the application, and how the application makes sure it follows the policies.

How do we make sure and improve? 

As mentioned above, this is more of a journey, and the systems under control are not static. Governance is often looked upon as an obstacle, but handled correctly and proactively, it can become a resource and create a structure that makes life easier. It is important to be able to follow what happens and, to avoid overload, to detect deviations from what is considered normal. Based on this information there is always room for improvement, and changes should be controlled; an uncontrolled change can have a negative impact in many ways. Since an audit often goes back 12 months, it is also important to be able to answer what access was possible during that time. Automation can handle many events, but exceptions to automation must be possible, and they must be tracked as well.
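
Answering "what access was possible during that time" requires keeping a history of access changes, not just the current state. Below is a hedged sketch of that idea: an append-only event log that is replayed to reconstruct the state at any point in time. The event format and function names are invented for illustration.

```python
# A sketch of an access-change log that lets an audit answer
# "did this user have access to this application at time T?"
# The event tuples and the replay logic are illustrative assumptions.

from datetime import datetime

# Each event: (timestamp, user, application, action), action is "grant" or "revoke".
events = [
    (datetime(2021, 1, 10), "bob", "crm", "grant"),
    (datetime(2021, 6, 1),  "bob", "crm", "revoke"),
    (datetime(2021, 9, 15), "bob", "crm", "grant"),
]

def had_access(user: str, app: str, at: datetime) -> bool:
    """Replay events up to `at` to reconstruct the access state at that moment."""
    state = False
    for ts, u, a, action in sorted(events):
        if ts > at:
            break
        if u == user and a == app:
            state = (action == "grant")
    return state

print(had_access("bob", "crm", datetime(2021, 3, 1)))  # True: granted in January
print(had_access("bob", "crm", datetime(2021, 7, 1)))  # False: revoked in June
```

The same log is where manual exceptions to automation would land, so that they too can be replayed and explained twelve months later.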

What can go wrong? 

No one can build a perfect system where nothing goes wrong. It is important to analyze what one should be prepared for, have a plan for how to fix it, and understand the impact and consequences. With mitigation, one can also reduce the risk that it actually happens.

Are we playing by the rules? 

When defining policies and setting the level of control, it is important to follow the rules defined both internally and externally. Compliance frameworks can be used as measuring sticks of progress and as drivers for making changes in the other four areas.

Management – “top down” or “bottom up”

Different organizations have different approaches, but the more mature the organization, the more common a “bottom up” approach is. A very immature and reactive organization also uses the “bottom up” approach, but with a lot of pain and seldom with good results. The first proactive step is to try to answer the five questions from the top: it is hard to decide what a person should do unless you know whether they are working “here”.

True Metadirectory

As the number of directories started to grow some 30 years ago, one approach was to build solutions that tied these directories together and presented a view of the information from multiple sources. Wikipedia explains it as follows (April 2022):

  • A metadirectory system provides for the flow of data between one or more directory services and databases, in order to maintain synchronization of that data, and is an important part of identity management systems. The data being synchronized typically are collections of entries that contain user profiles and possibly authentication or policy information. Most metadirectory deployments synchronize data into at least one LDAP-based directory server, to ensure that LDAP-based applications such as single sign-on and portal servers have access to recent data, even if the data is mastered in a non-LDAP data source. 
  • Metadirectory products support filtering and transformation of data in transit. 
  • Most identity management suites from commercial vendors include a metadirectory product, or a user provisioning product. 
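
The flow in that definition can be condensed into a small sketch: pull entries from several sources, filter and transform them in transit, and write the result to a target directory. The source names, field layouts and helper functions below are made up for illustration; a real deployment would talk LDAP and databases rather than Python lists.

```python
# A minimal sketch of the metadirectory flow from the definition above:
# data flows from sources to a target, with filtering and transformation
# applied in transit. All names and record shapes are invented examples.

hr_system = [{"emp_id": "1001", "full_name": "Carol Chen", "dept": "Finance"}]
contractor_db = [{"id": "c-7", "name": "Dan Berg", "active": False}]

def sync(sources, target):
    """For each source, keep only entries passing the filter, reshape them,
    and append the result to the target directory."""
    for source, transform, keep in sources:
        for entry in source:
            if keep(entry):                      # filtering in transit
                target.append(transform(entry))  # transformation in transit
    return target

directory = sync(
    sources=[
        (hr_system,
         lambda e: {"uid": e["emp_id"], "cn": e["full_name"]},
         lambda e: True),
        (contractor_db,
         lambda e: {"uid": e["id"], "cn": e["name"]},
         lambda e: e["active"]),                 # drop inactive contractors
    ],
    target=[],
)
print(directory)  # only the HR entry survives; the inactive contractor is filtered out
```

Note how each source gets its own transform (mapping source fields to common directory attributes) and its own filter, which is exactly the per-connection behavior the definition describes.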

In 2000, however, Microsoft launched Windows 2000, with a new architecture that relied heavily on a new directory named “Active Directory” (AD). The name is to some extent misleading, because it is a fairly passive directory. Some features with active components have become popular, with the Group Policy “push” and software deployments being the most common.

The big change AD created in 2000 was that all application vendors had to adapt their solutions to AD. AD was almost LDAP, and it introduced its own schemas that few were brave enough to change. AD became the mountain that everyone went to.

One of the challenges with AD is its reliance on certain internal network protocols (e.g. Kerberos) and security models. As applications dared to break away from the local network, Microsoft’s first response was ADFS, and most organizations used it as their first step in integrating applications on the outside.

In 2009, the SVP of Engineering (Todd McKinnon) and a solution engineer (Frederic Kerrest) at Salesforce saw the growing need for a true identity source in the cloud, delivered as SaaS, and founded Okta. During its first ten years, most of Okta’s business was about going to the “mountains” that started appearing in the cloud, such as Salesforce, ServiceNow, Box, Microsoft and Google.

When I joined Okta as the first solution engineer in the Nordics, I saw the value of having a “gofer” that ran around and talked to the mountains. Over the years, Okta became a mountain in itself, and required the applications to take over the work of integrations. Its nice feature, the discovery of applications’ identities as a first step in integration, was not followed up in terms of using the applications as sources (except for a number of HR systems, Google, Microsoft AD (not Azure AD) and LDAP). Okta still delivers a lot of value, but I decided to step outside of Okta and continue to help organizations build bridges with what I call a true metadirectory, very much in line with Wikipedia’s definition.

A true metadirectory is, in my view, a process that does not host the truth by itself, but collects the truth from the sources that hold it. I believe that data should be managed and owned where the knowledge sits. This means that a metadirectory needs to understand how to prioritize each attribute and group membership per user type. Because the true metadirectory is sourced from other systems, you can run multiple instances of it, using the same sources.
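
The per-attribute, per-user-type prioritization can be sketched as a small priority table. Everything below (the source names, the priority table, the attribute values) is invented to illustrate the idea: for each attribute, the first source in the ordered list that actually has a value wins.

```python
# A sketch of the "true metadirectory" idea: the directory holds no truth of
# its own, but assembles each user from the sources that own each attribute.
# Source names, the priority table, and the values are illustrative assumptions.

# Per user type, an ordered list of sources per attribute; the first source
# that provides a value wins.
PRIORITY = {
    "employee":   {"email": ["hr", "it"], "phone": ["it", "hr"]},
    "consultant": {"email": ["partner_db"], "phone": ["partner_db", "it"]},
}

SOURCES = {
    "hr":         {"email": "eva@example.com", "phone": None},
    "it":         {"email": "eva@corp.example", "phone": "+46 70 000 00 00"},
    "partner_db": {},
}

def resolve(user_type: str) -> dict:
    """Pick each attribute from the highest-priority source that provides it."""
    result = {}
    for attr, order in PRIORITY[user_type].items():
        for source in order:
            value = SOURCES[source].get(attr)
            if value:
                result[attr] = value
                break
    return result

print(resolve("employee"))
# {'email': 'eva@example.com', 'phone': '+46 70 000 00 00'}
```

Because the resolved view is derived entirely from the sources, nothing stops you from running several independent instances of this process against the same sources, which is the point made above.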