The ITIL maturity model has been used for many years to rate the capability and maturity of processes and functions. The aim of most organisations is to understand their current maturity ‘scores’ and use those as a starting point for making improvements.
However, the relationship between the scoring method (usually yes/no responses to multiple statements or questions, which are then averaged) and the quality and usefulness of the results obtained is rarely, if ever, considered.
So, in this article we'll look at:
The standard binary (yes/no) assessment method and its shortcomings
The ITIL Maturity Model itself
How the Likert Scale can bring a maturity model to life
IT leaders assess and improve IT Service Management (ITSM) processes for a variety of reasons. Foremost amongst these is the idea of benchmarking ITSM maturity and capability. This is where planned assessment activities and analysis are used to rate the performance of IT elements against an industry standard ITIL Maturity Model.
An assessment usually proceeds by interviewing stakeholders, observing work in progress, and identifying and testing material evidence. The intention is ultimately to improve the effectiveness, efficiency and quality of the interactions between the IT processes or IT functions under investigation.
Zeno: Powerful process improvement made simple
During the assessment of a process or function, a series of questions or statements are put to the interviewee. The standard set of statements for all ITIL processes is supplied by Axelos, who also define the ITIL Maturity Model itself, and the online assessment mechanism for providing responses to each statement. Here is an example statement:
“Training is provided to new people with a role within the process”.
In the Axelos assessment methodology accompanying the ITIL maturity model, there are only two ways of responding to this statement: either ‘yes’ or ‘no’.
One can immediately see that there is a range of possible answers which cannot be reflected by a simple binary choice. Also, when this method is used face-to-face with an interviewee, the requirement for a binary response effectively shuts down discussion, and can be a demoralising experience for those who would like to provide a much richer answer.
The binary response format, in contrast to the Likert Scale format, also gives us no additional information, no clues about how to find corroborating evidence, and no opportunity to mine the skillsets and experiences of those being interviewed.
Let's look at this more closely and suppose that one hundred statements relating to a single process such as Incident Management (most processes and functions have at least one hundred associated statements) are put to each of five interviewees, giving a total of five hundred responses. Although Axelos do not reveal the exact method by which they arrive at a maturity rating when responses are submitted via their online portal, many assessment professionals have reverse-engineered maturity scores by averaging: the proportion of ‘yes’ answers is scaled onto the maturity range and reported as a single ‘final number’.
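As an illustration, here is a minimal sketch of that kind of averaging. Since Axelos has not published its calculation, the scaling (yes-fraction mapped linearly onto the 0–5 maturity range) and the function name are illustrative assumptions only:

```python
def binary_maturity_score(responses: list[bool]) -> float:
    """Collapse yes/no answers into a single 0-5 'maturity rating'.

    Assumed method: fraction of 'yes' responses, scaled linearly to 0-5.
    The real Axelos calculation is not published; this is a plausible guess.
    """
    if not responses:
        raise ValueError("no responses supplied")
    yes_fraction = sum(responses) / len(responses)
    return round(yes_fraction * 5, 1)

# Five interviewees x one hundred statements -> five hundred booleans,
# reduced to one number.
responses = [True] * 310 + [False] * 190
print(binary_maturity_score(responses))  # -> 3.1
```

Whatever the exact formula, the point stands: five hundred individually meaningful answers are compressed into one figure, and everything interesting about the spread of those answers is discarded.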
At this point, there are two related questions we really should be asking.
First, of what practical use is that ‘final number’ maturity rating to an organisation?
Secondly, what issues, inconsistencies, errors, and potentially useful information that might have helped move the organisation forward lie buried beneath this ‘final number’? It is no exaggeration to say that this kind of assessment methodology gives Maturity Models a bad name.
Before moving on to how a different method of assessment using the Likert Scale can bring a maturity model to life, let us examine the ITIL Maturity Model itself a little more closely.
The ITIL Maturity Model itself is straightforward and useful, even if its implied methodology for arriving at Maturity Levels is not. In summary, the model provides definitions of five maturity levels, referred to as Initial, Repeatable, Defined, Managed and Optimized.
The definitions are expanded upon by a useful set of ‘maturity level characteristics’, and an extra Level 0 (referred to as ‘absence’ or ‘chaos’) is also added. The characteristics are referred to as ‘reactive’ for Initial Level 1, ‘active’ for Repeatable Level 2, ‘proactive’ for Defined Level 3, ‘pre-emptive’ for Managed Level 4 and just ‘Optimized’ for Level 5.
Examples of characteristics from each level:
L1: There is little management commitment.
L2: Procedures are usually followed but vary from person to person and team to team.
L3: There is starting to be a focus on operating proactively, although the majority of work is still reactive.
L4: Most activities that can be automated are automated.
L5: Process improvements are actively sought, registered, prioritized and implemented, based on the business value and a business case.
You can download a free copy of the ITIL Maturity Model from Axelos.
The ITIL Maturity Model, then, is perfectly serviceable and fit for purpose for ITIL processes and functions. But the assessment methodology to which it is tied does not deliver the benefits of a well-run assessment. Every assessing organisation should be looking not just for a Maturity Rating, but also for a detailed set of results and insights which are actionable and specific.
When handled well, every assessment can and should be a springboard for:
More accurate maturity ratings
Richer data, information and anecdotal evidence
Greater participation and motivation of staff
Focused, well-supported improvement projects
The perfect vehicle for these ideas is the combination of a Likert Scale with the Maturity Model.
Simply put, a Likert-type scale (to give it its proper name) is a five (or seven or even nine) point scale which prompts the interviewee to express how much they agree or disagree with a particular statement.
“Regular customer surveys and stakeholder feedback are used to improve the process and activities”

Strongly Disagree / Disagree / Neither Agree nor Disagree / Agree / Strongly Agree
An important advantage of the Likert Scale over the binary response method is that it allows for far greater accuracy at the granular level of individual statements. For example, perhaps customer surveys are used, but not consistently, or only in one area.
This leads to probably the most important advantage of the Likert Scale: it opens up the conversation with the interviewee, exposing the subtleties of the issues and the improvement opportunities.
The Likert Scale also allows for ‘outlier analysis’. For example, if one person ‘Strongly Disagrees’ but all other respondents ‘Strongly Agree’, then we have an outlier result which can be investigated and resolved. The binary response approach is incapable of giving this type of rich data.
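As a sketch of how such outlier analysis might be automated, assuming the five Likert points are coded 1 to 5 and group consensus is taken as the median (both choices are mine, not from the assessment methodology itself):

```python
def find_outliers(scores: dict[str, int], gap: int = 3) -> list[str]:
    """Return respondents whose coded Likert answer (1-5) sits at least
    `gap` points away from the group median - candidates for follow-up."""
    ordered = sorted(scores.values())
    median = ordered[len(ordered) // 2]
    return [name for name, score in scores.items() if abs(score - median) >= gap]

# Four respondents 'Strongly Agree' (5); one 'Strongly Disagrees' (1).
answers = {"alice": 5, "bob": 5, "carol": 5, "dave": 5, "eve": 1}
print(find_outliers(answers))  # -> ['eve']
```

The flagged respondent is not ‘wrong’; the outlier is simply a prompt for the interviewer to investigate and resolve the disagreement, which a binary yes/no format could never surface.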
Another advantage of the Likert Scale is that when used skilfully and intelligently it can be mapped against the ITIL Maturity Model in a very effective way. For example, in practice we have found that the following mappings can be made, particularly when an interviewee’s response is followed up with supplementary questions when necessary:
‘Strongly Disagree’ maps to Level 1 (Initial)
‘Disagree’ maps to Level 2 (Repeatable)
‘Neither Agree nor Disagree’ maps to Level 3 (Defined)
‘Agree’ maps to Level 4 (Managed)
‘Strongly Agree’ maps to Level 5 (Optimized)
In the actual assessment situation, whether using spreadsheets or more advanced software like Zeno, each of the Likert Scale responses is visibly aligned with the definitions and characteristics of the ITIL Maturity Model levels.
To use an example, an interviewee might respond that they ‘Strongly Disagree’ with this statement:
“This process has been formally adopted with at least some ad hoc process activities being undertaken”.
The ‘strongly disagree’ response is mapped to a Level 1 maturity rating with the following statement derived from the ITIL maturity model:
“Process adoption is ad hoc, disorganized or chaotic. The organization has recognized that issues exist and need to be addressed. But there are no standardized procedures or process activity. Process adoption is low priority with few resources allocated to it. Ad hoc adoption approaches are applied on a case-by-case basis”.
In the large majority of cases this correlation between ‘strongly disagree’ and a Level 1 maturity rating will hold good, as it does for the other four maturity levels and Likert Scale responses. As implied earlier, the skill and intelligence of the interviewer are called upon to decide when the interviewee’s initial response would actually be reflected more accurately by a different maturity level. This happens rarely.
The visible correlation between Likert Scale responses and Maturity Model statements is especially useful in self-assessment situations. The advantage here is that the maturity model becomes a living, dynamic way of looking at processes and functions, rather than just an external academic imposition.
There is no apparent reason why these methods could not be applied to other maturity scales and applications. The advantages are clear and useful in most if not all circumstances: more accurate maturity ratings, richer data and information, more productive gathering of anecdotal evidence, and greater participation and motivation of staff within the assessment interview itself, all of which spills over into subsequent improvement projects.
Most of the above relies to some extent on the willingness of an interviewer to engage in a real conversation with the interviewee, rather than relying on a rote scoring method. This is how the really useful information is discovered and used to improve organisations.
Maturity assessment is more than just a series of interviews, of course, and you can read about the wider context in the Visual Guide to ITSM Maturity Assessment.
In this series on interviewing we’ll cover: