One thing that I’ve come to truly appreciate: the balanced scorecard.  Don’t get me wrong: I’ve been using scorecards and dashboards for over a decade.  I helped to build one at American Express.  But I have come to see, from an executive level, why they are so freakin’ useful… you can use them to hold people accountable to measurable strategic improvement.

With a scorecard, it is possible to reduce "passion-based decision making" in the organization without requiring every decision to be based on return on investment.  (I like ROI, but only as a single measure within a balanced scorecard, not as the entire scorecard mechanism ;-).  If everyone understands the mechanisms by which "organizational health" is measured, then it is OK to improve one measure at the expense of another if the final outcome moves toward "health."

In that vein, I’m looking at the measures that Enterprise Architecture should use to demonstrate alignment with critical IT strategies and business goals.  We have to make sure that our work delivers value, and demonstrate that value, as part of our own scorecard. 

The Corporate Executive Board, an excellent organization that brings together peers from across industry, put together a presentation on the various measures of Enterprise Architecture used by their member companies.  I won’t go into details, but it appears that the measures break down into four general areas (a rough code sketch of a scorecard built from them follows the list):

  • EA environment and activities — this is what I call "Proof of Life" metrics.  Useful process metrics, and you can put ranges on them to push general activity.  These kinds of measures include "number of to-be architectures defined" and "number of business processes mapped."  Unfortunately, if these metrics are not properly aligned, they can end up being little more than "looking busy."  They prove you are working, but not that the work is having a positive impact.
     
  • EA compliance and adoption — these are the "Proof of Effect" metrics.  This is a lot closer to proving the case that EA is not only present, and busy, but having an effect.  These include measures like "% of applications used by more than one business," "% of projects compliant with EA standards," and "% of transactions that adhere to master data standards."  Presumably, these are good performance indicators that can be rationally tied to business value.  Having these measures is important.  Having that connection to business value is also important.  Note that the CEB study did not include two of the key measures that Microsoft IT finds important:
    • % of Business Stakeholders that view IT as a trusted advisor and strategic partner, and
    • % of Strategic Project Milestones reached on time 
       
  • Spending and Savings — these are the "Cost Cutting" metrics.  These are directly valuable to the business, as a single dollar of cost saved can go straight to the bottom line.  This group of measures includes things like "savings from a reduction in interfaces," and "savings from standardized purchase agreements."  You often need the "Proof of Effect" metrics to back up this group, to show that there is a correlation.  Otherwise, you can leave open the possibility of having a really large impact, for which another group is given credit.  For those of you involved in getting funding for EA, you’ll recognize how perilous that road can be.
     
  • Revenue and Profit — these are the "Value Stream" metrics.  These metrics are valuable to the company’s stockholders in the most visible of ways.  Metrics like these can include "revenue from new IT-enabled business capabilities" or "opportunity benefits of agility: revenue during time-to-market savings."  Unfortunately, it can be a long road between "govern the standards of an IT project" and "increase revenue."  At this level, EA can be part of a contribution to IT alignment, agility, and quality, which can be part of a contribution to Business agility and performance, which contributes to Business profitability.  On the other hand, I think that these numbers are not the best measure of EA performance, since the contribution can vary wildly from one project to the next, or even one quarter to the next, due to conditions that are completely outside of the control of EA (or even IT).  In many cases, these measures are the "cinnamon air freshener" of the CIO’s office.  They smell nice, but vanish quickly, leaving behind no evidence that they were ever there.
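To make those four categories a bit more concrete, here is a minimal sketch of what a balanced EA scorecard might look like in code.  Only the four category names come from the list above; the individual measures, numbers, weights, and targets are purely illustrative assumptions, not values from the CEB study or from Microsoft IT.

```python
# A minimal sketch of a balanced EA scorecard.  The measure names, numbers,
# weights, and targets below are hypothetical; only the four categories
# ("Proof of Life", "Proof of Effect", "Cost Cutting", "Value Stream")
# come from the post.
from dataclasses import dataclass

@dataclass
class Measure:
    category: str
    name: str
    actual: float
    target: float
    weight: float   # relative importance within the scorecard

SCORECARD = [
    Measure("Proof of Life",   "to-be architectures defined",                 8,    10, 1.0),
    Measure("Proof of Effect", "% of projects compliant with EA standards",  72,    85, 2.0),
    Measure("Cost Cutting",    "savings from reduced interfaces ($K)",      340,   500, 2.0),
    Measure("Value Stream",    "revenue from IT-enabled capabilities ($K)", 900,  1500, 1.0),
]

def health_score(measures):
    """Weighted average of attainment against target, capped at 100% per measure."""
    total_weight = sum(m.weight for m in measures)
    attained = sum(min(m.actual / m.target, 1.0) * m.weight for m in measures)
    return attained / total_weight

if __name__ == "__main__":
    for m in SCORECARD:
        print(f"{m.category:16} {m.name:45} {m.actual / m.target:.0%} of target")
    print(f"Overall scorecard health: {health_score(SCORECARD):.0%}")
```

The point of the weighted roll-up is the one made earlier in the post: one measure can slip while another improves, and the scorecard still tells you whether overall "health" moved in the right direction.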

Personally, I found this study useful on so many fronts.  It gave me context, ideas, and key questions to answer.  But now I’d like to ask you, the practitioner… what do you think?

If it were up to you to create a measure of Enterprise Architecture, what metrics would you collect?  What metrics would you ignore? 

Please share…

By Nick Malik

Former CIO and present Strategic Architect, Nick Malik is a Seattle-based business and technology advisor with over 30 years of professional experience in management, systems, and technology. He is the co-author, with Dr. Brian Cameron, of the influential paper "Perspectives on Enterprise Architecture" that effectively defined modern Enterprise Architecture practices, and he is a frequent speaker at public gatherings on Enterprise Architecture and related topics. He co-authored a book on Visual Storytelling with Martin Sykes and Mark West titled "Stories That Move Mountains".

7 thoughts on “How do you measure Enterprise Architecture?”
  1. These are excellent points around the value of a balanced scorecard, rather than an ROI metric alone.  How do you see extending these types of metrics, with a balanced scorecard, to Government-based SOA-type projects such as Medicaid and Medicare?

  2. Sounds good… but you haven’t really said how you measure these things, which was, after all, the title of your article.

    For instance, say I have a suite of projects and I ensure that all of them are "compliant with EA standards". This should hopefully lead to a "reduction in interfaces" that would otherwise be built within each of these projects.

    But after these projects have gone live, how do I measure the "savings from a reduction in interfaces"?

    I’d have to try to estimate how much each project would have cost if it had not adhered to EA standards and therefore more interfaces had been built, and how much it would cost the company to support these additional interfaces…

    And then how much more complicated this would make FUTURE projects, etc…

    This is all virtually impossible, isn’t it? I’ve been struggling with this for a while, and when I saw the title of your article I thought eureka! Alas… no magic bullet was found.

  3. Hi Matt,

    Sorry, no magic bullets.  The title of the post really was intended as a question, as in "I’m asking you (the audience of architecture practitioners) about the measures that you use."

    In many ways, we are in the same boat… trying to come to terms with a "good" answer to that very difficult question.  

    My gut tells me that the answer lies in a balanced scorecard that includes "proof of life" measures, a few "proof of effect" measures, and a couple of rather speculative "cost cutting" measures.  

    To be a valid set of measures, we would need to analyze the measures over time to show that the changes in "proof of life" measures are correlated with changes in the "Proof of effect" measures.  Otherwise, there is no way to know that EA is actually contributing to the observed effects.

    Savings from a reduction in interfaces is not a measure of the cost to build a system.  It is a measure of the cost to own the environment.  That includes building systems, installing systems, maintaining systems, and decommissioning systems.  So you don’t guess how much the environment would have cost.  You measure how much the environment DID cost, and using statistical analysis, you show a correlation between the number of interfaces and the total cost of ownership.
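    To make that last point concrete, here is a rough sketch (with entirely made-up quarterly figures) of the kind of correlation check being described: track the interface count and the measured cost to own the environment over several periods, and see how strongly they move together.

    ```python
    # A rough sketch of correlating interface count with measured total cost of
    # ownership.  The quarterly figures are illustrative, not real data.
    from math import sqrt

    interface_counts = [412, 396, 371, 355, 340, 322]        # interfaces per quarter
    tco_millions     = [18.4, 18.1, 17.2, 16.8, 16.1, 15.7]  # measured TCO per quarter

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / sqrt(var_x * var_y)

    print(f"Interface count vs. TCO correlation: {pearson(interface_counts, tco_millions):.2f}")
    ```

    A strong positive correlation here is what lets the "cost cutting" number be credited, at least in part, to the work that drove the interface count down.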

    I hope this helps.  Good luck.  Let me know how things turn out.

    — Nick

  4. Hi Nick,

    The balanced scorecard is definitely an approach to measuring something, and it seems you have covered measuring "everything" in the scope of EA. I understand that the objective of measuring EA is intended to cover everything, but wouldn’t it be better to focus on measuring EA to align it with business benefit?

    Is there a way to answer the following questions?

    1. Can I walk into an organization and use the model to say that it is at EA maturity level 3.5 on a scale of 1-5? (And this is what you need to do to move up to level 4)… Or that your Business and IT are aligned at 0.7 on a scale of 0-1, and this is your roadmap for better alignment.

    2. Is there a way to link an executive dashboard with EA-nugget plug-ins to show the real-time benefit of traceable, monetized transactions or business processes? In other words, to define and measure the business benefits?

    3. Can we link the maturity of measuring and governance enforcement with the EA maturity or Business-IT alignment?

    4. How do we model the governance (or define a scorecard) to achieve this?

    These are some questions that are answered in many ways. Is there a way to define a reference model for this?

    I believe that is what you are getting at here with "Measuring EA", and it would be great to have your thoughts on my questions.

    Regards,

    Anirban

  5. Hello Anirban,

    Maturity models are an interesting conversation, and using a maturity model to measure the effectiveness of EA is a useful and often-quoted practice.  That said, there is a rather pernicious assumption built into a metric of this type: that moving a number on a maturity model will have a positive impact on the organization!  In other words, we have to assume, or simply have faith, that the maturity model itself is correct… that it places the right focus on the right things and that it does not exclude things that end up being valuable to the actual effectiveness of EA or alignment in your organization.

    There is a way to determine a good maturity model… it involves collecting information from companies that demonstrate VALUE from their alignment of business to IT, and then gathering observations from those "positive outliers."  If you examine enough of them, you can produce a set of ‘best practices’ that reflect their experience, and from that, you can develop a rational maturity model for the rest of us to follow.

    The CEB study that I cite contributes to this body of knowledge but does not accomplish the goal.  Perhaps this industry is too young to achieve this goal at this time.  

    Until then, using a maturity model to measure EA is risky at best.  

    As far as defining and measuring business benefits in real time, the algorithmic complexity of that endeavor far exceeds the value of producing it.  As I mentioned, we are a support function for a supporting business function.  

    In other words, if I do my job well, it allows IT to do its job well, which allows business to do its job well, which allows the customer to benefit.  Each place in the above sentence where I used the word ‘allows’ is a weak link in the metrics.  That is because ‘allowing’ someone to do something is not the same as doing it.

    (Analogy: I can give you a fast car, but that doesn’t mean you will win the race.  If business were a Formula One racing team, then EA would be the guy who makes sure that the mechanics’ crew has the right tools with which to build and maintain the car.

    It is risky to measure the person who supplies the tools on the basis of how many races the team has won.  There have to be more immediate measures, or you will never be able to tell if they are doing a good job.)

    I’ll think more about this space and blog further on it.  I hope my response is helpful.

  6. I suggest complexity indexes, such as the ratio of hub-and-spoke to point-to-point integrations, or the number of redundant data stores by domain (e.g. Party, Location, Contracts). Holding the line on rising complexity can be considered as effective as actually lowering complexity. This would be equivalent to stabilizing the IT environment as a first step towards controlling TCO. Over time you would want to see evidence of lower complexity based on metrics. It’s like turning a supertanker: it can take some time to see a change in direction.
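    A rough sketch of how such indexes might be computed follows; the integration inventory and domain counts are illustrative assumptions rather than figures from any real environment.

    ```python
    # A rough sketch of the complexity indexes suggested above.  The integration
    # inventory and domain counts are illustrative assumptions.
    integrations = {"hub_and_spoke": 64, "point_to_point": 210}
    data_stores_by_domain = {"Party": 7, "Location": 4, "Contracts": 5}

    # Ratio of hub-and-spoke to point-to-point integrations; a rising ratio
    # suggests integration complexity is being brought under control.
    integration_ratio = integrations["hub_and_spoke"] / integrations["point_to_point"]

    # Redundant stores per domain: anything beyond one authoritative store counts.
    redundant_stores = sum(count - 1 for count in data_stores_by_domain.values())

    print(f"Hub-and-spoke to point-to-point ratio: {integration_ratio:.2f}")
    print(f"Redundant data stores across domains:  {redundant_stores}")
    # Tracked quarter over quarter, a flat trend means complexity is being held;
    # an improving trend is the supertanker slowly changing direction.
    ```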
