Measuring Risk in Application Portfolio Management

December 26th, 2007 | Enterprise Architecture

I decided to take a few minutes of my vacation time to catch up on my reading, and I read through Mike Walker’s article on MSDN on APM and EA.  It is an interesting and useful article. (I’d give it a B-).

One thing that I’d like to highlight in the practice of application portfolio management is risk management, an area that Mike implicitly touches on, but which I believe is fundamental to the business case of APM.

You see, there is nothing wrong with owning a bunch of stuff.  Think about it: how many chairs does your company own?  How many desks?  How often does the company spend money to replace every chair in every office?  If your business is typical, the answer to that question may very well be “never.”

Yet, we do see projects where a company will replace four billing systems with a single billing system.  That happens.  Clearly, owning an application portfolio is different than owning an inventory of assets.

Key among the differences is risk… especially risk to business continuity.  There are many other factors, of course, and Mike covers some of them quite well in his article, but I want to focus on risk and risk management.

There is a substantial intersection between Application Portfolio Management and Risk Management.   However, I suspect that some folks who read this may not be aware of the area of risk management.  From Wikipedia, here is a fairly good definition:

Risk management is the human activity which integrates recognition of risk, risk assessment, developing strategies to manage it, and mitigation of risk using managerial resources.

The strategies include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk.

By way of example:

When you look at an inventory of chairs, you have risks.  If a chair gets old, and breaks, and an employee is injured, then the business faces insurance claims.  Morale suffers.  Productivity may decline due to lost work time and morale.  If the incident is public, then the company’s reputation may suffer.

Managing that risk involves understanding the kinds of things that can go wrong (falls, wounds, productivity decline, etc) and determining the factors about a chair that may lead to them (poor condition, missing parts, wobbling, etc).  If you collect this information about your inventory, and then you group your chairs according to these attributes, you might get a few classes of chairs: (excellent, workable, frail, dangerous). 

With each category, you can determine the risk to the business for owning it.  Clearly, the risk to own dangerous chairs is higher than the risk to own workable chairs.  While it doesn’t make sense to replace every chair, these statistics can provide an excellent business case for replacing the dangerous chairs (right away) and the frail chairs (over a finite period of time).

We use essentially the same process for applications.

What are the things that can happen to the business if an application fails?  Let’s list out those things, and then create a set of attributes that an application has that will help to differentiate some applications from others.

Risk scenarios –> Attributes –> Data collection –> categorization

Within each category, you can determine the risks to the business that need to be mitigated.
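As a sketch, the chain above can be mechanized. Here is a minimal Python version, where the attribute names, weights, and thresholds are all hypothetical placeholders for whatever your own risk scenarios suggest:

```python
# Hypothetical attributes and weights; substitute the ones your
# own risk scenarios actually produce.
def categorize(app):
    """Place an application into a risk category from collected attributes."""
    score = 0
    if app.get("unsupported_platform"):   # vendor no longer patches the stack
        score += 3
    if app.get("no_failover"):            # single point of failure
        score += 2
    score += app.get("outages_last_year", 0)
    if score >= 5:
        return "dangerous"
    if score >= 3:
        return "frail"
    if score >= 1:
        return "workable"
    return "excellent"

portfolio = [
    {"name": "Billing-A", "unsupported_platform": True, "no_failover": True},
    {"name": "CRM", "outages_last_year": 1},
]
for app in portfolio:
    print(app["name"], categorize(app))   # Billing-A dangerous, CRM workable
```

The point is not these particular weights, but that the same collected data drives the same categories across the whole portfolio, so the categories can be compared and defended.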

Note that you can have many hierarchies, many categorizations.  You can group applications by their lifecycle stage (Strategic, Core, Maintain, and Sunset), and that is certainly useful for combining APM with PPM.  In other words, it is useful to know how much of your planned budget is devoted to improving strategic applications. (Mike mentions this in his MSDN article, though with definitions that we don’t actually use internally in Microsoft IT.)

Another useful categorization is application impact on operations.  The attribute to measure is  the speed at which a failure would impact operations of the business:

  • Instant (<6 hours)
  • Immediate (<2 days)
  • Rapid (< 10 days)
  • Serious (< 60 days)
  • Corrosive (within 9 months)
  • Hidden (gradual impacts on quality of customer experience or regulatory compliance)
  • Competitive (no impact on operations, but potential impact on ability to compete)
  • None (no one will miss this app if it goes away)
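One way to make this scale concrete is an ordered enumeration. The tier names come from the list above; the hour thresholds simply translate the parenthetical durations, and the last three tiers are qualitative rather than time-based, so the mapping function below covers the operational tiers only:

```python
from enum import IntEnum

class Impact(IntEnum):
    """Speed at which an application failure impacts business operations."""
    INSTANT = 8      # < 6 hours
    IMMEDIATE = 7    # < 2 days
    RAPID = 6        # < 10 days
    SERIOUS = 5      # < 60 days
    CORROSIVE = 4    # within 9 months
    HIDDEN = 3       # gradual quality/compliance impact
    COMPETITIVE = 2  # no operational impact, but competitive impact
    NONE = 1         # no one would miss it

def impact_from_hours(hours_until_felt):
    """Map 'time until operations feel the failure' to a tier (operational tiers only)."""
    if hours_until_felt < 6:
        return Impact.INSTANT
    if hours_until_felt < 48:
        return Impact.IMMEDIATE
    if hours_until_felt < 240:
        return Impact.RAPID
    if hours_until_felt < 1440:   # ~60 days
        return Impact.SERIOUS
    return Impact.CORROSIVE

print(impact_from_hours(30))   # a crew-scheduling outage: Impact.IMMEDIATE
```

Because the enum is ordered, two applications can be compared directly, which a label like “strategic” does not allow.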

This is far more useful than a subjective measure like “strategic” or “core” when determining the value of investment in an application, and it also shows something else: the serious problems that may arise from a lack of investment.  

A terrific example appeared in CIO magazine a while back: Comair Airlines kept putting off investments in a new crew management system, only to have the system crash during a heavy Christmas season, literally grounding the airline.

No one in the business would have considered a crew scheduling system to be ‘strategic’ and so an investment portfolio that breaks things down by how ‘strategic’ an application is would not have favored the replacement of that application.  On the other hand, a categorization that captures the application’s impact on operations would clearly have placed that application in the Immediate category. 

Of course, correct categorization is only the first step.  Now you have to determine the risk of failure.

Categorization –> risk of failure –> cost to business –> priority for mitigation

By determining how likely an application is to fail, based on its risk categorization, you can select the applications that most need attention.  Now, that attention does not have to involve a rewrite.  There are lots of ways to mitigate risk.  You can move the risk by making someone outside the business responsible for handling that business capability.  You can reduce the risk of failure by introducing redundancy or failover.  You can reduce the cost to the business by moving non-essential decisions off of one application and onto another, more reliable, application. 
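The chain above reduces to an expected-loss calculation: likelihood of failure times cost to the business gives a mitigation priority. A minimal sketch, with purely illustrative probabilities and costs:

```python
def mitigation_priority(p_failure, cost_of_failure):
    """Expected annual loss; higher means mitigate sooner."""
    return p_failure * cost_of_failure

# Illustrative numbers only -- in practice p_failure comes from the
# risk categorization, and cost from the operational-impact tier.
apps = [
    ("Crew scheduling", 0.30, 20_000_000),   # immediate operational impact
    ("Intranet wiki",   0.50,     50_000),   # hidden/competitive impact only
]
ranked = sorted(apps, key=lambda a: mitigation_priority(a[1], a[2]), reverse=True)
print([name for name, *_ in ranked])   # crew scheduling first
```

Note that the wiki is *more likely* to fail, yet ranks lower: the priority follows expected loss, not raw failure probability, which is exactly why the Comair-style system floats to the top of the list.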

Mitigation review –> Comparison of alternatives –> investment in mitigation

I am not an employee of Comair, and I have no desire to criticize.  Their case is very public, but there are many more failures in IT that impact operations that are not so well described.  I refer to their misfortune as an example for us all to learn from.  In that vein: perhaps if there were a graph that showed the amount of investment against high risk applications, as opposed to the amount of investment against ‘strategic’ applications, then it would have helped to seal the business case for mitigating, and ultimately preventing, the heavy losses that the company faced when IT failed to keep the system running.

The key here is that IT has to work closely with the business, something that IT folks are not very good at and that business folks often fail to understand the value of.  But by showing that some applications deserve mitigation, and by working as partners to reduce the risks those applications face, the business will willingly invest in the mitigations that are needed.  This is the visibility gap that APM can fill.

Success requires a conversation between IT and the business, one that Enterprise Architecture must foster.  And this is one area where EA and APM intersect.  One of many, but an important area that we must not forget. 

EA + APM + Proper Measurement = Risk Management

Measuring the agility of a SOA approach

December 14th, 2007 | Enterprise Architecture

I’m thinking about the business case for integration again… (still).  We talk about SOA providing a benefit by being more agile.  In other words, if you have a SOA infrastructure, you can change to meet the needs of the business in an agile way.

Here’s how to prove it.

  • Step 1: create a metric
  • Step 2: use the metric against two projects, one with SOA and one without.
  • Step 3: present differences.

Step 1: create a metric

You need to create a metric for the “IT agility” realized by an architecture.  This will provide you with a baseline for comparing radically different projects on different architectures.

IT Agility metrics are unusual.  In my view, this metric would relate the speed by which a change occurs with the complexity of a change. 

Literally: Agility = Change Complexity / Change Duration.

With a metric of this kind, you can compare the agility of two approaches: say non-SOA distributed systems vs. SOA distributed systems. 

Measuring the speed of a change is not too difficult.  Time of request to time of delivery.  The other half is a bit harder.  Measuring the “complexity of change” is an interesting problem, but not intractable.  Certainly, there is cyclomatic complexity, but I don’t think that says anything about the architecture.  It is too algorithmic and quite dependent on the language.

Change Complexity = (architectural depth factor) * (process breadth factor)

Think of this like the area of a rectangle: “depth in the architecture” times “breadth in the process” equals “area of the change.” 

Architectural depth is the number of LOGICAL layers of your system that are impacted by the change.  Logical layers could be a simple list.  I like this list: user experience, business process, business rules, information storage, information integration, but you could use your own list, as long as you are consistent.

Process breadth is tricky, since processes include processes.  I’d say that the breadth of a process is a sum of the number of distinct roles involved in the process, and the number of hand-offs of responsibility from one role to the other. 

Process breadth = (number of roles) + (number of handoffs between roles)
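Putting the three formulas together, a sketch in Python. The inputs are counts you collect per change; the units are arbitrary, so only the ratio between two projects means anything:

```python
def process_breadth(roles, handoffs):
    """Breadth of a process: distinct roles plus hand-offs between them."""
    return roles + handoffs

def change_complexity(layers_touched, roles, handoffs):
    """Area of the change: architectural depth times process breadth."""
    return layers_touched * process_breadth(roles, handoffs)

def agility(layers_touched, roles, handoffs, duration_days):
    """Agility = Change Complexity / Change Duration."""
    return change_complexity(layers_touched, roles, handoffs) / duration_days

# A change touching 3 logical layers, involving 4 roles and 5 hand-offs,
# delivered in 30 days:
print(agility(3, 4, 5, 30))   # 0.9
```

A higher number means more change delivered per unit of time, which is the claim you are trying to test for SOA.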

Now, with a metric of this kind, you need to use it to measure something… that’s the next section.

Step 2: Use the metric against two projects

Find two projects in IT where you can get your hands on the project requirements and timelines, for each update.  Project 1 has to be a non-SOA project, while project 2 has to be a project that was SOA from the start. 

If you cannot do this, then you need to create a SOA Proof-Of-Concept project.  I’m assuming that you have some SOA systems in place already, but that you need to show value for the approach.

If you have no SOA Proof-of-concepts (POC) yet, stop here.  I cannot help you.  Go get a POC going.

Assuming you have a SOA POC, and assuming you can get the numbers for another system across its lifespan…

For each type of app, answer these questions:

  • How often in the past 5 years has this business application needed an update?  (frequency of change)
  • How long did each update take? (duration of change)
  • How complex was the change? (see above section)
  • Are there other applications that do the same thing in your company?  (If so, include them in a generic category, like “Order Management”) 
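Collected this way, each update becomes one record, and a project’s agility is just the mean of the Step 1 metric across its updates. The field names below are illustrative, not a prescribed schema:

```python
def project_agility(updates):
    """Mean agility across a project's updates, using the Step 1 metric."""
    scores = [
        (u["layers"] * (u["roles"] + u["handoffs"])) / u["duration_days"]
        for u in updates
    ]
    return sum(scores) / len(scores)

# Hypothetical history for one project: one record per update.
soa_updates = [
    {"layers": 2, "roles": 3, "handoffs": 2, "duration_days": 10},
    {"layers": 1, "roles": 2, "handoffs": 1, "duration_days": 4},
]
print(round(project_agility(soa_updates), 2))   # 0.88
```

Run the same computation over the non-SOA project’s history and the two numbers become the comparison for Step 3.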

Step 3: present differences

Now you should be able to make a value-add comparison of the SOA vs. non-SOA projects and, assuming folks like me are right (no promises), you should be able to show that SOA projects are more agile.  That presentation needs to look good and have the right information in it.  Don’t assume everyone will know what to do with your recommendation.

Special considerations for rip-and-replace projects

Note: if the project you picked is part of a “rip and replace” space, you need to do a bit more work. 

A “rip and replace” space is any generic category of software (or solution domain) where the business invested, for the past few years, to remove an existing system and replace it with a new one.  Note that a R&R project doesn’t need to be successful to be part of the problem, but clearly failures are easier to ignore when calculating the agility of the solution.

For the sake of this comparison, avoid any projects in your comparison that have significant rip and replace projects against them. 

The battle for the net-top heats up

December 11th, 2007 | Enterprise Architecture

Sometimes, in a long struggle, a goal that was strategic one day, becomes unimportant later.  This happens when some underlying assumption is challenged, when some previously secure resource becomes unavailable, or when the behavior of large groups of people shifts.

(Caveat: my views are my own, and may or may not be shared by my employer, Microsoft.  Investors, customers, partners: please, do not make financial or purchasing decisions on the basis of my opinions.  Nothing I say is “official.”  God forbid.)

That doesn’t mean that the battle was lost… just that its strategic value is lost.  Winning that battle was hard-fought, and valuable at the time, but that battle, whether won or lost, just isn’t as important any more. 

For years, Microsoft has fought to put the most software onto the desktop of every personal computer in the world.  It is no secret that “windows on every desktop” was a rallying cry for this company for a while.  Although we are not so focused on a single product anymore, we still want to get our products on as many machines as we can, and machines into as many homes as possible.  That drives adoption, which creates a de-facto standard and a compelling “virtuous cycle.”

We’ve been criticized for this strategy.  We’ve been lauded for this strategy.  We’ve been sued over this strategy.  We’ve been successful because of this strategy.  Microsoft software on every desktop! 

But now I’m going to venture an opinion… a prediction of the future.

In the future, winning the desktop won’t matter as much anymore.  That goal, in the coming decade, will gradually decline in importance.  Putting a bunch of software on every desktop will be nice, and it will earn a lot of money, but, IMHO, it won’t fund the next level of growth for Microsoft. 

A new battle has emerged, one for the hearts and minds of the future generation: the generation of the digital native.  This is a battle of love and passion and inventiveness, a battle to earn the good will and the respect of a billion people.  A battle we cannot lose.

Our past is based on the desktop.  Our future is based on the net-top. 

What is the net-top?  The net-top is the Internet equivalent of the desktop, a grand shared space where all applications are installed already, and you pay for only what you use.  Where ultimate choice drives the day, where small players and large players alike have a much more even playing field.  Where it doesn’t matter if you live in China or India or Brazil or the USA… you get the same applications, available in the language you choose, and you can choose which ones to use because they are all already installed through the web and Silverlight and services.

The net-top is the new surface of computing.  It is the Internet, plus service, plus software that is needed on the device to make up for the inherent frailty and constraints of the network.  It is neither open source nor proprietary.  It is not a browser.  It could be a mashup surface that provides access to every internet software+service application, already installed (even big-bad Microsoft’s services), along with access to the virtual storage needed to hold the information. 

(Note: Hosted desktop services are part of the net-top, but not all.  I’d start there, but my definition far exceeds the hosted desktop solutions that are currently available).

I believe that, eventually, the service is all that will matter, and the download of software to the desktop will be both free, and very simple to do.  It won’t matter where the desktop lives: on a laptop or a hosted desktop or a PDA or a telephone or in a car or woven into the material of your winter coat.  What will matter is the service.  Data will be “in the cloud,” and available to every service that needs it.

The control of the CIO over the contents of the corporate desktop will wane.  This trend has been going on for some time, and CIO magazine has not only recognized it, but recommended that CIOs embrace it. (See Users who know too much). It is time to let the users have the control.  The force is unstoppable anyway.  Initiatives and products that attempt to wrestle control back to the CIO will meet with success briefly, but will ultimately fail to gain foothold as the tidal wave of user-self-determination washes away these obstacles.

Information will move to the ‘cloud.’  There is no avoiding it.  The individual users who distill information from data will control that information, often outside the boundaries of the corporate walls.  Secrets will become harder to keep, and IP will become even more difficult to control, even as IP becomes more valuable to the survival of the top corporations of the world. 

Corporations will install their own local or proxied versions of popular Internet services in hopes of keeping intellectual property assets from leaking out.  In-hosted services, however, will fail to prevent the migration to the Internet cloud, as partnerships and communities will increasingly extend well past the boundaries of the corporation.  As they do, the ‘center of gravity’ will shift away from the corporation to the community: an extended space defined by the people themselves, with their own rules for information management. 

To cope, Corporations will purchase “spaces” in popular sites for members of their company to collaborate safely.  IT departments will begin to adopt common standards for protecting that data, and will push those standards on large service providers.  A new conversation will emerge, from the IT community that, in the past, drove very few standards.  So while corporate information will move out of IT, control over how it is managed will collectively shift.  Information will be assured and managed, not controlled. 

All of this is driven by the net-top.  This is the new space, and Microsoft is coming.  We are creating products at an increasing rate, moving resources, shifting priorities, reorganizing.  The movement is taking hold inside Microsoft, and that is an amazing thing to watch.  I was here when Microsoft “discovered” the Internet, and this time there is even more seriousness than in the 90’s.  Microsoft will not, cannot, has not ignored the net-top. 

Sure, folks like Salesforce and Amazon are already there, and winning customers with excellent products.  But we are there as well, and we are driving forward at an accelerating rate.  Competition is what drives us all.   No one loves to compete more than Microsoft. 

And you can’t count us out.  Not on something we are serious about.  A long time ago, Microsoft was not first in spreadsheets, but now Excel is the king of spreadsheets.  Once upon a time, Microsoft was not the first in presentation software, but now elementary school kids learn Powerpoint as an essential job skill.  I don’t know the market share of Exchange or SQL Server, but I’m certain that we have gained, gradually, relentlessly, continuously. 

We are serious about the net-top. 

The battle has been joined. 

Fitting SOA+BPM into the software lifecycle

December 7th, 2007 | Enterprise Architecture

I have a SOA view of the software development lifecycle.  And, in that SOA view, BPM fits nicely.

First, a comparison: Waterfall looks like this:
Waterfall: Plan –> Envision –> Design –> Develop & Test –> Deploy
Agile: Plan –> Sprint –> Sprint –> (occasionally) Deploy –> Sprint –> Deploy

A SOA SDLC looks more like this:

Plan –> Sprint (Process and User Experience) –> Sprint (Process & Services) –> Deploy –> Sprint (P&UX) –> Sprint (P&S) –> Deploy

In other words, you get as far as you can go with the user experience, you update the services, and you deploy.  Then, you do it again.  (I think Agile works a LOT better than waterfall).

So what does a “Sprint (Process and User Experience)” do?  The dev team is focused ONLY on the front end.  No changes to the back end are allowed.  This is a feedback cycle: services are developed slowly and carefully, mostly with heavy unit-test and runtime-testing requirements, and during that time there is less involvement with the customer’s team.  So by cycling like this (assuming sprint lengths of three to five weeks), you can get greater feedback and user acceptance testing by consuming more of their time in U/X during ‘high cycles,’ and let them do their ‘day jobs’ during service cycles. 

During “Sprint (Process and Service)” cycles, the team focuses on meeting the stringent requirements for creating and consuming enterprise services.  Heavy unit tests.  Real-time test harnesses.  Synthetic transactions.  Idempotent design.  Activity monitoring.  Performance testing.  Reliability testing.  You get the picture.

In both kinds of sprints, process changes are happening.  That is because it can take a LONG time to run through a user acceptance test on a process, get feedback, and incorporate it.  There is no good reason to create a ‘low cycle’ in that work.

I’m assuming mature process management tools, of course.


Get BPM into IT project funding

December 6th, 2007 | Enterprise Architecture

One challenge that we run into: having a software developer design the business process.  Now, that’s no slam on software developers.  There are some very smart cookies out there writing software… but if you want to develop a business process, you need to make sure that the business likes the process before you write the code.

I believe that the person who develops the business process has to be separate from the person who writes the basic code.  SOA supports this idea.  In SOA, the composition is where the business process lives.  The services don’t care what order they are called in.

So if the developer doesn’t write the process… who does?  Is it the IT analyst?  Is it the IT project manager or IT solution owner?  Only if they are trained to create well-designed and efficient processes.  If they are not, then it’s not much better than having the developer do it.

Honestly, the benefits of shared processes in a company can be substantial.  Any process that does not differentiate the business in the marketplace should be considered as a candidate for sharing between lines of business… (as long as those lines of business already share data).  Sharing a process can provide real opportunities for reducing cost and improving efficiency.  Each unique process carries a cost… in training, in tools, in exception procedures…  The fewer, the better.

However, IT projects often do not have Business Process Management figured in.  BPM must be part of the way in which the software is understood, described, and communicated.  If BPM is considered first, then SOA can produce the benefits we want it to produce.  If BPM is not considered first, then you are cutting off many of the benefits of SOA. 

Don’t undercut the value of SOA by failing to manage the processes in your enterprise.  It would be a crying shame if you did.

Alignment through "honeypot" funding

December 3rd, 2007 | Enterprise Architecture

How do we take EA governance from a “push” model to a “pull” model?  In other words, how do we create a system where people want to do the same things that EA wants them to do, without calling it governance, and without the political battles that ensue?

I have an idea, and it relates to the process of creating a budget forecast.  That’s right… it’s boring… but at the same time, this is where the rubber meets the road: money.  People do what you pay them to do.  That truth is immutable, and quite useful.  While a small number of people don’t do what you pay them to do, the wild majority do.  Corollary: if you make it easier to fund well-aligned projects than poorly aligned projects, then more projects will be well aligned.

Based on this theory, and this theory alone, I suggest that EA should focus their efforts in two ways.  First: create and get approval for a future-state model describing what the future should look like, and second, get the IT funding process to encourage projects that align with the future. 

That’s it.   Who needs governance when people will only do what you pay them to do, and you control the purse strings?  This is exactly the model that Corporate America, and their friends in the Republican party, have used to take over the news media and fire every progressive anchor and reporter they can find.  They didn’t ban progressive thought… they just refused to pay for it.

If we do the same with Enterprise Architecture, and we take the long-term view, we can get the future vision to appear without raising a single ‘red flag’ or “governing” a single project.

I call this idea “honeypot funding.” 

It works like this.

Business goes through a cycle of ‘strategy’ setting where they create a list of their strategies for the coming period (hopefully five years, but in many companies, you are lucky to get more than 9 months…).  Business architects map the strategy to business capabilities.  They ask, “What capabilities will the business need to improve upon in order to meet these strategies?”  They create a list of “hot spots” where specific capabilities are weak and need to improve.

Application architects come along and map the “hot spots” to “logical platforms.”  A logical platform is a technology-neutral description of an application or system that encompasses specific features, exposes specific services and is not tied to any particular line of business.  App architects ask, “what features will the IT platforms need to improve upon?”  (Note: our term for a ‘logical platform’ is “Solution Domain”)

All this happens within a few weeks of the creation of the strategy documents. 

The budget office then takes a look at the work of the architects.  The areas where the platforms need improvement, or areas where maturity is needed but it doesn’t exist… these areas get the honey.  When creating the initial budget plan for NEXT cycle, finance works with the application architects to set a priority.  Projects, when submitted for funding, have to tie to a particular ‘logical platform’  and they are considered for funding in the order of the priority of the platform in the budget system.

The priorities are published as soon as possible.

When IT projects are proposed, they are inevitably aligned to the highest priority platforms, because the business folks and IT folks who see those priorities will naturally spend time and effort on projects that they believe are likely to be funded.  Why propose a project aligned to a tiny budget when you can propose a project that is aligned to a larger budget?  People will go where the money is.
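The mechanism is simple enough to sketch: every proposal names a logical platform, and the funding queue is just the proposals sorted by the platform’s published priority. All of the platform and project names below are made up:

```python
# Published platform priorities (1 = hottest "hot spot"); hypothetical names.
platform_priority = {"Order Management": 1, "Crew Scheduling": 2, "Intranet": 9}

# Every proposal must tie itself to a logical platform to be considered.
proposals = [
    {"name": "Wiki refresh",   "platform": "Intranet"},
    {"name": "New OMS module", "platform": "Order Management"},
    {"name": "Crew failover",  "platform": "Crew Scheduling"},
]

# The honeypot: funding is considered in platform-priority order.
funding_queue = sorted(proposals, key=lambda p: platform_priority[p["platform"]])
print([p["name"] for p in funding_queue])
# ['New OMS module', 'Crew failover', 'Wiki refresh']
```

Because everyone can see the ordering before proposing, the incentive does the governing: people write proposals for the platforms at the top of the list.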

To add to the mix, you can stipulate that an IT project that wants to be able to get a budgetary extension later on must sign up to EA requirements at the time of initial funding.  Now, if you think you can deliver your app on the original budget and timeline, no need to sign up… but the only projects that fall into that category are doomed to failure anyway, so who cares.  The rest of the projects will gladly sign up to EA requirements, just so that they can get permission to extend their project deadlines later on down the line.

Gotta think about this one some more. 

IT Funding Processes

December 2nd, 2007 | Enterprise Architecture

Like many corporations, Microsoft has many business units, and many IT groups.  Enterprise Architecture has a lot to keep track of.  The big win, for EA, is in helping to decide what projects are funded.  Once a project is funded, the opportunities to guide, direct, and influence the project drop dramatically.  So let’s talk about funding processes.  This post is generally about funding, and less so about EA’s role.

Before I describe bits of our funding process, let me give a generalized definition.  Most folks will have something similar, but probably organized differently.

A funding process, in general, is a rationalization process whereby the needs of the business are weighed against the capacity and capabilities of the IT group.  Projects that overlap are combined, and projects that provide a low level of business value drop in priority, often to the bottom of the list.  The IT group accepts as much work as it can do, some projects are outsourced, other projects go away (and sometimes come back the next time around).  Companies will run through a funding process on a periodic basis.  Sometimes annually, sometimes semi-annually, sometimes quarterly.  At each iteration, all projects and all capacity are considered, so provisions have to be made for in-flight projects.

From that definition, there are bits of both “what” and “why.”  The reason for having a funding process is simple.

  1. IT is a constrained resource.  An IT group can only handle a fairly fixed amount of work every quarter.  Each project needs to be estimated from a standpoint of the number and type of resources that it will consume, so that one part of IT is not overbooked, while another part is wanting.
  2. Planning reduces churn and makes an IT environment work.  People like to work less than 50-hour work weeks.  By accepting the right amount of work, and then setting expectations about when the work will begin, the staff in the IT department doesn’t feel pressured to kill themselves to deliver “everything, all at once.”
  3. Good ideas come from many sources.  An innovation that can help the business compete may be described by many people, and come to IT in many different costumes.  If IT is lucky, they will recognize the overlaps and, if the process is engineered right, the projects can be combined to produce the business effect with a single investment.  Analogy: my wife and I both drive a car.  If my car starts to break down, it would be foolish for both of us to buy a new car without talking to the other.  We need one car.  Both of us saw that.  We should agree on what that new car will do, and go to the dealership together.  IT is the dealership.  The project builds the car.
  4. The business doesn’t always see the interdependencies between projects, or the need for infrastructure investments.  A funding process provides a way to surface projects that must be done before another project is started, all in support of a business strategy. 
  5. Some ideas from the business should not be funded.  We’ve all seen the ‘squeaky wheel’ syndrome, where the person with the loudest voice got what they wanted, regardless of whether it was what the business needed.  This happens in IT projects as well.  Sometimes the business will request a project, and they may even be able to justify it with an ROI, but the business shouldn’t be asking, and the IT group shouldn’t be doing it.  Why?
    1. No Fund reason #1: The project may take the company into an area of business where the executives are not interested in going.  Reality is that the most important decision a company can make is not “where to compete,” but rather “where not to compete.”  That decision, in a large company, may have to be made many times, or made once and enforced many times.  Sometimes, a good idea just shouldn’t be pursued. 
    2. No Fund reason #2: The project may fail to recognize all of the costs, either the costs of change or the operational costs, associated with its ‘business plan.’  It may look like the resulting revenue will easily overcome the IT costs, but I’ve seen projects with small IT costs that couldn’t break even, because they required massive retraining of staff or three full-time business people dedicated to managing business relationships. 
    3. No Fund reason #3: The project may wildly overestimate the revenue.  This is simply optimistic business planning.  Let’s say that you have a supply chain, and you figure that if you offer a service to your own suppliers, they will pay you for it.  So you calculate the benefits to the suppliers, come up with a ‘price’ for your service, and then predict, optimistically, that 80% of your suppliers will ‘buy’ the service.  What if no one buys it?  What if your suppliers have a lot of ‘say’ in other areas of the business, or you are a small consumer of their products, and they can safely ignore you?  What if the costs of change, on their part, are too high to get a benefit out of your service, even if they want to use it?  What happens to your return on investment if no one buys the ‘product?’  I saw this one first-hand.  IT projects that are part of “new business opportunities” are part of a business risk.  They need to be vetted by the business at another level altogether, before IT gets involved.
    4. No Fund reason #4: Some ideas serve particular business departments in terms of cost containment or ‘annoyance control’ (as in: let’s automate this task… it’s really annoying to do).  There may be no impact on revenue, quality of service, or turnaround time.  It may be more efficient for the business to have people do the work.  It may be expensive, or simply wrong, to automate the task.  There may be compliance problems with automating the task.  These kinds of projects may be dropped because they are an example of people spending money to make their lives easier, not because they benefit the bottom line.  If the person pushing this kind of project wants it to be funded, they need to find a way to tie it to business strategy or measurables. 
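The adoption-rate trap in reason #3 is easy to see with a few numbers.  The figures below are entirely hypothetical (cost, price, supplier count, and adoption rates are all made up for illustration), but they show how a simple ROI that looks healthy at the optimistic 80% uptake can collapse if suppliers decline to buy:

```python
# Hypothetical numbers illustrating "No Fund reason #3": an ROI that looks
# healthy at an optimistic adoption rate goes sharply negative when uptake
# falls short, because the IT cost is fixed while the revenue is not.

def roi(it_cost, annual_price, suppliers, adoption_rate, years=3):
    """Simple return on investment: (total revenue - cost) / cost."""
    revenue = annual_price * suppliers * adoption_rate * years
    return (revenue - it_cost) / it_cost

IT_COST = 500_000    # assumed build-and-run cost over the planning horizon
PRICE = 2_000        # assumed annual 'price' charged per supplier
SUPPLIERS = 150      # assumed supplier base

optimistic = roi(IT_COST, PRICE, SUPPLIERS, 0.80)   # the plan's 80% uptake
pessimistic = roi(IT_COST, PRICE, SUPPLIERS, 0.10)  # what if almost no one buys?

print(f"ROI at 80% adoption: {optimistic:+.0%}")   # +44%
print(f"ROI at 10% adoption: {pessimistic:+.0%}")  # -82%
```

The same spreadsheet math applies whatever the real numbers are: before funding, the business should ask what adoption rate merely breaks even, and whether that rate is plausible.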

So when we discuss a funding process, the challenge is “where does Enterprise Architecture add the correct amount of value.”  What are we responsible for, and what are we supposed to overlook or leave to others?

The Enterprise Architect needs to be an agent that

  • points out gaps where the IT group should be funding projects, but where projects are not appearing in the stream of requests, and works to get projects queued up. 
  • points out overlapping requirements across projects between teams, and works to combine projects where it makes sense to do so.
  • places specific requirements on specific projects where doing so moves the entire IT ecosystem towards a future vision that they have ALREADY created and gotten buy-off on.
  • asks if all of the other compliance checkpoints have been checked off, but does not withhold approval on the basis of another group failing to provide a check-off (that is the EPMO’s job). 

In addition, architects who are not working in Enterprise Architecture, but rather are aligned with the project teams themselves, need to add another layer of information.  They need to be agents that:

  • help ‘vet’ the operational costs for a project where those costs exist in IT or are necessitated by the tools.
  • help ensure that the development, test, and deployment costs are correctly calculated and include the costs of development that may vary based on training or readiness. 
  • place specific requirements on specific projects where doing so moves a segment of the IT ecosystem towards a future vision that is aligned to, and supports, both the EA vision and the goals of the IT team.

There are interdependencies between stakeholders, of course, and politics.  That is material for a later post.  I just wanted to put the generic role of EA in a funding process on the table, and ask you, gentle reader, whether your company is organized similarly, or differently.

What's in your wallet?

By |2007-11-28T00:25:00+00:00November 28th, 2007|Enterprise Architecture|

Capital One has repeated that phrase to me so many times, it’s amazing.  The fact that I cancelled my Cap One card after they charged me 10 times the fees of any other card means that this phrase now has another meaning… what should NOT be in my wallet.

Applying that to architecture…

One cool thing about EA: we get to think about the future.  What capabilities are required by the company, in the future, in order to meet the needs of a vision, a business goal, a dream…  It’s fun.

So I’ll ask “what’s in your roadmap?”

A federated ESB?

A common information model?

Matrix or Cloud database management?

Business process execution engines?

Near-real time Business Intelligence?

B-to-B federated identity management?

What bit of esoteric infrastructure or wild-eyed darn-near impossible business requirement is creeping into your five year view? 

I’d love to know…

All bloggers are Customer 2.0, but not every Customer 2.0 is a blogger

By |2007-11-27T03:51:00+00:00November 27th, 2007|Enterprise Architecture|

I’d like to draw a distinction that I should have drawn before.  I had an interesting discussion in e-mail after my previous blog post on EA and Customer 2.0.  I suggested that Kai, our persona for Customer 2.0, learned how to write code and develop mashups in school, but she doesn’t need to use that skill because we would have to provide her with a beautiful experience…

That created an unintended perception in the mind of one reader: that only mashup artists and bloggers would qualify for my definition of Customer 2.0.  That is certainly not my intent.  My definition is not so narrow.

I do believe that Customer 2.0 is far more ‘internet literate’ than I was at the age of 20.  That said, she is not a geek.  On the contrary.  She is a digital native, and has no tolerance for poor quality services or navigational dead ends or any of the things we overlooked when HTML was cool. 

She will not decide where to put her hard-earned micro-transactions and ad-clicks on the basis of geekiness.  She will choose largely based on unique interests, self-defined identity, and membership in one or more communities. 

Therefore, while the early adopters, bloggers, and mashup artists who help to build the communities are clearly included in the definition of Customer 2.0, so are the men and women who use twitter to keep up with their friends, or write quick notes on other people’s Facebook pages.  They will listen to new music that their friends are listening to, and will visit restaurants and clubs that their extended community recommends. 

Customer 2.0 is motivated by community.  Mass marketing is not as effective, but word-of-mouth advertising is more effective than ever before.  Acquisition is difficult.  Retention is everything.  Brand matters.  Cool matters.  Trust matters.

Geekiness is OK, but not required.