Help wanted: who pays to simplify your IT portfolio?

April 28th, 2006 | Enterprise Architecture

Entropy creates IT portfolios.  As time goes on, business needs change.  Mergers happen.  Tool strategies change.  It is inevitable that every IT department will find itself (rather frequently) facing the situation where there are two or three or four apps that perform ‘important task forty-three’ when the enterprise needs exactly one.

So let’s say that important task forty-three is to “track the number of widgets in the supply chain from distributor to retailer.”  Let’s say that the Contoso company has two divisions.  One makes toys.  The other makes fruit snacks (a mildly perishable product).  These two divisions have different needs, and they were built at different times.  Contoso purchased a supply chain application for the toy division, and built its own for the food division.

So, along comes an architect and he says ‘you only need one.’  After the business stops laughing, they ask if he’s serious.  When he says that he is, they argue that he’s wrong.

Let’s assume he wins the argument.

Who pays to combine them?

The needs are different.  The application that manages the toys is not able to handle perishable items and the interesting stock manipulations that go on.  It is not able to track batches, and their due dates.  It is not able to calculate the incentive rebates that Contoso offers to retailers to refresh their stock when they flush aging inventory.

Combining the two may very well mean purchasing a third application and migrating both of the two older systems’ data into it.  Or it may mean expensive modifications to the commercial package, or the addition of custom code to the in-house tool.

That costs money.  We want to spend money to save money.  Fine. So who takes it on the chin?  Who extends the credit?  The food division or the toy division?

These decisions are not rare enough to run each one up to the CIO.  There has to be a rational way to fund the decommissioning of an application: move and archive data, validate functionality, update process flows, re-attach broken integration pathways, and so on.  It can cost as much to turn an application off as it did to install it in the first place.

What have you seen work in your organizations? What process do you use?

Do we need a new measure of software complexity to calculate the TCO of a portfolio?

April 27th, 2006 | Enterprise Architecture

A few days back, I blogged about a formula I’d suggest to measure the TCO of the software portfolio.  One responder asked “how do you measure complexity, since it is a large part of your formula?”

To be honest, I was thinking about traditional models of complexity measurement that are often built into static analysis tools.  Measures like Cyclomatic Complexity are useful if you have a codebase that you are maintaining and you want to estimate the risk of changing it.  I would argue that risk is proportional to cost when it comes to maintenance.  That said, complexity should get as low as possible, but no lower.
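For readers who haven’t run one of those static analysis tools: cyclomatic complexity simply counts the independent paths through a routine.  A minimal Python illustration (the function itself is invented for the example):

```python
def shipping_cost(weight, expedited, fragile):
    """Toy pricing routine; each 'if' adds one independent path."""
    cost = 5.0                # base rate
    if weight > 10:           # decision point 1
        cost += 2.0
    if expedited:             # decision point 2
        cost *= 1.5
    if fragile:               # decision point 3
        cost += 3.0
    return cost

# Cyclomatic complexity M = decision points + 1 = 4, so four test
# cases are needed to exercise every independent path.
```

The number tells you something real about the risk of changing this routine, which is exactly why it correlates with maintenance cost.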

However, after reading the question and rereading the entry on Cyclomatic Complexity on the SEI site, I realized that the definition on that site is wildly out of date.  The page references a number of methods, but not one of them has any notion of whether an app’s complexity goes down if it is assembled from configured components (see Fowler’s paper on Dependency Injection and Inversion of Control).
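Fowler’s idea is easy to show in miniature.  A sketch of constructor injection in Python — the class names are invented, not from any real framework:

```python
class SmtpNotifier:
    """One possible collaborator; a test could inject a stub instead."""
    def send(self, message):
        print(f"email: {message}")

class OrderProcessor:
    # The notifier is injected rather than constructed inside the class,
    # so OrderProcessor stays simple even as the assembly around it grows.
    def __init__(self, notifier):
        self.notifier = notifier

    def process(self, order_id):
        self.notifier.send(f"order {order_id} processed")
        return order_id
```

A complexity measure that only reads OrderProcessor’s code sees almost nothing, because much of the real behavior lives in how the components are wired together at configuration time.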

In addition to the advent of patterns, we have made great strides in removing code complexity by placing the business rules external to the code.  In some respects, this has a payoff in reducing the cost of ownership.  On the other hand, you have to account for the complexity of how the business rules are encoded and maintained.  Rules encoded as VBScript are hardly less complex than code.  But they may be less complex (to maintain) than rules encoded as a hand-built static linked list or tree structure stored as database records.
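To make the trade-off concrete, here is a sketch of rules stored as data rows rather than code; the rule table, field names, and actions are all invented for illustration:

```python
# Each rule row: (field, operator, threshold, action).
RULES = [
    ("order_total", ">", 1000, "require_approval"),
    ("item_count",  ">",   50, "bulk_discount"),
]

OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

def evaluate(order):
    """Return the actions triggered by an order (a plain dict)."""
    return [action
            for field, op, threshold, action in RULES
            if OPS[op](order[field], threshold)]
```

The engine stays trivial, but the complexity has not vanished; it has moved into the rule table, and an honest portfolio measure has to count it there.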

We have also removed complexity from code by placing some of it in the orchestration layer of an integrated system.  In fact, this can be a problem, because complexity in orchestration can be quite difficult to manage.  I’ve seen folks install multiple instances of expensive server software just because they felt that they could better manage the proliferation of messaging ports and channels if they dedicated entire instances of integration server software to a specific subset of the channels. 

This is not done for performance.  It may even make deployment more difficult.  But if your messaging infrastructure is a single flat address space, then fixing a single message path is like trying to find one file in a directory of 12,000 files, each with a GUID for a name, when the sort option is broken.

So complexity in the orchestration has to be taken into account.  Remember that we are talking about the complexity of the entire portfolio.  If you say that neither App One nor App Two owns the orchestration between them, then are you saying that the orchestration itself is a new app, called App Three?  How will THAT affect your TCO calculations?

Most of the really old complexity measures are useless for capturing these distinctions.

Of course, you could just measure lines of code or Function Points.  While I feel that Function Points are useful for measuring the size of the requirements, TCO is not derived from the size of the requirements; it is derived from the size of the design.  And while I feel that LOC has a place in application measurement, it provides no useful mechanism for estimating the total cost of owning an application: a well-architected system may require more lines of code in total, but it should reduce the amount of ‘cascading change,’ because well-architected systems reduce coupling.

On the other hand, complexity, to be useful, must be measurable by tools. 

I’m not sure I can whip up a modern complexity calculation formula to replace these older tools in the context of a blog entry.  To do this topic justice would require the time to write a master’s thesis.

That said, I can describe the variables I’d expect to capture and some of the effects I’d expect each of these variables to have on total complexity.  Note: I would view orchestrations to be ‘inside the boundary’ of an application domain area, but if an area of the architecture has a sizable amount of logic within the connections between two or more systems, then I’d ask if the entire cohesive set could be viewed, for the sake of the complexity calculation, as a single larger application glued together by messaging.

Therefore, within this definition of application, I’d expect the following variables to partake in some way in the function:

Variables for a new equation for measuring complexity

Number of interfaces in proportion to the number of modules that implement each interface: an interface that has multiple implementations shows an intent to design by contract, which is a hallmark of good design practice.  That said, each interface has to have at least two modules implementing it to achieve any net benefit, and even then the benefit is small until a sizable amount of the logic is hidden behind the interface.

Total ports, channels, and transformations within the collaboration layer, divided by the number of port ‘subject areas’ that allow for grouping of the data for management: the idea here is that the complexity of collaboration increases in an S curve.

Total hours it takes to train a new administrator to perform each of the common use cases to a level of competence that does not require oversight.

Total number of object generation calls — in other words, each time the ‘new’ keyword is used, either directly or indirectly.  By indirectly, we want to count each call to a builder as carrying the same (or slightly less) complexity as a direct use of the ‘new’ keyword.

Total count of the complexity as it is measured by coupling — There are some existing tools that appear to do a fine job of measuring the complexity by measuring the module coupling. 
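To make the shape of such a function concrete, here is one purely hypothetical way to combine these variables in Python.  Every coefficient is a guess, and the logistic term is just one way to model the S curve described above:

```python
import math

def portfolio_complexity(impls_per_interface, ports_per_subject_area,
                         training_hours, new_calls, coupling_score):
    """Hypothetical composite score; all weights here are assumptions."""
    # More implementations per interface suggests design by contract,
    # so this term shrinks as the ratio grows.
    interface_term = 10.0 / max(impls_per_interface, 1.0)
    # Collaboration-layer complexity rises along an S curve as ports
    # outgrow the subject areas used to manage them.
    orchestration_term = 100.0 / (1.0 + math.exp(-0.5 * (ports_per_subject_area - 10.0)))
    # Training hours, object-generation calls, and coupling contribute
    # roughly linearly in this sketch.
    return (interface_term
            + orchestration_term
            + 2.0 * training_hours
            + 0.1 * new_calls
            + 5.0 * coupling_score)
```

Fitting the real weights is exactly the master’s-thesis-sized job mentioned above; the sketch only shows how the variables might interact.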

I’m sure I’m missing some more obvious ones, because I’m tired. 

Once the list is understood and generated, then creating a formula that models the actual data isn’t simple.  That said, I’m sure that we need to do it. 

Architecture is an attitude, not a model

April 25th, 2006 | Enterprise Architecture

I ran across an interesting post by Bob McIlree that discusses, among other things, that the ‘real problem’ is not what we might think.  To quote:

So…the real problem we’re solving for, as JT noted, isn’t necessarily better, faster, cheaper. In the large corporate and governmental areas, I’d argue that the real aggregate problems we’re solving for are, as examples: cost-effective front and back-end, compliant, auditable, available (pick your nines), extendable/maintainable, interoperable, and secure.

This is a soft description of Software Quality Attributes, a mechanism that you can use to evaluate and review software architecture (search for the ATAM method).  That Bob needed to take the time to describe this is, in my opinion, indicative of just how young, and probably how misunderstood, the architecture profession really is.

Anyone who needs to spend a lot of time telling others what their job is, is working in a new job.  Some would say “a job that is not needed,” but there I would disagree.

Those of you who say that Project Managers have always been part of software are too young to remember when Systems Analysts would solve the problems themselves.  The emergence of the Project Management profession was not an easy one.  A lot of folks questioned the need for a PM, and others resented that they got to do some of the up-front work that tends to get a good bit of the visibility.  Is it really that hard to remember that the reason we needed project managers in the first place is that developers, by themselves, had a cruddy success rate in delivering software on time?

The problem was not that developers couldn’t manage time, or tasks.  It was that there needed to be a group of people, separate from development, dedicated to solving the problem of delivery (time, resources, funding).

Where PMs solve the problem of “deliver software right,” EAs solve the problem of “deliver the right software.”  We are needed for the same reasons: because development teams, by themselves, have a cruddy success rate at delivering software in small testable components that are hooked together in carefully managed but loosely coupled links. 

We are the ones that figure out where those small testable components are, what their boundaries look like, how they are managed, and how to communicate with them.  We tie their abilities with the needs of the business: the capabilities of the organization.

For those who say that one technology or another will be the ‘magic bullet,’ I’d point out that we introduced a technology a long time ago that allows for loose coupling… it’s called the dynamically linked library (DLL).  That will solve everything!  Right? 

The problem is not that developers cannot manage loose coupling, or messaging.  It’s that there needs to be a group of people who are incented to solve this particular problem, separate from all the other stresses of business or IT that tend to prevent these designs from emerging.  We need people dedicated to solving the problem of capability scope and component boundary definition, separate from application design or technology deployment.

It’s a dual role, not unlike that of being both the city planner and the zoning board inspector.  You not only help decide where the street is supposed to go, but you ‘encourage’ the land developers to build according to the city plan.   When it works, the streets flow.  When it doesn’t, you get Houston.

To be fair, I think we are still coming to terms with the profession ourselves.

So, to Bob and all the others who feel the need to explain what EAs are for, I add my voice and recognize, that in my meager attempt to describe what I do, I am also defining it, refining it… and maybe even understanding it… just a little bit more.

How Enterprise Architecture enables Web 2.0

April 22nd, 2006 | Enterprise Architecture

The role of an enterprise architect is not well understood.  That much is clear.  Some folks say that EA is at one end of the scale, while Web 2.0 is at the other.  Those people are not enterprise architects.  They are missing the point.

Web 2.0 is about building solutions in a new way.  Enterprise Architecture does not tell you to build the solution in the correct way, as much as it tells you to build the correct solution. 

Enterprise Architecture would be completely unnecessary if you could simply teach all of the practitioners of IT software development to build the right systems.  In fact, that was the first approach most organizations used. 

Smart people would notice that stupid things were happening, like many systems showing up in an organization, all doing the same things in a different way, each consuming millions in cash to create and maintain, instead of building smaller components, with independent capabilities, hooked together with messages.  Smart people would say “This is dumb.” 

Management would say “We agree.  Tell everyone to stop doing that.”

Smart people would tell other IT staff to stop doing it.

And it kept happening.

So Enterprise Architecture is born.  Not to be a bastion of smart people who are somehow smarter than anyone else.  Nope.  To be a group of smart people who are incented differently.

Every day, I make decisions.  Some of them are easy.  Others are about as difficult as they come.  I have a set of principles on my wall that I use to guide my decisions.  They are public principles.  Others helped to craft them.  But I don’t report to those others.  I report to central IT.  So when the customer says “I need to solve this problem,” and IT says “Let’s build a glorious new infrastructure,” I can say “No” without fear of reprisal.

And here’s the kicker:

I can say “I’ve been working with this other team, and they built a bunch of apps based on shared services.  Those services do the same things that you need done.  The services catalog is at http://servicecatalog and I expect you to use those services.  Take a look.  I will review your design doc.  If you aren’t consuming those services, you will stop.  If you are adding to the list, I’ll be thrilled.”

Then I sit back, and watch the “next generation web” launch itself into success.

Look, there are going to be a lot of things needed to make the next web successful.  I’ve blogged about them in the past.  We need to know which services to pick, and when to pick them, this is true.  But we also need to know when they have died, and which dependencies a service has, so that we can adapt to changing situations. 

As an Enterprise Architect, I am incented to think about these things, find solutions, and make sure we are all using them.  That way, when you pick a service, you KNOW how reliable it is (because it uses our framework which allows you to inspect the uptime numbers), and you KNOW that it can handle the traffic you are going to send it (because we required that the team that developed it performed a scalability test and published the results for you to see). 

For those services that have no information, you can use them… I won’t stop you… but your customers (the business users who are paying your salary), they will.  They like cheap.  They like agile.  They don’t care how.  But they do care that it runs reliably.  They do care that it can be tested. 

Enterprise Architecture is not the Central Soviet of IT.  We are the city planners who set zoning, inspect new construction, enforce setbacks, and protect wetlands.  You are just as free to make a brave new world of Web 2.0 with EA in the picture. 

In fact, you are far more likely to succeed if we are there.

Is IT software development BETTER than embedded software development?

April 21st, 2006 | Enterprise Architecture

We hear all the time, especially in IT, about the dismal failure rate for software development projects.  So many millions wasted on project X or project Y that was cancelled or rewritten within a year.

So I’ve been part of a group of frustrated ‘change agents’ who, for years, have set out to find better ways to improve.  Better requirements, better estimation, better design.  More agile, more deliveries, more quality.  Tight feedback loops.  All that.  It works.

But then I get in my Toyota Prius and I can’t figure out how to find the darned FM radio station that is tuned to my MP3 transmitter because it involves a complex interaction of pressing a panel button, followed by a button on the touch-screen, followed by a completely different panel button.

The labels on the buttons are meaningless.  The layout is seemingly random.  No IT software development process that I know of would get NEAR production with an interaction design like this, yet here it is, in one of the most technologically advanced cars in the world, from a world class innovative engineering company, after the model spent numerous years in consumer trials in Japan. 

That isn’t the only example, of course.  Consumer electronics are full of bad interface designs.  I have a wall-mounted stereo that uses an LCD backlight in the default setting, except that the default setting is to show you the radio station, not the clock, and if you switch to the clock display, the back light goes out. 

How about the remote control that requires a complicated sequence of button presses to allow you to watch one channel while you record another (on your XP-Media Center, Tivo or VCR)?  Or the clock radio with a “Power” button on the face to turn the radio on, but reusing the Snooze button on top to turn it off, unless you happen to hit the ‘sounds’ button in the middle, which now requires you to hit the power button first, then followed by the snooze button to turn it off (I’m not kidding).

I have an MP3 player that doesn’t let you move forward two songs quickly until it fully displays the title of the first song on the scrolling LCD display.  If it is not playing, and you press the play button for one second, it plays, but if you mistakenly hold the play button down for two seconds, it turns off.  Quick: Find song 30 and play it while driving… I dare you.

I use a scale that shows my weight and body fat and supposedly records previous settings, although I have yet to figure out, from looking at the six buttons (on a scale, no less), what combination of magic and voodoo is needed to actually get the previous weight to pop up.

How about the cell phone that makes me punch 6 buttons to add a new entry to the internal phone book, or the Microwave oven with 20 buttons labeled with things like Popcorn and soup, but which proves inscrutable if you just want to run it for 90 seconds on High?

All of these are software bugs or usability issues embedded in hardware devices.  Nearly all of these devices are inexpensive consumer electronics (except the car), and therefore the manufacturer was not particularly motivated to produce an excellent interface.

Yet, if a software application like Word or MS Money were to have some of these issues, that application would be BLASTED in the media and shunned by the public.  Software developers in hardware companies seem to get a pass from the criticism… that is, until a hardware company comes along that does it VERY VERY WELL (example: Apple iPod) and puts the rest to shame.

I used to write software for embedded devices.  I understand the mindset.  Usability is not the first concern.  However, it shouldn’t be the last either.

I think it is high time that we turn the same bright light of derision on hardware products with sucky usability and goofy embedded software, with the same gusto that we normally reserve for source code control tools.

My expectations of good design have been raised.  You see, I work in IT.

Ahead of the curve… again

April 18th, 2006 | Enterprise Architecture

Fascinating.  First, we hear that pundits on the blogosphere have given the name AJAX to 1997 Microsoft technologies and called it ‘new.’ Now some folks are talking about the basic capabilities of Windows Sharepoint Services as though they didn’t happen three years ago.  (See Enterprise 2.0)

Blogs, wikis, worker-driven content in the Intranet.  Dude, Microsoft has been using these technologies, internally, for years, literally.  The product is Sharepoint, and it has been a FREE download for Windows Server 2003 almost since the day that product was released. 

The IT group I’m in uses blogs to communicate.  Nearly all of our documents, plans and specs are shared in public or semi-public collaboration sites, entirely self service, hosted through Sharepoint portal server.  In addition, there are two major Wiki sites with literally hundreds of sub-sites on each one for internal use.  (One based on FlexWiki, the other based on Sharepoint Wiki Beta).

Sharepoint is not just used in Microsoft.  It is one of the most successful server products in the line.  Once a company installs Sharepoint, it is hard to keep it from becoming a de-facto standard for collaboration, sharing, and distribution of content.  The product is unstoppable.

I guess I don’t mind when two scientists reach the same conclusion from different sources.  Happens all the time.  However, reputable scientists give credit to the first one to publish their ideas.  In this case, I’d expect that folks wouldn’t name products from other companies without also mentioning widely accepted products from Microsoft.

We didn't start the fire, so when do the hoses arrive?

April 17th, 2006 | Enterprise Architecture

The meeting has begun.  It is a meeting I have been dreading for three weeks.  Odd, really, when you consider the fact that I’m the meeting organizer and it’s been darn next to impossible to get it to happen.

“Let’s hear your idea,” Franz starts.  Franz is the leader of a self-sufficient IT group assigned to one of the ‘businesses’ within the bowels of the company.  He is flanked on his left by his ‘Chief of Staff,’ a new position that is springing up around mid-level IT executives; the Chief of Staff is essentially an uber-project manager assigned to their pet projects to provide visibility (and control).  The chief of Franz’s staff is a thin fellow named Jay, mid-forties, just as tough as Franz.  On his right is Mary, the leader of his Project Management team.  He came prepared to listen… or shoot me down.

“I’m here to talk about a new structure for empowering the Architecture team to perform their Governance role.”  I wonder how long he will let me swing.

Franz and I have an interesting relationship.  Franz is a tall man in his late fifties with bushy, almost wild hair.  He speaks with a soft German accent, even after so many years in the US, and carries himself with confidence.  Franz has the air of someone who knows he is right.  Usually, he is.

I launch into my presentation.  I’ve given it about a dozen times already, to nearly every other executive at his level, trying to get traction.  His boss has taken an extended leave of absence, and I need buy-in from his level.  Moving up the ladder is not useful to get my idea to fruition.  I’m left with the need to convince each member of an extended team about Architectural Governance, and it’s not easy.

I ramble on about artifacts and interaction models.  I describe responsibility assignments and escalation paths.

Slowly Franz starts to pick up interest.  He asks questions, good ones. Then comes the first shot across the bow.

“If my business comes to me with an urgent project need, and I analyze the project and tell them it will take 4 Million to do it, it needs to take 4 Million to do it.  I can’t come back and say that it would have taken 4 Million, but the architects want to slow it down, tie it to six other initiatives, and raise the cost to six!”  Franz finishes with a flourish.  He has placed the Truth As Franz Sees It on the table and dared me to pick it up.  I have to think quick.

Franz is excellent at getting the right people to agree with him.  He has a savvy way of handling situations… not so much in the moment, but rather by careful manipulation.  He never looks like he is dragging his feet or delaying things on purpose, even when that is precisely what he is doing.  When he does push an idea, it doesn’t look so much like promotion as ‘an absence of resistance,’ while someone from his staff proposes the idea and takes the risks if someone higher up doesn’t like it.

Screw up with Franz and the idea is dead.

“I completely agree,” I start.  “If the business comes to you with an urgent need, you have to be ready to help.  And that is what governance is all about.  It lets you be ready, so that you can respond quickly, not just to this week’s hair-on-fire episode, or next week’s; next month’s crisis is reduced, and next year’s crisis is avoided.”

It was a valiant effort, but I could tell from the look on his face that he wasn’t buying it.

“Look, my team is always buffeted by demands.”  Mary jumped in.  “The business will pick at our estimates, trying to take them apart, questioning everything.  They negotiate every ounce of ‘fat’ out of every project.  There isn’t any room for building things that we can’t prove that they need right now.”

Franz turned back to me.  Mary was Right.

“There is a way around it.  You have to hit them with a vision, sell them on it.  A vision of how their systems will behave to their benefit.”  I had to emphasize the last three words.  “We can build a simpler architecture that allows their business to leverage the tools we have, the expertise we have, to allow for ‘Rapid Successes’ that are both rapid and successful.”  My turf now.

“Agility,” I continue, “is not just about how you write software.  Agility is also about the software you choose to write!”  I pause for a moment, to allow that notion to sink in before continuing.

“If a system is architected well to begin with, with loosely coupled systems communicating rapidly over a scalable EAI infrastructure, then business change can be empowered much more quickly.  You are changing small, simple, easy to test components, instead of large legacy applications.”

He knew all this.  It’s the standard SOA speech.  He’s heard it before.  He probably gave it, once.  Problem is, it assumes a world he doesn’t live in.

“Jay,” I ask, turning to his Chief of Staff, “tell me.  In the past year, how many projects came in with the stamp of urgency, bad requirements, and no time to get it right before developers were writing code?”

That one caught him off guard.  “Um, I’m not sure what you mean.”  Good answer. 

“If a project comes in, and the business wants a particular functional change, how much time is spent analyzing the problem?  How much care is taken before everyone signs off and you start changing code?” I reply.  I’m on thin ice.  It’s a desperate move.  I shouldn’t ask the question if I don’t know the answer.  I’m gambling on his need to always ‘look good’ for Franz.  That’s why I picked my words as carefully as I could, considering the fact that I’m desperate.

“We take as much care as we can, but we can’t always predict things.  Like last quarter, when the business came in with a program to change the way we recalculate prices for one segment of the market based on a totally different volume mechanism.  That one meant changes in four mission-critical systems, but they wanted the changes ‘yesterday.’  That was a nightmare.”  Jay darn-near put his head in his hands. 

“Right.  Now, there have been smart people in your group for years.  Why was the code to calculate the price by volume buried deeply in four different systems?  Why wasn’t it focused in a single system?  Is that smart?” I replied.

“No.  It’s not smart,” retorted Jay, right on cue, “but those systems had grown up over time, independently of one another.  We knew that there was some overlap, but the business never wanted to pay to clean the mess up.”

“And that,” I said, turning back to Franz, “is why you need Architectural Governance.  You are here now, and will be here for a long time, but you don’t have to be here forever.  You can break this cycle by investing in Architecture now.”

I continued, “Get a group of your smartest people to create the rules that No One Breaks, sell your business on the notion of living by the rules, and empower the team to strictly enforce them.  No exceptions.  If the business has their hair on fire, give them two alternatives: spend less and take time to do it right, or spend more and do it twice.  The short term fix has to end, or you will never get out.”

“That is how you prevent these things.  That is how you invest in system flexibility.  That is how you pay off ‘developer’s debt.’  Over time, the fires are not as hot.  If agile development methods are about how you build systems, Architecture is about what systems you choose to build.”

By the time the meeting broke up, I think we had a common understanding of what I am trying to do.  Only time will tell.  One thing I did learn: I never want to play Poker with Franz. 

The service repository concept is incomplete

April 15th, 2006 | Enterprise Architecture

Everyone in my family is a big fan of the new TV show “Numbers.”  We are also big fans of Tivo, so even though the show airs on Friday night, we don’t usually watch until Saturday.  

I just watched a repeat episode where Charlie (the mathematician) is bothered because his mathematical model didn’t predict the existence of a very large drug lab.  In discussing it with another character, he says “I have data that I know is true, but it isn’t predicted by my algorithm.”

The other character replies “Then your algorithm is wrong.”

“No,” he replies, “It is incomplete.”

I guess this distinction is often lost.  I’ve seen it over and over: if an approach has some minor flaw, people conclude that the entire approach must be wrong, rather than correct but incomplete.

A few years ago, we all said that there would be commercial ‘services’ available and that it would change the nature of software on the web. That’s a big part of what folks have been calling ‘Web 2.0.’  So if it is so compelling, how come the nature of software on the web hasn’t changed?

Because the model is incomplete. 

We’ve taken ‘supply’ into consideration, but not demand.  (Show me an economic system that works without both).  I guess we figured we could use the web for supply but that we would use traditional business means to figure out demand.

Why not use the web to figure out demand as well?  (As Homer would say, “Doh!”)

I suggest that we create an exchange site where the following activities occur:

  • A consumer (developer?) can ask for a service, and can describe it, and can describe the money they would pay, per transaction, for it.  Other consumers can join the request.
  • A supplier (developer?) can respond, and can make a proposal.   One or more consumers can accept the proposal.  More than one supplier can build a service.
  • Once the service is built, it is listed on the same site.  This allows someone looking for a service to come to the site as a ‘one stop shop’ to either find an existing service or request a new one.

Kind of like ‘Rent-A-Coder’. 
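The core objects of such an exchange are simple enough to sketch.  Everything here — class names, fields, the in-memory lists — is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    description: str
    price_per_txn: float                           # what the requester will pay
    consumers: list = field(default_factory=list)  # other consumers may join

@dataclass
class Proposal:
    request: ServiceRequest
    supplier: str
    accepted_by: list = field(default_factory=list)  # one or more consumers

class Exchange:
    """One-stop shop: find an existing service or request a new one."""
    def __init__(self):
        self.requests = []
        self.listings = []

    def post_request(self, request):
        self.requests.append(request)
        return request

    def list_service(self, name):
        # Once a service is built, it is listed on the same site.
        self.listings.append(name)
        return name

    def find(self, name):
        return name if name in self.listings else None
```

The interesting part is the demand side: a request with a per-transaction price attached is a signal no supplier gets from a plain service catalog.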

Could work…

What is a SOA application, part deux

April 13th, 2006 | Enterprise Architecture

In a prior post, I asked the question: should we redefine the word ‘Application’ now that our definitions do not refer to the same things as they used to.  Kevin responded to say that the notion of a component is well understood, and that applications composed of components are similar to the structure of the brain.

So, assuming we don’t change the definition of application, let’s look at what that definition used to be and see whether we can use the list of applications meaningfully in our portfolio management activities.

An application, in an older definition, might be “a deployable unit of software functionality that provides a business capability to a user, usually involving the collection, storage, and presentation of data.” In previous models, the code that was shared was incidental to the definition.  In other words, the fact that two applications would share a few small components that dealt with installation, or with error logging, didn’t matter to this definition, because there was little or no real overlap in the complexity. 

Now, in the world of SOA, I may have a portal.  In my portal, I have six web parts. Each one presents or updates data through the use of back-end services.  Each web part is separately deployed to my portal server, and in fact, they have substantially different lifecycles. 

Each web part would qualify according to the old definition of application.  However, none of them ‘store’ data.  That is done by the web services.  Let’s say that two of the web parts call the ‘StoreData’ service: wpCRUD-A and wpCRUD-B.  Let’s also say that three of the other four parts (wpQuery-C, wpQuery-D, and wpQuery-E) produce simple queries from that data using the “QueryData” service, while the last part (wpReport-F) produces reports using the “GetReport” service.

So, in my definition of an application, I can include the services themselves, or they can be considered to be separate applications.  If I include them, then I have App1 which contains wpCRUD-A and the StoreData service.  App2 can have only the wpCRUD-B web part, but should it include the StoreData service?

Here’s the conundrum: the complexity of App1 is the sum of a small amount of complexity for the webpart and a larger amount of complexity for the service.  Let’s say that wpCRUD-A and B both have complexity of 20 units, while the complexity of the StoreData service has 55 units.  Therefore, the application complexity for App1 is 75 units.  If you include the StoreData service in App2, then its complexity is also 75 units.  The total is 150 units.

However, the actual total complexity for these two should be 55 for the shared service and 20 each for the web parts, which is 95 total units.  The difference between 150 and 95 is substantial.  If we use the 150 in our TCO calculations, we will get a different number than if we use the 95.
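Taking the figures above, a quick sketch (using the hypothetical complexity units from the example) shows how rolling up by application double-counts the shared service, while rolling up by unique component does not:

```python
# Hypothetical complexity units from the example above.
complexity = {"wpCRUD-A": 20, "wpCRUD-B": 20, "StoreData": 55}

# Each application lists every component it depends on,
# including the shared StoreData service.
apps = {
    "App1": ["wpCRUD-A", "StoreData"],
    "App2": ["wpCRUD-B", "StoreData"],
}

# Per-application roll-up: a shared service is counted once per app.
per_app_total = sum(complexity[c] for parts in apps.values() for c in parts)

# Portfolio roll-up: each unique component is counted exactly once.
unique_components = {c for parts in apps.values() for c in parts}
unique_total = sum(complexity[c] for c in unique_components)

print(per_app_total)  # 150 -- StoreData counted twice
print(unique_total)   # 95  -- StoreData counted once
```

The gap between the two roll-ups is exactly the complexity of the shared service, which is why the error grows as more apps share more services.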

The problem is exacerbated if you do the same thing for the reporting and query components. 

This matters if we calculate the complexity by application.  If we do not, and we calculate by component, then the problem doesn’t emerge.

However, our systems for managing a portfolio don’t usually capture the distinction.  Our executives (across industry) talk of reducing the portfolio.  Therefore, they would see it as beneficial to remove wpCRUD-B because the app count would go down, even though the unique complexity of that app is very very small.

Also, let’s say that this entire structure (the portal and the web parts and services) replaced a single web application that was there before.  Therefore, when it was rolled out, the number of applications went up, from one, to three or four or six, depending on how you count them.  That said, this model is nearly always less complex, and the total complexity probably went down when the silo app was replaced with services.

So, fundamentally, the discussion needs to be about reducing overall portfolio complexity, not portfolio count.

The other problem comes from looking at the portfolio from the standpoint of users.  Let’s say that webpart wpCRUD-A is the only web part that users from the finance group can see, and that the reporting web part is the only web part that the users from the sales group can see.  To them, the complexity of the app includes all the back-end services needed to deliver the functionality.  So if you roll up an application support portfolio by department, you would include those services in each department’s complexity count in order to understand how difficult it will be to provision support for its apps.

Executives are not fools.  They can understand this distinction between count and total complexity.   Even then, as the departmental scenario illustrates, there are some hard choices to be made.  However, tools and toolmakers are behind.  The portfolio tools in place do not often make this distinction.  Those that do have simple “dependency” links, and not “composition” links. 

As a result, the calculation of overall complexity is difficult to do, since it requires a detailed and careful construction of the data in order to get good results.  This is tough if you have a few hundred applications.  It is downright nasty if you have thousands (as Microsoft IT has). 

So I guess my challenge is to developers of APM tools (including Microsoft, now that we purchased one). 

Challenge: Depict the composition of complex apps from app services in the management system to allow the complexity of each app to be understood, both from the end user standpoint and the deployment standpoint, so that all the reports are well understood.

SOA Services are Guilty until proven Innocent

By |2006-04-12T21:09:00+00:00April 12th, 2006|Enterprise Architecture|

I have to credit my manager for this gem.  And it is easy to see why it is true.  If you have an application, especially one so dependent on services as a SOBA, any problem in the application will first be blamed on the code that is ‘not invented here.’  It is sad, but true. 

So what do we make of this… we who think that Services can provide some of the flexibility that integrated systems so desperately need?  What is the action item?

All services must be instrumented to show when they are working, when they are failing, and what they are doing.  There must be the ability to track a message, end to end, through a process: to show where the message sits, where it drops, and to provide the particulars of the message itself (since it is quite normal for this behavior to happen only for messages that look like X, but not Y, even though X only happens once in a blue moon).

In effect, if you are a services developer, and you develop a service that you cannot PROVE is up and running, then your service will be blamed whenever anything fails.  Services that cannot be instrumented are the work of hobbyists, not professional developers.

So what do you need to prove?

  • Life – the service must respond to an ‘Alive’ poll very quickly.
  • Health – the service must check downstream dependencies and return with a metric showing that it does or does not believe that it is functionally able to perform based on whether downstream resources are available.
  • Throughput – the service must both log performance data and return performance data for the last N transactions.  This will allow a separate system to watch a message go through and then ask, after the fact, how long that transaction took to complete.
  • Message status – for each message inbound, the state of the message is returned.  This allows an observer to determine if a message is in process or has completed.  Status information needs to be detailed enough to determine which component the message was last sent to and, hopefully, if there are conditions that must be fulfilled for the message to continue moving.
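As a sketch of what that instrumentation might look like, here is a minimal, hypothetical service wrapper (all names invented for illustration, not any particular product’s API) exposing the four proofs above:

```python
import time
from collections import deque

class InstrumentedService:
    """Sketch of the four proofs: life, health, throughput, message status."""

    def __init__(self, downstream_checks, history=100):
        # downstream_checks: name -> callable returning True if reachable.
        self.downstream_checks = downstream_checks
        self.timings = deque(maxlen=history)   # (message_id, seconds) pairs
        self.message_status = {}               # message_id -> last known state

    def alive(self):
        # Life: answer quickly, without touching downstream resources.
        return True

    def health(self):
        # Health: probe each downstream dependency, report per-resource status.
        return {name: check() for name, check in self.downstream_checks.items()}

    def handle(self, message_id, work):
        # Message status + throughput: record state transitions and timing
        # so an observer can ask, after the fact, what happened to a message.
        self.message_status[message_id] = "in-process"
        start = time.monotonic()
        try:
            result = work()
            self.message_status[message_id] = "completed"
            return result
        finally:
            self.timings.append((message_id, time.monotonic() - start))

    def throughput(self):
        # Throughput: timings for the last N transactions.
        return list(self.timings)
```

A monitoring system could then poll `alive()` and `health()` on a schedule, and query `message_status` and `throughput()` whenever someone claims the service is at fault.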

Until you can answer these questions, the service is guilty of every sin imaginable.