
A service by any other name…

January 27th, 2006 | Enterprise Architecture

In a meeting yesterday, we discussed the various ‘types’ of services.  There are some pretty passionate folks, and each person came to the meeting with their own taxonomy in mind.  The problem with labeling services with a ‘type’ is that you have to have a good idea of what you will do with this attribute.  Is it used to describe the service behavior or the service logical domain?  Is it used to describe that behavior to a developer, to a business person, or to another enterprise architect?  Does it matter who the source of the taxonomy is?

The IASA is working on creating a taxonomy, but that may take a while.  In the meantime, we have a long list of standards.  The Microsoft DSI has some terms.  Microsoft BizTalk has some terms, as does WCF (Indigo).  Then, outside the Microsoft world, there are taxonomies from the analysis organizations (Gartner/Meta, and others) as well as from software and hardware vendors. 

The net result of having no standard is that every organization invents one. 

Including, unfortunately, mine.

(We may end up with multiple taxonomies, one for architects that describes some particularly useful aspects of the behavior, and one for business consumers to describe a disjoint set of behaviors that has to do with business capabilities).

The Estimation Game: Do not confuse cost with size

January 25th, 2006 | Enterprise Architecture

The cost of a project is a function of how big it is.  That isn’t really a hard thing to understand… you’d think.  When I say this to a PM, they usually say “sure” with that look that says “say something interesting now.” 

Yet when you ask for the cost of a new project by providing a set of requirements, how many of those Project Managers respond by asking “how big is it?”  Not nearly enough.

That should be the first question.  The conversation between the customer (Tom), the PM (Mary) and the Analyst (Wang) should go like this: 

Tom: I want you to create a new report on our site for Acme Manufacturing to use.  Their CEO mentioned the need for one to our CEO over a game of exploding golf.

Mary: Can you write down what this report will look like for me, Tom?

Tom: I knew you’d ask, Mary.  Here’s an example.

Mary: Thank you Tom. (Turns to Wang)  Wang: how big is this project?

Wang: (goes away and returns an hour later) The project is 130 Implementation Units, Mary.

Mary: Thank you, Wang.  I will type the number ‘130’ into my estimation model.

Wang: Don’t you want me to tell you how long it will take?

Mary: That’s OK, Wang.  I can figure that out.  You see, the last time I asked you for a size, you told me that the project was 165 units.  That project took 16 days.  The model says that this project will take about 13 days.  See?

Wang: That’s pretty cool.  I’m glad we decided, last year, to choose an industry-standard definition of an ‘Implementation Unit.’ 

Mary: Me, too, Wang.  You see, it wouldn’t matter if I asked you or Amy or Naveen.  All three of you would have calculated the size the same way, and my model would figure out the cost.  No need for guesswork, and the estimate is always the same.

Wang: and it’s more accurate too.

Mary: Yep.  It’s been right on the last six projects.

Wang: what a change that is.  I’m glad to be out of the business of guessing ‘time.’  Calculating the size of requirements is analytical. I’m an analytical person.  It’s easy for me to do, and you get the numbers you need.

Mary: (calling Tom on the phone) Hi Tom

Tom: Hi Mary.  So how long will it take to produce this report?

Mary: Sixteen to nineteen work days.  Better say an even month.

Tom: Your estimates on the last few projects were “right on the mark.”  I trust you. 

This is not a fantasy.  While the conversation is fictional, this kind of interaction has happened many times and will happen repeatedly for folks like Mary.  She uses estimation tools, and she understands the distinction between Cost and Size.  Do you?
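
For the curious, the arithmetic behind Mary’s model is nothing exotic.  Here is a minimal sketch, in Python, of a size-to-duration model of the kind she describes; the ‘Implementation Unit’ figures come from the fictional dialogue above, and the function names are invented for illustration.

```python
# Minimal sketch of a size-driven estimation model.
# "Implementation Units" stands in for whatever standardized size measure the
# team has agreed on; the numbers come from the fictional dialogue above.

history = [
    {"size_units": 165, "actual_days": 16},   # last completed project
]

def days_per_unit(history):
    """Average delivery rate derived from completed projects."""
    total_units = sum(p["size_units"] for p in history)
    total_days = sum(p["actual_days"] for p in history)
    return total_days / total_units

def estimate_days(size_units, history):
    """Duration estimate for a new project of the given size."""
    return size_units * days_per_unit(history)

print(round(estimate_days(130, history)))   # -> 13, Mary's answer

# When the project finishes, key the actuals back in so the rate keeps improving.
history.append({"size_units": 130, "actual_days": 13})
```

The point is not the particular formula; it is that the analyst reports size, and a model calibrated on actuals turns size into duration.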

Agile definition: Chickens and Pigs

January 23rd, 2006 | Enterprise Architecture

In agile project management, you have frequent meetings of the project team (usually daily).  The goal of these meetings is twofold:

  1. team members come together to present any “obstacles” that the group coordinator is charged with clearing and
  2. team members may provide any “daily data” (like the number of hours needed to complete the in-flight task). 

These meetings are designed to be short, and agile methods like Scrum suggest that you ask everyone to remain standing during the meeting.  Fifteen minutes should be a reasonable meeting length. 

In order to keep these meetings short, the only people who can speak are people who have obstacles or information that others need to take action on.  The key word is action.  If information isn’t salient for ‘right now,’ it shouldn’t be discussed in this forum. 

The people who can speak are ‘pigs.’  Other stakeholders may attend but they should not speak (much).  These people are called ‘chickens.’

The terms ‘chickens’ and ‘pigs’ come from the statement: “In a ham-and-eggs restaurant, the pig is committed but the chicken is simply involved.”  Numerous versions of this statement exist as jokes or humorous anecdotes.

Why define this again? I went looking for a good definition of the “Chickens and Pigs” metaphor and my search didn’t turn up a lot of useful hits, so I thought I’d add my own definition that I can link to. 

Workflow visibility: driving levels of abstraction into functional requirements

January 23rd, 2006 | Enterprise Architecture

I’m sitting in a meeting typing a blog.  Shoot me.  However, there is a discussion going on about how a process may flow differently depending on the level of information that may be made available. 

Earlier I described different levels of abstraction in workflow.  Recap: the Business Unit View is high level, unit-to-unit, and very document- and message-based.  The Business Process View is mid level: items move from stage to stage based on conditions.  The Work Step View is describable at the technical level (the Petri nets and workflow diagrams that techies tend to wrap themselves up in).

The problem comes when there is a set of steps in a workflow that has Business Unit implications, but where one unit doesn’t want to expose its Process View details.  In other words, if group 1 sends a request to group 2, and then calls three weeks later asking about the status of the request, they shouldn’t get detailed information about the person in group 2 who is stuck.  In a B-to-B scenario, for example, a hospital may send an insurance claim to a payer, and then call up in a few days asking if the claim will be paid.  Should the payer respond that there is a data or systems issue, or should they respond in the generic: “we are working on it; it should be done in 10 days”?

The latter is more useful to business. 

This only works if our systems allow us to actually drive these levels of abstraction into our workflow execution systems.  Literally, a process should live within a “container” that provides information to external requesters.  That way, if the process changes within the container, or a particular step represents a process of strategic value, those process steps are not exposed.

It’s data hiding at the workflow level.  It’s useful.  It is uncommon.  Let’s fix this.
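
As a rough illustration of what that container might look like, here is a minimal Python sketch, assuming each workflow instance is wrapped in an object that holds the detailed Work Step state privately and answers external status queries only in coarse Business Process terms (all class, field, and status names here are hypothetical).

```python
from dataclasses import dataclass

# Detailed Work Step states (internal) mapped to the coarse status an external
# requester is allowed to see. Names are hypothetical.
PUBLIC_STATUS = {
    "received": "In review",
    "data_issue": "In review",             # internal problem, deliberately hidden
    "adjudicating": "In review",
    "approved_pending_payment": "Approved",
    "paid": "Complete",
}

@dataclass
class WorkflowContainer:
    """Wraps a running process and hides its Work Step detail from outsiders."""
    name: str
    internal_step: str = "received"         # fine-grained state, never exposed
    estimated_days_remaining: int = 10

    def external_status(self) -> str:
        """What a partner (say, the hospital asking about its claim) may see."""
        return (f"{PUBLIC_STATUS[self.internal_step]}; "
                f"expected completion in {self.estimated_days_remaining} days")

claim = WorkflowContainer(name="claim-4711", internal_step="data_issue")
print(claim.external_status())   # "In review; expected completion in 10 days"
```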

Project Management Antipattern 3: Guesses for Estimates

January 21st, 2006 | Enterprise Architecture

Basically, in IT work, we usually need to figure out, early on, if a project is “large” or “small” and budget accordingly… so we ask an experienced person or two to examine the requirements and figure, based on experience, how much it will cost.  Then, if you are in a high-ceremony process like TSP/PSP or RUP, you will go through an entirely separate round of estimation later on, after the requirements are better understood, to figure out a “cost” to earn value against.

The first one is expected but unnecessary.  The second is simply foolish and wasteful.  Neither works very well.

A tool to meet both needs would be very simple to construct using the notions of function points or story points.  The basic idea of tools like these is to enter specific measurable aspects of the REQUIREMENTS into an interface, and have it produce the number of hours it will take to create the program.  These measurables can be the number of fields of data entry or data reporting, the number of new and changed user interface screens, the number of navigation pages, etc.  You actually avoid counting some, but not all, of the infrastructure, since those are usually choices that are made in a similar manner for each project.

You measure the first project, enter data, get a bad forecast, and use it.  Then you take the actual data from the END of the project and key it back in.  This makes the model more accurate.  You then use the estimation model to create the next estimate, this time a good one.  Keep this loop going.  After a few dozen projects (which in a large IT shop should take less than a year), you have a model that is ALWAYS more accurate than guesswork.  Always.
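
As a sketch of how simple such a tool can be, here is an illustrative Python fragment.  The measurables and weights are invented for the example (they are not a calibrated function-point standard); the point is the shape of the loop: count measurables, estimate, record actuals, recalibrate.

```python
# Illustrative only: the measurables and weights below are invented to show the
# shape of a function-point-style estimator, not a calibrated industry model.
WEIGHTS = {
    "entry_fields": 0.5,       # points per data-entry field
    "report_fields": 0.3,      # points per reported field
    "screens": 4.0,            # points per new or changed screen
    "navigation_pages": 1.0,   # points per navigation page
}

def size_in_points(measurables):
    """Turn counted requirement measurables into a single size figure."""
    return sum(WEIGHTS[name] * count for name, count in measurables.items())

class Estimator:
    def __init__(self, hours_per_point=8.0):
        self.hours_per_point = hours_per_point   # starting guess; gets calibrated
        self.history = []                        # (points, actual_hours) pairs

    def estimate_hours(self, measurables):
        return size_in_points(measurables) * self.hours_per_point

    def record_actuals(self, measurables, actual_hours):
        """Key end-of-project actuals back in; the rate improves with each project."""
        self.history.append((size_in_points(measurables), actual_hours))
        total_points = sum(p for p, _ in self.history)
        total_hours = sum(h for _, h in self.history)
        self.hours_per_point = total_hours / total_points

est = Estimator()
project = {"entry_fields": 40, "report_fields": 120, "screens": 6, "navigation_pages": 3}
print(round(est.estimate_hours(project)))        # first forecast: probably bad
est.record_actuals(project, actual_hours=900)    # feed actuals back in; next one is better
```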

There was a time when using guesses for estimates was necessary.  That time is over.

Project Management Antipattern 2: Pardon My Dust

January 19th, 2006 | Enterprise Architecture

I ran across this anti-pattern on a non-software project, but it definitely applies.  This one comes from painting my living room and kitchen.

My wife and I did a little repainting last week as part of converting our unused formal living room into an exercise space.  Last weekend, we basically finished the change-work after installing light fixtures and a five foot by nine foot mirror.  However, it took until today for us to really begin moving furniture back into the adjoining (and also painted) dining room.  Why?  Because we had book shelves that didn’t look good with the new colors, and because we needed more light now that the mirrors were installed, and because… yada yada yada

It’s scope creep, pure and simple, but not the kind that injects itself at the beginning of the project.  That kind is easy to stop.  Nope, this is an altogether different animal.  This is “pardon my dust” scope creep.  This one happens with full cooperation and insistence of the customer.

I call this “pardon my dust” because a customer will be forgiving of a project’s lack of completion as long as new features are being added.  There is hope.  There is a bright, shining future!  And there is one more dinner in a family room filled with dining room furniture…  We are forgiving of the mess because we are getting what we want. 

As a project manager, especially on a Scrum or XP software project, it is tempting to simply “add another sprint” so that Features 88-94 can be added, especially since they are demonstrated monthly to the customer.  That customer keeps adding the next sprint, adding the next set of features, without ever asking for the deployment sprint to start!  (Deployment sprints are irritating.  You get no new features, it takes just as long, and you have to deal with messy details like QA reviews of the installation guides and maintenance documentation.)  The project manager has deniability because it’s the customer who keeps asking for the sprint, and it’s her money, after all.

It’s a trap.  The way out is for the project team to set, at the outset, a maximum number of new-code sprints between each deployment sprint.  I suggest three as a good maximum.  That way, the rest of the end users get an upgrade at least semi-annually, which should serve to keep frustration low. 
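
A sketch of how that “maximum of three” rule might be mechanized, with the function name and sprint labels invented for illustration:

```python
MAX_FEATURE_SPRINTS = 3   # the suggested ceiling of new-code sprints between deployments

def next_sprint_type(recent_sprints):
    """Decide the next sprint type.

    recent_sprints lists sprint types in order, newest last, e.g.
    ["deployment", "feature", "feature", "feature"]. Once the ceiling of
    feature sprints since the last deployment is reached, force a deployment.
    """
    since_deploy = 0
    for sprint in reversed(recent_sprints):
        if sprint == "deployment":
            break
        since_deploy += 1
    return "deployment" if since_deploy >= MAX_FEATURE_SPRINTS else "feature"

print(next_sprint_type(["feature", "feature", "feature"]))   # -> "deployment"
```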

Now to invite company for dinner…

I have a dream of software

January 15th, 2006 | Enterprise Architecture

I was listening to a portion of one of Dr. King’s speeches the other day.  I noticed one aspect of leadership I hadn’t really paid attention to before.

In his speech, Dr. King spoke of going to the top of the mountain and seeing the promised land. While the phrasing is very biblical, the point is transcendent: if you want to change things, describe the future.  Over and over.  Find a language and a voice: become the symbol for how the future can look.  “Where a man is judged by the content of his character, and not the color of his skin.” 

Dr. King repeated his vision of the future.  He believed that it could happen… maybe not in his lifetime, but that it could happen.  He built a desire for the future, and hope for the future, and a community of people who were dedicated to making it happen.

With great respect to a leader who I never had the chance to know, I hope to apply this skill that he demonstrates.  I hope to spend my time describing the future of software.  I’ll start right now…

  • I have a dream, of a world where business users describe their needs and software systems can reconfigure themselves to meet them, without spending wheelbarrows of cash, months of lost nights, and many layers of stomach lining.
  • I have a dream, of tools that free the truly visionary software developers from drudgery and allow the real expression of pure analytical thought.
  • I have a dream, of standards so ubiquitous that it is as easy to change a business rule in a production application as it is for me to carry my television set from my family room to my living room and know that when I plug it in, it will work.
  • I have a dream, of a world where the people who don’t belong in software can leave it, because the rote and monotonous work that we keep giving them will dry up, and they can take up oil painting, mountain climbing and new car sales.
  • I have a dream, of tools that allow teams to focus on doing what they do best, and remove those extra steps that project managers ask for to measure value, but which serve to increase costs without adding value.
  • I have a dream, of a framework of well-described standard components, all well understood and readily available, allowing a business to change only the single component that will provide strategic value.
  • I have a dream of a world where an army of consultants is not required for business to learn and apply Six Sigma, Lean processes, and the Theory of Constraints to optimize the value chain… Where these skills are magnified by excellent tools using simple methods and widely taught to real business people.

And, with deference to one of the greatest human beings of our time or any other, I have taught myself and my children to love all people, and to judge a man or woman by the content of their character, and never, ever, the color of their skin.

 

Using BITS to move private data

January 6th, 2006 | Enterprise Architecture

I’m looking at the possibility of using BITS (Background Intelligent Transfer Service) to move packets of private data from a central server to individual client machines.  BITS, for those who haven’t messed with it, is a really useful service built into XP and Windows Server 2003 (and available for download for Win2K) that manages background downloading and uploading of data.  It is used by Windows Update, and APIs are available for any application to use it as well.

Cool beans for downloading an automatic update for your application, or for getting an updated data file for your virus scanner.  Even good for applications to share things like domain data (drop-down lists that change infrequently).  It is fine for secure download, since it supports transfer over HTTPS, but doesn’t do any verification of the content on the client end… that is up to you.  A few gotchas for secure upload as well, since the temporary file that the data is uploaded into has to be kept secure by code or configuration that is outside of BITS.  Still, a pretty darn useful tool.

Thing is: if I want to create a COLD report on a server, containing large amounts of private data, and download it to a client workstation, using BITS appears problematic.  This is outside of the intent of the service, I know.  I’m just wondering if the visible obstacles would be hard to overcome.  This includes things like controlling access to the file on the server (since the web site in question does not, as of yet, use Active Directory to control access… so there are no group ACLs that I can use.  On the surface, this means that each COLD report is essentially available to everyone… bad for security), and informing the server that the transfer is complete (a web service… I suppose).

Even with HTTPS transfer, we’d need to add bits to ensure that the data arrives secure, intact, and unaltered.  Not necessary for downloads of an application update or a virus file, but pretty darn necessary for private data files.
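
BITS itself won’t do that check, so the “bits we’d add” amount to something like the following Python sketch: after the transfer completes, the client recomputes a digest of the downloaded file and compares it against one published by the server.  How the expected digest is delivered (ideally over an authenticated HTTPS call, separate from the transfer) is up to the application; the file names in the comments are hypothetical.

```python
import hashlib
import hmac

def file_sha256(path, chunk_size=1 << 20):
    """Hash the downloaded file in chunks so large COLD reports fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_sha256):
    """Constant-time digest comparison; reject the file on any mismatch."""
    return hmac.compare_digest(file_sha256(path), expected_sha256.lower())

# Hypothetical usage, after the BITS job for a COLD report completes:
#   ok = verify_download(r"C:\Reports\cold_report.dat", digest_from_server)
# where digest_from_server was fetched over an authenticated HTTPS call,
# separately from the BITS transfer itself; discard the file if ok is False.
```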

I’ll post a blog entry if I find anything that helps with securing the server file.  I suppose I could write an HTTP Filter that checks a SQL database for authentication before allowing access to static content… (sounds like an excuse to fire up MSN Search…).  If you have suggestions, please post a reply.

Does SOA make eXtreme Programming (XP) obsolete?

January 4th, 2006 | Enterprise Architecture

One of the promises of SOA and SOBA is that applications will be less complex, and therefore can be developed more quickly.  This complexity is reduced by having strict rules about how SOBA apps will leverage and reuse services.  In essence, SOA takes an architectural approach to the problem of apps that take a long time to create, deploy, and modify. 

Interestingly enough, Agile Project Management methods (like Scrum, XP, and others) solve a very similar business problem in an entirely different way.  Instead of breaking up the complexity of the application in order to speed up delivery, they address the process by which that large and complex application is created using methods that focus on embracing scope change (while controlling it), improving dev team communications (while reducing the time spent on it), and prioritizing the feature set. 

Clearly, these two approaches are not tied to each other for success.  An XP project can (quickly) deliver a stovepipe app that is difficult to maintain.  A SOBA can be (quickly) developed using a heavy process like RUP.  However, both SOA and Agile SDLC methods use the same problem definition to justify their existence, and both purport that each, in its own right, is sufficient to solve the problem.

Clearly, if one problem spawns two different solutions, you have to ultimately ask the question: are both solutions necessary? 

In my opinion, SOA does nothing to address the fundamental problems caused by using a bad SDLC process.  Agile software development processes go a long way towards making life livable for the people who write code for a living.  On the other hand, Agile development processes do nothing to address the fundamental problems caused by system definitions that are too complex, refuse to consider reuse, and will ultimately cost a fortune to maintain.

So, in a way, both are needed.  On the other hand, I think we need to frame the problem a bit differently so that there are clearly two different problems being solved.  That way, when SOA solves one, and agile methods solve the other, both can be measured independently of one another.

My suggestion on how to reframe the problem statements to account for both:

Agile methods solve the problem of software development processes that produce frustration, rework, long hours, and missed expectations.  These are very tactical needs tied directly to the act of developing software.

SOA solves the problem of systems that embody multiple business capabilities in a non-reusable manner, thus forcing developers to re-invent the wheel every time a new application is created.  These are architectural needs tied to the business’ need to deliver consistent solutions in a rapid manner.

 

Project Management AntiPattern – PMs who write specs

January 3rd, 2006 | Enterprise Architecture

One of my favorite organizational mistakes, and I’ve seen this one MANY times, is asking your Project Manager to write a functional spec for the IT application you are writing.  I’ve seen this so often, I’d consider it a Project Management anti-pattern.

Why is this bad?  Because there needs to be discourse (and disagreement) between the person who describes the system and the person who manages the project that fulfills it.  When you are building a house, the contractor and the architect discuss, argue, and debate.  When you are building a bridge, the engineering designers have constant feedback on the bridge as it comes into being.  Not so with IT projects where the project manager writes the functional specification.

I’m honestly astonished when I run into this.  I’ve seen experienced project managers simply assert that “this is the right way,” without even considering the conflict of interest inherent in this arrangement.  If one person decides both the “stuff” that is in a project and the “plan” needed to complete it, the kind of junk that comes out the other end is amazingly bad.  I don’t care how “engaged” your business customer is. 

The answer is for the spec to be written by business analysts who WORK FOR the business, REPORT TO the business, and SIT IN MEETINGS WITH the business.  Some folks like these analysts to be paid by IT but report to the business, but personally I don’t agree.  Real benefits come when this is actually a business person: they have to have more than IT responsibilities.  They have to understand the processes and constraints used by the business.  They have to be available to IT at a very deep level.  And they have to be financially responsible for success.

More importantly, the spec is not dictated by this person to the PM.  The spec is written by this person.  Not just owned… written.

Caveat: I’m not saying that this (bad) practice does or doesn’t happen within Microsoft IT.  It definitely happens.