
Should the name of a department be encoded in a namespace?

November 29th, 2006 | Enterprise Architecture

One thread of discussion going through our internal community is this: should the .Net namespace include the name of the IT team that created it?  There are two camps:

Camp 1: Declare the Owners of the Code

We have a structure with about ten different IT teams, each assigned to different areas of the Microsoft business.  Each team has a unique identity, and for the most part, unique standards.  This camp wants the name of the IT team included in the namespace. 

So if a project in the Services IT team (SIT, connected to Microsoft Consulting Services) creates an employee object (tied to the HR database), it may have the namespace of: MS.IT.SIT.Employee

If the Human Resources IT (HRIT) team were to create similar code, it would have the namespace of: MS.IT.HRIT.Employee

The reasoning goes like this: no matter how much we want to create code for the enterprise, the fact remains that a specific team will create the code and will continue to maintain it.  When using someone else's code, it is imperative that a developer can quickly and easily find out whose code it is in the event of a bug or the need for an extension.  Therefore, the name of the owning team should be in the namespace.

Camp 2: Declare the Business Process but not the owner

We have a centrally defined 'business process framework' that identifies a hierarchy of overall business processes, both primary and supporting.  Primary process families are things like "Create Product, Market, Sell, Fulfill, Support" while supporting process families are things like "HR, Legal, IT, Finance".

This camp says: put the process family name into the namespace, but not the name of the team.  This will allow code developed by different groups, but supporting the same processes, to come together in the same hierarchy.

Back to our example.  If the Services IT team was using the Employee objects to encapsulate services-specific rules, then perhaps the namespace for those classes would be: MS.IT.Support.Employee.  On the other hand, if they were creating base code to access the HR database, those classes should be in MS.IT.HR.Employee.

The Human Resources IT team would use MS.IT.HR.Employee most of the time, since presumably, the rules they are implementing would cross all of the corporate employees.

The reasoning goes like this: The point of shared corporate code is that one team can rely on another for their knowledge.  A single namespace tied to process families allows a more natural grouping of the functionality that we all have to rely upon.  The ownership of the code is managed in a separate tool.  (Note: the tool already exists for managing 'who owns the code in what part of the namespace hierarchy.'  The .Net Framework team uses it extensively.)
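To make the contrast concrete, here is a minimal C# sketch of the two conventions.  The class names inside the namespaces are hypothetical; only the namespace roots come from the examples above.

```csharp
// Camp 1: the owning team (SIT, HRIT) is part of the namespace.
namespace MS.IT.SIT.Employee
{
    public class EmployeeRecord { }           // hypothetical class, owned by Services IT
}

namespace MS.IT.HRIT.Employee
{
    public class EmployeeRecord { }           // the same concept, duplicated under HR IT
}

// Camp 2: the process family (HR, Support) is part of the namespace instead, so code
// from different teams that supports the same process shares one hierarchy.
namespace MS.IT.HR.Employee
{
    public class EmployeeRecord { }           // base access to the HR database, whoever wrote it
}

namespace MS.IT.Support.Employee
{
    public class ServicesEmployeeRules { }    // services-specific rules, grouped by process
}
```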

So, the challenge is, which namespace approach is better?

Personally, I think that Camp 2 is correct. Reasons:

  • As long as we place the name of IT teams into namespaces, we encourage the development of duplicate code to do the same things.  If I see my team name in the namespace, but no code to do what I want, I’ll feel free to add it, even if the same code exists somewhere else.
  • Another downside to Camp 1:  We would be encouraging the notion that “someone else’s code” is to be avoided at all costs.  Developers will feel less confident about using the code from someone else’s team if they see their team name in the namespace.
  • Organizationally, we won’t develop the needed muscles for managing a namespace of functionality that crosses multiple teams’ needs.  The product groups do this, and MS IT should as well.

Of course, I’m just one opinionated SOB among a long list of opinionated peers.  Convincing people of the value of one approach over another is going to take time.  Whatever compromise comes out, I’ll support (assuming it allows healthy practices to grow). 

What is your opinion?  Should teams put their names in a namespace?

 

Stories

November 22nd, 2006 | Enterprise Architecture

My father used to tell stories.  He would gather us around, myself and my two older brothers, and at bedtime we would collect on his bed, and he’d weave some fanciful ribbon about three boys on a grand adventure in a jungle, with monkeys and tigers and snakes.  We would sit for what seemed like ages, just listening.  No television show or comic book had anywhere near as much excitement and plain fun as his wonderful tales.

I’ve been following in his footsteps, telling stories to my three wonderful children.  We gather, usually at bedtime, and I’ll weave some tale about princes and castles and riddles.  There’s usually some poor person who plays a role, most often ending up better off. 

The stories are informed by the books I’ve read, including hundreds of short stories, as well as my father’s wonderful tales.  I carry the influences of generations of great storytellers before me, though I am a poor shadow by comparison.  I join in a great tradition of sharing great themes and tiny choices and bits of detail to enrich, enjoy and enhance. 

One thing my mother always voiced was a regret that my father had never written down his stories.  He did, later in life, write them down… literally over 2,000 of them in a collected set of unpublished volumes, but they weren’t the same. 

They weren’t the rich and wonderful stories that a child hears when lying on the edge of his parents’ bed, tugged by dreams, listening to the musical tones of a great teller of tiny epics as he weaves among the trees, brushing alternately up against the oak of adventure, the spruce of sadness, the maple of cleverness, and the redwood of achievement. 

And so, as I tell stories to my children, I vow to make an honest effort to write them down.  They will not be original.  They never are.  They will be blends of bits of stories I’ve heard and ideas I want to express, and the mood of the night. 

To my dear father, I give you this.  As you look down from heaven, know that I carry, in my heart, a story that you started. I will finish it for you.

Is there value in consistency?

November 21st, 2006 | Enterprise Architecture

Do all of your project managers deliver the same information to their team and management?  Do all of your developers use common tools and techniques?  Do all of your testers follow the same patterns for creating test cases?

Process improvement is an interesting, and sometimes overwrought, term.  We can all benefit from 'excellent practices,' but the counterbalance is that 'excellent practices' are the result of steady improvement (Six Sigma or CMMI style) over 'common practices,' and many IT people reject the basic idea of 'common practices' altogether.

So what is this idea that some people love, while others despise? 

It is the radical notion that an activity or output that is valuable in a particular situation is also valuable in other (similar) situations, and that you can use proven value from one project to guide and inform staff members working on another (similar) project.

The problem isn’t collecting this guidance.  Everyone is willing to have their idea considered as ‘the best practice.’  The problem is getting other folks to learn, practice, and improve upon that guidance.  They already have a way of doing things, and your ideas may not appear to be all that much better.

One way to attack a ‘common practice’ is by saying “My situation is not similar to yours, so your practice is not valuable to me.”  This is occasionally true, but often it is a claim made by a person who thinks his or her way is just fine, thank you, and doesn’t need the ‘improvement’ offered by others.

Another way to attack a ‘common practice’ is by saying “Your idea is not better than mine, so I won’t adopt it.”  This gets really fun when one person or the other starts trying to create measurements to prove how much better they are.  Don’t get me wrong.  I like measurements as a way of driving process analysis.  However, those measurements have to measure the things that really matter: can you make more money?  Can you deliver to the business better?  Can you cut costs?  Otherwise, the measurements are unlikely to have any relationship whatsoever with stated company strategy or goals. 

To wit: I’ve seen folks quote numbers that talk about the drop in the number of defects if you follow ‘process X’ when that process substantially increases the development time needed to produce a system.  This is fine if you don’t mind increasing costs or sacrificing agility.  Executives and managers get to decide what the priority should be between agility, scalability, reliability, flexibility, and performance.  Here’s a radical idea: we should ask them.

More to the point, should an organization even create a ‘common practice’ guideline at all?  Is there value in asking people to perform their work in a common manner?  Most would say “yes” but I’m willing to bet I’d get a wide array of responses if I asked “how detailed” that common practice should be.

So, to add to the different quality attributes, I add the attribute of process consistency.  What is the value in making sure that your systems are created in a consistent manner? 

It’s a fine line.  What do you think?

Can your software be TOO functional?

November 20th, 2006 | Enterprise Architecture

When deciding what package of software to purchase, or whether you should build your own solution, it is common to hear the question: “does it give us more than we actually need?”

Example: you run a small business with a single cash register.  Do you need a cash register solution that can run on a network, or that requires a login, or tracks inventory?  Each of these is a key feature, but perhaps one of them is not important to you.

So, you could decide not to buy the ‘high end’ system and instead buy a less capable system to save $250.  You may feel pretty good about it… until you need to add another cash register.

Of course, you could ask a more sophisticated question: are these features we may, someday, need?  What is the value of a feature you will never use, after all?  If you were to ask that question, then software that can handle two cash registers on a network would be valuable, even if you only have one register (today). 

But what is the actual cost of NOT getting the capability?  What is the cost of building a system of your own just because the commercial package is too capable?

From experience, you would be nuts to build a system when you can buy one that meets your needs, even if the system you buy gives you capabilities you may never use!  Even if you have a team of developers in China offering to build your app for pennies!

As long as a commercial app covers a very large percentage of your needs (80%+ of the function points, for example), then buying beats building, hands down, even if your needs account for less than 40% of the system’s capabilities.

Reality check: the cost of owning a custom application is high.  It is the cost of collecting requirements, managing the project, rolling out the software, maintaining expertise needed to fix it when things change (as they always do). 

It is the cost of upgrading platforms as developers become scarce.  It is the cost of backup processes, restore trial runs, helpdesk staff and software, and other infrastructure, both human and technical, that is needed to keep custom software “up and running.”
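To make that arithmetic concrete, here is a rough back-of-the-envelope sketch.  Every number in it is invented purely for illustration, not drawn from any real project or price list.

```csharp
using System;

class BuyVsBuildSketch
{
    static void Main()
    {
        // All figures below are hypothetical, for illustration only.
        int neededFunctionPoints = 100;            // what the business actually needs
        double packageCoverage = 0.85;             // the package covers 85% of those needs
        double packageCost = 50_000;               // license plus rollout (invented)
        double customCostPerFunctionPoint = 1_500; // build, deploy, and maintain (invented)

        // Buying: pay for the package, then custom-build only the gap it leaves.
        double buyCost = packageCost +
            neededFunctionPoints * (1 - packageCoverage) * customCostPerFunctionPoint;

        // Building: pay the custom rate for every function point you need.
        double buildCost = neededFunctionPoints * customCostPerFunctionPoint;

        Console.WriteLine($"Buy the package and fill the gap: {buyCost:N0}");
        Console.WriteLine($"Build everything from scratch:    {buildCost:N0}");
    }
}
```

Under these invented numbers, buying wins even though the package offers far more capability than will ever be used; the unused features cost nothing extra, while every custom function point carries its full ownership cost.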

I suppose it is feasible to say “why buy SAP when a simpler package, like Dynamics, is available for a fraction of the cost?”  I would agree, as long as the package you choose actually meets your needs.  You should absolutely go with the least expensive solution that works.

On the other hand, never assume you can write one in a weekend. 

Rule of thumb: Cover your needs and a little bit more.  Whatever else you get in the box… it’s free.

Iterative… agile… architecture

November 16th, 2006 | Enterprise Architecture

A salesman walks into a bar near Microsoft.  He sees that there is nowhere to sit, but he’s dying for a drink.  After waiting a few minutes patiently for a barstool to become available, he loses his patience. 

So he climbs up on top of the bar and announces that he’s a salesman, and he’s got a great idea for a software system, and it can be written in a week by any programmer with a brain.

The bar clears out.

The fact is that we’ve all been victims of a WAMI (Wild-A**ed Marketing Idea).  Some of us more than others.  It’s not that these ideas are bad.  In fact, they are usually quite good.  It’s that they usually come with a wildly unreasonable expectation of how “easy” it will be to bring them to life.

And there’s the rub.  In the initial impression and early agreements made on a project, dealing with expectations of cost and capability, a lot of architectural assumptions are made, and then estimates are based on those assumptions.

But if you don’t write them down, how will those assumptions be reviewed?  How meaningful are they?  How can they be challenged, or validated, or even reused? 

One idea that I heard lately goes like this: when a business leader describes a problem and proposes a solution to an internal IT group, the group should NOT respond with an estimate.  They should respond with a high-level architecture, complete with assumptions and potential tradeoff decisions for the business leader to validate (a couple of pages of diagrams, and a single page of assumptions and tradeoffs).

Once he or she says “yes” to those constraints, then (and only then), provide an estimate.

This way, the architecture starts as soon as the idea does.  For those folks who work in the Waterfall model, the architecture exists (at a vague level) before the requirements document is completed.  At each stage, the architecture is updated to reflect things that were not known before.

For those folks working in agile projects, you don’t have Big Design Up Front, but you don’t have Zero Design Up Front either.  You have something small, something light, and hopefully, something that directly emits code or tests along with those diagrams.

Point is that the initial architecture doesn’t have to be ‘right’ but it can form the basis for understanding the system, its assumptions and constraints.  It is updated with each sprint, or reworked with each phase or whatever your SDLC process calls an ‘iteration.’ 

At least then, hopefully, an IT team gets away from the notion of “t-shirt size project estimation” and towards the notion of transparent assumptions, managed expectations, and realistic costs.

Should our next generation of languages require us to declare the applications' architecture?

November 14th, 2006 | Enterprise Architecture

As languages ‘improve’ over time, we see a first principle emerge:

Move responsibility for many of the ‘good practices’ into the language itself, allowing the language (and therefore the people who use it) to make better and more consistent use of those practices.

With assembler, we realized that we needed a variable location to have a consistent data type, so in came variable declaration.  We also wanted specific control structures like WHILE and FUNCTION.  As we moved up into C and VB and other 3GLs, we started wanting the ability to encapsulate, and then to create objects.  OO languages emerged that took objects into account.

Now that application architecture is a requirement of good application design, why is it that languages don’t enforce basic structural patterns like ‘layers’ and standard call semantics that allow for better use of tracing and instrumentation?  Why do we continue to have to ‘be careful’ when practicing these things?

I think it may be interesting if applications had to declare their architecture.  Classes would be required to pick a layer, and the layers would be declared to the system, so that if a developer accidentally broke his own rules, and had the UI call the data access objects directly instead of calling the business objects, for example, then he or she could be warned.  (With constructs to allow folks to override these good practices, of course, just as today you can create a static class which gives you, essentially, global variables in an OO language.)
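For illustration only, here is one way the idea could be imitated today: a custom attribute that declares each class’s layer, plus a reflection check at startup that warns when a declared dependency rule is broken.  Nothing here is a real language feature, and every class name, layer name, and rule in the sketch is hypothetical.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical sketch: layers declared with an attribute and checked by reflection.
// This imitates the idea; it is not a language feature, and all names are invented.
[AttributeUsage(AttributeTargets.Class)]
public class LayerAttribute : Attribute
{
    public string Name { get; }
    public LayerAttribute(string name) { Name = name; }
}

[Layer("DataAccess")]
public class EmployeeData { }

[Layer("Business")]
public class EmployeeRules
{
    private EmployeeData data = new EmployeeData();   // Business -> DataAccess: allowed
}

[Layer("UI")]
public class EmployeePage
{
    private EmployeeData data = new EmployeeData();   // UI -> DataAccess: breaks the declared rule
}

public static class LayerChecker
{
    // The "calls into" relationships that the application declares to the system.
    private static readonly (string From, string To)[] Allowed =
        { ("UI", "Business"), ("Business", "DataAccess") };

    public static void Check(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            var fromLayer = type.GetCustomAttribute<LayerAttribute>()?.Name;
            if (fromLayer == null) continue;

            var fields = type.GetFields(
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);

            foreach (var field in fields)
            {
                var toLayer = field.FieldType.GetCustomAttribute<LayerAttribute>()?.Name;
                if (toLayer == null || toLayer == fromLayer) continue;

                if (!Allowed.Contains((fromLayer, toLayer)))
                    Console.WriteLine(
                        $"Warning: {type.Name} ({fromLayer}) uses {field.FieldType.Name} ({toLayer})");
            }
        }
    }
}

public class Program
{
    public static void Main()
    {
        LayerChecker.Check(Assembly.GetExecutingAssembly());
    }
}
```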

What if an application had to present its responsibilities when asked, in a structured and formal manner?  What if it had to tie to a known hierarchy of business capabilities, as owned by the organization, allowing for better maintenance and lifecycle control? 

In other words, what would happen if we built in, to a modern language, the ability of the application to support, reflect, and defend the solution architecture?

Maybe, just maybe, it would be time to publish the next seminal paper: “Use of unconstrained objects considered harmful!”

Introducing a culture of code review

November 13th, 2006 | Enterprise Architecture

I got a ping-back from another blog post written by Jay Wren.  He mentioned that his dev team doesn’t have a ‘test culture’ so he has to play a ‘noisemaker’ role when he is challenging bad designs or code.

I read with interest because, to be fair, code review and design review is not usually done by the test team.  Functional testing is usually where the test team really digs in, although they have a healthy input in other places.

Designers need to ‘test’ the design, but this is most often done by having the architect create high level designs and having other team members, including other architects, review the design. 

This is, far and away, the most important ‘test’ of software, in my opinion, but is too rarely done, especially for code that is written for internal use as most IT systems are. 

Frequently, there is no architect available. 

Even if there is a senior person available who can play the role of reviewing architect, what process would they follow?  I’d suggest that any company in this position investigate the ATAM method from SEI.   This is a way of evaluating a design from the standpoint of the tradeoffs that the design accounts for.

Essentially, the concept is: each design must serve the purpose of meeting functional requirements, organizational requirements, and cost/complexity requirements.  By first collecting and prioritizing requirements for reusability, scalability, maintainability, and other ‘abilities’ (called system quality attributes), you can then evaluate the code to decide if it is ‘scalable enough’ or ‘maintainable enough’ to meet the needs.
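A tiny sketch of what that prioritized list might look like in code form appears below.  The attributes, priorities, and ratings are all invented, and this is an illustration of the “enough” idea rather than the actual ATAM worksheet.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a prioritized list of quality attribute requirements and a
// judgment of whether the design meets each one "well enough."  All values invented.
class QualityAttributeReview
{
    static void Main()
    {
        // (attribute, business priority, level required, level assessed in the review)
        var items = new List<(string Attribute, int Priority, int Required, int Assessed)>
        {
            ("Scalability",     1, 3, 4),
            ("Maintainability", 2, 4, 3),
            ("Reusability",     3, 2, 2),
        };

        foreach (var item in items)
        {
            var verdict = item.Assessed >= item.Required ? "enough" : "needs work";
            Console.WriteLine($"{item.Priority}. {item.Attribute}: {verdict}");
        }
    }
}
```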

This allows a realistic review of a system.  It takes a lot of the ‘personality conflict’ out of the equation.  There is no perfect software system.  However, if a system’s design is a good match for the requirements of the organization that creates it, that’s a good start. 

Should an interface be stable when semantics are not?

November 2nd, 2006 | Enterprise Architecture

I know an architect who is developing an enterprise service for the passing of contracts from one system to another (document metadata, not the image).  He knows the needs of the destination system very well, but he defined an interface that is not sufficient to meet those needs.

The interface describes the subset of data that the source system is prepared to send.  The source system is new, and will be released in iterations.  Eventually, it will send all of the data that the destination system needs.

In the meantime, it will send only a subset.

For some reason, he wants the interface to change with each iteration.  And thus, he will re-create the service repeatedly.

This thinking is typical: define what you need, refactor as you go.  The problem is that it ASSUMES that no one else will ever need to call your service or use your service.  It assumes that no one else will get to the destination system first.  In short, it assumes that you know everything.

The justification: we will change the destination system when the source system comes online.  Since we will not change it right now, there is no need to model the interface.
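For what it’s worth, the ‘stable interface’ option might look something like the sketch below.  The field names are hypothetical; the point is only that the contract describes everything the destination will eventually need, and the source leaves the later fields empty until its iterations catch up, so the service signature never has to change.

```csharp
using System;

// Hypothetical sketch of a stable contract: it names all of the contract metadata
// the destination system will eventually need.  Fields the source system cannot
// supply yet stay null until a later iteration delivers them.
public class ContractMetadata
{
    public string ContractId { get; set; }        // available from iteration 1
    public string CustomerName { get; set; }      // available from iteration 1
    public DateTime? EffectiveDate { get; set; }  // planned for a later iteration
    public decimal? TotalValue { get; set; }      // planned for a later iteration
}

// The interface stays the same across iterations; only how fully the message is
// populated changes.
public interface IContractTransferService
{
    void Send(ContractMetadata contract);
}
```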

What do you think?  Should the interface describe all of the data that will eventually be needed, even if neither the source nor destination systems can leverage all of it yet?  Should there be a different interface each time the behavior of the destination system changes?