What do .Net Solution Architects need to know?

September 8th, 2008 | Enterprise Architecture

A friend and colleague, J.D. Meier, asked me to consider this question, and I have to admit that it’s a bit of a tough one: What do .Net Solution Architects need to know?  (If you have an opinion, please go here).

Why so tough? 

For me, being a .Net architect means "applying various architecture skills to the Windows .Net platform."  We could just look at the intersection of architecture and platform and go from there.  This is a narrow scope: only platform-specific information.

On the other hand, there are a great many topics in architecture and engineering that are not specific to any platform.  We do a great service by providing a consistent message that allows architects of all stripes to seamlessly blend good practices with an understanding of the platform.  This is a wide scope: take best practices and apply them to a particular platform.

I tried to create a Venn diagram to illustrate these intersections (off the top of my head) for a small subset of practice areas.  This is what I came up with…

[Figure: Venn diagram showing the intersection of architecture and platform]

So let’s scope that question a little… With an architecture guide, what problem are we trying to solve?  Are we providing:

  • Advice for solution architects who happen to use .Net, or
  • .Net technology-specific advice for solution architects

I asked J.D. Meier that question.  His answer: Both!  The following is a (mostly) direct quote from his e-mail back to me:

I’m a principle-based, platform kind of a guy so I tend to focus on principles, patterns, and practices that are tech agnostic.  But we need to provide specific guidelines as well.

You can see that the big up front part of these guides are tech agnostic … and then tech guidelines are pinned against the backdrop:

· Improving Perf Scale (check out the first few chapters) – http://msdn.microsoft.com/en-us/library/ms998530.aspx

· Improving Web App Security (check out the first few chapters) – http://msdn.microsoft.com/en-us/library/ms994921.aspx

So, how I’m solving it now is … a thin, lead principle/pattern based guide as much as possible … and a KB, where the KB has how-to’s, cheat sheets .. etc. for applying it to the technology.

That’s the working model so far.

Here’s the last WCF project:

· The KB: http://www.codeplex.com/WCFSecurity

· The Guide: http://www.codeplex.com/WCFSecurityGuide

So, there you have it!  When providing advice from Microsoft to all of the brilliant folks who live in the "real world" using our tools and technologies, we want to offer both the high-level principles and patterns and the detailed level: here’s how to use all this good stuff in .Net.

My ask: if you have some ideas about "What .Net Architects need to know," please go to J.D.’s blog and share them with him.  The link is here.

Applying DDD to IT Management: First failure

September 3rd, 2008 | Enterprise Architecture

I always learn more from failure than from success.  In that spirit, I’ll share a (small) failure with you.

In my last post (Working in the dark), I mentioned that I would be discussing the metamodel [domain model] underlying all the things we do in operating Microsoft IT.  In effect, I’m following the precepts of Domain Driven Design, and applying those concepts to the act of software development in an IT setting, from planning through requirements, development through deployment, operations through retirement.  I don’t know if I will end up with a federated domain model or a single unified model.  That depends on a lot of things.  In the spirit of agile: I forge ahead.

One of the core principles of my effort, first and foremost, is “Leverage before build” which is a fancy way of saying “don’t invent what already exists.”  That implies that I will look to the [domain] models that exist.  One of them is the Common Information Model (CIM) from the Distributed Management Task Force.   

To leverage the work of the DMTF, I want to start with their highest level information model (conceptual level).  From the concepts, I can understand the classes.  I’ve used this process before, and it works.  To the best of my knowledge, this is the process used by DDD.

I opened the CIM from DMTF expecting to find an Information model.  After all, the name of their effort was “Common Information Model.”  And I found out that my nice little plan was not going to work… because the DMTF didn’t publish an information model.  They published a detailed class model.

(Caveat: I’m no expert on DMTF deliverables.  I spent three hours perusing their site, reading docs, and viewing models.  If there is a domain model there, I didn’t see it.  If someone does know of one, please send me a link.)

An example from the DMTF CIM system model looks like this:

[Figure: DMTF CIM system model class diagram]

Now for my controversial statement: This is not a good communication mechanism for information modeling, especially when sharing with people who are not already part of your project. 

I love class models.  They are great!  Unfortunately, a class model is not a domain model that business people can use.  It is not a model that I, as a potential adopter of DMTF models, can use.  I need the business concepts first, to make sure I can align and/or adopt. 

There’s an information [domain] model under the DMTF class model.  That much is discernible.  But it is not explicit.  It is not exposed.  Both class models and information models should exist.  But if I have to choose one, I’ll start with the information model. 
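To make the distinction concrete, here is a toy illustration of the two levels.  The domain content (systems, services, incidents) is invented for this example and is not taken from the CIM: the conceptual model names business concepts and relationships in business terms, while the class model commits to attributes and types.

```python
# Conceptual (information) level: concepts and relationships only,
# stated in business language.  No attributes, no types, no storage.
conceptual_model = {
    "concepts": ["Managed System", "Service", "Incident"],
    "relationships": [
        ("Managed System", "hosts", "Service"),
        ("Incident", "affects", "Service"),
    ],
}

# Class level: the same ground, but now committed to attributes and
# object structure, which is the form the published CIM models take.
class Service:
    def __init__(self, name: str, service_level: str):
        self.name = name
        self.service_level = service_level

class Incident:
    def __init__(self, ticket_id: int, affected_service: Service):
        self.ticket_id = ticket_id
        self.affected_service = affected_service
```

A business person can argue with the first artifact; only a developer can really argue with the second.  That gap is the complaint here.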

(Caveat: I’m aware of their CIM metamodel.  It is a model used to describe the CIM meta-elements, not a conceptual model that serves as the foundation for the CIM object model).

So, while I’d like to consume the DMTF work, I’m afraid that the published models of the CIM are not appropriately modeled for me to consume and leverage them.  I could re-model them in my own UML environment… but what value would I gain?  I’m not sure. 


Working in the dark

August 30th, 2008 | Enterprise Architecture

If we listen to smart people who create development processes, we hear things like "collect requirements" and "understand business process."  We then go and write use cases, design software components, and write code.  Test cases describe the things we are going to test, and automated tests allow us to test our systems over and over.  Build scripts and deployment scripts and maintenance scripts automate complex tasks.  Whew!

There’s a lot of stuff in there.  And that is just the software development process.  But software development is part of a much larger process. 

When you start to consider the end-to-end processes, you have to consider the planning and operations aspects.

Planning includes things like business strategies, trends, business programs, scorecard measurements, metrics, scenarios, business capabilities, high level business processes, business functions, divisions, roles, teams, budgets, roadmaps, and rollout plans. 

Operations teams have even more considerations, leveraging things like configurations, change plans, incident reports, problem statements, service levels, events, assets, and services.  Assets include servers, systems, components, databases, and network components. 

Why the litany?

I’m trying to make a point.  Many people are involved in running a business, and many are involved in making changes to the business, ostensibly to improve it.

If you write software, or work in IT, you are part of that system, as am I.

But if we don’t understand, even on the surface, the entire system by which the business operates, we are working in the dark.  We can’t see how our work affects others, and we can’t see how important (or unimportant) it is that we perform our responsibilities well.

Most importantly, without seeing the system, it is tempting to make things up. 

For example: if we don’t see how the requirements we gather connect to the business processes, we might be tempted to ignore the processes and simply invent requirements that "make sense" … to whom?  The project manager?  The customer representative?  What makes the requirements "correct" if we have nothing to connect them to?  I’ve seen this happen many times.  It is crazy, but typical.

Another example: if we don’t see how our services connect to enterprise information models, we can’t tell whether the service we are creating will have unintended consequences, create problems for managing data, or require a process to "magically" come into existence, complete with staffing and expertise, that the business is not expecting.

It is critical for the people involved in software development to understand the entire system of corporate operations, if only at a visceral level.  IT teams must have access to the system of processes, especially as that system changes over time.

Over the next few months, I expect to be writing more about this understanding… How to see the system, and how to connect your parts to the "whole." 

There is a lot to understand, and learning is a process.  Each day, I consider myself a student, and a day is well spent when I did two things: used my understanding to help someone, and learned something new.  As I write, I am learning, so I’m inviting you, gentle reader, to share this journey with me.  Share the things you have learned, and the perspectives from your own experiences. 

Instead of working in the dark, let’s light some candles…

Traceability, the Solution Model, and Metamodeling

August 26th, 2008 | Enterprise Architecture

It is nice to point out, on occasion, when two different leaders in the architecture community are saying things that, when added together, become greater than the sum of their parts. 

First off, my friend and colleague Gabriel Morgan recently described the business value of creating a single underlying model to connect all aspects of a particular software project (from requirements through code).  He calls the model a Solution Model, and rests that model firmly on a metamodel that allows these underlying elements to be related to one another in a useful way.

His blog post, which is long and richly detailed, is not about modeling.  It is about value.  I recommend it highly.

"this blog is about understanding the value of modeling to a project team and is focused on helping Solution Architects gain a practical understanding of the value of modeling to, in turn, help explain its value to the project team for adoption." (Gabriel Morgan, from his blog)

The other contributor is Jean-Jacques Dubray.  He recently posted a very interesting article on "Model Driven Engineering" where he discusses many things, including the value of a metamodel.

"My recommendation to developers and architect is: metamodel (as a verb), metamodel completely and thoroughly and even if you don’t create a [Platform Independent] model of your solution and a compiler (based on this metamodel), write code with the metamodel in mind (this will end up looking like a framework of course). For instance, define precisely what a business entity is, an association, a business process, a task… Remember, you are NOT creating an OO model, you are creating a metamodel. Every solution domain has a metamodel. There is nothing absolute about it, the metamodel of an information system is different from the metamodel of an industrial process control system, and what works for a travel company may not work for an insurance company." (Jean-Jacques Dubray, from his blog)
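As a concrete (and entirely hypothetical) sketch of what "metamodel as a verb" might produce, here are precise definitions of a few meta-elements, followed by one solution domain described in their terms.  The names and the sample domain are my own illustration, not anything from JJ’s post:

```python
# A minimal metamodel sketch: we define what an "entity", an
# "association", and a "process" ARE before modeling any particular
# domain.  All names here are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BusinessEntity:
    name: str
    attributes: tuple = ()   # attribute names only; no storage details

@dataclass(frozen=True)
class Association:
    name: str
    source: BusinessEntity
    target: BusinessEntity
    cardinality: str = "1..*"

@dataclass
class BusinessProcess:
    name: str
    tasks: list = field(default_factory=list)  # ordered task names

# Using the metamodel to describe one solution domain:
customer = BusinessEntity("Customer", ("name", "region"))
order = BusinessEntity("Order", ("number", "total"))
places = Association("places", customer, order)
fulfillment = BusinessProcess(
    "Order Fulfillment", ["capture order", "ship", "invoice"])
```

Note that nothing above is an OO model of orders or customers; it is a vocabulary for saying what an order or a customer is allowed to be, which is exactly the distinction JJ draws.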

What makes these blogs interesting is not that they are about the same thing.  They are quite different from one another.  What makes them interesting, together, is the deep and fundamental support that each provides to the practice of "using the metamodel."  This is a term that is not discussed much, but it should be.

The metamodel is the conceptual information architecture that classifies the information that we can use to construct solutions, understand problem domains, and create practices that ensure that we build the system that we should build.  As JJ points out, the terms matter.  As Gabriel demonstrates, those connections are valuable.

The metamodel is key.  With it, we can tie the requirements to the design in a way that supports agility.  We can say, definitively, what impact a change in requirements will have, allowing us to select the requirements that we want to change in order to produce a desired effect.  This is powerful, and necessary.
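A toy example may help show the payoff.  This sketch is my own, not Gabriel’s Solution Model: requirements and design elements are plain strings and the impact query is a simple lookup, but it shows how an explicit link between layers makes "what does this change touch?" answerable.

```python
# Toy traceability: map each requirement to the design elements that
# realize it, then ask what a requirement change would touch.
from collections import defaultdict

class TraceIndex:
    def __init__(self):
        self._realizes = defaultdict(set)  # requirement -> design elements

    def link(self, requirement, design_element):
        self._realizes[requirement].add(design_element)

    def impact_of(self, requirement):
        """Design elements affected if this requirement changes."""
        return sorted(self._realizes[requirement])

idx = TraceIndex()
idx.link("REQ-1: validate order totals", "OrderService")
idx.link("REQ-1: validate order totals", "PricingRules")
idx.link("REQ-2: audit all changes", "AuditLog")

print(idx.impact_of("REQ-1: validate order totals"))
# -> ['OrderService', 'PricingRules']
```

A real solution model would carry far richer element types (the metamodel defines them), but even this trivial index answers an impact question that a pile of disconnected documents cannot.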

And it all starts with the metamodel.


Malik's Laws of Service Oriented Architecture

August 21st, 2008 | Enterprise Architecture

  • No one but you will build the services you need in time for you to use them
  • If you build a service that no one else asked for, you will have built it for yourself
  • If you build a service for yourself,  you will optimize it for your own use
    • It is therefore the optimal service for you to use
    • It is very unlikely to be the optimal one for anyone else to use
    • No one besides you will use it
    • You will not use anyone else’s


Therefore, any team building reusable services must build each one only after two or more people have asked for it, with full knowledge that the resulting service will almost certainly be available too late for any of them to use it.

Therefore, no team should intentionally build reusable services.

Additional Laws and Corollaries

  • If you invest in improving someone else’s pre-existing service, you will create a reusable service.
  • Creating a reusable service, by improving someone else’s service, will cost you more, up front, than writing a completely new one.
  • The cost of maintaining a service increases proportionally to the number of consumers that use it.

Merging EA Frameworks

August 5th, 2008 | Enterprise Architecture

I’ve spent some time of late looking at various EA frameworks.  Nothing perfect out there yet, but quite an array of useful things.  But what would it take to create a single consistent framework for the IT profession?  Let’s look at the stuff that’s there now.  (Caveat: I reserve the right to be wrong.  If you disagree with anything here, send me an e-mail and I’ll update the text).

  • TOGAF – Basic strength: solution architecture.  Various models and how to create them.  Basic weaknesses: Planning methods and governance framework.  Weak on Information Architecture
  • FEAF – Basic strength: complete implementation tied to measurement framework.  Basic weaknesses: very specific to government, lack of independent process taxonomy keeps processes “in the silo.” 
  • eTOM – Basic strength: excellent process taxonomy with rich details. Strong information architecture.  Great for governing external vendors.  Basic weaknesses: fairly specific to telecom industry, gaps in governance and enterprise architecture models. 
  • ITIL – Basic strength: excellent process framework for operations and (now) governance.  Basic weaknesses: no architectural methodology to speak of.  Sizeable gaps in information or application architecture.
  • Enterprise Unified Process – Basic strength: soup-to-nuts coverage of enterprise software development processes, including funding and operations.  Basic weaknesses: poor adoption rate and lack of a governing body to allow for growth, minimal architectural methods, no enterprise process or capability framework.
  • Zachman – Basic strength: comprehensive taxonomy of architectural artifacts (to let you know when you are done).  Basic weaknesses: Lack of published and vetted methods to avoid “boil-the-ocean” exercises and focus on one particular benefit.  Very shallow: No detailed process, capability, or solution frameworks for “level 2” detail.  Highly proprietary.

What would an ideal framework look like?  It would have all of these things.  This list is “off the top of my head,” so I’m going to miss a few, but this is where I’d start:

Capabilities / Measurement / Process model for the enterprise

Capability and process modeling take a huge hit when an enterprise must first create its own base taxonomy before any real modeling can begin.  A published taxonomy, governed by a passionate community, is necessary to get enterprise architecture efforts up to speed in non-Fortune 500 organizations.

Service, Solution, Feature and Technology model

Application simplification and portfolio management require a base taxonomy of solutions and technologies to align the work in various divisions and speed up adoption of integrated solutions.  An industry-standard taxonomy is necessary to allow vendors to provide useful information up front and to smooth the development of SOA services across the enterprise.

Detailed process descriptions for all aspects of IT

If wishes were horses, I’d merge ITIL, TOGAF, EUP, MSF, and Agile processes to get a consistent, community-governed, richly detailed process model for all aspects of Information Technology governance, processes, and measurement.  It would include measurement, planning, improvement, transition, operations, and introspection, and take a “business of IT” approach.

Rich architectural methods and training for all aspects of architecture

Starting with TOGAF, I’d extend the ADM to cover, in rich detail, three subtypes of architecture (collaboration and measurement across the enterprise, governance within a functional unit, and solution design and development) across three architectural “areas of focus” (business capability and processes, information design and integration, and solution development and technologies).  It is simple to create this “table.”  Doing so lets us see the opportunity to fill out the ADM.

Richly detailed “business conceptual model”

Everyone has their own idea of what basic business terminology means.  A community-governed business conceptual model can be governed by, and provided to, business schools around the world to create and maintain consistent information models.  This removes one of the key causes of software project failure: the failure to share common context.

Multiple federated “common information models”

Some industry implementations of EA frameworks are already here, and some vendors provide a good starting point for the shared elements (common, shared processes).  What is needed is a framework that allows many models, aligned by industry or attribute type, and also lets organizations create and manage federated models within their own walls, preserving their unique value proposition while keeping their information models aligned.

The frameworks that are there are just not ready to do everything.  Only by describing what ‘everything’ would look like can we begin to fill these gaps.

Enterprise SOA needs a Federated Evolutionary Modeling Environment

July 30th, 2008 | Enterprise Architecture

I’ve been thinking a lot lately about the gap between “what we have” and “what we need” in the Enterprise SOA space.  I think I have a need that is not yet filled by any software that I’m aware of.

I put up a post back in June about the difficulty of creating a common information model in a large enterprise, especially one with a federated environment like ours (CISR coordination and diversification models).  The feedback I got was telling.  The majority of respondents told me what I suspected: developing an enterprise-wide common information model is so difficult that many folks consider it unfeasible.  (Actual words included things like “utopian” and “big design up front.”  I’ll stick with “unfeasible”.)

That said, I have also stated in public that I believe, firmly, that SOA at the enterprise level requires some levels of agreement on the way that information is understood and coordinated.  Each business unit can own information that is specific to that unit, but in the areas of coordination, where there is value, the business needs to be able to communicate.

So, we have something that is hard to do (build a CIM and keep it up to date).  That something is useful (to get the benefits of Enterprise SOA).  So, why not take the software approach?  After all, I do work at Microsoft.

What business scenario would this tool need to support?

Basically: each business unit would submit their information model to a common repository.  The submitters are architects, and they MUST align the models in some way, even if it is just to show that there are differences.  Services are posted to the same repository, and must be lined up under specific information models.  In order to consume a service, the business has to agree to the information model.  Multiple competing models are allowed to co-exist.  What do we get?  An economic model of self-governance that produces an optimum information model: one where agreement is reached only where agreement makes financial sense.

Specific capabilities:

  • A business unit architect can consume part of an information model without consuming the entire thing.
  • A business unit architect can assert part of an information model without proposing the entire thing.
  • A business unit architect can offer services tied to their information model.
  • A business unit architect can assert that a portion of an information model is “standard” for their use.
  • A developer, writing software for that business unit, can easily find and adopt their information model.
  • A project team can provide a report to a governance committee proving that they are conformant to local standards.
  • An enterprise architect can run reports to isolate “points of difference” between business unit information models.
  • A system designer, intent upon consuming services from another business unit, can begin a workflow to ensure that the consuming business unit agrees to the information model of the presenting business unit (an “information adoption workflow”).
  • The workflow needed for a consuming business unit to agree can be custom-tailored to the organization.
  • Organizations that present a service have an automatic measurement mechanism built in, allowing them to “charge back” the businesses that consume the service.  Various financial models must be supported (one-time fee, pay per use, annual licenses).  This provides an economic incentive for sharing code, as well as an incentive to create services that align to commonly needed information models.
  • Built in support for the versioning of information models over time (including both past and future versions) allows the business to change their minds, and even chart their course.
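The core rule running through these capabilities, that no one consumes a service without first agreeing to its information model, can be sketched in a few lines.  Everything below (the class names, the exception, the repository shape) is my own illustration of the scenario, not a description of any actual product:

```python
# Sketch of the self-governance rule: a business unit must adopt an
# information model before it may consume a service tied to it.
class AgreementRequired(Exception):
    pass

class ModelRepository:
    def __init__(self):
        self.services = {}       # service name -> information model name
        self.agreements = set()  # (business unit, model) pairs

    def publish(self, service, model):
        """A business unit posts a service, lined up under a model."""
        self.services[service] = model

    def agree(self, unit, model):
        """A consuming unit formally adopts an information model."""
        self.agreements.add((unit, model))

    def consume(self, unit, service):
        """Allowed only after the unit has adopted the service's model."""
        model = self.services[service]
        if (unit, model) not in self.agreements:
            raise AgreementRequired(f"{unit} has not adopted {model}")
        return model

repo = ModelRepository()
repo.publish("GetCustomer", "Sales.Customer.v1")
repo.agree("Finance", "Sales.Customer.v1")
repo.consume("Finance", "GetCustomer")   # allowed
# repo.consume("HR", "GetCustomer")      # would raise AgreementRequired
```

The economic behavior falls out of the rule: agreement happens exactly where a unit decides a shared model is worth the price of alignment.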

That’s what my gut tells me.  This has some pretty interesting effects:

  1. Information architects have a clearly defined and critical role at the earliest stages of a project: get consensus on information model changes needed to allow the consumption of existing, lower cost services.
  2. Economics will drive good behavior.  No need for an Enterprise Architect to “design” good behavior. 
  3. Less emotion.  There will not be consensus on everything, and this model doesn’t require it.  Consensus will be reached surprisingly easily on some key areas, and it will happen without any architect looking to make it happen.  This helps remove politics from the picture as well.
  4. It is easier to adopt existing code than to build new code if services offered by other groups, aligned with the information standards of the consuming business, are already clearly identified. 

We may be closer than we think.  With bits from various MS products, and with Oslo coming, this vision is getting closer to reality.  It’s an end-to-end idea. 

Something to consider.  Comments?

Excellence depends on the environment you are in

July 30th, 2008 | Enterprise Architecture

Not long ago, I was asked an interesting question about our Enterprise Architecture team.  The question was “Does Microsoft provide the internal support to create an excellent Enterprise Architecture program?”

The answer is “yes,” but it got me thinking: what qualifies as “excellent”?  That term is subjective.  In our business, what does it mean to be “excellent,” and how might that differ from another business?

Excellent, to me, means that the effort is tailored to the needs of the business.  That includes business strategy, business structure, and corporate culture.  Our business, in Microsoft, is the business of developing and distributing software.  We are pretty good at it, although we have our critics.  

So our EA program is just that: tailored to the needs of Microsoft.  We don’t do more than Microsoft needs, or less than Microsoft demands.  We push the envelope, as change agents and thought leaders, but we don’t crimp creativity… let’s face it: we make money on applied creativity.  If one idea out of 1,000 makes money, we earn back the investment.  It’s a unique space to try to operate an EA program in.  We are excellent, but probably not typical.

I can only conjecture about what “excellent” would look like in another company.  We pay industry analysts and attend conferences, just like many of you do.  Part of the reason: to listen and learn about how practitioners in different companies do what they do.  Basically, we are trying to find out how our peers describe excellence for their own enterprise.

How do you define excellence?  Are you there yet?

Everybody, Somebody, Anybody, and Nobody

July 26th, 2008 | Enterprise Architecture

This is the story of four people named Everybody, Somebody, Anybody, and Nobody.

There was an important job to be done and Everybody was asked to do it.

Anybody could have done it, but Nobody did it.

Somebody got angry about that, because it was Everybody’s job.

Everybody thought Anybody could do it, but Nobody realized that Everybody wouldn’t do it.

Consequently, it wound up that Nobody told Anybody, so Everybody blamed Somebody.

Clarifying the Use Case

July 23rd, 2008 | Enterprise Architecture

A use case is a cool thing.  A little too cool.  The term has occasionally been misused, and in some respects that misuse diminishes the value of a use case.  To succeed, we have to know what a use case is.  When you are done reading this post, you will still know what a use case is… but you will also know what a use case isn’t.

What a use case is

The following section is a direct excerpt from “Writing Effective Use Cases” by Alistair Cockburn.

“A use case captures a contract between stakeholders of a system about its behavior. The use case describes the system’s behavior under various conditions as the system responds to a request from one of the stakeholders, called the primary actor. The primary actor initiates an interaction with the system to accomplish some goal. The system responds, protecting the interests of all the stakeholders. Different sequences of behavior, or scenarios, can unfold, depending on the particular requests made and the conditions surrounding the request. The use case gathers those different scenarios together.” (Cockburn, 2001)

With all due respect to Cockburn, his discussion doesn’t so much define a use case as describe one.  There are very few formal definitions available in the public domain or in reference works.  Here is my attempt at a more formal definition:

A use case is a formal description of an interaction between an actor (usually a person) and a system for a specific purpose or goal.

Many of the discussions of use cases in the literature go into great detail about the requisite parts of this formal description.  Most include the concepts of ‘actors,’ ‘use case scenarios,’ ‘preconditions,’ ‘postconditions,’ and a stated ‘goal.’ 
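Those requisite parts can be captured as a simple record.  This is just the structure expressed in code rather than in a document template, with an invented example; the field names follow the terms above:

```python
# The standard parts of a use case, captured as a plain record.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    goal: str
    primary_actor: str
    system: str
    preconditions: list = field(default_factory=list)
    postconditions: list = field(default_factory=list)
    main_scenario: list = field(default_factory=list)  # ordered steps
    extensions: dict = field(default_factory=dict)     # step -> alternate flow

checkout = UseCase(
    goal="Place an order",
    primary_actor="Customer",
    system="Online store",
    preconditions=["Customer is signed in", "Cart is not empty"],
    postconditions=["Order is recorded", "Confirmation is sent"],
    main_scenario=[
        "Customer reviews cart",
        "Customer enters name and address and clicks 'enter'",
        "System validates payment",
        "System confirms the order",
    ],
    extensions={"3": ["Payment declined: system asks for another card"]},
)
```

Notice that the record forces exactly the things Cockburn describes: a goal, a primary actor, a system that responds, and the scenarios gathered together.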

Things to notice

  1. In a use case, there are always at least two actors, and one of them is a system.  The use case is a description of system level interaction… in rich detail. "Enter name and address and click the ‘enter’ button."  There is very little about a use case that is abstract or high level.
  2. The amount of formality is not part of the definition.  In fact, Cockburn specifies that you should create a use case in a fairly informal way at first, when the system is still being understood.  Only in a later iteration of the requirements, when the project is funded and the scope is reasonably well understood, should the specifics of the use case be added.

What a use case is not

As I mentioned before, the term "use case" has been used in many ways, and it has been applied to some pretty unusual things.  To be effective, we should recognize that a use case is a tool tailored to one purpose, and using it for a different purpose may not be optimal. 

  1. A use case is not a description of a business process.  The use case describes the interaction between a single actor and a system.  At best, that interaction can be considered a single (atomic) activity in a business process.  A business process is much more than that, including many activities from inputs to outputs in support of a goal.  Let’s not pretend that use cases describe business processes.  One activity, perhaps two… that I will buy.  Rarely, if ever, anything more.
  2. A use case is not decomposable into other use cases.  It is the atom.  Break it down and you have parts that are not atoms.  Combine use cases and you have composites (molecules) that are not atoms.  A use case is the description of an interaction between person and machine.  That is all.
  3. A use case is an inappropriate tool to describe system-to-system interaction.  Certainly you CAN use a use case this way, just as you CAN drive a screw into wood with a hammer.  But it is not optimal to do so.  A much better set of tools includes UML interaction diagrams, protocol descriptions, standard identifier formats, and WSDL.   
  4. A use case is used to elicit requirements but it is not the requirement itself.  Requirements need to be collected and called out as statements.  A couple of noted authors have weighed in on the skills needed to describe and understand requirement statements.  Both analysts and developers should learn these skills.  
  5. It is optional to use the use case approach.  While I’m a fan of use cases, I’m also in a role where we have to draw clear distinctions between the work that someone must do and the work that someone should do.  The requirements must be collected.  Use cases should be used.  If you can collect requirements in a different way, that is not wrong.  That said, I’m fairly comfortable stating that the use case approach is a ‘best practice’ for describing requirements to software developers.
  6. For traceability and requirements validation, use cases are not the source of requirements.  Requirements come from the business needs, and most of the business needs are fairly easy to connect to specific stages of the business processes (with some fascinating exceptions).  As I pointed out in my prior post, I view the source of most business requirements to be the business processes and customer experience scenarios that software must support.  

    Therefore, if you want to determine if a requirement is needed, or provides value, or has been completely met, it is better to trace the requirement back to the business process.  The use case is an abstraction along the way.  (This is my opinion, of course, and your mileage may vary).

    (note to contributors: the distinction between functional and non-functional requirements is too vague to clearly delineate the requirements that are not easily traced back to business process or user experience scenarios.  There’s another blog post in there somewhere.)

In short, a use case is an essential and valuable tool in the Business Analysts’ toolkit.  Let’s use it wisely.