
Getting the Enterprise Canonical Data Model right

June 30th, 2007 | Enterprise Architecture

What is the correct level of abstraction for the Enterprise Canonical Data Model (ECDM)?

As I blogged before, the ECDM is used to decide what data should be passed through the integration infrastructure in the notifications that occur on business events.  The canonical schemas that define “things” are all subsets of the ECDM (or extensions, as we will see).

In some organizations, there are fairly few variations in basic ‘things’ like order, product, and agreement.  In other organizations, including Microsoft, the need for independent variation is more apparent.  As we move more toward “Software as a service,” the number and types of products will only grow.  And what exactly is an order if we are using click-stream billing for a service call?  This will be fun.  So we need lots of flexibility as the business grows and changes.  An ECDM that is too prescriptive or too large can end up constraining the business’ ability to grow and change.

There are basically two types of messages that need to rely on the ECDM: event notifications and full data entities.  Both are transitory, in that they state a fact at a particular point in time, but the event notifications are more transitory because they are only sent once across the infrastructure.  We need to be able to replay them, but (with the exception of BAM), we don’t often query them.

In general, I’d say the rule for event notifications should be:

Communicate sparingly, communicate clearly, allow for questions.

Communicate sparingly: Define your entities to the minimum level needed to share “concepts” and “relationships” across the enterprise.  If an order comes in from “company ABC” for 10,000 licenses of “product XSP” under marketing program “VLR”, then the canonical schema for that order needs to be pretty short, and the event notification even shorter, so that receiving systems can decide if they even care.  Remember that your event system will send a LOT of events.  Keep them small but provide enough information for the recipient to decide if they need to know more.  So, perhaps the “order placed” notification has things like order id, customer id, partner id, reseller id, program id (sales are made under marketing programs) and a list of product categories that the items in the order represent.  That’s it.  The receiving system can decide if it needs to know more.
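To make that concrete, here is a minimal sketch of such a notification.  Python stands in for whatever schema language you use, and the field names are assumptions drawn from the example above, not an actual Microsoft schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class OrderPlacedNotification:
    """Just enough for a recipient to decide whether to ask for more."""
    order_id: str
    customer_id: str
    partner_id: str
    reseller_id: str
    program_id: str                      # sales are made under marketing programs
    product_categories: List[str] = field(default_factory=list)
    occurred_at: datetime = field(default_factory=datetime.utcnow)

# A receiving system can filter on the ids alone:
event = OrderPlacedNotification("1234", "ABC", "P-77", "R-09", "VLR", ["license"])
if "license" in event.product_categories:
    pass  # interested: go ask the infrastructure for the full order
```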

Communicate Clearly: The ids must be generic and enterprise-wide.  If a receiving system gets a notification or a canonical element (like the full order), it has to be able to interpret it consistently.  That means that the systems listening for the events have to know what the ids mean and how to get more information on an id if they don’t already have it.

Allow for Questions: The infrastructure needs to provide a generic way to ask the question: I need to know more about order 1234 to customer ABC on program VLR.

So if the needs of the event notification are for brevity and consistency, what are the needs for full data entities?

When a system gets an event notification, it will look at the event and decide if it cares.  Most of the time, it won’t, and our use case ends.  Sometimes it will.  When it does, it needs to ask for full details of that data entity.  Perhaps it wants to store data.  Perhaps it wants to calculate something to append to the records for the customer, the partner, the reseller, the sales team that made the sale, or the product group that made the product.  There are lots of reasons why the system getting the message will need more data.  We listed the ability to ‘ask questions’ above, and it applies to full data entities as well.

I’d say the rule for full data entities is:

Provide a complete document, at a point in time, allow for questions

Provide a complete document – the full data entity contains all of the data that the source system can share about it, including denormalized details about related entities.  For example, if I get an order as stated above, for 10,000 licenses for product XSP, we would provide the full “legal name” for the product and some attributes for the product (like the fact that it is a license, what country it is sold in, languages, product family id, etc). On the other hand, we don’t want to constrain the business, so allow for optional fields in the semantics of the canonical object.  Allow a system that doesn’t have a data element (like a price or even a quantity) to send the order anyway.  Also allow the system that is sending data to append ‘system specific’ data elements.  That way, a team can use the canonical model to send data to another closely related system in the same business stream, where those ‘system specific details’ can be understood and used.
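A rough sketch of what such a document might carry, again in Python rather than a real canonical schema.  The names are illustrative; the important parts are the denormalized product attributes, the optional fields, and the ‘system specific’ extension bag:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional

@dataclass
class ProductLine:
    product_id: str
    legal_name: str                      # denormalized from the product entity
    is_license: bool
    country: str
    languages: List[str]
    product_family_id: str               # code only; details available on request
    quantity: Optional[int] = None       # optional: the sender may not have it
    unit_price: Optional[float] = None   # optional: the sender may not have it

@dataclass
class CanonicalOrder:
    order_id: str
    customer_id: str
    program_id: str
    as_of: date                          # the point in time this document describes
    version: int                         # every document carries a version (more below)
    lines: List[ProductLine] = field(default_factory=list)
    # 'System specific' extensions travel with the document but are only
    # meaningful to closely related systems in the same business stream.
    extensions: Dict[str, str] = field(default_factory=dict)
```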

At a point in time – Recognize that your documents are not static.  Provide dates and version numbers for each and every document and allow a document to be called back up on the basis of those dates and version numbers.  This is key to being able to recreate a data stream later in time, an operational necessity that is often overlooked.  So, yes, your order has a version number. 

Allow for questions: As complete as your order document is, it will still need to have codes in it referring to other things.  For example, each product may have a product family.  By including the product family code, you are stating this: “At the time this order was placed, product “Sharepoint” was part of the “Office Family” of products.”  For some products, this may not change much; for others, it could.  So you include the product family code, but there is no need to include attributes of the product family.  The receiving system can ask for product family details from the same infrastructure if it needs to follow up.
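The ‘question’ itself can be one generic call.  A minimal sketch, assuming a hypothetical get_entity operation exposed by the integration infrastructure:

```python
def ask(infrastructure, entity_type: str, entity_id: str,
        as_of=None, version=None):
    """Generic follow-up question.  'infrastructure' is any client that
    exposes get_entity(); the operation name here is an assumption."""
    return infrastructure.get_entity(entity_type, entity_id,
                                     as_of=as_of, version=version)

# e.g. resolve a code found inside an order document:
#   family = ask(bus, "product_family", line.product_family_id)
```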

Hopefully, with these simple guidelines, we can build the ECDM at the right level of abstraction.

As the role changes…

June 29th, 2007 | Enterprise Architecture

In my career, if I take any window of time that is two years long, regardless of start and end date, I cannot find a single period where I started and ended the period doing the same thing.  Not one.  Oh, I’ve worked at employers for longer than two years, but not doing the same job. 

I’m about to begin my fourth job at Microsoft.  I’ve been here three years (this time).  It’s a good job.  It’s a different job.

I started in one of the IT groups before moving to Enterprise Architecture.  Loved the people, and made the best of the job.  Then I moved to EA and became an Enterprise Application Architect embedded in the OEM division… which meant that it was my job to ‘govern’ the IT projects.  I’m not much for governing.  I’m a lot better at collaborating, and I really enjoyed collaborating with that team.  Some very smart people, and I had a lot of fun working with them.  This past spring, I became the Lead Systems Architect for a large distributed Enterprise-focused Service-Oriented Business Application (out of necessity, really).  I had a blast.  I just finished turning that gig over to an amazing architect who I have the utmost respect for, so that I could move to Central Enterprise Architecture… this time to be ‘Mr. SOA’ for Microsoft IT.

Of course, Microsoft IT has far more than one SOA architect.  My peers are probably better than I am in some pretty key ways.  We have many talented SOA architects working in different divisions.  What I’m hoping to do is take Microsoft IT to the next level of SOA maturity by driving the development of the Enterprise Canonical Data Model, Business Event Taxonomy, Enterprise Solution Domain Integration Model, and the Periodic Table of Services (a set of planned services that are needed to drive SOA forward).  This is one of the toughest jobs I’ve taken on in years (since co-founding a dot-com).

I’m ready. 

It’s always a bit hard, and a bit sad, to leave the ‘comfortable’ and go to the ‘new.’  There are a great many good people who I won’t get to work with daily any more. I’ll miss that daily contact.

On the other hand, there are a great many good people who I haven’t had the chance to work with, but will get that chance now.  Looking forward to that part.

Microsoft IT is a great place.  If you are an IT professional, and you are the best darn architect or developer or tester or PM or operations specialist in your team, I encourage you to seriously consider joining this organization.  You can truly build a career here, if you are gutsy, and smart, and most importantly, passionate about being excellent at what you do.

There is no way to go higher than when you are soaring with the eagles.

What I like about Acropolis

June 27th, 2007 | Enterprise Architecture

Just checking out the online resources on the new Orcas front-end development technology called Acropolis that builds MVC/MVP patterns into WPF software development.

What I find promising: an Acropolis part can essentially consume a SOA service, allowing the composition of process and activity services to be as simple as snapping parts onto a surface.  This is not particularly new from a software development standpoint, but it’s pretty new for the Microsoft stack.  Nice to see.
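To illustrate the composition idea only (these class names are invented; the real Acropolis API is different), a part is little more than a service proxy wrapped in a snap-on unit:

```python
class OrderStatusPart:
    """A 'part' that fronts a SOA service for the UI."""
    def __init__(self, order_service):
        self.order_service = order_service      # injected service proxy

    def render(self, order_id: str) -> str:
        status = self.order_service.get_status(order_id)  # hypothetical call
        return f"Order {order_id}: {status}"

class Surface:
    """Composition surface: snap parts on, then render them together."""
    def __init__(self):
        self.parts = []

    def snap(self, part):
        self.parts.append(part)
```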

We could theoretically get to the point where Mort himself can compose an application from services…  And there would be little or no code to maintain. (Thus solving the problem of unmaintainable code).  It also makes the creation of a Service Oriented Business App so fast as to provide real, useful, practical, business agility.

Code is becoming free.

The Unimportant SOA Catalog

June 26th, 2007 | Enterprise Architecture

Have you ever woken up in the morning with an idea in your head that you simply have to write down?  I just did.  Here’s the idea: Everyone talks about how important the catalog (or repository) is to Service Oriented Architecture.  It isn’t.

The reason everyone wants a catalog is simple: If I create a uniquely valuable service, and I want people to use it, I need a place to advertise it.  So I put it in a catalog.  The catalog contains useful information like what the service is, and what it does, and who made it, and how to call it.  Useful stuff.  Sears Roebuck, circa 1893.

So how can that be unimportant?

Because this is a case of ‘doing a really good job of solving the wrong problem.’

A friend of mine and fellow architect here in Microsoft IT named Mohamed El-Ghazali changed the way I think about service contracts.  And Gartner changed the way I thought about “what makes adoption work,” and together they make a powerful brew.  It took me a while, because these ideas are “just different enough” to make me pause, but between these two sources, they had the intended effect, and now I can say, without blinking, that the catalog is not the high order bit.

Why? Because the catalog is not an IFaP.  It is a list of chaos. 

If you have 20 services, or even 50 services, a catalog is really useful.  I’m looking at an architecture that will require something around 500 enterprise information, activity, and process services, about 200 infrastructure services, and countless ‘point solution’ services.  There is no way a list will do.  No human can remember it, or use it.  Duplication and overlap will prevail.  Face it, the catalog doesn’t scale.

So where does the solution lie? 

How about looking to the past to find the future?

I call your attention to the history of the Periodic Table of Elements.

If you are not familiar with the history of the creation of this simple yet extraordinarily powerful concept, you should read this page.  Two key concepts I’d like to pull out:

First off, by creating the periodic table of elements, Mendeleev created a situation not only where elements could be classified, but where missing elements could be predicted.

Between 1868 and 1870, in the process of writing his book, The Principles of Chemistry, Mendeleev created a table or chart that listed the known elements according to increasing order of atomic weights. When he organized the table into horizontal rows, a pattern became apparent–but only if he left blanks in the table. If he did so, elements with similar chemical properties appeared at regular intervals–periodically–in vertical columns on the table.

Mendeleev was bold enough to suggest that new elements not yet discovered would be found to fill the blank places. He even went so far as to predict the properties of the missing elements. Although many scientists greeted Mendeleev’s first table with skepticism, its predictive value soon became clear.

This meant that Mendeleev did not just help people understand the list of ‘needed domain knowledge’; he actually created boundaries that empowered other people to focus their efforts and deliver incredibly quick innovation.  That innovation came from people he had never met.

The second thing I’d like to highlight is that the original table was useful, but it was changed as knowledge increased, to match a more modern understanding of chemistry and modern techniques for measuring atoms that were not available when it was developed.  In other words, the concept is good, even if the implementation is iterative.  (19th century agility.)  The boundaries remained, and the table stands today as a fundamental artifact in the understanding of our natural world.

What does that have to do with SOA?

I am creating a similar table of services based (loosely) on the layers defined by Shy Cohen, message exchange patterns defined by the W3C, the work on Solution Domains that my team in IT Enterprise Architecture has started, and the business behaviors that I see as necessary to accomplish a partitioned design.  The goal is to create an all-up IFaP of services based on multiple spanning layers. 

Unlike the periodic table, this will not be bounded by physics.  Instead, it will be bounded by the data elements and solution elements defined by performing a Solution Domain mapping exercise against the enterprise.  Your organization will have different elements, but either way, there will be boundaries, and that will, I believe, foster organized and directed effort, creativity, and discoverability.

I believe the value will be clear.

  1. We will know what services we need to develop to meet the needs of the enterprise.  We can even prioritize the list and create a roadmap showing the CIO when we will be “done.”
     
  2. We will have basic patterns already established for how they will be called and what they will return.  This reduces a huge amount of churn and will give brave developers the ability to resist the “not invented here” plague.  The patterns can be designed to include all the needs of the test and support teams that are normally ‘left out’ of application specs but are ever more critical to the success of SOA.
     
  3. We will have generic test harnesses in place to test them before they are written, allowing test architects to build reusable test value, while at the same time relieving project teams from writing difficult and complex test software to support SOA.
     
  4. We will have sufficient information for the team that must build and maintain these services to estimate their cost, providing some visibility into the cost of developing an integrated application.  This gives us the ability to separate out the incremental cost of SOA from the cost of application development in general.

I’m pretty excited about doing this, and I think it is a strategy that can work. 

So what part of this kills the catalog?

The catalog helps a programmer to find the name of a service that performs a specific purpose. 

However, if I know the purpose, and the list of activities is a constrained list (as is the list of data subject areas), then I can create the name of the service and just hit the infrastructure up for it.  If it exists, the service will respond with details.  If not, the infrastructure can respond with information on what is needed and where it should live.
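A sketch of the convention, with invented names.  The point is that the service name is computed from two constrained lists, not browsed in a catalog:

```python
VERBS = {"get", "create", "update", "find"}       # constrained activity list
SUBJECT_AREAS = {"order", "invoice", "product"}   # from the canonical data model

def service_name(verb: str, subject_area: str) -> str:
    """Derive the name; no catalog lookup required."""
    if verb not in VERBS or subject_area not in SUBJECT_AREAS:
        raise ValueError("not a recognized verb or data subject area")
    return f"svc.enterprise.{subject_area}.{verb}"

# Ask the infrastructure for "svc.enterprise.order.get" directly; it either
# responds with the service details, or tells us the service doesn't exist
# yet and where it should live.
name = service_name("get", "order")
```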

It really is that simple. 

We go from this:

The catalog describes the service infrastructure (bad)

to this

The catalog is the service infrastructure. (good)

And in this world, the catalog is informative, but not required.

Enterprise IT Integration and Data Security in the Canonical Data Model?

June 25th, 2007 | Enterprise Architecture

One thing that I do is spend a lot of time staring at a single problem: how to make a large number of systems “speak” to one another without creating piles of spaghetti-links and buckets of operational complexity.

 So this past week, I’ve been thinking about security in the integration layer.

In Microsoft, we have a lot of competing business interests.  One company may be a Microsoft Partner in one channel, a customer in another, and a competitor in a third.  (IBM is a perfect example, as is Hewlett-Packard.  We love these guys.  Honest.  We also compete against them.)  To add to the fun, in the vast majority of cases, our Partners compete with each other, and we need to absolutely, positively, with no errors, maintain the confidentiality and trust that our partners have in us.  In order to protect the ‘wall’ between Microsoft and our partners, and between our partners and each other, in competitive spaces, while still allowing open communication in other spaces, we have some pretty complicated access rules that apply not only to customer access, but also to how the account managers in Microsoft, who work on customers’ behalf, can access internal data.  For example, an account manager assigned to work with Dell as an OEM (a Microsoft employee) cannot see the products that Hewlett-Packard has licensed for their OEM division, because he or she may accidentally expose sensitive business information between these fierce competitors.

In this space, we’ve developed a (patented) security model based on the execution of rules at the point of data access (Expression-Based Access Control, or EBAC).  This allows us to configure some fairly complicated rules to define what kind of data a customer may directly access (or an employee may access on behalf of their customers).  So I’m looking at the EBAC components as well as more traditional Role-based Access Control (RBAC) and thinking about integration.
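As a generic illustration of the idea (a sketch, not the patented implementation), an EBAC-style rule is just an expression evaluated against the requestor at the point of data access.  All names here are invented:

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class Requestor:
    user_id: str
    assigned_account_ids: Set[str]

def can_view_oem_licenses(requestor: Requestor, account_id: str) -> bool:
    # The 'expression': an account manager sees OEM license data
    # only for the accounts he or she is actually assigned to.
    return account_id in requestor.assigned_account_ids

def filter_rows(rows, requestor: Requestor):
    # rows are (account_id, data) pairs in this sketch
    return [r for r in rows if can_view_oem_licenses(requestor, r[0])]
```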

 What right does any application have to see a particular data element?

This gets sticky. 

I can basically see two models. 

Model 1: The automated components all trust one another to filter access at the service boundary, allowing them to share data amongst themselves freely. 

Model 2: Every request through the system has to be traced to a credential, and the data returned in a call depends heavily on the identity of the person instigating the request.

Model 1 is usually considered less secure than model 2.

I disagree. 

I believe that we need a simple and consistent infrastructure for sharing automated data, and that we should move all “restriction” to the edge, where the users live.  This allows the internal systems to have consistently filled, and consistently correct, data elements, regardless of the person who triggered a process.

In real life, we don’t restrict data access to the person who initiated a request.  So why do it when we automate the real life processes?  For example, if I go to the bank and ask them to look into a questionable charge on my credit card, there is no doubt that the instigator of the request is me.  However, I do not have access to the financial systems.  A person, acting on my behalf, may begin an inquiry.  That person will have more access than I have.  If they run into a discrepancy, they may forward the request to their manager, or an investigator, who has totally different access rights.  If they find identity theft, they may decide to investigate the similarity between this transaction and a transaction on another account, requiring another set of access rights.

Clearly, restricting this long-running process to the credentials of the person who initiated it would hobble the process. 

So in a SOA infrastructure, what security level should an application have?

Well, I’d say, it depends on how much you trust that application.  Not on how much you trust the people who use it.  Therefore, applications have to be granted a level of trust and have to earn that level somehow.  Perhaps it is through code reviews?  Perhaps through security hardening processes or network provisioning?  Regardless, the point is that the application, itself, is an actor.  It needs its own level of security and access, based on its needs, separate from the people it is acting on behalf of.

And how do you manage that?  Do you assign an application access to a specific database?  Microsoft IT has thousands of databases, and thousands of applications.  The Cartesian product alone is enough to make your head spin.  Who wants to maintain a list of millions of data items?  Not me.

No, I’d say that you grant access for an application against a Data Subject Area.  A Data Subject Area is an abstraction.  It is the notion of the data as an entity that exists “anywhere” in the enterprise in a generic sense.  For example: A data subject area may be “invoice” and it covers all the systems that create or manage invoices.  This is most clear in the Canonical Data Model, where the invoice entity only appears once.
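A sketch of what such grants might look like, with invented application and subject area names.  The win is that the list stays short and reviewable, because it grows with subject areas rather than with databases:

```python
# Application -> data subject area -> access level.
GRANTS = {
    "billing-app":   {"invoice": "read-write", "order": "read"},
    "reporting-app": {"invoice": "read",       "order": "read"},
}

def app_can(app: str, subject_area: str, action: str) -> bool:
    level = GRANTS.get(app, {}).get(subject_area)
    if level == "read-write":
        return True                      # full access to the subject area
    return level == "read" and action == "read"

assert app_can("reporting-app", "invoice", "read")
assert not app_can("reporting-app", "invoice", "write")
```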

Since applications should only integrate and share information using the entities of the canonical data model, would it not, therefore, make sense to align security access to the canonical data elements as well?

I’ll continue to think on this, but this is the direction I’m heading with respect to “data security in the cloud.”

Your feedback is welcome and encouraged.

Simple Lifecycle Agility Maturity Model

June 23rd, 2007 | Enterprise Architecture

How agile are you?  Can you measure your agility?

My discussions over the past week, about who is and who isn’t agile, started me wondering: if you want to improve your agility, you need to be able to measure it.  This idea is simple and repeatable.  It is used in most "continuous improvement" processes. 

I created a simple model for measuring the agility of a software development process.  I call it the Simple Lifecycle Agility Maturity Model (SLAMM).  It is a single Excel spreadsheet (Office 97-2003 compatible, virus free), complete with instructions, measurements, and a chart you can use or share.  You can find it here.

Using this model, the team follows a simple process:

  1. Write a simple story that describes the process you followed.  Examples are included in the spreadsheet.
  2. Rate your process on 12 criteria based on the Agile Alliance principles.
  3. Enter weights and view the results (a minimal scoring sketch follows this list).
  4. Create a list of steps to address deficiencies.  Follow the normal agile process to estimate these steps and add them to the backlog.
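Here is the scoring sketch referenced above.  The scores and weights are placeholders only; the real criteria and weights live in the spreadsheet:

```python
# Hypothetical inputs: 12 criteria scored 0-5 against the Agile Alliance
# principles, each with a weight.  These numbers are placeholders.
scores  = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5, 4, 3]
weights = [1.0, 1.0, 0.5, 1.5, 1.0, 1.0, 0.5, 1.0, 1.5, 1.0, 0.5, 0.5]

# Weighted average, normalized so a perfect process scores 100%.
agility = sum(s * w for s, w in zip(scores, weights)) / (5 * sum(weights))
print(f"Agility: {agility:.0%}")  # a number you can track over time
```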

I’d like to share this model with the community.  Please take a look.  If you like it, use it.  Completely open source.

The weights came from careful reading of the principles on the Agile Alliance site (with a dash of my own experience).  I invite the community to discuss the weights and create a consensus to change them if you’d like.  Note that the biggest benefit of models like this is the ability to compare the agility of processes in DIFFERENT COMPANIES or organizations, so we need to stick to a single set of weights in order to have a standard for comparison.

I hope this is the positive outcome of the blog flurry of late. 

[3-30-2009: Link to SLAMM spreadsheet updated after CodePlex dropped the SLAMM project.]

Mort and the Economics of Unmaintainable Code

June 19th, 2007 | Enterprise Architecture

I’ve been called a lot of things in the past few days since I had a public disagreement with many folks over the definition of Mort.  On the surface, it looks like I’m a pretty “out of touch” guy when it comes to the ‘common vernacular.’ Granted, but looks can be deceiving.  There’s more here.  Please bear with me.

First off, I want to publicly apologize to Sam Gentile.  I meant no offense when I asked if he wanted MS to develop BDUF tools.  It was an obviously absurd question (OK, obvious to me, not obvious to Sam).  I sometimes ask obviously absurd questions when I want to point out that the “logical conclusion” of a line of thinking will take someone where they don’t intend to go.  That didn’t work on the blog.  My bad.

Add to that another mistake: My reading of Sam’s message to me was probably incorrect.  In a response I made to a post on Microsoft and Ruby, Sam said:

I don’t think you are getting that point. MSFT is making tools for Morts (the priority) at the expense of every other user (especially Enterprise Developers and Architects). They have nothing for TDD. And I would further contend that making these tools “dumbed down” has significantly contributed to why Morts are Morts in the first place and why they are kept there. Think VB6 and the disatrous code that was unleashed everywhere. If Microsoft took some responsibilty and created tools that advocate good software design principles and supported them then things would be different. You and Peter (which is a friend of mine) are covering the corporate butt. It’s a cop out.

Does it look like Sam is saying “Mort is dumb” or “Mort is bad”?  I thought so at the time.  Perhaps that was not right.  I carried my misreading a bit further.  I read Sam’s message to mean, “people who think like Mort thinks are errant.”  In hindsight, I believe that Sam meant to say “people who work like Mort works are errant.”  The difference is subtle, but the result is profound.  Implication: Mort is not a bad person or a stupid person, but the code that Mort produces is not maintainable, and that is bad for all of us.  (I hope I got that right this time.)  Sam cares about maintainable code.

To rephrase, the problem is not Mort.  The problem is the unmaintainable code he produces.

So my apologies to Sam. 

But was I insane in my conclusions, as I’ve been accused?  Did I redefine reality?  No.

First, let me put up a quadrant. 

[Figure: Agile Quadrant]

I got the axis values directly from the agile manifesto, so it shouldn’t be a surprise to anyone in the agile community. 

Take a look for a minute and see if there’s anything you disagree with.  One thing that may be a bit odd is the entry called “agile tool users without ceremony.”  This is for the folks who use agile tools like NUnit and CruiseControl to do development, but don’t follow any of the other elements of agile development (like rapid cycles, time-box development, FDD, XP, Scrum, etc).  I don’t know how prevalent these folks are, but I’ve certainly met a few.

Regardless, look at the values expressed in the Agile Manifesto.  Someone who cares more about “meeting the needs of a user” than they do “following a process” would move up the Y axis.  Someone who cares more about “working software” than they do “comprehensive documentation” would move along the X axis.

OK… reader… where is Mort?

Think about it.

Mort doesn’t follow lots of process.  He writes code for one-off applications.  Why?  Because that is why he was hired, and that is what he is paid to do.  He does exactly what his company pays him to do.  Does he write a lot of documentation?  No.  So given those two variables, which quadrant does he fall into?

The upper right.  The one marked “agile.”

If you wonder why a lot of development managers are unsure of the agile community, it is because this comparison is not lost on them.  Any person who doesn’t care for process and who doesn’t want to write a lot of documentation can fit in that upper quadrant.  Agile folks are there.

So is Mort.

I can hear the chorus of criticism: “That doesn’t make Mort agile!”  Hugo is Agile.  Mort is not!

I’m not done.

Mort is certainly a problem, because in our world, unmaintainable software is a pain to work with.  Some folks have decided not to hate Mort but to educate him (which is good).  The subtle goal here is to move Mort’s skillset and mindset: help him to value maintainable code.

If we do this, and we help Mort grow, will he keep his current job?  Probably not.  He was hired into his current job because he was a Mort.  He was hired because his company values quick fix apps.  Once our intrepid student no longer values unmaintainable code, he will no longer fit in his current position.  He will find another job.  So what will the company do with the open position?  They will hire someone else and TRAIN THEM TO BECOME ANOTHER MORT.

Remember, we don’t hate Mort.  We have a hard time with his code.  We want to eradicate his code.  But the code is still being developed… by a new Mort.

There is an infinite supply of new Morts.  Therefore, the solution of “educate Mort” doesn’t work to solve the problem of unmaintainable code.  The solution doesn’t address the underlying reasons why Mort exists or why his code is bad.  You cannot fight economics with education.  You have to fight economics with economics.

Let’s look at the economics of unmaintainable code, and think about Mort a little more.

Code is unmaintainable because its complexity exceeds the ability of a developer to maintain it.  Would you agree that is a good definition of ‘unmaintainable code?’

Rather than look at “making code maintainable,” what if we look at making code free?  Why do we need to maintain code?  Because code is expensive to write.  Therefore, it is currently cheaper to fix it than rewrite it.  On the other hand, what if code were cheap, or free?  What if it were cheaper to write it than maintain it?  Then we would never maintain it.  We’d write it from scratch every time.

Sure, we can choose to write maintainable code.  We can use practices like patterns, object oriented development, and careful design principles.  On the other hand, we can give Mort an environment where he can’t hurt himself… where his code is always small because only small amounts of code are needed to get the job done. 

This is the useful thought here: if you cannot make sure that Mort will write maintainable code, make him write less code.  Then, when it comes time for you (not Mort) to maintain it (he can’t), you don’t.  You write it again.

And that is fighting Mort with economics.  Soon, Mort’s skill set doesn’t matter.  He is writing small amounts of unmaintainable code, and we really won’t care.  Someone ‘smart’ has written the ‘hard’ stuff for Mort, and made it available as cross cutting concerns and framework code that he doesn’t have to spend any time worrying about.  Mort’s code is completely discardable.  It’s essentially free.

Hugo cares about quality code.  Mort does not.  In the fantasy world of free code, what value does Hugo bring, and where does Mort fit?  Does Mort put process first or people first?  He puts people first, of course.  He writes the code that a customer wants and gets it to the customer right away.  The customer changes the requirements and Mort responds.  If it sounds like a quick iteration, that is because it is.  This process is fundamentally agile.

Yep. I said it.  In situations where maintainability doesn’t matter, Mort is agile.  His values are agile.  He is paid to be agile.  He delivers value quickly, with large amounts of interaction with the customer, not a lot of process, and not a lot of documentation.  According to the Agile Manifesto, in a specific situation, Mort is agile.  He is also dangerous. 

So we constrain him.  As long as Mort can’t hurt himself and others, we are protected from him. 

Of course, we can give Mort smarter tools.  But that goes back to the argument that Mort is the problem.  Mort is not the problem.  His employer is.  We train Mort.  He becomes a quality programmer.  He leaves. The company hires another Mort.

So what about those Morts that we cannot train?  Every time we try to shove great tools at “untrainable Mort”, we don’t get “smarter Mort.”  The tools get used by other people, but Mort ignores them.  We get faster and better code written by the people who care about faster and better code.  Mort doesn’t care.  He is not paid to care.  He is paid to write code quickly, solve a quick problem, and go on.  His code is not maintainable, and THAT IS OK, because he can write small amounts of code (or no code) and still deliver value.

So how do we pull this off?  How do we allow Mort to write small amounts of code so that we don’t care?

We’ve been trying to solve this problem for a decade or so.  We tried creating an easy drag-and-drop environment, but it didn’t protect us from Mort.  We tried creating controls that do all the hard stuff, but it didn’t protect us from Mort. 

Now, SOA and the Web 2.0 space have opened up a whole new world for Mort to play in.  Generation Next is here, and finally we may be a bit closer to an answer.

Possible Answer: We can have Mort consume a service.  He can’t change it.  He can’t screw it up.  But he can still deliver value, because often 60% of the business value is in supporting individual steps in a business process.  Those steps are carefully controlled by the business, but honestly, are not that hard to put together.  It’s a matter of “step one comes first, step two comes next.”  As long as the details of the interaction are hard to screw up, we are protected from Mort. 
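A sketch of what that looks like from Mort’s side of the wall, with a hypothetical endpoint.  The ‘hard stuff’ lives behind the service; Mort just sequences the steps:

```python
import json
import urllib.request

BASE = "https://services.example.com"    # hypothetical service endpoint

def lookup_customer(customer_id: str) -> dict:
    with urllib.request.urlopen(f"{BASE}/customers/{customer_id}") as resp:
        return json.load(resp)

def place_order(customer_id: str, sku: str, qty: int) -> None:
    customer = lookup_customer(customer_id)                   # step one comes first
    print(f"Ordering {qty} x {sku} for {customer['name']}")   # step two comes next
```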

Here’s the cool thing: Microsoft didn’t invent this iteration.  This little bit of “Mort defense” came from the integration space (think ESB) combined with great thinking from the web community.  This approach is not something we thought of, but it works to the same ends.  This new way is based on SOA principles and REST protocols (what some are calling WOA or Web Oriented Architecture). 

Web 2.0 and Mashups are the new agility.  Write little or no code… deliver value right away.

And in this space, Mort is agile.  Heck, we even like him.

And in case you are wondering why I don’t hate Mort… this is the space I live in.

Tools for Mort

June 15th, 2007 | Enterprise Architecture

For those of you not familiar with the term “Mort,” it comes from a user profile used by the DevDiv team.  This team has created imaginary “people” that represent key market segments.  They have talents, and goals, and career paths.  And the one that developers love to bash is poor Mort.

I like Mort.  I have hired many folks like Mort.  Mort is a guy who doesn’t love technology… he loves solving problems.  Mort works in a small to medium sized company, as a guy who uses the tools at hand to solve problems.  If the business needs some data managed, he whips up an Access database with a few reports and hands it to the three users who need the data.  He can write Excel macros, and he’s probably found late on Friday afternoons updating the company’s one web site using FrontPage.

Mort is a practical guy.  He doesn’t believe in really heavy processes.  He gets a request, and he does the work. 

One fellow who I know, and who worked with me on a project here in Microsoft (and who reads this blog), once told me that he considers himself “a Mort” and was quite proud of it.  He introduced me to CruiseControl.  I already knew NAnt and NUnit, but he showed me how CruiseControl adds continuous automated testing to the mix.  I loved it.  You see, my friend was a programmer, but he was also Mort.  He believed in getting things done.  He was no alpha geek, to use Martin Fowler’s terminology.  (Yes, I know that Martin didn’t originate the term; O’Reilly did at MacWorld many years ago.  But he repeated it, and he’s a pretty important guy.)

My friend had taken a typical “Mort” point of view.  He is a really good programmer.  He could write fairly decent C# code and his code worked well.  But what made this guy “better than your average bear” was the practical point of view that he took to his work.  He believed like I believe: technology is a tool.  It is like any tool.  You can use it too little or too much, but the point is to use it well.

The world needs MORE people like Mort.  In fact, with the movement towards mashups and web 2.0, the world will soon be taken over by people like Mort.  And I couldn’t be happier.  Put architects and BDUF designers out of work!  That would really shake things up.

Most of my open source friends either are a Mort or know a few, because the underlying philosophy of Mort is a big part of the open source and agile movements: People come before Process.  Solving the Problem comes before negotiating the contract.

So when I got this reply to one of my posts from Sam Gentile, I have to admit that I was really confused.  He starts by quoting me and then provides his feedback.

>In this response to Martin, Peter argues eloquently for including tools in the toolset that support ALL developers, not just Martin’s “alpha geeks.”  

I don’t think you are getting that point. MSFT is making tools for Morts (the priority) at the expense of every other user (especially Enterprise Developers and Architects). They have nothing for TDD. And I would further contend that making these tools “dumbed down” has significantly contributed to why Morts are Morts in the first place and why they are kept there. Think VB6 and the disatrous code that was unleashed everywhere. If Microsoft took some responsibilty and created tools that advocate good software design principles and supported them then things would be different.

Wow, Sam.  I didn’t know you had so much animosity for the Agile community!  Are you sure that’s what you intended to say? 

Do you really mean that Microsoft should make a priority of serving top-down project managers who believe in BDUF by including big modeling tools in Visual Studio, because the MDD people are more uber-geeky than most of us will ever be?  I hate to point this out, Sam, but Alpha Geeks are not the ones using TDD.  It’s the Morts of the programming world.  Alpha geeks are using Domain Specific Languages.

I think you are wrong about Visual Basic.  As a trend, VB brought millions of people out of the realm of Business BASIC and slowly, over the course of seven versions, to the world of object oriented development.  Microsoft, whether on purpose or not, single-handedly drove millions of people into the arms of C# and Java. 

Microsoft cares passionately about good design principles.  So does IBM, and so does Sun.  Each of these companies (and others) has one or more dedicated groups of people publishing research for free on the web, supporting university research programs, and funding pilot efforts in partner companies, with the expressed goal of furthering Computer Science.  Do NOT underestimate the value that Microsoft Research and our humble Patterns and Practices group have provided to the Microsoft platform community.  You do yourself a disservice by making any statement that reeks of that.

VB.Net is object oriented.  It is not compatible with VB6.  Microsoft took a LOT of community heat for that… much more so than the current TD.Net dust-up.  If that isn’t taking responsibility, I don’t know what is.  We alienated millions of developers by telling them “move up or else.”  We are the singlehanded leaders in the space of bringing BASIC developers up to modern computing languages.  NO ONE ELSE HAS DONE THAT.  I dare you to find someone else that has forcibly moved millions of people to object oriented development.

Lastly, where do Microsoft’s own add-ons, in the paid version of Visual Studio, actively PREVENT any of the agile tools from working?  Give me a break!  If an agile tool does an excellent job, and it is free, what motivation does MS have for spending real money to add that capability to the product when it cannot possibly increase the value of the paid Visual Studio product for the people who are already using the free open source tool?

We didn’t add MSTest for the users of NAnt and NUnit.  We added it for the poor folks who wanted to use those tools but whose corporate idea-killers wouldn’t let them.  To be honest with you, we made mistakes.  Our first attempts at those tools are flawed, but they are an improvement over the previous version, and I’ve been told that there is further interest in improving them.

Just like Toyota’s first Corolla was a boxy, ugly car that bears no resemblance to today’s sleek little sedan, steady improvement is a mindset.  It takes time to understand.  Toyota is winning, and the reason is the same as for Microsoft.  Both companies share a relentlessness of improvement.  Slow, perhaps.  Steady?  Not always.  Customer-driven?  Yes.  We don’t win by being monopolistic.  We win by being persistent.

No one is asking you to stop using the free tools you are using, ever.  If those tools continue to improve faster than Microsoft’s attempts (something I expect) then you have won the “uber-geek” for the platform, not us.  Thank you.  Keep it up. 

 

Microsoft ESB as a toolkit

June 14th, 2007 | Enterprise Architecture

Sorry it took me a while to notice, but Microsoft released the first CTP of the ESB Guidance toolkit on CodePlex in May.  If you are interested in Enterprise Service Buses, or message brokers in general, I recommend this link.

http://www.codeplex.com/esb

I’ve downloaded it and will start looking to see if the connected services team has finally delivered an ESB for Microsoft customers.

Martin Fowler wants to see Ruby on Microsoft to save the alpha geek

June 12th, 2007 | Enterprise Architecture

I like Martin Fowler.  As a veritable lighthouse of the patterns and agile communities, he’s both a resource and a calm steady voice for change in an industry that cannot succeed without change.

So, when he posted his recent entry on “Ruby and Microsoft” I was eager to take a look.  He cites a general willingness of the Ruby community to work with Microsoft and I’m glad of that.  He also points out, and rightly so, that Microsoft has some pretty strict rules designed to prevent open source code from creeping into the product code base, rules that get in the way of open source collaboration.  That’s what happens when the company is sued repeatedly for two decades by our competitors and government agencies. 

Just as IBM suffered under long running, financially and politically motivated, anti-trust suits, which knocked them down a step and opened up the computer hardware market, Microsoft has been similarly affected.  Hopefully, we are making the turn quicker than our friends in big blue did, largely by observing their example.  They did turn the corner, and IBM makes money.  We will turn the corner, and we make money too.  I’m sure of that.  But the lawsuits matter.  They really do.

That said, I have to say that I disagree with Martin about many of the aspects he hit upon.  I refer readers to this excellent post from Peter Laudati.

http://blogs.gotdotnet.com/peterlau/archive/2007/06/11/shaking-out-the-innovation.aspx

In this response to Martin, Peter argues eloquently for including tools in the toolset that support ALL developers, not just Martin’s “alpha geeks.”   I agree with Peter.  The MS Platform should encourage all developers to succeed.  I also resent the term “alpha geek.”  Truly awful. 

I would add that Microsoft should NOT deliver open source tools built in to the Visual Studio platform, because we cannot possibly support those tools.  If the community develops a tool, they should support it.  I have no problem linking to the alt.net stuff and encouraging folks to use it. 

I think it would be great if a group of Open Source developers would create an all-up “add-on” install that contains all their favorite tools like NAnt, NHibernate, NUnit, Spring.Net, etc in a single package, complete with documentation and samples, that allows folks to easily add the alt.net tools to their setup in one jump. 

Mr. Fowler is being unfair to suggest that MS treats open source differently than “technology companies” like IBM, Sun, and Apple.  We aren’t wildly supporting open source.  We don’t oppose open source either (not anymore).  The vast majority of software companies are “friendly but not too friendly” with open source.  (There are tens of thousands of software companies.  Martin doesn’t name a single serious software company on the open source side.)

It’s not the entire industry on one side with Microsoft on the other.  It’s a segment of the industry that supports open source and makes its money on hardware and/or services versus the segment of companies that make their money selling software licenses.  That latter group pretty much ignores open source (or releases bits into open source when we don’t want to support them ourselves).  Microsoft happens to be in the latter camp, and we are a big player… but we are far from unusual.  (Note: I include OSS vendors like Red Hat as services companies because, face it, you aren’t paying for the operating system… you are paying for the support, and support is a service.)

Oh, and I remember when the uber-geeks of yesterday went to PowerBuilder (and declared the death of VB), and then to Delphi (and again declared the death of VB), and then to EJB (and declared the death of everything).  Nothing happened.  Those platforms are not serious threats.  The uber-geeks don’t have a great track record for picking winners.  I’m not worried.