
Top 100 patterns: Building from a nascent catalog

By | February 20th, 2007 | Enterprise Architecture

My respect for Grady Booch has grown over the past few days, as I’ve investigated the idea of building a list of the 100 patterns that every developer should know.  As I’ve surfed around looking at pattern-oriented sites, including those in MS-land, I’ve come to recognize and appreciate the amazing amount of work that Grady Booch has put into his ‘software architecture handbook’ site.

He even has a hierarchy of systems by type, a better list than the one on Wikipedia.  I will work on the Wikipedia pages to clean them up.  They needed a reference source anyway.

For those of you who haven’t taken a look at his site yet, visit www.booch.com/architecture and sign up (free). It’s worth a look.  It is still a work in progress, and to be fair, I believe that he is hindering his own progress by using a traditional site instead of a wiki, but it is a fairly good start on the herculean tasks he has set out for himself.

I am not so patient.  When I consider that there are clearly over 2000 published patterns, many overlapping in intent, I realize that the work of selecting 100 core patterns will be really tough without much of the basic cataloging already done (as Grady Booch has started to do). 

As most scientists recognize, you cannot effectively analyze a body of knowledge until that knowledge has been cataloged.  Naturalists of the 19th century discovered this and spent countless hours of labor before the major breakthroughs in understanding were reached.

(Yes, I’ve been to the Portland Patterns Repository.  With all due respect to WardC, the PPR is a self-organized site created by intensely creative people, and as such, there’s not a lot of cataloging going on.  As a result, a good navigation of patterns, or even a good comprehensive list, is not readily available there. Booch’s work, while still in its infancy, is already more comprehensive and better organized than the self-organized PPR.  That doesn’t mean that the PPR should be dismissed.  It just doesn’t solve the initial problem.)

Alas, I cannot do this by myself.  I don’t believe my 25 years in software to be sufficient, by itself, to this task.

In academia, when you need help, you seek out a graduate student eager to pick a thesis topic, and you convince him or her that your research idea is worth his or her while.  I’m not sure the same tactic would work in this space, but I’m mulling it over. 

Top 100 patterns: how to pick the list

By | February 19th, 2007 | Enterprise Architecture

In my previous posts, I suggested that we should create a list of the top 100 patterns that every developer should know.  Of course, the challenge is this: how to pick the list?  There are thousands!

Let me start with criteria and see if we can approach it logically (alas, logic is the developer’s greatest weapon and most bothersome weakness).

So, the question I want to ask this group is twofold:

  1. What are the criteria for considering a pattern as part of the set of “the most important patterns so far”?
  2. What patterns do you believe should be members of this set?

The goal here is communication.  We are building a common “base of concepts”, so for the sake of this set, we want patterns that explain or demonstrate fundamental ideas, or from which other patterns are composed, so that we can communicate amongst ourselves.  Therefore, we may elect to include a pattern in the set that we don’t expect to use very often, simply because it does a fine job of explaining a key concept or is useful for understanding other patterns.
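To illustrate what I mean by a fundamental idea, consider Observer: it earns a place in such a set less because we hand-roll it every day than because it names a concept that countless other patterns and frameworks build upon.  A minimal sketch in Java (the class names are mine, purely for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Observer: subscribers register interest, and the subject notifies
// them of changes without knowing anything about them beyond the
// interface.  The idea underlies eventing, MVC, pub/sub, and more.
interface StockObserver {
    void priceChanged(String symbol, double newPrice);
}

class StockTicker {
    private final List<StockObserver> observers = new ArrayList<>();

    void subscribe(StockObserver observer) {
        observers.add(observer);
    }

    void setPrice(String symbol, double price) {
        for (StockObserver observer : observers) {
            observer.priceChanged(symbol, price);  // subject stays decoupled
        }
    }
}
```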

Some ground rules:

  1. No patterns that describe the behavior of humans reacting to one another, collecting requirements, or performing the act of writing, deploying or maintaining software.   These patterns belong to the areas of organizational psychology and process management.  For the sake of this list: the pattern must be implementable in code.
     
  2. No algorithms, even if the algorithm has been documented as a ‘pattern’ in a peer-reviewed work.  This list needs to differentiate between patterns that establish, demonstrate, or communicate fundamental principles and ones that implement specific algorithms.
     
    For example, I do not believe that ‘circular buffer’ is a good candidate.  It is an algorithm (and a good one), and perhaps a useful pattern in its own right, but I don’t believe that it is a pattern that belongs in this list.  (I’ve included a quick sketch of one after these ground rules, to make the distinction concrete.)  Of course, we are going to have to have a debate on the patternity test at some point.  We may also need to decide that we will keep a separate list of ‘algorithms that every developer should know.’  (Then again, I’m inclined to leave that work to Dr. Knuth.)
     
  3. No business rules or best practices, even if the business rule has been documented as a ‘pattern’ in a peer-reviewed work.  A pattern that simply describes a ‘common business rule’ that applications in a particular space should implement may not be fundamentally useful for creating a basis of communication among developers.
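Here is that circular buffer, roughly sketched in Java, to make the distinction concrete.  The class is hypothetical and deliberately minimal; notice that the value lies entirely in the index arithmetic, not in any vocabulary of design that two developers could use to discuss a system:

```java
// A circular (ring) buffer: a fixed-size array whose read and write
// positions wrap around.  Useful and elegant, but it is an algorithmic
// trick, not a unit of design language.
class CircularBuffer {
    private final int[] items;
    private int head = 0;   // next slot to read
    private int tail = 0;   // next slot to write
    private int count = 0;

    CircularBuffer(int capacity) {
        items = new int[capacity];
    }

    boolean offer(int value) {
        if (count == items.length) return false;   // full; caller decides
        items[tail] = value;
        tail = (tail + 1) % items.length;          // wrap around
        count++;
        return true;
    }

    Integer poll() {
        if (count == 0) return null;               // empty
        int value = items[head];
        head = (head + 1) % items.length;          // wrap around
        count--;
        return value;
    }
}
```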

I’m thinking the set could break down this way:

25 low level design patterns
15 mid-level application architecture patterns
20 high-level system integration patterns
10 patterns related to performance and real-time systems
10 patterns related to data management, persistence, and retrieval
10 patterns related to system components that behave in a particular manner
10 misc patterns useful for describing key concepts

Opinions?  (The weakest part of this post is the breakdown above.  I don’t think I got that part right.  Your feedback would be great).

Why create a list of the top 100 patterns?

By | February 13th, 2007 | Enterprise Architecture

I posted a blog entry about creating a list of the top 100 patterns that every developer should know, and got a veritable slew of responses.  In this post, I will try to do a better job of explaining WHY we need a top 100.

Concern 1: Is this a grammar or jargon? 

It is neither.  With all due respect, the word ‘jargon’ carries a pejorative tone that does not reflect my intent. 

Dictionary.com defines Jargon as:

1. the language, esp. the vocabulary, peculiar to a particular trade, profession, or group: medical jargon.
2. unintelligible or meaningless talk or writing; gibberish.
3. any talk or writing that one does not understand.
4. pidgin.
5. language that is characterized by uncommon or pretentious vocabulary and convoluted syntax and is often vague in meaning.

Of the five definitions, four are clearly negative in tone and intent.  The first definition, while valid, is not reflective of my intent when taken in context with the other four.

On the other hand, this is not grammar either.  The same source defines grammar as:

1. the study of the way the sentences of a language are constructed; morphology and syntax.
2. these features or constructions themselves: English grammar.
3. an account of these features; a set of rules accounting for these constructions: a grammar of English.

I am not talking about a set of the rules of discussion.  I am simply talking about creating a taxonomy of patterns that forms a basis for discussion between professionals.  It is, in effect, a jargon, but without the sense that it should be unintelligible, meaningless, or difficult to understand.

Concern 2: Different people need different patterns

I am amazed at the efforts of Grady Booch and his co-authors on his Handbook of Software Architecture.  There are over 2000 patterns there.  However, in the works he cites as inspiration (my favorite being Elements of Style by Strunk and White), there are far fewer concepts cataloged.  Is that because the software profession is that much broader than the English language?  No. 

It is because the authors whittled down the list to a core set.  A key set.  A set of elements that everyone should know. 

My computer science degree required me to learn a great many things that I have not directly used since. Why?  Because being able to speak to someone whose job is different from mine, but related, is a key attribute of being literate.  It allows developers to be mobile, moving from sector to sector.  I started out writing retail software for the hospital market.  I jumped to firmware.  Then worked on the guts of an operating system.  The patterns were different each time.  But the ability to be literate was key.

A core set of 100 patterns goes a long way towards that literacy.  The number doesn’t have to end up at 100, but it should be close.  Too many, and folks are overloaded before they can learn the basics of other concerns.  Too few, and we are all literate only in our shared areas, which doesn’t foster mobility of knowledge.

Concern 3: We aren’t ready for that in this industry

Nonsense.  You become ready by forging a path for those folks who want to be ready.  That’s step 1.  Early adopters will follow by themselves.  College campuses can be convinced.  The rest of us have to be marketed to and sold to.  We can never reach critical mass until we try.

Concern 4: Do only the principles; doing the patterns is a waste of time.

The principles are well known.  That clearly isn’t enough.  The practices need to be shared too.  Not everyone in the world is a budding Grady Booch.  Some coders are just coders.  They like writing code.  They think in code.  I am still a little bit like that.  It’s where I came from.  I didn’t mine patterns or submit papers to a PLoP conference.  I would not have known how, but I have benefited from the patterns movement and literature. 

Robert Martin shared a great set of principles over a decade ago.  They are still linked on his site. 

Conclusion: we need the top 100 patterns.

Using a top 100, we force the conversation.  What are the elements we should all know about EACH OTHER’S work? What are the key learnings that we should be able to share and discuss?  What elements, if shared, have the greatest potential for impacting other areas?  What principles, when expressed in code, change the mind and the thinking in the most elemental way?

The rest is gravy.

What are the top 100 patterns that every developer must know?

By | February 12th, 2007 | Enterprise Architecture

It has been more than a decade since the seminal publication of the Gang-of-Four Design Patterns book.  Since then, at least a dozen major works have followed, each with its own viewpoint, its own set of repeatable problems.  Some took on Architecture, others Real Time Systems, and so on. 

Now, the goal of the Patterns work was not to create a bunch of noise that dies down.  It was to change the way we communicate with one another.  The language that we use. The forces that we consider.  The patterns themselves, even when not fully understood, can lead us to what Christopher Alexander called “the quality.”  A design that is beautiful, elegant, balanced, and completely in context with its environment.

We have the words.  We have too many of them.  Now, it’s time to make the language.

Will you help me? I’d like to start to create a consensus list.  I’d like to answer the question: what are the top 100 patterns and idioms that every developer should learn by the end of their second year of professional practice? 

Is 100 the right number?  Only by attempting to create the language will we actually know what the right number is.  Scientists in other disciplines have settled on common terms for their field.

It is our turn.

If someone has a suggestion for the best Wiki site to do this on, please reply. 

(Please note: I followed this post with a follow-up post that attempts to answer some of the questions from the responses below.  See here.)

The minimum amount of architecture needed for Test Driven Design

By | February 9th, 2007 | Enterprise Architecture

My good friend Malcolm posted a response to my IFaP article and asked, in essence, “what is the minimum amount of architecture needed for a system built with Test Driven Design?”

I had to stare at that.  For me, TDD meant Test Driven Development, not Test Driven Design.  Search.Live.Com brought up Scott Ambler’s site where he clearly states his formula for Test Driven Design:

Test Driven Design = Test First Development (TFD) + Refactoring

Hmmm.  I was trained by one of the original authors of JUnit, and that training clearly called the technique Test Driven Development, so it appears to me (perhaps unfairly) that Scott is redefining an acronym to mean more than it originally meant.  He is not the first to do this, of course. 

I truly dislike this aspect of our industry.  If an idea catches on, it becomes an acronym.  Then if someone has another idea they want to glue to the side of the first, they just redefine the acronym and say “it’s been this way all along.”  No.  It wasn’t.

Test Driven Development is a fairly low level coding activity.  Don’t count me as a critic.  I really believe that a combination of Pair Programming and Test Driven Development can really make for writing good code fairly quickly.  However, neither handles design.  Design has to be done first.
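For anyone who hasn’t watched that rhythm up close, here is a minimal test-first fragment using JUnit 4.  The Money class is a hypothetical example of mine, not drawn from any of the works mentioned above; the test is written first, fails, and then the simplest code is written to make it pass:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class MoneyTest {
    // Written first; it fails (it won't even compile) until Money exists.
    @Test
    public void addingTwoAmountsSumsTheCents() {
        Money five = new Money(500);
        Money three = new Money(300);
        assertEquals(800, five.add(three).cents());
    }
}

// The simplest code that makes the test pass; refactoring comes after.
class Money {
    private final int cents;
    Money(int cents) { this.cents = cents; }
    Money add(Money other) { return new Money(cents + other.cents); }
    int cents() { return cents; }
}
```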

Mr. Ambler does have an answer to “when design happens,” called Agile Model Driven Development.  Unfortunately, it’s a waterfall process, so it’s hardly agile.  According to his site, there is a set of iterations on the models, just enough to get them ‘good enough’ and then you start coding using TDD.  Note that you never go back to the models.  If they were wrong, you fix it with refactoring.  You cannot iterate on the design itself.  No iterations based on feedback.  No incremental improvement.  It’s not agile.

I’m not calling this incorrect.  I’m calling this “not-agile.”  I don’t believe it is incorrect.  Perhaps our tools are just not ready for this kind of post-coding refactoring of the model.

I do believe that a project requires design up front, but it doesn’t have to be big design.  I don’t believe that you have to create every little detail in the diagrams, or even define much more than the interface and responsibilities for each independent component.  However, you DO need to define the following key elements right up front. 

This is the minimum agile architecture.

For each independent component:

  • In one or two sentences, describe the responsibility of the component.  Add one more sentence to clarify, if necessary, what the component will NOT do. 
  • Describe every endpoint that your system calls, both inside the system and external to it.  For each endpoint, describe the Protocol used to communicate.  Describe every Unique Identifier that your system will use to look data up.  Describe the format of the data being passed to and from the endpoint (in a services model, this is the canonical schema).  (Identifier + Format + Protocol = IFaP)
  • Describe the priority quality attributes that the endpoint must exhibit.  If an endpoint must be reliable, but should be performant, then you’d say “reliable first, performant second.” 

That should take less than three days, and should be done with two or three people working closely together, and then reviewed with the team to find things you missed.

Agile design is light.  It is enabled by tools.  It is defined well enough to get people to work independently of one another.  Describing the IFaP allows tests to be written to the interface first, so it can be coded using Test Driven Development.  Hopefully, if you do this well, the need for refactoring, at least before the first full release is out the door, is pretty slim. 
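As a rough sketch of what ‘tests written to the interface first’ can look like, consider the fragment below.  All of the names are hypothetical, and the IFaP details would live in the component’s short written description; a throwaway fake lets the tests exist before any real implementation does:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The component's contract, agreed up front.  Identifier: a SKU string.
// Format: ProductRecord.  Protocol: a synchronous call here; in a
// services model this would be a documented wire protocol and schema.
interface ProductLookup {
    ProductRecord findBySku(String sku);
}

class ProductRecord {
    final String sku;
    final String name;
    ProductRecord(String sku, String name) {
        this.sku = sku;
        this.name = name;
    }
}

public class ProductLookupContractTest {
    // A fake implementation, to be replaced by the real component later.
    private final ProductLookup lookup = sku -> new ProductRecord(sku, "stub");

    @Test
    public void returnsARecordForTheRequestedIdentifier() {
        assertEquals("ABC-123", lookup.findBySku("ABC-123").sku);
    }
}
```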

IFaP : Middle Out Architecture

By | February 8th, 2007 | Enterprise Architecture

There is some discussion these days about “middle out” architecture.  The key idea in “middle out” is that it is neither top down nor bottom up.  So what does that mean?

Top down architecture means taking the entire enterprise and creating a model with large, vague blocks of functionality.  The architect then drills down on each block, adding details and fleshing out the design, until a sufficiently detailed design is created.  It’s basically Functional Decomposition 101. 

Bottom up architecture means allowing different teams to create whatever services they want, setting up an infrastructure for sharing them, and then stepping back, hoping magic will happen.  Sometimes it does.  Usually, it’s just chaos.

You cannot craft a work of art by pouring a bucket of sand on the street, and you cannot craft an efficient enterprise by endorsing services and then standing back to ‘watch things happen.’ 

Middle out architecture starts at the center.  The goal of middle out architecture is to create a stable ‘center’ as an abstract combination of Identifier standard, Format standard, and Protocol standard (an IFaP).  This abstract combination allows variation both in the business uncertainties (where a business would consume a service in a composable manner) and in technology uncertainties or variations (where the technologies to be consumed could be widely different).  This is often illustrated as an hourglass shape, with the narrow waist being the IFaP.

Most enterprises will need more than one IFaP.  I can easily imagine one IFaP for Master Data Integration needs, another for Functional Services, a third for Business Intelligence, and perhaps a fourth for orchestrations and long-running transactions.

Why the combination of Identifier, Format, and Protocol?  What’s magic about this combination?  

I’ll try to answer in a different way: what is magic about the Internet?  What has allowed this technology to blossom and grow in a completely unregulated manner, with no single person, group, or organization driving its growth?  A couple of IFaPs.  TCP/IP has all three.  HTTP has all three.  SMTP has all three.  As does RSS.  What do we get?  We get rapid growth.  It is not difficult to write a web server, an SMTP server, or an RSS reader.  Why?  Because the standards are simple, understandable, and technology independent.  The road to failure is well traveled by folks who have attempted to create a standard that didn’t have all three elements, or didn’t provide for versioning in each of the elements (versioned identifiers, versioned formats, and versioned protocols).
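To make the ‘all three elements, versioned’ point concrete, here is a rough sketch, in Java, of how an enterprise might declare one of its IFaPs.  The names and structure are mine, for illustration only; there is no standard schema for this:

```java
// Hypothetical declaration of one enterprise IFaP.  Each leg of the
// tripod carries its own version, so providers and consumers can
// evolve independently without breaking the narrow waist.
record Versioned(String name, String version) {}

record IFaP(
        Versioned identifier,   // e.g., the enterprise product ID scheme
        Versioned format,       // e.g., the canonical product schema
        Versioned protocol) {   // e.g., the agreed interaction style
}

class IFaPExample {
    public static void main(String[] args) {
        IFaP masterData = new IFaP(
                new Versioned("EnterpriseProductId", "1.0"),
                new Versioned("CanonicalProductSchema", "2.1"),
                new Versioned("PublishSubscribeFeed", "1.0"));
        System.out.println(masterData);
    }
}
```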

So, to make your SOA take off, start with an IFaP.  You can and will certainly need more than that.  You will need some elements of both top-down and bottom-up in order to drive adoption, make teams aware of their obligations to the enterprise, and make the services manageable.  You will need governance and a reliable infrastructure.  That is critical.  But so is an IFaP.

What does that mean?  It means that you do more than just say “We will use SOAP” or “We will use REST”.  It means that you provide specific guidance about how all ‘enterprise services’ shall behave, and then you make it as easy as humanly possible to create services that behave that way.  You make developers aware of the services, and allow them to call the services at will, without the need for heavy procedures and hand-waving and sign-offs. 

You build in mechanisms to make sure that the ‘enterprise services’ are manageable without the developers needing to know every detail about how those mechanisms do their work.  It’s a cross-cutting concern, and you can and should give them lots of details, but they shouldn’t be forced to learn them all to use the system, any more than I have to know the intricacies of Name Servers to write a simple web browser. 

More importantly, if someone creates a service that you didn’t include in your plan, embrace it.  As long as it is compliant with the IFaP, it should enter the enterprise as a first-class citizen.  You still want to encourage specific services to be created and delivered, and you still want to ensure that a funding model is in place, but you must not stifle creativity. 

Over the course of the next few months, I hope to be working on an IFaP approach within Microsoft IT to making services accountable, discoverable, manageable, very easy to develop, and very simple to use.   I will report back to this blog about progress as it occurs, and about any of the pitfalls that we run into along the way.

The best employee I ever had

By | February 7th, 2007 | Enterprise Architecture

I was deep in the mix during the dot-com bubble.  It was one heckuva ride.  In the early days, it was hard as heck to get talented people.  The first, and often the most critical, risk that we would take was “who to put on the team.” 

I have some very serious regrets about some of the people I hired or helped hire.  About other folks, on the other hand, I have no regrets.  Let me tell you about the best employee I ever had…

This gentleman came to the company with minimal software experience.  He could create fairly good graphics, but striking and creative visual design wasn’t his forte either.  He was basically a good web page coder… at least at first.

What made this guy the best employee I had was this: he loved to learn.  Wow did he love to learn!  He would drop by my office about twice a week just to ask questions and learn.  He would read books on every topic he could (that related, even remotely, to his work).  He learned project management, and interface design, and graphic information modeling, and eventually became one of the best user experience guys around.

I love to work with people who love to learn. 

How do you know if you are a person who loves to learn?  A couple of ways:

  1. self-motivation: you reach out for resources, find them, and consume them, at a rate that would make the ordinary employee blush. Often the resources are non-traditional, like learning from newsgroups or joining an open source project with experienced coders.  No closed doors.  No fear of looking foolish.  Just go.
     
  2. self-driven mastery: not only learning, but applying what you learn, to the point where soon, you are jumping in to teach others.  The terminology trips off your tongue.  You learn not only the words but their meanings.  You read deeply technical articles that use the new concepts, just to make sure that you can master their meaning in practice.  You practice justifying the ideas in articles and blogs and in hallway conversations.  Soon, others come to you to solve problems in the space you’ve learned only a few months before.
     
  3. passion for quality: It’s not enough just to learn, or to know; you must do.  You simply must.  It’s not optional.  You cannot help yourself.  The learning and knowing are wrapped up in the doing.  In your mind, it is not complete until you, and your employer, and your customer have reaped a tangible reward.  You have to try it, to solve it, to solve it again.
      
  4. intense desire to fix what you screw up: You are fallible, and you know it, and you make an effort, every single time, to review how well you did and learn from it.  Sometimes you review your efforts more than once: once right after it is done, and then again later (a month, six months, a year… however long it takes to lose the ‘self-congratulatory’ gloss).  Every time out of the gate is a learning experience.  It doesn’t stop you from bold thinking and bold action, but it does prevent you from earning the reputation of ‘reckless’ or ‘loose cannon.’
     
  5. values-driven: You start with what you believe, and you drive your learning from there.  Therefore, when you learn something, it sticks.  You don’t throw away the good with the bad.  Sure, you sometimes have to unlearn a practice that you discover is not useful, but you don’t flit from one fad to another, proposing one model one week, and another model the next.  

    You bind new ideas to the core values that you care about, and you place ideas into your internal model based on how well they align to your core values. This allows you to construct, build, and grow… not tear down and start over every two years.  

    It also means that titles and org charts are borderline useless.  A title only means something if you need it to.  Position means a bit more, as does recognition, but the truly valuable things in life don’t come from position or title or recognition.  They come from examining your work in the light of your values.  If you value what you do, and you measure your success from a stable, consistent viewpoint, then you will sleep extremely well.

Alas, I have met fewer than a dozen men and women who fire on every cylinder.  When I have, I have been better simply by knowing them.

If you don’t hit on every one of these points, look at yourself and think about this:  Do you want to be the person that your manager, a decade later, writes a blog about with the title of “best employee I ever had?”  If so, find the elements above that you aren’t doing, and start doing them.  Hear that sound?  It’s success calling…

A case study in breaking up a tightly coupled integration

By | February 3rd, 2007 | Enterprise Architecture

About 15 years ago, Microsoft upgraded their internal system that manages the list of unique product offerings: our product catalog.  The catalog is fairly complex.  Microsoft sells products all around the world, in over a hundred different languages, using different mechanisms for licensing.  It is not enough to say that MS sells Word; more appropriately, MS sells Word 2003 Service Pack 1, Portuguese edition, in Brazil, as sold through the Open Value license program.  The packaging is unique, as are many of the localized aspects of the application itself. 

Of course, the data in this catalog is key to a lot of business processes, so literally hundreds of systems have been integrated, either directly or indirectly, with this source system.  The categories of downstream systems include financial systems, order management, sales allocation, supply chain, marketing management, partner management, and more, with a dozen or more applications in each category.

The creation of this catalog source predates most of the ERP systems within Microsoft.  Therefore, the ERP systems, including Dynamics AX, while capable of managing a large and complex catalog, are largely not being used for that purpose. 

Most of the downstream systems that consume this catalog do so through an older Master Data Management (MDM) system that was also developed around the same time.  The MDM system provided a way to subscribe to large flat files and/or tables (remember: this catalog system predates XML) that contain the catalog data.  If you want to write an application that consumes from this catalog, your application can subscribe to get hourly updates directly to your database tables, and the MDM system will manage the SQL integration at a fine level of detail. 

It is basically a single table, so the feed is flat (no hierarchy).  While it is managed using SQL, it is not dissimilar from managing it as flat files.  This large flat file feed accounts for over 90% of all of the systems that integrate with the product catalog system.

Here’s an illustration: [diagram: the catalog source system feeding hundreds of downstream systems through the MDM flat-file subscription]

OK, so we have this heavily wired infrastructure and we want to change it.  What do we change it to?  Well, Microsoft Enterprise Architecture has stated, categorically, that it is better to leverage an existing system than buy a new one, so we are leveraging our existing ERP and CRM infrastructure.  We will move the catalog to the ERP system.

Of course, ERP systems are known for being versatile.  Every business group within Microsoft wants their own attributes attached to their own products, often in rather distinct ‘models.’  So a team of folks went through the business, for well over a year, interviewing all of the different business groups about what they want in their models.  I’m confident that they will be able to implement their model, or something that represents the model, in the ERP system. 

Interestingly enough, the ERP system is not new.  We are just using it in a new way.  One of those downstream systems that consumes the old catalog is our ERP system.  (Really, it’s plural.  There are many ERP systems in place, including Dynamics AX.  We’ve picked one to source this data.)

Now, how do we change the integration?  The obvious answer is right in front of us: have the new system produce the old data feed.  That way, we get the benefit of the new data model, but all the downstream systems that rely on the old data feed can continue to operate, at least until our army of IT developers can crack the covers and either shut down the old apps or refactor them to use the new data structure.

So, here’s the logical view of what we would change: [diagram: the new ERP catalog producing the old data feed for the existing downstream systems]

Making the new system produce the old feed will not be easy.  You see, with the addition of these different models and with the increased versatility of the ERP solution, we will have a rather different looking catalog.  The number of products will be different as will their names and attributes.  The end result will be similar in that we will still sell products, but some of the things we cannot do in the old system will be fairly easy in the new one.

In addition to this, the old catalog system used auto-number (Identity) columns to create unique ids.  The ERP system would create altogether new numbers.  If we are going to keep the feed functional for more than a day, we’d need the business event of ‘add a new product’ to produce the same data effects in the MDM system as before.  Making the ERP system do this is difficult and byzantine.

This is tricky. We need the ability to move a downstream system from the old data structure to the new one.  We cannot move all of them at once.  So, for some period of time, both the old and new data structures have to be available.  But producing the old data structure from the new system is difficult.

The solution we are looking at is interesting.  We don’t produce the old data structure from the new system. We produce it from the old one.

In this model, all changes to the product catalog happen in the ERP system.  The ERP system has the new data structures in it.  In the old model, data used to flow From the catalog system To the ERP system.  In this model, we reverse the flow.  Data will flow From the ERP system To the old catalog system.

Now, we can’t have users entering data in the old system as well, since that data wouldn’t make it to the ERP system any more, so we cut off the GUI.  All new data comes in through the ERP system.  However, the old data structure continues to be generated by the old system, along with all of its hidden intricacies that the downstream systems are tightly bound to. 

By definition, the data files will behave in the same way, because the tight dance that each of the downstream systems has learned, over the years, continues to play on.
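Here is a rough sketch of the shape of that reversed flow, with entirely hypothetical names.  ERP product events are flattened into the old catalog’s structures, and the old system keeps assigning its identity IDs and generating the feed exactly as it always has:

```java
// Hypothetical: a product event as it flows out of the ERP system.
record ErpProduct(String erpNumber, String name, String language, String program) {}

// The old catalog system, which still assigns its own identity-column
// IDs and still produces the flat feed the downstream systems expect.
interface LegacyCatalog {
    void upsertProduct(String name, String language, String program);
}

class ReversedFeedMapper {
    private final LegacyCatalog legacy;

    ReversedFeedMapper(LegacyCatalog legacy) {
        this.legacy = legacy;
    }

    // Called for each 'product added or changed' event from the ERP.
    void onErpProductEvent(ErpProduct product) {
        // Flatten the richer ERP model into the old single-table shape;
        // the legacy system handles identity and feed generation as before.
        legacy.upsertProduct(product.name(), product.language(), product.program());
    }
}
```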

Of course, a new data feed will move to the MDM system as well, this time using the new data structure.  When a downstream system is ready to adopt the new model, a dev project is fired off to either refactor it to consume the new data feed or to retire it completely.  In a sense, it’s a cleanup of the magnitude of Y2K in that we will have to examine each one of those systems to decide if it is worthy of the investment needed to fix it.

The advantages of this mechanism:

  • Data is only entered once, in the ERP system, and it is replicated in a definable manner downstream.
  • The logic needed to map the data from the old catalog system to the ERP system can be reversed to feed the old catalog system first, to make sure it works, before you put the new data structure in the ERP system.
  • There is no pressure for a “big bang” migration of downstream systems, or even a series of “little bang” migrations.  They can be done in any order we want.

The disadvantage of this mechanism:

  • As changes occur in the new ERP data model, the data mapping components in the reversed feed will also have to be kept up to date.  This adds to the cost of changing the ERP data model. 

It’s an interesting problem. 

There are other solutions, of course.  I won’t go into details on why I think they are less appealing.  This one is my favorite, but the decision will be made by a team of experts. 

Managing the bindings from systems to EAI infrastructure

By | February 1st, 2007 | Enterprise Architecture

Every system is responsible for publishing its own events.  I hold that as a core requirement of participating in an Enterprise Application Integration Infrastructure (EAI-I).  What does that mean?

The system, as part of its ownership, its code, its configuration, is responsible for describing how it meets corporate canonical event requirements.  That belongs with the system, not with the EAI side. 

EAI is responsible for routing, translation, orchestration, logging, management, etc.  It should behave correctly regardless of how data is collected from the source system. 

The problem is that there are serious advantages to having a central infrastructure to manage these connections.  I spend a lot of my time looking at ‘spaghetti’ diagrams, and one thing that is absolutely clear to me is that I spend way too much time collecting data individually on these integration points.  That reduces my ability to actually manage any of these connections.  As we move more toward SOA, we will need this even more. 

What I’d like to see is a standard mechanism that meets the following needs.  If anyone can help me to understand a known standard or existing RFC that addresses these points, I’d appreciate it.

  1. A system publishes and makes available the list of integration points that it has with other systems.
  2. The EAI system queries these published points to drive its configuration and expectations.
  3. Publishing of these expectations should be both dynamic (as a message) and static (as something that can be queried).
  4. The description of an integration pathway or channel must be standardized, so that different EAI infrastructures can use them, and so that different reporting and management systems can leverage them, without adapters.
  5. A system can version the connection points with a release in a way that is not too difficult for developers to understand and work with.

Note that UDDI presents PROVIDERS for integration.  I need CONSUMERS and COLLABORATORS to cooperate as well, in a way that is completely under the control of the system that consumes, collaborates, or provides the integrations.
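To make the idea concrete, here is a rough sketch, in Java, of the kind of self-published descriptor I have in mind.  Every name here is invented for illustration; the real value would come from standardizing this vocabulary so that any EAI infrastructure, or any reporting and management system, could consume it without adapters:

```java
import java.util.List;

// Hypothetical: the role a system plays on a given integration point.
enum Role { PROVIDER, CONSUMER, COLLABORATOR }

// One integration pathway, as the owning system would publish it.  The
// IFaP triple and the version travel with the description, so the EAI
// infrastructure can query it to drive configuration and expectations.
record IntegrationPoint(
        String name,        // e.g., "product-catalog-feed"
        Role role,
        String identifier,  // the key scheme used to look data up
        String format,      // the canonical schema of the payload
        String protocol,    // how the data moves
        String version) {   // versioned with the system's release
}

// What each participating system exposes: static (queryable) here, and
// ideally also published dynamically as a message when it changes.
interface IntegrationManifest {
    List<IntegrationPoint> integrationPoints();
}
```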