
Should Business Architects use the Business Model Canvas at the Program level?

January 31st, 2013 | Enterprise Architecture

At the Open Group conference in Newport Beach, I listened to a series of presentations on business architecture.  In one of them, the presenter described his practice of using Osterwalder’s Business Model Canvas to create a model of his program’s environment after a business program (aka business initiative) is started.  He felt that the canvas is useful for creating a clear picture of the business impacts on a program.  There are problems with this method, which I’d like to share in this post. 

Since business architecture has no “standard vocabulary,” let me lay out the context I’m using for this post. 

A “business program” is chartered by an “enterprise” to improve a series of “capabilities” in order to achieve a specific and measurable business “goal.”  This business program has a management structure and is ultimately provided funding for a series of “projects.”  The business architect involved in this program creates a “roadmap” of the projects and rationalizes the capability improvements across those projects and between his program and other programs. 

For folks who follow my discussions of the Enterprise Business Motivation Model: in that model, I use the term “initiative.”  I’m using the term “program” for this post because the Open Group presenter used the word “program.”  Note that the presentation was made at an Open Group conference, but it does NOT represent the opinion or position of the Open Group and is not part of TOGAF or any other Open Group deliverable.

The practice presented in this talk troubles me.  As described, it goes like this: within the context of the program, the business architect would pull up a blank copy of the business model canvas and sit with his or her executive sponsor or steering committee to fill it out.  By doing so, he or she would understand “the” business model that impacts the program. 

During the Q&A period I asked about a scenario that I would expect to be quite commonplace: what if the initiative serves and supports multiple business models?  The presenter said, in effect, “we only create one canvas.”  My jaw dropped.

A screwdriver makes a lousy hammer but it can sometimes work.  The wrong tool for the job doesn’t always fail, but it will fail often enough to indicate, to the wise, that a better tool should be found.

Osterwalder’s business model canvas makes a very poor tool for capturing business forces from the perspective of a program.  First off, programs are transitory, while business models are not.  The notion of a business model is a mechanism for capturing how a LINE OF BUSINESS makes money, independent of other concerns and other lines of business.  Long before there is a program, and long after the program is over, there are business models, and the canvas is a reasonable mechanism for capturing one such model at a time.  It is completely inappropriate for capturing two different models on a single canvas.  Every example of a business model, as described both in Osterwalder’s book and on his web site, specifically describes a single business model within an enterprise.

I have no problem with using business models (although my canvas is different from Osterwalder’s).  That said, I recommend a different practice: if the business initiative is doing work that will impact MULTIPLE business models, it is imperative that ALL of those business models are captured, each in its own canvas.  The session speaker specifically rejected this idea.  I don’t think he is a bad person.  I think he has been hammering nails with a screwdriver.  (He was young.)

Here’s where he made his mistake:

[Figure: oversimplified multi-stream value chain for Contoso Airlines]

In the oversimplified value stream model above, Contoso Airlines has three business models.  The business owners for these three businesses are on the left: Bradley, Janet, and Franklin.  Each is primarily concerned with their own business flows.  In this oversimplified situation, there are only two programs, each with one project.  If the session speaker were working on the Plantheon program, his idea works.  There is only one business model to create.  That nail can be hammered in with a screwdriver.  Lucky speaker.  Showing Franklin his own business model is a good thing.

But if we are working on the Flitrack program, what do we show Franklin?  If we create a “generic” canvas that includes cargo, he will not recognize the model as being applicable to his concerns.  He will not benefit and neither will the program.  In fact, Franklin will think us fools because he had a presentation from Plantheon yesterday showing him an accurate model… don’t you people talk?

Program Flitrack should have one-on-one conversations with Bradley and Janet to develop their business models.  The business model that Franklin cares about does not need to be created again.  It can come out of the repository.  The Flitrack program would consider all three models as independent inputs to the business architecture of the organization impacting the program. 

Anything less is business analysis, not business architecture.

The bizarre assumption of functional decomposition

October 28th, 2008 | Enterprise Architecture

I ran into a friend today and, as friends often do, we let our conversation wander over the different "broken things" in IT in general (and a few specific to Microsoft).  One thing that I’d like to share from that conversation: a truly bizarre assumption that we teach, over and over, to new programmers… the assumption that by simply following a "functional decomposition" process, a well-trained programmer will naturally end up with a good design.

Now, I’m no great expert on product design or graphic design or industrial design… but one thing I can say for certain: creating a good design is not the natural outcome of a simple process.  Only an excellent design process can produce excellent design.

Let me provide an example of good design from the world of products.  The picture below is of a footrest.  You read that right: a place to rest your feet.  Mundane, right? 

You tell me.  Does this product LOOK mundane to you?  How about the fact that it promotes movement and blood flow while serving its primary function?  (Special call out to the design firm, Humanscale, for creating this beautiful product.)

[Image: Humanscale FM 500 footrest]

Design is not just a process of decomposing a problem into its constituent parts.  Nor is it a process of creating objects that contain functionality and data… yadda-yadda-yadda.  There are dozens of techniques.  Don’t read my blog post as a slam on any one of them.  I’m slamming anyone who thinks that they should reach a conceptual architecture, derive one solution, and go… without considering alternatives.

Design is a process where you consider the problem and then propose multiple, competing, wildly creative solutions.  You then narrow down your brainstorm and weed out the bits that won’t work… and then you propose multiple, competing, wildly creative refinements… and the cycle continues.  This happens a couple of times, until you have refined your solution by being creative, then logical, then creative, then… you get the picture.

When was the last time you were in a design review and the team being reviewed came in with multiple solutions, asking for help to narrow it down?  Really?

In no other field is the word ‘design’ so misused as it is in software development.

I have not met many folks that use a process whereby they design multiple, competing, wildly creative solutions in software and then evaluate them, select one or two, and go after the problem again at a finer level of abstraction. 

Not many folks at all.

Why is that?  We are living with the myth that you can follow a simple process and produce a simple, "pretty good" solution architecture that represents a "good design."  No alternatives.  Very little creativity.

How’s that working for ya?

As-Is versus To-Be… what to model first

May 7th, 2008 | Enterprise Architecture

I have always taken the advice at face value: the "to be" model matters much more than the "as is" model does.  Implicit in that: spend as little time on the "as is" model as you can.  Perhaps, even, do the "to be" model first.

Of course, I wouldn’t be blogging this point if I hadn’t run into that bit of advice today.  We are modeling the ‘as is’ process first.  And spending a good bit of time on it.  Why in the world would we do that?

Because there’s a BPM benefit to modeling the ‘as is’ process, and sometimes we have to earn that benefit before we can wander in the clouds of ‘what will come.’ 

Sometimes we have to be willing to write down what others have not wanted to write down: that the customer doesn’t experience a simple process… that our methods are not efficient or effective… that different people use overlapping words in confusing ways… that levels of abstraction create layers of confusion that can be impenetrable for "outsiders" to understand.

Once the complexities are pointed out, and sometimes only after they have been, we can begin to get people focused on the future.

Sometimes, we have to take the time to consider where we are before we can begin to understand where we are going.


Killing the Command message: should we use Events or Documents?

August 7th, 2007 | Enterprise Architecture

If we want to decouple a SOA system, we must get away from the notion of the remote procedure call.  In other words, our services need to have as few “command” messages as we can get away with.  This is a design philosophy but it is easier said than done.

According to Hohpe and Woolf, there are three basic message patterns.  Excerpt from their classic work, Enterprise Integration Patterns:

Message intent — Messages are ultimately just bundles of data, but the sender can have different intentions for what it expects the receiver to do with the message. It can send a Command Message, specifying a function or method on the receiver that the sender wishes to invoke. The sender is telling the receiver what code to run. It can send a Document Message, enabling the sender to transmit one of its data structures to the receiver. The sender is passing the data to the receiver, but not specifying what the receiver should necessarily do with it. Or it can send an Event Message, notifying the receiver of a change in the sender. The sender is not telling the receiver how to react, just providing notification.
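
To make the distinction concrete, here is a minimal sketch of the three intents as message types.  The names (CreateInvoiceCommand and so on) are my own invention for illustration; they don’t come from the book or from any particular product.

```csharp
using System;

// Hypothetical message types illustrating the three intents.
// The payloads look similar; what differs is what the sender expects the receiver to do.

// Command message: the sender names the operation it wants the receiver to run.
public class CreateInvoiceCommand
{
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
}

// Document message: the sender hands over a data structure and lets the receiver decide what to do with it.
public class InvoiceDocument
{
    public string InvoiceNumber { get; set; }
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
}

// Event message: the sender announces that something happened on its side.
public class SaleMadeEvent
{
    public string SaleId { get; set; }
    public DateTime OccurredAt { get; set; }
}
```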

If you look carefully, it isn’t hard to see that the command message is sent with the understanding that something will happen on the receiving end.  More importantly, the sender KNOWS what will happen on the receiver’s end.  This is a particularly insidious form of coupling.  It is also really simple.

[Figure: command message flow]

All kinds of things creep into a SOA message as a result of this knowledge.  If I am asking a service to “Create an Invoice” and I send data that describes an invoice to another system, I am making a lot of assumptions. 

  1. I am assuming that the receiver will succeed.  The receiver has to be present. I have not said “I have an invoice for you.”  I have said “I need you to do this for me.”  If the receiver isn’t present, then what?  Sure, I can use durable messaging, but if a message is stuck in a queue, it isn’t in ANY part of the distributed system.  It vanishes from existence until it turns up at the other end.  The invoice isn’t created… it doesn’t exist in any form… for an indefinite period of time.  NOT GOOD.
     
  2. I am assuming that the sender should, ultimately, have the right to decide if an invoice should be created.  That’s odd.  Why do I even need the receiver to create the invoice?  Answer: because the sender cannot.  Implication: the sender does not have the “right” to create an invoice.  The receiver is the “system of record,” not the sender.
     
     But if someone wants to add a new validation rule that says we won’t create invoices for customers in Germany because of a new import law that went into effect, where does that restriction go?  If we put it into the system of record (the receiver), then we have to put a rule into the sender as well, to allow it to return an error message or handle the refusal.  In effect, the sender still has intimate knowledge of the workings of the receiver. 
     
  3. I am assuming that the invoice isn’t already there and the sender is the first system to notice!  This flies in the face of reality.  It is entirely normal for the CRM system, the billing system, the shipping system, and perhaps even the portal system to be part of a “new invoice” process.  If I make the statement that “It is I, Sender of the Magic Message, Master of Invoice Creation, who has the right to demand an invoice into existence!” then what… people have to call the sender to create an invoice?  What if they don’t?  What if they call the system of record and I discover the sale later?  Will I mistakenly generate another invoice?  Or will I have to put complex rules into my code to ensure that I only generate some of the invoices that I am aware of, but not others, because I know that other systems have ordered the creation of the invoice?  Unmaintainable.  
     
  4. I assume that creating the invoice should happen RIGHT NOW for both my system and the receiver.  That may be convenient for me, but not for the receiver.  In fact, it may be wildly unreliable for the receiver to receive messages as they come in the door.  Or perhaps some messages happen right away but others take too long.  This places an artificial constraint on the system of record: do what you want, as long as it doesn’t take more than 100 milliseconds.  This is seriously tight coupling. 

Each of these assumptions exists in a Remote Procedure Call.  They are forms of coupling, pure and simple.  They fly in the face of SOA.

So what to do?  How do you avoid making SOA endpoints that are commands?  If I want to offer the ability to create an invoice to the enterprise, what should my endpoint look like?

You have two choices: event driven and document driven.

Event driven looks like this:

[Figure: event-driven message flow]

First of all, there is an event that you need to subscribe to.  It is not the event of “invoice created” because the sender is not allowed to create invoices.  Therefore, we need the system of record to subscribe to a different event… but what event?

What event occurs in a business that says “create an invoice”?  How about “we made a sale”?  Think of the subtle difference.  An invoice is a document.  We use it to track the sale.  We assign a number to it and we look up other sales, etc.  But it isn’t the BUSINESS event.  The business didn’t make an invoice.  The business made a sale.  Operations people made an invoice to track the sale (long before computers came along).  One tidbit: the fact that the system of record can reject the transaction means that this is an “unapproved sale.”

Notice Steps 2 and 3.  The event message usually doesn’t contain sufficient information for a system of record to fulfill its responsibilities.  It subscribed to the event, and therefore discovered it, but it needs to call back to the source system to get the actual data to act upon. 

Notice Step 4 above.  There are two subscriptions back to the sending system.  This handles the case where the sale wasn’t allowed.  It is the system of record that denies the sale.  
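
To make steps 2 through 4 concrete, here is a rough sketch of what a handler in the system of record might look like.  The interfaces (ISalesSource, IMessageBus) and the supporting types are hypothetical, just enough to show the shape of the interaction, not any real messaging API.

```csharp
using System;

// Hypothetical supporting types, for illustration only.
// SaleMadeEvent is the event type sketched earlier in this post.
public class SaleDetails { public string SaleId; public string Country; public decimal Amount; }
public class SaleDeniedEvent { public string SaleId; public string Reason; }
public class InvoiceCreatedEvent { public string SaleId; public string InvoiceNumber; }
public interface ISalesSource { SaleDetails GetSaleDetails(string saleId); }
public interface IMessageBus { void Publish(object message); }

// The system of record subscribes to SaleMadeEvent, calls back for the data,
// applies its own rules, and answers with either an invoice or a denial.
public class SaleMadeHandler
{
    private readonly ISalesSource _source;
    private readonly IMessageBus _bus;

    public SaleMadeHandler(ISalesSource source, IMessageBus bus)
    {
        _source = source;
        _bus = bus;
    }

    public void Handle(SaleMadeEvent evt)
    {
        // Steps 2 and 3: the event alone isn't enough; call back to the source for the sale details.
        SaleDetails sale = _source.GetSaleDetails(evt.SaleId);

        // The validation rules live here, with the system of record (the import-law example from above).
        if (sale.Country == "DE")
        {
            _bus.Publish(new SaleDeniedEvent { SaleId = sale.SaleId, Reason = "Import restriction" });
            return;
        }

        // Step 4: the sale is approved; an invoice now exists in the system of record.
        _bus.Publish(new InvoiceCreatedEvent { SaleId = sale.SaleId, InvoiceNumber = Guid.NewGuid().ToString("N") });
    }
}
```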

There is an interesting bit of coupling still going on here.  The source of our sale came from a system that is not aware of the rules surrounding a sale.  It has a simple task: collect data and start a sale transaction.  However, we had to subscribe to two kinds of events, didn’t we?  We had to subscribe to both “Invoice” and “Sale Denied”.  That means that we had to tell that source system that there were two possible status values for a sale, and we had to subscribe to each. What if a third comes along?  What if the business wants to change the rules to allow a new kind of sale… one that doesn’t generate an invoice OR a sale-denied message?  We’d have to change BOTH systems.

Both systems are coupled on the business process itself… the business process isn’t in the diagram but it definitely affects the design.  So how do you decouple from that?  Let’s look at Document based messaging.

Document Driven

First thing to notice: the number of ports and channels is a LOT simpler and we don’t spend nearly the same amount of time “chatting” about things.  However, in this model, the responsibilities of both systems are VERY different.  This is an architectural design change.  This kind of change CAN be added to a system later, but it is more expensive than if you add it up front.

In this model, we don’t send events at all.  Notice that.  We send documents and the documents have a transaction id that is carried from point to point.  As the document goes from describing an unapproved sale to describing an invoice (and later to a shipment), you carry one transaction id along the way.  This is your correlation identifier.  This allows each of the systems to perform activities based on their own business processes, without needing to know anything about the business process implied in the other system.

Notice that we are down to one response subscription, and it isn’t even a specific subscription.  It basically says “For any transaction that started with me, or that I touched, please send me back any documents related to it so I can update my status.”  It is very simple.
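
Here is the same idea sketched in the document-driven style, again with made-up names.  The point is the single transaction id that rides with the document as its meaning changes, and the single generic response subscription on the source.

```csharp
using System;

// Hypothetical document shape: one transaction id is assigned when the sale is
// captured and carried, unchanged, as the document changes meaning over time.
public class SaleDocument
{
    public Guid TransactionId { get; set; }   // the correlation identifier
    public string CustomerId { get; set; }
    public decimal Amount { get; set; }
    public string Status { get; set; }        // "UnapprovedSale", then "Invoice", later "Shipment"
}

// Hypothetical channel: the source registers one generic response subscription,
// "send me back anything related to a transaction I touched," rather than
// subscribing to each possible outcome by name.
public interface IDocumentChannel
{
    void Send(SaleDocument document);
    void SubscribeToMyTransactions(string systemId, Action<SaleDocument> onDocument);
}
```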

The simplicity is a bit deceiving.  If we need to trigger another event on the transaction source, we need to put in some logic for that, but it is not a substantial change over the event-driven approach.  The logic is simply contained in the app instead of the messaging system.

Conclusion

So which method is better?  Should we use commands, events, or documents? 

I’m a big fan of simplicity.  And the simplest method, in the end, is document driven.  Unfortunately, due to the architectural changes needed to make it happen, we often cannot start there. 

So, for a migration plan, take legacy systems and make them event driven.  If you are building new systems: make them document driven.  Either way… kill the command message!  Avoid that mess.

Microsoft, BPEL and Open Standards

April 4th, 2007 | Enterprise Architecture

We are frequently criticized for not supporting enough open standards.  Honestly, I think it’s negative hype.  MS supports a long list of open standards, some of which we created, some of which we didn’t.  This one is cool: BPEL.

When business analysts write down business processes, they use diagram tools.  In the Microsoft stack, we’ve had Visio, but that is a general-purpose diagram tool, not a specialized tool for any particular niche (much as we’ve worked to extend it).  As a result, in the Business Process Engineering space, MS has fallen behind major players who produce tools that MAKE SENSE to business process people, and even provide some great analysis capabilities.

However, if a company uses one of these tools, and then wants to share those process flows with developers, it gets weird.  In a pure MS world, the developer would have to re-envision the workflow using the WF Designer in Visual Studio.  If that seems like extra work, it is.  It’s wasteful and the models can easily become out of sync.

Enter BPEL.  John Evdemon has done a good job of keeping track of this space for us, and I refer you to his excellent blog.  BPEL is an open standard for sharing Business Process workflows between tools, and now WF can play in that space.  This means, of course, that we can now skip the step where a developer “interprets” the workflow created by a business analyst.  We can import it directly.

This is so cool.  There are excellent tools out there, and through this simple feature, we allow the users of all of those tools to feed WF-based systems with workflow.  I encourage all my architect colleagues to become familiar with BPEL 2.0 and how WF will use it, and spread the word: WF is enterprise ready.

Perhaps it is time to declare victory in the battle of Rules Engines vs. Dependency Injection

March 6th, 2007 | Enterprise Architecture

I watched on the sidelines, not long ago, as a team of architects carefully inspected and examined different technologies for managing a rules engine.  I found it interesting, but not terribly pertinent, because…well… to be honest… rules engines tend to create more problems than they solve.

So let’s look at the problem we are trying to solve with a rules engine. 

When an application is written, it will encapsulate some business rules related to the problems it is trying to solve.  Assuming it is valuable, continued investment will occur.  As this happens, more business rules will be placed in the application.  Unfortunately, these rules can be applied at different points in the system’s architecture (user interface, composition layer, service layer, data layer, and in data migration processes).

Some rules apply in the user interface.  We may decide that a field should be implemented as a drop-down list.  Why?  Because the data entered in that field must be “known” already to the application, either for integration purposes or just to reduce data entry error.  That is a business rule.  We could allow free form input in a user interface field… and then apply editing rules (like date formatting, or putting the spaces and dashes in phone numbers).  That is a rule as well, especially if the user interface is data driven.

We may decide to validate data at the middle tier or services layer.  For example, in some situations, I can only submit an order for products if I provide the agreement number that I have signed with the vendor.  That agreement specifies lots of things, like terms of payment and perhaps pricing and rebate rules.  So what if I enter an order but provide an invalid or expired agreement number?  There would be rules for this as well.

There are a couple of problems that rules engines are designed to solve:

  1. If a business rule needs to change, and it is implemented in many places, potentially in many applications, then it is tedious and expensive to change it.  This slows down the business and increases the cost of agility. 
  2. If an existing application will take on new needs, then new business rules may need to be added to it.  This can increase the complexity of the application substantially.
  3. Business rules often drive tight coupling between systems, especially in an integrated environment.  If I am going to pass data from system FOO to system BAR, and I want to make sure that the data will be acceptable to system BAR, I may be tempted (or required) to validate the data in FOO using the business rules from BAR.  The person who wrote the code for those rules in BAR is long gone from the department.  The expense of making sure those rules are in sync, and kept in sync, can be high.

These are very valid problems, and rules engines propose to solve them by providing interesting mechanisms.  They include the ability to pass values to the engine and have it calculate a result that can be understood by the caller.  This allows isolation, to a point, because the calculation itself can be changed.  You can also place process rules into a rules engine, so that you pass in the state machine information and one or more inputs or events, and the state machine reacts by sending out events and changing state.  This is the core concept of workflow components.

That said, I think there are very narrow uses where rules engines are actually a good idea.  Many folks argue that workflow engines are essentially a subclass of rules engines, and that workflow is a good thing to isolate.  Why?  Because writing parallel workflow capabilities into your code, unless you are an expert in Petri nets, is HARD.  What you are really encapsulating is not the data, or even the process, but the capability of executing the process properly.  Given that, I’m not even sure I consider workflow engines to be a subclass of rules engines, and the remainder of this blog post specifically excludes workflow engines and any other ‘rules engine’ where “how” you execute is more difficult to manage than “what” you execute.

The generic rules engine, on the other hand, is not so specific.  More often than not, rules engine proponents say “use the engine for encapsulating the rules, and allow them to be executed here.”  Nice idea.  Too bad it doesn’t work.

The problem with making it work is, as always, in the details.  In order to delegate the execution of rules, they have to be rules that are efficient to delegate (there goes user interface editing rules), not specifically associated with data coupling (there goes the problem of passing in domain data for a drop-down box), and describable from the standpoint of an algorithm or formula (there goes error handling rules).  In addition, the algorithm sometimes has to be encoded in a programming language that is executed as script.  This is slow and inefficient.

A much better approach is to create a set of strategy patterns to be used for rules validation, write code that implements those patterns, and inject that code, at run-time, into the executing environment. 

Write the rules as small bits of code, carefully controlled, adopting an interface that is called in a standard manner.  Your data drives your system to use the code module.  Inside the module, you have a good bit of freedom to figure out what you want to accomplish.  You can even share ‘global’ values across code modules if the framework is put together well.

Note: this is not a rules engine.  It is a rules framework.  Rules engines execute the rules.  A framework merely gives you the ability to control their instances.  Your app directly executes them.

This is the basic idea behind event driven programming!  Nothing new there.  I’m just suggesting that you use a framework to do it, so that systems can change at run time by changing configuration files. 

For those folks who don’t know what I mean by ‘inject,’ that means that you set up configuration in text files (presumably XML) that declares what code module contains your rules classes, all of which implement the proper interfaces.  Then, your system uses that configuration data to load those modules at the right time and keep them around for rules validation.
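
For folks who want to see the shape of this, here is a minimal sketch, assuming an XML configuration format of my own invention and plain .NET reflection.  It isn’t any particular product; it just shows rules as injected code modules behind a standard interface.

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Linq;

// The standard interface that every rule module implements.
public interface IValidationRule
{
    string Name { get; }
    bool Validate(IDictionary<string, object> facts, out string error);
}

public static class RuleLoader
{
    // Reads entries such as:
    //   <rule assembly="Orders.Rules.dll" type="Orders.Rules.AgreementMustBeActive" />
    // and instantiates each rule class by reflection.
    public static IList<IValidationRule> Load(string configPath)
    {
        var rules = new List<IValidationRule>();
        foreach (var element in XDocument.Load(configPath).Descendants("rule"))
        {
            var assembly = System.Reflection.Assembly.LoadFrom((string)element.Attribute("assembly"));
            var type = assembly.GetType((string)element.Attribute("type"), throwOnError: true);
            rules.Add((IValidationRule)Activator.CreateInstance(type));
        }
        return rules;
    }
}
```

The application then simply walks the loaded rules and calls Validate on each; which rules run becomes a configuration decision rather than a code change in every caller.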

This is far better than using a rules engine.  What’s odd is that I haven’t seen many comparisons of the two, yet dependency injection has clearly won the battle.  Over the years, a lot more code has been written to be injected than to call out to external systems for execution.

So why bother to revisit this topic?  To declare victory for dependency injection and kill the generic ‘rules engine’ concept completely.  Dependency injection won.  Let’s not waste any more time discussing generic rules engines.

Managing the bindings from systems to EAI infrastructure

February 1st, 2007 | Enterprise Architecture

Every system is responsible for publishing its own events.  I hold that as a core requirement of participating in an Enterprise Application Integration Infrastructure (EAI-I).  What does that mean?

The system, as part of its ownership, its code, its configuration, is responsible for describing how it meets corporate canonical event requirements.  That belongs with the system, not with the EAI side. 

EAI is responsible for routing, translation, orchestration, logging, management, etc.  It should behave correctly regardless of how data is collected from the source system. 

The problem is that there are serious advantages to having a central infrastructure to manage these connections.  I spend a lot of my time looking at ‘spaghetti’ diagrams, and one thing that is absolutely clear to me is that I spend way too much time collecting data individually on these integration points.  That reduces my ability to actually manage any of these connections.  As we move more to SOA, we will need this even more. 

What I’d like to see is a standard mechanism that meets the following needs.  If anyone can help me to understand a known standard or existing RFC that addresses these points, I’d appreciate it.

  1. A system publishes and makes available the list of integration points that it has with other systems.
  2. The EAI system queries these published points to drive its configuration and expectations.
  3. Publishing of these expectations should be both dynamic (as a message) and static (as something that can be queried).
  4. The description of an integration pathway or channel must be standardized, so that different EAI infrastructures can use them, and so that different reporting and management systems can leverage them, without adapters.
  5. A system can version the connection points with a release in a way that is not too difficult for developers to understand and work with.

Note that UDDI presents PROVIDERS for integration.  I need CONSUMERS and COLLABORATORS to cooperate as well, in a way that is completely under the control of the system that consumes, collaborates, or provides the integrations. 
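
Purely as an illustration (and not an answer to my own question), here is the kind of shape I imagine such a self-description having.  Every name below is invented; it just restates the five requirements and the provider/consumer/collaborator distinction as code.

```csharp
using System;
using System.Collections.Generic;

// Invented, illustrative types only; there is no standard behind these names.
public enum IntegrationRole { Provider, Consumer, Collaborator }   // the distinction UDDI doesn't cover

public class IntegrationPoint
{
    public string Name { get; set; }            // e.g. "InvoiceCreated"
    public IntegrationRole Role { get; set; }
    public string CanonicalEvent { get; set; }  // which corporate canonical event this point maps to
    public string PartnerSystem { get; set; }   // the other system on the connection
    public string Version { get; set; }         // versioned with the owning system's release (item 5)
}

// Items 1 through 3: the owning system exposes its list statically (queryable at any
// time) and announces changes dynamically (as a message the EAI side can subscribe to).
public interface IIntegrationManifest
{
    IEnumerable<IntegrationPoint> DescribeIntegrationPoints();
    event Action<IntegrationPoint> IntegrationPointChanged;
}
```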

 

Is there value in consistency?

November 21st, 2006 | Enterprise Architecture

Do all of your project managers deliver the same information to their team and management?  Do all of your developers use common tools and techniques?  Do all of your testers follow the same patterns for creating test cases?

Process improvement is an interesting, and sometimes overwrought term.  We can all benefit from ‘excellent practices’ but the counterbalance is that ‘excellent practices’ are the result of steady improvement (six sigma or CMMI style) over ‘common practices,’ and many IT people reject the basic idea of ‘common practices’ altogether.

So what is this idea that some people love, while others despise? 

It is the radical notion that an activity or output that is valuable in a particular situation is also valuable in other (similar) situations, and that you can use proven value from one project to guide and inform staff members working on another (similar) project.

The problem isn’t collecting this guidance.  Everyone is willing to have their idea considered as ‘the best practice.’  The problem is getting other folks to learn, practice, and improve upon that guidance.  They already have a way of doing things, and your ideas may not appear to be all that much better.

One way to attack a ‘common practice’ is by saying “My situation is not similar to yours, so your practice is not valuable to me.”  This is occasionally true, but often it is a claim made by a person who thinks his or her way is just fine, thank you, and doesn’t need the ‘improvement’ offered by others.

Another way to attack a ‘common practice’ is by saying “Your idea is not better than mine, so I won’t adopt it.”  This gets really fun when one person or the other starts trying to create measurements to prove how much better they are.  Don’t get me wrong.  I like measurements as a way of driving process analysis.  However, those measurements have to measure the things that really matter: can you make more money?  Can you deliver to the business better?  Can you cut costs?  Otherwise, the measurements are unlikely to have any relationship whatsoever with stated company strategy or goals. 

To wit: I’ve seen folks quote numbers that talk about the drop in the number of defects if you follow ‘process X’ when that process substantially increases the development time needed to produce a system.  This is fine if you don’t mind increasing costs or sacrificing agility.  Executives and managers get to decide what the priority should be between agility, scalability, reliability, flexibility, and performance.  Here’s a radical idea: we should ask them.

More to the point, should an organization even create a ‘common practice’ guideline at all?  Is there value in asking people to perform their work in a common manner?  Most would say “yes” but I’m willing to bet I’d get a wide array of responses if I asked “how detailed” that common practice should be.

So, to add to the different quality attributes, I add the attribute of process consistency.  What is the value in making sure that your systems are created in a consistent manner? 

It’s a fine line.  What do you think?

What SHOULD a sequence diagram show?

October 13th, 2006 | Enterprise Architecture

For most folks, a UML sequence diagram is something that is either (a) unnecessary, or (b) clearly required and essential.  There is rarely a middle ground.  So when you create a diagram (whether by force or by choice), I’d like you to consider the audience, first and foremost.  What do you want to say to them?

Personally, I don’t believe that it makes sense to create a diagram (of any kind) unless you are trying to communicate a thought that may not be clear any other way.  Diagrams are great for that, especially UML, because there is a rigorous way to create the diagram and a clear meaning for each of the symbols.

Therefore, first off, if you are going to create a sequence diagram, and you don’t do this on a daily basis, brush up on the standard.  Look over the description on Wikipedia or one of the many sites on the web.  (Ignore the agile sites… many agilists don’t have any fondness for modeling as anything more than Whiteboard Art, and I’ve seen a few that play fast and loose with the diagram standards, which lowers the effectiveness of the communication.)

But more important, try to consider this question: “who am I communicating with and what do they need to understand?” 

Let’s say that you are trying to describe a scenario where five systems communicate in a distributed system flow.  Three of them communicate through sync calls to one another.  The other two communicate through async event notifications.

Why would you want to communicate anything at all?  Really.  You can describe things in text, right?  Well, what if the teams that will develop or update or maintain these systems are not familiar with async calls?  What if they’ve never done anything async before?

If that is the case, then you want to make sure that you illustrate the object lifelines.  You want to make sure that you show the “ACK” (aka “thank you”) messages that go from an event notification to the event handler.  You want to show that the lifeline of the caller ENDS before the operations of the async partner are complete.  You want to show that a message is sent back to the collaborator at a later date with information about the processing, and you want to make it clear if that return notification is async as well.

It’s a lot to illustrate.  But the point is not to show messages moving.  It is to educate: what will the reader LEARN by reading your diagram?  What do they not already know? 

If you think like an educator, you often find that you remove excess detail, while clearly showing the things that you need the reader to understand. 

So, to answer the question, what should a sequence diagram show?  It should show the information that justifies its existence in the clearest possible manner, with a minimum of excess detail.

 

Why a workflow model is not code

March 6th, 2006 | Enterprise Architecture

It is no secret that I am not fond of using EAI systems like Biztalk for Human Collaborative Workflow.  I believe, instinctively, that it is a bad idea.  However, I have to be more than instinctive in this analytical world (and company).  I need to be prescriptive at best, and constructive at worst.  So I did some thinking.

When I was in college, I really liked some of the logic languages, especially Prolog.  I found it interesting that much of the real power of Prolog comes from the fact that Prolog is not really a language as much as it is an engine that evaluates logic rules, and the database of rules was dynamic.  In other words, a Prolog program could easily add statements to itself.  It is, in effect, self modifying.

I remember getting into a long debate about what it means to “write a program” with an Assistant Professor who felt rather strongly that no language that supports “self modifying code” should be used at all.   He was all about “proving correctness” while I was keyed in to particular problem sets that defy prediction.

And now, 20 years later, I’m beginning to understand my instinctive reason for believing that human collaborative workflow should not be done with an EAI tool… because Workflow is self modifying.

In order for the EAI engine to be helpful in a workflow situation, every state must be known to the engine at compile time.  The only way around that rule is to modify the logic in the engine itself.  Workflow must be self-modifying to be truly useful, because Humans are Messy.

EAI engines are not known for being amenable to this modification.  A good workflow engine is not restricted in this way, so for it, no problem arises when a workflow manipulates itself.  But for an EAI system, changing the state machine half way through the process, and applying the change to only one instance of the process (itself, usually), requires flexibility in design that EAI systems are not normally capable of.

What do I mean by self-modifying workflow? 

There are two ways to use a workflow engine: one as a code system and the other as a logic database.  It’s kind of like comparing C# to Prolog.  A true Prolog system produces a logic database that is inspected at each step by the Prolog engine.  Therefore, if a block of Prolog code updates the database, the logic of the system changes immediately.  This is not so simple with C#.

If you use your workflow engine as code, (the C# model), then a human being can perform “self modification” of the workflow only in very specific and prescribed manners, and only when the designer of that specific workflow would recognize it.  In other words, you can create a list in your data that represents a list of people that an item must be routed to.  You can modify the list as the system moves through, and your code workflow can inspect the list.  However, the constraints come in that the list is a single thread, and that modifying the list to change the people who have already seen the item is possible but logically meaningless.
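
A minimal sketch of that “list in the data” style, with made-up names: a human can edit the routing list while the item is in flight, but the logic that walks the list is fixed at design time.

```csharp
using System.Collections.Generic;

public class ReviewItem
{
    public string Title { get; set; }
    public Queue<string> RemainingReviewers { get; } = new Queue<string>();  // editable while the item is in flight
    public List<string> PastReviewers { get; } = new List<string>();
}

public static class ReviewWorkflow
{
    // The only "self modification" permitted: changing who the item is routed to next.
    public static void AddReviewer(ReviewItem item, string reviewer)
    {
        item.RemainingReviewers.Enqueue(reviewer);
    }

    // The routing logic itself cannot be changed at run time; it can only walk the list.
    public static string RouteToNext(ReviewItem item)
    {
        if (item.RemainingReviewers.Count == 0)
        {
            return null;  // routing complete
        }
        string next = item.RemainingReviewers.Dequeue();
        item.PastReviewers.Add(next);
        return next;
    }
}
```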

If you use your workflow engine as a logic database (the Prolog model), then a human being can self modify the workflow by adding complex logic, changing evaluation rules, rewriting error handling, and doing other complex jumps that are essential to creating a system that begins, even remotely, to be able to handle the sophistication of human collaboration.

For an EAI engine, this is foolish.  EAI lives “at the center.”  It is a system for allowing multiple other systems to collaborate.  The rules at the center need to be stable, or all routing can suffer.  This is not a good place for very complex behavior models based on self-modifying instances of code. EAI, to function properly, must submit itself to excellent analysis, careful design, and very very very careful versioning. 

And that is why EAI systems are lousy at human collaborative workflow.