March 2006

Design review is not a test of your UML skills

By | March 31st, 2006 | Enterprise Architecture

Meiko asked me yesterday “To prepare for the design reviews, do I need to brush up on UML?” 

It’s really not such a goofy question.  In Microsoft IT, lots of things are changing.  In this environment, I’m hoping to be one of the many who rally for greater emphasis on excellent design, as my prior posts demonstrate.   So, I champion design reviews, and peer reviews, and peer training like study groups.  And not just a review where untrained peers take a peek at your text and go “nice job,” but a real evaluation by architects looking at how well your design reflects tradeoffs, with the teeth to make you change it. 

Hence the question.

Now to be fair to Meiko, she’s a young developer.  Smart, but only about eight years out of college.  She hasn’t been really challenged in this way before, and she wants to be prepared.  I respect that.  I wanted my reply to show that respect.

“I don’t want to pick nits, but in order to design something that someone else will code, you have to communicate in a language they can read.  So, no, you don’t prepare for a design review with UML… you prepare for design with UML.”

She gave me that look like ‘you know what I meant.’  I did.  She’s right.  Not fair.

“OK, you want to know what I’m going to look for when I review the design, right?”  I asked.

“That would help.”  Arms folded now.  Defenses going up.  I’m missing a chance.  Dang.

I grab the pen and head for a white board.  “Let’s say you are in my seat, and you look at a system design that looks like this…”  I start to draw.  Her arms are not folded now.  Good.

I draw a picture of some boxes and some lines.  Nothing special.  A database in the corner.  Everything inside the boundary.  Simple example. 

She smiles, “I’d say the designer was stuck in 1994.  That’s basically a three-tier app.”

“But is it any good?” I ask.  “How do you know?  What objective criteria do you use to decide if this design is any better or worse than any other?”

She opened her mouth, then closed it without a word.  We stood still for a moment, in the hallway next to a really large whiteboard mounted on the only wall large enough to hold it.  I could hear keys clicking as one of the developers typed away in a nearby office.  She looked at me for a second.

“Are you saying that you have objective criteria to measure a design?”  She sounded a bit incredulous.  She spoke the word ‘measure’ slowly for emphasis.  I could see her science training coming through.  She had told me once that her degree was in Physics. 

“Well yes, but I’m not taking credit for writing it.  This stuff comes from the Software Engineering Institute.  It’s called ATAM, the Architecture Tradeoff Analysis Method.  I’ll send you a link.”

“Thanks,” she said.  Her reply was small.  I had just told her I was going to ‘grade’ her using a method that she didn’t know, developed by academics known for creating large and unwieldy things like the CMM and TSP/PSP.  I had just told her that she was going to have to read boring books to be able to justify what she already does quite well.  Not a good message.

“Wait.  It doesn’t hurt.  The ATAM method simply asks you to look at your design from the standpoint of what they call ‘System Quality Attributes,’ what we call ‘the -ities.’  Availability, Scalability, Reliability, Security… stuff like that.  It’s just a more organized way of doing what we’ve been asking folks to do for a long time.  You don’t have to prepare by reading a thousand boring pages.”

She looked relieved.

“You do have to prepare by making sure your design has considered the System Quality Attributes with respect to the requirements.”  I stopped on the last word and repeated it.  “The requirements are what determine whether this silo on the board here,” I turned to point to the diagram we were ignoring, “is a good design.”

“There are always many ways to design a system, many choices to make,” I said.  She was back with me now.

I continued.  “But each time you make a choice, you trade one thing for another.  You may choose to make a monolithic app, and trade adaptability for a specific model of deployment.  You’ve been making tradeoffs all along.  We’re just going to ask you to describe them.”

“In UML?” she chided.

“UML, English, Algebra… pick your language.”  I smiled.  She knew what to do.

 


TCO and the number of applications

By | March 28th, 2006 | Enterprise Architecture

How do you break down the costs of owning your application portfolio into components that can be calculated?  I have a simple expression that I believe encapsulates the TCO of an architectural component (legacy application, SOBA, service provider, etc.).

The Total Cost of Ownership of your portfolio of applications is

   For Each n in Portfolio: sum( BaseCost(Environment(n)) + CpxCost*Complexity(n) + ConnCost(Environment(n))*Connections(n) )

where

  • Complexity(n) is the complexity of application ‘n’ in the portfolio
  • Connections(n) is the number of systems connected to this one by code or data dependencies
  • BaseCost is the cost of owning an app in a given production environment
  • CpxCost is the cost to maintain the app for each ‘unit’ of complexity
  • ConnCost is the cost of each connection in a particular environment
  • Environment(n) is a function that simply returns the environment in which the app runs (on an app server, within SAP, in IIS, as a rich client, as a smart client, etc.)
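
Expressed as code, the same calculation might look something like the sketch below.  This is purely illustrative; the types and cost tables are mine, invented to show the shape of the formula, not taken from any real system.

    // A minimal sketch of the TCO formula above; all names are illustrative.
    using System.Collections.Generic;

    enum Env { AppServer, Sap, Iis, RichClient, SmartClient }

    class App
    {
        public Env Environment;   // where the app runs
        public int Complexity;    // 'units' of complexity, however you choose to measure them
        public int Connections;   // code or data dependencies to other systems
    }

    static class Tco
    {
        public static decimal OfPortfolio(
            IEnumerable<App> portfolio,
            IDictionary<Env, decimal> baseCost,   // cost of owning an app in each environment
            decimal cpxCost,                      // cost per unit of complexity
            IDictionary<Env, decimal> connCost)   // cost per connection in each environment
        {
            decimal total = 0;
            foreach (App n in portfolio)
            {
                total += baseCost[n.Environment]
                       + cpxCost * n.Complexity
                       + connCost[n.Environment] * n.Connections;
            }
            return total;
        }
    }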

I’m proposing this formula.  It would still need validation to prove that it accurately captures the cost of owning the portfolio.  That said, it makes sense, and it is fairly simple to understand.  The formula is based on the following ideas:

  1. The environment in which an app runs dramatically influences the cost of ownership.
  2. The complexity of an app is a major factor in its cost of ownership, more so than other considerations like the time it took to write it or the language it was written in.
  3. The number of dependencies between an app and another app is a factor in TCO.  Therefore, if you have a connection from app A to app B, the connection cost counts twice: once for A and once for B.  However, the nature of the dependency is not particularly important to the cost… just the fact that it is there.

I’d love to hear opinions on this formula.

 

 

Jacob and the data

By | March 25th, 2006 | Enterprise Architecture

When Jacob gets going, the best thing to do is step back and watch.  He stands, all five-foot-six of him, at a white board, “speaking” through the myriad of boxes, arrows, and random words scrawled across the not-quite-white surface, as though to add another layer of blue ink to the washed-out background of partially erased thoughts.  Words tumble enthusiastically out of his mouth only to slide and bounce against the shiny surface of ink and dust, because his back is to me as he writes, as though the pictures are his mouth, and the sounds of his voice are the crickets of a summer eve.

“The design is easy.  This problem is solved.”  More lines and arrows emerge.  He sells his confidence as a commodity, freely traded, highly valued.  I watch and listen.

“We have seen countless articles that show five systems interacting over a messaging hub or bus.  They all show a diagram that looks like…”  An image appears that looks not unlike a spider with square feet.  At the center, a circle, with lines radiating out to rectangles about the size of a paperback book.  In the center circle, Jacob writes “hub” and heads around the room nod.

“The problem with your typical message-based architecture is that it is designed to be reliable when one system goes down,” he continues, striking a line through one of the rectangles, “but we haven’t really discussed what happens when a new system comes up.”

In a single fluid motion, he has slid five feet along the room-length white board.  In this conference room, with six other architects, Jacob is in his element.  Standing at the far left of the white board, against the corner, is Tom.  Tom is tall, slim, and a picture of calm.  He stands with a whiteboard marker in his hand, limp by his side.  He had started Jacob going with a seemingly simple question.

“How,” Tom had asked, “do we start a subscription to a new system without interrupting the message flow to the existing ones?” 

Of course, no one but Jacob had really understood the question, so he asked it again, this time using the white board.

“If I start with two systems talking,” Tom said to the white board, as he drew a box, a circle, and a box, a few inches apart in a straight line.  He connected them with lines.  System A sends 1000 messages a day.  System B subscribes. 

“Now I want System C to start up.”   Another rectangle appears, this time directly below the circle.  He quickly swiped a line from the new rectangle back to the central circle.  “But system C will need a lot of data that A has, so we prepopulate C’s database.”   This time, a dashed line from one rectangle to another. 

Jacob had clearly been ready to jump up, but you could see he was waiting for Tom to finish the question.  I know he held back out of respect for Tom, who is brilliant in his own right… but you could see an expression crossing Jacob’s face like that of a fourth-grader, sitting in the front row, throwing his hand in the air because he has just recognized a question from the teacher that he knows the answer to.

“But that prepopulation takes time.  It always does.  Now, when you turn on C, it has already missed some of the transactions sent out from A.  The database is out of sync unless we turn off System A… but we can’t!  It isn’t our system and it’s mission critical.”  Tom finished with a slight bit of agitation.  His diagram had a half-finished look about it.  He had circled the bottom rectangle about eight times.  There was a faint smell of white-board ink.

That’s when Jacob had taken over.  And now, he was on to his second diagram, the spider picture freshly drawn and a new diagram emerging, this time with a row of squares and long thin rectangles both above and below them.

“Let’s look at this from a different angle,” Jacob said, this time addressing the group.  Patricia was sitting at the end of the long wooden table, and around the back, able to see the entire board, were Phil, Ram, Meiko, and myself.  Tom remained standing in the corner.

Above the top rectangle, Jacob wrote “bus” and we were back on the same page.  He had simply taken the “spider” diagram and blown it out, so that the message mechanism was on top, with the applications below.  But what was the parallel rectangle down below?

As though to answer my question, Jacob pointed at the bottom rectangle.  “This, my friends, is the data bus.  Unlike the message bus,” (pen moves to the top rectangle), ” which passes messages from one system to another, this little beastie,” (pen back down), “collects data extracts from the applications on regular intervals for BI feeds.  This is where you put SQL Integration Services.”

“That still doesn’t answer my question,” Tom chided.  Jacob waved his hands and turned back to the board.  This time, he drew a cylinder next to the top message bus.

“When messages are sent from A to B, they are also copied to a data store,” he said, indicating his new cylinder.  “Assume the data is extracted from system A at noon.”

Jacob continued, pointing at the third rectangle in the middle, which we were supposed to infer was system C.  “Now, when C comes online at 4pm, it subscribes.  However, the subscription request includes a request for all messages that A sent since noon, since that’s the timestamp of the data extract.”

“An agent picks up the message request, goes to the data store, and sends the messages to C that had already been sent by A.  Like replaying a key log.”  Now Jacob was facing the room as he spoke.  His diagram was done, or at least we thought.

“So what’s the point of the data bus?”  Patricia’s turn.  I think she knew the answer, but she was just as interested as I was to hear Jacob explain it.

“To feed the other half of that data store.  The data store holds basic query data, and maybe even some real-time aggregation data, so that many of the messages coming through the message bus can be answered without actually hitting the source system.  Think of it as a BI base for the SOA architecture.”

At this point, Jacob drew a large arrow from the bottom rectangle to the ‘data store’ cylinder. 

“Hell, I learned something today,” Phil drawled in what was left of his Alabama accent after spending a decade in the Pacific Northwest.  “That’s why I like these sessions,” he said, as he started to collect his things.

The meeting had been over for quite some time.  It was the end of the day, and six folks had stayed behind to draw diagrams and talk shop. 

I left with Phil’s words still bouncing around in my head.  “I learned something today.”

And that makes it a good day.
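
Later, for my own notes, I sketched the catch-up subscription Jacob was describing.  The types below are stand-ins I made up to capture the idea, not code from anything we actually run: every message that crosses the bus is also copied to the store, and a late subscriber replays whatever its source sent after its data extract was taken, then picks up the live feed.

    using System;
    using System.Collections.Generic;

    class Message
    {
        public string Source;     // e.g., "System A"
        public DateTime SentUtc;
        public string Body;
    }

    interface IMessageStore
    {
        // every message published on the bus is also copied here
        IEnumerable<Message> SentSince(string source, DateTime sinceUtc);
    }

    class CatchUpSubscription
    {
        public void Start(IMessageStore store, string source,
                          DateTime extractTakenUtc, Action<Message> deliver)
        {
            // 1. Replay the messages the new system missed between the data extract and now.
            foreach (Message m in store.SentSince(source, extractTakenUtc))
                deliver(m);

            // 2. From here, the normal live subscription takes over (not shown).
        }
    }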

 

Will WCF Indigo empower SOA in your enterprise?

By | March 20th, 2006 | Enterprise Architecture

In order for Service Oriented Architecture to have the impact that architects around the world have been touting, we need to be able to take large applications and architect them down to services and consumers.  The services are difficult to do well.  So difficult, in fact, that it is easy to forget the rest of it.

But one key aspect of this path is the communication infrastructure itself.  It has to be fast.  It has to be flexible.  It has to be standards-based, but willing to bend.  It has to be secure.

Yesterday’s SOAP Web Services are not fast enough for SOBA.  SOBA (Service Oriented Business Architecture) implies that the business app itself is composed only of consumers and services.  This is an extension of SOA in that SOA apps are full (some would say legacy) apps with their own database, while SOBA consumers are simply service consumers.  No database.

This means that even simple CRUD operations have to run across the SOA interface.  Preloading a user interface has to consume service data.  For this to work, the infrastructure has to be fast.

The SOBA infrastructure has to have a way to put intelligent caching directly into the service consumer itself, so that the conversation doesn’t become “chatty.”  It needs to spend very little time serializing and deserializing data.  There cannot be noticeable overhead caused by the configuration infrastructure.
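
As a sketch of what I mean by caching in the consumer itself, consider a simple decorator wrapped around the service proxy.  The service and types here are hypothetical, not any real API; the point is only that repeat reads are answered locally instead of crossing the wire again.

    using System.Collections.Generic;

    class Customer
    {
        public int Id;
        public string Name;
    }

    // The contract the consumer is written against (hypothetical).
    interface ICustomerService
    {
        Customer GetCustomer(int id);
    }

    // A caching decorator wrapped around the real service proxy.
    class CachingCustomerService : ICustomerService
    {
        private readonly ICustomerService _inner;
        private readonly Dictionary<int, Customer> _cache = new Dictionary<int, Customer>();

        public CachingCustomerService(ICustomerService inner) { _inner = inner; }

        public Customer GetCustomer(int id)
        {
            Customer cached;
            if (_cache.TryGetValue(id, out cached))
                return cached;                        // answered locally; no round trip, no chatter
            Customer fresh = _inner.GetCustomer(id);  // only the first read crosses the service boundary
            _cache[id] = fresh;
            return fresh;
        }
    }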

In an async message-based world, these aspects don’t rise to a level where we would consider them a problem.  Not so in SOBA.

I have high hopes for Windows Communication Foundation (aka Indigo).  WCF is all of the things I mentioned.  It is fast.  It is standards-based.  It consumes its configuration quickly.

If you are finding that WCF has empowered a true SOBA environment, drop me a line.  I’m interested in swapping stories. 

Governance is knowing who holds the key

By | March 18th, 2006 | Enterprise Architecture

Ever had a good idea that could make millions for your company?  Did you tell the right person?  Did you know who that person was?

I cannot count the times I’ve heard it.  “You know, if we were really smart, we’d be doing XYZ.”  But then it falls to the ‘Inventor’ to sell the idea.  We all know what happened to the inventor of Velcro (nothing!).  People who see a need and come up with a good idea are often not adept at explaining, selling, convincing.  Sometimes, the idea is not good.  Sometimes the benefit is unclear.  But the biggest reason that good ideas don’t happen?  The inventor doesn’t know WHO to sell to.

At the end of the day, IT Governance is about knowing who to sell to.  It is about having a clear idea of what group, what council, what committee, and what individual can answer all of the important questions.

I was given a short lecture on this aspect of governance yesterday, and the light went “on.”  Doh!  How come I didn’t see this as clearly before?  It’s stuff I knew already.  Common sense, really.  But I’ll be darned if I really got it.

Start with your vision.  What should the world look like?  Then develop principles: good practices that should lead to that world.

Now take those principles and break out the questions that need answering.  Assign responsibility for answering each one.  Create a forum for deciding it.  Make the communications in and out of that forum visible.  Define the rules of the game.  Then publish them.

And now… measure.  Make sure that the ‘good things’ are actually happening.  If they are not, come back to these people and ask ‘why not?’  You will know who to ask.  (So will the executives).

Example:

Vision: I want an IT ecosystem where any system can be easily hooked up to “integrate” with any other system, so that when business changes, we are ready.

Principle: Every application shall be designed with integration in mind.

Tactical: For every system of enterprise scope, or which manipulates enterprise data, interfaces will be developed and supported for communicating data, events, and functionality.

Governance questions:

  • Who decides which systems are of enterprise scope?
  • Who decides what data ‘subjects’ are of enterprise concern?
  • Who decides what mechanisms will be required for applications to support?
  • Who decides when an application must consume the services of another, instead of creating the services all over again in code?
  • Who decides what belongs on the list of ‘enterprise events’?

If you can answer these questions, then the inventor of a good idea has someone to talk to.  Executives have someone to go to. No one is left wondering “how did this happen?”

When good design is not an accident

By | March 10th, 2006 | Enterprise Architecture

One opportunity often missed in a large IT organization is the chance to lift up another person’s design skills.  Perhaps we are competitive, or perhaps, sometimes, we figure that “it’s all the same anyway,” but a lot of IT project designers don’t want to show their designs to other folks.

But if I never look at your designs, how will I improve?  And if you never allow me to offer feedback to your design, how will you improve?

Artists get this right.  So do craftsmen.  Emphasis is placed on being recognized.  For that to happen, your design has to be in an understandable medium, and it has to be on display.  Not on a shelf where someone COULD go look at it if they want to, but on DISPLAY, where other folks have no choice but to see it.

And then, not just to see it, but to compare, critique, appreciate, and exemplify.  There need to be design competitions, and the winning of a design competition should mean something tangible, like a greater chance of moving up or a bigger bonus or even public praise and acclaim.

Smaller companies that don’t have so many IT workers may not be able to participate, but they should be able to partake of the results.  Acclaim should extend beyond the walls.

We do have “showcase” apps in Microsoft IT, but only where it will sell a product or illustrate how to solve a problem with MS tools.  Not so much as a mandatory mechanism to bring out the best in IT design.

Otherwise, good design happens when a good designer accidentally has a good day or is accidentally assigned to a project that they would be good at.  I say “accidentally” because a result is either an expression of the system that produces it, or it is an accident of the right combination of skilled people and a project that suits them.

Making good design a part of the system, reinforcing it, rewarding it, and heaping public praise and acclaim on those who practice it will go a long way towards making excellence in design a normal part of life.

When emergent design doesn't

By | March 7th, 2006 | Enterprise Architecture

It is quite possible that the notion of emergent design is anathema to architects.  Allowing a design to emerge from good practices is a bit like building a house by dragging wood to a homesite and deciding where the first wall should go.  It’s quite interesting and perhaps kind of artsy… until the first windstorm.

That said, a good many applications are built with the notion of allowing the design to emerge from frequent refactoring.  A lot can be said for “big up-front design” in terms of iterating a design on paper instead of in code.  On the other hand, it is easier to overdesign something if you aren’t actually writing the code, and the design doesn’t “appear” to offer a lot of value until the system is running.

Back in High School, I spent two years in drafting class.  Well, the first year was drafting.  The second year was labeled “architecture” but it was really a very light introduction to architecture.  We learned how to do home plans. 

I never forgot those lessons about careful planning and attention to detail.  We used patterns, although I don’t think my teacher had any idea that they were called patterns at the time.  It was just the “right way to do things.” 

In a way, these designs emerged.  From the point of my sharpened pencil scratching across the tip of a t-square on translucent vellum paper, I learned the value of a visual model, and how to erase a mistake without smudging everything up.  I learned to try things out, and accept that some designs were interesting, but not good enough to hand in.  For every design I showed my teacher, there were 15 sketches and five fully drawn main-floor plans that never made it past my drawing tube.

The designs emerged.  They grew out of my fascination with the environments in which people live, the spaces in which they eat and sleep and raise their children.  They grew out of standing in the central space of a busy shopping center, watching, waiting, hoping to catch a glimpse of why one shopping center had crowds, while another had communities.

I think, if I were to walk to a drawing board, tape up some vellum, and whip out a pencil after so many years, I’d make a huge mess… but I’d make a better house than I could have possibly drawn when I was sixteen.  You see, I’ve never stopped noticing those spaces.  I’ve never stopped looking for the quiet corner, the lovers nook, the family patio, the inviting kitchen.  I’ve been cataloging those patterns all these years, in my head, waiting patiently for some time when I’d need them.

And that is where emergent design has its place.  When it emerges from experience.  When you don’t start by dragging wood to a homesite and nailing, but rather when you whip out your vellum and draw, from the top of your head and the tip of your sharpened mechanical pencil, a design that is more mature, more excellent, more fit to the job than anything you could have done when you were still learning the craft.

Designs emerge… in the mind and imagination of every dreamer who refuses to grow up.

Refactoring is important, because nothing is “right” until it is understood, and nothing complex can be understood until it is done… but on the other hand, design, up front, is an expression of elegance and beauty that should not be foregone just because it is tough to do.  It is at that fine point, the point of intersection between experience and elegance, the point of a sharpened mind and a sharpened pencil, that architecture truly emerges.

Why a workflow model is not code

By | March 6th, 2006 | Enterprise Architecture

It is no secret that I am not fond of using EAI systems like Biztalk for Human Collaborative Workflow.  I believe, instinctively, that it is a bad idea.  However, I have to be more than instinctive in this analytical world (and company).  I need to be prescriptive at best, and constructive at worst.  So I did some thinking.

When I was in college, I really liked some of the logic languages, especially Prolog.  I found it interesting that much of the real power of Prolog comes from the fact that Prolog is not really a language as much as it is an engine that evaluates logic rules, and that the database of rules is dynamic.  In other words, a Prolog program can easily add statements to itself.  It is, in effect, self-modifying.

I remember getting into a long debate about what it means to “write a program” with an Assistant Professor who felt rather strongly that no language that supports “self modifying code” should be used at all.   He was all about “proving correctness” while I was keyed in to particular problem sets that defy prediction.

And now, 20 years later, I’m beginning to understand my instinctive reason for believing that human collaborative workflow should not be done with an EAI tool… because Workflow is self modifying.

In order for the EAI engine to be helpful in a workflow situation, every state must be known to the engine at compile time.  The only way around this constraint is to modify the logic in the engine itself.  Workflow must be self-modifying to be truly useful, because Humans are Messy.

EAI engines are not known for being amenable to this kind of modification.  A good workflow engine is not restricted in this way, so for it, no problem arises when a workflow manipulates itself.  But for an EAI system, changing the state machine halfway through the process, and applying the change to only one instance of the process (itself, usually), requires a flexibility in design that EAI systems are not normally capable of.

What do I mean by self-modifying workflow? 

There are two ways to use a workflow engine: one as a code system and the other as a logic database.  It’s kind of like comparing C# to Prolog.  A true Prolog system produces a logic database that is inspected at each step by the Prolog engine.  Therefore, if a block of Prolog code updates the database, the logic of the system changes immediately.  This is not so simple with C#.

If you use your workflow engine as code (the C# model), then a human being can perform “self modification” of the workflow only in very specific and prescribed manners, and only in ways the designer of that specific workflow anticipated.  In other words, you can create a list in your data that represents the people an item must be routed to.  You can modify the list as the item moves through, and your coded workflow can inspect the list.  However, the constraints are that the list is a single linear thread, and that modifying the list to change the people who have already seen the item is possible but logically meaningless.
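
A small sketch of that “code model” constraint, with made-up names, since this is easier to see than to say:

    using System;
    using System.Collections.Generic;

    // In the 'workflow as code' model, the only self-modification available to a human
    // is editing data (here, a routing list) that the compiled workflow already expects.
    class RoutingWorkflow
    {
        // People can be added or removed while the item is in flight...
        public List<string> Approvers = new List<string> { "Meiko", "Tom", "Patricia" };

        public void Route(Action<string> sendTo)
        {
            foreach (string approver in Approvers)
                sendTo(approver);

            // ...but the shape of the workflow (one linear pass, then done) was fixed at
            // compile time.  A new branch, a new rule, or new error handling means changing
            // and redeploying the code, which is exactly the Prolog-style self-modification
            // this model cannot express.
        }
    }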

If you use your workflow engine as a logic database (the Prolog model), then a human being can self modify the workflow by adding complex logic, changing evaluation rules, rewriting error handling, and doing other complex jumps that are essential to creating a system that begins, even remotely, to be able to handle the sophistication of human collaboration.

For an EAI engine, this is foolish.  EAI lives “at the center.”  It is a system for allowing multiple other systems to collaborate.  The rules at the center need to be stable, or all routing can suffer.  This is not a good place for very complex behavior models based on self-modifying instances of code. EAI, to function properly, must submit itself to excellent analysis, careful design, and very very very careful versioning. 

And that is why EAI systems are lousy at human collaborative workflow.

Interesting tool for schema-first design

By | March 5th, 2006 | Enterprise Architecture

I guess it goes without saying that you cannot communicate in a language unless at least two people are using it.  That was always the problem with Esperanto… interesting to learn, hard to find someone to converse with.  WSDL is kinda like that.

One of the four tenets of SOA is that we share contract and not class… but most developers attempting to make services don’t really do that.  They develop a class interface, abstract it into a WSDL description and share it, without ever making the MENTAL distinction that they are making it into a contract.
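
To make that mental distinction concrete, here is a tiny illustration.  The types are hypothetical and no tooling is implied; it is only meant to show the difference between exposing a class and designing a contract.

    // Class-first habit: the 'contract' is whatever WSDL a tool generates
    // from the implementation class after the fact.
    public class CustomerManager
    {
        public CustomerRecord Load(int internalId) { /* ... */ return null; }
    }

    public class CustomerRecord { public int InternalId; public string Name; }

    // Contract-first thinking: the messages are designed first, as a contract in their
    // own right, and the implementation is then written to satisfy them.
    public class GetCustomerRequest  { public string CustomerNumber; }
    public class GetCustomerResponse { public string Name; public string Status; }

    public interface ICustomerContract
    {
        GetCustomerResponse GetCustomer(GetCustomerRequest request);
    }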

From a design standpoint, I know that a best practice is only a practice if someone is practicing it.   If the tools prevent us, at the design stage, from describing our interfaces in purely abstract terms, then we aren’t practicing. 

So, the following tool shows up linked from another blog.  (I’ll skip the intermediary) 

http://www.thinktecture.com/Resources/Software/WSContractFirst/default.html

This tool is a Visual Studio add-in that allows us to better use WSDL as a design tool, not just something that is output from VS after the class is created.

Maybe, now, it will be just a little easier to convince folks to actually practice the creation of the contract.