There is one big thing we must do if we are to make IT align with business strategy: we need to get IT out of the role of interpreting the whims and desires of the business. The good folks in IT are really bad at mind-reading. As long as we are in the “mind-reading” business, we will never be given credit for what we do well: automation.
The answer: let the business folks write free code. Not just any business folks. We let Business Process Developers write free code.
What is free code? Free code is unmaintainable code that wires together service calls in a way that is inexpensive to produce. Free code is mashup code. Bugs can be fixed, but we don’t really maintain it. If we want to change free code, we write it again. It was so inexpensive to build that it costs less to rewrite than to modify in any non-trivial way.
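To make the idea concrete, here is a minimal sketch of what such throwaway “wiring” might look like. The service stubs and data are purely illustrative; in practice these would be calls to real service endpoints:

```python
# Hypothetical service stubs standing in for real service calls.
def fetch_orders(region):
    return [{"id": 1, "region": region, "total": 120.0},
            {"id": 2, "region": region, "total": 80.0}]

def summarize(orders):
    return {"count": len(orders), "revenue": sum(o["total"] for o in orders)}

# "Free code": straight-line wiring, no abstraction, no design.
# If the need changes, rewrite these lines rather than refactor them.
report = summarize(fetch_orders("EMEA"))
print(report)
```

The point is not the code itself but its economics: three lines of glue are cheaper to throw away and rewrite than to maintain.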
Free code, in order to be truly free, needs to be generated from tools that are NOT coding tools. In other words, software development environments are too rich for free code. Why? Because it is too tempting to build expensive code. We need to differentiate, then, between the rich, highly designed, object oriented code that software developers produce, and the free code that business process developers will produce.
Note: I said that free code is unmaintainable. Code is unmaintainable because its complexity exceeds the ability of a developer to maintain it. Let’s dig a little deeper. Why do we need to maintain code? Because code is expensive to write. Therefore, it is currently cheaper to fix it than rewrite it. On the other hand, what if code were cheap, or free? What if it were cheaper to write it than maintain it?
Then we would never maintain it. We’d write it from scratch every time.
Sure, we can choose to write maintainable code. We can use practices like patterns, object oriented development, and careful design principles. On the other hand, we can give our business project managers an environment where they can describe their needs and code is simply expressed from those needs. If the code that comes out doesn’t meet their needs, the business process developer knows it the moment they run their code.
What is the value of doing this?
1) Lower the cost of IT through reduced skill requirements. The skill set of the Business Process Developer is different from that of a software developer. Traditionally, we’ve sought folks with both skill sets to employ as software analysts. This usually meant training someone. What is wrong with that? Answer: We’ve created expensive specialists to overcome tool deficiencies. Why not fix the tools? Then we won’t need the specialists that cost so darn much.
2) The speed of development goes up. If the business process developer can change the process wiring readily, then the software developer can focus on making the needed updates to the services themselves. This removes the coupling between process and code that slows down EVERY project in IT.
3) Projects become more agile. Since a business process developer can develop a mashup of services quickly, they can demonstrate that mashup very readily, directly to business stakeholders. A change can be shown to the business folks quickly as well. If the business needs change, or their understanding grows, and they need the services to do something more than they do, then this kind of agile process encourages rapid feedback to the developers who own the services themselves.
4) Solution quality goes up. Since we can focus our deep design team on developing the services that the business process developers consume, we can improve the quality of those services independently. This allows for better measurement of quality and an increased focus on the key quality measures inside each service. Reusability is a natural outcome of high quality services.
What does this mean for our tools:
We need to separate business process modeling from software development and produce rich tools aimed at the needs of the BPM practitioner. Those tools need to start and end with an understanding of business capabilities, tied through to business processes, and down to events and business documents against a common information model.
We need our tools to reduce the leaky abstractions that we currently call ‘services’ by helping developers build services that are very simple to consume by the business process developers. We need to capture these requirements and act on them through automated mechanisms built in to both the BPM environment and the IDE.
What does this mean for our processes:
The good folks in IT need to formally and officially take control of managing the common enterprise information model and the business event ontology. If a business wants to change the data and event models, they need to work through a published process that allows and encourages consensus.
The good folks in IT need to formally allow business process developers to easily develop, test, and deploy their processes. Deployment is a problem because IT folks normally just ‘fudge’ their way through deployment processes. If we are going to let business process folks write code that we deploy, then it needs to be very simple to deploy that code.
Free code makes sense… We need to align IT to business, and this is one very useful mechanism to do it. It is time to stop getting in each other’s hair.
Hear, hear.
But what I want is something like Popfly on/for the desktop. Not the web, not even hosted in a web page. I want to right-click on the desktop and say "Create new component", have the screen dim out, and see a Popfly-like design surface overlaid on it.
Want a report of last quarter’s results? Create a new component that consists of a data source and a report form that consumes it. Set a few parameters, and poof! Instant report UI. Drag it onto the sidebar, and poof! Instant report gadget.
You made your argument for this but did not give a solution or example of what the solution might be for a "Business Process Developer". What is your proposal for this?
I am old enough to know that we’ve been down this road many times before (CASE, anyone?). You need to really think about the following statement:
If the code that comes out doesn’t meet their needs, the business process developer knows it the moment they run their code.
I am confronted every single day with abundant evidence that this is simply not the case. In fact, the opposite is more often true: Somebody discovers six months too late that the business process is fundamentally broken by a minor change to the workflow. That’s not a criticism of the workflow developer or report writer or business process developer. It is a mistake to assume that the connections between services can be rearranged, repurposed, and reused without a deep understanding of the services themselves. We software developers like to think that we can make systems that mimic hardware (standard bus architecture, etc.). Instead, our systems are much more like biological systems where even interconnections that use the same ‘technology’ vary tremendously (e.g. your brain and your gut are both connected to your blood stream for the same reason, but I wouldn’t suggest trying to swap them around).
Nick
Sounds like you are trying to describe "codeless" development, which is a dream that Ismael Ghalimi (http://itredux.com/blog/) pursues, via his BPMS Intalio (http://www.intalio.com/).
It’s a goal I’ve pursued in the past, just like JohnCJ above, and SO FAR, my experience is the same. The whole service-based framework idea suggests that it is possible, but I also doubt that services can just be strung together successfully unless you have a pretty good understanding of the operation of said services. But I am watching Ismael and Intalio closely ….
Another excellent article. I have written a first attempt at something like this (based on the BPMN notation) that allows process mappers to join services together. It is not as easy to use as it needs to be (especially around data flows). The main problem has been getting people to use it for creating executable processes, not just maps.
After reading your article, I can see that maybe I should be targeting business users, not developers.
Thanks for the article. One of the reasons why code requires maintenance is the iterative nature of the development process. There is not so much pure innovation out there. We simply want to deliver functionality like X, but with our bits on top (the added value)…
I like to think that BPM, or abstraction at a similar level, will help integrate business users’ input more rapidly. And it is a form of code, after all….
@KBac,
There is still an equal share of "art" in the science of software maintenance. We ask the question: when does it become cheaper to change what we have than it is to write something new?
The point of my post: our tools should be SO GOOD (still a fantasy) that the decision to modify rather than write new code literally goes away. There should be no difference.
This means that the process developer can understand what is going on quickly, can modify the process or create a new one with the same level of ease, and can verify and validate its correctness in an elegant and repeatable manner. Deployment shouldn’t require a dual degree in neurology and electrical engineering.
There’s still a way to go.
@JohnCJ
I remember CASE tools. They were sold with lots of blaze and fury but they NEVER approached anything similar to what we can currently accomplish with reference implementations of BPEL and BPMN. Add workflow (BPEL4People) and we are coming very close to creating a paradigm that allows the composition layer to be completely independent of the services layer.
And you are right in that most of our service developers are completely clueless about how to play in that space. I am working on a framework that would allow the creation of services that would work well in that world, where composition is not only possible, but practical.
It is one thing to curse the darkness. It is another altogether to light a candle.
Nick,
I don’t want to start an aphorism war, but make sure that candle you’re holding isn’t a stick of dynamite.
I’m not a BPEL expert by any means, so I would appreciate it if you would correct any misconceptions I have about it. My assertion is that practical composition of services requires some way for the service builder to specify the preconditions and environmental assumptions that the service requires. I don’t see anything in BPEL that allows that. Have you considered that for your framework?
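To make the question concrete, here is one way a service builder might declare preconditions so that a composition tool could check them before wiring the service into a process. This is purely an illustrative sketch, outside BPEL itself; the class, service name, and checks are all hypothetical:

```python
# Sketch: a service publishes its preconditions as named checks, so a
# composition environment could verify assumptions before composing it.
# All names here are illustrative, not part of BPEL or any real framework.
class ServiceContract:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = preconditions  # list of (description, check) pairs

    def unmet(self, message):
        """Return the descriptions of all preconditions the message violates."""
        return [desc for desc, check in self.preconditions if not check(message)]

credit_check = ServiceContract(
    "CreditCheck",
    [("customer id present", lambda m: "customer_id" in m),
     ("amount is positive", lambda m: m.get("amount", 0) > 0)],
)

print(credit_check.unmet({"customer_id": 42, "amount": -5}))
```

Whether something like this could be layered onto a BPEL composition environment is exactly the open question.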
Nick,
This post reminded me of the idea of the Software Product Line approach (Microsoft calls it Software Factories). There are two major roles there, Domain Engineer and Application Engineer, with different skills and responsibilities.
The Domain Engineer analyzes and qualifies the domain and, if it makes economic sense, produces an Application Engineering Environment that may well include a domain-specific language (DSL). Ideally, the semantics and syntax of the language should lie in the problem domain, not in the solution.
An Application Engineer then uses that environment to rapidly produce the members of the product line. There are constraints imposed by a DSL on what an Application Engineer can do, but they were imposed on purpose to (for example) prevent the Application Engineer from making mistakes.
Also, the code generated by the Application Engineering Environment can be fully automated; in that case it can be regenerated every time the Application Engineer changes the model built using the DSL. So now, instead of talking about maintainability of the code produced by the environment, we can talk about maintainability of the model built in the DSL, which should be much simpler.
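A tiny sketch of that “maintain the model, regenerate the code” idea: the DSL model is plain data, and the whole generated artifact is re-emitted from it on every change. The model format and generator here are illustrative, not any real tool’s:

```python
# The "model" an Application Engineer edits: plain data, not code.
model = [
    {"step": "validate_order", "next": "charge_card"},
    {"step": "charge_card",    "next": "ship"},
    {"step": "ship",           "next": None},
]

def generate(model):
    """Regenerate the full artifact from the model; nothing is hand-patched."""
    lines = []
    for node in model:
        target = f"then {node['next']}" if node["next"] else "done"
        lines.append(f"on {node['step']}: {target}")
    return "\n".join(lines)

print(generate(model))
```

Because the output is regenerated wholesale, maintenance effort moves entirely to the model.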
Eventually I believe we will get to that industrial model of software development, but I think not very soon.
@JohnCJ,
ROTFL
Sometimes I wonder if I lit a candle or a stick of dynamite! Heck… most of the time!
You ask an excellent question: what if the preconditions for a message are not met on the subscriber at the exact moment when the publisher sends it? How do we keep from losing the message?
I’ll dedicate a new blog entry to that. Far too long to respond to in comments. GREAT QUESTION!
Nick,
I like what you’re saying, but aligning mashups with business process is too close to what IT does with BPM/BPEL, etc. The real value in mashups is to support the notion of ad-hoc integration. Ad-hoc integration is user focused, situational and should be share-able. It is also very small. BPEL, BPM, etc is very large and belongs in IT. Business process people typically work on the big business processes. Mashups are better suited for helping the business user get out of having to integrate everything using Excel.
For a good mashup overview, check out Deepak Alur’s blog:
http://blogs.jackbe.com/2007/07/defining-mashups.html
@John,
Do you think the business cannot have a PM that manages a BPEL developer who combines services together?
I know business units that grow their own IT department! They can certainly handle a BPEL developer. No need for that to be inside IT.
As for pure (thin) mashup, nothing I’ve said opposes that paradigm… but not everything can be done as a mashup. We can have BOTH.
— Nick
Hi Nick,
I agree that business can have a PM who manages BPEL development.
I was really trying to make the distinction between heavy weight BPEL/BPM and lightweight mashups.
Developers create BPEL and users create mashups.
-jc
Hi John,
I see where we went astray. I said "Free code is mashup code." I believe that mashup code has come as close as anyone has come to the concept of free code. I meant it as an example of what "free code" really is.
Our current tools are not there. Our tools make it so complicated to create a business process diagram that only a developer can do it, and then to decorate that diagram with service interfaces and endpoints and channels, only a developer can UNDERSTAND it.
We need to fix this. Our current tools MUST improve if we are to help empower BPM transformation.
— Nick
A quote from my book:
The most cogent critique of this line of thinking can be inferred from Fred Brooks (“No Silver Bullet.”) Assume for a moment that no central IT organization exists, or that its sole concern is infrastructure. Programming logic and business process support is entirely owned by the business organization, and sophisticated process visualization tools and rules engines front ended by portal frameworks have virtually eliminated all traditional development (a nirvana this author is skeptical about, but let’s continue the thought experiment).
The fundamental complexity does not go away. A mis-configured business rule could cost millions in an instant. A poorly conceived business process could make hundreds of customers unhappy on its first implementation.
New processes, component services and their choreographies, business rules, and portal capabilities will still require testing, and it will still be prudent to version them, manage changes to them, assess the risk of new configurations, and provide for change rollback. The need for quality assurance and extensive testing of proposed new functionality will not go away, and in general the same problems that have dogged IT through the decades will remain, albeit with a different face.
This complex infrastructure will still be subject (perhaps even more so) to the entropy of complex systems, and will require portfolio management principles that should be centrally coordinated – just as corporate departments may control their own financial resources but still be accountable to centralized financial discipline.
In general, the IT industry still seems fixated on shiny new objects, and in fundamental denial about the rising legacy swamp waters of obsolescing systems threatening to overtake all innovation. The point is not BPM, SOA, portals, or autonomic computing – the point is the overall run rate of IT, and how to truly drive it down. Achieving this will not be quick or easy; it will simply require much hard work and many painful decisions in the typical large, long-lived IT organization.
-ctb
@Charlie,
Not sure I follow you, sir. You say that ‘free code’ is a silver bullet (in the sense of ‘no silver bullet’) but then mention that your goal is to drive down the IT run rate.
Same thing.
As for a single defect costing millions: true and false. That is certainly true today. However, in a world where applications can be stitched together by process developers, the services are self-defensive. It is wildly unlikely that an error that could cost millions would escape a self-defensive service, any more than the possibility that replacing one car battery with another could cause the car to start to fly.
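A minimal sketch of what “self-defensive” might mean at the service boundary: the service enforces hard limits internally, so a mis-wired process cannot trigger an outsized effect. The service name and limits below are illustrative only:

```python
# Sketch of a "self-defensive" service: every request is validated against
# hard limits inside the service itself, regardless of who wired the call.
class TransferService:
    MAX_AMOUNT = 10_000  # illustrative ceiling enforced by the service

    def transfer(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.MAX_AMOUNT:
            raise ValueError("amount exceeds service limit")
        return {"status": "ok", "amount": amount}

print(TransferService().transfer(500))
```

The guard lives inside the service, not in the free code that calls it, which is what keeps the cheap wiring cheap.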
If you notice in my post, I never suggested the riddance of software development. New code would continue to be needed to support changes in the core capabilities of the underlying services.
As far as versioning and deployment of a misconfigured application… in a truly free-code environment, the tools that allow a process to be presented into one environment can easily allow a process to be presented into more than one environment. As a proponent of ERP for IT, you must recognize this, since the ability to deploy a business process to a sandbox or testing environment is a common capability of ERP systems.
Of course, testing will still be required… I never suggested that business process developers weren’t developers. I simply suggested that they need not write ‘maintainable’ code or ‘well designed’ code, because the environment would provide access to the services and the services are designed for reasonable local optimization.
When I remodel my bathroom, I go to Home Depot and purchase a new toilet. I do not need to create a special design for the ring that connects the toilet to the rest of the plumbing. It is standard. The toilet, on the other hand, can be quite unique with very separate features.
So what about the complexity of the toilet itself? Someone still had to design and build it. This is true, but it was probably not hand crafted. Very few are, these days. It makes no economic sense.
A world where standards allow free competition is not a world of ‘sameness.’ Innovation thrives… just not in a ‘craftsman’ manner. With the ability to stitch together services we add another layer: the standardization of the service model itself. This allows mix-and-match.
If you buy a product and you don’t like it, return it or replace it. No need to craft your own. Why would you?
Note that standards do not have to be optimal or even universal. My clock radio cannot plug into a wall in London, even though it works fine in Seattle. Different standards. But they are still standards, and that allows the manufacturer to build one device and sell it in two systems with minor changes.
Is a large two-pronged plug optimal? Nope. So what. It is standard. That is all I need. Apparently, I’m not alone.
So if you want to know why I don’t believe it is difficult to imagine a world where a service is limited in the amount of damage it can cause, consider this: how many houses in the USA exploded last year when a resident plugged in an appliance? Exploded. Technically, it is possible. Practically… the odds are so low that insurance companies cover it.
The trouble with your analogy is that home appliances have a limited number of interfaces and modes. Emergent chaotic behavior is much less likely. I work in large-scale financial services IT, where service interdependencies are exponentially higher. We’ve seen all too many times that risk cannot be managed intrinsically to the service component, as you imply; risk is an emergent property of novel combinations and re-combinations.
Charlie
@Charlie,
You are clearly an intelligent person, Charlie, and I think we have a lot in common. I am sure that if we were to sit together and discuss ideas for a few minutes, we’d find that we have a lot more in common than different.
When someone develops a new appliance, they are empowered by the standards, not limited by them, but just because an appliance has a power plug, that doesn’t mean it can ONLY interact with the power network. My computer is an appliance that also interacts with other devices on the Internet. My television interacts with other devices on the Cable TV network *and* the internet. My clock radio interacts (read only) with other devices on the radio broadcast network. My central air conditioner interacts with the rooms of my house over the duct network. They all draw electricity.
Every network has its standards. We are not limited to one network. This allows novel combinations that do NOT affect the design of the electrical network.
I would posit that it is fairly simple to create some basic "networks" of messaging that allow systems to interact in predefined ways, and allow the composition of business processes by process developers in an unmaintainable (free code) manner.
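A minimal sketch of such a messaging “network”: services interact only through named channels with predefined topics, so a process developer can rewire subscriptions without touching the services themselves. The channel class and topic names are illustrative:

```python
# Sketch: a predefined messaging "network". Services never call each other;
# they only publish and subscribe on named topics, so the wiring between
# them is cheap, visible, and disposable (free code).
from collections import defaultdict

class Channel:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = Channel()
log = []
# The "process wiring": two services react to one business event.
bus.subscribe("order.placed", lambda m: log.append(("bill", m["order_id"])))
bus.subscribe("order.placed", lambda m: log.append(("ship", m["order_id"])))
bus.publish("order.placed", {"order_id": 7})
print(log)
```

Rewiring the process means changing the subscriptions, not the services, which is exactly the separation between the composition layer and the services layer.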
Mike Walker wrote a great thought-provoking blog post on the implications SaaS has on Enterprise Architecture