As many of you may know, Microsoft has a vocal and thriving Agile Software Development community. Recently, on our community forum, a question appeared about the ability of Agile development to “scale” to a large team. In other words, if we can make agile development practices work in a dev group with hundreds of people, can we make them work in a dev group with thousands of people?
There was a lot of discussion on the alias, much of it focused on process improvements: how to create a scrum of scrums, for example, and how to automate test and build processes so that large systems can be integrated continuously. That is part of the answer.
However, one quote from a seasoned engineering leader, Nathan McCoy, who joined Microsoft as part of our acquisition of aQuantive, provides a real clue to the rest of the answer.
The answer is yes, agile can scale to larger systems… Here’s the quote:
When we were in waterfall mode, we tended to batch up our releases. They were complicated to plan and manage. We burned people out on death march projects that culminated in release weekends where we would work 72 hours with little sleep and little contact with our families.
We turned to agile engineering practices – in my case, not simply because I believed it would be a panacea, but rather it gave me a whole arsenal of techniques to make improvements, techniques that built on engineering practices that made a lot of sense to me.
We evolved away from the big batch release by decoupling on component boundaries, putting in services, adding contracts and other techniques mentioned often on this [forum], to the place where we have not done such a big batch release weekend in years.
Let’s look at that for a minute.
Nathan is talking about a painful deployment process that could not scale. Early on, deployment to live servers would take hours, but as the code complexity and number of customers grew, hours turned to days. Deployment suffered. People suffered. Quality suffered.
This team turned to agile techniques, and solved their scalability problem. They did it with decoupling, interfaces, and services. They did it with architecture.
The real lesson is this: using architecture allowed an agile team to decouple various parts of a system, which enabled agility to go further. In other words, the success of the agile project depended on the addition of architecture, at the right time, in the right manner. The problem could not be solved by agile processes alone.
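To make the decoupling idea concrete, here is a minimal sketch of what "adding contracts on component boundaries" can look like in code. The names (`InventoryService`, `OrderTaker`, and so on) are hypothetical illustrations, not anything from Nathan's system: the point is that the consuming component depends only on an abstract contract, so each side can evolve and ship independently.

```python
from abc import ABC, abstractmethod

# A hypothetical contract between two components. As long as both sides
# honor it, each team can change and release its code independently.
class InventoryService(ABC):
    @abstractmethod
    def units_in_stock(self, sku: str) -> int:
        """Return how many units of the given SKU are available."""

# One concrete implementation; a remote, service-backed one could be
# swapped in later without touching any consumer.
class InMemoryInventory(InventoryService):
    def __init__(self, stock: dict):
        self._stock = dict(stock)

    def units_in_stock(self, sku: str) -> int:
        return self._stock.get(sku, 0)

class OrderTaker:
    """Depends only on the contract, never on a concrete implementation."""
    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def can_fulfill(self, sku: str, quantity: int) -> bool:
        return self._inventory.units_in_stock(sku) >= quantity

orders = OrderTaker(InMemoryInventory({"widget": 3}))
print(orders.can_fulfill("widget", 2))  # True
print(orders.can_fulfill("widget", 5))  # False
```

Because `OrderTaker` sees only the interface, the inventory team can later replace the in-memory implementation with a networked service behind the same contract, and the ordering team never needs a coordinated "big batch" release.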
They solved their problem, in an agile environment, using agile architecture. What makes it agile architecture?
- The architecture was introduced through refactoring.
- The architecture supports a specific business problem, and the minimum amount was applied to solve the problem.
- The architecture was not described in a 200 page document beforehand. It was designed in small increments and expressed directly in code.
- The practice emphasized all of the principles of the agile manifesto: working code, delivered at a sustainable pace, in small quantities, with direct customer involvement, using the best practices available.
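The first and third points above, architecture introduced through refactoring and expressed directly in code, can be sketched in a small, hypothetical example. The classes below are illustrations of my own, not code from Nathan's team: the refactoring extracts a seam behind the existing behavior without changing what the system does.

```python
# Before the refactoring: rendering logic is baked into the class.
class ReportGeneratorBefore:
    def publish(self, text: str) -> str:
        return f"<html><body>{text}</body></html>"

# One small increment: extract the rendering decision behind a seam,
# changing nothing else. Behavior is preserved exactly.
class Renderer:
    def render(self, text: str) -> str:
        return f"<html><body>{text}</body></html>"

class ReportGenerator:
    def __init__(self, renderer=None):
        # Default preserves the old behavior; a later increment could
        # inject a service-backed renderer instead.
        self._renderer = renderer or Renderer()

    def publish(self, text: str) -> str:
        return self._renderer.render(text)
```

That is the whole increment: no 200 page document, just the minimum structural change needed so that the next business problem (say, rendering through a separate service) has a place to land.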
My conclusion is threefold:
- Solution architecture can be applied in an agile manner.
- As the solutions get larger or teams grow in size and scope, Agile practices alone are not sufficient to solve every problem. For some problems, architecture is required.
- Therefore: solution architecture is a necessary and critical skill for agile project teams to master.
4 thoughts on “Architecture makes Agile Processes Scalable”
Nick… nice thoughts and article. I want to put loads of comments on this one; will probably write whenever I get time.
You love architecture 🙂
Good insight, we’ve had a similar experience where Domain Driven Design techniques were applied to achieve an agile ‘friendly’ architecture.
…a convincing example on benefits of decoupling and implementing web service/contracts. Good article.
But what about the code we inherited? Is it worth refactoring, spending again around 5-20% of the original effort? A stitch in time saves nine. It is probably too late to re-architect the inherited/legacy code. This will surely work well with newer applications, though. Unfortunately we still need to maintain old code, and release weekends are horrifying.
>>Is it worth refactoring, spending again around 5-20%…<<
You tell me. Is there a specific business problem you can solve by adding architecture? If you are not adding architecture, are you adding test cases? (Two different things, for two different, complementary, reasons).
If you cannot solve a business problem by refactoring the code, then perhaps you shouldn’t. You say that "release weekends" are horrifying… are they unsustainable? Does quality suffer? Do your customers suffer? If so, go back to the business and ask them if they want to reduce those impacts. If they say yes, then add JUST ENOUGH architecture (and test case coverage) to solve that problem.
I wouldn’t add architecture if it is not needed to solve a problem.