Monday, November 28, 2016

On discipline, agile, lean and kitchens

Discipline in planning and execution is not a culture war.

I used to see method and process as core execution disciplines, background activities that one simply took for granted on any project. For the past few years, I have witnessed an increasing swell of support for making them a matter of style and team culture.

I would place the start of the shift a couple of years after The Lean Startup came out and inadvertently normalized the notion of outcomes over processes, even outside the startup world.

In fairness, that is not what the book and its underlying concepts propose, but there were large swaths of the software development field ready for a well-articulated message advocating less formalism, less planning, less checking, less verification... you get the idea.

(Image credit: Mark Morgan)
The ensuing philosophical battles can be distilled to basic questions such as "How much time should one spend writing down the processes to be followed by a team?" or "How much time should one spend planning activities and estimating their costs before deciding on a course of action?"

The "it-is-boring" camp argues for less discipline in favor of faster execution, which allows for more iterations towards success. For this camp, the air is sucked out of the room the minute someone asks for agreement on processes and plans.

The "it-is-boorish" camp argues that unmanaged chaos can seize up execution progress and force part of the team into silently picking up the slack.

I can see a strong correlation between individual style and the choice of camps, which is why it is so easy to dismiss the entire unpleasantness of the debate as a matter of personal style or culture war. 

About "Let us choose"...

Sprinkle in some conflict aversion and superficial analysis, and we soon find the discussion abandoned behind the wall of false compromise: "let each team choose what works best for them".

I find that reasoning particularly disingenuous in that it implies the only alternative is an arbitrary decision forcing freedom fighters into submission. Given enough time and the right audience, you may even see an Austrian economist or two quoted in the discussion.

To be clear, teams should absolutely decide what works best for them, as long as the selection of "best" is made against a (preferably long) background of good and bad experiences.

And it goes without saying that the discussion should stay away from the extremes: Go-Horse programming*, with virtually no time assigned to planning, and the waterfall model, where nothing meaningful ever hits the market (no one has tried pure waterfall for at least two decades, nor would anyone in their right mind argue for its return).

This is the point where I confess to leaning towards the "it-is-boorish" camp, and my reasons are simple:

When no discipline is actually a lot of it

Hidden in the anecdotes about improved results due to less planning and less process, there are invariably teams with extensive practice in planning and processes. Behind each IPO wonder, you will find leaders with hard-won experience from similar initiatives (if you absolutely must bring up Facebook, that is a different animal; leave a comment and I will respond to it).

Teams do not succeed because they have less discipline, they succeed because they have people who know enough about discipline and processes to hand pick the correct approach for the circumstance at hand. Moving from process to actual results, the parallel is that behind each story of frequent and short iterations leading to a winning design you will find people who have produced winning designs in the past and who have had the benefit of internalizing what worked, what failed, and why.

Many victories seemingly stemming from agility are the consequence of solid experience and discipline unhindered by minutia. And yet, many of these victories may be short-lived if the technical debt incurred while executing with less discipline is not managed properly.

Tragedy of the cooks

The analogy here derives from the Tragedy of the Commons, with "order" being the shared resource. In a completely unregulated environment, whether by intention or natural pressure, the participants tend to exhaust the shared resource for two reasons: (1) the assumption that the resource is infinite and (2) the expectation that other parties will consume or hoard the resource faster than everyone else competing for it.

Absent some notion of externally mandated order, you end up with an ecosystem where all participants have the same level of access to the shared resource and neglect tending to it, along a spectrum from obliviousness to forced sociopathy.

The obliviousness comes in the form of people internalizing an experience where lack of discipline simply works, unaware of other efforts happening in parallel to restore the original order to the system. You know the drill: that coworker who was asked to maintain, improve and share a few guidelines here and there to ensure some bad customer situation was avoided in the future. Then came the point where people realized that the "few guidelines" were several pages long and everyone needed to be mandated to read and follow the guidelines because bad customer situations kept on happening. Poof! Fun is over and the productivity-sapping scapegoat is standing by to take the fall.

An even more insidious side effect is the internalization of these tragedy-of-the-commons experiences at an earlier stage of someone's career, where they become the lenses through which beginners see work relationships.

(Image credit: Roberian Borges)
On the sociopathy end of the spectrum, my analogy is simple: if your team is asked to prepare a four-course meal and there are no rules about who should clean up the kitchen, there is always someone who will be bothered by the mess before the others, and that person is usually someone who has dealt with dried-up batter on the counter before (give it a try). The boorish cycle is completed when the other cooks pat the cleaner on the back, proclaim a natural vocation for cleaning, and mentally excuse themselves from the task from that point on.

Sometimes your most productive cooks are simply the ones who can ignore a dirty sink the longest.

The age of WIT (Winging IT)

Though I currently lean towards the "it-is-boorish" camp, I can see the allure of reduced planning, and I have the feeling that some of the discomfort experienced at the hands of the "it-is-boring" camp is growing pains in a generational shift. We just have to find ways to make it work at the scale it must work, then deal with the new state of things.

In this age of freemium web applications, where everyone expects everything to be free, the lines between outright market dumping and viable business models are becoming blurrier by the day; Uber and its driver incentive program come to mind. This sort of expectation has become so ingrained in society that large swaths of the workforce simply accept the notion that products should be created under the successful (?) umbrella of freedom.

Where that camp loses me is in the expectation that (1) wonder startup efforts can be created out of thin air without something as basic as market research and (2) established organizations can be morphed into startups. The Lean Startup crowd is onto something that is very specific to the high-failure-rate model expected of actual startups developing as-a-Service offerings, but that is a topic for a different post.

At some point, when you realize most people don't like cleaning the kitchen, chastising them into doing the chore may just drag down morale and push people out. And here is the moment where I acknowledge the lost battle while still staring at the prospect of dealing with a messy kitchen.

Planned chaos

For established organizations, the solution is not to chastise the workforce into doing chores, but to find ways of avoiding the mess in the first place. One can despair and give in to chaos, give up on creating new products, and go down the route of acquiring whichever small company survives the Darwinian grinder of the startup world, but that is hardly a system scalable or inclusive enough to support the industry as a whole. Even then, without a solution for the cultural aspects and the right balance of discipline and freedom, these acquisitions will be doomed from the start.

(Image credit: Nicole Quevillon)
Learning fast and adaptability are a powerful combination of success factors, but ignoring past lessons baked into existing processes is a dangerous mix of irresponsibility and innovation.

The acceptable compromise between camps seems to require a bit of discipline and planning upfront on how much chaos (technical debt) is survivable, how it will be measured, and how it will be remediated. As a concrete example, if a team decides not to commit to a service level agreement in its initial offering period, will the team agree to implement enough monitoring to at least keep track of service levels? If the team does not want a mandatory training program for reuse of open source software (and dragons be there), should it spend a few hours publishing a list of accepted licenses?
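To make the second example concrete, here is a minimal sketch of what "publish a list of accepted licenses" could look like in practice; the license list, package names, and function name are all hypothetical, not taken from any real team's policy:

```python
# Hypothetical sketch: instead of mandatory training, the team publishes an
# allowlist of licenses and flags any dependency that falls outside it.

ACCEPTED_LICENSES = {"MIT", "BSD-3-Clause", "Apache-2.0"}

def check_licenses(dependencies):
    """Return the dependencies whose declared license is not on the list.

    `dependencies` maps package name -> declared license identifier.
    """
    return {name: lic for name, lic in dependencies.items()
            if lic not in ACCEPTED_LICENSES}

# Example: one compliant dependency, one violation.
deps = {"requests": "Apache-2.0", "leftpad": "WTFPL"}
violations = check_licenses(deps)  # {"leftpad": "WTFPL"}
```

A check like this, wired into the build, is exactly the kind of small upfront discipline the compromise calls for: a few hours of planning that bounds the chaos instead of policing it after the fact.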

Ultimately, a good conversation should pass through an examination and adaptation of processes.

And the discipline part? It just better be there.


* It amuses me to no end that cowboys and horses are used as common-place characters in analogies about poor practices.

Tuesday, May 31, 2016

Serverless, NoOps, and Silver Bullets

In the aftermath of serverlessconf, Twitter was abuzz with the #serverless tag, and it didn't take long for the usual NoOps nonsense to follow (Charity Majors' aptly named "Serverlessness, NoOps and the Tooth Fairy" session notwithstanding).

When you look at operations as the traditional combination of all activities necessary for the delivery of a product or service to a customer, "serverless" addresses the provisioning of hardware, operating system and, to an extent, middleware.

Even when we ignore the reality that many of the services used in the enterprise still run on systems that are nowhere close to cloud-readiness and containerization, approaches like Docker will only take you so far.

Once you virtualize and containerize what does make sense, there are still going to be applications running on top of the whole stack. They will still need to be deployed, configured, and managed by dedicated operations teams. I wrote my expanded thoughts on the topic a couple of months ago.

One may argue that a well-written cloud-ready application should be able to take remedial action proactively, but those are certainly not the kind of applications showing up on conference stages. Switching from RESTful methods deployed on a PaaS to event listeners in AWS Lambda will not make the resulting application self-healing.

Whereas I do appreciate the "cattle, not pets" philosophy and the disposability of a 12-factor app, I have actually worked as a site reliability engineer for a couple of years, and we still needed to monitor and correct situations where heads of cattle died too frequently, often causing SLA-busting disruptions to end users expecting five-nines reliability.
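The arithmetic behind "five nines" shows why monitoring never stops mattering; this short calculation (the function name is mine) converts an availability target into its yearly downtime budget:

```python
# How much downtime does a given availability target actually allow per year?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_budget_minutes(availability):
    """Allowed downtime per year, in minutes, for a fractional availability."""
    return MINUTES_PER_YEAR * (1 - availability)

three_nines = downtime_budget_minutes(0.999)    # ~525 minutes, ~8.8 hours
five_nines = downtime_budget_minutes(0.99999)   # ~5.3 minutes for the year
```

At five nines, a single SLA-busting disruption can burn the entire year's budget, which is why "disposable cattle" still need someone watching the herd.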

#NoTools, #NoMethod

Leaving the NoOps-versus-DevOps bone aside, when I look at event-based programming models such as AWS Lambda and IBM OpenWhisk and put them in contrast with software development cycles, I start to wonder whether development shops have fully understood the model's overall readiness beyond prototyping.

What is the reality of design, development tooling, unit-testing practices, verification cycles, deployment, troubleshooting, and operations? As an example, when I look at OpenWhisk, I see NodeJS, Swift and... wait for it... Docker. There is your server in serverless, unless you are keen on retooling your entire shop around one of those two programming languages.

At the risk of offering anecdotes in lieu of an actual study, some of the discussions on unit testing for event handlers range from clunky to casually redirecting developers towards functional testing. And that should be the most basic material after debugging, which is also conspicuously absent.
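Unit testing an event handler does not have to be clunky if the handler is kept as a plain function; a hedged sketch, using the event-dict-in, dict-out shape of AWS Lambda's Python handlers (the business logic here is invented for illustration):

```python
# A Lambda-style handler is just a function of an event; no cloud, emulator,
# or functional-test harness is needed to unit test it.

def handler(event, context=None):
    """Toy event handler: returns an uppercased greeting for `event['name']`."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name.upper()}!"}

# A unit test only needs to construct the event dict directly:
result = handler({"name": "ada"})
assert result["statusCode"] == 200
assert result["body"] == "Hello, ADA!"
```

The pattern generalizes: keep the event-driven glue thin and the logic in testable functions, and the missing tooling hurts much less.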

Progress is progress, and the lack of a complete solution should never be a reason to shy away from innovation, but at the same time we have to be transparent about the challenges and benefits.

If the vision requires a sizable number of tinkerers building skunkworks on the new platforms, that is all good, but we have to realize there is also an equally sizable number of shops out there looking for the next silver bullet. These shops will be quick to blame their failures on the hype rather than on their own lack of understanding of the total cost of development and operations of a cloud-based offering.

Click-bait declaring development methods dead is alive and well for a reason, until you realize that the big development costs depend more on the Big and Complex stuff than on how much time developers spend tending to pet servers under their desks.

As the serverless drumbeat continues, it remains to be seen whether we will witness an accompanying wave of serious discipline prescribing the entire method before another one is put out as the next big thing.

The obvious next step would be codeless code, which is incidentally the name of one of my favorite blogs. It contains hundreds of impossibly well-written and well-thought-out pieces about software development, including this very appropriate cautionary tale on the perils of moving concerns up the stack without understanding how the lower layers work.