Tuesday, May 31, 2016

Serverless, NoOps, and Silver Bullets

In the aftermath of Serverlessconf, Twitter was abuzz with the #serverless tag, and it didn't take long for the usual NoOps nonsense to follow (Charity Majors' aptly named "Serverlessness, NoOps and the Tooth Fairy" session notwithstanding).

When you look at operations as the traditional combination of all activities necessary for the delivery of a product or service to a customer, "serverless" addresses the provisioning of hardware, operating system and, to an extent, middleware.

Even if we ignore the reality that many of the services used in the enterprise still run on systems that are nowhere close to cloud readiness and containerization, approaches like Docker will only take you so far.

Once you virtualize and containerize what does make sense, there are still going to be applications running on top of the whole stack. They will still need to be deployed, configured, and managed by dedicated operations teams. I wrote my expanded thoughts on the topic a couple of months ago.

One may argue that a well-written cloud-ready application should be able to take remedial action proactively, but those are certainly not the kind of applications showing up on conference stages. Switching from RESTful methods deployed on PaaS to event listeners in AWS Lambda will not make the resulting application self-healing.

Whereas I do appreciate the "cattle, not pets" philosophy and the disposability of a 12-factor app, I actually worked as a site reliability engineer for a couple of years, and we still needed to monitor and correct situations where heads of cattle were dying too frequently, often causing SLA-busting disruptions to end users expecting five nines of reliability.

#NoTools, #NoMethod

Leaving the NoOps vs. DevOps bone aside, when I look at event-based programming models such as AWS Lambda and IBM OpenWhisk and contrast them with full software development cycles, I start to wonder whether development shops have fully assessed the model's overall readiness beyond prototyping.

What is the reality of design, development tooling, unit-testing practices, verification cycles, deployment, troubleshooting, and operations? As an example, when I look at OpenWhisk, I see NodeJS, Swift and... wait for it... Docker. There is your server in serverless, unless you are keen on retooling your entire shop around one of those two programming languages.

At the peril of offering anecdotes in lieu of an actual study, some of the discussions on unit testing for event handlers range from clunky workarounds to casually redirecting developers towards functional testing. And that should be the most basic material, right after debugging, which is also conspicuously absent.
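To make the anecdote concrete, here is a minimal sketch of how an event handler can be unit-tested in isolation, with no cloud infrastructure involved; the handler name, the image-resizing scenario, and the event shape are all hypothetical illustrations, not taken from any real deployment.

```python
import json

def resize_image_handler(event, context=None):
    """Hypothetical Lambda-style handler: reads object keys from an
    S3-style event and reports the work it would perform."""
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

def test_handler_processes_all_records():
    # Invoke the handler directly with a synthetic event; the whole
    # test runs locally, which is the point of unit-level coverage.
    event = {"Records": [{"s3": {"object": {"key": "cat.jpg"}}},
                         {"s3": {"object": {"key": "dog.jpg"}}}]}
    result = resize_image_handler(event)
    body = json.loads(result["body"])
    assert result["statusCode"] == 200
    assert body["processed"] == ["cat.jpg", "dog.jpg"]

test_handler_processes_all_records()
```

The handler is just a function, so nothing prevents this kind of test; the clunkiness in practice comes from event payload shapes and platform context objects that the tooling rarely helps you fabricate.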

Progress is progress, and the lack of a complete solution should never be a reason to shy away from innovation, but at the same time we have to be transparent about the challenges and benefits.

If the vision takes a sizable number of tinkerers building skunkworks on the new platforms, that is all good, but we have to realize there is also an equally sizable number of shops out there looking for the next silver bullet. These shops will be quick to blame their failures on the hype rather than on their own lack of understanding of the total cost of development and operations of a cloud-based offering.

Click-baiting about dead development methods is alive and well for a reason, until you realize that big development costs depend more on the Big and Complex stuff than on how much time developers spend tending to pet servers under their desks.

As the serverless drumbeat continues, it remains to be seen whether we will witness an accompanying wave of serious discipline prescribing the entire method before another one is put out as the next big thing.

The obvious next step would be codeless code, which is incidentally the name of one of my favorite blogs. It contains hundreds of impossibly well-written and well-thought-out pieces about software development, including this very appropriate cautionary tale on the perils of moving concerns up the stack without understanding how the lower layers work.

Wednesday, November 11, 2015

DevOps: On walls and trenches

Know thy wall, mind your trenches. (geekish alert)

A few years back, a colleague introduced the notion of DevOps to a large internal audience as "we want to be more like [insert your favorite SaaS startup here]", much to the delight of various management and executive teams, besieged with mounting back-pressure in the sales pipeline as a result of long customer deployment cycles.

The enthusiasm after those types of sessions stems from the general notion that once the walls come down between the development and operations teams, a new world of productivity ensues: multiple deployments a day, with every new feature reaching customers' hands minutes after delivery to the code streams, ready for usage and, therefore, ready for sale.

Beware the strawman: within hours one may have a demolition crew looking for the walls about to be brought down, hammer in one hand, clenched fist in the other, both united by a chest full of seething rage against the walls. If you ever find yourself leading such a mob, pause for a moment... actually for two moments, during which I need to offer you the most important advice in the art of bringing down walls: "know thy wall".

Is it a wall?

(Image: the Berlin Wall)

Sometimes organizations do not operate their products; they simply build them and sell them to other shops, who are then responsible for standing up hardware, loading up the software, and relying on a long chain of support streams to relay any software problem back to its manufacturer.

As a software developer, you are insulated from the good and the bad. There is little access to feedback on how the software is used by end users, even less feedback about how it is installed, configured, and managed. There is also less contact with upset customers and minimal exposure to the funny hours at which the systems decide to act up on the myriad defects that may escape the development cycle.

If an organization operates under that model, it is living inside a bunker. That is understandably the audience most attracted to the wall-bashing revolution, but for the wrong reasons. Energy would be better spent moving into a SaaS business model than attempting to influence operations teams likely outside their control.

Why was it built?

Assuming you passed the first test, you are doing at least SaaS and you have a proper wall between your development and your operations team.

Refrain from delusions of grandeur and realize your wall is not of the tyrannical, country-splitting kind, but of the garden variety, such as those built for property protection, sound insulation, or soil retention. In other words, unless the underlying motivations behind the construction of the wall have been addressed over time, your wall still serves a purpose.

The reason most walls between development and operations were built is because (and brace for the bar brawl) software development and systems administration are fundamentally different activities.

A software developer is specialized in shaping a deliverable out of thin air, from inception, to elaboration, to construction (coding, testing), to transition (to operations). Resist the urge here, for a moment, to declare this the "old way" of building software, because these phases still exist even in the wildest agile lean-guild-squad-pizza-night-sleeping-in-the-office deployment cycles.

System administrators are specialized in planning deployment cycles, provisioning systems, wiring them together, loading them with software, rigging everything with probes and hoping the systems stay really quiet and out of sight while repeating the entire cycle.

A thinner wall is still a wall

(Image: Cranes, Pines, and Bamboo)

I have been on both the development and operational sides of the wall, and it is really disheartening to see the amount of misinformed passion thrown into conflating the continuous pipelines advocated by the DevOps method with the conclusion that development and operations can be unified under a single organization (or tribe, if you are so inclined).

Mix passion, misinformation, and a pronounced shortage of trained system administrators, and many organizations may soon find themselves falling into the trap of really tearing down the walls and assigning their SaaS application developers to operate the platform. Soon they start to realize what was behind that wall: operating systems, security patching, operational architecture, scalability for log-retention systems, compliance, alerting policies, escalation policies, on-call schedules, maintenance windows, war rooms, and many other tools and processes that will eat into development resources disproportionately to the time invested in retraining developers to perform those activities.

As a software development manager, if you are ever invited by someone with a hammer to a wall-tearing party, politely redirect the conversation to a proper read of "The Twelve-Factor App", and emphasize the need to get rid of the trenches (see below) rather than tear down the walls.

I have seen many debates along the lines of the excellent comments section on "I Don't Want DevOps. I Want NoOps", which conflates the reduced operational costs of running an application on top of a PaaS stack with having no operational needs whatsoever. I can attest to the reduced costs of development and operations in such an arrangement, but they are still distinct activities requiring different skill sets, unless one tries really hard to confuse the development of operational tools (e.g. an automated generator of trouble tickets) with the development of the application providing the function to end users.

Beware of the trenches

(Image: World War I Marines in a trench, circa 1918)

The worst enemies of faster delivery cycles are not walls between development and operations, but rather the trenches both camps have dug over time. The true DevOps allure really lies in getting both sides out of the trenches and shaking hands.

A few examples of software features loved by the operations teams, where continuous interaction and improvements can really make the software shine on the operations floor:
  1. "Cattle, not Pets" architecture. With the exception of databases, all other components should be horizontally scalable and disposable.

  2. Database High Availability and Disaster Recovery as an integral part of the architecture. Many database technologies offer a whole spectrum of trade-offs in their many alternatives for HA+DR, and application owners have to be explicit about the interrelationships between the application and these trade-offs. For instance, a database technology may offer different settings for transaction synchronization across primary and standby nodes, some favoring transaction speed, others geared towards complete reliability. There is a fine line between "my application can work with 2 of these 3 modes; mode A sometimes allows data to be lost, and the system completely implodes when that happens" and "our app uses a database that supports HA+DR, I am sure it will be ok."

  3. Automated delivery pipelines *for good-quality* software: A continuous pipeline delivering new software versions every hour may sound like a nightmare for an operations team, but only if the outcome of every build is full of regression problems. There is still room for behavioural changes in the software that may throw off the operational monitors and procedures, but for that there is always the next bullet.

  4. Documented key performance metrics: One of the most respected software developers in my book once said "read the code", but realistically, not everything under the operations roof is open source, properly written, or simple enough to be as consumable as proper documentation. That list of metrics, paired with a written explanation of its implications for end users, is a fundamental artifact for an operations team to rig the software with all their probes, watch for the right things, and trigger the right alarms.

  5. Documented configuration settings: Once again, "read the code" is just not enough. The operations team needs a full list of configurable settings, their data types, their ranges, and a few paragraphs about the implications of changing the values.

  6. Health end-points: It is a RESTful world out there; any self-respecting SaaS offering must have a simple URL available to the operations team to get an immediate internal view of the SaaS health, containing basic metadata (version, name, development support page, others as needed), connectivity data about the status of system dependencies (e.g. the database at a given URL is down), and the status of various system functions (e.g. console login is down). Structured APIs, please. JSON or XML are good starting points since they have readily available parsers for virtually all programming languages.

  7. Statistics end-points: Once again, it is a RESTful world out there; whenever an end user (or a probe) reports slow response times, applications must offer a URL that allows a system administrator to quickly gauge response times grouped by worst, best, mean, and median, calculated over different intervals of time, such as "last 5 minutes", "last 30 minutes", "last 12 hours", etc. One can successfully argue that the statistical aspect could be handled by the monitoring infrastructure, and one could be right.

  8. Support for synthetic transactions: Tracking down the causes of a slow system requires a deep understanding of the underlying sub-transactions invoked on behalf of the end user. The application should expose dedicated RESTful endpoints (in the form of different URLs, special headers, or query parameters) that return a breakdown of the transaction across all component systems. Naturally, there should be documentation about the list of synthetic transactions, along with their respective breakdowns and linkage to the exact address of the systems called in each sub-transaction.

  9. Administrative logs: End-points and synthetic transactions go a long way towards initial system troubleshooting, but when these less expensive means fail to surface what is happening to the system, it is time for painstaking scrubbing of system activity. A well-thought out logging strategy with clear references to key moments in the system, using terminology lined up with the system architecture, is essential in guiding system administrators towards the root cause of a problem.

  10. Access to the QA testcases, hopefully written using a set of technologies agreed upon with the monitoring team. If you look hard enough, anything that assures the proper functioning of the system at development time may be useful during the regular operation of the system. Imagine, for instance, an expensive QA module that simulates an end-user creating a system account, changing the account password, logging out the user and logging back in with the new password. Now imagine how the actual production authorization system may be subject to load-balancing and replication policies where that particular sequence may break for a period of time and impact end-users. The operations team can definitely benefit from simply letting that testcase run under the monitoring layer on a continuous basis and alert operators in case of failures.
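As a sketch of items 6 and 7 above, here is roughly what such health and statistics payloads might look like; every field name, the service name, and the version string are illustrative assumptions, not a prescribed format.

```python
import json
import statistics

def health_payload(db_ok, login_ok):
    """Internal view of SaaS health: metadata, dependency status,
    and status of individual system functions."""
    return {
        "name": "example-saas",          # hypothetical service name
        "version": "1.4.2",              # hypothetical version
        "dependencies": {"database": "up" if db_ok else "down"},
        "functions": {"console_login": "up" if login_ok else "down"},
        "status": "ok" if (db_ok and login_ok) else "degraded",
    }

def stats_payload(samples_ms):
    """Response-time summary for one interval, e.g. 'last 5 minutes'."""
    return {
        "worst_ms": max(samples_ms),
        "best_ms": min(samples_ms),
        "mean_ms": round(statistics.mean(samples_ms), 1),
        "median_ms": statistics.median(samples_ms),
        "count": len(samples_ms),
    }

print(json.dumps(health_payload(db_ok=True, login_ok=False), indent=2))
print(json.dumps(stats_payload([120, 95, 300, 110]), indent=2))
```

Serving these dictionaries behind two GET URLs is all it takes for an operations team to wire them into probes, which is precisely why structured formats beat free-form status pages.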
A few examples of things the development teams really appreciate from their operations organization:
  1. Access to the incident database: leaving aside surmountable aspects such as the eventual need to obfuscate the customer identity, there is obvious value in knowing about critical system failures, the timeline of resolution, the steps taken by the ops team to detect and to resolve the problem. All this information can be immediately applied to drive improvements to most points raised before, such as new tests in the delivery pipeline, additional performance indicators, additional information in the health endpoints, additional configuration settings, and many others.

  2. Access to the live data for health and statistics endpoints: once again, leaving surmountable concerns aside, such as security and credential management, there is immediate value for the development team to study the correlation between customer loads and the system metrics, such as increase in response times as the number and nature of requests change over time.

  3. Access to the application logs: in an age of SaaS offerings for log aggregation, application development teams really do not need much from their operations team in this regard, but if the organization strategy calls for in-house log aggregation systems, then it is imperative that application developers have complete access to their own application logs.

  4. Access to the monitoring data for synthetic transactions: the previous examples allow a development organization to build their own data collection and aggregation system, but the ensuing duplication of efforts is rather counter-productive. 
Many developers will point out that nothing stops them from coding back-doors into the system to get access to the system data, but there should always be full disclosure of such back-doors to the operations team, at best so that there is awareness, at worst so that compliance laws are not violated (e.g. a backdoor that allows access to user information could be in direct violation of privacy laws).

OaaS, the trenches reinvented, for better or worse

It is a new world of productivity where smaller organizations can put out complex solutions that would rival a large organization from 10 years ago.

Development, provisioning, and monitoring tools have become accessible to the point of reaching a critical mass of adoption, whether as commodities available for local deployment in a data center or as full-fledged SaaS offerings that obviate the need for local deployments. That said, tools, systems, and processes are not at the singularity point where operations can be seen as just an extension of the development cycle.

There are nascent efforts in Operations as a Service that will be very interesting to watch in the coming months, especially in relation to PaaS offerings and how much customization will be possible in the OaaS provider to fit existing DevOps pipelines, especially as these pipelines become increasingly available as add-ons in the PaaS offerings themselves.

Realistically, I think OaaS will be a niche offering akin to software development outsourcing, with the accompanying explanations of how this time it will be different from the first time (except it won't).

In my opinion, the current crop of companies co-opting the acronym is doing a disservice to what true OaaS should be: a natural evolution of PaaS where a standard (we still do those, right?) will need to be created to establish the interfaces between applications and the operations floor before any mass progress can be made on shielding development organizations from attempting to master operations, while still allowing the development team to retain full control of the DevOps pipeline.

Thursday, January 09, 2014

What is your problem? - Part 3: Descriptions

Months ago, I wrote about problem reporting within teams, making a general distinction between good problem reporting that leads to a solution versus insufficient reporting that causes the involved parties to lose precious time during a critical situation.

Now it is time to look at these from the perspective of software development, which may turn off audiences interested in the general topic, at which point I commit the blunder of assuming anyone beyond my friends in the field reads these.

Technical notes, your problem is not someone else’s problem…

There are always those moments in software development where your overall quality assurance process fails your customers and your standards, at which point you must publish a technical note about it. For our project, the template of a technical note requires 6 fields:

  1. problem description. A general view of the problem. This is a very difficult topic for most developers who have not been exposed to the problem-reporting techniques covered in this series, in that "general" gets confused with "imprecise". This topic is therefore the focus of this posting.
  2. symptom. List of externally observable behaviors and facts about the system upon occurrence of the problem.
  3. cause. List of internal and external triggers for the problem, with special emphasis on those that can be triggered (and hopefully fixed) by the customer versus those that are internal to the product and require a product fix.
  4. affected environments. Complete list of prerequisite software and hardware where the problem can be observed, including versions and releases.
  5. problem diagnosis. Symptoms and causes give a good indication as to whether the problem matches what a customer is seeing; however, the customer needs certainty before moving on to the next field.
  6. problem resolution. The ultimate reason a customer ever reads through a technical note: how the problem can be either fixed or worked around. A common problem in our internal reviews was that original drafts committed the mortal sin of limiting themselves to listing the upcoming release where the problem would be fixed. The customer always expects an interim solution to the problem, even if imperfect.
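The six-field template could be captured as a simple data structure to keep drafts honest during review; the structure below is a sketch, and all example values are hypothetical, not from any real technical note.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TechNote:
    """One record per published technical note; field names mirror
    the six-field template described above."""
    problem_description: str          # general, yet precise
    symptoms: List[str]               # externally observable behaviors
    causes: List[str]                 # internal and external triggers
    affected_environments: List[str]  # software/hardware, with versions
    problem_diagnosis: str            # how the customer confirms the match
    problem_resolution: str           # fix or interim workaround, never
                                      # just "fixed in the next release"

note = TechNote(
    problem_description="Searches return records with missing fields "
                        "after maintenance command X runs against "
                        "the private index database.",
    symptoms=["search results contain records with empty fields"],
    causes=["private index database corrupted by command X"],
    affected_environments=["product 2.1 on Linux x86_64"],
    problem_diagnosis="Run a known search and compare the field count "
                      "of each record against the schema.",
    problem_resolution="Rebuild the index from the primary store; a "
                       "permanent fix ships in release 2.2.",
)
assert note.problem_resolution  # a resolution must never be empty
```

A review checklist can then fail any draft whose resolution field only names a future release, which was exactly the mortal sin called out above.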

…so how do I know what is your problem?

To paraphrase one particularly troubled internal draft, we had the symptom, cause and description all folded into the problem description field stating that

“search for records may be incomplete due to a [private] database being corrupted upon execution of a [series of commands]”.

At that point, we applied the criteria outlined in the previous posting to determine whether the problem reporting to the customer would lead to a decision or to confusion:

  • What is the expected behavior from the product?

The description can be somewhat ‘reversed’ to allow one to infer that searches for records should not be incomplete. However, this indicates what the product should not do instead of what it should do. For the technical types, this kind of wording tends to make the author look sloppy at best, disingenuous at worst.

  • What is the observed behavior in the product?

The description alludes to incomplete results, but results can be incomplete in so many ways, such as not containing all records that would match the search criteria, or containing all records while missing some fields in each record.

  • Does the reported problem happen to all units of the product?
  • Does the reported problem affect the entire product or just portions of it? If so, which portions?

The ‘product’ here is the operation executed by the user. Is it all searches that are affected or only certain searches?

  • Does the reported problem happen in all locations where the product is used? (This forces the problem owner to have actually sampled the problem in all locations where the product is used.)

Locations can be read as systems. If the product can run on multiple operating systems and depends on various versions of middleware, is there enough information about the kind of systems where the problem occurs? Is it all of them?

  • Does the reported problem happen in combination with other problems?

This particular point would not apply to the original problem description, as the problem happened independently of other problems, as a function of search parameters and the system operations preceding the searches.

  • When did the problem start? If you don’t know, make it clear you don’t know and state when you first observed it.

When reporting the potential problem to a customer, the starting date would translate to the release number where the problem would be first observed.

  • What is the frequency? Continuous, cyclic, or random?

The problem description was reasonably clear about the problem being continuous. At least in my opinion, continuous can be assumed whenever considerations about cyclic or random occurrences are not explicit. In other words, I would consider it really poor form for those types of frequencies to be omitted.

  • Is the problem specific to a phase in the product life-cycle?

The problem description was reasonably clear about the sequence of operations that would lead to the problem, indicating the problem to affect the system runtime phase versus planning, installation, configuration, or any other.

  • Is the symptom stable or worsening?

The problem description did not mention increasing degradation of results, but it is worth asking that question during a review process prior to publication of the technical note.

From problems to satisfied customers

This is an area to be approached with energy and patience while coaching people who are new to any field in the industry. Describing problems, as a function of language and critical-thinking, is not a purely exact science and requires prolonged periods of practice and feedback to be mastered.

When someone without the proper tools and techniques for problem description encounters someone on the other side who will go out of their way to understand the problem, it is easy to mistake the positive interaction rooted in an act of kindness for the most efficient way of going about it. And this is not to use efficiency to dismiss acts of kindness as a fundamental value in the workplace; on any given day we would rather have that kind person interacting with more people than spending it all on a single person working without proper training.

Once you have put the right effort behind training people in this topic (or training yourself), you will have effected a transformation in people that transcends the topic and the workplace: people used to asking all the right questions and solving the right problems under any circumstance.

Monday, November 25, 2013

What is your problem? – Part 2: Real versus imaginary problems

To ask "What is my contribution?" means moving from knowledge to action.
The question is not: "What do I want to contribute?"
It is not: "What I am told to contribute?"
It is: "What should I contribute?"
Peter F. Drucker

In the first part I covered the general aspects of identifying and reporting problems. Now it is time to apply those concepts to my domain of choice: software engineering (minor apologies to the software gardeners out there, I will come back for you in a future posting).

I use the Agile method as the backdrop because it has completely overtaken the field to the point of erasing debate on alternatives (defeated waterfall proponents are still called through the backdoor to fill the gaps with valuable contributions, but I digress).

As a short recap of the Agile method, work is delivered in small increments called stories, executed over relatively short iterations called sprints. If you can tell a story while on the run (sprints, running, see what I did there?), you know sprint durations can range from ear-bleeding single-week sprints to waterfall-bordering 8 weeks. Beyond 8 weeks, there be dragons and T-virus; walk backwards slowly towards the nearest door and avoid eye contact at all costs.

The drudgery of tasks…

A fundamental tenet of the Agile method is that stories be written in the form of “As a [role name] I need [a feature] so that [I can achieve a goal]”.

I personally prefer to replace “achieve a goal” with “solve [one of my] problems”. There are far more people in this world facing immediate problems than people who have goals, and even for people who have both, the immediate problems tend to grind down one’s will and resources to execute on a long-term vision.

That is not to say Agile should be cast as a reactionary method that can only tackle situations after they have become a problem, but rather that any complex project can be mapped to a mind-map of tasks, where each node can be represented as a problem to be solved.

… is no match for the challenge of a problem

The payoff for such mental gymnastics is that a problem statement engages, whereas a task dehumanizes, and success in software development hinges on engagement: on engagement between developer and customer, on engagement between user and solution, and as a more recent phenomenon, engagement amongst users.

It is part of many professions to walk into an engagement where the customer knows exactly what they need, are willing to pay for those services, watch you walk out of the door after completion and then deal with the next task in a master project plan.

That is invariably not true of software development for two main reasons: (1) our largely INTJ subtlety-loving personality is prone to invent dozens of different ways of achieving the same goal with dozens of distinct advantages and disadvantages for each choice, (2) we collectively get bored of those solutions more often than we should, reinventing the field every couple of years in a way that is utterly incompatible with what we once thought to be a good idea.

With all that said, as a prospective customer of a solution requiring software, whenever you engage a developer or a development shop, the first part of your homework is to be absolutely sure the problem you want solved is one of your topmost problems, lest you (or your company) fail to sustain the motivation to see it through. As the person (or company) experiencing the problem, and this may seem outrageous since you are about to pay for contracted services, you will be an integral part of the solution: there will be hard questions to be answered at some cost of time, intermediate solutions to be attempted, and final validations to sign off, all activities that will require your attention and resources.

Know your problems…

As a prospective software provider, accepting a task at face value is a recipe for disaster.
A good software architect must know how to artfully act as a devil’s advocate while engaging a customer, not to blindly question motivations, but to understand why the customer needs a solution. In other words, a good architect will ask the customer “what is your [his] problem?”
I often hear from peers disgruntled that a great idea was not accepted by a potential customer, while failing to recognize that the problem it solved was not all that important to the customer.

… know thyself

In a previous project I joined a team which had developed an internal tool for analyzing log data from hundreds of products. At the time I was the enablement lead for the technology and really got behind it. We persevered for a long while to make our worldwide support team adopt the (internal) product. After a relatively extended period of... err... lukewarm responses, we changed our approach, meeting frequently with these support teams and also with another internal development team which was already successfully supplying tools to the support organization.

It became clear that our log analyzer tool, however sophisticated in what it could do with log files, required memory capacity that, although readily available to our development team, was unthinkable to a support engineer. This tool also had the ability, developed at great expense, to shift the memory requirements to a relational database to cut down on memory usage, but deploying and maintaining a relational database was equally unthinkable for an audience which had no expertise in managing such systems.

The question no one asked…

At the same time, meeting with the more successful internal development team revealed their key selling point to the support organization: their solution was based on a SaaS model and the support teams could access most of its function through web interfaces, avoiding the need for high-end systems and the costs of installing and maintaining new tools. Their tooling also integrated with another SaaS offering where customers submitted all the supporting information, including the all important log files, for any problem reported in the field.

In the end, building (or selling) a log analyzer to a support team which routinely performed log analysis seemed like a success story in the making, but it failed to recognize two key aspects:
  1. Their most common activity related to log analysis was to isolate error entries in log files, then use snippets of the log entry in Internet searches (incredibly effective), a feature absent in our tool.
  2. All the information used by the support teams resided on virtual services, requiring only a web browser on their machines, which side-stepped the need for high-end systems.
At the time (and this was a remedial approach to not having asked the “what is your problem” question first) we decided to harvest the analysis internals from the tool, put them behind the web-based interface already being used by the support team, and surface the analysis results on a page containing only warning and error messages, each with a quick link to an Internet search based on the error message.
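The remedial approach can be illustrated with a minimal Python sketch. This is hypothetical (the actual internal tool is not public, and the function and pattern names here are invented): it filters a log for warning and error entries and pairs each with a ready-made Internet-search URL for the message text.

```python
# Hypothetical sketch: keep only warning/error log entries and attach a
# quick Internet-search link for each message, mirroring the page we
# surfaced to the support team.
import re
from urllib.parse import quote_plus

LEVEL_RE = re.compile(r"\b(WARN(?:ING)?|ERROR|SEVERE)\b")

def interesting_entries(log_lines):
    """Yield (log line, search URL) pairs for warning/error lines only."""
    for line in log_lines:
        if LEVEL_RE.search(line):
            # Drop the timestamp and severity so the search query is
            # just the message text itself.
            message = LEVEL_RE.split(line)[-1].strip(" :-")
            url = "https://www.google.com/search?q=" + quote_plus(message)
            yield line.strip(), url

log = [
    "2013-01-16 10:00:01 INFO  Server started",
    "2013-01-16 10:00:05 ERROR Connection refused: db01:5432",
    "2013-01-16 10:00:09 WARNING Disk usage above 90%",
]
for entry, url in interesting_entries(log):
    print(entry, "->", url)
```

The point of the sketch is how little machinery this takes compared to a memory-hungry analysis engine: the support engineers’ most valuable workflow was essentially a filter plus a search link.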

…is the problem no one had

Had we asked the harder questions first, the answer would have been a SaaS version of the patent we filed a few months later, where log files submitted by customers were automatically analyzed, cross-searched on websites, and the results ranked according to their rate of incidence. At that point I left the team for other reasons, but I am told they kept on delivering on that vision.

Something fantastically useful eliminates a problem that is really at the center of someone’s attention, not merely something you set out to improve, however successfully. Success also involves far more than technique and technology. In fact, too much technology may just put the solution out of reach for the target audience, by requiring system upgrades, training, and adaptation.

I close with a quote that symbolizes the consequences of offering a solution on the basis of vision without sufficient understanding of the problem, resulting in one of the costliest mistakes of its kind in recorded history:

We could hardly dream of building a kind of Great Wall of France, which would in any case be far too costly. Instead we have foreseen powerful but flexible means of organizing defense, based on the dual principle of taking full advantage of the terrain and establishing a continuous line of fire everywhere.
André Maginot, December 1929

Wednesday, October 30, 2013

Critical thinking and cows in the field

I heard this fictitious story as a joke some 20 years ago from Walfred Tedeschi, a great friend from my college days, and it has stuck with me for all these years, first for the good joke that it is, then as a profound lesson in critical thinking.
In a gathering at the university, three students see a cow in a nearby field: a Mathematics student, a Physics student, and a Software Engineering student. Note: the fields and order of participation change depending on who tells the story, and I can say that as a Physics student at the time, Fred would not agree with my recollection, but it goes like this:
The Math student hears a mooing sound to his left, about 100 yards away, turns around and says: “Look, a cow”.
The Physics student turns around in the same direction, then adds: “A spotted cow”, with a slightly overbearing emphasis on the word “spotted”.

The Software Engineering student analyses the situation for a few moments, then concludes:  “At least on the side we can see.”
That punch-line is the one aspect that makes or breaks a great professional, in that it surfaces how different people may perceive the exact same situation (there is a cow in the field) with varying degrees of detail, and how those details may be factual (the visible side of the cow is spotted) or inferred (a cow spotted on one side is likely to be spotted on the other).

I don’t think this is the basis for a new philosophy course, but it is worth telling people starting their careers, both in leadership and in the exact sciences, to guide the way we reach conclusions and how we trust information when making decisions or establishing a new hypothesis.

Later in life people may question why a cow in the field was even worth noticing, or whether the students were so distracted by the verbal sparring that they missed the sight of a golden unicorn a few hundred yards to the other side, but those are stories for a different time.

Friday, March 08, 2013

Crowds in the clouds, a brave old world

Sunrise at King penguin colony, Salisbury plain
Sometimes, I like crowds.

I will soon be flying home, back from Las Vegas, where I had the privilege of attending the IBM Pulse conference.

The opportunity to meet in person many colleagues and friends from all over the world is equaled only by the opportunity of listening to some of the most prominent voices in the technology field: some of our brightest colleagues, analysts, business partners, and especially our customers.

These interactions are an anchor in reality that cannot be taken lightly, and at the same time they are of a scale and relevance that is almost impossible to comprehend. This conference has become the equivalent of a small city of over 10,000 people, created and torn apart in the span of a few days. I wrote what I could during the conference, but Twitter only goes so far to convey the sense of responsibility that comes with working in the information technology field.

In the view of CEOs, information technology is now the most important aspect of their companies' future. It is also a critical aspect of the future of the entire world. Connectivity and smart devices are shaping an entirely new dimension of interactivity between people, governments, and the enterprise.

The democratization of technology... 
If you built your career through the 90s, for a while it seemed technology would only get faster, until it became totally interconnected and something different altogether.

For the first time in modern history (and I use 'modern history' very judiciously here), self-organization, information sharing, and merit-based leadership have allowed crowds to emerge and galvanize quickly, and somewhat effectively, around subjects ranging from designing a new product, to funding new ideas, to fighting against tyranny and oppression.

The world is small again, and technology has rescued fundamental aspects of human nature from the incomprehensible and ever-increasing size of our world population. Easier access to the means of production enables people to become direct producers once more, weakening the hold of wage-to-capital relations; pervasive access to the means of communication eliminates barriers between individual producers and individual consumers, reconnects people with their power to decide their destinies, and ultimately reconnects individuals with the rest of the world in more meaningful and direct ways.

...is rewriting the books on capital ownership...

The power of the masses puts tremendous positive pressure on leaders from public and private spheres in ways that even the most jaded of citizens cannot refute. Crowd-sourced projects like Domino's Ultimate Delivery Vehicle or movements like the Arab Spring are evidence that the mere existence of crowds can make executives and governments see and engage people in a whole different light.

Open and free platforms for supporting virtual communities are already part of daily activities for a large portion of the world population; 3D printing is maturing at a rapid pace; new materials and more efficient recycling technologies will further reduce the importance of capital in human initiative.

In the not so distant future, you will be able to ship a toy to a shop across the country and get it "restructured" in a recycling unit that can reprint the raw materials in the format of another, more interesting toy for your growing child.

Imagining the gadgets of the future, however exciting, is not nearly as interesting as imagining the dramatic implications for the economic and social relations of the future. Micro-financing is a fantastic example, closing the gap in democratizing access to the means of production, already a reality for many small businesses in less developed areas of the globe, with default rates at or below those of traditional bank financing.

Fantastic initiatives like the one from Marcin Jakubowski, outlined in his TED talk titled "Open-sourced blueprints for civilization", also point the way toward closing the gap on intellectual capital. People now have more access to knowledge, financing, communication, and means of production than ever before.

...and on social relations

Cities are learning how to integrate immediate feedback from the general population into their own management systems. Researchers are also exploring how to understand the sentiment of a city. Technology is not only reaching the masses, it is starting to understand the masses, and that is only the beginning.
Regardless of how one may question the motives of companies and governments, it is important to realize that what we are seeing now, however unprecedented, is also a very small step in enabling a different future for humanity.

For those skeptical and suspicious of technology, the only message is to accept it without fear, because the only other option is to become part of a lost generation. The advancements are not here to rob us of our identity and individuality, they are here to restore these qualities in ways modern civilization has long forgotten.

Just as technology's evolution surprised everyone by being about more than making things go faster, and while everyone is still grappling with this new small world, technology will change the very nature of society and the economy itself.

The new companies and the new governments

Companies will be challenged by customers and employees to achieve a sustainable model that is not based on the ownership of capital, of intellectual property, or of distribution channels. Governments will be forced to adapt to collective participation far beyond general elections. Failure to invest in education for creativity and to adapt the education curriculum for a new merit-based economy will land millions of people in a fairly uncompetitive heap. Allegiances will be formed to communities, not to companies or to countries.

Successful economies and business models will thrive on unlocking the power and creativity of individuals, in forming communities of individuals with large overlaps between personal goals and business goals. Being competitive will be more about doing things that others cannot do than about doing things cheaply and more efficiently.

Having been in the technology field for a relatively short (or long) 16 years, I cannot contain my enthusiasm for what is to come in the next 5 years, let alone the next 16.

Wednesday, January 16, 2013

The Agile Enterprise - Communication, collaboration, and cooperation

I received a link to an article titled "Team Collaboration the 2.0 Way", extolling the virtues of Web 2.0 collaboration over regular communication, and wanted to register a point I often make in connection with the notion of an Agile enterprise:
Collaboration does not foster cooperation, collaboration is premised on the need for cooperation.