Barking Up the Wrong Tree
What if we’re wrong? What if we’re all unwitting participants (and victims) in a mass delusion of biblical proportions?
“Not creating delusions is enlightenment.”
What if the past thirty years of so-called progress in the field of software development has all been one vast waste of time?
What if we’ve fooled ourselves via one huge placebo effect? Or via a combination of the placebo effect and other similarly pernicious delusions and cognitive biases?
“It is only when we forget all our learning that we begin to know.”
~ Henry David Thoreau
What if what we think we’ve learned turns out to have no validity at all?
Scrum, Waterfall, Agile, Kanban, XP, etc. “Process” itself. Could these all in fact be no more than the most egregious of red herrings?
What if it’s really some other factor – or factors in combination – that accounts for some, or indeed all, of the differences we observe from improvement initiatives? Honestly, I don’t think we can discount this possibility. Personally, I am coming round ever more to this belief.
Let’s take a look at some of the pernicious delusions and cognitive biases that may be at play here:
The Hawthorne Effect
The central idea behind the Hawthorne Effect is that changes in participants’ behavior during the course of a study may be “related only to the special social situation and social treatment they receive”.
The Feedback Effect
Improvements in folks’ performance that we attribute to e.g. improved skills may instead be a consequence of their receiving feedback on their performance (and not of any improvement in skills per se). An “agile adoption” may give folks feedback for the first time in their working lives.
The Observer-Expectancy Effect
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to unconsciously influence the participants of an experiment.
“Any of a number of subtle cues or signals from an experimenter can affect the performance or response of subjects in the experiment.”
Sounds pretty much like agile coaching or Scrum Mastering, just about everywhere? Of course, the role of a coach or Scrum Master is indeed to affect their team(s) in such ways (at least, for the better).
The John Henry Effect
The John Henry effect is an experimental bias introduced into social experiments by the reactive behavior of the control group (i.e. a group of people who are not the subject of the experiment, used as a “control” against which progress in the subject group can be compared).
As applied to organisations adopting agile, this effect may account, at least in part, for the improvement (if any) in teams and other departments not immediately part of the agile adoption (a.k.a. the pilot).
The Pygmalion Effect
The Pygmalion effect, or Rosenthal effect, refers to the phenomenon in which the greater the expectation placed upon a group of people, the better they perform.
In agile adoptions, managers typically place a great deal of expectation on the first agile team(s). According to this effect, these teams may improve simply as a consequence of those expectations (and not, for example, as a consequence of any changes to the way the work works).
The Placebo Effect
The placebo effect refers to the phenomenon in which people receiving a fake or otherwise intentionally ineffective treatment improve to more or less the same extent as people receiving a real, intentionally effective treatment.
“Placebos have been shown to work in about thirty percent of patients. Some researchers believe that placebos simply evoke a psychological response. That the act of taking them gives you an improved sense of well-being. However, recent research indicates that placebos may also bring about a physical response.”
The Subject-Expectancy Effect
The subject-expectancy effect is a form of reactivity that occurs when someone, e.g. a research subject, expects a given result and therefore unconsciously affects the outcome, or reports that expected result.
When people already know what the result of a particular “improvement” is supposed to be, they might unconsciously change their reaction to bring about that result, or simply report that result as the outcome – even if it wasn’t. Some researchers believe that people who experience the placebo effect have become classically conditioned to expect improvement from a change. Remember Dr. Ivan Pavlov and the dog that salivated when it heard a bell? In the case of people and placebos, the stimulus is e.g. the “ceremonies” of the new development method, and the response is real (or perceived) improvement and feelings of well-being and positivity.
“The expectation of pain relief causes the brain’s pain relief system to activate.”
The Novelty Effect
The novelty effect, in the context of human performance, is the tendency for performance to initially improve when a new approach to work is instituted – not because of any actual improvement in learning or achievement, but in response to increased interest in e.g. the new approach.
Self-Determination Theory
Self-determination theory is concerned with the motivation behind the choices that people make, absent any external influences. The theory focuses on the degree to which an individual’s behavior is self-motivated and self-determined. Key studies that led to the emergence of this theory include research on intrinsic motivation.
In effective agile adoptions, for example, increased self-determination (self-managed teams and the like) may be a causal factor in increased motivation, and thus in increases in e.g. productivity, quality, or what have you. Note that here I’m saying the benefits accruing (if any) are not the result of any material changes in the process (the way the work works), but of changes in the social, motivational context for the work.
Just as in the Hawthorne experiments, we who (merely) observe are part of the system too. Objectivity is delusional. How much else of what we induce and convince ourselves to believe, is delusional too? And how would we know? As part of the “system”, could we ever know?
The Hawthorne experiments – contention over their validity and interpretation notwithstanding – stand as a warning against treating even simple experiments on human participants as if the people involved were merely mechanical systems.
“If history repeats itself, and the unexpected always happens, how incapable must Man be of learning from experience?”
~ George Bernard Shaw
Given all the research into how our brains work (and more often, fail to work), should we not be at least open to the possibility that the results we think we have achieved in the world of software development have little or nothing to do with the things we think are important?
What do you think?