The IT debates continue - as vehemently as ever - about which language, which architecture, which methodology, and whether to use any methodology at all. Sometimes it feels like we're stuck in the proverbial goldfish bowl, admiring the shiny new castle that periodically presents itself. How could everyone be so right and so wrong at the same time?
Some of us like to think of information technology (or computer science) as a "science". Karl Popper put "science" on a rigorous footing when he proposed that a scientific hypothesis must be "falsifiable". That is, if you come up with a theory about something, there must be a way to disprove the theory...or else your theory is not scientific. This is the cornerstone of the scientific method.
Few things in IT can be stated in a way that is falsifiable. Claims that Ruby is a better language than Java, or that SOAP is better than REST, immediately run into a whole bunch of unfalsifiable questions. Everyone has a different definition of "better" - you like syntactic elegance, your boss likes cost, the mainframe guys like COBOL. And then there are just too many variables. Cost might seem to be a suitably objective measure of quality, but it is influenced by too many other factors - labour costs, experience of the developers, requirements, engagement with stakeholders and so on. Martin Fowler puts it best in his bliki entry "Cannot Measure Productivity".
But surely there are heuristics that can approximate "better"? Coming back to epistemology, this is something that Thomas Kuhn observed in "The Structure of Scientific Revolutions" - a wonderful book that gave birth to that often marketed and seldom understood word - "paradigm". The dirty secret is that even in hard sciences like physics, "better" is not always obvious. It took decades for quantum theory to replace classical physics, and even to this day most physicists don't use quantum theory in their day-to-day work. They use the techniques that are good enough for their immediate problem.
According to Kuhn, a paradigm is a "framework" within which a scientist works. The questions that are asked, the hypotheses, the tests, the measurements and the interpretations all make sense within a given paradigm and often don't make sense outside it. Scientists spend most of their time working within a single paradigm, and only occasionally is there a revolution in which a new paradigm comes along to replace an older one.
If we're looking for a metaphor here (and I am), I think that much of the debate within IT is debate between different paradigms. A bank's enterprise IT department operates within an entirely different paradigm from a hipster cloud start-up. People building distributed systems are operating in a different paradigm from people building a CRM application. The techniques used, the measures of quality and the constraints that must be met are different in each paradigm. So asking which paradigm is better is nonsense. It's like asking whether quantum physics or classical physics is better. It depends! If you want to understand the nature of light, use quantum physics. If you want to measure the drag on a ship's hull, don't.
So what about marketing and those pesky people that shout loudest? Paul Feyerabend says that Popper and Kuhn are both fooling themselves. There's no such thing as an objectively correct scientific theory. The paradigms that win are the ones that are "marketed" best by the people who shout loudest. His example is Galileo, who championed the idea that the Earth revolved around the Sun. Feyerabend claims that the science of the day couldn't distinguish between the two theories. In fact, Ptolemaic epicycles fitted observations better than Copernicus' circular orbits. It wasn't until decades later, with Tycho Brahe's accurate observations and Kepler's discovery of elliptical orbits, that the measurements caught up with the theory. Galileo won because he "shouted loudest" - he used power, influence and community appeal (he wrote in Italian rather than Latin) to sell his theories.
So what's this got to do with IT?
The road from Popper to Feyerabend has moved science from a logical, disinterested, objective pursuit by a bunch of stuffy old men into a more realistic arena of competing tribes arguing and shouting to get themselves heard and to win new disciples. But this all works because in most cases there are objective measures of "better". A science may go AWOL for a few decades, but eventually paradigms move forward rather than backward.
I think IT can be fitted into this framework, but there are circumstances that make our gyrations around "better" large and chaotic. I agree with the view that there is a lack of historical sense in IT. Too much effort is put into reinventing the flat tyre. And a big problem is that not only can quality not be measured, but often there is disagreement over what constitutes quality in the first place.
Perhaps we can take a leaf out of polyglot programming and propose polyglot architecture. The polyglot architect has a good understanding of the Architecture Continuum - the smorgasbord of architectural patterns, approaches and paradigms that are out there. A good understanding of where each fits, and of its strengths, weaknesses and common mistakes. The polyglot architect is ready to work within the paradigm that makes sense for the task at hand - and recognises that often, one single paradigm is not enough.