The hypersane are among us, if only we are prepared to look

‘Hypersanity’ is not a common or accepted term. But neither did I make it up. I first came across the concept while training in psychiatry, in The Politics of Experience and the Bird of Paradise (1967) by R D Laing. In this book, the Scottish psychiatrist presented ‘madness’ as a voyage of discovery that could open out onto a free state of higher consciousness, or hypersanity. For Laing, the descent into madness could lead to a reckoning, to an awakening, to ‘break-through’ rather than ‘breakdown’. 

A few months later, I read C G Jung’s autobiography, Memories, Dreams, Reflections (1962), which provided a vivid case in point. In 1913, on the eve of the Great War, Jung broke off his close friendship with Sigmund Freud, and spent the next few years in a troubled state of mind that led him to a ‘confrontation with the unconscious’.

As Europe tore itself apart, Jung gained first-hand experience of psychotic material in which he found ‘the matrix of a mythopoeic imagination which has vanished from our rational age’. Like Gilgamesh, Odysseus, Heracles, Orpheus and Aeneas before him, Jung travelled deep down into an underworld where he conversed with Salome, an attractive young woman, and with Philemon, an old man with a white beard, the wings of a kingfisher and the horns of a bull. Although Salome and Philemon were products of Jung’s unconscious, they had lives of their own and said things that he had not previously thought. In Philemon, Jung had at long last found the father-figure that both Freud and his own father had failed to be. More than that, Philemon was a guru, and prefigured what Jung himself was later to become: the wise old man of Zürich. As the war burnt out, Jung re-emerged into sanity, and considered that he had found in his madness ‘the prima materia for a lifetime’s work’.

The Laingian concept of hypersanity, though modern, has ancient roots. Once, upon being asked to name the most beautiful of all things, Diogenes the Cynic (412-323 BCE) replied parrhesia, which in Ancient Greek means something like ‘uninhibited thought’, ‘free speech’, or ‘full expression’. Diogenes used to stroll around Athens in broad daylight brandishing a lit lamp. Whenever curious people stopped to ask what he was doing, he would reply: ‘I am just looking for a human being’ – thereby insinuating that the people of Athens were not living up to, or even much aware of, their full human potential.

After being exiled from his native Sinope for having defaced its coinage, Diogenes emigrated to Athens, took up the life of a beggar, and made it his mission to deface – metaphorically this time – the coinage of custom and convention that was, he maintained, the false currency of morality. He disdained the need for conventional shelter or any other such ‘dainties’, and elected to live in a tub and survive on a diet of onions. Diogenes proved to the later satisfaction of the Stoics that happiness has nothing whatsoever to do with a person’s material circumstances, and held that human beings had much to learn from studying the simplicity and artlessness of dogs, which, unlike human beings, had not complicated every simple gift of the gods.

The term ‘cynic’ derives from the Greek kynikos, which is the adjective of kyon or ‘dog’. Once, upon being challenged for masturbating in the marketplace, Diogenes wished that it were as easy to relieve hunger by rubbing an empty stomach. When asked, on another occasion, where he came from, he replied: ‘I am a citizen of the world’ (cosmopolites), a radical claim at the time, and the first recorded use of the term ‘cosmopolitan’. As he approached death, Diogenes asked for his mortal remains to be thrown outside the city walls for wild animals to feast upon. After his death in the city of Corinth, the Corinthians erected to his glory a pillar surmounted by a dog of Parian marble.

Jung and Diogenes came across as insane by the standards of their day. But both men had a depth and acuteness of vision that their contemporaries lacked, and that enabled them to see through the facades of ‘sanity’ around them. Both psychosis and hypersanity place us outside society, making us seem ‘mad’ to the mainstream. Both states attract a heady mixture of fear and fascination. But whereas mental disorder is distressing and disabling, hypersanity is liberating and empowering.

After reading The Politics of Experience, the concept of hypersanity stuck in my mind, not least as something that I might aspire to for myself. But if there is such a thing as hypersanity, the implication is that mere sanity is not all it’s cracked up to be, a state of dormancy and dullness with less vital potential even than madness. This I think is most apparent in people’s frequently suboptimal – if not frankly inappropriate – responses, both verbal and behavioural, to the world around them. As Laing puts it:

The condition of alienation, of being asleep, of being unconscious, of being out of one’s mind, is the condition of the normal man.

Society highly values its normal man. It educates children to lose themselves and to become absurd, and thus to be normal.

Normal men have killed perhaps 100,000,000 of their fellow normal men in the last 50 years.

Many ‘normal’ people suffer from not being hypersane: they have a restricted worldview, confused priorities, and are wracked by stress, anxiety and self-deception. As a result, they sometimes do dangerous things, and become fanatics or fascists or otherwise destructive (or not constructive) people. In contrast, hypersane people are calm, contained and constructive. It is not just that the ‘sane’ are irrational but that they lack scope and range, as though they’ve grown into the prisoners of their arbitrary lives, locked up in their own dark and narrow subjectivity. Unable to take leave of their selves, they hardly look around them, barely see beauty and possibility, rarely contemplate the bigger picture – and all, ultimately, for fear of losing their selves, of breaking down, of going mad, using one form of extreme subjectivity to defend against another, as life – mysterious, magical life – slips through their fingers.

We could all go mad, in a way we already are, minus the promise. But what if there were another route to hypersanity, one that, compared with madness, was less fearsome, less dangerous, and less damaging? What if, as well as a backdoor way, there were also a royal road strewn with sweet-scented petals? After all, Diogenes did not exactly go mad. Neither did other hypersane people such as Socrates and Confucius, although the Buddha did suffer, in the beginning, with what might today be classed as depression.

Besides Jung, are there any modern examples of hypersanity? Those who escaped from Plato’s cave of shadows were reluctant to crawl back down and involve themselves in the affairs of men, and most hypersane people, rather than courting the limelight, might prefer to hide out in their back gardens. But a few do rise to prominence for the difference that they feel compelled to make, people such as Nelson Mandela and Temple Grandin. And the hypersane are still among us: from the Dalai Lama to Jane Goodall, there are many candidates. While they might seem to be living in a world of their own, this is only because they have delved more deeply into the way things are than those ‘sane’ people around them.

Neel Burton

This article was originally published at Aeon and has been republished under Creative Commons.

Hypersanity Is Out!

My new book, Hypersanity: Thinking Beyond Thinking, is now out!

I’m starting with a soft release of the Kindle edition, priced, to begin with, at just 99p (99c in the US) to encourage early readers and reviews.

Please help yourselves 😀

UK: https://www.amazon.co.uk/dp/B07T3WCYQC/

US: https://www.amazon.com/dp/B07T3WCYQC/

Here’s the blurb:

R.D. Laing presented madness as a voyage of discovery that could open out onto a free state of higher consciousness, or hypersanity. But if there is such a thing as hypersanity, then mere sanity is not all it’s cracked up to be, a state of dormancy and dullness with less vital potential even than madness. We could all go mad, in a way we already are, minus the promise. But what if there was another route to hypersanity, one which, compared to madness, was less fearsome, less dangerous, and less damaging? What if, as well as a backdoor way, there was also a royal road strewn with petals and sprayed with perfume?

This is a book about thinking, which, astonishingly, is barely taught in formal education. Our culture mostly equates thinking with logical reasoning, and the first few chapters examine logic, reason, their forms, and their flaws, starting with the basics of argumentation. But thinking is also about much more than logical reasoning, and so the book broadens out to examine concepts such as intelligence, knowledge, and truth, and alternative forms of cognition that our culture tends to overlook and underplay, including intuition, emotion, and imagination.

If Hypersanity fails to live up to its tall promise, it should at least make you into a better thinker. And so you can approach the book as an opportunity to hone your thinking skills, which, in the end, are going to be far more important to your impact and wellbeing than any facts that you could ever learn. As B.F. Skinner once put it, ‘Education is what survives when what has been learnt has been forgotten.’

Some Thoughts on Wine Ratings

A wine rating is a summary of the appraisal of a wine by one or more critics, most notoriously Robert Parker, who assigns ‘Parker points’ on a scale of 0 to 100—although the lowest possible score is 50, scores of less than 70 are rare, and scores of less than 80 are uncommon. Since the 1970s, the practice of rating wines on a 100-point scale has proliferated. Other scales, including 0-to-20 and 0-to-5 (sometimes featuring stars in lieu of numbers), are also frequently used. Certain websites enable consumers to emulate critics by contributing to ‘community’ notes and scores. In competitions, wines are generally tasted blind by a panel of critics, usually alongside other wines from the same appellation or region. In theory, a rating is merely intended to supplement a tasting note; in practice, the tasting note—if it even exists—is often ignored or omitted, with the wine reduced to nothing more than a headline number.
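
For the numerically inclined, here is a minimal sketch, in Python, of what comparing scores across these scales might involve. The ‘effective floors’ are my assumptions, drawn from the conventions described above rather than from any official definition, and any such linear mapping is crude, since critics do not use their scales linearly:

    # Illustrative only: normalize wine scores from different scales onto a
    # common 0-to-1 range. The effective floors (e.g. 50 on the 100-point
    # scale) are assumptions based on convention, not official definitions.

    def normalize(score: float, floor: float, ceiling: float) -> float:
        """Map a score onto [0, 1] given its scale's effective floor and ceiling."""
        if not floor <= score <= ceiling:
            raise ValueError(f"score {score} is outside [{floor}, {ceiling}]")
        return (score - floor) / (ceiling - floor)

    # A 92-point wine (100-point scale, effective floor of 50)...
    print(round(normalize(92, floor=50, ceiling=100), 2))  # 0.84
    # ...compared with a 17/20 wine and a 4.5-star wine:
    print(round(normalize(17, floor=0, ceiling=20), 2))    # 0.85
    print(round(normalize(4.5, floor=0, ceiling=5), 2))    # 0.9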

Wine ratings convey information quickly and simply, guiding the purchasing decisions of novices in particular. Assuming strict single-blind conditions at the time of tasting, they reflect performance rather than price or pedigree. Scores can easily be compared, which encourages producers to compete and improve their offerings, and rewards them for doing so. Wines with 90-plus points are much more likely to shift, and those with scores in the high 90s can develop cult followings. Château Tirecul la Gravière in Monbazillac became an overnight reference after Robert Parker gave 100 points to its 1995 Cuvée Madame.

However, wine ratings can be criticized on the triple grounds of concept, procedure, and consequences. While a numerical score can come across as scientific, it merely reflects the personal preferences and prejudices of one or several critics, and it may be that grading wines is as misguided as ranking people in a beauty pageant. For what is beauty, and can it be measured on a stage? Like the contestants in the pageant, the wines are often very young, and scores cannot fully account for the delights and disappointments that they are yet to reveal. In any case, the most beautiful girl or boy is probably not on stage, but sitting at home buried in the Nicomachean Ethics. Many hallowed producers shun competitions, partly on ideological grounds, but mostly because they have little to gain and much to lose.

Scores are influenced not only by personal preferences and prejudices, but also by the context and conditions of the tasting, and, in a panel, by the group dynamics, with junior judges exquisitely sensitive to every ‘um’ and ‘aah’ of the distinguished panel chair. The number that comes out of this process might be of existential import to the producer, who has toiled for a year, indeed, several years, to make his or her wine, but reflects no more than a few seconds of tasting, with little or no time for discussion and debate. In competitions, there is also a financial incentive to dish out medals, which encourage further paid entries and increase sales of medal stickers.

As for consequences, wines with the highest scores fall prey to speculators and are traded like financial commodities, effectively removing them from the market-place. More gravely, ratings tend to favour the sort of wines that are able to stand out on a fatigued, tannin-coated palate, at the expense of more delicate wines, which are likely to be more elegant, more interesting, more faithful to terroir, and better suited to the table. This phenomenon has contributed in no small measure to the homogenization, or ‘Parkerization’, of wine styles as producers vie to obtain the highest scores—though Robert Parker himself stepped back significantly in 2016.

Wine ratings have played an important role in the rise of wine culture, but their grip seems to be loosening, if not quite fading, as consumers become more and more experienced and knowledgeable. To me, a score of 98 can also function as a signal for caution.

Adapted from The Concise Guide to Wine and Blind Tasting

Aha, Uh-oh and Doh: The Psychology of Insight

And how to improve cognitive flexibility.

‘Insight’ is sometimes used to mean something like ‘self-awareness’, including awareness of our thought processes, beliefs, desires, emotions, and so on, and how they might relate to truth or usefulness. Of course, self-awareness comes by degrees. Owing to chemical receptors in their tendrils, vining plants know not to coil around themselves, and to that extent can be said to have an awareness of self and not-self. Children begin to develop reflective self-awareness at around 18 months of age, enabling them to recognize themselves in pictures and mirrors.

But ‘insight’ is also used to mean something like ‘penetrating discernment’, especially in cases when a solution to a previously intractable problem suddenly presents itself—and it is on this particular meaning of the word that I now want to focus.

Such ‘aha moments’, epitomized by Archimedes’ cry of Eureka! Eureka! (Gr., ‘I found it! I found it!’), involve seeing something familiar in a new light or context, particularly a brighter or broader one, leading to a novel perspective and positive emotions such as joy, enthusiasm, and confidence. It is said that, after stepping into his bath, Archimedes noticed the water level rising, and suddenly understood that the volume of water displaced corresponded to the volume of the part of his body that had been submerged. Lesser examples of aha moments include suddenly understanding a joke, or suddenly perceiving the other aspect of a reversible image such as the duck/rabbit optical illusion. Aha moments result primarily from unconscious and automatic processes, and we tend, when working on insight problems, to look away from sources of visual stimulus.

Aha moments ought to be distinguished from uh-oh moments, in which we suddenly become aware of an unforeseen problem, and from doh moments, popularized by Homer Simpson, when an unforeseen problem hits us and/or we have a flash of insight into our lack of insight.

‘Thinking out of the box’ is a significant cognitive achievement. Once we have understood something in one way, it is very difficult to see it in any other way, even in the face of strong contradictory evidence. In When Prophecy Fails (1956), Leon Festinger discussed his experience of infiltrating a UFO doomsday cult whose leader had prophesied the end of the world. When the end of the world predictably failed to materialize, most of the cult members dealt with the dissonance that arose from the cognitions ‘the leader said the world would end’ and ‘the world did not end’ not by abandoning the cult or its leader, as you might expect, but by introducing the rationalization that the world had been saved by the strength of their faith!

Very often, to see something in a different light also means to see ourselves and the whole world in that new light, which can threaten and undermine our sense of self. It is more a matter of the emotions than of reason, which explains why even leading scientists can struggle with perceptual shifts. According to the physicist Max Planck, ‘a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it’. Or to put it more pithily, science advances one funeral at a time.

Even worse, strong contradictory evidence, or attempts to convince us otherwise, can, in fact, be counterproductive and entrench our existing beliefs—which is why as a psychiatrist I rarely challenge my patients or indeed anyone directly. You don’t have to take my word for it: in one recent study, supplying ‘corrective information’ to people with serious concerns about the adverse effects of the flu jab actually made them less willing to receive it.

So, short of dissolving our egos like a Zen master, what can we do to improve our cognitive flexibility? Of course, it helps to have the tools of thought, including language fluency and multiple frames of reference as given by knowledge and experience. But much more important is to develop that first sense of ‘insight’, namely, insight as self-awareness.

On a more day-to-day basis, we need to create the time and conditions for allowing new connections to form. My own associative thinking is much more active when I’m both well-rested and at rest, for example, standing under the shower or ambling in the park. As Chairman and CEO of General Electric, Jack Welch spent an hour each day in what he called ‘looking out of the window time’. August Kekulé claimed to have discovered the ring structure of the benzene molecule while daydreaming about a snake biting its own tail.

Time is a very strange thing, and not at all linear: sometimes, the best way of using it is to waste it.

Nyhan B & Reifler J (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine 33(3): 459–464.

The Wines of the Mâconnais

The rock of Solutré

The climate of the Mâconnais is considerably warmer than that of Chablis or even the Côte d’Or. The relief is not as marked as in the Côte d’Or, and vineyards are mixed in with other forms of farming. The most reputed wines are from the south of Mâcon, in an area that rises into three limestone peaks: the Mont de Pouilly, the Roche de Solutré, and the Roche de Vergisson. The Roche de Solutré, which is a prehistoric and pilgrimage site, is picturesque, and well worth the gentle hike to its 493m summit.

Chardonnay predominates in the Mâconnais, but some Gamay and Pinot Noir are also found, especially in areas that are richer in sand and clay. The regional appellations are Mâcon, Mâcon-Villages (white wines only), and Mâcon + commune name. In addition, there are five commune-specific appellations (white wines only): Pouilly-Fuissé, Pouilly-Vinzelles, Pouilly-Loché, and Saint-Véran to the south of Mâcon, and Viré-Clessé to the north. The vines are pruned as simple Guyot, with the cane trained in an arc (en arcure), which helps to delay budding (especially of terminal buds) and protect against frost.

Compared to Beaune, most Mâcon is simple and easy to drink, and unlikely to improve with age. That said, certain villages and producers have built a solid reputation and can offer great value for money. The limestone peaks of the Pouilly area belie the geological complexity of the surrounding terroir, with numerous faults and dips associated with at least fifteen distinct soil types. Some of the vineyards around the three peaks are deserving of Premier Cru status, and, in a first for the Mâconnais, there is a project to introduce about twenty. In 1866, Dr Jules Guyot wrote a report for the French ministry of agriculture in which he compared the potential of Meursault to that of Pouilly-Fuissé, and it’s interesting that he put it that way round.

The plan for Premier Crus

As with Chablis, much Mâcon is unoaked. However, Mâcon is less acidic than Chablis. Compared to Beaune and especially to Chablis, it is deeper in colour with riper aromas and a fuller body. The Pouilly wines, which are often lightly oaked, tend to be richer and riper on the one hand, and finer and more complex on the other. Owing to their sought-after smoky, flinty, or ‘wet stone’ character (goût de pierre à fusil), they are, I think, easier to confuse with Chablis than with Beaune. Pouilly-Vinzelles (~40ha) and Pouilly-Loché (~30ha) are exclaves of the much larger Pouilly-Fuissé (~760ha), and the wines from the three appellations are very similar in style. Vinzelles with its two castles was known to the Romans, who called it Vincella, or ‘Small Vine’. The soils in Vinzelles tend to be more ferrous, which can translate into spicier, broader wines. Neighbouring Loché can be labelled as Vinzelles, and is harder to find. Saint-Véran envelops Pouilly-Fuissé like a scarf (or, to be more precise, like a bun), with wines that tend towards a leaner, fresher style. Owing to an administrative cock-up in 1971, the village name is ‘Saint-Vérand’ but the appellation ‘Saint-Véran’, without the ‘d’. Viré-Clessé to the north of Mâcon varies in style, but the best examples, especially from Viré, are easily mistaken for Pouilly-Fuissé—as are the best examples from Saint-Véran.

Notable producers in the Mâconnais include Domaine de la Soufrandière and the related négoce Bret Brothers (very classic regional style), Guffens-Heynen and the related négoce Verget, Ferret, Valette, Chagnoleau, and Rijckaert. It’s all too easy to underestimate the Mâconnais, but the best wines can be as good as anything in Burgundy, at a fraction of the price.

And that’s saying something.

The Problems of Science

An overview of the philosophy of science

What is science? To call a thing ‘scientific’ or ‘scientifically proven’ is to lend that thing instant credibility. It is sometimes said that 90% of scientists who ever lived are alive today—despite a relative lack of scientific progress, and even regress as the planet comes under increasing strain. Especially in Northern Europe, more people believe in science than in religion, and attacking science can raise the same old, atavistic defences. In a bid to emulate or at least evoke the apparent success of physics, many areas of study have claimed the mantle of science: ‘economic science’, ‘political science’, ‘social science’, and so on. Whether or not these disciplines are true, bona fide sciences is a matter for debate, since there are no clear or reliable criteria for distinguishing a science from a non-science.

What might be said is that all sciences, unlike, say, magic or myth, share certain assumptions which underpin the scientific method, in particular, that there is an objective reality governed by uniform laws and that this reality can be discovered by systematic observation. A scientific experiment is basically a repeatable procedure designed to help support or refute a particular hypothesis about the nature of reality. Typically, it seeks to isolate the element under investigation by eliminating or ‘controlling for’ other variables that may be confused or ‘confounded’ with the element under investigation. Important assumptions or expectations include: that all potential confounding factors can be identified and controlled for; that any measurements are appropriate and sensitive to the element under investigation; and that the results are analysed and interpreted rationally and impartially.

Still, many things can go wrong with the experiment. In drug trials, for example, experiments that have not been adequately randomized (with subjects randomly allocated to test and control groups) or adequately blinded (with information about the drug being administered or received withheld from the investigator or subject) significantly exaggerate the benefits of treatment. Investigators may consciously or subconsciously withhold or ignore data that does not meet their desires or expectations (‘cherry picking’), or stray beyond their original hypothesis to look for chance or uncontrolled correlations (‘data dredging’). A promising result, which might have been obtained by chance, is much more likely to be published than an unfavourable one (‘publication bias’), creating the false impression that most studies have been positive, and therefore that the drug is much more effective than it actually is. One damning systematic review found that, compared with independently funded drug trials, drug trials funded by pharmaceutical companies are less likely to be published, while those that are published are four times more likely to feature positive results for the products of their sponsors!
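
To see how publication bias alone can conjure an effect out of nothing, consider the following minimal simulation, a sketch in Python with made-up numbers rather than data from any real trial:

    # A Monte Carlo sketch of publication bias, with invented numbers: many
    # small trials of a drug with NO true effect are simulated, and only the
    # 'promising' results are published. The published literature then shows
    # a clear benefit where none exists.
    import random
    import statistics

    random.seed(42)

    def run_trial(n: int = 30) -> float:
        """Mean difference between drug and placebo groups, where both groups
        are drawn from the same distribution (i.e. zero true effect)."""
        drug = [random.gauss(0, 1) for _ in range(n)]
        placebo = [random.gauss(0, 1) for _ in range(n)]
        return statistics.mean(drug) - statistics.mean(placebo)

    effects = [run_trial() for _ in range(10_000)]
    published = [e for e in effects if e > 0.3]  # only 'promising' results get written up

    print(f"Mean effect across all trials:       {statistics.mean(effects):+.3f}")    # close to zero
    print(f"Mean effect across published trials: {statistics.mean(published):+.3f}")  # clearly positive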

So much for the easy, superficial problems. But there are deeper, more intractable philosophical problems as well. For most of recorded history, ‘knowledge’ was based on authority, especially that of the Bible and whitebeards such as Aristotle, Ptolemy, and Galen. But today, or so we like to think, knowledge is much more secure because grounded in observation. Leaving aside that much of what counts as scientific knowledge cannot be directly observed, and that our species-specific senses are partial and limited, there is, in the phrase of Norwood Russell Hanson, ‘more to seeing than meets the eyeball’:

Seeing is an experience. A retinal reaction is only a physical state… People, not their eyes, see. Cameras and eyeballs are blind.

Observation involves both perception and cognition, with sensory information filtered, interpreted, and even distorted by factors such as beliefs, experience, expectations, desires, and emotions. The finished product of observation is then encoded into a statement of fact consisting of linguistic symbols and concepts, each one with its own particular history, connotations, and limitations. All this means that it is impossible to test a hypothesis in isolation of all the background theories, frameworks, and assumptions from which it issues.

This is important, because science principally proceeds by induction, that is, by the observation of large and representative samples. But even if observation could be objective, observations alone, no matter how accurate and exhaustive, cannot in themselves establish the validity of a hypothesis. How do we know that ‘flamingos are pink’? Well, we don’t know for sure. We merely suppose that they are because, so far, every flamingo that we have seen or heard about has been pink. But the existence of a non-pink flamingo is not beyond the bounds of possibility. A turkey that is fed every morning might infer by induction that it will be fed every morning, until on Christmas Eve the goodly farmer picks it up and wrings its neck. Induction only ever yields probabilistic truths, and yet is the basis of everything that we know, or think that we know, about the world we live in. Our only justification for induction is that it has worked before, which is, of course, an inductive proof, tantamount to saying that induction works because induction works! For just this reason, induction has been called ‘the glory of science and the scandal of philosophy’.
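
One classical way of putting numbers on this (my gloss, not part of the original argument) is Laplace’s rule of succession: after observing n pink flamingos and none of any other colour, and assuming a uniform prior over the true proportion of pink flamingos, the probability that the next one is also pink is (n + 1)/(n + 2). A short sketch:

    # Laplace's rule of succession, as an illustration of induction yielding
    # only probabilistic truths: confidence grows with each pink flamingo,
    # but no finite number of observations ever reaches certainty.

    def probability_next_is_pink(n_pink_seen: int) -> float:
        """P(next flamingo is pink), given n pink and 0 non-pink sightings,
        under a uniform prior over the true proportion of pink flamingos."""
        return (n_pink_seen + 1) / (n_pink_seen + 2)

    for n in (1, 10, 100, 10_000):
        print(f"after {n:>6} pink flamingos: P(next is pink) = {probability_next_is_pink(n):.4f}")
    # The turkey's inductive confidence, likewise, was at its highest on Christmas Eve.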

It may be that science proceeds, not by induction, but by abduction, or finding the most likely explanation for the observations—as, for example, when a physician is faced with a constellation of symptoms and formulates a ‘working diagnosis’ that more or less fits the clinical picture. But ultimately abduction is no more than a type of induction. Both abduction and induction are types of ‘backward reasoning’, formally equivalent to the logical fallacy of ‘affirming the consequent’:

  • If A then B. B. Therefore A.
  • “If I have the flu, then I have a fever. I have a fever. Therefore, I have the flu.”
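
That this argument form is invalid can be checked mechanically, by brute force over the truth table: a form is valid only if no assignment of truth values makes all the premises true and the conclusion false. Here is a small sketch in Python (the helper names are mine, for illustration):

    # Brute-force truth-table check: list every assignment of truth values
    # to A and B on which all the premises hold but the conclusion fails.
    from itertools import product

    def implies(a: bool, b: bool) -> bool:
        return (not a) or b

    def counterexamples(premises, conclusion):
        return [(a, b) for a, b in product([True, False], repeat=2)
                if all(p(a, b) for p in premises) and not conclusion(a, b)]

    # Affirming the consequent: if A then B; B; therefore A.
    print(counterexamples([lambda a, b: implies(a, b), lambda a, b: b],
                          lambda a, b: a))  # [(False, True)]: fever without flu
    # Compare modus ponens: if A then B; A; therefore B.
    print(counterexamples([lambda a, b: implies(a, b), lambda a, b: a],
                          lambda a, b: b))  # []: no counterexample, so valid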

But, of course, I could have meningitis or malaria or any number of other conditions. How to decide between them? At medical school, we were taught that ‘common things are common’. This is a formulation of Ockham’s razor, which involves choosing the simplest available explanation. Ockham’s razor, also called the law of parsimony, is often invoked as a principle of inductive reasoning, but, of course, the simplest available explanation is not necessarily the best or correct one. What’s more, we may be unable to decide which is the simplest explanation, or even what ‘simple’ might mean in context. Some people think that God is the simplest explanation for creation, while others think Him rather far-fetched. Still, there is some wisdom in Ockham’s razor: while the simplest explanation may not be the correct one, neither should we labour, or keep on ‘fixing’, a preferred hypothesis to save it from a simpler and better explanation. I should mention in passing that the psychological equivalent of Ockham’s razor is Hanlon’s razor: never attribute to malice that which can be adequately explained by neglect, incompetence, or stupidity.

Simpler hypotheses are also preferable in that they are easier to disprove, or falsify. To rescue it from the Problem of Induction, Karl Popper argued that science proceeds not inductively but deductively, by formulating a hypothesis and then seeking to falsify it.

  • ‘All flamingos are pink.’ Oh, but look, here’s a flamingo that’s not pink. Therefore, it is not the case that all flamingos are pink.

On this account, theories such as those of Freud and Marx are not scientific in so far as they cannot be falsified. But if Popper is correct that science proceeds by falsification, science could never tell us what is, but only ever what is not. Even if we did arrive at some truth, we could never know for sure that we had arrived. Another issue with falsification is that, when the hypothesis conflicts with the data, it could be the data rather than the hypothesis that is at fault—in which case it would be a mistake to reject the hypothesis. Scientists need to be dogmatic enough to persevere with a preferred hypothesis in the face of apparent falsifications, but not so dogmatic as to cling on to their preferred hypothesis in the face of robust and repeated falsifications. It’s a delicate balance to strike.
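
Popper’s schema is just modus tollens, and is simple enough to spell out in code. A minimal sketch (the flamingo data are invented for illustration):

    # Falsification as modus tollens: no finite set of observations can verify
    # a universal hypothesis, but a single counterexample deductively refutes it.

    def falsified(hypothesis, observations) -> bool:
        """True if any observation contradicts the universal hypothesis."""
        return any(not hypothesis(obs) for obs in observations)

    all_flamingos_are_pink = lambda flamingo: flamingo["colour"] == "pink"

    sightings = [{"colour": "pink"}] * 1_000  # a thousand confirmations...
    print(falsified(all_flamingos_are_pink, sightings))                         # False
    print(falsified(all_flamingos_are_pink, sightings + [{"colour": "grey"}]))  # True
    # Note the asymmetry: 'False' here means 'not yet falsified', never 'proven'.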

For Thomas Kuhn, scientific hypotheses are shaped and restricted by the worldview, or paradigm, within which the scientist operates. Most scientists are blind to the paradigm and unable to see across or beyond it. If data emerges that conflicts with the paradigm, it is usually ignored or explained away. But nothing lasts forever, and eventually the paradigm weakens and is overturned. Examples of such ‘paradigm shifts’ include the transition from Aristotelian mechanics to classical mechanics, the transition from miasma theory to the germ theory of disease, and the transition from ‘clinical judgement’ to evidence-based medicine. Of course, a paradigm does not die overnight. Reason is, for the most part, a tool that we use to justify what we are already inclined to believe, and a human life cannot easily accommodate more than one paradigm. In the words of Max Planck, ‘A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ Or to put it more pithily, science advances one funeral at a time.

In The Structure of Scientific Revolutions, Kuhn argued that rival paradigms offer competing and irreconcilable accounts of reality, suggesting that there are no independent standards by which they might be judged against one another. Imre Lakatos sought to reconcile Popper and Kuhn, and spoke of programmes rather than paradigms. A programme is based on a hard core of theoretical assumptions accompanied by more modest auxiliary hypotheses formulated to protect the hard core against any conflicting data. While the hard core cannot be abandoned without jeopardising the programme, auxiliary hypotheses can be adapted to protect the hard core against evolving threats, rendering the hard core unfalsifiable. A progressive programme is one in which changes to auxiliary hypotheses lead to greater predictive power, whereas a degenerative programme is one in which these ad hoc elaborations become sterile and cumbersome. A degenerative programme, says Lakatos, is one which is ripe for replacement. Though very successful in its time, classical mechanics, with Newton’s three laws of motion at its core, was gradually superseded by the special theory of relativity.

For Paul Feyerabend, Lakatos’s theory makes a mockery of any pretence at scientific rationality. Feyerabend went so far as to call Lakatos a ‘fellow anarchist’, albeit one in disguise. For Feyerabend, there is no such thing as a or the scientific method: anything goes, and as a form of knowledge science is no more privileged than magic, myth, or religion. More than that, science has come to occupy the same place in the human psyche as religion once did. Although science began as a liberating movement, it grew dogmatic and repressive, more of an ideology than a rational method that leads to ineluctable progress. In the words of Feyerabend:

Knowledge is not a series of self-consistent theories that converges toward an ideal view; it is rather an ever increasing ocean of mutually incompatible (and perhaps even incommensurable) alternatives, each single theory, each fairy tale, each myth that is part of the collection forcing the others into greater articulation and all of them contributing, via this process of competition, to the development of our consciousness.

‘My life’, wrote Feyerabend, ‘has been the result of accidents, not of goals and principles. My intellectual work forms only an insignificant part of it. Love and personal understanding are much more important. Leading intellectuals with their zeal for objectivity kill these personal elements. They are criminals, not the leaders of mankind.’

Every paradigm that has come and gone is now deemed to have been false, inaccurate, or incomplete, and it would be ignorant or arrogant to assume that our current ones might amount to the truth, the whole truth, and nothing but the truth. If our aim in doing science is to make predictions, enable effective technology, and in general promote successful outcomes, then this may not matter all that much, and we can continue to use outdated or discredited theories such as Newton’s laws of motion so long as we find them useful. But it would help if we could be more realistic about science, and, at the same time, more rigorous, critical, and imaginative in conducting it.

Originally published on Psychology Today