Bad science

4 Homeopathy

And now for the meat. But before we take a single step into this arena, we should be clear on one thing: despite what you might think, I’m not desperately interested in Complementary and Alternative Medicine (a dubious piece of phraseological rebranding in itself). I am interested in the role of medicine, our beliefs about the body and healing, and I am fascinated—in my day job—by the intricacies of how we can gather evidence for the benefits and risks of a given intervention.
Homeopathy, in all of this, is simply our tool.
So here we address one of the most important issues in science: how do we know if an intervention works? Whether it’s a face cream, a detox regime, a school exercise, a vitamin pill, a parenting programme or a heart-attack drug, the skills involved in testing an intervention are all the same. Homeopathy makes the clearest teaching device for evidence-based medicine for one simple reason: homeopaths give out little sugar pills, and pills are the easiest thing in the world to study.
By the end of this section you will know more about evidence-based medicine and trial design than the average doctor. You will understand how trials can go wrong, and give false positive results, how the placebo effect works, and why we tend to overestimate the efficacy of pills. More importantly, you will also see how a health myth can be created, fostered and maintained by the alternative medicine industry, using all the same tricks on you, the public, which big pharma uses on doctors. This is about something much bigger than homeopathy.
What is homeopathy?
Homeopathy is perhaps the paradigmatic example of an alternative therapy: it claims the authority of a rich historical heritage, but its history is routinely rewritten for the PR needs of a contemporary market; it has an elaborate and sciencey-sounding framework for how it works, without scientific evidence to demonstrate its veracity; and its proponents are quite clear that the pills will make you better, when in fact they have been thoroughly researched, with innumerable trials, and have been found to perform no better than placebo.
Homeopathy was devised by a German doctor named Samuel Hahnemann in the late eighteenth century. At a time when mainstream medicine consisted of blood-letting, purging and various other ineffective and dangerous evils, when new treatments were conjured up out of thin air by arbitrary authority figures who called themselves ‘doctors’, often with little evidence to support them, homeopathy would have seemed fairly reasonable.
Hahnemann’s theories differed from the competition because he decided—and there’s no better word for it—that if he could find a substance which would induce the symptoms of a disease in a healthy individual, then it could be used to treat the same symptoms in a sick person. His first homeopathic remedy was Cinchona bark, which was suggested as a treatment for malaria. He took some himself, at a high dose, and experienced symptoms which he decided were similar to those of malaria itself:
My feet and finger-tips at once became cold; I grew languid and drowsy; my heart began to palpitate; my pulse became hard and quick; an intolerable anxiety and trembling arose…prostration…pulsation in the head, redness in the cheek and raging thirst…intermittent fever…stupefaction…rigidity…
–and so on.
Hahnemann assumed that everyone would experience these symptoms if they took Cinchona (although there’s some evidence that he just experienced an idiosyncratic adverse reaction). More importantly, he also decided that if he gave a tiny amount of Cinchona to someone with malaria, it would treat, rather than cause, the malaria symptoms. The theory of ‘like cures like’ which he conjured up on that day is, in essence, the first principle of homeopathy.*
≡ Cinchona bark contains quinine, which at proper doses can genuinely be used to treat malaria, although most malarial parasites are now resistant to it.

Giving out chemicals and herbs could be a dangerous business, since they can have genuine effects on the body (they induce symptoms, as Hahnemann identified). But he solved that problem with his second great inspiration, and the key feature of homeopathy that most people would recognise today: he decided—again, that’s the only word for it—that if you diluted a substance, this would ‘potentise’ its ability to cure symptoms, ‘enhancing’ its ‘spirit-like medicinal powers’, and at the same time, as luck would have it, also reducing its side-effects. In fact he went further than this: the more you dilute a substance, the more powerful it becomes at treating the symptoms it would otherwise induce.
Simple dilutions were not enough. Hahnemann decided that the process had to be performed in a very specific way, with an eye on brand identity, or a sense of ritual and occasion, so he devised a process called ‘succussion’. With each dilution the glass vessel containing the remedy is shaken by ten firm strikes against ‘a hard but elastic object’. For this purpose Hahnemann had a saddlemaker construct a bespoke wooden striking board, covered in leather on one side, and stuffed with horsehair. These ten firm strikes are still carried out in homeopathy pill factories today, sometimes by elaborate, specially constructed robots.
Homeopaths have developed a wide range of remedies over the years, and the process of developing them has come to be called, rather grandly, ‘proving’ (from the German Prüfung). A group of volunteers, anywhere from one person to a couple of dozen, come together and take six doses of the remedy being ‘proved’, at a range of dilutions, over the course of two days, keeping a diary of the mental, physical and emotional sensations, including dreams, experienced over this time. At the end of the proving, the ‘master prover’ will collate the information from the diaries, and this long, unsystematic list of symptoms and dreams from a small number of people will become the ‘symptom picture’ for that remedy, written in a big book and revered, in some cases, for all time. When you go to a homeopath, he or she will try to match your symptoms to the ones caused by a remedy in a proving.
There are obvious problems with this system. For a start, you can’t be sure if the experiences the ‘provers’ are having are caused by the substance they’re taking, or by something entirely unrelated. It might be a ‘nocebo’ effect, the opposite of placebo, where people feel bad because they’re expecting to (I bet I could make you feel nauseous right now by telling you some home truths about how your last processed meal was made); it might be a form of group hysteria (‘Are there fleas in this sofa?’); one of them might experience a tummy ache that was coming on anyway; or they might all get the same mild cold together; and so on.
But homeopaths have been very successful at marketing these ‘provings’ as valid scientific investigations. If you go to Boots the Chemist’s website, www.bootslearningstore.co.uk, for example, and take their 16-plus teaching module for children on alternative therapies, you will see, amongst the other gobbledegook about homeopathic remedies, that they teach that Hahnemann’s provings were ‘clinical trials’. This is not true, as you can now see, and such claims are not uncommon.
Hahnemann professed, and indeed recommended, complete ignorance of the physiological processes going on inside the body: he treated it as a black box, with medicines going in and effects coming out, and championed only empirical data, the effects of the medicine on symptoms (‘The totality of symptoms and circumstances observed in each individual case,’ he said, ‘is the one and only indication that can lead us to the choice of the remedy’).
This is the polar opposite of the ‘Medicine only treats the symptoms, we treat and understand the underlying cause’ rhetoric of modern alternative therapists. It’s also interesting to note, in these times of ‘natural is good’, that Hahnemann said nothing about homeopathy being ‘natural’, and promoted himself as a man of science.
Conventional medicine in Hahnemann’s time was obsessed with theory, and was hugely proud of basing its practice on a ‘rational’ understanding of anatomy and the workings of the body. Medical doctors in the eighteenth century sneeringly accused homeopaths of ‘mere empiricism’, an over-reliance on observations of people getting better. Now the tables are turned: today the medical profession is frequently happy to accept ignorance of the details of mechanism, as long as trial data shows that treatments are effective (we aim to abandon the ones that aren’t), whereas homeopaths rely exclusively on their exotic theories, and ignore the gigantic swathe of negative empirical evidence on their efficacy. It’s a small point, perhaps, but these subtle shifts in rhetoric and meaning can be revealing.
The dilution problem
Before we go any further into homeopathy, and look at whether it actually works or not, there is one central problem we need to get out of the way.
Most people know that homeopathic remedies are diluted to such an extent that there will be no molecules of it left in the dose you get. What you might not know is just how far these remedies are diluted. The typical homeopathic dilution is 30C: this means that the original substance has been diluted by one drop in a hundred, thirty times over. In the ‘What is homeopathy?’ section on the Society of Homeopaths’ website, the single largest organisation for homeopaths in the UK will tell you that ‘30C contains less than one part per million of the original substance.’
‘Less than one part per million’ is, I would say, something of an understatement: a 30C homeopathic preparation is a dilution of one in 100³⁰, or rather 10⁶⁰, or one followed by sixty zeroes. To avoid any misunderstandings, this is a dilution of one in 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000, or, to phrase it in the Society of Homeopaths’ terms, ‘one part per million million million million million million million million million million’. This is definitely ‘less than one part per million of the original substance’.
For perspective, there are only around 100,000,000,000,000,000,000,000,000,000,000 molecules of water in an Olympic-sized swimming pool. Imagine a sphere of water with a diameter of 150 million kilometres (the distance from the earth to the sun). It takes light eight minutes to travel that distance. Picture a sphere of water that size, with one molecule of a substance in it: that’s a 30C dilution.*
≡ For pedants, it’s a 30.89C dilution.

At a homeopathic dilution of 200C (you can buy much higher dilutions from any homeopathic supplier) the treating substance is diluted more than the total number of atoms in the universe, and by an enormously huge margin. To look at it another way, the universe contains about 3 × 10⁸⁰ cubic metres of storage space (ideal for starting a family): if it was filled with water, and one molecule of active ingredient, this would make for a rather paltry 55C dilution.
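If you want to check these figures rather than take them on trust, the arithmetic fits in a few lines. Here is a minimal sketch in Python, assuming pure water (18 grams per mole, 1,000 kilograms per cubic metre) and the volumes quoted above; it reproduces the Olympic-pool figure, the footnote’s 30.89C, and the ‘rather paltry 55C’:

```python
# Back-of-the-envelope check of the dilution figures in the text,
# assuming pure water (18 g/mol, 1,000 kg/m^3).
import math

AVOGADRO = 6.022e23                          # molecules per mole
MOLECULES_PER_M3 = 1000 / 0.018 * AVOGADRO   # ~3.3e28 water molecules per m^3

def c_potency(n_molecules):
    """The 'C' dilution at which one molecule remains among n_molecules.
    Each C step is a 1-in-100 dilution, so 30C = 1 in 100**30 = 1 in 10**60."""
    return math.log10(n_molecules) / 2

pool = 2500 * MOLECULES_PER_M3               # Olympic pool, roughly 2,500 m^3
print(f"Olympic pool: about 10^{math.log10(pool):.0f} molecules")      # ~10^32

sun_sphere = math.pi / 6 * 1.5e11**3 * MOLECULES_PER_M3   # 150m-km-wide sphere
print(f"Earth-to-sun sphere of water: {c_potency(sun_sphere):.2f}C")   # ~30.89C

universe = 3e80 * MOLECULES_PER_M3           # the universe, filled with water
print(f"Water-filled universe: {c_potency(universe):.1f}C")            # ~54.5C
```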
We should remember, though, that the improbability of homeopaths’ claims for how their pills might work remains fairly inconsequential, and is not central to our main observation, which is that they work no better than placebo. We do not know how general anaesthetics work, but we know that they do work, and we use them despite our ignorance of the mechanism. I myself have cut deep into a man’s abdomen and rummaged around his intestines in an operating theatre—heavily supervised, I hasten to add—while he was knocked out by anaesthetics, and the gaps in our knowledge regarding their mode of action didn’t bother either me or the patient at the time.
Moreover, at the time that homeopathy was first devised by Hahnemann, nobody even knew that these problems existed, because the Italian physicist Amedeo Avogadro and his successors hadn’t yet worked out how many molecules there are in a given amount of a given substance, let alone how many atoms there are in the universe. We didn’t even really know what atoms were.
How have homeopaths dealt with the arrival of this new knowledge? By saying that the absent molecules are irrelevant, because ‘water has a memory’. This sounds feasible if you think of a bath, or a test tube full of water. But if you think, at the most basic level, about the scale of these objects, a tiny water molecule isn’t going to be deformed by an enormous arnica molecule, and be left with a ‘suggestive dent’, which is how many homeopaths seem to picture the process. A pea-sized lump of putty cannot take an impression of the surface of your sofa.
Physicists have studied the structure of water very intensively for many decades, and while it is true that water molecules will form structures round a molecule dissolved in them at room temperature, the everyday random motion of water molecules means that these structures are very short-lived, with lifetimes measured in picoseconds, or even less. This is a very restrictive shelf life.
Homeopaths will sometimes pull out anomalous results from physics experiments and suggest that these prove the efficacy of homeopathy. These experiments have fascinating flaws which can be read about elsewhere (frequently the homeopathic preparation, which exquisitely sensitive lab equipment finds to be subtly different from a non-homeopathic dilution, turns out to have been prepared in a completely different way, from different stock ingredients, and it is that difference which the equipment detects). As a ready shorthand, it’s also worth noting that the American magician and ‘debunker’ James Randi has offered a $1 million prize to anyone demonstrating ‘anomalous claims’ under laboratory conditions, and has specifically stated that anyone could win it by reliably distinguishing a homeopathic preparation from a non-homeopathic one using any method they wish. This $1 million bounty remains unclaimed.
Even if taken at face value, the ‘memory of water’ claim has large conceptual holes, and most of them you can work out for yourself. If water has a memory, as homeopaths claim, and a one in 10⁶⁰ dilution is fine, then by now all water must surely be a health-giving homeopathic dilution of all the molecules in the world. Water has been sloshing around the globe for a very long time, after all, and the water in my very body as I sit here typing away in London has already been through plenty of other people’s bodies before mine. Maybe some of the water molecules sitting in my fingers as I type this sentence are currently in your eyeball. Maybe some of the water molecules fleshing out my neurons as I decide whether to write ‘wee’ or ‘urine’ in this sentence are now in the Queen’s bladder (God bless her): water is a great leveller, it gets about. Just look at clouds.
How does a water molecule know to forget every other molecule it’s seen before? How does it know to treat my bruise with its memory of arnica, rather than a memory of Isaac Asimov’s faeces? I wrote this in the newspaper once, and a homeopath complained to the Press Complaints Commission. It’s not about the dilution, he said: it’s the succussion. You have to bang the flask of water briskly ten times on a leather and horsehair surface, and that’s what makes the water remember a molecule. Because I did not mention this, he explained, I had deliberately made homeopaths sound stupid. This is another universe of foolishness.
And for all homeopaths’ talk about the ‘memory of water’, we should remember that what you actually take, in general, is a little sugar pill, not a teaspoon of homeopathically diluted water—so they should start thinking about the memory of sugar, too. The memory of sugar, which is remembering something that was being remembered by water (after a dilution greater than the number of atoms in the universe) but then got passed on to the sugar as it dried. I’m trying to be clear, because I don’t want any more complaints.
Once this sugar which has remembered something the water was remembering gets into your body, it must have some kind of effect. What would that be? Nobody knows, but you need to take the pills regularly, apparently, in a dosing regime which is suspiciously similar to that for medical drugs (which are given at intervals spaced according to how fast they are broken down and excreted by your body).
I demand a fair trial
These theoretical improbabilities are interesting, but they’re not going to win you any arguments: Sir John Forbes, physician to Queen Victoria, pointed out the dilution problem in the nineteenth century, and 150 years later the discussion has not moved on. The real question with homeopathy is very simple: does it work? In fact, how do we know if any given treatment is working?
Symptoms are a very subjective thing, so almost every conceivable way of establishing the benefits of any treatment must start with the individual and his or her experience, building from there. Let’s imagine we’re talking—maybe even arguing—with someone who thinks that homeopathy works, someone who feels it is a positive experience, and who feels they get better, quicker, with homeopathy. They would say: ‘All I know is, I feel as if it works. I get better when I take homeopathy.’ It seems obvious to them, and to an extent it is. This statement’s power, and its flaws, lie in its simplicity. Whatever happens, the statement stands as true.
But you could pop up and say: ‘Well, perhaps that was the placebo effect.’ Because the placebo effect is far more complex and interesting than most people suspect, going way beyond a mere sugar pill: it’s about the whole cultural experience of a treatment, your expectations beforehand, the consultation process you go through while receiving the treatment, and much more.
We know that two sugar pills are a more effective treatment than one sugar pill, for example, and we know that salt-water injections are a more effective treatment for pain than sugar pills, not because salt-water injections have any biological action on the body, but because an injection feels like a more dramatic intervention. We know that the colour of pills, their packaging, how much you pay for them and even the beliefs of the people handing the pills over are all important factors. We know that placebo operations can be effective for knee pain, and even for angina. The placebo effect works on animals and children. It is highly potent, and very sneaky, and you won’t know the half of it until you read the ‘placebo’ chapter in this book.
So when our homeopathy fan says that homeopathic treatment makes them feel better, we might reply: ‘I accept that, but perhaps your improvement is because of the placebo effect,’ and they cannot answer ‘No,’ because they have no possible way of knowing whether they got better through the placebo effect or not. They cannot tell. The most they can do is restate, in response to your query, their original statement: ‘All I know is, I feel as if it works. I get better when I take homeopathy.’
Next, you might say: ‘OK, I accept that, but perhaps, also, you feel you’re getting better because of ‘regression to the mean’.’ This is just one of the many ‘cognitive illusions’ described in this book, the basic flaws in our reasoning apparatus which lead us to see patterns and connections in the world around us, when closer inspection reveals that in fact there are none.
‘Regression to the mean’ is basically another phrase for the phenomenon whereby, as alternative therapists like to say, all things have a natural cycle. Let’s say you have back pain. It comes and goes. You have good days and bad days, good weeks and bad weeks. When it’s at its very worst, it’s going to get better, because that’s the way things are with your back pain.
Similarly, many illnesses have what is called a ‘natural history’: they are bad, and then they get better. As Voltaire said: ‘The art of medicine consists in amusing the patient while nature cures the disease.’ Let’s say you have a cold. It’s going to get better after a few days, but at the moment you feel miserable. It’s quite natural that when your symptoms are at their very worst, you will do things to try to get better. You might take a homeopathic remedy. You might sacrifice a goat and dangle its entrails around your neck. You might bully your GP into giving you antibiotics. (I’ve listed these in order of increasing ridiculousness.)
Then, when you get better—as you surely will from a cold—you will naturally assume that whatever you did when your symptoms were at their worst must be the reason for your recovery. Post hoc ergo propter hoc, and all that. Every time you get a cold from now on, you’ll be back at your GP, hassling her for antibiotics, and she’ll be saying, ‘Look, I don’t think this is a very good idea,’ but you’ll insist, because they worked last time, and community antibiotic resistance will increase, and ultimately old ladies die of MRSA because of this kind of irrationality, but that’s another story.*
≡ General practitioners sometimes prescribe antibiotics to demanding patients in exasperation, even though they are ineffective in treating a viral cold, but much research suggests that this is counterproductive, even as a time-saver. In one study, prescribing antibiotics rather than giving advice on self-management for sore throat resulted in an increased overall workload through repeat attendance. It was calculated that if a GP prescribed antibiotics for sore throat to one hundred fewer patients each year, thirty-three fewer would believe that antibiotics were effective, twenty-five fewer would intend to consult with the problem in the future, and ten fewer would come back within the next year. If you were an alternative therapist, or a drug salesman, you could turn those figures on their head and look at how to drum up more trade, not less.

You can look at regression to the mean more mathematically, if you prefer. On Bruce Forsyth’s Play Your Cards Right, when Brucey puts a 3 on the board, the audience all shout, ‘Higher!’ because they know the odds are that the next card is going to be higher than a 3. ‘Do you want to go higher or lower than a jack? Higher? Higher?’ ‘Lower!’
An even more extreme version of ‘regression to the mean’ is what Americans call the Sports Illustrated jinx. Whenever a sportsman appears on the cover of Sports Illustrated, goes the story, he is soon to fall from grace. But to get on the cover of the magazine you have to be at the absolute top of your game, one of the best sportsmen in the world; and to be the best in that week, you’re probably also having an unusual run of luck. Luck, or ‘noise’, generally passes, it ‘regresses to the mean’ by itself, as happens with throws of a die. If you fail to understand that, you start looking for another cause for that regression, and you find…the Sports Illustrated jinx.
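If you prefer to watch regression to the mean happen rather than argue about it, here is a toy simulation in Python of the back-pain story above (all numbers invented): pain scores fluctuate randomly around a fixed average, you ‘treat’ only on the worst days, and a completely inert remedy ‘works’ almost every time.

```python
# Toy regression-to-the-mean simulation: daily pain scores fluctuate
# randomly around a fixed mean, and the 'remedy' does nothing at all.
import random

random.seed(42)
improved = treated = 0
for _ in range(100_000):
    today = random.gauss(5, 2)        # pain score on a roughly 0-10 scale
    tomorrow = random.gauss(5, 2)     # independent draw: zero treatment effect
    if today > 8:                     # you only reach for a remedy on a bad day
        treated += 1
        improved += tomorrow < today  # 'better the day after treatment'

print(f"'Improved' after an inert remedy: {improved / treated:.0%}")   # ~95%
```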
Homeopaths increase the odds of a perceived success in their treatments even further by talking about ‘aggravations’, explaining that sometimes the correct remedy can make symptoms get worse before they get better, and claiming that this is part of the treatment process. Similarly, people flogging detox will often say that their remedies might make you feel worse at first, as the toxins are extruded from your body: under the terms of these promises, literally anything that happens to you after a treatment is proof of the therapist’s clinical acumen and prescribing skill.
So we could go back to our homeopathy fan, and say: ‘You feel you get better, I accept that. But perhaps it is because of ‘regression to the mean’, or simply the ‘natural history’ of the disease.’ Again, they cannot say ‘No’ (or at least not with any meaning—they might say it in a tantrum), because they have no possible way of knowing whether they were going to get better anyway, on the occasions when they apparently got better after seeing a homeopath. ‘Regression to the mean’ might well be the true explanation for their return to health. They simply cannot tell. They can only restate, again, their original statement: ‘All I know is, I feel as if it works. I get better when I take homeopathy.’
That may be as far as they want to go. But when someone goes further, and says, ‘Homeopathy works,’ or mutters about ‘science’, then that’s a problem. We cannot simply decide such things on the basis of one individual’s experiences, for the reasons described above: they might be mistaking the placebo effect for a real effect, or mistaking a chance finding for a real one. Even if we had one genuine, unambiguous and astonishing case of a person getting better from terminal cancer, we’d still be careful about using that one person’s experience, because sometimes, entirely by chance, miracles really do happen. Sometimes, but not very often.
Over the course of many years, a team of Australian oncologists followed 2,337 terminal cancer patients in palliative care. They died, on average, after five months. But around 1 per cent of them were still alive after five years. In January 2006 this study was reported in the Independent, bafflingly, as:
‘Miracle’ Cures Shown to Work
Doctors have found statistical evidence that alternative treatments such as special diets, herbal potions and faith healing can cure apparently terminal illness, but they remain unsure about the reasons.
But the point of the study was specifically not that there are miracle cures (it didn’t look at any such treatments, that was an invention by the newspaper). Instead, it showed something much more interesting: that amazing things simply happen sometimes: people can survive, despite all the odds, for no apparent reason. As the researchers made clear in their own description, claims for miracle cures should be treated with caution, because ‘miracles’ occur routinely, in 1 per cent of cases by their definition, and without any specific intervention. The lesson of this paper is that we cannot reason from one individual’s experience, or even that of a handful, selected out to make a point.
So how do we move on? The answer is that we take lots of individuals, a sample of patients who represent the people we hope to treat, with all of their individual experiences, and count them all up. This is clinical academic medical research, in a nutshell, and there’s really nothing more to it than that: no mystery, no ‘different paradigm’, no smoke and mirrors. It’s an entirely transparent process, and this one idea has probably saved more lives, on a more spectacular scale, than any other idea you will come across this year.
It is also not a new idea. The first trial appears in the Old Testament, and interestingly, although nutritionism has only recently become what we might call the ‘bollocks du jour’, it was about food. Daniel was arguing with King Nebuchadnezzar’s chief eunuch over the Judaean captives’ rations. Their diet was rich food and wine, but Daniel wanted his own soldiers to be given only vegetables. The eunuch was worried that they would become worse soldiers if they didn’t eat their rich meals, and that whatever could be done to a eunuch to make his life worse might be done to him. Daniel, on the other hand, was willing to compromise, so he suggested the first ever clinical trial:
And Daniel said unto the guard…’Submit us to this test for ten days. Give us only vegetables to eat and water to drink; then compare our looks with those of the young men who have lived on the food assigned by the King and be guided in your treatment of us by what you see.’
The guard listened to what they said and tested them for ten days. At the end of ten days they looked healthier and were better nourished than all the young men who had lived on the food assigned them by the King. So the guard took away the assignment of food and the wine they were to drink and gave them only the vegetables.
Daniel 1:1-16.
To an extent, that’s all there is to it: there’s nothing particularly mysterious about a trial, and if we wanted to see whether homeopathy pills work, we could do a very similar trial. Let’s flesh it out. We would take, say, two hundred people going to a homeopathy clinic, divide them randomly into two groups, and let them go through the whole process of seeing the homeopath, being diagnosed, and getting their prescription for whatever the homeopath wants to give them. But at the last minute, without their knowledge, we would switch half of the patients’ homeopathic sugar pills, giving them dud sugar pills, that have not been magically potentised by homeopathy. Then, at an appropriate time later, we could measure how many in each group got better.
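The analysis at the end is just as unmysterious as the trial itself. Here is a minimal sketch in Python, with invented recovery counts, and a textbook two-proportion z-test standing in for whatever analysis a real trial would pre-specify:

```python
# Compare the proportion who 'got better' in each arm of the trial
# described above. Counts are invented for illustration.
import math

def two_proportion_z(better_a, n_a, better_b, n_b):
    """Classic two-proportion z-test; returns z and a two-sided p-value."""
    p_a, p_b = better_a / n_a, better_b / n_b
    pooled = (better_a + better_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal tail
    return z, p

# 100 patients per arm: homeopathy pills vs dud sugar pills
z, p = two_proportion_z(62, 100, 58, 100)
print(f"z = {z:.2f}, p = {p:.2f}")   # p well above 0.05: no detectable difference
```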
Speaking with homeopaths, I have encountered a great deal of angst about the idea of measuring, as if this was somehow not a transparent process, as if it forces a square peg into a round hole, because ‘measuring’ sounds scientific and mathematical. We should pause for just a moment and think about this clearly. Measuring involves no mystery, and no special devices. We ask people if they feel better, and count up the answers.
In a trial—or sometimes routinely in outpatients’ clinic—we might ask people to measure their knee pain on a scale of one to ten every day, in a diary. Or to count up the number of pain-free days in a week. Or to measure the effect their fatigue has had on their life that week: how many days they’ve been able to get out of the house, how far they’ve been able to walk, how much housework they’ve been able to do. You can ask about any number of very simple, transparent, and often quite subjective things, because the business of medicine is improving lives, and ameliorating distress.
We might dress the process up a bit, to standardise it, and allow our results to be compared more easily with other research (which is a good thing, as it helps us to get a broader understanding of a condition and its treatment). We might use the ‘General Health Questionnaire’, for example, because it’s a standardised ‘tool’; but for all the bluster, the ‘GHQ-12’, as it is known, is just a simple list of questions about your life and your symptoms.
If anti-authoritarian rhetoric is your thing, then bear this in mind: perpetrating a placebo-controlled trial of an accepted treatment—whether it’s an alternative therapy or any form of medicine—is an inherently subversive act. You undermine false certainty, and you deprive doctors, patients and therapists of treatments which previously pleased them.
There is a long history of upset being caused by trials, in medicine as much as anywhere, and all kinds of people will mount all kinds of defences against them. Archie Cochrane, one of the grandfathers of evidence-based medicine, once amusingly described how different groups of surgeons were each earnestly contending that their treatment for cancer was the most effective: it was transparently obvious to them all that their own treatment was the best. Cochrane went so far as to bring a collection of them together in a room, so that they could witness each other’s dogged but conflicting certainty, in his efforts to persuade them of the need for trials. Judges, similarly, can be highly resistant to the notion of trialling different forms of sentence for heroin users, believing that they know best in each individual case. These are recent battles, and they are in no sense unique to the world of homeopathy.
So, we take our group of people coming out of a homeopathy clinic, we switch half their pills for placebo pills, and we measure who gets better. That’s a placebo-controlled trial of homeopathy pills, and this is not a hypothetical discussion: these trials have been done on homeopathy, and it seems that overall, homeopathy does no better than placebo.
And yet you will have heard homeopaths say that there are positive trials in homeopathy; you may even have seen specific ones quoted. What’s going on here? The answer is fascinating, and takes us right to the heart of evidence-based medicine. There are some trials which find homeopathy to perform better than placebo, but only some, and they are, in general, trials with ‘methodological flaws’. This sounds technical, but all it means is that there are problems in the way the trials were performed, and those problems are so great that they mean the trials are less ‘fair tests’ of a treatment.
The alternative therapy literature is certainly riddled with incompetence, but flaws in trials are actually very common throughout medicine. In fact, it would be fair to say that all research has some ‘flaws’, simply because every trial will involve a compromise between what would be ideal, and what is practical or cheap. (The literature from complementary and alternative medicine—CAM—often fails badly at the stage of interpretation: medics sometimes know if they’re quoting duff papers, and describe the flaws, whereas homeopaths tend to be uncritical of anything positive.)
That is why it’s important that research is always published, in full, with its methods and results available for scrutiny. This is a recurring theme in this book, and it’s important, because when people make claims based upon their research, we need to be able to decide for ourselves how big the ‘methodological flaws’ were, and come to our own judgement about whether the results are reliable, whether theirs was a ‘fair test’. The things that stop a trial from being fair are, once you know about them, blindingly obvious.
Blinding
One important feature of a good trial is that neither the experimenters nor the patients know if they got the homeopathy sugar pill or the simple placebo sugar pill, because we want to be sure that any difference we measure is the result of the difference between the pills, and not of people’s expectations or biases. If the researchers knew which of their beloved patients were having the real and which the placebo pills, they might give the game away—or it might change their assessment of the patient—consciously or unconsciously.
Let’s say I’m doing a study on a medical pill designed to reduce high blood pressure. I know which of my patients are having the expensive new blood pressure pill, and which are having the placebo. One of the people on the swanky new blood pressure pills comes in and has a blood pressure reading that is way off the scale, much higher than I would have expected, especially since they’re on this expensive new drug. So I recheck their blood pressure, ‘just to make sure I didn’t make a mistake’. The next result is more normal, so I write that one down, and ignore the high one.
Blood pressure readings are an inexact technique, like ECG interpretation, X-ray interpretation, pain scores, and many other measurements that are routinely used in clinical trials. I go for lunch, entirely unaware that I am calmly and quietly polluting the data, destroying the study, producing inaccurate evidence, and therefore, ultimately, killing people (because our greatest mistake would be to forget that data is used for serious decisions in the very real world, and bad information causes suffering and death).
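It is easy to show how much damage that innocent-looking recheck does. Here is a toy simulation (numbers invented) in which both groups take an inert pill, but the unblinded experimenter rechecks any ‘surprisingly high’ reading in the drug group and writes down the second value:

```python
# Toy model of unblinded measurement bias: the drug is inert, but high
# readings in the drug group get remeasured 'just to make sure'.
import random
import statistics

random.seed(1)

def true_bp():
    return random.gauss(150, 15)      # the same in both groups: inert pill

placebo_group = [true_bp() for _ in range(500)]

drug_group = []
for _ in range(500):
    bp = true_bp()
    if bp > 165:                      # 'that can't be right, let me recheck'
        bp = true_bp()                # keep the second, usually lower, reading
    drug_group.append(bp)

print(f"placebo mean: {statistics.mean(placebo_group):.1f} mmHg")
print(f"drug mean:    {statistics.mean(drug_group):.1f} mmHg")  # a few mmHg 'better'
```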
There are several good examples from recent medical history where a failure to ensure adequate ‘blinding’, as it is called, has resulted in the entire medical profession being mistaken about which was the better treatment. We had no way of knowing whether keyhole surgery was better than open surgery, for example, until a group of surgeons from Sheffield came along and did a very theatrical trial, in which bandages and decorative fake blood squirts were used, to make sure that nobody could tell which type of operation anyone had received.
Some of the biggest figures in evidence-based medicine got together and did a review of blinding in all kinds of trials of medical drugs, and found that trials with inadequate blinding exaggerated the benefits of the treatments being studied by 17 per cent. Blinding is not some obscure piece of nitpicking, idiosyncratic to pedants like me, used to attack alternative therapies.
Closer to home for homeopathy, a review of trials of acupuncture for back pain showed that the studies which were properly blinded showed a tiny benefit for acupuncture, which was not ‘statistically significant’ (we’ll come back to what that means later). Meanwhile, the trials which were not blinded—the ones where the patients knew whether they were in the treatment group or not—showed a massive, statistically significant benefit for acupuncture. (The placebo control for acupuncture, in case you’re wondering, is sham acupuncture, with fake needles, or needles in the ‘wrong’ places, although an amusing complication is that sometimes one school of acupuncturists will claim that another school’s sham needle locations are actually their genuine ones.)


So, as we can see, blinding is important, and not every trial is necessarily any good. You can’t just say, ‘Here’s a trial that shows this treatment works,’ because there are good trials, or ‘fair tests’, and there are bad trials. When doctors and scientists say that a study was methodologically flawed and unreliable, it’s not because they’re being mean, or trying to maintain the ‘hegemony’, or to keep the backhanders coming from the pharmaceutical industry: it’s because the study was poorly performed—it costs nothing to blind properly—and simply wasn’t a fair test.
Randomisation
Let’s take this out of the theoretical, and look at some of the trials which homeopaths quote to support their practice. I’ve got a bog-standard review of trials for homeopathic arnica by Professor Edzard Ernst in front of me, which we can go through for examples. We should be absolutely clear that the inadequacies here are not unique, I do not imply malice, and I am not being mean. What we are doing is simply what medics and academics do when they appraise evidence.
So, Hildebrandt et al. (as they say in academia) looked at forty-two women taking homeopathic arnica for delayed-onset muscle soreness, and found it performed better than placebo. At first glance this seems to be a pretty plausible study, but if you look closer, you can see there was no ‘randomisation’ described. Randomisation is another basic concept in clinical trials. We randomly assign patients to the placebo sugar pill group or the homeopathy sugar pill group, because otherwise there is a risk that the doctor or homeopath—consciously or unconsciously—will put patients who they think might do well into the homeopathy group, and the no-hopers into the placebo group, thus rigging the results.
Randomisation is not a new idea. It was first proposed in the seventeenth century by John Baptista van Helmont, a Belgian radical who challenged the academics of his day to test their treatments like blood-letting and purging (based on ‘theory’) against his own, which he said were based more on clinical experience: ‘Let us take out of the hospitals, out of the Camps, or from elsewhere, two hundred, or five hundred poor People, that have Fevers, Pleurisies, etc. Let us divide them into half, let us cast lots, that one half of them may fall to my share, and the other to yours…We shall see how many funerals both of us shall have.’
It’s rare to find an experimenter so careless that they’ve not randomised the patients at all, even in the world of CAM. But it’s surprisingly common to find trials where the method of randomisation is inadequate: they look plausible at first glance, but on closer examination we can see that the experimenters have simply gone through a kind of theatre, as if they were randomising the patients, but still leaving room for them to influence, consciously or unconsciously, which group each patient goes into.
In some inept trials, in all areas of medicine, patients are ‘randomised’ into the treatment or placebo group by the order in which they are recruited onto the study—the first patient in gets the real treatment, the second gets the placebo, the third the real treatment, the fourth the placebo, and so on. This sounds fair enough, but in fact it’s a glaring hole that opens your trial up to possible systematic bias.
Let’s imagine there is a patient who the homeopath believes to be a no-hoper, a heart-sink patient who’ll never really get better, no matter what treatment he or she gets, and the next place available on the study is for someone going into the ‘homeopathy’ arm of the trial. It’s not inconceivable that the homeopath might just decide—again, consciously or unconsciously—that this particular patient ‘probably wouldn’t really be interested’ in the trial. But if, on the other hand, this no-hoper patient had come into clinic at a time when the next place on the trial was for the placebo group, the recruiting clinician might feel a lot more optimistic about signing them up.
The same goes for all the other inadequate methods of randomisation: by last digit of date of birth, by date seen in clinic, and so on. There are even studies which claim to randomise patients by tossing a coin, but forgive me (and the entire evidence-based medicine community) for worrying that tossing a coin leaves itself just a little bit too open to manipulation. Best of three, and all that. Sorry, I meant best of five. Oh, I didn’t really see that one, it fell on the floor.
There are plenty of genuinely fair methods of randomisation, and although they require a bit of nous, they come at no extra financial cost. The classic is to make people call a special telephone number, to where someone is sitting with a computerised randomisation programme (and the experimenter doesn’t even do that until the patient is fully signed up and committed to the study). This is probably the most popular method amongst meticulous researchers, who are keen to ensure they are doing a ‘fair test’, simply because you’d have to be an out-and-out charlatan to mess it up, and you’d have to work pretty hard at the charlatanry too. We’ll get back to laughing at quacks in a minute, but right now you are learning about one of the most important ideas of modern intellectual history.
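For what it’s worth, the computer at the end of that telephone line is doing something trivially simple; the work is all in the separation of powers. A sketch (details assumed, not taken from any particular trial system):

```python
# Sketch of central computerised allocation: the arm is generated only
# after enrolment, so the recruiting clinician cannot foresee it.
import random

rng = random.Random()   # a real service would also log every allocation

def allocate(patient_id: str) -> str:
    """Call only once the patient has consented and been enrolled."""
    arm = rng.choice(["homeopathy", "placebo"])
    print(f"{patient_id} -> {arm}")
    return arm

for pid in ["patient-001", "patient-002", "patient-003"]:
    allocate(pid)
```

The point is not the few lines of code, but the fact that recruitment is complete before anyone, human or machine, knows the allocation.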
Does randomisation matter? As with blinding, people have studied the effect of randomisation in huge reviews of large numbers of trials, and found that the ones with dodgy methods of randomisation overestimate treatment effects by 41 per cent. In reality, the biggest problem with poor-quality trials is not that they’ve used an inadequate method of randomisation, it’s that they don’t tell you how they randomised the patients at all. This is a classic warning sign, and often means the trial has been performed badly. Again, I do not speak from prejudice: trials with unclear methods of randomisation overstate treatment effects by 30 per cent, almost as much as the trials with openly rubbish methods of randomisation.
In fact, as a general rule it’s always worth worrying when people don’t give you sufficient details about their methods and results. As it happens (I promise I’ll stop this soon), there have been two landmark studies on whether inadequate information in academic articles is associated with dodgy, overly flattering results, and yes, studies which don’t report their methods fully do overstate the benefits of the treatments, by around 25 per cent. Transparency and detail are everything in science. Hildebrandt et al., through no fault of their own, happened to be the peg for this discussion on randomisation (and I am grateful to them for it): they might well have randomised their patients. They might well have done so adequately. But they did not report on it.
Let’s go back to the eight studies in Ernst’s review article on homeopathic arnica—which we chose pretty arbitrarily—because they demonstrate a phenomenon which we see over and over again with CAM studies: most of the trials were hopelessly methodologically flawed, and showed positive results for homeopathy; whereas the couple of decent studies—the most ‘fair tests’—showed homeopathy to perform no better than placebo.*
≡ So, Pinsent performed a double-blind, placebo-controlled study of fifty-nine people having oral surgery: the group receiving homeopathic arnica experienced significantly less pain than the group getting placebo. What you don’t tend to read in the arnica publicity material is that forty-one subjects dropped out of this study. That makes it a fairly rubbish study. It’s been shown that patients who drop out of studies are less likely to have taken their tablets properly, more likely to have had side-effects, less likely to have got better, and so on. I am not sceptical about this study because it offends my prejudices, but because of the high drop-out rate. The missing patients might have been lost to follow-up because they are dead, for example. Ignoring drop-outs tends to exaggerate the benefits of the treatment being tested, and a high drop-out rate is always a warning sign.

The study by Gibson et al. did not mention randomisation, nor did it deign to mention the dose of the homeopathic remedy, or the frequency with which it was given. It’s not easy to take studies very seriously when they are this thin.

There was a study by Campbell which had thirteen subjects in it (which means a tiny handful of patients in both the homeopathy and the placebo groups): it found that homeopathy performed better than placebo (in this teeny-tiny sample of subjects), but didn’t check whether the results were statistically significant, or merely chance findings.

Lastly, Savage et al. did a study with a mere ten patients, finding that homeopathy was better than placebo; but they too did no statistical analysis of their results.

These are the kinds of papers that homeopaths claim as evidence to support their case, evidence which they claim is deceitfully ignored by the medical profession. All of these studies favoured homeopathy. All deserve to be ignored, for the simple reason that each, on account of these methodological flaws, was not a ‘fair test’ of homeopathy.

I could go on, through a hundred homeopathy trials, but it’s painful enough already.

So now you can see, I would hope, that when doctors say a piece of research is ‘unreliable’, that’s not necessarily a stitch-up; when academics deliberately exclude a poorly performed study that flatters homeopathy, or any other kind of paper, from a systematic review of the literature, it’s not through a personal or moral bias: it’s for the simple reason that if a study is no good, if it is not a ‘fair test’ of the treatments, then it might give unreliable results, and so it should be regarded with great caution.
There is a moral and financial issue here too: randomising your patients properly doesn’t cost money. Blinding your patients to whether they had the active treatment or the placebo doesn’t cost money. Overall, doing research robustly and fairly does not necessarily require more money, it simply requires that you think before you start. The only people to blame for the flaws in these studies are the people who performed them. In some cases they will be people who turn their backs on the scientific method as a ‘flawed paradigm’; and yet it seems their great new paradigm is simply ‘unfair tests’.
These patterns are reflected throughout the alternative therapy literature. In general, the studies which are flawed tend to be the ones that favour homeopathy, or any other alternative therapy; and the well-performed studies, where every controllable source of bias and error is excluded, tend to show that the treatments are no better than placebo.
This phenomenon has been carefully studied, and there is an almost linear relationship between the methodological quality of a homeopathy trial and the result it gives. The worse the study—which is to say, the less it is a ‘fair test’—the more likely it is to find that homeopathy is better than placebo. Academics conventionally measure the quality of a study using standardised tools like the ‘Jadad score’, a seven-point tick list that includes things we’ve been talking about, like ‘Did they describe the method of randomisation?’ and ‘Was plenty of numerical information provided?’
This graph, from Ernst’s paper, shows what happens when you plot Jadad score against result in homeopathy trials. Towards the top left, you can see rubbish trials with huge design flaws which triumphantly find that homeopathy is much, much better than placebo. Towards the bottom right, you can see that as the Jadad score tends towards the top mark of 5, as the trials become more of a ‘fair test’, the line tends towards showing that homeopathy performs no better than placebo.


There is, however, a mystery in this graph: an oddity, and the makings of a whodunnit. That little dot on the right-hand edge of the graph, representing the ten best-quality trials, with the highest Jadad scores, stands clearly outside the trend of all the others. This is an anomalous finding: suddenly, only at that end of the graph, there are some good-quality trials bucking the trend and showing that homeopathy is better than placebo.
What’s going on there? I can tell you what I think: some of the papers making up that spot are a stitch-up. I don’t know which ones, how it happened, or who did it, in which of the ten papers, but that’s what I think. Academics often have to couch strong criticism in diplomatic language. Here is Professor Ernst, the man who made that graph, discussing the eyebrow-raising outlier. You might decode his Yes, Minister diplomacy, and conclude that he thinks there’s been a stitch-up too.
There may be several hypotheses to explain this phenomenon. Scientists who insist that homeopathic remedies are in every way identical to placebos might favour the following. The correlation provided by the four data points (Jadad score 1-4) roughly reflects the truth. Extrapolation of this correlation would lead them to expect that those trials with the least room for bias (Jadad score = 5) show homeopathic remedies are pure placebos. The fact, however, that the average result of the 10 trials scoring 5 points on the Jadad score contradicts this notion, is consistent with the hypothesis that some (by no means all) methodologically astute and highly convinced homeopaths have published results that look convincing but are, in fact, not credible.
But this is a curiosity and an aside. In the bigger picture it doesn’t matter, because overall, even including these suspicious studies, the ‘meta-analyses’ still show, overall, that homeopathy is no better than placebo. Meta-analyses?
Meta-analysis
This will be our last big idea for a while, and this is one that has saved the lives of more people than you will ever meet. A meta-analysis is a very simple thing to do, in some respects: you just collect all the results from all the trials on a given subject, bung them into one big spreadsheet, and do the maths on that, instead of relying on your own gestalt intuition about all the results from each of your little trials. It’s particularly useful when there have been lots of trials, each too small to give a conclusive answer, but all looking at the same topic.
So if there are, say, ten randomised, placebo-controlled trials looking at whether asthma symptoms get better with homeopathy, each of which has a paltry forty patients, you could put them all into one meta-analysis and effectively (in some respects) have a four-hundred-person trial to work with.
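In code, the ‘big spreadsheet’ version of this really is short. Here is a minimal fixed-effect (inverse-variance) meta-analysis in Python, with invented trial numbers: each trial contributes an effect estimate and its standard error, and the more precise trials get more weight in the pooled answer.

```python
# Minimal fixed-effect meta-analysis of ten small hypothetical trials,
# e.g. change in an asthma symptom score versus placebo.
import math

trials = [(0.12, 0.30), (0.05, 0.28), (-0.10, 0.33), (0.20, 0.31),
          (0.02, 0.29), (-0.04, 0.32), (0.15, 0.30), (0.08, 0.27),
          (-0.02, 0.31), (0.10, 0.28)]          # (effect, standard error)

weights = [1 / se**2 for _, se in trials]       # precise trials count for more
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f}, 95% CI "
      f"{pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f}")
```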
In some very famous cases—at least, famous in the world of academic medicine—meta-analyses have shown that a treatment previously believed to be ineffective is in fact rather good, but because the trials that had been done were each too small, individually, to detect the real benefit, nobody had been able to spot it.
As I said, information alone can be life-saving, and one of the greatest institutional innovations of the past thirty years is undoubtedly the Cochrane Collaboration, an international not-for-profit organisation of academics, which produces systematic summaries of the research literature on healthcare, including meta-analyses.
The logo of the Cochrane Collaboration features a simplified ‘blobbogram’, a graph of the results from a landmark meta-analysis which looked at an intervention given to pregnant mothers. When people give birth prematurely, as you might expect, the babies are more likely to suffer and die. Some doctors in New Zealand had the idea that giving a short, cheap course of a steroid might help improve outcomes, and seven trials testing this idea were done between 1972 and 1981. Two of them showed some benefit from the steroids, but the remaining five failed to detect any benefit, and because of this, the idea didn’t catch on.

Eight years later, in 1989, a meta-analysis was done by pooling all this trial data. If you look at the blobbogram in the logo on the previous page, you can see what happened. Each horizontal line represents a single study: if the line is over to the left, it means the steroids were better than placebo, and if it is over to the right, it means the steroids were worse. If the horizontal line for a trial touches the big vertical ‘nil effect’ line going down the middle, then the trial showed no clear difference either way. One last thing: the longer a horizontal line is, the less certain the outcome of the study was.
Looking at the blobbogram, we can see that there are lots of not-very-certain studies, long horizontal lines, mostly touching the central vertical line of ‘no effect’; but they’re all a bit over to the left, so they all seem to suggest that steroids might be beneficial, even if each study itself is not statistically significant.
The diamond at the bottom shows the pooled answer: that there is, in fact, very strong evidence indeed for steroids reducing the risk—by 30 to 50 per cent—of babies dying from the complications of immaturity. We should always remember the human cost of these abstract numbers: babies died unnecessarily because they were deprived of this life-saving treatment for a decade. They died, even when there was enough information available to know what would save them, because that information had not been synthesised together, and analysed systematically, in a meta-analysis.
Back to homeopathy (you can see why I find it trivial now). A landmark meta-analysis was published recently in the Lancet. It was accompanied by an editorial titled: ‘The End of Homeopathy?’ Shang et al. did a very thorough meta-analysis of a vast number of homeopathy trials, and they found, overall, adding them all up, that homeopathy performs no better than placebo.
The homeopaths were up in arms. If you mention this meta-analysis, they will try to tell you that it was a stitch-up. What Shang et al. did, essentially, like all the previous negative meta-analyses of homeopathy, was to exclude the poorer-quality trials from their analysis.
Homeopaths like to pick out the trials that give them the answer that they want to hear, and ignore the rest, a practice called ‘cherry-picking’. But you can also cherry-pick your favourite meta-analyses, or misrepresent them. Shang et al. was only the latest in a long string of meta-analyses to show that homeopathy performs no better than placebo. What is truly amazing to me is that despite the negative results of these meta-analyses, homeopaths have continued—right to the top of the profession—to claim that these same meta-analyses support the use of homeopathy. They do this by quoting only the result for all trials included in each meta-analysis. This figure includes all of the poorer-quality trials. The most reliable figure, you now know, is for the restricted pool of the most ‘fair tests’, and when you look at those, homeopathy performs no better than placebo. If this fascinates you (and I would be very surprised), then I am currently producing a summary with some colleagues, and you will soon be able to find it online at badscience.net, in all its glorious detail, explaining the results of the various meta-analyses performed on homeopathy.
Clinicians, pundits and researchers all like to say things like ‘There is a need for more research,’ because it sounds forward-thinking and open-minded. In fact that’s not always the case, and it’s a little-known fact that this very phrase has been effectively banned from the British Medical Journal for many years, on the grounds that it adds nothing: you may say what research is missing, on whom, how, measuring what, and why you want to do it, but the hand-waving, superficially open-minded call for ‘more research’ is meaningless and unhelpful.
There have been over a hundred randomised placebo-controlled trials of homeopathy, and the time has come to stop. Homeopathy pills work no better than placebo pills, we know that much. But there is room for more interesting research.
People do experience that homeopathy is positive for them, but the action is likely to be in the whole process of going to see a homeopath, of being listened to, having some kind of explanation for your symptoms, and all the other collateral benefits of old-fashioned, paternalistic, reassuring medicine. (Oh, and regression to the mean.)
So we should measure that; and here is the final superb lesson in evidence-based medicine that homeopathy can teach us: sometimes you need to be imaginative about what kinds of research you do, compromise, and be driven by the questions that need answering, rather than the tools available to you.
It is very common for researchers to research the things which interest them, in all areas of medicine; but they can be interested in quite different things from patients. One study actually thought to ask people with osteoarthritis of the knee what kind of research they wanted to be carried out, and the responses were fascinating: they wanted rigorous real-world evaluations of the benefits from physiotherapy and surgery, from educational and coping strategy interventions, and other pragmatic things. They didn’t want yet another trial comparing one pill with another, or with placebo.
In the case of homeopathy, similarly, homeopaths want to believe that the power is in the pill, rather than in the whole process of going to visit a homeopath, having a chat and so on. It is crucially important to their professional identity. But I believe that going to see a homeopath is probably a helpful intervention, in some cases, for some people, even if the pills are just placebos. I think patients would agree, and I think it would be an interesting thing to measure. It would be easy, and you would do something called a pragmatic ‘waiting-list-controlled trial’.
You take two hundred patients, say, all suitable for homeopathic treatment, currently in a GP clinic, and all willing to be referred on for homeopathy, then you split them randomly into two groups of one hundred. One group gets treated by a homeopath as normal, pills, consultation, smoke and voodoo, on top of whatever other treatment they are having, just like in the real world. The other group just sits on the waiting list. They get treatment as usual, whether that is ‘neglect’, ‘GP treatment’ or whatever, but no homeopathy. Then you measure outcomes, and compare who gets better the most.
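For the sake of concreteness, here is a minimal sketch of that design in Python. The outcome measure and the effect sizes are hypothetical placeholders (a real trial would pre-specify a validated symptom score and a proper statistical test); the point is simply how little machinery randomisation needs.

import random
import statistics

random.seed(42)  # reproducible allocation

patients = list(range(200))        # 200 consenting, eligible patients
random.shuffle(patients)           # random allocation: the crucial step
homeopathy_group = patients[:100]  # homeopath visits on top of usual care
waiting_list = patients[100:]      # usual care only, homeopathy deferred

def outcome(in_homeopathy_group):
    """Simulated change in a symptom score after six months (hypothetical)."""
    base = random.gauss(5.0, 2.0)  # improvement under usual care alone
    bonus = random.gauss(1.5, 1.0) if in_homeopathy_group else 0.0
    return base + bonus

treated = [outcome(True) for _ in homeopathy_group]
controls = [outcome(False) for _ in waiting_list]

print(f"mean improvement, homeopathy arm: {statistics.mean(treated):.2f}")
print(f"mean improvement, waiting list:   {statistics.mean(controls):.2f}")
# In a real trial you would compare the arms with a pre-specified
# significance test, not just eyeball the two means.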
You could argue that it would be a trivial positive finding, and that it’s obvious the homeopathy group would do better; but it’s the only piece of research really waiting to be done. This is a ‘pragmatic trial’. The groups aren’t blinded, but they couldn’t possibly be in this kind of trial, and sometimes we have to accept compromises in experimental methodology. It would be a legitimate use of public money (or perhaps money from Boiron, the homeopathic pill company valued at $500 million), but there’s nothing to stop homeopaths from just cracking on and doing it for themselves: because despite the homeopaths’ fantasies, born out of a lack of knowledge, that research is difficult, magical and expensive, in fact such a trial would be very cheap to conduct.
In fact, it’s not really money that’s missing from the alternative therapy research community, especially in Britain: it’s knowledge of evidence-based medicine, and expertise in how to do a trial. Their literature and debates drip with ignorance, and vitriolic anger at anyone who dares to appraise the trials. Their university courses, as far as they ever even dare to admit what they teach on them (it’s all suspiciously hidden away), seem to skirt around such explosive and threatening questions. I’ve suggested in various places, including at academic conferences, that the single thing that would most improve the quality of evidence in CAM would be funding for a simple, evidence-based medicine hotline, which anyone thinking about running a trial in their clinic could phone up and get advice on how to do it properly, to avoid wasting effort on an ‘unfair test’ that will rightly be regarded with contempt by all outsiders.
In my pipe dream (I’m completely serious, if you’ve got the money) you’d need a handout, maybe a short course for people to cover the basics, so they weren’t asking stupid questions, and phone support. In the meantime, if you’re a sensible homeopath and you want to do a waiting-list-controlled trial, you could maybe try the badscience website forums, where there are people who might be able to give some pointers (among the childish fighters and trolls…).
But would the homeopaths buy it? I think it would offend their sense of professionalism. You often see homeopaths trying to nuance their way through this tricky area, and they can’t quite make their minds up. Here, for example, is a Radio 4 interview, archived in full online, where Dr Elizabeth Thompson (consultant homeopathic physician, and honorary senior lecturer at the Department of Palliative Medicine at the University of Bristol) has a go.
She starts off with some sensible stuff: homeopathy does work, but through non-specific effects, the cultural meaning of the process, the therapeutic relationship, it’s not about the pills, and so on. She practically comes out and says that homeopathy is all about cultural meaning and the placebo effect. ‘People have wanted to say homeopathy is like a pharmaceutical compound,’ she says, ‘and it isn’t, it is a complex intervention.’
Then the interviewer asks: ‘What would you say to people who go along to their high street pharmacy, where you can buy homeopathic remedies, they have hay fever and they pick out a hay-fever remedy, I mean presumably that’s not the way it works?’ There is a moment of tension. Forgive me, Dr Thompson, but I felt you didn’t want to say that the pills work, as pills, in isolation, when you buy them in a shop: apart from anything else, you’d already said that they don’t.
But she doesn’t want to break ranks and say the pills don’t work, either. I’m holding my breath. How will she do it? Is there a linguistic structure complex enough, passive enough, to negotiate through this? If there is, Dr Thompson doesn’t find it: ‘They might flick through and they might just be spot-on…[but] you’ve got to be very lucky to walk in and just get the right remedy.’ So the power is, and is not, in the pill: ‘P, and not-P’, as philosophers of logic would say.
If they can’t finesse it with the ‘power is not in the pill’ paradox, how else do the homeopaths get around all this negative data? Dr Thompson—from what I have seen—is a fairly clear-thinking and civilised homeopath. She is, in many respects, alone. Homeopaths have been careful to keep themselves outside of the civilising environment of the university, where the influence and questioning of colleagues can help to refine ideas, and weed out the bad ones. In their rare forays into the universities, they enter secretively, walling themselves and their ideas off from criticism or review, refusing to share even what is in their exam papers with outsiders.
It is rare to find a homeopath engaging on the issue of the evidence, but what happens when they do? I can tell you. They get angry, they threaten to sue, they scream and shout at you at meetings, they complain spuriously and with ludicrous misrepresentations—time-consuming to expose, of course, but that’s the point of harassment—to the Press Complaints Commission and your editor, they send hate mail, and accuse you repeatedly of somehow being in the pocket of big pharma (falsely, although you start to wonder why you bother having principles when faced with this kind of behaviour). They bully, they smear, to the absolute top of the profession, and they do anything they can in a desperate bid to shut you up, and avoid having a discussion about the evidence. They have even been known to threaten violence (I won’t go into it here, but I take these issues extremely seriously).
I’m not saying I don’t enjoy a bit of banter. I’m just pointing out that you don’t get anything quite like this in most other fields, and homeopaths, among all the people in this book, with the exception of the odd nutritionist, seem to me to be a uniquely angry breed. Experiment for yourself by chatting with them about evidence, and let me know what you find.
By now your head is hurting, because of all those mischievous, confusing homeopaths and their weird, labyrinthine defences: you need a lovely science massage. Why is evidence so complicated? Why do we need all of these clever tricks, these special research paradigms? The answer is simple: the world is much more complicated than simple stories about pills making people get better. We are human, we are irrational, we have foibles, and the power of the mind over the body is greater than anything you have previously imagined.