
TWO



* * *



GREAT BIG THINKING





Great Big Things have a seductive allure, to which Roundworld’s scientists are by no means immune. Most science requires relatively modest equipment, some is inherently expensive, and some would finance a small nation. Governments worldwide are addicted to big science, and often find it easier to authorise a ten-billion-dollar project than one costing ten thousand – much as a committee will agree to a new building in five minutes, but then spend an hour debating the cost of biscuits. We all know why: it takes an expert to evaluate the design and price of a building, but everyone understands biscuits. The funding of big science is sometimes depressingly similar. Moreover, for administrators and politicians seeking to enhance their careers, big science is more prestigious than small science, because it involves more money.

However, there can also be a more admirable motive for huge scientific projects: big problems sometimes require big answers. Putting together a faster-than-light drive on the kitchen table using old baked bean cans may work in a science fiction story, but it’s seldom a realistic way to proceed. Sometimes you get what you pay for.

Big science can be traced back to the Manhattan project in World War II, which developed the atomic bomb. This was an extraordinarily complex task, involving tens of thousands of people with a variety of skills. It stretched the boundaries of science, engineering and, above all, organisation and logistics. We don’t want to suggest that finding really effective ways to blow people to smithereens is necessarily a sensible criterion for success, but the Manhattan project convinced a lot of people that big science can have a huge impact on the entire planet. Governments have promoted big science ever since; the Apollo Moon landings and the human genome project are familiar examples.

Some areas of science are unable to function at all without Great Big Things. Perhaps the most prominent is particle physics, which has given the world a series of gigantic machines, called particle accelerators, which probe the small-scale structure of matter. The most powerful of these are colliders, which smash beams of subatomic particles into each other in head-on collisions, rather than into stationary targets, to see what gets spat out. As particle physics progresses, the new particles that theorists are predicting become more exotic and harder to detect. It takes a more energetic collision to spit them out, and more mathematical detective work and more powerful computers to compile evidence that they were, for an almost infinitesimal moment of time, actually present. So each new accelerator has to be bigger, hence more expensive, than its predecessors.

The latest and greatest is the Large Hadron Collider (LHC). ‘Collider’ we know about, ‘hadron’ is the name of a class of subatomic particles, and ‘large’ is fully justified. The LHC is housed in two circular tunnels, deep underground, straddling the border between France and Switzerland. The main tunnel is about eight kilometres across – twenty-seven kilometres in circumference – and the other one is smaller. The tunnels contain two tubes, along which the particles of interest – protons, and sometimes heavier ions such as lead nuclei – are propelled at speeds close to that of light by 1,624 magnets. The magnets have to be kept at a temperature close to absolute zero, which requires 96 tonnes of liquid helium; they are absolutely enormous, and most weigh over 27 tonnes.

The tubes cross at four locations, where the particles can be smashed into each other. This is the time-honoured way for physicists to probe the structure of matter, because the collisions generate a swarm of other particles – some of them fragments of the originals, others created afresh from the energy of the collision. Six enormously complex detectors, built around these crossing points, collect data on this swarm, and powerful computers analyse the data to work out what’s going on.

The LHC cost €7.5 billion – about £6 billion or $9 billion – to build. Not surprisingly, it is a multinational project, so big politics gets in on the act as well.

Ponder Stibbons has two reasons for wanting a Great Big Thing. One is the spirit of intellectual enquiry, the mental fuel on which the High Energy Magic building runs. The bright young wizards who inhabit that building want to discover the fundamental basis of magic, a quest that has led them to such esoteric theories as quantum thaumodynamics and the third slood derivative, as well as the fateful experiment in splitting the thaum that inadvertently brought Roundworld into existence in the first place. The second reason opened the previous chapter: every university that wants to be considered a university has to have its very own Great Big Thing.

It is much the same in Roundworld – and not only for universities.

Particle physics began with small equipment and a big idea. The word ‘atom’ means ‘indivisible’, a choice of terminology that was a hostage to fortune from the day it was minted. Once physicists had swallowed the proposition that atoms exist, which they did just over a century ago, a few began to wonder if it might be a mistake to take the name literally. In 1897 Joseph John Thomson showed that they had a point when he demonstrated that cathode rays consist of tiny particles, far smaller than atoms, ripped from them. These particles were named electrons.

You can hang around waiting for atoms to emit new particles, you can encourage them to do so, or you can make them an offer they can’t refuse by bashing them into things to see what breaks off and where it goes. In 1932 John Cockcroft and Ernest Walton built a small particle accelerator and memorably ‘split the atom’. It soon emerged that atoms are made from three types of particle: electrons, protons and neutrons. These particles are extremely small, and even the most powerful microscopes yet invented cannot make them visible – though atoms can now be ‘seen’ using very sensitive microscopes that exploit quantum effects.

All of the elements – hydrogen, helium, carbon, sulphur and so on – are made from these three particles. Their chemical properties differ because their atoms contain different numbers of particles. There are some basic rules. In particular, the particles have electrical charges: negative for the electron, positive for the proton, and zero for the neutron. So the number of protons should be the same as the number of electrons, to make the total charge zero. A hydrogen atom is the simplest possible, with one electron and one proton; helium has two electrons, two protons and two neutrons.

The main chemical properties of an atom depend on the number of electrons, so you can throw in different numbers of neutrons without changing the chemistry dramatically. However, it does change a bit. This explains the existence of isotopes: variants of a given element with subtly different chemistry. An atom of the commonest form of carbon, for instance, has six electrons, six protons and six neutrons. There are other isotopes, which have between two and sixteen neutrons. Carbon-14, used by archaeologists to date ancient organic materials, has eight neutrons. An atom of the commonest form of sulphur has sixteen electrons, sixteen protons and sixteen neutrons; 25 isotopes are known.

Electrons are especially important for the atom’s chemical properties because they are on the outside, where they can make contact with other atoms to form molecules. The protons and neutrons are clustered closely together at the centre of the atom, forming its nucleus. In an early theory, electrons were thought to orbit the nucleus like planets going round the Sun. Then this image was replaced by one in which an electron is a fuzzy probability cloud, which tells us not where the particle is, but where it is likely to be found if you try to observe it. Today, even that image is seen as an oversimplification of some pretty advanced mathematics in which the electron is nowhere and everywhere at the same time.

Those three particles – electrons, protons and neutrons – unified the whole of physics and chemistry. They explained the entire list of chemical elements from hydrogen up to californium, the most complex naturally occurring element, and indeed various short-lived man-made elements of even greater complexity. To get matter in all its glorious variety, all you needed was a short list of particles, which were ‘fundamental’ in the sense that they couldn’t be split into even smaller particles. It was simple and straightforward.

Of course, it didn’t stay simple. First, quantum mechanics had to be introduced to explain a vast range of experimental observations about matter on its smallest scales. Then several other equally fundamental particles turned up, such as the photon – a particle of light – and the neutrino – an electrically neutral particle that interacts so rarely with everything else that it would be able to pass through thousands of miles of solid lead without difficulty. Every night, countless neutrinos generated by nuclear reactions in the Sun pass right through the solid Earth, and through you, and hardly any of them have any effect on anything.

Neutrinos and photons were only the beginning. Within a few years there were more fundamental particles than chemical elements, which was a bit worrying because the explanation was becoming more complicated than the things it was trying to explain. But eventually physicists worked out that some particles are more fundamental than others. A proton, for example, is made from three smaller particles called quarks. The same goes for the neutron, but the combination is different. Electrons, neutrinos and photons, however, remain fundamental; as far as we know, they’re not made out of anything simpler.fn1

One of the main reasons for constructing the LHC was to investigate the final missing ingredient of the standard model, which despite its modest name seems to explain almost everything in particle physics. This model maintains, with strong supporting evidence, that all particles are made from sixteen truly fundamental ones. Six are called quarks, and they come in pairs with quirky names: up/down, charmed/strange, and top/bottom. A neutron is one up quark plus two down quarks; a proton is one down quark plus two up quarks.

Next come six so-called leptons, also in pairs: the electron, muon, and tauon (usually just called tau) and their associated neutrinos. The original neutrino is now called the electron neutrino, and it is paired with the electron. These twelve particles – quarks and leptons – are collectively called fermions, after the great Italian-born American physicist Enrico Fermi.

The remaining four particles are associated with forces, so they hold everything else together. Physicists recognise four basic forces of nature: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Gravity plays no role in the standard model because it hasn’t yet been fitted into a quantum-mechanical picture. The other three forces are associated with specific particles known as bosons in honour of the Indian physicist Satyendra Nath Bose. The distinction between fermions and bosons is important: they have different statistical properties.

The four bosons ‘mediate’ the forces, much as two tennis players are held together by their mutual attention to the ball. The electromagnetic force is mediated by the photon, the weak nuclear force is mediated by the Z-boson and the W-boson, and the strong nuclear force is mediated by the gluon. So that’s the standard model: twelve fermions (six quarks, six leptons) held together by four bosons.

Sixteen fundamental particles.

Oh, and the Higgs boson – seventeen fundamental particles.

Assuming, of course, that the fabled Higgs (as it is colloquially called) actually existed. Which, until 2012, was moot.
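
For readers who like to see the bookkeeping laid out explicitly, here is a minimal sketch in Python that simply tallies the roster described above; the particle names follow the text, and the grouping into pairs is just a convenient way to arrange them.

# A minimal tally of the standard-model roster described in the text.
quarks = [("up", "down"), ("charmed", "strange"), ("top", "bottom")]
leptons = [("electron", "electron neutrino"),
           ("muon", "muon neutrino"),
           ("tau", "tau neutrino")]
force_carriers = ["photon", "W boson", "Z boson", "gluon"]  # the four bosons
higgs = ["Higgs boson"]

# Flatten the quark and lepton pairs into one list of fermions.
fermions = [p for pair in quarks + leptons for p in pair]

print(len(fermions), "fermions")                                          # 12
print(len(force_carriers), "force-carrying bosons")                       # 4
print(len(fermions) + len(force_carriers), "particles without the Higgs") # 16
print(len(fermions) + len(force_carriers) + len(higgs), "with the Higgs") # 17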

Despite its successes, the standard model fails to explain why most particles have masses (for one particular technical meaning of ‘mass’). The Higgs came to prominence in the 1960s, when several physicists realised that a boson with unusual features might solve one important aspect of this riddle. Among them was Peter Higgs, who worked out some of the hypothetical particle’s properties and predicted that it should exist. The Higgs boson is the particle associated with a Higgs field, which can be pictured as a sea of Higgs bosons. The main unusual feature is that the strength of the Higgs field is not zero, even in empty space. When a particle moves through this all-pervasive Higgs field it interacts with it, and the effect can be interpreted as mass. One analogy is moving a spoon through treacle, but that misrepresents mass as resistance, and Higgs is critical of that way of describing his theory. Another analogy views the Higgs as a celebrity at a party, who attracts a cluster of admirers.

The existence (or not) of the Higgs boson was the main reason, though by no means the only one, for spending billions of euros on the LHC. And in July 2012 it duly delivered, with the announcement by two independent experimental teams of the discovery of a previously unknown particle. It was a boson with a mass of about 126 GeV (billion electronvolts, a standard unit used in particle physics), and the observations were consistent with the Higgs in the sense that those features that could be measured were what Higgs had predicted.

This discovery of the long-sought Higgs, if it holds up, completes the standard model. It could not have been made without big science, and it represents a major triumph for the LHC. However, the main impact to date has been in theoretical physics. The existence of the Higgs does not greatly affect the rest of science, which already assumes that particles have mass. So it could be argued that the same amount of money, spent on less spectacular projects, would almost certainly have produced results with more practical utility. However, it is in the nature of Great Big Things that if the money isn’t spent on them, it isn’t spent on smaller scientific projects either. Small projects don’t advance bureaucratic or political careers as effectively as big ones.

The discovery of the Higgs exemplifies some basic issues about how scientists view the world, and about the nature of scientific knowledge. The actual evidence for the Higgs is a tiny bump on a statistical graph. In what sense can we be confident that the bump actually represents a new particle? The answer is extremely technical. It is impossible to observe a Higgs boson directly, because it splits spontaneously and very rapidly into a swarm of other particles. These collide with yet other particles, creating a huge mess. It takes very clever mathematics, and very fast computers, to tease out of this mess the characteristic signature of a Higgs boson. In order to be sure that what you’ve seen isn’t just coincidence, you need to observe a large number of these Higgs-like events. Since they are very rare, you need to run the experiments many times and perform some sophisticated statistical analysis. Only when the chance of that bump being coincidence falls below one in a million do physicists allow themselves to express confidence that the Higgs is real.
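
The conventional threshold in particle physics is a ‘five sigma’ result: the probability that random fluctuations alone would fake a signal at least this strong must fall below roughly one chance in three and a half million. A rough sketch of that calculation, treating the background as a simple bell curve (the real analysis, with its detailed background modelling, is far more involved):

import math

def one_sided_tail(sigma):
    """Probability that a standard normal variable exceeds `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for s in (3, 4, 5):
    p = one_sided_tail(s)
    print(f"{s} sigma: p = {p:.2e}  (about 1 in {1/p:,.0f})")

# 5 sigma gives p of roughly 2.9e-7, about one chance in 3.5 million that
# noise alone produces a bump this big -- the threshold at which physicists
# allow themselves to claim a discovery.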

We say ‘the’ Higgs, but there are alternative theories with more than one Higgs-like particle – eighteen fundamental particles. Or nineteen, or twenty. But now we know there is at least one, when before it might have been none.

Understanding all this requires considerable expertise in esoteric areas of theoretical physics and mathematics. Even understanding the aspect of ‘mass’ involved, and which particles it applies to, is complicated. Performing the experiment successfully requires a range of engineering skills, in addition to a deep background in experimental physics. Even the word ‘particle’ has a technical meaning, nothing like the comfortable image of a tiny ball bearing. So in what sense can scientists claim to ‘know’ how the universe behaves, on such a small scale that no human can perceive it directly? It’s not like looking through a telescope and seeing that Jupiter has four smaller bodies going round it, as Galileo did; or like looking down a microscope and realising that living things are made from tiny cells, as Robert Hooke did. The evidence for the Higgs, like that for most basic aspects of science, is not exactly in your face.

To come to grips with these questions, we take a look at the nature of scientific knowledge, using more familiar examples than the Higgs. Then we distinguish two fundamentally different ways to think about the world, which will form a running theme throughout the book.

Science is often thought to be a collection of ‘facts’, which make unequivocal statements about the world. The Earth goes round the Sun. Prisms separate light into its component colours. If it quacks and waddles, it’s a duck. Learn the facts, master the technical jargon (here being: orbit, spectrum, Anatidae), tick the boxes, and you understand science. Government administrators in charge of education often take this view, because they can count the ticks (Ixodidae – no, scratch that).

Oddly, the people who disagree most strongly are scientists. They know that science is nothing of the kind. There are no hard-and-fast facts. Every scientific statement is provisional. Politicians hate this. How can anyone trust scientists? If new evidence comes along, they change their minds.

Of course, some parts of science are less provisional than others. No scientist expects the accepted description of the shape of the Earth to change overnight from round to flat. But they have already seen it change from a plane to a sphere, from a sphere to a spheroid flattened at the poles, and from a perfect spheroid to a bumpy one. A recent press release announced that the Earth is shaped like a lumpy potato.fn2 On the other hand, no one would be surprised if new measurements revealed that the Earth’s seventeenth spherical harmonic – one component of the mathematical description of its shape – needed to be increased by two per cent. Most changes in science are gradual and progressive, and they don’t affect the big picture.

Sometimes, however, the scientific worldview changes radically. Four elements became 98 (now 118 as we’ve learned how to make new ones). Newton’s gravity, a force acting mysteriously at a distance, morphed into Einstein’s curved spacetime. Fundamental particles such as the electron changed from tiny hard spheres to probability waves, and are now considered to be localised excitations in a quantum field. The field is a sea of particles and the particles are isolated waves in that sea. The Higgs field is an example: here the corresponding particles are Higgs bosons. You can’t have one without the other: if you want to be a particle physicist, you have to understand the physics of quantum fields as well. So the word ‘particle’ necessarily acquires a different meaning.

Scientific revolutions don’t change the universe. They change how humans interpret it. Many scientific controversies are mainly about interpretations, not ‘the facts’. For example, many creationists don’t dispute the results of DNA sequencing;fn3 instead, they dispute the interpretation of those results as evidence for evolution.

Humans are hot on interpretation. It lets them wriggle out of awkward positions. In 2012, in a televised debate about sexism in religion and the vexed issue of female bishops in the Church of England, some months before the General Synod voted against the proposal, one participant quoted 1 Timothy 2:12-14: ‘But I suffer not a woman to teach, nor to usurp authority over the man, but to be in silence. For Adam was first formed, then Eve. And Adam was not deceived, but the woman being deceived was in the transgression.’ It seems hard to interpret this as anything other than a statement that women are inferior to men, that they should be subservient and shut up, and that moreover, original sin is entirely the fault of women, not men, because Eve fell for the serpent’s temptation. Despite this apparently unequivocal reading, another participant stoutly maintained that the verses meant nothing of the kind. It was just a matter of interpretation.

Interpretations matter, because ‘the facts’ seldom explain how the universe relates to us. ‘The facts’ tell us that the Sun’s heat comes from nuclear reactions, mainly hydrogen fusing to helium. But we want more. We want to know why. Did the Sun come into existence in order to provide us with heat? Or is it the other way round: are we on this planet because the Sun’s heat provided an environment in which creatures like us could evolve? The facts are the same either way, but their implications depend on how we interpret them.

Our default interpretation is to view the world in human terms. This is no great surprise. If a cat has a point of view, it surely views the world in feline terms. But humanity’s natural mode of operation has had a profound effect on how we think about our world, and on what kinds of explanation we find convincing. It also has a profound effect on what world we think about. Our brains perceive the world on a human scale, and interpret those perceptions in terms of what is – or sometimes was – important to us.

Our focus on the human scale may seem entirely reasonable. How else would we view our world? But rhetorical questions deserve rhetorical answers, and for us, unlike the rest of the animal kingdom, there are alternatives. The human brain can consciously modify its own thought-patterns. We can teach ourselves to think on other scales, both smaller and larger. We can train ourselves to avoid psychological traps, such as believing what we want to because we want to. We can think in even more alien ways: mathematicians routinely contemplate spaces with more than three dimensions, shapes so complicated that they have no meaningful volume, surfaces with only one side, and different sizes of infinity.

Humans can think inhuman thoughts.

That kind of thinking is said to be analytic. It may not come naturally, and its outcomes may not always be terribly comforting, but it’s possible. It has been the main path to today’s world, in which analytic thinking has become increasingly necessary for our survival. If you spend your time comfortably telling yourself that the world is what you want it to be, you will get some nasty surprises, and it may be too late to do anything about them. Unfortunately, the need to think analytically places a huge barrier between science and many human desires and beliefs that re-emerge in every generation. Battles scientists fondly imagined were won in the nineteenth century must continually be re-fought; rationality and evidence alone may not be enough to prevail.

There is a reason for our natural thought-patterns. They evolved, along with us, because they had survival value. A million years ago, human ancestors roamed the African savannahs, and their lives depended – day in, day out – on finding enough food to keep them alive, and avoiding becoming food themselves. The most important things in their lives were their fellow human beings, the animals and plants that they ate, and the animals that wanted to eat them.

Their world also included many things that were not alive: rocks; rivers, lakes and seas; the weather; fires (perhaps started by lightning); the Sun, Moon and stars. But even these often seemed to share some of the features of life. Many of them moved; some changed without any apparent pattern, as if acting on their own impulses; and many could kill. So it is not surprising that as human culture developed, we came to view our world as the outcome of conscious actions by living entities. The Sun, Moon and stars were gods, visible evidence for the existence of supernatural beings that lived in the heavens. A rumble of thunder, a flash of lightning – these were signs of the gods’ displeasure. The evidence was all around us on a daily basis, which put it beyond dispute.

In particular, animals and plants were central to the lives of early humans. You only have to browse through a book of Egyptian hieroglyphs to notice just how many of them are animals, birds, fish, plants … or bits of animals, birds, fish and plants. Egyptian gods were depicted with the heads of animals; in one extreme case, the god Khepri, the head was an entire dung beetle, neatly placed on top of an otherwise headless human body. Khepri was one aspect of the Sun-god, and the dung beetle (or scarab) got in on the act because dung beetles roll balls of dung around and dig them into the ground. Therefore the Sun, a giant ball, is pushed around by a giant dung beetle; as proof, the Sun also disappears into the ground (the underworld) every evening at sunset.

The physicist and science fiction author Gregory Benford has written many essays with a common theme: broadly speaking, human styles of thought tend to fall into two categories.fn4 One is to see humanity as the context for the universe; the other is to see the universe as the context for humanity. The same person can think both ways of course, but most of us tend to default to one of them. Most ways to separate people into two kinds are nonsense: as the old joke goes, there are two kinds of people: those who think there are two kinds of people, and those who don’t. But Benford’s distinction is an illuminating one, and it holds more than a grain of truth.

We can paraphrase it like this. Many people see the surrounding world – the universe – as a resource for humans to exploit; they also see it as a reflection of themselves. What matters most, in this view, is always human-centred. ‘What can this do for me?’ (or ‘for us?’) is the main, and often the only, question worth asking. From such a viewpoint, to understand something is to express it in terms of human agency. What matters is its purpose, and that is whatever we use it for. In this worldview, rain exists in order to make crops grow and to provide fresh water for us to drink. The Sun is there because it warms our bodies. The universe was designed with us in mind, constructed so that we could live in it, and it would have no meaning if we were not present.

It is a short and natural step to see human beings as the pinnacle of creation, rulers of the planet, masters of the universe. Moreover, you can do all of that without any conscious recognition of how narrowly human-centred your worldview is, and maintain that you are acting out of humility, not arrogance, because of course we are subservient to the universe’s creator. Which is basically a superhuman version of us – a king, an emperor, a pharaoh, a lord – whose powers are expanded to the limits of our imagination.

The alternative view is that human beings are just one tiny feature of a vast cosmos, most of which does not function on a human scale or take any notice of what we want. Crops grow because rain exists, but rain exists for reasons that have virtually nothing to do with crops. Rain has been in existence for billions of years, crops for about ten thousand. In the cosmic scheme of things, human beings are just one tiny incidental detail on an insignificant ball of rock, most of whose history happened before we turned up to wonder what was going on. We may be the most important thing in the universe as far as we are concerned, but nothing that happens outside our tiny planet depends on our existence, with a few obvious exceptions like various small but complicated bits of metal and plastic now littering the surface of the Moon and Mars, in orbit around Mercury, Jupiter and Saturn, or wandering through the outer edges of our solar system. We might say that the universe is indifferent to us, but even that statement is too self-conscious; it endows the universe with the human attribute of indifference. There is no ‘it’ to be indifferent. The system of the world does not function in human terms.

We’ll refer to these ways of thinking as ‘human-centred’ and ‘universe-centred’. Many controversies that grab the headlines stem, to a greater or lesser extent, from the deep differences between them. Instead of assuming that one must be superior to the other, and then arguing vehemently about which one it is, we should first learn to recognise the difference. Both have advantages, in their proper spheres of influence. What causes trouble is when they tread on each other’s toes.

Before the early twentieth century, scientists used to think that phenomena like light could be either particles or waves, but not both. They argued – often nastily – about which was correct. When quantum theory was invented, it turned out that light and matter alike had both aspects, inseparably intertwined. At about the time that all reputable scientists knew that light was a wave, photons turned up, and those were particles of light. Electrons, which were obviously particles when they were discovered, turned out to have wavelike features as well. So quantum physicists got used to the idea that things that seemed to be particles were actually tiny clumps of waves.

Then quantum field theory came along, and the waves stopped being clumped. They could spread out. So now particle physicists have to know about quantum fields, and our best explanation of why ‘particles’ have mass is the existence of an all-pervading Higgs field. On the other hand, the current evidence only supports the existence of the particle-like aspect of this field: the Higgs boson. The field itself has not been observed. It might not exist, and that would be interesting, because it would overturn the way physicists currently think about particles and fields. It would also be somewhat annoying.

In everyday life, we encounter solid, compact objects, such as rocks, and they make it easy for us to think about tiny particles. We encounter sloshy but well-defined structures that move around on water, and we feel comfortable with waves. In a human-centred view, there are no sloshy rocks, which makes us assume – almost without questioning it – that nothing can be both particle and wave at the same time. But universe-centred thinking has shown that this assumption can be wrong outside the human domain.

The human-centred view is as old as humanity itself. It seems to be the default pattern of thinking for most of us, and that makes sound evolutionary sense. The universe-centred view appeared more recently. In the sense that we’re thinking of – science and the scientific method – universe-centred thinking has become widespread only in the last three or four hundred years. It is still a minority view, but a very influential one. To see why, we must understand two things: how science goes about its business, and what constitutes scientific evidence.

For those of us who are willing to pay attention, the universe-centred view has revealed just how big, how ancient, and how awe-inspiring the universe is. Even on a human scale, it’s a very impressive place, but our parochial perceptions pale into insignificance when confronted by the mind-numbing reality.

When early humans roamed the plains of Africa, the world must have seemed huge, but it was actually extremely small. A big distance was what you could walk in a month. An individual’s experience of the world was limited to the immediate region in which he or she lived. For most purposes, a human-centred view works very well for such a small world. The important plants and animals – the ones useful to specific groups of humans – were relatively few in number, and located in their immediate vicinity. One person could encompass them all, learn their names, know how to milk a goat or to make a roof from palm fronds. The deeper message of the Egyptian hieroglyphs is not how diverse that culture’s flora and fauna were, but how narrowly its symbolism was tailored to the organisms that were important to everyday Egyptian life.

As we came to understand our world more deeply, and asked new questions, comfortable answers in terms that we could intuitively understand began to make less and less sense. Conceivably the Sun might, metaphorically, be pushed around by an invisible giant dung-beetle, but the Sun is a vast ball of very hot gas and no ordinary beetle could survive the heat. You either fix things up by attributing supernatural powers to your beetle, or you accept that a beetle can’t hack it. You then have to accept that the motion of the Sun occurs for reasons that differ significantly from the purposeful shoving of a beetle storing up food for its larvae, raising the interesting question ‘why or how does it move?’. Similarly, although the setting Sun looks as if it is disappearing underground, you can come to understand that it is being obscured by the rotating bulk of the Earth. Instead of telling a story that offers little real insight, you’ve learned something new about the world.

It took time for humanity to realise all this, because our planet is far larger than a village. If you walked 40 kilometres every day it would take you three years to travel all the way round the world, ignoring ocean crossings and other obstacles. The Moon is nearly ten times as far away as the length of that journey; the Sun is 390 times as far away as the Moon. To get to the nearest star, you must multiply that figure by a further 270,000. The diameter of our home galaxy is 25,000 times as great again. The nearest galaxy of comparable size, the Andromeda galaxy, is 25 times as far away. The distance from Earth to the edge of the observable universe is more than 18,000 times as great as that. In round figures, 400,000,000,000,000,000,000,000 kilometres.

Four hundred sextillion. That’s some village.
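
For the sceptical, that figure can be checked by chaining together the rough factors quoted above, starting from roughly 40,000 kilometres for a trip around the Earth; every factor is rounded, so the answer is only good to the nearest power of ten or so.

# Chaining together the rough factors quoted in the text.
walk_round_earth_km = 40_000
moon      = 10 * walk_round_earth_km   # Moon: nearly ten times that journey
sun       = 390 * moon                 # Sun: 390 times the Moon's distance
star      = 270_000 * sun              # nearest star
galaxy    = 25_000 * star              # diameter of our galaxy
andromeda = 25 * galaxy                # nearest comparable galaxy
edge      = 18_000 * andromeda         # edge of the observable universe

print(f"{edge:.1e} km")
# Prints about 4.7e+23 km. The factors are all rounded, so this only lands
# in the same ballpark as the four hundred sextillion quoted above -- which
# is the point: the ballpark itself is absurdly large.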

We have no intuitive feel for anything that large. In fact, we have little intuitive feel for distances of more than a few thousand miles, and those only because many of us now travel such distances by air – which shrinks the world to a size we can comprehend. From London, New York is just a meal away.

We know that the universe is that big, and that old, because we have developed a technique that consciously and deliberately sets aside the human-centred view of the world. It does so by searching not just for evidence to confirm our ideas, which human beings have done since the dawn of time, but for evidence that could disprove them, a new and rather disturbing thought. This technique is called science. It replaces blind faith by carefully targeted doubt. It has existed in its current form for no more than a few centuries, although precursors go back a few thousand years. There is a sense in which ‘know’ is too strong a word, for scientists consider all knowledge to be provisional. But what we ‘know’ through science rests on much more secure foundations than anything else that we claim to know, because those foundations have survived being tested to destruction.

Through science, we know how big and how old the Earth is. We know how big and how old our solar system is. We know how big and how old the observable part of the universe is. We know that the temperature at the centre of the Sun is about 15 million degrees Celsius. We know that the Earth has a roughly spherical core of molten iron. We know that the Earth is roughly, though not exactly, spherical, and that (with suitable caveats about moving frames of reference) our planet goes round the Sun rather than being fixed in space while the Sun goes round it. We know that many features of an animal’s form are determined, to a significant degree, by a long, complicated molecule that lives inside the nucleus of its cells. We know that bacteria and viruses cause most of the world’s diseases. We know that everything is made from seventeen fundamental particles.

‘Know’ is one of those simple yet difficult words. How can we know, to take a typical example, what the temperature is at the Sun’s centre? Has anyone been there to find out?

Well, hardly. If scientists are right about the temperature at the centre of the Sun, nobody who was suddenly transported there would survive for a nanosecond. In fact, they’d burn up long before they even reached the Sun. We haven’t sent measuring instruments to the centre of the Sun, for the same reason. So how can we possibly know how hot it is at the centre, when no person or instrument can be sent there to find out?

We know such things because science is not limited to just observing the world. If it were, it would be firmly back in the human-centred realm. Its power derives from the possibility of thinking about the world, as well as experiencing it. The main tool of science is logical inference: deducing features of the world from a combination of observation, experiment and theory. Mathematics has long played a key role here, being the best tool we currently have for making quantitative inferences.

Most of us understand in broad terms what an observation is: you take a look at things, you measure some numbers. Theories are trickier. Confusingly, the word ‘theory’ has two distinct meanings. One is ‘an idea about the world that has been proposed, but has not yet been tested sufficiently for us to have much confidence that it is valid’. A lot of science consists of proposing theories in this sense, and then testing them over and over again in as many ways as possible. The other meaning is ‘an extensive, interconnected body of ideas that have survived countless independent attempts at disproof’. These are the theories that inform the scientific worldview. Anyone who tries to convince you that evolution is ‘only a theory’ is confusing the second use with the first, either through intention to mislead, or ignorance.

There is a fancy word for the first meaning: ‘hypothesis’. Few people actually use this because it always sounds pedantic, although ‘hypothetical’ is familiar enough. The closest ordinary word to the second meaning is ‘fact’, but this has an air of finality that is at odds with how science works. In science, facts are always provisional. However, well-established facts – well-developed and well-supported theories – are not very provisional. It takes a lot of evidence to change them, and often a change is only a slight modification.

Occasionally, however, there may be a genuine revolution, such as relativity or quantum theory. Even then, the previous theories often survive in a suitable domain, where they remain accurate and effective. NASA mostly uses Newton’s dynamics and his theory of gravity to compute the trajectories of spacecraft, not Einstein’s. An exception is the GPS network of navigational satellites, which has to take relativistic effects on its clocks into account to compute accurate positions.
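
To see why GPS cannot ignore Einstein: the satellite clocks run fast relative to clocks on the ground because they sit higher in the Earth’s gravitational field, and slow because they are moving, and the net effect is a few tens of microseconds per day, enough to wreck the positioning if left uncorrected. A rough first-order estimate, using round textbook values for the Earth’s mass and the satellites’ orbital radius (not GPS engineering data), looks like this:

import math

# Rough, first-order estimate of relativistic clock drift for a GPS
# satellite, using round textbook values (not engineering data).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the Earth, kg
c = 2.998e8            # speed of light, m/s
r_earth = 6.371e6      # radius of the Earth, m
r_orbit = 2.657e7      # GPS orbital radius (about 20,200 km altitude), m
seconds_per_day = 86_400

# General relativity: clocks higher in the gravitational field run faster.
gravitational = (G * M / c**2) * (1 / r_earth - 1 / r_orbit)

# Special relativity: moving clocks run slower (circular-orbit speed).
v = math.sqrt(G * M / r_orbit)
velocity = -(v**2) / (2 * c**2)

net = (gravitational + velocity) * seconds_per_day
print(f"gravitational: {gravitational * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"velocity:      {velocity * seconds_per_day * 1e6:+.1f} microseconds/day")
print(f"net drift:     {net * 1e6:+.1f} microseconds/day")
# Roughly +45 and -7 microseconds per day, netting about +38. Light travels
# about 11 km in 38 microseconds, so ignoring the drift would ruin the
# positioning within a day.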

Science is almost unique among human ways of thinking in not only permitting this kind of revisionism, but actively encouraging it. Science is consciously and deliberately universe-centred. That is what the ‘scientific method’ is about. It is like that because the pioneers of science understood the tricks that the human mind uses to convince itself that what it wants to be true is true – and took steps to combat them, rather than promoting them or exploiting them.

There is a common misconception of the scientific method, in which it is argued that there is no such thing because specific scientists stuck to their guns despite apparent contrary evidence. So science is just another belief system, right?

Not entirely. The mistake is to focus on the conservatism and arrogance of individuals, who often fail to conform to the scientific ideal. When they turn out to have been right all along, we hail them as maverick geniuses; when they don’t, we forget their views and move on. And that’s how the real scientific method works. All the other scientists keep the individuals in check.

The beauty of this set-up is that it would work even if no individual operated according to the ideal model of dispassionate science. Each scientist could have personal biases – indeed, it seems likely that they do – and the scientific process would still follow a universe-centred trajectory. When a scientist proposes a new theory, a new idea, other scientists seldom rush to congratulate him or her for such a wonderful thought. Instead, they try very hard to shoot it down. Usually, the scientist proposing the idea has already done the same thing. It’s much better to catch the flaw yourself, before publication, than to risk public humiliation when someone else notices it.

In short, you can be objective about what everyone else is doing, even if you are subjective about your own work. So it is not the actions of particular individuals that produce something close to the textbook scientific method. It is the overall activity of the whole community of scientists, where the emphasis is on spotting mistakes and trying to find something better. It takes only one bright scientist to notice a mistaken assumption. A PhD student can prove a Nobel prize-winner wrong.

If at some future date new observations conflict with what we think we know today, scientists will – after considerable soul-searching, some stubborn conservatism, and a lot of heated argument – revise their theories to resolve the difficulties. This does not imply that they are merely making everything up as they go along: each successive refinement has to fit more and more observations. The absence of complete certainty may seem a weakness, but it is why science has been so successful. The truth of a statement about the universe does not depend on how strongly you believe it.

Sometimes an entire area of science can become trapped in a massive conceptual error. A classic instance is ‘phlogiston’. The underlying scientific problem was to explain the changes that occur in materials when they burn. Wood, for instance, gives off smoke and flame, and turns into ash. This led to the theory that wood emits a substance, phlogiston, when it burns, and that fire is made from phlogiston.

Volume 2 of the first edition of Encyclopaedia Britannica, dated 1771, says: ‘Inflammable bodies … really contain the element fire as a constituent principle … To this substance … chemists have assigned the peculiar title of the Phlogiston, which is indeed no other than a Greek word for the inflammable matter … The inflammability of a body is an infallible sign that it contains a phlogiston …’ The same edition considers ‘element’ to mean earth, air, fire or water, and it has a fascinating analysis of the size of Noah’s Ark, based on its need to contain only a few hundred species.

As chemists investigated gases, and started weighing substances, they made a discovery that spelt doom for the phlogiston theory. Although ash is lighter than wood, the total weight of all combustion products – ash, gas and especially steam – is greater than that of the original wood. Burning wood gains weight. So, if it is emitting phlogiston, then phlogiston must have negative weight. Given enough imagination, this is not impossible, and it would be very useful as an antigravity device if it were true, but it’s unlikely. The discovery of the gas oxygen was the clincher: materials burn only in the presence of oxygen, and when they do, they take up oxygen from their surroundings. Phlogiston was a mistaken concept of ‘negative oxygen’. In fact, for a time oxygen was referred to as ‘dephlogisticated air’.

Significant changes in scientific orthodoxy often occur when new kinds of evidence become available. One of the biggest changes to our understanding of stars came when nuclear reactions were discovered. Before that, it seemed that stars ought to burn up their store of matter very rapidly, and go out. Since they visibly didn’t, this was a puzzle. An awful lot of argument about the Sun’s remarkable ability to stay alight disappeared as soon as scientists realised it shone by nuclear reactions, not chemical ones.

This discovery also changed scientists’ estimate of the age of the solar system. If the Sun is a very large bonfire, and is still alight, it must have been lit fairly recently. If it runs on nuclear reactions, it can be much older, and by studying those reactions, you can work out how much older. The same goes for the Earth. In 1862 the physicist William Thomson (later Lord Kelvin) calculated that on the ‘bonfire’ theory, the planet’s internal heat would have disappeared within 20-400 million years. His approach ignored convection currents in the Earth’s mantle, and when these were taken into account by John Perry in 1895 the age of the planet was revised to 2-3 billion years. Following the discovery of radioactivity, George Darwin and John Joly pointed out in 1903 that the Earth had its own internal source of heat, caused by radioactive decay. Understanding the physics of radioactive decay led to a very effective method for dating ancient rocks … and so it went. In 1956 Clair Cameron Patterson used the physics of uranium decaying into lead, and observations of these elements in several meteorites, to deduce the currently accepted age of the Earth: 4.54 billion years. (The material in meteorites formed at the same time as the planets, but has not been subjected to the same complicated processes as the material of the Earth. Meteorites are a ‘frozen’ record of the early solar system.)
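
The principle behind such a decay clock is simple, even though real analyses are not: the number of surviving parent atoms falls exponentially with a known half-life, so the ratio of accumulated daughter atoms to surviving parent atoms reveals the age. Here is a deliberately simplified sketch for a single chain, uranium-238 decaying (ultimately) to lead-206, assuming no lead was present to begin with and nothing has leaked in or out; real analyses use several decay chains and isochron methods to check those assumptions.

import math

half_life_gyr = 4.468                          # U-238 half-life, billions of years
decay_constant = math.log(2) / half_life_gyr   # per billion years

def age_from_ratio(daughter_per_parent):
    """Age in billions of years, given a measured Pb-206 / U-238 ratio."""
    return math.log(1 + daughter_per_parent) / decay_constant

def ratio_from_age(age_gyr):
    """Expected Pb-206 / U-238 ratio for a sample of the given age."""
    return math.exp(decay_constant * age_gyr) - 1

print(ratio_from_age(4.54))   # ~1.02: roughly equal numbers of daughter and
                              # parent atoms in material as old as the Earth
print(age_from_ratio(1.02))   # ~4.5 billion years, run the other way round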

Independent verification has come from Earth’s own rocks; in particular, tiny particles of rock called zircons. Chemically, these rocks are zirconium silicate, an extremely hard material that survives destructive geological processes such as erosion, and even metamorphism, where rocks are heated to extreme temperatures by volcanic intrusions. They can be dated using radioactive decay of uranium and thorium. The most ancient zircons yet observed – small crystals found in the Jack Hills of Western Australia – are 4.404 billion years old. Many different lines of evidence all converge on a similar figure for the age of our planet. This is why scientists are adamant that contrary to the claims of Young Earth creationists, a 10,000-year-old planet is completely inconsistent with the evidence and makes absolutely no sense. And they have come to this conclusion not through belief, or by seeking only confirmatory evidence and ignoring anything that conflicts, but by trying to prove themselves wrong.

No other system of human thought has the same kind of self-scrutiny. Some come close: philosophy, the law. Faith-based systems do change, usually very slowly, but few of them advocate self-doubt as a desirable instrument of change. In religion, doubt is often anathema: what counts is how strongly you believe things. This is rather evidently a human-based view: the world is what we sincerely and deeply believe it to be. Science is a universe-based view, and has shown many times that the world is not what we sincerely and deeply believe it to be.

One of Benford’s examples illustrates this point: James Clerk Maxwell’s discovery of electromagnetic waves travelling at the speed of light, implying that light itself is a wave. Human-centred thinking could not have made this discovery, indeed would have been sceptical that it was possible: ‘The poets’ and philosophers’ inability to see a connection between sloshing currents in waves and luminous sunset beauty revealed a gap in the human imagination, not in reality,’ Benford wrote.

Similarly, the Higgs boson, by completing the standard model, tells us that there is far more to our universe than meets the eye. The standard model, and much of the research that led to it, starts from the idea that everything is made from atoms, which is already far removed from everyday experience, and takes it to a new level. What are atoms made of? Even to ask such a question, you have to be able to think outside the box of human-level concerns. To answer it, you have to develop that kind of thinking into a powerful way to find out how the universe behaves. And you don’t get very far until you understand that this may be very different from how it appears to behave, and from how human beings might want it to behave.

That method is science, and it occupies the second of Benford’s categories: the universe as a context for humanity. In fact, that is where its power originates. Science is done by people, for people, but it works very hard to circumvent natural human thought-patterns, which are centred on us. But the universe does not work the way we want it to; it does its own thing and we mostly go with the flow. Except that, being part of the universe, we have evolved to feel comfortable in our own little corner of it. We can interact with little pieces of it, and sometimes we can bend them to our will. But the universe does not exist in order for us to exist. Instead, we exist because the universe is that kind of universe.

Our social lives, on the other hand, operate almost exclusively in Benford’s first category: humans as a context for the universe. We have spent millennia arranging this, re-engineering our world so that things happen because we want them to. Too cold? Build a fire. Dangerous predators? Exterminate them. Hunting too difficult? Domesticate useful animals. Get wet in the rain? Build a house with a roof. Too dark? Switch on the light. Looking for the Higgs? Spend €7.5 billion.

As a result, most of the things we now encounter in our daily lives have been made by humans or extensively modified by humans. Even the landscape has been determined by human activity. Britain’s hills have been shaped by extensive ancient earthworks, and most of its forests were cut down in the Iron Age so that farms could exploit the land. That wonderful scenery that you find at places like the stately home of Chatsworth – ‘nature in all its glory’, with a river flowing between sweeping hills, dotted with mature trees? Well, most of it was constructed by Capability Brown. Even the Amazon rainforest now seems to be the result of agricultural and architectural activity by ancient South American civilisations.

The differences between the two Benfordian worldviews are profound, but remain manageable as long as they don’t overtly clash. Trouble arises when both worldviews are applied to the same things. Then, they may conflict with each other, and intellectual conflict can turn into political conflict. The uneasy relation between science and religion is a case in point. There are comforting ways to resolve the apparent conflict, and there are plenty of religious scientists, although few of them are Biblical literalists. But the default ways of thinking in science and religion are fundamentally different, and even determined social relativists tend to feel uneasy when they try to claim there’s no serious conflict. Benford’s distinction explains why.

Most religious explanations of the world are human-centred. They endow the world with purpose, a human attribute; they place humans at the pinnacle of creation; they consider animals and plants to be resources placed on Earth for the benefit of humanity. In order to explain human intelligence and will, they introduce ideas like the soul or the spirit, even though no corresponding organs can be found in the human body, and from there it is a short step to the afterlife, whose existence is based entirely on faith, not evidence. So it should be no surprise that throughout history, science and religion have clashed. Moderates in both camps have always understood that these clashes are in a sense unnecessary. Looking back after enough time has passed, it is often hard to understand what all the fuss was about. But at the time, those two distinct worldviews simply could not accommodate each other.

The biggest battleground, in this context, is life. The astonishing world of living organisms: Life with a capital L. And, even more so, human consciousness. We are surrounded by life, we ourselves are conscious living beings … and we find it all terribly mysterious. Thirty thousand years ago some humans could carve quite realistic animals and people from bone or ivory, but no one, even today, knows how to breathe life into an inanimate object. Indeed, the idea that life is something you can ‘breathe into’ an inanimate object is not particularly sensible. Living creatures are not made by starting with a dead version and bringing it to life. Universe-centred thinkers understand this, but human-centred thinkers often see the body – especially the human body – as a dead thing that is animated by a separate, and immaterial, soul or spirit.

The ‘proof’, of course, is that we observe the reverse process on a regular basis. When someone dies, life seems to pass from their body, leaving a corpse. Where did the life go?

Agreed, science doesn’t fully understand what gives us our personalities and consciousness, but it is pretty clear that personality derives from the structure and operation of a brain inside a body, interacting with the external world, especially other human beings. The person develops as the human develops. It’s not a supernatural thing, inserted at conception or birth, with a separate existence of its own. It’s a process carried out by ordinary matter in a living person, and when that person dies, their process stops. It doesn’t depart into a new existence outside the ordinary universe.

In a human-centred view, souls make sense. In a universe-centred one, they look like a philosophical category error. In centuries of studying human beings, not a shred of convincing scientific evidence has ever been found for a soul. The same goes for all of the supernatural elements of all of the world’s religions. Science and religion can coexist peacefully, and it’s probably best that they do. But until religions discard the supernatural, these two very different worldviews can never be fully reconciled. And when fundamentalists try to discredit science because it conflicts with their beliefs, they bring their beliefs into disrepute and provoke unnecessary conflict.

However, even though human-centred thinking can be abused, we cannot understand our place in the universe by using only universe-centred thinking. It’s a human-centred question, and our relationship to the universe involves both points of view. Even though everything in the universe is made from seventeen fundamental particles, it’s how those particles are combined, and how the resulting systems behave, that make us what we are.

fn1 Ever since the 1970s physicists have speculated that quarks and electrons are actually made from even smaller particles, variously named alphons, haplons, helons, maons, prequarks, primons, quinks, rishons, subquarks, tweedles and Y-particles. The generic name for such particles is currently ‘preon’.

fn2 Provided all irregularities are exaggerated by a factor of 7000. http://www.newscientist.com/article/dn20335-earth-is-shaped-like-a-lumpy-potato.html

fn3 Recall that DNA stands for ‘deoxyribonucleic acid’, a type of molecule that famously takes the form of a double helix, like two interwound spiral staircases. The ‘steps’ of the staircase come in four kinds, called bases, which are like code letters. The sequence of bases differs from one organism to the next, and it represents genetic information about that organism.

fn4 Gregory Benford, ‘A creature of double vision’, in Science Fiction and the Two Cultures: Essays on Bridging the Gap between the Sciences and the Humanities, edited by Gary Westfahl and George Slusser, McFarland Publishers, 2009, pages 228-236.




