I know you’re all following the Minute Physics videos (that we talked about here), but just in case my knowledge is somehow fallible you really should start following them. After taking care of why stones are round, and why there is no pink light, Henry Reich is now explaining the fundamental nature of our everyday world: quantum field theory and the Standard Model. It’s a multi-part series, since some things deserve more than a minute, dammit.
Two parts have been posted so far. The first is just an intro, pointing out something we’ve already heard: the Standard Model of particle physics describes all the world we experience in our everyday lives.
The second one, just up, tackles quantum field theory and the Pauli exclusion principle, of which we’ve been recently speaking. (Admittedly it’s two minutes long, but these are big topics!)
The world is made of fields, which appear to us as particles when we look at them. Something everyone should know.
They do things differently over in Britain. For one thing, their idea of a fun and entertaining night out includes going to listen to a lecture/demonstration on quantum mechanics and the laws of physics. Of course, it helps when the lecture is given by someone as charismatic as Brian Cox, and the front row seats are filled with celebrities. (And yes I know, there are people here in the US who would find that entertaining as well — I’m one of them.) In particular, this snippet about harmonics and QM has gotten a lot of well-deserved play on the intertubes.
More recently, though, another excerpt from this lecture has been passed around, this one about ramifications of the Pauli Exclusion Principle. (Headline at io9: “Brian Cox explains the interconnectedness of the universe, explodes your brain.”)
The problem is that, in this video, the proffered mind-bending consequences of quantum mechanics aren’t actually correct. Some people pointed this out, including Tom Swanson in a somewhat intemperately-worded blog post, to which I pointed in a tweet. Which led to some tiresome sniping on Twitter, which you can dig up if you’re really fascinated. Much more interesting to me is getting the physics right.
One thing should be clear: getting the physics right isn’t easy. For one thing, going from simple quantum problems of a single particle in a textbook to the messy real world is often a complicated and confusing process. For another, the measurement process in quantum mechanics is famously confusing and not completely settled, even among professional physicists.
And finally, when one translates from the relative clarity of the equations to a natural-language description in order to reach a broad audience, it’s always possible to quibble about the best way to translate. It’s completely unfair in these situations to declare a certain popular exposition “wrong” just because it isn’t the way you would have done it, or even because it assumes certain technical details that the presenter did not fully footnote. It’s a popular lecture, not a scholarly tome. In this kind of format, there are two relevant questions: (1) is there an interpretation of what’s being said that maps the informal description onto a correct formal statement within the mathematical formulation of the theory?; and (2) has the formalism been translated in such a way that a non-expert listener will come away with an understanding that is reasonably close to reality? We should be charitable interpreters, in other words.
In the video, Cox displays a piece of diamond, in order to illustrate the Pauli Exclusion Principle. The exclusion principle says that no two fermions — “matter” particles in quantum mechanics, as contrasted with the boson “force” particles — can exist in exactly the same quantum state. This principle is why chemistry is interesting, because electrons have to have increasingly baroque-looking orbitals in order to be bound to the same atom. It’s also why matter (like diamond) is solid, because atoms can’t all be squeezed into the same place. So far, so good.
But then he tries to draw a more profound conclusion: that interacting with the diamond right here instantaneously affects every electron in the universe. Here’s the quote:
So here’s the amazing thing: the exclusion principle still applies, so none of the electrons in the universe can sit in precisely the same energy level. But that must mean something very odd. See, let me take this diamond, and let me just heat it up a bit between my hands. Just gently warming it up, and put a bit of energy into it, so I’m shifting the electrons around. Some of the electrons are jumping into different energy levels. But this shift of the electron configuration inside the diamond has consequences, because the sum total of all the electrons in the universe must respect Pauli. Therefore, every electron around every atom in the universe must be shifted as I heat the diamond up to make sure that none of them end up in the same energy level. When I heat this diamond up all the electrons across the universe instantly but imperceptibly change their energy levels.
(Minor quibble: I don’t think that rubbing the diamond causes any “jumping” of electrons; the heating comes from exciting vibrational modes of the atoms in the crystal. But maybe I’m wrong about that? And in any event it’s irrelevant to this particular discussion.)
At face value, there’s no question that what he says here lies somewhere between misleading and wrong. It seems quite plain (that’s the problem with being a clear speaker) that he’s saying that the energy levels of electrons throughout the universe must change because we’ve changed the energy levels of some electrons here in the diamond, and the Pauli exclusion principle says that two electrons can’t be in the same energy level. But the exclusion principle doesn’t say that; it says that no two identical particles can be in the same quantum state. The energy is part of a quantum state, but doesn’t define it completely; we need to include other things like the position, or the spin. (The ground state of a helium atom, for example, has two electrons with precisely the same energy, just different spins.)
Consider a box with non-interacting fermions, all in distinct quantum states (as they must be). Take just one of them and zap it to move it into a different quantum state, one unoccupied by any other particle. What happens to the other particles in the box? Precisely nothing. Of course if you zap it into a quantum state that is already occupied by another particle, that particle gets bumped somewhere else — but in the real universe there are vastly more unoccupied states than occupied ones, so that can’t be what’s going on. Taken literally as a consequence of the exclusion principle, the statement is wrong.
But it’s possible that there is a more carefully-worded version of the statement that relies on other physics and is correct. And we might learn some physics by thinking about it, so it’s worth a bit of effort. I think it’s possible to come up with interpretations of the statement that make it correct, but in doing so the implications become so completely different from what the audience actually heard that I don’t think we can give it a pass.
The two possibilities for additional physics (over and above the exclusion principle) that could be taken into account to make the statement true are (1) electromagnetic interactions of the electrons, and (2) quantum entanglement and collapse of the wave function. Let’s look at each in turn.
The first possibility, and the one I actually think is lurking behind Cox’s explanation, is that electrons aren’t simply non-interacting fermions; they have an electric field, which means they can interact with other electrons, not to mention protons and other charged particles. If we change the ambient electric field — e.g., by moving the diamond around — it changes the wave function of the electrons, because the energy changes. Physicists would say that we changed the Hamiltonian, the expression for the energy of the system.
There is an interesting and important point to be made here: in quantum mechanics, the wave function for a particle will generically be spread out all over the universe, not confined to a small region. In practice, the overwhelming majority of the wave function might be localized to one particular place, but in principle there’s a very tiny bit of it at almost every point in space. (At some points it might be precisely zero, but those will be relatively rare.) Consequently, when I change the electric field anywhere in the universe, in principle the wave function of every electron changes just a little bit. I suspect that is the physical effect that Cox is relying on in his explanation.
But there are serious problems in accepting this as an interpretation of what he actually said. For one thing, it has nothing to do with the exclusion principle; bosons (who can happily pile on top of each other in the same quantum state) would be affected just as much as fermions. More importantly, it fails as a job of translation, by giving people a completely incorrect idea of what is going on.
The point of this last statement is that when you say “When I heat this diamond up all the electrons across the universe instantly but imperceptibly change their energy levels,” people are naturally going to believe that something has changed about electrons very far away. But that’s not true, in the most accurate meaning we can attach to those words. In particular, imagine there is some physicist located in the Andromeda galaxy, doing experiments on the energy levels of electrons. This is a really good experimenter, with lots of electrons available and the ability to measure energies to arbitrarily good precision. When we rub the diamond here on Earth, is there any change at all in what that experimenter would measure?
Of course the answer is “none whatsoever.” Not just in practice, but in principle. The Hamiltonian of the universe will change when we heat up the diamond, which changes the instantaneous time-independent solutions to the Schrödinger equation throughout space, so in principle the energy levels of all the electrons in the universe do change. But that change is completely invisible to the far-off experimenter; there will be a change, but it won’t arrive until the change in the electromagnetic field itself has had time to propagate out to Andromeda, which happens at the speed of light. Another way of saying it is that “energy levels” are static, unchanging states, and what really happens is that we poke the electron into a non-static state that gradually evolves. (If it were any other way, we could send signals faster than light using this technique.)
Verdict: if this is what’s going on, there is an interpretation under which Cox’s statement is correct, except that it has nothing to do with the exclusion principle, and more importantly it gives a quite false impression to anyone who might be listening.
The other possibly relevant bit of physics is quantum entanglement and wave function collapse. This is usually the topic where people start talking about instantaneous changes throughout space, and we get mired in interpretive messes. Again, these concepts weren’t mentioned in this part of the lecture, and aren’t directly tied to the exclusion principle, but it’s worth discussing them.
There is something amazing and magical about quantum mechanics that is worth emphasizing over and over again. To wit: unlike in classical mechanics, there are not separate states for every particle in the universe. There is only one state, describing all the particles; modest people call it the “many-particle wave function,” while visionaries call it the “wave function of the universe.” But the point is that you can’t necessarily describe (or measure) what one particle is doing without also having implications for what other particles are doing — even “instantaneously” throughout space (although in ways that have to be carefully parsed).
Imagine we have a situation with two electrons, each in a separate atom, with different energy levels in each atom. Quantum mechanics tells us that it’s possible for the system to be in the following kind of state: each electron is either in energy level 1 or energy level 2, and we don’t know which one (more carefully, they are in a superposition), but we do know that they are in different energy levels. So if we measure the first electron and find it in level 1, we know for sure that the other electron is in level 2, and vice-versa. This is true even if the two electrons are a jillion miles away from each other.
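The correlations in this kind of state are easy to simulate. Here is a minimal sketch in Python (an idealized pair of two-level electrons; the particular entangled state and the level labels are illustrative, not a model of any real atom):

```python
import numpy as np

# Joint basis for two two-level electrons, ordered |11>, |12>, |21>, |22>,
# where |ab> means "electron A in level a, electron B in level b".
outcomes = [(1, 1), (1, 2), (2, 1), (2, 2)]

# Entangled state: the electrons are definitely in different levels,
# in a superposition of "A in 1, B in 2" and "A in 2, B in 1".
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

probs = np.abs(psi) ** 2                  # Born rule for joint outcomes
rng = np.random.default_rng(0)
samples = rng.choice(4, size=10_000, p=probs)

# Perfect anti-correlation: the two electrons never share a level...
assert all(outcomes[k][0] != outcomes[k][1] for k in samples)

# ...even though each electron on its own is a 50/50 coin flip.
frac_A_in_1 = np.mean([outcomes[k][0] == 1 for k in samples])
print(f"electron A found in level 1: {frac_A_in_1:.1%} of trials")
```

Measure A and find level 1, and you know with certainty that B is in level 2, no matter how far apart the two atoms are; that much is perfectly standard quantum mechanics.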
As far as I can tell, this isn’t at all what Brian Cox was talking about; he discusses heating up the electrons in a diamond by rubbing on it, not measuring their energies by observing them and then drawing conclusions about entangled electrons very far away. (In a real-world context it’s very unlikely that distant electrons are entangled in any noticeable way, although strictly speaking you could argue that everything is slightly entangled with everything else.) But there is some underlying moral similarity — this is, as mentioned, the context in which people traditionally talk about instantaneous changes in quantum mechanics.
So let’s go back to our observer in Andromeda. Imagine that we have such a situation with two electrons in two atoms, in a mutually entangled state. We measure our electron to be in energy level 1. Is it true that we instantly know that our far-away friend will measure their electron to be in energy level 2? Yes, absolutely true.
But consider the same experiment from the point of view of our far-away friend. They know what the state of the electrons is, so they know that when they observe their electron it will be either in level 1 or level 2, and ours will be in the other one. And let’s say they even know that we are going to make a measurement at some particular moment in time. What changes about any measurement they could make on their electron, before and after we measure ours?
Absolutely nothing. Before we made our measurement, they didn’t know the energy level of their electron, and would give 50/50 chances for finding it in level 1 or 2. After we made our measurement, it’s in some particular state, but they don’t know what that state is. So again they would give a 50/50 chance for getting either result. From their point of view, nothing has changed.
It has to work out this way, of course. Otherwise we could indeed use quantum entanglement to send signals faster than light (which we can’t). Indeed, note that we had to refer to “time” in some particular reference frame, stretching across millions of light-years. In some other frame, relativity teaches us that the order of measurements could be completely different. So it can’t actually matter. It’s possible to say that the wave function of the universe changes instantaneously throughout space when we make a measurement; but that statement has no consequences. It’s just one of an infinite number of legitimate descriptions of the situation, corresponding to different choices of how we define “time.”
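The “nothing changes far away” claim can be checked directly: everything the far-away observer can predict is encoded in their reduced density matrix, and our measurement doesn’t touch it. Here is a sketch (an idealized two-level entangled pair again, with real amplitudes for simplicity):

```python
import numpy as np

# Entangled state as an amplitude matrix psi[a, b]: row a = our electron's
# level, column b = the far-away electron's level (0-indexed).
psi = np.array([[0.0,  1.0],
                [-1.0, 0.0]]) / np.sqrt(2)

# Tracing out our electron gives the far-away observer's reduced density
# matrix, which fixes the probabilities of anything they can measure.
rho_before = psi.T @ psi        # rho[b, b'] = sum_a psi[a, b] * psi[a, b']

# Now we measure our electron. The far-away observer doesn't know our result,
# so their description is the probability-weighted mixture over our outcomes.
rho_after = np.zeros((2, 2))
for a in range(2):                        # each result we might get
    p = np.dot(psi[a], psi[a])            # probability of that result
    if p > 0:
        collapsed = psi[a] / np.sqrt(p)   # far-away state given that result
        rho_after += p * np.outer(collapsed, collapsed)

print(np.allclose(rho_before, rho_after))
```

The two density matrices are identical: the far-away observer’s 50/50 odds for level 1 versus level 2 are the same before and after we measure. That is the no-signalling property in one calculation.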
Verdict: I don’t think this is what Cox was talking about. He doesn’t mention entanglement, or collapse of the wave function, or anything like that. But even if he had, I would personally judge it extremely misleading to tell people that the energy of very far-away electrons suddenly changed because I was rubbing a diamond here in this room.
Just to complicate things a bit more, Brian in a tweet refers to this discussion of the double-well potential as some quantitative justification for what he’s getting at in the lecture. These notes are a bit confusing, but I’ve had a go at them.
The reason they are confusing is because they start off talking about the exclusion principle and indistinguishable particles, but when it comes time to look at equations they only consider single-particle quantum mechanics. They have a situation with two “potential wells” — think of two atoms, perhaps quite far away, in which an electron might find itself. They then consider the wave function for a single electron, ψ(x). And they show, perfectly correctly, that the lowest energy states of this system have nearly identical energies, and have the feature that the electron has an equal probability of being in either of the two atoms.
Which, as far as it goes, is completely fine. It illustrates an interesting example where the lowest-energy state of the electron can be really spread out in space, rather than being localized on a single atom. In particular, the very existence of the other atom far away has a tiny but (in principle) perceptible effect on the shape of the wave function in the vicinity of the nearby atom.
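The near-degeneracy of those two lowest states is easy to verify numerically. Here is a sketch that diagonalizes a one-dimensional double-well Hamiltonian by finite differences (units where ħ = m = 1; the Gaussian wells and all their parameters are made up purely for illustration):

```python
import numpy as np

# Grid for one electron in one dimension, hbar = m = 1 (illustrative units).
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Two identical Gaussian wells separated by a barrier (made-up parameters).
V = -8.0 * (np.exp(-(x - 2.0) ** 2) + np.exp(-(x + 2.0) ** 2))

# Finite-difference Hamiltonian: H = -(1/2) d^2/dx^2 + V(x).
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(1.0 / dx**2 + V) + np.diag(off, 1) + np.diag(off, -1)

E, states = np.linalg.eigh(H)
print(f"E0 = {E[0]:.6f} (even state), E1 = {E[1]:.6f} (odd state)")
print(f"splitting E1 - E0 = {E[1] - E[0]:.2e}")   # tiny vs. the gap to E2

# Each low-lying eigenstate is spread equally over both wells.
p_left = np.sum(states[:, 0][x < 0] ** 2)
print(f"ground-state probability in the left well: {p_left:.3f}")
```

The splitting between the even and odd states is orders of magnitude smaller than the gap to the next level, and shrinks exponentially as the wells are pulled apart; meanwhile each eigenstate puts exactly half its probability in each well, just as the notes describe.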
But this says very little about what we purportedly care about, which is the Pauli exclusion principle, something that only makes sense when we have more than one electron. (It says that no two electrons can be in the same state; it has nothing interesting to say about what one electron can do.) It’s almost as if the notes cut off before they could be finished. If we wanted to think about the exclusion principle, we would need to think about two electrons, with positions let’s say x1 and x2, and a joint quantum wave function ψ(x1, x2). Then we would note that fermions have the property that such a wave function must be “odd” in its arguments: ψ(x1, x2) = -ψ(x2, x1). Physically, we’re saying that the wave function goes to minus itself when we exchange the two particles. But if the two particles were in exactly the same state, the wave function would necessarily be unchanged when we exchanged the particles. And a function that is both equal to another function and equal to minus that function is necessarily zero. So that’s the exclusion principle: given that minus sign under exchange, two particles can never be in precisely the same quantum state.
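That last argument can be checked symbolically. Here is a sketch with SymPy, using two generic (hypothetical) orbitals phiA and phiB:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
phiA, phiB = sp.Function('phiA'), sp.Function('phiB')  # generic orbitals

def slater(f, g):
    """Unnormalized antisymmetrized two-fermion wave function."""
    return f(x1) * g(x2) - g(x1) * f(x2)

psi = slater(phiA, phiB)

# Exchanging the two particles flips the sign: psi(x2, x1) = -psi(x1, x2).
swapped = psi.subs([(x1, x2), (x2, x1)], simultaneous=True)
assert sp.simplify(swapped + psi) == 0

# Putting both fermions into the *same* orbital annihilates the state:
assert sp.simplify(slater(phiA, phiA)) == 0   # the exclusion principle
```

The minus sign under exchange is put in by hand (that’s what it means to be a fermion), and the exclusion principle then falls out as a one-line consequence.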
The notes don’t say any of that, however; they just talk about the two lowest energy levels in a double-well potential for a single electron. They don’t demonstrate anything interesting about the exclusion principle. The analysis does imply, correctly, that changing the Hamiltonian of a particle somewhere far away (e.g. by altering the shape of one of the wells) changes, even if by just a little bit, the energy of the wave function defined over all space. That’s connected to the first possible interpretation of Cox’s lecture above, that heating up the diamond changes the Hamiltonian of the universe and therefore affects the wave function of every electron. Which also has nothing to do with the exclusion principle, so at least it’s consistent.
In terms of explaining the mysteries of quantum mechanics to a wide audience, which is the point here, I think the bottom line is this: rubbing a diamond here in this room does not have any instantaneous effect whatsoever on experiments being done on electrons very far away. There are two very interesting and conceptually central points worth making: that the Pauli exclusion principle helps explain the stability of matter, and that quantum mechanics says there is a single state for the whole universe rather than separate states for each individual particle. But in this case these became mixed up a bit, and I suspect that this part of the lecture wasn’t the most edifying for the audience. (The rest of the lecture still remains pretty awesome.)
Update: I added this as a comment, but I’m promoting it to the body of the post because hopefully it makes things clearer for people who like a bit more technical precision in their quantum mechanics.
Consider the double-well potential talked about in the notes I linked to near the end of the post. Think of this as representing two hydrogen nuclei, very far away. And imagine two electrons in this background, close to their ground states.
To start, think of the electrons as free particles, not interacting with each other. (That’s a very bad approximation in this case, contrary to what is said in the notes, but we can fix it later.) As the notes correctly state, for any single electron there will be two low-lying states, one that is even E(x) and one that is odd O(x). When we now add the other electron in, they can’t both be in the same lowest-lying state (the even one), because that would violate Pauli. So you are tempted to put one in E(x1) and the other in O(x2).
But that’s not right, because they’re indistinguishable fermions. The two-particle wave function needs to obey ψ(x1, x2) = -ψ(x2, x1). So the correct state is the antisymmetric product: ψ(x1, x2) = E(x1) O(x2) – O(x1) E(x2).
That means that neither electron is really in an energy level; they are both part of an entangled superposition. If you zap one of them into a completely different energy, nothing whatsoever happens to the other one. It would now be possible for the other one to decay to be purely in the ground state, rather than a superposition of E and O, but that would require some interaction to allow the decay. (All this is ignoring spins. If we allow for spin, they could both be in the ground-state energy level, just with opposite spins. When we zapped one, what happens to the other is again precisely nothing. That’s what you get for considering non-interacting particles.)
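The “precisely nothing” claim can be made concrete with a partial trace. Here is a sketch, using three hypothetical single-particle levels (the even state E, the odd state O, and a high “zapped” level Z) for non-interacting particles with real amplitudes:

```python
import numpy as np

# Single-particle levels: index 0 = E (even), 1 = O (odd), 2 = Z ("zapped").
d = 3

# Antisymmetrized two-fermion state psi[a, b] ~ E(x1) O(x2) - O(x1) E(x2).
psi = np.zeros((d, d))
psi[0, 1], psi[1, 0] = 1.0, -1.0
psi /= np.linalg.norm(psi)

def rho_2(state):
    """Reduced density matrix of particle 2 (particle 1 traced out)."""
    return state.T @ state

# "Zapping" particle 1 is a unitary acting on particle 1 alone: swap E <-> Z.
U = np.eye(d)
U[[0, 2]] = U[[2, 0]]
zapped = U @ psi          # acts only on the first index (particle 1)

# Particle 2's state is exactly unchanged -- Pauli notwithstanding.
print(np.allclose(rho_2(psi), rho_2(zapped)))
```

Any operation confined to particle 1 (any unitary of the form U ⊗ I) leaves particle 2’s reduced density matrix invariant, which is exactly why nothing observable happens to the other electron.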
But of course it’s a very bad approximation to ignore the interaction between the two electrons, precisely because of the above analysis: it’s not true that one is here and one is far away; both are equally distributed between being here and being far away, and can interact noticeably.
Since electrons repel, the true ground state is one in which the wave function for one is strongly concentrated on one hydrogen atom, and the wave function for the other is strongly concentrated on the other. Of course it’s the antisymmetrized product of those two possibilities, because they are identical fermions. The energies of both are identical.
Now when you zap one electron to change its energy, you do change the energy of the other one, in principle. But it has nothing to do with the exclusion principle; it’s just because you’ve changed the amount of electrostatic repulsion by changing the spatial wave function of one of the electrons.
Furthermore, while you instantaneously change “the energy levels” available to the far-away electron by jiggling the one nearby, you don’t actually change the position-space wave function in the far-away region at all. As I said in the post, you’ve poked the other electron into a superposition rather than being in an energy eigenstate. Its wave function (to the extent that we can talk about it, e.g. by integrating out the other particles) is now a function of time. And the place where it’s actually evolving is completely inside your light cone, not infinitely far away. So there is literally nothing someone could do, in principle as well as practice, to detect any change as a far-away observer.
According to sources familiar with the experiment, the 60-nanosecond discrepancy appears to come from a bad connection between a fiber-optic cable that connects to the GPS receiver used to correct the timing of the neutrinos’ flight and an electronic card in a computer. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed. Since this time is subtracted from the overall time of flight, it appears to explain the early arrival of the neutrinos. New data, however, will be needed to confirm this hypothesis.
I suppose it’s possible. But man, that would make the experimenters look really bad. And the sourcing in the article is just about as weak as it could be: “according to sources familiar with the experiment” is as far as it goes. (What is this, politics?)
So it’s my duty to pass it along, but I would tend to reserve judgment until a better-sourced account comes along. Not that there’s much chance that neutrinos are actually moving faster than light; that was always one of the less-likely explanations for the result. But this isn’t how we usually learn about experimental goofs.
Update again: and here is the official CERN press release. Not exactly admitting that a loose cable is at the heart of everything, or even that the result was wrong, but saying that there were problems that could potentially invalidate the result.
The conventional presentation of a book — words and images printed on sheets, bound together in a folio — is a perfected technology. It hasn’t changed much in centuries, and likely will be with us for centuries to come.
But that doesn’t mean that other technologies won’t be nudging their way into the same conceptual space. Everyone knows that the practice of publishing is being dramatically altered by the appearance of ebooks — a very broad designation for book-length content that is meant to be read on an electronic device. At the simplest level, an ebook can simply be a text file displayed by a reading program. But the possibilities are much more flexible, allowing for different kinds of images, video, interactivity with the user, and two-way connections with the outside world. The production and distribution process is also much easier, which opens the door to books that are faster, shorter, longer, and quirkier than the usual set of hardbacks and paperbacks. If I put my mind to it, I could meander through this blog’s archives, pick out a few posts, and have an ebook published by this evening. It would suck — editing and presenting a good collection requires effort — but it would be published.
In the current state of the market, one question is: how do you find good ebooks to read, ones that don’t suck? Into this breach leaps Download The Universe, a new website devoted to reviewing ebooks about science. Not just “science books with electronic editions,” but books that only exist in the e- format. (Apparently we have already passed through the awkward hyphenation phase, and gone from “e-book” right to “ebook.”) Because it would be embarrassing not to, we also have a Twitter account at @downloadtheuni.
This brand-new project has been led by our inestimable blog neighbor Carl Zimmer, who has assembled a crack editorial team consisting of some of the world’s leading new-media science journalists and also me. We’ll be contributing regular (one hopes) reviews of ebooks old and new, all with a science focus. Suggestions welcome, of course.
The world is going to change, whether we like it or not. It always feels good to help channel that change in constructive ways.
Everyone who has been paying attention knows that there is a strong anti-science movement in this country — driven partly by populist anti-intellectualism, but increasingly by corporate interests that just don’t like what science has to say. It’s an old problem — tobacco companies succeeded for years in sowing doubt about the health effects of smoking — but it’s become significantly worse in recent years.
Nina Fedoroff is the president of the American Association for the Advancement of Science (AAAS), which is holding its annual meeting right now. She is not holding back about the problem, but tackling it directly. From a weekend article in the Guardian (h/t Dan Gillmor):
“We are sliding back into a dark era,” she said. “And there seems little we can do about it. I am profoundly depressed at just how difficult it has become merely to get a realistic conversation started on issues such as climate change or genetically modified organisms.”
Tim F. at Balloon Juice points to this flowchart at Climate Progress that illustrates how the money and message get sent around to sow doubt about scientific findings. (Okay, it’s not really a flowchart, but you get the point.) I was also struck by a link to an older article by Ian Sample, which put the problem in its starkest terms: the American Enterprise Institute was offering $10,000 to scientists and economists who were willing to write op-eds or essays critiquing the IPCC climate report — before it was published. Money goes a long way.
Relatedly, here’s Ruth Bader Ginsburg trying to push the Supreme Court away from its ruling in Citizens United, the notorious case that led to the creation of SuperPACs by deciding that corporations were persons, and that not letting them advertise anonymously would be a grievous violation of their free-speech rights. We’ll see how well she does. Scientists, meanwhile, need to keep speaking out about the integrity of our field. When researchers are attacked and their jobs threatened by politicians who disagree with their results, it’s time to stand up for what science really means.
Though Darwinian theory dramatically revolutionized biological understanding, its strictly biological focus has resulted in a widening conceptual gulf between the biological and physical sciences. In this paper we strive to extend and reformulate Darwinian theory in physicochemical terms so it can accommodate both animate and inanimate systems, thereby helping to bridge this scientific divide. The extended formulation is based on the recently proposed concept of dynamic kinetic stability and data from the newly emerging area of systems chemistry. The analysis leads us to conclude that abiogenesis and evolution, rather than manifesting two discrete stages in the emergence of complex life, actually constitute one single physicochemical process. Based on that proposed unification, the extended theory offers some additional insights into life’s unique characteristics, as well as added means for addressing the three central questions of biology: what is life, how did it emerge, and how would one make it?
It’s a paper by a chemist, published in the Journal of Systems Chemistry, but it doesn’t seem to require much in the way of specialized knowledge in order to read; have a look. The central idea seems to be something called “dynamic kinetic stability.” A stable system is one that doesn’t change over time; a dynamic-kinetically stable system is one that doesn’t change in some particular features, but only by taking advantage of some other kind of change. The water in a river flows, but what we think of as “the river” remains fairly stable over time; an organism metabolizes, but maintains its structure for an extended period; individuals within a population come and go, while the population itself can be stable.
I’m very sympathetic to these kinds of ideas — they are reminiscent of Chapter Nine of From Eternity to Here. But my first impression is that the synthesis is going in the wrong direction. Biological organisms are made of the same kind of atoms as everything else, subject to the same kind of rules, so it’s not surprising to think that their evolution should be described by a theory that also applies to inanimate objects. But (maybe this is my physicist’s bias showing) I would tend to reserve “Darwinism” for actual biology, and instead try to develop a general theory of the evolution of complex structures and information that reduced to biological Darwinism in the appropriate circumstances. I’m willing to be talked out of it, though.
Thoughts? Especially from anyone familiar with the relevant chemistry or biology?
Chattering classes here in the U.S. have recently been absorbed in discussions that dance around, but never quite address, a question that cuts to the heart of how we think about the basic architecture of reality: are human beings purely material, or something more?
The first skirmish broke out when a major breast-cancer charity, Susan G. Komen for the Cure (the folks responsible for the ubiquitous pink ribbons), decided to cut their grants to Planned Parenthood, a decision they quickly reversed after facing an enormous public backlash. Planned Parenthood provides a wide variety of women’s health services, including birth control and screening for breast cancer, but is widely associated with abortion services. The Komen leaders offered numerous (mutually contradictory) reasons for their original action, but there is no doubt that their true motive was to end support to a major abortion provider, even if their grants weren’t being used to fund abortions.
Abortion, of course, is a perennial political hot potato, but the other recent kerfuffle focuses on a seemingly less contentious issue: birth control. Catholics, who officially are opposed to birth control of any sort, objected to rules promulgated by the Obama administration, under which birth control would have to be covered by employer-sponsored insurance plans. The original objection seemed to be that Catholic hospitals and other Church-sponsored institutions would essentially be paying for something they thought was immoral, in response to which a work-around compromise was quickly adopted. This didn’t satisfy everyone (anyone?), however, and now the ground has shifted to an argument that no individual Catholic employer should be forced to pay for birth-control insurance, whether or not the organization is sponsored by the Church. This position has been staked out by the US Conference of Catholic Bishops, and underlies a new bill proposed by Florida Senator Marco Rubio.
Topics like this are never simple, but they can be especially challenging for a secular democracy. On the one hand, our society is based on religious pluralism. We have freedom of conscience, and try to formulate our laws in such a way that everyone’s rights are protected. But on the other hand, people have incompatible beliefs about fundamental issues. Such beliefs are often of central importance, and the duct tape of political liberalism isn’t always sufficient to hold things together.
When it comes to abortion and birth control, there’s no question that down-and-dirty political and social aspects are front and center. Different political parties want to score points with their constituencies by standing firm in the current culture wars. And there’s also no question that restricting access to contraception and abortion is driven in part (we can argue about how big that part is) by a desire to control women’s sexuality.
But there is also a serious question about human life and the nature of reality. What actually happens when that sperm and ovum get together to make a zygote? Is it just one step of many in an enormously complex chemical reaction that ultimately gives rise to a new person, who is at heart just a complex chemical reaction him-or-herself? Or is it the moment when an immaterial soul, distinct from the material body, first comes into being? Questions like this matter — but as a society we hardly ever discuss them, at least not in any serious and open way. As a result, different sides talk past each other, trying to squeeze metaphysical stances into political boxes.
If it were really true that “a human life” was defined by the association of an immaterial soul with a physical body, and that association began at the moment of conception, then making abortion illegal would be perfectly sensible. It would be murder, pure and simple. (Very few people are actually consistent here, believing that mothers who have abortions should be treated like someone who has committed murder; but there are some.) But this view of reality is not true.
Naturalism, which describes human beings in the same physical terms as other objects in the universe, doesn’t actually provide a cut-and-dried answer to the abortion question, because it doesn’t draw a bright line between “a separate living person” and “a collection of cells.” But it provides an utterly different context for addressing the question. Naturalists are generally against murder, but it’s because they recognize certain collections of atoms as “people,” and endow those people with rights and privileges as part of the structure of society. It all comes from distinctions that we human beings ultimately invent, not ones that are handed down from a higher authority. Consequently, the appropriate rules are less clear. A naturalist wants to know whether the purported person can think, feel, react, and so on. They also will balance the interests of the fetus, whatever they may be, against the interests of the mother, who is unquestionably a living and functioning person. It’s perfectly natural that those interests will seem more important than those of a fetus that isn’t even viable outside the womb.
Most everyone, religious believers and naturalists alike, agrees that killing innocent one-year-old children is morally wrong. Consequently, we can happily live together in a society where that kind of action is illegal. But our beliefs about aborting one-month-old embryos are understandably very different. The disagreements about these issues aren’t simply political, they run much deeper than that.
It matters how people think about the world. Political liberalism is a good system, but it only works insofar as the citizens can agree on a core set of values and push cultural/religious differences to the periphery. Naturalism doesn’t answer all the value-oriented questions we might have; it simply provides a sensible framework in which they can be profitably discussed. But between naturalists and non-naturalists, profitable discussion is much more difficult. Which is why we naturalists have to keep pressing, making the best case we can, trying to convince as many people as we can reach that there is only one realm of existence, governed by unbreakable laws, and that we are part of it.
I continue to believe that “quantum field theory” is a concept that we physicists don’t do nearly enough to explain to a wider audience. And I’m not going to do it here! But I will link to other people thinking about how to think about quantum field theory.
Over on the Google+, I linked to an informal essay by John Norton, in which he recounts the activities of a workshop on QFT at the Center for the Philosophy of Science at the University of Pittsburgh last October. In Norton’s telling, the important conceptual divide was between those who want to study “axiomatic” QFT on the one hand, and those who want to study “heuristic” QFT on the other. Axiomatic QFT is an attempt to make everything absolutely perfectly mathematically rigorous. It is severely handicapped by the fact that it is nearly impossible to get results in QFT that are both interesting and rigorous. Heuristic QFT, on the other hand, is what the vast majority of working field theorists actually do — putting aside delicate questions of whether series converge and integrals are well defined, and instead leaping forward and attempting to match predictions to the data. Philosophers like things to be well-defined, so it’s not surprising that many of them are sympathetic to the axiomatic QFT program, tangible results be damned.
The question of whether or not the interesting parts of QFT can be made rigorous is a good one, but not one that keeps many physicists awake at night. All of the difficulty in making QFT rigorous can be traced to what happens at very short distances and very high energies. And that’s certainly important to understand. But the great insight of Ken Wilson and the effective field theory approach is that, as far as particle physics is concerned, it just doesn’t matter. Many different things can happen at high energies, and we can still get the same low-energy physics at the end of the day. So putting great intellectual effort into “doing things right” at high energies might be misplaced, at least until we actually have some data about what is going on there.
Something like that attitude is defended here by our former guest blogger David Wallace. (Hat tip to Cliff Harvey on G+.) Not the best video quality, but here is David trying to convince his philosophy colleagues to concentrate on “Lagrangian QFT,” which is essentially what Norton called “heuristic QFT,” rather than axiomatic QFT. His reasoning very much follows the Wilsonian effective field theory approach.
The concluding quote says it all:
LQFT is the most successful, precise scientific theory in human history. Insofar as philosophy of physics is about drawing conclusions about the world from our best physical theories, LQFT is the place to look.
Every professional football game begins with the flip of a coin, to determine who gets the ball first. In the case of the Super Bowl, the teams represent the National Football Conference (NFC) or American Football Conference (AFC). Interestingly, the last 14 coin flips have been won by the NFC.
Working out the numbers, the chance of 14 coin flips in a row all landing the same way is 1 in 8,192. (The linked article says 1 in 16,000, which comes from 2^14; but that first coin flip has to be something, so the chances of 14 in a row are really 1 in 2^13. The anomaly would be just as strange if the AFC had won every time.) That’s a better than 3.8-sigma effect! Enough to call a press conference, if this were particle physics.
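As a quick sanity check (a sketch of my own, not from the linked article), the arithmetic can be reproduced in a few lines of Python, converting the probability to a Gaussian significance with the two-sided convention, since either conference winning every flip would count as the anomaly:

```python
from statistics import NormalDist

# Probability that 14 fair coin flips all land the same way:
# the first flip can go either way; the remaining 13 must match it.
p = 1 / 2**13
print(p)  # 0.0001220703125, i.e. 1 in 8192

# Equivalent Gaussian significance (two-sided convention).
sigma = NormalDist().inv_cdf(1 - p / 2)
print(round(sigma, 2))  # roughly 3.8
```

Under a one-sided convention the answer comes out closer to 3.7 sigma, which is why quoted significances for the same probability can differ slightly.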
The question is … is this really a signal, or did we just get lucky? Is it a fair coin and the NFC has just been the happy recipient of a statistical fluctuation, or is there something fishy about the coin? Remember Barry Greenstein’s parable about how different people compute probabilities.
And let it be a lesson the next time you’re excited about 3-sigma anomalies.
The annual Edge Question Center has now gone live. This year’s question: “What is your favorite deep, elegant, or beautiful explanation?” Find the answers here.
I was invited to contribute, but wasn’t feeling very imaginative, so I moved quickly and picked one of the most obvious elegant explanations of all time: Einstein’s explanation for the universality of gravitation in terms of the curvature of spacetime. Steve Giddings and Roger Highfield had the same idea, although Steve rightly points out that Einstein won’t really end up having the final word on spacetime. Lenny Susskind picks Boltzmann’s explanation of why entropy increases as his favorite explanation, and mentions the puzzle of why entropy was lower in the past as his favorite unsolved problem — couldn’t have said it better myself. For those of you who prefer a little provocation, Martin Rees picks the anthropic principle.
But as usual, the most interesting responses to me are those from far outside physics. What’s your favorite?
Full text of my entry below the fold.
Einstein Explains Why Gravity Is Universal
The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground. But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and should fall faster. But in reality, the piece of paper slows down the descent of the stone.
Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren’t for the interference of air resistance. Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.
Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.
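In Newtonian language, the universality just described is the statement that inertial mass and gravitational mass are equal, so the mass cancels out of the equation of motion (a sketch added here, not part of the original essay):

```latex
m_{\text{inertial}}\,\vec{a} \;=\; m_{\text{grav}}\,\vec{g}
\quad\Longrightarrow\quad
\vec{a} \;=\; \frac{m_{\text{grav}}}{m_{\text{inertial}}}\,\vec{g} \;=\; \vec{g},
```

since \(m_{\text{grav}} = m_{\text{inertial}}\). Every object accelerates at the same rate \(\vec{g}\), whatever its mass or composition. There is no analogous cancellation for electromagnetism: a charged particle obeys \(\vec{a} = (q/m)\,\vec{E}\), which depends on the charge-to-mass ratio, so different particles respond differently.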
Thinking about this problem led Albert Einstein to what he called “the happiest thought of my life.” Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push things around. But put the ship in orbit around a massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from any other one. Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.
Einstein, in his genius, realized the profound implication of this situation: if gravity affects everything equally, it’s not right to think of gravity as a “force” at all. Rather, gravity is a feature of spacetime itself, through which all objects move. In particular, gravity is the curvature of spacetime. The space and time through which we move are not fixed and absolute, as Newton would have had it; they bend and stretch due to the influence of matter and energy. In response, objects are pushed in different directions by spacetime’s curvature, a phenomenon we call “gravity.” Using a combination of intimidating mathematics and unparalleled physical intuition, Einstein was able to explain a puzzle that had been unsolved since Galileo’s time.
Can we define “life” in just three words? Carl Zimmer of Loom fame has written a piece for Txchnologist in which he reports on an interesting attempt: biologist Edward Trifonov looked at other people’s definitions, rather than thinking about life itself. Sifting through over a hundred suggested definitions, Trifonov looked for what they had in common, and boiled life down to “self-reproduction with variations.” Just three words, although one of them is compound so I would argue that morally it’s really four.
We’ve discussed this question before, and the idea of reproduction looms large in many people’s definitions of life. But I don’t think it really belongs. If you built an organism from scratch that was as complicated and organic and lifelike as any living thing currently walking this Earth, except that it had no reproductive capacity, it would be silly to exclude it from “life” just because it was non-reproducing. Even worse, I realized that I myself wouldn’t even qualify as alive under Trifonov’s definition, since I don’t have kids and don’t plan on having any. (And no, those lawsuits were frivolous and the court records were sealed.)
It’s the yellow-taxi problem: in a city where all cars are blue except for taxis, which are yellow, it’s tempting to define “taxi” as “a yellow car.” But that doesn’t get anywhere near the essence of taxi-ness. Likewise, living species generally reproduce themselves; but that’s not really what makes them alive. Not that I have the one true definition (and maybe there shouldn’t be one). But any such definition better capture the idea of an ongoing complex material process far from equilibrium, or it’s barking up the wrong Tree.
Sorry for the light blogging of late. Actual work intervenes, and it might remain that way for a while. But I’ll try to pop in whenever I can.
Stephen Hawking is celebrating his 70th birthday today. That in itself is an amazing fact, just as it was amazing when he celebrated his 40th, and 50th, and 60th birthdays, as well as every other day he’s lived and thrived with a debilitating motor neuron disease. The extra fact that he continues to make contributions to science pushes beyond amazing to practically unbelievable.
Everyone likes to tell Hawking stories, and this blog is no exception. So here is mine, meagre as it is. I’ve gotten more than enough mileage out of this one in person, I might as well put it on the blog so I won’t be tempted to tell it any more.
At the end of 1992 I was a finishing grad student, applying for postdocs. One of the places I applied was Cambridge, to Hawking’s group at DAMTP. There is a slight potential barrier for American students to travel to the UK for postdocs, so they like to get out ahead of things and offer jobs early. Unfortunately I was out of my office the day Hawking called to offer me a position. Fortunately, my future-Nobel-Laureate officemate was there, and he took the call. He explained that Stephen Hawking had called to offer me a job — I was thrilled about the offer, but understood “Hawking called” as metaphorical. But no, Brian later convinced me that it actually was Hawking on the other end of the line, which he described as a somewhat surreal experience. Of course after the initial introduction the phone gets handed over to someone else, but still.
Cambridge is one of the world’s best places to do theoretical physics, and I was sorely tempted, but I ended up going to MIT instead. Three years later, I went through the process again, as postdocs typically do. And again Cambridge offered me the job — and again, after a very tough decision, I said no, heading off to the ITP in Santa Barbara instead.
Up to this point I had never actually met Hawking in person, although I had been in the audience for one of his lectures. But every year he visits Caltech and Santa Barbara, so I finally got to be with him in the same place. The first time he visited he brought along a young grad student named Raphael Bousso, who has gone on to do quite well for himself in his own right. As a group of us went to lunch, I mentioned to Raphael that I had never said hi to Stephen in person, so I’d appreciate it if he would introduce us. But, I cautioned, I hope he wasn’t upset with me, because he had offered me a postdoc and I turned it down.
Raphael just laughed and said, “Don’t worry, there’s this one guy who he offered a postdoc to twice, and he turned it down both times!” So I had to explain that this guy was actually me. At which point Raphael ran up to Hawking, exclaiming “Stephen! Stephen, this is the guy — the one who turned down DAMTP for postdocs twice in a row!”
That was my personal introduction to Stephen. He just smiled, no big deal — life goes on for him whether or not some callow American student wants to fly across the puddle to work as a postdoc.
Since then I’ve had the privilege of interacting with Hawking more substantively a few times. Once a long conversation just after the discovery of the acceleration of the universe, when he was interested in hearing more about the supernova observations. And once at a whisky tasting organized at an international cosmology conference. Handicaps notwithstanding, Hawking never misses a chance to experience life to its fullest. Another time I picked him and his retinue up at the airport — which gave me a tiny glimpse of the massive logistical operation it is to move Hawking from place to place. The simplest things that we take for granted are for him an elaborate production.
Happy birthday, Stephen. I know I won’t make the contributions you have to science, but I hope I can live as long, and approach life with your gusto and good humor.
‘Tis the season when bloggers, playing out the string between Xmas and New Year’s, fill the void with greatest-hits lists from the year just passed. But a question inevitably arises: how does one decide which posts to include? There are many different criteria, and preferring one to another might lead to very different lists. This is what’s known as the measure problem in blogospheric cosmology.
This year I’ve decided to confront the problem pluralistically. Thus: here we have five different Top Five lists, chosen according to completely different criteria. Let us know if your favorite Cosmic Variance post of the year somehow managed to not be on any of the lists.
First, the most crude and common measure, the posts with the most page views this year.
The Pith: You are expected to have 30 new mutations which differentiate you from your parents. But there is wiggle room around this number, and you may have more or less. This number may vary across siblings, and may explain differences between siblings. Additionally, previously used estimates of mutation rates may have been too high by a factor of 2. This may push the “last common ancestor” of many human and human-related lineages back by a factor of 2 in terms of time.
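The factor-of-2 claim follows from simple molecular-clock arithmetic: if the observed divergence between two lineages is fixed, the inferred time back to their common ancestor scales inversely with the assumed mutation rate. A hedged sketch, with illustrative numbers that are my own assumptions rather than figures from the letter:

```python
# Molecular-clock sketch: t = d / (2 * mu), where d is the per-site
# sequence divergence between two lineages and mu is the per-site,
# per-generation mutation rate. (The factor of 2 accounts for
# mutations accumulating along both branches since the split.)
d = 1.2e-3        # assumed per-site divergence between two lineages
mu_old = 2.5e-8   # assumed older, phylogenetically calibrated rate
mu_new = 1.25e-8  # assumed pedigree-based rate, roughly half the old one

t_old = d / (2 * mu_old)  # generations back, under the old rate
t_new = d / (2 * mu_new)  # generations back, under the new rate
print(t_new / t_old)      # halving the rate doubles the inferred time
```

Whatever the actual numbers, the ratio of the two estimates depends only on the ratio of the rates, which is why halving the mutation rate pushes every inferred split time back by the same factor of 2.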
There’s a new letter in Nature Genetics on de novo mutations in humans which is sending the headline writers in the press into a natural frenzy trying to “hook” the results into the X-Men franchise. I implicitly assume most people understand that they all have new genetic mutations specific and identifiable to them. The important issue in relation to “mutants” as commonly understood is that they have salient identifiable phenotypes, not that they have subtle genetic variants which are invisible to us. Another implicit aspect is that phenotypes are an accurate signal or representation of high underlying mutational load. In other words, if you can see that someone is weird in ...
Last summer I made a thoughtless and silly error in relation to a model of human population history when asked by a reader the question: “which population is most distantly related to Africans?” I contended that all non-African populations are equally distant. This is obviously wrong on the face of it if you look at any genetic distance measures. West Eurasians, even those without recent Sub-Saharan African admixture (e.g., North Europeans) are closer than East Eurasians, who are often closer than Oceanians and Amerindians. One explanation I offered is that these latter groups were subject to greater genetic drift through a series of population bottlenecks. In this framework the number of generations until the last common ancestor with Sub-Saharan Africans for all groups outside of Africa should be about the same, but due to evolutionary factors such as more extreme genetic drift or different selective pressures some non-African groups had diverged more from Africans than others in terms of their genetic state. In other words, the most genetically divergent groups in relation to Africans did not diverge any earlier, but simply diverged more ...