Nobel Pursuits: Decades of Wisdom from Prizewinning Physicists

The tools of science have changed since the golden age of physics, but many of the same questions remain

Every summer Nobel laureates converge on Lindau, Germany, to share their wisdom with, and to learn from, up-and-coming scientists hailing from many corners of the globe. This year the 62nd meeting focuses on physics. In honor of that event, the two of us have selected excerpts from some of the most fascinating articles that Nobel winners have published in the magazine over the years, on topics ranging from cosmology to particle physics to technology.

As we gathered these selections, which begin on the opposite page, we were struck anew by the way the problems that puzzled physicists decades ago continue to drive research today. Yes, the field has changed since the days of Albert Einstein, P.A.M. Dirac and Enrico Fermi. Physicists have made vast leaps (such as constructing and honing the Standard Model of particle physics) and encountered strange turns (such as dark energy). Yet many of the questions being tackled now are the same, at root, as those that have spurred research throughout the past century—among them: Why is matter so much more abundant than antimatter? Does the Higgs boson, widely believed to account for the mass of subatomic particles, truly exist? And what does “spooky action at a distance” betray about the workings of the world?

Matter is everywhere. It makes up this magazine, your hand and even the air between the page and your face. Antimatter, on the other hand, is exceedingly rare. (That is a good thing for us creatures of matter because particles and antiparticles annihilate on contact.) But matter and antimatter should have existed in balance at the dawn of the universe; somehow matter won out to allow the formation of galaxies, solar systems and people. Physicists have long wondered what tipped the scales.


In 1956 Emilio Segrè and Clyde E. Wiegand detailed in the pages of Scientific American their team's discovery of the antiproton, the antimatter counterpart to the familiar proton at the heart of every atom. Segrè and Wiegand's group had identified the short-lived antiparticles just the year before at the now defunct Bevatron particle accelerator at the University of California, Berkeley, and Segrè and Owen Chamberlain, his Berkeley colleague, would share the 1959 Nobel Prize in Physics for the discovery. Their detection of antiprotons followed Carl D. Anderson's 1932 discovery of antielectrons, or positrons, which itself followed Dirac's 1930 theoretical description of the electron suggesting the existence of such antiparticles.

Physicists have since taken the next logical step in the footsteps of Dirac, Anderson, Chamberlain and Segrè: cobbling together rudimentary atoms of antimatter to see if they differ in some crucial aspect from ordinary atoms. At CERN near Geneva, researchers combine antiprotons with positrons to produce antihydrogen atoms. Last year one group succeeded in protecting the antiatoms from annihilation for several minutes—plenty of time to run tests on the stuff. If gravity or radiation interacts differently with antimatter, that might offer clues to why matter is so much more abundant today.

Exploring another corner of physics, Martinus J. G. Veltman wrote in 1986 in Scientific American of a slight problem with the Standard Model, the otherwise spectacularly solid framework that describes the elementary particles of our universe. One key particle within the Standard Model had yet to be observed, Veltman noted, and indeed that particle seemed to be working hard to avoid detection. Without it, the masses of other particles would be difficult to explain.

The particle is, of course, the Higgs boson. More than 25 years after Veltman wrote of the possibility that the Higgs could be discovered at the planned Superconducting Super Collider (SSC) in Texas, physicists still await their first look at the all-important boson. The SSC was never completed, so the chase moved to the Large Hadron Collider (LHC) at CERN, which has been running since 2009. CERN has gradually ramped up the energy of LHC collisions and expects to have enough data by year's end to finally declare whether the Standard Model's Higgs exists.

Even before the Standard Model was pieced together, physicists were picking apart the behaviors of the particles it describes. In 1935 Einstein, along with two colleagues, authored a paper pointing out that quantum mechanics, as formulated at the time, necessitated an uncomfortable phenomenon known as nonlocality. An observer measuring a particle in one location, the physicists noted, could instantaneously affect the state of a particle in another location, however distant. Such an effect seemed absurd. Nonlocality was a problem, Einstein and his colleagues held, that could cast doubt on the viability of quantum mechanics.

It took decades for experimental physicists to verify that particles can indeed share nonlocal connections via a phenomenon known as quantum entanglement. Physicists now routinely produce pairs of entangled photons that share, say, one polarization state between them. Individual atoms have also been entangled, as have macroscopic objects, such as wafers of synthetic diamond. And entanglement is not just a quantum parlor trick—one day it may enable communications and computation vastly more powerful than today's electronics can muster.

The key to those experiments has been the laser, the quantum flashlight whose well-behaved photons can themselves be entangled or can be used to establish entanglement between other particles. In a 1961 article excerpted on page 71, Arthur L. Schawlow touts the considerable promise of the laser, originally known as the optical maser, which at the time was just a year old. Schawlow received a Nobel Prize in 1981 for his role in the laser's invention. His intellectual descendants, those optical physicists who have harnessed the laser to explore quantum entanglement, have often been flagged as front-runners for a Nobel in the near term.

Where will the next generation of Nobel Prize–winning physicists, some of whom may be found at this year's Lindau gathering, lead the field? If history is a guide, some hints of future glory may be found in the prizewinners—and the magazine articles—of decades past.

Astrophysics

The Secret Message of the Cosmic Ray

By Arthur H. Compton
Published in July 1933
Nobel Prize in 1927

The study of cosmic rays has been described as “unique in modern physics for the minuteness of the phenomena, the delicacy of the observations, the adventurous excursion of the observers, the subtlety of the analyses, and the grandeur of the inferences.” These rays are bringing us, we believe, some important message. Perhaps they are telling us how our world has evolved, or perhaps news of the innermost structure of the atomic nucleus. We are now engaged in trying to decode this message.

About five years ago, two German physicists, Bothe and Kolhörster, did an experiment with counting tubes which convinced them that the cosmic rays are electrically charged particles. If this conclusion is correct, it means, however, that there should be a difference in intensity of the rays over different parts of the earth. For the earth acts as a huge magnet, and this huge magnet should deflect the electrified particles as they shoot toward the earth. The effect should be least near the magnetic poles, and greatest near the equator, resulting in an increasing intensity as we go from the equator toward the poles. A series of half a dozen different experiments designed to detect such effects resulted in inconclusive data.

Accordingly, with financial help from the Carnegie Institution, a group of us at the University of Chicago have organized nine different expeditions during the past 18 months, going into different portions of the globe to measure cosmic rays from sea level to the tops of mountains nearly four miles high in the Andes and the Himalayas. Two capable mountaineers, Carpe and Koven, lost their lives on a glacier on the side of mighty Mt. McKinley in Alaska, but they got the highest altitude data yet obtained for latitudes so close to the pole.

On bringing together the results of these expeditions, it was found that the cosmic ray intensity near the poles is about 15 percent greater than near the equator. Furthermore, it varies with latitude, just as predicted, due to the effect of the earth's magnetism on incoming electrified particles. At high altitudes the effect of the earth's magnetism is found to be several times as great as at sea level.

These results show that a considerable part, at least, of the cosmic rays consists of electrified particles. Some of the cosmic rays, however, are not appreciably affected by the earth's magnetic field. Other types of measurements, such as those of Piccard and Regener in their high-altitude balloon flights and Bothe and Kolhörster's counter experiments, lead us to the conclusion that very little of these rays is in the form of photons, like light, but that there is probably a considerable quantity of radiation in the form of atoms or atomic nuclei of low atomic weight.

A word should be said regarding the tremendous energy represented by individual cosmic rays. Let us take as our unit of energy the electron-volt. About two such units are liberated by burning a hydrogen atom. Two million units appear when radium shoots out an alpha particle. But it requires ten thousand million of these units to make a cosmic ray. Where does this tremendous energy come from? In the answer to this question lies perhaps the solution of the riddle as to how our universe came to be.
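
To put those energy scales side by side, here is a minimal arithmetic sketch in Python using only the round figures Compton quotes above (the variable names are ours, added for illustration):

# Energy scales quoted above, in electron-volts (eV).
hydrogen_burn_ev = 2.0      # burning one hydrogen atom liberates about 2 eV
radium_alpha_ev = 2.0e6     # an alpha particle from radium carries about 2 million eV
cosmic_ray_ev = 1.0e10      # one cosmic ray: "ten thousand million" eV

# How many of the smaller events match the energy of a single cosmic ray?
print(cosmic_ray_ev / hydrogen_burn_ev)   # about 5 billion hydrogen atoms
print(cosmic_ray_ev / radium_alpha_ev)    # about 5,000 alpha particles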

X-ray Stars

By Riccardo Giacconi
Published in December 1967
Nobel Prize in 2002

Although interstellar space is suffused with radiation over the entire electromagnetic spectrum, from the extremely short waves of gamma rays and X rays to the very long radio waves, relatively little of the cosmic radiation reaches the earth's surface. Our atmosphere screens out most of the wavelengths. In particular the atmosphere is completely opaque to wavelengths shorter than 2,000 angstrom units. Hence X radiation from space can be detected only by sending instruments to the outer regions of our atmosphere in balloons or rockets.

As rocket flights and opportunities to send up instrumented payloads became more frequent, Bruno B. Rossi of the Massachusetts Institute of Technology suggested an X-ray survey of the sky, and a group of us at American Science and Engineering, Inc., undertook the study.

The instrumented Aerobee rocket was launched at the White Sands Missile Range at midnight on June 18, 1962. Our experiment had been prepared by Herbert Gursky, F. R. Paolini and me, with Rossi's collaboration. Some time before the rocket arrived at its peak altitude of 225 kilometers (140 miles) above the earth's surface, doors opened to expose the detectors. With the rocket spinning on its axis, the detectors scanned a 120-degree belt of the sky, including the position of the moon.

The telemeter signals from the detectors showed no indication of any X radiation coming from the moon. From the direction of the constellation Scorpio in the southern sky, however, the detectors revealed the presence of an intense source of X rays. The intensity registered by the counters was a million times greater than one would expect (on the basis of the sun's rate of X-ray emission) to arrive from any distant cosmic source!

Three months of close study of the records verified that the radiation was indeed X rays (two to eight angstroms in wavelength), that it came from outside the solar system and that the source was roughly in the direction of the center of our galaxy. What kind of object could be emitting such a powerful flux of X rays?

We made two additional rocket surveys at different times of the year (in October, 1962, and June, 1963) that narrowed down the location of the strong X-ray source by triangulation, and we found that it was not actually in the galactic center. Meanwhile Herbert Friedman and his collaborators at the Naval Research Laboratory succeeded in locating the position of the source within a two-degree arc in the sky, which suggested that the X-ray emitter was a single star rather than a large collection of them.

By this time the evidence that the source was a discrete object had become so strong that we named it Sco (for Scorpius) X-1. One might have expected that an object pouring out so much energy in X radiation would be distinctly visible as at least a rather bright star. The region of the source was barren, however, of conspicuous stars.

The problem then was to identify the X-ray star among the visible stars at the indicated location. The position of Sco X-1 was known only within about one degree, and in its region of the sky there are about 100 13th-magnitude stars in each square degree. A detailed analysis of the new data was made to pinpoint the position more closely. This analysis narrowed the location to two equally probable positions where the star might be found.

Given these positions, the Tokyo Astronomical Observatory and the Mount Wilson and Palomar Observatories made a telescopic search for Sco X-1. The Tokyo astronomers found the X-ray star immediately, and within a week the Palomar observers confirmed the identification.

Now that Sco X-1 can be examined with optical telescopes, it is beginning to yield some striking new information. The most provocative fact is that this star emits 1,000 times more energy in X rays than in visible light, a situation astronomers had never anticipated from their studies of the many varieties of known stars. There are indications that the X-ray emission of Sco X-1 is equal to the total energy output of the sun at all wavelengths.

How a Supernova Explodes

By Hans A. Bethe and Gerald Brown
Published in May 1985
Nobel Prize in 1967 (Bethe)

A supernova begins as a collapse, or implosion; how does it come about, then, that a major part of the star's mass is expelled? At some point the inward movement of stellar material must be stopped and reversed; an implosion must be transformed into an explosion.

Through a combination of computer simulation and theoretical analysis a coherent view of the supernova mechanism is beginning to emerge. It appears the crucial event in the turnaround is the formation of a shock wave that travels outward.

When the center of the core reaches nuclear density, it is brought to rest with a jolt. This gives rise to sound waves that propagate back through the medium of the core, rather like the vibrations in the handle of a hammer when it strikes an anvil. The waves slow as they move out through the homologous core, both because the local speed of sound declines and because they are moving upstream against a flow that gets steadily faster. At the sonic point they stop entirely. Meanwhile additional material is falling onto the hard sphere of nuclear matter in the center, generating more waves. For a fraction of a millisecond the waves collect at the sonic point, building up pressure there. The bump in pressure slows the material falling through the sonic point, creating a discontinuity in velocity. Such a discontinuous change in velocity constitutes a shock wave.

At the surface of the hard sphere in the heart of the star infalling material stops suddenly but not instantaneously. Momentum carries the collapse beyond the point of equilibrium, compressing the central core to a density even higher than that of an atomic nucleus. We call this point the instant of “maximum scrunch.” After the maximum scrunch the sphere of nuclear matter bounces back, like a rubber ball that has been compressed. The bounce sets off still more sound waves, which join the growing shock wave.

A shock wave differs from a sound wave in two respects. First, a sound wave causes no permanent change in its medium; when the wave has passed, the material is restored to its former state. The passage of a shock wave can induce large changes in density, pressure and entropy. Second, a sound wave—by definition—moves at the speed of sound. A shock wave moves faster, at a speed determined by the energy of the wave. Hence once the pressure discontinuity at the sonic point has built up into a shock wave, it is no longer pinned in place by the infalling matter. The wave can continue outward, into the overlying strata of the star. According to computer simulations, it does so with great speed, between 30,000 and 50,000 kilometers per second.

After the outer layers of a star have been blown off, the fate of the core remains to be decided. The explosion of lighter stars presumably leaves behind a stable neutron star. In Wilson's calculations any star of more than about 20 solar masses leaves a compact remnant of more than two solar masses. It would appear that the remnant will become a black hole, a region of space where matter has been crushed to infinite density.

Life in the Universe

By Steven Weinberg
Published in October 1994
Nobel Prize in 1979

Life as we know it would be impossible if any one of several physical quantities had slightly different values. The best known of these quantities is the energy of one of the excited states of the carbon 12 nucleus. There is an essential step in the chain of nuclear reactions that build up heavy elements in stars. In this step, two helium nuclei join together to form the unstable nucleus of beryllium 8, which sometimes before fissioning absorbs another helium nucleus, forming carbon 12 in this excited state. The carbon 12 nucleus then emits a photon and decays into the stable state of lowest energy. In subsequent nuclear reactions carbon is built up into oxygen and nitrogen and the other heavy elements necessary for life. But the capture of helium by beryllium 8 is a resonant process, whose reaction rate is a sharply peaked function of the energies of the nuclei involved. If the energy of the excited state of carbon 12 were just a little higher, the rate of its formation would be much less, so that almost all the beryllium 8 nuclei would fission into helium nuclei before carbon could be formed. The universe would then consist almost entirely of hydrogen and helium, without the ingredients for life.

Opinions differ as to the degree to which the constants of nature must be fine-tuned to make life necessary. There are independent reasons to expect an excited state of carbon 12 near the resonant energy. But one constant does seem to require an incredible fine-tuning: it is the vacuum energy, or cosmological constant, mentioned in connection with inflationary cosmologies.

Although we cannot calculate this quantity, we can calculate some contributions to it (such as the energy of quantum fluctuations in the gravitational field that have wavelengths no shorter than about 10⁻³³ centimeter). These contributions come out about 120 orders of magnitude larger than the maximum value allowed by our observations of the present rate of cosmic expansion. If the various contributions to the vacuum energy did not nearly cancel, then, depending on the value of the total vacuum energy, the universe either would go through a complete cycle of expansion and contraction before life could arise or would expand so rapidly that no galaxies or stars could form.

Thus, the existence of life of any kind seems to require a cancellation between different contributions to the vacuum energy, accurate to about 120 decimal places. It is possible that this cancellation will be explained in terms of some future theory. So far, in string theory as well as in quantum field theory, the vacuum energy involves arbitrary constants, which must be carefully adjusted to make the total vacuum energy small enough for life to be possible.

All these problems can be solved without supposing that life or consciousness plays any special role in the fundamental laws of nature or initial conditions. It may be that what we now call the constants of nature actually vary from one part of the universe to another. (Here “different parts of the universe” could be understood in various senses. The phrase could, for example, refer to different local expansions arising from episodes of inflation in which the fields pervading the universe took different values or else to the different quantum-mechanical “worldtracks” that arise in some versions of quantum cosmology.) If this were the case, then it would not be surprising to find that life is possible in some parts of the universe, though perhaps not in most.

Naturally, any living beings who evolve to the point where they can measure the constants of nature will always find that these constants have values that allow life to exist. The constants have other values in other parts of the universe, but there is no one there to measure them. Still, this presumption would not indicate any special role for life in the fundamental laws, any more than the fact that the sun has a planet on which life is possible indicates that life played a role in the origin of the solar system.

Particles and Atoms

What Is Light?

By Ernest O. Lawrence and J. W. Beams
Published in April 1928
Nobel Prize in 1939 (Lawrence)

Light is one of the most familiar physical realities. All of us are acquainted with a large number of its properties, while some of us who are physicists know a great many more marvelous characteristics which it displays. The sum total of our knowledge of the physical effects produced by light is very considerable, and yet we have no satisfactory conception of what it is.

More than two centuries ago Newton conceived that light was corpuscular in nature; he believed that light consisted of little darts shooting through space. Others regarded light as a wave phenomenon; in a manner analogous to the propagation of waves in water, light waves were propagated in a medium pervading all space, called the ether. A lively controversy ensued between the adherents of these two conceptions of the nature of light, and as new experiments were carried out revealing more of its properties, it appeared that the undulatory theory accounted for many things quite unintelligible on the corpuscular hypothesis.

As time has progressed, many additional phenomena concerned with the interaction of light and matter have been discovered which are impossible of understanding on the wave theory and which have compelled scientists to revert to the conception of light which was in Newton's mind centuries ago. Such recent facts of observation suggest that light beams contain amounts of energy which are exact multiples of a definite smallest amount—a light quantum—just as matter seems to be made up of definite multiples of a smallest particle of matter or electricity—the electron. Thus, we have atomicity of light as well as atomicity of matter and electricity.

A seemingly very peculiar circumstance exists in this modern quantum theory of light, for the very thing concerned in the theory is entirely obscure.

And so the question of the physical nature of quanta presents itself. Are they a yard or a mile or an inch in length, or are they of infinitesimal dimensions? Many experimental facts can be interpreted as indicating that quanta are at least a yard in length, yet nothing really certain can be inferred from past observations. The dimensions in space of the quanta remain complete mysteries.

There is at least one way of measuring the length of quanta, provided that the scheme may be carried out in practice, which is essentially as follows: Suppose one had a light shutter that could obstruct or let pass a beam of light as quickly as desired. Such an apparatus would be able to cut up a beam of light into segments, much in the same way that a meat cutter slices a bologna sausage. It is clear that if the slices of the light beam so produced were shorter than the light quanta in the beam, the short light flashes coming from the shutter would contain only parts of quanta. In effect, the apparatus would be cutting off the heads or tails of quanta. To eject an electron from a metal surface a whole quantum is necessary because part of one quantum does not contain enough energy to do the trick. One therefore would definitely establish an upper limit to the length of light quanta by simply observing the shortest light flashes able to produce a photo-electric effect.

One does not have to be very familiar with mechanical things to realize that no mechanical shutter could possibly work at this speed. Happily, however, Nature has endowed matter with properties other than purely mechanical ones. By making use of a certain electro-optical property of some liquids a device was conceived which actually operated as a shutter, turning on and off in about one ten thousand millionth of a second.

The short flashes of light produced in this way were allowed to fall on a sensitive photo-electric cell, and it was found that the cell responded to the shortest flashes obtained—which were only a few feet in length.

The importance of this simple experimental observation cannot be overestimated, for it definitely demonstrated that light quanta are less than a few feet in length and probably occupy only very minute regions of space.
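
The arithmetic behind that upper limit is simply length equals the speed of light times the flash duration; a back-of-envelope sketch (ours, not part of the original article) shows what durations correspond to flashes "a few feet" long:

# Length of a light flash = speed of light x duration of the flash.
C = 3.0e8  # speed of light, meters per second

for duration_s in (1e-10, 1e-9, 1e-8):
    length_m = C * duration_s
    print(f"{duration_s:.0e} s -> {length_m:.2f} m ({length_m / 0.3048:.1f} ft)")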

The Structure of the Nucleus

By Maria G. Mayer
Published in March 1951
Nobel Prize in 1963

For the atom as a whole modern physicists have developed a useful model based on our planetary system: it consists of a central nucleus, corresponding to the sun, and satellite electrons revolving around it, like planets, in certain orbits. This model, although it leaves many questions still unanswered, has been helpful in accounting for much of the observed behavior of the electrons. The nucleus itself, however, is very poorly understood. Even the question of how the particles of the nucleus are held together has not received a satisfactory answer.

Recently several physicists, including the author, have independently suggested a very simple model for the nucleus. It pictures the nucleus as having a shell structure like that of the atom as a whole, with the nuclear protons and neutrons grouped in certain orbits, or shells, like those in which the satellite electrons are bound to the atom. This model is capable of explaining a surprisingly large number of the known facts about the composition of nuclei and the behavior of their particles.

It is possible to discern some rather remarkable patterns in the properties of particular combinations of protons and neutrons, and it is these patterns that suggest our shell model for the nucleus. One of these remarkable coincidences is the fact that the nuclear particles, like electrons, favor certain “magic numbers.”

Every nucleus (except hydrogen, which consists of but one proton) is characterized by two numbers: the number of protons and the number of neutrons. The sum of the two is the atomic weight of the nucleus. The number of protons determines the nature of the atom; thus a nucleus with two protons is always helium, one with three protons is lithium, and so on. A given number of protons may, however, be combined with varying numbers of neutrons, forming several isotopes of the same element. Now it is a very interesting fact that protons and neutrons favor even-numbered combinations; in other words, both protons and neutrons, like electrons, show a strong tendency to pair. In the entire list of some 1,000 isotopes of the known elements, there are no more than six stable nuclei made up of an odd number of protons and an odd number of neutrons.

Moreover, certain even-numbered aggregations of protons or neutrons are particularly stable. One of these magic numbers is 2. The helium nucleus, with 2 protons and 2 neutrons, is one of the most stable nuclei known. The next magic number is 8, representing oxygen, whose common isotope has 8 protons and 8 neutrons and is remarkably stable. The next magic number is 20, that of calcium.

The list of magic numbers is: 2, 8, 20, 28, 50, 82 and 126. Nuclei with these numbers of protons or neutrons have unusual stability. It is tempting to assume that these magic numbers represent closed shells in the nucleus, like the electronic shells in the outer part of the atom.
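
As a small illustration (ours, not Mayer's), nuclei with magic numbers of both protons and neutrons, the so-called doubly magic nuclei, include several famously stable species; a brief sketch in Python:

# Magic numbers from the shell model, as listed above.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

# A few well-known stable nuclei: (name, protons Z, neutrons N).
nuclei = [
    ("helium-4", 2, 2),
    ("oxygen-16", 8, 8),
    ("calcium-40", 20, 20),
    ("calcium-48", 20, 28),
    ("lead-208", 82, 126),
]

for name, z, n in nuclei:
    doubly_magic = z in MAGIC and n in MAGIC
    print(f"{name}: Z={z}, N={n}, doubly magic: {doubly_magic}")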

The shell model can explain other features of nuclear behavior, including the phenomenon known as isomerism, which is the existence of long-lived excited states in nuclei. Perhaps the most important application of the model is in the study of beta-decay, i.e., emission of an electron by a nucleus. The lifetime of a nucleus that is capable of emitting an electron depends on the change of spin it must undergo to release the electron. Present theories of beta-decay are not in a very satisfactory state, and it is not easy to check on these theories because only in a few cases are the states of radioactive nuclei known. The shell model can help in this situation, for it is capable of predicting spins in cases in which they have not been measured. Certainly the simple model described here falls short of giving a complete and exact description of the structure of the nucleus. Nonetheless, the success of the model in describing so many features of nuclei indicates that it is not a bad approximation of the truth.

The Antiproton

By Emilio Segrè and Clyde E. Wiegand
Published in June 1956
Nobel Prize in 1959 (Segrè)

A quarter of a century ago P.A.M. Dirac of the University of Cambridge developed an equation, based on the most general principles of relativity and quantum mechanics, which described in a quantitative way various properties of the electron. He had to put in only the charge and mass of the electron—and then its spin, its associated magnetic moment and its behavior in the hydrogen atom followed with mathematical necessity. Its discoverer found, however, that the equation required the existence of both positive and negative electrons: that is, it described not only the known negative electron but also an exactly symmetrical particle which was identical with the electron in every way except that its charge was positive instead of negative.

A few years after Dirac's prediction, Carl D. Anderson of the California Institute of Technology found positive electrons (positrons) among the particles produced by cosmic rays in a cloud chamber. This discovery set physicists off on a new and more formidable search for another hypothetical particle—a search which was finally rewarded only a few months ago.

Dirac's general equation, slightly modified, should be applicable to the proton as well as to the electron. In this instance too it predicts the existence of an antiparticle—an antiproton identical to the proton but with a negative instead of a positive charge.

The question then arose as to how much energy would be needed to create antiprotons in the laboratory with an accelerator. Because an antiproton can be created only in a pair with a proton, we need at least the energy equivalent to the mass of two protons (i.e., about two billion electron volts). However, we need much more than two Bev in the proposed laboratory experiment. To convert energy into particles we must concentrate the energy at a point; this is best accomplished by hurling a high-energy particle at a target—e.g., a proton against a proton. After the collision we shall have four particles: the two original protons plus the newly created proton-antiproton pair. Each of the four will emerge from the collision with a kinetic energy amounting to about one Bev. Thus the generation of an antiproton takes two Bev (creation of the proton-antiproton pair) plus four Bev (the kinetic energy of the four emerging particles). It was with these numbers in mind that the Bevatron at the University of California was designed.
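
The bookkeeping in that paragraph can be tallied directly; here is a minimal sketch in Python using the round numbers quoted above (not part of the original article):

# Energy budget for making an antiproton at the Bevatron, in BeV (GeV).
pair_creation_bev = 2.0   # rest energy of a new proton-antiproton pair
kinetic_bev = 4 * 1.0     # four emerging particles, roughly 1 BeV of kinetic energy each

beam_energy_bev = pair_creation_bev + kinetic_bev
print(beam_energy_bev)    # about 6 BeV, the design energy of the Bevatron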

When the Bevatron began to bombard a target made of copper with six-Bev protons, the next problem was to detect and identify any antiprotons created. A plan for the search was devised by Owen Chamberlain, Thomas Ypsilantis and the authors of this article. The plan was based on three properties which could conveniently be determined. First, the stability of the particle meant that it should live long enough to pass through a long apparatus. Second, its negative charge could be identified by the direction of deflection of the particle by an applied magnetic field, and the magnitude of its charge could be gauged by the amount of ionization it produced along its path. Third, its mass could be calculated from the curve of its trajectory in a given magnetic field if its velocity was known.

When the discovery of the antiproton was announced last October, 60 of them had been recorded, at an average rate of about four to each hour of operation of the Bevatron. They had passed all the tests which we had preordained before the start of the experiment. We were quite gratified by the comment of a highly esteemed colleague who had just finished an important and difficult experiment on mesons. After examining our tests, he said, “I wish that my own experiments on mu mesons were as convincing as this.” At this time several long-standing bets on the existence of the antiproton started to be paid. The largest we know of was for $500. (We were not personally involved.)

The Higgs Boson

By Martinus J. G. Veltman
Published in November 1986
Nobel Prize in 1999

The Higgs boson, which is named after Peter W. Higgs of the University of Edinburgh, is the chief missing ingredient in what is now called the standard model of elementary processes: the prevailing theory that describes the basic constituents of matter and the fundamental forces by which they interact. According to the standard model, all matter is made up of quarks and leptons, which interact with one another through four forces: gravity, electromagnetism, the weak force and the strong force. The strong force, for instance, binds quarks together to make protons and neutrons, and the residual strong force binds protons and neutrons together into nuclei. The electromagnetic force binds nuclei and electrons, which are one kind of lepton, into atoms, and the residual electromagnetic force binds atoms into molecules. The weak force is responsible for certain kinds of nuclear decay. The influence of the weak force and the strong force extends only over a short range, no larger than the radius of an atomic nucleus; gravity and electromagnetism have an unlimited range and are therefore the most familiar of the forces.

In spite of all that is known about the standard model, there are reasons to think it is incomplete. That is where the Higgs boson comes in. Specifically, it is held that the Higgs boson gives mathematical consistency to the standard model, making it applicable to energy ranges that lie beyond the capabilities of the current generation of particle accelerators but that may soon be reached by future ones. Moreover, the Higgs boson is thought to generate the masses of all the fundamental particles; in a manner of speaking, particles “eat” the Higgs boson to gain weight.

The biggest drawback of the Higgs boson is that so far no evidence of its existence has been found. Instead a fair amount of indirect evidence already suggests that the elusive particle does not exist. Indeed, modern theoretical physics is constantly filling the vacuum with so many contraptions such as the Higgs boson that it is amazing a person can even see the stars on a clear night! Although future accelerators may well find direct evidence of the Higgs boson and show that the motivations for postulating its existence are correct, I believe things will not be so simple. I must point out that this does not mean the entire standard model is wrong. Rather, the standard model is probably only an approximation—albeit a good one—of reality.

Forces among elementary particles are investigated in high-energy-physics laboratories by means of scattering experiments. A beam of electrons might, for instance, be scattered off a proton. By analyzing the scattering pattern of the incident particles, knowledge of the forces can be gleaned.

The electroweak theory successfully predicts the scattering pattern when electrons interact with protons. It also successfully predicts the interactions of electrons with photons, with W bosons [particles that make the weak field felt] and with particles called neutrinos. The theory runs into trouble, however, when it tries to predict the interaction of W bosons with one another. In particular, the theory indicates that at sufficiently high energies the probability of scattering one W boson off another W boson is greater than 1. Such a result is clearly nonsense. The statement is analogous to saying that even if a dart thrower is aiming in the opposite direction from a target, he or she will still score a bull's-eye.

It is here that the Higgs boson enters as a savior. The Higgs boson couples with the W bosons in such a way that the probability of scattering falls within allowable bounds: a certain fixed value between 0 and 1. In other words, incorporating the Higgs boson in the electroweak theory “subtracts off” the bad behavior.

Armed with the insight that the Higgs boson is necessary to make the electroweak theory renormalizable, it is easy to see how the search for the elusive particle should proceed: [W bosons] must be scattered off one another at extremely high energies, at or above one trillion electron volts (TeV). The necessary energies could be achieved at the proposed 20-TeV Superconducting Super Collider (SSC), which is currently under consideration in the U.S. If the pattern of the scattered particles follows the predictions of the renormalized electroweak theory, then there must be a compensating force, for which the Higgs boson would be the obvious candidate. If the pattern does not follow the prediction, then the [W bosons] would most likely be interacting through a strong force, and an entire new area of physics would be opened up.

Technology

Optical Masers

By Arthur L. Schawlow
Published in June 1961
Nobel Prize in 1981

For at least half a century communications engineers have dreamed of having a device that would generate light waves as efficiently and precisely as radio waves can be generated. The contrast in purity between the electromagnetic waves emitted by an ordinary incandescent lamp and those emitted by a radio-wave generator could scarcely be greater. Radio waves from an electromagnetic oscillator are confined to a fairly narrow region of the electromagnetic spectrum and are so free from “noise” that they can be used for carrying signals. In contrast, all conventional light sources are essentially noise generators that are unsuited for anything more than the crudest signaling purposes. It is only within the last year, with the advent of the optical maser, that it has been possible to attain precise control of the generation of light waves.

Although optical masers are still very new, they have already provided enormously intense and sharply directed beams of light. These beams are much more monochromatic than those from other light sources.

The optical maser is such a radically new kind of light source that it taxes the imagination to canvass its possible applications. Message-carrying, of course, is the most obvious use and the one that is receiving the most technological attention. Signaling with light, although it has been used by men since ancient times, has been limited by the weakness and noisiness of available light sources. An ordinary light beam can be compared to a pure, smooth carrier wave that has already been modulated with noise by short bursts of light randomly emitted by the individual atoms in the light source. The maser, on the other hand, can provide an almost ideally smooth wave, carrying nothing but what one puts on it.

If suitable methods of modulation can be found, coherent light waves should be able to carry an enormous volume of information. This is so because the frequency of light is so high that even a very narrow band of the visible spectrum includes an enormous number of cycles per second; the amount of information that can be transmitted is directly proportional to the number of cycles per second and therefore to the width of the band. In television transmission the carrier wave carries a signal that produces an effective bandwidth of four megacycles. A single maser beam might reasonably carry a signal with a frequency, or bandwidth, of 100,000 megacycles, assuming a way could be found to generate such a signal. A signal of this frequency could carry as much information as all the radiocommunication channels now in existence. It must be admitted that no light beam will penetrate fog, rain or snow very well. Therefore to be useful in earthbound communication systems light beams will have to be enclosed in pipes.
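
Schawlow's comparison reduces to a single division; a minimal sketch (ours) with the figures quoted above:

# Bandwidth figures from the excerpt, in megacycles per second.
maser_bandwidth_mc = 100_000   # hypothetical signal on a single maser beam
tv_channel_mc = 4              # effective bandwidth of one television signal

print(maser_bandwidth_mc // tv_channel_mc)   # 25,000 television channels on one beam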

Space-Based Ballistic-Missile Defense

By Hans A. Bethe, Richard L. Garwin, Kurt Gottfried and Henry W. Kendall
Published in October 1984
Nobel Prize in 1967 (Bethe) and in 1990 (Kendall)

In his televised speech last year calling on the nation's scientific community “to give us the means of rendering these nuclear weapons impotent and obsolete” the president [Ronald Reagan] expressed the hope that a technological revolution would enable the U.S. to “intercept and destroy strategic ballistic missiles before they reached our own soil or that of our allies.”

Can any system for ballistic-missile defense eliminate the threat of nuclear annihilation?

Our analysis of the prospects for a space-based defensive system against ballistic-missile attack will focus on the problem of boost-phase interception.

The boost-phase layer of the defense would require many components that are not weapons in themselves. They would provide early warning of an attack by sensing the boosters' exhaust plumes; ascertain the precise number of the attacking missiles and, if possible, their identities; determine the trajectories of the missiles and get a fix on them; assign, aim and fire the defensive weapons; assess whether or not interception was successful; and, if time allowed, fire additional rounds.

Because the boosters would have to be attacked while they could not yet be seen from any point on the earth's surface accessible to the defense, the defensive system would have to initiate boost-phase interception from a point in space, at a range measured in thousands of kilometers. Two types of “directed energy” weapon are currently under investigation for this purpose: one type based on the use of laser beams, which travel at the speed of light (300,000 kilometers per second), and the other based on the use of particle beams, which are almost as fast. Nonexplosive projectiles that home in on the booster's infrared signal have also been proposed.

Other interception schemes proposed for ballistic-missile defense include chemical-laser weapons, neutral-particle-beam weapons and nonexplosive homing vehicles, all of which would have to be stationed in low orbits.

The brightest laser beam attained so far is an infrared beam produced by a chemical laser that utilizes hydrogen fluoride. The U.S. Department of Defense plans to demonstrate a two-megawatt version of this laser by 1987. Assuming that 25-megawatt hydrogen fluoride lasers and optically perfect 10-meter mirrors eventually become available, a weapon with a “kill radius” of 3,000 kilometers would be at hand. A total of 300 such lasers in low orbits could destroy 1,400 ICBM boosters in the absence of countermeasures if every component worked to its theoretical limit.
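
The quoted kill radius is ultimately set by diffraction; a rough sketch (ours, assuming a hydrogen fluoride wavelength of about 2.7 microns, which is not stated in the excerpt) of the beam spot and power density at that range:

import math

# Diffraction-limited spot of a hydrogen fluoride laser weapon.
wavelength_m = 2.7e-6      # HF laser wavelength, roughly 2.7 microns (assumed)
mirror_diameter_m = 10.0   # "optically perfect 10-meter mirrors"
range_m = 3.0e6            # 3,000-kilometer kill radius
power_w = 25e6             # 25-megawatt laser

spot_diameter_m = 2.44 * wavelength_m * range_m / mirror_diameter_m
spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2

print(f"spot diameter: about {spot_diameter_m:.1f} m")                    # roughly 2 m
print(f"power density: about {power_w / spot_area_m2 / 1e4:.0f} W/cm^2")  # roughly 800 W/cm^2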

A particle-beam weapon could fire a stream of energetic charged particles that could penetrate deep into a missile and disrupt the semiconductors in its guidance system. A charged-particle beam, however, would be bent by the earth's magnetic field and therefore could not be aimed accurately at distant targets. Hence any plausible particle-beam weapon would have to produce a neutral beam. Furthermore, by using gallium arsenide semiconductors, which are about 1,000 times more resistant to radiation damage than silicon semiconductors, it would be possible to protect the missiles' guidance computer from such a weapon.

Accurate Measurement of Time

By Wayne M. Itano and Norman F. Ramsey
Published in July 1993
Nobel Prize in 1989 (Ramsey)

New technologies, relying on the trapping and cooling of atoms and ions, offer every reason to believe that clocks can be 1,000 times more precise than existing ones.

One of the most promising approaches depends on the resonance frequency of trapped, electrically charged ions. Trapped ions can be suspended in a vacuum so that they are almost perfectly isolated from disturbing influences. Hence they do not suffer collisions with other particles or with the walls of the chamber.

Two different types of traps are used. In a Penning trap, a combination of static, nonuniform electric fields and a static, uniform magnetic field holds the ions. In a radio frequency trap (often called a Paul trap), an oscillating, nonuniform electric field does the job. Workers at Hewlett-Packard, the Jet Propulsion Laboratory in Pasadena, Calif., and elsewhere have fabricated experimental standard devices using Paul traps. The particles trapped were mercury 199 ions. The maximum Qs [a measure of relative energy absorption and loss] of trapped-ion standards exceed 10¹². This value is 10,000 times greater than that for current cesium beam clocks [the higher the Q, the more stable the clock].

During the past few years, there have been spectacular developments in trapping and cooling neutral atoms, which had been more difficult to achieve than trapping ions. Particularly effective laser cooling results from the use of three pairs of oppositely directed laser-cooling beams along three mutually perpendicular paths. A moving atom is then slowed down in whatever direction it moves. This effect gives rise to the designation “optical molasses.” Neutral-atom traps can store higher densities of atoms than can ion traps, because ions, being electrically charged, are kept apart by their mutual repulsion. Other things being equal, a larger number of atoms result in a higher signal-to-noise ratio.

The main hurdle in using neutral atoms as frequency standards is that the resonances of atoms in a trap are strongly affected by the laser fields. A device called the atomic fountain surmounts the difficulty. The traps capture and cool a sample of atoms that are then given a lift upward so that they move into a region free of laser light. The atoms then fall back down under the influence of gravity. On the way up and again on the way down, the atoms pass through an oscillatory field. In this way, resonance transitions are induced, just as they are in the separated oscillatory field beam apparatus.

Much current research is directed toward laser-cooled ions in traps that resonate in the optical realm, where frequencies are many thousands of gigahertz. Such standards provide a promising basis for accurate clocks because of their high Q. Investigators at NIST have observed a Q of 10¹³ in the ultraviolet resonance of a single laser-cooled, trapped ion. This value is the highest Q that has ever been seen in an optical or microwave atomic resonance.

The anticipated improvements in standards will increase the effectiveness of the current uses and open the way for new functions. Only time will tell what these uses will be.

Carbon Wonderland

By Andre K. Geim and Philip Kim
Published in April 2008
Nobel Prize in 2010 (Geim)

Every time someone scribes a line with a pencil, the resulting mark includes bits of the hottest new material in physics and nanotechnology: graphene. Graphite, the “lead” in a pencil, is a kind of pure carbon formed from flat, stacked layers of atoms. Graphene is the name given to one such sheet. It is made up entirely of carbon atoms bound together in a network of repeating hexagons within a single plane just one atom thick. Not only is it the thinnest of all possible materials, it is also extremely strong and stiff. Moreover, in its pure form it conducts electrons faster at room temperature than any other substance. Engineers at laboratories worldwide are currently scrutinizing the stuff to determine whether it can be fabricated into smart displays, ultrafast transistors and quantum-dot computers.

In the meantime, the peculiar nature of graphene at the atomic scale is enabling physicists to delve into phenomena that must be described by relativistic quantum physics. Investigating such phenomena has heretofore been the exclusive preserve of astrophysicists and high-energy particle physicists working with multimillion-dollar telescopes or multibillion-dollar particle accelerators. Graphene makes it possible for experimenters to test the predictions of relativistic quantum mechanics with laboratory benchtop apparatus.

Two features of graphene make it an exceptional material. First, despite the relatively crude ways it is still being made, graphene exhibits remarkably high quality—resulting from a combination of the purity of its carbon content and the orderliness of the lattice into which its carbon atoms are arranged. Investigators have so far failed to find a single atomic defect in graphene—say, a vacancy at some atomic position in the lattice or an atom out of place. That perfect crystalline order seems to stem from the strong yet highly flexible interatomic bonds, which create a substance harder than diamond yet allow the planes to bend when mechanical force is applied. The quality of its crystal lattice is also responsible for the remarkably high electrical conductivity of graphene. Its electrons can travel without being scattered off course by lattice imperfections and foreign atoms.

The second exceptional feature of graphene is that its conduction electrons move much faster and as if they had far less mass than do the electrons that wander about through ordinary metals and semiconductors. Indeed, the electrons in graphene—perhaps “electric charge carriers” is a more appropriate term—are curious creatures that live in the weird world where rules analogous to those of relativistic quantum mechanics play an important role. That kind of interaction inside a solid, so far as anyone knows, is unique to graphene. Thanks to this novel material from a pencil, relativistic quantum mechanics is no longer confined to cosmology or high-energy physics; it has now entered the laboratory.

One engineering direction deserves special mention: graphene-based electronics. We have emphasized that the charge carriers in graphene move at high speed and lose relatively little energy in collisions with atoms in its crystal lattice. That property should make it possible to build ballistic transistors, ultrahigh-frequency devices that would respond much more quickly than existing transistors.

Even more tantalizing is the possibility that graphene could help the microelectronics industry prolong the life of Moore's law. The remarkable stability and electrical conductivity of graphene even at nanometer scales could enable the manufacture of individual transistors substantially less than 10 nanometers across and perhaps even as small as a single benzene ring. In the long run, one can envision entire integrated circuits carved out of a single graphene sheet.

Whatever the future brings, the one-atom-thick wonderland will almost certainly remain in the limelight for decades to come. Engineers will continue to work to bring its innovative by-products to market, and physicists will continue to test its exotic quantum properties. But what is truly astonishing is the realization that all this richness and complexity had for centuries lain hidden in nearly every ordinary pencil mark.


John Matson is a former reporter and editor for Scientific American who has written extensively about astronomy and physics.

Ferris Jabr is a contributing writer for Scientific American. He has also written for the New York Times Magazine, the New Yorker and Outside.

This article was originally published with the title “Nobel Pursuits” in Scientific American Magazine Vol. 307 No. 1 (July 2012), p. 62
doi:10.1038/scientificamerican0712-62