Excerpt
Chapter One: ATOM

Today, we feel quite at home in an atomic world. We take it for granted that all the stuff we encounter in everyday life is made of trillions upon trillions of individual units called atoms, and that they in turn contain even smaller components in the form of electrons, protons, and neutrons. Those ideas now seem so comfortable and fundamental that it's hard to imagine that they were considered suspiciously radical in our grandparents' lifetimes. Indeed, the discovery and characterization of atoms is perhaps the preeminent triumph of twentieth-century physics. "Atomism" was not a new concept. Greek sages had posited it in the fifth century B.C., and by the early nineteenth century, British scientist John Dalton had become convinced that matter was built from tiny entities that were "absolutely indecomposable." In addition, chemists had learned that elements combine only in certain specific ratios by weight--a notion implying that each element must occur in discrete units. Yet at the dawn of the modern era--with automobiles, telephones, and radios already in widespread use--there were distinguished scientists who doubted that atoms had a real, physical existence. After all, nobody had ever seen one. (And for good reason: as physicists later determined, even a fairly hefty atom is only about 10^-10 meters wide, roughly 1/1,000,000 the width of a human hair. And 99.9 percent of its mass is in the nucleus, which is tens of thousands of times smaller yet!) So at the turn of the century even such formidable figures as Austrian physicist Ernst Mach were still insisting that the supposed atom was no more than a useful fiction. Within a few wondrous decades, however, scientists had not only revealed the structure and behavior of the atom in exquisite and astonishing detail, but were using that knowledge to understand natural phenomena on scales from the submicroscopic to the cosmic.
Electrons

The first convincing clue was the discovery of the electron--or, as it was then known, the "cathode ray." In the mid-nineteenth century, scientists had found that if electrodes were placed in a vacuum tube, the negative pole, or cathode, appeared to emit some strange form of radiation. By the 1890s, voltage generators had become powerful enough, and vacuum conditions good enough, that this effect could be observed in detail. But no one knew what it was. Many Continental researchers were betting that it was indeed radiation. German physicist Heinrich Hertz had shown that cathode rays could penetrate a thin metal foil, which was very ray-like behavior. Moreover, X-rays and radioactivity had just been discovered, and mysterious sorts of radiation suddenly seemed to be cropping up almost monthly. But some British physicists had suspected for years that the rays were actually streams of an unknown kind of particle carrying negative electrical charges. One of those physicists was Joseph John (J.J.) Thomson, son of a Manchester bookseller, who in 1884 had been elected director of Cambridge University's famed Cavendish Laboratory. In the mid-1890s, he set out to examine the phenomenon using a then-high-tech apparatus designed by Sir William Crookes. The device was in many ways no different from a modern television set: in a sealed glass tube from which most of the air had been pumped to create a vacuum, the cathode emitted its rays in straight lines that made sections of the glass opposite the cathode glow brightly, just as a TV "electron gun" shoots particles at a glass screen coated with phosphors. Thanks to the laws governing the interaction of charges and fields--worked out in the nineteenth century by British physicists Michael Faraday, James Clerk Maxwell, and others--Thomson knew that if cathode rays were actually streams of charged particles, they should be deflected by electric and magnetic fields.
The direction in which the beam was bent would reveal the type of charge, and the amount of the deflection would depend on the size of the charge and the speed of the particles. Previous attempts to demonstrate this effect had failed. But Thomson surmised that the fields might have been too weak. Using improved induction coils and a better vacuum, he found that he could bend the beams, causing the glow to shift to a different part of the tube. In 1897, he wrote, "I can see no escape from the conclusion that they are charges of negative electricity carried by particles of matter." He had also been able to determine the ratio of the mass to the charge. Though he could calculate neither quantity separately, it appeared likely that the mass was shockingly scant--around 1/1,000 that of the positively charged hydrogen ion, whose mass had been approximated by chemists and which was "the smallest mass hitherto recognized as being capable of a separate existence." (More than a decade would pass before American physicist Robert Millikan was able to determine the new particle's charge accurately. First he noted how fast tiny oil droplets fell in air. Then he gave the droplets an electric charge and applied an electric field that pushed them in the opposite direction, measuring how much charge was necessary to propel them upward. His data came out in multiples of a single, presumably minimal charge. In 1909, he determined this quantity to within a few percentage points of the currently accepted value. Once he had established the charge, he could calculate the mass, which would prove to be 1/1,836 that of the hydrogen nucleus.) Thomson had called his particle a "corpuscle." But the name that stuck had been invented in 1891 by his Irish contemporary G. Johnstone Stoney: "electron." Its discovery jolted science into a new way of imagining the composition of matter--and suddenly raised several serious questions. For one thing, if atoms existed, they were supposed to be the smallest elementary units of matter.
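Millikan's reasoning can be sketched numerically: if every measured droplet charge is a whole-number multiple of one elementary charge, then the largest unit that divides all of them (to within experimental error) is that charge. A minimal sketch, using invented droplet values generated from the modern value of e; the function name and tolerances are illustrative assumptions, not Millikan's actual procedure:

```python
# Hypothetical droplet charges, in coulombs: whole-number multiples
# of the elementary charge (values invented for illustration).
E_MODERN = 1.602e-19
charges = [n * E_MODERN for n in (2, 3, 5, 7)]

def largest_common_unit(charges, lo=0.5e-19, hi=3.0e-19, steps=25001, tol=1e-3):
    """Scan candidate unit charges from large to small and return the first
    (i.e., largest) one for which every measured charge is very nearly a
    whole-number multiple -- Millikan's 'single, presumably minimal charge'."""
    for i in range(steps):
        e = hi - (hi - lo) * i / (steps - 1)
        if all(abs(q / e - round(q / e)) < tol for q in charges):
            return e

print(largest_common_unit(charges))  # close to 1.602e-19 C
```

Any smaller unit (e/2, e/3, ...) would also divide the data evenly, which is why the scan runs from large candidates down: the elementary charge is the largest unit consistent with all the measurements.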
But here was something thousands of times smaller. And it couldn't be the atom itself. Scientists had long known that, if there were such things as atoms, they were electrically neutral, although they could be made to take on a positive charge (that is, become ions) if exposed to enough energy. Today, we understand that the energy dislodges one or more electrons, leaving the ionized atom with a net positive charge from the protons in its nucleus. At the end of the nineteenth century, however, all Thomson knew was that if the electron was negative, something in the atom had to carry a corresponding positive charge. But what was it? And how were the charges arranged? Various ingenious models were proposed. But early in the twentieth century the favored conception was the one endorsed by Thomson: an atom was a composite in which a number of electrons were embedded in a wad of undifferentiated positive matter "like raisins in a pudding." This agreeable notion didn't last long.

Nucleus

While Thomson was examining his corpuscles, researchers on the Continent were pondering another new phenomenon--radioactivity. First observed in uranium by French physicist Henri Becquerel in 1896, it was initially inexplicable. Certain substances gave off emissions that left an image on a photographic plate. But what was being emitted? The first comprehensive answer was provided by Ernest Rutherford, a bluff and hearty New Zealander who worked under Thomson at Cambridge before moving to McGill University in Montreal, Canada, and then back to England at the University of Manchester. By the turn of the century, Rutherford had examined uranium emissions and determined that there were two very different kinds. One, which he called alpha, was very easily absorbed by materials as thin as a piece of paper. The other, beta, was much more penetrating, able to pass even through a thin sheet of aluminum. (Becquerel would later confirm that these beta rays were actually electrons.)
Shortly after he arrived at Manchester in 1907, Rutherford had concluded that alpha particles were positively charged because of the way they were deflected by electric and magnetic fields. The relatively small amount of that deflection suggested a sizable mass. He eventually decided that the alpha particles "must consist of atoms of helium"--an element whose mass was known approximately from chemistry--and were ejected from certain unstable elements at about one-twentieth the speed of light. We know now that the alpha particle consists of two protons and two neutrons (precisely the same as the nucleus of the most common isotope of helium), which explains its positive charge. At the time, however, all Rutherford could discern was that these comparatively massive, high-energy particles could be stopped or greatly diverted by nothing more than a tissue-thin sheet of metal--suggesting that there were diminutive but impenetrable barriers of some kind lurking in matter. Using radioactive material as a sort of mini-cannon for firing alpha particles at a piece of sheer gold foil, Rutherford and his associates set out to study what happened to the particles as they collided with the gold film. By that time other researchers had devised a cunning method of tracking such trajectories: a charged particle striking a layer of zinc sulfide would produce a burst of light, or "scintillation," at the impact point. Rutherford's team arranged such a layer near the target. By observing the location of the light bursts carefully with a microscope, they began to get a good count and a very accurate measure of the small angles through which the alpha particles were deflected. And so they might have continued. But one day in 1909, Rutherford's research assistant Hans Geiger (inventor of the eponymous radiation counter) teamed up with Rutherford's student, Ernest Marsden. Rutherford suggested that they see whether any particles were scattered through relatively large angles.
It didn't seem a terribly promising idea, but it would transform physics. To everyone's amazement, Geiger and Marsden found that one in every few thousand of the particles bounced back out of the foil in the general direction of the source, as if they had collided with something solid inside. "It was as incredible as if you had fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you," Rutherford was later fond of saying. The Thomson pudding model of the atom simply could not account for this kind of behavior, because the atom it described was homogeneous. Rutherford became convinced that the atom was not uniform in density. It must contain a tiny "nucleus" of matter about 1/100,000 the size of the whole atom. That solid core, on the order of a few hundredths of a trillionth of a meter wide, must contain something with a positive charge strong enough to repel the occasional alpha particle that came close to it. Nearly all of the atom was nothing but empty space! So what was in the nucleus, and what was carrying the positive charge? To explore that question, Rutherford and Marsden switched to smaller target atoms. Gold was so massive, and its nucleus presumably had such a large positive charge, that the modest alpha particle was unlikely to knock anything out of it. But atoms in various lightweight gases such as hydrogen and nitrogen might be much more susceptible. And so they proved to be. In 1914, when Marsden bombarded air with alphas, he dislodged a strange, positively charged object that was several times less massive than the helium nucleus. Pursuing this observation intermittently during World War I, by early 1919 Rutherford had found proof that "H particles," which he soon named "protons," could indeed be dislodged from the nitrogen nucleus. The "H" particle had the same characteristics as the hydrogen ion, which had been extensively studied.
This suggested that atoms of nitrogen contained the same kind of basic particle found at the core of hydrogen. The nuclear proton appeared to be the electrical counterpart to the electron, with an equal but opposite charge. The atom was becoming comprehensible. But like all truly profound revelations in science, the new findings raised troubling new questions. For example, how, exactly, were electrons bound to atoms? Rutherford and others proposed a comprehensible, if extraordinary, solution: electrons must orbit the nucleus like planets around the sun. But that model had a fatal weakness. Electrons are charged particles. And Maxwell's laws demanded that if such a charge were traveling in a circular orbit, it would give off energy as electromagnetic radiation in much the same way that a radio antenna emits waves. If that were the case, the electrons would very quickly dissipate their energies, slow down, and plunge into the nucleus. Some process beyond classical physics, it seemed, prevented this from happening. But what? The man who confronted that quandary was Niels Bohr, son of a Copenhagen physiology professor. Bohr had worked briefly in Cambridge with Thomson and then went to Rutherford's lab at Manchester, where in 1912 he began to devise a revolutionary picture. Bohr retained the idea of orbits, but rejected the classical idea that the electron would radiate continuously. Instead, he turned to a bizarre and then somewhat disreputable concept posited in 1900 by German physicist Max Planck. Planck had argued against the entirely sensible assumption that the energy emitted by excited atoms comes in a smooth, unbroken gradation of values. Experimental observations of heated bodies could be better explained, he determined, if matter gave off energy in discrete units, which came to be called "quanta," from the Latin word for "how much." The amount of energy in each discontinuous quantum was proportional to its frequency.
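Planck's relation fits in one line: the energy of a quantum equals its frequency times a constant h, now called Planck's constant. A quick illustrative calculation with modern constant values (the function name is ours):

```python
H = 6.626e-34   # Planck's constant, joule-seconds
EV = 1.602e-19  # joules per electron volt

def quantum_energy_ev(frequency_hz):
    """Planck's relation E = h * f, converted to electron volts."""
    return H * frequency_hz / EV

# Green light, frequency about 5.4e14 Hz: each quantum carries
# a little over 2 eV of energy.
print(quantum_energy_ev(5.4e14))
```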
Similarly, Bohr reasoned that electrons could encircle the nucleus only in certain allowed orbits at particular distances from it. Each permitted orbit corresponded to a different energy, and electrons emitted or absorbed radiation only when they changed from one of those energy conditions, or "levels," to another. Further, once an electron was at the lowest possible energy, called the "ground state," it could not radiate at all. (As we will see in the next chapter, today we go one step further and accept that when an electron drops from a higher to a lower energy level, the radiation is emitted in the form of light quanta. In 1905, Einstein described the photoelectric effect by postulating that light is made up of quantized energetic particles. At the time, however, that notion was so completely outlandish that physicists, including Bohr, did not generally accept it. Nearly two decades would pass before experiments by American physicist Arthur Holly Compton confirmed the notion and made Einstein's "corpuscular concept" widely credible.) The Bohr atom was not convincing when first presented in 1913, but it made excellent sense of a phenomenon that had baffled scientists for centuries: spectra. When an element is heated, it gives off radiation that can be split into its component wavelengths using prisms or gratings to spread out the pattern. By the nineteenth century, it was evident that each element had its own unique spectrum with a telltale pattern of lines: dark ones marking wavelengths absorbed by the substance, and bright ones representing the wavelengths emitted. But there was no suitable explanation for why this happened, although in 1885 Swiss mathematician Johann Balmer had devised a formula that neatly described the relationship among the wavelengths that made up the major lines in the visible hydrogen spectrum (called the Balmer series). As it turned out, Bohr's theory exactly predicted their placement.
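In its modern form, Balmer's rule gives the visible hydrogen lines from a simple reciprocal formula, which Bohr's model reproduces as jumps from levels n = 3, 4, 5, 6 down to level 2. A minimal sketch (function name ours; R is the standard Rydberg constant):

```python
R = 1.097e7  # Rydberg constant for hydrogen, per meter

def balmer_line_nm(n):
    """Wavelength in nm of the hydrogen line emitted when an electron
    drops from level n to level 2: 1/lambda = R * (1/2**2 - 1/n**2)."""
    return 1e9 / (R * (1.0 / 4 - 1.0 / n**2))

# The four visible Balmer lines (n = 3, 4, 5, 6):
# roughly 656 nm (red), 486, 434, and 410 nm (violet).
for n in range(3, 7):
    print(n, balmer_line_nm(n))
```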
To many, it seemed that the atom at last was comprehensible. The jubilation was premature. Bohr's model failed to work on more complicated atoms--a situation that spurred scores of researchers to seek, and eventually find, utterly unexpected new characteristics. Nonetheless, by 1913 the general outline of the modern atom was becoming clearer. The conviction was spreading that a new age of physics had arrived. It would unfold as investigators learned to use ever more sophisticated means to probe deeper and deeper into the secrets of matter.

On the Firing Line

It would not be easy. Much of physics involves poking objects with something and observing the result. But in the subatomic realm, there are relatively few objects small enough to do the poking. Rutherford's alpha particles were far too large--and insufficiently energetic--to reveal the fine structure of the atom. "If alpha particles--or similar projectiles--of still greater energy were available for experiment," Rutherford wrote in 1919, "we might expect to break down the nuclear structure of many of the lighter atoms." Projecting lightweight electrons from a cathode was one thing. Boosting the energy of a proton high enough to disintegrate a nucleus was quite another. It would require what in the 1920s seemed unattainable electrical potentials: millions of electron volts (eV). (One eV is the energy an electron acquires when accelerated across a potential difference of 1 volt.) Nonetheless, two researchers at the Cavendish Laboratory, John Cockcroft and Ernest Walton, set out to see how close they could get. Using a high-voltage multiplier circuit, they began accelerating protons down a tube between two oppositely charged plates at energies up to 750,000 eV. To their delighted surprise, in 1932 they succeeded in banging a proton into an atom of lithium (the third-lightest element) so hard that its nucleus absorbed the proton and split into two separate nuclei of helium, the second-lightest element.
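To get a feel for these numbers: one eV is 1.602e-19 joules, and a 750,000 eV proton, treated classically (a fair approximation this far below light speed), moves at about four percent of c. A sketch with modern constant values (function name ours):

```python
EV = 1.602e-19        # joules per electron volt
M_PROTON = 1.673e-27  # proton mass, kg
C = 3.0e8             # speed of light, m/s

def classical_proton_speed(energy_ev):
    """Speed from the classical kinetic energy E = (1/2) m v**2,
    adequate for protons well below the speed of light."""
    return (2 * energy_ev * EV / M_PROTON) ** 0.5

v = classical_proton_speed(750_000)  # Cockcroft and Walton's peak energy
print(v / C)  # roughly 0.04: about 4 percent of light speed
```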
Meanwhile, on the other side of the Atlantic, a young American physicist named Robert Van de Graaff had invented an even more powerful device. It used an insulated conveyor belt to carry positive charges to the interior of a hollow metal dome. The longer the belt ran, the more voltage built up in the dome. Positively charged hydrogen ions--that is, protons--placed inside the dome were repelled by its powerful electric field and propelled out of the apparatus through an accelerator tube to energies of 1.5 million eV. The Cockcroft-Walton and Van de Graaff machines delivered energy to particles in a single stupendous push, so they were inherently limited by the voltage that could be built up without sparking over. But a Norwegian engineer, Rolf Wideröe, had designed a system whereby drastically greater amounts of energy could be conveyed to particles by applying relatively small accelerating kicks many times over. In Wideröe's scheme, a charged particle would proceed down the center of a series of metal tubes separated by short gaps. Just as the particle reached a gap, the tube in front of it would be given the opposite electrical charge, which would attract it. Thus at each gap, the particle would accelerate. The limiting factor was size: kicking particles up to very high energies would require an extremely, and perhaps prohibitively, long line of tubes. In 1929, a young Berkeley physics professor named Ernest Lawrence was studying Wideröe's ideas and realized that the process could be made circular, forcing the particles to pass across the same accelerating gap again and again, by exploiting a well-known principle: a charged particle moving through a magnetic field experiences a sideways force that bends its path into a circle. Lawrence's "cyclotron" design had two semicircular, D-shaped hollow metal chambers, called Dees, arranged with their straight sides facing each other--but separated by a small gap.
He then put the Dees in a magnetic field. A charged particle in the device would start to revolve. When it reached the gap, the charge on the Dee it was approaching would be made opposite to the particle's, accelerating it across the gap. After it had gone halfway around, the voltage would be reversed: the particle would be repelled by the chamber it was leaving and attracted by the chamber it was approaching. The key was to make the timing of the voltage change match the time it took the particle to complete half of its circuit.

Smashing Success

Lawrence's prototype, built in 1931, was just 5 inches in diameter, yet it accelerated protons to a whopping 80,000 eV. Larger models quickly followed that reached millions of eV. The early machines benefited from a happy fact of physics. Each time a particle was accelerated in the cyclotron, the radius of its trajectory increased slightly and it traveled through a larger arc; but since it was moving faster, it completed one circular lap in exactly the same amount of time. Thus the timing frequency of the voltage change could remain constant as the stream of particles gathered more and more energy. But as the energies got higher, Lawrence soon ran into one of nature's few insuperable barriers: the speed of light. As a particle approached light speed, it began to behave in strange ways predicted by Einstein decades earlier. He had postulated that any entity with mass could never attain that speed (designated c) because as it got close, its mass would start to increase. Lawrence's team began to see that effect in the mid-1940s. When the particles reached orbits over 10 feet in diameter and speeds of 0.2 c, they began to grow so heavy that they took slightly longer to complete each circuit, spoiling the timing. The solution, physicists found, was to synchronize the frequency of the accelerating field with the circulating particles, gradually lowering it to compensate for their slowing revolution rate.
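Both the "happy fact" and its eventual failure can be captured in one formula: the revolution frequency is f = qB / (2*pi*gamma*m), where the relativistic factor gamma is essentially 1 at low speed, so f is the same for every orbit radius; as the speed climbs, gamma grows and the frequency drops. A sketch with rough proton numbers and an assumed 1.5 tesla field (function name ours):

```python
import math

Q = 1.602e-19  # proton charge, coulombs
M = 1.673e-27  # proton rest mass, kg
C = 3.0e8      # speed of light, m/s

def revolution_frequency(b_tesla, speed=0.0):
    """Cyclotron revolution frequency f = q*B / (2*pi*gamma*m).
    For slow particles gamma is ~1, so f does not depend on orbit radius."""
    gamma = 1.0 / math.sqrt(1.0 - (speed / C) ** 2)
    return Q * b_tesla / (2 * math.pi * gamma * M)

slow = revolution_frequency(1.5)           # ~23 MHz at any radius
fast = revolution_frequency(1.5, 0.2 * C)  # ~2 percent lower: timing slips
```

The synchrocyclotron fix amounts to sweeping the accelerating voltage's frequency downward to track the falling value of f as the particles gain energy.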
There was a price to pay: accelerators could no longer contain a continuous stream of particles at different energies, as the original cyclotron had; they could propel only one bunch of particles, all at the same energy. But the new "synchrotrons" would eventually boost their projectiles to within a whisker of the speed of light, attaining energies of billions and even trillions of electron volts in rings that were miles in diameter. Collisions in those behemoths would reveal the inner structure of the atom in dazzling and unprecedented detail. Thus, within the course of a few brief decades, science went from corpuscles, "H particles," and raisin puddings to a highly sophisticated understanding of the atom and its components. That, in turn, enabled physicists to devise a striking array of investigative techniques that would speed the process of discovery and lead to dozens of practical inventions that revolutionized biology, chemistry, and medicine. Among them were the electron microscope, the scanning tunneling microscope, and nuclear magnetic resonance.

Microscopes without Light

Investigating matter on the tiniest scales has an inherent problem: it is impossible to resolve features smaller than the wavelength of whatever is used to view them. Thus waves of visible light (which average around 500 nanometers--nm, or billionths of a meter) are too big to reveal even a sizable atom, which is about 0.2 nm wide, or indeed anything much smaller than a few millionths of a meter. But in the early 1920s, as we shall see later, French theorist Louis de Broglie had made the then outrageous argument that matter had wave-like properties, and that the wavelength of a particle got shorter as its velocity increased. Soon researchers discerned that electrons could be used to observe very small structures, since an electron accelerated by tens of thousands of volts has a wavelength on the order of 0.005 nm--around 100,000 times smaller than visible light.
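The 0.005 nm figure follows directly from de Broglie's relation lambda = h / p, with the electron's momentum set by the accelerating voltage (treated non-relativistically, which is adequate at these energies). A sketch with modern constant values (function name ours):

```python
import math

H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electron volt

def electron_wavelength_nm(volts):
    """De Broglie wavelength lambda = h / sqrt(2*m*e*V) for an electron
    accelerated through the given potential (classical momentum)."""
    return 1e9 * H / math.sqrt(2 * M_E * EV * volts)

# At 50,000 volts: about 0.0055 nm, some 100,000 times shorter than
# visible light -- short enough, in principle, to resolve single atoms.
print(electron_wavelength_nm(50_000))
```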
Handling Atoms

It took somewhat longer for another quantum-mechanical concept--called tunneling--to work its way into a practical imaging device. Shortly after de Broglie advanced his notion of matter waves, Austrian physicist Erwin Schrödinger invented a landmark equation that described one of the weirder aspects of matter at the subatomic level: a particle does not actually exist in one definite place or condition; instead, it has a certain probability of existing in a variety of places and conditions. In the case of a single electron, that means that even if it is confined inside solid matter, there is a small but real possibility that it can leak outside and enter another solid that is very close by. This seemingly magical (but strictly natural and mathematically explicable) ability is called "tunneling." One of its many uses was conceived in 1981, when two scientists at IBM's Zurich research center, Gerd Binnig and Heinrich Rohrer, set out to investigate the effect. They made a needle whose tip was only a few atoms thick. When they moved it over a sheet of gold--no more than a couple of atoms' width from the surface--they could detect the tunneling current flowing from individual atoms to the needle tip. Their "scanning tunneling microscope" (STM) could easily distinguish one atom from another and map the terrain of solid surfaces in fantastic detail.

Atoms as Resonators

Perhaps no single technique or discovery more spectacularly illustrates the twentieth-century conquest of the atom than nuclear magnetic resonance (NMR)--a phenomenon that has transformed the process of chemical analysis and, in its medical incarnations, eliminated the need for thousands of painful and hazardous exploratory surgical operations every year. In 1922, Otto Stern and Walther Gerlach conducted an experiment that unknowingly anticipated a basic property of subatomic particles.
In their experiment, they showed that silver atoms behaved, in effect, like tiny bar magnets with a north and south pole. Three years later, in 1925, came the critical insight that particles have an intrinsic "spin." It was the spin of the outermost electron in the silver atom that gave the dramatic results observed by Stern and Gerlach. Physicists now know that a system of many charged particles, such as an atomic nucleus, has a net magnetic property, arising from its collective spin, that is characteristic of each kind of nucleus. In 1938, American physicist I. I. Rabi and colleagues found that when beams of molecules are placed in a strong external magnetic field, many of the nuclei try to align themselves with the outside field. But they do so incompletely, wobbling on their axes like a top that is slowing down. Rabi's group showed that, in that condition, if the nuclei are struck by a second, oscillating magnetic field from an electromagnetic wave at exactly the same frequency as their rate of wobble (the "resonant" frequency), they will absorb the field energy and "flip" their spin states--that is, reverse their north and south poles. This process diverts the molecules from the beam in a way that can be easily measured. In the 1940s, Edward M. Purcell of Harvard and Felix Bloch of Stanford found other methods to induce and measure this effect. Soon NMR was being used to examine the composition of chemical compounds by detecting their resonances. Not only does each kind of nucleus have its own distinctive resonant frequency, but in compounds that frequency varies in slight but predictable ways as the magnetic fields of different kinds of neighboring atoms influence the target nuclei--allowing NMR researchers to determine the structure of an unknown molecule. Several scientists soon realized that NMR could also be used for medical applications.
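The resonant "wobble" frequency is proportional to the applied field: f = gamma * B, where gamma is a constant specific to each kind of nucleus; for hydrogen nuclei it is commonly quoted as about 42.58 MHz per tesla. A sketch using that value (function name ours):

```python
GAMMA_PROTON_MHZ_PER_T = 42.58  # gyromagnetic ratio of the proton (hydrogen nucleus)

def resonant_freq_mhz(b_tesla, gamma_mhz_per_t=GAMMA_PROTON_MHZ_PER_T):
    """NMR resonant frequency f = gamma * B: the radio frequency that
    will flip nuclear spins in a field of B tesla."""
    return gamma_mhz_per_t * b_tesla

# Hydrogen nuclei in a typical 1.5 tesla magnet resonate near 64 MHz,
# which is why MRI scanners transmit and listen in the radio band.
print(resonant_freq_mhz(1.5))
```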
If living tissue were placed in a strong field, and some of its hydrogen nuclei were spin-flipped, the resulting emissions could be assembled by software into an image of the relevant body part. By the 1980s, magnetic resonance imaging, or MRI, was in widespread use. And a seemingly arcane property--unveiled in the quest to understand the atom--has saved countless thousands of lives.

Chapter Two: SPECTRUM

The last decade of the nineteenth century was positively aglow with scientific optimism. Amazing progress had been made in chemistry, astronomy, biology, and a dozen other fields. But no ensemble of achievements inspired more exuberant confidence in human ability to understand nature than the revolutionary discovery of how electricity and magnetism were interrelated, and how both were entwined in a wide spectrum of "electromagnetic" radiation--including light. Isaac Newton's once authoritative conviction that light was made up of tiny particles had, it seemed, been completely discredited. In its place was a comprehensive wave theory, showing that types of radiation as seemingly different as warmth emanating from the sides of a wood stove, sunlight and candlelight, and radio signals were all variations on the same kind of thing. Each was made up of two components superimposed on one another: an oscillating electric field and a correspondingly fluctuating magnetic field. Electromagnetic waves could vary in length from a few trillionths of a meter to tens of thousands of meters. This theory had been elegantly embodied in a set of equations devised by James Clerk Maxwell in 1873. By 1887, Heinrich Hertz had confirmed the model with dozens of convincing experiments demonstrating that electromagnetic radiation embodied many classic properties of waves, including interference, refraction, reflection, and polarization.
Moreover, physicists had determined that light and all other forms of electromagnetic radiation traveled at only one speed, 300 million meters per second in a vacuum. (Speed is distance per unit time. For electromagnetic radiation, that means wavelength [distance] times frequency [number of waves per second]. Because the speed is constant for all kinds of radiation, those with longer wavelengths must have lower frequencies, and vice versa.) Yet at the turn of the century, a few stubborn puzzles persisted. Their eventual solutions would reveal that light has a dual identity utterly different from what anyone imagined. And they would lead to a panoply of astonishing discoveries and inventions from photocells and radar to lasers and holograms. Copyright © 1999 American Institute of Physics/American Physical Society. All rights reserved.