Love, built on beauty, soon as beauty, dies.
—John Donne (1572–1631)
AS A GROUP, we have a pretty inconsistent attitude toward celebrity. For those people on the way to fame, either we support them, as the underdog foil to those whose star is already well established, or else we're completely unaware of their existence.
Once they've arrived, there is a range of possibilities. If the star burns steadily with a modest light, there is every chance that they can sustain a long and fruitful career. Most people will continue to be completely unaware of their existence, but a small though loyal band of supporters will be enough to drive them forward, with very infrequent detraction.
For those fateful few who reach superstardom, though, it seems almost inevitable that they will eventually reach the point of saturation, beyond which for every loyal fan there is an equally loyal naysayer. And those naysayers seem only too ready to pounce on the slightest slip. In these days of social media, fame is a multiplier for public sin. The bigger they are, apparently, the harder they fall, though not necessarily all at once. Sometimes, instead, they seem to teeter on the edge of destruction for a tantalizingly long time.
But astronomers are used to all of this. It is not just celebrities who can fall from great heights, after all. There are also their namesakes, the literal stars.
Those wholly remarkable people, the ancient Greeks, generally believed in an eternal and unchanging cosmos. By this was meant that although the planets and stars might move from hour to hour, night to night, season to season, they did so in perfectly regular fashion, so that it was possible, at least in principle, to predict where a heavenly body would be at a given point in time. Nature was complex, so that humans might have, at any moment, an incomplete and imperfect understanding of its machinations, but in principle, it could be done. All that was needed was further improvement in understanding.
To be sure, certain astronomical phenomena seemed to cast occasional doubt on the permanence of the heavens. For example, there were the comets. Comets had the gall to be large and asymmetrical, so that they were an undeniable blemish on the skies. What's more, they hung around for weeks or months at a time, and then disappeared. Aristotle (384–322 BC) resolved this apparent conundrum by placing them in Earth's atmosphere. Since everyone agreed that the atmosphere, being part of the imperfect Earth, was subject to constant change, atmospheric comets would not reflect poorly on the heavens.
In India, where comets were not seen as a blemish, the sixth-century astronomer Varahamihira (505–587) compiled a listing of comets observed by ancient Indian seers, and expressed the notion that comets belonged properly to the heavens.
Then, in the 16th century, the Danish astronomer Tycho Brahe (1546–1601), who was the foremost practical astronomer in the pre-telescopic era (the telescope was invented less than a decade after his death), carefully measured the positions of the Great Comet of 1577 as seen from multiple locations on the Earth. Now if that comet were really near to the Earth, then as seen from those different locations, the comet should appear in different positions in the sky, the same way that your left eye and right eye will see your finger in slightly different positions against the distant background.
But the measurements Tycho obtained showed very little difference, as measured from different locations on the Earth. They varied by less than a degree, indicating that the comet had to be at least four times as far from the Earth as the Moon was, and the ancient Greek astronomer Hipparchus (190–120 BC) had already determined the Earth-Moon distance to be about 400,000 km, in modern units. This thrust the comets and their imperfections irrevocably beyond the earthly sphere and up amongst the heavens.
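For the programmatically inclined, here is a minimal sketch in Python of the inverse relation Tycho exploited: for a fixed baseline, the parallax shrinks in inverse proportion to the distance. The 2,000-kilometer baseline is a hypothetical stand-in for the separation between observing sites, not a figure from Tycho's records.

    import math

    baseline_km = 2000.0            # hypothetical separation between observers
    moon_distance_km = 400_000.0    # Hipparchus's figure for the Moon's distance

    def parallax_degrees(distance_km):
        # apparent shift against the distant stars, as seen from the
        # two ends of the baseline
        return math.degrees(2 * math.atan(baseline_km / (2 * distance_km)))

    print(parallax_degrees(moon_distance_km))      # ~0.29 degrees for the Moon
    print(parallax_degrees(4 * moon_distance_km))  # ~0.07 degrees at four times as far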
However, that was just the comets, which after all were a motley bunch of celestial objects and rather unlike the stars, which (it was still generally felt) were eternal. Even after the law of conservation of energy was discovered in 1847 by the German physicist Hermann von Helmholtz (1821–1894), and it became clear that stars were not in fact eternal, but had a finite lifespan (albeit an enormously long one), the consensus was still that stars would burn more or less steadily until their fuel was exhausted, and then they would fizzle out, rather like a candle at the end of its wax.
There were annoying wrinkles in this basic story, though, which were known long before stars were known to be mortal. The second brightest star in the northern constellation of Perseus is called Algol. Algol, rather unusually, varies considerably in brightness over a period of days; at its brightest, it is more than three times brighter than it is at its dimmest. Its name—which as with most bright stars is Arabic (ra's al-ghul "the head of the ogre [ghoul]")—may indicate that ancient astronomers were aware of this strange property. (There is even a mention in an Egyptian document from about 1200 BC that may allude to this variability.)
The first unambiguous mention of Algol's variability was made in 1667 by the Italian astronomer Geminiano Montanari (1633–1687). About a century later, in 1783, the English astronomer John Goodricke (1764–1786) finally came up with the proper explanation of Algol's behavior. Goodricke had been prodded to observe Algol by his friend and astronomy mentor, Edward Pigott (1753–1825). At that time, Algol was known simply as one of a number of stars whose brightness was known to vary from time to time. Much more familiar was Mira, in the constellation of Cetus the Whale, which was known to vary gradually up and down in a roughly 11-month cycle, so Goodricke and Pigott's first thought was that Algol varied in similar fashion.
No! Goodricke observed Algol on November 12, 1782, and had not been watching it for more than about an hour before being sure that it had dropped from the second magnitude to nearly the fourth—a change in brightness by a factor of about three. The next night he checked again and it was back to second magnitude again. The two men, spurred to action by this unprecedented rate of change, watched the star closely every night they could, but it wasn't until December 28 that they saw it drop again. Over time, they were able to determine that Algol followed a cycle of 2.87 days, during which time it seemed of constant brightness, except for a period of dimming that lasted about seven hours. (See my essay "A Matter of Great Magnitude" (December 2007) for more details on the magnitude system.)
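For those keeping score at home, the conversion between magnitudes and brightness ratios is a one-liner; the 2.1 and 3.4 below are Algol's modern extremes, not Goodricke's own figures.

    # each magnitude step is a factor of 10**0.4, about 2.512, in brightness
    print(10 ** (0.4 * (3.4 - 2.1)))   # ~3.3: Goodricke's factor of about three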
Goodricke even proposed a mechanism for this perfectly regular cycle. He suggested that Algol was actually two stars, a large brighter star and a smaller dimmer star, revolving around each other. Every 2.87 days, Goodricke proposed, the dimmer star swung in front of the brighter one, as seen from the Earth. Since Algol was too far from the Earth for the two stars to be seen separately, all we could see was their total brightness, and the net effect of the periodic eclipse was a temporary dimming while the dimmer star obscured the brighter one.
Goodricke turned out to be perfectly correct in all this, but his suggestion was discarded at the time in favor of another idea, which even Isaac Newton (1642–1727) had proposed in his monumental work, the Principia Mathematica: that stars had starspots, just as the Sun has sunspots, and that variable stars varied because as they turned, they alternately showed sides with more or fewer starspots toward us. Goodricke himself was persuaded by the starspot hypothesis because his own observations seemed to show that Algol did not always dim to the same minimum brightness. He used as his guide the neighboring star rho Persei, whose brightness was known to be magnitude 3.4.
Alas for Goodricke, rho Persei itself turns out to be variable, and is occasionally as dim as magnitude 4.0. If he had known this, he might not have been so quick to set aside the eclipse theory. As it happens, though, he was in due time to receive full and substantial credit, for both his discovery and his suggestion. He received the Copley Medal from the Royal Society for his discoveries, and in 1786, the Society made him a Fellow. Sadly, though, Goodricke never learned about this last and greatest honor; he died just fourteen days after his election, of pneumonia.
In the last few years before his death, however, Goodricke, together with Pigott, embarked on a more serious study of variable stars. Among the stars Goodricke examined was a seemingly modest star in the constellation of Cepheus the King: delta Cephei. He noted its variability in 1784.
Usually about fourth magnitude, delta Cephei did not vary as much as Algol did, with its maximum brightness being about twice that of its minimum. However, unlike Algol, its variation could not be explained by a periodic eclipse. As far as anyone could tell, delta Cephei was a single star. It was, in other words, an intrinsic variable, one whose changes in brightness were due to physical changes in the star, rather than one star periodically obscuring another. Goodricke observed the star for several months, spanning late 1784 and early 1785, and then published his results in early 1786, showing that delta Cephei would gradually grow dimmer over the course of a couple of days, and then abruptly brighten back to its original level. Then the cycle would repeat.
There the matter stood for many years. Then, in 1892, Henrietta Swan Leavitt (1868–1921) graduated from Harvard Annex (later Radcliffe College). She had earned what today would be called a Bachelor of Arts degree in mathematics, but having had some freedom in classes in her fourth and final year, she took astronomy, in which she did well—well enough that she considered going on for a graduate degree in astronomy, and began taking classes in that direction.
As part of her studies, she began working at the Harvard College Observatory, then under the direction of Edward Charles Pickering (1846–1919). Pickering was compiling a catalog of star positions and brightnesses as captured by the observatory telescopes. Leavitt joined a group of women at the observatory known as "computers." These women included Annie Jump Cannon (1863–1941), who pioneered the classification of stars by spectral class (in essence, by surface temperature); Williamina Fleming (1857–1911), who discovered the Horsehead Nebula and the first white dwarf; and, later, Cecilia Payne (1900–1979), who demonstrated that the stars were primarily made of hydrogen and helium.
Leavitt, like Cannon, was mostly deaf; Cannon had lost her hearing when she was about 30 from a bout of scarlet fever, while Leavitt lost it more gradually over a life of generally poor health (possibly owing to her family constantly moving from home to home). They both seem to have responded by withdrawing into their work. Women at the time were not permitted to operate the telescopes; instead, they were set the task of analyzing the photographs.
This meant, in part, that although Pickering and the other men directed the research program and set its objectives, they were not for the most part directly involved in analyzing the gathered data. It was the women, who were in constant contact with the data, who were more likely to find new and unexpected patterns, a benefit that was not fully appreciated at the time. And in 1908, in an analysis of photographic plates taken of the Small and Large Magellanic Clouds, small satellite galaxies of the Milky Way, Leavitt noticed that the brighter a Cepheid variable was, the longer its period of variation tended to be.
Normally, such a pattern might be dismissed as coincidence, since there is generally no straightforward way of telling whether a given star appears at a certain magnitude because it is intrinsically dim but nearby, or intrinsically bright but far away. In this case, though, Leavitt could assume that all of the stars in the Small Magellanic Cloud, say, were at nearly equal distances, and therefore the brightness differences were genuine and not the result of varying distances.
Four years later, she embarked on a more systematic study of 25 Cepheid variables in the Small Magellanic Cloud (SMC), and derived a specific mathematical relationship between magnitude and period:

M = 16.2 – 2.05 × T

where M was the star's average magnitude, and T was its period in days, from minimum to maximum back down to minimum again. For instance, if a star's period was two days, then Leavitt's rule would predict the star's magnitude to be 16.2 – 2.05 × 2 = 12.1.
To be sure, that would be the star's magnitude if it were in the SMC, whose distance was not known at the time. If the star were closer—in the Milky Way proper, for example—it would appear brighter and therefore have a lower magnitude. That could still be useful, however. If Leavitt's rule predicted the star to have a magnitude of 12.1, but instead it had a magnitude of 10.6, that would indicate that the star appeared four times as bright as it would if it were in the SMC. Since the intensity of light drops as the square of the distance, that would mean that the star was half as far away as the SMC. In this way, the distance to any Cepheid variable could be determined, at least as a fraction of the distance to the SMC.
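Here is a minimal sketch of that procedure in Python, taking the linear rule above at face value:

    def predicted_smc_magnitude(period_days):
        # Leavitt's rule, as stated above, for a Cepheid in the SMC
        return 16.2 - 2.05 * period_days

    def distance_relative_to_smc(observed_mag, period_days):
        # five magnitudes is a factor of 100 in brightness, and brightness
        # falls off as the square of distance, so each magnitude of excess
        # brightness corresponds to a factor of 10**(1/5) in distance
        return 10 ** ((observed_mag - predicted_smc_magnitude(period_days)) / 5)

    print(distance_relative_to_smc(10.6, 2.0))   # ~0.5: half the SMC's distance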
A year later, in 1913, the Danish astronomer Ejnar Hertzsprung (1873–1967) attempted to use this relationship to determine the distance to the SMC. He found 13 relatively nearby Cepheid variables whose period was a single day. Such a star would have a magnitude of 14.15 if it were in the SMC, but because these stars were closer, they were correspondingly brighter. Unfortunately, Hertzsprung didn't know exactly how close they actually were, and had to make a best guess based on what was known of the geometry of the Milky Way at the time. Based on these estimates, he determined the SMC to be about 37,000 light-years away.
We know today that the distance is closer to 200,000 light-years, so Hertzsprung's figure was low by nearly a factor of six, but that was pretty good for a first try, so well done Hertzsprung.
That answered the question of what the changes in brightness told us. But why did Cepheids vary the way they did?
The faint beginnings of an answer had in fact come some time before the significance of Cepheids was even discovered. In 1870, the American astrophysicist Jonathan Homer Lane (1819–1880) developed a model of the Sun, which the Swiss astrophysicist Robert Emden (1862–1940) later expanded upon. In this model, the Sun's internal heat (produced by a source of energy that was then still unknown) tended to cause it to expand, while the force of gravity tended to cause it to contract. The equilibrium between these two forces kept the Sun at a stable size.
This model was predicated on energy being distributed through convection, with hotter gas welling up toward the surface, but around the turn of the 20th century, the German astronomer Karl Schwarzschild (1873–1916), whose sister married Emden, suggested that radiation also played a role. That suggestion was taken up in a more analytical way, after Schwarzschild's death in 1916 of an illness contracted while serving at the Russian front in World War I, by the British astronomer Sir Arthur Eddington (1882–1944), who tried to understand Cepheid variability in terms of the Lane-Emden model. Eddington showed that radiation pressure was in part responsible for keeping stars from collapsing in on themselves.
This enhanced model was not widely accepted at first, despite the fact that it accurately predicted many observed stellar properties, because it was not initially based on well-understood physical principles. Eddington answered testily that the accuracy of the model spoke for itself, and in fact, it did eventually form the foundation of our modern understanding of stellar interiors.
Part of the model's fruit was a mechanism that Eddington thought could explain why Cepheids were variable. He suggested that the stars actually pulsated as they swung from minimum brightness to maximum brightness and back to minimum again. At minimum brightness, they were small and compressed, and therefore hot. This may seem counter to intuition, since hot objects tend to shine more intensely, and in fact, Cepheids do shine more intensely at minimum size, but that intensity is emitted from so much smaller a surface area that less light overall is radiated by the star.
At maximum brightness, in contrast, they were large and bloated, and cooler. Again, their coolness contributed to a lower intensity of light, but this lower-intensity light is emitted over such a large surface area that more light overall is radiated by the star. What we see from our distant vantage point is never the size or intensity of the light; all the stars, aside from our own Sun, are too far away to appear as anything other than point sources. Instead, we see only their aggregate brightness and color.
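The tradeoff can be made concrete with the Stefan-Boltzmann law, which says that a star's total light output goes as its surface area times the fourth power of its surface temperature. The radii and temperatures below are purely illustrative, chosen so that the area wins:

    import math

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def luminosity(radius_m, temp_k):
        # total power radiated by a sphere of uniform surface temperature
        return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4

    small_hot  = luminosity(3.0e10, 6000.0)   # compressed phase
    large_cool = luminosity(6.0e10, 5100.0)   # bloated phase
    print(large_cool / small_hot)   # ~2.1: larger but cooler, yet brighter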
But what caused this pulsation? Eddington proposed that it was due to radiation from the core of the star. As he was looking into the role of radiation pressure in holding stars up against the pull of gravity, he found that the magnitude of the radiation pressure depended quite vitally on the opacity of the gas inside the star. The less opaque the gas, the more freely radiation could move within the star, and the more light escaped from the star. Escaped light did not contribute to pushing the star outward against gravity; if it had, it wouldn't have escaped, but would have been pushed back by the star's outer layers.
Cepheid pulsations could then be explained if stars were more opaque when they were compressed and hotter, and more transparent when they were bloated and cooler, in much the same way that a balloon looks more transparent when it is larger and stretched thin. At minimum brightness, when they were compressed and hotter, their gas was more opaque, and radiation pressure increased. This pushed the star outward, toward its bloated, cooler, maximum phase. In this state, the star became more transparent to its core's radiation, the outward pressure dropped, and gravity was freer to push the star back down. This back-and-forth between radiation pressure and gravity, rather like the cyclic motion of an automobile engine piston, resulted in the observed variability of the Cepheids.
There was just one tiny fly in the ointment of this explanation: Everything that was understood at the time about Cepheid variables indicated that their cores should have been full of helium, and as far as was known, helium was less opaque under compression, not more. The helium would pump the star down, in other words, at times when all the observations showed it was actually being pumped up. Disappointed, Eddington put the model aside and searched for other explanations for the Cepheid variations.
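Eddington's dilemma can be caricatured in a toy model: a plain oscillator in which the opacity's response to compression enters as a damping term. If opacity rises on compression (eps > 0), each cycle feeds energy into the oscillation; if it falls on compression (eps < 0), as helium was then believed to behave, the oscillation dies away. This is a sketch for intuition only, not the real stellar equations.

    import math

    def final_amplitude(eps, steps=10000, dt=0.001):
        # toy pulsating layer: a restoring force plus opacity-driven
        # (anti-)damping proportional to velocity
        x, v = 1.0, 0.0          # initial displacement and velocity
        omega = 2 * math.pi      # natural frequency, arbitrary units
        for _ in range(steps):
            v += (-omega**2 * x + eps * v) * dt
            x += v * dt
        return math.hypot(x, v / omega)

    print(final_amplitude(+0.5))   # grows: the pulsation is pumped up
    print(final_amplitude(-0.5))   # shrinks: the pulsation is pumped down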
In 1953, however, the Russian astrophysicist Sergei Zhevakin (1916–2001) showed that at about 5300 kelvins, the outer layers of a dying star could actually become less transparent under compression, and that this effect was enough to allow stars at this surface temperature to pulsate in the way that Eddington proposed. (Eddington himself had died nine years previously.) A decade later, a team at the Max Planck Institute led by the German astrophysicist Rudolf Kippenhahn (born 1926) and the U.S. astronomer Norman Baker (1931–2005) simulated stars of the appropriate mass (about 7 solar masses) and showed that they pulsated as Cepheid variables during five separate intervals near the end of their lifetimes, changing in brightness exactly as Leavitt had observed.
Cepheids have a number of properties that make them useful to astronomers. One is that they are much larger and brighter than the Sun. While the Sun is bright enough to be seen by the unaided eye from a distance of perhaps 50 to 100 light-years, depending on the acuity of that eye, delta Cephei is about 2000 times intrinsically brighter, and can therefore be seen from perhaps 3000 light-years. With the advent of reliable long-exposure photography, Cepheids could be detected even in other galaxies.
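The arithmetic here is just the inverse-square law: visibility range scales as the square root of intrinsic brightness. Taking 70 light-years as a middle value of the Sun's range:

    import math

    sun_range_ly = 70.0                     # middle of the 50-100 range above
    print(sun_range_ly * math.sqrt(2000))   # ~3100 light-years for delta Cephei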
A second useful property is, of course, the period-luminosity relationship that Leavitt discovered. One early problem was that Leavitt had determined her relationship using only stars in the SMC. No one seriously believed that Cepheids in the SMC were necessarily different from those, say, in the Milky Way, but then again, the distance to the SMC was not well known—recall that Hertzsprung had severely underestimated it, and he had done so using the Cepheid relationship itself. As far as anyone knew, if the SMC were further away than expected, then the Cepheids had to be brighter than believed (otherwise, they would not be visible at that great distance), and that would quantitatively affect the period-luminosity relationship. The best that anyone could do, so long as that situation persisted, was to measure distances to objects in terms of the distance to the SMC.
Nevertheless, that was something. In 1924, the American astronomer Edwin Hubble (1889–1953) identified a handful of Cepheids in the Andromeda Nebula. Astronomers at the time were divided as to the nature of the Nebula: Many astronomers, including the influential American scientist Harlow Shapley (1885–1972), felt that the Milky Way was all there was to the universe, and that the Andromeda Nebula was simply a cloud of gas embedded in the Milky Way. On the other side were those such as Heber Curtis (1872–1942), who thought that the Milky Way was simply one of a number of galaxies ("island universes," they were called).
Shapley had, in fact, recently conducted a larger study of Cepheid variables than Hertzsprung had done a decade earlier. From this larger base, he concluded that the distance to the SMC was not 37,000 light-years after all, but more like 95,000 light-years (about half the current best estimate).
In conducting that study, Shapley had to contend with the relative scarcity of Cepheid variables generally: they could only be massive stars, and massive stars close to death at that. With that in mind, the mere fact that the Andromeda Nebula contained multiple Cepheids suggested that it was large and distant, rather than small and nearby.
But it was the period-luminosity relationship that nailed the case in favor of Andromeda being a separate galaxy. The Cepheids that Hubble detected were both dim and of relatively long period, suggesting that they were intrinsically bright and very far away—about eight times the distance to the SMC, in fact.
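In the same spirit as the earlier sketch, a Cepheid that appears about 4.5 magnitudes fainter than Leavitt's rule predicts for its period sits about eight times further away than the SMC. (The 4.5 is back-computed from the factor of eight, for illustration; it is not a figure from Hubble's data.)

    print(10 ** (4.5 / 5))   # ~7.9 times the SMC's distance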
Now whatever the distance to the SMC happened to be (and it was, after all, believed to be a then-unimaginably large 95,000 light-years off), it was uniformly believed to lie outside the Milky Way. There was no way that Andromeda could be simultaneously eight times further away than the SMC and still inside the Milky Way.
What's more, it was later recognized by Walter Baade (1893–1960) that there are two kinds of Cepheids, each with its own period-luminosity relationship. What Hubble had observed were the intrinsically brighter of the two, so Andromeda was still further away than expected. That, in conjunction with a better determination of the distance to the SMC, pushed the distance to the Andromeda Galaxy (as it is now universally known) to the current value of 2.5 million light-years.
In order to pin down the distance to galaxies and other distant objects by means of Cepheid variables, we need the absolute distance to a Cepheid variable, not just its distance as a fraction of the distance to the SMC. Ideally, we would have a collection of nearby Cepheids, just in case some of them were unusual in some way.
But, as we mentioned earlier, Cepheids are scarce. The closest one is Polaris, the North Pole star, and as recently as 1990, even its distance was not very well known. The gold standard for determining the distance to an individual star is something called parallax. A star is observed twice, six months apart, and the change in its apparent position from the two vantage points on opposite sides of the Earth's orbit, 300 million kilometers apart, is used to triangulate the star's distance, in much the same way that the view from our two eyes is used by our brain to determine the distance to objects. (See my essay "Double Vision" (October 1999) for more details about parallax.)
The challenge is that even the nearest stars are so far that the parallax is tiny, amounting to just 0.7 arcseconds. Each arcsecond is 1/3600 of a degree, so the largest parallaxes are about 1/5000 of a degree. For most stars, the parallax is even smaller. Earthbound instruments peering up through the turbulent atmosphere simply couldn't resolve differences that small.
So in 1989, the European Space Agency launched Hipparcos (named after the ancient Greek astronomer Hipparchus), a satellite designed to measure parallaxes to a precision of milli-arcseconds. In its purview was Polaris, for which Hipparcos measured a parallax of about 7.5 milli-arcseconds, which corresponds to a distance of 433 light-years. This was more or less in line with previous estimates, but to an unprecedented level of accuracy and certainty.
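Converting a parallax to a distance is straightforward: a parsec is defined as the distance at which the parallax is one arcsecond, and a parsec is about 3.26 light-years. A minimal converter:

    def parallax_to_light_years(parallax_mas):
        parsecs = 1000.0 / parallax_mas   # distance in parsecs = 1 / arcseconds
        return parsecs * 3.2616           # light-years per parsec

    print(parallax_to_light_years(7.5))   # ~435 ly; the 433 above reflects the
                                          # more precise measured parallax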
Or so it seemed. Later spectroscopic studies suggested that Polaris might be up to 100 light-years closer than Hipparcos measured. However, the recent Gaia satellite, also launched by the European Space Agency, and designed to measure parallaxes down to tens of micro-arcseconds, seems to have definitively determined the distance to Polaris as 445.3 light-years.
Part of the problem is that sensitive parallax determinations depend on an exact measurement of a star's position on a satellite image, and bright stars like Polaris (which is of magnitude about 2) tend to produce big blobs. Sophisticated data analysis techniques are needed to reduce the blobs down to their precise location.
But there might be another reason, having to do with Polaris specifically. Polaris has never been a strongly variable star, with its maximum being at most perhaps a tenth of a magnitude or so brighter than its minimum.
Over the course of the 20th century, however, its pulsation diminished even further, and by the 1990s, it was down to about a hundredth of a magnitude. One reason might be that Polaris was about to exit one of its stages of Cepheid pulsation, and one research team predicted that Polaris would become perfectly stable by 1993.
But Polaris confounded expectations and did not become stable. Since 1995, it has steadily been on the rebound, and is now back to varying over about a tenth of a magnitude. And Polaris may have other surprises in store. Based on a historical study of Polaris observations, it appears that its average brightness may have increased over the last millennium or so, by about one full magnitude. What could be causing this anomalous brightening? Astronomers aren't sure.
But that's an essay for another time!
Copyright (c) 2019 Brian Tung