Astronomical Games: December 2007

A Matter of Great Magnitude

Contemplating the backward-seeming magnitude scale

Don't you see—I am very busy with matters of consequence!

—Antoine de Saint Exupéry, The Little Prince

HUMANS ARE fascinated and impressed with great size. I don't know if there's any survival advantage associated with being that impressed by size, but whether there is or not, I think it explains our fascination (as young children, anyway) with the dinosaurs. The idea that there were land-lubbing animals as big as a building captures the imagination. And what does that say of the force that snuffed them out? Whether it was an asteroid, a volcano, or climatic change (or all three at once), it was no mere trifle.

Part of why great size fascinates us is that it's rare. The world is full of small organisms, but large ones are rare, the main reason being that, by and large, small organisms persist through fecundity, while large ones survive through individual robustness; there just isn't enough food in the world for them to survive through fecundity, too. It's a rather substantial simplification, but it explains, at a high level at least, the relative rarity of great size.

Of course, there are matters of great size outside the biological world. There are the stars, for instance.

Originally, the notion that the stars had great size was dismissed. They were points of light in the sky, perhaps holes in some great sheet that shielded us from some outer fire.

It was the Babylonians, and then the ancient Greeks, who first tried to put significant order to the sky; the word astronomy itself means "law or order of the stars." One of these Greeks was Hipparchus, who lived in or near Rhodes from about 190 to 120 B.C. He put together a catalogue of stars in which he grouped the stars by brightness into six categories, called megethos, a Greek word meaning "size" or "magnitude." Stars that were of the first rank—that is, very bright indeed—were placed into the first magnitude, while those that were just bright enough to be seen by an ordinarily sensitive eye were placed into the sixth magnitude.

It's unclear whether Hipparchus's use of the term megethos actually meant that he believed first-magnitude stars to be larger than sixth-magnitude ones. In fact, it's unclear what most of Hipparchus's beliefs on stars were, because aside from a commentary on a book by Aratus, all of Hipparchus's works have been lost, and we only know of Hipparchus's catalogue by way of the late Greek astronomer Ptolemy (c. 85–165), who appropriated it for his masterwork on classical astronomy, the Almagest. The Almagest was translated into both Latin and Arabic, and thus spread Hipparchus's catalogue and its notion of stellar magnitude throughout the post-classical western world.

As with organisms, there were lots of the lowly sixth-magnitude stars—thousands of them. But there were fewer of the fifth-magnitude stars, fewer still of the fourth, and so on, until there were only a couple of dozen first-magnitude stars in the entire sky.

If you think about it, six magnitude classes is just about right. If there are too few classes, then stars of widely different brightnesses will have to be grouped into the same category; on the other hand, if there are too many classes, they will likely be too fine, and there will be confusion as to whether a particular star ought to fall into the 235th magnitude class, or the 236th. Perhaps in part because of its efficient design, the magnitude system remained essentially unchanged for some two millennia.

Still, there were problems even with six magnitude classes. The very act of classifying stars into distinct categories raised boundary issues. The dimmest of the third-magnitude stars were bound to be just barely brighter than the brightest of the fourth-magnitude stars; did they really merit being an entire magnitude up? Worse yet, later investigations found that the dimmest stars in one category were often dimmer than the brightest ones in the next category down, due to errors in brightness comparisons, which prior to the development of astrophotography were performed by the human eye, with all its frailties.

Then, too, it happened that the range of brightness of the brightest stars, the first-magnitude ones, was broader by far (subjectively, at least) than that of the second magnitude, or the third, or any other magnitude. What's more, the development of the astronomical telescope led to the discovery by the Italian scientist Galileo Galilei (1564–1642) of stars much fainter than the previously accepted bottom-of-the-heap sixth-magnitude stars. Either these much dimmer stars had to be shoehorned into the sixth magnitude, or else the existence of seventh, eighth, and even ninth-magnitude stars had to be acknowledged. The time was ripe for the magnitude system to be placed on a steadier, more formal foundation.

One of the first to attempt to provide a precise basis for the magnitude system was the German-English astronomer and musician William Herschel (1738–1822). Herschel ran a series of observations in which he would simultaneously view two stars with two different telescopes. He would then gradually dim the brighter star by covering up "its" telescope's opening, until the two stars were equally bright at the eyepiece. By measuring the amount of the telescope opening, or aperture, that was blocked, he determined how much brighter one star was than the other. After some time, he concluded that the light of a star was inversely proportional to the square of its magnitude. That is, for instance, a sixth-magnitude star was 1/36 as bright as a first-magnitude star.

His son, John Herschel (1792–1871), also took an interest in the magnitude system. A dozen or so years after his father's death, he devised an instrument, called an astrometer, which he used to compare stars against a reduced image of the Moon. After careful consideration, he decided that his father's rule had been too conservative, and that sixth-magnitude stars were actually only about 1/100 as bright as first-magnitude stars.

Around the same time, the German physicist Carl August von Steinheil (1801–1870) suggested, on the basis of similar observations, that the magnitude classes were actually related by a constant ratio, which he determined to be 2.83. That is, the average first-magnitude star was 2.83 times brighter than the average second-magnitude star, which was 2.83 times brighter than the average third-magnitude star, and so on. This was the first time that a logarithmic scale had been proposed for the magnitude system.

Still, this was all just academic meandering. What was needed was a practical system to be formulated and adopted.

Enter the British astronomer Norman Robert Pogson (1829–1891). Born in Nottingham, England, he was an avid astronomer even as a child, despite receiving little formal education. By the time he was 18, he had computed the orbits of two comets. His studies led him to become an assistant at the Radcliffe Observatory in Oxford, in 1851. A decade later, he moved to Madras, India, to become the government astronomer at the Madras Observatory, where he remained until his death.

While he was still at Oxford, Pogson wrote a paper for the Royal Astronomical Society, in which he independently arrived at John Herschel's observation that first-magnitude stars were, on average, about 100 times brighter than sixth-magnitude stars. He doesn't seem to have been aware of Herschel's findings, but he did adopt Steinheil's notion of a logarithmic scale for magnitude.

In order to reconcile both of these properties—a factor of 100 between the first and sixth magnitudes, and a logarithmic scale—the constant ratio between magnitude classes had to be carefully chosen. In particular, when raised to the fifth power (spanning the five steps between the first and sixth magnitude classes), it had to equal 100. In other words, Pogson wanted the fifth root of 100, which is about 2.512.

The mathematics of logarithms allowed this system to be extended to any brightness ratio whatever. If the ratio of brightnesses was L, the difference in magnitude would be the logarithm of L, divided by 0.4. (The 0.4 comes from the fact that 2.512 is equal to 10 raised to the 0.4 power.) Bolstered by the relatively simple computations it involved and its close agreement with existing empirical magnitude classes, Pogson's system made slow but steady inroads into the astronomical community, culminating with the publication in 1905, 14 years after his death, of two massive star catalogues based on his work.
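To make that arithmetic concrete, here is a minimal Python sketch of Pogson's relation; the function names are my own, introduced only for illustration:

    import math

    # Pogson's relation: a brightness ratio L corresponds to a magnitude
    # difference of log10(L) / 0.4, i.e. 2.5 * log10(L).

    def magnitude_difference(brightness_ratio):
        """Magnitude difference corresponding to a given brightness ratio."""
        return math.log10(brightness_ratio) / 0.4

    def brightness_ratio(magnitude_difference):
        """Brightness ratio corresponding to a given magnitude difference."""
        return 10 ** (0.4 * magnitude_difference)

    print(brightness_ratio(1))        # about 2.512, the fifth root of 100
    print(brightness_ratio(5))        # 100, the span from first to sixth magnitude
    print(magnitude_difference(100))  # 5.0

The 0.4 in the first function is the same constant as in the prose above: 2.512 is 10 raised to the 0.4 power.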

Pogson's system also preserved a small insanity in notation that perplexes a brand new population of astronomy novices every holiday season. In formalizing stellar magnitudes, Pogson was careful to keep stars near the magnitudes at which Hipparchus had originally classified them, and in this he was mostly successful. By and large, a star that Hipparchus had classified into the fourth magnitude was likely to have a Pogson magnitude near 4.00. This, Pogson reasoned, would make it more likely for his system to be adopted.

However, in reducing magnitudes to precise values, Pogson had stripped them of all connotations of class, so that the "first equals brightest" principle that made perfect sense in Hipparchus's system made somewhat less than perfect sense in Pogson's. We are therefore left with stars of lower magnitudes being brighter, rather than dimmer, as you'd expect if magnitude truly meant size.

But this is a minor quibble. Pogson's innovation made it possible for astronomers to analyze the stars with unprecedented precision and ease, and as far as it goes, has changed only trivially in the century and a half since.

So far, we've ignored what makes one star a different magnitude than another. What about that? Why is one star a first-magnitude star, and another merely a fourth-magnitude star? Is it something about the stars themselves, or where they lie in space, or both?

Soon after we learned that the stars were suns and the Sun was a star, astronomers began asking themselves this very question. The Dutch astronomer Christiaan Huygens (1629–1695), in attempting to measure the distance to Sirius, assumed that stars—including the Sun—were of equal intrinsic brightness and only appeared brighter or dimmer because they were closer or further away, although he hastened to emphasize the uncertain nature of that assumption.

It was nearly impossible to say for sure either way. Whether Huygens was right or wrong, it was evident that the stars were far enough away that one couldn't tell, just by looking at them, even through a telescope, whether one was nearer or further than another. What was needed was a method for determining stellar distances that was independent of their intrinsic brightnesses. That method was stellar parallax, and even though it took some time for more than a handful of stars to have their distances measured, it was gradually discovered that stars in fact varied dramatically in intrinsic brightness.

You see, if the stars were of equal intrinsic brightness, then any differences in apparent brightness would have to result from corresponding differences in distance. One way to understand this is that the stars might be emitting equal amounts of light—that's what it means for them to be equally bright—but since one is further away, less of its light gets into our eyes, because our eyes present a smaller "target" as seen from the star. For instance, if one star were twice as far away, then our eyes would appear, from that star, half as wide and half as high, and would therefore cover only one-fourth the apparent area, so the star would look only one-fourth as bright as the other star. In general, a star n times as far away looks only 1/n² as bright—a principle known as the inverse-square rule of the propagation of light. (It also seems vaguely reminiscent of William Herschel's original suggestion regarding stellar magnitude.)

Until the distances to the stars were known and could be compared to one another, it was still possible that this alone was enough to explain the varied appearance of the stars. But in 1838, the German astronomer and mathematician Friedrich Bessel (1784–1846) measured the distance to a star by the name of 61 Cygni as 10.4 light-years, and around the same time, the Scottish astronomer Thomas Henderson (1798–1844) measured the distance to alpha Centauri as 3.3 light-years.

These two stars have magnitudes 5.21 and –0.01, respectively. Since these differ by a bit over 5.00, alpha Centauri must be a little more than 100 times as bright as 61 Cygni, meaning that if distance were the only factor involved in their appearance, 61 Cygni had to be more than 10 times further away. In fact, at 10.4 light-years, 61 Cygni is only a bit more than 3 times further away, demonstrating that it must be intrinsically dimmer than alpha Centauri.
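Here is a quick check of that reasoning in Python, using the magnitudes and distances quoted above (a sketch only; the variable names are mine):

    import math

    m_61cyg, m_acen = 5.21, -0.01   # apparent magnitudes quoted above
    d_61cyg, d_acen = 10.4, 3.3     # distances in light-years quoted above

    # Brightness ratio implied by the magnitude difference (Pogson's relation)
    ratio = 10 ** (0.4 * (m_61cyg - m_acen))
    print(ratio)               # about 123: alpha Centauri appears over 100x brighter

    # If the two stars were intrinsically identical, distance alone would have
    # to account for that ratio, putting 61 Cygni sqrt(123), or about 11,
    # times further away.
    print(math.sqrt(ratio))    # about 11

    # The actual distance ratio is only about 3, so 61 Cygni must be
    # intrinsically dimmer than alpha Centauri.
    print(d_61cyg / d_acen)    # about 3.2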

For such reasons, astronomers began to speak of two different magnitudes for stars. One was the apparent magnitude, which is what magnitude had meant all along, and one was the absolute magnitude, which is what magnitude the stars would have if they were all moved to the same distance away. What that distance was, exactly, was a matter of some discussion. It didn't affect the relative brightness of the stars—a star that was intrinsically brighter than another would remain so whether they were 10 light-years away or 1,000—but it would affect the numerical magnitude.

Eventually, it was decided that the reference distance for a star's absolute magnitude should be 10 parsecs, or about 32.6 light-years. (A parsec is the distance from which the Earth-Sun distance covers one arcsecond—just 1/3,600 of a degree.) For example, Sirius, as mentioned earlier, has an apparent magnitude of –1.44. However, it is that bright in large part because it's very close: just 8.6 light-years away. If it were moved further away, to a distance of 32.6 light-years, it would become dimmer by a ratio of (32.6/8.6)², or about 14.4. This translates to an increase in magnitude of 2.89, so Sirius has an absolute magnitude of –1.44 plus 2.89, or 1.45.
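The same arithmetic, step by step, as a short Python sketch (the figures are the ones quoted above; the variable names are mine):

    import math

    m_sirius = -1.44      # apparent magnitude of Sirius
    d_sirius = 8.6        # distance in light-years
    d_reference = 32.6    # 10 parsecs, in light-years

    # Moving Sirius from 8.6 to 32.6 light-years dims it per the inverse-square rule.
    dimming = (d_reference / d_sirius) ** 2
    print(dimming)                    # about 14.4

    # That dimming factor corresponds to an increase in magnitude (Pogson's relation).
    delta_m = 2.5 * math.log10(dimming)
    print(delta_m)                    # about 2.89

    abs_mag_sirius = m_sirius + delta_m
    print(abs_mag_sirius)             # about 1.45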

Another way of expressing the distance to Sirius, making use of the magnitude figures, is to say that Sirius has a distance modulus of –2.89. That is, if you knew the apparent magnitude of Sirius (which you could easily do just by measuring it), you could obtain its absolute magnitude (which is not directly measurable) by subtracting –2.89 (or by adding 2.89, as we did above), without having to go through any logarithms.

At first glance, the distance modulus doesn't help us much: in order to compute it, we need to know the distance to Sirius, and we have to go through all the logarithms anyway.

However, that's only the case if Sirius is alone. As it happens, Sirius has (at least) one companion, called Sirius B, which has an apparent magnitude of about 8.5. (It's difficult to measure precisely, because it is so close to Sirius that it's easily lost in the glare.) Since we've already calculated Sirius's distance modulus, we can use it again with Sirius B, which is essentially at the same distance, and calculate its absolute magnitude as 8.5, minus –2.89, or about 11.4.
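In code, the convenience of the distance modulus is just a subtraction (a sketch using the figures above):

    # Distance modulus of the Sirius system: apparent minus absolute magnitude.
    m_sirius, M_sirius = -1.44, 1.45
    distance_modulus = m_sirius - M_sirius   # about -2.89

    # Sirius B sits at essentially the same distance, so the same modulus applies:
    m_sirius_b = 8.5
    M_sirius_b = m_sirius_b - distance_modulus
    print(M_sirius_b)                        # about 11.4, with no logarithms needed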

Distance modulus is even more useful for the really large collections of stars, like galaxies. Most galaxies are at least tens of millions of light-years away, whereas even the largest galaxies are perhaps a million light-years across, so most of the objects in a galaxy are at essentially the same distance from us, and therefore have the same distance modulus. Once we've computed the distance modulus for the galaxy, then, we can determine the absolute magnitude of any object in that galaxy by subtracting that modulus from the object's apparent magnitude. Astronomers can then use this information to compare objects in the remote galaxy and our own.

It's a maxim of astronomy that when it comes to telescope performance, aperture—the size of the main mirror or lens—wins. One main reason for that is the larger the mirror or lens (the objective), the more light it collects, and the brighter an individual object appears in the eyepiece. Not only does it make each object easier to see, but it also makes it possible to see more objects than with a smaller telescope, since there are always objects just beyond the light grasp of any telescope.

Just how many more objects can a larger telescope see? To begin with, a telescope gathers light in proportion to the area of its objective, and that area goes up as the square of the aperture. For instance, compared to a 4-inch telescope, an 8-inch telescope gathers

8²/4² = 64/16

or four times as much light. Does that mean that it sees four times as many objects? Not quite.

Suppose that all objects are intrinsically equally bright. (I know, we said that they aren't, but for the moment, just suppose.) Then any objects that appear dimmer than others do so purely by virtue of their distance. Remember that according to the inverse-square rule of the propagation of light, objects that are n times as far appear 1/n² as bright. In particular, objects that are twice as far appear 1/4 as bright.

This means that the 8-inch telescope, which gathers four times as much light as the 4-inch telescope, can see objects that are twice as far. For instance, the 4-inch telescope might help us to see objects up to 100 light-years away, but no further. Objects 200 light-years away would be too dim, by a factor of four, to see in that telescope. But the 8-inch telescope would amplify that light by a factor of four, so that it would be just bright enough to see.

The 4-inch telescope would thus be able to see only those objects within a 100-light-year radius, whereas the 8-inch would see all objects within a 200-light-year radius. Since a sphere's volume goes up in proportion to the cube of its radius, there should be 2³ or eight times as many objects in the larger sphere, assuming that objects are evenly distributed throughout space.

In general, we can state the following rule:

A telescope n times larger than another one gathers n² times as much light, which allows it to see objects within a sphere n times as wide, which in turn contains n³ times as many objects.
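A tiny sketch of that rule in Python (the assumptions—equally bright, evenly distributed objects—are the ones stated in the text; the function name is mine):

    def telescope_comparison(n):
        """For a telescope n times the aperture of another, return
        (light-gathering ratio, limiting-distance ratio, object-count ratio),
        assuming equally bright, evenly distributed objects."""
        light = n ** 2   # objective area scales as the square of the aperture
        reach = n        # inverse-square rule: limiting distance scales as sqrt(light)
        count = n ** 3   # sphere volume scales as the cube of its radius
        return light, reach, count

    print(telescope_comparison(2))       # (4, 2, 8): the 8-inch vs. 4-inch example
    print(telescope_comparison(1.585))   # (about 2.512, 1.585, about 3.98)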

Of course, this all assumes that all objects are equally bright. Or does it?

Let's say that there are two different brightness classes, Bright and Dim, so that the 4-inch telescope can see Bright objects within 100 light-years, but Dim objects only to 20 light-years. In that case, the 8-inch telescope sees Bright objects to 200 light-years, but Dim objects only to 40 light-years. The 8-inch therefore sees eight times as many of the Bright objects as the 4-inch, and eight times as many of the Dim objects, too. In short, it sees eight times as many objects overall, just as before.

The same argument can clearly be made if there are three brightness classes, or four, or indeed any number of brightness classes. So it turns out that we didn't need the initial assumption that all objects are equally bright; the same rule works out either way.

There's no requirement that n be an integer, either. It could be any positive number: 1/2, 5.3, or 3.14159. For instance, if n = 1.585, say, then the larger telescope gathers n² = 2.512 times as much light, and therefore sees n³ = 3.982 times as many objects.

Recall that n² = 2.512 is just the ratio in brightness over a magnitude gap of 1.00. So, if the smaller telescope can see objects only down to magnitude 12, say, the larger one can see them down to magnitude 13, and will therefore see roughly four times as many objects. (Well, 3.982, but who's keeping track that close?)

In fact, we can dispense with the telescopes altogether (only for the sake of discussion!) and say that there are four times as many objects of magnitude 13 or brighter as there are of magnitude 12 or brighter. There's nothing special about 12 and 13; it could be any pair of numbers separated by 1.

This is a pretty handy formula, for it allows us to envision how the number of objects in various magnitude classes grows as the magnitude gets larger. (Remember that larger magnitudes mean dimmer objects, not brighter ones.) However, it is a little cumbersome to be forever saying "or brighter." It would be nice if we could just say "of the twelfth magnitude." Can we compare the number of those to the number of objects of the thirteenth magnitude?

It turns out that we can do so simply by removing the words "or brighter." You may think this is some mistake, but no: There really are four times as many objects of the thirteenth magnitude as there are of the twelfth magnitude.

Suppose we have a catalogue containing all objects of the thirteenth magnitude or brighter. Assume, for the sake of argument, that it contains 16 million objects. Then the number of objects of the twelfth magnitude or brighter would have to be one-fourth of that, or 4 million. That leaves 12 million as the number of objects of exactly the thirteenth magnitude.

We can go further: Since there are 4 million objects of the twelfth magnitude or brighter, the number of objects of the eleventh magnitude or brighter must be one-fourth of that, or just 1 million, leaving just 3 million as the number of objects of exactly the twelfth magnitude. And sure enough, 3 million is one-fourth of 12 million. Again, there's nothing special about 12 and 13. As long as the magnitudes are separated by exactly 1, the numbers of objects of those magnitudes should differ by a ratio of about 4.
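The bookkeeping in the last few paragraphs can be checked in a few lines of Python (the 16 million figure is the hypothetical one used above):

    # Hypothetical cumulative counts: objects of magnitude m or brighter,
    # growing by a factor of about 4 (more precisely, 10 ** 0.6, or 3.98)
    # per magnitude, as derived in the text.
    factor = 4
    cumulative = {13: 16_000_000}
    for m in (12, 11):
        cumulative[m] = cumulative[m + 1] // factor   # 4 million, then 1 million

    exactly_13 = cumulative[13] - cumulative[12]      # 12 million
    exactly_12 = cumulative[12] - cumulative[11]      # 3 million
    print(exactly_13, exactly_12, exactly_13 / exactly_12)   # the ratio is again 4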

They should. Do they in fact?

In Hipparchus's original star catalogue, as extended somewhat by Ptolemy, there are 918 stars for which a magnitude is given. (Some of the other "stars" are actually nebulae, though this was not completely understood until well after Ptolemy's time.) Here is a short table of the number of stars in each magnitude class:

Magnitude    # Stars
    1             15
    2             41
    3            186
    4            436
    5            194
    6             46

Obviously, Hipparchus and Ptolemy didn't capture all of the fifth and sixth-magnitude stars in the sky, but even taking that into account, there don't seem to be as many of the dimmer high-magnitude stars as there "ought" to be. However, Hipparchus and Ptolemy's catalogues were based on unaided-eye observations, and might be susceptible to human error, careful though they might have been.

Fortunately, we have more comprehensive and accurate magnitude data from a European Space Agency satellite called HIPPARCOS, which I've previously mentioned. From the results of that mission, we can construct the following table of 8,827 stars:

Magnitude    # Stars
   –1              2
    0              8
    1             12
    2             71
    3            192
    4            620
    5          1,926
    6          5,996

I've plotted those counts below on a semi-logarithmic scale (meaning that the number of stars, at left, is scaled in powers of 10), along with two lines showing trends associated with per-magnitude ratios of 3 and 4. As you can see, the fit is much better with a ratio of 3.

[Figure: star counts per magnitude class from the HIPPARCOS data, plotted on a semi-logarithmic scale, with trend lines for per-magnitude ratios of 3 and 4]
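You can see the same thing numerically by taking the ratios of successive counts from the HIPPARCOS table above (a quick sketch):

    # Star counts per magnitude class, from the HIPPARCOS table above.
    counts = {-1: 2, 0: 8, 1: 12, 2: 71, 3: 192, 4: 620, 5: 1926, 6: 5996}

    magnitudes = sorted(counts)
    for m_prev, m_next in zip(magnitudes, magnitudes[1:]):
        ratio = counts[m_next] / counts[m_prev]
        print(f"magnitude {m_prev} -> {m_next}: ratio {ratio:.1f}")

    # The brightest classes are too sparse to say much, but from about
    # magnitude 2 onward the ratio hovers near 3 rather than 4.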

But what does it mean? It means that one further assumption we made—that objects (visible objects, at least) are evenly distributed throughout space—must not be true. But if they aren't evenly distributed, in what way are they clumped together? And why?

I'll discuss that in my next essay.

Copyright (c) 2007 Brian Tung