So far, all of the tests that Grasshoff has reviewed to try to determine whether or not Ptolemy used a Hipparchan star catalog have proven inconclusive. In the last post, we showed that the solar theory was quite obviously in use in constructing the star catalog. This provides a reasonable explanation for the mean error being $\approx 1º$ as well as why the distribution of errors could have matched Hipparchus’ even without Ptolemy having used his data1. But just because it provides a reasonable explanation of how Ptolemy could have gotten similar incorrect results doesn’t mean that he did. Thus, Grasshoff needs another way to try to distinguish between these two historical interpretations.
Grasshoff now turns to the frequency of the fractions of degrees, which he describes as a “powerful criterion to decide between suggested historical interpretations.”
Review
As a reminder, the coordinates given in the Almagest aren’t in decimals. Rather, they are whole number and fraction combinations. Most of the values are given to a precision of $\frac{1}{6}º$ but a small number of them are given to a precision of $\frac{1}{4}º$. Obviously, some results are indistinguishable between the two precisions as $\frac{3}{6} = \frac{2}{4}$ and $\frac{0}{6} = \frac{0}{4}$.
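To keep the bookkeeping below straight, here's a minimal sketch (my own convenience function, nothing from Grasshoff) classifying a coordinate's minutes value as an unambiguous sixth, an unambiguous quarter, or ambiguous between the two:

```python
from fractions import Fraction

def classify_increment(minutes):
    """Classify the minutes part of an Almagest coordinate.

    Sixths of a degree land on multiples of 10', quarters on multiples of 15'.
    Values divisible by both (0' and 30') are ambiguous between the two precisions.
    """
    frac = Fraction(minutes, 60)                   # fraction of a degree
    is_sixth = frac.denominator in (1, 2, 3, 6)    # 0', 10', 20', 30', 40', 50'
    is_quarter = frac.denominator in (1, 2, 4)     # 0', 15', 30', 45'
    if is_sixth and is_quarter:
        return "ambiguous"
    if is_sixth:
        return "sixth"
    return "quarter" if is_quarter else "other"

for m in (0, 10, 15, 20, 30, 40, 45, 50):
    print(f"{m:2d}'  {classify_increment(m)}")
```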
First, let’s put together a quick table of how many stars fall into each increment for both ecliptic longitude and latitude:
Increment | Longitude | Latitude | Total |
---|---|---|---|
$0’$ | $222$ | $228$ | $450$ |
$10’$ | $180$ | $101$ | $281$ |
$15’$ | $4$ | $95$ | $99$ |
$20’$ | $179$ | $110$ | $289$ |
$30’$ | $98$ | $212$ | $310$ |
$40’$ | $241$ | $126$ | $367$ |
$45’$ | $0$ | $49$ | $49$ |
$50’$ | $101$ | $104$ | $205$ |
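As a quick sanity check, the table can be tallied directly; each coordinate column should account for all $1025$ stars of the catalog:

```python
# Counts transcribed from the table above (increment in arcminutes).
longitude = {0: 222, 10: 180, 15: 4,  20: 179, 30: 98,  40: 241, 45: 0,  50: 101}
latitude  = {0: 228, 10: 101, 15: 95, 20: 110, 30: 212, 40: 126, 45: 49, 50: 104}

# Every star contributes one ecliptic longitude and one ecliptic latitude.
assert sum(longitude.values()) == 1025
assert sum(latitude.values()) == 1025

# Row totals (longitude + latitude entries at the same increment).
for inc in sorted(longitude):
    print(f"{inc:2d}'  total = {longitude[inc] + latitude[inc]}")
```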
Grasshoff considers four “anomalies” in this distribution that require explanation:
1) There are only four stars that have a longitude ending with $15’$. Three of the four are in Virgo2.
2) The number of stars with an increment of $40’$ is suspiciously high. Grasshoff claims that, if the “longitudes represent readings from an observational instrument3, one would expect the full and half degrees to be more frequent than the others.”
I was rather skeptical of this statement so I tested it using the altitudes from my own quadrant and it turns out to be true:
The data clearly shows that I read something as a whole number or half degree more frequently than anything else4 (the tally itself is sketched just after this list). Interesting observational bias there!
3) Similar to the above, the number of half degree increments is suspiciously low.
4) The increments of $15’$ and $45’$ are bizarrely different between latitude and longitude.
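For the curious, the tally behind the quadrant comparison in anomaly 2 is trivial; this is roughly what I did, with the file name standing in for my own observation log:

```python
from collections import Counter

# Altitude readings in decimal degrees, one per line (placeholder file name).
with open("quadrant_altitudes.txt") as f:
    readings = [float(line) for line in f if line.strip()]

# Tally the fractional part of each reading, snapped to the nearest 0.05 deg
# (the finest estimate my tenth-degree graduations allow).
tally = Counter(round((r % 1.0) * 20) / 20 for r in readings)
for frac in sorted(tally):
    print(f"{frac:.2f}  {tally[frac]}")
```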
Grasshoff next explores various interpretations that might explain these anomalies.
Two Instruments
The idea that this represents two instruments was proposed by Vogt, who assumed one device had $\frac{1}{3}º$ increments5 and the other had $\frac{1}{2}º$ increments. Vogt considered it evidence for this that the odd multiples of $\frac{1}{6}º$, which would require estimating between the $\frac{1}{3}º$ divisions, are less common than the even ones.
Again, comparing to the data from my quadrant, we can clearly see that the instances in which I would have to estimate between divisions are less common than readings that fall directly on a division6.
However, Vogt’s interpretation runs into trouble with the immediately identifiable $\frac{1}{4}º$ stars, as their frequency is highly imbalanced, with nearly twice as many at an increment of $\frac{1}{4}º$ as at $\frac{3}{4}º$. There is not a good explanation for this without further hypotheses.
Thus, Grasshoff considers the possibility that some rounding occurred. To analyze this, he begins with a few assumptions about the base number of $\frac{1}{6}º$ and $\frac{1}{4}º$ stars. First, for the $\frac{1}{6}º$ stars, he notes that the average count per increment for the unambiguously $\frac{1}{6}º$ stars is $110.25$7, which he rounds to the nearest whole number: $110$. Then, he assumes this is also the number of $\frac{1}{6}º$ stars at each of the two increments that are not unambiguous8.
That accounts for $660$ of the $1025$ stars in the Almagest, leaving $365$, which he divides evenly among the four possible $\frac{1}{4}º$ increments for roughly $91$ each. However, he must then contend with why the count at the $\frac{3}{4}º$ increment is barely half that at the $\frac{1}{4}º$ increment.
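The accounting is simple enough to spell out. Here is a sketch of it as I understand it, using the latitude counts from the table above (which appears to be where the $110.25$ comes from):

```python
# Unambiguously 1/6-degree increments (10', 20', 40', 50') in ecliptic latitude.
unambiguous_sixths = [101, 110, 126, 104]
base = sum(unambiguous_sixths) / len(unambiguous_sixths)   # 110.25
per_sixth = round(base)                                    # Grasshoff's 110

sixth_total = 6 * per_sixth          # all six 1/6-degree increments -> 660 stars
quarter_total = 1025 - sixth_total   # remainder left for the 1/4-degree increments -> 365
per_quarter = quarter_total / 4      # ~91 per quarter-degree increment

print(base, sixth_total, quarter_total, per_quarter)   # 110.25 660 365 91.25
```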
Thus, he considers two rounding scenarios. The first is that stars of this increment were split between the whole and half degree increments. Here, his procedure becomes odd because he assumes that $28$ would have gone to the whole number bucket and $10$ to the half degree bucket. Why? Because that gives the best fit to the data. Unfortunately, there is no prior hypothesis behind that particular split, so this again becomes a case in which Grasshoff is flirting with p-hacking.
So overall, I don’t find this to be a good hypothesis because 1) it provides no reason someone would have rounded the $\frac{3}{4}º$ increment stars but not the $\frac{1}{4}º$ ones, and 2) it provides no reason they would have rounded the $\frac{3}{4}º$ stars inconsistently.
But Grasshoff also considers the situation in which half of those $\frac{3}{4}º$ stars were rounded in a consistent manner to whole degree increments. Again, this provides no rationale as to why someone would do that for this increment but not the other, but at least it removes the second objection.
The results are summarized in a table9:
To Grasshoff, this is enough to “confirm the assumption of two differently graduated [instrument] readings”.
I generally disagree that this is sound methodology as there is no external reason to suppose rounding. It’s making an assumption to make the data fit and then acting surprised the data fits10.
To bolster this hypothesis, Grasshoff does come up with a further way to test it. Specifically, if values that should have fallen into the $\frac{3}{4}º$ bin instead got improperly rounded into the whole number or half degree bin, then this should shift the mean error for the stars in those bins. Grasshoff goes through some math and shows that, if $\frac{1}{2}$ of the $\frac{3}{4}º$ stars are rounded up to the nearest whole number, this would result in the mean error in the whole number bin increasing by $0.08º$11. If that same proportion of $\frac{3}{4}º$ stars are rounded down to the nearest whole number12, then it would result in a change to the mean error of the whole number bin of $-0.13º$13.
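Grasshoff doesn't tabulate the intermediate counts, so I can't reproduce his $0.08º$ and $-0.13º$ exactly, but the bookkeeping behind a shift like this is simple. Here's a hedged sketch of the general calculation, with purely illustrative counts of my own choosing:

```python
def mean_error_shift(n_bin, n_moved, offset, moved_mean=0.0, bin_mean=0.0):
    """Change in a bin's mean error when n_moved stars are rounded into it.

    Each moved star's error changes by `offset`: +0.25 deg when a 45' value is
    rounded up to the next whole degree, -0.75 deg when it is truncated down.
    `moved_mean` is the moved stars' mean error before rounding (assumed zero
    here, which seems to be Grasshoff's assumption as well).
    """
    new_mean = (n_bin * bin_mean + n_moved * (moved_mean + offset)) / (n_bin + n_moved)
    return new_mean - bin_mean

# Illustrative only: about half of the ~91 three-quarter-degree stars merged
# into a whole-degree bin of a few hundred stars.
print(mean_error_shift(n_bin=228, n_moved=45, offset=+0.25))   # rounded up
print(mean_error_shift(n_bin=228, n_moved=45, offset=-0.75))   # truncated down
```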
Grasshoff then creates a histogram of the mean error for each of the various increment bins, but it turns out it’s not a sensitive enough test to determine whether or not this is true. These small changes to the mean error for each bin are dwarfed by the random error. As Grasshoff puts it:
The uncertainty of the mean values with a standard deviation of about $0.08º$ is nevertheless so large that there is a considerable chance for random variations without significance for the particular rounding procedure.
But Grasshoff isn’t deterred. Instead, he tries slicing the data in a different manner to see if that improves things. Specifically, he recalculates for only ecliptic latitudes from $0º$ to $20º$ ignoring southern latitudes. Slicing the data until you find something that supports a hypothesis without a hypothesis as to why you should slice the data that way crosses firmly into p-hacking territory.
Doing this, Grasshoff is able to produce a mean error in the whole number bin that lies somewhat outside the standard range. Grasshoff only presents a chart, so it’s hard to tell what the exact errors in each bin are, but attempting to estimate, I find a standard deviation of about $0.08º$14 and the error in the whole number bin to be just over $0.2º$. Thus, a deviation of about $3\sigma$.
Next, Grasshoff produces a histogram of errors in the whole number bin, showing that the distribution is notably non-Gaussian. This can be interpreted as supporting the hypothesis that some stars were truncated into this bin, but it may also be due to random noise, given the favorable filtering of the data described above.
Here, we can clearly see that the distribution had an odd hump on the slightly negative side of the peak which is indeed consistent with the rounding in question.
Grasshoff ends his analysis there, which I think was rather premature. There is a further question that I feel went unchecked. Specifically, if the hump here is caused by a truncation of the $\frac{3}{4}º$ stars, then we should expect that secondary distribution to have a mean error of about $-\frac{3}{4}º$. To analyze that, we would need to perform a cluster analysis to find the best fit for two normal distributions. Sadly, I don’t have the precise numbers in each of the bins to do it myself, but eyeballing it, I feel like the peak of a second curve would be closer to $-0.5º$ than $-0.75º$, though reasonably close.
If it did come out to be close to $-0.75º$, I think it would represent better evidence than what Grasshoff has presented thus far.
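For what it's worth, here's roughly how I would run that check if the per-star errors in the whole number bin were published; a minimal sketch using scikit-learn's GaussianMixture, with the data file standing in for numbers I don't have:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder: per-star errors (observed minus true, in degrees) for the
# whole-degree bin -- the values Grasshoff plots but does not tabulate.
errors = np.loadtxt("whole_degree_bin_errors.txt").reshape(-1, 1)

# Fit a two-component mixture: one component for the ordinary errors and one
# for stars hypothetically truncated down from the 45' increment.
gmm = GaussianMixture(n_components=2, random_state=0).fit(errors)

for weight, mean in zip(gmm.weights_, gmm.means_.ravel()):
    print(f"weight {weight:.2f}, mean {mean:+.2f} deg")
# If the truncation hypothesis holds, the secondary component's mean should
# land near -0.75 deg rather than, say, -0.5 deg.
```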
One Instrument
Grasshoff’s analysis of the possibility that the different precisions listed are the result of a single instrument is much shorter. He reminds us that R. R. Newton proposed a single instrument and describes a potential way this could have happened. He suggests an instrument with graduations of $\frac{1}{2}º$ in which the observer would estimate quarters of a degree if a star fell roughly midway between two divisions and sixths if it fell noticeably closer to one division than the other.
Grasshoff rejects this for three reasons:
1) Making such estimations is difficult, and if there were sufficient room to make such an estimation, why would the maker not add another graduation? Furthermore, there is no other historical example we know of in which astronomers engaged in such estimations.
2) The distribution of errors for the $\frac{1}{4}º$ stars is different than those of the $\frac{1}{6}º$ stars.
3) There is only one star brighter than magnitude $3$ (Pollux) that is an unambiguous $\frac{1}{4}º$ star. If this were observed with a single instrument, there is no reason this should be the case.
As a final thought, Grasshoff also considers the possibility that this isn’t representative of different observers or instruments, but of technique. He rightly points out that the use of an armillary sphere is complicated. Various rings need to be aligned, an object of known position sighted, and then the second object sighted without disturbing the setup, all quickly enough that there isn’t a significant interval of time, since the sky drags the coordinate system out of position by $\approx 1º$ every four minutes.
Instead, he proposes that some objects may have been catalogued using a method along the lines of what I have been doing with the quadrant: finding the much simpler altitude as the object crosses the meridian and then transforming the coordinates to the desired ecliptic coordinates. A single minute of time corresponds to $\approx \frac{1}{4}º$ of the rotation of the sky.
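To make that concrete, here's a rough sketch of the transformation involved, in modern terms (obviously not how Ptolemy would have computed it): the meridian altitude gives the declination, the transit time gives the right ascension, and those convert to ecliptic coordinates through the obliquity.

```python
from math import radians, degrees, sin, cos, tan, asin, atan2

def meridian_to_ecliptic(altitude_deg, lst_hours, observer_lat_deg, obliquity_deg=23.7):
    """Convert a meridian-transit observation to ecliptic coordinates.

    For a star culminating south of the zenith:
        declination     = altitude + observer latitude - 90 deg
        right ascension = local sidereal time at transit (1 hour = 15 deg).
    The default obliquity is roughly the value for Ptolemy's era.
    """
    dec = radians(altitude_deg + observer_lat_deg - 90.0)
    ra = radians(lst_hours * 15.0)
    eps = radians(obliquity_deg)

    # Standard equatorial -> ecliptic rotation by the obliquity.
    lam = atan2(sin(ra) * cos(eps) + tan(dec) * sin(eps), cos(ra))
    beta = asin(sin(dec) * cos(eps) - cos(dec) * sin(eps) * sin(ra))
    return degrees(lam) % 360.0, degrees(beta)

# Example: a star transiting at altitude 55 deg at local sidereal time 6h,
# seen from roughly Alexandria's latitude (~31 deg north).
print(meridian_to_ecliptic(55.0, 6.0, 31.0))
```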
The aspect I like about this is that it provides a reasonable explanation for why the $\frac{1}{4}º$ stars tend to be fainter in magnitude: this is a simpler observation to make. Trying to sight a faint star takes more time than a bright one and, if time is of the essence (as it is with the armillary sphere), then this provides a rationale for adopting the simpler method for the fainter stars.
My objection here is that Grasshoff does not discuss a method by which the observer could accurately determine the time. In my observations, I’ve been using an app on my phone, with the justification that clocks of reasonable accuracy existed in Brahe’s time: his illustration of his mural quadrant clearly depicts a clock for this purpose, and his description of his observing process also specifies that one was available to him.
However, Ptolemy had no such instrument available to him. Thus, there appears to be an irreconcilable problem for this explanation.
I’ll leave off here and in the next post, we’ll explore the difference in these increments for ecliptic longitude.
- As a reminder, the only place anyone has tried to compare the distribution of errors head to head was Vogt, who found that they didn’t match up well, but Grasshoff found that Vogt’s attempt to reconstruct the Hipparchan catalog and methodology was likely not clean enough to be of much use.
- Grasshoff notes the possibility that this is simply a scribal error but no known texts support this.
- Note the singular here. An overabundance of whole numbers and half degrees could be readily explained by two instruments since these are the overlaps.
- As a note, my quadrant is divided into tenths of a degree for altitude. We can clearly see that I do frequently estimate the halfway point between two divisions, but less often than I probably should. I can easily attribute this to the fact that the reading is taken against the plumb line, which often swings slightly, and I can really only estimate that half if it is perfectly still. The armillary sphere which Ptolemy describes should not have such a problem.
- In which case the observer would have to estimate between increments to achieve the precision of $\frac{1}{6}º$ much as I estimate between the $\frac{1}{10}º$ divisions to get a precision of $\frac{1}{20}º$ with my quadrant’s altitude.
- Again, I think some of this for my quadrant is due to how the plumb line swings which should not be an issue with an armillary sphere, although such an instrument may have other instabilities with which I’m not familiar.
- There appears to be a typo in this footnote as Grasshoff states the mean is $110.5$ which would have rounded to $111$.
- This seems to me to be a very poor strategy since we just discussed above that the distribution should not be consistent across all increments.
- The “R” in the $45’$ column for the $\frac{1}{4}º$ stars indicates this is going to be split based on the above rounding assumptions. Otherwise it should be $91$, as with the other $\frac{1}{4}º$ stars.
- Grasshoff doesn’t lay out all of his assumptions, but I expect this is assuming that there is zero mean error in the $\frac{3}{4}º$ stars before they are moved.
- In other words, truncated. Not sure why Grasshoff brings up this possibility as this wasn’t something he previously asserted.
- Recalling that the error can be negative indicating a lower observed ecliptic latitude than the true value.
- Excluding the whole number bin.