EOSDIS faces issues of widely varying requirements for precision and volume, grossly different formats, and conflicting external standards with many different heritages. We have a primary commitment to establish internal standards and formats that will forestall the degradation of incoming data, preserve a sufficient record of its processing, and facilitate its ready retrieval - all to be done before we see more than a whiff of typical data. This paper describes the standards set up for the internal handling of time and of Earth figure, the time translations offered now, and plans to augment these at the level of file labeling and data retrieval. Problems exist with vague Federal standards for Earth figure and latitude, and with conflicting time formats - some sanctioned and some not, but nevertheless popular.
Most of the metadata for geolocation are incoming - things like instrument parameters, leap seconds and data on variations in Earth rotation. On the outgoing end, the job is to provide labels that are both readable and computable. These two purposes conflict sufficiently, in the case of time labels, that we imagine many users will want one kind of time at the smallest grain size, and another for labeling data sets at the top level. For the latter purpose, we provide time formats such as FGDC and CCSDS that are readable and competent to meet our accuracy requirements. But easily readable time formats are not suitable for computation or subsetting. Therefore, deeper inside the data sets many science teams are choosing to imbed Toolkit time (Sec. 2.4); in some cases they use their own favorite time stream. Recipients of large data sets who need time data at the smallest grain size level will therefore need to translate toolkit time into their favorite form.
The key items we need for geolocation are clear, consistent, and computable time definitions (Sect. 2); clear descriptions of the figure of the Earth and its orientation (rotation), of latitude, and of longitude (Sect. 3); accurate spacecraft position and attitude; and knowledge of the instrument line of sight (Sect. 5). The spacecraft data are a little too specialized to discuss herein; the other items we'll cover.
Finally, an interesting side effect of our system, and a drawback for some, is that our metadata representing the same thing sometimes change over time (Sect. 6).
Broadly speaking, time streams can be divided into ones that are used for timekeeping, and ones that are used as measures of Earth rotation, though originally they may well have served for timekeeping. Sidereal time and Universal time in the flavor "UT1" are of the latter kind and will be discussed first. We need them to know which way the Earth was turned. To say "when", we need the other time streams, which are discussed in Sect. 2.2.
One of the first surprises we had was with something called "Greenwich Mean Sidereal Time," or GMST - actually a measure of Earth rotation more than of time - and a necessary item to determine the geometrical relationship of a spacecraft to the Earth. EOSDIS spacecraft positions are specified in inertial coordinates that do not rotate with the Earth. Therefore, we must have GMST to determine how the Earth is oriented with respect to the inertial reference frame. Numerous widely used software packages offer algorithms to calculate GMST from Coordinated Universal Time (UTC), which is civil time at the prime meridian in Greenwich, England, and which is adjusted by zone differences for local civil time everywhere. The Astronomical Almanac, published by the Naval Observatory, provides annually, in advance, tables to calculate GMST from Universal Time. These tables and the aforementioned software packages take into account the precession and nutation (a small nodding motion) of the Earth's rotation axis, which are predictable more than a year in advance with an accuracy equivalent to under a meter change in Earth position. But, if you read the fine print, you find that the tables require a form of Universal Time that is not UTC; it is UT1, a "time" that can be determined only by measuring Earth rotation. The older software packages may or may not specify UT1, and you can find examples where care was not taken to distinguish UT1 from UTC. The penalty is an equivalent positional error of about 400 meters East or West near the equator, less near the poles. And the position will jump by up to 450 meters when a leap second is introduced by the International Earth Rotation Service in Paris. All this has a good purpose; it keeps laboratory clocks running at the speed of atomic time while the Earth does its own thing, and it keeps civil noon near midday on a long term average.
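The mean sidereal time computation referred to above can be sketched in a few lines. This is the standard polynomial with the widely published IAU 1982 coefficients, as tabulated in the Astronomical Almanac; the function name is ours. The essential point, per the discussion above, is that the argument must be a UT1 Julian Date, not UTC.

```python
def gmst_degrees(jd_ut1):
    """Greenwich Mean Sidereal Time, in degrees, from a UT1 Julian Date.

    Standard polynomial (IAU 1982 coefficients, as tabulated in the
    Astronomical Almanac).  The argument must be UT1, not UTC.
    """
    t = (jd_ut1 - 2451545.0) / 36525.0     # Julian centuries of UT1 from J2000
    gmst_sec = (67310.54841
                + (876600.0 * 3600.0 + 8640184.812866) * t
                + 0.093104 * t * t
                - 6.2e-6 * t ** 3)         # GMST in seconds of time
    return (gmst_sec / 240.0) % 360.0      # 240 seconds of time per degree
```

Feeding this function UTC instead of UT1 shifts the computed Earth orientation by up to ~0.9 seconds of rotation - the few-hundred-meter equatorial error described above.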
But it means you have to import tables or other measures of the vagaries of Earth motion - and you have to do it monthly to meet our requirements. So we do just that; in fact, we'll probably set up our script to run weekly. From the standpoint of metadata, the results that high precision space geodesy delivers to Earth rotation experts have become our metadata. The geodesists work up these data to a precision of about 0.0005 arc seconds; we degrade that by about a factor of twenty, which is still very conservative practice.
Actually, to use the tables of UT1-UTC it was necessary to deal as well with Greenwich Apparent Sidereal Time, which differs from GMST by up to a few seconds. In 1993, the precise difference depended on whom you talked to or what you read - you were either to use the mean or the current value of the tilt of the Earth's axis in doing the correction. The difference has negligible effect for our purposes, but we wanted to set it up right, and it was difficult to find out which to use. The problem was resolved through conversations with the Naval Observatory and by our obtaining the IERS Standards, which lay it all out.
Our SDP Toolkit now sets a new standard in the field by bringing in all the necessary tables - UT1-UTC, leap seconds, and polar motion (another small correction, ~5 to 15 meters) - off the appropriate data server with automatic scripts run as periodic processes, and accessing the (meta)data when needed in a way that is transparent to the user. The resulting Earth position is good to better than a meter.
Since 1972 atomic time has served as the best standard for a uniform time measure. Coordinated Universal Time (UTC) runs at the same rate as International Atomic Time (TAI) for periods typically as long as a year to a year and a half, following which a leap second is "inserted." The purpose of this leap second is to keep UTC approximately in step with UT1, a form of Universal Time tied to the Earth's rotation (Sect. 2.0). On the average, UT1 runs slower than atomic time; hence the leap second is used as needed to extend a UTC day (typically June 30 or Dec 31) by one second. Therefore, if UTC is converted to a real number, counted from some epoch, the first second of the next day will correspond to the same range of real numbers as the leap second. Non-unique data labels and a backward jump in the time stream result. UTC is favored over other times in such areas as communications and remote sensing because it is readily related to civil time (Sect. 2.0). TAI, International Atomic Time, is not so user-friendly because its midnight and noon are about a half minute off UTC, and the gap is growing. It is not even usual to put it in ASCII format - it is kept as a real number measured from some epoch. Aside from the awkward short term jogs in its time scale, UTC, because it is slaved in the long term to Earth rotation, also has a long term quadratic dependence (curvature) to its time scale in terms of any uniform time scale, such as TAI. Uncompensated, this will corrupt time series analysis, interpolation, integration, and the determination of trends over very long time scales. UTC must be kept in segmented form, and cannot ever be put in the form of a count of seconds (etc.) from some epoch, because of the backwards jump. Appendix A contains a short section of times around the 1995 Dec 31 leap second, converted to TAI and to UTC Julian dates. Observe that UTC (as a float) jumps backwards, which is why we recommend against using UTC Julian Dates for any precision work.
Note that during the leap second, the seconds field in CCSDS ASCII format starts at 60.00000 and extends to 60.99999 with as many 9's as you care to have. If you convert such a time to a floating point format with the usual algorithms, you will end up with a time in the next day, which will then duplicate one of your forthcoming time tags. If you drop back one second in your conversion when the number 60 appears in the seconds field, you instead duplicate the last second of the dying day. In the Appendix, the leap second UTC Julian Date times that duplicate other times are italicized; for brevity, only a few of the original times matching the duplicates are shown.
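The pitfall just described can be sketched with a pair of illustrative conversion routines (these are not Toolkit code):

```python
def naive_seconds_of_day(hh, mm, ss):
    # Conventional conversion: misbehaves during a leap second.
    return hh * 3600 + mm * 60 + ss

def seconds_of_day(hh, mm, ss):
    # Drop back one second when the seconds field reads 60, so the
    # leap second duplicates the last second of the dying day instead.
    return hh * 3600 + mm * 60 + ss - (1.0 if ss >= 60.0 else 0.0)

# 23:59:60.5 during the 1995 Dec 31 leap second:
leap = naive_seconds_of_day(23, 59, 60.5)    # 86400.5: spills into the next day
next_day = naive_seconds_of_day(0, 0, 0.5)   # 0.5 s into 1996 Jan 1

assert leap - 86400.0 == next_day            # duplicates a forthcoming time tag
assert seconds_of_day(23, 59, 60.5) == 86399.5   # duplicates the dying day
```

Either way a duplicate results; the choice is only which day's time tag gets repeated.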
The last column of the table in Appendix A shows the apparent Greenwich Hour Angle (closely related to GMST), which continues to change smoothly right through the leap second. Of course, that's what you expect, and that's what you need in order to do geolocation during the leap second interval. The data are included just to exhibit the good performance of our software for both leap seconds and UT1-UTC. When UTC jumps back, TAI and the hour angle march merrily along. The way that works is that the quantities UT1-UTC and TAI-UTC, which we access from tables, have to jump ahead (increase) by one second exactly when UTC as a Julian Date jumps back. The final implication, however, is that other software systems which use only leap seconds, and not tables of UT1-UTC or the equivalent, will exhibit discontinuous Earth motion at the leap second.
It is impossible to compare UTC with TAI on a long time scale, since TAI and UTC started in 1972. But UTC is slaved to UT1 over the long haul, and TAI is equivalent to dynamical time, a newer term for Ephemeris Time (ET), minus 32.184 seconds. Dynamical time can be derived from astronomical data far into the past. So we can get a long term picture almost equivalent to a comparison of TAI with UTC by comparing Dynamical Time (ET) to UT1, as shown in Fig. 2-1 (the "data", which the sharp of eye will note extend past 1995, include a few long term predictions that aren't very good).
Remember, in viewing Fig. 2-1, that UT1 serves well as a long term surrogate for UTC; the difference is never more than 0.9 seconds, by decision of the international bodies in control. That difference would not even show up on the scale of the figure. Also keep in mind that ET is the best standard we have for a "uniform" time stream, extending atomic time into the distant past, when the best "clocks" were the motions of the Earth, Moon, and planets. The quadratic fit is optimized by eye to the last decade. The curvature is evidence of the slowing rotation of the Earth, and the deviations of the data from a smooth curve show variations in that process.
Another episode of confusion came from the widespread idea that Julian Dates are a panacea for problems with time streams. First of all, what are Julian Dates? They are a measurement of time, expressed in days, counted from Greenwich mean solar noon, January 1, 4713 BC. I can't go into the history or the reasons here; see the Supplement to the Astronomical Almanac, Chapter 12. The term "Julian" comes, however, from the name "Julius Scaliger", the father of the originator of these dates, Joseph Scaliger, and it has nothing to do with Julius Caesar. We all know how to find the current calendar date, and most of us know it's a Gregorian calendar date. More about that soon, but first, to put the subject more firmly in front of you, here's a table of conversions between Gregorian and Julian dates:
The foregoing expression of calendar Gregorian date is in CCSDS type "A" format. The full CCSDS ASCII format for UTC is exemplified by:
where the terminal (optional) "Z" means referenced to the Greenwich meridian. The "B" variant replaces the month-day combination with the day number within the year (1-365, or 1-366 in leap years). We use the "A" format on output but accept either on input, distinguishing them by the pattern of hyphens. The Julian day starts at noon, so when you read the table for whole day numbers only: if it is before noon Gregorian, truncate; if past noon, round. Tables for conversion are published in the Astronomical Almanac, and software for conversion either way is on line on a server at the Naval Observatory. To obtain the programs, use e-mail as follows:
Mail -s cdecm firstname.lastname@example.org < /dev/null

Anonymous ftp from ftp://maia.usno.navy.mil/ can also be used.
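For readers without network access, the Gregorian-to-Julian-day conversion itself is compact. Here is a sketch using a standard integer algorithm in the style of Fliegel and Van Flandern (1968); the function name is ours. It returns the Julian Day number for noon UT of the given Gregorian date, so the JD at the preceding midnight is that value minus 0.5.

```python
def gregorian_to_jd(year, month, day):
    """Julian Day number at noon UT of the given Gregorian calendar date
    (integer algorithm in the style of Fliegel and Van Flandern, 1968)."""
    a = (14 - month) // 12           # 1 for January and February, else 0
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)
```

For example, January 1, 2000 gives 2451545, the J2000 epoch Julian Day.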
There are technical and political/psychological issues with Julian Dates. Technically, as used by astronomers and serious space geodesists, each time stream (UTC, UT1, TAI, etc.) has a slightly different value at the same time. UT1 - UTC varies according to fluctuations in Earth rotation. TAI now leads UTC by 30 seconds, each representing a leap second where UTC as a Julian Date jumped backwards. Dynamical time, used by astronomers, is basically TAI + 32.184 seconds, so now it's more than a minute ahead of UTC. (The seconds are to be converted to days when you use Julian Dates.) We also convert to and from GPS time, which is (within our accuracy requirements) the same as TAI but based on the epoch Jan 6, 1980. After one understands all these things and writes all the software, one then finds that in parts of the business and military community, and even among some Earth scientists, it is common to call the day number (1-366) within the Gregorian year the Julian Date. There is no authority for this practice and much against it, and it should be stopped. Gregorian is Gregorian and Julian is Julian and never the twain should be confused. The chants I refer to in the title of this section are our chant that everyone should call a Gregorian date a Gregorian date, and the counter-chants of others calling day-of-year a Julian Date. A definitive statement on the matter is posted on the Internet at:
There are also Modified Julian Dates, Truncated Julian Dates, etc. The Modified Julian Date (MJD), which is used internally in some places in the Toolkit, is JD - 2,400,000.5, and thus its day starts at midnight. It is sanctioned by the CCIR. The Truncated Julian Date is peculiar to a NASA "PB-5J" format. Its day count is based on the epoch of midnight May 24, 1968.
The biggest danger with using Julian Dates is the idea that they are a panacea. Everybody loves absolute time and everybody wants a universal label. The Julian date for TAI, however, is 30 seconds ahead of UTC and the gap is growing, dynamical time is even less intuitive, UT1 has wobbles that make it unsuitable for most kinds of mathematical work, and UTC Julian Date has backwards jumps of 1 second every 12 to 30 months. Furthermore, the rate of these jumps (leap seconds) is very slowly growing, so the UTC time stream, on the scale of decades and centuries, is nonlinear (Fig. 2-1).
The biggest technical problem with Julian Date is accuracy. Because all those days from 4713 BC to (say) 1995 take up seven decimal figures, our accuracy would be degraded to the level of several milliseconds if we used Julian Dates as (64-bit) double precision numbers. So, following the JPL planetary ephemeris group, we use two doubles in tandem - the first half-integral, the second in the range (0,1). This accuracy problem led us into new territory for our internal time standard.
We were lucky in the Toolkit in having been given two requirements that we feel were very wisely drawn: to use one internal time standard, and to maintain the precision of our most accurate spacecraft clock - the AM1 clock, at 1 microsecond resolution. Obviously you can meet the requirement with 28 ASCII characters for UTC, but that's bulky and you can't do math with ASCII. We could have used TAI Julian Date, but, as mentioned in the previous paragraph, we'd have to use two 64-bit floating point numbers or else be degraded to ~10 millisecond accuracy. But if you measure time in days, seconds, or whatever from some time in this decade, you can maintain microsecond precision with one 64-bit double precision number from about 1960 to 2020. So we picked Jan 1, 1993, UTC midnight as a reference date, and we use seconds from then. We had to make a decision in 1993, and we couldn't be sure what leap seconds were coming our way in 1994, so that's why we didn't reach ahead further. We are just doing our homework and are not intentionally creating a new standard. But we have found it to be reasonably popular so far (based, admittedly, on small statistics). And we haven't found another way to meet the requirements.
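The precision argument can be checked directly on any machine with 64-bit IEEE doubles. The sketch below (variable names ours) compares the resolution of one double holding a Julian Date with one holding seconds from a 1993 epoch:

```python
import math

# One ULP (the spacing of adjacent representable doubles) near a 1995
# Julian Date:
jd = 2449720.0                         # a Julian Date in early 1995
jd_step_sec = math.ulp(jd) * 86400.0   # ~4e-5 s: tens of microseconds

# One ULP for seconds counted from the 1993 epoch, out near 2020:
t = 8.5e8                              # ~27 years past Jan 1, 1993, in seconds
t_step_sec = math.ulp(t)               # ~1.2e-7 s: well under 1 microsecond
```

A single double holding a Julian Date thus cannot even represent microseconds, while seconds from a 1990s epoch retain microsecond resolution across the 1960-2020 span mentioned above.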
The apparent popularity of our Toolkit time with science teams does leave us with some concerns. Floating point times are not user friendly at the browse level. What does a student or teacher in Tallahassee or Texarkana do with a data set having TAI seconds from Jan 1, 1993 in it? We provide translation tools with the Toolkit. The tools require the leap seconds file, but it is only about 40 lines of ASCII text. It grows by a line every year or so, however. We plan to provide some amenity for users of our data who do not have or want the whole Toolkit. Sufficient high level labeling will be done in CCSDS or FGDC time formats, legible to all. The TAI times, if present as time tags at the smallest scales within the granule, can be used as offsets in seconds by anyone needing fine grained resolution within the data set. That is true provided they can translate one of the times to UTC. Once they have one reference time translated, they can translate all the rest by converting seconds to minutes, hours, and days. If there is a leap second within the time span, they need to know that. The UTC or FGDC date/time stamp in our file header metadata will therefore be accompanied by a translation to Toolkit time, a leap seconds flag to show if a leap second is present in the granule time span, and, if so, the time of the leap second. Thus every time in the data set can be interpreted at once without needing the Toolkit.
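A minimal sketch of how a recipient without the Toolkit might use the header items just described; the function and parameter names are illustrative, not the actual metadata names:

```python
def utc_elapsed(t_record, t_reference, leap_second_time=None):
    """Elapsed UTC seconds between a record's Toolkit (TAI) time and the
    granule's header reference time.

    If the header's leap second flag is set, leap_second_time carries the
    Toolkit time of the leap second; one elapsed TAI second in that span
    was a leap second, not a UTC second, so it is subtracted out.
    """
    elapsed = t_record - t_reference
    if leap_second_time is not None and t_reference < leap_second_time <= t_record:
        elapsed -= 1.0
    return elapsed
```

With one header time translated to UTC, the recipient then converts the elapsed seconds to minutes, hours, and days as described above.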
We are offering several format conversions - including to and from FGDC. These will accommodate data "migrating" in and arriving from outside sources, and permit data queries in a broader set of formats than we use for science data processing. The FGDC system splits date and time into separate fields, and allows a trailer on the time field for zone times. It is a widely used system. Therefore we provide translations between it and the CCSDS format. We have run into several formats which are floating point numbers whose decimal expression is some compressed version of the CCSDS format, for example "20021103132159.22" for "2002-11-03T13:21:59.22Z". This saves some storage, but there are so many different preferences for the coding, that we can't offer all possible translations. How many more will be covered is not decided. Some COTS database tools and spreadsheets can do sophisticated transformations among time formats, but generally there is not much capability to handle changes between different time streams.
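As an illustration of this kind of translation, here is a sketch that expands the compressed form quoted above into CCSDS ASCII "A" format; the function name is ours, and real code would validate the field values:

```python
def compact_to_ccsds(s):
    """Expand "YYYYMMDDhhmmss[.fff]" into "YYYY-MM-DDThh:mm:ss[.fff]Z"."""
    digits, _, frac = s.partition(".")
    out = (f"{digits[0:4]}-{digits[4:6]}-{digits[6:8]}"
           f"T{digits[8:10]}:{digits[10:12]}:{digits[12:14]}")
    if frac:
        out += "." + frac
    return out + "Z"
```

For example, "20021103132159.22" expands to "2002-11-03T13:21:59.22Z", matching the pair quoted in the text.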
There is nothing to prevent users of our system from imbedding any time of their choice in their data. It is only to use our tools that they would have to use some format we support or translate.
It is important, finally, not to confuse time format changes with time stream changes. For example, the difference in early 1996, TAI - UTC = 30 s, won't go away no matter how you format the two time streams. If you use Julian Dates you can make the 30 second difference look small, because it gets divided by 86,400 in being converted to days. But it's still there, and it will continue to change.
When people asked us what kind of seconds we were using in defining Toolkit time, we just said SI (International System) seconds, and, if necessary, we referred them to handbooks such as the Astronomical Almanac. But, dealing with all these time streams, one gets "curiouser and curiouser" about what the SI second really is. It is defined on the geoid (essentially a sea-level surface on Earth) as the duration of 9192631770 cycles of the transition between the hyperfine levels of the ground state of Cesium 133, when the atom is undisturbed. But what about off the geoid? Einstein's Principle of Equivalence says that a well made clock has to be trusted as a laboratory standard at all different velocities and gravitational potentials. That way, the laws of physics and the fundamental constants of Nature don't depend on where you are or how fast you are moving. As a consequence of relativity, "proper" clocks such as these, in relative motion or at different gravitational potentials, won't stay synchronized. But synchronized time standards are needed for civil timekeeping, space geodesy, Very Long Baseline Interferometry (VLBI), and other purposes. Thus, various bodies in charge of time standards and many experts on clocks have put the most effort into perfecting the determination of time streams such as TAI that are broadcast and are used to synchronize clocks worldwide, irrespective of their elevations or states of motion. As a physicist rather attached to the Principle of Equivalence, without which laboratory physics would collapse (because physics would be different in virtually every laboratory), I became concerned about the prevalence of statements suggesting that the SI second is reserved for TAI and that clocks off the geoid must be slaved to it. On further inquiry, it turns out that the International Astronomical Union has affirmed in Resolution A4 that the SI second is, in the first instance, a unit of proper time.
That means that it is the unit measured by a "proper clock." The problem that remains to be addressed is that the SI second is based on the "undisturbed" state of the atom, which is understood by some professionals in the field of time standards to mean that you "correct for" the gravitational potential. If you do that and take the resulting second as your laboratory SI second, you define a different physics in violation of the Equivalence Principle. A commission named the Comite Consultatif pour la Definition de la Seconde, affiliated with the Bureau International des Poids et Mesures (BIPM) in Sevres, France, is preparing a report on the application of general relativity to metrology, which we hope will resolve this issue in favor of not correcting proper clocks for the gravitational potential.
To summarize the correct procedure: if you're off the geoid, you can't measure SI seconds as such by counting the ticks of a remote TAI clock (which is on the geoid or controlled as if it were) by telemetry; instead you calibrate to its rate times (1 + gh/c^2) as shown in Fig. 3-2. For illustration, that figure shows two out of many "official" clocks on or very near the geoid, which are assumed to have been synchronized by telemetry using some kinds of weights reflecting their performance. They are a model for the definition of TAI. An ideal clock on a mountain top like Clock 2 will seem to run fast when compared by telemetry. This is an effect of spacetime curvature; nothing at all is wrong with the clock. One in a deep valley, like Clock 1, will seem to run slow. If you want to do radio interferometry on a distant quasar using these clocks, or to use them for civil timekeeping, you have to slave them to the TAI network. If you want to do local laboratory physics, you should instead calibrate them against TAI with the correction factor shown in the caption. It is easy to go astray in this work for two reasons. One is that in practice, there are several other corrections that have to be made to correct the atomic frequency for various effects, including, of all things, the individual atom's velocity relative to the clock! It's easy to just keep correcting and do too much. The second reason is that, in order to get your network of TAI clocks synchronized, you have to tune them so that ones up high are forced to slow down by the reciprocal of the same factor (1 + gh/c^2) that was discussed. So if you want the best laboratory standard, you first count on the people setting up TAI to correct all their clocks to the geoid by the troublesome gravitational potential factor, and you then proceed to use the same factor in the other sense, to adjust your clock out of apparent synchronism with theirs.
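To put a number on the factor (1 + gh/c^2), here is a small worked example for a clock 4000 m above the geoid; the altitude is purely illustrative:

```python
g = 9.80665        # standard gravity at the surface, m/s^2
c = 299792458.0    # speed of light, m/s
h = 4000.0         # clock elevation above the geoid, m (illustrative)

# Fractional rate offset of the elevated clock relative to a geoid clock:
fractional_rate = g * h / c**2                        # ~4.4e-13

# Accumulated apparent gain over one year of telemetry comparison:
seconds_per_year = fractional_rate * 365.25 * 86400.0   # ~1.4e-5 s
```

A 4 km mountain-top clock thus appears to gain on the order of ten microseconds per year relative to the TAI network, a small but entirely measurable effect at modern clock precision.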
A space based Earth observing mission needs to deal with concepts such as the figure of the Earth, latitude, longitude, etc. We looked to existing definitions and standards in this area. We found plenty of useful definitions and numerical values in textbooks, the Astronomical Almanac and its Supplement, and the IERS Standards. We found problem areas in the Federal Metadata Standards.
I will give a few definitions here because there is some confusion in the FGDC standards about the flattening ratio. The flattening ratio f for an Earth ellipsoid, with equatorial radius A and polar radius C, is normally defined (Wertz, 1978) as f = (A - C)/A ~ 1/298
The definition of the metadata for the Earth flattening in the FGDC Standards is
"22.214.171.124 Denominator of Flattening Ratio -- the denominator of the ratio of the difference between the equatorial and polar radii of the ellipsoid when the numerator is set to 1. [editorial boldface]
Domain: Denominator of Flattening > 0.0 "
This material is introduced via Section 4, viz:
"4 Spatial Reference Information -- the description of the reference frame for, and the means to encode, coordinates in the data set.
. (skipping down the page)
Denominator_of_Flattening_Ratio (editorial boldface)
There is a missing phrase in the words "the denominator of the ratio of the difference between the equatorial and polar radii of the ellipsoid." The ratio of this difference to WHAT? A ratio is always the ratio of something to something else. I believe the FGDC means:
"the denominator of the ratio of the difference between the equatorial and polar radii of the ellipsoid to the equatorial radius." We at SDP are waiting for a reply on this matter. With the added words, the denominator of the "ratio" would be a number on the order of 298 - the reciprocal of the usual flattening factor.
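With the wording repaired as proposed, the metadata item is straightforward to use. A sketch with the WGS84 ellipsoid values, assumed here purely for illustration:

```python
# WGS84 reference ellipsoid values (assumed here for illustration)
A = 6378137.0            # equatorial radius, meters
inv_f = 298.257223563    # the FGDC "denominator of flattening ratio"

f = 1.0 / inv_f          # flattening, f = (A - C)/A, about 1/298
C = A * (1.0 - f)        # polar radius, about 6356752.314 meters
```

The metadata item stores only inv_f; the polar radius follows from it and the equatorial radius, which is why the missing "to the equatorial radius" phrase matters.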
"latitude - angular distance measured on a meridian north or south from the equator"
Now, most Earth scientists and geographers use geodetic latitude. That's defined by the angle of the local normal to the ellipsoid with the Earth's equatorial plane. (The local normal is the same as the local vertical if you ignore gravity anomalies.) If you look at Fig. 4-1, you can see that as latitude changes, for example by going from P1 to P2, the lines from Earth center O to P1 or P2 swing around O. The meridian reaches from P1 to P2 in this planar example. So we can say that the difference in geocentric latitudes is represented by an angular distance about O, and along the meridian, of magnitude phi'2 - phi'1. But the lines Q2-P2 and Q1-P1, which define geodetic latitude, do not swing around a common fulcrum. So it is hard to speak of an angular distance on the meridian in this case - there is no obvious point about which an angle can be measured. Thus, it seems that the FGDC is leaning towards a preference for geocentric latitude, although we can't find it stated which is really intended. The maximum difference is roughly ten minutes of arc, at mid-latitudes. We also look forward to clarification on this matter.
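The relation between the two latitudes is simple on the surface of the ellipsoid: tan(phi') = (1 - f)^2 tan(phi), with phi geodetic and phi' geocentric. A sketch of the conversion and the mid-latitude difference, using the WGS84 flattening purely for illustration:

```python
import math

f = 1.0 / 298.257223563    # WGS84 flattening (illustrative)

def geodetic_to_geocentric(phi_deg):
    """Geocentric latitude phi' from geodetic latitude phi, in degrees,
    for a point on the surface of the ellipsoid:
    tan(phi') = (1 - f)^2 * tan(phi)."""
    phi = math.radians(phi_deg)
    return math.degrees(math.atan((1.0 - f) ** 2 * math.tan(phi)))

# Difference near its mid-latitude maximum:
diff_arcmin = (45.0 - geodetic_to_geocentric(45.0)) * 60.0
```

At 45 degrees the difference evaluates to about 11.5 arc minutes, in line with the rough mid-latitude figure quoted above; mixing up the two latitudes is therefore a roughly 20 km positioning error on the ground.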
After generating terabytes and eventually petabytes of Earth science data, the project will need to make it readily accessible. To do this requires salting the right metadata in the right places in the files and, more importantly, a table or inventory in the database system. Much data can be classified as "scene data" - effectively snapshots of limited portions of the globe, perhaps a few degrees square, or even ten degrees on a side. Such data can be identified for later retrieval by overlaying a quadrilateral or other polygon and using database tool capability for intersecting user requests with polygons. Other data sets are global maps on global grids. But much data arrive as "swath data", batched in granules that represent an instrument's swath for a half orbit or an orbit. Such a swath does not lend itself to being tagged by an overlying polygon. We intend to tag the swaths by the geographic longitude where the spacecraft crossed the equator. That identifies a unique relationship of the orbit (for a fixed spacecraft) to the Earth system of longitudes. Then when we archive the data, we can file it according to that fiducial or reference longitude. Knowing the basic shape of the spacecraft orbit, we will have enough information to overlay or associate with the swath a family of polygons that can be used by the database tool to find the data. The challenge is that we have to generate the required metadata early, at ingest or initial processing of the spacecraft ephemeris. After that, the metadata have to be propagated through the system to a final resting place in the data server. The original orbit data are common metadata for many instruments and their products. Thus, they must, in a sense, be "exploded" eventually so as to become attached to all the swath type products generated from the original spacecraft orbit. We are in the process of building the system for that.
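The equator-crossing tag can be computed from the subsatellite track by simple interpolation. The sketch below (names ours) finds the first ascending-node crossing; it ignores the longitude wraparound at 180 degrees, which real code must handle:

```python
def equator_crossing_lon(lats, lons):
    """Longitude of the first ascending-node crossing in a subsatellite
    track, by linear interpolation between bracketing samples.

    Illustrative sketch only: assumes no 180-degree longitude wrap
    between the two bracketing samples.
    """
    for i in range(len(lats) - 1):
        if lats[i] < 0.0 <= lats[i + 1]:
            frac = -lats[i] / (lats[i + 1] - lats[i])
            return lons[i] + frac * (lons[i + 1] - lons[i])
    return None    # no ascending crossing in this track segment
```

The returned fiducial longitude is the kind of value we would file with the granule, from which the family of search polygons can later be reconstructed using the known orbit shape.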
To generate suitable metadata for retrieving swath products, we need to know the instrument field of view. This requires interfaces to the science data processing teams. The interface must be ongoing, because the instrument may be mounted and even used a little differently from what was planned. That brings us to the time dependencies.
A problem with real time systems like ours is that users will want to process early, using predictive spacecraft ephemeris and Earth rotation data, and reprocess later with definitive data. There is a tradeoff between reproducibility and accuracy which has to be faced. How the compromises will be made is not clear at the time of this report. The solution will probably differ according to the accuracy requirements of different users. Those processing data on clouds or ocean may want to keep their first run; those making stereo images of land targets may want to reprocess. Thus far, it seems that many existing data processing centers are aware of the problems with predicted versus refined spacecraft orbit and attitude data, but few if any bring in the Earth rotation data other than leap seconds. Thus, initially, our system will present some shocks to users, but eventually, we hope, the merits will be clear.
When you process with out of date parameters, that is called a "latency." One might define the "latency problem" for real time systems as the tradeoff between accuracy and reproducibility. If an instrument team recalibrates its mounting angle by a small amount, or the spacecraft ephemeris is revised, does one want to reprocess or not? We have tried to be open to user choices in all these matters; that's the best we can do.
Our metadata for Earth motion and leap seconds also change with time. We expect to import one new line of Earth rotation data a day, in weekly batches, and one new leap second every twelve to eighteen months, or even longer, whenever they are announced. Furthermore, the USNO solution for the rotation is sometimes refined months later. The changes are negligible for Earth observing use, but users may notice changes in their results if they reprocess after a few weeks or months. We look forward with interest and some trepidation to the response.
Content Standards for Digital Spatial Metadata, Federal Geographic Data Committee (FGDC), U.S. Geological Survey, Reston, VA, June 1994.
The IERS Standards, ed. Dennis D. McCarthy, U.S. Naval Observatory, Washington, 1992.
Time Code Formats, CCSDS Blue Book 301.0-B-2, Consultative Committee for Space Data Systems, Washington, DC, Issue 2, April 1990.
The Astronomical Almanac, 1996, 1995, 1994, ... (U.S. Government Printing Office; written by the U.S. Naval Observatory), published annually and available several months in advance of each year's beginning.
The Explanatory Supplement to the Astronomical Almanac, prepared by the U.S. Naval Observatory, ed. P. Kenneth Seidelmann (University Science Books, Mill Valley, CA, 1992).
Spacecraft Attitude Determination and Control, ed. J. R. Wertz (D. Reidel, Holland, 1978).
 The irregularities result from tides, winds, changes in ocean currents, and motions in the Earth's core.
 Of course, if you have a good wristwatch or wall clock it essentially keeps TAI time better than UTC. When they add a leap second, then the next day, if you are fussy, you'll set your watch back one second, because it kept TAI time overnight.
 Atomic time values consistent with TAI can be established back to 1956, so some of our software uses conversions based on "TAI" before it officially existed. We don't expect to receive any spacecraft data from before 1972 where it really matters, but including the conversion prevents our software from breaking if someone passes us data from before 1972.
 The practice is frowned on by the USNO, the CCIR (advisory group to the International Telecommunication Union), the International Astronomical Union, and the U.S. General Services Administration Office of Information Resources Management.
 A spheroid is an ellipsoid with two of its axes equal - a sort of squashed sphere. In the case of the Earth the more general term "ellipsoid" is normally used instead of "spheroid." In that spirit, we label the axes "A" and "C", not "A" and "B", in case at some time in the future someone wants to allow for a non-axisymmetric deformation. In the case of the Moon, this is necessary for good mapping. The Earth is so nearly axisymmetric that it is doubtful anyone will take the step of using three axes.
 There was a two year gap from July 1983 to July 1985, and a 30 month gap immediately after that. The last gap of more than eighteen months was immediately after that, from Jan 1988 to Jan 1990.