SEVEN SUGGESTED IMPROVEMENT OPPORTUNITIES FOR GPS
David W. Allan (January 2021)
To whom it may concern:
One of the great privileges I have enjoyed in my life has been to work with those who helped to develop GPS. One of the key players has been Brad Parkinson, whom I consider to be the Father of GPS. I first met Brad when he and several Air Force blue-suiters came to Boulder, CO, to the National Bureau of Standards (NBS) to take seminars that my colleagues and I gave on how best to use atomic clocks, which are the heart of how GPS works.
My work with GPS has been both with the developers and in external application research opportunities, which involved many man-years of my professional life, (1) and Brad and I have been in many meetings together.
My good friend Dr. John Vig, a past president of the IEEE, nominated me to be an IEEE Fellow during 2020. My work with GPS was one of the reasons. In December I learned the nomination had been approved. I e-mailed Brad on the 30th of December to share the news, along with one concern I had raised some years ago that could have made GPS better. Brad is retired but still in contact with the current GPS leadership. He forwarded my e-mail to them, and they responded with a request to share my insights. They also shared with me the marvelous improvements that have already been made since I was involved.
Below are those suggestions – hopefully complementing what they are doing:
______________________________________________________________________
January 1, 2021
Dear GPS colleagues,
Here are some suggestions for you to consider in your path forward toward GPS improvements:
1. Noise and Systematics
At the NBS/NIST time and frequency seminars we taught that it was most effective to remove the systematic characteristics from the data for clocks before analyzing the noise. This was sometimes circular because you could often better determine the systematics – like frequency drift and temperature dependence – if you knew the noise characteristics of the clocks. Those noise characteristics could be determined with the Allan variance (AVAR).
Later in the GPS program, because of the frequency drift in Rb clocks, the folks working with GPS decided to replace AVAR with the Hadamard variance, since it would remove the effects of frequency drift from the noise analysis. I counseled against this at the time because the Hadamard variance is not parsimonious. Because it performs a third-difference operation on the time error data, one can show that it is not nearly as efficient a noise characterizer as is AVAR. AVAR can be shown to be the optimum estimator of the ideal noise in atomic clocks, white-noise FM. Hence, AVAR can illustrate flicker-noise floors and random-walk FM levels more efficiently than can the Hadamard variance.
Simple, real-time data filters can remove the effects of frequency drift from the data so that AVAR, as a second-difference operator, is not contaminated by this systematic effect. This would be a good step forward for GPS. The math for the Kalman filter parameters is easier with AVAR as well and already exists from before the Hadamard variance approach was introduced.
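As an illustration of the drift-removal point above, here is a minimal Python sketch (not operational GPS code; the noise levels and drift value are hypothetical): a quadratic fit is subtracted from simulated time-error data — a quadratic in time error corresponds to a linear frequency drift — and the overlapping Allan deviation is computed before and after.

```python
import numpy as np

def allan_deviation(x, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0,
    computed from time-error data x (seconds) sampled every tau0 seconds."""
    tau = m * tau0
    d2 = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]   # second differences at spacing m
    return np.sqrt(np.mean(d2**2) / (2.0 * tau**2))

def remove_drift(x, tau0):
    """Subtract a quadratic fit from the time-error data; a quadratic in
    time error corresponds to a linear frequency drift."""
    t = np.arange(len(x)) * tau0
    return x - np.polyval(np.polyfit(t, x, 2), t)

# Simulated clock: white-noise FM plus a (hypothetical) drift of 1e-11/day.
rng = np.random.default_rng(1)
tau0, n = 100.0, 10000
y = 1e-12 * rng.standard_normal(n)               # fractional-frequency samples
y += (1e-11 / 86400.0) * np.arange(n) * tau0     # linear frequency drift
x = np.cumsum(y) * tau0                          # integrate to time error

raw = allan_deviation(x, tau0, m=64)
clean = allan_deviation(remove_drift(x, tau0), tau0, m=64)
print(f"ADEV at tau = 6400 s, with drift:    {raw:.2e}")
print(f"ADEV at tau = 6400 s, drift removed: {clean:.2e}")
```

With the drift removed, the second-difference statistic recovers the underlying white-FM level; with the drift left in, the drift term dominates at this averaging time.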
2. GPS Common-view Receiver
In 1980-81, at NBS/NIST we built a high-accuracy GPS Common-view timing receiver (0.1 nanosecond precision and an accuracy of about a nanosecond). JPL funded that development when I showed them how efficient it would be for synchronizing their Deep Space Network (DSN). We then demonstrated how effective this would be for communicating the times of clocks around the globe in the generation of International Atomic Time (TAI) and UTC by the BIPM. (2)
Then Marc Weiss and I used it to characterize the performance of all the GPS clocks. The GPS program contracted with my group, as consultants, to ascertain problems and solutions during the several years of GPS deployment. Later on, Marc and I succeeded in inverting a five-element matrix to separate, with some degree of success, the SV-clock error, GPS data-word error, atmospheric error, ephemeris error, and ground-clock error contributions. (3)
This software never got transferred to the GPS folks. We prepared a monthly report to help them adjust the Kalman filter parameters. I surmise more could be done in this regard to help with GPS performance.
3. Advanced GPS Common-view
I continued my research following my retirement in 1992 – first consulting for Hewlett Packard in the development of what was called a SMART CLOCK, in which I taught them how to overcome the effects of SA (Selective Availability) for civilian GPS receivers for synchronizing cell sites throughout the world.
While consulting for them, HP asked me to write an application note for the communication world folks to understand GPS. With the help of Professor Neil Ashby and Dr. Cliff Hodge, I wrote the booklet, The Science of Timekeeping. It is available on my original web site. (4)
On page 42 of this booklet is a figure showing the performance of Advanced GPS Common-view (ACV). Robin Giffard, one of Len Cutler’s side-kicks who helped develop the amazing 5071A Cs clock, and I did this research. I had the idea that if we could get rid of all the systematic problems in GPS Common-view, we could get much more useful timing-error performance. We demonstrated, with special receivers at USNO in Washington, D.C., and at HP in Palo Alto, CA, TDEV performance (shown in the above-mentioned figure) of better than 10 ns τ^-1/2, which is the ideal expected measurement noise for this process, with the potential of going down to 10^-18 at an averaging time of three months on an MDEV diagram. This is unprecedented, and no one has picked up on this ACV idea. It is about a factor of 50 better than GPS Common-view.
The Two-way Satellite Time and Frequency Transfer (TWSTFT) technique has been developed to improve upon the GPS Common-view technique but requires communication satellite time at significant expense. The TWSTFT technique is significantly better than the ACV technique in the short-term, but long-term systematic effects make it worse than ACV. The carrier-phase technique is showing significant promise. At this point in time, I don’t know which is better in the long-term – ACV or carrier-phase. The latter is being researched; the former lies dormant with a potential that could be very exciting and much more cost-effective than TWSTFT.
4. GPS Orthogonality Issues
Jim Barnes and I at NBS contracted with Professor Neil Ashby to do the relativity for GPS. GPS became the first practical use of Einstein’s theory of relativity. Neil and I have written several papers together. I shared with Neil the idea of using Kepler’s Third Law to obtain orthogonality for determining GPS ephemeris. We did the simulations and worked this up into a paper which we gave in the UK.
As you all know from the GDOP calculations, the orthogonality for estimating the vertical component of each GPS satellite’s dynamic position is currently the most challenging, because the satellites orbit at a radius of about 4.2 earth radii, making the geometry at the tracking stations poor for determining the vertical component of the ephemeris.
What is needed is an orthogonal view of the satellites. Kepler’s third law gives that naturally:

T^2 = (4π^2 / GM) r^3,

where T is the orbital period of the satellites (43,080 s), G is the universal gravitational constant, M is the mass of the earth, and r is the radius of the satellite orbits. If T can be determined to better than 25 microseconds, then r can be known to better than 1 centimeter. Clocks with sufficient frequency accuracy and the technology exist to do this. Professor Ashby and I went through the simulations in the 1990s to show this accuracy could be obtained. The earth’s rotation speeds up and slows down on the order of 0.1 milliseconds in a day, but currently, the VLBI folks can measure it to the order of a microsecond. In a microsecond, the satellites travel only about 4 millimeters.
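The arithmetic behind the 25-microsecond / 1-centimeter claim can be checked in a few lines of Python (standard constants; a back-of-the-envelope sketch only):

```python
import math

G = 6.674e-11          # universal gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # mass of the earth, kg
T = 43080.0            # GPS orbital period, s

# Kepler's third law: T^2 = (4 pi^2 / (G M)) r^3  ->  r = (G M T^2 / (4 pi^2))^(1/3)
r = (G * M * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"orbital radius r = {r/1000:.0f} km")

# Differentiating Kepler's law gives dr/r = (2/3) dT/T,
# so a period uncertainty dT maps to a radius uncertainty dr:
dT = 25e-6             # 25 microseconds
dr = (2.0 / 3.0) * (dT / T) * r
print(f"radius uncertainty dr = {dr*100:.2f} cm")
```

The computed radius is about 26,560 km (roughly 4.2 earth radii), and a 25-microsecond period uncertainty indeed corresponds to about a centimeter in radius.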
New technology in atomic clocks has low enough noise and high enough frequency accuracy to make these measurements at this level. In other words, with sufficiently accurate clocks and low enough measurement noise, the Doppler shift gives the point of closest approach and hence the period, T, of the satellite; through Kepler’s Third Law, this translates to knowledge of the absolute value of the radius vector at that tracking-station point with sub-centimeter uncertainty.
Kepler’s law assumes that the orbiting body is in free-fall. The Gravity Probe-B program succeeded in engineering such a design for their space vehicle. Hence, measurements could be made at the microsecond level of accuracy, which would allow knowledge of the on-track position of the satellite and its radius vector at the few-millimeter level. Since the satellites are like a giant flywheel in space, the cross-track error will drift extremely slowly, and the GPS cross-link measurements could probably easily sort out this component of the position.
In due process, these Keplerian GPS satellites could replace the VLBI measurements at the microsecond level for ascertaining the fluctuations in the earth’s rotation. These measurements could also be very useful in earthquake monitoring and prediction.
It is personally interesting to me that my friend and colleague, Brad, who I consider to be the father of GPS, is also a Gravity Probe-B expert!
We have an interesting possibility in the above concept. Each satellite so equipped with this Keplerian capability could do its own ephemeris calculations, which could in due process remove much of the very burdensome load now carried by the GPS control segment. Moreover, if the Keplerian SV concept were developed to be complementary to the existing Kalman state estimator, the high accuracy of the vertical component could greatly reduce the errors in the other two position components.
This approach, of course, would be a major change in the architecture, but it could lead to a more robust system with important redundancy, while greatly relaxing, in due process, the enormous data-transfer burden of the control segment. That burden is an annual expense of no small amount: massive data collection from all the tracking stations and all the satellites into one place for processing, then redistribution for uploading to the satellites to provide as much real-time accuracy as possible 24/7.
As you well know, the current accuracy is well under a meter for a full-up military receiver. Having sub-centimeter accuracies for the GPS satellites’ ephemerides could improve GPS real-time accuracies considerably.
Now the best atomic clock frequency accuracies are 10^-18. At a level of 1 nanosecond of time-transfer accuracy, one would have to average over 30 years to reach a measurement noise of 10^-18, which, of course, is absurd.
Currently, there is no method good enough to compare these most accurate clocks that are remote from each other. The ideal measurement noise is white-noise phase (or time) modulation (PM), and given that ideal condition, the precision with which frequencies can be compared is given by MDEV.
Since white-noise PM is a random uncorrelated noise process, the data points do not need to be equally spaced, which is very convenient in this application. To see the power in this approach, where the convergence on an accurate frequency comparison improves as N^-3/2, with N the number of time-difference measurements, consider the following example. Suppose a time-transfer measurement system with σ = 10 ps and τ0 = 1 s, as achieved by Bob Vessot in his Gravity Probe-A hydrogen-maser relativity experiment; then one could reach a measurement uncertainty of 10^-18 with N = 10^5, which takes about a day. One day versus 30 years: this approach could yield a massive improvement.
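A small simulation sketch (hypothetical numbers matching the example above) illustrates the N^-3/2 convergence: the frequency offset between two clocks is estimated as the least-squares slope of the time-difference data, and for white PM the 1-sigma slope uncertainty is approximately σ·sqrt(12/N)/(N·τ0).

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_x = 10e-12        # 10 ps white-PM time-transfer noise
tau0 = 1.0              # one time-difference measurement per second
y_true = 5e-18          # assumed true frequency offset between the clocks

for N in (10**3, 10**4, 10**5):
    t = np.arange(N) * tau0
    x = y_true * t + sigma_x * rng.standard_normal(N)   # time-difference data
    slope = np.polyfit(t, x, 1)[0]                      # least-squares frequency estimate
    # Standard error of the slope for white PM: sigma * sqrt(12/N) / (N * tau0),
    # which scales as N^(-3/2).
    err = sigma_x * np.sqrt(12.0 / N) / (N * tau0)
    print(f"N = {N:>6}: predicted 1-sigma frequency uncertainty = {err:.1e}")
```

At N = 10^5 (about a day of once-per-second measurements) the predicted uncertainty is near 10^-18, consistent with the example above.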
Because of its relevance, I share Prof. Ashby’s letter to me here:
Dear Dave—
20 March 2014
Concerning Kepler’s third law: there are significant gravitational forces arising from higher multipole moments of the earth, as well as tidal forces from the sun and the moon; these make Kepler’s Law a first (but very good) approximation. For example, Earth’s quadrupole moment can affect the satellite orbit radius with an oscillation of approximately 6-hour period and 1 km amplitude, which can give rise to periodic variations in the frequencies of the orbiting clocks. Solar and lunar tidal potentials can combine to give a frequency shift of orbiting clocks of 3.5 × 10^-15, which is close to the limiting stabilities of recently launched clocks.
Fortunately, these effects can be calculated since the multipole moments of the earth are well known. The tidal effects are very complicated and have many frequencies due to the geometry–the satellite’s orbits and the moon’s orbit, as well as the sun’s “orbit”, are in different planes; but these can be calculated also since the ephemerides of sun and moon are very well known.
The calculations are anything but simple and require substantial effort. Tidal effects are not yet included in calculations by the control segment, as far as I can find out. Also, there are significant non-gravitational forces arising from solar radiation pressure and from thrusters that keep the satellites’ antennas pointing toward the earth and reorient the satellites with respect to the solar panels when they are in shadow so the cables don’t get twisted up.
Accounting for all such effects requires a huge effort, currently done at a few places such as IGS and JPL that provide orbits and clocks for the GNSS satellites. Non-gravitational forces could be effectively eliminated by making them “drag-free” as was done for GP-B.
There are quite a few other satellites that are also drag-free such as GRACE and GOCE and drag-free masses are envisioned for the LISA gravitational wave antenna. The technology for drag-free satellites has been highly developed. I have no idea what the cost of implementing such technology on new GPS satellites would be, but I suspect it is substantial–say 20% of the cost? (based on the size of the accelerometers in relation to the overall size of GOCE?)
I heard that laser retroreflectors will be placed on forthcoming GPS satellites, but I can’t remember where I heard or read that. Laser ranging would give very accurate distances and would greatly contribute to orbit determination.
There are 4 laser ranging stations in the US and an International Laser Ranging Service that has information about the 40 or so international laser ranging sites. I very much agree with you when you write below that “Currently, there is no method good enough to compare these most accurate clocks that are remote from each other.” The problem is, they can’t even get such accurate time out of one lab yet, so their claims involve two more-or-less identically constructed clocks in the same lab. Qualification of optical clocks for use in space is a crucial step in making them useful.
GPS Rubidium clocks have unpredictable frequency offsets that occur during launch stresses. Cross-link ranging would be a very good thing in terms of improving autonomy. I don’t know why more hasn’t been done about this since it has been discussed for a very long time. Maybe it has to do with the directionality of the existing antennas. Attached [we can get this paper from Neil if needed] is some work I did in 2003 in connection with the now-defunct XNAV project, to use signals from millisecond pulsars as alternatives to GPS signals. It was stimulated by a discussion with Larry Young of JPL, the designer of the BlackJack GPS receiver. The relevant equations are 7, 15, 25, 26, and the proofs in the Appendix.
I hope you find it interesting.
Best regards,
Neil
5. Frequency Drift
Frequency drift in atomic clocks is a major long-term problem. The timing-error dispersion rate goes as δt = δx + δy t + ½ δD t^2, where δx is the synchronization error at the last calibration (t = 0), δy is the syntonization error at the last calibration, δD is the uncertainty in the estimate of the frequency drift at t = 0, and t is the time since the last calibration (or upload), or it can also be thought of as the prediction interval.
Since the clocks onboard the GPS satellites are not continuously calibrated, minimizing the prediction error and using optimum prediction algorithms are both very important, so the broadcast values for the ephemerides and clock times are as accurate as possible. Because of the t^2 effect on the frequency drift uncertainty, that error always becomes predominant in the long term.
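To illustrate how the t^2 term comes to dominate, here is the dispersion formula evaluated with purely illustrative (hypothetical, not measured) Rb-clock uncertainties:

```python
# Timing-error dispersion: dt = dx + dy*t + 0.5*dD*t^2
# Hypothetical post-upload uncertainties for a Rb clock:
dx = 1e-9        # 1 ns synchronization error at t = 0
dy = 1e-13       # fractional-frequency (syntonization) error at t = 0
dD = 1e-18       # drift-rate uncertainty at t = 0, per second

for days in (1, 7, 30):
    t = days * 86400.0
    sync, synt, drift = dx, dy * t, 0.5 * dD * t**2
    print(f"{days:>3} d: sync {sync:.1e}  synt {synt:.1e}  drift {drift:.1e} s")
```

With these numbers the drift term overtakes the frequency-error term within days and dominates completely by a month, which is the point being made above.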
The size of the error also depends on how stable and predictable the drift (D) is and on how close to optimum the prediction algorithm is. As you know, the main clocks used in GPS satellites today are rubidium gas-cell frequency standards, first developed for GPS by Bill Riley and his team at EG&G. They did a great job. But inherent in the physics of rubidium gas-cell standards are significant systematics that cause residual frequency drift in this kind of standard.
The reason these standards are used – outside of their reliability – is because of their excellent short-term stability (low level of white-noise frequency modulation (FM)), which results in very small time prediction errors in the short term. The problem comes in a military-compromised situation. If the GPS control-segment upload stations were taken out, then the long-term frequency drift error outlined above will become predominant in the long term for the GPS timing error.
GPS cross-link ranging has the potential to keep the navigation and position solutions consistent across the constellation; however, the solutions will be degraded because of the cross-link timing errors. At the same time, the GPS timing error will greatly degrade, departing as t^2 without bound, because the frequency drift of the rubidium clocks in the GPS satellites could no longer be calibrated. This degradation will not affect the position determination as much as the time error.
The solution lies in using state-of-the-art frequency standards with no (or negligible) frequency drift as one would want to use in the above Keplerian-type satellites. This could reduce the long-term GPS timing errors to negligible levels. At the same time, such standards would assist in the improvement of international time and frequency comparisons. Improved algorithms for determining the Kalman states for the GPS constellation could enhance the drift estimates as well.
6. Redundancy for GPS
Dr. Steve Smith of Oak Ridge National Laboratory asked me to study Loran-C as a potential backup giving redundancy to GPS. Sam Stein had installed 5071A Cs clocks at the Great Plains stations. Gus German was then working with me, and we used that data and a couple of ideas to see how well we might improve on the timing error and the position error. Gus and I did the research using the actual data from these stations, and we were able to get the timing errors down to about 6 ns and the position errors to about two meters.
We published these results with Steve Smith as co-author with Gus and me. I don’t have the link for that paper, but if it is of interest to you, I can get it. These numbers are both the precision and accuracy of the results, as I remember. The techniques we developed are not part of the publication and are still resident with me.
Sincerely yours,
David W. Allan
___________________________________________________________________________
Dear Ranwa,
January 4, 2021
Thank you for your kind response; I am pleased.
I have one more suggestion to add to the six that I made. I mentioned to Brad that I was not very happy with the original algorithm for GPS time implemented many years ago, and that the features of the AT-1 algorithm generating UTC(NIST), which I had shared in a GPS meeting in El Segundo, had not been appreciated.
After seeing the impressive improvement in GPS timing in the graphs that Ed Powers sent me and going over the excellent paper of Ken Senior and Michael Coleman, I can see you have made great progress. The AT-1 algorithm would not work well in your application because of the significant level of measurement noise, which the Kalman filter can nicely accommodate the way it is configured for GPS.
Back in 1968, when I wrote AT-1, I solved the measurement-noise problem with a 0.1 ns precision time-interval counter and by taking the measurements once a day, which I did manually back then between the eight clocks in the ensemble. This gave me a frequency measurement precision of 0.1 ns / 86,400 s = 1.2 × 10^-15, which was far better than the commercial cesium clocks making up the ensemble.
AT-1 is optimized for white-noise FM, flicker-noise FM, and random-walk FM; it removes systematics; and it can lock to a primary frequency standard. The Kalman filter, by contrast, cannot model flicker noise without arduous mathematical manipulation. My suggestion is that, with a software modification, the Kalman filter providing your operating GPS time could lock its time to an AT-1-like scale to solve the frequency-drift uncertainty problem in the Rb clocks.
You could use a long measurement-time interval to ensure that the measurement noise was less than the best of the clocks, and one or more of the member clocks would be a non-frequency drift primary-like standard. Since time-interval measurement noise is white PM, which averages as tau^-3/2 on an MDEV diagram, and the ideal clock noise is white FM, which averages as tau^-1/2 on an MDEV diagram, you are always guaranteed to have the measurement noise less than the clock noise if you average long enough, as I did back in 1968. Then the AT-1 algorithm requirement of negligible measurement noise would be satisfied.
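A Python sketch (hypothetical noise levels) of that guarantee: modified-Allan-deviation estimates of white-PM measurement noise fall as tau^-3/2 while white-FM clock noise falls only as tau^-1/2, so the measurement noise always drops below the clock noise at long enough averaging times.

```python
import numpy as np

def mdev(x, tau0, m):
    """Modified Allan deviation at tau = m*tau0 from time-error data x."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    s = c[m:] - c[:-m]                     # running sums of m consecutive points
    d = s[2*m:] - 2.0*s[m:-m] + s[:-2*m]   # second differences of the m-averages
    return np.sqrt(np.mean((d / m)**2) / (2.0 * (m * tau0)**2))

rng = np.random.default_rng(2)
tau0, n = 1.0, 200000
meas = 1e-10 * rng.standard_normal(n)                     # white PM: 100 ps measurement noise
clock = np.cumsum(1e-13 * rng.standard_normal(n)) * tau0  # white-FM clock, as time error

for m in (1, 100, 10000):
    print(f"tau = {m*tau0:>7.0f} s   measurement MDEV = {mdev(meas, tau0, m):.1e}"
          f"   clock MDEV = {mdev(clock, tau0, m):.1e}")
```

At tau = 1 s the measurement noise dominates by orders of magnitude; by tau = 10^4 s it has fallen well below the clock noise, satisfying the AT-1 requirement of negligible measurement noise.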
The goal in this suggestion is both to get rid of the frequency-drift uncertainty error, which grows as t^2, and to accommodate clocks with flicker-floor long-term stability, which is quite common and which the Kalman filter cannot accommodate optimally. As mentioned in my 1 January suggestion list, the t^2 term always becomes the dominant dispersion error in the long term.
With this suggestion, this t^2 dispersion error could be eliminated. The technology exists now for non-frequency drift “primary-like standard” atomic clocks, which AT-1 would lock to. These could be configured in the control segment or in the space segment. If they were in both that would give greater robustness. Having more than one in both places would give excellent redundancy and robustness.
All the best,
David W. Allan
PS: To my knowledge, NIST is still using AT-1 but does not have it tied to a primary standard; there is, however, a parameter in the algorithm that can be set to make that easy to do.
For those who don’t know the features of AT-1:
Systematics like time offset, frequency offset, and frequency drift are compensated for each clock with no measurement noise;
The weighting factors for each clock are optimized for both the short-term and long-term stability;
The output is better than the best clock;
The best clock cannot take over under the assumption of a normal distribution of errors, since it is looking at itself in the computation with each measurement cycle;
The worst clock optimally improves the output as well;
It is robust and outliers are automatically rejected and are brought back in optimally, whether it be a time-step error or a frequency-step error;
The short-term stability is optimally and adaptively modified with each measurement cycle as the clocks may improve or degrade over time;
The long-term stability is given by data from a sigma-tau (ADEV) diagram or output, as it may be calculated from each clock’s performance characteristics.
By the way, Prof. Ashby, Gus German, and I gave a paper at the 2001 IONGPS meeting on the Keplerian high accuracy orbit determination idea. (5) We have witnessed great improvements in atomic clocks since 2001, such that I believe the Keplerian satellite approach could be sub-centimeter.
Gus and I modified the AT-1 algorithm for an oven-less, six-quartz-oscillator, orthogonal clock set demonstrating atomic-clock-like performance in a dynamic (5 g) and mil-spec temperature environment in the little laboratory I had set up in our solar home. Since I have PV panels and backup batteries as part of the solar home, I made a 3.5 kW UPS to run the environmental-control oven and the oscillators for these experiments. We had fail-safe operation over the three years we did these experiments. We did this with Bliley, Inc., which supplied special dual-mode oscillators that gave us micro-degree oven-less temperature-measurement precision. We gained some insights as we modified AT-1. We shared these results at a Frequency Control Symposium in San Francisco, CA.
References:
(1) My Involvement With GPS Development | It’s About Time (itsabouttimebook.com)
(2) https://tf.nist.gov/general/pdf/689.pdf
(3) https://tf.nist.gov/general/pdf/794.pdf
(4) http://www.allanstime.com/Publications/DWA/Science_Timekeeping/index.html
(5) http://www.allanstime.com/Publications/DWA/IONGPS01/index.html