
The Cassini/Huygens Navigation Ground Data System: Design, Implementation, and Operations

Chapter in Space Operations: Inspiring Humankind's Future

Abstract

The highly successful Cassini/Huygens mission conducted almost 20 years of scientific research in both its journey across the solar system and its 13-year reconnaissance of the Saturnian system. This operational effort was orchestrated by the Cassini/Huygens Spacecraft Navigation team on a network of computer systems that met a requirement of no more than two minutes of unplanned downtime a year (99.9995% availability). The work of spacecraft navigation involved rigorous requirements for accuracy and completeness, often carried out under uncompromising time pressures, and resulted from a complex interplay between several teams within the Cassini Project, conducted on the Ground Data System. To support the Navigation function, a fault-tolerant, secure, high-reliability/high-availability computational environment was necessary for operations data processing. This paper discusses the design, implementation, re-implementation, and operation of the Navigation Ground Data System. Systems analysis and performance tuning, based on a review of science goals and user consultation, informed the initial launch and cruise configuration requirements; those requirements were subsequently upgraded to support the demanding orbital tour of the Saturn System. Configuration management was integrated with fault-tolerant design and security engineering, according to the cornerstone principles of Confidentiality, Integrity, and Availability, and strategic design approaches such as Defense in Depth, Least Privilege, and Vulnerability Removal. This approach included security benchmarks and validation to meet strict confidence levels. The implementation of this computational environment incorporated a secure, modular system that met its reliability metrics and experienced almost no downtime throughout tour operations.

Notes

  1.

    In one particularly egregious case, this author was headed out one evening to enjoy a two-week winter break in Oregon and made the poor personal decision to answer “just one more call” on his office line. After some discussion with the caller, it was determined that the MMNAV network had gone silent. Over a twelve-hour period, this author assembled a team of four system and network administrators from three separate organizations to help isolate, and finally replace, what turned out to be a bad optical Ethernet transceiver. Sadly, this ended up canceling the trip, as the author missed his transportation and promptly got sick for a week. Nor did it resolve the issue: the same transceiver would fail again the following May (at least there was a suspicion of where to look) [42]. It would take a network upgrade/overhaul some years later (see Section IV for more details) to finally put these problems to rest.

  2.

    No single general systems metric exists over this timespan; this scale represents the combination of results from the Standard Performance Evaluation Corporation [25], specific Navigation benchmark utilities (NBODY), and performance comparisons of Navigation software on differing hardware platforms (a simple illustration of such a composite scale appears after these notes).

  3.

    Digressions into relevant key areas of high-level systems design or systems engineering will be denoted in the text as Strategic Considerations, while relevant technical concerns (particularly areas of interest for systems administration) will be called out in the text as Tactical Considerations. In addition, the “Observations and Lessons Learned” section will cover a number of such observations.

  4.

    During this time, nearly 75% of all 4-mm DDS-2 tape drives (the standard used for backup for Cassini Navigation at the time) shipped to the remote site would fail shortly after arrival. After some investigation, we suspected that the poor desert roads and high altitudes (nearly 5000 ft. in some locations) were probably contributing factors. Some improvement was achieved by flying both the tapes and the drives to the site in carry-on luggage.

  5.

    These two terms are not, as popularly believed, the same. Patches and other software updates may introduce bugs, especially under the currently popular Agile/DevOps software engineering paradigms. Absent other testing schemes, it is wise to adopt a “wait and see” approach to patching critical software systems—especially in times of stress when major computer security bugs are announced. Even large companies can make mistakes! A patch failure that causes a home computer to stop working can be painful. A similar failure on a critical operations machine could end the mission. An examination of the release schedule for Microsoft and Intel software and firmware updates during the MELTDOWN/SPECTRE vulnerability disclosures in January 2018 may be instructive.

  6.

    Versions of this problem involving significantly larger numbers of actors are considered in the case of the Byzantine Generals Problem, or more simply Byzantine failure, while solutions to this problem are classified as Byzantine Fault Tolerance [43].

  7.

    Not being able to program your own network switches for such things as server failover can be irksome when trying to explain to another engineer why a particular complex setup is necessary; however, it is hard indeed to ignore a nearly immediate 100-fold upgrade from 10 Mbit/s Ethernet to 1000 Mbit/s (Gigabit) Ethernet.

  8.

    These estimates were, as all such estimates are, woefully inadequate. See the “Observations and Lessons Learned” section and the derivation of this estimate in the appendix for more detail.

  9.

    See: Amiga, NeXT, VMS, SGI-IRIX.

  10.

    The astute observer will note the central benchmark requirement denoted as [4.18] in Ref. [18] and the success of the interim upgrades in increasing performance, “… by a factor of three, and the servers by a factor of six.” Increasing the speed and capability of the “…current operational [Launch/Cruise] state, on both client and server systems” meant that the goalposts would be moved significantly higher for the tour requirements. This may not have been accidental.

  11.

    One may recall the protracted struggles in the microprocessor industry at this time between CISC—complex instruction set computer (x86 processors), its branch off to EPIC—explicitly parallel instruction computing (Itanium), and RISC—reduced instruction set computer (PA-RISC, ARM, others). CISC and RISC had radically different processor architectures, while EPIC attempted to merge some traits of the other two. Code compiled for these differing processors might have significantly different performance characteristics in different areas—much like trying to compare the performance of a heavy-duty truck and a sports car. All of these processor types were in this evaluation.

  12.

    For example, this author took days off to support several graduate final exams.

  13.

    We did not have to pay for these, and they were not part of our evaluation. They were provided for interface with other Cassini Project Operations teams and are included for completeness.

  14.

    This would be another example of “cleverness” and “ownership”—while having a redundant network connection running in parallel is par for the course for such a critical fileserver, it took considerable effort (as well as late night discussions with the facility electrician) to get a second circuit installed, powering the N + 1 redundant power strip, connected to a different Power Distribution Unit. This meant that a power failure would have to impact not only two different rooms, but two different wings of the building before the server would lose power.

  15.

    To be clear, we would have been happy to support him, we just needed a larger account if he wanted to continue… .

  16.

    If one thinks of a computer on a network being akin to a building on a street, NMAP will attempt to find all the openings, and NESSUS will try to open all the doors and windows.
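
As a simple illustration of the composite performance scale described in note 2, the sketch below (hypothetical platform names and numbers, not actual evaluation data) normalizes each benchmark family against a reference platform and combines the ratios with a geometric mean so that no single benchmark dominates the result.

```python
# Illustrative only: a minimal sketch of how a composite relative-performance
# scale (note 2) might be assembled from heterogeneous benchmark results.
# All ratios below are hypothetical placeholders, not evaluation data.
from math import prod

# Each entry: candidate-platform result relative to the reference platform,
# expressed so that a larger ratio always means "faster than the reference".
ratios = {
    "SPEC (published figures)":       2.8,
    "NBODY (NAV benchmark utility)":  3.4,
    "NAV software timing comparison": 3.1,
}

# Geometric mean keeps one outlying benchmark from dominating the scale.
composite = prod(ratios.values()) ** (1.0 / len(ratios))
print(f"Composite relative performance: {composite:.2f}x the reference platform")
```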

Abbreviations

CIA: Confidentiality, integrity, and availability (foundational security principles)
CIS: Center for Internet Security
CISscan: CIS internal host security benchmark scanner
CM: Configuration management
DOS: Denial-of-service attack
DSN: Deep Space Network
DR: Disaster recovery
ECC: Emergency Control Center, a GCC facility for DR support
GDS: Ground Data System
GCC: Goldstone Communications Complex, located in California, one of three main DSN sites
HP-UX: Hewlett-Packard Unix (System V-based OS) for Hewlett-Packard computers
IGNITE: System imaging and installation software for HP-UX systems
LAN: Local area network
Linux: Open-source Unix-like OS
OS: Operating system
MTTR: Mean Time To Restore
MMNAV: Multi-Mission Navigation operations coordinating organization
NAS: Network-attached storage
NESSUS: Nessus network security scanner from Tenable Network Security, Inc.
N+1: System with one redundant component for every point of failure
NFS: Network file system
ORT: Operational readiness test
QoS: Quality of Service
RAID: Redundant array of inexpensive disks
RLOGIN: Remote log-in
RCP: Remote copy
RSYNC: Remote synchronization (file distribution) program
Solaris: Sun Microsystems Unix (System V-based OS) for Sun computers
SFOC: Space Flight Operations Facility—Mission Critical Spacecraft Operations building at JPL
SSH: Secure Shell communications replacement for RLOGIN, RCP, and other “R” commands
SYSTEMIMAGER: System imaging and installation software for Linux computers
TMR: Triple modular redundant (three redundant components for every point of failure)

References

  1. Antreasian, P. G., Ardalan, S. M., Beswick, R. M., Criddle, K. E., Ionasescu, R., Jacobson, R. A., et al. (2008). Orbit determination processes for the navigation of the Cassini/Huygens mission. In AIAA-2008-3433, SpaceOps Conference, Heidelberg, Germany, May 12–16, 2008. https://doi.org/10.2514/6.2008-3433.

  2. Williams, P. N., Gist, E. M., Goodson, T. D., Hahn, Y., Stumpf, P. W., & Wagner, S. V. (2008). Orbit control operations for the Cassini-Huygens mission. In AIAA-2008-3429, SpaceOps Conference, Heidelberg, Germany, May 12–16, 2008. https://doi.org/10.2514/6.2008-3429.

  3. Beswick, R., Antreasian, P., Gillam, S., Hahn, Y. H., Roth, D., & Jones, J. (2008). Navigation ground data system engineering for the Cassini/Huygens mission. In AIAA 2008-3247, SpaceOps 2008 Conference, Heidelberg, Germany, May 12–16, 2008. https://doi.org/10.2514/6.2008-3247.

  4. Beswick, R. M., & Roth, D. C. (2012). A gilded cage: Cassini/Huygens Navigation ground data system engineering for security. In AIAA 2012-1267202, SpaceOps 2012 Conference, Stockholm, Sweden, June 11–15, 2012. https://doi.org/10.2514/6.2012-1267202.

  5. Beswick, R. M. (2017). Computer security as an engineering practice: A system engineering discussion. In IEEE: 6th International Conference on Space Mission Challenges for Information Technology (SMC-IT), September 27–29, 2017. https://doi.org/10.1109/smc-it.2017.18.

  6. Beswick, R. M. (2018). Computer security as an engineering practice: A system engineering discussion. In Advances in Science, Technology and Engineering Systems Journal (ASTESJ), vol. Special Issue 5, no. Multidisciplinary sciences and Engineering, p. (to be published).


  7. Byrne, D., Frantz, C., Weymouth, T., & Harrison, J. (Composers). (1980). Once in a lifetime [sound recording]. Sire Records.

  8. Wikipedia, Whac-A-Mole, [online encyclopedia], Wikimedia Foundation, December 15, 2017. [Online]. http://en.wikipedia.org/wiki/Whac-A-Mole. Accessed March 28, 2018.

  9. Coulouris, G., Dollimore, J., & Kindberg, T. (2005). Distributed systems, concepts and design (4th ed., p. 519). New York: Addison-Wesley.


  10. Rich, B. R. (1995). Clarence Leonard (Kelly) Johnson, 1910–1990. In A biographical memoir (p. 231), National Academy of Sciences, National Academies Press, Washington, D.C.


  11. Kranz, G. (2009). Failure is not an option: Mission control from Mercury to Apollo 13 and beyond (p. 392). New York: Simon & Schuster.


  12. Affleck, B. (2012). Argo. [Film]. USA: Warner Brothers.


  13. Beswick, R. M. (2003). Response to RFA #3, of review for Cassini Navigation, of 28 August 2003. IOM 312.D/006-2003, Jet Propulsion Laboratory, NASA, Pasadena, CA, October 15, 2003.


  14. Goddard Technical Standard, Risk management reporting, GSFC-STD-0002, Goddard Space Flight Center, NASA, Greenbelt, MD, May 8, 2009.


  15. Hewlett Packard Enterprise, HP Ignite-UX, Hewlett Packard Enterprise Development. (2018). [Online]. https://www.hpe.com/us/en/product-catalog/detail/pip.4077173.html. Accessed March 30, 2018.

  16. Cheswick, W. R., Bellovin, S. M., & Rubin, A. D. (2003). Firewalls and internet security, repelling the Wily Hacker (2nd ed., pp. 10–14). New York: Addison-Wesley.


  17. Ekelund, J. E. (2000). Functional requirements document for the navigation software system—Encounter version. 699-SCO/NAV-FRD-501-ENC, Jet Propulsion Laboratory, NASA, Pasadena, CA, April 25, 2000.


  18. Jones, J. (1992). Navigation requirements reference document for Cassini, 699-500-4. Jet Propulsion Laboratory, NASA, Pasadena, CA, December 1992.


  19. Beswick, R. M. (2002). Cassini Navigation hardware requirements. IOM 312.D/007-2002, Jet Propulsion Lab, NASA, Pasadena, CA, September 30, 2002.


  20. Moore, G. E. (1965, April 19). Cramming more components onto integrated circuits. Electronics, 38(8), 114–117.

  21. Intel, Excerpts from a conversation with Gordon Moore: Moore’s Law, Intel Corporation. (2005). http://large.stanford.edu/courses/2012/ph250/lee1/docs/Excepts_A_Conversation_with_Gordon_Moore.pdf. Accessed March 30, 2018.

  22. Walter, C. (2005, August). Kryder’s law (pp. 32–33). Scientific American.


  23. Wall, L., Christiansen, T., & Schwartz, R. (1996, September). Programming perl (2nd ed.). O’Reilly & Associates.


  24. Beswick, R. M. (2002). Initial product evaluation for Cassini Navigation upgrades. IOM 312.D/008-2002, Jet Propulsion Laboratory, NASA, Pasadena, CA, November 24, 2002.


  25. Standard Performance Evaluation Corporation, SPEC: Standard Performance Evaluation Corporation, Standard Performance Evaluation Corporation, March 1, 2018. [Online]. https://www.spec.org. Accessed March 30, 2018.

  26. Finley, B. E. (2015). SystemImager, September 2, 2015. [Online]. https://github.com/finley/SystemImager/wiki. Accessed March 30, 2018.

  27. Yeh, Y. C. (2001). Safety critical avionics for the 777 primary flight controls system. In IEEE 20th Digital Avionics Systems Conference (DASC), Daytona Beach, FL, October 14–18, 2001. https://doi.org/10.1109/dasc.2001.963311.

  28. Beswick, R. M. (2017). Cassini Navigation file server storage estimates through EOM. IOM 392K-17-001, Jet Propulsion Laboratory, NASA, Pasadena, CA, March 10, 2017.


  29. Beswick, R. M. (2018). Final disposition of Cassini Assets. IOM 392K-18-002, Jet Propulsion Laboratory, NASA, Pasadena, CA, September 24, 2018.


  30. Gray, J., & Siewiorek, D. P. (1991, September). High-availability computer systems. Computer, 24(9), 39–48. https://doi.org/10.1109/2.84898.

  31. Twain, M. (1894). Pudd’nhead Wilson. New York City: Charles L. Webster & Co.


  32. Skoudis, E., & Liston, T. (2006). Counter hack reloaded: A step-by-step guide to computer attacks and effective defenses (2nd ed.). New York: Prentice Hall.

  33. Bishop, M. (2003). Computer security, art and science (pp. 344–345). New York: Addison-Wesley.


  34. Anderson, R. J. (2008). Security engineering: A guide to building dependable distributed systems (2nd ed.). New York: Wiley.


  35. Information Assurance Directorate, Operating Systems guidance, National Security Agency, [Online]. https://www.iad.gov/iad/library/ia-guidance/security-configuration/operating-systems/index.cfm. Accessed April 20, 2017.

  36. Center for Internet Security, CIS—Center for Internet Security, CIS, [Online]. http://www.cisecurity.org. Accessed March 30, 2018.

  37. National Vulnerability Database, National Checklist Program Repository, National Institute of Standards and Technology, [Online]. https://nvd.nist.gov/ncp/repository. Accessed March 30, 2018.

  38. NMAP, Nmap, [Online]. http://www.nmap.org. Accessed March 30, 2018.

  39. Nessus, Tenable security, Tenable Inc, [Online]. http://www.tenable.com/products. Accessed March 30, 2018.

  40. Shakespeare, W. (1599). Henry V, Act IV, Scene III. [Performance].


  41. CloudSquare, CloudHarmony—Service status (comparison), CloudSquare, March 30, 2018. [Online]. https://cloudharmony.com/status. Accessed March 30, 2018.

  42. Beswick, R. M. (1997). Saturday, May 24th, [MMNAV NAV-OPS LAN] NETDOWN, JPL NETDOWN report (MMNAV NAV-OPS archive: email distribution list), Pasadena, CA, Saturday, May 24, 1997.


  43. Castro, M., & Liskov, B. (2002, November). Practical Byzantine fault tolerance and proactive recovery. ACM Transactions on Computer Systems, 20(4), 398–461. https://doi.org/10.1145/571637.571640.



Acknowledgements

No work of this size can be carried out alone. The editors who assisted in this document did tremendous service, and to William Owen, Zachary Porcu, Duane Roth, and Sean Wagner a considerable debt is due. Without their aid, the author could not imagine successfully accomplishing such an effort.

I would also like to acknowledge the help and assistance of all the very many individuals involved with the Cassini Project over the years, without whose help this effort would not have been possible. In particular, the author would like to commend all those system and network administrators who have been a part of the Cassini Navigation team [40] and directly supported the effort described in this paper. To Charles W. Rhoades, Jaime C. Mantel, Scott E. Fullner, Tomas Y. Hsieh, Katherine D. Nakazono, David M. Bajot, Dimitri Gerasimatos, Frank Yu, Elizabeth Real, Jae H. Lee, and Navnit “Nick” C. Patel, “But we in it shall be remembered; we few, we happy few, we band of brothers” (and sisters), to whom we give our deepest thanks.

This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Reference to any specific commercial product, process, or service by trade name, trademark, manufacturer or otherwise, does not constitute or imply its endorsement by the US Government or the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2018 California Institute of Technology. US Government sponsorship acknowledged.

Author information

Correspondence to R. M. Beswick.

Appendix—Key Requirements for Tour

As there is considerable interest in the evaluation process used to make the hardware decisions for the orbital tour, we discuss here the key functional requirements from that modeling effort and show how they led to the design choices made for the Navigation computational system.

We have already discussed the central requirement of enhanced speed and processing. Other core requirements would provide similar metrics for system availability:

4.8 The Navigation Hardware and Operating System Software shall provide 99.97% [i.e. 2–3 h of unplanned downtime per year] uptime capability. (from 5.1.3.7, 3.2.1.1)

4.9 The Navigation Computer System shall be configured to have a mean-time to restore overall system functionality of 30 min during critical periods and 60 min during non-critical periods. While this does not imply that all subsystems will be functional, all systems necessary to fulfill the NAV operational requirements will be restored in this period.

4.11 The Navigation Hardware and Operating System Software shall provide 24-7 uptime capability. (5.1.3.8,3.2.1.11) [19]
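
The availability figures used here (99.9995% in the abstract, 99.97% in requirement 4.8) translate directly into small annual downtime budgets, and requirement 4.9's MTTR feeds the same arithmetic. The short worked conversion below is a sketch; the ten-week mean time between failures in the last step is an assumed figure for illustration, not a mission statistic.

```python
# Worked conversion between availability percentage and allowed unplanned
# downtime per year, using the figures quoted above.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes(availability_pct: float) -> float:
    """Allowed unplanned downtime per year for a given availability."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

for pct in (99.97, 99.9995):
    m = downtime_minutes(pct)
    print(f"{pct}% availability -> {m:.1f} min/year ({m / 60:.2f} h/year)")

# Availability can also be expressed as A = MTBF / (MTBF + MTTR).
# With the 30-minute critical-period MTTR of requirement 4.9, an assumed
# mean time between failures of ~10 weeks yields roughly the 99.97% figure.
mtbf_min = 10 * 7 * 24 * 60
mttr_min = 30
print(f"A = {mtbf_min / (mtbf_min + mttr_min):.5f}")
```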

Requirements on systems design also incorporated the experience gained with our systems model and the importance of CM, which had evolved since launch:

2.3 The design of the Navigation Hardware shall use an approach which stresses interface simplicity. (5.1.1.4, 3.1.1.8, 3.2.1.3)

2.8 Each Navigation Engineer shall have a Navigation Workstation located in the Navigation Mission Support Area (MSA). (5.1.1.8, 3.1.1.55)

2.9 The Navigation Workstations shall be connected to the Navigation Computer Servers by the Multi-Mission Navigation (MMNAV) Local Area Network (LAN). (5.1.1.9, 3.1.1.55, 3.1.1.57)

2.10 The Navigation Hardware shall support local file storage and retrieval capabilities. (5.1.1.10, 3.1.1.43, 3.2.1.5)

These concerns would be coupled with fault-tolerant and modular design considerations, especially those promoting redundancy and resiliency:

2.1 The Navigation Hardware shall support the Cassini Mission from Launch to EOM. (5.1.1.1, 3.1.1.4, 3.1.1.5, 3.2.1.1)

2.17 The Navigation Hardware and Operating System Software shall be designed such that to the maximum extent feasible, single points of failure shall be eliminated in favor of multiple-redundant sub-systems.

2.18 The Navigation Hardware and Operating System Software shall be designed such that, to the maximum extent feasible, degraded performance shall be accommodated in preference to non-operating states in the event of component failure. (from 57-98)

2.20 The Navigation Hardware and Operating System Software should be designed so that, by the use of modular components and software systems, whole-system copying and configuration management utilities, such as Solaris Jumpstart, HP Ignite, Linux kickstart/system imager or the like, and/or spare/redundant systems and components, to the maximum extent feasible, system maintenance issues are minimized in terms of effort and time. [19]
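
Requirement 2.20 leans on whole-system imaging and configuration-management tools (Jumpstart, Ignite-UX, kickstart/SystemImager). The sketch below is not those tools; it only illustrates the underlying idea of checking a host against a “golden image” manifest of checksums so that configuration drift is detected early. The manifest paths and hashes are hypothetical.

```python
# Illustrative only: comparing a host against a golden-image manifest of
# SHA-256 checksums, the verification idea behind whole-system CM tooling.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def drift_from(manifest: dict) -> list:
    """Return files that are missing or differ from the golden image."""
    drifted = []
    for name, expected in manifest.items():
        p = Path(name)
        if not p.is_file() or sha256_of(p) != expected:
            drifted.append(name)
    return drifted

# Hypothetical manifest entries, for demonstration only.
golden_manifest = {
    "/etc/hosts":    "0000000000000000000000000000000000000000000000000000000000000000",
    "/etc/ntp.conf": "1111111111111111111111111111111111111111111111111111111111111111",
}
print("Drifted or missing:", drift_from(golden_manifest))
```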

Some design requirements promoted the idea of interchangeable components, particularly that each workstation or server could be exchanged efficiently with any other Navigation workstation or server:

4.10 The Navigation Hardware and Operating System Software shall provide a capability to cluster the primary servers used by the Navigation Computer System to permit a failover from a faulty server to a functional server rapidly and without modification of the other parts of the Navigation Computer System in under one minute [under one second goal]. This means that, aside from any operations or software runs in progress during such a failure, no data will be corrupted or lost.

4.17 The Navigation Hardware and Operating System Software should be capable of meeting the functional requirements for performance on each user’s personal workstation. [Rationale: the previous model of having centralized compute servers doing most of the processing work led to underutilized personal workstations and very overloaded central servers. As compute costs have come down significantly over time, it is desired to shift more of the processing load to the individual workstations and leave the central servers as file and configuration servers.] (5.1.3.11, 3.2.1.1) [19]
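
Requirement 4.10's one-minute failover budget is easier to reason about with a concrete heartbeat schedule. The sketch below is purely illustrative; the host names and the promote step are placeholders rather than the Navigation cluster's actual mechanism. It probes the primary at a fixed interval and promotes the standby only after several consecutive misses, keeping worst-case detection well inside 60 s.

```python
# Illustrative only: a heartbeat-and-promote pattern sized to a 60-second
# failover budget.  Host names and the promotion step are placeholders.
import subprocess
import time

HEARTBEAT_INTERVAL_S = 5     # probe the primary every 5 seconds
MISSES_BEFORE_FAILOVER = 6   # 6 consecutive misses ~= 30 s, inside 60 s

def primary_alive(host: str = "nav-server-prime") -> bool:
    """Single ICMP probe; True if the primary answers within 2 s."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def promote_standby() -> None:
    # Placeholder: a real cluster would move the service address, import
    # shared storage, and start the Navigation services on the standby.
    print("FAILOVER: promoting nav-server-backup to the primary role")

def monitor() -> None:
    missed = 0
    while True:
        missed = 0 if primary_alive() else missed + 1
        if missed >= MISSES_BEFORE_FAILOVER:
            promote_standby()
            return
        time.sleep(HEARTBEAT_INTERVAL_S)

if __name__ == "__main__":
    monitor()
```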

Some helpful ideas were introduced that would prove useful in increasing the robustness of the computational environment:

4.13 The Navigation Hardware shall be configured to include an online “hot-spare” workstation and server, so that in the event of a server-cluster failure or a workstation failure, the recovery time is limited only by the time necessary to change the configuration of the workstation or server to its new mode serving as an operational replacement, within 60 min of the outage of the prime workstation. This should not preclude additional like spares being so configured if feasible. (4571-7) [19]

This would prove to be a highly useful requirement—it forced the question of maintaining a spare capability, which allowed for rapid resolution of problems. This will be discussed in more detail in the next section.

Separate performance requirements were levied against specific software sets that were viewed as more technically difficult than the first general requirement (4.18) given above. They were considered more operationally stringent than previous requirements [17] and included upgraded constraints specific to the orbital tour that could serve as a good benchmark for generalized system performance:

4.1 The Navigation Hardware and Software shall be capable of updating the ephemerides of all nine major Saturnian satellites in a single run. This is an encounter requirement. (from 3.1.1.42) from (6421-43)

Rationale: Needed to reduce the time required to update spacecraft ephemeris.
Note: This implies a temporary working filespace of at least 1 GB to complete this task.

4.2 The Navigation Hardware shall be capable of updating an orbit determination solution, including the satellite ephemerides update spanning 4 years (see 1.3.2 above), and estimations of 150 bias params, 6 stochastic params (up to 200 batches) for 50,000 data points within 5 min [1 min goal] of receipt of the input NAVIO tracking data file. (from 5.1.3.2, 3.1.1.29, 3.2.1.9, 4.2.3.1 among others)

4.3 The Navigation Hardware shall be capable of performing at least 5 iterations of a maneuver design update computer run within 1 min [10 s goal] of receipt of the final orbit estimate. (from 5.1.3.3, 3.1.1.30, 3.2.1.10)

4.4 The Navigation Hardware shall be capable of running the LAMBIC software capable of simulating a maximum set of 180 maneuvers and 100 encounters with 100 K-matrix files, with a 1000-sample Monte Carlo run using a baseline tour maneuver strategy in 15 min or less. (from 4.3.1.10) from (6421-23).

4.5 The Navigation Hardware shall be capable of processing up to 200 ISS pictures per day at a peak rate of 30 pictures per hour. (5.1.3.4) from (3.1.2.37)

4.6 The Navigation Hardware shall be capable of producing optical navigation picture schedules for one month of the mission within two working days, where the two working days also include the analyst’s time. (5.1.3.5, 3.1.2.35, 3.2.2.4)

4.7 The Navigation Hardware shall provide convenient on-line access to the prior six months of optical navigation images. (3.2.2.7)

Note: This requirement is probably met under most configurations by 4.15. [19]
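
A quick check of the arithmetic behind requirement 4.5: sustaining the peak image rate against the full daily volume implies roughly 6.7 hours of processing per day and a per-picture budget of about two minutes at peak.

```python
# Quick arithmetic behind requirement 4.5: 200 ISS pictures per day at a
# peak rate of 30 pictures per hour.
daily_pictures = 200
peak_rate_per_hour = 30

hours_at_peak = daily_pictures / peak_rate_per_hour   # ~6.7 h of peak processing per day
seconds_per_picture = 3600 / peak_rate_per_hour       # 120 s per picture at the peak rate
print(f"{hours_at_peak:.1f} h/day at peak, {seconds_per_picture:.0f} s per picture")
```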

Individual workstation and fileserver disk storage capabilities were determined through an extensive series of user interviews. These became the minimum system requirements for the central file server:

4.15 Data Storage Requirements

A. The Navigation Hardware and Operating System Software shall be configured to provide an offline backup of all Navigation online systems at least twice a week.

B. The Navigation Hardware and Software (this could involve changes to the archive S/W) shall be configured to provide a long-term offline archive/backup capability on a stable medium [a CD-ROM writer or some other similar stable media].

C. The Navigation Hardware and Operating System Software shall be configured to provide online data storage for all navigation delivery files and files necessary to duplicate such deliveries until the prime EOM. (from 3.2.1.1)

Note: Part C implies a necessary critical disk space of at least 1 TB (3 TB total) as follows:

ODP: 150 MB per run × 150 runs + 50 GB additional/overhead = ~100 GB

TRAJ: 11.75 GB between two probe deliveries and twelve project deliveries

MAN: TCM: 70 MB per maneuver × 150 runs = 51.3 GB +
LAMBIC: 10 MB per encounter × 50 encounters × 5 analysts = 2.5 GB +
CATO: 10 MB per encounter × 50 encounters × 5 analysts = 2.5 GB
Total MAN: ~60 GB

OMAS: 25,000 pictures × 2 MB per picture = ~50 GB

SA: 100 GB software C/M repository (current + old flight software) +
50 GB workstation and server OS software + turn-key images +
100 GB file system overhead

+ 471.3 GB snapshot space for the above disk areas = 943.5 GB

Total minimum disk space: 950 GB

Total critical disk space (assuming 70% utilization): 1350 GB

Total critical disk space, including online mirror: 2670 GB (across two or more systems)

D. The Navigation Hardware and Operating System Software shall be configured in such a manner as to allow online data storage to be easily scaled up to five times its capacity, in order to provide for future growth. [19]
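
The sizing note to requirement 4.15(C) can be reproduced with a few lines of arithmetic. The sketch below uses the per-area estimates quoted above; the small differences from the quoted totals (471.3 GB, 1350 GB, 2670 GB) come from rounding in the original note.

```python
# Reproducing the storage-sizing arithmetic quoted in requirement 4.15(C).
areas_gb = {
    "ODP":  100.0,   # orbit determination runs plus overhead
    "TRAJ":  11.75,  # probe and project trajectory deliveries
    "MAN":   60.0,   # TCM + LAMBIC + CATO maneuver products
    "OMAS":  50.0,   # ~25,000 optical navigation pictures at ~2 MB each
    "SA":   250.0,   # software CM repository + OS images + filesystem overhead
}

working = sum(areas_gb.values())    # ~472 GB of working data
with_snapshots = 2 * working        # snapshot space roughly doubles it (~944 GB)
critical = with_snapshots / 0.70    # keep utilization at or below 70%
with_mirror = 2 * critical          # online mirror across two or more systems

print(f"working ~{working:.0f} GB, with snapshots ~{with_snapshots:.0f} GB")
print(f"critical ~{critical:.0f} GB, with online mirror ~{with_mirror:.0f} GB")
```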

And the individual workstations:

4.19 The Navigation Hardware should be capable of storing at a minimum, 150 GB of data in a local file system, 150 GB for the mirror of local data for a total of 300 GB on each individual workstation and have 2 GB of memory (RAM) capacity. [Rationale: in order to fulfill 4.17 and noting the general requirements of 4.18, these specifications round out the performance requirements noted previously.] [19]

These requirements served as a useful yardstick for the systems evaluation process. Against this background, we would be able to consider the hardware choices critically.


Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Beswick, R.M. (2019). The Cassini/Huygens Navigation Ground Data System: Design, Implementation, and Operations. In: Pasquier, H., Cruzen, C., Schmidhuber, M., Lee, Y. (eds) Space Operations: Inspiring Humankind's Future. Springer, Cham. https://doi.org/10.1007/978-3-030-11536-4_12


  • DOI: https://doi.org/10.1007/978-3-030-11536-4_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-11535-7

  • Online ISBN: 978-3-030-11536-4

  • eBook Packages: Engineering, Engineering (R0)
