Provenance-Enabled Automatic Data Publishing

  • James Frew
  • Greg Janée
  • Peter Slaughter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6809)


Scientists are increasingly being called upon to publish their data as well as their conclusions. Yet computational science often necessarily occurs in exploratory, unstructured environments. Scientists are as likely to use one-off scripts, legacy programs, and volatile collections of data and parametric assumptions as they are to frame their investigations using easily reproducible workflows. The ES3 system can capture the provenance of such unstructured computations and make it available so that the results of such computations can be evaluated in the overall context of their inputs, implementation, and assumptions. Additionally, we find that such provenance can serve as an automatic “checklist” whereby the suitability of data (or other computational artifacts) for publication can be evaluated. We describe a system that, given the request to publish a particular computational artifact, traverses that artifact’s provenance and applies rule-based tests to each of the artifact’s computational antecedents to determine whether the artifact’s provenance is robust enough to justify its publication. Generically, such tests check for proper curation of the artifacts, which specifically can mean such things as: source code checked into a source control system; data accessible from a well-known repository; etc. Minimally, publish requests yield a report on an object’s fitness for publication, although such reports can easily drive an automated cleanup process that remedies many of the identified shortcomings.
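The publication check described above can be sketched as a recursive traversal of an artifact's provenance graph, applying curation rules to each antecedent and collecting failures into a fitness report. This is a minimal illustrative sketch, not the ES3 implementation: all class names, rule names, and attributes (`Artifact`, `in_source_control`, `in_repository`, etc.) are assumptions for demonstration.

```python
# Hypothetical sketch of a rule-based publication check over a provenance
# graph. Names and attributes are illustrative, not the ES3 API.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    kind: str                       # e.g. "script" or "data" (assumed taxonomy)
    in_source_control: bool = False  # assumed curation flag
    in_repository: bool = False      # assumed curation flag
    antecedents: list = field(default_factory=list)

def curation_rules(artifact):
    """Yield (rule description, passed) pairs for one artifact."""
    if artifact.kind == "script":
        yield ("source code checked into source control",
               artifact.in_source_control)
    if artifact.kind == "data":
        yield ("data accessible from a well-known repository",
               artifact.in_repository)

def publish_report(artifact, seen=None):
    """Traverse the artifact's provenance and collect rule failures."""
    if seen is None:
        seen = set()
    if artifact.name in seen:        # antecedents may be shared in the graph
        return []
    seen.add(artifact.name)
    failures = [(artifact.name, rule)
                for rule, ok in curation_rules(artifact) if not ok]
    for ante in artifact.antecedents:
        failures += publish_report(ante, seen)
    return failures

# Example: a result derived from an uncurated script and a curated dataset.
script = Artifact("process.py", "script", in_source_control=False)
data = Artifact("input.nc", "data", in_repository=True)
result = Artifact("output.nc", "data", antecedents=[script, data])
report = publish_report(result)
```

In this sketch a publish request on `result` reports two shortcomings (the result itself is not yet in a repository, and its generating script is not under source control); as the abstract notes, such a report could equally drive an automated cleanup that remedies the failures it lists.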


Keywords: provenance · publishing · curation





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • James Frew¹
  • Greg Janée¹
  • Peter Slaughter¹

  1. Earth Research Institute, University of California, Santa Barbara, USA
