SciData: a data model and ontology for semantic representation of scientific data
- Cite this article as:
- Chalk, S.J. J Cheminform (2016) 8: 54. doi:10.1186/s13321-016-0168-9
Keywords: Science data, Semantic annotation, Ontology, JSON-LD, RDF, Scientific data model
Abbreviations
CCDC: Cambridge Crystallographic Data Centre
NMR: nuclear magnetic resonance
NoSQL: not only SQL
OWL: Web Ontology Language
REST: Representational State Transfer
RDF: Resource Description Framework
SDM: scientific data model
SDMO: scientific data model ontology
SPARQL: SPARQL Protocol and RDF Query Language
SQL: Structured Query Language
URI: Uniform Resource Identifier
W3C: World Wide Web Consortium
XML: Extensible Markup Language
XSLT: XML Stylesheet Language Transformation
For almost 40 years, scientists have been storing scientific data on computers. With the advent of the Internet, research data could be shared between scientists, first via email and later using web pages, FTP sites, and online databases. With the advancement of Internet technologies and online and local storage capabilities, the options for collecting and storing scientific information have become virtually unlimited.
Yet, with all these advancements, science faces an increasingly important issue of interoperability. Data are commonly stored in different formats, organized in different ways, and available via different tools/services, severely impacting curation. In addition, data is often without context (no metadata describing it), and where metadata does exist it is minimal and often not based on standards. Though the Internet has promoted the creation of open standards in many areas, scientific data has, in a sense, been left behind because of its inherent complexity. The strange part about this scenario is that scientific data itself is not the biggest problem. The problem is the contextualization of the scientific data—the metadata that describes the system it applies to, the way it was investigated, the scientists who determined it, and the quality of the measurements.
So, what is scientific data and where is the metadata? Peter Murray-Rust grappled with these questions in 2010 and concluded that it is “factual data that shows up in research papers”. When writing scientific articles, researchers add most (in most cases not all) of the valuable metadata in the description of the research they have performed. The motivation of course is open sharing of knowledge for the advancement of science, with appropriate attribution and provenance of research work. As we move toward the fourth paradigm, where large aggregations of data are the key to discovery, it is imperative that the context of the data is articulated completely (or as completely as possible), not only to identify its origin and authenticity, but more importantly to allow the data to be located correctly on the “scientific data map”.
To address these issues, this paper describes a generic scientific data model (SDM)/framework for scientific data derived from (1) the common structure of scientific articles, (2) the needs of electronic notebooks to capture scientific research data and metadata, and (3) the clear need to organize scientific data and its contextual descriptors (metadata). The SDM is intended to be data format/software agnostic and extremely flexible, so that it can be implemented as the scientific research dictates. While the SDM is abstract in nature, it defines a concrete framework that can be easily implemented in any database and does not constrain the data and metadata that can be stored. It therefore serves as a backbone upon which data and its associated metadata can be ‘attached’.
In addition, this paper describes an ontology that defines the terms in the SDM, which can be used to semantically annotate the structure of the data reported. In this way, scientific data can be integrated together by storage in Resource Description Framework (RDF)  triple stores and searched using SPARQL Protocol and RDF Query Language (SPARQL) queries .
JSON-LD is a recent solution to allow transfer of any type of data via the web’s architecture—Representational State Transfer (REST)—using a simple text-based format—JSON. JSON-LD allows data to be transmitted with meaning, that is, the “@context” section of a JSON-LD document is used to provide aliases to the names of data reported and link them to ontological definitions using a Uniform Resource Identifier (URI)—often a Uniform Resource Locator (URL). In addition, the structure/data of the JSON-LD file can automatically be serialized to RDF using a JSON-LD processor, e.g. the JSON-LD Playground. This capability makes JSON-LD files not only useful as a data format but also a compact representation of the meaning of the data.
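As a minimal sketch of the “@context” mechanism described above (the property names and ontology namespace below are hypothetical placeholders, not the actual SciData context files):

```python
import json

# An illustrative JSON-LD document. The "@context" block maps each short
# name used in the body to an ontological definition identified by a URI.
# All names and URLs here are invented for illustration.
doc = {
    "@context": {
        "temperature": "https://example.org/onto#temperature",
        "value": "https://example.org/onto#hasValue",
    },
    "@id": "https://example.org/data/point1",
    "temperature": {"value": 298.15},
}

# A JSON-LD processor (e.g. the JSON-LD Playground) can expand these
# aliases to full URIs and serialize the document to RDF triples.
print(json.dumps(doc, indent=2))
```

Because the aliases resolve to URIs, two datasets that use different short names but the same ontological definitions remain comparable after expansion.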
Aim, design and setting of the study
The aim of this work was to develop a serialization of scientific data and its contextual metadata. The design was encoded using the JSON-LD specification  because it is both a human readable and editable format and can easily be converted to RDF  triples for ingestion into a triple store and subsequent SPARQL searching . The intent was that the data model, developed to afford the serialization, would be able to structure any scientific data (see examples).
Description of materials
laboratory notebook data
research article data
spectral data (NMR)
computational chemistry data
PubChem download as XML
Dortmund Data Bank webpage as HTML
Crystallographic Information Framework (CIF) file as text
Description of all processes and methodologies employed
In this work, different pieces of scientific data were selected and analyzed to determine the metadata necessary to completely describe the context in which the data were obtained. After looking at the data and its context, reading a number of research articles on what scientific data is, and reviewing journal guidelines for submission of research, a preliminary generic structure for scientific data and metadata was developed. This was iteratively improved by encoding data of higher and higher complexity into the framework and adding/deleting/adjusting elements as necessary to make the model fit the needs of the data.
Statistical analyses were not performed.
Results and discussion
Considerations for a scientific data model
What is scientific data?
Define a research question: What is the scope of the work? What area of science is the investigation in? What phenomena are we investigating?
Formulate a hypothesis: What parameters/conditions do we control or monitor in order to evaluate the effect on our system?
Design experiments: What instrumentation/equipment do we use? What are the settings and/or conditions? What procedures are used?
Make observations: What are the values of the controlled parameters, experimental variables, measured data, and/or observations?
Generate results: How is data aggregated? What calculations are used? What statistical analysis is done?
Make conclusions/decisions: What are the outcomes? Is the data good quality? Do they help answer the question(s) asked? How does the data influence/impact subsequent experiments?
The process above defines the types of information scientists collect as they perform science, and once a project is complete they aggregate all of the important details (data, metadata, and results) from the process and synthesize one or more research papers to inform the world of their work. Thus, scientific papers can be considered a pseudo data model for science. Yet, this format has significant flaws: in general it is not set up uniformly, often contains only a subset of all the metadata of the research process, and is influenced by the biases of authors and the constraints of publication guidelines.
How is scientific data structured?
Scientists have grappled with structuring scientific data since its inception. Communication of scientific information in easy to understand formats is extremely important for comprehension and hypothesis development, especially as the size and complexity of data grows. Its representation is also highly dependent on the research area both in terms of size/complexity of captured data and common practices of the discipline.
In chemistry the best example of data representation is the periodic table, the fundamental organization of data about elemental properties, structure, and reactivity, and it is impossible to be a chemist without appreciating the depth of knowledge it represents. The same is true of the classification of species in biology [13, 14], or of the data model underlying the grand unification of forces in physics.
Data representation/standardization in chemistry has since evolved primarily in two areas: Chemical structure representation and analytical instrument data capture .
Chemical structure representation
Communication of chemical structure has been an area of significant development since John Dalton introduced the idea that matter was composed of atoms in 1808, and developed circular symbols to represent known atoms. It wasn’t long before Berzelius wrote the first text-based chemical formula, H2SO4, showing the relative number of atoms of each element. Since these early steps chemists have found the need to create representations of molecular structure for many different applications. In the twentieth century this brought us text string notations such as Wiswesser Line Notation (WLN), the simplified molecular-input line-entry system (SMILES), and most recently the International Chemical Identifier (InChI), in addition to the classical condensed molecular formula. Both SMILES and InChI are elegant solutions for encoding structural information in text, where the string-to-structure conversion (and vice versa) can be done accurately by computer for small molecules. Solutions for large molecules, crystals, and polymers are still needed, as are definitive representations of stereocenters.
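To make the notations above concrete, here is one small molecule (ethanol) rendered in each of them; the SMILES and InChI strings are believed correct but should be checked with structure-conversion software before reuse:

```python
# Ethanol in the text notations discussed above. Each string encodes the
# same connectivity; SMILES and InChI can be interconverted with a
# structure by software for small molecules like this one.
ethanol = {
    "molecular_formula": "C2H6O",
    "smiles": "CCO",
    "inchi": "InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3",
}

for notation, text in ethanol.items():
    print(f"{notation}: {text}")
```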
Analytical instrument data capture
Since the introduction of microcomputers in the early 1970s, chemists have used a number of formats to deal with the large amounts of data produced by scientific instruments. The significant initial limitation, that of available storage space, resulted in two different approaches: (1) the use of an ASCII text file format (JCAMP-DX) with options for text-based compression of data and (2) a binary file format (netCDF) where the file structure is inherently more space efficient. Both the Analytical Data Interchange (ANDI) format [31, 32] (built using netCDF) and JCAMP-DX are still in use today, with the JCAMP-DX specification more prevalent because of its text-based format.
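To illustrate the text-based approach, a JCAMP-DX file is a series of labelled data records followed by tabular X,Y data; a minimal sketch with placeholder values might look like:

```
##TITLE= Example IR spectrum (placeholder values)
##JCAMP-DX= 4.24
##DATA TYPE= INFRARED SPECTRUM
##XUNITS= 1/CM
##YUNITS= ABSORBANCE
##XYDATA= (X++(Y..Y))
4000.0 0.012 0.013 0.015
3998.1 0.016 0.018 0.021
##END=
```

The fixed set of `##`-prefixed labels is the “static set of LDRs” referred to below: metadata that does not fit an existing label has no standard home in the format.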
Although the JCAMP-DX file format is widely used for export and sharing of spectral data, the specification has not been updated for over 10 years and as a result has limitations in terms of general metadata support (a static set of LDRs) and technique coverage, and is prone to errors/alteration from unintended uses—which break compatibility with readers. As a result, an effort was started in 2001 to develop an XML format to replace the suite of JCAMP-DX specifications. The Analytical Information Markup Language (AnIML) is an effort to ‘develop a data standard that can be used to store data from any analytical instrument’. This lofty goal has led to a long development process that will be completed in 2016, and result in a formal standard through the American Society for Testing and Materials (ASTM).
How is scientific data stored?
In addition to knowing what scientific data is and how it is represented, it is important to consider how it is stored (and hopefully annotated). Outside of scientific articles, scientific data is published in many databases where the data can be compared with other like data in order to show trends/patterns and afford a higher level of knowledge mining. Commonly, these are implemented using Structured Query Language (SQL) based relational databases such as MySQL, MS SQL Server, or Oracle. These systems store data in tables and link them together via fields that are unique keys. SQL-based software is very good for well-structured information that can be represented in a tree format (rigid schema). However, large sets of research data do not fit rigid data models, as by its very nature scientific data is highly variable in structure.
Advances in the area of big data have attempted to address the non-uniformity in aggregate datasets by using different data models. Recently, there has been a major shift toward graph databases in support of big data applications across a variety of disciplines. Storing and searching large, often heterogeneous, datasets in relational databases creates problems with speed and scale-up. As a result, many companies with large amounts of data have turned to graph databases (one of many NoSQL type databases, where ‘NoSQL’ stands for ‘Not only SQL’) where data is stored as RDF subject-predicate-object ‘triples’. In comparison to relational databases, graph databases are considered schema-less, where the organization of the data is more natural and not defined by a rigid data model. Essentially, any set of RDF subject-predicate-object triples can be thought of as a three-column table in a relational database. Software used to store RDF data is called a triple store—or a quad store if an additional column for a named graph identifier is added. Data in these databases can be searched using the World Wide Web Consortium (W3C) defined SPARQL query language.
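The three-column-table view of triples described above can be sketched in a few lines; the URIs and property names below are illustrative only, and the pattern-matching function stands in for what a SPARQL basic graph pattern does over a real triple store:

```python
# Each triple is one row of a (subject, predicate, object) table.
triples = [
    ("ex:water", "ex:hasProperty", "ex:boilingPoint"),
    ("ex:boilingPoint", "ex:hasValue", "373.15"),
    ("ex:boilingPoint", "ex:hasUnit", "ex:kelvin"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogous to: SELECT ?o WHERE { ex:boilingPoint ex:hasValue ?o }
print(match(s="ex:boilingPoint", p="ex:hasValue"))
```

Real triple stores add indexing, persistence, and the full SPARQL algebra on top of exactly this kind of pattern matching.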
PubChem—chemical, substance, and assay data available for over 91 million compounds. Has a user API for downloading data and RDF querying.
ChemSpider—chemicals, instrument data, and property data for over 56 million compounds. Links to suppliers, literature articles, and patents. Has a limited API and RDF/XML download.
Dortmund Data Bank —curated property data for over 53,000 compounds. Limited set can be searched for free.
Cambridge Crystallographic Data Centre —over 833,000 crystal structures (CIF files). Limited set can be searched for free.
What is the best way to communicate context?
Given that the global aggregation of research data is the goal, an important component that is needed relative to any type of framework is a formal definition of the meaning of the data and metadata (contextual data). As mentioned above, current scientific practices are lacking in the generation/reporting of contextual data, as researchers consider their audience to be only human (where meaning is either implicit or can be inferred). If data/metadata is migrated to computer systems, some mechanism to articulate the meaning of the data and metadata is required, as storing text in a database is just that—text—to a computer. Through the development of the semantic web this can be achieved through the use of an ontology, or a suite of ontologies. Ontologies are the ‘formal explicit description of concepts in a domain of discourse’, or an agreed standard for describing the concepts within a field of study. In the recent move toward the semantic web, the importance of ontologies and their unified representation cannot be overstated. In 2004 (and updated in 2009) the W3C released the Web Ontology Language (OWL) as a standard way to represent ontologies in RDF.
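As a sketch of what an OWL definition looks like in practice (the class, property, and namespace below are invented for illustration, not terms from the SciData ontology), a Turtle serialization might read:

```turtle
# Illustrative only -- not the actual SciData ontology terms.
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <https://example.org/onto#> .

ex:Measurement a owl:Class ;
    rdfs:label "Measurement" ;
    rdfs:comment "A record of an observed value obtained from an instrument." .

ex:hasValue a owl:DatatypeProperty ;
    rdfs:domain ex:Measurement .
```

Once a term has a definition like this, any dataset that references `ex:Measurement` by URI carries that agreed meaning, whatever text labels the dataset itself uses.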
How best to save, organize, archive, and share data?
Even with all the developments mentioned above there are still challenges that have not been solved. In a nutshell, the problem is that the solutions currently available have been built in isolation (by necessity, since limiting the scope makes projects more tractable), have little or no machine-actionable semantic meaning, are too rigid, are not easy to extend (without breaking existing systems), and are tied heavily to their implementation. As a result, although data is available from many sources, it is difficult and time consuming to integrate that data. It is also difficult to search across this heterogeneous pool of information as everyone identifies things differently—there is no broad use of agreed ontological definitions of terms.
A solution to these problems requires abstracting the scenario to a higher level, where the structure of the data is normalized in the broadest sense such that any data/metadata can be placed in that structure. This is the essence of the SDM. It does not try to define the data/metadata needed to accurately record and contextualize the scientific data; rather, it defines its metaframework, and via an ontology its meaning.
In order to reinvent how science saves, searches, and re-uses data, the implemented solution must have a low barrier to adoption by scientists. While the individual researcher may be excited to use a globally searchable dataset(s), they do not want to be burdened with IT-related issues in order to access or implement it. Although the SDM is designed to be format/implementation agnostic, the JSON-LD standard is well suited to representing the data model as it is a simple text-based encoding that can handle the types of data needed for the model and is built to translate to RDF. Examples below that use the SDM are formatted in JSON-LD.
The goal of science is to share research data such that the community can search and use it to advance science. Based on the discussion above, initially one might think that a system for this should be based on a graph database because of its inherent flexibility (anything can be linked to anything) as opposed to relational databases (where data is in tables and linked via unique keys). However, implementing a graph database without any kind of structure would be equivalent to trying to search the current heterogeneous landscape of research data—impossible because nothing is standardized (for example, think about how many ways a scientist could indicate that they used spectrophotometry in their work). What is needed is a hybrid model where a framework for the data and metadata from scientific experiments is used to provide organization (separate from the scientific data/metadata), yet allows flexibility in the types of data put on the framework via creation of discipline specific descriptions and/or ontologies. This is the premise behind the development of the SDM.
Description of the SciData scientific data model
Detailed below is an initial attempt to create a framework upon which to organize scientific data and its metadata. It is by no means a definitive or complete framework and serves only as a starting point to demonstrate the potential of this idea, and act as a catalyst to encourage other scientists to contribute to its development. None of the elements described below are required, other elements can be added (as long as they have a semantic definition and logically fit the scope), and all elements are open to revision (readers are encouraged to provide feedback). Readers are also encouraged to visit the project website  for the current version of the data model.
The generic container for the data and metadata in the model is ‘scidata’. This contains metadata descriptors for the types and formats of data, as well as a list of the properties for the data being reported. What follows are the three main sections that describe the research undertaken: ‘methodology’, ‘system’, and ‘dataset’.
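A skeleton of this container can be sketched as follows; the three section names come from the model, while the keys and values inside them (including the keys shown under ‘dataset’) are placeholder assumptions to be checked against the project’s context files:

```python
import json

# Skeleton of the generic 'scidata' container. Section names
# ('methodology', 'system', 'dataset') come from the model; the
# contents shown are illustrative placeholders only.
scidata = {
    "scidata": {
        "methodology": {"evaluation": "experimental", "aspects": []},
        "system": {
            "discipline": "chemistry",
            "subdiscipline": "analytical chemistry",
            "facets": [],
        },
        "dataset": {"datagroup": [], "dataseries": [], "datapoint": []},
    }
}

print(json.dumps(scidata, indent=2))
```

The empty ‘aspects’ and ‘facets’ lists are the extension points: discipline-specific metadata is attached there without changing the backbone.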
Similar to the ‘Experimental’ section in a research paper, the ‘methodology’ section contains metadata about how laboratory experiments, computer calculations, or theoretical analyses were done to arrive at the data in the packet. The ‘evaluation’ term indicates which of these approaches was used to obtain the data. Each of the different ‘aspects’ of the methodology is reported as a separate section (JSON object), and any/all metadata that are relevant to the methodology of the research can and should be included. Although in the diagram above the aspects of ‘measurement’, ‘procedure’, ‘resource’, ‘calculation’, ‘basisset’, and ‘software’ are shown, these are just examples of aspects that might be reported here. The SDM defines only ‘methodology’, ‘evaluation’ and ‘aspects’ here, with metadata under ‘aspects’ included as needed/available, be it semantically annotated or not. It is envisioned that both general and discipline-specific ‘aspects’ will be developed based on domain-specific agreement on best practices for inclusion of minimal metadata and/or default ontological definitions (as discussed above).
The ‘system’ section contains data that is normally reported in different places in a research article. For any scientific research that is performed there is a system that the research is working with/on. A description of the compound(s), organism(s), or material(s) that the data is about needs to be articulated so that the scope of what the data describes can be characterized. It is also important to be able to report the condition(s) under which the data was recorded, the time-point or timeframe at or over which it was performed, etc. Several example facets are listed in the schema below, but again none are required and others can be included as needed to characterize the data. Just like the ‘methodology’ section’s ‘aspects’, the ‘system’ section’s ‘facets’ is a flexible part of the framework that can hold metadata about one to many ‘facets’ in addition to general descriptive terms about the discipline that the data is nominally from. Again, the data model defines only ‘system’, ‘discipline’, ‘subdiscipline’, and ‘facets’ here, with metadata under ‘facets’ included as needed/available, be it semantically annotated or not.
A ‘datagroup’ section is used where there is a need to aggregate data together based on a higher-level structure, and the ability to nest a ‘datagroup’ inside another ‘datagroup’ makes for hierarchical organization of data that fits the researcher need. It is also important to point out here that the use of ‘datagroup’, ‘dataseries’, and ‘datapoint’ on their own do not provide a semantic meaning of how the data relates to anything else, rather they are a way to compartmentalize data so that it can be related to other things through the use of the ‘source’ (methodology ‘aspect’) and ‘scope’ (system ‘facet’) metadata elements. In reports that contain large datasets, across many experiments, this structure provides the maximum flexibility to report data yet still affords the structure that the data model provides.
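The nesting described above can be sketched as follows; all values are invented, and only the ‘datagroup’/‘dataseries’/‘datapoint’ structure and the ‘source’/‘scope’ pointers come from the model:

```python
import json

# Illustrative nesting of the dataset sub-framework: a 'datagroup'
# containing a nested 'datagroup', whose 'dataseries' holds
# 'datapoint's. 'source' and 'scope' relate the data to a methodology
# 'aspect' and a system 'facet' respectively. Values are placeholders.
dataset = {
    "datagroup": [{
        "@id": "datagroup/1",
        "source": "measurement/1",   # methodology 'aspect'
        "scope": "substance/1",      # system 'facet'
        "datagroup": [{              # nested group gives hierarchy
            "@id": "datagroup/2",
            "dataseries": [{
                "@id": "dataseries/1",
                "datapoint": [
                    {"@id": "datapoint/1", "value": 7.02},
                    {"@id": "datapoint/2", "value": 7.04},
                ],
            }],
        }],
    }]
}

print(json.dumps(dataset, indent=2))
```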
Also, in the dataset sub-framework there are references to ‘parameter’ types for certain elements. The ‘parameter’ object is a generic structure that is used as the basis for metadata in ‘datapoints’, ‘attributes’, ‘conditions’ (under ‘system’), ‘settings’ (within ‘methodology’) and many more. A parameter is a report of a property (with/without its quantity) and its value, and thus can be used to describe a wide variety of data/metadata in the data model. Parameter values may either be numeric (‘value’) or textual (‘textstring’), single values or arrays of values. Numeric values are described by metadata that indicates type (decimal, integer, float, etc.), significant figures, error, and whether the value is an exact number or not (useful for calculations). Text values are described by their type (plain, JSON, Extensible Markup Language (XML), etc.) and language. Implementers of the SDM can use the ‘parameter’ type in the definition of ‘aspects’ or ‘facets’ instead of having to invent their own data structures. This makes implementation easier and more consistent, and other parts of the SDM can be re-used in the same manner.
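Two illustrative ‘parameter’ objects, one numeric and one textual, can sketch the metadata fields named above; the exact key spellings and the unit reference are assumptions and should be checked against the SciData context files:

```python
# Numeric parameter: property plus a 'value' object carrying type,
# significant figures, error, and exactness metadata. Key names and the
# 'qudt:' unit reference are hypothetical.
numeric_param = {
    "quantity": "temperature",
    "property": "Temperature of reaction vessel",
    "value": {
        "number": 298.15,
        "datatype": "decimal",
        "sigfigs": 5,
        "error": 0.05,
        "exact": False,
    },
    "unitref": "qudt:DEG_K",
}

# Textual parameter: 'textstring' with its type and language.
text_param = {
    "property": "Solvent description",
    "textstring": {
        "text": "deionized water, freshly boiled",
        "datatype": "plain",
        "language": "en",
    },
}
```

Because both shapes derive from the same generic ‘parameter’ structure, the same tooling can validate and query conditions, settings, and datapoints alike.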
A scientific data model ontology
Encoding data in the data model
A full discussion of the use of JSON-LD to encode all of the metadata terms described above is beyond the scope of this paper, however, readers interested in viewing/using JSON-LD to explore this approach can go to the project website  where the full set of context files can be accessed along with example data documents. In addition, taking example files  and loading them into the JSON-LD playground  allows readers to see the RDF generated from JSON-LD encoded data.
Application of the data model
The following portions of four examples show the application of the data model to different data needs. Each of these examples can be found in full on the example page of the project website. Additional examples showing the conversion of example data from PubChem, the Dortmund Data Bank, and a CIF file are also included on this page, along with XML Stylesheet Language Transformation (XSLT) files used to convert them.
The pH of a solution
A measured property extracted from the literature
NMR spectrum of a sample of R-(+)-Limonene
Computational chemistry calculation of electronic properties of glucose
With the current interest in big data and the movement toward open science, there is a need for approaches that allow science data to be made available in an open and easily searchable format. This format needs to be flexible enough to accommodate data from scientific experiments of all kinds, and the SciData data model and its implementation in JSON-LD fit that need. Assuming that this, or another, framework is accepted by the scientific community to collect, store, and disseminate semantically annotated scientific data, we can move to the next phase of tool development and data integration to move us toward the utopia of open, accessible, and reliable semantically annotated scientific data.
Thanks to Neil Ostlund and Mirek Sopek for comments on the data model.
The author declares no competing interests.
Availability of data and materials
All data associated with this project is available at https://github.com/stuchalk/scidata.
The author was not funded by any entity for this work.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.