1 Introduction

In recent years, the adjective “open” has been applied to many aspects of scientific knowledge discovery and dissemination, including open-source software and hardware, open access journal articles, massive open online courses, and open data. Applying the term open to these entities emphasizes an intention for them to be accessible—a quality increasingly valued in science. But there is nothing new about the idea of “open science.” Science shed its character of secrecy in sixteenth-century Europe, when the wealthy players in the patronage system that was increasingly supporting scientific inquiry at that time found it in their interest for the scientists they supported to go public with their discoveries, as a form of ornamental display to enhance the patrons’ own reputations and power (David 2004). Around this time, scientific discovery also became so complex and mathematically grounded that the patrons could no longer assess quality on their own; fearing charlatanism, patrons began encouraging systems of peer vetting, in which scientists would open their work to other scientists for the purpose of review and confirmation (David 2004). Clearly, the historically instituted practices of peer review and experiment replication are still followed in the current era, so what is different about science today that has led to the emergence of “open” as a buzzword for so many initiatives?

Whereas in the Renaissance, the new and esoteric language of mathematics drove the requirement of peer review, today, the turn toward computational and data-intensive science in ecohydrology is driving new mechanisms for enabling replication, validation, and extension of research results. At the same time, governments are increasingly demanding that the results from publicly funded research be released into the public domain (e.g., the United States Fair Access to Public Research Act) to return knowledge to the taxpayers who funded its creation. Furthermore, the specter of cataclysmic consequences of climate change is increasing calls for more efficient collaboration around this shared global problem. Just as climate change recognizes no private ownership boundaries, so too, it is argued, must environmental knowledge come to be understood as a shared, open resource to combat this and other threats to the global common good.

This chapter explores what it means for technology to be open and how open technology is transforming the field of ecohydrology today. Starting from the concept of open science, the next section explores the open science ideals of open source, open method, open data, and open hardware and develops a consistent definition of open technology. Based on this definition, a review of open technology applications and development within the field of hydrology is presented, categorizing technologies as truly open or quasi-open and discussing how they are enabling hydrologic research. The chapter then concludes with a discussion of the potential of open technology to advance the field of ecohydrology.

2 Contextualizing Open Technology

2.1 The Philosophy of Open Movements

Open science is driven by at least two distinct philosophical agendas: one centered on scientific efficiency and the other on humanistic principles. In the case of the first, advocates argue that scientific processes can be optimized through openness (Fecher and Friesike 2014). Open data makes way for reanalysis of one’s work by other researchers and permits comparisons between different researchers’ methods for prediction and analysis. Open hardware can be leveraged for new projects, sparing scholars from reinventing the wheel. Open access to scientific literature allows a larger pool of scientists to accurately identify the research frontiers of a given discipline and maximizes the researcher skill and perspective that can be brought to bear on particular problems. By increasing the number of expert eyes on a research problem and reducing redundancy of effort, the overall scientific enterprise advances further.

In the case of the humanistic agenda driving open science, advocates assert that knowledge is fundamental to human development. Some working within this perspective focus on the importance of facilitating direct citizen participation in research efforts (i.e., “citizen science”), arguing that such involvement can demystify the process of scientific knowledge creation, empower citizens as problem-solvers, promote engagement with the natural world, and create appreciation for the scientific endeavor (Bonney et al. 2014; Cohn 2008). Others focus on the importance of citizen access to research results—either in traditional academic form or in adapted formats designed for a nontechnical audience. Arguments for open access to academic outputs include the ethical stance that, because much research is funded by taxpayers, it rightfully belongs to them; when public libraries must purchase subscriptions (to journals and so forth) to make research available to readers, taxpayers effectively “pay twice for research,” once to support its production and once to support public purchase of access to its results (Phelps et al. 2012). They also include the moral stance that knowledge should not be used to exacerbate a world of haves and have-nots. Scientific knowledge is integral to human development and well-being in the twenty-first century, and its power must not be taken lightly (Fecher and Friesike 2014). When it comes to scientific knowledge about the environment—such as in the study of water resources—an additional case can be made based on the object of study’s status as a common good. In economic theory, a common good is shared and public in nature, such that individual ownership rights cannot be assigned, yet one person’s use of the good can reduce another person’s ability to use it. Environmental resources are common goods par excellence, with all needing to partake of them in order to survive, but all (and some more than others) having the ability to affect them in ways that make them less usable to others.

In this chapter, we take the position that, while scientific efficiency is certainly valuable, the moral imperative posed by the reality of water resources as a global common good, and the power of knowledge to enable their better management, are sufficient to champion openness as a goal for the future of hydrologic science. In taking this position, however, we are aware that complete openness in science faces many implementation obstacles. Thus, we invoke openness as a value statement—a principle to work toward—while recognizing that pragmatic compromises will often make sense in pursuit of more open ends.

2.2 The Open-Source Software Movement

The open-source movement, originating in the 1990s, is arguably the most widely known open movement. Its name was selected to distinguish it from the earlier free software movement, which originated in the 1980s. The free software movement seeks to protect the freedom of computer users to run, study, change, and redistribute copies or derivative products of software (Stallman n.d.). The free software movement espouses a strong moral responsibility to end the creation and use of proprietary software through the creation and adoption of alternative free software applications. Free software, as defined by the free software movement, is software that gives the user freedom to distribute copies of the software, access the source code of the software, change the software, or use parts of the software in new free software applications. Free software does not, however, necessitate that the software have no cost (GPL v3). To protect these freedoms, free software is distributed under a free software license, such as the GNU General Public License (GPL, https://www.gnu.org/licenses/gpl.html).

The open-source software movement emerged out of the free software movement due to fundamental differences in philosophy. The underlying philosophy behind free software is that non-free software is a social ill, whereas the philosophy behind open-source software development is that it generates better solutions by involving a broader developer base than non-open-source software. In practice, this means that software developed as open-source software may include components or dependencies that are proprietary software. Thus, while free software is accessible to anyone, open-source software may be accessible only to those who possess the proprietary software necessary to run it. For example, a plugin developed for a proprietary software application, such as a geographic information system (GIS), could be developed as an open-source software project, whereas such a project could not be developed as a free software project because of its dependence on the proprietary GIS.

The goals of the open-source software movement are to motivate and support a community of like-minded software developers to collaboratively create and improve openly available source code and thereby produce better software solutions to computational problems. To accomplish these goals, the open-source software movement has established a set of commonly accepted practices surrounding the sharing, use, and distribution of software source code that is defined by the Open Source Definition (OSD) and maintained by the Open Source Initiative (OSI). The fundamental tenets of open-source software are that software developed under this paradigm be redistributable without discrimination and without charge or royalty, and that derivative works may be produced (Open Source Initiative 2018). To protect the freedoms of open-source software users and the rights of open-source software developers, open-source software is distributed under an open-source license such as the Apache License (http://www.apache.org/licenses/) or the Mozilla Public License (MPL, https://www.mozilla.org/en-US/MPL/) or under a free software license such as the GPL.

Despite the philosophical differences between the free and open-source software communities, which revolve around the motivation for sharing source code and derivative software products, the software products developed as part of the open-source software and free software movements are very similar in terms of their ability to be run, studied, modified, and redistributed, as well as in the user freedoms and developer rights that are protected by the various open or free software licenses. In fact, all free software can be considered open-source software, while most, but not all, open-source software qualifies as free. To describe software that has the qualities of both free and open source, regardless of the underlying philosophy of the developer, the acronym FOSS (free and open-source software) is used. Because such software is free, it must adhere to the more restrictive philosophy of free software, which eschews non-free software dependencies. Thus, FOSS provides access to the software to run, study, modify, and distribute copies or derivative products in accordance with FOSS licenses. Within FOSS communities, there is a wide array of licenses designed to protect authors and enshrine legal frameworks to enforce specific principles. Table 1.1 provides a listing of several commonly used licenses, their abbreviations, and a brief description of their unique attributes. Some FOSS licenses, such as the GPL (https://www.gnu.org/licenses/gpl.html), are designed such that all subsequent work built upon the original licensed work must also inherit the same licensing terms (sometimes derogatorily termed a “viral” license). This style of licensing differs from the more permissive licenses, such as the MPL mentioned previously, which permit derivative works to implement restrictions (including patents). Fundamentally, it is a matter of the developer’s preference as to whether an open-source software project is developed as FOSS or simply open source. It is, however, necessary to be aware of the differences between the two when leveraging previously published work.
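As a simple illustration of how such licensing choices are declared in practice, the sketch below shows a source file header carrying an SPDX license identifier and a short GPL notice; the file name, project, and function are hypothetical and are included only to make the example self-contained.

```python
# SPDX-License-Identifier: GPL-3.0-or-later
#
# stream_temperature_qc.py (hypothetical example file)
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.

def celsius_to_kelvin(temp_c: float) -> float:
    """Trivial placeholder function; the point of the example is the header."""
    return temp_c + 273.15
```

Choosing a more permissive license, such as the MPL or the Apache License, would change only the identifier and the accompanying notice; the code itself would remain unchanged.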

Table 1.1 Commonly used free and open licenses for software, hardware, and data

Notable FOSS projects relevant to ecohydrology include Linux, R, QGIS, GRASS, MapServer, OpenLayers, GDAL/OGR, PostgreSQL, and Hadoop. The Linux kernel is a UNIX-like operating system kernel that is maintained by the Linux Foundation and distributed under the GPL. R is a statistical computing environment that is maintained by the R Core Team and distributed under the GPL. QGIS and GRASS are desktop GISs that are maintained by the Open Source Geospatial Foundation (OSGeo) and released under the GPL. MapServer is a server-side GIS that is maintained by OSGeo and distributed under the X/MIT license. OpenLayers is a JavaScript library for displaying maps in Web browsers; it is maintained by OSGeo and distributed under the Berkeley Software Distribution (BSD) license. The geospatial data abstraction library (GDAL) and OGR simple features library are software libraries for the manipulation of vector and raster spatial data that are maintained by OSGeo and distributed under the X/MIT license. PostgreSQL is a database management system released under the PostgreSQL License. Hadoop is a suite of utilities that enable the creation of distributed computing architectures; it is maintained by the Apache Software Foundation and released under the Apache License.
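To give a sense of how one of these FOSS libraries can be used in ecohydrologic work, the minimal sketch below reads a digital elevation model with GDAL's Python bindings and reports basic statistics; the input file name is hypothetical, and the snippet assumes the GDAL Python package (osgeo) and NumPy are installed.

```python
# A minimal sketch using GDAL's Python bindings; the file name is hypothetical.
from osgeo import gdal
import numpy as np

gdal.UseExceptions()  # raise Python exceptions instead of failing silently

dataset = gdal.Open("watershed_dem.tif")       # open the raster dataset
band = dataset.GetRasterBand(1)                # first (and only) band
elevation = band.ReadAsArray().astype(float)   # load values into a NumPy array

nodata = band.GetNoDataValue()
if nodata is not None:
    elevation[elevation == nodata] = np.nan    # mask nodata cells

print("Grid size:", elevation.shape)
print("Pixel size:", dataset.GetGeoTransform()[1])
print("Mean elevation (m):", np.nanmean(elevation))
```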

2.3 Open-Source Hardware

The open-source hardware movement builds upon the principles of open-source software, extending them to the material world. It leverages developmental approaches similar to those utilized in the open-source software community to create, share, modify, and reuse designs and processes related to objects ranging from electronics to industrial machinery. The movement has roots in the amateur radio and homebrew computing communities common in the 1970s and 1980s. These groups of enthusiasts shared designs and expertise to facilitate development of hardware and propagate understanding. The definition of open-source hardware is curated by the Open Source Hardware Association (OSHWA) and is based upon the OSD.

The open-source hardware movement is built upon sharing the knowledge necessary for the design and construction of hardware. The movement’s motivations mirror those of the open-source software movement, wherein it both facilitates and encourages the creation, use, distribution, and/or modification of community-developed resources. Differences between the two arise from the material nature of hardware. With open-source hardware, the expectation is that all involved software, as well as the original design files, instructive documentation, and component lists necessary for reproduction, are included with the project’s public release and that each possesses a compliant open license (Ackermann 2009; Open Source Hardware Association n.d.). Open-source hardware is also expected (though not necessarily required) to be created from readily available, standardized components (i.e., common fasteners, mass-produced integrated circuits), which ensures that open-source hardware users can replicate or adapt designs in an affordable manner. While comparable to open-source software in many ways, open-source hardware is incompatible with the FOSS movement due to the use of non-free and proprietary components and designs. Open-source software has the luxury of reimplementing necessary proprietary software, thanks to the inherent portability and accessibility of computer code. The legal, logistical, and financial barriers to reimplementing necessary technologies in the physical world afford the open-source hardware movement much less flexibility when creating 100% open solutions.

The goal of the open-source hardware movement is to develop products, platforms, tools, and devices that facilitate freedom to control technology, share knowledge, and openly exchange designs (Open Source Hardware Association, n.d.). The open-source hardware movement seeks to enable communities of non-expert end users to create and adapt technology with which they might not otherwise feel capable of engaging. An intended byproduct of the movement is to propagate understanding of how common hardware functions. The long-term objective of the open-source hardware movement is to improve quality of life and the environment globally, by means of the innovation and dissemination enabled by the open-source hardware philosophy. This philosophy enables the creation and recreation of universally accessible devices that are integral to modern society.

Given the hobbyist roots of the open-source hardware movement, as well as its relatively recent mainstream emergence, there are few widely known projects at this time. Among the most recognizable are the Arduino and RepRap projects. Arduino is a brand associated with a range of microcontroller-powered electronic boards. Initially envisioned as a tool for designers with no prior electronics experience, it has since become a mainstay of science, technology, engineering, and math (STEM) education, Internet of Things (IoT) project development, low-cost environmental sensing platforms (Fisher and Gould 2012; Prescott et al. 2016), and do-it-yourself lab equipment (Koenka et al. 2014; Pearce 2012). The RepRap project is a community developed around the creation of affordable and accessible 3D printers capable of printing many of their own components.
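As a brief illustration of how such boards are commonly used for low-cost environmental sensing, the sketch below logs comma-separated readings streamed by an Arduino-style datalogger over a serial connection using the pySerial library; the port name, baud rate, and message format are illustrative assumptions rather than a specific published design.

```python
# Minimal sketch of logging readings from an Arduino-style board with pySerial.
# The port name, baud rate, and "timestamp,temperature,humidity" message format
# are illustrative assumptions, not a particular published datalogger design.
import csv
import serial  # pySerial

PORT = "/dev/ttyUSB0"   # hypothetical serial port
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=2) as ser, \
        open("sensor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_ms", "temperature_c", "relative_humidity_pct"])
    for _ in range(100):                  # capture 100 samples, then stop
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                      # skip timeouts and empty reads
        writer.writerow(line.split(","))
```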

2.4 Open Data

The open data movement originated in the mid-1990s, growing from the tradition of knowledge sharing in science. The term “open data” was first applied to represent the notion of sharing data in the geosciences to facilitate a holistic understanding of the biosphere which “transcends borders” (National Research Council 1995). The modern understanding of open data emerged in 2007, when leaders from both the free software and open-source software movements established the principles of open access and public ownership, which permitted open data to expand beyond the scientific community (Chignard 2013). Their work and advocacy represented the first push toward open public data (Chignard 2013). Building upon the base principles set forth in 2007, several organizations have asserted competing definitions of open data. For this discussion, the Open Definition (www.opendefinition.org), which is maintained by Open Knowledge International (OKI), will be adopted to define open data. This definition was selected to guide this discussion due to its emphasis on the freedoms associated with open data, which, in some ways, parallel FOSS, and due to its adoption by the Government of Canada (the jurisdiction in which the authors of this chapter conduct research) for its open government initiative (Government of Canada 2017).

The core philosophy of the modern open data movement can be traced back to Mertonian philosophies of communalism (and, therefore, communal knowledge sharing) as an inherent character of the scientific pursuit (Merton 1973). The movement seeks to establish and maintain a commons in which data can be freely accessed and utilized. Unlike “free” in the context of free software, “free” in the context of open data implies both freedom of use (a trait of free software) and freedom from cost (not necessarily a trait of free software). The open data movement is motivated by two primary concerns. The first is the need to release data, particularly government data, as a means of democratizing knowledge and enabling stakeholder participation. Several governments and governing bodies have made commitments to open data in recent years, including the United States, Canada, and the European Union. The second is the realization that, for further progress to be made in science and for it to benefit the greatest number of people, free access to usable data is critical. With these motivations in mind, open data is meant to be an invitation for any individual, academic or otherwise, to investigate and contribute to understanding the massive volumes of information accumulated in the Internet age.

The goal of the open data movement is to have all relevant information stored in a manner that allows it to be easily and freely retrieved in a nonproprietary, open, machine-readable format. The open data movement expects that, given time, global access to truly open data will enable better science (Heidorn 2008; Molloy 2011). The objective of open data, however, cannot yet be fully realized due to costs associated with data storage and transmission. Thus, the Open Definition adopts a pragmatic stance by stating that a reasonable one-time reproduction fee may be charged. For example, a data consumer may be asked to pay for the services of an archivist when requested data must be retrieved from a physical, non-networked archive.
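In practical terms, data published in this way can be consumed with a few lines of general-purpose code. The sketch below reads a machine-readable CSV of daily streamflow directly from a Web-accessible location with pandas; the URL and column names are hypothetical placeholders rather than a specific portal's interface.

```python
# Minimal sketch of consuming an open, machine-readable dataset with pandas.
# The URL and column names are hypothetical placeholders.
import pandas as pd

url = "https://data.example.org/open/streamflow_daily.csv"  # hypothetical
flows = pd.read_csv(url, parse_dates=["date"])

# Summarize mean daily discharge by year as a quick sanity check.
annual_mean = flows.groupby(flows["date"].dt.year)["discharge_m3s"].mean()
print(annual_mean.head())
```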

Creators of open data are encouraged to release their work under an established open data license, such as a Creative Commons or Open Data Commons license, when sharing it with the public (Open Data Handbook n.d.). Often, data creators are inclined to release their work unlicensed, believing that it will thereby be accessible to all. Unlicensed release is anathema to open data practices, however, as it is likely to cause confusion over who owns the data, especially as time passes after publication. An analogy would be to place a stack of money in a public space with the expectation that everyone will be willing and able to take from it. Without indication that it is free for everyone, many will avoid it, fearing that taking it would be stealing, while others may take it and claim that it was theirs all along. Placing a sign with the cash stating that it is free for all to take and that it is a public good removes the confusion that the unexplained stack of money would cause. This illustrates the value of an open license: it does not alter the spirit of the data shared, but it does enshrine its availability for the greatest number of users.

Within ecohydrology research, leveraging and/or creating open data is not uncommon. Notable examples of organizations working to curate open datasets for the discipline include the Consortium of Universities for the Advancement of Hydrologic Sciences, Inc. (CUAHSI) and Data Observation Network for Earth (DataONE), as well as government-sponsored data providers such as Natural Resources Canada. CUAHSI encourages the publication of data through its Web portal, making it searchable for other interested users. DataONE hosts a series of coordinating server nodes, which index several distributed data servers, making them searchable and allowing the data to reach end users more easily. Natural Resources Canada offers data related to geomorphology, geology, and hydrology which has been gathered through publicly funded research and institutions. Outside of ecohydrology, exemplars of open data in action include the Human Genome Project and Statistics Canada.

Two important ecohydrology data portals that do not meet our definition of open also deserve mention: the FLUXNET and Long Term Ecological Research (LTER) Network portals. Both of these portals offer a two-tier licensing system. The tier 1 FLUXNET data are distributed under a custom fair use license, which requires that the authors of the data be notified of who is using the data and for what purposes before the data are accessed, and that an appropriate citation or acknowledgment be made in all published work that uses the data. The LTER tier 1 data, on the other hand, are distributed under a Creative Commons Attribution (CC BY) license. Giving credit to data owners through appropriate citations to datasets is an important tenet of professional research ethics that is recognized by the open technology movement and enshrined in open data licenses, such as those applied to the FLUXNET and LTER datasets. However, there is an important difference in how these two datasets are served to potential users. The FLUXNET dataset license requires that potential users notify data owners before the data can be downloaded and implements this rule by requiring all potential users to create a FLUXNET account, which is used to connect the potential user to the data owner and communicate the intended use of the data. On the other hand, tier 1 LTER data can be downloaded anonymously by clicking a weblink to the datasets. It could be argued that the requirement to notify data owners before using the data fetters access, and thus that the FLUXNET dataset does not meet our definition of open data. Furthermore, tier 2 data in both portals are restricted for use only by select individuals, and tier 2 status supersedes tier 1 status, so data requests that include both tier 1 and tier 2 data (even if they are from the same instrument at the same location) are treated as tier 2. For these reasons, these portals do not meet our definition of open data portals.

2.5 What Is Open Technology?

In this chapter, technology is defined broadly as a knowledge-based output that can be put to use toward a specific purpose. This definition encompasses such things as computing hardware, software, electronic data repositories (particularly those that are Web-enabled), and Web services. Thus, open technology can be seen as a superset of the type of products developed through the free or open-source software, open hardware, and open data movements.

2.6 Trends in Open Technology in Ecohydrology

Figure 1.1 illustrates the growth of “open technology” in the scientific literature since the year 2000. The figure shows two distinct periods during which open technology research accelerated: the first between 2002 and 2009, followed by a brief plateau ending in 2012, and the second between 2012 and 2016.

Fig. 1.1 Open-related sources as a percentage of total returned results

Data illustrated in Fig. 1.1 were gathered using a keyword search of the Web of Science index, which includes the Science Citation Index (SCI-EXPANDED), Conference Proceedings Citation Index–Science (CPCI-S), and the Emerging Sources Citation Index (ESCI), from 2000 to 2018. Searches were constrained by the wildcard term “*hydrolog*” plus one of four key phrases: “open source,” “open hardware,” “open source hardware,” and “open data.” The four key phrases collectively contribute to the total open technology results gathered. The wildcard was used to capture results from the maximum number of hydrology-related topics. A secondary search was completed, which was not constrained to the hydrologic sciences, using the same four key phrases and date range. This secondary search determined the trend of open technology across the entirety of the selected Web of Science indices. The results from each search were grouped as “Hydrologic” and “All,” respectively, for the purpose of analysis. Each data point represents open technology results as a percentage of the total results returned for the same period without the “open” keyword constraints (i.e., total papers indexed).
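For clarity, the percentage plotted for each group and year is simply the number of results matching the open technology phrases divided by the total number of indexed results without the “open” constraint. A minimal sketch of that calculation is shown below; the counts in the example call are illustrative only, not the actual Web of Science totals.

```python
# Sketch of the percentage calculation behind Fig. 1.1.
# The example counts below are illustrative, not the actual Web of Science data.
def open_share(open_counts: dict, total_counts: dict) -> dict:
    """Return open-technology results as a percentage of total indexed results."""
    return {
        year: 100.0 * open_counts.get(year, 0) / total_counts[year]
        for year in total_counts
    }

# Hypothetical yearly counts for the "Hydrologic" group.
open_hits = {2014: 42, 2015: 55, 2016: 61}
all_hits = {2014: 21000, 2015: 22500, 2016: 23800}
print(open_share(open_hits, all_hits))
```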

Figure 1.1 indicates a clear overall upward trend in the adoption of open technology in the literature. As a percentage of the returned results, hydrology appears to be slightly ahead of the curve when compared to the entirety of the searched indices. However, due to the comparatively small number of results matching open technology terms within the hydrology literature, as well as the lack of additional constraints on the broad Web of Science search, it is not possible to state that open technology is indeed more common within the discipline.

Figure 1.2 illustrates the distribution of the various components of open technology over the period 2000–2018 for both groups of search results. Figure 1.2a, b shows similar distributions, with open source being the most dominant paradigm present in the open technology literature. This dominance is a product of both the relative maturity of open source compared to the other open movements and the increasing adoption of open-source software packages as de facto tools across a variety of disciplines. Much of the open data and open hardware research illustrated in these figures has emerged in the last 5 years.

Fig. 1.2 (a) The share of each aspect of open technology within the open results from the hydrology-relevant literature; (b) the share of each aspect of open technology within the open results from all open literature

The outlook for open technology over the next decade is uncertain and will depend on adoption of open ideals by developers, adoption of open technology by users, and advances in computing and network technology. In the case of open-source software, the largest component of open technology, the advances that researchers make are being drawn into, and developed within, large existing open-source projects, to increase their reach and capability. Examples of this include the development of plugins that interface common modeling software (such as SWAT) with open-source geographic information systems. This will inevitably lead to a slow reduction in the number of open-source articles published over time, as there is a finite number of plugins that need to be created. Open data is likely to increase, as the need for large, well-documented, and publicly available data becomes more pressing. Its growth will also be aided by mandates from public institutions, such as the US National Science Foundation, requiring that all research funded with public money be publicly accessible. Open hardware is slowly gaining momentum in mainstream research. In the near future, we expect it will be a very minor contributor to open technology within ecohydrology. Many of the applications of open hardware that are relevant to the discipline are not demonstrated within the literature, but instead are found in spaces of amateur or citizen science or in the gray literature; however, this may change as hydrology journals offer increasing opportunities to publish the results of open technology development, for example, as “methods papers.”

3 Open Technology in Review

Open technology has been applied to a variety of topics of interest in the field of hydrology. A literature review was completed to highlight many of the mainstream applications exhibiting the ideals and promise of open technology. The results of this literature review have been divided into two categories based on the intended target of the research: (1) water resource management and (2) hydrologic system dynamics and interactions. The first category captures research concerned with the quantification and/or conservation of water resources, as well as systems and standards for data and data sharing (a key component of management). The second category includes research concerned with topics ranging from rainfall interception to runoff mechanics; it is intended to capture all aspects of how water moves through and interacts with the built and/or natural environment. Each category is further divided to outline projects by their focus: software and computational modeling, hardware, data, and services or platforms.

All articles were reviewed from the perspective of ecohydrology. Although the search cast a wide net in order to highlight as many projects as possible that leverage open technology, each of the selected papers is pertinent to the discipline. Where relevant, notes are added regarding the current state of projects (whether they are still accessible/supported). Comprehensive assessments of the level of openness of individual projects have not been included here. For more information on the difference between a truly open and a quasi-open project, please refer to the discussion section. It is worth noting that the literature review captured very few research projects that leveraged open governmental data, such as that provided by Natural Resources Canada.

3.1 Water Resource Management

3.1.1 Software and Computational Modeling

Many projects concerned with water resource management have grown around the need to enhance the capabilities of GISs through extensions and integration. Researchers have pursued different avenues for enhancing the analytical abilities of these software packages. The most common is the adaptation of open hydrologic models (Pontes et al. 2017; George and Leon 2008; van Griensven et al. 2006) to a particular GIS suite, such as open-source software like QGIS (Pontes et al. 2017; Dile et al. 2016) and MapWindow (Rahman et al. 2017; George and Leon 2008), or proprietary software like ArcGIS (Holmes et al. 2017) and ArcView (van Griensven et al. 2006). The openness of the latter two projects, which are based on proprietary software, is questionable, as they rely upon software that is not freely available and modifiable. Another adaptation involving GIS software is to better integrate it into workflows, either by demonstrating the robustness of a suite’s capabilities (Rudiyanto et al. 2018) or by leveraging it as a tool for simulation and prediction (Holmes et al. 2017; Rahman et al. 2017; Sanhouse-Garcia et al. 2017; Thorp and Bronson 2013).

As the breadth and depth of water resource management research grow and the capability of both data collection methods and computing hardware increases, there is a need for open software that can adapt to meet new demands. To this end, researchers have undertaken projects focused on simulation and model building that seek to optimize and support processing and decision-making. For example, Hofierka et al. (2017) utilized the open-source GRASS GIS software to build an open, fully parallelized processing pipeline for large-scale geodatasets. The parallelization method they demonstrated allowed for a nearly threefold increase in the processing speed of large-scale datasets. In another example, Kneis (2015) created a software solution that serves as a generic basis upon which a variety of catchment models may be constructed.
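The general idea behind such parallelization, processing independent tiles of a large raster concurrently and then recombining the results, can be sketched in a few lines. The example below uses Python's multiprocessing module on a synthetic array; it illustrates the concept only and is not the GRASS-based implementation of Hofierka et al. (2017).

```python
# Conceptual sketch of tile-based parallel raster processing; a synthetic
# illustration, not the GRASS GIS pipeline of Hofierka et al. (2017).
from multiprocessing import Pool
import numpy as np

def process_tile(tile: np.ndarray) -> np.ndarray:
    """Placeholder per-tile computation (e.g., smoothing or a terrain index)."""
    return tile * 2.0

def parallel_process(raster: np.ndarray, n_tiles: int = 4) -> np.ndarray:
    tiles = np.array_split(raster, n_tiles, axis=0)   # split into row bands
    with Pool(processes=n_tiles) as pool:
        results = pool.map(process_tile, tiles)       # process tiles concurrently
    return np.vstack(results)                         # reassemble the raster

if __name__ == "__main__":
    dem = np.random.rand(4000, 4000)                  # synthetic "large" raster
    print(parallel_process(dem).shape)
```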

3.1.2 Hardware

Open hardware is the least leveraged portion of open technology, particularly in the segment of water resource management. That said, there are still standout examples of what is possible when open hardware is applied to research. Bartos et al. (2018) used open hardware to create a system of stormwater monitors. The goal of the project was to create a network of sensors capable of informing flood control and stormwater management infrastructure in real time, to maintain a balance of optimal retention and discharge. Prescott et al. (2016) created a system called HydroSense, which was built on open software, hardware, and standards and served as a universal hub/datalogger for sensors. Their system employed a standardized interface, which allowed it to be used with a range of proprietary and open sensors. This allows the system to be flexible and adaptable to many different applications. In both projects, however, the researchers created customized circuit boards to accommodate the designs of their projects, potentially limiting the portability of their core designs to new research, as these circuit boards are not commercially available.

3.1.3 Data

Much of the concern surrounding data in research involves finding methods to deliver and maintain vital datasets of varying sizes (Flint et al. 2017). In most cases, vital data is scattered and poorly documented, making research arduous (Heidorn 2008). As a result, many initiatives now exist to take openly available datasets and collate them into singular, accessible, and well-documented databases (Flint et al. 2017; Horsburgh et al. 2016; Soranno et al. 2015). These databases are designed to serve multiple stakeholders across disciplines. In the case of Flint et al. (2017), their iUtah database included hydrologic data as well as social data regarding citizen attitudes toward water resources. Related to the iUtah project is the work of Jones et al. (2015), which discusses the necessary workflows and procedures that allow for the creation of a robust and well-documented database.

Beyond databases, there is also the issue of data standards. To ensure that information is presented in a manner that is widely adopted, machine readability and platform independence are central to open data sharing. To this end, Swain et al. (2015) undertook a review of the various data standards maintained by the Open Geospatial Consortium (OGC). Their work specifically addressed the use of customized versions of OGC standards across a number of popular Web applications. The authors noted that there was more value in adhering to the general implementation of the data standards, as it increased portability. Relevant to this is the discussion surrounding the use of the WaterML standard for hydrologic data (Challco et al. 2017; Yu et al. 2015). Work in this area is iterative and serves as a check-and-balance between what the standards offer and what they necessarily sacrifice in the interest of universality.
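One practical benefit of adhering to the general OGC specifications is that a single, standard request pattern works against any compliant server. The sketch below issues a standard WMS GetCapabilities request with the requests library; the server URL is a hypothetical placeholder, while the query parameters follow the WMS specification.

```python
# Minimal sketch of querying an OGC-compliant Web Map Service (WMS).
# The server URL is a hypothetical placeholder; the query parameters follow
# the standard WMS GetCapabilities request defined by the OGC.
import requests

base_url = "https://gis.example.org/wms"  # hypothetical OGC-compliant endpoint
params = {
    "service": "WMS",
    "request": "GetCapabilities",
    "version": "1.3.0",
}
response = requests.get(base_url, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])  # XML capabilities document describing the service
```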

3.1.4 Platforms and Services

There is a need within the water resource management community for a means by which stakeholders can interface with and understand the outcomes and results of data collection, modeling, and research. To this end, a number of open technology projects have been undertaken that seek to streamline the interaction between end users and knowledge producers. Web applications built upon open-source software, open data standards, and open technology serve as data portals (Horsburgh et al. 2016; Soranno et al. 2015; Steiner et al. 2009) and decision support systems (Casadei et al. 2018; Sommerlot et al. 2016). Services may also include Web interfaces that serve as GISs capable of outputting information specific to a topic (Siles et al. 2018; Hill et al. 2011). Swain et al. (2015) created a tool known as Tethys, which was designed to be a simple-to-use package for the creation and distribution of geographic Web applications.

3.2 Hydrologic Systems Dynamics and Interactions

3.2.1 Software and Computational Modeling

Modeling is the primary application of open-source code in the area of hydrologic systems dynamics and interactions. Research in this area is largely concerned with modeling of watershed-scale inputs and outputs (Brown et al. 2018; Tesfatsion et al. 2017; Hartanto et al. 2017; Sommerlot et al. 2016; Lampert and Wu 2015; López-Vicente et al. 2014; Zhang et al. 2011). Noteworthy among these projects is a reduced emphasis on building upon or implementing existing modeling software. Tesfatsion et al. (2017) took an open-source modeling tool from the OpenDanubia project and stripped it of a number of elements, including its graphical user interface, in order to expose model functions as an application programming interface (API), and thus make it more generally applicable. Also of interest within this area of research are the effects of hydrologic processes on terrestrial systems, such as shorelines (McCall et al. 2014), and their impacts on geomorphological processes, such as landslides (Strauch et al. 2017).
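The design pattern described here, exposing model logic as callable functions rather than binding it to a graphical interface, can be illustrated with a toy example. The simple bucket water-balance model below is a deliberately simplified stand-in and is not code from OpenDanubia or WACCShed.

```python
# Toy illustration of exposing model logic as a callable API rather than a GUI.
# The bucket water-balance model is a deliberately simplified stand-in.
from dataclasses import dataclass

@dataclass
class BucketState:
    storage_mm: float           # current storage in the "bucket"
    capacity_mm: float = 150.0  # maximum storage before runoff occurs

def step(state: BucketState, precip_mm: float, et_mm: float) -> tuple[BucketState, float]:
    """Advance the model one time step and return (new_state, runoff_mm)."""
    storage = max(state.storage_mm + precip_mm - et_mm, 0.0)
    runoff = max(storage - state.capacity_mm, 0.0)
    storage -= runoff
    return BucketState(storage, state.capacity_mm), runoff

# Because the model is a plain function, any driver (script, notebook, or
# Web service) can call it directly:
state = BucketState(storage_mm=100.0)
for p, et in [(12.0, 3.0), (0.0, 4.5), (30.0, 2.0)]:
    state, q = step(state, p, et)
    print(f"storage={state.storage_mm:.1f} mm, runoff={q:.1f} mm")
```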

3.2.2 Hardware

Within the space of hydrologic interactions, open hardware has a wide range of applications. Examples range from soil temperature monitoring (Bitella et al. 2014) and stemflow measurement (Turner et al. 2019) to open satellite remote sensing via CubeSat (McCabe et al. 2017), which offers a lower cost, highly customizable satellite remote sensing platform. Lee et al. (2016) offer a more common application of open hardware in their research. They leveraged open-source hardware to log data and control aspects of an experiment on brackish water wetlands.

3.2.3 Data

Open data for the study of hydrologic systems dynamics and interactions is vital. In order for deeper linkages to be made between the multitude of physical processes that relate to water, it is necessary to pull data from many different possible sources (Hill et al. 2014). To this end, many researchers have leveraged open datasets, as well as preserved their own data for others to use. As an example of the latter point, Strauch et al. (2017), in their project on landslide probability, ensured that the data they produced were appropriately documented and shared by depositing them in CUAHSI’s HydroShare. They also devoted a portion of the final publication to explaining how and where to find the datasets and software used in their work. Other applications of open data have seen researchers attempt to enhance ground measurements using small datasets gathered from nonprofit organizations and citizen science groups, in addition to public and open government data (Flint et al. 2017; Niemi et al. 2017; Soranno et al. 2015). These fused data resources, combined with openly available earth observation resources, offer an exciting frontier for understanding the impact of hydrologic processes on the environment (McCabe et al. 2017).

3.2.4 Platforms and Services

Platforms and services in this area are typically developed with the intent that they be utilized by specific, and often insular, research or professional communities. Software such as OpenGeoSys (Kolditz et al. 2012), which is a tool for developing simulations of water flow through porous media, exemplifies the focused scope and technical demands of tools in this area of research. Tesfatsion et al. (2017) similarly developed a platform known as WACCShed, which was targeted at the development of watershed-scale models and follows the same pattern of technicality and precision in one target area of inquiry. Orfeo, a toolbox program designed as an accompaniment to the commercial European Pleiades Earth-observing satellites, is an open-source remote sensing platform capable of handling data from many different sensors (Tinel et al. 2012). Orfeo is targeted at a professional and expert community and serves as an intermediary tool for working with satellite imagery.

4 Discussion

Open has become a widely used descriptor of research and technology development in recent years. Some projects fully adhere to the definition of openness described in this chapter, while others, for reasons of pragmatism, do not; the latter can thus better be described as quasi-open. As open technology continues to grow, it will be necessary for new projects to integrate with and build upon the existing body of open projects. If the project a researcher is building upon is quasi-open, then there may be unforeseen barriers to development and dissemination. Such barriers could be as basic as a lack of metadata for a critical dataset or as serious as a copyright infringement.

Water is a basic human right (United Nations General Assembly 2010) and a common good. In the face of a changing climate that is predicted to increase water stress globally (Vörösmarty et al. 2000), water researchers have a moral obligation to adopt open principles and practices. Mainstream media outlets have started to raise the issue of morality in the fight against climate change (CBC Radio 2018). Ecohydrology considers the interaction between hydrologic and ecologic systems and is thus in a prime position to benefit from openness in terms of research efficiency while at the same time enhancing social good through the democratization of research outputs.

4.1 Truly Open Versus Quasi-Open Projects

Truly open projects are those which are released and maintained according to the principles of a widely accepted open organization’s open definition (e.g., OSI, OSHWA, or OKI). These projects are released with as few barriers to their advancement as possible. An example of a truly open project is the R Project for Statistical Computing. Created as a GNU-compliant implementation of the S programming language, R has been widely adopted. The Comprehensive R Archive Network (CRAN), at the time of writing, includes over 12,000 hosted packages, each of which is FOSS and each of which extends the core capabilities of the language, allowing it to adapt to and evolve with changing research needs. R stands as an example of a platform that fosters innovation, due to its mature nature, open community, and transparent source.

Quasi-open projects are those which are released to the public but are not maintained, are not readily available, or rely upon some form of closed or constrained resource. Quasi-open projects do not offer a simple means of open collaboration or widespread adoption. Examples of quasi-open projects include open-source extensions to proprietary software packages. While there is value in these extensions, they cannot be easily tested or implemented by the global scientific community (including academics, governments, and citizens) without a researcher first acquiring the proprietary software that they seek to extend. As an example, Holmes et al. (2017) created a package for ArcGIS that was capable of identifying natural water storage basins. While the code for the package is open, it relies upon an expensive proprietary software package, which is inaccessible to many researchers. Quasi-open projects may also treat openness as an afterthought. In this case, the creators of the project may have decided late in project development that the data, code, or designs should be open and then released them in an “as is, where is” state that lacks appropriate documentation, accessibility, and/or licensing.

True openness should be the ultimate goal of all research projects, but the reality of the current research culture means that it rarely is. The result is a large number of quasi-open projects within the open technology arena that fail to achieve true openness for one or more of several reasons. First, in cases where the decision to make a project open comes late in the development process, documentation of the project may be insufficient or absent. Second, researchers will sometimes opt not to make their project open at all, due to the time cost of this documentation, in light of productivity metrics that do not value such contributions (Heidorn 2008). Third, researchers may choose to make their project only selectively open, by sharing the project information and products only in instances in which they expect reciprocal benefit (Campbell et al. 2002; Haeussler 2011). Fourth, journals that require data submission as supplementary information may hold copyrights to these data, thus preventing the original data collector from sharing the data openly (Fecher and Friesike 2014). Finally, researchers may choose to withhold their data from the public domain because data collection without accompanying analysis has no currency in academic career advancement; they may therefore fear that, if others had unfettered access to their data, they would have insufficient advantage to publish outcomes from their data collection and receive credit.

4.2 Successes

There are a number of individual projects which stand out as successes within open technology when viewed from the perspective of ecohydrology. Modeling tools such as the Soil and Water Assessment Tool (SWAT) and the Hydrological Simulation Program (HSP) have been widely adopted and heavily integrated into the research of the community at large. Academic communities such as CUAHSI have demonstrated how open technology can be integrated to form a feature-rich and robust data repository.

As shown in Fig. 1.2 and discussed previously, open-source software comprises the most significant portion of the broader open technology movement. The open-source software movement has become a powerful force in the digital age. Since the mid-2000s, the FOSS community has gained mainstream recognition, and the projects created there have begun to rival the closed-source, proprietary, and expensive software on the market (Haefliger et al. 2008). The FOSS movement has now redeveloped, implemented, and optimized a number of core computing technologies, placing them firmly in the public sphere. The open exposure of these software packages allows researchers to delve into the source code to understand how each portion of the software operates and potentially improve upon it. As an example, it is possible to modify the source code of an open-source GIS, such as GRASS, and optimize the way it processes data to attain better performance (Hofierka et al. 2017).

The same principles which have led to the success of the open-source software movement are now carrying over into other open movements. Open-source hardware has seen a number of proponents leveraging simple and commonly available technology (some of which may be proprietary but still highly accessible) in order to reimagine the basic tools of laboratory science (Fisher and Gould 2012) and environmental monitoring technology (Bartos et al. 2018; McCabe et al. 2017; Bitella et al. 2014). Developing open hardware, however, is not just about constructing new hardware from open components. It also requires a commitment to openly share all information necessary to replicate the hardware project. For example, Oregon State’s OPEnS Lab demonstrates this philosophy by providing, under an open license, all relevant instruction files, software code, and metadata necessary to replicate the environmental monitoring solutions they have created.

Open data appears to be at a point not unlike where open-source software was in the mid-2000s. We are starting to see serious efforts toward optimization and standardization in how we collect and store data. Exemplifying this is the check-and-balance we are seeing with open standards such as WaterML (Yu et al. 2015). This type of review of a common standard is similar to the optimization and adoption process of open-source software . An additional success for open data is that it has become mainstream and has been adopted as a governmental mandate by a number of governing bodies around the world (including Canada, much of the European Union, and Mexico).

4.3 Challenges

In our experience with open technology development in the hydrologic sciences, we have identified four key challenges to the growth of open projects and adoption of open principles within hydrology and forest-water interactions. First, current research culture hinders or discourages researchers from embracing open practices in project design and in the publication of research outputs, including data, software, and hardware (Molloy 2011; Soranno et al. 2015; Flint et al. 2017). Development of open projects requires a commitment to the creation of the metadata necessary for others to leverage the project outputs, a time-consuming process which is not rewarded in typical metrics for career progression (Heidorn 2008). The withholding of research outputs is not unique to the hydrologic sciences, but rather endemic to academic research practices in general. For example, in a survey of academic geneticists, Campbell et al. (2002) found that only around 25% of researchers share their data, even upon request. This study found that the most common reason stated for choosing not to share data was the time required to properly format and document the data—time that could be used to perform activities for which one would be rewarded by career advancement. When researchers do decide to share data, this decision is not always altruistically driven, but rather based on the perception that the act of sharing data will increase one’s social capital (Haeussler 2011); thus, even shared data may not be shared in a way that is truly democratic. There are also few avenues for publishing open data, software, or hardware designs, and so these publications often end up in the gray literature, which is generally held in lower regard. Important research products, including data, software, and hardware designs, thus tend to get lost, becoming “dark” research products (Heidorn 2008), not visible to others who could use them fruitfully.

Second, the movement by academic journals in the fields of hydrology and forest-water interactions to encourage or require data, software, or hardware designs to accompany manuscripts as supplementary information helps to get data into the public sphere. Although the intention of publishing data, software, and hardware produced in the course of a research project is to increase access, when this supplementary information is held in copyright by the publisher, it limits access and renders the data, software, or hardware designs closed (Fecher and Friesike 2014). Third, open source, open data, and open hardware all have multiple user groups, communities, or organizations with subtly differing definitions of what constitutes open. Without agreeing to a common standard, these groups are challenged in working together to advance open technology. Finally, the open hardware movement is facing challenges in gaining mainstream acceptance. We suspect that this is due to the jack-of-all-trades nature of many open hardware designs. Researchers may choose to design their own more limited capability hardware, rather than use the original open hardware project (e.g., Kerkez et al. 2012), or use of the hardware may come with additional regulatory oversight (as discussed in this volume by Hill et al., with respect to open flight controllers for unpiloted aerial vehicles ).

4.4 Opportunities

As open technology projects increase within the discipline of ecohydrology, there are many opportunities to improve knowledge production in hydrology and forest-water interactions by addressing some of the challenges noted above. First, projects working to integrate open technology to produce higher-level analysis tools can prevent important research outcomes from being lost, becoming dark technology (Heidorn 2008). For example, Soranno et al. (2015) addressed this problem by integrating numerous small and distributed datasets to create a unified data portal for analyzing lake ecosystems at the macroscale. In our opinion, the rise of system-oriented analysis as the dominant paradigm for understanding and managing complex phenomena, such as forest-water interactions (Liu et al. 2007), could be greatly enhanced by the availability of open technology that can be leveraged for analyzing systems-level interactions.

Second, mechanisms for publishing open datasets created by individual researchers are becoming increasingly available in hydrology and forest-water interactions research. In addition to Web repositories hosted by universities and research collaboratives (e.g., CUAHSI), academic journals are beginning to publish data not only as supplementary information but also as “data papers,” which provide metadata describing the data and make the data easier for other researchers to access. Similarly, mechanisms to publish open hardware are also becoming more available. In addition to methods-oriented journals (e.g., Sensors [MDPI] and MethodsX [Elsevier]), “methods papers” are increasingly published in hydrologic journals. These journal-based opportunities for publishing open technology development permit researchers to get credit for the substantial amount of time required to create the technology, potentially increasing the appeal of creating open technology. However, for technology published as a data or methods paper to be considered open, not only must the journal provide open access to these papers, but the technology itself must also be designed in accordance with open principles and be distributed under an open license.

Third, joining forces with movements such as slow science can help to reform metric-based academic cultures that promote speed of discovery at the expense of openness. The slow science movement advocates for research impact over volume, recognizing that impactful research is that which can be easily replicated, validated, and expanded upon (Stengers 2018)—goals facilitated by open principles. The slow science movement recognizes the additional, currently unrewarded, time required to properly document and share one’s research and advocates that such invisible costs of true impact be taken into account.

Finally, there exist a number of opportunities resulting from advancements in electronics and computing hardware that could enhance research in hydrology and forest-water interactions. With cellular and satellite technology continuing to become smaller, more capable, and more affordable, there exist opportunities to create larger-scale, more robust sensor networks and remote sensing devices that improve our understanding of Earth systems processes. We are also presented with more powerful computing technology, which can handle increasingly demanding simulation tasks. Changes in research practices and improvements in infrastructure supporting research, such as data storage and archiving, as well as tools for collaboration, will support the further emergence of open technology and open science as a whole (Fecher and Friesike 2014).