Earth Science Informatics, Volume 4, Issue 4, pp 169–179

Google Earth and Google Fusion Tables in support of time-critical collaboration: Mapping the Deepwater Horizon oil spill with the AVIRIS airborne spectrometer

Authors

  • E. S. Bradley
    • Department of Geography, University of California
  • Dar A. Roberts
    • Department of Geography, University of California
  • Philip E. Dennison
    • Department of Geography and Center for Natural and Technological Hazards, University of Utah
  • Robert O. Green
    • Jet Propulsion Laboratory, California Institute of Technology
  • Michael Eastwood
    • Jet Propulsion Laboratory, California Institute of Technology
  • Sarah R. Lundeen
    • Jet Propulsion Laboratory, California Institute of Technology
  • Ian B. McCubbin
    • Desert Research Institute, Storm Peak Laboratory
  • Ira Leifer
    • Marine Science Institute, University of California, Santa Barbara
Research Article

DOI: 10.1007/s12145-011-0085-4

Cite this article as:
Bradley, E.S., Roberts, D.A., Dennison, P.E. et al. Earth Sci Inform (2011) 4: 169. doi:10.1007/s12145-011-0085-4

Abstract

Web interfaces have made remote sensing image resources more accessible and interactive. However, many web-based and Digital Earth opportunities for remote sensing have not yet been fully explored and could greatly facilitate scientific collaboration. In many cases, these resources can augment traditional proprietary software packages, which can have limited flexibility, spatiotemporal controls, and data synthesis abilities. In this paper, we discuss how web services and Google Earth were used for time-critical geovisualizations of the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Deepwater Horizon oil spill imaging campaign. In particular, we describe how (1) AVIRIS Google Earth products were used to visualize the spatial and temporal characteristics of the campaign’s image acquisitions, critically needed for flight planning, (2) the Google Fusion Tables cloud-based service was applied to create a highly interactive image archive and mapping display, and (3) the Google Fusion Tables API was utilized to create a flexible PHP-based interface for metadata creation and as the basis for an interactive data catalog. Although there are other possible software and programming approaches to these activities, we highlight freely accessible and flexible solutions and bring attention to the newly introduced Google Fusion Tables as a collaborative scientific platform.

Keywords

Remote sensing image database · Airborne imaging spectrometry · Google Earth · Google Fusion Tables · AVIRIS · Deepwater Horizon oil spill

Introduction

Oil released into the marine environment due to petroleum extraction and transport accidents can have disastrous and broad-reaching consequences (National Research Council 2003). Acute effects include high mortality for marine birds and mammals, plankton contamination, and oil-coated shorelines. Long-term ecological effects are multi-faceted, e.g. damage to wildlife populations, fisheries, and coastal areas used for recreational activities (National Research Council 2003; Peterson et al. 2003). Oil spills are a health hazard both for response crews and area residents, contributing to neurovestibular, respiratory, and psychological symptoms (Sim et al. 2010; Solomon and Janssen 2010; Lyons et al. 1999). Oil spills can lead to air pollution in the form of elevated ozone levels (Song et al. 2011) as well as airborne heavy metals such as mercury (Pandey et al. 2009). Polycyclic aromatic hydrocarbon bioaccumulation in seafood poses a consumer health risk that extends to populations outside of the local spill area (Aguilera et al. 2010). Given the high cleanup costs and the breadth of direct and indirect effects, the overall societal impact of oil spills is considerable (Garza-Gil et al. 2006; Liu and Wirtz 2009). Because the fate of oil in the sea and the trajectory of the spill are highly variable and impacted by a number of different processes including weathering, spreading, advection, and Langmuir circulation, oil spills are a spatially and temporally dynamic hazard (National Research Council 2003).

The Deepwater Horizon blowout on 20 April 2010 in the Gulf of Mexico led to the deaths of 11 workers and the largest oil spill in U.S. history, releasing an estimated 4.4 × 10^6 ± 20% barrels (Crone and Tolstoy 2010; Lehr et al. 2010). This environmental disaster was a persistent hazard with continually changing oil slick patterns and approximately 84 days of leakage before the wellhead was finally sealed on 15 July 2010 (Lehr et al. 2010). Environmental and economic damages were considerable, given the close proximity of the spill site to coastal wetlands, important seabird habitat, highly productive fisheries, and popular beaches.

The scientific response to the Deepwater Horizon oil spill involved a collaborative effort of government agencies and university groups to track the oil, estimate the oil spill volume, and assess the ecological damage. Expedited decision-making required rapid data assessment, both for understanding the oil spill and for identifying potential landfall areas to direct cleanup and protective efforts, as well as for planning subsequent data acquisitions. Timeliness and repeat acquisitions were important for capturing pre- and post-oil-impact data to assess coastal ecosystem damage.

An Earth-imaging airborne sensor, the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) (Green et al. 1998), was deployed as a key asset for characterizing the oil spill and resultant ecological impacts. AVIRIS, with 224 bands from 380 to 2,500 nm, is sensitive to the spectral signature of surface hydrocarbons, providing a mechanism to estimate oil slick thickness (Clark et al. 2010). AVIRIS data can also be used to classify vegetation species and detect oil damage. Both oil mapping and pre-oiled ecosystem assessment are time-sensitive activities; thus, flight planning needs to take into account the oil spill trajectory, weather conditions, existing coverage, and other factors.

For typical AVIRIS data collections, the flight design, scheduling, acquisition, processing, and visualization can require several months and involve communication only between the principal investigator tasking AVIRIS and the AVIRIS team. In the case of the 2010 AVIRIS Gulf of Mexico campaign, this timeline was greatly expedited and collaborations involved a much broader and more diverse, multi-agency and multi-university team, including the USGS Spectroscopy Laboratory; University of California, Santa Barbara; University of Utah; University of California, Davis; and NOAA Hazmat. AVIRIS data distribution was challenging given the size of the files (typical scenes are several gigabytes), which made it desirable to have an alternate means for quickly reviewing image quality and coverage. Rapid response data products can save critical hours or days of effort, helping meet the fast-paced demands of next-day flight planning. The standard AVIRIS processing protocol includes the creation of quick-looks, grayscale single-band JPEG images, which are disseminated more easily but are not spatially referenced. Although adequate for tightly constrained campaigns with direct communication between single investigators and the AVIRIS team, these quick-looks lack the spatial context necessary for broader collaboration and synthesis. Similarly, the traditional AVIRIS metadata website (http://aviris.jpl.nasa.gov/cgi/flights.cgi?step=view_all_flights) is intended for more limited-scope acquisitions, with less flexibility for extended campaigns and user-generated content and interaction.

Collaborative objectives for the AVIRIS Gulf of Mexico campaign included flight planning and image selection for optimized data analysis and drew upon interagency knowledge and expertise. Given the time constraints and diversity of the work group, it was imperative to have easily distributable, geospatially referenced AVIRIS products that could be assimilated with other data, allowing exploration of diverse options for information exchange. In this paper, we describe the application of Google Earth, Google Fusion Tables (GFT), and the GFT Application Programming Interface (API) to the planning and visualization of the AVIRIS Gulf of Mexico campaign. These applications helped facilitate the time-critical collaboration of a diverse, ad hoc community in order to optimize image acquisition and data usage. We discuss the implementation of these applications, highlight key features, and discuss limitations and further developments within the context of collaborative geospatial platforms. In particular, we propose that GFT could be highly useful for group collaborations with a pre-defined focus, but may also facilitate the development and interaction of emergent communities.

Background

Web 2.0 and collaborative science

Collaboration and knowledge sharing have been, and will continue to be, key research components that traditionally have involved peer-reviewed journal publications, conferences, and interpersonal interactions (Hara et al. 2003; National Research Council 1993). With Web 2.0 technologies, the collaborative possibilities are greatly expanded and enhanced (Pierce et al. 2009). Web 2.0 places emphasis on interactivity, simplicity, and user participation (O’Reilly 2005). It supports modular collaborative tasks such as online manuscript preparation and proposal development (Dekeyser and Watson 2006; Myhill et al. 2009) as well as integrated experiences in multi-faceted environments (e.g. the Elsevier 2collab platform (http://www.2collab.com) for reference management, networking, and group organization). There already appears to be dramatic growth in the number of researchers using social network applications, increasing from 4% of ScienceDirect survey respondents in 2006 to 25% in 2008 (Brynko 2008). In the 2008 survey, more than half believed that social applications would play important roles in research workflows within 5 years. A 2010 Pew Internet survey found that 83% of 18–33 year olds utilize social network sites, compared to 62% for ages 34–45 and 50% for ages 46–55 (Zickuhr 2010).

Web 2.0 tools are envisioned to meet core Virtual Research Environment (VRE) competencies. These include identifying research questions (via search engines, web feeds), learning about funding opportunities (email alerts, RSS), and finding collaborators (social sites such as Facebook) (Myhill et al. 2009). Other VRE activities include proposal development and sharing research information (Google Docs, Skype, wikis), project management and report writing (Google Docs), and discussing results (open access repositories, webinars, virtual conferencing) (Myhill et al. 2009). Myhill et al. (2009) noted the lack of fully operational examples of Web 2.0-based VREs and observed that many VRE examples are subject-specific. However, evidence to the contrary can be found in VREs such as Talkoot, which provides a customizable means for creating collaborative “open science” portals based on the Drupal web content management framework (Ramachandran et al. 2009).

Google Earth and Google Fusion Tables

Virtual Globes have aided many collaborative efforts and have emerged as important Earth science geospatial visualization tools (Goodchild 2008; Butler 2006). Of these software programs, Google Earth (GE) is the most ubiquitous, with widespread usage and over 500 million unique downloads (http://en.oreilly.com/where2010) (Ballagh et al. 2011). GE has served as the platform for a wealth of scientific visualizations, including 3-D models of atmospheric vertical profiles (Chen et al. 2009), air quality maps (Prados et al. 2010), tropical cyclone tracking (Turk et al. 2011), USDA Forest Service FireMapper thermal imaging (Riggan and Tissell 2008), and animated simulations of ash-aviation encounters (Webley 2011). Key to GE’s success for dynamic scientific applications is the inclusion of the fourth dimension, with a time slider providing temporal control based on KML timestamp specifications. GE interactivity (e.g. selecting layers for display, specifying opacity, or activating pop-up info windows through click events) also greatly enhances the data browsing experience. For example, the GE visualization for the Geoscience Laser Altimeter System goes beyond geolocation of laser footprints and adds value through info windows with corresponding charts of waveform data and color-coded data quality (Ballagh et al. 2011).

Although GE is easy to use and provides powerful visualizations, it has a number of limitations. Unlike traditional GIS, digital globes tend to have minimal direct geographic modeling and analytic capabilities (Goodchild 2008) and are less conducive to higher-level spatial thinking. Thus, GE examines the “what” rather than the “why” of spatial distributions (Schöning et al. 2008). The GE time slider can be used to perform simple temporal filtering (e.g. displaying features for a given time range); however, GE does not directly support other types of filtering (e.g. restricting displays to values greater than a certain threshold), which would be a key step towards spatial knowledge discovery. Additionally, while GE is an excellent tool for users to share geospatial data through the exchange of KML, it is not necessarily designed as a collaborative space, although there are examples of its adaptation for these purposes, e.g. real-time group awareness of viewing activity through color-coded areal symbolization (Tomaszewski 2011).

One collaborative platform with value both in terms of integration with GE and VREs as well as stand-alone use, is Google Fusion Tables (GFT) (Gonzalez et al. 2010). GFT is a freely available cloud-based service for database management, visualization, and integration, which was introduced in June 2009. Tables can be easily generated through uploading files at http://tables.googlelabs.com/; CSV, XLS(X), KML, and Google spreadsheet data formats are all supported. Fusion Table visibility can be set as Public, Unlisted, or Private with sign-in required. Tables can be shared with others who are invited as viewers (read and comment permissions), collaborators (also may edit data), or owners (permission to invite others to view or collaborate).

GFT has many data visualization and web publishing options, e.g. different types of charts can be generated and embedded easily in other websites. These embedded visualizations, because they are based on table queries instead of raw data, automatically update with any database changes. Similar results can be achieved using other approaches, e.g. from creating a MySQL database with a PHP online frontend and JpGraph visualizations to uploading data for visualization at the IBM Many Eyes application (http://www-958.ibm.com/software/data/cognos/manyeyes/). However, GFT is notable for its integrated approach, flexibility, and ease of use. For example, a user can go from an Excel spreadsheet to an online, collaborative database, filtered data, and then visualization in less than 10 minutes.

A key GFT feature is its mapping visualization and geospatial support. Users can configure map styles for points, polygons, and lines. For example, polygon fill colors can be assigned from specifications provided directly in a table column or through gradient and bucket schemes based on column numerical values. Selective mapping is possible with attribute filters, and clicking on map features presents more detailed information in customizable popup info windows, which can include dynamically generated charts from GFT data with the Google Charts API, as shown at http://www.google.com/fusiontables/DataSource?snapid=65503. Map data are KML exportable, with an option to create a KML network link, and Google Maps applications using the FusionTablesLayer class can incorporate GFT spatial data.

The 2010 AVIRIS Gulf of Mexico campaign

The AVIRIS Gulf of Mexico campaign included 41 flight days and over 450 flight lines, totaling more than two terabytes of data. These images comprise a temporally and thematically diverse dataset, with varying utility for different parties and a range in quality due to cloud cover. Besides the high importance of the AVIRIS Gulf data for characterizing the oil spill (Clark et al. 2010) and addressing spill-related science questions, the campaign served as an excellent collaboration test case for GE visualization approaches, GFT implementation, and GFT API integration.

The Gulf of Mexico AVIRIS data were processed on a highly expedited schedule, with the calibrated and georeferenced radiance products and single-band grayscale quick-looks often produced on the same day as acquisition. These quick-looks were an important component of subsequent flight planning, which was facilitated by placing them in a geographic context, visualized within GE. This was particularly important for image interpretation, e.g. assessing whether cloud cover obscured a specific area of interest, and for data synthesis, particularly for evaluating the need for repeat acquisitions. For ocean scenes with different flight geometries and headings, the geographic framework was helpful for orienting the viewer, as opposed to the standard quick-look product, which is displayed as a vertical image strip regardless of the flight heading (http://aviris.jpl.nasa.gov/cgi/flights.cgi?step=view_all_flights).

For each AVIRIS flight run, four KML features were generated: (1) label, (2) bounding box, (3) true data outline, and (4) quick-look image overlay. Data extracted from the AVIRIS GeoTIFF images were used to construct the KML. For the image overlay layer, a reference to the corresponding quick-look file location was incorporated along with the four corner coordinates. The GeoTIFFs were batch processed with IDL code, which extracted the flight date (parsed from the filename) and the latitude/longitude coordinates of the four corners. To create a more precise outline of the valid image data within the larger scene extent for the flight run, we found the true data borders of each GeoTIFF by determining the left and right edges every 25 lines, with finer spacing at the image top and bottom, where aircraft trajectory shifts are typically more rapid.

The final KML file (Fig. 1) was organized first by type (Label, Bounding box, True data outline, Quick-look) and then by acquisition date, with a GE timestamp specified for each date folder. Timestamps were a key component: the GE time slider allowed users to display temporal subsets of the archive and create animations of the image acquisitions through time. The time slider was particularly helpful given overlapping flights and the temporal sensitivity of the oil spill trajectory and pre- and post-impact ecosystem conditions. With the time slider, users were able to focus on relevant data and to compare multiple overlapping datasets, to confirm, for example, that coastal ecosystem acquisitions with excessive (above criteria) cloud cover were prioritized for repeat acquisitions.
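To make this structure concrete, the sketch below builds one date folder containing a single quick-look GroundOverlay with a TimeStamp that the GE time slider can filter on. It is written in PHP for illustration only (the production batch processing used IDL), and the run ID, image URL, and corner coordinates are hypothetical.

<?php
// Minimal sketch with hypothetical run ID, image URL, and corner coordinates:
// build a date folder holding one quick-look GroundOverlay for an AVIRIS
// flight run, with a TimeStamp so the Google Earth time slider can filter it.

// Four corner coordinates ([lon, lat], counter-clockwise from lower-left),
// e.g. as extracted from the scene GeoTIFF.
$corners = array(
    array(-88.40, 28.70), array(-88.30, 28.70),
    array(-88.30, 28.95), array(-88.40, 28.95),
);
$pairs = array();
foreach ($corners as $c) {
    $pairs[] = $c[0] . ',' . $c[1];
}
$coords   = implode(' ', $pairs);
$date     = '2010-05-06';
$runName  = 'f100506t01p00r07';                                    // hypothetical
$imageUrl = 'http://example.org/quicklooks/' . $runName . '.jpg';  // hypothetical

// gx:LatLonQuad drapes the image over an arbitrary quadrilateral, which suits
// rotated flight lines better than an axis-aligned LatLonBox.
$kml = <<<KML
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <Document>
    <Folder>
      <name>Quick-looks $date</name>
      <TimeStamp><when>$date</when></TimeStamp>
      <GroundOverlay>
        <name>$runName quick-look</name>
        <Icon><href>$imageUrl</href></Icon>
        <gx:LatLonQuad><coordinates>$coords</coordinates></gx:LatLonQuad>
      </GroundOverlay>
    </Folder>
  </Document>
</kml>
KML;

file_put_contents("aviris_quicklooks_$date.kml", $kml);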
Fig. 1 (image: https://static-content.springer.com/image/art%3A10.1007%2Fs12145-011-0085-4/MediaObjects/12145_2011_85_Fig1_HTML.gif)

Google Earth KMZ for the AVIRIS Gulf of Mexico campaign demonstrating different functionalities. a, Quick-look image overlays for the full archive. b, Subset of data outline polygons with the time slider set to August 2010. c, Demonstration of data outline polygons where transparency aids the assessment of image overlap. d, Zoom view of quick-look image overlays for 6 May 2010

The GE format also was useful for incorporating ancillary datasets, e.g. the Deepwater Horizon wellhead location, NOAA weather and ocean current data, and MODIS imagery (Fig. 2). Synthetic Aperture Radar satellite data, other aircraft data, NOAA slick predictions, and even subsurface currents were also integrated. One AVIRIS mission objective was overflights of surfacing oil, which required consideration of the predicted full water column currents and surface winds. Because the predictions contained inaccuracies, comparing the trend in model accuracy with remote sensing data allowed flight planning to incorporate offsets. Estimating these offsets was greatly facilitated by the GE time slider, which reduced the clutter of the full image display.
Fig. 2 (image: https://static-content.springer.com/image/art%3A10.1007%2Fs12145-011-0085-4/MediaObjects/12145_2011_85_Fig2_HTML.gif)

Google Earth display of AVIRIS images overlain on a MODIS Terra false-color image (bands 7, 2, 1: ~2.1, 0.85, 0.65 μm) from 17 May 2010 with marine observation placemarks from the NDBC (www.ndbc.noaa.gov/kml/marineobs_by_pgm.kml), which provide real-time meteorological and oceanographic data

For careful comparison between ancillary data and AVIRIS, modifying the opacity of GE features was highly beneficial. For example, making images semi-transparent allowed predicted oil slick locations to be compared with AVIRIS, MODIS, or other imagery of the actual slick location without toggling between images. The quick-look image overlays in GE proved advantageous for geospatial interpretation of the images, e.g. for assessing the impact of cloud cover or marking the location of in situ oil burning and spill response crews.

The AVIRIS Google Fusion Table (Fig. 3) was created from a CSV file, which was generated as part of the IDL GeoTIFF processing. The CSV table included extracted metadata for each scene (e.g. pixel size, dimensions, rotation, date) and its location in the form of a KML polygon string for the four corners. This CSV table was uploaded to GFT (tables.googlelabs.com) to provide a web-accessible means for collaborators to query and to map the AVIRIS campaign data (http://www.google.com/fusiontables/DataSource?dsrcid=337729). The GFT info windows for scene polygons were customized to include links to the flight log and quick-look image thumbnails (Fig. 3b). Given the large number of scenes and varying spatiotemporal attributes, this linked database/map visualization provided a useful way to manage and search this information, such as displaying only scenes with pixel sizes less than 5 m (Fig. 3b). We also found GFT helpful for flight planning. Specifically, we used GFT to show prospective flight lines with group priority indicated by line color and within-group priority by line width (Fig. 3c). This was an expedient way to visualize flight lines that were created in an Excel spreadsheet and to confirm that the line spacing and locations were reasonable.
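As a minimal sketch of that CSV-building step (the column names and scene values below are hypothetical), the geometry column simply holds a KML <Polygon> string, which GFT interprets as mappable geometry once the file is uploaded:

<?php
// Minimal sketch (hypothetical column names and scene values): write one CSV
// row of AVIRIS scene metadata in which the geometry column is a KML
// <Polygon> string, a format GFT maps directly after upload.

$scene = array(
    'Name'      => 'f100506t01p00r07',
    'Date'      => '2010-05-06',
    'PixelSize' => 7.6,     // meters
    'Rotation'  => 13.0,    // degrees
    // Corner coordinates as [lon, lat] pairs; the ring is closed below.
    'Corners'   => array(
        array(-88.40, 28.70), array(-88.30, 28.70),
        array(-88.30, 28.95), array(-88.40, 28.95),
    ),
);

// Build the KML polygon string for the geometry column.
$ring = array();
foreach ($scene['Corners'] as $c) {
    $ring[] = $c[0] . ',' . $c[1] . ',0';
}
$ring[] = $ring[0];  // repeat the first corner to close the ring
$polygon = '<Polygon><outerBoundaryIs><LinearRing><coordinates>'
         . implode(' ', $ring)
         . '</coordinates></LinearRing></outerBoundaryIs></Polygon>';

$fh = fopen('aviris_scenes.csv', 'w');
fputcsv($fh, array('Name', 'Date', 'PixelSize_m', 'Rotation_deg', 'Geometry'));
fputcsv($fh, array($scene['Name'], $scene['Date'], $scene['PixelSize'],
                   $scene['Rotation'], $polygon));
fclose($fh);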
Fig. 3 (image: https://static-content.springer.com/image/art%3A10.1007%2Fs12145-011-0085-4/MediaObjects/12145_2011_85_Fig3_HTML.gif)

AVIRIS Google Fusion Table maps (a & b: http://www.google.com/fusiontables/DataSource?dsrcid=337729, c: http://tables.googlelabs.com/DataSource?dsrcid=253302). a, Map display of flight bounding boxes. b, Filtered map display for image pixel size less than 5 m. Clicking on data polygons opens the customized-template info window display, which includes a thumbnail of the quick-look image and a link to the AVIRIS flight log. c, AVIRIS flight planning example, where line color is specified by group and line width by within-group priority. The text in info windows in b and c has been edited from the original GFT display to enhance legibility

We also explored the GFT API (http://code.google.com/apis/fusiontables/) for querying the AVIRIS Gulf of Mexico catalog and for metadata creation. The API enables programmatic access to GFT, and we utilized PHP code for submitting the SQL-like requests to the Fusion Table. The resultant PHP web interface (http://aviris.dri.edu/alt/aviris_batch_classify.html) was used for reviewing temporal subsets of the imagery and inputting collaborator classifications of cloud cover, scene composition (land versus ocean), and data quality/sensor issues. The end goal of this application was expedient metadata creation, which would allow dataset filtering for different objectives, e.g. identifying all land images with primarily clear sky conditions. The large quantity and size of images in the database make a priori image selection very important for meeting collaborators’ particular goals.
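The request mechanics were simple: an SQL-like statement is sent to the GFT query endpoint and results are returned as CSV. The sketch below reflects the 2011-era API as documented at the time; the endpoint details, authorization-token handling, table ID, and column names are stated here only as assumptions for illustration.

<?php
// Minimal sketch of submitting SQL-like requests to GFT from PHP, assuming the
// 2011-era SQL endpoint and an auth token obtained separately; table ID and
// column names are placeholders.

function gftQuery($sql, $authToken = null)
{
    $endpoint = 'https://www.google.com/fusiontables/api/query';
    $ch = curl_init();
    if ($authToken === null) {
        // An unauthenticated GET suffices for SELECTs on public tables.
        curl_setopt($ch, CURLOPT_URL, $endpoint . '?sql=' . urlencode($sql));
    } else {
        // Writes (INSERT/UPDATE) are POSTed with an authorization header.
        curl_setopt($ch, CURLOPT_URL, $endpoint);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'sql=' . urlencode($sql));
        curl_setopt($ch, CURLOPT_HTTPHEADER,
                    array('Authorization: GoogleLogin auth=' . $authToken));
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $csv = curl_exec($ch);   // responses come back as CSV text
    curl_close($ch);
    return $csv;
}

// Example: list scenes from one flight day (placeholder table ID and columns).
$rows = gftQuery("SELECT Name, Date, PixelSize_m FROM 337729 " .
                 "WHERE Name CONTAINS 'f100506'");
echo $rows;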

The custom GFT API interface allowed users to select a month and review all of the Gulf of Mexico images for that month (Fig. 4). Via a PHP for-loop, all of the images in the given set were presented, with higher resolution versions available as popup windows. Directly above each image in this table is a form input, which was used to submit image assessments for three parameters: cloud cover, presence of land in the scene, and geometric distortions/data artifacts. Because the database could be queried with substring matching (“contains”), we utilized a single variable rather than separate variables for each of the three attributes. The key code convention was CxLxWx, where x represents the state for the preceding category, wherein C = clouds, L = land, and W = data artifacts. A clear land image without data anomalies would be represented as C0L1W0. We specified that only certain users were able to submit these notations to the database, but the text output option allows guest users to create an HTML table version.
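For illustration, a compact sketch of this convention is given below: the three assessment flags are packed into one string, and a single substring (“contains”) filter can then select on any one flag. The column name and table ID in the query string are placeholders.

<?php
// Minimal sketch of the CxLxWx convention (column name and table ID are
// placeholders): encode a scene assessment as a single string and build a
// GFT-style substring query to retrieve matching scenes.

/** Encode cloud, land, and artifact flags (0 or 1) as e.g. "C0L1W0". */
function encodeAssessment($clouds, $land, $artifacts)
{
    return sprintf('C%dL%dW%d', $clouds, $land, $artifacts);
}

// A clear land image without data anomalies:
$code = encodeAssessment(0, 1, 0);   // "C0L1W0"

// Because all three flags live in one field, one CONTAINS filter can select
// on any of them, e.g. every cloud-free scene:
$sql = "SELECT Name, Date FROM 337729 WHERE Assessment CONTAINS 'C0'";

echo $code . "\n" . $sql . "\n";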
Fig. 4 (image: https://static-content.springer.com/image/art%3A10.1007%2Fs12145-011-0085-4/MediaObjects/12145_2011_85_Fig4_HTML.gif)

Fusion Tables API example for assigning classes to AVIRIS images through a batch display (a, http://aviris.dri.edu/alt/aviris_batch_classify.html), with image display controls that adjust the image screen size from 10 to 300 pixels. To apply groups to the images, the operator enters a six-digit code that signifies cloudiness, land cover, and data anomalies for each image. The key code convention was CxLxWx, where x represents the state for the preceding category, wherein C = clouds, L = land, W = data artifacts. A clear land image without data anomalies would be represented as C0L1W0. After submission, these metadata are automatically written to the corresponding Fusion Table or can be output to an HTML table

GFT also provided the basis for an online AVIRIS Gulf of Mexico catalog, developed later to make the full 224-band image files more broadly accessible (http://aviris.jpl.nasa.gov/html/gulfoilspill.html). Visitors to the website can initiate data downloads from info window links in the embedded Fusion Table map. Because the configuration of the GFT info windows is customizable and can include the table fields in different ways, it is simple to implement scene-specific download links. In addition, a GFT API advanced interface allows users to search the data based on scene attributes (e.g. date, pixel size, heading, solar azimuth, location) and return the results in a map or image table, from which they can then download the AVIRIS files (Fig. 5).
Fig. 5 (image: https://static-content.springer.com/image/art%3A10.1007%2Fs12145-011-0085-4/MediaObjects/12145_2011_85_Fig5_HTML.gif)

AVIRIS Gulf of Mexico advanced online catalog (accessible from: http://aviris.jpl.nasa.gov/html/gulfoilspill.html). Users can search the database by different scene attributes including date, pixel size, filename, and solar azimuth and display the results in an image table as shown or as a map with info windows, similar to Fig. 3b

Improvements for future AVIRIS collaborations

Distributing the AVIRIS geospatial data as GE and GFT products addressed the diversity of the user audience, with both platforms being free, GE being widely used, and GFT directly accessible online. For large imaging spectrometry datasets, there is a need for manageable geo-visualizations and file catalogs. These data products summarize the archive’s information and allow users to assimilate data and apply spatiotemporal filters. We found that GE and GFT were appropriate platforms for addressing these objectives, yet a number of aspects were identified for improvement.

Examples of new directions for development include incorporation of in-flight data transfer with GE visualizations (Crowley et al. 2006) and direct integration into the NASA Real Time Mission Monitor situational awareness tool (Conover et al. 2010). Also, greater collaborative functionality would have benefited the AVIRIS GE instantiation. Improvements could include group viewing awareness (Tomaszewski 2011) and online decision support tools built with the GE Plug-in (e.g. http://marinemap.org).

GE data visualization could be further enhanced through alternate image displays that leverage AVIRIS spectral richness; currently only panchromatic quick-looks are included. A test case for 6 May 2010 AVIRIS data with true color JPEGs was successful, although file sizes were larger, and image standardization and optimization of color scales require further exploration. Other imaging spectrometry visualization techniques could be utilized to provide application-specific information, e.g. images of different band combinations for studying fire (Dennison and Roberts 2009). Additionally, there are a number of methods for reducing data dimensionality and highlighting spectral variance, e.g. pseudo-color combinations derived from principal component analysis (Tyo et al. 2003), expedited one-bit transform band selection (Demir et al. 2009), and non-stationary Markovian fusion models (Mignotte 2010). However, as discussed by Jacobson and Gupta (2005), hyperspectral visualization involves many different objectives, including edge preservation and consistency. In order to obtain a natural, intuitive palette, other approaches such as stretched color matching function envelopes could be preferable (Jacobson and Gupta 2005). Ideally, users would have access to a variety of AVIRIS image visualizations in GE.

Other improvements involve modifications within the GE software. In the case of extended flight run AVIRIS scenes, we encountered a file size limitation for image overlays (images that exceeded 20,000 pixels in length required resizing before successful display in GE). Additionally, other attribute sliders similar to the temporal control would add desirable functionality, e.g. the ability to display only imagery with pixel resolution less than a threshold (e.g. 5 m) or all April images from a 20-year dataset. Implementation could use the KML <ExtendedData> tag and the attribute search discussed for Google Maps (http://code.google.com/apis/maps/documentation/mapsdata/developers_guide_protocol.html#AttributeSearch). The desire to filter the AVIRIS image display based on non-temporal attributes was one of the factors that led us to explore GFT.
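For reference, per-feature attributes of this kind are carried in KML as <ExtendedData> name/value pairs; the short sketch below (attribute names and values are purely illustrative) shows how a data-outline placemark could expose pixel size and acquisition month for such a filter.

<?php
// Illustrative only (attribute names and values are hypothetical): per-feature
// attributes carried in KML <ExtendedData>, the tag suggested above as a hook
// for attribute-based filtering alongside the existing time slider.

$pixelSize = 7.6;    // meters
$month     = 'May';

echo <<<KML
<Placemark>
  <name>f100506t01p00r07 outline</name>
  <ExtendedData>
    <Data name="pixel_size_m"><value>$pixelSize</value></Data>
    <Data name="acquisition_month"><value>$month</value></Data>
  </ExtendedData>
  <!-- geometry omitted -->
</Placemark>
KML;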

There are many opportunities for extending airborne remote sensing collaboration with GFT. The comment functionality in GFT could help facilitate asynchronous online discussion of flight plans. A selection variable with comma-separated tags would allow collaborators to input their flight line priorities and filter displays based on this attribute. Reviewing these comments could prove instructive in understanding the interactions and thought processes leading to mission decisions, of great concern where science and policy overlap. GFT also allows users to save display links for given filter conditions and visualizations as part of the table. These links can be re-accessed later or distributed via email or instant messaging. This functionality could be particularly beneficial for online meetings. With GFT, all collaborators can view the most current version of the dataset and have interactive control, but can also share a given display quickly through the links option. Because a link saves only the filter and visualization state, not the underlying data, links are always current. However, this may be undesirable for some users, because GFT does not support versioned visualizations (Gonzalez et al. 2010). Older versions of tables cannot be retrieved, so users need to manually download the table or save visualizations as images if tracking changes is important and not otherwise documented through comments or other means.

Other current GFT areas for improvement involve querying and display, which could further enhance data exploration. Given the underlying infrastructure of GFT, some common query options are restricted (Gonzalez et al. 2010). For example, the ‘OR’ operator currently cannot be used, and spatial queries in the GFT API are nascent, with only spatial intersections of point data with circles and rectangles (http://code.google.com/apis/fusiontables/docs/developers_reference.html#Spatial). GFT also would benefit from GE time slider functionality for investigating spatiotemporal patterns. Lastly, cartographic standards, such as color bars and scale legends, should be included. However, development of GFT is ongoing, and these issues can also be addressed independently by developing customized interfaces with the GFT API.
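As an illustration of the point-data spatial filter that is available, the sketch below builds a circle-intersection query around the wellhead area, following the syntax given in the API reference cited above; the table ID, point column name, and coordinates are placeholders.

<?php
// Sketch of the point-in-circle spatial filter, following the syntax in the
// API reference cited above. The table ID, point column, and coordinates are
// placeholders; only point geometries can be tested against circles or
// rectangles, so the query targets a scene-center point column rather than
// the scene outline polygons.

$lat    = 28.74;    // approximate wellhead latitude (placeholder)
$lng    = -88.37;   // approximate wellhead longitude (placeholder)
$radius = 50000;    // search radius in meters

$sql = sprintf(
    "SELECT Name, Date FROM 337729 " .
    "WHERE ST_INTERSECTS(SceneCenter, CIRCLE(LATLNG(%.4f, %.4f), %d))",
    $lat, $lng, $radius
);
echo $sql . "\n";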

GFT also has the potential to serve as a community-based geoportal (De Longueville 2010), although current functionality is limited. GFT currently has a simple text search interface, where users can search the collection of public tables for different terms, e.g. ‘Marcellus Shale’ or ‘Soil’. However, this is a highly generalized search; tables are returned if the term is present in the table name, metadata, column heading, or rows. The search interface could be adapted into a more semantically intelligent, ontology-driven form, where the search would include spatial attributes as well as other fields including date of creation, and could also take the form of a tree-based visualization for ontology-based metadata browsing (Athanasis et al. 2009; Yang et al. 2010; Gahegan et al. 2009). The Fusion Table creation template could further guide the metadata creation process (current fields are: “Table Name”, “Description”, “Attribution”, and “Attribution Link”) with the addition of more specific prompts and incorporation of geoscience or other domain ontologies (Raskin and Pan 2005). Information such as spatial extent and number of entries could be extracted directly from tables.

The Web 2.0 social dimension of GFT also could be developed further. Currently, public tables include the owner’s email address, but this information is not directly integrated with user profiles, the ability to retrieve other tables created by a given user, or the ability to initiate forum-based discussions. Further, there could be additional development of meta-resources, “information about resources provided by a user’s community”, which include: activity (use statistics indicative of popularity), ratings (community-assigned numerical evaluations of usefulness and quality), tags (user-defined labels), and comments (De Longueville 2010). Already, users can comment on tables, but comments are only viewable from the table itself, as opposed to also being available in an aggregated and searchable form. Additionally, support for more communication and integration aspects, e.g. embedded chat functionality as is found in Google Documents, could greatly aid collaborations, while the ability to simultaneously display multiple Fusion Tables as map layers, similar to the ArcGIS.com web map viewer (ESRI, Redlands, CA), would provide new data visualization and exploration opportunities.

Discussion

For the AVIRIS Gulf of Mexico campaign, there was a need for effective and timely sharing of information amongst a distributed group. Group members from diverse institutions (the NASA JPL AVIRIS team; University of California, Santa Barbara; University of Utah; USGS; and NASA Headquarters, among others) regularly accessed the rapid response data products multiple times per day over an extended period for a range of collaborative activities. These activities included flight planning, daily report dissemination, and interpretation discussions. In these regards, GE and GFT were capable platforms for distributing time-critical geospatial information. GE KML files provided multidimensional and effective data sharing pathways compared to traditional AVIRIS quick-look JPEG images.

The GE and GFT platforms also proved to be scalable, with active use by both the full team and smaller teams as part of the visualization and prioritization of flight planning for the multiple mission objectives and for image selection. However, many GFT collaborative capabilities were underutilized: live links to visualizations were distributed, yet no comments were created for the table and it was not merged with others. This is thought to be primarily due to the strength and persistence of other forms of communication within these groups (verbal and email). Also, time constraints and the daunting workload made introduction and adoption of new software more difficult. Although not fully realized in this case, we recognize the potential for even greater collaboration with GFT. We also found the GFT embedded map functionality and API to be very useful for quickly creating a means for data search, visualization, and dissemination (http://aviris.jpl.nasa.gov/html/gulfoilspill.html).

GFT represents an example of geospatial cloud computing that contributes towards the advancement of Geospatial Cyberinfrastructure (GCI), as discussed by Yang et al. (2010). Due to its ease of use, it is well-suited to enabling users to focus more on exploring and using datasets than on the technical details of coding their display. However, two key GCI concepts, archival capability and interoperability (Yang et al. 2010), merit further discussion with respect to GFT. GFT archival capability could be compromised by company redirection of efforts or collapse. Fortunately, Google is a well-established company, table creators likely would still have the native data, and GFT has table/KML export capability. Google has also demonstrated a commitment to user notification, data export, and transitioning for applications that have been discontinued, such as Google Wave (http://googlewave.blogspot.com/).

Interoperability is a key GCI enabling technology and supports data publishing, discovery, and synthesis through frameworks such as XML/GML, JavaScript, and AJAX (Yang et al. 2010). The Open Geospatial Consortium (OGC) and ISO/TC211 have developed specifications for web services in support of online geographic applications (see http://www.opengeospatial.org/standards), with currently over 1,000 instances providing over 10,000 Web Mapping Service (WMS) layers online (Wu et al. 2011). However, GFT does not currently support WMS integration in the mapping display. Even more fundamentally, only one GFT feature layer can currently be displayed through the GFT map interface at a given time (although multiple GFT tables can be displayed as layers in Google Maps applications with minimal code). In contrast, ArcGIS.com includes searching for and displaying multiple layers from the ArcGIS domain, the general web, or particular GIS servers. Also, while GFTs can be constructed from CSV and KML, there currently is no GFT option to import data directly through OGC web services. However, with tools such as WMS to KML converters (Wei et al. 2009), WMS layers could be converted to KML and then imported into GFT. Also, a commercial solution, Arc2Cloud (Arc2Earth, Somerset, NJ; http://beta.arc2cloud.com/), is already available for extending GFT. It supports integration with existing mapping interfaces including ArcGIS.com, web services such as Web Feature Service (WFS) and the GeoServices REST Specification, and different data formats, e.g. Simple GML and GeoJSON.

Conclusion

Google Earth and Google Fusion Tables helped facilitate scientific collaboration through the visualization, distribution, and communication of planning objectives and acquired data as part of the AVIRIS Gulf of Mexico oil spill airborne remote sensing campaign. In this paper, we highlighted key aspects of these technologies that were particularly relevant to this mission, e.g. the time slider control in Google Earth and the filtered map display and application programming interface of Google Fusion Tables. Google Fusion Tables, with its cloud-based database and visualization service, could be useful for a diverse array of Earth science applications. Being free and simple to use, it holds considerable potential for the classroom, and its quick results, online collaboration, and flexibility would be desirable for research groups. However, social media meta-resources, semantic web searches, and interoperability within the Open Geospatial Consortium standards framework are aspects requiring further discussion and development. We envision that Google Fusion Tables could serve as a nexus for virtual communities interested in exploring and sharing data, but currently the application appears more designed for pre-arranged collaboration than for emergent groups.

Acknowledgments

Special thanks to the JPL AVIRIS team, the ER-2 and Twin Otter pilots and support personnel, the USGS Denver Spectroscopy Lab for flight planning input and analysis (Roger Clark, Raymond Kokaly, Gregg Swayze, K. Eric Livo, and Todd Hoefen), Diane Wickland and Michael Goodman for their leadership roles in campaign support and coordination, and Seth Peterson for GFT imagery classifications. We also greatly appreciate Google’s development and support for the Google Earth and Google Fusion Tables applications.

Copyright information

© Springer-Verlag 2011