Introduction

Changes in weather are of great interest to people around the world since they often have serious effects on agriculture, industry, transportation, daily activities, and lives. In terms of disaster mitigation, real-time weather monitoring and reporting can help people prepare for bad weather conditions caused by storms, typhoons, snowfalls, blizzards, and other severe weather patterns. Currently, geostationary meteorological satellites play a key role in providing continuous atmospheric observation and monitoring of weather and a wide variety of environmental phenomena.

The Meteorological Satellite Center (MSC) of the Japan Meteorological Agency (JMA) has been operating a series of geostationary meteorological satellites since 1978. The GMS series, which are spin-stabilized satellites, are generally referred to as the first-generation geostationary meteorological satellites. The Multi-functional Transport SATellite (MTSAT) series, which are three-axis body-stabilized satellites similar to the contemporary Geostationary Operational Environmental Satellite (GOES) series of the National Oceanic and Atmospheric Administration/National Aeronautics and Space Administration (NOAA/NASA), are generally referred to as the second-generation geostationary meteorological satellites. The GMS and MTSAT series of JMA are also called the Himawari series (himawari means “sunflower” in Japanese). The initial Himawari through Himawari-5 are first-generation geostationary meteorological satellites, while Himawari-6 and Himawari-7 are second-generation ones. Meteorological agencies in Japan, the USA, Europe, and other countries have been planning third-generation geostationary meteorological satellites. The main instrument of Himawari-8/-9, the third-generation satellites, is the Advanced Himawari Imager (AHI), a multispectral imaging payload developed by Exelis (now Harris) (Bessho et al. 2016). The imager onboard the GOES series is named the Advanced Baseline Imager (ABI) by NOAA/NASA (Schmit et al. 2017). Table 1 shows the specifications of the third-generation geostationary meteorological satellites, of which Himawari-8 was the first to be launched. The basic specifications of the three series are similar in the number of bands, spatial resolution, and observation interval, all of which exceed those of the second generation.

Table 1 Third-generation geostationary meteorological satellites by Japan, USA, and Europe

With the development of multi-band imagers with improved spatial resolution onboard the third-generation geostationary meteorological satellites, big meteorological data are generated. The third-generation satellites are expected not only to produce basic data for weather monitoring, but also to observe the Earth’s environment. The target users of these observation data include meteorologists as well as ordinary people with a special interest in the Earth’s environment. In this paper, we develop a web-based real-time and full-resolution data visualization for Himawari-8 satellite sensed images. Real-time data visualization helps users better understand current weather conditions and respond quickly to weather changes, while full-resolution data visualization supports recognition and awareness of detailed weather conditions. Real-time and full-resolution data visualization is also important for facilitating the investigation of regional and global meteorological phenomena. Our data visualization techniques are based on ecosystems working on an academic cloud.

The rest of this paper is organized as follows. We first give a brief overview of Himawari-8, including its specifications and related websites, and then describe the design and development of our web-based real-time and full-resolution data visualization for Himawari-8 satellite sensed images. We then present laboratory experiments on domestic and international network access to the website. Finally, we conclude the paper and discuss future work.

Overview of Himawari-8

Specifications

Himawari-8 was launched from Japan’s Tanegashima Space Center on an H-IIA rocket on 7 October 2014 and settled into geostationary orbit on 16 October 2014. Himawari-8 is located at 140.7 degrees East and observes the East Asia and Western Pacific regions as the successor to MTSAT-2 (Himawari-7). JMA commenced operation of the satellite on 7 July 2015. Himawari-9 was launched on 2 November 2016 for in-orbit standby service and will eventually replace Himawari-8. Each satellite is planned to be operational for seven years, with Himawari-9’s observation work continuing to 2029. Himawari-8/-9 entered the service phase ahead of the other third-generation geostationary meteorological satellites, such as NOAA/NASA’s GOES-16, launched on 19 November 2016, and the Meteosat Third Generation (MTG) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), scheduled for 2021 (EUMETSAT 2016), as described in Table 1.

The AHI onboard Himawari-8 (hereafter Himawari-8/AHI) has much richer observation functionality than the MTSAT-1R/Japanese Advanced Meteorological Imager (JAMI) and the MTSAT-2/Imager. Table 2 shows that the Himawari-8/AHI has 16 observation bands, compared with the five bands of the earlier MTSAT-1R/JAMI and MTSAT-2/Imager. The first three bands are in visible frequencies: blue, green, and red. Himawari-8 is therefore able to provide full-color images of the Earth, the first geostationary meteorological satellite to do so since NASA’s ATS-3, which operated from 1967 to 2001. Such color images are important and useful, especially for ordinary people. The spatial resolutions of the Himawari-8/AHI visible and infrared bands are twice as high as those of the MTSAT-1R/JAMI and MTSAT-2/Imager, as shown in Table 2.

Table 2 Himawari-8/AHI bands and correspondence to bands on Himawari-6/-7

The Himawari-8/AHI scans five areas: full disk, Japan Area (regions 1 and 2), Target Area (region 3), and two Landmark Areas (regions 4 and 5). The full disk is an image of the whole Earth as seen from the satellite. The scan ranges for the full disk and the Japan Area are fixed in advance, while those for the Target Area and Landmark Areas are flexible to enable various operations. Ten-minute divisions are the basic units of the observation schedule, called a timeline. On each timeline, the Himawari-8/AHI scans the full disk once, the Japan Area and Target Area four times each, and each Landmark Area twenty times. Accordingly, full disk images are taken every 10 minutes, Japan Area and Target Area images every 2.5 minutes, and Landmark Area images every 30 seconds, as the short calculation below illustrates. In Himawari-8/-9 baseline observations, the timeline is repeated every 10 minutes except during housekeeping operations. The time intervals of the full disk, Japan Area, and Target Area are given in Table 3.

Table 3 Time interval of Himawari-8 observations and data files provided by JMA: observation area, time interval, data format, data size, and segment
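These intervals follow directly from the number of scans per timeline. A minimal worked example (our own illustration; the schedule values are from the baseline observation plan described above):

```python
# Observation interval per area, derived from scans per 10-minute timeline.
TIMELINE_SECONDS = 10 * 60
SCANS_PER_TIMELINE = {
    "full disk": 1,
    "Japan Area": 4,
    "Target Area": 4,
    "Landmark Area": 20,
}

for area, scans in SCANS_PER_TIMELINE.items():
    print(f"{area}: every {TIMELINE_SECONDS / scans:.0f} s")
# full disk: every 600 s; Japan/Target Area: every 150 s (2.5 min);
# Landmark Area: every 30 s
```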

In terms of observation frequency, the MTSAT-1R/JAMI and the MTSAT-2/Imager scanned the full disk image every hour and the northern half image every half hour. The enhancements of the third-generation geostationary meteorological satellites lead to about 50 times more data from the Himawari-8/AHI than from the previous generation of imagers (Miyoshi et al. 2016). The sheer size of these big data therefore makes it difficult for satellite operators to provide real-time, full-resolution live feeds of the entire Earth over public networks, whose bandwidth is often insufficient.

Survey of Himawari websites

After Himawari-8 entered operation, several websites around the world started to provide Himawari-8 imagery. The website of the Meteorological Satellite Center (MSC) of the Japan Meteorological Agency (JMA) (2016) presents red, green, and blue (RGB) composite imagery over the past 24 hours. The user can select various areas, including the full disk, select the imagery of various bands including the RGB composite, and adjust animation settings. The website of the Regional and Mesoscale Meteorology Branch (2016) presents the latest image, a 4-week archive, and a loop of the day, in which the RGB composite imagery is produced from Himawari-8. All of these are provided for download as a low-resolution image, a high-resolution image (11,000 × 11,000 pixels for the full disk), an animated Graphics Interchange Format (GIF) file, or an MPEG-4 (MP4) video file. The user can select an observation area or the full Earth disk. The website of the Australian Government Bureau of Meteorology (BOM) (2016) presents low-definition satellite images for the Australian region or the full disk, and high-definition (HD) satellite images for the Australian region only. For the HD satellite images, the user can adjust animation settings and speed, and display cities, roads, boundaries, coastlines, and coordinates. The website of the Hong Kong Observatory (HKO) (2015) presents weather satellite imagery over the past 48 hours. The user can adjust animation settings, display cities, and download high-resolution images. The website of the Office of Satellite and Product Operations (OSPO) (2016) presents full disk and hemispheric Himawari-8 imagery with loops over the past 24 hours. The user can select the type of imagery, such as infrared (IR Ch. 4), water vapor, water vapor in blue, visible, aviation (AVN) infrared, funktop infrared, and enhanced (RBTop) infrared (Ch. 4). Funktop is a color enhancement scheme applied to infrared satellite imagery to highlight certain cloud-top temperatures as an aid to precipitation analysis. In addition, for loops, the user can adjust the animation speed, display latitude and longitude, and zoom in or out of the imagery. The website of the Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) (2016) presents Himawari-8 imagery over the past 12 hours. The user can select the observation time and display an animated image. The website of Kochi University (2015) presents the latest weather satellite images for each area. The user can select an area or the full disk and display it as a full-scale image, mobile image, thumbnail image, or video. The website of the Japan Aerospace Exploration Agency (JAXA) (2015) presents Himawari-8 imagery of Asia and Oceania. The user can select the date and time, the overlay, and the type of imagery, such as sea surface temperature, sea surface temperature (night mode), aerosol optical thickness, and aerosol Angstrom exponent. The website of the Data Integration and Analysis System (DIAS) (2016) presents the latest Himawari-8 images. The user can select the area (full disk or Japan Area), the sensor, and the image (still, 6-hour animation, or 24-hour animation), and show or hide land/sea boundaries. This website works on a cloud system that provides a data infrastructure integrating Earth observation data, numerical model outputs, and socio-economic data (Kinutani et al. 2014).

In summary, many of the existing Himawari-8 websites provide data in near real time, and the time delays differ little among them: most provide data within 1 hour of observation. However, it is hard for these websites to provide full-resolution data visualization in real time, or even in near real time. This suggests that special techniques are needed, involving not only large computing resources but also middleware, software, and ecosystems. The contribution of this paper is to present the techniques used to develop a web-based real-time and full-resolution data visualization for Himawari-8 satellite sensed images.

Himawari-8 real-time web

We design a web-based data visualization, called the Himawari-8 real-time web (NICT/Himawari-8 real-time web 2016), to handle Himawari-8 satellite sensed images in real time and at full resolution. It is developed and operated on the National Institute of Information and Communications Technology (NICT) Science Cloud (Murata et al. 2013). The original data are supplied by JMA and concurrently processed on the cloud.

Science cloud

Recently, cloud systems have come to play important roles in many academic fields. There are two main kinds of cloud system: business clouds and academic clouds. Business cloud systems mainly target general uses, e.g., computational resources and storage services. Academic cloud systems, on the other hand, distinguish themselves by providing ecosystems customized to users’ needs and objectives.

The workflow in cloud systems for academic research with observational big data is usually composed of multiple processes: data sensing, data crawling, data processing, data analysis, data visualization, data preservation, data provision, and so on. Figure 1 shows the concept and design of the NICT Science Cloud.

Fig. 1 Basic concept and design of NICT Science Cloud

The NICT Science Cloud provides not only computing resources such as servers, storage, and visualization tools, but also ecosystems that are available to a variety of research projects (Murata et al. 2014a). By combining these ecosystems, the cloud provides an environment for constructing systems that process observational big data in real time. For instance, a remote sensing data visualization system, consisting of sensing, processing, and transfer of observational data, was successfully constructed on the NICT Science Cloud (Murata et al. 2016). The sensing is carried out concurrently at multiple sensor sites, and the transferred observation data are processed on the cloud. Each ecosystem not only plays an important role, but also reduces the overall cost of developing the system.

Table 4 lists the ecosystems used for the Himawari-8 real-time web on the NICT Science Cloud. Updates of Himawari data on the HimawariCloud (Figs. 2 and 3) are monitored, and the data are transferred to the NICT Science Cloud by a crawler named NICTY (Murata et al. 2009). The transferred data are stored on the Gfarm storage system (Tatebe et al. 2010). Gfarm is middleware that provides a reliable and highly available distributed file system at low cost. These data are then processed using Pwrake (Tanaka 2010; Tanaka and Tatebe 2012b, 2014). Pwrake is a parallel workflow extension of Ruby’s standard build tool Rake and is designed to work closely with Gfarm (Tanaka 2010). It has been used successfully for parallel processing of observation data from scientific satellites (Murata et al. 2014b; Yagi et al. 2015), a weather radar system (Murata et al. 2016), and astrophysics data (Tanaka and Tatebe 2012a). The processed image data files are accessible from the Internet through the World Science Data Bank (WSDBank), a web application designed to provide science data (Murata et al. 2014a). We customized this web application for the Himawari data, as shown in Fig. 4. A user chooses the data (full disk, Japan Area, or Target Area) and the time interval to search for, and then downloads the data files through a web browser.

Table 4 Ecosystems used for the Himawari-8 real-time web on the NICT Science Cloud
Fig. 2 Himawari data flow from JMA

Fig. 3 Data flow from HimawariCloud and data processing on NICT Science Cloud

Fig. 4 Himawari data download web (WSDBank): data file search between 0:00 am and 0:15 am on January 1, 2016

Data flow

Figure 2 shows the data flow from Himawari-8 to the domestic and international organizations that handle the meteorological data over networks. Himawari raw data generated onboard Himawari-8 are first transferred to the JMA/MSC through the Himawari OPeration Enterprise corporation (HOPE), and Himawari Standard Format (HSF) data are created from the raw data. The HSF is a numerical data format designed specifically for the Himawari-8/-9 satellites. The HSF data are then converted to Network Common Data Form (NetCDF) files and to color image files in Portable Network Graphics (PNG) format. All of the data files are disseminated to meteorological business entities via the Japan Meteorological Business Support Center (JMBSC) for commercial use. There are several paths for providing the full-resolution data to national meteorological and hydrological services in foreign countries and to Japanese domestic researchers: domestic and international networks, the Internet (via a content delivery network, or CDN), and dedicated lines.

For academic and educational uses, the JMA/MSC provides Himawari-8 data in quasi real time to four organizations: the NICT Science Cloud, DIAS at the University of Tokyo, the Earth Observation Research Center (EORC) at JAXA, and the Center for Environmental Remote Sensing (CEReS) at Chiba University, through the HimawariCloud over academic networks such as SINET (National Institute of Informatics (NII) 2016) and JGN (NICT/JGN 2016). The HimawariCloud plays a simple role: it collects data files from the JMA/MSC and provides them to the four organizations via the hypertext transfer protocol (HTTP). These four organizations are permitted by JMA to work on the data for their own research and to provide them to outside researchers. The NICT Science Cloud and CEReS collaborate on business continuity planning (BCP), backing up each other’s stored data to facilitate restoration.

Figure 3 shows the data flow from the HimawariCloud to the NICT Science Cloud, and the data processing and provision inside the cloud. We developed a data workflow system for the Himawari-8 real-time web on the NICT Science Cloud, composed of a crawler, cluster servers with storage, a NAS storage, and a file server that also works as a web server. The NICTY/data download agent (DLA) is used to crawl the numerical and graphic files on the HimawariCloud. An agent of NICTY monitors updates on the HimawariCloud and downloads only the updated files as soon as possible; the maximum interval for NICTY to detect a data file update on the cloud is 1 minute. Since the total data size of Himawari-8 is large (about 160 TB/year), these data files are stored on the Gfarm file system, which provides a low-cost and reliable storage system. Another reason to use Gfarm is to enable parallel processing of the data via Pwrake. The pyramid tile image files (discussed later) created via Pwrake are stored on the NAS storage. The numerical and graphic files from the HimawariCloud and the created pyramid tile image files are provided to users through the WSDBank and the Himawari-8 real-time web, respectively.
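The crawling step can be pictured as a simple polling loop. The sketch below is a minimal illustration, not the actual NICTY implementation; the listing URL and file-naming scheme are hypothetical placeholders.

```python
import time
import requests

BASE_URL = "http://himawari-cloud.example/data/"  # hypothetical endpoint
POLL_INTERVAL_S = 60  # NICTY's maximum update-detection interval is 1 minute

seen = set()

def list_remote_files():
    """Fetch the remote file listing (assumed here to be one name per line)."""
    resp = requests.get(BASE_URL + "index.txt", timeout=30)
    resp.raise_for_status()
    return resp.text.split()

def download(name):
    """Download one data file and store it locally."""
    resp = requests.get(BASE_URL + name, timeout=300)
    resp.raise_for_status()
    with open(name, "wb") as f:
        f.write(resp.content)

while True:
    for name in list_remote_files():
        if name not in seen:  # transfer only newly updated files
            download(name)
            seen.add(name)
    time.sleep(POLL_INTERVAL_S)
```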

Himawari-8 data and image

AHI data

As discussed above, the HimawariCloud provides data for three areas: full disk, Japan Area, and Target Area. Table 3 lists the time interval, data format, data size, and segments for each area. Three data formats are provided, as shown in Table 3. NetCDF and PNG are common formats for numerical and graphic data, respectively. Table 5 lists the number of bands and the spatial resolution for each data format. Both the HSF and NetCDF files provide all 16 bands, comprising 3 visible bands, 3 near-infrared bands, and 10 infrared bands. The PNG format supports full-color images and is used for visible images composed from the 3 visible bands. Note that the spatial resolution of the data depends on the band: as shown in Table 2, the highest resolution is 0.5 km (band 3) and the lowest is 2 km.

Table 5 Number of bands and spatial resolution of each data format

Table 6 indicates that the Himawari-8 real-time web provides visible images of the full disk and Japan Area, 16-band full disk images, and movies of the full disk, Japan Area, and Target Area. In this paper, we focus on the visible images of the full disk and Japan Area. These visible images are made from the original PNG files in Table 3. The spatial resolution of the provided PNG is 1 km for both the full disk and Japan Area, as shown in Table 5. Note that PNG compresses black areas effectively; hence, the file size varies with local time. The typical total file sizes of the two areas at noon are 370 MB and 27 MB, respectively, as shown in Table 6. One important objective of this paper is to develop a web system serving Himawari image data at the highest resolution. However, it is impractical to display such large image files directly on web pages; in general, the desired size of a graphic file on a web page is less than 1 MB. A special technique for handling such large image files is therefore needed.

Table 6 Data file size and number provided by Himawari-8 real-time web: HR and LR represent high resolution and low resolution, respectively

Pyramid tile technique

In order to show high-resolution image files on web pages over the Internet with good performance, the Himawari-8 real-time web adopts a pyramid tile technique for scalable visualization on the web. The pyramid tile technique was first proposed by Baker and Sullivan (1980). A pyramid (or pyramid representation) is a type of multi-scale signal representation developed by the computer vision, image processing, and signal processing communities, in which a signal or an image is subjected to repeated smoothing and subsampling (Adelson et al. 1984). The pyramid representation is a predecessor to scale-space representation and multiresolution analysis. Image overviews, tiling, and pyramids are techniques for viewing large images more effectively. Many websites are based on pyramid tiles, such as Google Maps (Google 2016), STARS touch (Murata et al. 2014c), and Global Mapper (Blue Marble Geographics 2016). Google Maps is a desktop web mapping service developed by Google. STARS touch is a web application for data that is scalable in time, offering dynamic views of both short-duration data (e.g., 1 minute) and long-duration data (e.g., 1 decade). Global Mapper is an affordable and easy-to-use Geographic Information System (GIS) application that offers access to a wide variety of spatial datasets and provides functionality for both experienced GIS professionals and beginners. In this study, the pyramid tile technique is applied to the high-resolution image files of Himawari-8, which are continuously updated in time.

The Himawari-8 real-time web generates multi-scale pyramid tile files of visible images from the original PNG files in Table 3. It provides 12 zoom levels for the full disk and 11 zoom levels for the Japan Area, as shown in Tables 7 and 8, respectively. The original spatial resolutions (full resolutions) of the full disk and Japan Area are 11,000 × 11,000 pixels and 3301 × 2701 pixels, respectively, as shown in Table 3. The dimensions of a single tile image are fixed at 550 × 550 pixels for the full disk and 600 × 480 pixels for the Japan Area. Note in Tables 7 and 8 that multiple zoom levels sometimes share the same image file for both the full disk and Japan Area, as will be discussed later.

Table 7 Number of pyramid tile image files depending on zoom level for full disk (550 × 550 pixels)
Table 8 Number of pyramid tile image files depending on zoom level for Japan Area (600 × 480 pixels)
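As a concrete illustration of the tiling step, the following sketch resizes a full-resolution full disk PNG to several pyramid widths and slices each into 550 × 550 pixel tiles using Python and Pillow. This is our own minimal example; the listed widths are illustrative, and the actual mapping of zoom levels to image resolutions follows Table 7.

```python
import os
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow the 11,000 x 11,000 px input
TILE = 550  # full disk tile dimensions (550 x 550 px)

# Illustrative pyramid widths; all are multiples of the tile size.
LEVEL_WIDTHS = {12: 11000, 11: 8800, 10: 5500, 9: 2750, 8: 1100, 7: 550}

full = Image.open("full_disk.png").convert("RGB")
os.makedirs("tiles", exist_ok=True)

for level, width in LEVEL_WIDTHS.items():
    img = full if width == full.width else full.resize((width, width), Image.LANCZOS)
    n = width // TILE  # tiles per row and per column
    for row in range(n):
        for col in range(n):
            box = (col * TILE, row * TILE, (col + 1) * TILE, (row + 1) * TILE)
            img.crop(box).save(f"tiles/L{level}_{row}_{col}.png")
```

For example, the 11,000-pixel width yields 20 × 20 = 400 tiles, matching level 12 in Table 7.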

Image conversion

The image source files on the Himawari-8 real-time web are provided in PNG format. The left panels of Fig. 5 show the original PNG images of the full disk and Japan Area provided by JMA. These images are too dark for general use on the Himawari-8 real-time web. Therefore, the original graphic files are converted using ImageMagick (ImageMagick Studio LLC 2016) to adjust their brightness. ImageMagick is a free, open-source software suite for creating, editing, composing, and converting raster and vector image files. The right panels of Fig. 5 show the results of the conversion for the full disk and Japan Area, respectively. The difference is clearly visible for both areas.

Fig. 5 PNG files provided by JMA and after our conversion of a full disk and b Japan Area at 12:00 JST on January 1, 2016

The conversion technique used with ImageMagick is as follows. The visible color image contains information on the Earth’s surface, but this is difficult to distinguish in a low-brightness image. We adjust the gamma value (up to 2.0) of both the full disk and Japan Area images to obtain better contrast between clouds and the ground surface, for users interested in surface information. Miller et al. (2016) pointed out that the visual appearance of the Himawari-8/AHI’s visible RGB deviates from true-color imagery: the Himawari-8/AHI’s green band is not aligned with the chlorophyll signal of green vegetation but is shifted toward the blue side. They advocated a hybrid green band that compensates for this deficiency of the green band (0.51 μm) using the Himawari-8/AHI band 4 (0.86 μm).
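A minimal sketch of the brightness adjustment is given below; it applies a per-pixel gamma lookup table in Pillow, approximating the effect of ImageMagick’s `-gamma 2.0` option rather than reproducing the production pipeline.

```python
from PIL import Image

GAMMA = 2.0  # gamma adjustment of up to 2.0, as described above

def brighten(in_path: str, out_path: str, gamma: float = GAMMA) -> None:
    """Apply gamma correction: out = 255 * (in / 255) ** (1 / gamma)."""
    img = Image.open(in_path).convert("RGB")
    lut = [round(255 * (i / 255) ** (1 / gamma)) for i in range(256)]
    img.point(lut * 3).save(out_path)  # one 256-entry LUT per RGB band

brighten("full_disk_original.png", "full_disk_bright.png")
```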

Image division into pyramid tile

At each time step of Himawari-8 image sensing, the Himawari-8 real-time web creates a set of pyramid tile image files with 12 levels (full disk) and 11 levels (Japan Area). Figure 6 shows a flow diagram for the creation of the pyramid tile image files and the movie files of the full disk, Japan Area, and Target Area. We first create the set of tile image files for the highest level (level 12 for the full disk and level 11 for the Japan Area) from the converted original PNG files at full resolution. As shown in Fig. 6, the converted original image files are divided into 400 files at level 12 (550 × 550 pixels each, for the full disk) and 25 files at levels 8-11 (600 × 480 pixels each, for the Japan Area) to create these tile sets. For the next lower levels (level 11 for the full disk and level 7 for the Japan Area), the PNG image files at the original resolutions are resized to smaller dimensions: 8800 × 8800 pixels for the full disk and 2400 × 1920 pixels for the Japan Area. These resized PNG images are again divided into tiles with the same dimensions as above. Tables 7 and 8 indicate that these processes are carried out 5 times for the full disk and 3 times for the Japan Area at each time step. The reduction of the original PNG files and the numbers of divided tile image files are shown in these tables.

Fig. 6 Flow diagram of the creation of pyramid tile image files and movie files. WSDB indicates that the files are downloadable from the WSDBank

Pwrake (Table 4) makes it easy to parallelize these division processes. We dynamically allocate one CPU core to each image division process (corresponding to one element in Fig. 6) using its task scheduling function. Table 9 shows the CPU cores and the original (input) and created (output) files in these division processes. To save cost, we sometimes use the same tile image files for multiple zoom levels, as described in Tables 7 and 8. For example, zoom levels 1 through 4 of the full disk share the same tile image file; this means that the displayed size of a single tile image varies with the zoom level. For the full disk, the displayed sizes at zoom levels 1, 2, 3, and 4 are 275 × 275 pixels, 385 × 385 pixels, 550 × 550 pixels, and 770 × 770 pixels, respectively. At zoom level 1, a 550 × 550 pixel image is reduced to half its linear size (a quarter of its area) for display.

Table 9 Required specification for real-time concurrent operation via Pwrake
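The production system schedules these division tasks with Pwrake; as a rough stand-in for readers unfamiliar with it, the sketch below distributes one division task per CPU core using Python’s multiprocessing pool. The task body is a placeholder for a tiling routine like the one sketched earlier.

```python
from multiprocessing import Pool

def divide_into_tiles(job):
    """One image-division task (one element in Fig. 6): resize one source
    PNG to the given pyramid level and slice it into tiles."""
    src_path, level = job
    print(f"tiling {src_path} at level {level}")  # placeholder for real work

# One job per pyramid resolution of the full disk (5 divisions per time step).
jobs = [("full_disk.png", level) for level in (12, 11, 10, 9, 8)]

if __name__ == "__main__":
    # Workers default to one per CPU core; each task is dispatched to an
    # idle core, analogous to Pwrake's dynamic task scheduling.
    with Pool() as pool:
        pool.map(divide_into_tiles, jobs)
```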

The Himawari-8 real-time web provides movie files as well. These movie files are created from the converted original PNG files at the full resolutions of the full disk, Japan Area, and Target Area. The movies are provided in high resolution for general personal computers (PCs) and in low resolution for smartphones (SPs). For example, for the Target Area, the PNG files are resized to 680 × 720 or 830 × 720 pixels for the high-resolution movies and to 452 × 480 or 554 × 480 pixels for the low-resolution ones, as described in Fig. 6. The creation of the movies starts at 3:00 Japan Standard Time (JST) every day, since before that time the sky is almost completely dark in every season. Whenever new high- and low-resolution files are created on the system, the system replaces the movie files, so the latest movies are always available on the web. Thus, on any given day there are only two near real-time movie files with different spatial resolutions on the web for each of the full disk and Japan Area.
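The paper does not name the encoding tool; one plausible way to assemble such movies, shown purely as an illustration, is to encode the time-ordered PNG frames with ffmpeg, scaling to the sizes given above.

```python
import subprocess

def make_movie(frame_pattern: str, width: int, height: int, out: str) -> None:
    """Encode time-ordered PNG frames into an MP4 scaled to width x height."""
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "12",           # playback speed; illustrative value
        "-i", frame_pattern,          # e.g., "target_%04d.png"
        "-vf", f"scale={width}:{height}",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        out,
    ], check=True)

make_movie("target_%04d.png", 680, 720, "target_hr.mp4")  # high resolution
make_movie("target_%04d.png", 452, 480, "target_lr.mp4")  # low resolution
```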

Number of displayed images

When the pyramid tile images are displayed by the Himawari-8 real-time web applications, the user interfaces (UIs) differ slightly between PC and SP, as shown in Fig. 7. Hereafter, we mainly consider a PC display, which generally has a higher resolution than an SP. When the user views the full disk at a certain zoom level, one or more tile image files are displayed in precise alignment, without visible seams. Figure 8 shows the expansion of full disk images across zoom levels of the Himawari-8 real-time web on a PC display. At zoom levels 1, 2, 4, 6, 8, 10, and 12, the numbers of previewed pyramid tile image files on the screen are 1, 1, 1, 6, 12, 12, and 15, respectively. Note that the dimensions of each tile image are fixed at 550 × 550 pixels independent of the zoom level. For the Japan Area, a single tile image is 600 × 480 pixels, as shown in Table 8. Since the number of displayed tile image files depends on the zoom level, the total size of the files transferred from the Himawari-8 real-time web server to the client PC grows with the zoom level.

Fig. 7 Screen-captured images of the full disk view at zoom level 1 on a PC and b SP, where the step forward/backward buttons are indicated by red circles. Note that one of the Himawari-8 real-time web functions is to draw coastlines, which do not appear in Fig. 5

Fig. 8 Expansion of zoom level of the Himawari-8 real-time web on a PC: a 1, b 2, c 4, d 6, e 8, f 10, and g 12. The numbers of previewed pyramid tile image files are a 1, b 1, c 1, d 6, e 12, f 12, and g 15, respectively

Nevertheless, rendering the images in a user’s web browser is not always fast enough for comfortable preview of the Himawari images. The number of pyramid tile image files displayed in a web browser also depends on the browser window size on the PC display. If the user fully expands the browser window on a large display, such as an HD or 4K display (a horizontal resolution of approximately 4000 pixels), the number of image files to be displayed in the browser increases. Figure 9 shows the maximum number of previewed images of the full disk and Japan Area depending on the zoom level; SP and HD in the figure denote smartphone and high-definition display, respectively. The SP resolution in this study is 375 × 667 pixels, currently the most popular size. The figure indicates that the maximum number of image files displayed in the browser on an HD display is 15. Only one image file is previewed in the browser between zoom levels 1 and 4 on both SP and HD. Note that, as indicated in Tables 7 and 8, the displayed image sizes are not fixed, since the tile images are sometimes expanded depending on the zoom level. For example, at zoom levels 7 and 8, the tile dimensions are fixed at 550 × 550 pixels, but the sizes and numbers of displayed images differ depending on the browser window size.

Fig. 9 Maximum numbers of previewed full disk and Japan Area image files: SP and HD denote smartphone and high-definition display, respectively. The SP resolution is 375 × 667 pixels, currently the most popular size
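The tile count for a given window follows from the window and tile dimensions. The small sketch below is our own illustration of this geometry, not code from the system; the extra row and column account for partially visible tiles at the window edges.

```python
from math import ceil

def tiles_needed(win_w: int, win_h: int, tile_w: int, tile_h: int,
                 grid: int) -> int:
    """Tiles required to cover a window, bounded by the level's tile grid."""
    cols = min(ceil(win_w / tile_w) + 1, grid)  # +1 for partial tiles at edges
    rows = min(ceil(win_h / tile_h) + 1, grid)
    return cols * rows

# Full disk tiles are 550 x 550 px; an HD (1920 x 1080) window at a level
# with a 20 x 20 tile grid needs 5 x 3 = 15 tiles, consistent with Fig. 9.
print(tiles_needed(1920, 1080, 550, 550, 20))  # -> 15
```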

Discussion of concurrent processing

As mentioned above, we developed the real-time, full-resolution Himawari-8 web by combining the ecosystems provided by the NICT Science Cloud, developed and operated by Murata et al. (2013). For the real-time web, the creation of the pyramid tile image files must be well synchronized with the transfer of PNG files from the HimawariCloud to the NICT Science Cloud, as shown in Fig. 3. Figure 10 shows the concurrent processing of Himawari-8 satellite image data and the time intervals of the data flow for both the full disk and Japan Area. The longest stage of the data processing on the NICT Science Cloud is the creation of the tile image files. With the help of Pwrake (a task scheduler), we reduced this duration to 15.7 minutes for the full disk and 3 minutes for the Japan Area. The method adopted for the creation of pyramid tile images is the same as that used for 3D weather radar data by Murata et al. (2016). As a result, we visualize the full-resolution images within 9.3 minutes of observation for the Japan Area and within 25 minutes for the full disk. A notable point of this study is that no technique specific to meteorological uses is built into the system. This allows us to avoid expensive system development costs and to realize concurrent data processing within a satisfactory time delay. We conclude that the proposed system is developed at low cost based on common ecosystems in a cloud.

Fig. 10 Concurrent processes of Himawari-8 satellite image data and time interval of data flow of a full disk and b Japan Area

Laboratory experiments

International access statistics

Figure 11 shows the numbers of domestic (Japanese) and international accesses to the Himawari-8 real-time web from January to November 2016. Notably, 25% of the total page views (PV) come from international accesses, which implies that the demand for the Himawari-8 real-time web from other countries is not negligible. Considering that one objective of the Himawari satellite series is JMA’s contribution to countries in Asia and Oceania, in this section we investigate the performance of international access to the web server.

Fig. 11 Domestic and international access ratio to the Himawari-8 real-time web from January to November 2016

It is well known that both Taiwan and the Philippines lie on the main path of typhoons and therefore suffer typhoon damage almost every year, including serious social damage from large-scale typhoons. As seen in Fig. 11, the number of accesses from Taiwan ranks third among 18 countries, while the Philippines ranks outside the top 10. Note that the populations of Taiwan and the Philippines are about 23 million and 100 million, respectively. From the viewpoint of typhoon disaster mitigation, we should attract more accesses from the Philippines and, of course, from other countries as well. More detailed statistical analyses of the access logs will be presented in a separate paper.

Network communication for full-resolution preview

Web access examination

For better use of the Himawari-8 real-time web, we need to overcome the network issues noted above. It is not easy, and thus not reasonable, to examine access performance to the Himawari-8 real-time web on actual domestic and international networks, since network conditions differ greatly between countries. In this study, we therefore examine web access performance in laboratory experiments. It is well known that the access speed (network throughput) to the web via HTTP is significantly affected by packet loss and latency on the network. In general, the access speed of the transmission control protocol (TCP), the transport-layer protocol underlying HTTP, decreases as either the packet loss or the latency of the network increases (Murata et al. 2016). These impairments are hard to avoid on public networks such as the Internet, especially on international networks, since the network conditions are usually beyond users’ control. Packet loss is usually measured by the packet loss ratio (PLR), and latency is often estimated by the round trip time (RTT). Both the PLR and the RTT from a client PC to a web server can be estimated with network measurement tools such as iperf (Barayuga and Yu 2015) and hperf (Murata et al. 2016). For example, in our previous studies using hperf on the international network between Japan and the USA, we successfully examined network throughput in laboratory experiments simulating an international long fat network (LFN) environment.
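The sensitivity of TCP to these two parameters can be made concrete with the widely used Mathis model, which bounds steady-state TCP throughput by MSS/(RTT·√PLR). The sketch below is our own illustration of that model, not the measurement method used in our experiments.

```python
from math import sqrt

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, plr: float) -> float:
    """Rough upper bound on TCP throughput: MSS / (RTT * sqrt(PLR))."""
    rtt_s = rtt_ms / 1000.0
    return mss_bytes * 8 / (rtt_s * sqrt(plr)) / 1e6

# Example: a 1460-byte MSS with 150 ms RTT and 0.1% packet loss
print(f"{mathis_throughput_mbps(1460, 150.0, 0.001):.1f} Mbps")  # ~2.5 Mbps
```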

Here, we simulate international network access to the Himawari-8 real-time web. Figure 12 shows the model of the laboratory experiments. The web server and the client PC are Linux-based and Windows-based computers, respectively, each with dual Intel® Xeon E5-2630v4 processors (10 cores/2.2 GHz/25 MB/QPI 8 GT/s, 85 W), 128 GB of memory (DDR4-2400 ECC Registered), and an Intel® I350 Gigabit Network Connection. The web server and the client PC are connected at 1 Gbps. Between them on the 1 Gbps network, we place a network emulator, an Anue H Series in GE mode (Ixia 2016), which generates various levels of packet loss (PLR from 0% to 100%) and latency (RTT from 0 ms to 500 ms). The client PC accesses the web server using HTTP/1.1. In this experiment, we emulate different network conditions by changing the PLR and RTT between the client PC and the web server, and measure the access speed from the client PC to the web server while operating the Himawari-8 real-time web.

Fig. 12 Model of laboratory experiments

Dependence of image file size on local time

As discussed above, the tile image file format of the Himawari-8 real-time web is PNG. The size of an image file depends on the local time, given here in JST. Figure 13 shows the dependence of the full disk and Japan Area file sizes at zoom level 1 (Fig. 5) on local time on 1 January 2016. The efficiency of PNG compression is higher at night, when the black area is larger. On that day, dawn and dusk were at 7:07 JST and 17:00 JST, respectively. Before 6:00 JST and after 18:30 JST, it is almost dark and the Japan Area image file size is less than 1.1 KB. The maximum and minimum sizes of the Japan Area image file are 448 KB at 11:42 JST and 0.37 KB at 0:30 JST, respectively. The full disk image files are larger than those of the Japan Area because they contain fewer dark areas at most times of day. The maximum and minimum full disk file sizes are 477 KB at 11:30 JST and 21 KB at 23:30 JST, respectively. Note that the dependence on local time changes with the seasons but shows a similar daily tendency.

Fig. 13 File size dependence on local time (JST) on 1 January 2016: full disk and Japan Area at zoom level 1

Performance of time-dependent preview

The HTTP protocol (HTTP/1.x), the most popular protocol for web access and the one used by the Himawari-8 real-time web, provides a single connection between a web server and a client PC. For high access speed, HTTP supports techniques that accelerate its responses to client requests. One important technique is persistent connections (also called keep-alive, or connection reuse), supported by default in HTTP/1.1, a revision of the original HTTP/1.0. A persistent connection minimizes the overhead of establishing a separate connection for each transaction when fetching multiple objects. Almost all modern web browsers, such as Google Chrome (hereafter Chrome) and Microsoft Internet Explorer (hereafter IE), use HTTP/1.1 persistent connections. Another technique is parallel connections: HTTP allows clients to open multiple connections and perform multiple HTTP transactions in parallel. According to Fielding et al. (1999), the maximum number of simultaneous persistent connections should be 2; in practice, however, the maximum is 6 for Chrome and 8 for IE (Smith 2013). In this study, the number of simultaneous persistent connections used by both Chrome and IE is 6.

Figure 14 shows schematic pictures of the transfer and browsing of tile image files in the case where the maximum number of simultaneous persistent connections is 4. At each time step between browsed images, the web application transfers multiple pyramid tile image files over 4 parallel streams.

Fig. 14 File transfer from the Himawari-8 real-time web server to a user client PC in parallel (maximum number of simultaneous persistent connections: 4) and browsing of pyramid tile image files when the numbers of previewed image files are a 4 and b 6, respectively. Pyramid tile image file [i, j] represents the j-th image file at the i-th time step
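This browser behavior can be mimicked for measurement purposes with a keep-alive HTTP session and a pool of six workers, matching the connection limit above. The sketch below is our own illustration with a hypothetical server URL and tile names, not part of the production system.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE = "http://himawari8.example/tiles/"  # hypothetical tile server
MAX_CONNECTIONS = 6  # simultaneous persistent connections (Chrome/IE)

session = requests.Session()  # reuses TCP connections (HTTP/1.1 keep-alive)

def fetch(url: str) -> int:
    """Download one tile over a persistent connection; return its size."""
    return len(session.get(url, timeout=30).content)

def preview_step(tile_names: list) -> float:
    """Fetch one time step's tiles in parallel; return the page rate (pps)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as pool:
        total_bytes = sum(pool.map(fetch, [BASE + n for n in tile_names]))
    interval = time.perf_counter() - start  # browsing interval for this step
    print(f"{total_bytes} bytes transferred in {interval:.3f} s")
    return 1.0 / interval

print(preview_step([f"L7_{i}.png" for i in range(12)]), "pps")
```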

A persistent connection uses the same TCP connection to send and receive multiple HTTP requests and responses, instead of opening a new connection for every single request. One important performance measure of the Himawari-8 real-time web is the speed at which a user can sequentially preview page views along a time series. Figure 7 shows the full image of the Earth at zoom level 1 on the Himawari-8 real-time web on a PC (Windows platform) and an SP. In this paper, we discuss the preview speed of full disk views at zoom level 1 on the PC. To preview the time series continuously, the user clicks the step forward or step backward button, indicated by the red circles in the figure. As discussed for Fig. 13, the image file size depends on the local time; the largest image files occur around noon JST, on which we focus for the time-series preview in this study. Figures 14a and b show the cases in which the numbers of previewed images on the browser at one time step are 4 and 6, respectively. In our laboratory experiments, we investigate the browsing intervals and the previewed tile file sizes at each time step.

Figure 15 shows the preview performance of the Himawari-8 real-time web with no packet loss and no added latency, measured on the laboratory experiment system. Both panels of Fig. 15 show the speed of previewing a time series of Himawari-8 full disk images as a function of the number of image files in the browser window. Figure 15a depicts the number of pages previewed per second, usually called the page rate and expressed in pages per second (pps); it is equal to the reciprocal of the browsing interval in Fig. 14. As the number of image files increases, the page rate decreases on both Chrome and IE in Fig. 15a, since more files must be transferred. Since the number of simultaneous persistent connections is 6 in both browsers, the preview speed on Chrome is close to that on IE. When there are 12 image files in the browser window, corresponding to zoom level 7 or higher in Fig. 9a, the page rate in Fig. 15a lies between 3 and 5 pps with no latency and no packet loss. According to our visual checks, a page rate of 3 pps is the minimum for practical time-sequential image viewing.

Fig. 15 Dependence of image preview speeds on the number of image files to preview full disk images on the Himawari-8 real-time web in laboratory experiments: a page rate and b throughput (averaged page access speed)

The averaged page access speed in Fig. 15b, calculated as the total image file size transferred in one time step (the sum of the 4 or 6 file sizes in one time step in Fig. 14) divided by the page access duration (the browsing interval in Fig. 14), increases with the number of image files. This is the effect of the parallelized file transfer in Fig. 14. However, it is not enough to maintain or improve the page rate in Fig. 15a when larger numbers of images are previewed.

Experiment of domestic and international networks

Packet loss and latency are inevitable in real networks and are often non-negligible for web access over international networks. Figure 16 shows a map of nominal PLR and RTT to Japan from other countries and from within Japan. The PLR and RTT have profound impacts on the throughput of the HTTP protocol, but it is not easy to estimate their exact values, since they generally depend on the location, the network, and the time of measurement. Figure 16 is based on our experience from 10 years of collaboration with partners in other countries. The latency (RTT) is almost proportional to the geographical distance from Japan. Packet loss is especially serious in Europe and China.

Fig. 16 Nominal packet loss ratio (PLR) and round trip time (RTT) to Japan from other countries and domestic

We investigate the effect of these PLR and RTT values on the access speed to the Himawari-8 real-time web in the laboratory experiment network of Fig. 12. Figure 17 depicts the image preview speeds corresponding to Fig. 15a and b as functions of the PLR and RTT, using Chrome. The graphs in Fig. 17 show two cases: (1) a single image file and (2) 8 image files in the browser window are transferred during one preview interval; these numbers correspond to the horizontal axis of Fig. 15. The RTT and PLR axes of the figures are the same as the vertical and horizontal axes of Fig. 16, respectively. The height of the bars in Fig. 17 represents the network throughput. The case with no PLR and no RTT, at the bottom-left corner, corresponds to the case of Fig. 15. In Fig. 17a, the page rate with little or no packet loss and latency is more than 1 pps. However, the image preview speed decreases drastically as either the PLR or the RTT increases. Comparing with Fig. 16, one finds that the paging speed for previewing images in the browser is more than 1 pps on the domestic network, but less than 1 pps on any international network, which is below a satisfactory level. The averaged page access speed in Fig. 17b falls below 100 Mbps in the presence of even a little packet loss or latency.

Fig. 17 Image preview speeds depending on the packet loss ratio (PLR) and latency (RTT) (a page rate and b averaged page access speed, corresponding to Fig. 15) of full disk images on the Himawari-8 real-time web. The numbers of previewed image files (the horizontal axis of Fig. 15) in (1) and (2) are 1 and 8, respectively. Note that no bar is depicted where the page rate is less than 0.14 pps in a or the averaged page access speed is less than 2.0 Mbps in b

Conclusions

For disaster mitigation, real-time circulation of Earth observation data plays a crucial role, and the Internet is one of the most important network media for accessing real-time data. Owing to tremendous developments in sensor and sensing technologies, the volume of observation data has increased drastically over the past decade. Disaster prevention organizations need to provide these large-scale data to users in real time for more precise decision making. However, they often give up providing part of the observation data in real time because of data size issues, which leaves citizens with less chance to acquire the rich information necessary for disaster mitigation. Nor can we overlook the awareness-raising effect of viewing higher-resolution data, in both quality and quantity.

Himawari-8/-9 are the third generation of Japanese geostationary meteorological satellites. They carry state-of-the-art optical sensors with significantly higher radiometric, spectral, and spatial resolution than previously available in geostationary orbit (Bessho et al. 2016), enabling better nowcasting, improved numerical weather prediction accuracy, and enhanced environmental monitoring. Color images, newly introduced with these geostationary meteorological satellites, are derived by compositing three visible bands (blue: 0.47 μm; green: 0.51 μm; red: 0.64 μm). The Himawari-8/AHI provides color images of the full disk every 10 minutes, and of regional scans such as the Japan Area and Target Area every 2.5 minutes.

For convenient and practical use of the Himawari-8 real-time web, quick preview of the Himawari-8 sensed images is one of the crucial issues. To the best of our knowledge, no previous web-based data visualization for Himawari-8 satellite sensed images has satisfied the conditions of both real time and full resolution. In this paper, we developed a web-based data visualization for Himawari-8 satellite sensed images that satisfies both conditions. The motivating concept behind the website is “for anyone, anywhere, and anytime (whoever, wherever, and whenever)”. For these purposes, new techniques for disseminating large-scale satellite observation data are needed. In addition, we cannot neglect the cost (in both money and effort) of developing such a real-time, full-resolution data web. Developing the system on a cloud is an effective and practical solution to the cost issue, and we succeeded in developing the real-time, full-resolution visualization web on the NICT Science Cloud (Murata et al. 2013). It should be noted that the data crawling and file transfer technologies, the high-speed data processing technologies, and the dynamic tile image technologies in this study are applicable to other real-time observation data, such as seismic, oceanic, volcanic, and atmospheric data.

To get a quick view of an area and time of interest, we need to handle data-rich information, not only on the Himawari-8 real-time web but on any type of time-dependent observation data web. We studied the preview performance of the Himawari-8 real-time web in a laboratory experiment environment. On a pyramid tile image web, a page is composed of multiple image files, and the number of pages previewed per second on the web browser depends on the speed at which multiple files are transferred from the web server to the client PC, which corresponds to the network throughput. The experimental results indicate that the throughput is improved by HTTP/1.1 simultaneous persistent connections. Another significant influence on image preview performance on the Himawari-8 web is network conditions. We showed that the image preview speed on the web depends strongly on the PLR and RTT of the network. Our experimental results indicate that the access speed to the web server in Japan from client PCs in other countries, e.g., in Asia and Oceania, the United States, and Europe, is slower than practical: the page rate is less than 1 pps. These trends suggest that simultaneous persistent connections alone are not sufficient on international networks. To make the web more useful, especially internationally, other network techniques are needed to overcome this issue. Potential solutions include web acceleration, e.g., NGINX Plus (NGINX Inc. 2016), and high-speed data transfer protocols, e.g., HpFP (Murata et al. 2016).