Soft Computing

, Volume 20, Issue 5, pp 1671–1681

Data stream visualization framework for smart cities

  • F. J. Villanueva
  • C. Aguirre
  • A. Rubio
  • D. Villa
  • M. J. Santofimia
  • J. C. López

DOI: 10.1007/s00500-015-1829-8

Cite this article as:
Villanueva, F.J., Aguirre, C., Rubio, A. et al. Soft Comput (2016) 20: 1671. doi:10.1007/s00500-015-1829-8

Abstract

Monitoring smart cities is a key challenge due to the variety of data streams generated by different processes (traffic, human dynamics, pollution, energy supply, water supply, etc.). All these streams show us what is happening in the city, as well as where and when. The purpose of this paper is to apply different types of glyphs for showing the real-time evolution of data streams gathered in the city. The use of glyphs is intended to make the most out of the human capacity for detecting visual patterns.

Keywords

Smart cities · Data visualization · Human behaviour understanding

1 Introduction

Visualization is the study of the transformation, from data to visual representations, intended to develop effective and efficient cognitive processes to gain insight into that data (Rhyne and Chen 2013).

Precisely, one of the main issues in smart city development is to transform the great amount of data streams into information and, finally, into strategic and tactical decisions. In the near future, data streams coming from different sources (energy metering, pollution monitoring, mobility, social media, etc.) will provide real-time information about what is happening and where it is taking place in any smart city of the world.

Analysing all these data flows represents a big challenge, partially addressed by the Big Data paradigm. However, despite producing useful information from the raw data provided by sensors, citizens' smart phones, etc., Big Data fails when unusual circumstances take place. In other words, Big Data can only find what it was originally designed for.

Currently, only the human mind can successfully infer new information from raw data when something unusual is occurring, and only when this raw data is shown in an appropriate manner. As stated in Tory and Möller (2004), the way people perceive and interact with a visualization tool can strongly influence their understanding of the data as well as the system's usefulness.

This work is inspired by the instinctive computing concept developed by Cai et al. (2007), defined as a computational simulation of biological and cognitive instincts. In this work, as will be explained later, we adapt real-time streams from the smart city to help identify anomalous patterns in those streams.

Our main motivation is to provide an effective visualization tool for assessing, in a day-to-day environment, tactical decisions in the smart city application field. To achieve this goal, we exploit the excellent capacity of the human brain to identify visual patterns even in complex scenarios. In summary, this paper makes two main contributions: first, it proposes a visualization method for real-time data streams using glyphs; second, it proposes a distributed object-oriented architecture complying with all the needs of the considered visualization paradigm, including scalability, modularity, etc.

The remainder of this article is organized as follows. Section 2 describes the state of the art in data stream visualization, establishes a set of requirements for an appropriate visualization method, and shows how previous approaches fail to meet these requirements. Section 3 defines a glyph and proposes several examples. Section 4 presents the proposed architecture along with its interface specification and data management, and Sect. 5 extends the framework to the interpretation of human actions. Section 6 is devoted to glyph evaluation through an experiment with several users. Finally, Sect. 7 describes some of the most relevant aspects of the implemented prototype along with the main conclusions drawn from this work.

2 State of the art

If we analyse current methods for monitoring a city, we find that most of them are carried out from a forensic perspective. For example, security cameras are useful, apart from their dissuasive effect, for analysing the recordings of an event after it has concluded (Fig. 1). Pollution-sensor monitoring networks provide a set of recorded parameters to be analysed later with statistical graphics.
Fig. 1

CCTV surveillance cameras flood our cities (pictures from http://mylondonpics.com and  Blog 2014)

Statistical graphics very often summarize historical data well, but they are more oriented to strategic decisions, most of the time showing only two variables. For example, if we want to see the temperature evolution, we can show a graphic of temperature over time. It is easy to elaborate real-time heat maps of the smart city and even to program alerts and warnings according to the temperature evolution.

For geo-localized streaming data, “heat” maps represent a very common visualization technique; an example of this type of map is shown in Park et al. (2010) for urban noise. Combining 3D and heat maps also provides a good visualization tool when visualizing one variable; an example of this technique is shown, for urban air pollution, in Park et al. (2011). Following the same principle, Hao et al. (2012) show a radial pixel visualization associated with just one variable, using a color gradient to express different warning levels.

When we have more than two variables to consider, and especially if the variables are correlated, 2D graphics are not so useful, and 3D graphics also have their limitations. If we want to identify a pattern in some real-time stream, common visualization techniques are very limited. For example, the treemap is a good visualization method for seeing the relation between several streams involving at most two variables. Streamgraphs show the evolution of one variable across several streams, but it is difficult to introduce new variables.

In the smart city, the variety and quantity of streams encourage us to research new visualization techniques. For example, if we want to know the status of traffic flow in a street, a sensor can be deployed to detect velocity and separation between cars (e.g. number of vehicles per minute). The greater the velocity and separation between vehicles, the less congested the street. However, according to the type of street, the time of the day, the day of the week, etc., a given traffic flow can be normal or abnormal.

The cognitive capacity of human beings to detect visual patterns and shapes is far too complex to be emulated by machines. In the present work we want to exploit this capacity for detecting abnormal situations in the smart city.

3 Data stream visualization

Figure 2 shows the composition of a glyph designed to represent pollution monitoring. Each variable represented in this graphic is associated with a part of the glyph, or lobe, with its own scale. The glyph adopts a different form according to the values measured by pollution sensors. The main purpose of this work is to detect abnormal situations even when the surveillance worker does not have specific training. Indeed, as we can appreciate in Fig. 3, our mind easily detects a dissonant form within a mosaic of glyphs. This is important because detecting an abnormal situation is the first step towards facing it.
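As an illustration of how each lobe maps one variable, on its own scale, to part of an asterisk-shaped glyph, the following sketch computes the lobe endpoints. The geometry and the example variables are our assumptions for illustration, not the paper's implementation:

```python
import math

def glyph_points(values, scales):
    """Compute endpoints of an asterisk glyph: one lobe per variable.

    Each lobe points outward from the origin at an evenly spaced angle;
    its length is the variable's value normalized by that lobe's own
    (min, max) scale. Illustrative sketch only.
    """
    n = len(values)
    points = []
    for i, (v, (lo, hi)) in enumerate(zip(values, scales)):
        # Normalize the value into [0, 1] on this lobe's own scale.
        length = max(0.0, min(1.0, (v - lo) / (hi - lo)))
        angle = 2 * math.pi * i / n
        points.append((length * math.cos(angle), length * math.sin(angle)))
    return points

# Three hypothetical pollution variables, each with its own scale.
pts = glyph_points([4.0, 30.0, 60.0], [(0, 10), (0, 40), (0, 120)])
```

Because every lobe carries its own scale, a dissonant reading in any single variable deforms the overall shape, which is what makes the anomaly visually salient in a mosaic of glyphs.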
Fig. 2

Example of a glyph composition for pollution monitoring

Fig. 3

Anomaly in pollution monitoring

Table 1

Data stream characterization

Source      | Type               | Variables
Citizen     | e-health           | Heart rate, blood pressure, temperature
Companies   | Financial          | Billing, expenses
Citizen     | Dynamics           | Velocity, distance, transport
Companies   | Transport          | Frequency, passengers
Government  | Energy             | Power, intensity, phase
Companies   | Energy             | Power, intensity, phase
Citizen     | Financial          | Number of operations, amount, type of expenses
Companies   | Water              | Volume, intensity
Citizen     | Food consumption   | Type, branch, amount
Government  | Pollution          | Carbon, oxygen, nitrogen
Buildings   | Energy consumption | Power, intensity, phase
Streets     | Traffic            | Intensity, velocity, ...
Citizen     | Social media       | Intensity of the activity, location
Companies   | Social media       | Activity, location, ...

The shape of the glyph has been chosen according to the type of data stream and the variables involved in it. In this paper, we use an asterisk as an example of a glyph due to its simplicity. However, in-depth research is necessary to study which glyph is most appropriate for each data stream.

According to the features of the data stream, we can “attach” specific glyphs, for example, at the position where the data stream is generated if it is geo-located. There are more such features to be analyzed:
  • Geo-location: most information needs to be geo-located in order to be useful. For example, traffic flow sensors provide information about velocity and distance between cars at specific points in the smart city.

  • Granularity: the area represented by the data stream shown through a specific glyph. Some data streams are representative of small areas, whereas others can represent the whole smart city.

  • Glyph arithmetic: glyph operations need further research in order to explore whether a set of operations could be representative of some city phenomenon.

  • Privacy: probably one of the key issues in smart cities is the privacy of the data involved in the different processes. Privacy is out of the scope of this paper; however, we would like to point out that distributed data streams are sensitive information, so the platform should control who accesses the information, when, and how.

In Table 1 we analyze some of the candidate data streams for our visualization framework. We identify three main sources of information in a smart city, namely citizens, companies and government; however, as the reader can see, these sources can be refined into streets, buildings, etc.

4 Framework architecture

The architecture devoted to supporting this kind of visualization is shown in Fig. 4. Our research project considers four main sources of data streams:
  • From simulations: we can generate data from simulations in order to test the architecture, to explore the evolution/propagation of different parameters, to test different types of glyphs and see which is most suitable for analyzing a given data stream, etc.

  • From databases: similarly to data streams from simulations, we are using this source of data streams to study how glyphs take different forms according to the different recorded data streams. Among these records, we are most interested in those with abnormal behaviours, to see what form the glyph adopts.

  • From sensors: this is the final source of information and the main purpose of our architecture. Real sensors deployed in the smart city will provide data streams related to different physical parameters (pollution, noise, etc.). We also collect information from logical sensors devoted to gathering information from different sources. For example, a credit-card reader could be an excellent logical sensor, with an appropriate anonymization layer, for analyzing and studying economic activity. Surveillance video cameras provide real-time information about the number of citizens, the dynamics of the city, traffic, etc.; the challenge is to develop algorithms to extract such information.

  • From social media: lately, social media has become an interesting source of information. We believe that we can extract useful data streams providing valuable information about what is happening in the city at each moment and in real time. For example, hashtags associated with a city and attached to Twitter messages can provide us with valuable information about events, incidents, etc., in real time.

Fig. 4

Architecture

We are modeling the architecture as a service-oriented distributed platform inside Civitas (Villanueva et al. 2013). Civitas is specifically devoted to supporting the service development process for the smart city paradigm, providing, among other mechanisms, methods for integrating devices ranging from small-footprint devices to flexible hardware devices (e.g. FPGAs), and a set of standards and tools that can become the backbone of future smart city ecosystems.

Following Civitas design principles, each flow in our platform is an Internet Communication Engine (Ice) service (Henning 2004). Ice is an object-oriented middleware designed for massively distributed systems, and some issues related to highly distributed systems are mitigated by building on it. For example, IceGrid is a location and activation service that eases software management; similarly, IceBox simplifies software distribution, deployment and configuration, and security is addressed using SSL connections.

We agree that the most successful visualization libraries to date have made it easy for programmers to develop and maintain algorithms that work on many kinds of data (Childs et al. 2013). Our visualization layer can be provided, as we will see in the next section, as an Ice service or as a programming library.

Every entity in Civitas exposes its functionality as a set of methods, regardless of its nature. According to this design guideline, our visualization layer is composed of a set of distributed services offered to data stream sources. Every distributed service presents a specialized interface for its data stream.

In the following lines of code we can see a simplification of the pollution interface:
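The Slice definition itself is not reproduced in this version of the text. As a rough, hypothetical illustration of the shape such a service could take (the class and field names below are our assumptions; the text only tells us that sensors invoke a report operation and that streams are identified by a sensorID), a Python sketch might be:

```python
class PollutionReport:
    """One pollution reading from a geo-located sensor (field names assumed)."""
    def __init__(self, sensor_id, timestamp, co, no2, o3):
        self.sensor_id = sensor_id
        self.timestamp = timestamp
        self.co, self.no2, self.o3 = co, no2, o3

class Pollution:
    """Sketch of the pollution visualization service interface.

    Each sensor pushes its readings by invoking report(); the service
    keeps the latest reading per sensorID and would redraw that
    stream's glyph. Illustrative only, not the paper's Slice code.
    """
    def __init__(self):
        self.latest = {}

    def report(self, reading):
        # Remember the most recent reading for this sensor's glyph.
        self.latest[reading.sensor_id] = reading

svc = Pollution()
svc.report(PollutionReport("s-17", 0, 4.0, 30.0, 60.0))
```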

Similar interfaces are being defined for different domains in order to have a set of common interfaces for smart cities (with functionality similar to that of POSIX interfaces in the operating systems domain).

In the visualization layer, the service supporting that interface draws a glyph according to each received report. In the current implementation, a Gnome/GTK visualization tool shows the received data stream, as we can see in Fig. 6. Over a map from the OpenStreetMap project (Haklay and Weber 2008), the pollution service paints and updates the glyph associated with each stream, identified by a sensorID. Every sensor updates its glyph by invoking the report method, which is possible because every deployed sensor implements a low-footprint version of the Ice middleware developed in our previous work (Moya et al. 2009).
Fig. 5

Federation

Fig. 6

Screenshot of the demo app visualizing pollution related glyphs

We also define a federation mechanism to distribute the data stream to several visualization sinks in a broker-like fashion, through the PollutionAdmin interface:

Using the PollutionAdmin interface, any update in a service is communicated to all subscribed services by invoking the report method with the new data. This mechanism also enables powerful and flexible information federation, filtering, processing, composition, etc.
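A minimal sketch of this broker-like behaviour, assuming hypothetical subscribe/report operations (the PollutionAdmin definition is not reproduced in this version of the text), could look like:

```python
class PollutionAdmin:
    """Broker-like federation sketch (names are assumptions).

    Visualization sinks subscribe to a service; every incoming report
    is forwarded to all subscribers, so services can be chained into
    the access/core/visualization federation graph of Fig. 5.
    """
    def __init__(self):
        self.subscribers = []

    def subscribe(self, sink):
        self.subscribers.append(sink)

    def report(self, sensor_id, values):
        # Forward the update to every subscribed sink.
        for sink in self.subscribers:
            sink.report(sensor_id, values)

class RecordingSink:
    """Toy sink that just records what it receives."""
    def __init__(self):
        self.received = []

    def report(self, sensor_id, values):
        self.received.append((sensor_id, values))

core = PollutionAdmin()
viz = RecordingSink()
core.subscribe(viz)
core.report("s-17", (4.0, 30.0, 60.0))
```

Since a PollutionAdmin can itself subscribe to another one, the same pattern chains access, core and visualization services into a graph, and filtering or aggregation can be slotted in at any hop.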

Indeed, this federation mechanism is a key component of the scalability of our framework: we can join, aggregate, and apply different filters to data streams in order to reduce the amount of bytes transmitted.

Figure 5 shows a federation of several services; we identify three main roles in a federation graph:
  • Access services: these services are instantiated in nodes close to the stream sources and constitute the first distribution point. According to the type of stream, they can perform some actions on it; following the pollution example, this service is instantiated in the gateway between the wireless sensor network and the local area network, where a timestamp is added to each sensor value.

  • Core services: this role is the core of the scalability and flexibility of the framework; we can instantiate as many core services as needed according to the requirements of the scenario. For example, we could instantiate one core service per district.

  • Visualization services: each entity of the city that requires access to the data should instantiate one of the available visualization services. The police department, the fire department, etc., are candidates to host a visualization service in their offices.

Figure 6 shows the prototype of a visualization service implemented for checking the framework. The main frame presents the map of Ciudad Real (Spain) from the OpenStreetMap project; the right-hand frame is divided into two parts. At the top, all glyphs are shown together, forming a mosaic for detecting pollution anomalies; at the bottom, we see the specific values of a selected glyph, which is highlighted in red in both the map and the mosaic.

The glyphs shown in Fig. 6 come from simulations. These simulations are based on real data collected by a test sensor (Fig. 7). This pollution sensor is one example of the type of sensor we are going to deploy in a future real scenario.

Pollution variables (nitrogen, carbon dioxide, etc.) show complex relations among them. For example, values that are usual in summer (with specific traffic conditions, lack of wind, high temperature, etc.) could be tagged as anomalous in winter while being normal in summer. Glyphs help to detect this type of complex anomaly since they show variability with respect to the state of the city at each moment. Even individual variables that can be clearly parameterized sometimes cannot be used, because big cities usually exceed the individual thresholds established by health authorities. We are therefore interested in exceptional events that could represent a risk for citizens, and glyphs are excellent at showing this type of anomaly.

Some variables are related, and depending on the situation the relation between them may or may not be significant, for instance when it occurs only in one part of the city. Traditional visualization tools use 2D charts to show the evolution of each variable over time, so highly qualified workers are required to relate all the variables together and understand what is happening. With our glyph-based tool, the identification of unusual events is much more intuitive.
Fig. 7

Pollution sensor ready to be deployed

Finally, our current effort is to periodically match the form adopted by a glyph at a specific moment in time against a set of forms adopted by the same glyph in specific states of the data stream. Let us take the example of a glyph associated with a traffic-flow data stream of a street, involving velocity and separation between cars. We can associate several states of the traffic flow (fast, fluent, slow, traffic jam, etc.) with specific glyph forms (Fig. 8). Then, we use libraries (those used for recognizing handwritten symbols work well) to determine what is happening in a street: which form its glyph is taking and which form in the recorded data set the current glyph most resembles. In this way we can extract complex states even if the current glyph does not exactly match any of the previously stored glyphs.
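A minimal sketch of this matching step, assuming a simple nearest-template comparison over lobe lengths (the actual system relies on handwritten-symbol recognition libraries; the state names and toy values below are our assumptions), could be:

```python
def match_state(current, templates):
    """Match the current glyph form against stored state templates.

    current: lobe lengths of the live glyph (e.g. velocity, separation).
    templates: dict mapping a state name to its characteristic lobe lengths.
    Returns the state whose template is closest in Euclidean distance,
    so near-matches still map to a known state even without an exact hit.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(templates, key=lambda s: dist(current, templates[s]))

# Toy templates for traffic-flow states (normalized lobe lengths).
traffic_states = {
    "fast":        (0.9, 0.9),
    "fluent":      (0.7, 0.6),
    "slow":        (0.4, 0.3),
    "traffic jam": (0.1, 0.1),
}
state = match_state((0.75, 0.65), traffic_states)
```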

5 Interpreting human actions

Probably one of the key elements in understanding the smart city is interpreting human dynamics. Our target is to detect anomalies in citizen behaviour that reveal accidents, catastrophes, etc. Extending the visualisation framework to the interpretation of citizen actions implies monitoring even millions of data streams from their smart phones.

Approaches to human action recognition range from intrusive ones, such as body sensor networks (Cavallari et al. 2014), to non-intrusive ones, such as those based on information retrieved from smart phones. However, visualising the information retrieved from the different smart phone sensors with glyphs is not always possible or feasible. For example, the gyroscope sensor provides three values: the x, y, and z coordinates. Using glyphs to visualise this raw information would not be very useful.
Fig. 8

Glyphs associated with different states of traffic flow

In order to overcome this visualisation problem, the information must be lifted to a coarser granularity. To this end, rather than using glyphs to represent raw sensor data, high-level human activities such as walking, running, or kicking are used as glyph inputs.

The problem of recognising human activities has been approached here as a machine learning problem, and several modules have been developed, run, and evaluated. First, an Android application was developed to collect information from the smartphone sensors. Then, a sliding-window approach was implemented to segment the signal. The segmented signals are sent to a server where a data reduction is carried out by means of a discrete wavelet transform (DWT); this transformation reduces noise and eliminates redundant information. Once the feature vectors characterizing the signal segments have been constructed from the reduced and segmented signal, they are subjected to an additional dimensionality reduction: the feature vectors are clustered, and each one is then described as a histogram measuring the distance to each of the identified clusters. This process is known as Bag of Words. The histograms are provided to a support vector machine (SVM) classifier (Sapankevych and Sankar 2009) that computes a model from the training dataset and uses that model in later recognition processes. An outline of the system is depicted in Fig. 9.
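The stages above can be sketched in a highly simplified form (a one-level Haar DWT as the reduction step, and fixed toy centroids in place of learned clusters; window sizes and values are our assumptions for illustration):

```python
def sliding_windows(signal, size, step):
    """Segment a 1-D signal into overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def haar_dwt(window):
    """One level of the Haar discrete wavelet transform.

    Keeps only the approximation coefficients (pairwise averages),
    halving the window length and smoothing out noise, standing in
    for the paper's DWT data-reduction step.
    """
    return [(window[i] + window[i + 1]) / 2 for i in range(0, len(window) - 1, 2)]

def bag_of_words(features, centroids):
    """Describe a set of feature vectors as a histogram over clusters.

    Each feature vector votes for its nearest centroid; the resulting
    histogram is what would be fed to the SVM classifier. Toy centroids
    here; the paper learns them by clustering the training data.
    """
    hist = [0] * len(centroids)
    for f in features:
        d = [sum((x - y) ** 2 for x, y in zip(f, c)) for c in centroids]
        hist[d.index(min(d))] += 1
    return hist

signal = [0, 1, 0, 1, 4, 5, 4, 5]          # one accelerometer axis, toy values
windows = sliding_windows(signal, 4, 2)    # overlapping segments
features = [haar_dwt(w) for w in windows]  # reduced feature vectors
hist = bag_of_words(features, [(0.5, 0.5), (4.5, 4.5)])
```

In the real pipeline the histogram would then be passed to an SVM; here it is simply the fixed-length description of the segmented, reduced signal.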
Fig. 9

Outline of the different stages involved in the process

The data used by the SVM classifier to generate a model were recorded using a Samsung S4 smart phone. Out of the 9 sensors available in this smart phone, only the gyroscope and the accelerometer were used during the data gathering stage. 16 volunteers participated in the experiment, from which 154 actions were recorded altogether. Each action was manually labelled with a number between 1 and 14 as follows:
  1. Wait: the actor is still and, therefore, no action is being carried out.

  2. Walk: the actor is walking.

  3. Run: the actor is running.

  4. Sit down: the actor lowers down and sits on a chair.

  5. Stand up: the actor had previously sat on a chair and stands up.

  6. Lay down: the actor was previously standing up and lays down on a flat surface.

  7. Fall down: the actor is walking and suddenly drops to floor level, pretending to fall.

  8. Go upstairs: the actor is going upstairs.

  9. Go downstairs: the actor is going downstairs.

  10. Elevator moving up: the actor gets into the elevator and moves up.

  11. Elevator moving down: the actor gets into the elevator and moves down.

  12. Punch: the actor moves his/her arms as though he/she were punching something or someone.

  13. Kick: the actor moves his/her legs upwards as though he/she were kicking something or someone.

  14. Push: the actor moves his/her arms violently as though he/she were pushing something or someone.
One file is generated per action, actor, and sensor. Since a total of 154 actions were recorded using two sensors per action, 308 files were used as the SVM classifier input. The contents of each file are organised as depicted in Fig. 10: sensor values are timestamped and printed one value per line. In this case, since the accelerometer produces three values per measurement, x, y, and z labels are used to identify the collected values.
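The exact line layout of Fig. 10 is not reproduced in the text, so the following parser assumes a hypothetical whitespace-separated "timestamp axis value" layout, purely to illustrate how per-axis series could be recovered from such a file:

```python
def parse_sensor_file(lines):
    """Parse lines of the assumed form '<timestamp> <axis> <value>'.

    Returns one (timestamp, value) series per axis. The layout is an
    assumption for illustration, not the actual file format of Fig. 10.
    """
    series = {"x": [], "y": [], "z": []}
    for line in lines:
        ts, axis, value = line.split()
        series[axis].append((float(ts), float(value)))
    return series

# Toy extract standing in for an accelerometer file.
sample = ["0.00 x 0.12", "0.00 y 9.81", "0.00 z 0.03", "0.02 x 0.10"]
series = parse_sensor_file(sample)
```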
Fig. 10

Extract of a file containing data of the accelerometer, for the waiting action carried out by actor Alba

Fig. 11

Example of data stated in the SVM format

The files obtained from the data collection process are organised into two sets: one for the accelerometer model and another for the gyroscope. Two models are therefore obtained, so two training stages need to be carried out. For every training stage, the data need to be labelled using the format required by the Libsvm library (Hsu et al. 2010), as depicted in Fig. 11.
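The Libsvm format itself is well defined: each training line is "label index:value ...", with 1-based feature indices, and zero-valued features may be omitted since absent indices are treated as zero. Encoding one Bag-of-Words histogram (toy values) could look like:

```python
def to_libsvm(label, histogram):
    """Encode one Bag-of-Words histogram as a Libsvm training line.

    Libsvm expects '<label> <index>:<value> ...' with 1-based feature
    indices; zero counts are dropped, as absent features default to zero.
    """
    feats = " ".join(f"{i + 1}:{v}" for i, v in enumerate(histogram) if v != 0)
    return f"{label} {feats}"

line = to_libsvm(2, [3, 0, 5])  # action label 2 (walk), toy histogram
```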
Table 2

Confusion matrix for accelerometer sensor

#    1    2     3    4   5   6   7   8   9   10   11  12  13  14
1    192  197   0    0   0   0   0   0   0   69   0   0   0   0
2    59   5825  10   0   0   20  0   0   0   67   0   0   0   0
3    2    280   183  0   0   1   0   0   0   3    0   0   0   0
4    3    295   1    0   0   12  0   0   0   10   0   6   0   0
5    2    173   1    0   0   3   0   0   0   5    0   0   0   0
6    8    212   3    0   0   34  0   0   1   31   1   5   0   0
7    2    121   4    0   0   6   0   0   0   9    1   0   1   0
8    5    370   6    0   0   1   0   0   0   7    1   1   0   0
9    5    350   16   0   0   4   0   0   0   2    0   0   0   0
10   88   266   1    0   0   4   0   0   0   425  1   0   0   0
11   97   149   0    1   0   2   0   0   0   94   1   0   0   0
12   24   218   4    0   0   2   0   0   0   9    0   0   0   0
13   12   192   18   0   0   1   0   0   0   5    0   0   1   0
14   3    123   0    0   0   1   0   0   0   16   0   0   0   0

Table 2 summarizes the obtained accuracy rates for the activity recognition system. The diagonal of the confusion matrix represents the number of actions that were correctly recognized, whereas the other cells represent the actions that were wrongly recognized, and with which actions they were confused.

The output of the training process is two models, one per sensor used in the training stage. This output becomes the input of the following stage, the testing or recognising stage. At this stage, data are retrieved from the accelerometer and the gyroscope and combined with the aforementioned models to obtain a pattern match. Smart phone sensors are queried periodically, and the retrieved information is provided to the SVM classifier along with the model corresponding to the sensor being analysed. The classifier outputs a number, from 1 to 14, corresponding to the action with the highest confidence degree. This action is then provided to the visualisation tool, which translates it into a characteristic glyph.

The use of glyphs to represent the activities carried out by the citizens of the smart city opens up many more opportunities. For example, a higher-level analysis could be carried out at the glyph level to determine evolutionary patterns. In a park area, people are expected to walk, approach a bench, and sit down for a while; this sequence gives rise to a characteristic change pattern in the glyphs that can be used to model normality. However, if punching or kicking activities are recognised in a context where these are not normal activities, alerts can be triggered. Visual patterns can be easily recognised once sensor information has been translated into high-level human activities.

6 Evaluating human sensitivity to glyphs

Visualizing information through glyphs helps users detect anomalies, but how do users interact with the information? Which types of variation are users most sensitive to?
Fig. 12

Test number 7

To explore this, we tested a set of snapshots of 16 glyph panels with ten users to check their “sensitivity” to specific glyph forms and changes. Each panel has 64 glyphs with three lobes each. An example of one of these panels is shown in Fig. 12.

The 16 glyph panels are different: in some of them all glyphs are identical, while the other panels contain one glyph that differs. Table 3 shows which tests have no variability (e.g. in Fig. 12 we can see test number 7, with value 0) and the variability, in degrees, of the differing glyph when there is one (e.g. Fig. 13).

The setup of our test was an application in which the glyph panels were shown to the user sequentially, from test number one to test number sixteen, asking them whether all glyphs in the panel were identical, with two buttons: one for identical and another for different. The application stores the time taken by each user to answer each panel. Ten users performed the experiment.
Table 3

Number of test and variability of the glyph in each test

Test         1   2     3   4      5   6      7   8
Variability  0   68°   0   5.7°   0   11.4°  0   23°

Test         9   10     11     12  13  14     15  16
Variability  0   34.4°  46.8°  0   0   57.2°  0   0

Fig. 13

Test number 14

The results are shown in the form of boxplots. For each data set, a yellow box is drawn from the first quartile to the third quartile, and the median is marked with a thick line. Additional whiskers extend from the edges of the box towards the minimum and maximum of the data set, but no further than 1.5 times the interquartile range. Data points outside the range of box and whiskers are considered outliers and drawn separately, as small circles.
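The quantities drawn in these boxplots can be computed directly; a small standard-library sketch follows (note that statistics.quantiles uses the exclusive method by default, which may differ slightly from the quartile convention of the plotting tool used for Figs. 14 and 15; the data values are toy answer delays, not the experiment's):

```python
import statistics

def box_stats(data):
    """Compute the quantities drawn in a boxplot.

    The box spans Q1..Q3, whiskers reach the most extreme data points
    within 1.5 * IQR of the box edges, and anything beyond is an outlier.
    """
    q1, med, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = [x for x in data if lo <= x <= hi]
    return {
        "q1": q1, "median": med, "q3": q3,
        "whisker_low": min(inside), "whisker_high": max(inside),
        "outliers": [x for x in data if x < lo or x > hi],
    }

# Toy answer-delay sample (seconds), with one slow outlier response.
stats = box_stats([2.1, 2.4, 2.5, 2.7, 3.0, 3.1, 3.3, 9.0])
```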

As we can see in Fig. 14, analysing the time users took to choose an answer for each panel shows that some users are faster than others.
Fig. 14

Delay in answer per user

Fig. 15

Delay in answer per test

On each panel, users spend time checking whether all glyphs are equal, as shown in Fig. 15; the average answer time is lower when the difference is obvious. For example, Fig. 13 shows panel fourteen, in which the different glyph is quickly detected and answered. However, when a panel contains only equal glyphs, users tend to keep looking for a difference and delay their answer; this is the case of test seven (Fig. 12), which has the longest answer time on average. As we expected, variations forming right angles are easier to detect; for example, test number ten has one of the lowest answer delays across all users (Fig. 16).
Fig. 16

Test number 10

Fig. 17

Errors in human perception of glyph

Although all panels show the same type of glyph (with 3 lobes), human sensitivity to a change in one of the glyphs appears at values around 30°. As we can see in Fig. 17, all users failed to detect the variation in test number 4, in which one glyph differs by only 5.7° in one of its lobes. Six users also failed test number six and four users failed test number eight, with variations of 11.4° and 23°, respectively. The rest of the tests show one failure or none in human perception.

Again, it is necessary to point out that by taking this experiment, users were effectively monitoring 64 streams with 3 variables each, detecting anomalies in a few seconds (Fig. 14).

7 Conclusions

One of the key problems in smart cities is analyzing and understanding the great amount of data generated by the different processes in the city (traffic, pollution monitoring, human dynamics, etc.). In this work we have analyzed glyphs as a tool for data stream visualization.

First, we described how glyphs can be used for the visualization of multi-variable streams; then we built a flexible and scalable distributed framework for collecting, fusing, filtering and visualizing the information. Unfortunately, not all data streams can be presented raw to users. For example, accelerometers from mobile phones do not show valid information without a previous interpretation process. For this reason, we built an SVM approach to human action recognition for the visualization of human dynamics processes inside smart cities.

Finally, we analyzed users' responsiveness and sensitivity to glyph changes by exposing them to a testbed of 16 panels. This study will help us design more appropriate glyphs for data stream visualization.

As main conclusion, we strongly believe that a glyph-based tool can improve the monitoring task and reduce time reaction to abnormal situations in future smart cities.

As future work, we are involved in several research lines extending this work. First, we plan to extend the study of users' perception to different types of glyph forms, to improve our glyph design skills. A simulation of a smart city generating data streams is also necessary to test the scalability of the framework and to see how different situations translate into glyph visualization. Protests, panic situations, and traffic accidents are examples of events we are interested in; only simulation can provide enough data-stream flows to test these types of situations at an affordable cost.

Finally, although our tool is devoted to smart city monitoring, we are also interested in automatic smart city understanding.

Acknowledgments

This work has been partly funded by the Spanish Ministry of Economy and Competitiveness under project REBECCA (TEC2014-58036-C4-1-R) and by the Regional Government of Castilla-La Mancha under project SAND (PEII_2014_046_P).

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • F. J. Villanueva (1)
  • C. Aguirre (1)
  • A. Rubio (1)
  • D. Villa (1)
  • M. J. Santofimia (1)
  • J. C. López (1)

  1. University of Castilla-La Mancha, Ciudad Real, Spain
