As shown in the related work section, there are quite a few middleware systems focusing on urban planning [3, 7–9]. We introduce an open middleware for urban planning that includes a geo-data repository and an asynchronous job management engine allowing supervised execution of modeling, simulation and visualization tasks in a network. Our solution focuses on simple configurability and usability, parallel computation of urban simulations and flexible web integration. In contrast to existing solutions, LUCI is platform independent and open source, and it allows encapsulating complex simulation tasks for urban planning in a straightforward manner that relates more closely to tools supporting creative design tasks than to typical GIS tools.
It is implemented by combining the message broker model with a simple server-client architecture. We use MQTT for notification and a separate TCP socket for the content exchange. Similar to MQTT, the content exchange sockets remain open as long as the client is connected. Content messages follow a fairly simple protocol in which each message consists of a JSON header followed by binary data if needed. The JSON header must contain one of the three keywords “action”, “result” or “error”, where “action” and “error” hold strings and “result” holds a JSON object.
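The three header variants can be sketched as follows. The keywords “action”, “result” and “error” come from the protocol description above; the remaining field names (“get_scenario” aside, which appears later in this section) are invented for illustration:

```python
import json

# Illustrative request and reply headers; only the keywords "action",
# "result" and "error" are given by the LUCI protocol.
request_header = {"action": "get_scenario", "scenarioID": 42}
ok_reply = {"result": {"scenarioID": 42}}
err_reply = {"error": "unknown scenarioID"}

# The serialized header is what precedes any binary data on the wire.
wire = json.dumps(request_header).encode("utf-8")

def classify(header):
    """Return which of the three mandatory keywords a header carries."""
    for key in ("action", "result", "error"):
        if key in header:
            return key
    raise ValueError("header must contain 'action', 'result' or 'error'")
```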
The name “action” corresponds to terminology we define in LUCI as follows. “Actions” are small pieces of code used for the administration of data, simulation and data conversion tasks. They are supposed to run very fast, and when called from a client they run synchronously: the client only gets an answer once the action is finished. “Services”, on the other hand, are asynchronous tasks expected to run for a longer time and therefore not perceived as responsive by a user. “Converters” are a class in between: they currently run synchronously but are designed to run asynchronously with only a few modifications. “Actions”, “Services” and “Converters” are part of a plugin system, i.e. they are dynamically (re)loaded at runtime. Thus a restart of LUCI, which would reset all socket connections, is not necessary. As shown in Fig. 4 there is also a category of “Database” plugins. Even though these are not reloadable at runtime, they allow LUCI to provide different database adapters, which in turn enables users to work with their preferred database. Apart from the plugin structure, Fig. 4 also shows the basic idea of the data structure. We map service in- and outputs to separate tables. “Scenarios” denote the main units into which the geo-data repository is divided. They are shared among the services. For more information on the data structure refer to Sect. 3.4.
Graphical user interface(s)
Administrative controls including an interactive console are available from a desktop system tray menu, as shown in Fig. 3. Nevertheless, LUCI can also be configured to run headless without any GUI. Currently, the most important control available from this menu is the interactive console. It allows developers to send actions to LUCI as raw JSON strings, typically for testing purposes.
Other commands available from that menu are starting and stopping LUCI, opening the PDF documentation (Open Specification) and a few more, as listed in Fig. 5.
A major highlight of the LUCI middleware is that it can be embedded in any web system through web sockets. This opens the door for a wide variety of HTML5 web applications. We use ActiveMQ as the MQTT broker (see Sect. 3.8). Upon startup, the broker also starts a Jetty web server to support MQTT over web sockets. This Jetty instance also serves LUCI’s web content. In the future the web interface should develop into the main instance from which a user can administrate and monitor LUCI, its service instances and scenarios. Among others, we envision a flow diagram to interactively visualize service instances and perhaps other parts of a LUCI scenario. For a showcase of the capabilities LUCI offers through web sockets, refer to Sect. 4.2, “Teaching the Unknown”.
LUCI is designed to be database-agnostic. This is achieved through mostly standard SQL code, with all database-specific parts residing in a dedicated plugin in the database layer (see Fig. 4). At the moment Postgres and H2 are supported. The data structure can be subdivided into two main areas: the inputs and outputs of the services, and the geo-data repository, which from the perspective of the individual services could also be termed “shared data”.
The most important requirement for the service-related data is that every generated output can be related to the corresponding input data. We solve this with timestamps: services operate either on their input data alone or on their input data plus shared data. In an additional table we therefore store the newest available timestamp from the input table, the newest timestamp of the shared geo-data, and the newer of the two in a third column, which at the same time serves as the identifier of a call to a service. This call ID is then used to identify the service outputs: the outputs table holds a call-ID column, which contains the newest timestamp of the corresponding inputs.
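The call-ID rule above can be sketched as follows; the timestamps are illustrative integers, and LUCI’s actual timestamp representation may differ:

```python
# The call ID is the newer of the newest input timestamp and the newest
# shared-data timestamp (if the service uses shared data at all).
def call_id(input_timestamps, shared_timestamps=()):
    newest_input = max(input_timestamps)
    newest_shared = max(shared_timestamps, default=newest_input)
    return max(newest_input, newest_shared)
```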
The data structure of the shared data is driven by the goal of making query code as simple as possible and putting the complexity into the insert and update code. As introduced in the terminology Sect. 3.1, the shared data is organized in scenarios. A scenario is defined by seven predefined columns and an optional coordinate system identifier. In other words, a scenario defines a project space in which all geometry shares the same attributes. Besides self-explanatory attributes like “geomID”, “geom”, a general purpose “flag”, “userID” and “timestamp”, a scenario holds:
a “batchID”: LUCI is, among other use cases, used for an evolutionary optimization process in which many different variations of the same scenario are created and evaluated. To avoid mistaking such variations for versions, which are planned for future implementation, we call those variations “batches”.
a “layer”: similar to layers in CAD applications, a layer in LUCI is exclusive, i.e. geometry cannot be part of two layers.
For each scenario, four tables are created. Besides the main table holding the current state of a scenario, there is also a history table with a nearly identical structure to the main table. The only addition is a timestamp column storing the time at which a record from the main table was deleted. The two remaining tables will be used for versioning in future development of LUCI.
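The main/history pair can be sketched as follows, using SQLite as a stand-in for the supported Postgres/H2 backends. The column names follow the predefined attributes listed above; the SQL types, the table names and the extra “deleted” column name are assumptions for this sketch:

```python
import sqlite3

# Per-scenario main and history tables (simplified).
cols = ("geomID INTEGER, geom BLOB, flag INTEGER, userID INTEGER, "
        "timestamp INTEGER, batchID INTEGER, layer TEXT")
conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE scenario_main ({cols})")
# History table: nearly identical, plus the deletion timestamp.
conn.execute(f"CREATE TABLE scenario_history ({cols}, deleted INTEGER)")

# Deleting a record moves it from the main table to the history table:
conn.execute("INSERT INTO scenario_main VALUES (1, x'00', 0, 7, 100, 1, 'b')")
conn.execute("INSERT INTO scenario_history SELECT *, 101 FROM scenario_main "
             "WHERE geomID = 1")
conn.execute("DELETE FROM scenario_main WHERE geomID = 1")
```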
Besides a few “local” services that reside in the plugins/services folder, a major benefit of using LUCI results from its network orientation: it features distributed computing and load balancing. “Remote” services are a key element in the implementation of LUCI’s parallelization and distributed computation capabilities. They are characterized by two main attributes: firstly, from a client’s perspective remote services are indistinguishable from local services when being called; secondly, any client can register as a service.
Upon registration a service describes its inputs and outputs. The inputs of any future call to that service will be verified by LUCI against this input description. The input/output description is very similar to the capabilities of the Web Processing Service (WPS). In the future we could even think of converting the description to WPS when exposing the available services to the web.
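A registration message might look as follows. The exact key names used by LUCI’s registration action are not given here, so everything beyond the idea of a typed input/output description against which calls are verified is an assumption:

```python
# Hypothetical registration message of a remote service.
registration = {
    "action": "remote_register",   # assumed action name
    "inputs": {"scenarioID": "number", "gridSize": "number"},
    "outputs": {"result_grid": "attachment"},
}

def verify_call(description, call_args):
    """Simplified input verification: a call must provide exactly the
    declared inputs (LUCI's actual rules may be more permissive)."""
    return set(call_args) == set(description["inputs"])
```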
Since the data is extensively timestamped, LUCI is able to send only the updates to the scenario data and input parameters since the last execution. Therefore the “get_scenario” and “get_service_inputs” actions both support the concept of time-range selections, which is basically a parameter of the action call consisting of one of the keywords “from”, “before”, “until” or “after” and a timestamp as the value. To make use of this partial data extraction, the remote service must implement some sort of data cache to which the updates can be added.
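A time-range selection can be sketched as a filter over timestamped records. Whether “from” and “until” are inclusive is an assumption here; the keywords themselves come from the text:

```python
# One keyword plus a timestamp selects a slice of the timestamped data.
def select_range(records, keyword, ts):
    ops = {
        "from": lambda t: t >= ts,    # assumed inclusive
        "after": lambda t: t > ts,
        "until": lambda t: t <= ts,   # assumed inclusive
        "before": lambda t: t < ts,
    }
    return [r for r in records if ops[keyword](r["timestamp"])]
```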
Messages in LUCI consist of a JSON header and optional binary attachments. The first 16 bytes of every message encode the lengths of the header and of the attachments as two 8-byte big-endian numbers. This is crucial since connections are not closed, but remain open during a session (web sockets) or until the connection is closed by either client or server (TCP/IP). The attachments part can contain multiple byte arrays. All of them must be described in the JSON header by a streaminfo object, a JSON object with a predefined structure and keywords. If processing of the header fails, all subsequent bytes can still be read using the information of the first 16 bytes, which clears the socket for the next message.
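The 16-byte prefix can be sketched as follows. For simplicity all attachments are treated here as one concatenated byte array; how LUCI splits the attachment region into individual byte arrays is described by the streaminfo object and is not modeled in this sketch:

```python
import json
import struct

# Two 8-byte big-endian numbers: header length, then attachment length.
def frame(header, attachment=b""):
    body = json.dumps(header).encode("utf-8")
    return struct.pack(">QQ", len(body), len(attachment)) + body + attachment

def unframe(message):
    header_len, attach_len = struct.unpack(">QQ", message[:16])
    header = json.loads(message[16:16 + header_len].decode("utf-8"))
    attachment = message[16 + header_len:16 + header_len + attach_len]
    return header, attachment
```

Because the lengths are known up front, a receiver can always skip a message it fails to parse, which is exactly the recovery property described above.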
At the moment messages can be sent through TCP/IP and web sockets. Parallel messages are not allowed, so each message must be answered before the next message can be sent. This shifts the complexity of parallelization away from the client to LUCI, and the term call ID remains free for services. As mentioned in the terminology Sect. 3.1, we distinguish between actions and services. Actions are similar to remote procedure calls, with the exception of not having a call ID. Messages always call an action by using the “action” keyword. Any message in LUCI must contain one of the keywords “action”, “error” or “result”, where “action” and “error” hold a string value and “result” a JSON object.
Actions themselves are plugins similar to local services, database adapters, or data converters. LUCI comes with a standard set of actions, which can be extended or adapted to the specific needs of a project, just as services can. In Sect. 4.2 we show an example of how LUCI can be adapted to special needs by implementing dedicated services and/or adapting actions and converters.
Converters are plugins that call predefined functions of database adapters to store geometry in the scenario table. Supported formats so far are:
DXF and other formats closer related to CAD are on the task list for future development. Converters must not only translate the information from one format to the database, but also implement a few features specific to LUCI:
Attribute mapping: a JSON object, part of the streaminfo object, that tells the converter which source attribute (e.g. ID) should be mapped to which of the seven predefined attributes (e.g. geomID) described in the data structure section.
Delete_list: a property stored in the format itself that tells the converter which elements should be deleted from the main table, i.e. moved to the history table. In GeoJSON, for example, the delete_list is a property of a feature that holds no geometry.
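The GeoJSON case can be illustrated as follows. The property name “delete_list” follows the text; the surrounding feature structure is standard GeoJSON, and the extraction logic is a simplification of what a converter would do:

```python
# A feature without geometry whose properties name the IDs to remove.
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [0.0, 0.0]},
         "properties": {"geomID": 1}},
        {"type": "Feature",
         "geometry": None,
         "properties": {"delete_list": [4, 5]}},
    ],
}

def extract_delete_list(fc):
    """Collect the IDs a converter would move to the history table."""
    for feature in fc["features"]:
        props = feature.get("properties") or {}
        if feature.get("geometry") is None and "delete_list" in props:
            return props["delete_list"]
    return []
```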
Jobs, called service instances in LUCI, can be run synchronously or asynchronously. To run asynchronously, the service instance must first be created in order to retrieve a service instance ID (SObjID). As discussed in the data structure Sect. 3.4, services can have inputs and outputs, which they define at runtime. Upon instance creation all input parameters of a service instance are stored to the database. Whenever the service is run, its inputs are loaded from the database. In theory the service can be re-run as many times as desired. Still, the service can store the outputs belonging to one single call ID (see Sect. 3.4) only once. Since the call ID is always equal to the newest timestamp of a service’s inputs, re-running a service only makes sense if one of its input parameters has changed.
To listen for such changes we use the Message Queuing Telemetry Transport (MQTT) protocol, a publish-subscribe framework. It was originally developed by IBM, is an open standard and builds on top of TCP/IP and web sockets. It is often referred to as the protocol for the Internet of Things. A LUCI service instance can either subscribe one of its inputs to the output of another service instance or subscribe the instance as a whole to the termination of other services, which will cause the instance to run immediately after another service instance has finished. With this setup, service instances can be represented in a flow diagram, which is the intention of the configuration interface mentioned in Sect. 3.3. Using MQTT enables client applications to run previously created service instances simultaneously with a single publication to MQTT. Furthermore, it enables them to monitor all service-instance-related activity.
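The chaining semantics can be sketched with a minimal in-process stand-in for the broker: one instance subscribes to the termination of another and runs as soon as it finishes. The topic names are invented; LUCI’s real MQTT topic scheme is not specified here:

```python
# Tiny publish-subscribe stand-in for an MQTT broker (no networking).
class TinyBroker:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload=None):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

broker = TinyBroker()
runs = []
# Instance B runs whenever instance A terminates:
broker.subscribe("luci/instance/A/done", lambda _: runs.append("B"))
broker.publish("luci/instance/A/done")
```

Publishing once to a topic with several subscribers is what lets a client start multiple previously created service instances with a single publication.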
Synchronous calls cannot be made through MQTT; instead, they must be made through the “run” action built into LUCI. In this case service inputs and outputs are not transferred via the database but directly to the (remote) service and back to the client. The run action waits until the service completes.
Accessibility and availability