Microsystem Technologies, Volume 14, Issue 4–5, pp 453–462

VHDL Implementation of a communication interface for integrated MEMS

  • E. Magdaleno Castelló
  • M. Rodríguez Valido
  • A. J. Ayala Alfonso
Technical Paper


The main objective of this paper is to develop a distributed architecture for integrating micro-electromechanical systems (MEMS or microsystems) based on a hierarchical communications system governed by a master node. A MEMS device integrates a sensor with its signal conditioner and communications interface, thus reducing mass, volume and power consumption. In pursuing this objective, we developed an interface to connect MEMS to a sensor or microinstrument network. The interface model was developed using the VHSIC hardware description language (VHDL). The implemented model, or intellectual property (IP) core, can easily be added to the microsystem. The core contains an interface file system (IFS) that supplies all the information related to the microsystem to be connected to the network, allowing its specific characteristics to be isolated within the micro-instrument. The IFS gives all nodes the same interface from the network point of view. To support complexity management and composability of the microinstrument, the IFS provides a real-time service interface and a configuration interface. A functional characteristic of this configuration interface is automatic new-node integration, or plug and play, on the network. The design was implemented in a field programmable gate array (FPGA) and was successfully tested. The FPGA implementation makes the designed nodes small, flexible, customizable, reconfigurable and reprogrammable, with advantages in customization, cost, integration, accessibility and expandability. The VHDL hardware solution is a key feature for size reduction, and the system can be resized as needed by exploiting VHDL's configurability.


Keywords: VHDL · FPGA · Communication protocol · Distributed architecture · Smart sensors · MEMS

1 Introduction

Advances in our understanding of silicon and microcircuit manufacturing technologies have brought about an increasingly greater integration of functions on a wafer or on circuits with a common support. These breakthroughs have led to the development of sensors that are able to perform functions instead of simply producing a signal based on a physical parameter, thus facilitating distributed control. MEMS offer many advantages, including reduced mass, volume and power consumption, more features and simpler architectures. The extensive variety of these characteristics suggests that the widespread application of these "micro-sensors" would be feasible if a way to lower costs and simplify the electronic interface were found (Bartek 1998; Ferrer and Lorente 2003).

Sensing devices were traditionally limited to measuring local parameters. These sensors were usually separated spatially when geographical coverage was needed and they had to be connected to a processor with a point-to-point connection. The availability of MEMS has increased demand for distributed architectures. This new class of device (smart sensors) requires a new design methodology for instruments based on a system of sensors. Such designs are simpler, cheaper and faster, and the resulting systems are more scalable and offer more features than traditional systems.

These advantages are obtained by providing the sensor with two characteristics: (1) the ability to compute and (2) a network interface. Smart sensors in a measurement and control system reduce the load on PC controllers and increase reliability. The evolution in sensor technology has led to a device which is now capable of independently performing tasks such as signal conditioning, linearity correction, unit conversion, self-diagnostics and plug and play. Moreover, the network interface allows for the connection of a large number of smart sensor nodes in a distributed measurement or control system, with a reduced system cost.

A smart sensor is made up of three integrated elements: the physical transducer (including the sensor/actuator itself, signal processing and an ADC/DAC unit), a network interface and a processing/memory core (Fig. 1). The design of the network interface is very important. Transducers found in the market use different interfaces created by different manufacturers. However, the interface for a smart transducer should be as generic as possible in order to support all transducers currently available as well as future versions (Lee 2001).
Fig. 1

Smart sensor node elements

The technical requirements for a sensor network vary: the communications service must guarantee real-time data transmission at predetermined intervals while minimizing delays, the interface should be as simple as possible and the system should be easily expandable. At the same time the sensor network needs to minimize purchase and maintenance costs (Kopetz et al. 2000; Bosse et al. 1996).

An effort has already been made to settle on a network interface for smart transducers via the IEEE 1451 standard (Lee 2000; IEEE 1997, 1999). However, this standard requires a large number of communication lines. Our approach implements a network interface for smart transducers based on a time-triggered architecture (TTA) (Kopetz and Bauer 2003).

Our goal in this paper is to implement a communications interface that allows a MEMS integrated in a local bus (through a master node) to be connected with a high-level micro-instrumentation communications bus.

The rest of the paper is organized as follows: Sect. 2 gives an overview of the communication interfaces and features of smart field-buses. Section 3 describes the design of a distributed architecture using a smart transducer interface and the time-triggered protocol class A (TTP/A). The implemented plug and play functionality is described in Sect. 4, while Sect. 5 explains the user interface. Section 6 compares the implemented protocol with the original. Experimental results and conclusions are offered in Sect. 7.

2 Sensor interface, definition and properties

An interface is defined as the boundary between two information-exchanging systems. This exchange is only possible if the connected systems share the same concepts and rules. The design of the distributed system in this paper includes a cluster made up of sensors, actuators and processing nodes that are connected through a specific communications medium.

The main task of the smart transducer interface is to exchange observations between the two connected subsystems at specified times (Kopetz et al. 2000; Lee 2001).

2.1 Observations

The dynamics of a real-time application are modeled using a set of state variables, or real-time entities, that change state over time. Examples of such variables are temperature, liquid flow, position, etc. The information about the state of a real-time entity at a given instant is captured by an observation, a triple composed of the following three elements:
$$ \langle \text{name},\; \text{observation instant},\; \text{value} \rangle , $$
where name is an element in the name space of real-time entities, instant is a point in the time domain and value is an element in the domain of values for that entity. An observation thus provides information on the state of an entity at a given instant.
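For illustration, an observation can be sketched as a small record type (Python here rather than the paper's VHDL; the field names and types are ours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    """An observation of a real-time entity: <name, instant, value>."""
    name: str       # element of the real-time entity name space
    instant: float  # point in the time domain (e.g. seconds)
    value: float    # element of the entity's value domain

# A temperature reading taken at t = 12.5 s:
obs = Observation(name="temperature", instant=12.5, value=21.7)
print(obs.name, obs.value)
```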

2.2 Flow control

Communications between two systems are usually controlled by a request from either the transmitting node (push style) or the receiving node (pull style). Both are communication schemes based on an event-triggered architecture, as shown in Fig. 2, in which A transmits data and B receives it. A successful communication exchange requires that both nodes agree on the mechanism and direction of the data transfer (Kruger 1997).
Fig. 2

Push style (a), pull style (b) and time-triggered communication (c)

In Fig. 2a, the control signal is established by the data source (A). This method is suitable for the sender, but the receiver must constantly check for a data transmission, which can arrive at any time. Push-based communications form the basic mechanism in event-triggered systems.

Figure 2b shows how the data flow is controlled by the data sink (pull method). Whenever the sink wants a specific piece of information, the sender must reply to the request. This approach facilitates the receiver's task, but the sender must now be on the lookout for incoming requests. Pull-based communications form the basic mechanism in client-server systems.

Lastly, Fig. 2c presents a time-triggered communications model where the flow control is executed at predetermined global times.

All three communication methods incur resource costs and cause design problems; time-triggered communication, however, is the least expensive.

The control flow between the nodes in this communications model is normally decoupled by means of a temporal firewall (Fig. 3). This model uses a combination of the three flow-control techniques shown in Fig. 2. Each component has a memory, which acts as the source or destination for the data in all communications events. Conceptually, this element acts as an interface file system (IFS) (Kopetz et al. 2000). In this way, the data to be sent can be written in the memory through a push-type interface. The data transmission is overseen by a time-triggered communications model. After the transmission, the data sink accesses the data through a pull-type interface.
Fig. 3

Decoupled flow control
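The decoupled scheme of Fig. 3 can be sketched as follows. All names here are our illustration; in the actual design the IFS is a dual-port memory and the transfer runs at globally agreed instants rather than on a function call:

```python
# Sketch of decoupled flow control: the sender pushes into its local IFS,
# a time-triggered transfer copies the value at a predetermined instant,
# and the data sink later pulls it from its own IFS.

class InterfaceFileSystem:
    def __init__(self):
        self._cells = {}

    def push(self, name, value):   # push-type interface (writer side)
        self._cells[name] = value

    def pull(self, name):          # pull-type interface (reader side)
        return self._cells.get(name)

def time_triggered_transfer(src, dst, names):
    # In the real protocol this happens in predefined TDMA slots known to
    # every node; here we model a single transfer instant.
    for name in names:
        dst.push(name, src.pull(name))

sender_ifs, sink_ifs = InterfaceFileSystem(), InterfaceFileSystem()
sender_ifs.push("pressure", 1013)                      # component A writes (push)
time_triggered_transfer(sender_ifs, sink_ifs, ["pressure"])
print(sink_ifs.pull("pressure"))                       # component B reads (pull)
```

Because the sink only ever reads its own local copy, sender and receiver never block on each other; the memory acts as the temporal firewall described above.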

Values are stored in memory and can be interpreted as status messages until their contents are updated and overwritten. Possible conflicts between simultaneous memory read-write operations are avoided by managing the memory with a time-triggered protocol. One feature in this type of architecture is that all the nodes know the transmission formats and have access to the same system clock, thus every component knows the instant in which the protocol updates the memory.

This smart transducer interface using IFS has the advantage of encapsulating and isolating internal complexity.

2.3 Internet connectivity

Connecting an industrial control system across the Internet offers numerous advantages, such as the remote monitoring of an industrial plant, fault diagnostics and correction from anywhere in the world, etc. Nevertheless, current Internet technology faces two problems: security and unpredictable connectivity. The security issue has seen significant advances thanks to the growing interest in developing e-commerce.

The implementation of an event-based system over the Internet is clearly an inadequate approach, given the difficulties in the synchronization between the sender and the receiver. In a time-triggered system, however, the loss or delay of a message would only result in the unavailability of the system’s current state until the message’s arrival or until the next transmission interval.

2.4 Plug and play

The protocol can support plug and play functionality. In this case, the node identification and the configuration programming rely on the field bus network level.

3 Description and design of the smart instrumentation interface

3.1 Architecture model

We apply a two-level design approach for the distributed system architecture (Fig. 4). The first level is the node level and consists of nodes, each of which can be considered a simple transducer (sensor or actuator) or microinstrument. The second level is the cluster level, or microinstrument network, which integrates all the transducer nodes (sensors, actuators or microinstruments) of the distributed instrument. The network performs instrumentation fusion and presents the data to the instrument master at the second level. Decisions about control values and configuration are made at the instrument master, which also handles user interaction when necessary.
Fig. 4

Architecture of the distributed microinstrument

Well-defined interfaces are introduced between these levels. A smart transducer interface handles data flow from node to cluster level, and the cluster level presents the results of all the instruments to the instrument master. The motivation for introducing these levels is to reduce system complexity at the cluster level and in the control application of the instrument master, and to allow hardware reuse or the implementation of the levels in different hardware technologies.

At the sensor level, all the transducers in the microinstrument are connected to a controller using a communication bus called IBIS (Ferrer and Lorente 2003; Lorente et al. 2004; Rodriguez et al. 2005). The IBIS bus allows up to 32 transducers to be connected and is a modification of the IS2 bus; this change in bus design brought about increased connectivity at a reduced cost. The IBIS master manages communication with the transducers. An interface was designed and implemented to integrate the transducer level into the proposed architecture model. This microinstrumentation interface is based on the time-triggered master-slave protocol (TTP/A) (Kopetz and Bauer 2003) in order to standardize the smart microinstrument interface and obtain the resulting benefits. We chose this protocol because it contains an IFS that easily exposes the particular characteristics of each transducer or microinstrument from the instrument master's point of view. In effect, the microinstrumentation interface acts as glue between the microinstruments and the control application (instrument master).

3.2 Principles of operation

The interface protocol is controlled by an active master (the instrument master in Fig. 4) that supplies synchronization to all the slave nodes (up to 255). The communication takes place over a single wire, which minimizes the connections between components and, consequently, the size of the micro-instrument. This method also facilitates fiber-optic-based communications.

The communication is round-based. Every round starts with a fireworks byte sent by the master that is used for synchronization and round identification. Bus access conflicts are avoided by a strict time-division multiple access (TDMA) schedule for each round. A round consists of several slots and a slot is a unit for transmission of one byte of data. Data bytes are transmitted in a standard UART format (Fig. 5). There are two types of rounds:
  1. Master-slave round. This round contains two frames. The first frame, sent by the master, is for control and includes several fields, one of which is the address of the slave with which the master wants to communicate. The second frame is the slave's response. This round is used for reading from or writing to the memory acting as the interface with the slave node's sensor. Plug and play functionality uses these master-slave rounds to identify new nodes, obtain documentation and download the new configuration.

  2. Multi-user round. This round begins with a fireworks byte from the master and continues with data frames from previously specified nodes. These rounds are periodic and are used to transmit observations to the master node. Master-slave rounds are used to program these multi-user rounds. Six different multi-user rounds can be programmed.

A regular sequence of multi-user rounds and master-slave rounds occurs during the protocol's normal operation (see Fig. 5). The rounds are separated by inter-round gaps (IRG) lasting 13 bit times, during which the bus is idle. Each frame or slot within a round is likewise 13 bits long: eight data bits, the start, stop and parity bits, and two bits forming the inter-byte gap (IBG).
Fig. 5

The communication in the micro-instrumentation bus is round based
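From these figures the frame and round durations follow directly. A small sketch of the arithmetic (the function names are ours; the 19,200 bps rate is the one used in the experiments of Sect. 7):

```python
def frame_time(baud):
    """One slot/frame = 13 bit times: start + 8 data + parity + stop + 2 IBG."""
    return 13.0 / baud

def round_time(n_slots, baud):
    """Duration of a round of n_slots frames followed by the 13-bit-time IRG."""
    return n_slots * frame_time(baud) + 13.0 / baud

# At 19,200 bps one frame lasts about 677 microseconds:
print(round(frame_time(19200) * 1e6))
```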

Each round is described in a round descriptor list (RODL) stored in every node within a structure called the IFS (see Fig. 3). RODL files can be modified using master-slave rounds.

A slave operation is shown in Fig. 6. The master sends a master-slave fireworks byte after initialization or reset. This byte is designed to generate a regular bit pattern and can be used for startup synchronization by slave nodes. After synchronization, the master transmits fireworks bytes over the bus in predefined slots. When the slaves receive this byte, they access the RODL file stored in the IFS. This file contains the information about the action a node performs (write, read, execute or idle) in each slot of a particular round. The IFS of all slaves is programmed to avoid conflicts (for example, two or more nodes writing to the bus). When the round is finished, the slaves wait for the next fireworks byte from the master.
Fig. 6

Diagram showing the slave module operation
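The slot-by-slot interpretation of an RODL can be sketched as follows. This is a Python illustration of the idea only; the actual encoding of RODL entries in the IFS memory is not detailed in the paper:

```python
# A node's RODL maps (round, slot) pairs to one of the four actions named
# in the text; any slot without an entry is idle for this node.
WRITE, READ, EXECUTE, IDLE = "write", "read", "execute", "idle"

def run_round(rodl, round_id, n_slots):
    """Return the action this node performs in each slot of the given round."""
    return [rodl.get((round_id, slot), IDLE) for slot in range(n_slots)]

# A node programmed to write in slot 1 of round 0 and stay idle elsewhere:
rodl = {(0, 1): WRITE}
print(run_round(rodl, 0, 3))
```

Conflict freedom is a property of the configuration: the master programs the RODLs so that no two nodes hold a WRITE entry for the same (round, slot) pair.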

3.3 Hardware requirements

We implemented VHDL modules for the master and slave nodes, respectively, to construct the smart microinstrument interface described above. The master node contains a TX/RX unit, a tri-state buffer to access the bus, a master controller (a state machine implementing the TTP/A protocol) and a dual-port memory acting as the IFS (see Fig. 7).
Fig. 7

Block diagram of the master node module

Figure 8 depicts a block diagram of a slave node module. These nodes need an additional hardware divisor to obtain the transmission rate. The slave control unit is a state machine, as shown in Fig. 6. The slave IFS unit is a dual-port memory. Port A of this memory accesses the bus via the TX/RX unit, and port B connects the interface to the sensor, actuator or microinstrument. This design allows the specific interface and internal complexity of the nodes to be isolated. Each interface memory (IFS) is grouped into files and records as follows: each node has a 1K × 8 memory, that is, 1,024 addresses (10 address lines). This memory is divided into 16 files, each containing 16 4-byte records, so that each file has a fixed size of 64 bytes (see Fig. 9). This organization simplifies the design considerably. The four most significant address bits (9–6) select the IFS file, bits 5–2 select the record, and the remaining two bits select the byte.
Fig. 8

Block diagram of a slave node module

Fig. 9

Memory file arrangement of the designed interface
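The address split can be illustrated with a small encode/decode pair (the function names are ours; the bit fields are those given above):

```python
def ifs_address(file_no, record, byte):
    """Pack file (bits 9-6), record (bits 5-2) and byte (bits 1-0)."""
    assert 0 <= file_no < 16 and 0 <= record < 16 and 0 <= byte < 4
    return (file_no << 6) | (record << 2) | byte

def ifs_decode(addr):
    """Recover (file, record, byte) from a 10-bit IFS address."""
    return (addr >> 6) & 0xF, (addr >> 2) & 0xF, addr & 0x3

# Record 2, byte 1 of file 8 (the "new alias" cell of Table 1):
addr = ifs_address(8, 2, 1)
print(addr, ifs_decode(addr))
```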

3.4 Description of communication

An overview of the slave interface operation is shown in Fig. 10. After a network reset or re-initialization, the transmission rate of the slave node is initially unknown. After waiting for a set period, the protocol starts looking for the hexadecimal pattern 55 (the master-slave fireworks byte) on the rxd signal, which belongs to round five. Once this pattern is detected, the hardware divisor is activated via the ce_div signal and given the necessary parameters (dividend and divisor). The divisor then performs its task; once finished, it sends the quotient and the remainder to the controller and activates the termination signal (div_end). The slave controller uses this data to calculate the sample rate for the Rx unit (const_baud) and to enable the TX/RX unit. The slot time and error time, which are necessary for robust operation of the state machine, are also calculated. Once these synchronizing operations are completed, the round is interpreted: in this case 5 bytes are received and stored in records one and two of file 14. After the round concludes (val = 1), an IRG elapses (reset_cnt high and ce_divisor low) and the header byte 78 is received (round 0). This round begins with the transmission of byte 0 of the first record in file 15.
Fig. 10

Functional simulation of the slave’s controller
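The reason the hexadecimal pattern 55 works for rate detection is that, sent LSB-first with a 0 start bit, it toggles the line every bit time, so consecutive falling edges are exactly two bit times apart. The sketch below is our illustration of how such a measurement could feed const_baud; the actual divisor inputs and measurement window in the VHDL design are not specified here:

```python
# Hypothetical sketch: the 0x55 frame has falling edges at the start bit and
# at data bits d1, d3, d5, d7, i.e. five edges spanning 8 bit times. Counting
# system-clock cycles across that span and dividing by 8 gives the cycles
# per bit, usable as the Rx sampling constant (const_baud).

def const_baud_from_edges(first_edge_cycle, last_edge_cycle):
    """Clock cycles per bit from the falling edges of one 0x55 UART frame."""
    return (last_edge_cycle - first_edge_cycle) // 8

# 50 MHz clock, 19,200 bps -> 2604 cycles per bit; edges 8 bit times apart:
cycles_per_bit = 50_000_000 // 19200
print(const_baud_from_edges(0, 8 * cycles_per_bit))
```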

4 Node integration

The master controller not only manages the bus communication in our case but also includes a hardware baptism algorithm that performs a binary search on all node identifiers to obtain the documentation number. The master stores three 32-bit integer variables: lo, hi, and the comparison identifier ci. The identifier of a node is found using iterations of the binary search of Elmenreich et al. (2002), and the master then assigns a new alias.

The master sends a lower limit on the node identifier. The memory location for this lower limit is in the slave node's IFS (file 8; see Table 1) and is described as the comparison identifier (ci). The bus interactions include a master round to address the unbaptized nodes (those with alias FF), a slave round to send ci, a master round to execute a comparison between ci and the documentation number (di), and a final slave round. In this round, slave nodes with alias FF whose di is greater than or equal to ci write a data frame containing 00. The master then calculates a new ci using lo, hi and the current ci.
Table 1

File 8 organization (documentation and configuration file) for slave nodes

File 8     Byte 0               Byte 1      Byte 2   Byte 3
Record 0
Record 1   Documentation number
Record 2   Actual alias (FF)    New alias
Record 3   Comparison identifier received
Record 4
\( \vdots \)
Record E
Record F
The identifier-comparison iteration ends when no node writes a data frame containing 00 in the last slave round. The master then has the documentation number and changes the node's alias from unbaptized (FF) to its new alias, overwriting record 2 of file 8 in a baptize operation.

An initial baptizing of four nodes is shown in Fig. 11. There are four nodes with an FF alias at the beginning. The master node starts the binary search to obtain the documentation numbers of the slaves (DDDDDDDD, AAAAAAAA, 66666666, 11111111 in this case). At the end (1,729.81 ms), the four nodes are baptized with aliases from 01 to 04.
Fig. 11

Initial baptizing for four nodes
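The search can be simulated in a few lines of Python. The exact search policy (homing in on the largest remaining documentation number first, then repeating) is our assumption based on the description above; only the 32-bit width, the di >= ci response rule and the four documentation numbers of Fig. 11 come from the text:

```python
# Illustrative simulation of the hardware baptism algorithm: a binary search
# over the 32-bit documentation numbers (di) of the unbaptized nodes.

def baptize_one(unbaptized):
    """Find one documentation number with 32 binary-search steps."""
    lo, hi = 0, (1 << 32) - 1
    for _ in range(32):
        ci = (lo + hi + 1) // 2                  # master broadcasts ci
        if any(di >= ci for di in unbaptized):   # some node answers 00
            lo = ci
        else:
            hi = ci - 1
    return lo                                    # largest remaining di

def baptize_all(dis):
    """Baptize nodes one by one, assigning aliases 0x01, 0x02, ..."""
    remaining, aliases, alias = set(dis), {}, 1
    while remaining:
        di = baptize_one(remaining)
        aliases[di] = alias                      # overwrite record 2 of file 8
        remaining.remove(di)
        alias += 1
    return aliases

found = baptize_all([0xDDDDDDDD, 0xAAAAAAAA, 0x66666666, 0x11111111])
print(sorted(found))   # the four documentation numbers from Fig. 11
```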

5 Programming and monitoring

The master node was connected via an RS232 cable to the serial port of a PC. A software interface was developed in Visual C# to configure and monitor the network. In the round-configuration option, the user configures the multi-user rounds (Fig. 12). When these rounds are specified, the user writes the IFS of all slaves connected to the bus through the master node, using successive master-slave rounds. The program stores the real-time data received from the field bus in the monitor tool; these data can be saved or displayed in real time.
Fig. 12

Screenshot of the rounds configuration tool

6 Comparison of the TTP/A with the implemented protocol

The implemented protocol is a simplification of the TTP/A protocol, adapted for integration into a smart node. In both protocols each node has a unique address and the network can have up to 255 nodes. The main difference is the size of the interface memory: in the TTP/A protocol the IFS can have a variable size of up to 64 kbytes, while in our protocol the interface memory has a fixed 1-kbyte size. This satisfies two objectives: reducing the occupied space and simplifying the protocol itself. The TTP/A protocol includes error control, but that function is simplified in our case to reduce the protocol size, shorten the baptizing task and improve integration into the smart node.

The documentation and configuration data have been integrated into the same IFS file (file 08). This arrangement frees one additional file for other tasks, such as sensor data storage or round information. Furthermore, the documentation number of a slave is only 4 bytes; the original TTP/A protocol uses 8 bytes for this number. As a result, each binary-search step in the baptize algorithm needs only four bus interactions instead of six.

TTP/A protocols have been implemented on microcontrollers (PIC 16F874, Atmel AVR AT90S2313, Motorola) (Obermaisser 2001; Trödhandl 2002) using C or assembler, so those software implementations are technology-dependent. The hardware implementation in VHDL developed in this paper, however, is technology-independent.

7 Experimental results

A simple prototype of the protocol-compliant smart sensor was implemented in an FPGA. The target FPGA for both the master and slave units is Xilinx's XC3S200-FT256 device, which belongs to the Spartan-3 family. The Spartan-3 board has a 50 MHz free-running oscillator that drives the custom logic, and all custom logic for this FPGA is designed to operate at this clock frequency. The XC3S200-FT256 device has 1,920 slices and 12 block RAMs.

All of the digital modules of the smart sensor were developed in VHDL and implemented in the FPGA. The master unit has three blocks: a protocol controller, a Tx/Rx unit and an interface module. The slave units have four blocks because they need a hardware divisor for synchronization tasks. The master module design occupies 378 slices (just 19% of the logic) and one block RAM for the interface memory (8%). The slave module uses 485 slices (25%) and one block RAM. The theoretical maximum operating frequency is 84.5 MHz, although in practice satisfactory results were obtained with the 50 MHz board frequency.

A test network running at 19,200 bps, with 10 slave nodes and one master, was defined for purposes of comparison. Table 2 shows the durations calculated with our algorithm alongside the original TTP/A computation times.
Table 2

Computation time of different algorithms for node identification

                                             Not optimized (Elmenreich et al. 2002)   Optimized (Elmenreich et al. 2002)   FPGA design
Initial baptizing (10 nodes) (s)
Identifying a new node during operation (s)
The FPGA design is faster than the other implementations because:

  1. Each binary-search step in the baptize algorithm needs only four bus interactions instead of six; the original protocol needs additional master-slave rounds to send the other 4 bytes of the ci number to the slaves.

  2. The original protocol needs 64 iterations to obtain the documentation number of an unbaptized slave; the proposed variation needs only 32.
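The arithmetic behind the speed-up follows directly from these two points (the per-node totals are our calculation from the stated figures):

```python
# Bus interactions needed to identify one node: iterations of the binary
# search times the bus interactions per iteration.

def interactions_per_node(iterations, per_iteration):
    return iterations * per_iteration

original = interactions_per_node(64, 6)   # original TTP/A baptize
fpga     = interactions_per_node(32, 4)   # this paper's FPGA variant
print(original, fpga, original / fpga)
```

So, counted in bus interactions alone, each identification is three times cheaper, before accounting for the faster hardware execution of the comparison itself.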


8 Conclusions

We developed a communications interface for smart transducers based on an interface file system, which conceals the micro-instrument’s properties and internal complexities by decoupling the application from the communications systems. The prototype includes a software tool to monitor and configure the network with a PC.

One advantage in the design is the ease in which the interface can be added to the micro-instrument. Distributed architecture can be used to easily connect MEMS with this interface.

The proposed architecture boasts a single-wire bus, which minimizes the interconnection between components, reduces overall size and lowers the cost of implementing the network. The system is developed in VHDL and is technology-independent. These characteristics allow the smart instrument to be small, flexible, customizable, reconfigurable and reprogrammable. Consequently the proposed smart instrument is ideal for distributed measurement and control applications, with clear advantages in customization, cost, integration, accessibility and expandability. For example, the distributed architecture, together with the software tool designed in Visual C#, can be used in greenhouses to measure pressure, temperature, humidity and luminosity with smart sensors. Water actuators can also be controlled, in addition to other tasks.

Additionally the design, based on a time-triggered architecture, allows the sensor/actuator network to be easily controlled or remotely monitored via Internet.



Acknowledgments

This work has been supported by the "Programa Nacional de Diseño y Producción Industrial" (Project DPI 2006-07906) of the "Ministerio de Educación y Ciencia" (Spain), and by the European Regional Development Fund (ERDF).


References

  1. Bartek M (1998) "Semi-hybrid" techniques for microsystem integration. In: Proceedings of the Microsystems Symposium, Delft, The Netherlands, pp 17–26
  2. Bosse E, Roy J, Grenier D (1996) Data fusion concepts applied to a suite of dissimilar sensors. In: Canadian conference on electrical and computer engineering, pp 692–695
  3. Elmenreich W, Haidinger W, Peti P, Schneider L (2002) New node integration in TTP/A networks. In: Proceedings of the 20th IASTED international conference on applied informatics, pp 173–178
  4. Ferrer C, Lorente B (2003) Smart sensors development based on a distributed bus for microsystems applications. In: Proceedings of SPIE's first international symposium on microtechnologies, Gran Canaria, Spain, pp 19–21
  5. IEEE (1997) IEEE Std 1451.2-1997, Standard for a smart transducer interface for sensors and actuators—transducer to microprocessor communication protocols and transducer electronic data sheet (TEDS) formats. IEEE Inc., Piscataway, NJ
  6. IEEE (1999) IEEE Std 1451.1-1999, Standard for a smart transducer interface for sensors and actuators—network capable application processor (NCAP) information model. IEEE Inc., Piscataway, NJ
  7. Kopetz H, Holzmann M, Elmenreich W (2000) A universal smart transducer interface: TTP/A. In: Proceedings of the 3rd international symposium on object-oriented real-time distributed computing (ISORC), pp 16–23
  8. Kopetz H, Bauer G (2003) The time-triggered architecture. In: Proceedings of the IEEE, pp 112–126
  9. Kruger A (1997) Interface design for time-triggered real-time system architectures. PhD thesis, Technische Universität Wien, Institut für Technische Informatik, Vienna
  10. Lee K (2000) IEEE 1451: a standard in support of smart transducer networking. In: Proceedings of the 17th IEEE instrumentation and measurement technology conference (IMTC 2000), pp 525–528
  11. Lee K (2001) Sensor networking and interface standardization. In: IEEE instrumentation and measurement technology conference, Hungary, pp 147–152
  12. Lorente B, Oliver J, Ferrer C (2004) IBIS bus: towards a distributed architecture for MEMS integration. Sensors and actuators A: Physical, pp 470–475
  13. Obermaisser R (2001) Design and implementation of a smart transducer network. Master's thesis, Technische Universität Wien, Austria
  14. Rodriguez M, Magdaleno E, Ferrer C, Lorente B, Ayala A (2005) High level communication interface design for integrated MEMS and microinstrument bus. In: Proceedings of SPIE smart sensors, actuators and MEMS II, pp 107–115
  15. Trödhandl C (2002) Architectural requirements for TTP/A nodes. Master's thesis, Technische Universität Wien, Austria

Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  • E. Magdaleno Castelló
  • M. Rodríguez Valido
  • A. J. Ayala Alfonso

Grupo de Comunicaciones y Teledetección, Departamento de Física Fundamental, Experimental, Electrónica y Sistemas, Universidad de La Laguna, La Laguna, Spain
