In the first session, the status of IPFIX standardization was discussed, together with some other problems that have an impact on the practical implementation of the protocol. The session was opened with a presentation on the status of IPFIX in the IETF. As in the previous workshop, Benoit Claise (Cisco Systems) gave an overview of the history of IPFIX and the main differences between IPFIX and NetFlow Version 9. In addition, he compared IPFIX with PSAMP [5], showing that these protocols are complementary. He finished by summarizing the current work in the IETF:
- The IPFIX File Format specification, which defines a format for storing IPFIX data, has been completed.
- Three network-management-related drafts are under discussion: (1) Definitions of Managed Objects for IP Flow Information Export, (2) Definitions of Managed Objects for Packet Sampling, and (3) Configuration Data Model for IPFIX and PSAMP.
- Work is still in progress on IPFIX Structured Data, the Mediation Function, and IPFIX Export per SCTP Stream.
- New items have been added to the charter: Flow Anonymization, Flow Selection, and IPFIX Benchmarking.
More information about the current work in the IETF can be found in the IPFIX status pages [15].
Carsten Schmoll (Fraunhofer FOKUS) proposed a solution for making transmission of IPFIX data more secure. Network flow data must be treated as confidential, since they contain information that can, for example, be misused during attacks. His solution addresses two major threats:
- Anonymity disclosure: NetFlow/IPFIX records contain information about active flows, the addresses of involved nodes, and traffic patterns in the network. Attackers can use such information to profile users’ behavior and to reveal details about the network infrastructure, easing attacks against other network elements.
- Attacks against the measurement system: applications that depend on network flow data can be affected if the measurement infrastructure is compromised. For example, unprotected collectors are vulnerable to flooding attacks, which can disrupt accounting systems.
Schmoll proposed to encrypt exported IPFIX data and to decrypt them only when strictly necessary. His solution uses a different encryption key for each collector device, allowing exporting devices to decide which collectors can decrypt which portion of the data. All communication for key exchange is protected by standard TLS (Transport Layer Security), and all standard security measures, such as firewall protection and access control policies, should also be in place. However, a comprehensive evaluation of the effectiveness of his approach has yet to be performed.
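The idea of per-collector keys can be illustrated with a minimal sketch, written here in Python with the cryptography package; the collector names, the field-level policy, and the helper functions are hypothetical illustrations, not details of Schmoll's design.

```python
from cryptography.fernet import Fernet

# One symmetric key per collector device (in practice the keys would be
# distributed over a TLS-protected channel, as proposed in the talk).
collector_keys = {
    "collector-A": Fernet.generate_key(),
    "collector-B": Fernet.generate_key(),
}

# Hypothetical policy: which record fields each collector may decrypt.
policy = {
    "collector-A": {"srcIP", "dstIP", "octets"},  # full view, e.g. for accounting
    "collector-B": {"octets"},                    # traffic volume only, no addresses
}

def export_record(record: dict) -> dict:
    """Encrypt, per collector, only the fields that collector may read."""
    exported = {}
    for collector, key in collector_keys.items():
        f = Fernet(key)
        exported[collector] = {
            field: f.encrypt(str(value).encode())
            for field, value in record.items()
            if field in policy[collector]
        }
    return exported

def collect(collector: str, encrypted_fields: dict) -> dict:
    """A collector decrypts only the fields encrypted under its own key."""
    f = Fernet(collector_keys[collector])
    return {field: f.decrypt(token).decode() for field, token in encrypted_fields.items()}

flow = {"srcIP": "10.0.0.1", "dstIP": "10.0.0.2", "octets": 4242}
wire = export_record(flow)
print(collect("collector-B", wire["collector-B"]))  # {'octets': '4242'}
```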
Cristian Morariu’s presentation targeted bottlenecks in handling NetFlow/IPFIX data. Since NetFlow/IPFIX meters are often used in high-speed networks, the infrastructure that transports and processes these data must be designed to support heavy workloads. Bottlenecks can occur if, for example, NetFlow/IPFIX data arrive at a collector faster than the storage hardware can write them, if the available network bandwidth is insufficient, or if processing a NetFlow/IPFIX record takes longer than the inter-arrival time of such records. These bottlenecks are normally addressed at the metering point by sampling packets or flows before any data are exported. However, some applications require highly accurate measurements, and sampling can degrade that accuracy.
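The last condition can be made concrete with a back-of-the-envelope check; the numbers below are assumed for illustration and do not come from the presentation.

```python
# A collector becomes a bottleneck once the mean per-record processing time
# exceeds the mean inter-arrival time of exported records (assumed figures).
arrival_rate = 200_000      # flow records per second arriving at the collector
processing_time = 8e-6      # seconds needed to process one record

inter_arrival = 1.0 / arrival_rate               # 5 microseconds between records
utilization = processing_time / inter_arrival    # > 1 means the collector falls behind

print(f"inter-arrival time: {inter_arrival * 1e6:.1f} us")
print(f"utilization: {utilization:.2f} -> {'bottleneck' if utilization > 1 else 'keeps up'}")
```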
Morariu proposed a new architecture, suitable for situations in which sampling is not acceptable. His solution, based on the Kademlia distributed hash table [11], aims to increase the number of flows that can be processed by distributing the workload across several network nodes. Furthermore, his solution is more robust, since peer-to-peer networks provide redundancy and avoid single points of failure. Although a prototype implementation already exists, further analysis is needed to confirm its feasibility.
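As a rough sketch of how such a distribution could work, rather than of Morariu's actual implementation, the Python fragment below assigns each flow record to the processing node whose identifier is closest to the hash of the flow key under Kademlia's XOR metric; the node names and the flow-key format are assumptions made for the example.

```python
import hashlib

def node_id(name: str) -> int:
    """160-bit identifier, as in Kademlia, derived here from a node name."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def flow_key_id(five_tuple: tuple) -> int:
    """Hash the flow key (5-tuple) into the same identifier space."""
    return int.from_bytes(hashlib.sha1(repr(five_tuple).encode()).digest(), "big")

def responsible_node(five_tuple: tuple, nodes: dict) -> str:
    """Pick the node whose ID has the smallest XOR distance to the flow key."""
    key = flow_key_id(five_tuple)
    return min(nodes, key=lambda n: nodes[n] ^ key)

# Hypothetical processing nodes forming the peer-to-peer overlay.
nodes = {name: node_id(name) for name in ("node-1", "node-2", "node-3", "node-4")}

flow = ("10.0.0.1", "10.0.0.2", 6, 443, 51234)  # src, dst, protocol, dst port, src port
print(responsible_node(flow, nodes))
```

Because every node applies the same hash and distance computation, records of the same flow always land on the same node, and adding or removing nodes only shifts the flows closest to the affected identifiers.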