Flow data, such as NetFlow and IPFIX, is well known for usage billing, network capacity planning, and DDoS protection, particularly among telecommunications providers. Enterprises have only recently begun exploring its potential for network operations and security, and even today myths slow the adoption of flow technology, most of them rooted in missing or misunderstood information. Here are four major myths.
Myth 1: Flow data is sampled and highly inaccurate
This is true of sampled technologies such as sFlow or NetFlow Lite, supported on older devices still used by some SMBs. In contrast, all major enterprise network equipment vendors provide routers and switches capable of exporting non-sampled, highly accurate traffic statistics, and all major firewall vendors enable flow export, including on virtualized platforms. Access to non-sampled flow data is commonplace, enabling precise measurements and high accuracy.
Full, non-sampled flow data enables visualization of east-west network traffic, showing the utilization of individual uplinks across locations. Another important use is network incident troubleshooting. A user, for example, may have trouble connecting to a server via SSH. Flow data can confirm that the client's requests reached the network but received no response from the server, which excludes many potential root causes, such as network downtime or client misconfiguration. The range of causes narrows to the service not running or the communication being blocked by a firewall or other device. This process vastly reduces the mean time to resolve (MTTR).
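As a simple illustration of this narrowing process, the sketch below inspects flow records for one-way SSH traffic. The record layout and addresses are hypothetical simplifications for this example, not any vendor's export format.

```python
# Hypothetical flow records for the failing SSH connection (illustrative
# layout and addresses, not a real export format).
flows = [
    {"src": "10.0.0.5", "dst": "192.0.2.10", "dst_port": 22,
     "packets": 6, "bytes": 360, "tcp_flags": "S"},  # client keeps retrying SYN
    # Note: no reverse flow from 192.0.2.10 back to the client.
]

def has_reply(flows, client, server):
    """Return True if any flow travels from the server back to the client."""
    return any(f["src"] == server and f["dst"] == client for f in flows)

if not has_reply(flows, "10.0.0.5", "192.0.2.10"):
    # Client traffic reached the network but nothing came back, so the path
    # itself looks fine; suspect a stopped service or a firewall drop.
    print("No response flow from 192.0.2.10:22 - check the service or firewall")
```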
Myth 2: Flow is limited to L3/L4 visibility
Originally, flow data was limited to Layers 3 and 4. Today that is no longer the case. A flow record represents a single flow of packets in the network, identified by the 5-tuple: source and destination IP address, source and destination port, and communication protocol. Packets sharing a 5-tuple are aggregated into flow records that accumulate the amount of transferred data, the number of packets, and other information from the network and transport layers. To provide even more useful data, five years ago Flowmon developed flow data enriched with information from the application layer, a concept that has since been adopted by many other vendors. Detailed visibility into application protocols, such as HTTP, DNS, and DHCP, is therefore now available for troubleshooting.
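To make the aggregation concrete, here is a minimal Python sketch that groups packets into flow records keyed by the 5-tuple. The packet representation and record fields are simplified illustrations, not a real NetFlow/IPFIX schema.

```python
from dataclasses import dataclass

# Simplified 5-tuple flow key (illustration only).
@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # e.g. "TCP", "UDP"

@dataclass
class FlowRecord:
    packets: int = 0
    bytes: int = 0

def aggregate(packets):
    """Accumulate per-flow packet and byte counters, keyed by 5-tuple."""
    flows = {}
    for key, size in packets:  # (FiveTuple, packet size in bytes)
        rec = flows.setdefault(key, FlowRecord())
        rec.packets += 1
        rec.bytes += size
    return flows

# Example: two packets of the same SSH connection collapse into one record.
ssh = FiveTuple("10.0.0.5", "192.0.2.10", 50514, 22, "TCP")
print(aggregate([(ssh, 120), (ssh, 1440)]))
```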
In another example, a user may face an unresponsive service. By analyzing traditional flow data, a technician can see the traffic pattern and the details of individual connections established by the user's computer. Standard flow analysis may show nothing obviously wrong, yet no traffic towards the service is visible. With the extended visibility of an enriched flow data engine, the technician can easily troubleshoot the problem. It may turn out that the requested service name is not properly configured in DNS and the query returns "NXDOMAIN", indicating that the domain name does not exist and no IP address can be provided. In that case no session is ever established, which explains the problem.
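A minimal sketch of how such enriched records might be queried follows; the field names (dns_question, dns_rcode) and addresses are hypothetical, not a specific product's schema.

```python
# Hypothetical flow records enriched with DNS fields (illustrative names).
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.53", "dst_port": 53,
     "dns_question": "app.internal.example", "dns_rcode": "NXDOMAIN"},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.53", "dst_port": 53,
     "dns_question": "mail.example.com", "dns_rcode": "NOERROR"},
]

# Surface failed name resolutions: these explain why no session follows.
for f in flows:
    if f["dst_port"] == 53 and f["dns_rcode"] == "NXDOMAIN":
        print(f'{f["src_ip"]} failed to resolve {f["dns_question"]}')
```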
Myth 3: Flow data misses network performance metrics
Besides troubleshooting, another important consideration is network performance monitoring. This is no longer the sole domain of packet capture tools: the relevant metrics can be extracted from packet data by the flow exporter and shipped as part of the flow statistics. Performance indicators such as round trip time (RTT), server response time (SRT), jitter, and the number of retransmissions are available transparently for all network traffic, regardless of the application protocol.
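As a rough illustration of how such metrics can be derived from packets before export, the sketch below computes RTT from the TCP handshake and SRT from the gap between a request and the first response byte. The timestamps are hypothetical values a probe might observe.

```python
# Illustrative timestamps (seconds) observed by a flow probe for one TCP session.
t_syn = 0.000        # client SYN seen
t_syn_ack = 0.012    # server SYN-ACK seen
t_request = 0.030    # first request byte from client
t_response = 0.085   # first response byte from server

rtt = t_syn_ack - t_syn        # round trip time, from the handshake
srt = t_response - t_request   # server response time

print(f"RTT: {rtt * 1000:.1f} ms, SRT: {srt * 1000:.1f} ms")
# Values like these are exported in the flow record alongside byte/packet counters.
```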
Myth 4: Flow is not a comprehensive tool for Network Performance Monitoring and Diagnostics (NPMD)
According to Gartner, NPMD tools should provide performance metrics by leveraging full packet data and should support investigating network issues by analyzing packet traces. Enriched flow data, however, delivers accurate traffic statistics, visibility into L7 application protocols, and network performance metrics without a packet capture solution, so it is fully capable of covering NPMD use cases.
In reality, with the rise of encrypted traffic, heterogeneous environments, and ever-increasing network speeds, it is inevitable that flow will become the predominant approach in NPMD. The trend towards greater bandwidth strongly challenges legacy packet solutions. Consider that a fully utilized 10G backbone requires up to 108 TB of storage to retain 24 hours of traffic: 10 Gbit/s is 1.25 GB/s, or roughly 108 TB per day. This is a massive amount of data to collect, store, and analyze, making the process extremely expensive, if not impractical or impossible.
Alternatively, with enriched flow data, one needs just a fraction of that. Flow data for the same traffic requires around 250 GB of storage per day, enabling 30 days of history on a collector equipped with 8 TB of storage. By leveraging flow data, one can keep network operations tuned and troubleshoot network-related issues with a fraction of the resources.
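The arithmetic behind these figures, as a quick sanity check, assuming the worst case of a fully saturated link:

```python
# Worst case: a fully saturated 10 Gbit/s link captured for 24 hours.
link_gbps = 10
seconds_per_day = 24 * 60 * 60
packet_tb_per_day = link_gbps / 8 * seconds_per_day / 1000  # GB -> TB
print(f"Full packet capture: {packet_tb_per_day:.0f} TB/day")  # ~108 TB/day

# Flow statistics for the same traffic (figure from the text above).
flow_gb_per_day = 250
collector_tb = 8
retention_days = collector_tb * 1000 / flow_gb_per_day
print(f"Flow retention on {collector_tb} TB: {retention_days:.0f} days")  # ~32 days
```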
To troubleshoot a specific issue in an application protocol for which flow data offers no visibility, one can use a probe to access full packet data. Instead of only extracting metadata from the packets, one simply instructs the probe to run a packet capture task, time-limited and strictly focused on recording the packets relevant to the investigation. In practice, such on-demand selective packet capture is easy to handle even in a multi-10G environment, without the need for massive storage, as sketched below.
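Here is a minimal sketch of such a time-limited, filtered capture, using the Python scapy library; the interface name and addresses are hypothetical, and a commercial probe would expose this as a built-in task rather than a script.

```python
# On-demand, time-limited packet capture, narrowed by a BPF filter.
# Interface and addresses are hypothetical; capturing requires root privileges.
from scapy.all import sniff, wrpcap

packets = sniff(
    iface="eth0",                              # monitoring interface
    filter="host 192.0.2.10 and tcp port 22",  # only the traffic under investigation
    timeout=300,                               # stop after 5 minutes
)
wrpcap("investigation.pcap", packets)          # hand off to packet analysis tools
print(f"Captured {len(packets)} packets")
```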
Flow data for future dynamics
Enriched flow data technology has matured greatly, offering the granularity needed for solving network incidents, configuration issues, capacity planning, and more. Compared to continuous packet capture tools, enriched flow data solutions provide broad scalability, flexibility, and ease of use. As a result, flow data saves time, reduces MTTR, and lowers the total cost of network operations. Flow data also enables network behavior analysis and the detection of active attackers, including indicators of compromise, lateral movement, and APTs.
Organizations have gradually been replacing continuous packet capture technology with flow. The ongoing adoption of cloud, IoT, and SDN, together with the ubiquitous bandwidth explosion, will only accelerate this trend.