3 Things You Need to Know About Deep Buffering

A buffer is used to absorb bursts in activity and ensure that a system runs smoothly; buffering is used everywhere, even in some places you may not expect!

By Ken Jinks | May 21, 2015

There continues to be a lot of talk in the industry about buffering, deep buffering, and the usefulness and benefits of both. As a primer on the situation, I’ve distilled three of the big things you should know about deep buffering.

1. Buffering is not new - but it is high time for some innovation.

Buffering, as a concept, has been around for a long time. As a very basic definition, a buffer is anything used to absorb bursts in activity and ensure that a system runs smoothly. Buffering is used everywhere, even in some places you might not expect, like production line manufacturing. Imagine a factory that produces one widget every minute. Various types of buffering can be put into place to ensure the factory continues to produce at maximum efficiency.

Smaller buffers can save costs: holding less inventory in the supply chain (i.e. widget-making material) means lower costs, but can lead to shortages if there is an unexpected issue early in the supply chain. Conversely, large buffers of inventory in the supply chain avoid production bottlenecks caused by shortages, but can be very wasteful in terms of the costs incurred.

Similarly, buffers have been used in the cores of large networks for a long time. They allow the network to accommodate some level of variation in traffic, which ensures good delivery of application traffic. Simply put, a buffer is a reserved portion of memory where “overflow” data can be held temporarily and transmitted when network traffic levels allow. The depth of these buffers has historically been kept very small, meaning that if the buffers overrun and data is lost, the only thing most applications can do is rely on control protocols to have the data retransmitted.
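To make that concrete, here is a minimal sketch of a bounded buffer in action - the buffer depth, drain rate, and arrival pattern are all illustrative assumptions, not taken from any real device. Packets arrive in bursts, drain at a fixed rate, and anything that overruns the buffer is dropped and left to a control protocol to retransmit:

```python
# A minimal sketch of how a bounded buffer absorbs bursts. All numbers
# here are hypothetical, chosen only to illustrate the behaviour.
from collections import deque

BUFFER_DEPTH = 8      # buffer capacity, in packets (illustrative)
DRAIN_PER_TICK = 2    # packets the link can transmit per tick (illustrative)

buffer = deque()
dropped = 0

# Hypothetical bursty arrival pattern: packets arriving in each tick.
arrivals = [1, 1, 10, 9, 0, 0, 1, 0, 0, 0]

for tick, n in enumerate(arrivals):
    for _ in range(n):
        if len(buffer) < BUFFER_DEPTH:
            buffer.append(tick)   # room available: absorb the burst
        else:
            dropped += 1          # overrun: rely on retransmission upstream
    for _ in range(min(DRAIN_PER_TICK, len(buffer))):
        buffer.popleft()          # transmit when traffic levels allow

print(f"dropped {dropped} packets; {len(buffer)} still buffered")
```

With a depth of 8 packets, the burst in the middle of this trace overruns the buffer and 9 packets are dropped; raise BUFFER_DEPTH to 20 and the same trace passes through without a single loss.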

However, new solutions are now being recognized for their efficiencies and advantages. Our partner Arista has introduced the concept of deep buffering into the network, and has recently released a whitepaper describing its benefits, especially in terms of throughput for applications in big data environments.

2. The current industry approach is not sustainable.

Currently, network data analytics products quote their analysis rates or packet capture rates in packets per second or gigabits per second. Customers purchase products that they know can handle their peak 1-second network traffic rates, to ensure that everything is captured and analysed. The challenge is that it’s typically during these peak periods that you most need the analytics. So you end up purchasing your network data analysis product based on the peak demands of your infrastructure.
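It’s worth seeing just how far apart those two numbers can be. Here is a back-of-the-envelope sketch - the per-second trace is made up for illustration, not measured data - comparing the worst 1-second rate against the 60-second average for a bursty minute of traffic:

```python
# A minimal sketch of peak-based vs. average-based sizing. The trace
# values are illustrative, not measured data.
per_second_gbps = [10] * 58 + [40, 40]   # mostly 10 Gbps, with a 2-second 40 Gbps burst

peak_1s = max(per_second_gbps)
avg_60s = sum(per_second_gbps) / len(per_second_gbps)

print(f"1-second peak:  {peak_1s} Gbps")      # size for this: a 40 Gbps product
print(f"60-second avg:  {avg_60s:.1f} Gbps")  # size for this: an ~11 Gbps product
```

Sized for the peak, you buy a 40 Gbps product; sized for the average, an appliance around a quarter of that capacity would do - provided something can buffer the burst, which is exactly where deep buffers come in.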

Sizing your infrastructure - firewalls, switches, load balancers, servers, and so on - for this peak demand makes perfect sense. These demands are driven by business needs, and as the business grows and peak rates increase, infrastructure upgrades are justified. However, sizing your analytics to match the infrastructure’s peak rate can no longer be justified. Peaks are constantly growing and changing as application usage and deployments change. Having to continually justify budgets to upgrade your network analytics and troubleshooting tools based on these peaks is not sustainable.

3. Deep buffers will save you money.

Not many current network analytics solutions take the approach of buffering for long periods of time. The thinking seems to be that, in a worst-case scenario, the network may burst to a 40 Gbps line rate for a couple of seconds - so why would you need a deep buffer?

It’s commonly understood that 1-second peak rates are much larger than 60-second averages. For example, traffic that bursts to 40 Gbps for a couple of seconds may only average 10 Gbps over the course of 60 seconds. With deep enough buffering in your network data and analytics product, you can deploy a product that sustains capture and analysis at the 60-second average rate instead of the 1-second peak rate. This means a mid-level network data appliance that can sustain 10 Gbps of packet analysis, along with deep buffers, can replace four similar appliances sized for the 1-second peak with minimal buffering.
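The arithmetic behind that claim is worth spelling out. Here is a quick sketch using the example numbers above; the only assumption is that whatever arrives faster than the appliance’s sustained analysis rate must sit in the buffer until the burst passes:

```python
# Back-of-the-envelope buffer sizing, using the article's example numbers:
# how much buffer lets a 10 Gbps appliance ride out a 40 Gbps, 2-second burst?
burst_rate_gbps = 40    # 1-second peak rate
sustain_gbps = 10       # appliance's sustained analysis rate
burst_seconds = 2       # duration of the burst

# Data arriving faster than it can be analysed accumulates in the buffer.
excess_gbits = (burst_rate_gbps - sustain_gbps) * burst_seconds
buffer_gbytes = excess_gbits / 8

print(f"buffer needed: {excess_gbits} Gbit = {buffer_gbytes:.1f} GB")
```

Roughly 7.5 GB of buffer memory rides out the burst - a far cheaper proposition than quadrupling sustained analysis capacity to cover a peak that lasts a couple of seconds.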

Deep buffers can have a huge effect on the cost of capturing and analysing 100% of your network data during periods of peak congestion. The good news is that buffer memory keeps getting cheaper - certainly when compared to buffering deep levels of inventory in your supply chain to feed your widget production line!

Stay tuned for further parts in this series.


Ken Jinks, Director, Product Management, Corvil
Corvil is the leader in performance monitoring and analytics for electronic financial markets. The world’s financial markets companies turn to Corvil analytics for the unique visibility and intelligence we provide to assure the speed, transparency, and compliance of their businesses globally. Corvil watches over and assures the outcome of electronic transactions with a value in excess of $1 trillion, every day.
@corvilinc
