The Stack Archive

Uncovering the value of your network traffic data

Thu 28 Apr 2016

Big data

Alex Henthorn-Iwane, Kentik, discusses the power of rich data in boosting the speed and accuracy of network performance, and overall user experience…

Recently, the New York Times published an article about a rug designer named Luke Irwin living in the county of Wiltshire, England, who by chance discovered that his family home was on top of one of the richest Roman-era archaeological finds in recent memory.

When he hired a contractor to lay some electrical cables under his yard, they discovered an intricate mosaic floor of red, blue and white tiles only 18 inches down. The mosaic is part of a luxurious villa that was owned by an elite Roman family between A.D. 175 and 220.

Image: Wiltshire Archaeology Service

Oh, and that stone container Mr. Irwin had been planting geraniums in for years? It turns out that it was probably originally carved to be a child’s coffin.  Go figure.

According to the Times article, the find is of stunning value:

Historic England called the find “unparalleled in recent years,” in part because the remains of the villa, with its outbuildings, were so undisturbed, and it is hoping to get more funds for a more complete dig. It estimates that the villa had 20 to 25 rooms on the ground floor alone.

Network data value: Hidden and out of reach

Image: Jon Wilks

What does this have to do with network traffic data? As a company founded by network engineers and operators with decades of experience building and running some of the world’s biggest and most complex networks, we believe deeply in the power of data. Rich data, if made readily accessible to engineering and operations folks, not only makes routine tasks far faster and more accurate; it can also power innovation.

I’m not just talking about Innovation with a huge capital “I”, the kind that can seem so unattainable, like creating practical cold fusion-based energy. I’m talking about continuous improvements to the way you do digital business: improvements made possible by using rich data to unlock the institutional knowledge in network teams’ heads and turn it into valuable insights that improve network performance, reduce costs, and improve user and customer satisfaction. That sort of continuous innovation is what drives incremental improvements in user experience, achieves huge efficiencies over time, makes a business more competitive, enables new features that were previously infeasible, creates new revenue streams, and increases profits.

Rich, accessible data isn’t just good for the business in some cold, isolated way. It makes teams happy, because they can accomplish their tasks and go beyond them to drive the business forward, on a routine basis. It can help turn drudgery into passion, excellence and creativity.

The problem is that the vast majority of the value of network data has been hidden, because previous approaches to network traffic and other data like BGP, GeoIP, etc. have been so limiting. The result is that, like the aforementioned family living above, yet never accessing, the riches hidden beneath them, too many network organizations are separated from the true value of their network data. And too many network managers and operators are trapped in a whack-a-mole existence, with insufficient data to make decisions and insufficient tools and resources to close the gap.

Traditional network traffic analysis: Too shallow

Traditional network traffic analysis tools, primarily built on appliances, text files or SQL databases, reduce rich, raw data into a few indexed tables while discarding the details. They’re too limited, slow and costly to get you even 18 inches down, as it were, to the true value of your network data. Sure, you can get a few geraniums’ worth of value – some pretty graphs good only for summary views. But without any real depth of analysis, that’s mostly where it stops. For the practitioners who have to operate, engineer, and improve service delivery, shallow data is a bit of a curse. To criminally mix metaphors…

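To make the “summary tables only” problem concrete, here is a minimal Python sketch – the flow records and field names are purely illustrative, not any particular vendor’s schema – of how a traditional tool rolls raw flow records up into a single top-talkers table and throws the underlying detail away:

```python
from collections import Counter

# Hypothetical raw flow records (NetFlow-style fields plus byte counts).
# Field names here are illustrative, not any particular vendor's schema.
raw_flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9",  "dst_port": 443, "proto": "tcp", "bytes": 52000},
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.2", "dst_port": 53,  "proto": "udp", "bytes": 1200},
    {"src_ip": "10.0.0.7", "dst_ip": "203.0.113.9",  "dst_port": 443, "proto": "tcp", "bytes": 91000},
]

# A traditional tool keeps only a pre-indexed roll-up, e.g. bytes per source IP...
top_talkers = Counter()
for flow in raw_flows:
    top_talkers[flow["src_ip"]] += flow["bytes"]

print(top_talkers.most_common(10))

# ...and then discards raw_flows. A follow-up question that needs the dropped
# fields ("which destination ports drove 10.0.0.7's traffic?") can no longer
# be answered from the summary table alone.
```

Once the roll-up is all that is kept, drilling back down to the detail simply isn’t possible.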

Typical big data solution: Too slow or costly

Big data solutions can get at that data, but they are either prohibitively slow for operational use (MapReduce) or prohibitively costly to deliver raw data ingest and ad-hoc analytics in operational time frames (Spark, ELK). Plus, in the open-source big data case, you have to build the user-friendly interface for network data analysis yourself; otherwise its utility is limited to a tiny cadre of specialists rather than the broad set of users you need to earn a real return on all that hard work, capital and operational expense. For all their promise, big data solutions that you have to build on your own can be a scary (business) proposition to bring home.

Uncover the value of network data – fast!

We must uncover and unlock the value of network data. This requires depth – the ability to retain and dig instantly into massive volumes of raw data details. It requires unconstrained data exploration – no stingy indexes, no fragile BI data cubes, just the ability to perform any ad-hoc query on any subset of your data. It requires speed – the answers to those queries need to come back in a few seconds. It requires fast time to value – progressing from sign-up to using that data in fifteen minutes (or less!). And it has to be open and affordable to organizations of all sizes.
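
As a rough illustration of what “unconstrained exploration” means in practice, here is a minimal Python sketch – the field names and the adhoc_query helper are hypothetical, not Kentik’s actual query interface – of an ad-hoc query over retained raw flow records: any filter, any grouping, no pre-built index.

```python
from collections import defaultdict

# Hypothetical raw flow records; field names are illustrative only.
raw_flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9",  "dst_port": 443, "proto": "tcp", "bytes": 52000},
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.2", "dst_port": 53,  "proto": "udp", "bytes": 1200},
    {"src_ip": "10.0.0.7", "dst_ip": "203.0.113.9",  "dst_port": 443, "proto": "tcp", "bytes": 91000},
]

def adhoc_query(flows, where, group_by, metric="bytes"):
    """Filter raw records with an arbitrary predicate and sum a metric,
    grouped by any combination of fields – no pre-built index required."""
    totals = defaultdict(int)
    for flow in flows:
        if where(flow):
            key = tuple(flow[field] for field in group_by)
            totals[key] += flow[metric]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# A question nobody anticipated when the data was collected:
# which destination IP/port pairs carry the most TCP bytes?
print(adhoc_query(raw_flows,
                  where=lambda f: f["proto"] == "tcp",
                  group_by=("dst_ip", "dst_port")))
```

Because the raw records are kept, the same helper answers tomorrow’s question just as easily as today’s – you only change the predicate and the grouping.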

Tags:

data feature networking