In fact, the so-called 'bandwidth hog' may simply be a made-up user category to bolster the business case for bandwidth caps on fixed-line connections. By Ian Scales.
Consultancy Diffraction Analysis (http://www.diffractionanalysis.com/) last month published the results of a number-crunching investigation undertaken following the posting of an article by Benoit Felten and Herman Wagter in late 2009.
Their quest has been to find the Higgs Boson of the Internet - the fabled bandwidth hog. Like the Higgs Boson, it had never been scientifically identified, but it HAD to exist because, without it, the entire case for data capping would collapse.
Now the numbers are in and, as the two consultants (and many of the rest of us) had suspected all along, the performance-disrupting bandwidth hog is a bit of a myth.
Quite simple really: Internet congestion is the result of contention for resources during peak periods - too many users using the same link at the same time. It's the Internet equivalent of the old phone switch 'busy hour'.
Many ISPs argue that looming peak-time congestion, as more and more video is streamed across the Internet, will force them to invest in infrastructure without commensurate return, since they will simply be supporting the data consumption habits of a few 'heavy' users. Cap the heavy users out of existence, it's implied, and the problem will be alleviated.
But the real problem here is that the multi-gigabyte data cap, which many ISPs have imposed, seems strangely ill-suited to alleviating the Internet busy hour. It is applied to total monthly consumption, so a heavy user could theoretically download gigabytes of data every night (well away from peak usage periods) and do nothing at all to increase congestion at those crucial times. A light user, on the other hand, could conceivably use the Internet only at peak (to check email and Facebook, say) and so contribute proportionally far more to congestion. In this case the 'guilty parties' could (just could) be the thousands of very light users, rather than the tens of very heavy ones.
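The mismatch can be shown with a back-of-the-envelope sketch. The hourly figures below are invented for illustration (they are not from the report): a heavy user who downloads only overnight adds nothing to the peak-hour load, while a light peak-only user adds all of their traffic to it.

```python
# Hypothetical daily usage profiles in GB per hour -- illustrative numbers,
# not data from the Diffraction Analysis report.
PEAK_HOURS = range(19, 23)  # say, 7pm to 11pm

def monthly_total(profile):
    """Total GB over a 30-day month for a repeating daily profile."""
    return sum(profile) * 30

def peak_contribution(profile):
    """GB this user puts on the shared link during peak hours each day."""
    return sum(profile[h] for h in PEAK_HOURS)

# 'Heavy' user: 5 GB/hour overnight (1am-5am), idle at peak.
heavy = [5 if 1 <= h <= 4 else 0 for h in range(24)]
# 'Light' user: 0.2 GB/hour, but only during peak hours.
light = [0.2 if h in PEAK_HOURS else 0 for h in range(24)]

print(monthly_total(heavy), peak_contribution(heavy))  # 600 GB/month, 0 GB at peak
print(monthly_total(light), peak_contribution(light))  # ~24 GB/month, ~0.8 GB at peak
```

A monthly cap of, say, 250 GB would punish the heavy user, who never touches the busy hour, and entirely spare the light user, who does.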
Then again, it could be that the main motivation for telco and cable operator data caps is to make heavy video streaming to connected TVs uneconomic when compared to the pay TV packages on offer. Just a thought.
But unless someone delves into the data, we'll never know whether there is a significant link between heavy total usage and peak-hour performance.
This was what Benoit and Herman wanted to find out.
They specified the data set they wanted to analyse and asked ISPs to contribute. They only got one data set to play with (so it's possible that other ISPs' data might yield significantly different results), but the one data set, carefully analysed, didn't identify any disruptive bandwidth hoggery.
According to part of Benoit's precis of the report:
"42% of all customers (and nearly 48% of active customers) are amongst the top 10% of bandwidth users at onepoint or another during peak hours.
6% of all customers (and 7.5% of active customers) are amongst the top 1% of bandwidth users at one point or another during peak hours.
Assuming that if disruptive users exist (which, as mentioned above we could not prove) they would be amongst those that populate the top 1% of bandwidth users during peak periods. To test this theory, we crossed that population with users that are over cap (simulating AT&T’s established data caps) and found out that only 78% of customers over cap are amongst the top 1%, which means that one fifth of customers being punished by the data cap policy cannot possibly be considered to be disruptive (even assuming that the remaining four fifths are).
Data caps, therefore, are a very crude and unfair tool when it comes to targeting potentially disruptive users. The correlation between real-time bandwidth usage and data downloaded over time is weak and the net cast by data caps captures users that cannot possibly be responsible for congestion. Furthermore, many users who are "as guilty" as the ones who are over cap (again, if there is such a thing as a disruptive user) are not captured by that same net.
In conclusion, we state that policies honestly implemented to reduce bandwidth usage during peak hours should be based on better understanding of real usage patterns and should only consider customers’ behavior during these hours; their behavior when the link isn’t loaded cannot possibly impact other users’ experience or increase aggregation costs. Furthermore, data caps as currently implemented may act as deterrents for all users at all times, but can also spur customers to look for fairer offerings in competitive markets."
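The cross-tabulation in the quoted passage boils down to a set intersection, sketched below with invented customer IDs (only the 78% figure comes from the report; the population sizes are made up for illustration):

```python
# Toy version of the report's crossing of two populations: customers over
# the monthly cap vs. the top 1% of peak-hour bandwidth users.
# Customer IDs and counts are invented; only the 78% share is from the report.
over_cap = set(range(100))                          # 100 customers exceed the cap
top1_peak = set(range(78)) | set(range(100, 200))   # top 1% of peak-hour users

in_both = over_cap & top1_peak
share = len(in_both) / len(over_cap)
print(share)  # 0.78 -- the remaining 22%, roughly one fifth, are over cap
              # yet never among the heaviest peak-hour users
```

That residual fifth is precisely the group the report says is "being punished by the data cap policy" while it "cannot possibly be considered to be disruptive".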
To read Benoit's entire executive summary of the report (or buy it) click here.