TI E2E Community
100 Gbps Enterprise Trends
I remember the day the Internet (then ARPANET) found me… or should I say, I found it. I was a young engineering student working for a telecommunications company. The year was 1983, and 10 Mbps Ethernet (half-duplex) was the state-of-the-art computer interconnect. Back then, Ethernet was brand new and used large coaxial cables with “vampire” taps that pierced the outer insulator and shield. Taps could be placed no closer than a meter apart, to keep signal reflections from confusing the collision-detection circuitry. It was so much faster than anything I had seen at the time, and extremely expensive: a single NIC priced out at over $1000 U.S. (in 2012 dollars that would be around $2500 each). But my personal connection speed was nowhere near 10 Mbps… it was a mere 1200 bps (yes… 1200 bits per second). It was a dial-up line using an FSK modem connected to my University’s VAX computer, which allowed (limited) access to one of only a few nodes that existed at the time. That year, TCP/IP replaced the original NCP (Network Control Program) protocol, becoming the primary transport of the Internet.
So why the networking nostalgia? In the last 30 years, what was originally a connection of roughly 1000 nodes (mostly government, military and university sites) has exploded into billions of connected devices with bandwidth that would have been considered science fiction at the time. Many of these devices are mobile and have more computing power than supercomputers of the 1980s that would fill a floor of a building, yet they draw less energy than a light bulb. There is obviously a trend here… and Cisco tracks it every year. Their Cisco Visual Networking Index (VNI) tracks the growth and span of the Internet, and the current prediction is that the compound annual growth rate for IP traffic through 2016 will exceed 29%. The major driver of this traffic is now M2M, or Machine-To-Machine, connections - things talking to other things without human interaction.
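To get a feel for what a 29% compound annual growth rate actually implies, here is a quick back-of-the-envelope sketch. The 29% figure is from the Cisco VNI prediction cited above; the 4-year span is just an illustration of the 2012-2016 window.

```python
# Back-of-the-envelope: cumulative growth implied by a 29% CAGR.
# The rate comes from the Cisco VNI prediction; the 4-year span
# (roughly 2012 through 2016) is an illustrative assumption.
cagr = 0.29
years = 4
multiplier = (1 + cagr) ** years
print(f"Traffic multiplier after {years} years: {multiplier:.2f}x")
```

In other words, sustained 29% annual growth nearly triples total IP traffic in four years - which is why the infrastructure has to keep scaling.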
The Need for Speed
Take a look at the figure below:
[Figure: progression of Ethernet LAN connection speeds over time]
This shows the progression of Ethernet local area network (LAN) connection speeds. It is not completely accurate in that the original standards were half-duplex; the newer standards are full-duplex, allowing simultaneous receive and transmit capability, effectively doubling the capacity. This trend is being driven by the service providers that handle all of the aggregated access to the Internet – those web pages live somewhere, and search engines and servers must connect to something. I have written blogs about how much energy you consume when you click on a browser link (see: “The True Cost of an Internet Click”), and it may seem trivial, but when you multiply that by billions of clicks, the number suddenly grows gigantic… and it is continuous! There is always a time zone where people are accessing some service found on the World Wide Web (thus the name).
As more people get connected, providers are looking for technology to make their “services” faster while not consuming additional power. Also, all those M2M connections run at the speed of embedded computers (not people clicking a browser), so the system bandwidth must accommodate that load increase as well. As pointed out by the Cisco study, this trend is growing rapidly, which is driving not only the service providers but the equipment suppliers as well. “Green” initiatives to use less power have spawned “Energy Czar” positions at many large Internet companies, whose function is to manage everything from the procurement of equipment to the power purchased to run it, all in an effort to reduce the company’s carbon footprint. This is serious business, since there are tax credits at stake that directly affect the bottom line.
However, in the end it’s all about how fast and efficiently you can move information. In this age, I get annoyed when I need to wait more than a second for a web page to load, or more than 30 seconds to transfer a large file. I’m not alone, and to provide this level of service, bandwidth to homes and businesses has grown into the tens of megabits per second. If you multiply out millions of homes and businesses at 10 megabits per second or more (I personally have a 30 Mbps download rate), you can see that all of that bandwidth aggregates into extremely large volumes of traffic. What is interesting to note is that even with all the external traffic, much of the traffic moving through switches and servers stays within the data centers, or flows from one data center to another. This is the unseen traffic that is the consequence of an external transaction such as paying a bill or selling stock online. There may be several of these internal flows for every user transaction, multiplying the traffic once again.
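The aggregation arithmetic above is easy to sketch. All of the numbers here are illustrative assumptions (subscriber count, per-subscriber rate, and internal-flow multiplier), not data from any provider:

```python
# Illustrative aggregation math: a million subscribers at tens of Mbps
# each adds up to terabits per second of potential access traffic,
# before counting the hidden data-center flows each transaction spawns.
# All values below are assumptions for illustration only.
subscribers = 1_000_000          # assumed subscriber count
rate_mbps = 30                   # assumed per-subscriber rate, Mbps
aggregate_tbps = subscribers * rate_mbps / 1e6  # Mbps -> Tbps
print(f"Aggregate access bandwidth: {aggregate_tbps:.0f} Tbps")

# Each user transaction may trigger several internal data-center flows:
internal_multiplier = 3          # assumed internal flows per transaction
print(f"With internal traffic: ~{aggregate_tbps * internal_multiplier:.0f} Tbps")
```

Even with these modest assumptions, the potential load lands in the tens of terabits per second, which is why the internal fabric has to scale faster than the access links.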
All of this has pushed enterprise interconnection speeds past 1 Gbps per lane to 10 Gbps today. The “Small Form-factor Pluggable” or SFP+ standard has expanded to the Quad SFP or QSFP connection, with 4 transmit and 4 receive lanes for an aggregate full-duplex bandwidth of 80 Gbps - but it doesn’t stop there. Lanes are now being pushed to 16 Gbps and soon to 25 Gbps. The primary transport is optical at these speeds, but the majority of interconnects within (or adjacent to) a rack are less than 5 meters. These connections are now moving to “Active Cables” - cabling that incorporates active signal conditioning to compensate for the loss and jitter induced by the copper wires. Soon, the physical layer for copper wire may move away from binary NRZ data to multilevel signaling (see my post, “Will Binary Communications Survive”). Physics limits how fast we can go over copper wires, and the primary culprit is noise. The Shannon-Hartley Capacity Theorem (shown below) states that the capacity of a channel is directly related to the channel bandwidth (B) and the signal-to-noise ratio (S/N).
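As a minimal sketch of why multilevel signaling is attractive on bandwidth-limited copper, compare the symbol (baud) rates that binary NRZ and a 4-level scheme such as PAM-4 need for the same bit rate. The 25 Gbps figure is the lane rate mentioned above; PAM-4 is used here purely as the illustrative multilevel example:

```python
import math

def symbol_rate_gbd(bit_rate_gbps, levels):
    """Symbol (baud) rate needed for a given bit rate, where each
    symbol carries log2(levels) bits."""
    bits_per_symbol = math.log2(levels)
    return bit_rate_gbps / bits_per_symbol

# NRZ is 2-level (1 bit/symbol); PAM-4 is 4-level (2 bits/symbol).
for levels, name in [(2, "NRZ"), (4, "PAM-4")]:
    print(f"{name}: 25 Gbps needs {symbol_rate_gbd(25, levels):.1f} GBd")
```

Halving the symbol rate halves the channel bandwidth the copper must support - the trade-off is that the levels are closer together, so the link needs a better signal-to-noise ratio, which brings us back to Shannon.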
C = B·log2(1+S/N)
A rigorous proof is beyond the scope of a blog post, but the key consequence can be seen with a little math. In the end, there is a finite capacity, called the Shannon Limit, that depends on the SNR even if the bandwidth is infinite, because noise power increases along with channel bandwidth. One can see that the physical world has handed engineers another problem to work around if we are going to move information even faster.
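The saturation effect is easy to demonstrate numerically. A minimal sketch of the Shannon-Hartley formula, assuming a fixed signal power S and a flat noise power spectral density N0 (both values are arbitrary, chosen only for illustration):

```python
import math

# Shannon-Hartley: C = B * log2(1 + S/N), with noise power N = N0 * B,
# so noise power grows with bandwidth. S and N0 are illustrative values.
S = 1.0        # signal power (W, illustrative assumption)
N0 = 1e-9      # noise power spectral density (W/Hz, illustrative assumption)

def capacity(B):
    """Channel capacity in bits/s for bandwidth B in Hz."""
    return B * math.log2(1 + S / (N0 * B))

for B in [1e6, 1e8, 1e10, 1e12]:
    print(f"B = {B:8.0e} Hz -> C = {capacity(B)/1e9:8.3f} Gbps")

# As B -> infinity, C approaches the Shannon Limit, (S/N0) * log2(e):
limit = (S / N0) * math.log2(math.e)
print(f"Shannon Limit: {limit/1e9:.3f} Gbps")
```

Running this shows capacity climbing quickly at first, then flattening toward the limit: past a certain point, adding bandwidth alone buys almost nothing, because the extra bandwidth admits extra noise.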
If you’d like to read more detail on this subject, see my column in Electronic Design Magazine entitled, “The Enterprise Prepares For Life Beyond 100 Gbits/s”. And as always, comments are welcome. Till next time…
Email me at: email@example.com or visit the entire content of the blog at http://ti.com/energyzarr