SEPTEMBER 5, 2019

Lately, it seems like software gets all the love. Innovations like SDN, SD-WAN, intent-based networking, containerization and service mesh have taken center stage over the past few years. But even the coolest software needs to run on something—and fortunately, we’re entering a period of hardware innovation that will help networking professionals make the most of these advances.

Rather than a single revolutionary change, hardware is advancing on four important fronts:

  • SmartNICs are boosting network performance and efficiency by offloading compute-intensive networking jobs, much as adding a graphics card to a PC boosts its game-playing power without slowing down other applications.
  • New programmable chips give networking practitioners the ability to quickly adapt to emerging problems and opportunities.
  • Adoption of 400-gigabit Ethernet will provide a generational leap in raw network bandwidth.
  • The ongoing migration from hardwired three-tier networks to flatter “leaf-spine” topologies allows packets to take the quickest available path to their destinations.

Together, these innovations add up to an important evolution in the physical infrastructure of networks, boosting efficiency, security, flexibility and reliability. “Although we’re increasingly moving to software brains, we still need to assure that the hardware is capable of meeting these very demanding requirements,” says IDC analyst Brad Casemore.

The changes could also drive a historic shift in the balance of power between enterprise customers and their networking vendors, says Greg Ferro, host of the Packet Pushers, a podcast and media site for networking professionals. Specifically, the increased popularity of field-programmable NICs and the P4 programming language is giving customers the flexibility to reconfigure and reprogram network devices from any vendor on the fly, rather than be limited by the capabilities baked into the gear by its creator, he says.

It’s a good thing, too. To get the benefits of software trends such as cloud computing, virtualization, and containerization, companies need to transform the way they write and deploy applications. But such advances don’t happen in a vacuum. While hyperscale cloud providers and sophisticated enterprise IT groups have made great strides in implementing SDN and NFV to control, accelerate and optimize the flow of network traffic, they’ve often had to cobble together custom hardware to realize real-world improvements.

Skyrocketing user demand often leads to east-west bottlenecks as applications try to connect with the hundreds or thousands of servers and microservices they draw on to process requests. Faced with such complexity, these companies need to squeeze every last ounce of performance and efficiency out of the hardware, especially given mounting signs that Moore’s Law is running out of steam.

“In some ways, hardware is becoming more important than ever,” says Jennifer Rexford, a co-founder of the P4 Language Consortium and the Gordon Y. S. Wu Professor in Engineering at Princeton University. “We’ve already seen the rise of domain-specific hardware for doing things like machine learning. I think we’ll see the same thing happen to make networks programmable.”

Here’s a closer look at these key hardware trends and the role each can play in addressing networking issues.

SmartNICs

A SmartNIC is a network adapter that offloads processing tasks that a server or storage CPU would normally handle. Using its own on-board processor, a SmartNIC can take over a variety of lower-level networking functions, including encryption/decryption, firewall, TCP/IP and HTTP processing, plus network, storage or GPU virtualization.

“We’re seeing SmartNICs being deployed in a lot of data centers right now,” says Casemore, with the cards often acting as a sort of co-processor in highly virtualized and containerized environments. By shifting some of the strain off server CPUs, organizations are reducing energy costs, avoiding the expense of additional servers, and generally improving data center performance.

There are two basic types of SmartNICs: ASIC-based and FPGA-based models. ASICs are, by definition, application-specific, which means that once the vendor delivers them, their functionality is fixed for the life of the device.

Many analysts see the most promise in field-programmable gate arrays (FPGAs), which let customers program their NICs in response to changing conditions and requirements. In fact, FPGA-based NICs give organizations the ability to completely repurpose a device over the course of its lifecycle. Such programmability comes at a price, since an ASIC engineered to do one particular job will typically deliver higher performance at lower cost. But both Microsoft and Amazon have already developed and deployed their own FPGA-based SmartNIC technologies to help ease traffic bottlenecks in their cloud data centers.

Programmable Chips

The shift towards programmable networking technology is the most nascent of the four trends, but it may ultimately be the most impactful, says Rexford. While networking groups have in the past been limited to the capabilities hardwired into commercial equipment by vendors, now they will be able to program their networks to meet ongoing needs: to research and respond to a new kind of denial-of-service attack, say, or to set a new policy that slows delivery of Netflix movies to students in order to speed up delivery of time-sensitive research data.

Such programmability has been a holy grail since the mid-1990s, but progress picked up in 2014, when the P4 Language Consortium began work on a programming language built just for networking (as opposed to general-purpose application languages such as JavaScript or Python). While SDN typically refers to how companies control the routers, switches and other forwarding elements in their networks, P4 lets operators specify how packets are processed. Gear can be programmed to support new protocols or remove ones that have become obsolete, and network engineers can not only see problems as they emerge but respond to them.
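
To make the idea concrete, here is a rough sketch, in plain Python rather than P4 itself, of the match-action abstraction at the heart of the language: a table maps fields parsed from a packet to a forwarding action, and the control plane populates that table at runtime. All names and table entries below are hypothetical, and a real P4 program would be compiled onto forwarding hardware rather than run as application code.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        dst_mac: str       # parsed Ethernet destination address
        ether_type: int    # parsed EtherType field
        out_port: int = 0  # egress port chosen by the pipeline (0 = drop)

    # A match-action "table": exact-match key -> (action, parameter).
    # On a real switch, the control plane installs these entries at runtime.
    l2_forward = {
        "aa:bb:cc:dd:ee:01": ("forward", 1),
        "aa:bb:cc:dd:ee:02": ("forward", 2),
    }

    def ingress(pkt: Packet) -> Packet:
        """Apply the l2_forward table to a parsed packet."""
        action, port = l2_forward.get(pkt.dst_mac, ("drop", 0))
        pkt.out_port = port if action == "forward" else 0
        return pkt

    print(ingress(Packet("aa:bb:cc:dd:ee:02", 0x0800)))  # forwarded to port 2
    print(ingress(Packet("ff:ff:ff:ff:ff:ff", 0x0806)))  # no match, so dropped

What P4 adds is the ability to define the parser, tables and actions themselves and compile them down to a switch ASIC, FPGA or SmartNIC, so that behavior like this is no longer fixed by the equipment vendor.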

As with almost everything else, the hyperscale cloud crowd has been doing the most so far on the programmability front. For example, Microsoft has been deploying its custom-built FPGA-based SmartNICs on all new Azure servers since 2015. And in March, the P4 Language Consortium joined up with the Open Networking Foundation and the Linux Foundation to accelerate adoption of P4.

Now, commercial vendors are starting to get serious about making these capabilities available to enterprise customers. In June, chip giant Intel bought Barefoot Networks, the leading provider of chips designed to run software created with P4. A few days later, Broadcom introduced the latest version of its own programmable networking chip, the Trident 4.

It won’t happen overnight, but over time, adoption of P4 and programmable hardware will help businesses avoid vendor lock-in and future-proof their investments, because they will be able to reprogram equipment as networking requirements change, says Ferro.

The combination of P4-based software and SmartNICs could be especially powerful in situations where adoption of containerization is creating network bottlenecks, says Ferro. He says it would not be unusual for a company to run 100 containers on a single physical server, creating large amounts of east-west traffic struggling to get in and out. Offloading traffic to a SmartNIC breaks up that logjam, and P4-based programming lets you set policies to manage and orchestrate traffic more efficiently, say, to prioritize videoconferencing sessions over social media surfing.
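
As a rough illustration of that kind of policy, the short Python sketch below (with hypothetical port numbers and queue assignments) classifies flows into priority queues so that videoconferencing outranks general web surfing. On a programmable target, the equivalent logic would live in a match-action table that tags each packet with a queue or DSCP value.

    # Hypothetical traffic-class policy: map flows to priority queues.
    HIGH, NORMAL, LOW = 0, 1, 2  # queue IDs, 0 = highest priority

    def classify(dst_port: int) -> int:
        """Assign a priority queue based on a flow's destination port."""
        if dst_port in (3478, 3479):  # STUN/TURN ports commonly used by video calls
            return HIGH
        if dst_port in (80, 443):     # generic web traffic, including social media
            return LOW
        return NORMAL

    for port in (3478, 443, 22):
        print(port, "->", ("HIGH", "NORMAL", "LOW")[classify(port)])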

400 Gigabit Ethernet

Just because new kinds of innovation are coming to networking doesn’t mean the old ones end. It appears this blossoming of hardware innovation will coincide with the next generational shift in data center switching, from gear that moves traffic between servers at 100Gbit/sec to gear that operates at 400Gbit/sec. The IEEE approved the standard in 2017, and analysts expect products to be commercially available by 2020. The new 400Gbit/sec gear is not only four times faster than 100Gbit/sec equipment, but also offers better economies of scale, since this generation can be configured more densely. That’s in part because the IEEE also okayed a 200Gbit/sec spec based on the same underlying technology as the 400Gbit/sec spec, giving companies that aren’t ready to make the leap to full 400Gbit/sec performance a cheaper, less disruptive alternative.
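
Some back-of-the-envelope arithmetic shows where those economies of scale come from. Assuming a 32-port fixed-configuration switch (an illustrative figure, not a specific product), moving from 100GbE to 400GbE quadruples the bandwidth in the same footprint, and each 400Gbit/sec port can typically be broken out into four 100Gbit/sec links:

    # Illustrative bandwidth-density math for a single fixed-form-factor switch.
    ports = 32  # hypothetical port count, not a specific product

    print(f"100GbE switch: {ports * 100 / 1000:.1f} Tbit/sec total")  # 3.2
    print(f"400GbE switch: {ports * 400 / 1000:.1f} Tbit/sec total")  # 12.8
    print(f"100G links via 4x100G breakout: {ports * 4}")             # 128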

Of course, that leap is inevitable. Casemore says the demand for 400Gbit/sec is driven by the needs of the hyperscale cloud service providers, but early adopters will also include major telcos, large SaaS providers, major financial services companies, and any organization with a specific high-traffic requirement, such as massive AI workloads or video-on-demand. Looking farther out, Casemore predicts that we will eventually see 800Gbit/sec or even 1.6Tbit/sec Ethernet, once technical challenges like the design and manufacture of onboard optics are overcome.

Leaf-spine topology

As individual devices become more flexible and efficient, so do the basic topologies network architects use to lay out their networks. For decades, companies used a rigid three-tier approach: all traffic moved from an access switch at the edge to an aggregation level and finally to the core of the network, even though this path was rarely the most efficient way to get packets to their destination. In recent years, more companies have been moving to a two-layer “leaf-spine” topology, in which every leaf switch connects to every spine switch. This architecture minimizes the number of hops required for traffic to reach its destination, and it lets companies build out their networks as they go, rather than paying up front for a few large switches.
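
A toy model makes the hop-count argument concrete. The Python sketch below builds a deliberately tiny, hypothetical version of each topology and counts the links a packet crosses between servers in different racks:

    from collections import deque

    def add_link(graph, a, b):
        """Add an undirected link between two nodes."""
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    def hops(graph, src, dst):
        """Breadth-first search: links crossed on the shortest path."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == dst:
                return dist
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, dist + 1))
        return None

    # Three-tier: access uplinks to aggregation; aggregation meets at the core.
    tree = {}
    for a, b in [("server1", "access1"), ("access1", "agg1"), ("agg1", "core"),
                 ("core", "agg2"), ("agg2", "access2"), ("access2", "server2")]:
        add_link(tree, a, b)

    # Leaf-spine: every leaf connects to every spine.
    fabric = {}
    add_link(fabric, "server1", "leaf1")
    add_link(fabric, "server2", "leaf2")
    for leaf in ("leaf1", "leaf2"):
        for spine in ("spine1", "spine2"):
            add_link(fabric, leaf, spine)

    print("three-tier:", hops(tree, "server1", "server2"), "links")   # 6
    print("leaf-spine:", hops(fabric, "server1", "server2"), "links") # 4

In the fabric, any server is one leaf, one spine and one more leaf away from any other, and adding capacity means adding spine switches rather than forklift-upgrading a core.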

The technology to support the shift to leaf-spine is also accelerating. Amazon and Google have developed their own high-speed networking fabric switches just for this purpose, and analysts say it won’t be too long before Broadcom, a leader in this market, brings out its first fabric chip that runs at more than 10 terabits per second.

Putting it all together

These aren’t the only trends happening in networking hardware. With traffic growth showing no signs of slowing, but with definite signs that Moore’s Law may be doing just that, hardware makers are going to have to become far more innovative in how they combine technologies to answer the call. The Amazons, Googles and Microsofts of the world will no doubt be building bespoke new form factors to achieve performance that general-purpose devices won’t be up to, says Rexford.

Now, with increased adoption of SmartNICs, programmable chips, 400GbE hardware and flatter topologies, organizations have the opportunity to build programmability into the entire network stack, all the way down to the silicon. There’s little doubt that, done right, these technologies add up to a rock-solid foundation for initiatives such as hybrid cloud, edge computing, the Internet of Things and artificial intelligence.

Of course, making these transitions requires investment, new skills and an openness to change. Some experts worry that, at the end of the day, adding programmability will only make networks more complex and difficult to manage. But given how much more is being asked of enterprise networks these days (deliver more services at lower cost, integrate with the cloud, serve as a bulwark against cyberattacks), chances are that yesterday’s familiar infrastructure won’t be up to the job for much longer.