
Innovation Briefing Issue 9 | Cheap as Chips

  • 19 minute read
  • Published by Crispin Moller on 16 Jun 2022
  • Last modified 8 Jun 2022
In the first years of the century, vast data centres began to take on many computing tasks traditionally handled in offices.

They did so typically on generic PC hardware, on racks and blades. Other opportunities opened up too, as more high-capacity data-carrying fibre was installed. The result of these two factors – cheap hardware and fibre – is only just being felt in the network marketplace today. Network services are becoming more fungible, more capable and cheaper, and it’s finally reached the world of mobile too.

Around a decade ago, mobile networks began to think about how they could take advantage of these two trends: fibre and ever-cheapening hardware. The MNOs’ own network centres still relied on specialised but expensive core networking equipment from companies like Cisco and Brocade. The radio end, meanwhile, came from four or five vendors including Nokia and Ericsson, who provided a range of well-integrated, specialised but proprietary equipment too.

Yet many industrial sectors had already felt the impact of COTS (commercial off-the-shelf) hardware, also known as ‘white box’ computing or ‘merchant silicon’ – typically a PC, a modified PC, or some other assembly of generic PC components.

The large operators embarked first on the process of moving key parts of their core network – often described as the ‘brains’ of the network – onto cheaper data centre style hardware. That work continues, as the relentless demand to discover cost efficiencies continues unabated. Network managers can re-fashion their networks not only to save money, but also to add new features. As an example of the former, fibre allows them to consolidate boxes that might be in different places into one or two. As an example of the latter, fast connectivity allows them to put more computing power closer to the user, at the ‘edge’ of the network. 5G was developed with such potential benefits in mind: parts of the specification explicitly require more computing at the edge, cutting the number of milliseconds a network user must wait for a round trip.

In short, networks are finally opening up, which means that the long-promised cost savings from commodity hardware that IT departments have reaped for many years should now be accessible to many more network customers. No one is more keenly aware of how dependent they have become on specialist and expensive equipment than the world’s MNOs, which had come to rely on just three end-to-end providers for a radio network: Nokia, Ericsson and Huawei.

“It’s taken place over a long period of time, and you can trace it back twenty years to the 2G networks,” explains Dan Warren, Director of Advanced Network Research at Samsung and formerly of the GSM Association. “To optimise the performance of the network, you needed tight integration between the hardware and the software. That led to vendor lock-in – whoever you bought the network from, you had to buy the network management system from too. And that’s how certain vendors became massive. It was difficult for any vendor to break in and make it a multivendor system.”

Of course, for many markets, including the UK, that choice of three RAN vendors became two. Today, supply chain security is taken seriously by policy makers.

“Even then it was acknowledged that it presented a risk,” says Warren.

Hence the enthusiasm for encouraging the market to create more competition at the edge of the network – the RAN, or radio network. What this article discusses is how the benefits so far explored by a few dozen large companies – the mobile operators – can benefit this new market of network users, and if so, by how much.

Understanding the Changes

The rapidly changing network market requires us to think about two related but separate trends. One is that the commoditisation benefits pioneered in the cloud data centre are being realised in networks, first in the core and now at the edge. The other is that the giant cloud computing platforms – the “hyperscalers”, like Amazon’s AWS (Amazon Web Services) and Microsoft’s Azure – are now part of the picture. They are now performing a lot of the data crunching that was previously carried out by network operators. The cloud is where the cost savings of cheap generic ‘COTS’ hardware really begin to pay off. And with Amazon’s 5G network in a box (marketed as AWS Private 5G), which is available for private preview in the United States, those hyperscalers have driven their tanks right onto the mobile networks’ lawns. AWS will install the small cell base stations and any local equipment needed, and allow you to operate your own private mobile network from a dashboard.

Today, for example, Vodafone operates several of what it calls “command-and-control Network Operation Centres” (or NOCs), each the size of a warehouse, linked up by its RedStream fibre network. Other MNOs, however, are looking at offloading such facilities, handing the job to the hyperscalers. Cloud computing giants such as Amazon and Microsoft can now offer to run the core network itself, via their own vast data centres. How many cheap servers these cloud hyperscalers have available is a closely guarded secret, but we know it’s in the millions, and that’s another level of commodity computing cost saving. As the buyers and designers of COTS equipment, they can demand the lowest prices for every component.

AT&T is shifting its 5G core to Microsoft’s Azure platform, with hundreds of former AT&T technical and operations staff moving to Microsoft. Three years ago, IBM paid $34 billion for the open-source Linux company Red Hat, which operates independently. HP Enterprise is another provider with telco roots looking to take advantage of this.

“HPE took a decision that with 3GPP and with 5G we would drive forward a very high level of disaggregation,” says Jeff Edlund, Chief Technology Officer of the telco side of HP Enterprise’s Communications and Media Solutions (CMS) division.

Disaggregation simply means decoupling two functions that were tightly integrated before. A first step on the great disaggregation journey has not necessarily involved cheaper COTS hardware – or at least, not straight away. This has been more about moving the pieces about to achieve operational expenditure savings – consolidation, in other words.

“It’s both a fibre revolution and the result of Moore’s Law – the fibre is the underappreciated part,” says Samsung’s Warren.

What does he mean? Moore’s Law once referred to the doubling of transistor counts roughly every two years, but it’s now really just a shorthand way of saying “chips get more powerful all the time.” This is important to our story because it means that processing tasks once performed by specialist custom electronic circuits can be “software-ised”, or virtualised, and therefore run on cheaper, more familiar commodity hardware – PCs, in other words.

The “cheap” part of COTS is a little misleading, says Warren, at least when it comes to the rigs out in the towns and our countryside.

“We’ve got to the point now where it’s not cheap PCs or servers – they have to be fairly high end. But the high-end server can come from anyone.”

Fibre, as Warren says, allows network builders large and small to decide where to put the network equipment. Pressure to reduce operational expenditure, or ‘opex’, means consolidation can take place. But in the long term it means something more significant: the network doesn’t have to run on a jumble of boxes. This trend is referred to as “disaggregation”, and it goes hand in hand with “software-isation”, or virtualisation. IBM’s Camille Vautier, Leader of 5G Telco Cloud and Network Transformation, told us that in one recent demonstration, IBM ran a network job – in this case, facial recognition – with the radio network in Munich, Germany, and the core network in Nice, France.

Today, a great deal of energy and capital is being spent repeating the COTS trick, but at the other end of the network. The virtualisation that first took place at the core is now taking place at or near the cell tower, in what’s known as the radio access network, or RAN. This is what plugs into the core network. “5G core networks are becoming cloud-ified and the radio is becoming disaggregated too. It’s early days, but especially on the Open RAN side, the ecosystem is catching up,” says Vautier.

Here too, packages are being unbundled, and interfaces opened up for different vendors to interoperate; heterogeneity is beginning to replace homogeneity. What were once mysterious proprietary boxes beside the antennas, laid out by Nokia and Ericsson, are increasingly recognisable and familiar pieces of equipment. COTS is coming to the RAN, too.

The trend has been talked about for years, and not all previous efforts have succeeded. The best known came in 2016, when, backed by Facebook, the TIP (Telecom Infra Project) announced the creation of the Open Core Network Project Group. But out of this has come Open RAN, and the large mobile operators are pushing this hard in the hope that their enormous equipment costs will eventually come down, as an open market drives down prices set by Nokia and Ericsson, the two dominant RAN suppliers.

Virtualised RAN, or vRAN, describes a processing job that was previously performed by specialist hardware being turned into another software job, running on generic hardware. That software processing can then be run right next to the antenna, or at a NOC, or in the cloud, or somewhere in between. While it’s often mentioned alongside Open RAN, Gartner senior director analyst Bill Ray explains the subtle difference.

“Historically, you’d have a tower, an antenna on top, a coaxial cable running down and a big box underneath. Fibre replaced the coaxial cable. Those big metal boxes were expensive. So suppose you have five base stations: instead of a metal box on each one, you can take them all and run them into one metal box, and that’s called RAN pooling. You then need only one air conditioning unit. You can virtualise them too.”
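The arithmetic behind that pooling is simple enough to sketch. The figures below are purely hypothetical, invented for illustration rather than drawn from any operator’s accounts, but they show why pulling several sites’ baseband boxes into one location, with a single air-conditioning unit, appeals to a network cost planner.

```python
# Toy model of RAN pooling: every number here is a made-up illustration.
SITES = 5                        # base stations sharing one pooled baseband
COST_BASEBAND_BOX = 12_000       # hypothetical cost of one on-site baseband unit
COST_AIRCON = 3_000              # hypothetical cooling cost per equipment location
COST_POOLED_SERVER = 30_000      # hypothetical high-end COTS server for the pool

# Traditional layout: a baseband box and an air-conditioning unit at every site.
traditional = SITES * (COST_BASEBAND_BOX + COST_AIRCON)

# Pooled layout: one shared server and one air-conditioning unit.
pooled = COST_POOLED_SERVER + COST_AIRCON

saving = traditional - pooled
print(f"Per-site baseband: {traditional}")
print(f"Pooled baseband:   {pooled}")
print(f"Saving:            {saving} ({100 * saving / traditional:.0f}%)")
```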

Open RAN is a related trend, an attempt to bring order to the great disaggregation of functions that were tightly coupled.

“The big thing with Open RAN is that it standardises interfaces between components,” says Ray.

In theory, Open RAN and vRAN are not mutually dependent. You can have a virtualised RAN that isn’t open, or you can have an Open RAN installation that isn’t virtualised. But the two go nicely together.

“The point is, once you’ve virtualised a bit in the cloud, then that lets you virtualise more.”

The IT world can yawn and even express some scepticism – it’s witnessed a great unbundling before. This was the open systems hype of the late 1980s, and it didn’t work out as hoped.

COTS savings: are they real?

Given what we’ve learned, how much can we anticipate from the shift to COTS hardware? Happily, some work has been performed here.

Analysys Mason was able to look at the trend in network operation centres as they replaced expensive Cisco and Brocade equipment with COTS in their core networks, and extrapolate the calculations to the radio end of the network. Last year Gorkem Yigit, Principal Analyst at Analysys Mason, and two colleagues wrote up some of their work in a paper, ‘Making the case for the true costs, benefits and risks of disaggregation in CSP IP networks’, and developed a model for evaluating the total cost of Open RAN.

“We wanted to look at the real cost of disaggregation in the operator IP core networks - but the findings are transferrable. The challenges are pretty consistent.”

And what did they find?

“Yes, you can save money – between 28 and 40 per cent,” Yigit summarises. Disaggregation is not straightforward, he cautions, and achieving savings only really happens once the skills base is up to speed:

“For the operators it’s a major departure,” he explains. “They have to make a lot of upfront investments to make disaggregation work. We included this operational expenditure (opex) in the calculation, and found that the investments you need to make in additional resources, the skills, end up being expensive today. I’m emphasising ‘today’ because this is a long journey. Over time, as they gain more experience, some of those costs go down.”
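The shape of that journey can be sketched in a few lines. The model below is a simplified illustration of the dynamic Yigit describes – heavy extra opex for skills and integration in the early years, shrinking as the team gains experience. All of the figures and the learning rate are invented for the example; they are not Analysys Mason’s numbers.

```python
# Illustrative only: invented figures showing how early disaggregation overheads
# can erode savings at first, then fade as the skills base matures.
YEARS = 5
INTEGRATED_ANNUAL = 100.0    # indexed annual cost of a traditional integrated network
DISAGG_EQUIPMENT = 50.0      # cheaper COTS equipment and software, per year
EXTRA_OPEX_YEAR1 = 45.0      # up-front skills, integration and testing overhead
LEARNING_RATE = 0.6          # the overhead shrinks by 40% a year with experience

integrated_total = YEARS * INTEGRATED_ANNUAL

disagg_total = 0.0
extra = EXTRA_OPEX_YEAR1
for year in range(1, YEARS + 1):
    annual = DISAGG_EQUIPMENT + extra
    print(f"Year {year}: disaggregated cost {annual:.1f} vs integrated {INTEGRATED_ANNUAL:.1f}")
    disagg_total += annual
    extra *= LEARNING_RATE   # experience drives the extra opex down

saving = 100 * (integrated_total - disagg_total) / integrated_total
print(f"Five-year saving: {saving:.0f}%")
```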

In addition, he explains, “RAN has very demanding requirements for real-time processing and latency. What is really important is the cloud infrastructure that’s going to run this Open RAN. There are no savings if the cloud is not optimised.”

And Yigit points out that for some network operators, the question of how you validate and integrate and test systems when “everything disaggregated has to be re-aggregated” remains a live issue. Nevertheless, the market is rapidly meeting this challenge, he notes.

“One of the assumptions we made is the Open RAN can be best of breed, that operators can mix and match. We based our model on a pre-validated ecosystem approach, where the cloud and the network are already tested and validated. So this is not a completely new mix of vendors. Strong ecosystems are appearing to meet the integration costs of disaggregation.”

Yigit adds that “there are a lot of expectations from the Open RAN ecosystem. Intel is working on next generation chipsets. Radio head antennae are becoming standardised. The potential savings can go up to 28 per cent.”

“We are seeing a strong push from operators - they really want an open interface. Most operators will continue to work with Nokia and Ericsson but demand that their interfaces are open – either Open RAN or TIP. That’s the battle that’s going on. If Nokia and Ericsson deliver equipment with open interfaces today, an operator may replace them with Mavenir or someone else in the future,” runs the logic. While this adds flexibility – some network features may not be available from the market leaders – the focus of the benefit here is cost.

Gartner’s Ray also points out that the earliest cost-benefit estimates don’t tell the full story. “There’s no inherent reason why Open RAN is cheaper, and it should be more expensive. It’s the indirect benefit that has networks interested: innovation drives down price,” he explains.

“Open RAN is going to be really important. It not only provides a new way of doing things but it disrupts an industry that’s been an oligopoly for twenty years. It’s been an industry with four or five vendors and now only three. Open RAN cuts the system up into little chunks so I can make a living selling one part of that. It’s more about creating a ramp to allow startups to compete.”

You might think the potent combination of fibre and software virtualisation will rapidly bring us to a cloud utopia – with networks large and small able to tap into hyperscale computing power from Amazon and Microsoft – but there is a major obstacle: legacy networks.

Ray explains: “In a perfect world, we might have antennas connected to fibre, and the fibre connected directly to the cloud – and all the radio decoding can be done in the cloud. But only one operator, Rakuten in Japan, has done that – and there, only because it’s a greenfield site with no legacy networks or equipment. And who else in the world has got fibre to every base station?” 

So Wot’s Going COTS?

The part of the network nearest the user is where the most dramatic change is taking place today. Let’s look at how the base station, the crux of the radio access network, is going COTS.

“We’ve been ‘software-ised’ for some time,” explains Samsung’s Dan Warren. 

On a traditional mobile phone site a decade ago we would find a number of boxes: a baseband unit, a radio head, and an antenna. The radio head or radio unit (RU) is a transceiver which converts analogue radio signals into digital bits. These are output as a digital stream, much like an MPEG video stream, only this is a stream of bits in the CPRI protocol.

The radio head will remain relatively unchanged, Warren and Parallel Wireless’ Zahid Ghadialy, Senior Director of Strategic Marketing, both agree. It’s going to continue to be a specialist piece of equipment converting optical signals from the fibre into a bit stream that a microprocessor can understand. What’s different is that many more vendors will be able to interpret and process what comes out of the end of the CPRI pipe.
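To make that concrete, here is a toy sketch of the job the radio unit does: sampling an analogue waveform and quantising the in-phase and quadrature (I/Q) components into a stream of bits. It deliberately ignores real CPRI framing and line rates; the sample rate and test tone are just example values.

```python
# Toy digitisation of an analogue signal into an I/Q bit stream.
# Not real CPRI framing - purely an illustration of "analogue in, bits out".
import math
import struct

SAMPLE_RATE = 30_720_000   # Hz; a common LTE/5G sampling rate, used here as an example
TONE_HZ = 1_000_000        # hypothetical 1 MHz test tone standing in for the radio signal
N_SAMPLES = 8

stream = bytearray()
for n in range(N_SAMPLES):
    t = n / SAMPLE_RATE
    i = math.cos(2 * math.pi * TONE_HZ * t)   # in-phase component
    q = math.sin(2 * math.pi * TONE_HZ * t)   # quadrature component
    # Quantise each component to a signed 16-bit integer and append to the stream,
    # the kind of payload a fronthaul link carries towards COTS processing.
    stream += struct.pack("<hh", int(i * 32767), int(q * 32767))

print(f"{len(stream)} bytes of digitised I/Q for {N_SAMPLES} samples")
print(stream.hex())
```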

“The remote radio unit has become divorced from the Big Box at the bottom of the base station,” is how Ghadialy explains it. Ghadialy is well known for his YouTube channel (https://www.youtube.com/c/3G4G5G), which explains the intricacies and jargon of mobile network architectures to over 15,000 subscribers.

Samsung’s Warren describes disaggregation as “maintaining the quality of service, but shifting the economics based on where network redundancy – the duplicate boxes – is really needed. You can then reduce the opex and energy consumption considerably.”

But what’s coming with open radio networks is going to be much more dramatic. The hitherto tightly integrated workings of the box are being split up in several places.

“Conceptual layers of different parts of the network have become partitioned off,” Warren explains. This entails a separation of the actual traffic from the bits of the network that enable it to work. “The data plane is the actual traffic – the content, such as the actual video bits. The control plane is the part that establishes and maintains a session,” he explains. In addition, the management plane – which includes services such as routing – is opening up too.
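As a purely conceptual sketch of that split – not any real 3GPP or O-RAN interface – the snippet below keeps a session’s signalling, its user traffic and its routing policy in three independent pieces of software that could, in principle, come from different vendors and run in different places. The class and method names are invented for illustration.

```python
# Conceptual sketch only: the three planes Warren describes, as separate,
# swappable pieces of software.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ControlPlane:
    """Establishes and maintains sessions; carries no user content."""
    sessions: Dict[str, str] = field(default_factory=dict)

    def establish(self, user_id: str) -> str:
        session_id = f"sess-{len(self.sessions) + 1}"
        self.sessions[session_id] = user_id
        return session_id

@dataclass
class DataPlane:
    """Forwards the actual traffic - the video bits - for an established session."""
    def forward(self, session_id: str, payload: bytes) -> int:
        # In a disaggregated network this could run on COTS hardware wherever
        # the fibre reaches: at the tower, in a NOC, or in a cloud region.
        return len(payload)

@dataclass
class ManagementPlane:
    """Routing and other management services, now open to third-party suppliers."""
    routes: Dict[str, str] = field(default_factory=dict)

    def set_route(self, session_id: str, destination: str) -> None:
        self.routes[session_id] = destination

# Usage: set up a session (control plane), choose a route (management plane),
# then push user traffic through the data plane.
cp, dp, mp = ControlPlane(), DataPlane(), ManagementPlane()
sid = cp.establish("user-42")
mp.set_route(sid, "edge-site-munich")
print(dp.forward(sid, b"\x00" * 1400), "bytes forwarded towards", mp.routes[sid])
```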

“Most people equate disaggregation with the white box movement and the separation of hardware and software for data centre switches,” notes Juniper on its website. “While this is an example of disaggregation, it is a narrow definition of a much broader movement and ignores the true value of disaggregation across other elements within networking.”

This is where Open RAN comes in. More vendors can now interpret the CPRI stream – and more can do the management functions too. And they can do these jobs either nearby or far away.

“Opening up the RAN was once thought of as something impossible to do,” Parallel Wireless’ Zahid Ghadialy reflects. “Parallel was one of the first to look at virtualisation, and for the first couple of years we didn’t make a lot of progress. Analysts told us this can’t happen. Today we have not only the core virtualised, but 2G to 5G completely virtualised as well.”

Now, thanks to the Open RAN and TIP efforts, the control plane and the management plane are separated out and opened up, so third parties can plug in their version to complement or replace the default management or security features – similar to an app store allowing you to download a third-party web browser. Parallel has already helped implement an open RAN virtual design with Etisalat in the Gulf, and across 5,000 sites and 21 operations for MTN in Africa.

“A lot of algorithms that are proprietary today will become apps,” predicts Ghadialy.
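A minimal sketch of what that could look like, assuming an invented plug-in interface rather than the real O-RAN RIC xApp/rApp APIs: a default handover algorithm is registered, then replaced by a hypothetical third-party version through the same open interface. The function names, cell identifiers and congestion policy are all made up for illustration.

```python
# Illustration only: an invented "app store" interface for swapping RAN algorithms.
from typing import Callable, Dict

# An "app" is just a function that picks a target cell from signal measurements.
HandoverApp = Callable[[Dict[str, float]], str]

def default_handover(signal_strengths: Dict[str, float]) -> str:
    """Vendor default: hand over to the strongest cell."""
    return max(signal_strengths, key=signal_strengths.get)

def third_party_handover(signal_strengths: Dict[str, float]) -> str:
    """Hypothetical third-party app: avoid cells the operator has flagged as congested."""
    congested = {"cell-b"}  # invented operator policy
    candidates = {c: s for c, s in signal_strengths.items() if c not in congested}
    return max(candidates, key=candidates.get) if candidates else default_handover(signal_strengths)

class AppStore:
    """An open interface letting operators replace the algorithm behind a function."""
    def __init__(self) -> None:
        self._apps: Dict[str, HandoverApp] = {"handover": default_handover}

    def install(self, name: str, app: HandoverApp) -> None:
        self._apps[name] = app   # a third-party version replaces the default

    def run(self, name: str, measurements: Dict[str, float]) -> str:
        return self._apps[name](measurements)

store = AppStore()
measurements = {"cell-a": -95.0, "cell-b": -82.0, "cell-c": -88.0}
print("default choice:    ", store.run("handover", measurements))   # cell-b, the strongest
store.install("handover", third_party_handover)
print("third-party choice:", store.run("handover", measurements))   # cell-c, avoiding congestion
```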

Déjà VDU: open systems, repeating history

Cheap commodity hardware became a hot topic in IT in the 1980s, and one can find academic literature referring to COTS and ‘open systems’ well into the 1990s (example: www.bit.ly/3sJNUP4).

By the mid-Eighties, the 32-bit microprocessor was advancing rapidly, and beginning to nibble away at the large vertically integrated suppliers of mainframes and minicomputers, like IBM and the Digital Equipment Corporation. In place of the proprietary mainframe and mini, read proprietary telco equipment. In place of IBM and DEC, read Ericsson, Nokia and Huawei. In place of open systems, read Open RAN and neutral host. The economics is the same: COTS hardware and common standards will unbundle the vertical giants.

The network world is confident it won’t be a repeat, although there are some parallels, analysts warn. In the IT world, although the established giants had to appear ready to embrace the new open systems world, they also had plenty of reasons not to jump too far and too fast. Ultimately, the promise of open systems for the customer was never really fulfilled. Why?

Open systems really meant Unix, but different Unix systems proved not to be very interoperable. Key features that IT managers relied on for 24x7 operation, and which the expensive proprietary equipment provided – like failover and redundancy – needed to be added to the new, raw and immature ‘open’ Unix systems. Much of the 1990s was spent engineering that in. Ultimately this drove departmental client-server computing into Microsoft’s hands: IT departments could at last take advantage of COTS hardware cost savings.

Experienced network operators are mindful of the open systems story, and wary of making wild promises. However, some parallels are eerie. Proprietary vendors such as Nokia demonstrate ‘open’ RAN systems where the parts are all supplied by Nokia. The first Open RAN site was a ‘greenfield’ 5G network from Rakuten. The need to support legacy networks – 4G and older – will keep the proprietary players in the game for a long time to come.

Yet the impact of COTS may only just be beginning to be felt, as the hyperscalers offer private 5G networks at the push of a button.

HPE’s Edlund says the company’s telco know-how is increasingly focussed on helping the hyperscalers (like Microsoft’s Azure and Amazon’s AWS) build their ‘wholesale’ networks, which are then ‘retailed’ to private customers.

“You wouldn’t see us traditionally in the radio network, but that is changing with Open RAN. We have purpose-built COTS hardware, with, say, more slots for graphical processing units. These are designed to have Open RAN software run on top of them. We’re not building the Open RAN software itself – we partner for that – but we do make the management software that will allow you to manage and automate many Open RAN Intelligent Controllers. These are not well sorted now but should become vending machine items: put a quarter in and there’s a microservice.” HPE sells this through the channel, or directly to a hyperscaler.

“We’re in the background – you don’t even see us,” he says.
