5G is looking to support a variety of services with very different requirements. As a result, the network needs to become more adaptable, more flexible and, ultimately, more intelligent. There is currently a great deal of innovation and standardization activity addressing the challenges of network automation and intelligence, for example in the ETSI ISGs ENI and ZSM. However, the use of AI on the device side - what we often refer to as 'on-device AI' - does not fully reflect our needs and vision for how AI at the device side can deliver enhanced network connections at reduced network cost, as Yue Wang from Samsung R&D UK pointed out in a presentation at a recent CW (Cambridge Wireless) event, 'The inevitable automation of Next Generation Networks'.
The intelligence required to enhance the connections of the device to the network, or to reduce the cost of those connections, is different from the 'application-level' AI we have in our phones today
says Yue, 'in the sense that it is not necessarily just application-level AI, and not necessarily applied only to end-user devices' – a capability she termed 'device-level' AI.
In her view, AI in telco requires a mix of device-level AI, localised AI, and end-to-end AI. Device-level AI solves self-contained problems in individual network components, with no data needing to be passed across the network. Localised AI is applied within one network domain or across domains; it does require data to be passed through the network, but is constrained to a local domain, for example the RAN or the fronthaul. End-to-end AI requires visibility of the whole network, collecting data and knowledge from its different domains so that AI can be properly applied. Examples of end-to-end AI include slice management and network service assurance.
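This three-level taxonomy can be thought of as a simple decision rule driven by how far data has to travel. The sketch below is purely illustrative; the function and parameter names are my own, not anything from the presentation or a standard.

```python
from enum import Enum

class AIScope(Enum):
    DEVICE_LEVEL = "device-level"  # self-contained, no data leaves the component
    LOCALISED = "localised"        # data shared within one local domain (e.g. the RAN)
    END_TO_END = "end-to-end"      # data collected across all network domains

def classify_ai_scope(needs_network_data: bool, crosses_all_domains: bool) -> AIScope:
    """Map a use case onto the three-level taxonomy (illustrative only)."""
    if not needs_network_data:
        return AIScope.DEVICE_LEVEL
    if crosses_all_domains:
        return AIScope.END_TO_END
    return AIScope.LOCALISED

# A self-contained optimisation inside one component is device-level:
assert classify_ai_scope(False, False) is AIScope.DEVICE_LEVEL
# Slice management needs whole-network visibility, so it is end-to-end:
assert classify_ai_scope(True, True) is AIScope.END_TO_END
```

The key distinction the talk draws is the first branch: if no data needs to be passed in the network at all, the problem stays at device level.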
Device-level AI is a greenfield for innovation, as it requires minimal interaction with the network.
Yue went on to say: “The network takes a one-size-fits-all approach to address all requirements and services. At the device side, though, we do have flexibility of implementation, where we can plug in specific intelligent functionalities to serve dedicated purposes. Having said that, because the devices are connected to the network, the plugged-in AI capability will inevitably affect the network, and be affected by the network.”
In the presentation, an example showed how device-level AI can improve UE design, offering lower overhead and reduced power consumption. More importantly, it highlighted some interesting challenges in applying device-level AI in the network.
Yue pointed out that many of the ongoing research activities are based on synthesized data. That may be a good first step, but given that many AI algorithms rely on exhaustive experimentation with data, obtaining real data is very important. Getting real data is very challenging, however, especially acquiring accurate data sets and extracting useful information from them.
On the point of learning: since devices are connected to the network, fully achieving the capability of AI requires the devices to understand the context to and from the network. Isolated AI could produce sub-optimal, merely locally optimal, or even negative outcomes for the network end to end, which operators will not want. More importantly, given the effects of device-level AI on the network, how much autonomy do we want to give the devices? On this point, Yue suggested in her conclusion slide that we may eventually need network-instructed or network-controlled device-level AI.
The presentation prompted interesting discussion among delegates on Twitter (example from Dean Bubley below), as well as further insight on the 3g4g blog, which you can read here.
On standards, Yue pointed out that in the near future, with the fast progression of AI, the network may need to support both 'legacy' and intelligent devices, where 'legacy' refers to 5G NR devices without intelligence. The network may also need to support UEs with different levels of intelligence capability, much as the current standard supports different UE categories.
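To make the analogy with UE categories concrete, a network serving a mixed population would need to branch on a device's declared intelligence capability. The levels, names, and fields below are entirely hypothetical illustrations; nothing like this is standardised today.

```python
from dataclasses import dataclass

# Hypothetical intelligence "categories", analogous to existing UE categories.
# These level numbers and labels are illustrative, not from any 3GPP release.
INTELLIGENCE_LEVELS = {
    0: "legacy",            # 5G NR device with no on-board AI
    1: "device-level AI",   # self-contained AI functions only
    2: "network-assisted",  # AI guided by network-provided context
}

@dataclass
class UE:
    device_id: str
    intelligence_level: int = 0  # default: legacy device

def supports_ai(ue: UE) -> bool:
    """True if the device declares any AI capability at all."""
    return ue.intelligence_level > 0

# A legacy device and an intelligent one coexist on the same network:
assert not supports_ai(UE("ue-001"))
assert supports_ai(UE("ue-002", intelligence_level=1))
```

The point of the analogy is simply that capability signalling already gives the standard a pattern for this: the network adapts its behaviour per device rather than assuming uniform capability.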
When it comes to production, there will of course be the usual computation and power constraints.
We also need to consider the balance between the power saved by using intelligence and the power consumed by the intelligence itself.
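That balance is a simple break-even calculation: on-device intelligence only pays off when the power it saves exceeds the power it burns. The figures below are hypothetical, purely to illustrate the trade-off.

```python
def net_power_benefit(power_saved_mw: float, inference_cost_mw: float) -> float:
    """Net benefit of running on-device intelligence, in milliwatts.

    Positive means the AI saves more power than it consumes;
    negative means it is a net loss. All inputs here are hypothetical.
    """
    return power_saved_mw - inference_cost_mw

# Hypothetical case: smarter connection management saves 50 mW,
# but running the model costs 30 mW, so the device still comes out ahead.
assert net_power_benefit(50.0, 30.0) > 0
# If inference costs more than it saves, intelligence is a net loss here.
assert net_power_benefit(20.0, 35.0) < 0
```

In practice both terms vary with workload and hardware, which is why a blanket "add AI everywhere" policy does not follow from the existence of efficient accelerators.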
“It is also important to bear in mind that not every device needs to compete to be the most intelligent device in the world; not every device needs 6.9 billion transistors and neural cores – the devices do have the flexibility of plugging in specific AI functionalities for required services.” Challenges in validation and deployment were also pointed out. In summary, “what you think AI can do can be very different from what it does when deployed in the real network”.
In her summary slide, Yue pointed out that change in the industry is inevitable: not just technology enhancements, but also standards and strategies.
Image courtesy Charles Sturman