The 4 Most Common Misconceptions About Edge Computing
Posted by Ido Gur on Jul 13, 2020
Tags: • 5G • IoT • edgecomputing •
Edge computing has generated quite a lot of buzz recently, with good reason. When implemented, edge computing reduces latency, improves data security, relieves bandwidth pressure, and lowers data transmission costs.
In simple terms, edge computing brings the network, data storage, and computing closer to the end-user. It can have many applications and could completely redefine the way we use our devices, but there are still a lot of misconceptions about the topic. We set out to dispel some common edge computing myths.
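To see why moving compute closer to the end-user cuts latency, consider propagation delay alone. The sketch below is a back-of-the-envelope illustration; the distances and the fiber-speed constant are assumptions for the example, not measurements from any real network.

```python
# Back-of-the-envelope sketch: propagation delay alone already favors the edge.
# Light travels through fiber at roughly 200,000 km/s (about 2/3 of c).
FIBER_SPEED_KM_PER_MS = 200.0  # km travelled per millisecond in fiber


def round_trip_ms(distance_km: float) -> float:
    """Idealized round-trip propagation delay (ignores queuing and processing)."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS


cloud_rtt = round_trip_ms(1500)  # a regional cloud data center ~1500 km away
edge_rtt = round_trip_ms(15)     # a hypothetical edge node ~15 km away
print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.2f} ms")
```

Real round-trip times are higher once routing, queuing, and processing are added, but the gap between a distant data center and a nearby edge node remains.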
Myth 1: Edge Computing is the Same as Local Breakout (LBO)
While both LBO and edge computing nodes handle data streams, they’re by no means the same. In fact, LBO is an integral part of a wireless network that allows some streams of incoming data to “break out” of the original network. LBO’s history goes back to 3G networks, with the introduction of the 3GPP LIPA and SIPTO recommendations, which were hardly ever implemented, since the big open question was always where the applications would run and what they would be.
In 5G networks, edge computing platforms can use LBOs as a feature that allows access to wireless networks and connection to services and data further upstream. So while edge computing platforms can use LBOs, they are not the same (source).
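The relationship can be pictured as a routing decision the edge platform makes: traffic destined for edge-hosted services “breaks out” locally, while everything else continues upstream. The sketch below is purely illustrative; the service names and the rule are assumptions, not part of any real LBO implementation.

```python
# Hypothetical sketch of the local-breakout decision an edge platform might
# make: traffic for services hosted at the edge is served locally, while all
# other traffic is tunneled upstream through the core network.

# Assumed, made-up set of services hosted on this edge node.
EDGE_HOSTED_SERVICES = {"video-cache.local", "ar-renderer.local"}


def route(destination: str) -> str:
    """Decide whether a flow breaks out locally or continues upstream."""
    if destination in EDGE_HOSTED_SERVICES:
        return "break out at edge node"
    return "tunnel upstream to core network"


print(route("video-cache.local"))  # served by the edge node itself
print(route("example.com"))        # continues through the core as usual
```

In this picture, LBO is the mechanism that makes the first branch possible; the edge computing platform is what actually hosts and runs the services behind it.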
Myth 2: 5G Solves the Problem of Edge Computing with Localized UPF
Edge computing and 5G networks are intrinsically linked. 5G demands extremely reliable, high-performance service, and distributed nodes at the edge are the only feasible way to deliver it. However, the implementation itself poses quite a challenge, namely capital and operational costs.
Flexible deployment of localized UPFs is crucial for 5G and edge nodes to function in harmony and deliver the expected results to end-users. Similar to LBOs, UPFs need to become a feature operating within the edge computing infrastructure.
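One way to picture “localized UPFs as a feature of the edge infrastructure” is as a selection step: the control plane picks the UPF closest to the user’s cell site from among the deployed localized UPFs. The sketch below is not 3GPP-accurate; the UPF names and latency figures are invented for illustration.

```python
# Illustrative sketch (not 3GPP-accurate): an SMF-like control function
# choosing a localized UPF for a session based on latency to the user's
# cell site. All names and numbers below are assumptions for the example.

upfs = {
    "upf-central": 25.0,  # ms from this cell site to a centralized UPF
    "upf-edge-a": 2.0,    # ms to a localized UPF at a nearby edge node
    "upf-edge-b": 4.5,    # ms to another localized UPF
}


def select_upf(candidates: dict[str, float]) -> str:
    """Pick the candidate UPF with the lowest latency to the user."""
    return min(candidates, key=candidates.get)


print(select_upf(upfs))  # the nearby edge UPF wins
```

The point of the sketch: without localized UPFs deployed at the edge, the only candidate would be the distant centralized one, and the latency benefit of the edge node would be lost.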
Myth 3: There are no Commercial Use Cases for Edge Computing
Nothing could be further from the truth—there are indeed many commercial uses for edge computing.
In fact, South Korea’s SK Telecom is set to build 12 Mobile Edge Computing points across its nationwide 5G network. This will allow end-users to run AR, VR, and AI applications on their smartphones.
In Germany, Continental, Saguna, and Vodafone ran a lengthy study which concluded that communication between vehicles and road-infrastructure services is feasible only if edge computing functions are brought close to the roadside and implemented correctly.
Across the world, China is building a smart petroleum refinery using MEC to enable video surveillance and on-site data collection and transmission.
And this is all without mentioning the many applications 5G networks and edge computing can find in the world of IoT and smart cities.
Myth 4: Edge Computing is Too Expensive to Deploy
It’s true that edge computing implementation can be costly. But new edge computing architectures that are completely software-based can run on any commercial off-the-shelf (COTS) server. This implies drastically lower total costs for edge computing platforms if the solutions are designed well. Even 1U servers can host local container applications. What’s more, micro data centers can be placed in existing buildings, where power and real estate expenditures are already sunk costs (source).
As technology moves forward, it’s becoming clear that edge computing will have an important role to play in the development of many new and exciting applications and use cases. Smart, well-designed solutions will help resolve the challenges edge computing currently presents.