Demystifying Edge Cloud Networking

Updated: Nov 18

The digital transformation of everything is driven by exponential growth in traffic, and lately AI and Industry 4.0 applications are creating new requirements for digital networks. The cloud services (or backends) powering applications are being pushed ever closer to the user, away from central clouds (such as AWS or Google Cloud) and toward the edge. The edge cloud itself is not a new paradigm, but a new breed of applications is driving it to a tipping point.


This has given rise to terms such as “edge cloud” and “edge computing.” In this post, we will try to demystify these terms and briefly analyze what it takes to deliver hyper-scale networking for the edge.


Edge cloud is essentially an implementation of the cloud at edge locations. For example, a radio access network (RAN) may connect to a small building in the middle of a city. The building could house a few server racks, making it a so-called “micro data center.” One can run an edge cloud on those servers: a complete cloud architecture, including compute, storage, and network, managed by a cloud operating system. It’s a cloud. It’s at the edge. So, by definition, it is an edge cloud. It may or may not be connected to other edge cloud instances, but it will almost certainly connect to the main cloud or core network.

Edge computing, however, refers to the computing that happens in an edge cloud; it is the intelligence that runs there. Some examples:

1. CDNs (content delivery networks) serving cached content to end users faster

2. AI engines controlling autonomous connected cars via an edge car cloud

3. IoT collection and monitoring entities controlling various connected IoT devices

4. Distributed gaming/metaverse services providing lightning-fast reaction times


So, edge cloud and edge computing are two sides of the same coin, with subtle differences in how they are interpreted and for what purpose. Now that we are clear on the “edge” terminology, one might ask what is driving adoption of this technology. We partially answered that in our definition of edge computing, but to answer in more detail, we must analyze the past, present, and future sources of digital traffic:



Video, which has long comprised most internet traffic, will soon have contenders in the form of metaverse and AI applications. Unlike video, these applications need bi-directional streaming data, and that data must stay within certain latency thresholds to be useful. Physical proximity of such services to the user therefore plays an essential role in application performance. Edge architectures will matter even more for rural populations with unreliable connections to central-cloud locations. Cost is another driver of adoption: data traveling across various ISPs into a main cloud location always costs more than data served directly from the edge.
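To see why proximity matters, consider the physical floor on round-trip time: light in optical fiber covers roughly 200 km per millisecond, so distance alone bounds how quickly a service can respond. The sketch below (with hypothetical distances for an edge site and a central region) computes that propagation-delay floor, ignoring queuing, processing, and serialization delays:

```python
# Light propagates through optical fiber at roughly 2/3 the speed of
# light in vacuum, i.e. about 200 km per millisecond.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def rtt_floor_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical distances: a metro edge site vs. a distant central region.
for label, km in [("edge site (50 km)", 50), ("central cloud (2000 km)", 2000)]:
    print(f"{label}: >= {rtt_floor_ms(km):.1f} ms RTT")
```

A 2,000 km round trip already consumes 20 ms before any processing happens, which is more than many interactive and industrial-control budgets allow.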


But, as simple as it may sound, does merely moving your services to the edge boost application performance? The answer, of course, is a resounding no. The critical component in the transition to the edge, the network itself, was never designed for such traffic patterns. Moreover, increasingly complicated security policies must be enforced to ensure that only authorized devices can communicate with each other, and at a scale that has never been seen before. Further, the enterprises that run such services need visibility with a high degree of accuracy. In other words, there is no conventional solution to these seemingly intimidating problems.


Conclusion


For enterprises and providers to win at the edge cloud, the following factors will play the most crucial roles:


Floor space and form factor — High density will contribute to the success of an edge cloud instance. Lack of space also means the need to provide more with less: one cannot dedicate a ludicrous number of nodes to networking, storage, or compute.


Total Cost of Ownership (TCO) — Cost always matters in IT. The edge mandates the use of standard hardware for compute, networking, and storage.

Serviceability — Remote software manageability and upgradability are a must for edge clouds. It would be almost impossible for service personnel to visit edge locations for routine servicing.


Agility — Agility, as in any cloud architecture, is essential to meet the dynamic needs of workloads and applications. For example, an edge-cloud implementation should be fully prepared for unforeseen surges or new workload requirements without being over-provisioned.
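As a concrete illustration of absorbing surges without over-provisioning, the proportional scaling rule used by Kubernetes' Horizontal Pod Autoscaler can be sketched in a few lines. The utilization target and replica bounds below are assumed example values, not recommendations:

```python
import math

def desired_replicas(current: int, cpu_util: float, target: float = 0.6,
                     min_r: int = 1, max_r: int = 8) -> int:
    """Proportional autoscaling: scale the replica count by the ratio of
    observed utilization to the target, then clamp to configured bounds."""
    want = math.ceil(current * cpu_util / target)
    return max(min_r, min(max_r, want))

# At 75% CPU against a 50% target, 2 replicas grow to 3; at 25%, 4 shrink to 2.
print(desired_replicas(2, 0.75, target=0.5))  # 3
print(desired_replicas(4, 0.25, target=0.5))  # 2
```

The clamp keeps the cluster from scaling past what a space-constrained edge site can host, while the ratio lets capacity follow demand in both directions.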


Security — Edge clouds will provide new ground for cyber attacks across many vectors. Since an edge cloud lives outside the secure campuses that main clouds can afford, every component, from hardware to software, must be fully hardened against rogue attacks. Sustained DDoS attacks can wreak havoc on edge cloud components unless they are automatically detected and mitigated without human intervention.
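Automated detection can start as simply as per-source rate accounting. The toy sketch below (the class name and thresholds are ours, not any particular product's) flags a source once its request rate inside a sliding window exceeds a limit; a real system would feed such signals into automatic mitigation:

```python
from collections import defaultdict, deque

class RateAnomalyDetector:
    """Flags a source once its request rate inside a sliding time window
    exceeds a threshold -- a toy stand-in for automated DDoS detection."""

    def __init__(self, window_s: float = 1.0, max_requests: int = 100):
        self.window_s = window_s
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # source IP -> recent timestamps

    def record(self, src_ip: str, now: float) -> bool:
        """Record one request; return True if src_ip should be throttled."""
        q = self.hits[src_ip]
        q.append(now)
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_requests
```

Production detectors track many more signals (SYN rates, entropy of sources, protocol anomalies), but the principle of continuous, human-free measurement and response is the same.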


Latency — Real-time or near-real-time data processing will be the new normal for edge architectures. All components, especially networking, need to be fully hardened to tackle this.


At NetLOX, we fully understand these challenges and are poised to provide best-in-class cloud-native networking solutions that alleviate them and ease our customers' pain points. Get in touch with us to discuss how we are helping the biggest enterprises chalk out their cloud strategies and networking implementations for the future.

