Edge-Computing: A Technology that Will Change the Future of Cloud-Computing

A taste of the 5G Era.


Michel Abdel Nour

3 years ago | 7 min read

Driven by the visions of the Internet of Things (IoT) and fifth generation (5G) communications, Edge Computing is an emerging technology that is currently receiving vast amounts of attention and industry investment.

The concept of placing compute and storage nodes in close proximity to mobile devices enables the delivery of cloud services at higher speeds by overcoming the classic problem of high end-to-end network latency, frequently perceived by mobile users.

In a world where technology is taking over every aspect of our lives, we are witnessing the arrival of unprecedented services and applications at the individual scale that promise lower latency, better reliability, larger information throughput, and improved energy efficiency.

In this paper, we will cover how edge computing works, the 5G infrastructure that makes it possible, an overview of the edge-computing architecture, and finally the advancements edge computing brings in computing speed and latency reduction, with examples of work being done in that direction.

I — Introduction

The last decade has seen cloud computing rise as the new paradigm of computing by centralizing computing, storage, and network management in the cloud (Armbrust et al., 2009).

When leveraged well, the cloud is a great resource that can deliver elastic computing power and sufficient storage to support end-users.

Nonetheless, the growing popularity of mobile devices, such as smartphones, tablet computers, and wearable devices, is accelerating the advent of the Internet of Things (IoT) and triggering a revolution of mobile applications (Gubbi et al., 2013). Nowadays, businesses and individuals run an increasing number of applications in the cloud.

Thus, end-user devices require ever more computing power to support the computationally intensive activities associated with user behavior and needs, which raises the aforementioned problem of high network latency.

The performance of an application running in the cloud depends on data-center conditions and on the resources committed to the application.

Even small network delays can lead to significant performance degradation. To support end-user activities, network operators are investing heavily in R&D (Mao et al., 2016).

By introducing edge computing, which allows cloud networks to overcome the limitations of current cloud architectures, especially the delays caused by high network latency, the IT industry is bringing us closer to the 5G era.

The idea involves having mini-versions of the cloud distributed widely to service a plethora of end-users (Mao et al., 2017).

The rest of this paper is organized as follows:

Section II is concerned with the causes and details of the latency problems faced in current cloud infrastructures.

Section III, on the other hand, will present details about Edge Computing, discussing how it works and the effective infrastructure required to support it.

Section IV will mainly focus on recent and current work being done to improve and establish Edge Computing technology.

Finally, Section V summarizes the preceding sections and offers projections of the advancements that could be achieved in edge computing, before concluding.

II — The Current State of Cloud Computing

A. Current Cloud Architecture

Cloud architecture is the basis on which cloud computing relies.

Although cloud services are perceived differently by different users, a general definition would be: services delivered over the internet on demand, ranging across infrastructure hosting, computing, and storage.

In current cloud architectures, all user requests are transmitted from user devices to a main data center over various network connections, based on specific protocols. Cloud hosting is designed to handle workloads resiliently rather than being immune to failure.

Since the cloud functions as a network, even if one of its components fails, services remain available from the other active components.

B. Cloud Service Latency and Network delays

Cloud service latency is the delay between a client request and a cloud service provider’s response.

Latency strongly affects processing speed in the cloud and how usable and responsive devices and communications feel, and these effects are magnified for cloud-service communications.

A myriad of factors affect latency, such as the number of router hops or ground-to-satellite communication hops on the path to the target server, each of which adds delay on the network side.

Because cloud service data centers can be physically located anywhere in the world, network delays vary accordingly and can increase significantly with distance from the provider's main data center. In a cloud environment, larger and less predictable workloads also lead to greater variability in service delivery.

Virtualization can introduce packet delays, especially if virtual machines are on separate networks. If a customer's wide area network (WAN) is experiencing heavy traffic, this can also significantly increase network latency (Satyanarayanan, 2017).
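The distance effect described above can be illustrated with a toy model: one-way latency as the sum of propagation delay (bounded by the speed of light in fiber) and per-hop processing delay. The constants below are illustrative assumptions for the sketch, not measured values.

```python
SPEED_IN_FIBER_KM_PER_MS = 200.0   # light travels roughly 200 km per ms in fiber
PER_HOP_DELAY_MS = 0.5             # assumed per-router processing/queuing delay

def estimate_latency_ms(distance_km, router_hops):
    """Rough one-way latency estimate: propagation plus per-hop processing."""
    propagation = distance_km / SPEED_IN_FIBER_KM_PER_MS
    processing = router_hops * PER_HOP_DELAY_MS
    return propagation + processing

# A distant data center (4000 km, 15 hops) vs. a nearby edge node (50 km, 3 hops)
print(estimate_latency_ms(4000, 15))  # → 27.5 ms
print(estimate_latency_ms(50, 3))     # → 1.75 ms
```

Even this crude model shows an order-of-magnitude gap between a distant data center and a node a few tens of kilometers away, which is exactly the gap edge computing targets.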

Nonetheless, the upcoming 5G technologies that are currently being developed are laying the foundations for edge computing as a wide-scale solution for the current latency problem.

III — Into the 5G Era: The Emergence of Edge Computing

A. Introducing 5G technologies

With the current rise of IoT and the computationally heavy services and applications that accompany it, such as smart homes and artificial intelligence, wireless environments must become more diverse and sophisticated to handle these rapid changes.

Future 5G networks are expected to ensure massive capacity and connectivity, seamless heterogeneity, high flexibility and adaptability.

Typical requirements of 5G networks include ultra-low latency and ultra-high reliability, reduced costs, low energy consumption, and support for many types of devices and applications (Armbrust et al., 2009). However, to make the 5G vision a reality, huge challenges must be addressed.

Edge computing is an emerging trend in cloud computing that can potentially help overcome many of these challenges.

B. Edge Computing: A solution to the latency problem

The roots of edge computing (also known as fog computing), go back to the late 1990s, when Akamai introduced content delivery networks (CDNs) to accelerate web performance (Satyanarayanan, 2017). A CDN uses nodes at the edge close to users to pre-fetch and cache web content.

Similarly, Edge Computing is a Cloud computing architecture that provides users with widely and evenly distributed cloudlets (cloud nodes) in a given geographical context.

In fact, end-to-end latency is governed by physical proximity: a cloudlet's closeness to a mobile device makes it easier to achieve low end-to-end latency, high bandwidth, and low jitter for services located on that cloudlet. This is especially valuable for applications and services that offload computation.
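A device choosing where to offload can simply probe candidate nodes and pick the one with the lowest measured round-trip time. The sketch below is a minimal illustration of that selection step; the node names and latency values are hypothetical.

```python
def pick_cloudlet(latencies_ms):
    """Return the node with the smallest measured round-trip latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical probe results (milliseconds) from a mobile device
measured = {
    "central-cloud": 85.0,   # distant main data center
    "cloudlet-a": 6.0,       # nearby edge node
    "cloudlet-b": 12.0,
}
print(pick_cloudlet(measured))  # → cloudlet-a
```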

Likewise, when a cloud service becomes unavailable, a fallback service on a nearby cloudlet can temporarily mask the failure (Satyanarayanan, 2017).
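The fallback pattern just described can be sketched as follows; the service callables here are stand-ins for real network calls, not an actual API.

```python
def call_with_fallback(primary, fallback):
    """Invoke the primary cloud service; on failure, mask it with the fallback."""
    try:
        return primary()
    except ConnectionError:
        return fallback()

def cloud_service():
    # Simulated outage of the distant data center
    raise ConnectionError("cloud data center unreachable")

def cloudlet_service():
    return "served from nearby cloudlet"

print(call_with_fallback(cloud_service, cloudlet_service))  # → served from nearby cloudlet
```

The user-visible effect is that the request still succeeds, only served from the edge instead of the central cloud.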

Clearly, relying on a cloud datacenter is not advisable for applications that require end-to-end delays to be tightly controlled to less than a few tens of milliseconds. Many experts in the IT industry have joined efforts and are currently experimenting with edge computing to bring the idea to life.

IV — Advancements in the Field of Edge Computing

In this section, we go over progress that has recently been made, or is under development, to advance edge computing and realize its projected functionality.

Recently, Google has been working on a wearable cognitive assistant based on Google Glass, which relies on edge computing to rapidly pull real-time data from the cloud. Many other initiatives use fog architectures to process IoT data (e.g., the Cisco Kinetic platform) and run IoT applications (e.g., the Cisco IOx platform) in the cloud (Bangui et al., 2018).

These initiatives illustrate the benefits of deploying edge technologies as extensions of cloud services.

By taking on serious R&D initiatives in the context of developing fog computing technologies, companies are bringing us closer to the wide-scale fully-connected world of the 5G Era.

V — Conclusion: What the Future Holds for Edge Computing

As discussed in the previous sections, edge computing is gradually asserting itself as the new standard for cloud computing.

By ensuring the presence of evenly distributed edge nodes, users can interact with cloudlets located close to them, reducing the observed effects of latency during interactions with the network.

People’s lifestyle today requires an increasing number of multi-purpose applications at their disposal at all times, leading data traffic to increase significantly every year.

Network providers and large IT companies are conducting a lot of research in order to improve on their current cloud architecture and implement edge computing with the end goal of providing users with the best experience possible.

Edge computing is a major step into the 5G era that will serve to significantly reduce network delays, accelerate processing time in the cloud, and improve many other aspects of current cloud computing.

