
The Rise of Edge Computing

Tyler McMullen is CTO of Fastly.

When you take a look back at the history of networked applications over the past five decades, it’s easy to see what seems like a cyclical pattern of centralization and decentralization.

There were mainframes in the 1960s and 1970s, the early internet of autonomous systems in the 1980s, the centralized architectures of early websites in the 1990s and early 2000s, and the cloud-based present day.

However, for once, it appears that the next step is actually to become more distributed, rather than contracting back to a centralized model.

Edge computing is on the rise because of changes in the way that we use the internet every day; or, rather, because of the internet-connected devices that we use to order food, monitor our homes, and change the color of the light bulbs. Edge computing is on the rise because it is necessary in order to run the real-time services we’ve come to expect across the internet.

We’ve had “smart” devices for many years now: Tiny computers have been embedded in our consumer devices for quite some time, and these early smart devices led to what we have begun referring to as the Internet of Things (or IoT). What makes IoT different is that those devices now need to communicate across the internet. The original form of “smart device” was a more isolated experience, which meant the user experience depended only on the device’s own hardware and software.


Now that they’re connected to the internet, things are trickier. The user experience now involves the user’s local internet connection, the ISP, the backbones of the internet, and servers in a data center somewhere. If an IoT device relies on its local hardware and software as well as a multitude of remote systems, you run into an interesting problem. The more systems that must be up, functioning, and accessible for a product to work, the higher the likelihood of failure. If it relies on, say, 10 systems, each with 99 percent availability, the overall availability falls to roughly 90 percent. It’s easy to see that the user is going to have a bad time.
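To make that arithmetic concrete, here is a minimal sketch in Python (the figures are illustrative, mirroring the example above) of how availability compounds when every system in the chain must be up for the product to work:

# Availability of independent systems in series multiplies:
# with 10 systems at 99 percent each, the product is 0.99 ** 10, roughly 0.904.
n_systems = 10
per_system_availability = 0.99

overall = per_system_availability ** n_systems
print(f"Overall availability: {overall:.1%}")  # prints "Overall availability: 90.4%"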

This is where edge computing comes in. Developers must make sure that their device performs as many functions as it can without an internet connection. (A degraded user experience is still better than none at all, of course.)
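As a rough illustration of that design principle, the sketch below (Python, with a hypothetical endpoint and function names, not any particular vendor's API) treats the cloud as best-effort and keeps the device's core function local:

# Hypothetical sketch: the core function runs on the device; syncing with the
# cloud is best-effort, so losing connectivity degrades the experience but does not break it.
import urllib.error
import urllib.request

CLOUD_ENDPOINT = "https://example.com/sync"  # placeholder URL

def apply_locally(target_celsius):
    # Stand-in for the on-device control loop.
    print(f"Thermostat set to {target_celsius} C")

def set_temperature(target_celsius):
    apply_locally(target_celsius)  # works with or without an internet connection
    try:
        urllib.request.urlopen(CLOUD_ENDPOINT, timeout=2)  # best-effort sync
    except (urllib.error.URLError, TimeoutError):
        pass  # degraded (no remote visibility) but still functional

set_temperature(21)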

But not all functions of IoT devices can be performed locally. Imagine, for instance, a home security system that you can control remotely via your mobile device. Clearly, if you are not at home and you want to control the security system, it will need to be connected to the internet so it can receive commands from you. Here, edge computing can be used to do the necessary routing of commands and retrieval of information as close as possible to the user, via an edge cloud platform.

The key part here, though, is that for devices people rely on every day, slow response time and outages are unacceptable. We saw this last summer with the outage of a major IoT thermostat provider: The air conditioning suddenly shut off at many homes around the world because the device could not communicate with a centralized service—exactly the type of problem that edge computing is meant to prevent. Again, by moving logic as close as possible to the device—be that in the device itself or at the edge of the internet—we can prevent such outages.

Forty percent of people will abandon a page that takes more than three seconds to load, and both Google and Facebook prioritize fast-loading webpages; the difference between a fast and a slow user experience can make or break a company. The next step in ensuring faster page loads is to avoid having to talk to a centralized server at all. By moving computation and logic as close to users as possible, we minimize the amount of communication they have to do and the time it takes to do it.

However, the danger and potential pain of centralization go beyond unhappy customers: Centralized systems are an easy target for distributed denial of service (or DDoS) attacks. DDoS attacks have been increasing in both size and frequency.

According to Arbor Networks, attack size has grown 1,233 percent in the past five years, and attacks are up 44 percent from last year. The past two years in particular have seen a rash of extortion attempts against small and large companies where the threat is that the attacker will use a botnet to take down the business’ website, as was widely reported in 2015 and 2016. Worse still are attacks against critical infrastructure, which we saw last October when major DNS provider Dyn was hit with a massive DDoS, taking down many major sites.

Our response to these security concerns needs to be a stronger push toward decentralization. Well-designed systems that are widely distributed and capable of operating independently are both better for our users’ experience and much more difficult to take down with attacks.

Edge computing, which moves computation to the edges of the network and toward the users, is both a natural progression and an ideal solution to the challenges the internet will face in the coming decades.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

