Edge Computing

Dec 20, 2016

Edge computing is the newest tech buzz phrase you might want to consider with some seriousness. I recently heard a pitch from a company building rain sensors for reclaimed water systems. It all sounded good until they said, “And when the system isn’t checking for rainwater, it can be working on bitcoin mining in between cycles.” I looked at them like you would look at a penguin arriving in your office in a FedEx shirt. But the more I thought about it, the more I realized the whole idea could at least be practical in theory.

While “the cloud” and all of its associated jargon are the darling of VCs, “edge computing” is becoming a contender for the new cool kid on the block. Where the cloud ecosystem has focused on moving standard business operations software to the web, charging subscriptions for computing power, and generally getting data off your local systems and onto servers on the web, edge computing is essentially the opposite.

I’ll try to explain a little about the concept of edge computing and how it might change software development and business operations over the next few years. Keep in mind that this is, like most ideas in tech, mainly a jargon-based argument.

A Quick History Recap

Currently we have a client-server model in computing. This means that the server (somewhere on the Internet) is a big, powerful computer; we send it the hard stuff to do for us, and it sends us back the results. The client-server model was popular in the ’70s and early ’80s, because we really only had the concept of “terminals” that you would use to access a mainframe (which would be doing the real work).

The ’90s and 2000s, however, focused on powerful, individual desktop and laptop PCs. This meant a lot more horsepower in the end user’s hands. Gaming, multimedia, and publishing really drove this power race. So we mostly abandoned client-server models for stand-alone ones.

The mobile revolution caused a return to the client-server model, mainly because mobile devices didn’t have a lot of power and bandwidth was fairly inexpensive. Most apps these days are just a front end to a web-based application that does the lion’s share of the work. Take a photo, upload it to the cloud, the server adds cool sparkle effects to it, and then it comes back down to the phone! The Internet of Things (IoT) explosion that we’re witnessing now has changed some of this dynamic. We’re now moving BACK to a stand-alone model, mainly because there is just too much data for the servers to deal with.
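To make that trade-off concrete, here’s a minimal sketch in Python of the decision every connected device implicitly makes: ship the raw data up to a beefier server, or just handle it locally. The function name and all of the numbers are my own assumptions for illustration, not anyone’s real framework.

```python
def should_offload(payload_mb, uplink_mbps, server_speedup, local_seconds):
    """Rough rule of thumb: offloading only wins if the time saved by the
    faster server outweighs the time spent moving the data around."""
    transfer_seconds = (payload_mb * 8) / uplink_mbps   # upload time (return trip ignored)
    remote_seconds = local_seconds / server_speedup     # the server is faster, but not free
    return transfer_seconds + remote_seconds < local_seconds

# A phone-photo filter: small payload, big server advantage -> send it up.
print(should_offload(payload_mb=3, uplink_mbps=10, server_speedup=20, local_seconds=5))     # True

# A building's sensor firehose: huge payload, same server -> keep it at the edge.
print(should_offload(payload_mb=2000, uplink_mbps=10, server_speedup=20, local_seconds=60))  # False
```

With a small photo and a big server advantage, the round trip is worth it; with a sensor firehose, the upload alone costs more than just doing the work at the edge.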

Today

Everything these days has a Wi-Fi connection. If you run a modern-day office building, you might have tens of thousands of light bulbs, air handlers, key card entry pads, signage devices, and sprinkler systems that are all “Internet-connected” devices. These are becoming increasingly powerful computing devices as chip manufacturing becomes cheaper, more reliable, and more efficient. So what does this really mean in the long run?

This means that you’ll soon have light bulbs in your office with more computing power than an early cell phone. Yes, a light bulb.

A current example is the self-driving car, with thousands of sensors, chips, cameras, and computers on board. There is nowhere near enough bandwidth out there to handle sending that data to the cloud and then back to the car in enough time to avoid crashing into a group of caroling preschoolers. So it’s an important tech problem to get right.
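Some back-of-the-envelope arithmetic shows why the round trip is a non-starter. The figures below are rough assumptions for illustration, not measurements from any real vehicle:

```python
# How far does the car travel while it waits for a decision?
# All numbers here are assumptions for illustration only.

speed_m_per_s = 30           # roughly highway speed (~67 mph)
cloud_round_trip_s = 0.150   # optimistic cellular round trip plus server time
onboard_latency_s = 0.010    # on-board sensors and compute only

print("waiting on the cloud: %.1f m traveled" % (speed_m_per_s * cloud_round_trip_s))
print("deciding on board:    %.1f m traveled" % (speed_m_per_s * onboard_latency_s))
```

Several meters of blind travel per decision, multiplied across thousands of sensors, is why that processing has to live in the car.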

What’s to Come

Those devices, those sensors, those processors, and so forth are the “edges” in edge computing. In 10 years there will be so much computing power out there, available all the time, that it will completely eclipse the processing power of all the data centers in the world today. So the question edge computing is trying to answer is, “What do we do with it all?”

Bitcoin mining might not be the most interesting use of a ton of extra processing power, but it’s the first thing that comes to mind for many folks. I’m sure projects like crowd-sourced protein-folding simulations and the search for extraterrestrial life, which already run on spare CPU cycles, could get a massive boost from a few billion extra processors.
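For what it’s worth, the “work between cycles” idea from that rain-sensor pitch boils down to a loop like the sketch below. Everything in it is a hypothetical stand-in (the sensor routine, the work-unit source, the math), not any real project’s client:

```python
import random
import time

def sensor_cycle():
    # Hypothetical stand-in for the device's primary job (checking for rain).
    time.sleep(0.1)

def fetch_work_unit():
    # Hypothetical stand-in for pulling a chunk of a crowd-sourced job
    # (protein folding, signal searches, etc.) from some coordinator.
    return [random.random() for _ in range(10000)]

def crunch(work_unit):
    # Whatever the spare-cycle project needs computed on this chunk.
    return sum(x * x for x in work_unit)

def run(cycles=3, idle_budget_s=0.5):
    # The device's real duty always runs first; leftover time before the
    # next cycle goes to the background job.
    for _ in range(cycles):
        sensor_cycle()
        done = 0
        deadline = time.monotonic() + idle_budget_s
        while time.monotonic() < deadline:
            crunch(fetch_work_unit())
            done += 1
        print("finished %d work units between sensor cycles" % done)

run()
```

The design point is simply that the device’s real job always comes first, and the spare cycles get whatever the background project hands out.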
