If you followed the recent announcements of the big IT companies like Microsoft, Google, Apple, Facebook or Amazon, odds are that you heard a lot about Artificial Intelligence, Conversational Interfaces, Natural Language Processing or Augmented/Virtual Reality. There seems to be a consensus among these companies that we’ll see intelligent assistants built into all kinds of household appliances. Consumers already have Amazon Dot/Echo and Google Home, and Apple is also entering the game with its Siri-powered HomePod.
All these devices provide considerable processing power: for example, Apple’s smart speaker is rumored to have an A8 CPU, the same chip that is used in the iPhone 6. One cannot help but wonder: what is possible to do with all this (local) processing power?
From an architectural perspective this development is a bit of a déjà vu, at least on a conceptual level. Basically, we are witnessing a transformation similar to the one that happened with the rise of the personal computer: as terminals became more and more powerful, computing moved away from mainframes towards the desktop of the business user and ultimately into everybody’s home. Now we see a similar development with all kinds of smart household appliances. They have enough processing power, memory and storage capacity to do interesting things locally. For example, it is possible to do Natural Language Processing or Image Recognition directly on the device, without the help of remote cloud processing power.
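To make the “locally” part a bit more concrete, here is a minimal sketch of on-device image recognition, assuming a device that can run Python with PyTorch and torchvision, and a pretrained MobileNetV2 whose weights have already been downloaded and cached on the device; the file name photo.jpg is just a placeholder for an image captured locally. Once the weights are present, classification needs no cloud round trip at all.

```python
# Minimal sketch: image classification running entirely on the device.
# Assumes PyTorch + torchvision are installed and the MobileNetV2 weights
# are already cached locally (no network access needed at inference time).
import torch
from torchvision import models, transforms
from PIL import Image

# A small network designed for constrained, phone-class hardware.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "photo.jpg" is a placeholder for any image captured by the device.
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

# Inference happens locally; no remote service is involved.
with torch.no_grad():
    logits = model(batch)
predicted_class = logits.argmax(dim=1).item()
print(f"Predicted ImageNet class index: {predicted_class}")
```

MobileNetV2 is chosen in this sketch precisely because it was designed for mobile-class hardware, i.e. roughly the kind of chips mentioned above; the same pattern applies to small speech or language models running on the device.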
While the aforementioned consumer devices like Amazon Dot/Echo or Google Home provide the means for doing a lot of calculations locally, they still connect to a remote cloud for their services. Consumers might not care that their smart devices connect to the cloud, as long as the services work. But consumers getting used to this kind of smart (edge) devices paves the path to a future without a central cloud. As pointed out by Pedro Garcia Lopez et al. [1]:
• Proximity is in the edge: This is the old but still valid argument of peer-to-peer (P2P) systems and content distribution networks (CDNs). It is more efficient to communicate and distribute information between close-by nodes than to use far-away centralized intermediaries. Here, “close-by” can be understood both in a physical and a logical sense.
• Intelligence is in the edge: As miniaturization still continues and computing capacity still increases, edge sensors and devices become more powerful. This opens the way to autonomous decision-making in the edge such as novel distributed crowdsensing applications, but also human-controlled actuators or agents reacting to the incoming information flows.
• Trust is in the edge: Personal and socially sensitive data is clearly located in the edge. The control of trust relations and the management of sensitive information flows in a secure and private way must therefore also belong to the edges.
• Control is in the edge: The management of the application and the coordination also comes from the edge machines that can assign or delegate computation, synchronization or storage to other nodes or to the core selectively.
• Humans are in the edge: Human-centered designs should put humans in the control loop, so that users can retake control of their information. This should lead to the design of novel crowdsourced and socially informed architectures where users control the links of their networks. Finally, it also opens opportunities for novel and innovative forms of human-centered applications.
If the personal computing revolution is any indication of the magnitude of future developments in edge computing (and its market potential), we are in for an interesting future on the edge.