You’ll have heard the term ‘edge computing’. But what does it mean? More pertinently, what does it mean for the healthcare industry? Stanford University’s Dr. Timothy Chou, who started the first class on cloud computing in 2005, provides some clarity.
Numerous analysts have already identified edge computing as one of the next big things. Thomas Bittman of the market research company Gartner wrote that the “edge will eat the cloud.” Phillip Cases, founder of Topio Networks, says, “Edge computing is one of the top 10 trends for 2019. This coming tsunami will require us to think about infrastructure differently.”
So what exactly is it? Why would you care? And more importantly, can edge computing change medicine?
Over the past thirty years or more, the cost of computing has come down every year. Gordon Moore, co-founder of Intel, is famous for predicting, as far back as 1965, that the processing power of computers would double every two years. As evidence of how low the cost of computing has become, today a 2.3GHz processor with 2GB of memory can be rented on the public cloud for less than 3 cents per hour.
So with the cost of computing being driven down, coupled with higher performance, there’s obviously been a move to put more computing in more places. Most visible was the rise of personal computers in the 1980s and, more recently, the introduction of Dell laptops, iPads and Samsung smartphones, all of which contain powerful, low-cost computers. But this transition hasn’t been limited to what you’d think of as a computer. Many modern-day items also have computers embedded in them: the current Porsche Panamera ships with 100,000,000 lines of code, an AGCO combine harvester has more than 5,000,000, and modern Siemens MRI scanners have over 10,000,000. (Lines of code are a rough measure of the amount of software on a machine.)
In simple terms, edge computing is putting computing ever closer to the demand for, and the source of, the data. Today the public cloud (AWS, Azure, Google Cloud) is deployed out of a handful of data centers around the world. So if you live in Palo Alto and want to watch The Crown, rather than go all the way to an Amazon data center in Virginia to retrieve your Netflix movie, you pull it from a server in San Francisco, which feeds the Chromecast dongle in your home. That Chromecast dongle is an edge computer. That content server in San Francisco is an edge computer. Now, if communication were free and infinitely fast, there would be no need for edge computing. But we’re a long way from that, although the work on 5G networks is bringing us ever closer. Doing computing closer to the edge of the network lets organizations analyze important data in near real time, a need shared across many industries.
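To see why distance matters, consider the best-case round-trip time imposed by physics alone. The numbers below are illustrative assumptions (rough distances, light traveling through fiber at about 200,000 km/s), not figures from this article:

```python
def round_trip_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    """Best-case round-trip time over fiber, ignoring routing and processing."""
    return 2 * distance_km / fiber_speed_km_s * 1000

# Palo Alto to a Virginia data center: roughly 4,000 km one way
print(round_trip_ms(4_000))   # 40.0 ms, before a single byte is processed
# Palo Alto to a San Francisco edge server: roughly 50 km
print(round_trip_ms(50))      # 0.5 ms
```

Real networks add routing, queuing and processing delay on top of this floor, so the gap in practice is larger still; closing it is precisely what edge deployment and 5G aim to do.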
In the world of healthcare, the Apple Watch monitoring your heart rate is an edge computer; the embedded PC in the Siemens Magnetom Skyra 3.0T scanner is an edge computer. So while all medical equipment will get smarter, how can edge computing change healthcare?
Consider the possibility of connecting all the healthcare machines in every children’s hospital in the world. With around 500 children’s hospitals and an average of 1,000 machines per hospital, that would amount to 500,000 machines. Since these healthcare machines come from different suppliers (Abbott, GE, Siemens, Philips, Beckman, Leica Microsystems, etc.), you’d have to come up with a standard edge computer that could interface with all of them. The data would include both machine data (the laser power level of a gene sequencer) and nomic data (the actual MRI scan or gene sequence). Security features would have to be designed into the edge computers, not left as an afterthought. Furthermore, you’d need to engineer for privacy, as GDPR and HIPAA are only the beginning of the regulatory frameworks.
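As a concrete sketch, such a standard edge computer might normalize every machine’s output into one common envelope before anything leaves the hospital. Everything below, field names and locations included, is a hypothetical illustration, not any supplier’s actual interface:

```python
import json

# Hypothetical normalized reading from a vendor-neutral edge computer.
reading = {
    "device": {
        "vendor": "ExampleVendor",        # would be Abbott, GE, Siemens, ...
        "model": "GeneSequencer-X",       # illustrative model name
        "serial": "SN-000123",
    },
    "machine_data": {                     # data about the machine itself
        "laser_power_mw": 4.2,
        "uptime_hours": 1093,
    },
    "payload_ref": {                      # nomic data is large, so reference it
        "type": "gene_sequence",
        "uri": "storage://hospital-17/runs/abc123",  # illustrative location
        "encrypted": True,                # security designed in, not bolted on
    },
}

message = json.dumps(reading)             # what travels to the aggregation cloud
```

Separating small machine telemetry from a reference to the bulky nomic payload is one plausible way to keep 500,000 devices chatty without flooding the network.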
With powerful enough edge computers, you could also make the machines smarter. For example, a machine could tell you when it needed maintenance, or answer questions without you ever having to type or move a mouse. But making each machine smarter would be just the beginning: an edge computer connected to a high-performance 5G network could securely aggregate data in public or private cloud services.
As we’ve seen in the consumer world, there have been major advances in AI image recognition. The famous ImageNet competition, which pitted people against machines in the task of recognizing millions of images, started in 2010. By the 2015 competition, the computers were better than the humans. Companies like Facebook are also pushing the state of the art in facial recognition, while in the quest to build autonomous cars, Tesla, Porsche, Mercedes and other automakers are advancing scene recognition.
Why are all of these getting better? There is a famous presentation slide by Jeff Dean of Google Brain showing that, with neural networks, increasing the amount of computing and the amount of data yields nearly linear improvements in accuracy. Public cloud computing delivers large amounts of compute for very little money: a thousand servers for an hour costs $100. So what if we could bring terabytes of imaging data (MRI, CT, X-ray, ultrasound, etc.) together? Could we build increasingly accurate pneumonia diagnostics? How many of the ACR DSI use cases (www.acrdsi.org/DSI-Services/Define-AI) could be solved?
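The economics here are easy to sanity-check. A minimal sketch, assuming the flat rate of $0.10 per server-hour implied by the thousand-servers-for-$100 figure:

```python
def cluster_cost_usd(servers: int, hours: float,
                     rate_per_server_hour: float = 0.10) -> float:
    """Cost of renting a public-cloud cluster at an assumed flat hourly rate."""
    return servers * hours * rate_per_server_hour

print(cluster_cost_usd(1_000, 1))    # 100.0, the figure quoted above
print(cluster_cost_usd(1_000, 24))   # 2400.0, a full day of large-scale training
```

At these prices the scarce ingredient is not compute but aggregated data, which is exactly what a network of connected machines would supply.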
The implications for global healthcare are huge. With only 3,000 pediatric radiologists on the planet and two billion children, healthcare is spread unevenly. Pneumonia is the number one cause of death of children in Africa. It doesn’t have to be this way. Using data from 500,000 connected healthcare machines, we can build AI doctors to diagnose thousands of conditions. With edge computing, medicine may indeed be at the edge of change.