Cloud compute and storage services from Amazon, Google, and Microsoft offer many advantages over traditional on-premises computing.

On the first day of my Stanford class, I describe how these cloud service providers function. First, they take on the responsibility of purchasing compute, storage, and networking equipment. They also commit to managing the security, availability, performance, and change of the infrastructure. Amazon, Google, Baidu, and Microsoft deliver these centralized services from about 10-20 data centers around the world.

In the traditional on-premises model, you would have had to purchase the capital equipment, rent the data center space, pay the power and cooling bills, and, on top of that, hire the staff to manage the security, performance, and availability of the compute and storage infrastructure. By comparison, the cloud model delivers both lower cost and higher quality.

While there are many advantages to moving to these cloud services, some applications are poorly suited to running in a centralized environment. They include:

  • Real-time applications. Some applications require near-real-time decision-making at the location where the data is captured
  • Big data applications. A factory can produce enormous volumes of data a day, but pushing this data up to a central cloud can be slow and expensive, even over 5G
  • Isolated applications. Some applications need to work even when disconnected for substantial periods of time, especially where data connections are unreliable or intermittent, for example, on an oil platform in the middle of the ocean
  • Privacy, security, and compliance applications. Sometimes data cannot leave the location where it is generated or needs sufficient processing before it can be shared, for example, in healthcare

As a result, there is a need for a decentralized cloud computing infrastructure. We’re going to call this decentralized infrastructure an “edge cloud”. Similar to central cloud services, a decentralized cloud computing service provider still acquires the compute, storage, and network equipment; manages the performance, availability, security, and change of the infrastructure; and delivers it in an OPEX business model. The key difference is that the edge cloud must be engineered to deploy in any building, rather than in a handful (10-20) of data centers. To achieve both real-time and privacy-preserving features, applications require the computing infrastructure to be decentralized and in the building – the hospital, the clinic, or ultimately the home. Therefore, the edge cloud has a unique set of requirements.

  1. Secure decentralized compute and storage

Central cloud services are delivered from a handful of data centers where physical access to the building is strictly enforced. Putting servers into a hospital or clinic cannot realistically require the same level of physical access control. Unable to rely so heavily on physically restricted access, a decentralized edge cloud needs to use hardware-based encryption technology, trusted boot, and the ability to deactivate the server should it leave the edge zone.

  2. Network security

One of the main reasons the computing infrastructure needs to be in the building is that the healthcare machines generating the data are in the building, and the only way to communicate with those data-generating machines is to be on the same secure, managed network. So, an edge network service must support intra-zone (inside-the-building) communications. In addition, it must support secure extra-zone (outside-the-building) communications as well as inter-zone communications between edge locations.
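As an illustrative sketch, these three traffic classes could be captured in a default-deny policy table. The zone names and rules below are assumptions for illustration, not a real product's configuration:

```python
# Illustrative default-deny policy for the three edge traffic classes:
# intra-zone (inside the building), extra-zone (to/from the outside),
# and inter-zone (between two edge locations). Rules are assumptions.
ZONE_POLICY = {
    ("intra", "intra"): "allow",         # machines talking inside the building
    ("intra", "extra"): "allow-if-tls",  # outbound to the central cloud, encrypted only
    ("extra", "intra"): "deny",          # unsolicited inbound traffic blocked
    ("intra", "inter"): "allow-if-tls",  # traffic to another edge zone, encrypted only
}

def decide(src_zone: str, dst_zone: str) -> str:
    """Return the action for a flow; anything not explicitly listed is denied."""
    return ZONE_POLICY.get((src_zone, dst_zone), "deny")

decide("intra", "intra")  # 'allow'
decide("extra", "intra")  # 'deny'
```

The default-deny lookup reflects the requirement that only explicitly sanctioned paths exist between the building and the outside world.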

  3. Access to data from healthcare machines

While there is useful data in the electronic medical record (EMR) or electronic health record (EHR), there is far more data in the imaging machines, blood analyzers, drug infusion pumps, ventilators, and gene sequencers. An architecture that supports real-time access to this data in its uncompressed form, as edge computing does, provides a rich source of untapped and highly valuable data.

  4. Fine-grained data sharing

One of the fundamentals of privacy is purpose limitation. The architecture needs to allow for fine-grained data sharing, such that a machine owner can choose which specific, decentralized edge applications to share data with (and which not to). Doing so clearly defines not only which data can be shared and with whom, but also for what specific purpose(s), for commercial and research applications alike.

  5. Decentralized edge application control

In addition to fine-grained data sharing, the decentralized architecture should also offer a rigorous process for allowing decentralized edge cloud applications into the building. This process should include security vulnerability testing, application security review, and defined whitelists for any external communication.
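The whitelist portion of that process can be sketched in a few lines; the application name and vendor host below are hypothetical placeholders:

```python
from urllib.parse import urlparse

# Illustrative per-application whitelist of permitted external hosts.
# "sepsis-predictor" and the vendor hostname are made-up examples.
APP_EGRESS_WHITELIST = {
    "sepsis-predictor": {"models.example-vendor.com"},
}

def egress_allowed(app_id: str, url: str) -> bool:
    """Permit outbound traffic only to hosts explicitly whitelisted for the app."""
    host = urlparse(url).hostname
    return host in APP_EGRESS_WHITELIST.get(app_id, set())

egress_allowed("sepsis-predictor", "https://models.example-vendor.com/v1")  # True
egress_allowed("sepsis-predictor", "https://telemetry.unknown.io/ping")     # False
```

An app with no whitelist entry gets no external communication at all, which keeps the review process the single gate for egress.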

  6. Image sanitization

Given the intent to share medically related images outside of the hospital or clinic, the infrastructure should support what is referred to as “image sanitization.” In other words, it needs to be able to automatically identify and redact any personally identifiable information (PII) present in the images.
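For the metadata side of the problem, a hedged sketch is below. The field names mirror DICOM attributes, but the plain-dict format is illustrative only, and a real pipeline would additionally need OCR to find text burned into the pixel data:

```python
# Common PII attributes to strip before an image leaves the building.
# Names mirror DICOM tags; the dict representation is an assumption.
PII_FIELDS = {"PatientName", "PatientID", "PatientBirthDate", "PatientAddress"}

def sanitize_metadata(metadata: dict) -> dict:
    """Return a copy of the metadata with PII fields replaced by a marker."""
    return {k: ("REDACTED" if k in PII_FIELDS else v)
            for k, v in metadata.items()}

scan = {"PatientName": "Doe^Jane", "Modality": "US", "StudyDate": "20230601"}
sanitize_metadata(scan)
# {'PatientName': 'REDACTED', 'Modality': 'US', 'StudyDate': '20230601'}
```

Clinically useful fields such as modality and study date survive, while identifying fields are removed before sharing.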

  7. Real-time inference

The architecture should support real-time AI inference. Regardless of where an AI application’s training takes place (using a decentralized or a centralized architecture), the servers at the point of care must ultimately be able to execute locally on that learning (i.e. execute the resulting AI application) without having to rely on or make use of servers outside of the building.

  8. Federated learning capabilities

Finally, the decentralized edge cloud service should be optimized for privacy-preserving, bandwidth-efficient federated learning. Rather than a centralized architecture that learns on 6,000,000 TB of shared, aggregated ultrasound data, for example, federated learning would allow the algorithmic learning to take place in a decentralized fashion across the 7,000 servers located in all 500 children’s hospitals around the globe.
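As a rough sketch of the idea, one FedAvg-style round can be written in a few lines: each hospital computes a model update on its own data, and only the weights (never the raw patient data) leave the building to be averaged. The linear model and random data here are synthetic stand-ins, not a real medical workload:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a hospital's local data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(updates, counts):
    """Average the locally trained weights, weighted by local dataset size."""
    total = sum(counts)
    return sum(w * (n / total) for w, n in zip(updates, counts))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Four "hospitals", each with its own private dataset that never leaves home.
hospitals = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

for _ in range(20):  # each round: local training, then weighted averaging
    updates = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = federated_average(updates, [len(y) for _, y in hospitals])
```

Only the 3-element weight vector crosses the network each round, which is what makes the approach both privacy-preserving and light on bandwidth compared with shipping the raw data.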

Creating real-time, privacy-preserving AI-in-medicine applications requires a new decentralized computing infrastructure. We cannot build a new AI-powered future without it.

AI tools and their deployment will be discussed at the in-person AIMed Global Summit, scheduled for June 4-7, 2023 in San Diego, with the remainder of the week filled with exciting AI-in-medicine events like the Stanford AIMI Symposium on June 8th. Book your place now!

We believe in changing healthcare one connection at a time. If you are interested in the opinions in this piece, in connecting with the author, or the opportunity to submit an article, let us know. We love to help bring people together! [email protected]