Welcome To The Supercloud 


In Cloudflare’s S-1 document, there’s a section that begins: “The Internet was not built for what it has become”.

That sentence expresses the idea that the Internet, which started as an experiment, has blossomed into something we all rely on in our daily lives and work. And that more is needed than the Internet as it was originally designed; it also needs security and performance and privacy.

Something similar can be said about the cloud: the cloud was not designed for what it must become.

The introduction of services like Amazon EC2 was undoubtedly a huge improvement on the old way of buying and installing racks and racks of servers and storage systems and then maintaining them.

But by its nature, the cloud was a virtualisation of the older real-world infrastructure and not a radical rethink of what computing should look like to meet the demands of Internet-scale businesses. It’s as if steam locomotives were replaced with efficient electric engines but still required a chimney on top and stopped to take on water every two hundred miles.

The cloud replaced buying servers and installing operating systems with new and now familiar rituals of choosing regions, provisioning virtual machines, and keeping code artificially warm.

But along the way, glimpses of light shine through the cloud in the form of lambdas, edges, functions, or serverless. All are trying to give a name to a cloud computing model that promises to make developers highly productive while scaling from a single user to Internet scale. It’s a model that, rather than virtualising machines or disks or wrapping things in containers, says: “write code, we’ll run it, don’t sweat the details like scaling or location”.
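To make that promise concrete, here’s a minimal sketch of what the model looks like in practice, using the shape of a Cloudflare Workers handler (the greeting text is purely illustrative): there’s no region to pick, no VM to size, and nothing to keep warm.

```ts
// A complete serverless application in the Workers style: a single function
// that answers requests. Where it runs, and how many copies run, is the
// platform's problem, not the developer's.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the nearest edge, you asked for ${url.pathname}`);
  },
};
```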

We are calling that the Supercloud.

The foundations of the Supercloud are compute and data services that make running any size application efficient and infinitely scalable without the baggage of the cloud as it exists today.

The foundations of the Supercloud

Some years ago a movement called NoSQL developed new ways of storing and processing data that didn’t rely on traditional relational databases. Key-value stores and document stores flourished because, rather than thinking about data at the granularity of databases or tables or even rows, they directly connected code and data at a simple level.
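As a hedged sketch of what “directly connecting code and data” looks like, here is a Worker reading and writing single keys through a Workers KV binding; the USERS binding name and the key layout are illustrative choices of ours, not an API you must follow.

```ts
// Code touches data one key at a time: no tables, no schema, no database
// server to provision. Types come from @cloudflare/workers-types.
interface Env {
  USERS: KVNamespace; // a hypothetical KV binding configured for this Worker
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const profile = await env.USERS.get("user:42", "json"); // read one value
    await env.USERS.put("user:42:last-seen", new Date().toISOString()); // write one value
    return Response.json(profile ?? { id: "user:42" });
  },
};
```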

NoSQL is a drive towards granularity. And it worked. NoSQL stores, KVs, and object stores (like R2) abound. The rise of MapReduce for processing data is also about granularity; by breaking data processing into easily scaled pieces (the map and the reduce), it was possible to handle huge amounts of data efficiently and scale up and down as needed.
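The map-and-reduce split is easy to see in miniature. The sketch below is only an illustration of the idea in TypeScript, not real MapReduce infrastructure: each map call and each per-key reduction is an independent piece of work, which is precisely what lets the model spread across as many machines as the data demands.

```ts
// Word count, MapReduce-style: map emits (word, 1) pairs, reduce sums them per key.
type Pair = [string, number];

const mapLine = (line: string): Pair[] =>
  line.toLowerCase().split(/\W+/).filter(Boolean).map((word): Pair => [word, 1]);

const reduceWord = (word: string, counts: number[]): Pair =>
  [word, counts.reduce((a, b) => a + b, 0)];

const lines = ["the cloud was not designed", "for what the cloud must become"];

// Group the mapped pairs by key, then reduce each group independently.
const grouped = new Map<string, number[]>();
for (const [word, one] of lines.flatMap(mapLine)) {
  grouped.set(word, [...(grouped.get(word) ?? []), one]);
}
const counts = [...grouped].map(([word, ones]) => reduceWord(word, ones));

console.log(counts); // [["the", 2], ["cloud", 2], ["was", 1], ...]
```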

The same thing is happening for cloud code. Just as programmers didn’t always want to think in database-sized chunks, they shouldn’t have to think about VM- or container-sized chunks. It’s inefficient and has nothing to do with writing code to create a service. It’s unnecessary work that distracts from the real value of programming something into existence.

In distributed programming theory, granularity has been around for a long time. In the CSP model, tiny processes perform tasks and pass data between them (it helped inspire the Go language); in the Actor model, messages pass between multitudes of actors that change their internal state; even the lambda calculus is about discrete functions acting on data.

Object-oriented programming has developers reasoning about objects (not virtual machines or disks). And in CORBA, and similar systems, there’s the concept of an object request broker allowing objects to run and be accessed remotely in a distributed system without knowing details of where or how the object executes.

The theory of computing points away from dedicated machines (virtual or real) and towards code and data, with the Supercloud handling the details of code execution and data locality automatically and efficiently.

So whether you write your code by breaking it up into functions, or ship large pieces of functionality or entire programs, the foundations of the Supercloud mean that your code benefits from its efficiency. And more.

The Supercloud advantage

The Supercloud makes scaling easy because no one has to think about how many VMs to provision, and no one has to keep hot standby VMs in case there’s a flood of visitors. Just as MapReduce (which traces its heritage to the lambda calculus) scales up and down, so should general-purpose computing.

And it’s not just about scaling. In the Supercloud, both code and data are mobile and move around the network. Attach data to the code (such as with Durable Objects; hello Actor model) and you have a foundation for applications that can scale to any size and move close to users as needed to provide the best performance.
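As a hedged sketch of “attach data to the code”, here is roughly what a Durable Object looks like: a class whose instances each own their own storage, so the object and its data travel together and can live wherever their users are. The Counter class and the count key are illustrative choices of ours.

```ts
import type { DurableObjectState } from "@cloudflare/workers-types";

// Each Counter instance is effectively an actor: one object, one piece of
// state, addressable by name, free to live close to whoever talks to it.
export class Counter {
  constructor(private state: DurableObjectState, private env: unknown) {}

  async fetch(request: Request): Promise<Response> {
    let count = (await this.state.storage.get<number>("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count); // state persists with the object
    return new Response(String(count));
  }
}
```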

Alternatively, if your data is immovable, we move your code closer to it, no matter how many times you need to access it.

Not only that, but working at this level of flexibility means that code enforcing a data privacy or data residency law about where data can be processed or stored can operate at the level of individual users or objects. The same code can behave differently, and even be executed in a completely different country, based on where its associated data is stored.
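One hedged sketch of what per-user residency logic could look like inside a Worker, using the country that Cloudflare attaches to each request (request.cf.country); the EU_COUNTRIES list and the jurisdiction routing are purely illustrative, not a compliance recipe.

```ts
// Illustrative only: choose a processing jurisdiction per request based on
// the country Cloudflare's edge attaches to the incoming request.
const EU_COUNTRIES = new Set(["DE", "FR", "IE", "NL", "ES", "IT"]); // partial list, illustrative

export default {
  async fetch(request: Request): Promise<Response> {
    const cf = (request as Request & { cf?: { country?: string } }).cf;
    const country = cf?.country ?? "unknown";
    const jurisdiction = EU_COUNTRIES.has(country) ? "eu" : "default";

    // In a real application this decision would pick where the user's data
    // object lives and which rules the code applies to it.
    return new Response(`country=${country}, jurisdiction=${jurisdiction}`);
  },
};
```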

A Supercloud has two interesting effects on the cost of running a program. Firstly, it makes it more economical because you only run what you need. There’s never any need for committed VMs waiting for work, or idle machines you’re paying for just in case. Code either runs or it doesn’t. It scales up and down as needed. You only pay for precisely what you need.

Secondly, it creates a more efficient compute platform, which is better for everyone. It forces the compute platform (e.g. us) to be as efficient as possible. We have to be able to start code quickly for performance and scaling reasons. We need to use CPUs efficiently because no customer is paying us to keep idle CPUs around. And it’s better for the environment because the machines run at very high levels of utilisation. This level of efficiency is what allows our platform to handle the 10 million requests that Cloudflare Workers processed in the time it took you to read the last word of this sentence.

And this compute platform scales well beyond a machine, or a data centre, or a country. With the right software (which we’ve built) it scales to the size of the Internet. Software allocates resources automatically across the globe, moving connections, data and processing around for high efficiency and optimal end user experience.

Efficient compute and storage, a global network that’s everywhere everyone is, bound together by software that turns the globe into a single cloud. The Supercloud.

Welcome to the Supercloud

The Supercloud is performant, scalable, available, private, and cost-efficient. Choosing a region for your application, provisioning virtual machines, working out how to auto-scale containers, or worrying about cold starts seems ridiculous, hard, anachronistic, a waste of time, rigid and expensive.

Happily, Cloudflare’s been building the alternative to that traditional cloud into our network and our developer platform for years. The Supercloud. The term may be new, but that doesn’t mean that it’s not real. Today, we have over a million developers building on the Supercloud.

Each of those developers wants to get code running on one machine and perfect it. It’s so much easier to work that way. We just happen to have one machine that scales to the size of the Internet: a global, distributed supercomputer. It’s the Supercloud and we build our own products on it, and you can join those one million developers and build on it too.

We’ve been building the Supercloud for 12 years, and five years ago we opened it up to developers through Cloudflare Workers. Cloudflare Workers has been built for scale and performance since day one, by running on our global network.
