Thoughtworks Misses the Mark — Re: Serverless vs. Kubernetes
My friend Arash Rohani recently forwarded an article published by a Thoughtworks team as part of their XConf Tech Talk Series: Serverless vs. Kubernetes when deploying microservices.
As I was reading through it, I could not help but think that the authors really missed the mark by leaving a gaping hole in their analysis.
They start by defining four principles “to call something a serverless service”:
- Servers need to be hidden
- Pay only for what you use
- Have high availability out of the box
- Be able to scale on demand
They also provide a diagram illustrating these principles.
By the authors’ own definition, they have completely missed an emerging slice of serverless: serverless containers.
This class of serverless is defined by cloud services such as:
- Google Cloud Run
- AWS App Runner
- Azure Container Apps
While Containers as a Service (CaaS) is a relatively new class of offerings, these services bridge a significant complexity, cost, and functional gap between Kubernetes and Functions as a Service (FaaS).
I have personally come to prefer these serverless services over FaaS; let’s explore them a bit deeper and see if it changes your team’s calculus on serverless adoption.
Are Containers as a Service “Serverless”?
Before evaluating CaaS against the facets outlined by the Thoughtworks team above, it is important to recognize that FaaS is itself a form of CaaS. When a Lambda executes, AWS is packaging your code and assets into a highly specialized “container” (in quotes because it is technically a microVM) and launching that serverless workload on your behalf.
You can read more about Firecracker, the open-source microVM technology behind Lambda, in AWS’s documentation.
Azure Functions, on the other hand, appears to operate on top of Docker container images.
So do services like Google Cloud Run, AWS App Runner, and Azure Container Apps meet the Thoughtworks authors’ criteria for serverless?
✅ Servers need to be hidden
Indeed, when working with any of the three CaaS services, you have no access to the underlying cluster API or infrastructure; the servers are fully hidden.
✅ Pay only for what you use
Pricing models vary slightly across providers, but all three services bill for the compute actually consumed serving traffic rather than for pre-provisioned cluster capacity; Cloud Run, for example, can scale to zero so an idle service incurs no compute cost.
✅ Have high availability out of the box
CaaS, like its cousin FaaS, doesn’t require management of service uptime beyond deploying into different availability zones or regions.
✅ Be able to scale on demand
Like FaaS, CaaS scales on demand and, in my opinion, offers a bit more control over the scaling parameters. For example, Azure Container Apps allows configuration of scale triggers (backed by KEDA), and Google Cloud Run exposes various controls for tuning autoscaling, such as minimum/maximum instance counts and per-instance concurrency.
How CaaS Changes the Calculus
It’s important to mention why this is such a critical gap and why I think CaaS will seriously change adoption of serverless workloads. We can simply evaluate the authors’ key takeaways comparing Kubernetes to FaaS and understand why these services need to be in the discussion for every team.
“Standardization/vendor lock-in: there is no Cloud Native Computing Foundation (CNCF)-backed serverless codebase like there is for Kubernetes. Each provider has its own implementation and features. You will need to adapt to these differences”
Because Google Cloud Run, AWS App Runner, and Azure Container Apps all operate on Docker images, your application runtime is, for the most part, portable. In all cases, the service simply requires that your application listen on a given port for HTTP traffic. What you do within the container? The service doesn’t care.
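That “listen on a given port” contract is the whole interface. As a minimal sketch (assuming Python here purely for illustration; any language works), a portable container workload only needs to read the port from its environment — Cloud Run, for instance, injects it via the `PORT` variable — and serve HTTP on it:

```python
import os
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Responds to any GET with a small JSON body."""

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep logs quiet for this sketch

def make_server(port: int) -> ThreadingHTTPServer:
    # The CaaS platform tells the container which port to bind;
    # binding 0.0.0.0 lets the platform's proxy reach the process.
    return ThreadingHTTPServer(("0.0.0.0", port), HealthHandler)

if __name__ == "__main__":
    server = make_server(int(os.environ.get("PORT", "8080")))
    server.serve_forever()
```

Package this behind any Dockerfile and the same image runs unchanged on Cloud Run, App Runner, Azure Container Apps — or a Kubernetes cluster.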
In fact, it’s even better: because CaaS operates on Docker container images, it’s relatively straightforward to migrate workloads to Kubernetes later should you ever need to operate the underlying infrastructure. You can lift-and-shift the same images into a self-managed Kubernetes cluster, or run them on premises.
“Execution time: long-running or batch processing kinds of use cases cannot run seamlessly with serverless yet. There is limited support for runtime and languages as well.”
This is one of the biggest wins of CaaS, because these services loosen or eliminate the limitations of their FaaS counterparts. Google Cloud Run offers a maximum request timeout of 60 minutes, compared to 540 seconds (9 minutes) for Cloud Functions. (The equivalent limits are not as clearly documented for the AWS and Azure services.)
And of course, it is possible to build these container images in any language that can be stuffed into a Docker container; this is a huge win for teams that have shied away from serverless functions because of limited language support or missing platform features in the FaaS runtime.
“Cold start: to overcome the problem of latency in serverless, cloud providers have launched pre-warmed instances. If your application is latency-sensitive, use cloud providers.”
I assume the authors intended to say that if the application is latency-sensitive, use Kubernetes. However, CaaS services can overcome this by setting a minimum instance count of 1 (even FaaS can overcome it, either by periodically pre-warming the service or, in the case of Azure Functions, by using the Premium plan). Carefully sizing container resources (vCPU and memory) and tuning concurrency limits can further mitigate cold starts as new instances are brought online.
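The “periodically pre-warm” workaround mentioned above can be sketched in a few lines. This is an illustrative, hypothetical helper (not from any provider’s SDK): a background thread that pings a service URL on an interval so the platform keeps at least one instance warm.

```python
import threading
import urllib.request

def ping(url: str, timeout: float = 10.0) -> int:
    """Issue one GET against the service and return the HTTP status."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

def start_warmer(url: str, interval_seconds: float) -> threading.Event:
    """Ping `url` every `interval_seconds` until the returned event is set."""
    stop = threading.Event()

    def loop():
        # wait() returns False on timeout, so this pings until stopped
        while not stop.wait(interval_seconds):
            try:
                ping(url)
            except OSError:
                pass  # transient failures are acceptable for a warmer

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

In practice, a scheduled job (e.g., a cron-triggered task) is a more common place to run the ping than an in-process thread; with CaaS, simply setting the platform’s minimum-instance option makes this workaround unnecessary.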
The authors miss the mark by leaving the emerging Containers as a Service options out of their evaluation, and in doing so, the recommendations and guidance they offer come up short in my opinion.
I am a big proponent of incorporating serverless functions into greenfield architecture, but I’ve since fallen in love with serverless containers for the flexibility they provide while still offering all of the key benefits of FaaS. The freedom to use any language, deploy (almost) any workload, and operate with ease makes them a go-to for future-facing architecture. The flexibility to build applications as you see fit, with the libraries, frameworks, and tooling your team already uses, is a huge win for productivity. In some cases, existing workloads can even be lifted directly into serverless containers.
I’ve built a simple example full-stack application with .NET 6 Web API, React, and MongoDB (GitHub repo), deployed to Google Cloud Run, to show how existing web application workloads can be “lifted” without refactoring to FaaS.
Google Cloud Run, in particular, has blown me away, and in a future article I want to explore how it can deliver on the promise of microservices with a fraction of the complexity of operating Kubernetes or using FaaS options.