One of the most obvious trends in enterprise computing has been the migration from on-premise hardware to cloud-managed services. Virtualization allowed on-premise hardware to be used as a container for many virtual servers, and that architecture easily migrated into the cloud with the advent of Amazon Web Services (AWS), Azure, and other cloud providers.
The abstraction of the operating system host did not stop with the virtual machine, however. AWS began as an infrastructure as a service (IaaS) cloud—it provided virtualized machine images that could be used to assemble the familiar architectural components of a traditional application stack—database server, application server, web server, and so on. The alternative platform as a service (PaaS) model hides the details of the virtual machine architecture from the application developer and user, presenting only the application stack APIs and not the underlying virtual machines. Azure is oriented more toward PaaS and AWS more toward IaaS, though both platforms, along with other providers such as Google, offer a choice of either model.
The serverless computing architecture—sometimes called function as a service or FaaS—hides not just the underlying virtual machine, but also the application server itself. The cloud simply agrees to execute your code on demand or in response to an event.
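To make the model concrete, a FaaS function is typically just a handler that the platform invokes with an event payload; the developer never manages a server process. The sketch below is in the style of an AWS Lambda Python handler—the event fields used here are hypothetical, chosen only for illustration:

```python
import json

def handler(event, context):
    """Invoked by the FaaS platform for each request or event.

    The platform supplies the event payload and may discard the
    execution environment at any time after the call returns.
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The platform wires the trigger (an HTTP request, a queue message, a file upload) to this handler; billing applies only while the handler is actually running.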
In PaaS, you still have some form of enduring application server code running—even if you aren’t aware of it—in a virtual machine. This server must be kept running to provide low-latency responsiveness, so you pay for the service even when it is idle. In the FaaS model, this dedicated application server is unnecessary. Of course, servers are still running in the cloud to manage the demands of the FaaS transactions, but these servers are shared among all users and no individual user has to pay for idle resources.
As well as the cost advantage, FaaS promotes a higher degree of scalability than IaaS or PaaS. There’s no need to explicitly spin up additional servers when load increases—the FaaS platform will add or remove capacity transparently as those changes occur. It’s true that many PaaS platforms will also adjust elastically to changes in demand. However, in PaaS, there may be a significant delay and increase in cost when a second application server (for instance) is launched. In FaaS, the experience is of a highly granular increase in capacity as demand grows.
Is this too good to be true? In truth, there are some significant compromises associated with the serverless model. Most notably, code in a FaaS application must be stateless. For instance, a function that performs an operation to modify user credentials will need to authenticate the user and establish a database connection every time it is called. In a server model, this authentication would often occur only once in the lifetime of a thread of execution. For many high-throughput OLTP-type applications, the overhead of re-establishing state will outweigh the benefits of the serverless elastic scaling and reduced cost overhead.
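The statelessness constraint described above can be sketched as follows. This is a toy illustration only: an in-memory SQLite database stands in for a real database service, and the credential check is purely illustrative. The point is that every invocation repeats the setup work that a long-running server would perform once:

```python
import sqlite3

def _connect():
    # Stand-in for connecting to a persistent database. In a real
    # FaaS deployment this would be a network connection to an
    # external database service, re-established on every call.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")
    return conn

def update_credentials(user, old_pw, new_pw):
    """Stateless FaaS-style function: each invocation must
    re-establish its own connection and re-authenticate the user.

    In a server model, the connection and the authenticated
    session would typically persist across many requests.
    """
    conn = _connect()  # overhead repeated on every invocation

    # Re-authenticate on every call.
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND pw = ?", (user, old_pw)
    ).fetchone()
    if row is None:
        conn.close()
        return False

    # Perform the actual work, then discard all state.
    conn.execute("UPDATE users SET pw = ? WHERE name = ?", (new_pw, user))
    conn.close()
    return True
```

For a high-throughput workload, the connect-and-authenticate preamble dominates the useful work, which is why such applications may see the serverless overhead outweigh its cost and scaling benefits.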
Database as a service (DBaaS) is gaining steam as a viable alternative to dedicated database servers on-premise or in IaaS clouds. However, these database systems are hardly serverless—an application typically maintains a long-running connection with the database. Nevertheless, cloud-native databases such as DynamoDB are edging closer to the serverless model and are often more natural choices when developing a new serverless application.
Kubernetes—the increasingly ubiquitous Docker container orchestration platform—solves a different set of problems. However, Kubernetes can be used to run a FaaS platform, and a few frameworks—such as Kubeless—have been developed that provide a Kubernetes-managed FaaS platform.
Containers and serverless computing appear set to become dominant models for next-generation enterprise applications.