In the past, with cloud computing, an organization deploying and scaling an application would provision one or more servers and deploy the app onto them. As the workload grew, more servers could be provisioned to distribute it. This model worked, and still works, very well for scaling apps.
However, this approach has disadvantages. Depending on your app, provisioning an entire server may be overkill. You also need to stay on top of usage: if you provision a new server and the workload later decreases, you have to remember to deallocate it or be hit with higher server costs. Then there is the ongoing burden of server maintenance, patching, monitoring app usage, and so on.
Serverless attempts to solve these problems. The term suggests there are no servers, but really it means the server management is hidden from you – the cloud provider scales the app automatically based on how it observes the app performing. The advantage is that you pay for actual usage instead of prepaying to provision a server.
Serverless was first introduced as a public offering by Amazon as AWS Lambda. The catchphrase they use is "Run code without thinking about servers. Pay only for the compute time you consume."
It has features such as real-time monitoring of usage and fees through Amazon CloudWatch. Deployed code is called a Lambda function, which you can either upload as a ZIP file or write in the editor built into the AWS Management Console. Lambda currently supports Node.js, Python, Java and C#.
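To make this concrete, here is a minimal sketch of what a Lambda function looks like in Python. The entry point name and the `(event, context)` signature follow Lambda's Python handler convention; the event field `"name"` and the greeting logic are made-up details for illustration, not part of any real API.

```python
import json

def lambda_handler(event, context):
    # "event" carries the invocation payload (e.g. from API Gateway);
    # "context" exposes runtime information such as the remaining time.
    # The "name" field here is a hypothetical input for this example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }
```

You would zip this file up (with any dependencies) and upload it, or paste it into the console editor; Lambda then invokes `lambda_handler` each time the function is triggered.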
Google has Cloud Functions currently in beta.
Microsoft's serverless offering is called Azure Functions. As Microsoft describes it, you "scale based on demand and pay only for the resources you consume."
The serverless model has some disadvantages, such as the difficulty of debugging code you cannot run locally in the same environment, and potential latency when functions are started on demand. However, these issues will most likely be addressed as serverless computing becomes more popular.
I will go through more examples on this topic in future posts.