Serverless computing is making a lot of headlines, but it has limitations, drawbacks and provisos.
Before any developer team (or indeed, more likely, any DevOps team) considers widespread adoption of serverless computing, it should think about what is and isn’t possible in this essentially Backend-as-a-Service (BaaS) environment.
What is serverless computing?
As also defined here, serverless computing refers to the creation of software applications that are not provisioned to run on any particular server. In a serverless computing architecture, software programmers (and the operations teams they work with) do not need to spend time setting up, tuning and scaling applications to work in a certain way – that is all looked after at the backend by the cloud provider.
Serverless computing limitations
Moshe Kranc, CTO at software development services company Ness Digital Engineering, points to the limitations of serverless computing; the following bullets are written by Kranc himself:
- Low latency apps: If you use a dedicated cloud server, your code is already up and running when an event arrives, so the event can be processed within milliseconds. If you use serverless computing, then it can take several hundred milliseconds from the time the event occurs until it is processed. You must wait until the cloud platform allocates a server to your code, deploys the code and starts the runtime environment needed to run the code (e.g., a Java Virtual Machine).
NOTE: This makes serverless computing a poor choice for applications that require a quick, single-digit-millisecond response to events.
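The cold-start cost described above can be illustrated with a minimal sketch of a Lambda-style function. Module-level code runs once per container (the "cold start"), while the handler runs per event; the `time.sleep` stands in for loading a heavy runtime and is purely hypothetical:

```python
import time

# Module-level code runs once per container. On a cold start, the
# platform must allocate a server, deploy the code and run all of
# this before the first event can be handled.
_init_started = time.perf_counter()
time.sleep(0.2)  # hypothetical stand-in for loading a runtime/framework
INIT_SECONDS = time.perf_counter() - _init_started


def handler(event, context=None):
    # The handler body itself runs per event and is cheap once the
    # container is warm; the one-off INIT_SECONDS dominates cold starts.
    start = time.perf_counter()
    result = {"echo": event.get("message", "")}
    result["handler_ms"] = (time.perf_counter() - start) * 1000
    return result
```

Running the handler twice in the same process shows the asymmetry: the initialization cost is paid once, and each subsequent invocation is orders of magnitude faster.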
- Resource limits: Each cloud platform places limits on the server size available to run a serverless function, as well as on the total execution time of the code. For example, AWS Lambda limits a serverless function to 1.5 GB of memory and no more than five minutes of execution time.
NOTE: This makes serverless programming a poor choice for applications that are memory intensive or require a long time to complete.
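One common way to live inside the execution-time limit is to check the remaining time budget and hand unfinished work to a re-invocation. `get_remaining_time_in_millis()` is the real method on the AWS Lambda context object; the chunking strategy and the `FakeContext` stub are our own sketch:

```python
def process_records(records, context, safety_ms=500):
    """Process as many records as the time budget allows.

    Returns (done, leftovers); leftovers would be passed to a fresh
    invocation rather than risking a hard timeout mid-record.
    """
    done = []
    for i, rec in enumerate(records):
        if context.get_remaining_time_in_millis() < safety_ms:
            return done, records[i:]  # re-invoke with the remainder
        done.append(rec.upper())      # stand-in for real per-record work
    return done, []


class FakeContext:
    """Hypothetical local stub mimicking the Lambda context's timer."""

    def __init__(self, budget_ms):
        self.budget_ms = budget_ms

    def get_remaining_time_in_millis(self):
        self.budget_ms -= 100  # pretend each record costs ~100 ms
        return self.budget_ms
```

With a 1 000 ms fake budget and a 500 ms safety margin, the function stops early and returns the unprocessed tail instead of being killed by the platform.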
- Development challenges: In a traditional procedural or object-oriented software architecture, a program consists of code that executes serially. A serverless program, on the other hand, consists of a set of code fragments whose execution order is determined entirely by the order in which events occur. This presents a challenge to the developer, because many of these events (e.g., a change to an Amazon S3 object) can only be generated in the cloud – there are currently no good tools to emulate cloud events in a local development environment.
NOTE: This can reduce developer productivity, because coding, especially at the initial stages, is far easier in the local desktop environment than in the cloud.
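Lacking good local emulation tools, one workaround is to hand-craft event payloads that mimic the cloud's notification shape, so the handler can at least be exercised on the desktop. The payload below follows the general structure of an S3 `ObjectCreated` notification; treat the exact fields as an illustrative assumption, not a full fidelity copy:

```python
def make_s3_event(bucket, key):
    """Hand-built payload mimicking the shape of an S3 object-created
    notification, for local testing without the cloud."""
    return {
        "Records": [
            {
                "eventName": "ObjectCreated:Put",
                "s3": {
                    "bucket": {"name": bucket},
                    "object": {"key": key},
                },
            }
        ]
    }


def on_object_created(event, context=None):
    # Extract (bucket, key) pairs the same way a deployed function would.
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event["Records"]
    ]
```

This does not replace testing against real cloud events, but it lets the parsing and business logic be developed at desktop speed.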
- Testing challenges: It’s not enough to individually test the code associated with each event. To implement a real-world use case that accomplishes useful work, you have to simulate the flow of events in the correct order as well as all other feasible orders. This requires a new set of test tools that is still evolving.
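For small event sets, "all other feasible orders" can be checked exhaustively: replay every permutation of the events and assert that an invariant holds after each. The toy account model below is our own illustration of the technique, not a real serverless test framework:

```python
import itertools


def apply_events(events):
    """Fold a sequence of (name, amount) events into a balance,
    rejecting withdrawals that would overdraw (the toy invariant)."""
    balance = 0
    for name, amount in events:
        if name == "deposit":
            balance += amount
        elif name == "withdraw" and balance >= amount:
            balance -= amount
    return balance


def holds_in_all_orders(events, invariant):
    # Exhaustively replay every possible ordering of the events;
    # practical only for small event sets, but it catches order bugs
    # that a single "happy path" replay misses.
    return all(
        invariant(apply_events(order))
        for order in itertools.permutations(events)
    )
```

Because the withdrawal is guarded, the non-negative-balance invariant survives every ordering here; dropping the guard would make some orderings fail, which is exactly what this style of test is meant to expose.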
When is serverless computing a good idea?
Ness Digital Engineering’s Kranc says that serverless is an excellent choice for applications where:
- The flow of the application can be expressed as responses to a series of events.
- Events occur sporadically. If your application is going to be constantly bombarded with events, it will be cheaper to rent an entire dedicated server than to pay per event.
- Event processing is not resource intensive, e.g., does not require a lot of time or memory.
- High latency (having to wait several seconds before an event is processed) is acceptable.
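The "sporadic events" criterion can be turned into rough arithmetic: find the event volume at which per-event charges overtake a flat server rental. The rates below approximate published AWS Lambda pricing ($0.20 per million requests plus a per-GB-second charge) but are illustrative assumptions only; real bills depend on memory size, duration and free tiers:

```python
def breakeven_events_per_month(server_cost_usd,
                               cost_per_million_requests=0.20,
                               gb_seconds_per_event=0.128,
                               price_per_gb_second=0.0000166667):
    """Rough monthly event count above which a dedicated server
    becomes cheaper than paying per event. All rates are illustrative
    assumptions, not a pricing reference."""
    per_event = (cost_per_million_requests / 1_000_000
                 + gb_seconds_per_event * price_per_gb_second)
    return server_cost_usd / per_event
```

Under these assumed rates, a $50-per-month server breaks even in the tens of millions of events per month; well below that, paying per event wins, which is why sporadic workloads suit serverless.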
Moshe Kranc has worked in the high tech industry for over 30 years in the United States and Israel. He was part of the Emmy award-winning team that designed the scrambling system for DIRECTV and he holds six patents in areas related to pay television, computer security and text mining.