"If you haven’t twigged already, aspects of DevOps related to infrastructure are on the path to creating tomorrow’s legacy. Give it 10 years." Simon Wardley
The future is serverless
While working at Yubl last year, I was fortunate to have the opportunity to help build a reasonably complex serverless architecture with an incredible team of engineers. Working mostly within the AWS stack, we were able to achieve a huge amount in a relatively short space of time. If you're interested in learning more, my good friend and former colleague Yan Cui has started a series of blog posts outlining some of what we did in more detail.
Although Yubl came to an untimely end in November 2016, I left convinced that serverless technologies are here for the long haul and I expect to see many more serverless architectures emerging in 2017. Seeing as I’ll probably be harping on about serverless quite a bit via this blog, I thought I’d kick things off by quickly summarising the biggest wins it can offer from my own experience so far.
Increased agility
I believe agility and productivity are subtly different things. Agile practices should yield the rapid, incremental delivery of business value. Conversely, it's possible to be very productive each day without actually delivering anything of real value to the business.
In more traditional microservices architectures, it's all too easy to become bogged down with provisioning, de-provisioning, maintaining and scaling out your numerous clusters. Some really great tools have emerged to help automate these tasks (e.g. Kubernetes), but it's fair to say this is complex stuff that requires time and human resources. Typically, it's also completely secondary to what the companies we work with and for are actually trying to achieve as a business!
Serverless cloud services such as AWS Lambda abstract away most of these complexities and enable us to concentrate on writing the code that will deliver the real business value, e.g. new features that customers are crying out for. At Yubl, we began to feel liberated from the usual raft of infrastructure concerns, which empowered us to be inherently user-focussed and very agile indeed. In our final month, our little team of 6 server developers deployed to production over 250 times, and in under 9 months we had delivered over 150 Lambda functions to production, all comfortably powering a social network with a rapidly expanding user base.
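If you've never seen one, it's worth appreciating just how little code a Lambda function actually needs. A minimal Python handler looks something like the sketch below; the function name and response shape are illustrative (here mimicking what an API Gateway integration expects), and all Lambda itself requires is a callable accepting an event and a context:

```python
import json

# A minimal AWS Lambda handler. 'event' carries the invocation payload
# (e.g. an API Gateway request); 'context' carries runtime metadata.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

There's no web server, no process management and no provisioning step in sight, which is precisely why small teams can ship so many of these so quickly.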
Significant cost savings
Not only can (expensive) developers be much more agile when infrastructure concerns are largely abstracted away from them, but as a business, you no longer have to worry about paying for idle or under-utilised servers. AWS Lambda, for example, gives you 1 million free requests per account per month, and you'll pay $0.0000002 per request ($0.20 per million) thereafter. This translates to potentially free dev environments and significantly reduced running costs for your average microservice in production.
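To put that pricing in context, here's a back-of-the-envelope sketch. The per-request price matches the figure above; the per-GB-second compute rate is the published rate at the time of writing, and for simplicity I've ignored the separate free tier on compute time:

```python
FREE_REQUESTS = 1_000_000          # free tier: requests per month
PRICE_PER_REQUEST = 0.0000002      # $ per request beyond the free tier
PRICE_PER_GB_SECOND = 0.00001667   # $ per GB-second of compute time

def monthly_cost(requests, avg_duration_s, memory_gb):
    # Requests beyond the free tier are billed per invocation;
    # compute time is billed on duration x allocated memory.
    # (The compute free tier is ignored here for simplicity.)
    billable = max(0, requests - FREE_REQUESTS)
    request_cost = billable * PRICE_PER_REQUEST
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g. 5M requests/month, 100ms average duration, 128MB of memory
print(round(monthly_cost(5_000_000, 0.1, 0.125), 2))  # under $2/month
```

For a modest microservice, the numbers really are that small, and anything under the free tier costs nothing at all.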
You don’t need to pay people to help your systems scale either e.g. DevOps engineers tactically introducing new servers in anticipation of a big surge in traffic. Which brings me neatly onto…
Near-instant elastic scalability
I’ve seen first-hand how Lambda can scale elastically out of the box. What’s even more impressive is how quickly this happens. It will scale up in a matter of seconds during a sudden surge in load, and right back down again as the load subsides. By comparison, you can enable autoscaling on your Amazon EC2 instances, but you’re probably looking at many minutes before a new server is up and serving traffic, and it may well be too late by then.
As a new, up-and-coming social network, one way Yubl attracted new users was through influencers who, for example, ran competitions from within the app. Quite early on during my time there, our biggest influencer told all of her followers that she'd be announcing the winner of a handbag competition at a specific time on Sunday evening. Predictably, there was a huge surge in traffic as her fans returned to the app in their droves at exactly the same time. Our CloudWatch graphs clearly illustrated how the new portions of the codebase built on AWS Lambda and DynamoDB scaled up almost instantly, whereas the legacy servers we were maintaining fared far less well.
It really does just work.
So what’s the catch?
To some extent, there isn't one! You should absolutely check out what's happening in the serverless space and think about how you might make use of these new technologies going forward. All of the big players are increasingly getting involved now. AWS Lambda has been around since 2014, but we now also have Microsoft Azure Functions, Google Cloud Functions and IBM OpenWhisk, among several others.
Having said all that, there’s no such thing as a silver bullet when it comes to designing a software architecture. For example, maybe:
- You have already invested a huge amount of time, money and resources into automating your infrastructure concerns.
- The load on your APIs is so high that a “pay for what you use” pricing model will actually prove more costly.
- You’re serving a real-time application, where cold-start latencies and the lack of persistent connections could be deal-breakers.
- You simply cannot accept the risk of vendor lock-in and need to have the option of deploying your software anywhere and everywhere.
These are just a handful of examples where serverless may not offer the right tools for the job.
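On the cost point in particular, it's worth doing the sums for your own traffic profile. Here's a rough break-even sketch against a fixed-price, always-on server; the $50/month server cost, 100ms average duration and 128MB memory figures are arbitrary assumptions, and free tiers are ignored:

```python
PRICE_PER_REQUEST = 0.0000002      # $ per Lambda request (illustrative)
PRICE_PER_GB_SECOND = 0.00001667   # $ per GB-second of compute time
SERVER_COST = 50.0                 # $ per month for an always-on server (assumed)

def lambda_cost(requests, avg_duration_s=0.1, memory_gb=0.125):
    # Pay-per-use cost: per-request charge plus compute-time charge.
    per_request = (PRICE_PER_REQUEST
                   + avg_duration_s * memory_gb * PRICE_PER_GB_SECOND)
    return requests * per_request

def break_even_requests(avg_duration_s=0.1, memory_gb=0.125):
    # Monthly request volume at which Lambda overtakes the fixed server.
    per_request = (PRICE_PER_REQUEST
                   + avg_duration_s * memory_gb * PRICE_PER_GB_SECOND)
    return SERVER_COST / per_request

print(f"{break_even_requests():,.0f} requests/month")
```

Under these assumptions the crossover sits north of a hundred million requests a month, which is why pay-per-use tends to win for all but the very highest-traffic APIs; plug in your own numbers before deciding.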
For many more use cases though, using serverless technologies really is a no-brainer. You’ll be delivering loosely coupled, scalable services in next to no time, for a fraction of the cost.