Faster and Easier Lambda Deployments with Stackery and SAM CLI
Stackery sponsored this post.
Serverless can serve as a case study in simplicity — but the platform also has its share of caveats.
The advantages serverless offers over traditional virtual-machine servers include speed and scope: your service can be live within a few hours, provisioned with exactly the availability you need and no stray open ports. All the concerns about updates and maintenance become your vendor's problem.
But when it's time to write a more complex serverless function, a problem rears its head: how do you deploy and test code you've just written for the first time?
This is where a deploy tool, such as the one Stackery offers, makes it much easier to deploy multiple small changes to your stack.
And yet — no matter what tool you use, you’ll end up waiting a few minutes to observe any changes to your Lambda.
That extra few minutes per code change is the best-case scenario: if you're part of a team and don't have deploy permissions, you might have to wait for your lead to come back from lunch.
But by pairing Stackery's tool with the SAM standard, the whole workflow gets faster: you can build your stack visually, generate a baseline configuration and then run it locally with AWS's own tools.
Here is a real-world example based on my experience: while writing this guide, I was bogged down trying to install the SAM CLI on my new macOS machine. The issue was that I was running the pre-installed version of Python. A complete write-up of the solution would be a separate article, but if you follow this guide on setting up pyenv and then run pyenv install 3.6.1 && pyenv global 3.6.1, such that python --version returns 3.6.1, you should be ready to follow the link below and set up the CLI.
After you install the SAM CLI, go to the folder containing your template.yml file and run sam local start-api to start up your stack.
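If you're starting from scratch rather than from a Stackery-generated stack, a minimal template.yml for this kind of setup looks something like the sketch below. This is an illustrative assumption, not Stackery's actual output; the resource name, handler path and route are all placeholders.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  FormPageFunction:              # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler     # exports.handler in index.js
      Runtime: nodejs8.10
      CodeUri: .
      Events:
        GetForm:
          Type: Api              # sam local start-api serves this route
          Properties:
            Path: /
            Method: get
```

The Events block is what tells sam local start-api which URL to map to your function.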
Your stack will start up in a Docker container, and in the output you will see the URL for your function. Copy and paste it into a browser and you’ll see your same form page served — it’s really that easy.
The tool also watches your files: you can update your index.js file and see the page change as soon as you refresh the local URL.
In this example, our index.js file also has a console.log(message) line that logs the entire input event our Lambda receives. After loading the local URL, go back to your shell and you'll see the SAM CLI has printed that log output:
You can also see, in green, the standard logging every Lambda produces when it's invoked, showing when it starts and when it ends, while the "report" line shows how much time was billed. I wouldn't put too much stock in the local version's estimate of that time, but for these early tests we shouldn't ever exceed the 100ms minimum billing increment.
The SAM CLI does have a few limitations: it can't run everything a CloudFormation SAM template can describe, although it can run all the API endpoints and Lambda functions you'll need. Once you've got the basics down, see the SAM CLI repo documentation for advanced tricks, such as sending mock events from multiple services to your Lambda.
In a time before my career, developers would write working code by hand on paper, then enter hundreds of lines into a compiler all at once. Impressive, to be sure. But we have steadily shortened the "development loop" from writing to testing and fixing, and this faster pace has been a driving force of our technological revolution. With local testing, your dev loop should now be shorter still, and a shorter dev loop can mean hundreds more iterations of your code every week.
To further accelerate your development, consider using Stackery to build stacks and work on them as a team, Rookout to debug running Lambdas dynamically and IOpipe to increase visibility into your production code.
Feature image via Pixabay.
The post Faster and Easier Lambda Deployments with Stackery and SAM CLI appeared first on The New Stack.