Part of my series on Automating Cybersecurity Metrics. AWS Organizations. IAM. Deploy a Static Website. The Code.
In the last article, I changed the way I pass parameters to functions and to my container so that I can reference them by name rather than by position.
Before that, I showed you how to combine individual scripts to deploy multiple resources at once.
But we can do even more to accelerate our deployments. In this article, I want to explore a way to process micro-templates in parallel from within a container.
One of the advantages of CloudFormation: parallel processing
One benefit of consolidating multiple resources into a single stack is that CloudFormation handles some parallel processing for you. However, I haven't found that this processing always handles dependencies accurately (it does most of the time), and I don't think it always recognizes when things can be deployed in parallel. With the approach I use, you can have more control over all of this if you want.
Although the parallel processing CloudFormation provides is useful, with a very large stack it still has to check every resource in that stack for changes. If you separate your resources into individual templates, you have the option of deploying a single resource, or only certain resources, rather than the full stack. Individual templates give you fine-grained control to deploy exactly the resources a particular task requires.
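For example, if each resource lives in its own micro-template, you can update just that one stack without touching anything else. Here is a minimal sketch; the stack name, template file, and parameter below are hypothetical placeholders, not the exact names used in this series:

```bash
#!/bin/bash -e
# Deploy (create or update) a single micro-template on its own.
# "kms-key" and "kms-key.yaml" are placeholder names for illustration.
aws cloudformation deploy \
  --stack-name "kms-key" \
  --template-file "kms-key.yaml" \
  --parameter-overrides "KeyAlias=my-app-key"
```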
Let’s see if we can implement our own form of parallel processing.
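As a rough preview of the idea, here is a minimal sketch of one way to do this in bash, assuming the micro-templates being deployed have no dependencies on each other: each aws cloudformation deploy call runs as a background job, and the script waits for all of them to finish. The stack and template names are placeholders.

```bash
#!/bin/bash -e
# Hypothetical example: deploy independent micro-templates in parallel
# by running each "aws cloudformation deploy" as a background job.

deploy_stack() {
  local stack_name="$1"
  local template_file="$2"
  aws cloudformation deploy \
    --stack-name "$stack_name" \
    --template-file "$template_file" \
    --capabilities CAPABILITY_NAMED_IAM
}

pids=()

# Launch each independent deployment in the background and remember its PID.
deploy_stack "iam-role"  "iam-role.yaml"  & pids+=($!)
deploy_stack "s3-bucket" "s3-bucket.yaml" & pids+=($!)
deploy_stack "vpc"       "vpc.yaml"       & pids+=($!)

# Wait on each job individually; with "set -e" the script stops
# if any deployment exits with a non-zero status.
for pid in "${pids[@]}"; do
  wait "$pid"
done

echo "All parallel deployments completed."
```

The trade-off compared to letting CloudFormation parallelize within one big stack is that you decide explicitly which templates are independent; anything with a dependency (say, a role that must exist before a policy that references it) still has to be sequenced in the script.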