The perfect pipeline
The optimal delivery pipeline is one that takes the most direct route from keyboard to production while giving engineers full confidence in the validity and integrity of the changes being released.
When building your delivery pipelines, try to prioritize the following properties:
Atomic
Give each service or stack in your application its own isolated delivery pipeline.
Avoid batching changes to a service; deploy each change individually.
Automated
Adopt a GitOps approach to delivery: trigger an atomic delivery pipeline whenever changes to a distinct directory in your codebase are merged to your trunk branch. Automate all steps, including tests and changelog generation and publication. Any manual intervention, such as a final approval, usually indicates a lack of confidence in, or fear of, deployment.
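As a sketch of this path-based triggering, here is an illustrative GitHub Actions workflow for a single service. The service name, directory layout, and deploy commands are assumptions, not prescriptions; the point is that only merges to the trunk branch that touch this service's directory start its pipeline:

```yaml
# .github/workflows/deploy-orders-service.yml (illustrative names)
name: deploy-orders-service

on:
  push:
    branches: [main]           # trunk branch
    paths:
      - "services/orders/**"   # only changes to this service trigger its pipeline

jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test --prefix services/orders
      - name: Deploy
        run: npx serverless deploy --stage production
        working-directory: services/orders
```

Each service gets its own copy of such a workflow with its own `paths` filter, which keeps pipelines atomic: a merge touching one directory deploys one service.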
Observable
Any pipeline issues should trigger alerts to chat applications or ticket systems and be diagnosable.
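One lightweight way to achieve this in GitHub Actions is a final step, guarded by `if: failure()`, that posts to a chat webhook. The `SLACK_WEBHOOK_URL` secret here is a hypothetical example; any incoming-webhook or ticketing endpoint works the same way:

```yaml
      # Final job step: notify a chat channel if any earlier step failed.
      # SLACK_WEBHOOK_URL is a hypothetical repository secret.
      - name: Alert on failure
        if: failure()
        run: |
          curl -X POST -H 'Content-Type: application/json' \
            -d '{"text":"Pipeline failed: ${{ github.workflow }} (run ${{ github.run_id }})"}' \
            "${{ secrets.SLACK_WEBHOOK_URL }}"
```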
Rapid
You should define a maximum acceptable execution time for your pipelines, continuously monitor average durations, and optimize regularly to ensure pipelines are always as efficient as possible.
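A minimal sketch of such monitoring, assuming run records with `created_at`/`updated_at` timestamps as exposed by many CI providers' APIs (GitHub's "list workflow runs" endpoint, for example); the sample data and the ten-minute budget are illustrative:

```python
from datetime import datetime, timedelta

# Sample run records in the shape a CI provider's API might return.
runs = [
    {"created_at": "2024-05-01T10:00:00Z", "updated_at": "2024-05-01T10:06:00Z"},
    {"created_at": "2024-05-01T11:00:00Z", "updated_at": "2024-05-01T11:09:00Z"},
    {"created_at": "2024-05-01T12:00:00Z", "updated_at": "2024-05-01T12:12:00Z"},
]

MAX_ACCEPTABLE = timedelta(minutes=10)  # your agreed pipeline budget

def duration(run):
    """Elapsed time of a single pipeline run."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (datetime.strptime(run["updated_at"], fmt)
            - datetime.strptime(run["created_at"], fmt))

average = sum((duration(r) for r in runs), timedelta()) / len(runs)
print(f"average pipeline duration: {average}")  # 0:09:00 for the sample data
if average > MAX_ACCEPTABLE:
    print("warning: pipelines exceed the agreed budget; time to optimize")
```

Running a check like this on a schedule, and alerting when the average creeps past the budget, turns pipeline speed into a monitored property rather than an occasional cleanup task.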
Using a third-party continuous integration and deployment (CI/CD) platform to run delivery pipelines will typically involve storing AWS access credentials on the platform. If this is necessary, credentials should always be stored securely with encryption and should only be readable by the pipeline while it is running.
Some CI/CD platforms, including GitHub Actions, support the use of OpenID Connect (OIDC). OIDC allows your pipelines to authenticate directly to AWS without the need to store long-lived access credentials outside of your account.
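An illustrative GitHub Actions job using OIDC looks like the following; the role ARN is a placeholder for an IAM role you create in your account with a trust policy for GitHub's OIDC provider (`token.actions.githubusercontent.com`):

```yaml
# Requires an IAM role trusting GitHub's OIDC provider; no stored AWS keys.
permissions:
  id-token: write   # allows the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions-deploy
          aws-region: us-east-1
      - run: aws sts get-caller-identity   # verify the assumed role
```

The credentials issued this way are short-lived and scoped to the role, so there is nothing long-lived to leak from the CI/CD platform.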
Now that you have an understanding of how to implement and deliver your serverless application, let's finally cover one of the less glamorous but equally important aspects of serverless implementation: documentation.