cfn modules
during my time working with cfn, i have found some common patterns that i always end up using. i created some basic modules that can help reduce complexity in templates, and help manage different sections of the infra.
the upstream guide for developing modules is here
the list of modules i have created is the following:
- util: a misc module that provides utilities for bucket emptying on delete
- s3 cdn: manage a cloudfront cdn sourced from an s3 bucket
- pipeline: generic pipeline from github repositories that can be configured easily to deploy to either s3 bucket, eb environment, or both
to use these templates you will have to read the guide and build the modules from them
util module
as of now this only includes a lambda that empties an s3 bucket when the resource is deleted. use this to let cloudformation delete buckets when you delete a stack: deleting a bucket normally fails if it is not empty, and this lambda takes care of emptying it for you. see the usage below
```yaml
Resources:
  Util:
    Type: Org::Infra::Util::MODULE

  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-bucket-${AWS::AccountId}

  BucketPreDeletion:
    Type: Custom::BucketPreDeletion
    Properties:
      ServiceToken: !GetAtt Util.EmptyBucketLambda.Arn
      BucketName: !Ref Bucket
```
just make sure you reference the correct lambda name
s3 cdn module
this module creates a bunch of resources to manage a cloudfront cdn, with a custom alias and a managed ACM certificate
you will need to own a domain, ideally hosted in route 53, to make things easier
once you specify the parameters, certificate generation will begin. if your domain is hosted in route 53, validation should be automatic. otherwise, you will need to create the dns validation records manually.
the module configures a bucket policy that allows cloudfront to access the bucket. this way the bucket does not have to be public, and its contents stay protected.
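for reference, the generated policy looks roughly like this. this is just a sketch: `OriginAccessIdentity` is a hypothetical logical id, and the exact statement depends on how the module wires cloudfront to the bucket.

```yaml
# illustrative only: a bucket policy granting read access to a cloudfront
# origin access identity, so the bucket itself can stay private.
BucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref Bucket
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            # assumed logical id of the module's origin access identity
            CanonicalUser: !GetAtt OriginAccessIdentity.S3CanonicalUserId
          Action: s3:GetObject
          Resource: !Sub arn:aws:s3:::${Bucket}/*
```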
we can expand on the example from the util module to add a cloudfront cdn and have the bucket deletion managed automatically as well. this is how we can use multiple modules together:
```yaml
Resources:
  Util:
    Type: Org::Infra::Util::MODULE

  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-bucket-${AWS::AccountId}

  BucketPreDeletion:
    Type: Custom::BucketPreDeletion
    Properties:
      ServiceToken: !GetAtt Util.EmptyBucketLambda.Arn
      BucketName: !Ref Bucket

  S3CDN:
    Type: Org::Infra::S3CDN::MODULE
    Properties:
      HostedZoneName: example.com
      SiteAlias: www.example.com
      BucketName: !Sub ${AWS::StackName}-bucket-${AWS::AccountId}
```
here we have to be careful: because of a module limitation, we _cannot_ reference the bucket resource directly in the module parameters. to get around this, repeat the bucket name expression directly.
and that's it! you get a working cdn from a bucket. you can also adjust the cdn price class and cache policy; caching is disabled by default
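as a sketch, the extra settings might look like this. `PriceClass` and `CachePolicyId` are assumed parameter names (check the module's schema for the real ones); the policy id shown is AWS's managed `CachingDisabled` policy.

```yaml
  S3CDN:
    Type: Org::Infra::S3CDN::MODULE
    Properties:
      HostedZoneName: example.com
      SiteAlias: www.example.com
      BucketName: !Sub ${AWS::StackName}-bucket-${AWS::AccountId}
      # assumed parameter names, shown for illustration only
      PriceClass: PriceClass_100  # cheapest tier: north america + europe edges
      CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad  # managed CachingDisabled
```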
pipeline module
this module is a little more complex: it creates a managed pipeline, with a dedicated build project and artifacts bucket. the pipeline sources code from a github repository, and can deploy to an s3 bucket, an elastic beanstalk environment, or both
this module _depends_ on the util module, so make sure to include that as well in your template
automatic deployments work by providing either a DeployBucketName, or a DeployApplicationName and DeployEnvironmentName pair. if we provide the bucket name, the pipeline deploys to that bucket. if we provide the application and environment names, the pipeline deploys to that environment. if we provide both, the pipeline deploys to both.
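a sketch of what using the module with both deploy targets might look like. the github source property names (`RepositoryOwner`, `RepositoryName`, `Branch`) are assumptions for illustration; consult the module's schema for the actual parameters.

```yaml
  Pipeline:
    Type: Org::Infra::Pipeline::MODULE
    Properties:
      # source: assumed github parameter names, for illustration only
      RepositoryOwner: my-org
      RepositoryName: my-app
      Branch: main
      # deploy targets: provide one or both
      DeployBucketName: !Sub ${AWS::StackName}-bucket-${AWS::AccountId}
      DeployApplicationName: my-app
      DeployEnvironmentName: my-app-prod
```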
but how does the pipeline know which code to deploy where? the answer: by using _artifacts_
the pipeline uses a buildspec.yml file from the repo as instructions on how to build. the buildspec also specifies which files are bundled into which artifacts once the build is done.
if using a single deploy target, the main artifact will do just fine. here is an example for a hypothetical nodejs app:
```yaml
version: 0.2
phases:
  install:
    commands:
      - cd ${CODEBUILD_SRC_DIR} && yarn
  build:
    commands:
      - cd ${CODEBUILD_SRC_DIR} && yarn build
artifacts:
  base-directory: ${CODEBUILD_SRC_DIR}/dist
  files:
    - "**/*"
```
if you need multiple targets, use secondary artifacts, and _make sure to name them exactly like the pipeline expects_:
```yaml
version: 0.2
phases:
  install:
    commands:
      - cd ${CODEBUILD_SRC_DIR}/frontend && yarn
      - cd ${CODEBUILD_SRC_DIR}/backend && yarn
  build:
    commands:
      - cd ${CODEBUILD_SRC_DIR}/frontend && yarn build
      - cd ${CODEBUILD_SRC_DIR}/backend && yarn build
artifacts:
  base-directory: ${CODEBUILD_SRC_DIR}/
  files:
    - "**/*"
  secondary-artifacts:
    WebsiteArtifact:
      name: WebsiteArtifact
      base-directory: ${CODEBUILD_SRC_DIR}/frontend/dist
      files:
        - "**/*"
    BackendArtifact:
      name: BackendArtifact
      base-directory: ${CODEBUILD_SRC_DIR}/backend
      files:
        - "**/*"
```
the key is that each artifact maps to a specific deploy target. with the base-directory option we control which files end up in each artifact, and therefore what gets deployed to that target