Releases: awslabs/aws-deployment-framework

v1.0.0

11 Jun 11:19
34b290f

ADF is now available via the Serverless Application Repository (search "adf" or "aws-deployment-framework") 🎉 🚀

ADF is now a single-click deployment from the Serverless Application Repository. You can navigate to the SAR from your master account in the us-east-1 region to launch and update ADF. This not only makes deploying ADF for the first time a more streamlined experience, it also means that when new versions of ADF become available you can simply hit Deploy, without needing to pass in any parameters, to receive the latest changes.

When deploying via the SAR for the first time, you will need to enter the required parameters to ensure your setup of ADF goes smoothly. First, enter an AWS Region for the DeploymentAccountMainRegion parameter. This is what you would consider your default AWS Region, and it is where your deployment pipelines will be housed. The parameters under the DeploymentAccount heading relate to your deployment account. If you already have an account that you would like to use as your deployment account, pass its account ID in the DeploymentAccountId field and leave the rest of these fields empty. If you do not yet have an AWS account that you wish to use as the deployment account, the initial install of ADF will create one for you; the same goes for an AWS Organization and even the deployment OU.

In the InitialCommit section, specify the regions to which you may want to deploy resources or applications via ADF. You can always update this later; this content is only used as part of the initial commit that the installation makes against CodeCommit to kickstart your experience with ADF. The same goes for NotificationEndpoint: you can specify an email address or Slack channel (see docs) that will receive notifications for changes to your bootstrap repository and its associated pipeline. The other parameters can either be left at their defaults or investigated further in our documentation.

When updating between versions of ADF (after v1.0.0) you do not need to enter any parameters; just by hitting Deploy, ADF can determine the values previously used and reuse them. Updates are non-intrusive: any change made between the two versions will first be opened as a pull request against your bootstrap repository, which gives you full insight into what has changed and lets you be selective about which updates you wish to consume. Once merged, the bootstrap pipeline runs and opens a PR against the deployment account's pipeline repository with any changes required for the deployment account and its ADF-specific content.

We will not make changes to configuration-type files such as deployment_map.yml, adfconfig.yml, global.yml (with the exception of deployment/global.yml where needed) or regional.yml.

Other changes:

  • Resolves #52 💪
  • Resolves #47 (the ADF version is now included in stack outputs and log outputs)
  • Resolves #54 (the stack that allows the deployment account access to read Organizations has had its roles split to help lock down permissions)
  • Resolves #55 (log messages are prefixed with account IDs to make debugging easier)
  • Resolves #53 (logging levels can now be defined via the SAR and updated as required; more debugging messages will come in future releases)
  • tox and Travis now run against Python 3.7 exclusively 💯
  • Updated samples to use aws/codebuild/ubuntu:2.0 and include a short example of their respective pipeline syntax.
  • Updated docs to reflect v1.0.0 changes (more to come)

Updating from v0.3.3

Updating ADF from v0.3.3 to v1.0.0 is a major change, so we have outlined the steps to execute this process. The main thing to consider is that the SAR adds the prefix serverlessrepo- to stacks it creates; because of this, it is not possible to run the new SAR version of ADF and the currently existing version at the same time, since the resource names would clash. To upgrade to v1.0.0, it is therefore a requirement to delete the old aws-deployment-framework-base stack and redeploy via the SAR. This upgrade only affects the master account and the bootstrapping process. The deployment account and its associated pipelines are not affected and require no changes.

Upgrading from v0.3.3 can be done in the following steps:

  • Ensure you have the latest content from the bootstrap repository locally on your workstation; we will need to re-push this content back to CodeCommit after the deployment of v1.0.0.
  • Remove the stack aws-deployment-framework-base from us-east-1 in the master account. You may need to manually empty the S3 bucket in order for the delete process to complete successfully. Ensure the stack is fully removed before continuing.
  • Navigate to the SAR and find ADF. Ensure you set the parameter CommitInitialBootstrapContent to false; this stops the initial skeleton structure from being committed to CodeCommit. Fill in the DeploymentAccountMainRegion and DeploymentAccountId parameters accordingly.
  • Hit Deploy and wait for the new ADF base stack to complete.
  • Once complete, push your bootstrap-templates repository content back to the new empty repository; this will trigger your normal bootstrap pipeline to run.
  • Once the pipeline has run successfully with your original content, it is time to get the PR containing the v1.0.0 changes for the bootstrap repository. Go back to the SAR, open aws-deployment-framework, change CommitInitialBootstrapContent to true this time, and hit Deploy again. After hitting Deploy, head over to AWS CloudFormation to ensure the stack updates correctly.
  • Once complete, open CodeCommit and you will see the PR containing the v1.0.0 changes waiting to be merged. Once merged, CodePipeline will run to apply the changes. When it completes, switch roles to the deployment account and check CodeCommit there for a PR related to the deployment account changes.
  • Remove the no-longer-required aws-deployment-framework-master-bucket stack from us-east-1 in the master account.

v0.3.3

06 Jun 16:00
cdf08b1

This release provides a bridging upgrade path for users who are already on v0.3.2 to move to v1.0.0.

Description of changes:

  • Temporary change to .pylintrc to avoid duplicate-code errors as we transition to v1.0.0.
  • Restructure folders to suit the upcoming v1.0.0 change, which will be self-service via the Serverless Application Repository (SAR).
  • Adding the ADF version as an output value in stacks and in Parameter Store so it can be tracked, making it easier for users to know which version they are on. Resolves #47. (However, the CodeBuild version log is added in v1.0.0.)
  • Simplify pipeline schedules by removing an unnecessary Lambda in favor of a single CloudWatch Event.
  • Fix CloudFormation spacing and alphabetical ordering.
  • Change the default CodeBuild image to ubuntu:2.0 in order to streamline the use of different base runtimes.

Steps for the upgrade from 0.3.2 to 1.0.0 will be detailed in the v1.0.0 release notes.

v0.3.2

08 May 18:43
566a39b

  • Adding the concept of helper scripts in their own folder within the pipelines-repository/adf-build/helpers directory. These are available for use in any CodeBuild job (due to the S3 sync) and can help perform general build and package tasks. Currently we have a basic shell script (package_transform.sh) that helps package and distribute serverless applications into an S3 bucket within the target region(s) (a requirement for Lambda functions).
  • Adding the ability for any pipeline to take a cron or rate expression. This triggers the pipeline on a schedule and is useful for pipelines that need to run and output specific content (AMI building, data processing, and so on).
  • Adding import functionality to parameter files. Using this new import syntax, you can import exported values from CloudFormation stacks in other AWS accounts and regions directly into parameter files. Example: import:123456789101:eu-west-1:stack_name:export_key.
  • BUG FIX: Adding pagination to list_pipelines calls to ensure that removing large numbers of pipelines at once is seamless. Previously this did not paginate correctly.
  • Updating the default src/bootstrap_repository/global.yml to allow kms:CreateKey access to offer a more predictable default (removing datapipeline:*).
  • Updating samples (buildspec.yml) to be less verbose with pip commands.
  • Updating the CodeBuildRole to be able to assume 'arn:aws:iam::*:role/adf-cloudformation-deployment-role' on target accounts within the Organization. This is required to allow the CodeBuild step to retrieve CloudFormation stack outputs from target accounts when using the new "import" parameter injection.
  • Removing the PYTHONPATH export from each buildspec.yml in favor of moving it into CodeBuild environment variables. (Once pipelines have been updated, you can remove it from buildspec.yml.)
  • Update docs to include syntax examples for targets.
  • Updating generate_param.py syntax

v0.3.1

01 May 15:28
66190a6

  • Allowing custom step names in the deployment_map.yml targets section (examples in #46)
  • Allowing CloudFormation action types in pipelines (examples in #46)
  • Updating the sample guide for clarity and examples
  • Giving a definition and example of the shorthand targets syntax versus the path, regions, name style.
  • Updating J2 templates to support the new step-name structure
  • Updating tests to support the new step-naming structure
  • Updating SCP logging to output the OU path name, not the OU ID
  • Cleaning up leftover imports/exports logic in J2 templates (it was not used)
  • Slight refactoring to use getters/setters where possible (more to come on this refactoring)
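As a sketch of the two target styles mentioned above (the account ID, OU paths, regions, and step name below are hypothetical; see #46 and the docs for the exact syntax), a deployment_map.yml entry might look like:

```yaml
pipelines:
  - name: sample-vpc
    targets:
      # shorthand style: a bare account ID or OU path
      - 111111111111
      - /banking/testing
      # expanded style: explicit path, regions and custom step name
      - path: /banking/production
        regions: ["eu-west-1", "eu-central-1"]
        name: production-deployments
```

The shorthand form is convenient when the defaults suffice; the expanded form is needed when you want to pin regions or name the step.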

v0.3.0

28 Apr 17:47
f770f3f

New Functionality - SCPs

You can now use ADF to manage and automate the process of applying Service Control Policies throughout your Organization. Applying SCPs works in a similar way to bootstrapping base stacks in ADF: place an scp.json file in the corresponding Organizational Unit folder in the bootstrap_repository and it will be automatically applied to that OU; if it is removed, it will be detached from the OU and deleted. To get started, see scp-example.json, and also read the admin guide and the SCP documentation.
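For illustration, a minimal scp.json placed in an OU folder could look like the following (this particular policy is a hypothetical example; scp-example.json in the repository is the canonical starting point):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*"
        }
    ]
}
```

On the next run of the bootstrap pipeline, ADF attaches the policy to the OU whose folder contains the file.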

Description of changes:

  • SCPs functionality - Resolves #25

  • The master account can now be bootstrapped like any other account and can also have Deployment Pipelines target it. Resolves #19

  • PyYAML has been updated to 5.1 stable in requirements.txt

  • Removed relative pathing for tests

  • Adding tests for account_bootstrap.py

  • Updated 'remove_base' in adfconfig.yml to also accept 'remove-base', removing underscores from config options

  • Cleaned up unnecessary passing of boto3 library into class constructors

  • Updated the code-commit-role on the master account to be titled adf-codecommit-role-base (avoiding a naming clash if the default global.yml is applied)

  • Added BucketPolicy to the BootstrapTemplatesBucket to allow the Organization access to streamline master account bootstrap + pipeline capability.

  • CloudFormation no longer attempts to use template_body anywhere throughout ADF; it uses template_url for all base and pipeline stacks, allowing much higher limits on template size. (Previously only pipelines used template_url.)

  • Organized documentation to better suit the admin versus user guide

  • General Code Cleanup (more to come)

  • The role that is assumed by the Deployment Account back to the master account to query Organizations has been renamed from "${CrossAccountAccessRole}-org-access-adf" to match the CrossAccountAccessRole that has been defined in the adfconfig.yml. CloudFormation will update this role on the next run of the UpdateBaseStacks pipeline.

  • The base stack created in the deployment region on the master account adf-global-base-adf-build is now updatable and will update on each run of the UpdateBaseStacks pipeline.

  • BUG FIX: Fixed a bug that caused 'organization_id' to be unavailable if performing fresh install

  • BUG FIX: Notification emails were being sent when they should not have been, due to the account's bootstrapped state being defined as 'None'


v0.2.4

04 Apr 07:55
ed82b31

Notes

  • Updating and improving error handling (adding more refined error types).
  • Updating documentation to remove invalid sections (local pipeline / parameter creation).
  • Updating deployment/global.yml to ensure s3:PutObject* is allowed (required for sam package-type commands).
  • Updating global.yml to include s3:* as a default, and updating the comment to highlight that the CloudFormation Deployment Role Policy within global.yml is a base example for users to alter as required.
  • Ensuring KMS and bucket resources are kept up to date in cloudformation-deployment-role and cloudformation-role on all target accounts each time the bootstrap pipeline executes.
  • Raising a valid exception when an OU that contains no accounts is passed into deployment_map.yml. Resolves #15.
  • Updating the KMS key policy with specific accounts is no longer required; changed to using PrincipalOrgID as we do for S3 and IAM access.
  • Updating the Logger to include the line number in the formatter message.
  • Ensuring all pipeline templates are validated via the CloudFormation ValidateTemplate API call prior to the stack being launched.
  • Updating the 'resolve:' keyword to accept a region specification, allowing you to resolve parameters from Parameter Store in the deployment account from different regions. For example, resolve:/my/path/to/key resolves a parameter from Parameter Store in the deployment account's main region, whereas resolve:eu-west-1:/my/path/to/key resolves a parameter from the deployment account in eu-west-1.
  • Updating the Deployment Map to allow replace_on_failure (Boolean), which creates a stack if the specified stack doesn't exist; if the stack exists and is in a failed state, it deletes and recreates it (good for testing pipelines). More info: REPLACE_ON_FAILURE.
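As a sketch of the resolve syntax described above (the parameter names and file layout are hypothetical; the resolve strings use the paths from the example in these notes), a parameter file might contain:

```json
{
    "Parameters": {
        "MainRegionValue": "resolve:/my/path/to/key",
        "IrelandValue": "resolve:eu-west-1:/my/path/to/key"
    }
}
```

The first entry is looked up in Parameter Store in the deployment account's main region; the second is looked up in the deployment account in eu-west-1.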

v0.2.3

14 Mar 11:56
d90078b

Fixes:

  • Fixed a bug that caused an error to occur when moving an account back to the root of the organization; the UpdatePipelines action was failing due to the deployment account ID being 'None'.

v0.2.2

13 Mar 10:56
154c184

Fixes:

  • Ensuring UpdatePipelines in Step Functions on the Deployment Account returns a valid response for newly bootstrapped accounts before calling Notify.

v0.2.1

12 Mar 21:40
ccd8ac7

Fixes:

  • Automatic execution of the aws-deployment-framework-pipelines pipeline was not triggering on the bootstrapping of new accounts; resolved by amending the Step Functions structure.
  • Adding initial tests for main.py

v0.2.0

12 Mar 06:30
e239dae

New Functionality in 0.2.0

  • Added Slack functionality for notification endpoints. See the admin guide for more details.
  • This release introduces the use of the TemplateURL parameter in CloudFormation create-stack calls when creating pipeline stacks. This allows much larger templates with many more stages to be created, as opposed to using the TemplateBody parameter.

Notes:

  • The github-cloudformation template has had a typo fixed in the BranchName specification for the AWS CodePipeline webhook
  • Regions are now sorted prior to being passed into J2 to avoid pipelines updating because of region-list order changes
  • Added slack.py and associated tests.
  • A stray parameter (adfconfig.yml) in the master account was being incorrectly saved to Parameter Store
  • Updated documentation and fixed incorrect document links
  • Renamed adfconfig.yml and deployment_map.yml to be example- prefixed to avoid merge conflicts; docs are updated to reflect this.
  • NotificationEndpoint now allows type: 'slack' in adfconfig.yml, in which you can specify a Slack channel to receive updates for the deployment account's pipeline-generation pipeline. The deployment_map.yml now also supports Slack integration (see the admin guide).
  • The CloudFormation generate-pipeline step no longer passes in the template body from a file but rather uploads it to S3 and uses TemplateURL to allow larger template sizes.
  • The pipeline prefix parameter has moved into global.yml for the deployment account.
  • Notifications now also occur for bootstrapping accounts, along with pipeline FAILED/SUCCESS status.
  • CloudFormation error handling logic has been cleaned up. (Helps identify errors as described in issue #9.)
  • The state machine in the deployment account has a new notify step which sends an account's bootstrap status to SNS.
  • The state machine in the deployment account also now has a choice condition for when it only needs to update pipelines.
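As a sketch of the Slack notification endpoint mentioned above (the exact key names and channel are assumptions; consult the admin guide for the authoritative schema), an adfconfig.yml snippet might look like:

```yaml
config:
  main-notification-endpoint:
    - type: slack
      target: my-deployments-channel
```

With this in place, the named Slack channel receives updates for the deployment account's pipeline-generation pipeline in place of (or alongside) email notifications.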