Hugo’s IT journal

AWS Certified DevOps Engineer - Professional

aws devops

DevOps choices #

Deployment #

  • faster boot time - pre-baked ami is faster; opsworks is slower
  • using chef - opsworks
  • need to update config when new node online - opsworks Configure lifecycle event
  • less administrative overhead: eb > cloudformation when both solutions work
  • auto healing: opsworks, codedeploy, eb (because of the asg)
  • rolling: eb, opsworks (not ideal, it is a manual deploy), cloudformation+asg+AutoScalingRollingUpdate policy (see the template sketch after this list), codedeploy
  • rolling = drop traffic to n instances > deploy > allow traffic again
  • in-place = deploy to all instances at once (in parallel)
  • blue/green deployment: eb (cname swap), codedeploy, 2x(cfn+asg+elb)+route53, or 2x(cfn+asg)+elb (weighted target groups)
  • blue/green deployment and want to delay the old asg termination: codedeploy
  • canary deployment: codedeploy (only on lambda / ecs), eb (traffic splitting), api gateway
  • eb’s immutable deployment: create a 2nd asg, deploy the code to the new asg and create the new resources in batches > delete the old asg after the deployment. (kind of a rolling deployment, not a blue/green: the new resources accept traffic during the deployment)
  • a/b test for a long time: (cloudformation+asg+alb)x2+route53 weighted round robin
  • multi app, multi dependencies: use docker: cfn, eb
  • rollback by CloudWatch alarm: codedeploy; eb does not work this way (eb rolls back only on failed health checks)
  • some nodes are not updated after a successful codedeploy deployment: the asg created new nodes during the deployment
  • codedeploy stuck in a lifecycle event hook (ec2, ~1hr): script error
  • codedeploy stuck at the AllowTraffic lifecycle event: elb health check failed
  • opsworks stuck at booting: the agent doesn’t start, or an incorrect iam role in the instance profile
  • all lifecycle events skipped in codedeploy: the agent isn’t running / a security group blocked the communication
  • limit the resources users can launch with cfn: use Service Catalog (like a marketplace); it gives more iam control over the cloudformation templates. With cloudformation alone, you cannot restrict which templates a user uploads.
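
A minimal CloudFormation sketch of the cfn+asg+AutoScalingRollingUpdate option above; the launch template, subnet parameter and batch sizes are assumptions, not from the exam material:

```yaml
# Illustrative only: names, sizes and the referenced launch template/subnets are assumed.
Resources:
  WebAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      DesiredCapacity: "2"
      VPCZoneIdentifier: !Ref SubnetIds            # assumed parameter (List<AWS::EC2::Subnet::Id>)
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate   # assumed launch template resource
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MaxBatchSize: 1                # replace one instance per batch
        MinInstancesInService: 1       # keep serving traffic during the update
        PauseTime: PT5M                # wait up to 5 minutes between batches
        WaitOnResourceSignals: true    # new instances must cfn-signal success before the next batch
```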

Backup & Restore #

  • cross-region efs backup (orchestrated by lambda): (region 1) an ec2 instance with the efs mounted pushes the data to an s3 bucket in region 2 -> (region 2) an ec2 instance with the target efs mounted pulls the data from s3

Cloudformation #

ASG #

Data analysis / Loggings #

  • batch jobs / reporting (like spark): EMR
  • visualise data (like BI): Redshift / Quicksight (cost effective)
  • log searching: Elasticsearch (ES) (renamed to OpenSearch)
  • query S3 data: athena
  • report generation too slow: offload the job to another application (like lambda) via a kinesis stream / scale up the cluster with an asg
  • apache hive ~= aws glue

DB #

  • dynamodb stream ~= kinesis stream (kinesis is the more advanced of the two)
  • throttling on a dynamodb stream: limited to 2 consumers at the moment; fan out with 1 lambda > sns > other lambda(s)
  • dynamodb with many reads: dynamodb accelerator (DAX, a cache like redis)
  • multi-region read & write: dynamodb global table (see the template sketch after this list)
  • multi-region read only: aurora
  • short DR time: read replica -> promote (/ aurora global database)
  • long DR time (few hours): lambda to backup and restore
  • conditionalcheckfailedexception in dynamodb: too many writes on the same record
  • data inconsistency in dynamodb: need to use strongly consistent reads
  • DB security: auth with iam
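
A minimal CloudFormation sketch of the multi-region read & write option (dynamodb global table via AWS::DynamoDB::GlobalTable); the table name, key and regions are assumptions, and the stack is assumed to be deployed in one of the listed regions:

```yaml
# Illustrative only: table name, key schema and regions are assumed.
Resources:
  OrdersGlobalTable:
    Type: AWS::DynamoDB::GlobalTable
    Properties:
      TableName: orders
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: orderId
          AttributeType: S
      KeySchema:
        - AttributeName: orderId
          KeyType: HASH
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES   # required for cross-region replication
      Replicas:                              # one entry per region; reads & writes work in every replica
        - Region: us-east-1
        - Region: eu-west-1
```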

Config #

  • config aggregation: use a stackset to enable config across accounts > assign a dedicated administrator > authorise the config aggregator (like a peering connection: request on one side, accept in the other account) | or use an organisation-wide aggregator (sketch after this list)
  • config organisation rule: can use this to push rules to every account in the organisation
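
A minimal CloudFormation sketch of the organisation-wide aggregator variant; the aggregator name and the referenced role are assumptions (the role must trust config.amazonaws.com and be allowed to read the organisation):

```yaml
# Illustrative only: the name and the referenced role are assumed; run this in the
# account that acts as the Config (delegated) administrator.
Resources:
  OrgConfigAggregator:
    Type: AWS::Config::ConfigurationAggregator
    Properties:
      ConfigurationAggregatorName: org-aggregator
      OrganizationAggregationSource:
        AllAwsRegions: true
        RoleArn: !GetAtt ConfigAggregatorRole.Arn   # assumed role for reading the organisation
```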

Application Discovery Service #

Setting up #

CodeCommit #

approval rule (pull request) #

  • targets: branch
  • approval pool members (iam user)

protect branches #

migrate from git #

  • git clone git_repo_url --mirror to create a bare repo
  • git push codecommit_repo_url --all
  • git push codecommit_repo_url --tags

CodeBuild #

  • find the branch name in codebuild: CODEBUILD_SOURCE_VERSION
  • one codebuild project has one buildspec.yml
  • data encryption: at rest and in transit
  • can use aws managed / custom docker image at build environment
  • custom docker image can be chosen from ecr (same/cross account) or custom registry

buildspec.yml #

  • many phases, but all are inline commands
  • can use parameter-store and secrets-manager
  • can set “finally” block in each phase & on-failure behavior
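
A minimal buildspec.yml sketch showing those pieces together; the parameter names, secret id and commands are assumptions:

```yaml
# Illustrative only: parameter names, secret id and commands are assumed.
version: 0.2

env:
  parameter-store:
    DB_HOST: /myapp/db_host          # value pulled from SSM Parameter Store
  secrets-manager:
    DB_PASSWORD: myapp/db:password   # secret-id:json-key from Secrets Manager

phases:
  install:
    runtime-versions:
      nodejs: 18
  pre_build:
    commands:
      - npm ci
  build:
    on-failure: ABORT                # stop the build when this phase fails
    commands:
      - npm test
      - npm run build
    finally:
      - echo "build phase finished"  # runs even if the commands above fail

artifacts:
  files:
    - '**/*'
  base-directory: dist
```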

CodeDeploy #

appspec.yml #

  • resources + hooks (ecs & lambda)
  • files + permissions + hooks (ec2)
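
A minimal appspec.yml sketch for the ec2 case; paths, the owner and the hook scripts are assumptions:

```yaml
# Illustrative only: paths, owner and script names are assumed.
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp
permissions:
  - object: /var/www/myapp
    owner: www-data
    group: www-data
    mode: 755
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/stop_app.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/install_deps.sh
  ApplicationStart:
    - location: scripts/start_app.sh
  ValidateService:
    - location: scripts/health_check.sh
```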

lifecycle #

notification #

  • targets: sns / aws chatbot
  • event: any activities (push, merge, delete branch…)

trigger #

  • targets: sns / lambda
  • event: push branches or tags
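
A minimal CloudFormation sketch of such a trigger on a repository; the repository, topic and branch names are assumptions:

```yaml
# Illustrative only: repository, topic and branch names are assumed.
Resources:
  AppRepo:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryName: my-app
      Triggers:
        - Name: notify-on-push
          DestinationArn: !Ref PushTopic   # assumed SNS topic (a lambda ARN also works)
          Branches:
            - main
          Events:
            - createReference              # new branches or tags
            - updateReference              # pushes to existing branches or tags
```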

api gateway #

  • can do canary deploy
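
A minimal CloudFormation sketch of a canary on an api gateway stage; the rest api, the two deployments and the 10% split are assumptions:

```yaml
# Illustrative only: the referenced rest api and deployments are assumed.
Resources:
  ProdStage:
    Type: AWS::ApiGateway::Stage
    Properties:
      RestApiId: !Ref MyRestApi              # assumed AWS::ApiGateway::RestApi
      DeploymentId: !Ref StableDeployment    # assumed current deployment
      StageName: prod
      CanarySetting:
        DeploymentId: !Ref CanaryDeployment  # assumed new deployment under test
        PercentTraffic: 10                   # send 10% of requests to the canary
```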

targets #

  • lambda
  • step functions
  • http
  • event
  • sqs
  • kinesis data stream

config #

  • trigger type: configuration changes / periodic
  • scope: aws resource > ec2:securitygroup

notifications #

  • Settings > Delivery method > sns topic - gives you all changes (summary)
  • Settings > Amazon Cloudwatch Events rule - good for watching specific resource config change

Events #

  • can send to another account
  • can send to another account in organisation
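
A minimal CloudFormation sketch of forwarding events to another account's event bus; the pattern, target account id and role are assumptions, and the receiving bus still needs a resource policy that allows this account to put events:

```yaml
# Illustrative only: the pattern, target account id and role are assumed.
Resources:
  ForwardToCentralAccount:
    Type: AWS::Events::Rule
    Properties:
      State: ENABLED
      EventPattern:
        source:
          - aws.codedeploy
      Targets:
        - Id: central-event-bus
          Arn: arn:aws:events:us-east-1:111122223333:event-bus/default  # assumed bus in the other account
          RoleArn: !GetAtt ForwardEventsRole.Arn                        # assumed role allowed to put events on that bus
```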

Cloudwatch #

create alarm from logs #

  • logs > metric filter > metric > alarm > sns
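
A minimal CloudFormation sketch of that chain; the log group, filter pattern, metric names, thresholds and the sns topic are assumptions:

```yaml
# Illustrative only: log group, filter pattern, metric names and thresholds are assumed.
Resources:
  ErrorMetricFilter:
    Type: AWS::Logs::MetricFilter
    Properties:
      LogGroupName: /myapp/application   # assumed existing log group
      FilterPattern: '"ERROR"'           # count log events containing ERROR
      MetricTransformations:
        - MetricNamespace: MyApp
          MetricName: ErrorCount
          MetricValue: "1"
  ErrorAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: MyApp
      MetricName: ErrorCount
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 5
      ComparisonOperator: GreaterThanOrEqualToThreshold
      TreatMissingData: notBreaching
      AlarmActions:
        - !Ref AlertTopic                # assumed SNS topic
```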

send logs to other place for analysis #

  • logs > subscription filter > lambda (cannot cross account, kinesis can) > s3 > athena
  • logs > subscription filter > kinesis firehose > s3 > athena
  • logs > subscription filter > kinesis stream > kinesis firehose > s3 > athena
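
A minimal CloudFormation sketch of the subscription-filter-to-firehose variant; the log group, delivery stream and role are assumed to exist already:

```yaml
# Illustrative only: the log group, delivery stream and role are assumed.
Resources:
  ToFirehoseSubscription:
    Type: AWS::Logs::SubscriptionFilter
    Properties:
      LogGroupName: /myapp/application                 # assumed existing log group
      FilterPattern: ""                                # empty pattern = forward everything
      DestinationArn: !GetAtt LogsDeliveryStream.Arn   # assumed firehose delivery stream writing to s3
      RoleArn: !GetAtt LogsToFirehoseRole.Arn          # assumed role CloudWatch Logs can assume to put records
```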

Kinesis #

  • kinesis stream -> real-time data stream (like an enhanced version of DynamoDB Streams) for analysis / aggregation
  • kinesis firehose -> for delivery to storage (s3), but can do some pre-processing

Supported Writer & Reader #

Security #

  • GuardDuty: threat detection
  • Macie: data level, e.g. discovering sensitive data in s3
  • Security Hub: gives advice / aggregates findings from different aws security products (e.g. GuardDuty, Inspector, Macie)
  • Inspector: cve scanning / hardening checks (cis)

S3 #

Cross account replication #

  • AcctA has the source bucket BuckA
  • AcctB has the destination bucket BuckB
  • iam role in AcctA
    • trust s3 to assume the role
    • give the role permission to get BuckA objects
    • give the role permission to replicate BuckA objects into AcctB
    • give the role permission to encrypt & decrypt BuckA objects
  • bucket policy in AcctB
    • allow the role in AcctA to replicate and put objects into BuckB
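
A minimal CloudFormation sketch of the AcctA side of that role; the bucket names are assumptions, versioning must be enabled on both buckets, and AcctB still needs the bucket policy described above:

```yaml
# Illustrative only: bucket names are assumed; deploy this in AcctA.
Resources:
  ReplicationRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:                 # trust s3 to assume the role
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: s3.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: replicate-bucka-to-buckb
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow                   # read the source bucket and its objects
                Action:
                  - s3:GetReplicationConfiguration
                  - s3:ListBucket
                  - s3:GetObjectVersionForReplication
                  - s3:GetObjectVersionAcl
                  - s3:GetObjectVersionTagging
                Resource:
                  - arn:aws:s3:::bucka
                  - arn:aws:s3:::bucka/*
              - Effect: Allow                   # write replicas into AcctB's bucket
                Action:
                  - s3:ReplicateObject
                  - s3:ReplicateDelete
                  - s3:ReplicateTags
                Resource: arn:aws:s3:::buckb/*
```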

Cloudformation #

Custom Resource #

ECS #

AMI #

use the ECS-optimised AMI - it has the ECS container agent pre-installed

loggings #