Redeploy
What happens internally?
Containers:
Deletes the old pod and creates a new pod.
EC2:
It follows a blue-green deployment strategy; as part of that, two LBs and two weighted Route53 records are created, one for each stack (blue and green)
Shifts traffic gradually from the currently active stack to the passive stack
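Conceptually, that gradual traffic shift maps to updating the weights on the blue and green Route53 records. The sketch below is only an illustration of the mechanism under assumed names, not odin's actual implementation: the hosted zone ID, record name, LB DNS names, step size, and wait interval are all hypothetical placeholders.

```python
# Illustrative only: shift weighted-routing traffic from the blue stack's LB
# to the green stack's LB in 20% steps. All identifiers are hypothetical.
import time

import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0000000EXAMPLE"          # hypothetical hosted zone
RECORD_NAME = "auth.dev-0001.example.com."  # hypothetical record name


def set_weights(blue_weight: int, green_weight: int) -> None:
    """Upsert both weighted records so that blue_weight + green_weight = 100."""
    changes = []
    for set_id, weight, lb_dns in (
        ("blue", blue_weight, "blue-lb.example.com"),    # hypothetical LB DNS names
        ("green", green_weight, "green-lb.example.com"),
    ):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": lb_dns}],
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": changes},
    )


# Move 20% of the weight per step from blue (active) to green (passive).
for green_weight in range(20, 101, 20):
    set_weights(100 - green_weight, green_weight)
    time.sleep(60)  # hypothetical wait between steps
```

Each step keeps the two weights summing to 100; a real deployment tool would also gate every step on health checks before continuing.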
Example:
The above command redeploys the my_application component for the auth service in the dev-0001 environment with the new artifact_version 1.2.0.
Options for redeploy
instance_pool
array
Instance type pool for application autoscaling.
--options='{"instance_pool": ["c5.xlarge", "c6a.2xlarge"]}'
lcu.external_lb
integer
The number of Load Balancer Capacity Units (LCU) for external LB
--options='{"lcu": {"external_lb": 6}}'
lcu.internal_lb
integer
The number of Load Balancer Capacity Units (LCU) for internal LB
--options='{"lcu": {"internal_lb": 6}}'
env_variables
json object
This field can be used to export environment variables to the application process
--options='{"env_variables": {"KEY_1": "VALUE_1"}}'
artifact_name
string
Name of application artifact
--options='{"artifact_name": "ARTIFACT_NAME"}'
build_type
string
Base image type on which the application runs
--options='{"build_type": "java"}'
build_version
string
Base image version on which the application runs
--options='{"build_version": "11"}'
add_legacy_tags
boolean
If this field is set to true, legacy tags such as master_service will be applied to the created infra.
--options='{"add_legacy_tags": "true"}'
max_instances
integer
Maximum number of instances an ASG can have
--options='{"max_instances": 10}'
autoscaling
json object
Set autoscaling policy
--options='{"autoscaling": {"enabled":true, "policies": {"cpu_threshold":65}}}'
artifact_version
string
Application artifact version you want to deploy.
--options='{"artifact_version":"1.0.17"}'
imdsv2
string
Set the value of this field to "required" if you want to use IMDSv2 for your component.
--options='{"imdsv2": "required"}'
logs
json object
If this field is enabled, logs will be pushed to Datadog from the mentioned log path.
--options='{ "logs": {"path": "", "enabled": true/false} }'
capacity_type
Enum - "ON_DEMAND", "SPOT"
Based on the provided values, deployment will be done on the spot or on-demand instance types.
--options='{"capacity_type": "ON_DEMAND"}'
num_instances
integer
Number of instances to be created.
--options='{ "num_instances" : 50}'
on_demand_base_capacity
integer
Number of instances you want to keep as on-demand. capacity_type should be ON_DEMAND to use this. For example, if num_instances is set to 50 and on_demand_base_capacity is set to 10, then on successful deployment there will be 10 ON_DEMAND instances and 40 SPOT instances (an illustrative combined call is shown after this options list).
--options='{ "on_demand_base_capacity" : 10}'
config_namespace
string
Config manager branch name/tag to be used in the deployment.
--options='{ "config_namespace" : "feat/abc"}'
passive_downscale
boolean
If this field is set to true, the passive stack will be downscaled once the deployment is successful.
--options='{ "passive_downscale" : true}'
auto_routing
boolean
If this field is set to true, then on successful deployment the traffic will be routed from the old active stack (old version) to the new active stack (new deployment version). If it is set to false, odin will not route traffic to the new deployment, and users will have to shift the traffic manually.
--options='{ "auto_routing" : true}'
canary
boolean
If this field is set to true, then on successful deployment the traffic routing will be done in a canary fashion. Canary works in multiple phases:
Phase 1: 20% of the traffic is shifted to the new deployment, and odin checks for 5XX errors; if no errors are recorded, it moves to the next phase.
Phase 2: 100% of the traffic is shifted to the new deployment, and odin again checks for 5XX errors; if no errors are recorded, the rollout is complete.
--options='{ "canary" : true}'