When a cluster is first created, each node is automatically assigned a node.role of either manager or worker.
In addition, labels can be attached to each node and used when deploying containers (e.g. zone=busan).
For example, both kinds of constraints (role and label) can be combined, such as node.role==worker together with node.labels.zone==busan, as sketched below.
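A sketch of how this could look on this lab's nodes (the zone=busan label and the service name web are illustrative):
docker node update --label-add zone=busan worker1   # attach a custom label to a node (run on a manager)
docker service create --name web \
  --constraint node.role==worker \
  --constraint node.labels.zone==busan \
  -p 80:80 nginx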
AWS and GCP offer managed container clustering services, EKS and GKE. They do not provide a separate manager VM for you to run; instead, Docker is installed on a general management VM (a console VM), and the worker nodes are provided as a fully managed service.
Swarm services come in two modes: replicated and global.
| Node | CPU | RAM | NIC (VMnet10) |
|---|---|---|---|
| manager (private-registry) | 2 | 4 | 211.183.3.100 |
| worker1 | 2 | 2 | 211.183.3.101 |
| worker2 | 2 | 2 | 211.183.3.102 |
| worker3 | 2 | 2 | 211.183.3.103 |
Right-click the manager VM > Manage > Clone > Next > Next > Full clone to create worker1, 2, and 3.
rapa@manager:/root$ sudo vi /etc/netplan/01-network-manager-all.yaml
# Let NetworkManager manage all devices on this system
network:
  ethernets:
    ens32:
      addresses: [211.183.3.100/24]   # manager: 100, worker1: 101, worker2: 102, worker3: 103
      gateway4: 211.183.3.2
      nameservers:
        addresses: [8.8.8.8, 168.126.63.1]
      dhcp4: no
  version: 2
  # renderer: NetworkManager
rapa@manager:/root$ sudo netplan apply
sudo hostnamectl set-hostname [name]
sudo su
sudo vi /etc/hosts
211.183.3.100 manager
211.183.3.101 worker1
211.183.3.102 worker2
211.183.3.103 worker3
Add the four entries above to /etc/hosts on every node, then verify connectivity:
ping manager -c 3 ; ping worker1 -c 3 ; ping worker2 -c 3 ; ping worker3 -c 3
rapa@manager:~/0824$ docker swarm init --advertise-addr ens32
Swarm initialized: current node (mmq3j418myp0x5pktz3o0k1jt) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0klwpp8pig0mbpcn51656xxi34zswkbpztob9lvxyq6o9fnybs-6dvipms8zzjar4idajry66hyb 211.183.3.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
Specify an interface such as ens32 instead of an IP for --advertise-addr; a DHCP-assigned IP can keep changing (alternatively, the DHCP lease could be set to infinite).
Copy the docker swarm join ... line and run it on worker1, worker2, and worker3.
rapa@worker1:~$ docker swarm join --token SWMTKN-1-0klwpp8pig0mbpcn51656xxi34zswkbpztob9lvxyq6o9fnybs-6dvipms8zzjar4idajry66hyb 211.183.3.100:2377
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
rapa@manager:~/0824$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
mmq3j418myp0x5pktz3o0k1jt * manager Ready Active Leader 20.10.17
ef41qj130k6k3glpy2sjiayd5 worker1 Ready Active 20.10.17
luwd64ouh37h3vagnt5eifxj8 worker2 Ready Active 20.10.17
osj6glwuhgm5psjai379s5mqj worker3 Ready Active 20.10.17
AVAILABILITY
Active : the node can be scheduled new containers (tasks)
Drain : all running containers on the node are shut down (rescheduled elsewhere) and no new containers can be scheduled
Pause : no new containers can be scheduled, but unlike Drain, existing containers are not stopped
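For example, a node's availability can be switched with docker node update (worker1 here is just one of this lab's nodes):
docker node update --availability drain worker1    # evacuate running tasks and stop scheduling to worker1
docker node update --availability pause worker1    # keep existing tasks, but schedule nothing new
docker node update --availability active worker1   # allow scheduling again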
The manager that issued the tokens is the leader manager.
Token used by workers to join:
rapa@manager:~/0824$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0klwpp8pig0mbpcn51656xxi34zswkbpztob9lvxyq6o9fnybs-6dvipms8zzjar4idajry66hyb 211.183.3.100:2377
rapa@manager:~/0824$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0klwpp8pig0mbpcn51656xxi34zswkbpztob9lvxyq6o9fnybs-2icqn40a9ikpyl73e2bqi5se3 211.183.3.100:2377
When the first manager initializes the swarm, it issues both a manager token and a worker token. To add more managers besides the workers, join the swarm from the extra node using the manager token.
manager1(leader), manager2, worker1, worker2, worker3
If there is only one manager and it goes down, the whole cluster can no longer be managed, so a real environment should run at least two managers.
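To check how many managers the cluster currently has, the node list can be filtered by role:
docker node ls --filter role=manager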
If you now want to change worker1 (currently a worker) into a manager:
rapa@manager:~/0824$ docker node inspect manager --format "{{.Spec.Role}}"
manager
rapa@manager:~/0824$ docker node inspect worker1 --format "{{.Spec.Role}}"
worker
rapa@manager:~/0824$ docker node promote worker1
Node worker1 promoted to a manager in the swarm.
rapa@manager:~/0824$ docker node inspect worker1 --format "{{.Spec.Role}}"
manager
rapa@worker1:~/0824$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
mmq3j418myp0x5pktz3o0k1jt * manager Ready Active Leader 20.10.17
ef41qj130k6k3glpy2sjiayd5 worker1 Ready Active Reachable 20.10.17
luwd64ouh37h3vagnt5eifxj8 worker2 Ready Active 20.10.17
osj6glwuhgm5psjai379s5mqj worker3 Ready Active 20.10.17
rapa@manager:~/0824$ docker node demote worker1
Manager worker1 demoted in the swarm.
rapa@manager:~/0824$ docker node inspect worker1 --format "{{.Spec.Role}}"
worker
On the worker, leave the swarm (docker swarm leave); then, on the manager, remove the node (docker node rm [worker name]).
rapa@worker1:~$ docker swarm leave
Node left the swarm.
rapa@manager:~/0824$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
mmq3j418myp0x5pktz3o0k1jt * manager Ready Active Leader 20.10.17
ef41qj130k6k3glpy2sjiayd5 worker1 Down Active 20.10.17
luwd64ouh37h3vagnt5eifxj8 worker2 Ready Active 20.10.17
osj6glwuhgm5psjai379s5mqj worker3 Ready Active 20.10.17
The node is not deleted; it is only marked Down.
rapa@manager:~/0824$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
mmq3j418myp0x5pktz3o0k1jt * manager Ready Active Leader 20.10.17
ef41qj130k6k3glpy2sjiayd5 worker1 Down Active 20.10.17
luwd64ouh37h3vagnt5eifxj8 worker2 Down Active 20.10.17
osj6glwuhgm5psjai379s5mqj worker3 Down Active 20.10.17
rapa@manager:~/0824$ docker node rm worker1
worker1
rapa@manager:~/0824$ docker node rm worker2
worker2
rapa@manager:~/0824$ docker node rm worker3
worker3
Running docker swarm leave on a manager prints a refusal, because a manager should not leave while worker nodes are still registered. It can be forced, though.
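If it ever has to be done anyway, the leave can be forced (this discards the manager's cluster state, so use with care):
docker swarm leave --force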
(For the exercises that follow, issue the worker token again and rejoin worker1, 2, and 3 as before.)
On a single node, deployment happens per container.
With Compose and Swarm, deployment happens per service (a set of containers).
rapa@manager:~/0824$ docker service create --name web --constraint node.role==worker --replicas 3 -p 80:80 nginx
--replicas 3 : how many containers to deploy; the swarm always keeps 3 replicas running, replacing any that fail
Each worker pulls the image for itself.
The constraint deploys only to nodes that are not managers.
rapa@manager:~/0824$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lla5yziqwd8u web replicated 3/3 nginx:latest *:80->80/tcp
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t38vjftdu3o1 web.1 nginx:latest worker1 Running Running 9 minutes ago
fbq9q2pxpklu web.2 nginx:latest worker2 Running Running 9 minutes ago
cbzmtsckptvn web.3 nginx:latest worker3 Running Running 11 minutes ago
The three containers are running, one each on worker1, worker2, and worker3.
If a task appears as _ web.2 ... Rejected (i.e. the container could not be created on that node),
→ resource usage can become unbalanced: several containers end up on one node while other nodes run none.
If the cause is that a node's config.json lacks the credentials for accessing the registry, a fix is sketched below.
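A sketch of that fix, assuming the image sits in a private repository: log in on the manager so its config.json holds the credentials, then hand them to the nodes (web is the service used in this lab):
docker login                                     # stores credentials in ~/.docker/config.json on the manager
docker service update --with-registry-auth web   # resends the stored credentials to the agents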
At this point 211.183.3.101–103 serve nginx correctly, and 211.183.3.100 (the manager) serves nginx as well.
rapa@manager:~/0824$ docker service scale web=1
web scaled to 1
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lla5yziqwd8u web replicated 1/1 nginx:latest *:80->80/tcp
Only one replica is running now.
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t38vjftdu3o1 web.1 nginx:latest worker1 Running Running 20 minutes ago
The service now runs only on worker1.
However, nginx is still reachable from all of 100, 101, 102 and 103, because of the overlay (ingress routing mesh).
rapa@manager:~/0824$ docker service scale web=4
web scaled to 4
overall progress: 4 out of 4 tasks
1/4: running [==================================================>]
2/4: running [==================================================>]
3/4: running [==================================================>]
4/4: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lla5yziqwd8u web replicated 4/4 nginx:latest *:80->80/tcp
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
t38vjftdu3o1 web.1 nginx:latest worker1 Running Running 22 minutes ago
qw27c0mtlv3c web.2 nginx:latest worker2 Running Running 44 seconds ago
o0botdf29tw9 web.3 nginx:latest worker3 Running Running 43 seconds ago
fklpgc4zg2oe web.4 nginx:latest worker3 Running Running 43 seconds ago
web.3 and web.4 are both running on worker3.
rapa@worker3:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8814e84febf5 nginx:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp web.3.o0botdf29tw9ezhcc92ntpigh
810390ebf608 nginx:latest "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp web.4.fklpgc4zg2oeg04npsj4qk85v
web.3 and web.4 run as two containers on the same node, sharing the single published port.
Inspecting these containers shows that the ingress network has been attached to them automatically.
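One way to check this (the exact container name differs on every run):
docker container inspect <web-container-name> --format '{{json .NetworkSettings.Networks}}'
docker network inspect ingress    # the swarm-wide overlay backing the routing mesh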
rapa@manager:~/0824$ docker service rm web
web
docker service create --name web --constraint node.role!=manager --replicas 3 -p 80:80 nginx
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
8vpa8eo0cse0 web.1 nginx:latest worker2 Running Running 13 minutes ago
zqndk7cjfjls web.2 nginx:latest worker3 Running Running 48 seconds ago
u3vz8l8t8oht \_ web.2 nginx:latest worker3 Shutdown Failed 55 seconds ago "task: non-zero exit (137)"
8zwbng412a83 web.3 nginx:latest worker1 Running Running 13 minutes ago
A container was deleted, but to maintain 3 replicas (replica 3) the swarm started a replacement task, which can be seen running in the output.
rapa@manager:~/0824$ docker service create --name web --constraint node.role!=manager --mode global -p 80:80 nginx
image nginx:latest could not be accessed on a registry to record
its digest. Each node will access nginx:latest independently,
possibly leading to different nodes running different
versions of the image.
zldorxuyixcf357npek7ry3tv
overall progress: 3 out of 3 tasks
o9obccagttlf: running [==================================================>]
qk27035jk5cd: running [==================================================>]
hripanq8twel: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
zldorxuyixcf web global 3/3 nginx:latest *:80->80/tcp
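The mode is also recorded in the service spec; a replicated service shows a Replicated block there, a global one shows Global:
docker service inspect web --format '{{json .Spec.Mode}}'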
rapa@manager:~/0824$ mkdir blue green
rapa@manager:~/0824$ ls
blue ctn1.py ctn2.py ctn.py green
rapa@manager:~/0824$ touch blue/Dockerfile green/Dockerfile
blue/Dockerfile :
FROM httpd
ADD index.html /usr/local/apache2/htdocs/index.html
CMD httpd -D FOREGROUND
Create a repository on Docker Hub:
repository name : myweb
tags : blue, green
myweb:blue
myweb:green
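Before pushing the images later, log in to Docker Hub on the manager (dustndus8 is the account used in this walkthrough):
docker login -u dustndus8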
rapa@manager:~/0824$ touch blue/index.html green/index.html
rapa@manager:~/0824$ echo "<h2>BLUE PAGE</h2>" > blue/index.html
rapa@manager:~/0824$ echo "<h2>GREEN PAGE</h2>" > green/index.html
green/Dockerfile (same content as blue's):
FROM httpd
ADD index.html /usr/local/apache2/htdocs/index.html
CMD httpd -D FOREGROUND
rapa@manager:~/0824$ cd blue
rapa@manager:~/0824/blue$ ls
Dockerfile index.html
rapa@manager:~/0824/blue$ docker build -t myweb:blue .
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM httpd
---> f2a976f932ec
Step 2/3 : ADD index.html /usr/local/apache2/htdocs/index.html
---> b281c3ebdf83
Step 3/3 : CMD httpd -D FOREGROUND
---> Running in 5341aa158bf3
Removing intermediate container 5341aa158bf3
---> 0defb8bb2f6e
Successfully built 0defb8bb2f6e
Successfully tagged myweb:blue
rapa@manager:~/0824/blue$ cd ..
rapa@manager:~/0824$ cd green
rapa@manager:~/0824/green$ docker build -t myweb:green .
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM httpd
---> f2a976f932ec
Step 2/3 : ADD index.html /usr/local/apache2/htdocs/index.html
---> ba9bae1aede3
Step 3/3 : CMD httpd -D FOREGROUND
---> Running in 115f4c162d9a
Removing intermediate container 115f4c162d9a
---> 06dd0d9e041d
Successfully built 06dd0d9e041d
Successfully tagged myweb:green
rapa@manager:~/0824$ docker image tag myweb:blue dustndus8/myweb:blue
rapa@manager:~/0824$ docker image tag myweb:green dustndus8/myweb:green
rapa@manager:~/0824$ docker push dustndus8/myweb:blue
The push refers to repository [docker.io/dustndus8/myweb]
6bf5cd1d8560: Pushed
0c2dead5c030: Mounted from library/httpd
54fa52c69e00: Mounted from library/httpd
28a53545632f: Mounted from library/httpd
eea65516ea3b: Mounted from library/httpd
92a4e8a3140f: Mounted from dustndus8/mynginx
blue: digest: sha256:5a15a885ef41744201e438e9ca3e5ba1f365d89c54ce96f57573e43fb2c3c9c6 size: 1573
rapa@manager:~/0824$ docker push dustndus8/myweb:green
The push refers to repository [docker.io/dustndus8/myweb]
f2a2284890e1: Pushed
0c2dead5c030: Layer already exists
54fa52c69e00: Layer already exists
28a53545632f: Layer already exists
eea65516ea3b: Layer already exists
92a4e8a3140f: Layer already exists
green: digest: sha256:0e2632256f1f43bb2387faf6f5c6dff6936cdaa5f0ba2e428f453627492b9206 size: 1573
rapa@manager:~/0824$ docker service create --name web --replicas 3 --with-registry-auth --constraint node.role==worker -p 80:80 dustndus8/myweb:blue
r1hrndbgxyazab645q4g6pil2
overall progress: 3 out of 3 tasks
1/3: running [==================================================>]
2/3: running [==================================================>]
3/3: running [==================================================>]
verify: Service converged
--with-registry-auth : forwards the registry credentials (from the manager's config.json) to the workers
Deploys the dustndus8/myweb:blue image that was pushed to Docker Hub.
Opening 211.183.3.100:80, 101:80, 102:80 or 103:80 shows the BLUE PAGE (refresh the browser if needed).
rapa@manager:~/0824$ docker service scale web=6
web scaled to 6
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
pc6bzua9uj7d web.1 dustndus8/myweb:blue worker2 Running Running 45 minutes ago
pvw7qhltei7u web.2 dustndus8/myweb:blue worker3 Running Running 45 minutes ago
ys5prl7l67qo web.3 dustndus8/myweb:blue worker1 Running Running 45 minutes ago
3xo9oohqybc3 web.4 dustndus8/myweb:blue worker2 Running Running 23 seconds ago
41msdi7gyh4z web.5 dustndus8/myweb:blue worker3 Running Running 22 seconds ago
whqgprhjfqc8 web.6 dustndus8/myweb:blue worker1 Running Running 23 seconds ago
rapa@manager:~/0824$ docker service update --image dustndus8/myweb:green web
image dustndus8/myweb:green could not be accessed on a registry to record
its digest. Each node will access dustndus8/myweb:green independently,
possibly leading to different nodes running different
versions of the image.
web
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
61bv9hk1dfgs web.1 dustndus8/myweb:green worker2 Running Running 4 minutes ago
pc6bzua9uj7d \_ web.1 dustndus8/myweb:blue worker2 Shutdown Shutdown 4 minutes ago
mewxvqpqx55k web.2 dustndus8/myweb:green worker3 Running Running 6 minutes ago
pvw7qhltei7u \_ web.2 dustndus8/myweb:blue worker3 Shutdown Shutdown 6 minutes ago
t9mmk6fcqcts web.3 dustndus8/myweb:green worker1 Running Running 5 minutes ago
ys5prl7l67qo \_ web.3 dustndus8/myweb:blue worker1 Shutdown Shutdown 5 minutes ago
dqb3phx7nsg7 web.4 dustndus8/myweb:green worker2 Running Running 5 minutes ago
3xo9oohqybc3 \_ web.4 dustndus8/myweb:blue worker2 Shutdown Shutdown 5 minutes ago
p3m48ceyjdnu web.5 dustndus8/myweb:green worker3 Running Running 5 minutes ago
41msdi7gyh4z \_ web.5 dustndus8/myweb:blue worker3 Shutdown Shutdown 5 minutes ago
hh497i2gv276 web.6 dustndus8/myweb:green worker1 Running Running 5 minutes ago
whqgprhjfqc8 \_ web.6 dustndus8/myweb:blue worker1 Shutdown Shutdown 5 minutes ago
When you run an update, the containers that were serving traffic are shut down and containers created from the new image take over the service. It looks as though the existing containers are updated in place, but they are not: the green containers are created first, and only after they are up are the blue ones shut down.
Opening the page confirms the update (GREEN PAGE).
rapa@manager:~/0824$ docker service rollback web
web
rollback: manually requested rollback
overall progress: rolling back update: 6 out of 6 tasks
1/6: running [> ]
2/6: running [> ]
3/6: running [> ]
4/6: running [> ]
5/6: running [> ]
6/6: running [> ]
verify: Service converged
Checking the page again shows the service has rolled back to BLUE.
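Rollback behaviour can be tuned much like updates; a sketch with standard docker service update flags (the values are only illustrative):
docker service update --rollback-parallelism 2 --rollback-delay 3s web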
rapa@manager:~/0824$ docker service create --name web --replicas 6 --update-delay 3s --update-parallelism 3 --constraint node.role==worker -p 80:80 dustndus8/myweb:blue
image dustndus8/myweb:blue could not be accessed on a registry to record
its digest. Each node will access dustndus8/myweb:blue independently,
possibly leading to different nodes running different
versions of the image.
nf8bqwmpgnliz3fp7u77q7nkc
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
rapa@manager:~/0824$ docker service update --image dustndus8/myweb:green web
image dustndus8/myweb:green could not be accessed on a registry to record
its digest. Each node will access dustndus8/myweb:green independently,
possibly leading to different nodes running different
versions of the image.
web
overall progress: 6 out of 6 tasks
1/6: running [==================================================>]
2/6: running [==================================================>]
3/6: running [==================================================>]
4/6: running [==================================================>]
5/6: running [==================================================>]
6/6: running [==================================================>]
verify: Service converged
With these options, 3 task containers are updated at the same time, then after a 3-second pause the next batch of tasks is updated.
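The update settings stored in the service spec can be checked afterwards:
docker service inspect web --format '{{json .Spec.UpdateConfig}}'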
rapa@manager:~/0824$ docker service rollback web
web
rollback: manually requested rollback
overall progress: rolling back update: 6 out of 6 tasks
1/6: running [> ]
2/6: running [> ]
3/6: running [> ]
4/6: running [> ]
5/6: running [> ]
6/6: running [> ]
verify: Service converged
rapa@manager:~/0824$ docker service ps web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ov3793sb0jqz web.1 dustndus8/myweb:blue worker1 Running Running about a minute ago
rvjh8i3nocu1 \_ web.1 dustndus8/myweb:green worker1 Shutdown Shutdown about a minute ago
u0uupacqnsqw \_ web.1 dustndus8/myweb:blue worker1 Shutdown Shutdown 5 minutes ago
jwb0tm9x1s64 web.2 dustndus8/myweb:blue worker2 Running Running about a minute ago
vg1mkldxj1aw \_ web.2 dustndus8/myweb:green worker2 Shutdown Shutdown about a minute ago
m1uyhctausof \_ web.2 dustndus8/myweb:blue worker2 Shutdown Shutdown 5 minutes ago
4wvpxt72uz4c web.3 dustndus8/myweb:blue worker1 Running Running 2 minutes ago
hazmv2lv9q0l \_ web.3 dustndus8/myweb:green worker1 Shutdown Shutdown 2 minutes ago
go12itdebviv \_ web.3 dustndus8/myweb:blue worker3 Shutdown Shutdown 5 minutes ago
i1y4y8jalj5h web.4 dustndus8/myweb:blue worker3 Running Running 45 seconds ago
ker84cvy3f0n \_ web.4 dustndus8/myweb:green worker3 Shutdown Shutdown 46 seconds ago
ervot1v0pz3e \_ web.4 dustndus8/myweb:blue worker1 Shutdown Shutdown 5 minutes ago
4phpcorv5xzc web.5 dustndus8/myweb:blue worker2 Running Running about a minute ago
9s65sp59lx4u \_ web.5 dustndus8/myweb:green worker2 Shutdown Shutdown about a minute ago
jtgxe9fhza8z \_ web.5 dustndus8/myweb:blue worker2 Shutdown Shutdown 5 minutes ago
on42li09bm4r web.6 dustndus8/myweb:blue worker3 Running Running about a minute ago
m83rnf7rprjs \_ web.6 dustndus8/myweb:green worker3 Shutdown Shutdown about a minute ago
i9xrctrldc5y \_ web.6 dustndus8/myweb:blue worker3 Shutdown Shutdown 5 minutes ago