Monday, July 8, 2019

A microservice architecture with Docker, Consul, Consul-Template, and a load balancer

Hello!
I'm new to Docker. I'm trying to get the docker-machine + docker-compose + consul + consul-template + registrator stack working. I found an article with an example, but my setup differs in a few ways.
I'm using docker-machine (subscription ID and SSH user redacted):

docker-machine create -d azure \
  --azure-subscription-id <subscription-id> \
  --azure-ssh-user <user> \
  --azure-open-port 80 \
  --azure-subnet-prefix 10.0.2.0/24
так же использую docker-compose, а не fig:
app:
  image: tutum/hello-world:latest
  environment:
    SERVICE_NAME: app
    SERVICE_TAGS: production
    SERVICE_80_NAME: http
    SERVICE_80_CHECK_HTTP: .
    SERVICE_80_CHECK_INTERVAL: 15
  ports:
    - "80"

lb:
  build: ./
  links:
    - consul
  ports:
    - "80:80"

consul:
  command: -server -bootstrap -advertise 10.0.2.4
  image: gliderlabs/consul-server
  ports:
    - "8300:8300"
    - "8400:8400"
    - "8500:8500"
    - "8600:53/udp"

#
# Service Discovery - Registrator
#
registrator:
  command: -ip=10.0.2.4 consul://consul:8500
  image: gliderlabs/registrator:latest
  links:
    - consul
  volumes:
    - "/var/run/docker.sock:/tmp/docker.sock"
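With this setup, what Registrator actually managed to publish can be checked against Consul's HTTP API from the docker-machine VM (e.g. via docker-machine ssh); this is just a quick sketch, and the service name http is the one implied by SERVICE_80_NAME above:

# list every service Consul currently knows about
curl http://10.0.2.4:8500/v1/catalog/services
# the app should appear here once registration succeeds
curl http://10.0.2.4:8500/v1/catalog/service/http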
The problem is the following: after the containers start, the application never gets registered, so when port 80 is hit, nginx returns a 502 error. The console output at that point:
docker-compose up
Starting dockerloadbalancer_app_1 ...
Starting dockerloadbalancer_app_1
Starting dockerloadbalancer_consul_1 ...
Starting dockerloadbalancer_consul_1 ... done
Starting dockerloadbalancer_lb_1 ...
Starting dockerloadbalancer_registrator_1 ...
Starting dockerloadbalancer_lb_1 ... done
... done
Starting dockerloadbalancer_registrator_1
Attaching to dockerloadbalancer_app_1, dockerloadbalancer_consul_1, dockerloadbalancer_registrator_1, dockerloadbalancer_lb_1
consul_1       | ==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
consul_1       | ==> Starting Consul agent...
consul_1       | ==> Starting Consul agent RPC...
registrator_1  | 2017/08/30 05:09:13 Starting registrator v7 ...
registrator_1  | 2017/08/30 05:09:13 Forcing host IP to 10.0.2.4
registrator_1  | 2017/08/30 05:09:13 Using consul adapter: consul://consul:8500
registrator_1  | 2017/08/30 05:09:13 Connecting to backend (0/0)
lb_1           | nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
consul_1       | ==> Consul agent running!
consul_1       |          Node name: '66269e63b117'
consul_1       |         Datacenter: 'dc1'
consul_1       |             Server: true (bootstrap: true)
consul_1       |        Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
consul_1       |       Cluster Addr: 10.0.2.4 (LAN: 8301, WAN: 8302)
consul_1       |     Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1       |              Atlas:
consul_1       |
consul_1       | ==> Log data will now stream in as it occurs:
consul_1       |
consul_1       |     2017/08/30 05:09:09 [INFO] raft: Node at 10.0.2.4:8300 [Follower] entering Follower state
consul_1       |     2017/08/30 05:09:09 [INFO] serf: EventMemberJoin: 66269e63b117 10.0.2.4
consul_1       |     2017/08/30 05:09:09 [INFO] consul: adding LAN server 66269e63b117 (Addr: 10.0.2.4:8300) (DC: dc1)
consul_1       |     2017/08/30 05:09:09 [INFO] serf: EventMemberJoin: 66269e63b117.dc1 10.0.2.4
consul_1       |     2017/08/30 05:09:09 [INFO] consul: adding WAN server 66269e63b117.dc1 (Addr: 10.0.2.4:8300) (DC: dc1)
consul_1       |     2017/08/30 05:09:09 [ERR] agent: failed to sync remote state: No cluster leader
consul_1       |     2017/08/30 05:09:10 [WARN] raft: Heartbeat timeout reached, starting election
consul_1       |     2017/08/30 05:09:10 [INFO] raft: Node at 10.0.2.4:8300 [Candidate] entering Candidate state
consul_1       |     2017/08/30 05:09:10 [INFO] raft: Election won. Tally: 1
consul_1       |     2017/08/30 05:09:10 [INFO] raft: Node at 10.0.2.4:8300 [Leader] entering Leader state
consul_1       |     2017/08/30 05:09:10 [INFO] consul: cluster leadership acquired
consul_1       |     2017/08/30 05:09:10 [INFO] consul: New leader elected: 66269e63b117
consul_1       |     2017/08/30 05:09:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced node info
consul_1       |     2017/08/30 05:09:13 [WARN] Service name "dockerloadbalancer_lb-80" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced service '54a66f0e387b:dockerloadbalancer_lb_1:80'
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced service '54a66f0e387b:dockerloadbalancer_consul_1:8400'
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced service '54a66f0e387b:dockerloadbalancer_consul_1:8500'
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced service '54a66f0e387b:dockerloadbalancer_consul_1:53:udp'
registrator_1  | 2017/08/30 05:09:13 consul: current leader 10.0.2.4:8300
registrator_1  | 2017/08/30 05:09:13 Listening for Docker events ...
registrator_1  | 2017/08/30 05:09:13 Syncing services on 4 containers
registrator_1  | 2017/08/30 05:09:13 added: 07b7de523f24 54a66f0e387b:dockerloadbalancer_lb_1:80
registrator_1  | 2017/08/30 05:09:13 ignored: 07b7de523f24 port 443 not published on host
registrator_1  | 2017/08/30 05:09:13 ignored: 54a66f0e387b no published ports
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8302 not published on host
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8600 not published on host
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8600 not published on host
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8301 not published on host
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8302 not published on host
registrator_1  | 2017/08/30 05:09:13 added: 66269e63b117 54a66f0e387b:dockerloadbalancer_consul_1:8400
registrator_1  | 2017/08/30 05:09:13 added: 66269e63b117 54a66f0e387b:dockerloadbalancer_consul_1:8500
registrator_1  | 2017/08/30 05:09:13 added: 66269e63b117 54a66f0e387b:dockerloadbalancer_consul_1:53:udp
registrator_1  | 2017/08/30 05:09:13 added: 66269e63b117 54a66f0e387b:dockerloadbalancer_consul_1:8300
registrator_1  | 2017/08/30 05:09:13 ignored: 66269e63b117 port 8301 not published on host
registrator_1  | 2017/08/30 05:09:13 register failed: &{54a66f0e387b:dockerloadbalancer_app_1:80 http 32780 10.0.2.4 [production] map[check_http:. check_interval:15] 0 {32780 10.0.2.4 80 172.17.0.2 tcp 819676e02aa1 819676e02aa10e14b1eca7e80d934f77932e66001bb24f237c5cde7035fcf509 0xc2080f4a80}} Unexpected response code: 400 (Request decode failed: time: missing unit in duration 15)
consul_1       |     2017/08/30 05:09:13 [INFO] agent: Synced service '54a66f0e387b:dockerloadbalancer_consul_1:8300'
lb_1           | nginx: configuration file /etc/nginx/nginx.conf test is successful
lb_1           | 2017/08/30 05:09:44 [error] 14#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 217.118.84.156, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:65535/", host: "13.64.158.172"
lb_1           | 217.118.84.156 - - [30/Aug/2017:05:09:44 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36" "-"
lb_1           | 2017/08/30 05:09:45 [error] 14#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 217.118.84.156, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:65535/favicon.ico", host: "13.64.158.172", referrer: "http://13.64.158.172/"
Can you suggest what the problem might be?


Answer

The error is in the article: when no upstream is registered, its template sends nginx to 127.0.0.1:65535, where nothing is listening (that is the 502 in your log). The nginx.conf should look like this:
upstream app {
    least_conn;
    {{range service "production.http"}}
    server {{.Address}}:{{.Port}} max_fails=3 fail_timeout=60 weight=1;
    {{else}}
    server 127.0.0.1:65333; # force a 501
    {{end}}
}

server {
    listen 80 default_server;

    location / {
        proxy_pass http://app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 65333;

    location / {
        types {
            application/json json;
        }
        default_type "application/json";
        return 501 '{ "success": false, "deploy": false, "status": 501, "body": { "message": "No available upstream servers at current route from consul" } }';
    }
}
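This file is a consul-template template rather than plain nginx config: inside the lb container it gets re-rendered, and nginx reloaded, whenever the "production.http" service list in Consul changes. A minimal invocation would look roughly like this (the .ctmpl path is an assumption, and older consul-template releases use -consul instead of -consul-addr):

# render the template and reload nginx on every change in Consul
consul-template \
  -consul-addr consul:8500 \
  -template "/etc/nginx/nginx.conf.ctmpl:/etc/nginx/nginx.conf:nginx -s reload"

When there are no healthy instances of production.http, the {{else}} branch renders the 127.0.0.1:65333 fallback, so the second server block answers with the JSON 501 instead of the bare 502 you were seeing.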
I've posted a fully working example here.
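One more thing worth noting: the Registrator log above also shows the app registration itself being rejected with "Request decode failed: time: missing unit in duration 15". Judging by that message, the health-check interval needs an explicit time unit before Consul will accept the service at all, i.e. something like:

app:
  image: tutum/hello-world:latest
  environment:
    SERVICE_NAME: app
    SERVICE_TAGS: production
    SERVICE_80_NAME: http
    SERVICE_80_CHECK_HTTP: .
    SERVICE_80_CHECK_INTERVAL: 15s   # "15" alone is rejected as a duration
  ports:
    - "80"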
