paullee714/Flask-Vue-ELK-Mongo-Docker
In the previous post, we wrote the config files and Dockerfiles needed to dockerize the ELK stack.
In this post, we will build the Flask server that produces the logs the ELK stack will analyze, and add it to docker-compose.
Flask-Vue-ELK-Mongo-Docker
├── ELK
│   ├── elasticsearch
│   ├── kibana
│   └── logstash
├── README.md
├── docker-compose.yml
└── web
    └── back

6 directories, 2 files
web
└── back
    ├── app.py
    ├── back.Dockerfile
    ├── back_lib
    │   ├── __init__.py
    │   └── logger.py
    ├── requirements.txt
    ├── route
    │   ├── __init__.py
    │   └── test_route.py
    ├── util
    │   ├── __init__.py
    │   └── my_log.py
    └── venv
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install flask elasticsearch python-logstash python3-logstash python-dotenv
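The Dockerfile further down installs dependencies from requirements.txt, so freeze what was just installed into that file (a minimal sketch; pin versions as you see fit):

(venv) $ pip freeze > requirements.txt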
from flask import Flask

# route package import
from route.test_route import my_test

app = Flask(__name__)

# register the route blueprint
app.register_blueprint(my_test)

if __name__ == '__main__':
    app.run('0.0.0.0', port=5000, debug=True)
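For a quick local check before moving to Docker, the app can be run straight from the virtualenv. Note that the Logstash handler defined below points at the docker-compose service name `logstash`, so those log lines will not be delivered when running outside the compose network:

(venv) $ python app.py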
import logging
import logstash
'''
#################################################################################
SET LOG FORMAT
#################################################################################
'''
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s' , datefmt='%Y-%m-%d:%H:%M:%S')
'''
#################################################################################
Werkzeug LOGGER SETTING
#################################################################################
'''
werkzeug = logging.getLogger('werkzeug')
# werkzeug.disabled = True
'''
#################################################################################
MAIN LOGGER SETTING
#################################################################################
'''
web_logger_logstash = logging.getLogger('web_logger')
web_logger_logstash.setLevel(logging.DEBUG)
stash = logstash.TCPLogstashHandler('logstash',5001,version=1)
stash.setFormatter(formatter)
web_logger_logstash.addHandler(stash)
# web_logger_logstash.disabled = True
'''
#################################################################################
Stream LOGGER SETTING
#################################################################################
'''
web_logger_stream = logging.getLogger('web_stream')
web_logger_stream.setLevel(logging.DEBUG)
stream = logging.StreamHandler()
stream.setFormatter(formatter)
web_logger_stream.addHandler(stream)
# web_logger_stream.disabled = True
**Because the services will be tied into the same network by docker-compose, the host is set to the service name (`logstash`) rather than a URL.**
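Since python-dotenv is already installed, the host and port could also be read from the environment instead of being hard-coded. A minimal sketch, assuming hypothetical LOGSTASH_HOST / LOGSTASH_PORT variables that are not part of the original code:

import os
import logstash
from dotenv import load_dotenv

load_dotenv()  # pick up values from a .env file if one exists

# fall back to the docker-compose service name and port used above
LOGSTASH_HOST = os.getenv('LOGSTASH_HOST', 'logstash')
LOGSTASH_PORT = int(os.getenv('LOGSTASH_PORT', 5001))

stash = logstash.TCPLogstashHandler(LOGSTASH_HOST, LOGSTASH_PORT, version=1)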
from back_lib.logger import web_logger_stream,web_logger_logstash
def back_logger_info(msg, flag=0):
    """
    flag = 0 -> all
    flag = 1 -> only logstash
    flag = 2 -> only stream
    :param msg:
    :param flag:
    :return:
    """
    if flag == 0:
        web_logger_logstash.info(msg)
        web_logger_stream.info(msg)
    elif flag == 1:
        web_logger_logstash.info(msg)
    elif flag == 2:
        web_logger_stream.info(msg)
    else:
        pass
from flask import Blueprint
# logging lib
from util.my_log import back_logger_info
my_test = Blueprint('test',__name__)
@my_test.route('/', methods=['GET'])
def test_router():
    back_logger_info('hello world api!')
    return "hello world!"
FROM python:3.8
WORKDIR /www
ADD . .
RUN python3 -m pip install -U pip
RUN pip3 install -r requirements.txt
CMD ["python3","app.py"]
EXPOSE 5000
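To build and smoke-test just this image before wiring it into compose (optional, and assuming the commands are run from the project root):

$ docker build -f web/back/back.Dockerfile -t web-back web/back
$ docker run --rm -p 5000:5000 web-back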
Now bundle the Flask server into the docker-compose build alongside the ELK stack, so that ELK and Flask run on a single network.
version: '3.2'

services:

  elasticsearch:
    build:
      context: "${PWD}/ELK/elasticsearch/"
      dockerfile: elastic.Dockerfile
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: "${PWD}/ELK/elasticsearch/config/elasticsearch.yml"
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - "${PWD}/ELK/elasticsearch/data:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: "${PWD}/ELK/logstash/"
      dockerfile: logstash.Dockerfile
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: "${PWD}/ELK/logstash/config/logstash.yml"
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: "${PWD}/ELK/logstash/pipeline"
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5001:5001/tcp"
      - "5001:5001/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: ELK/kibana/
      dockerfile: kibana.Dockerfile
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: "${PWD}/ELK/kibana/config/kibana.yml"
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

  web-back:
    build:
      context: web/back/
      dockerfile: back.Dockerfile
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    depends_on:
      - kibana

networks:
  elk:
    driver: bridge
$ docker-compose up --build
# or
$ docker-compose up --build -d
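Once everything is up, a quick way to confirm that logs flow end to end is to hit the Flask endpoint and then list the indices in Elasticsearch (the index name depends on your Logstash pipeline output; the elastic/changeme credentials match ELASTIC_PASSWORD above and are only needed if security is enabled in elasticsearch.yml):

$ curl http://localhost:5000/
$ curl -u elastic:changeme http://localhost:9200/_cat/indices?v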