Quickly set up an elasticsearch, logstash, kibana environment and import DB data
2018-03-08
Docker
Elasticsearch
> This post shows how to quickly set up an elastic data import & search platform with docker-compose.

# Prerequisites

* Install docker and docker-compose; see [Ubuntu下Docker及Docker-Compose的环境快速搭建](http://www.supperxin.com/Coding/Details/quick-install-docker)

# elasticsearch

1. Make sure the memory-map limit satisfies elasticsearch's requirements.

Check the current value:

```
sysctl vm.max_map_count
```

If the output is less than 262144 (the typical default is 65530), raise it:

```
sysctl -w vm.max_map_count=262144
```

To make the setting permanent, add the following line to /etc/sysctl.conf:

```
vm.max_map_count=262144
```

Reference: [Virtual memory](https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)

2. Create the file docker-compose.elasticsearch.yml:

```
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local

networks:
  esnet:
```

Environment variable notes:

* cluster.name=docker-cluster: names the elasticsearch cluster; the nodes use it to discover each other and form a cluster
* bootstrap.memory_lock=true: locks the process memory so it is never swapped out to disk
* "ES_JAVA_OPTS=-Xms512m -Xmx512m": sets the JVM heap size

Reference: [Install Elasticsearch with Docker](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html)
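Once the stack has been started (see the startup script at the end of this post), it is worth confirming that the two nodes actually formed one cluster. The snippet below is only a minimal sketch, assuming port 9200 is mapped on localhost as in the compose file above:

```bash
#!/usr/bin/env bash
# Quick sanity check: ask elasticsearch for its cluster health.
# Assumes the compose file above exposes port 9200 on localhost.
curl -s "http://localhost:9200/_cluster/health?pretty"

# The output should include "cluster_name" : "docker-cluster" and
# "number_of_nodes" : 2 once elasticsearch2 has joined the cluster.
```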
# Kibana

1. Create the file docker-compose.kibana.yml:

```
version: '2.2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.2
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    links:
      - elasticsearch
    networks:
      - esnet
```

# Logstash

1. Create the file docker-compose.logstash.yml:

```
version: '2.2'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.2
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      CONFIG_RELOAD_AUTOMATIC: 'true'
    links:
      - elasticsearch
    volumes:
      - ~/elastic/logstash/pipeline/:/usr/share/logstash/pipeline/
      - ~/elastic/logstash/drivers/mssql-jdbc-6.2.2.jre8.jar:/usr/share/java/mssql-jdbc-6.2.2.jre8.jar
    networks:
      - esnet
```

Parameter notes:

CONFIG_RELOAD_AUTOMATIC: 'true' makes logstash reload its configuration automatically, which is convenient when debugging the pipeline for the data-import job.

Logstash settings can be supplied as environment variables: take the setting name, upper-case it, and replace the dots with underscores, for example: config.reload.automatic --> CONFIG_RELOAD_AUTOMATIC

Every setting listed here can be passed as an environment variable: https://www.elastic.co/guide/en/logstash/current/logstash-settings-file.html

Reference: [Configuring Logstash for Docker](https://www.elastic.co/guide/en/logstash/current/docker-config.html)

2. Use the jdbc input plugin to pull data from the database into elasticsearch.

Create the pipeline file logstash/pipeline/job.conf:

```
input {
  jdbc {
    jdbc_driver_library => "/usr/share/java/mssql-jdbc-6.2.2.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://xxx:1433;databaseName=xxx;"
    jdbc_user => "xxx"
    jdbc_password => "xxx"
    schedule => "* * * * *"
    jdbc_default_timezone => "Asia/Shanghai"
    statement => "xxx"
    jdbc_fetch_size => 10000
  }
}
output {
  elasticsearch {
    index => "job"
    document_id => "%{id}"
    hosts => ["elasticsearch:9200"]
  }
}
```

The JDBC driver for your database server can be downloaded from the database vendor's website.

Reference: [Jdbc input plugin](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html)

# Startup script

Create the script start.sh:

```bash
docker-compose -f docker-compose.elasticsearch.yml \
  -f docker-compose.kibana.yml \
  -f docker-compose.logstash.yml up -d
```

After making it executable, the whole stack can be started with ./start.sh.
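After the schedule has fired at least once, it is easy to confirm that rows are flowing into the `job` index defined in the pipeline above. The following is only an illustrative sketch, again assuming elasticsearch is reachable on localhost:9200:

```bash
#!/usr/bin/env bash
# How many documents have been imported into the "job" index so far?
curl -s "http://localhost:9200/job/_count?pretty"

# Peek at a few of the imported documents.
curl -s "http://localhost:9200/job/_search?size=3&pretty"
```

Since the Kibana container maps port 5601, the same data can also be browsed at http://localhost:5601 after creating an index pattern for `job`.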
This article is original content by the author. If you repost it, please credit the source:
http://www.supperxin.com