ELK 6.5.4 + Filebeat + Kafka: a detailed walkthrough for building a real-time log analysis platform

1. Introduction to the ELK platform

While searching for ELK material I came across a good article, so here is a short excerpt: logs mainly include system logs, application logs, and security logs. Operations and development staff use logs to learn about server hardware and software, and to find configuration errors and their causes. Analyzing logs regularly also reveals server load, performance, and security issues, so that problems can be corrected promptly.

Logs are usually scattered across different machines. If you manage dozens or hundreds of servers and still read logs by logging in to each machine in turn, the process is tedious and inefficient. The pressing need is centralized log management, for example with open-source syslog, collecting the logs of all servers in one place.

Once logs are centralized, searching and aggregating them becomes the next headache. Linux commands such as grep, awk, and wc can handle simple search and counting, but for more demanding queries, sorting, and statistics across a large fleet of machines they quickly fall short.

The open-source real-time log analysis platform ELK solves all of the problems above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.

Official site: https://www.elastic.co/product
Getting-started guide: https://elkguide.elasticsearch.cn/logstash/get-start/hello-world.html

Install the common base tools:

yum install -y net-tools lrzsz telnet vim dos2unix bash-completion \
  ntpdate sysstat tcpdump traceroute nc wget

Install the JDK:

yum install -y java-1.8.0-openjdk-devel

Download and install Elasticsearch

[root@elk ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@VM_0_9_centos ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[root@elk ~]# yum -y install elasticsearch
[root@elk ~]# mkdir -p /data/es-data
[root@VM_0_9_centos ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: huanqiu                 # cluster name (must be identical on every node of the same cluster)
node.name: elk-node1                  # node name; using the hostname is recommended
path.data: /data/es-data              # data directory
path.logs: /var/log/elasticsearch/    # log directory
bootstrap.memory_lock: false          # whether to lock the heap in RAM so it is never swapped out; in 6.x the setting is memory_lock (the old bootstrap.mlockall name is gone)
network.host: 0.0.0.0                 # listen address
http.port: 9200                       # HTTP port
# extra parameters so the head plugin can reach Elasticsearch
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@elk ~]# chown -R elasticsearch.elasticsearch /data/
[root@elk ~]# systemctl start elasticsearch
[root@elk ~]# systemctl enable elasticsearch
[root@elk ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-01-16 10:38:59 CST; 2s ago
     Docs: http://www.elastic.co
 Main PID: 3758 (java)
   CGroup: /system.slice/elasticsearch.service
           ├─3758 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupa...
           └─3812 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
Jan 16 10:38:59 elk systemd[1]: Started Elasticsearch.
Jan 16 10:38:59 elk systemd[1]: Starting Elasticsearch...

elasticsearch-head plugin installation

head provides a web UI for viewing Elasticsearch cluster state.

Download and install Node.js:

[root@VM_0_9_centos ~]# wget https://nodejs.org/dist/v11.2.0/node-v11.2.0-linux-x64.tar.gz
...
2019-01-16 11:22:36 (6.51 MB/s) - 'node-v11.2.0-linux-x64.tar.gz' saved [18988744/18988744]

[root@VM_0_9_centos ~]# tar -zxf node-v11.2.0-linux-x64.tar.gz -C /data/
[root@VM_0_9_centos ~]# cd /data/
[root@VM_0_9_centos data]# mv node-v11.2.0-linux-x64 node-v11.2.0
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/bin/node /usr/bin/node
[root@VM_0_9_centos ~]# node -v
v11.2.0
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/bin/npm /usr/bin/npm
[root@VM_0_9_centos ~]# npm -v
6.4.1
[root@VM_0_9_centos ~]# npm config set registry https://registry.npm.taobao.org
[root@VM_0_9_centos ~]# vim ~/.npmrc
registry=https://registry.npm.taobao.org
strict-ssl = false
[root@VM_0_9_centos ~]# npm install -g grunt-cli
/data/node-v11.2.0/bin/grunt -> /data/node-v11.2.0/lib/node_modules/grunt-cli/bin/grunt
+ grunt-cli@1.3.2
added 152 packages from 122 contributors in 4.226s
[root@VM_0_9_centos ~]# ln -s /data/node-v11.2.0/lib/node_modules/grunt-cli/bin/grunt /usr/bin/grunt

Download the head package:

[root@VM_0_9_centos ~]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@VM_0_9_centos ~]# cd /data/elasticsearch-head-master/
[root@VM_0_9_centos elasticsearch-head-master]# npm install
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! phantomjs-prebuilt@2.1.16 install: `node install.js`
npm ERR! Exit status 1
npm ERR! Failed at the phantomjs-prebuilt@2.1.16 install script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2019-01-16T03_43_51_976Z-debug.log

The phantomjs-prebuilt install script fails, so install that package with its scripts disabled and then run npm install again:

[root@VM_0_9_centos elasticsearch-head-master]# npm install phantomjs-prebuilt@2.1.16 --ignore-scripts
npm notice created a lockfile as package-lock.json. You should commit this file.
+ phantomjs-prebuilt@2.1.16
added 62 packages from 64 contributors and removed 4 packages in 4.037s
[root@VM_0_9_centos elasticsearch-head-master]# npm install
added 9 packages from 13 contributors in 2.87s

# If downloads are slow or keep failing, use the domestic (Taobao) mirror via cnpm:
[root@elk elasticsearch-head-master]# npm install --ignore-scripts -g cnpm --registry=https://registry.npm.taobao.org

Configure head:

[root@VM_0_9_centos ~]# vim /data/elasticsearch-head-master/Gruntfile.js
# add a hostname entry above the "port: 9100" line
hostname: "0.0.0.0",
[root@VM_0_9_centos ~]# vim /data/elasticsearch-head-master/_site/app.js
# replace localhost with the server's IP address
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://129.211.125.21:9200";
[root@VM_0_9_centos elasticsearch-head-master]# grunt server &

Then browse to http://IP:9100.

Install Kibana

[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
...
2019-01-16 14:22:50 (6.79 MB/s) - 'kibana-6.5.4-linux-x86_64.tar.gz' saved [206631363/206631363]

[root@VM_0_9_centos ~]# tar -zxf kibana-6.5.4-linux-x86_64.tar.gz -C /data/
[root@VM_0_9_centos data]# mv kibana-6.5.4-linux-x86_64 kibana-6.5.4
[root@VM_0_9_centos data]# cd kibana-6.5.4/
[root@VM_0_9_centos kibana-6.5.4]# vim config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://IP:9200"
kibana.index: ".kibana"

Run it. Kibana stays in the foreground, so either keep a dedicated terminal window open for it or use screen:

[root@elk kibana-6.5.4]# yum -y install screen
[root@elk kibana-6.5.4]# screen        # opens a separate virtual terminal
[root@elk kibana-6.5.4]# ./bin/kibana

Press Ctrl+a, then d, to detach from the screen session; Kibana keeps running in the foreground of that detached screen.

[root@elk kibana-6.5.4]# screen -ls
There is a screen on:
        15041.pts-0.elk-node1   (Detached)
1 Socket in /var/run/screen/S-root.

Note: to reconnect to a screen session, list the sessions and use screen -r <screen_pid>. The example below shows two detached sessions:

[root@elk kibana-6.5.4]# screen -ls
There are screens on:
        8736.pts-1.tivf18   (Detached)
        8462.pts-0.tivf18   (Detached)
2 Sockets in /root/.screen.
[root@elk kibana-6.5.4]# screen -r 8736

Partially translating Kibana into Chinese (optional)

Version 6.5.4 does not seem to support this translation; applying it introduces bugs.

[root@VM_0_9_centos ~]# mkdir /data/Sinicization
[root@VM_0_9_centos ~]# cd /data/Sinicization/
[root@VM_0_9_centos Sinicization]# git clone https://github.com/anbai-inc/Kibana_Hanization
Cloning into 'Kibana_Hanization'...
remote: Enumerating objects: 218, done.
remote: Total 218 (delta 0), reused 0 (delta 0), pack-reused 218
Receiving objects: 100% (218/218), 2.03 MiB | 712.00 KiB/s, done.
Resolving deltas: 100% (98/98), done.
[root@VM_0_9_centos Sinicization]# cd Kibana_Hanization/
[root@VM_0_9_centos Kibana_Hanization]# ls
config image main.py README.md requirements.txt
[root@VM_0_9_centos Kibana_Hanization]# python main.py /data/kibana-6.5.4/
File [/data/kibana-6.5.4/src/core_plugins/kibana/ui_setting_defaults.js] translated.
File [/data/kibana-6.5.4/src/core_plugins/kibana/index.js] translated.
... (one such line is printed for every file the script patches, across src/, node_modules/@elastic/eui/, node_modules/x-pack/, and optimize/bundles/) ...
File [/data/kibana-6.5.4/optimize/bundles/infra.bundle.js] translated.
Congratulations, the Kibana translation is complete!
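Because main.py patches the Kibana files in place and this version reportedly misbehaves afterwards, it is worth snapshotting the install directory first so the change can be rolled back. A minimal sketch of that backup step; the /tmp path and file name below are stand-ins for /data/kibana-6.5.4 and its bundles:

```shell
# Stand-in install directory (in practice: /data/kibana-6.5.4)
src=/tmp/kibana-demo
mkdir -p "$src"
echo 'original bundle' > "$src/kibana.bundle.js"

# Full copy before patching; restore later with: rm -rf "$src" && mv "$backup" "$src"
backup="${src}.bak"
rm -rf "$backup"
cp -a "$src" "$backup"
ls "$backup"
```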
Install Logstash

[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
[root@VM_0_9_centos ~]# tar -zxf logstash-6.5.4.tar.gz -C /data/
[root@VM_0_9_centos ~]# vim /data/logstash-6.5.4/config/test.conf
input {
    kafka {
        bootstrap_servers => "10.7.1.112:9092"
        topics => "nethospital_2"
        codec => "json"
    }
}
output {
    if [fields][tag] == "nethospital_2" {
        elasticsearch {
            hosts => ["10.7.1.111:9200"]
            index => "nethospital_2-%{+YYYY-MM-dd}"
            codec => "json"
        }
    }
}
[root@VM_0_9_centos logstash-6.5.4]# ./bin/logstash -f config/test.conf &    # -f selects the config file

Install Kafka

[root@VM_0_9_centos ~]# wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
[root@VM_0_9_centos ~]# gzip -dv kafka_2.11-1.0.0.tgz
[root@VM_0_9_centos ~]# tar -xvf kafka_2.11-1.0.0.tar
[root@VM_0_9_centos ~]# mv kafka_2.11-1.0.0 /data/
[root@VM_0_9_centos ~]# wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz

Adjust the ZooKeeper settings and start it. The scripts below use the ZooKeeper bundled with Kafka, so the standalone zookeeper-3.4.12 download is only needed if you want a separate installation:

[root@VM_0_9_centos ~]# cd /data/kafka_2.11-1.0.0
[root@VM_0_9_centos kafka_2.11-1.0.0]# vim config/zookeeper.properties
dataDir=/tmp/zookeeper/data       # data persistence path
clientPort=2181                   # client port
maxClientCnxns=100                # maximum number of client connections
dataLogDir=/tmp/zookeeper/logs    # transaction log path
tickTime=2000                     # ZooKeeper heartbeat interval, in milliseconds
initLimit=10                      # time allowed for the initial leader election, in ticks

Start ZooKeeper:

[root@VM_0_9_centos kafka_2.11-1.0.0]# ./bin/zookeeper-server-start.sh config/zookeeper.properties
[root@VM_0_9_centos kafka_2.11-1.0.0]# nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &    # or start it in the background

Adjust the Kafka settings and start the broker:

[root@VM_0_9_centos kafka_2.11-1.0.0]# vim config/server.properties
broker.id=0
listeners=PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka
num.partitions=2
num.recovery.threads.per.data.dir=1
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000

[root@VM_0_9_centos kafka_2.11-1.0.0]# nohup ./bin/kafka-server-start.sh config/server.properties &    # start the broker in the background

Test Kafka:

[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-topics.sh --list --zookeeper localhost:2181
test

# start a console producer for testing
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-console-producer.sh --broker-list 10.2.151.203:9092 --topic test
# start a console consumer
[root@VM_0_9_centos kafka_2.11-1.0.0]# bin/kafka-console-consumer.sh --zookeeper 10.2.151.203:2181 --topic test --from-beginning

Install Filebeat

Download and install:

[root@VM_0_9_centos ~]# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
[root@VM_0_9_centos ~]# tar -zxf filebeat-6.2.4-linux-x86_64.tar.gz -C /data/
[root@VM_0_9_centos ~]# mv /data/filebeat-6.2.4-linux-x86_64/ /data/filebeat-6.2.4
[root@VM_0_9_centos ~]# vim /data/filebeat-6.2.4/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /home/test/backup/mysql-*.log
  document_type: mysql
  tail_files: true
  multiline.pattern: ^\[[0-9]{4}-[0-9]{2}-[0-9]{2}
  multiline.negate: true
  multiline.match: after

output.kafka:
  hosts: ["192.168.1.99:9092"]
  topic: guo
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

[root@VM_0_9_centos filebeat-6.2.4]# nohup ./filebeat -e -c filebeat.yml &
[3] 4276

Check the cluster nodes:

[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cat/nodes'
# or:
# curl -XGET 'http://10.2.151.203:9200/_cat/nodes?v'
# curl -XGET 'http://10.2.151.203:9200/_cluster/state/nodes?pretty'
192.168.0.9 16 95 0 0.01 0.02 0.05 mdi * sjx_node-1
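The multiline settings in the Filebeat config above treat any line that does not begin with a [YYYY-MM-DD timestamp as a continuation of the previous event (negate: true, match: after). A quick local sanity check of that pattern with grep; the two sample log lines are invented for illustration:

```shell
# multiline.pattern from filebeat.yml: lines matching it start a new event,
# lines not matching it are appended to the previous one.
pattern='^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'

first='[2019-01-16 10:38:59] [ERROR] example MySQL error line'
cont='    continuation of the same multi-line message'

printf '%s\n' "$first" | grep -Eq "$pattern" && first_starts_event=yes
printf '%s\n' "$cont"  | grep -Eq "$pattern" || cont_is_continuation=yes
echo "$first_starts_event $cont_is_continuation"    # prints: yes yes
```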
Check the cluster master:

[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cluster/state/master_node?pretty'
# or: curl -XGET 'http://10.2.151.203:9200/_cat/master?v'
{
  "cluster_name" : "sjx",
  "compressed_size_in_bytes" : 12577,
  "cluster_uuid" : "Si3hj1UhTIetue5-ydYAbw",
  "master_node" : "CsmmrG8jR8WQIze8RDdcxw"
}

Query the cluster health:

[root@VM_0_9_centos filebeat-6.2.4]# curl -XGET 'http://localhost:9200/_cluster/health?pretty'
# or: curl -XGET 'http://10.2.151.203:9200/_cat/health?v'
{
  "cluster_name" : "sjx",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Install the cerebro plugin

cerebro is the replacement for kopf on Elasticsearch 5 and later; it provides a web UI for managing and monitoring cluster state.

Download and install:

[root@VM_0_9_centos ~]# wget https://github.com/lmenezes/cerebro/releases/download/v0.8.1/cerebro-0.8.1.tgz
[root@elk ~]# tar -xf cerebro-0.8.1.tgz -C /data/
[root@elk ~]# cd /data/cerebro-0.8.1/
[root@elk cerebro-0.8.1]# vim conf/application.conf
hosts = [
  {
    host = "http://IP:9200"
    name = "my-elk"
  },
]

Start and access:

[root@elk cerebro-0.8.1]# ./bin/cerebro            # run in the foreground first to check for startup errors
[root@elk cerebro-0.8.1]# nohup ./bin/cerebro &    # then run it in the background
Browse to http://IP:9000.

Install the bigdesk plugin

bigdesk aggregates and charts Elasticsearch cluster statistics.

# wget https://codeload.github.com/hlstudio/bigdesk/zip/master    # download locally and upload with rz if needed
[root@elk ~]# unzip bigdesk-master.zip
[root@elk ~]# mv bigdesk-master /usr/share/elasticsearch/plugins/
[root@elk ~]# cd /usr/share/elasticsearch/plugins/bigdesk-master/_site/

Use python -m SimpleHTTPServer to serve the page quickly:

[root@elk _site]# python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...

To specify port 8000 explicitly and run it in the background:

[root@elk _site]# nohup python -m SimpleHTTPServer 8000 &
[1] 6184

Browse to http://IP:8000/.

If you install kopf instead (a web UI for managing and monitoring cluster state):

[root@VM_0_9_centos ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
[root@VM_0_9_centos ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@VM_0_9_centos ~]# systemctl restart elasticsearch

Once the plugins are installed on both servers, continue with testing.
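As a final check, the _cluster/health output shown earlier can feed a simple monitoring script: extract the status field and alert when it is not green. A minimal sed-based sketch; the inline sample JSON stands in for a live curl response:

```shell
# Stand-in for: health=$(curl -s 'http://localhost:9200/_cluster/health')
health='{"cluster_name":"sjx","status":"green","timed_out":false,"number_of_nodes":1}'

# Pull the "status" field out of the JSON with sed
status=$(printf '%s' "$health" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p')
echo "cluster status: $status"    # prints: cluster status: green

# Alert when the cluster is not green (i.e. yellow or red)
if [ "$status" != green ]; then
    echo "ALERT: cluster health is $status" >&2
fi
```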