@cdmonkey 2017-08-30T03:18:50Z

ELK-2-LogStash

ELK


LogStash

http://my.oschina.net/abcfy2/blog/372138

Install JDK

Omitted.

Install LogStash

http://soft.dog/2016/01/05/logstash-basic/#section-5

    # Set hosts:
    [root@ls-node1 ~]# cat /etc/hosts
    ...
    172.16.1.23 ls-node1
    ------------------
    [root@ls-node1 tools]# tar zxvf logstash-2.3.2.tar.gz
    [root@ls-node1 tools]# mv logstash-2.3.2 /usr/local/logstash

YUM Install: https://www.elastic.co/guide/en/logstash/current/package-repositories.html
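
As a rough sketch of the YUM route (following the package-repositories page linked above; the repo definition below matches the 2.3 series and may differ for other versions):

    [root@ls-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
    [root@ls-node1 ~]# vim /etc/yum.repos.d/logstash.repo
    [logstash-2.3]
    name=Logstash repository for 2.3.x packages
    baseurl=https://packages.elastic.co/logstash/2.3/centos
    gpgcheck=1
    gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    [root@ls-node1 ~]# yum install -y logstash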

Start the service in the foreground:

    [root@ls-node1 ~]# /usr/local/logstash/bin/logstash -e 'input {stdin{}} output {stdout{}}'
    Settings: Default pipeline workers: 1
    Pipeline main started
    # Whatever is typed on stdin is echoed back on stdout:
    hehe
    2016-06-16T02:21:35.618Z ls-node1 hehe
    ------------------
    # A codec parameter on the output changes the output format:
    [root@ls-node1 ~]# /usr/local/logstash/bin/logstash -e 'input {stdin{}} output {stdout{ codec => rubydebug }}'
    Settings: Default pipeline workers: 1
    Pipeline main started
    hello world
    {
           "message" => "hello world",
          "@version" => "1",
        "@timestamp" => "2016-06-16T02:18:43.019Z",
              "host" => "ls-node1"
    }

The same input can also be written straight into ES:

    [root@ls-node1 ~]# /usr/local/logstash/bin/logstash -e 'input {stdin{}} output {elasticsearch { hosts => "172.16.1.21" }}'
    Settings: Default pipeline workers: 1
    Pipeline main started
    cdmonkey

A new index now appears in the "head" plugin, which confirms that the input above was stored in ES, as shown in the figure.

(Figure: the newly created index shown in the head plugin)
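
Without the head plugin, the same check works against the _cat API (assuming ES is listening on 172.16.1.21:9200):

    [root@ls-node1 ~]# curl 'http://172.16.1.21:9200/_cat/indices?v'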

Running in the background (not needed if Logstash was installed via YUM): whether it runs as root or as an ordinary user, the startup script must be edited accordingly, and the chosen user must have a working JDK installed.

    # Copy the startup script:
    [root@ls-node1 ~]# mv logstash.init /etc/init.d/logstash
    [root@ls-node1 ~]# chmod +x /etc/init.d/logstash
    # Parts of the script must be adapted to the actual installation:
    # Use root:
    [root@ls-node1 ~]# vim /etc/init.d/logstash
    export JAVA_HOME=/root/jdk1.8.0_25    # The JAVA_HOME of root
    PATH="$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin"
    export PATH
    ...
    LS_USER=root
    LS_GROUP=root
    LS_HOME=/usr/local/logstash
    LS_HEAP_SIZE="128m"
    LS_LOG_DIR=/usr/local/logstash
    LS_CONF_DIR=/etc/logstash.conf
    program=/usr/local/logstash/bin/logstash
    # For an ordinary user, create the user and set up its JDK:
    # Add a user:
    [root@ls-node1 ~]# useradd logstash
    # ...and change the same variables accordingly:
    export JAVA_HOME=/home/logstash/jdk1.8.0_25
    ...
    LS_USER=logstash
    LS_GROUP=logstash

Note: whichever user is used, make sure it has read permission on the input sources and write permission on the output targets.
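
For example, when running as the unprivileged logstash user, read access to /var/log/messages can be granted with a POSIX ACL (a sketch, assuming the acl tools are installed; log rotation may recreate the file without the ACL):

    [root@ls-node1 ~]# setfacl -m u:logstash:r /var/log/messages
    [root@ls-node1 ~]# getfacl /var/log/messages | grep logstash
    user:logstash:r--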

Configure Logstash

https://www.elastic.co/guide/en/logstash/current/configuration.html

Configuring Logstash: every pipeline must have at least one input and one output; see the official documentation for the specific settings each of them takes.

    # For example: /etc/logstash.conf
    [root@ls-node1 ~]# vim /etc/logstash.conf
    # The input; here a log file is used as the source:
    input {
        file {
            path => "/var/log/messages"
        }
    }
    # Two outputs at once: the log is archived gzip-compressed and also shipped to ES:
    output {
        file {
            path => "/tmp/%{+YYYY-MM-dd}.messages.gz"
            gzip => true
        }
        elasticsearch {
            hosts => "172.16.1.21"
            index => "system-messages-%{+YYYY.MM.dd}"
        }
    }
    -------------
    # Test the configuration file:
    [root@ls-node1 ~]# /usr/local/logstash/bin/logstash -f /etc/logstash.conf --configtest
    # OR:
    [root@redis-node1 ~]# /etc/init.d/logstash configtest
    Configuration OK

In my view this config check is of limited value; if the configuration is broken, the service simply fails to start anyway.

Once the configuration file is in order, start the LS service with the init script prepared earlier:

    [root@Node-A1 ~]# /etc/init.d/logstash start
    logstash started.
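
A quick way to confirm the process actually came up (the exact status action depends on the init script that was copied in):

    [root@Node-A1 ~]# ps -ef | grep [l]ogstash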

Via Redis

(Figure: data flow with Redis as a broker: Logstash shipper -> Redis -> Logstash indexer -> Elasticsearch)

Configure Redis

Use "Redis 3.x"; older versions produce errors here.

    [root@redis-node1 ~]# yum install redis
    [root@redis-node1 ~]# vim /etc/redis.conf
    # Change the bind address: bind 127.0.0.1 -> bind 172.16.1.24
    # Start the service:
    [root@redis-node1 ~]# /etc/init.d/redis start

Verify the connection:

    [root@ls-node1 ~]# redis-cli -h 172.16.1.24 -p 6379
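
A non-interactive connectivity check is also possible:

    [root@ls-node1 ~]# redis-cli -h 172.16.1.24 -p 6379 ping
    PONG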

Logstash before Redis

    [root@ls-node1 ~]# vim /etc/logstash.conf
    input {
        file {
            path => "/var/log/messages"
        }
    }
    output {
        redis {
            data_type => "list"
            key       => "system-messages"
            host      => "172.16.1.24"
            port      => "6379"
        }
    }

Verify:

    [root@ls-node1 ~]# redis-cli -h 172.16.1.24 -p 6379
    redis 172.16.1.24:6379> select 0
    OK
    redis 172.16.1.24:6379> keys *
    1) "system-messages"

Logstash after Redis

    [root@redis-node1 ~]# vim /etc/logstash.conf
    input {
        redis {
            data_type => "list"
            key       => "system-messages"
            host      => "172.16.1.24"
            port      => "6379"
        }
    }
    output {
        elasticsearch {
            hosts => "172.16.1.21"
            index => "system-messages-redis-%{+YYYY.MM.dd}"
        }
    }

If the volume of logs is very large, a message queue such as RabbitMQ can be used instead, or Kafka.
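
As a rough sketch only, the shipper's output could be pointed at Kafka via the logstash-output-kafka plugin instead of Redis; the broker address below is made up, and option names such as bootstrap_servers and topic_id vary between plugin versions, so check the documentation of the plugin that is actually installed:

    output {
        kafka {
            # Hypothetical Kafka broker address:
            bootstrap_servers => "172.16.1.25:9092"
            topic_id          => "system-messages"
        }
    }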

Nginx-Logstash

Nginx

    http {
        ...
        log_format logstash_json '{"@timestamp":"$time_iso8601",'
                                 '"host": "$server_addr",'
                                 '"client": "$remote_addr",'
                                 '"size": $body_bytes_sent,'
                                 '"responsetime": $request_time,'
                                 '"domain": "$host",'
                                 '"url":"$uri",'
                                 '"referer": "$http_referer",'
                                 '"agent": "$http_user_agent",'
                                 '"status":"$status"}';
        ...
    }
    server {
        ...
        access_log logs/access.json.log logstash_json;
        ...
    }

Inspect the resulting log file:

    [root@ls-node1 logs]# cat access.json.log
    {"@timestamp":"2016-06-20T10:37:02+08:00","host": "172.16.1.23","client": "172.16.1.1","size": 612,"responsetime": 0.000,"domain": "172.16.1.23","url":"/index.html","referer": "-","agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0","status":"200"}
    ...
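
Before pointing Logstash at the file it is worth confirming each line really is valid JSON, for example:

    [root@ls-node1 logs]# tail -1 access.json.log | python -m json.tool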

Logstash before Redis

    [root@ls-node1 ~]# vim /etc/logstash.conf
    input {
        file {
            path  => "/usr/local/nginx/logs/access.json.log"
            codec => "json"
        }
    }
    output {
        redis {
            data_type => "list"
            key       => "nginx-access-log"
            host      => "172.16.1.24"
            port      => "6379"
            db        => "1"
        }
    }
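
To confirm the events are landing in database 1 under the expected key:

    [root@ls-node1 ~]# redis-cli -h 172.16.1.24 -p 6379 -n 1 llen nginx-access-log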

Logstash after Redis

    [root@redis-node1 ~]# vim /etc/logstash.conf
    input {
        redis {
            data_type => "list"
            key       => "nginx-access-log"
            host      => "172.16.1.24"
            port      => "6379"
            db        => "1"
        }
    }
    output {
        elasticsearch {
            hosts => "172.16.1.21"
            index => "nginx-access-log-%{+YYYY.MM.dd}"
        }
    }
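
Once both ends are running, the new index should show up in Elasticsearch, e.g.:

    [root@ls-node1 ~]# curl 'http://172.16.1.21:9200/_cat/indices/nginx-access-log-*?v'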