@chendushuai 2019-11-11T06:32:34.000000Z · 34,490 words · 796 reads

Common Operations for Core Infrastructure Services

redis zk mq


Use chmod +x <filename> to make an .sh file executable.
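As a quick end-to-end check of the permission change (the path here is a throwaway example):

```shell
# Create a throwaway script, add the execute bit, then run it.
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo hello
EOF
chmod +x /tmp/hello.sh
/tmp/hello.sh   # prints: hello
```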

Linux

Change the host IP address

  root@ubuntuRedis:~# vim /etc/netplan/50-cloud-init.yaml
  network:
    ethernets:
      ens33:
        dhcp4: false
        addresses: [192.168.202.130/24]
        gateway4: 192.168.202.1
    version: 2
  root@ubuntuRedis:~# sudo netplan apply

Change the hostname

  root@ubuntuRedis:~# vim /etc/hostname
  # Replace the contents with the desired hostname

List files under a given directory last modified more than a given number of days ago

/usr/bin/find /usr/log/applog/* -mtime +3 -ls

Delete files under a given directory last modified more than a given number of days ago

find /usr/log/applog/ -mtime +5 -name "tnp" -exec rm -rf {} \;
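-mtime +N matches files whose modification time is more than N whole days in the past. A safe dry run in a sandbox (assumes GNU touch for the -d option):

```shell
# Build a sandbox with one stale and one fresh file, then list
# only the files modified more than 5 days ago.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/old.log"   # backdate the mtime (GNU touch)
touch "$dir/new.log"
find "$dir" -type f -mtime +5           # matches old.log only
rm -r "$dir"
```

Running find with -ls (or plain, as above) before adding -exec rm is a cheap way to confirm exactly what would be deleted.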

Redis

Install Redis

Batch-delete keys

  bin/redis-cli -c -h 192.168.132.36 -p 6379 keys "area_info_*" | xargs bin/redis-cli -c -h 192.168.132.36 -p 6379 del
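The pipeline above feeds every key matching the pattern to a second redis-cli as arguments to DEL. The composition can be dry-run with echo standing in for the deleting client (note also that KEYS blocks the server on a large keyspace, so SCAN-based iteration is generally preferred in production; that is a general recommendation, not part of the original notes):

```shell
# Dry run of the keys | xargs composition: echo stands in for the
# second redis-cli, showing exactly what DEL would receive.
printf 'area_info_1\narea_info_2\narea_info_3\n' | xargs echo DEL
# prints: DEL area_info_1 area_info_2 area_info_3
```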

Extract the archive

  [redis@DEV-BS-1908-V429 ~]$ tar -zxvf redis-4.0.11.tar.gz

Edit the configuration file redis.conf

  [redis@DEV-BS-1908-V429 ~]$ vi redis.conf
  # Items to edit:
  bind 192.168.122.36
  protected-mode no
  logfile "logs/redis.log"
  daemonize yes

Compile

  [redis@DEV-BS-1908-V429 ~]$ make

Install into the current directory

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ make install PREFIX=/home/redis/redis-4.0.11
  cd src && make install
  make[1]: Entering directory `/home/redis/redis-4.0.11/src'
  CC Makefile.dep
  make[1]: Leaving directory `/home/redis/redis-4.0.11/src'
  make[1]: Entering directory `/home/redis/redis-4.0.11/src'
  Hint: It's a good idea to run 'make test' ;)
  INSTALL install
  INSTALL install
  INSTALL install
  INSTALL install
  INSTALL install
  make[1]: Leaving directory `/home/redis/redis-4.0.11/src'

Create the logs directory

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ mkdir logs

Start Redis

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ bin/redis-server redis.conf

Check that Redis started

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ ps -ef | grep redis
  root 21497 1781 0 11:11 ? 00:00:00 sshd: redis [priv]
  redis 21499 21497 0 11:11 ? 00:00:00 sshd: redis@pts/0
  redis 21500 21499 0 11:11 pts/0 00:00:00 -bash
  redis 24855 21500 0 11:16 pts/0 00:00:00 bin/redis-server 192.168.122.38:6379
  redis 24911 21500 0 11:20 pts/0 00:00:00 ps -ef
  redis 24912 21500 0 11:20 pts/0 00:00:00 grep redis

Connect to Redis

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ bin/redis-cli -c -h 192.168.165.16 -p 6379

Check the cluster configuration

  [redis@DEV-BS-1908-V429 redis-4.0.11]$ bin/redis-cli -c -h 192.168.165.16 -p 6379 cluster nodes
  # If the result is the following, cluster mode is not enabled
  ERR This instance has cluster support disabled

Sentinel-mode configuration

Edit the configuration file

  [redis@SZ1-BS-1908-V1161 redis-4.0.11]$ vi sentinel.conf
  # Items to edit:
  protected-mode no
  dir "/tmp"
  sentinel monitor mymaster 192.168.122.36 6379 2  # note: always the master's IP address
  sentinel known-slave tnp 172.21.64.46 6379
  sentinel known-slave tnp 172.21.64.47 6379
  sentinel known-sentinel tnp 172.21.64.46 16379 bd11d31008ecbf0b4fb0d042b462ef2299b545fa
  sentinel known-sentinel tnp 172.21.64.47 16379 9bbb1066ee3e64dfd09c80068a12fbc02b5a0052

Start the sentinel

  [redis@SZ1-BS-1908-V1161 redis-4.0.11]$ bin/redis-sentinel sentinel.conf
  26640:X 27 Aug 15:39:03.926 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
  26640:X 27 Aug 15:39:03.926 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=26640, just started
  26640:X 27 Aug 15:39:03.926 # Configuration loaded

Zookeeper

Install ZooKeeper

Extract the archive

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ tar -zxvf zookeeper-3.4.10.tar.gz

Copy the sample configuration

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ cp conf/zoo_sample.cfg conf/zoo.cfg

Edit the configuration file

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ vi conf/zoo.cfg
  # Items to edit:
  # The number of milliseconds of each tick
  tickTime=2000
  # The number of ticks that the initial
  # synchronization phase can take
  initLimit=10
  # The number of ticks that can pass between
  # sending a request and getting an acknowledgement
  syncLimit=5
  # the directory where the snapshot is stored.
  dataDir=/home/zookeeper/zookeeper-3.4.10/data
  # the directory where the log is stored.
  dataLogDir=/home/zookeeper/zookeeper-3.4.10/logs
  # the port at which the clients will connect
  clientPort=2181
  maxClientCnxns=200
  server.1=192.168.122.36:2888:4888
  server.2=192.168.122.37:2888:4888
  server.3=192.168.122.38:2888:4888

Create the data and log directories

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ mkdir data
  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ mkdir logs

Start ZooKeeper

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ bin/zkServer.sh start
  ZooKeeper JMX enabled by default
  Using config: /home/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
  Usage: bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}

With only one machine started, it is not yet a cluster.

Check ZooKeeper's status

  [zookeeper@DEV-BS-1908-V429 zookeeper-3.4.10]$ bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /home/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
  Mode: follower

  [zookeeper@DEV-BS-1908-V429 zookeeper-3.4.10]$ bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /home/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
  Mode: leader

If the result is:

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ bin/zkServer.sh status
  ZooKeeper JMX enabled by default
  Using config: /home/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
  Error contacting service. It is probably not running.

then ZooKeeper failed to start; handle it as follows.

ZooKeeper startup-failure checklist

1. First check that the local cluster configuration is correct

  [zookeeper@DEV-BS-1908-V427 zookeeper-3.4.10]$ vi conf/zoo.cfg
  # Check that the trailing election port (4888) is reachable between nodes;
  # if not, change the ports to ones within the allowed range, e.g. 20880-20950
  server.1=192.168.122.36:2888:4888
  server.2=192.168.122.37:2888:4888
  server.3=192.168.122.38:2888:4888

Check that the JDK is installed

  [zookeeper@DEV-BS-1908-V429 zookeeper-3.4.10]$ java -version
  java version "1.8.0_45"
  Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
  Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

Check whether ZooKeeper's port is already in use

Here we use the default port 2181, so check whether 2181 is occupied (no output means it is free):

  [zookeeper@DEV-BS-1908-V429 zookeeper-3.4.10]$ netstat -ano | grep 2181
  [zookeeper@DEV-BS-1908-V429 zookeeper-3.4.10]$

If it still fails to start after these steps, check the files in ZooKeeper's data directory.

Delete the version-2 directory and the zookeeper_server.pid file, and check that the value in the myid file is correct (repeated ZK starts can alter the value in this file):

  [zookeeper@DEV-BS-1908-V427 data]$ rm -r version-2/
  [zookeeper@DEV-BS-1908-V427 data]$ rm zookeeper_server.pid
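One way to sanity-check myid is to confirm its value has a matching server.N line in zoo.cfg. A sketch run against throwaway copies of the two files (the real paths are data/myid and conf/zoo.cfg):

```shell
# Verify that the id in myid appears as server.<id> in zoo.cfg.
dir=$(mktemp -d)
printf 'server.1=192.168.122.36:2888:4888\nserver.2=192.168.122.37:2888:4888\n' > "$dir/zoo.cfg"
echo 2 > "$dir/myid"
id=$(cat "$dir/myid")
if grep -q "^server\.$id=" "$dir/zoo.cfg"; then
  echo "myid $id OK"
else
  echo "myid $id has no server.$id entry"
fi
rm -r "$dir"
```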

After that, restarting should bring it up normally.

List the registered dubbo services

  [zookeeper@SZ1-BS-1908-V1164 zookeeper-3.4.10]$ bin/zkCli.sh
  Connecting to localhost:2181
  2019-08-20 16:35:48,231 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
  2019-08-20 16:35:48,235 [myid:] - INFO [main:Environment@100] - Client environment:host.name=SZ1-BS-1908-V1164.lianlianpay-dc.com
  .............
  [zk: localhost:2181(CONNECTED) 16] ls /dubbo/com.lianlian.service.lock.LockService/providers
  [dubbo%3A%2F%2F172.21.64.43%3A20880%2Fcom.lianlian.service.lock.LockService%3Fanyhost%3Dtrue%26application%3Dpay_lock%26default.retries%3D0%26default.timeout%3D10000%26dubbo%3D2.5.6%26generic%3Dfalse%26interface%3Dcom.lianlian.service.lock.LockService%26methods%3Dunlock%2CisLocked%2Clock%26organization%3Dlianpay%26owner%3Dtnp-pay-lock%26pid%3D21504%26revision%3D0.0.1%26side%3Dprovider%26timestamp%3D1566291552023, dubbo%3A%2F%2F172.21.64.44%3A20880%2Fcom.lianlian.service.lock.LockService%3Fanyhost%3Dtrue%26application%3Dpay_lock%26default.retries%3D0%26default.timeout%3D10000%26dubbo%3D2.5.6%26generic%3Dfalse%26interface%3Dcom.lianlian.service.lock.LockService%26methods%3Dunlock%2CisLocked%2Clock%26organization%3Dlianpay%26owner%3Dtnp-pay-lock%26pid%3D20864%26revision%3D0.0.1%26side%3Dprovider%26timestamp%3D1566291672150]
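The provider entries are URL-encoded (%3A%2F%2F is ://, %3F is ?, and so on). In bash they can be decoded by rewriting each %XX as \xXX and letting printf %b expand it (a bash-specific trick, shown on a shortened entry):

```shell
# Decode a URL-encoded dubbo provider entry (requires bash).
s='dubbo%3A%2F%2F172.21.64.43%3A20880%2Fcom.lianlian.service.lock.LockService'
printf '%b\n' "${s//\%/\\x}"
# prints: dubbo://172.21.64.43:20880/com.lianlian.service.lock.LockService
```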

RocketMQ

Note: the data folder must be copied.
The data directory paths in the configuration files must be correct; under a different account the directories may differ.

Enter the configuration directory

  [rocketmq@SZ1-BS-1908-V1142 ~]$ cd /home/rocketmq/rocketmq-4.2/conf
  [rocketmq@SZ1-BS-1908-V1142 conf]$

Edit the configuration files

  [rocketmq@SZ1-BS-1908-V1142 conf]$ vi broker-a.properties
  namesrvAddr=172.21.64.27:9876;172.21.64.28:9876
  brokerIP1=172.21.64.28
  listenPort=10911
  brokerIP2=172.21.64.28
  haListenPort=10912

  [rocketmq@SZ1-BS-1908-V1142 conf]$ vi broker-a-s.properties
  namesrvAddr=172.21.64.27:9876;172.21.64.28:9876
  brokerIP1=172.21.64.28
  listenPort=10921
  haMasterAddress=172.21.64.28:10912

Start the name server

  [rocketmq@SZ1-BS-1908-V1143 rocketmq-4.2]$ nohup ./bin/mqnamesrv &

Start mqbroker

  [rocketmq@SZ1-BS-1908-V1143 rocketmq-4.2]$ sh ./bin/mqbroker -c conf/broker-b.properties &
  [8] 11369
  [rocketmq@SZ1-BS-1908-V1143 rocketmq-4.2]$ tail -99f ~/logs/rocketmqlogs/broker.log

Install RocketMQ Console

Edit the configuration file

  [rocketMQ@DEV-BS-1908-V428 ~]$ cd /home/rocketMQ/rocketmq-console/target
  [rocketMQ@DEV-BS-1908-V428 target]$ vi rocketmq-console-ng-1.0.0.jar
  # Edit application.properties inside the jar
  # Items to edit:
  #if this value is empty,use env value rocketmq.config.namesrvAddr NAMESRV_ADDR | now, you can set it in ops page.default localhost:9876
  rocketmq.config.namesrvAddr=192.168.122.36:9876;192.168.122.37:9876
  #rocketmq-console's data path:dashboard/monitor
  rocketmq.config.dataPath=/home/rocketMQ/rocketmq-console-data

Start the console

  [rocketMQ@DEV-BS-1908-V428 target]$ java -jar rocketmq-console-ng-1.0.0.jar &
  [5] 129601
  [4] Killed java -jar rocketmq-console-ng-1.0.0.jar
  [rocketMQ@DEV-BS-1908-V428 target]$ 16:34:23,779 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
  16:34:23,780 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
  16:34:23,780 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [jar:file:/home/rocketMQ/rocketmq-console/target/rocketmq-console-ng-1.0.0.jar!/BOOT-INF/classes!/logback.xml]
  16:34:23,820 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@5b6f7412 - URL [jar:file:/home/rocketMQ/rocketmq-console/target/rocketmq-console-ng-1.0.0.jar!/BOOT-INF/classes!/logback.xml] is not of type file
  16:34:23,880 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
  16:34:23,885 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
  16:34:23,896 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
  16:34:23,905 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
  16:34:23,956 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender]
  16:34:23,959 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [FILE]
  16:34:23,984 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@664223387 - No compression will be used
  16:34:23,986 |-INFO in c.q.l.core.rolling.TimeBasedRollingPolicy@664223387 - Will use the pattern /home/rocketMQ/logs/consolelogs/rocketmq-console-%d{yyyy-MM-dd}.%i.log for the active file
  16:34:23,989 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@312b1dae - The date pattern is 'yyyy-MM-dd' from file name pattern '/home/rocketMQ/logs/consolelogs/rocketmq-console-%d{yyyy-MM-dd}.%i.log'.
  16:34:23,989 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@312b1dae - Roll-over at midnight.
  16:34:23,993 |-INFO in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@312b1dae - Setting initial period to Mon Aug 26 16:31:48 CST 2019
  16:34:23,994 |-WARN in ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP@312b1dae - SizeAndTimeBasedFNATP is deprecated. Use SizeAndTimeBasedRollingPolicy instead
  16:34:23,996 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
  16:34:23,999 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - Active log file name: /home/rocketMQ/logs/consolelogs/rocketmq-console.log
  16:34:23,999 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[FILE] - File property is set to [/home/rocketMQ/logs/consolelogs/rocketmq-console.log]
  16:34:24,000 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
  16:34:24,001 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
  16:34:24,001 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [FILE] to Logger[ROOT]
  16:34:24,001 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
  16:34:24,002 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@7530d0a - Registering current configuration as safe fallback point
  . ____ _ __ _ _
  /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
  ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
  \\/ ___)| |_)| | | | | || (_| | ) ) ) )
  ' |____| .__|_| |_|_| |_\__, | / / / /
  =========|_|==============|___/=/_/_/_/
  :: Spring Boot :: (v1.4.3.RELEASE)
  [2019-08-26 16:34:24.649] INFO Starting App v1.0.0 on DEV-BS-1908-V428 with PID 129601 (/home/rocketMQ/rocketmq-console/target/rocketmq-console-ng-1.0.0.jar started by rocketMQ in /home/rocketMQ/rocketmq-console/target)
  [2019-08-26 16:34:24.653] INFO No active profile set, falling back to default profiles: default
  [2019-08-26 16:34:24.758] INFO Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@23ab930d: startup date [Mon Aug 26 16:34:24 CST 2019]; root of context hierarchy
  [2019-08-26 16:34:24.954] INFO HV000001: Hibernate Validator 5.2.4.Final
  [2019-08-26 16:34:27.480] INFO Tomcat initialized with port(s): 8080 (http)

Hazelcast

Edit the XML configuration file in the bin directory

  [hazelcast@SZ1-BS-1908-V1145 bin]$ cat hazelcast.xml
  # Items to edit:
  <management-center enabled="false">http://172.21.64.30:8080/mancenter</management-center>
  <tcp-ip enabled="true">
    <member-list>
      <member>172.21.64.29</member>
      <member>172.21.64.30</member>
    </member-list>
  </tcp-ip>

Then just start it:

  [hazelcast@SZ1-BS-1908-V1145 hazelcast-3.5.3]$ nohup sh server.sh &
  [1] 22805
  [hazelcast@SZ1-BS-1908-V1145 bin]$ nohup: ignoring input and appending output to `nohup.out'
  [hazelcast@SZ1-BS-1908-V1145 bin]$ tail -199f nohup.out
  JAVA_HOME found at /usr/java/jdk1.8.0_45
  Path to Java : /usr/java/jdk1.8.0_45/bin/java
  ########################################
  # RUN_JAVA=/usr/java/jdk1.8.0_45/bin/java
  # JAVA_OPTS=
  # starting now....
  ########################################
  Aug 20, 2019 11:06:39 AM com.hazelcast.config.XmlConfigLocator
  INFO: Loading 'hazelcast-default.xml' from classpath.
  Aug 20, 2019 11:06:39 AM com.hazelcast.instance.DefaultAddressPicker
  INFO: [LOCAL] [dev] [3.5.3] Prefer IPv4 stack is true.
  Aug 20, 2019 11:06:39 AM com.hazelcast.instance.DefaultAddressPicker
  INFO: [LOCAL] [dev] [3.5.3] Picked Address[172.21.64.30]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
  Aug 20, 2019 11:06:39 AM com.hazelcast.spi.OperationService
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Backpressure is disabled
  Aug 20, 2019 11:06:39 AM com.hazelcast.spi.impl.operationexecutor.classic.ClassicOperationExecutor
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Starting with 2 generic operation threads and 4 partition operation threads.
  Aug 20, 2019 11:06:40 AM com.hazelcast.system
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Hazelcast 3.5.3 (20151011 - 64c663a) starting at Address[172.21.64.30]:5701
  Aug 20, 2019 11:06:40 AM com.hazelcast.system
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
  Aug 20, 2019 11:06:40 AM com.hazelcast.instance.Node
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Creating MulticastJoiner
  Aug 20, 2019 11:06:40 AM com.hazelcast.core.LifecycleService
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Address[172.21.64.30]:5701 is STARTING
  Aug 20, 2019 11:06:45 AM com.hazelcast.cluster.impl.MulticastJoiner
  INFO: [172.21.64.30]:5701 [dev] [3.5.3]
  Members [1] {
  Member [172.21.64.30]:5701 this
  }
  Aug 20, 2019 11:06:45 AM com.hazelcast.core.LifecycleService
  INFO: [172.21.64.30]:5701 [dev] [3.5.3] Address[172.21.64.30]:5701 is STARTED

Note: both servers must be started at the same time, otherwise startup fails. A normal startup looks like:

  [hazelcast@SZ1-BS-1908-V1145 bin]$ ./server.sh
  JAVA_HOME found at /usr/java/jdk1.8.0_45
  Path to Java : /usr/java/jdk1.8.0_45/bin/java
  ########################################
  # RUN_JAVA=/usr/java/jdk1.8.0_45/bin/java
  # JAVA_OPTS=
  # starting now....
  ########################################
  Aug 20, 2019 4:58:31 PM com.hazelcast.config.XmlConfigLocator
  INFO: Loading 'hazelcast.xml' from working directory.
  Aug 20, 2019 4:58:31 PM com.hazelcast.instance.DefaultAddressPicker
  INFO: [LOCAL] [tnp] [3.5.3] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [172.21.64.30, 172.21.64.29]
  Aug 20, 2019 4:58:31 PM com.hazelcast.instance.DefaultAddressPicker
  INFO: [LOCAL] [tnp] [3.5.3] Prefer IPv4 stack is true.
  Aug 20, 2019 4:58:31 PM com.hazelcast.instance.DefaultAddressPicker
  INFO: [LOCAL] [tnp] [3.5.3] Picked Address[172.21.64.30]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
  Aug 20, 2019 4:58:31 PM com.hazelcast.spi.OperationService
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Backpressure is disabled
  Aug 20, 2019 4:58:31 PM com.hazelcast.spi.impl.operationexecutor.classic.ClassicOperationExecutor
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Starting with 2 generic operation threads and 4 partition operation threads.
  Aug 20, 2019 4:58:31 PM com.hazelcast.system
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Hazelcast 3.5.3 (20151011 - 64c663a) starting at Address[172.21.64.30]:5701
  Aug 20, 2019 4:58:31 PM com.hazelcast.system
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
  Aug 20, 2019 4:58:31 PM com.hazelcast.instance.Node
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Creating TcpIpJoiner
  Aug 20, 2019 4:58:31 PM com.hazelcast.core.LifecycleService
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.30]:5701 is STARTING
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Connecting to /172.21.64.29:5703, timeout: 0, bind-any: true
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Connecting to /172.21.64.29:5701, timeout: 0, bind-any: true
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Connecting to /172.21.64.30:5703, timeout: 0, bind-any: true
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Connecting to /172.21.64.30:5702, timeout: 0, bind-any: true
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Connecting to /172.21.64.29:5702, timeout: 0, bind-any: true
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Could not connect to: /172.21.64.30:5703. Reason: SocketException[Connection refused to address /172.21.64.30:5703]
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Could not connect to: /172.21.64.30:5702. Reason: SocketException[Connection refused to address /172.21.64.30:5702]
  Aug 20, 2019 4:58:32 PM com.hazelcast.cluster.impl.TcpIpJoiner
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.30]:5703 is added to the blacklist.
  Aug 20, 2019 4:58:32 PM com.hazelcast.cluster.impl.TcpIpJoiner
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.30]:5702 is added to the blacklist.
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Could not connect to: /172.21.64.29:5702. Reason: SocketException[Connection refused to address /172.21.64.29:5702]
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.SocketConnector
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Could not connect to: /172.21.64.29:5703. Reason: SocketException[Connection refused to address /172.21.64.29:5703]
  Aug 20, 2019 4:58:32 PM com.hazelcast.cluster.impl.TcpIpJoiner
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.29]:5703 is added to the blacklist.
  Aug 20, 2019 4:58:32 PM com.hazelcast.cluster.impl.TcpIpJoiner
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.29]:5702 is added to the blacklist.
  Aug 20, 2019 4:58:32 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Established socket connection between /172.21.64.30:50922
  Aug 20, 2019 4:58:38 PM com.hazelcast.cluster.ClusterService
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3]
  Members [2] {
  Member [172.21.64.29]:5701
  Member [172.21.64.30]:5701 this
  }
  Aug 20, 2019 4:58:41 PM com.hazelcast.core.LifecycleService
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Address[172.21.64.30]:5701 is STARTED

When the pay_lock service is then started, the console prints:

  Aug 20, 2019 5:17:22 PM com.hazelcast.nio.tcp.SocketAcceptor
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Accepting socket connection from /172.21.64.43:37455
  Aug 20, 2019 5:17:22 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Established socket connection between /172.21.64.30:5701
  Aug 20, 2019 5:17:22 PM com.hazelcast.client.impl.client.AuthenticationRequest
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Received auth from Connection [/172.21.64.30:5701 -> /172.21.64.43:37455], endpoint=null, live=true, type=JAVA_CLIENT, successfully authenticated, principal : ClientPrincipal{uuid='a52bcd57-2f62-44dd-996f-eb62a92447a5', ownerUuid='f71d7884-6961-40ae-93d0-daf1a3e9cb99'}, owner connection : true
  Aug 20, 2019 5:17:24 PM com.hazelcast.nio.tcp.SocketAcceptor
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Accepting socket connection from /172.21.64.44:40414
  Aug 20, 2019 5:17:24 PM com.hazelcast.nio.tcp.TcpIpConnectionManager
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Established socket connection between /172.21.64.30:5701
  Aug 20, 2019 5:17:24 PM com.hazelcast.client.impl.client.AuthenticationRequest
  INFO: [172.21.64.30]:5701 [tnp] [3.5.3] Received auth from Connection [/172.21.64.30:5701 -> /172.21.64.44:40414], endpoint=null, live=true, type=JAVA_CLIENT, successfully authenticated, principal : ClientPrincipal{uuid='6f321cca-f60d-40b3-9a3a-5400038acc0c', ownerUuid='f71d7884-6961-40ae-93d0-daf1a3e9cb99'}, owner connection : true

pay_lock

Edit the configuration file

  [dubbo@SZ1-BS-1908-V1158 conf]$ vi dubbo.properties
  # Items to edit:
  dubbo.registry.address=172.21.64.48:2181,172.21.64.49:2181,172.21.64.50:2181
  lockServer.ip1=172.21.64.29
  lockServer.ip2=172.21.64.30

Start the service

  [dubbo@SZ1-BS-1908-V1158 bin]$ ./start.sh
  Starting the pay_lock .....OK!
  PID: 23043
  STDOUT: logs/stdout.log

pay_idgen

View the help file

  [dubbo@SZ1-BS-1908-V1156 bin]$ cat readme.txt
  ---------------------------------------------------------- Script descriptions --------------------------------------------------------
  1.addtype.sh
  Add a new sequence-number definition
  2.getalloffset.sh
  List the current offset of every sequence
  3.getoffset.sh
  List the current offset of a given sequence
  4.initzk.sh
  Manually set the accounting date to the current server date; run this to initialize when the sequence system is redeployed into a brand-new environment (new zookeeper cluster)
  5.listconf.sh
  List all sequence-number definitions
  6.loadconf.sh
  Load configuration in one pass from a given definition file; generally used for the initial go-live or to bulk-migrate existing configuration when deploying to a new environment
  7.setnodeinfo.sh
  Set the node value of this sequence instance. The sequence system supports multi-datacenter deployment and uses modulo arithmetic internally to keep the values generated by each center distinct; this sets the number of center nodes and this center's modulo remainder
  8.setoffset.sh
  Set the offset of a given sequence definition
  9.updtype.sh
  Modify a sequence definition
  ---------------------------------------------------------- Initializing a new sequence-number cluster -------------------------------------------------------
  1. When deploying the sequence-number service to a new cluster, the following two initialization steps are mandatory:
  initzk.sh
  setnodeinfo.sh
  2. After initialization, define the sequence numbers, either one at a time with addtype.sh or by importing a file with loadconf.sh.

As the readme instructs, initialize ZooKeeper with initzk.sh

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./initzk.sh 172.21.64.49
  [08-20 11:24:23] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.

If the sequence-number service has already been deployed in another environment, first inspect that environment's ZK node-number configuration.

Log in to the already-configured environment's ZK server

  [zookeeper@HZ3-BS-1811-V834 zookeeper-3.4.10]$ bin/zkCli.sh
  Connecting to localhost:2181
  [zk: localhost:2181(CONNECTED) 3] ls /idgen/node_info
  [0]
  [zk: localhost:2181(CONNECTED) 4] get /idgen/node_info/0
  2,3
  cZxid = 0x600000018
  ctime = Tue Nov 20 13:49:49 CST 2018
  mZxid = 0x800003c97
  mtime = Tue Jul 16 14:42:47 CST 2019
  pZxid = 0x600000018
  cversion = 0
  dataVersion = 2
  aclVersion = 0
  ephemeralOwner = 0x0
  dataLength = 3
  numChildren = 0

Here you can see the configured environment's ZK node setting starts at 2, so the other node must not use 2.

The 2 is the starting number and the 3 is the step, i.e. the interval between values, yielding numbers like 2, 5, 8, 11, 14, ...
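The start/step arithmetic can be spot-checked with seq:

```shell
# Starting value 2 with step 3: every generated number leaves
# remainder 2 when divided by 3, so two centers never collide.
seq 2 3 14
# prints: 2 5 8 11 14 (one per line)
```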

Set the node info with setnodeinfo.sh

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./setnodeinfo.sh 172.21.64.49 1 3
  [08-20 11:31:05] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.

Export the configured environment's definitions with listconf.sh

  [dubbo@HZ3-BS-1811-V826 bin]$ ./listconf.sh 172.20.188.34:2181 > listConf.csv
  [dubbo@HZ3-BS-1811-V826 bin]$ cat listConf.csv
  Connect to zookeeper : 172.20.188.34:2181
  [08-20 11:34:28] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.
  1 => 1,[F:%G%m%d][SEQ],7,0,0,0,9999999,1,1000
  2 => 2,[F:%G%m%d][SEQ],8,0,0,0,99999999,1,1000
  3 => 3,[SEQ],10,0,0,0,9999999999,1,100,1

These entries define the format of the values each sequence generates.

Copy the exported definitions to the new environment and edit them into the following form:

  1,[F:%G%m%d][SEQ],7,0,0,0,9999999,1,1000
  2,[F:%G%m%d][SEQ],8,0,0,0,99999999,1,1000
  3,[SEQ],10,0,0,0,9999999999,1,100,1
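Cleaning the export into that form can be scripted: drop the log lines and strip the "N => " prefixes. A sketch run against a throwaway copy (file names are illustrative):

```shell
# Keep only the definition rows, stripping the "N => " prefixes;
# log lines never match the pattern, so sed -n '.../p' drops them.
dir=$(mktemp -d)
cat > "$dir/listConf.csv" <<'EOF'
Connect to zookeeper : 172.20.188.34:2181
1 => 1,[F:%G%m%d][SEQ],7,0,0,0,9999999,1,1000
2 => 2,[F:%G%m%d][SEQ],8,0,0,0,99999999,1,1000
EOF
sed -n 's/^[0-9][0-9]* => //p' "$dir/listConf.csv" > "$dir/init_ids.csv"
cat "$dir/init_ids.csv"
rm -r "$dir"
```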

Import the sequence definitions

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./loadconf.sh 172.21.64.49:2181 ../init_ids.csv
  [08-20 14:41:36] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.

Modify a sequence's encoding rule with updtype.sh

  [dubbo@DEV-BS-1908-V429 bin]$ cat updtype.sh
  # update a ID type, $1 zookeeper cluster, $2 id, $3 new definition
  # e.g. addtype localhost:2181 888 888,P[F:%G-%m-%d][T:%H%M%S][SEQ],3,0,0,0,99999999,1,500
  # In the new grammar, %C%g will NOT work, use %G instead
  java -cp ../lib/*:../lib/pay_idgen_core-0.0.3-SNAPSHOT.jar com.lianlian.idgen.service.util.UPDConf $1 $2 $3

For example:

  [dubbo@TEST-BS-1810-V059 bin]$ ./updtype.sh 192.168.132.34:2181 6 6,[F:%G%m%d][SEQ],9,0,0,0,999999999,1,1000
  [09-29 17:37:56] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.

In some cases the service must be restarted for the change to take effect.

After the import completes, check the configuration

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./listconf.sh 172.21.64.49:2181
  Connect to zookeeper : 172.21.64.49:2181
  [08-20 14:41:44] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.
  1 => 1,[F:%G%m%d][SEQ],7,0,0,0,9999999,1,1000
  2 => 2,[F:%G%m%d][SEQ],8,0,0,0,99999999,1,1000
  3 => 3,[SEQ],10,0,0,0,9999999999,1,100,1

Output in the format above means the import succeeded. If it looks like the following instead, the configuration CSV was not cleaned into the correct format before the import:

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./listconf.sh 172.21.64.49:2181
  Connect to zookeeper : 172.21.64.49:2181
  [08-20 14:36:49] WARN ConnectionStateManager [ConnectionStateManager-0]: There are no ConnectionStateListeners registered.
  2 => 2 => 2 => 2,[F:%G%m%d][SEQ],8,0,0,0,99999999,1,1000
  1 => 1 => 1 => 1,[F:%G%m%d][SEQ],7,0,0,0,9999999,1,1000
  3 => 3 => 3 => 3,[SEQ],10,0,0,0,9999999999,1,100,1

In that case, delete the existing configuration first.

Log in to the ZK server and connect with zkCli.sh

  [zookeeper@SZ1-BS-1908-V1164 zookeeper-3.4.10]$ bin/zkCli.sh
  Connecting to localhost:2181
  2019-08-20 14:38:57,587 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT

View the node configuration

  [zk: localhost:2181(CONNECTED) 4] ls /idgen/config
  [2 => 2, 1 => 1, 3 => 3]

Delete the bad node configuration

  [zk: localhost:2181(CONNECTED) 7] delete "/idgen/config/2 => 2"
  [zk: localhost:2181(CONNECTED) 9] delete "/idgen/config/1 => 1"
  [zk: localhost:2181(CONNECTED) 10] delete "/idgen/config/3 => 3"
  [zk: localhost:2181(CONNECTED) 11] ls /idgen/config
  []

After deleting the bad node configuration, restart the application.

Start the service

  [dubbo@SZ1-BS-1908-V1156 bin]$ ./start.sh
  Starting the pay_serial .....OK!
  PID: 17378
  STDOUT: logs/stdout.log

It starts directly; no further steps are needed.

ElasticSearch

Edit the configuration file

  [ex@DEV-BS-1908-V510 ~]$ cd /home/ex/elasticsearch-5.6.11/config
  [ex@DEV-BS-1908-V510 config]$ vi elasticsearch.yml
  # Items to edit:
  # Path to directory where to store the data (separate multiple locations by comma):
  #
  path.data: /home/ex/data
  #
  # Path to log files:
  #
  path.logs: /home/ex/logs
  network.host: 192.168.122.39
  discovery.zen.ping.unicast.hosts: ["192.168.122.39:9301", "192.168.122.40:9301"]

Troubleshooting

https://my.oschina.net/u/2510243/blog/810520?tdsourcetag=s_pcqq_aiomsg

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

As the root user:

  vi /etc/sysctl.conf
  # Add the following setting
  vm.max_map_count=655360
  # Then reload the kernel settings
  sysctl -p
  # Edit the file-descriptor limits; changes to this file take effect after the VM is rebooted
  vim /etc/security/limits.conf
  # Add soft and hard limits
  * soft nofile 81960
  * hard nofile 81960
  * soft nproc 81960
  * hard nproc 81960
  # The per-session limits file can also be edited for immediate effect
  * - nproc 81960
  # Or set the limit directly with
  ulimit -n 81960
  # Or raise the soft limit to the hard maximum
  ulimit -n $(ulimit -Hn)
  # Check whether the settings took effect with
  [ex@TEST-BS-1908-V512 ~]$ ulimit -a
  core file size (blocks, -c) 0
  data seg size (kbytes, -d) unlimited
  scheduling priority (-e) 0
  file size (blocks, -f) unlimited
  pending signals (-i) 31312
  max locked memory (kbytes, -l) 64
  max memory size (kbytes, -m) unlimited
  open files (-n) 81960
  pipe size (512 bytes, -p) 8
  POSIX message queues (bytes, -q) 819200
  real-time priority (-r) 0
  stack size (kbytes, -s) 10240
  cpu time (seconds, -t) unlimited
  max user processes (-u) 81960
  virtual memory (kbytes, -v) unlimited
  file locks (-x) unlimited

INFO: os::commit_memory(0x00000001e9990000, 25071910912, 0) failed; error='Cannot allocate memory' (errno=12)

  [es@HZ3UAT-BS-1909-V083 elasticsearch-5.6.11]$ bin/elasticsearch &
  [1] 57897
  [es@HZ3UAT-BS-1909-V083 elasticsearch-5.6.11]$ Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000001e9990000, 25071910912, 0) failed; error='Cannot allocate memory' (errno=12)
  #
  # There is insufficient memory for the Java Runtime Environment to continue.
  # Native memory allocation (mmap) failed to map 25071910912 bytes for committing reserved memory.
  # An error report file with more information is saved as:
  # /home/es/elasticsearch-5.6.11/hs_err_pid57897.log

Elasticsearch 5.x allocates a 2 GB JVM heap by default; shrink the heap allocation:

  # vim config/jvm.options
  -Xms2g
  -Xmx2g
  # change to
  -Xms512m
  -Xmx512m
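The edit can also be scripted with GNU sed; a sketch run against a throwaway copy of jvm.options rather than the real file:

```shell
# Rewrite the heap flags in a copy of jvm.options (GNU sed -i).
f=$(mktemp)
printf '%s\n' '-Xms2g' '-Xmx2g' > "$f"
sed -i 's/^-Xms2g$/-Xms512m/; s/^-Xmx2g$/-Xmx512m/' "$f"
cat "$f"
# prints: -Xms512m then -Xmx512m
rm "$f"
```

Keep -Xms and -Xmx equal, as the default file does, so the heap is not resized at runtime.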

Start ES

  [ex@TEST-BS-1908-V511 bin]$ chmod +x *
  [ex@TEST-BS-1908-V511 bin]$ ./elasticsearch &
  [1] 129732
  [ex@TEST-BS-1908-V511 bin]$ [2019-08-27T10:52:08,413][INFO ][o.e.n.Node ] [TNP-ES-02] initializing ...
  [2019-08-27T10:52:08,646][INFO ][o.e.e.NodeEnvironment ] [TNP-ES-02] using [1] data paths, mounts [[/ (/dev/mapper/VolGroup-LV_root)]], net usable_space [73.9gb], net total_space [82gb], spins? [possibly], types [ext4]
  [2019-08-27T10:52:08,647][INFO ][o.e.e.NodeEnvironment ] [TNP-ES-02] heap size [1.9gb], compressed ordinary object pointers [true]
  [2019-08-27T10:52:09,026][INFO ][o.e.n.Node ] [TNP-ES-02] node name [TNP-ES-02], node ID [fpG8waVmTq-QZ2L4dqhcjg]
  [2019-08-27T10:52:09,027][INFO ][o.e.n.Node ] [TNP-ES-02] version[5.6.11], pid[129732], build[bc3eef4/2018-08-16T15:25:17.293Z], OS[Linux/2.6.32-696.el6.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_45/25.45-b02]
  [2019-08-27T10:52:09,027][INFO ][o.e.n.Node ] [TNP-ES-02] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/home/ex/elasticsearch-5.6.11]
  [2019-08-27T10:52:11,187][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [aggs-matrix-stats]
  [2019-08-27T10:52:11,187][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [ingest-common]
  [2019-08-27T10:52:11,188][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [lang-expression]
  [2019-08-27T10:52:11,188][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [lang-groovy]
  [2019-08-27T10:52:11,188][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [lang-mustache]
  [2019-08-27T10:52:11,189][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [lang-painless]
  [2019-08-27T10:52:11,189][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [parent-join]
  [2019-08-27T10:52:11,189][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [percolator]
  [2019-08-27T10:52:11,189][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [reindex]
  19. [2019-08-27T10:52:11,190][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [transport-netty3]
  20. [2019-08-27T10:52:11,190][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded module [transport-netty4]
  21. [2019-08-27T10:52:11,191][INFO ][o.e.p.PluginsService ] [TNP-ES-02] loaded plugin [analysis-ik]
  22. [2019-08-27T10:52:14,154][INFO ][o.e.d.DiscoveryModule ] [TNP-ES-02] using discovery type [zen]
  23. [2019-08-27T10:52:15,228][INFO ][o.e.n.Node ] [TNP-ES-02] initialized
  24. [2019-08-27T10:52:15,228][INFO ][o.e.n.Node ] [TNP-ES-02] starting ...

Resolving startup errors

If the following exception appears in the log:

  1. [2019-08-27T10:52:58,069][WARN ][o.e.i.e.Engine ] [TNP-ES-02] [table_user_index_1][4] failed engine [failed to recover from translog]
  2. org.elasticsearch.index.engine.EngineException: failed to recover from translog
  3. at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:244) ~[elasticsearch-5.6.11.jar:5.6.11]
  4. at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:221) [elasticsearch-5.6.11.jar:5.6.11]
  5. at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:92) [elasticsearch-5.6.11.jar:5.6.11]
  6. at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:1033) [elasticsearch-5.6.11.jar:5.6.11]
  7. at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:987) [elasticsearch-5.6.11.jar:5.6.11]
  8. at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:360) [elasticsearch-5.6.11.jar:5.6.11]
  9. at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) [elasticsearch-5.6.11.jar:5.6.11]
  10. at org.elasticsearch.index.shard.StoreRecovery$$Lambda$1542/357079274.run(Unknown Source) [elasticsearch-5.6.11.jar:5.6.11]
  11. at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) [elasticsearch-5.6.11.jar:5.6.11]
  12. at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) [elasticsearch-5.6.11.jar:5.6.11]
  13. at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1236) [elasticsearch-5.6.11.jar:5.6.11]
  14. at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$1(IndexShard.java:1484) [elasticsearch-5.6.11.jar:5.6.11]
  15. at org.elasticsearch.index.shard.IndexShard$$Lambda$1541/1402880171.run(Unknown Source) [elasticsearch-5.6.11.jar:5.6.11]
  16. at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:576) [elasticsearch-5.6.11.jar:5.6.11]
  17. at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]
  18. at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]
  19. at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
  20. Caused by: java.io.EOFException: read past EOF. pos [9543183] length: [4] end: [9543183]
  21. at org.elasticsearch.common.io.Channels.readFromFileChannelWithEofException(Channels.java:101) ~[elasticsearch-5.6.11.jar:5.6.11]
  22. at org.elasticsearch.index.translog.TranslogSnapshot.readBytes(TranslogSnapshot.java:90) ~[elasticsearch-5.6.11.jar:5.6.11]
  23. at org.elasticsearch.index.translog.BaseTranslogReader.readSize(BaseTranslogReader.java:67) ~[elasticsearch-5.6.11.jar:5.6.11]
  24. at org.elasticsearch.index.translog.TranslogSnapshot.readOperation(TranslogSnapshot.java:68) ~[elasticsearch-5.6.11.jar:5.6.11]
  25. at org.elasticsearch.index.translog.TranslogSnapshot.next(TranslogSnapshot.java:61) ~[elasticsearch-5.6.11.jar:5.6.11]
  26. at org.elasticsearch.index.translog.MultiSnapshot.next(MultiSnapshot.java:53) ~[elasticsearch-5.6.11.jar:5.6.11]
  27. at org.elasticsearch.index.shard.TranslogRecoveryPerformer.recoveryFromSnapshot(TranslogRecoveryPerformer.java:84) ~[elasticsearch-5.6.11.jar:5.6.11]
  28. at org.elasticsearch.index.shard.IndexShard$IndexShardRecoveryPerformer.recoveryFromSnapshot(IndexShard.java:1838) ~[elasticsearch-5.6.11.jar:5.6.11]
  29. at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:242) ~[elasticsearch-5.6.11.jar:5.6.11]
  30. ... 16 more

This exception means the node data left under the data directory contains a corrupt translog; locate the existing node data and delete it (note: this removes all local shard data for that node):

  1. [ex@TEST-BS-1908-V511 nodes]$ pwd
  2. /home/ex/data/nodes
  3. [ex@TEST-BS-1908-V511 nodes]$ rm -r 0/
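Deleting `nodes/0` wipes all local index data. When only one shard's translog is corrupt (as the `read past EOF` trace above suggests), ES 5.x also ships a `bin/elasticsearch-translog truncate -d <translog-dir>` tool that discards just the unflushed operations. The shard's translog directory can be located with find; a sketch against a scratch tree (the index UUID `AbCdEf123` and shard number are placeholders, and `/tmp/es-data-demo` stands in for the real data path):

```shell
# Scratch tree standing in for /home/ex/data/nodes; on a real install,
# run find against the actual data path instead.
base=/tmp/es-data-demo
mkdir -p "$base/nodes/0/indices/AbCdEf123/4/translog"

# Locate every shard translog directory under the data path.
find "$base/nodes" -type d -name translog
# Then, with ES stopped:
#   bin/elasticsearch-translog truncate -d <translog-dir>
```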

Verifying a successful start

If the startup succeeded, you can confirm it in either of two ways:

  1. Request http://192.168.122.40:9201/ with the curl command
  2. Open http://192.168.122.40:9201/ directly in a browser

If the following content is returned, the startup succeeded:

  1. {
  2. "name" : "TNP-ES-03",
  3. "cluster_name" : "TNP-ES",
  4. "cluster_uuid" : "BBikXLspRBaTWJt-XAXADQ",
  5. "version" : {
  6. "number" : "5.6.11",
  7. "build_hash" : "bc3eef4",
  8. "build_date" : "2018-08-16T15:25:17.293Z",
  9. "build_snapshot" : false,
  10. "lucene_version" : "6.6.1"
  11. },
  12. "tagline" : "You Know, for Search"
  13. }
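The check can also be scripted for monitoring; a small sketch that treats the presence of the `tagline` field in the root response as success (host and port taken from this install, adjust to yours):

```shell
# Probe the ES root endpoint; the JSON response of a healthy node
# contains a "tagline" field, as in the sample above.
check_es() {
  if curl -s --connect-timeout 5 "$1" | grep -q '"tagline"'; then
    echo "ES is up"
  else
    echo "ES is not responding"
  fi
}

check_es http://192.168.122.40:9201/
```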

Canal (database binlog synchronization tool)

  1. [dubbo@HZ3UAT-BS-1909-V085 conf]$ pwd
  2. /home/dubbo/canal/conf
  3. [dubbo@HZ3UAT-BS-1909-V085 conf]$ vi canal.properties
  4. # settings to modify
  5. canal.zkServers = 172.20.179.50:2181
  6. canal.mq.servers = 172.20.188.40:9876;172.20.188.41:9876
  7. [dubbo@SZ1-BS-1908-V1132 example]$ vi instance.properties
  8. [dubbo@SZ1-BS-1908-V1132 example]$ pwd
  9. /home/dubbo/canal/conf/example
  10. # settings to modify
  11. canal.instance.master.address=172.21.70.65:3306
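Canal reads the MySQL binlog, so the source instance configured above (172.21.70.65) must have binlogging enabled in ROW format, and the account Canal connects with needs replication privileges. The statements below are a sketch, stored in variables so the block is self-contained; the `canal` account name is an assumption — run them on the source host, e.g. via `mysql -h 172.21.70.65 -uroot -p -e "$check_sql"`:

```shell
# SQL to run on the source MySQL instance (account name 'canal' is
# illustrative; substitute the account Canal actually uses).
check_sql="SHOW VARIABLES LIKE 'binlog_format';"   # expect Value = ROW
grant_sql="GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';"
printf '%s\n%s\n' "$check_sql" "$grant_sql"
```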

Kibana

Edit the configuration file (config/kibana.yml)

  1. server.port: 5601
  2. server.host: "192.168.122.40"
  3. elasticsearch.url: "http://192.168.122.40:9201"

Add execute permission to the bundled node binary

  1. [ex@DEV-BS-1908-V510 bin]$ chmod +x ../node/bin/node

Start Kibana

  1. [ex@DEV-BS-1908-V510 bin]$ ./kibana &
  2. [2] 130018
  3. [ex@DEV-BS-1908-V510 bin]$ log [02:38:06.457] [info][status][plugin:kibana@5.6.11] Status changed from uninitialized to green - Ready
  4. log [02:38:06.553] [info][status][plugin:elasticsearch@5.6.11] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  5. log [02:38:06.602] [info][status][plugin:console@5.6.11] Status changed from uninitialized to green - Ready
  6. log [02:38:06.652] [info][status][plugin:metrics@5.6.11] Status changed from uninitialized to green - Ready
  7. log [02:38:06.877] [info][status][plugin:elasticsearch@5.6.11] Status changed from yellow to green - Kibana index ready
  8. log [02:38:06.879] [info][status][plugin:timelion@5.6.11] Status changed from uninitialized to green - Ready
  9. log [02:38:06.885] [info][listening] Server running at http://192.168.122.40:5601
  10. log [02:38:06.887] [info][status][ui settings] Status changed from uninitialized to green - Ready

Verify the startup result

Open http://192.168.122.40:5601 in a browser to check whether Kibana started successfully.
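Kibana also exposes a machine-readable status endpoint at /api/status; a curl sketch using the same host and port (the `"state":"green"` field check is an assumption based on the Kibana 5.x status API response shape):

```shell
# Probe Kibana's status API; a healthy instance reports a green state
# in its JSON response.
check_kibana() {
  if curl -s --connect-timeout 5 "$1/api/status" | grep -q '"state":"green"'; then
    echo "Kibana is up"
  else
    echo "Kibana is not responding or not green"
  fi
}

check_kibana http://192.168.122.40:5601
```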
