@phper
2015-12-28T17:00:41.000000Z
Twitter's Twemproxy (https://github.com/twitter/twemproxy) is currently the most widely used proxy for running Redis as a clustered service. Redis itself is single-threaded, and the official Redis Cluster is not yet stable or widely deployed, so a proxy layer is the practical way to scale out. What we want from it:
- Expose a single endpoint to clients
- Shard requests across the backend Redis instances (sharding)
- Shard sensibly: spread keys evenly, and always route the same key to the same Redis node (see the sketch after this list)
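To make the last point concrete, here is a minimal Python sketch of ketama-style consistent hashing, the distribution this article configures below. It is an illustration of the idea only, not twemproxy's actual code: every server is placed on a hash ring many times via virtual nodes, and a key is routed to the first point on the ring at or after the key's hash, so the same key always lands on the same node and adding or removing a server only remaps a fraction of the keys.

import bisect
import hashlib

def _hash(value: str) -> int:
    # MD5 is used here purely for illustration; twemproxy supports several
    # hash functions (fnv1a_64, md5, crc32, ...).
    return int(hashlib.md5(value.encode()).hexdigest()[:8], 16)

class Ring:
    """Ketama-style consistent hash ring (illustrative sketch only)."""
    def __init__(self, servers, vnodes=160):
        self.points = []            # sorted hash points on the ring
        self.owners = {}            # hash point -> server
        for server in servers:
            for i in range(vnodes): # virtual nodes smooth the distribution
                point = _hash(f"{server}-{i}")
                self.points.append(point)
                self.owners[point] = server
        self.points.sort()

    def node_for(self, key: str) -> str:
        h = _hash(key)
        idx = bisect.bisect(self.points, h) % len(self.points)
        return self.owners[self.points[idx]]

ring = Ring(["192.168.33.11:6370", "192.168.33.11:6380", "192.168.33.11:6381"])
print(ring.node_for("name2"))  # the same key always maps to the same backend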
git clone https://github.com/twitter/twemproxy.git
Running autoreconf complains that the installed Autoconf version is too old:
[root@web3 twemproxy]# autoreconf
configure.ac:8: error: Autoconf version 2.64 or higher is required
configure.ac:8: the top level
autom4te: /usr/bin/m4 failed with exit status: 63
aclocal: autom4te failed with exit status: 63
autoreconf: aclocal failed with exit status: 63
Download Autoconf 2.69:
[root@web3] wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz
[root@web3] tar zxvf autoconf-2.69.tar.gz
[root@web3] cd autoconf-2.69
[root@web3 autoconf-2.69]# ./configure --prefix=/usr
[root@web3 autoconf-2.69]# make
[root@web3 autoconf-2.69]# make install
Continue with the Twemproxy build:
cd twemproxy/
[root@web3 twemproxy] CFLAGS="-ggdb3 -O0" autoreconf -fvi && ./configure --prefix=/usr/local/twemproxy --enable-debug=log
[root@web3 twemproxy] make
[root@web3 twemproxy] make install
Add a configuration file listing the three Redis instances we use for testing:
[root@web3 conf]# cd /usr/local/twemproxy/
[root@web3 twemproxy]# mkdir conf run
[root@web3 twemproxy]# cd conf
[root@web3 conf]# vi nutcracker.yml
beta:
  listen: 0.0.0.0:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 192.168.33.11:6370:1 master0
   - 192.168.33.11:6380:1 master1
   - 192.168.33.11:6381:1 master2
The configuration file is in place; now test it:
[root@web3 twemproxy]# ./sbin/nutcracker -t
nutcracker: configuration file 'conf/nutcracker.yml' syntax is ok
The configuration is valid. Now start the proxy:
[root@web3 twemproxy]# ./sbin/nutcracker -d -c /usr/local/twemproxy/conf/nutcracker.yml -p /usr/local/twemproxy/run/redisproxy.pid -o /usr/local/twemproxy/run/redisproxy.log
The flags above set the configuration file path, the pid file path, and the log file path.
Did it start successfully?
[root@web3 twemproxy]# ps -ef|grep nutcracker
root 13816 1 0 09:17 ? 00:00:00 ./sbin/nutcracker -d -c /usr/local/twemproxy/conf/nutcracker.yml -p /usr/local/twemproxy/run/redisproxy.pid -o /usr/local/twemproxy/run/redisproxy.log
Yes, it is running.
You can check the help output for nutcracker's usage:
[root@web3 twemproxy]# ./sbin/nutcracker --help
This is nutcracker-0.4.1
Usage: nutcracker [-?hVdDt] [-v verbosity level] [-o output file]
[-c conf file] [-s stats port] [-a stats addr]
[-i stats interval] [-p pid file] [-m mbuf size]
Options:
-h, --help             : show this help and exit
-V, --version          : show version and exit
-t, --test-conf        : test the configuration file syntax and exit
-d, --daemonize        : run as a daemon
-D, --describe-stats   : print the stats description and exit
-v, --verbosity=N      : set the logging level (default: 5, min: 0, max: 11)
-o, --output=S         : set the log output file (default: stderr)
-c, --conf-file=S      : set the configuration file path (default: conf/nutcracker.yml)
-s, --stats-port=N     : set the stats monitoring port (default: 22222)
-a, --stats-addr=S     : set the stats monitoring address (default: 0.0.0.0)
-i, --stats-interval=N : set the stats aggregation interval (default: 30000 msec)
-p, --pid-file=S       : set the pid file path (default: off)
-m, --mbuf-size=N      : set the mbuf chunk size in bytes (default: 16384 bytes)
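nutcracker also exposes runtime statistics as a JSON document on the stats port (-s/-a above, 22222 on 0.0.0.0 by default): any client that connects to that port receives the JSON and the connection is closed. Here is a minimal Python sketch for reading it, assuming the proxy runs on localhost with the default stats port; the exact field names in the JSON may vary by version, so treat them as an illustration.

import json
import socket

def read_stats(host="127.0.0.1", port=22222):
    # twemproxy writes one JSON document to any client that connects
    # to its stats port, then closes the connection.
    with socket.create_connection((host, port)) as sock:
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return json.loads(b"".join(chunks))

stats = read_stats()
print(stats.get("service"), stats.get("version"))  # e.g. "nutcracker", "0.4.1"
print(stats.get("beta", {}))                       # per-pool counters for our "beta" pool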
Connecting works exactly like connecting to Redis directly; only the port changes to 22122:
[root@web3 twemproxy]# redis-cli -p 22122
127.0.0.1:22122> set name2 yangyi
OK
127.0.0.1:22122> get name2
"yangyi"
Reads and writes go through, so the proxy works. From now on, application code (PHP in our case) can simply connect to port 22122 and use Redis through the proxy, as in the sketch below.
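Nothing special is needed on the application side: point a normal Redis client at the proxy. The article's use case is PHP (phpredis or Predis would connect the same way); the following minimal sketch uses Python's redis-py purely for illustration, with the proxy address from this setup.

import redis

# Connect to twemproxy instead of a single Redis instance; host/port
# match the proxy set up in this article.
r = redis.Redis(host="192.168.33.13", port=22122)

r.set("name2", "yangyi")
print(r.get("name2"))  # b'yangyi'

# Note: twemproxy supports only a subset of Redis commands
# (no MULTI/EXEC, no KEYS, etc.), so stick to simple per-key operations.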
Run a performance test with redis-benchmark, which ships with Redis.
SET test:
twemproxy:
[root@web3 twemproxy]# redis-benchmark -h 192.168.33.13 -p 22122 -c 100 -t set -d 100
====== SET ======
100000 requests completed in 3.82 seconds
100 parallel clients
100 bytes payload
keep alive: 1
0.00% <= 1 milliseconds
0.13% <= 2 milliseconds
4.16% <= 3 milliseconds
77.03% <= 4 milliseconds
96.81% <= 5 milliseconds
98.40% <= 6 milliseconds
99.17% <= 7 milliseconds
99.46% <= 8 milliseconds
99.58% <= 9 milliseconds
99.64% <= 10 milliseconds
99.64% <= 11 milliseconds
99.78% <= 12 milliseconds
99.93% <= 13 milliseconds
99.96% <= 14 milliseconds
99.97% <= 15 milliseconds
100.00% <= 15 milliseconds
26532.24 requests per second
Raw Redis:
[root@web3 twemproxy]# redis-benchmark -h 192.168.33.11 -p 6380 -c 100 -t set -d 100
====== SET ======
100000 requests completed in 4.53 seconds
100 parallel clients
100 bytes payload
keep alive: 1
0.07% <= 1 milliseconds
22.66% <= 2 milliseconds
51.71% <= 3 milliseconds
60.04% <= 4 milliseconds
69.81% <= 5 milliseconds
77.51% <= 6 milliseconds
82.74% <= 7 milliseconds
86.95% <= 8 milliseconds
91.00% <= 9 milliseconds
93.94% <= 10 milliseconds
95.83% <= 11 milliseconds
98.03% <= 12 milliseconds
98.99% <= 13 milliseconds
99.36% <= 14 milliseconds
99.64% <= 15 milliseconds
99.75% <= 16 milliseconds
99.79% <= 17 milliseconds
99.87% <= 18 milliseconds
99.90% <= 19 milliseconds
99.91% <= 20 milliseconds
99.94% <= 22 milliseconds
99.95% <= 23 milliseconds
99.98% <= 24 milliseconds
99.98% <= 25 milliseconds
99.99% <= 26 milliseconds
100.00% <= 27 milliseconds
100.00% <= 27 milliseconds
22060.45 requests per second
Hmm, at first glance twemproxy looks considerably faster than raw Redis. Other benchmarks I have seen report roughly a 20% performance loss, yet here it comes out about 20% faster.
Now test GET:
twemproxy:
[root@web3 twemproxy]# redis-benchmark -h 192.168.33.13 -p 22122 -c 100 -t get -d 100
====== GET ======
100000 requests completed in 3.57 seconds
100 parallel clients
100 bytes payload
keep alive: 1
0.00% <= 1 milliseconds
0.03% <= 2 milliseconds
8.20% <= 3 milliseconds
87.29% <= 4 milliseconds
98.06% <= 5 milliseconds
99.23% <= 6 milliseconds
99.54% <= 7 milliseconds
99.70% <= 8 milliseconds
99.72% <= 9 milliseconds
99.75% <= 10 milliseconds
99.80% <= 11 milliseconds
99.90% <= 12 milliseconds
99.98% <= 13 milliseconds
99.99% <= 14 milliseconds
100.00% <= 14 milliseconds
27995.52 requests per second
Raw Redis:
[root@web3 twemproxy]# redis-benchmark -h 192.168.33.11 -p 6380 -c 100 -t get -d 100
====== GET ======
100000 requests completed in 4.91 seconds
100 parallel clients
100 bytes payload
keep alive: 1
0.18% <= 1 milliseconds
22.35% <= 2 milliseconds
43.39% <= 3 milliseconds
53.87% <= 4 milliseconds
63.11% <= 5 milliseconds
72.07% <= 6 milliseconds
79.50% <= 7 milliseconds
85.14% <= 8 milliseconds
90.04% <= 9 milliseconds
93.09% <= 10 milliseconds
96.05% <= 11 milliseconds
98.24% <= 12 milliseconds
99.03% <= 13 milliseconds
99.45% <= 14 milliseconds
99.75% <= 15 milliseconds
99.81% <= 16 milliseconds
99.97% <= 17 milliseconds
100.00% <= 17 milliseconds
20387.36 requests per second
Did I misconfigure something? GET is also a bit faster through twemproxy.
😭
Advantages
Twemproxy is a lightweight proxy for Redis and memcached. It cuts down the number of connections to the cache servers and handles sharding for you, and the proxy itself is fast. Posts around the web mention roughly a 20% performance penalty, yet the redis-benchmark runs above were actually faster through the proxy; the author's original English statement is that in the worst case the loss should not exceed 20%. The reason is pipelining: Redis supports pipelined batches of commands, and twemproxy keeps a single connection to each Redis server with two FIFO queues per connection, through which it pipelines requests. Traffic from many clients is multiplexed onto that one connection, which both reduces the number of connections each Redis server has to handle and improves throughput. Client-side pipelining also still works through the proxy, as the sketch below shows.
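One caveat this sketch relies on: twemproxy does not support MULTI/EXEC, so with redis-py the pipeline must be created with transaction=False so the batch is sent without a transaction wrapper. A minimal sketch against the proxy from this article:

import redis

r = redis.Redis(host="192.168.33.13", port=22122)

# transaction=False avoids wrapping the batch in MULTI/EXEC, which
# twemproxy does not support; the commands are still sent as one batch.
pipe = r.pipeline(transaction=False)
for i in range(100):
    pipe.set(f"key:{i}", i)
results = pipe.execute()
print(len(results))  # 100 replies, one per queued command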
Disadvantages
- Nodes can be removed dynamically, but the data on the removed node is then lost.
- When new nodes are added to the cluster, twemproxy does not redistribute existing data; on the mailing list the author says you have to write your own script for that (a rough sketch follows this list).
- Some performance overhead.
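The "write your own script" suggestion might look roughly like the sketch below: scan every key on each old backend directly and re-write it through the proxy, so each key ends up on whichever node the new ring assigns it to. This is my own illustrative sketch, not anything shipped with twemproxy; it handles only string keys and ignores TTLs and concurrent writes.

import redis

# Old backends, addressed directly (not through the proxy).
OLD_NODES = ["192.168.33.11:6370", "192.168.33.11:6380", "192.168.33.11:6381"]
# The proxy, which routes keys according to the new server list.
proxy = redis.Redis(host="192.168.33.13", port=22122)

for node in OLD_NODES:
    host, port = node.split(":")
    src = redis.Redis(host=host, port=int(port))
    # SCAN each backend and re-write every string key through the proxy,
    # so it lands on whichever node the new hash ring assigns it to.
    for key in src.scan_iter(count=1000):
        if src.type(key) == b"string":
            proxy.set(key, src.get(key))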
Above we created a simple configuration file, nutcracker.yml:
beta:
  listen: 0.0.0.0:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 192.168.33.11:6370:1 master0
   - 192.168.33.11:6380:1 master1
   - 192.168.33.11:6381:1 master2
Let's go through what each of these parameters means and how it is used. As a preview, hash and hash_tag together determine which backend a key is routed to; the sketch below illustrates the idea.
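With hash_tag: "{}", only the part of the key between { and } is hashed, so keys such as user:{1000}:name and user:{1000}:profile map to the same backend. The sketch below uses a textbook FNV-1a 64-bit hash to illustrate "hash: fnv1a_64"; twemproxy's own implementation may differ in detail, so treat this as an illustration rather than a byte-for-byte reimplementation.

FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3

def fnv1a_64(data: bytes) -> int:
    # Textbook FNV-1a 64-bit, shown only to illustrate "hash: fnv1a_64".
    h = FNV64_OFFSET
    for byte in data:
        h ^= byte
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

def hashable_part(key: str, tag: str = "{}") -> str:
    # With hash_tag "{}", only the substring between the first "{" and the
    # following "}" is hashed; otherwise the whole key is hashed.
    start = key.find(tag[0])
    if start != -1:
        end = key.find(tag[1], start + 1)
        if end != -1:
            return key[start + 1:end]
    return key

for key in ("user:{1000}:name", "user:{1000}:profile", "user:{2000}:name"):
    print(key, hex(fnv1a_64(hashable_part(key).encode())))
# The first two keys hash identically, so they are routed to the same server.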