A detailed guide to redis.conf, the Redis configuration file

A walkthrough of the Redis 5.0.7 configuration file.

The INCLUDES section

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

Uncommenting the two example `include` lines above lets you stitch multiple redis.conf files together: the include directive pulls in other configuration files, enabling modular configuration management and inheritance.
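As a quick, hypothetical sketch (the file names /etc/redis/common.conf and /etc/redis/6380.conf are made up for illustration), a shared template plus a small per-instance file might look like this:

    # Write a shared template used by every instance
    cat > /etc/redis/common.conf <<'EOF'
    maxmemory 4gb
    appendonly yes
    EOF

    # Per-instance file: include the template first, then override per-server settings
    cat > /etc/redis/6380.conf <<'EOF'
    include /etc/redis/common.conf
    port 6380
    EOF

    redis-server /etc/redis/6380.conf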

The NETWORK section

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1 ::1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /var/run/redis/redis-server.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

The bind directive

  • Function: bind specifies which IP addresses the Redis server listens on. If it is not set, Redis listens on every network interface available on the server, which is a security risk when the machine is directly exposed to the internet.
  • Default: bind 127.0.0.1 ::1, where 127.0.0.1 is the IPv4 loopback address and ::1 the IPv6 one. By default, Redis therefore only accepts connections from the local machine.
  • Example: to listen on specific addresses, set bind 192.168.1.100 10.0.0.1.

The protected-mode directive

  • Function: protected mode is a security layer that prevents Redis instances left open on the internet from being accessed and exploited.
  • When it applies: with protected mode on, if no addresses are explicitly bound via bind and no password is configured, Redis only accepts connections from the IPv4/IPv6 loopback addresses and Unix domain sockets.
  • Default: yes. Set it to no only if you are sure you want clients from other hosts to connect even without authentication or an explicit bind list.

The port directive

  • Function: the TCP port the Redis server listens on.
  • Default: 6379. If set to 0, Redis will not listen on a TCP socket.

The tcp-backlog directive

  • Function: tcp-backlog is the backlog argument passed to the listen() system call, i.e. the maximum number of pending connection requests the OS will queue before new connections are refused. In high-concurrency environments, raising it helps avoid slow-client connection issues.
  • Default: 511. Note that the Linux kernel silently truncates it to /proc/sys/net/core/somaxconn, so to raise it effectively you must also raise somaxconn and tcp_max_syn_backlog.

Unix socket configuration

  • Function: local inter-process communication over a Unix socket. You can specify the socket file path; if unset, Redis does not listen on a Unix socket.
  • Example: unixsocket /var/run/redis/redis-server.sock sets the socket path, and unixsocketperm 700 sets the socket file's permissions.

The timeout directive

  • Function: close a client connection after it has been idle for N seconds; 0 disables the feature.
  • Default: 0, i.e. idle connections are never closed.

The tcp-keepalive directive

  • Function: if non-zero, Redis uses SO_KEEPALIVE to send TCP ACKs to clients in the absence of communication. This serves two purposes: detecting dead peers, and keeping the connection alive from the point of view of intermediate network equipment.
  • Default: 300 seconds, i.e. an ACK is sent after 300 seconds without communication. On Linux this value is the period used to send ACKs.

The GENERAL section

These directives cover Redis's general settings: the server's basic run mode, logging, number of databases, and so on.

1. daemonize

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes
  • Function: decides whether Redis runs as a daemon. With yes, Redis runs in the background; with no, it runs in the foreground.
  • Default: no.
  • Effect of yes: Redis writes a PID file to /var/run/redis.pid (or the path given by pidfile), which makes the process easier to manage.

2. supervised

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised systemd
  • Function: configures how Redis interacts with init/supervision systems such as upstart and systemd.
  • Options:
    • no: no supervision interaction.
    • upstart: signal upstart by putting Redis into SIGSTOP mode.
    • systemd: signal systemd by writing READY=1 to $NOTIFY_SOCKET.
    • auto: detect upstart or systemd based on the UPSTART_JOB or NOTIFY_SOCKET environment variables.
  • Meaning here: supervised systemd means Redis interacts with systemd and signals readiness once startup completes.

3. pidfile

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis/redis-server.pid
  • Function: the path of the Redis PID file. Redis writes its process ID there at startup and removes the file on exit.
  • Default: when daemonized without an explicit pidfile, /var/run/redis.pid is used; when not daemonized and no pidfile is given, none is created.
  • Meaning here: Redis writes its PID to /var/run/redis/redis-server.pid.

4. loglevel

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
  • Function: sets the verbosity of the Redis server log.
  • Options:
    • debug: a lot of information, useful for development and testing.
    • verbose: many rarely useful messages, but not as noisy as debug.
    • notice: moderately verbose, suitable for production.
    • warning: only very important or critical messages.
  • Meaning here: loglevel notice produces moderately detailed logs, appropriate for production.

5. logfile

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis-server.log
  • Function: the name and path of the Redis log file. An empty string forces logging to standard output; note that when daemonized with standard-output logging, logs go to /dev/null.
  • Meaning here: Redis writes its log to /var/log/redis/redis-server.log.

6. syslog options (commented out)

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
  • Function: these options route Redis logging to the system logger (syslog).
    • syslog-enabled: whether syslog logging is enabled.
    • syslog-ident: the identity used in syslog.
    • syslog-facility: the syslog facility, which must be USER or LOCAL0-LOCAL7.
  • Current state: all of these are commented out, so syslog logging is disabled.

7. databases

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
  • Function: sets the number of databases. Clients pick one with SELECT <dbid>, where dbid ranges from 0 to databases - 1.
  • Default: 16.
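A quick illustration of switching databases (keys live in a separate namespace per database):

    redis-cli -n 1 set greeting "hello"   # write into DB 1 (-n selects the database)
    redis-cli -n 1 get greeting           # "hello"
    redis-cli -n 0 get greeting           # (nil): DB 0 does not see DB 1's keys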

8. always-show-logo

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes
  • Function: whether the ASCII art logo is always shown in startup logs. By default it only appears when logging to standard output and standard output is a TTY.
  • Meaning here: always-show-logo yes shows the ASCII art logo at startup regardless of how Redis is launched.

The SNAPSHOTTING section

This is Redis's snapshot (RDB) persistence configuration: memory contents are periodically written to disk as a .rdb file. Each directive is explained below.

I. Snapshot trigger policy (the save directive)

save 900 1
save 300 10
save 60 10000

Purpose

  • Defines the conditions under which Redis automatically takes a snapshot (BGSAVE), in the format:

    save <seconds> <writes>

    • A snapshot is triggered once at least N write operations have occurred within the given number of seconds.
    • Several conditions can be configured; satisfying any one of them triggers a snapshot.

Examples explained

  • save 900 1: snapshot if at least 1 write happened within 900 seconds (15 minutes).
  • save 300 10: snapshot if at least 10 writes happened within 300 seconds (5 minutes).
  • save 60 10000: snapshot if at least 10000 writes happened within 60 seconds.

Additional notes

  • Disabling snapshots: comment out all save lines, or add save "".
  • Execution: snapshots are taken by a background process (BGSAVE) and do not block the main thread, but forking the child costs extra memory (copy-on-write of the dataset's pages).
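You can also trigger and verify a snapshot by hand, which is handy when testing save rules:

    redis-cli bgsave          # kick off a background snapshot
    redis-cli lastsave        # UNIX timestamp of the last successful save; compare before/after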

II. Write control on snapshot failure (stop-writes-on-bgsave-error)

stop-writes-on-bgsave-error yes

Purpose

  • Whether Redis refuses new writes after a snapshot fails (disk full, permission problems, and so on).
  • yes (default): reject writes after a failed snapshot, forcing the operator to fix the problem (avoids continuing to accept writes that are not being persisted).
  • no: ignore the error and keep serving writes (for setups with proper monitoring that can tolerate data loss).

III. RDB file compression (rdbcompression)

rdbcompression yes

Purpose

  • Whether the generated .rdb file is compressed (with the LZF algorithm).
  • yes (default): compress, reducing disk usage at the cost of CPU in the saving child process.
  • no: no compression, for CPU-bound hosts with plenty of disk space.

IV. RDB checksum (rdbchecksum)

rdbchecksum yes

Purpose

  • Whether a CRC64 checksum is appended to the end of the .rdb file to detect corruption.
  • yes (default): data integrity is verified on save and load, at roughly a 10% performance cost.
  • no: skip the check for maximum performance, at the risk of loading a corrupted file (only for setups that fully trust their storage).

V. RDB file name and directory (dbfilename and dir)

dbfilename dump.rdb
dir /var/lib/redis

1. dbfilename

  • Purpose: the snapshot file name (without a path).
  • Default: dump.rdb; keep the default or name it after the workload (e.g. redis-data.rdb).

2. dir

  • Purpose: the directory where snapshot and AOF files are stored (it must be a directory, not a file name).
  • Default: ./ (the directory Redis was started from); production deployments usually use a dedicated path such as /var/lib/redis.
  • Notes:
    • The directory must exist beforehand and be readable and writable by the Redis process.
    • Prefer mounting it on fast storage (e.g. SSD) so snapshots do not become a bottleneck.

VI. Snapshot best practices

1. Tune the trigger policy to the workload

  • Write-heavy workloads: shorten the window (e.g. save 60 100), but watch the CPU and disk pressure of frequent snapshots.
  • Write-light workloads: lengthen it (e.g. keep only save 86400 1, at least one snapshot per day).

2. Suggested production configuration

save 3600 1        # snapshot if at least 1 write per hour
save 300 100       # snapshot if at least 100 writes per 5 minutes
save 60 10000      # snapshot if at least 10000 writes per minute
stop-writes-on-bgsave-error yes  # block writes on snapshot failure (default)
rdbcompression yes              # enable compression (default)
rdbchecksum yes                 # enable checksumming (default)
dbfilename dump.rdb              # timestamped names (e.g. dump-20240101.rdb) need an external script; redis.conf does not expand $(date ...)
dir /mnt/redis-data/             # dedicated storage directory (ideally its own disk)

3. Snapshot backup and restore

  • Backup: periodically copy the .rdb files under dir to other storage (NAS, object storage) so a disk failure does not lose data.
  • Restore: place the backup .rdb into dir and restart Redis, which loads it at startup. (Note that redis-cli --rdb <file> fetches an RDB from a running server into a local file; it does not load one into the server.)
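A minimal backup sketch (the paths /var/lib/redis and /backup/redis and the 7-day retention are illustrative assumptions; adjust to your environment):

    #!/usr/bin/env bash
    # Copy the latest snapshot to a dated backup file
    set -euo pipefail
    redis-cli bgsave
    sleep 5                                   # crude wait; poll LASTSAVE in real scripts
    cp /var/lib/redis/dump.rdb "/backup/redis/dump-$(date +%Y%m%d-%H%M%S).rdb"
    find /backup/redis -name 'dump-*.rdb' -mtime +7 -delete   # keep 7 days of backups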

VII. Common problems and troubleshooting

  1. No snapshot is produced

    • Check whether any save rule's conditions were actually met (e.g. not enough writes).
    • Check the log (at the logfile path) for permission errors or a full disk.
  2. Snapshots take too long

    • Lower the save frequency, or trim the dataset (delete unused data).
    • Make sure the disk hosting dir has adequate I/O throughput.
  3. Corrupted RDB file

    • Keep rdbchecksum yes (the default); files are verified on load.
    • Verify a file manually with the redis-check-rdb tool:

      redis-check-rdb /var/lib/redis/dump.rdb

Sensible snapshot settings balance durability against server performance. In production, combine them with AOF persistence (covered below) for stronger guarantees.

The REPLICATION section

This is Redis's master-replica replication configuration, used for data copies, read/write splitting and high availability. Each directive is explained below.

I. Master/replica roles (replicaof and masterauth)

# replicaof <masterip> <masterport>
# masterauth <master-password>

Purpose

  1. replicaof: makes the current instance a replica of the master at the given IP and port.
    • Example: replicaof 192.168.1.100 6379
    • Note: since Redis 5.0, replicaof replaces the older slaveof; the behavior is identical.
  2. masterauth: if the master is password protected (requirepass), the replica must supply the master's password here, or replication cannot be established.
    • Example: masterauth "strong-password"

How it works

  • On startup the replica asks the master to synchronize (PSYNC, or the legacy SYNC), triggering a full or partial resynchronization.
  • Once the password (if any) is verified, the master starts streaming data.
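Replication can also be set up at runtime without editing the config file; the master address below is the example one from above:

    # On the replica:
    redis-cli config set masterauth "strong-password"   # only if the master requires a password
    redis-cli replicaof 192.168.1.100 6379              # start replicating
    redis-cli info replication | head                   # role:slave, master_link_status:up when synced
    redis-cli replicaof no one                          # promote back to a standalone master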

II. Serving stale data (replica-serve-stale-data)

replica-serve-stale-data yes

Purpose

  • Controls whether a replica answers client requests while replication is incomplete (first sync, broken link).
    • yes (default): serve possibly stale or empty data (may be inconsistent, but the service stays up).
    • no: apart from a few commands such as INFO and PING, reply with a SYNC with master in progress error (consistent, but partially unavailable).

When to use which

  • Read-heavy workloads: keep yes so replicas can serve reads (tolerating brief staleness).
  • Strong-consistency workloads: set no to avoid returning stale data (e.g. financial transactions).

III. Read-only replicas (replica-read-only)

replica-read-only yes

Purpose

  • Forces replicas into read-only mode (on by default), preventing accidental writes (e.g. a client mistakenly connected to a replica running write commands).
  • Exceptions: administrative commands such as replicaof, auth and shutdown still work.

Why it matters

  • Prevents replica data from being modified out from under the master, keeping the copies consistent.
  • Keep yes in production; development or test setups may disable it if needed.

IV. Full resynchronization strategy (repl-diskless-sync and repl-diskless-sync-delay)

repl-diskless-sync no
repl-diskless-sync-delay 5

1. repl-diskless-sync

  • Purpose: whether full resynchronization uses diskless replication (the RDB is streamed over the network, never written to disk).
    • yes: the master writes the RDB straight to the replica sockets; good for high-bandwidth networks with slow disks (e.g. when SSDs are under pressure).
    • no (default): the master writes the RDB to disk first, then sends the file; good for low-bandwidth networks with fast disks.

2. repl-diskless-sync-delay

  • Purpose: with diskless replication, how many seconds the master waits for more replicas to connect so a single transfer can serve them all.
    • Example: with 5 seconds, replicas connecting within that window share one transfer (saving CPU).
    • 0 means start immediately, without waiting.

V. Replication heartbeat and timeout (repl-ping-replica-period and repl-timeout)

# repl-ping-replica-period 10
# repl-timeout 60

1. repl-ping-replica-period

  • Purpose: the interval (seconds) at which replicas PING the master, 10 by default.
  • Why: the master uses the heartbeat to judge replica liveness, and replicas confirm the link is healthy.

2. repl-timeout

  • Purpose: the replication timeout (seconds), covering:
    • how long the master waits for replica ACKs;
    • how long replicas wait for data from the master.
  • Recommended: it must exceed repl-ping-replica-period; the default is 60 seconds.

VI. Replication backlog (repl-backlog-size and repl-backlog-ttl)

# repl-backlog-size 1mb
# repl-backlog-ttl 3600

1. repl-backlog-size

  • Purpose: the size of the replication backlog the master keeps (a circular buffer) to support partial resynchronization (Partial Resync).
    • If a replica reconnects after a short disconnection (within the buffer's capacity), the master sends only the missing data, avoiding a full resync.
  • Sizing guideline:
    • backlog size = average master write rate (bytes/second) × the longest disconnection you want to tolerate (seconds).
    • Example: 1 MB/s of writes and a 60-second tolerance call for 60mb.

2. repl-backlog-ttl

  • Purpose: how long (seconds) the backlog survives once no replicas are connected.
    • 0 means never free it (a node that may later be promoted to master should keep 0 so it can still serve partial resyncs after promotion).
    • The default is 3600 seconds (1 hour); with no replicas for that long, the buffer is released.

VII. Replica priority (replica-priority)

replica-priority 100

Purpose

  • Used by Redis Sentinel (and cluster failover) to rank replicas for promotion; lower values mean higher priority.
  • Special values:
    • 0: the replica is never promoted by Sentinel (e.g. a dedicated read-only replica).
    • 100 (default): participates in elections normally.

VIII. Write protection (min-replicas-to-write and min-replicas-max-lag)

# min-replicas-to-write 3
# min-replicas-max-lag 10

Purpose

  • The master refuses writes when:

    1. fewer than min-replicas-to-write replicas are connected (0 disables the check), or
    2. every connected replica lags by more than min-replicas-max-lag seconds (0 disables the check).
  • Example:

    min-replicas-to-write 2
    min-replicas-max-lag 5

    means the master only accepts writes while at least 2 replicas are connected with lag ≤ 5 seconds.

  • Why: avoid accepting writes when there are no healthy replicas, reducing the risk of data loss (e.g. the master dies with no usable replica).

IX. Address announcement (replica-announce-ip and replica-announce-port)

# replica-announce-ip 5.5.5.5  
# replica-announce-port 1234  

Purpose

  • Lets a replica behind NAT or port forwarding announce the IP and port it is actually reachable on.
  • Scenarios:
    • The master reaches the replica over an internal IP, but the replica must expose a public IP to Sentinel or clients.
    • Once configured, INFO replication and the ROLE command report the announced IP and port.

X. Replication best practices

1. Baseline production configuration

replicaof 192.168.1.100 6379  # point at the master
masterauth "master-password"    # master password
replica-serve-stale-data yes    # allow stale reads (keeps reads available)
replica-read-only yes           # enforce read-only
repl-backlog-size 512mb         # backlog size (size to your write rate)
repl-backlog-ttl 0              # never free the backlog (if this node may become a master)
min-replicas-to-write 1         # master accepts writes only with at least 1 replica online
min-replicas-max-lag 10         # ... whose lag is at most 10 seconds

2. Read/write splitting

  • Route reads to replicas and writes to the master through a proxy (e.g. Redis Proxy, Twemproxy).
  • Configure replicas with replica-serve-stale-data yes, and expose public addresses via replica-announce-ip where needed.

3. High-availability combinations

  • Pair replication with Redis Sentinel or Redis Cluster for automatic failover.
  • Use min-replicas-to-write on the master to avoid unsafe single-node writes, and replica-priority on replicas to steer election order.

XI. Common problems and troubleshooting

  1. Replica cannot connect to the master
    • Check the replicaof IP/port, and that the master allows the replica in (firewall, bind settings).
    • Check the master password (masterauth) against the master's requirepass.
  2. Full resynchronization keeps happening
    • Increase repl-backlog-size so the backlog covers writes made during disconnections.
    • Check network stability to stop replicas from flapping.
  3. Replica lag is too high
    • Check the master's logs for CPU or memory pressure slowing replication.
    • Check the replica's disk I/O (a full resync writes an RDB).
  4. Sentinel cannot discover replicas
    • Configure replica-announce-ip on NATed replicas, or make sure the master can reach replicas over the internal network.

Well-tuned replication parameters give you a reliable set of Redis data copies; combined with persistence (RDB/AOF) they further improve data safety.

The SECURITY section

This is Redis's security configuration: access control, restricting dangerous commands, and so on. Each directive is explained below.

I. Password authentication (requirepass)

# requirepass foobared

Purpose

  • Sets the server password; clients must authenticate with AUTH <password> before running other commands.
  • Before authentication: essentially every command other than AUTH is rejected with a NOAUTH Authentication required error.

Recommendations

  • Mandatory in production: avoid unauthorized access (the directive is commented out, i.e. no password, by default).

  • Password strength: use a complex password (12+ characters mixing case, digits and symbols). Redis is fast enough that an attacker can try on the order of 150k passwords per second, so weak passwords fall quickly to brute force.

  • Example:

    requirepass "strong-Redis-password!123"

Security caveats

  • The password can leak through the config file, client command lines and process listings (e.g. ps aux | grep redis shows clients started with -a), so do not reuse a sensitive password here.
  • On cloud servers, additionally restrict the Redis port to trusted IPs with a firewall (iptables, security groups).

II. Command renaming (rename-command)

# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
# rename-command CONFIG ""

Purpose

  • Renames dangerous commands (CONFIG, DEBUG, KEYS, ...) to reduce the risk of unauthorized users running them.
  • Two modes:
    1. Rename to a random string: makes the command hard to guess (e.g. rename CONFIG to a random hash).
    2. Disable the command: rename it to the empty string (""), after which calling it returns an unknown command error.

Typical uses

| Dangerous command | Suggested rename (example) | Notes |
|---|---|---|
| CONFIG | rename-command CONFIG "" | Disable config changes (a must in production) |
| DEBUG | rename-command DEBUG "" | Disable debug commands (e.g. DEBUG SEGFAULT) |
| KEYS | rename-command KEYS k_list | Rename to a hard-to-guess alias |
| FLUSHALL | rename-command FLUSHALL "" | Disable wiping all databases |

Caveats

  • Replication consistency: if the master renames a command, replicas must apply the same renames, or replication breaks (command names travel in the AOF and the replication stream).
  • Client compatibility: clients must call the new name explicitly (e.g. CONFIG GET becomes b840fc... GET).
  • Avoid collisions: a renamed command must not clash with an existing one (renaming CONFIG to SET would shadow the real SET).

III. Security best practices

1. Minimal production configuration

requirepass "complex-password-ideally-random"  # a password is mandatory
rename-command CONFIG ""                  # disable CONFIG
rename-command DEBUG ""                   # disable DEBUG
rename-command KEYS "kscan"               # rename KEYS to a safer alias (if you need it at all)

2. Restrict access sources

  • With the system firewall:

    # Allow only the 192.168.1.0/24 subnet to reach the Redis port (6379)
    iptables -A INPUT -p tcp --dport 6379 -s 192.168.1.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 6379 -j DROP
  • Cloud security groups: in the provider's console, restrict the Redis port to internal network IPs.

3. Keep Redis off the public internet

  • Restrict the listening addresses with bind in redis.conf (local and internal IPs only):

    bind 127.0.0.1 192.168.1.100  # listen on loopback and the internal IP only; no public access

4. Use TLS/SSL encryption

  • If Redis must be reachable over the public internet, encrypt connections with TLS/SSL (via a third-party tool such as stunnel, or Redis's own TLS support where the build enables it).

IV. Common risks and countermeasures

  1. Unauthorized access and data leaks
    • Risk: an unprotected, internet-exposed Redis lets attackers read data and issue malicious writes (e.g. planting cron-job backdoors).
    • Countermeasures:
      • Enable requirepass authentication.
      • Keep Redis off public IPs (connect via internal networks or VPN).
  2. Abuse of dangerous commands
    • Risk: unauthorized CONFIG, FLUSHALL and similar commands can corrupt configuration or destroy data.
    • Countermeasures:
      • Rename or disable dangerous commands (e.g. rename-command CONFIG "").
      • Apply least privilege; allow only the commands that are actually needed.
  3. Password brute forcing
    • Risk: a weak password is cracked and the attacker takes control.
    • Countermeasures:
      • Use a strong password (a random string of 16+ characters).
      • Pair with tools such as fail2ban to throttle failed logins on the Redis port.

V. Verifying the configuration

  1. Password checks

    # Command without authentication
    redis-cli ping  # should return a NOAUTH error
    
    # Command after authentication
    redis-cli -a "your-password" ping  # should return PONG
  2. Rename checks

    # Assuming CONFIG was renamed to "myconfig"
    redis-cli -a "password" myconfig GET requirepass  # works
    redis-cli -a "password" CONFIG GET requirepass     # should return an unknown command error
  3. Network restriction checks

    • Connecting from a public IP should fail (test with telnet <public-ip> 6379).
    • Only the allowed internal IPs should connect and authenticate successfully.

These measures sharply reduce the attack surface and the risk of operator error. In production, combine them with network-level protection and regular log audits for a complete security posture.

The CLIENTS section

This is Redis's client connection configuration, which caps the number of clients connected at once. The directive is explained below.

I. Core directive: maxclients

maxclients 10000

Purpose

  • Sets the maximum number of simultaneous client connections (including replication and Pub/Sub connections).
  • Once the cap is reached, new connections are refused with: ERR max number of clients reached.

Default

  • 10000, but the usable number can be lower due to system limits (file descriptors).
    • Redis reserves 32 file descriptors for internal use, so the effective maximum is the smaller of maxclients and the process's file descriptor limit minus 32.

How it works

  • Redis manages every client connection through a file descriptor, one per connection.
  • Raising maxclients therefore also requires raising the OS file descriptor limit (e.g. ulimit -n), or the setting has no effect.

II. System file descriptor limits

1. Check the current limit

# File descriptor limit of the running Redis process
cat /proc/$(pidof redis-server)/limits | grep "Open Files"

2. Raise the limit

  • Temporarily (current session only):

    ulimit -n 65536  # set this shell's file descriptor limit to 65536
  • Permanently (for the Redis service):

    • Add to /etc/security/limits.conf:

    redis   soft    nofile   65536
    redis   hard    nofile   65536

    (redis is the user running Redis; adjust to your setup)

    • Restart the Redis service for the change to take effect.

III. Scenarios and recommendations

1. Production settings

  • High-concurrency scenarios (e.g. a cache tier):

    • If Redis serves many clients (e.g. many nodes of a distributed system), set maxclients to 50000-100000 and raise the file descriptor limit to match.

    • Example:

    maxclients 50000
  • Ordinary scenarios:

    • The default of 10000 is usually plenty for small and medium applications.

2. Memory versus connection count

  • Each client connection costs some memory (roughly 8-16 KB), so size against the instance's memory:

    max connections ≈ (total memory - data memory - system overhead) / memory per connection

    For example, with 8 GB total, 6 GB of data and about 10 KB per connection, the remaining 2 GB allows roughly (2 × 1024 × 1024 KB) / 10 KB ≈ 209,715 connections.

3. Connection reuse

  • Have clients use connection pools (Jedis Pool, Redisson, ...) to avoid the cost of constant connect/disconnect cycles and to stay under the maxclients cap.

IV. Monitoring and tuning

1. Check the current connection count

  • Use INFO clients for live connection information:

    redis-cli info clients | grep "connected_clients"
    • connected_clients: currently active clients.
    • rejected_connections (reported under INFO stats): connections refused because maxclients was reached.

2. Tuning strategy

  • If rejected_connections keeps growing:
    • Raise maxclients together with the file descriptor limit.
    • Improve client-side connection usage (fewer idle connections, pooling).
  • If memory rules out more connections:
    • Add memory or split into a cluster.
    • Reap idle connections (via the timeout directive; see the NETWORK section).

V. Caveats

  1. Operating system limits
    • Linux defaults to a low file descriptor limit (often 1024); give the Redis process a higher limit or maxclients cannot take effect.
  2. Replication and Pub/Sub
    • Replication links and publish/subscribe connections count toward maxclients too, so leave headroom for them.
  3. Cloud servers
    • Some providers cap file descriptors or connections per instance; consult the provider's documentation.

With maxclients sized properly and system limits raised to match, Redis can handle high-concurrency client loads without refusing service at the connection level.

The MEMORY MANAGEMENT section

This is Redis's memory management configuration: the memory ceiling, eviction policies and related tuning knobs. Each directive is explained below.

I. Memory ceiling (maxmemory)

# maxmemory <bytes>

Purpose

  • Caps the memory the instance may use (in bytes). At the limit, Redis either evicts data according to maxmemory-policy or rejects writes.
  • Use cases:
    • Bounding memory when Redis is a cache (e.g. ~80% of physical memory).
    • Preventing Redis from driving the host into OOM (Out of Memory).

Recommendations

  • Production:

    maxmemory 4gb  # around 80% of instance memory, leaving room for the OS and Redis internals
  • Sizing:

    • On a dedicated 8 GB server, about 6gb (8 GB × 0.75) is reasonable.
    • Never set it to 100% of RAM; fragmentation or traffic spikes could still trigger OOM.
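Both the ceiling and the policy below can be changed at runtime, which is convenient for experiments:

    redis-cli config set maxmemory 4gb
    redis-cli config set maxmemory-policy allkeys-lru
    redis-cli config get maxmemory           # confirm: the value comes back in bytes
    redis-cli config rewrite                 # optionally persist runtime changes to redis.conf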

II. Eviction policy (maxmemory-policy)

# maxmemory-policy noeviction

Purpose

  • Chooses how data is evicted once maxmemory is hit.

  • Available policies:

    | Policy | Description |
    |---|---|
    | volatile-lru | Approximate LRU (least recently used) among keys with an expiry. |
    | allkeys-lru | Approximate LRU across all keys (recommended for caches). |
    | volatile-lfu | Approximate LFU (least frequently used) among keys with an expiry. |
    | allkeys-lfu | Approximate LFU across all keys (good when access frequencies differ sharply). |
    | volatile-random | Random eviction among keys with an expiry. |
    | allkeys-random | Random eviction among all keys. |
    | volatile-ttl | Evict the keys with the shortest remaining TTL first. |
    | noeviction (default) | Evict nothing; reject all writes (reads still work). |

Choosing a policy

| Scenario | Recommended policy | Notes |
|---|---|---|
| General caching | allkeys-lru | Fits most workloads; drops the keys unused the longest. |
| Cache with expiring keys | volatile-lru | Evicts only keys with a TTL, preserving the rest (when expired data must be controlled strictly). |
| Skewed access frequency | allkeys-lfu | Drops rarely used keys first; better than LRU for hot-key workloads. |
| Durable store (no eviction) | noeviction | Prevents data loss; the application must handle write failures (e.g. queue and retry). |

III. LRU/LFU sampling (maxmemory-samples)

# maxmemory-samples 5

Purpose

  • Sets how many keys the approximate LRU/LFU algorithms sample per eviction decision.
  • Effect of the value:
    • Larger (e.g. 10): closer to true LRU/LFU, more precise eviction, more CPU.
    • Smaller (e.g. 3): faster but less precise eviction.
  • Default: 5, a balance of speed and accuracy that suits most workloads.

IV. Replica memory policy (replica-ignore-maxmemory)

# replica-ignore-maxmemory yes

Purpose

  • Controls whether replicas ignore maxmemory:
    • yes (default): replicas do not evict on their own; they only apply the DEL commands the master sends after it evicts.
    • no: replicas enforce maxmemory independently (only for special setups; master/replica consistency must be guaranteed).

Caveats

  • Replicas can use more memory than the master (replication buffers, fragmentation), so make sure their real usage stays within physical limits.
  • If a replica is writable (replica-read-only no), set maxmemory carefully to avoid divergence from the master.

V. Memory management best practices

1. Baseline production configuration

maxmemory 8gb           # ~80% of instance memory
maxmemory-policy allkeys-lru  # LRU eviction
maxmemory-samples 10     # more accurate LRU (if CPU allows)
replica-ignore-maxmemory yes  # replicas follow the master's evictions (default)

2. Fragmentation control

  • Redis 4.0+ can actively defragment memory when active defragmentation is enabled (the directive is activedefrag; it requires a build with the feature compiled in):

    activedefrag yes  # enable active memory defragmentation (Redis 4.0+)

3. Monitoring and tuning

  • Use INFO memory for memory metrics:

    redis-cli info memory | grep -E "used_memory|mem_fragmentation_ratio"
    • used_memory: memory actually used (bytes).
    • mem_fragmentation_ratio: fragmentation ratio (1.0-1.5 is healthy; above 2 needs attention).
  • If fragmentation is high, a restart compacts memory, or reconsider maxmemory-policy to reduce delete churn.

VI. Common problems and troubleshooting

  1. Writes keep failing (maxmemory-policy is noeviction)
    • Check whether used_memory has reached maxmemory.
    • Check the policy is appropriate (volatile-lru needs keys with TTLs to evict).
    • Raise maxmemory or switch to a more aggressive policy (e.g. allkeys-lru).
  2. Replica memory usage too high
    • Make sure the master's eviction policy works, so replicas do not accumulate data.
    • For read-only replicas, leave headroom on the master's maxmemory for replication buffers.
  3. LRU evicting the wrong keys
    • Raise maxmemory-samples (say from 5 to 10) for more accurate eviction.
    • Switch to LFU (allkeys-lfu) to account for access frequency.

VII. Eviction policy comparison

| Policy | Best for |
|---|---|
| allkeys-lru | General caching with no clear access pattern |
| allkeys-lfu | A clear hot set; rarely used data evicted first (very high memory utilization) |
| volatile-ttl | TTL-driven caches (e.g. time-limited campaigns) |
| noeviction | Durable storage where data loss is not allowed |

Sound memory management keeps Redis efficient within a fixed budget, balancing durability, access performance and resource use. Choose the eviction policy around the workload and monitor memory usage regularly.

The LAZY FREEING section

These directives control Redis's lazy freeing mechanism, which releases memory in a non-blocking way so that deleting a huge object does not stall the server. Each item is explained below.

1. How Redis deletes keys

Redis deletes keys in two ways:

  • Blocking delete (DEL): the server reclaims all memory associated with the object synchronously, processing no new commands meanwhile. For small objects this takes about as long as any other O(1) or O(log N) command; but for an aggregate value holding millions of elements, the server can block for a long time (even seconds).
  • Non-blocking delete (UNLINK): returns immediately while a background thread incrementally frees the object. FLUSHALL and FLUSHDB also take an ASYNC option for non-blocking database flushes.
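A quick comparison of the two styles (mybiglist is a made-up key name):

    redis-cli del mybiglist        # synchronous: blocks until all memory is reclaimed
    redis-cli unlink mybiglist     # returns at once; a background thread frees the value
    redis-cli flushdb async        # non-blocking flush of the current database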

2. When Redis deletes keys on its own

Redis deletes keys automatically in these situations:

  • Eviction: to make room for new data under the maxmemory / maxmemory-policy limits.
  • Expiry: a key with a TTL (set via EXPIRE and friends) reaches its deadline and must be removed from memory.
  • Command side effects: some commands delete existing keys while storing data. RENAME deletes the destination key's previous content; SUNIONSTORE and SORT with STORE may overwrite (hence delete) an existing key; SET removes the key's old value before storing the new string.
  • Replication: during a full resynchronization the replica flushes its whole dataset to load the freshly transferred RDB file.

3. The lazy-freeing directives

lazyfree-lazy-eviction

lazyfree-lazy-eviction no
  • Purpose: whether memory freed by eviction is released non-blockingly. With yes, eviction-driven deletes behave like UNLINK; with no, like DEL.
  • Advice: with large objects and tight latency requirements, set yes to avoid eviction stalls; with small objects or stricter consistency expectations, the default no is fine.

lazyfree-lazy-expire

lazyfree-lazy-expire no
  • Purpose: whether expired keys are freed non-blockingly. yes frees them in the background; no frees them synchronously.
  • Advice: set yes when many keys carry TTLs and expiry churn hurts performance; otherwise keep no.

lazyfree-lazy-server-del

lazyfree-lazy-server-del no
  • Purpose: whether keys deleted as a side effect of commands (RENAME, SET, ...) are freed non-blockingly. yes means background freeing; no means synchronous.
  • Advice: set yes if such commands frequently replace large objects; otherwise keep no.

replica-lazy-flush

replica-lazy-flush no
  • Purpose: whether a replica flushes its dataset non-blockingly during a full resynchronization. yes flushes in the background; no blocks.
  • Advice: set yes for replicas with large datasets, where a blocking flush would stall the sync; small datasets can keep no.

4. Suggested combinations

  • High concurrency with large objects:
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
replica-lazy-flush yes

This minimizes delete-induced blocking and keeps response times steady.

  • Small objects, consistency-sensitive:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

This keeps memory operations synchronous and predictable, at the risk of stalls when large objects are deleted.

5. Caveats

  • Lazy freeing avoids blocking the main thread but shifts work to background threads. If they fall behind, memory usage can keep climbing. After enabling these options, monitor memory usage and background-thread throughput closely.
  • Deferred freeing introduces some delay; use it cautiously where data consistency and real-time behavior are critical.

The APPEND ONLY MODE section

This is Redis's AOF (Append Only File) persistence configuration, which appends every write operation as a log entry, giving stronger durability than RDB. Each directive is explained below.

I. AOF switch (appendonly)

appendonly no

Purpose

  • Enables or disables AOF persistence.
    • yes: every write command is appended to the AOF file.
    • no (default): only RDB persistence is used (weaker safety; writes since the last snapshot can be lost).

When to use

  • High data safety (finance, real-time data): set yes, typically with appendfsync everysec to balance performance and safety.
  • Pure cache (loss acceptable): keep no, relying on RDB or no persistence at all.

II. AOF file name (appendfilename)

appendfilename "appendonly.aof"

Purpose

  • The AOF file's name (the file is created inside the dir directory).
  • Tip: in multi-instance deployments, name it per instance (e.g. appendonly-6379.aof) to tell the files apart.

III. Disk sync policy (appendfsync)

# appendfsync always
appendfsync everysec
# appendfsync no

Purpose

  • Controls how often the AOF is synced to disk, trading data safety against performance.

  • Options:

    | Policy | Description | Safety | Performance impact |
    |---|---|---|---|
    | always | fsync after every write command; safest (at most one command lost). | High | Severe |
    | everysec (default) | fsync once per second; at most one second of writes lost on a crash. | Medium-high | Slight |
    | no | Let the OS decide when to flush (kernel buffer policy); best performance. | Low | Minimal |

Recommendations

  • Production: prefer everysec for the best safety/performance balance.
  • Performance-first (loss acceptable, e.g. caching): no.
  • Financial-grade: always (ideally with fast storage to absorb the cost).

IV. fsync during rewrites (no-appendfsync-on-rewrite)

no-appendfsync-on-rewrite no

Purpose

  • Whether the main process pauses its fsync calls while an AOF rewrite (BGREWRITEAOF) is running:

    • yes: no fsync during the rewrite; entries sit in the in-memory buffer, and up to roughly 30 seconds of data can be lost (for latency-sensitive workloads).
    • no (default): keep fsyncing during the rewrite; disk contention may block the main process, but safety is higher.

Caveats

  • Enabling it avoids main-process stalls during rewrites at the cost of durability.
  • Only worth enabling when rewrites are frequent and latency matters more than the potential loss window.

V. Automatic AOF rewrite (auto-aof-rewrite-percentage and auto-aof-rewrite-min-size)

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

Purpose

  • auto-aof-rewrite-percentage: the growth percentage that triggers an automatic rewrite.

    • Example: at 100, a rewrite starts once the AOF has grown 100% (doubled) since the last rewrite.
  • auto-aof-rewrite-min-size: the minimum AOF size; below it, no rewrite happens even if the growth percentage is exceeded.

    • Example: 64mb avoids rewriting tiny files over and over.

How they combine

  • A rewrite triggers when current AOF size > size after last rewrite × (1 + percentage/100) and current AOF size > minimum size.

  • Recommendations:

    • Keep the defaults (100% growth, 64 MB minimum) in production.
    • If the AOF grows slowly, raise the percentage (e.g. 200) to rewrite less often.

VI. Truncated AOF handling (aof-load-truncated)

aof-load-truncated yes

Purpose

  • Controls startup behavior when the AOF's tail was truncated by a crash:
    • yes (default): ignore the truncated tail, load the valid prefix, start up, and log a warning.
    • no: refuse to start; the file must be repaired manually with redis-check-aof --fix.

Recommendation

  • Keep yes in production so a mildly damaged file does not keep the service down.
  • Periodically run redis-check-aof against the file to confirm its integrity.

VII. Rewrite optimization (aof-use-rdb-preamble)

aof-use-rdb-preamble yes

Purpose

  • Whether an AOF rewrite writes an RDB-format preamble (header) at the start of the file:
    • yes (default): the rewritten AOF consists of RDB data + incremental AOF log; loading parses the RDB part first (much faster).
    • no: pure AOF log, parsed entry by entry (only for special format requirements).

Benefits

  • Smaller AOF files and faster loading, especially for large datasets.
  • Reuses the RDB loading path, making memory recovery more efficient.

VIII. Using AOF and RDB together

Example configuration

appendonly yes            # enable AOF
appendfsync everysec      # sync once per second
auto-aof-rewrite-percentage 100  # rewrite after 100% growth
auto-aof-rewrite-min-size 64mb    # but only above 64 MB
aof-use-rdb-preamble yes  # include the RDB preamble on rewrite

How it plays out

  1. Normal writes: commands go to the AOF buffer first and are synced per the appendfsync policy.
  2. Rewrite: once the AOF outgrows the thresholds, BGREWRITEAOF produces a compact new file (with the RDB preamble) in the background.
  3. Recovery: the AOF is loaded preferentially; if a preamble is present, the RDB part is parsed, then the incremental log replayed.
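You can exercise the rewrite path manually and watch its progress via INFO persistence (the file path below assumes dir /var/lib/redis):

    redis-cli bgrewriteaof                                     # request a background AOF rewrite
    redis-cli info persistence | grep aof_rewrite_in_progress  # 1 while the child is running
    ls -lh /var/lib/redis/appendonly.aof                       # compare the size before and after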

IX. Common problems and troubleshooting

  1. AOF file too large

    • Check that auto-aof-rewrite-percentage and auto-aof-rewrite-min-size are sensible, and trigger BGREWRITEAOF manually if needed.
    • Review the write pattern; avoid churn such as rewriting the same key constantly.
  2. Main process blocked on fsync

    • Check the logs around Background AOF rewrite started for disk I/O saturation, and consider no-appendfsync-on-rewrite yes.
  3. AOF fails to load

    • With aof-load-truncated no, repair the file:

      redis-check-aof --fix /path/to/appendonly.aof

X. AOF versus RDB

| Property | AOF | RDB |
|---|---|---|
| Data safety | High (always or everysec) | Medium (snapshot interval; minutes of data may be lost) |
| File size | Larger (full operation log) | Smaller (compressed binary) |
| Recovery speed | Slower (replays the log) | Faster (loads binary directly) |
| Best for | High durability needs (database-like use) | Caching and backups (e.g. disaster recovery) |

Configured well, AOF strikes a good balance between durability and performance. In production, enable AOF alongside RDB (appendonly yes plus periodic snapshots) for double protection.

The LUA SCRIPTING section

This configures Lua script execution, capping how long a script may run so that long-running scripts do not block the main thread. The directive is explained below.

I. Core directive: lua-time-limit

lua-time-limit 5000

Purpose

  • The maximum execution time (milliseconds) for a Lua script.
  • When a script exceeds the limit:
    1. Redis logs a warning that the script is still running past the allowed time.
    2. New commands are refused (except SCRIPT KILL and SHUTDOWN NOSAVE), so the main thread cannot be held hostage silently.

Default

  • 5000 ms (5 seconds), which suits most workloads while protecting stability.

Why it exists

  • Redis runs commands, Lua scripts included, on a single thread; a long script blocks every other request, so lua-time-limit bounds the damage.
  • 0 or a negative value disables the limit (not recommended in production; the service could become unavailable).

II. What happens after a timeout

1. Commands still allowed

  • Only two commands work once the limit is exceeded:
    • SCRIPT KILL: terminates a script that has not yet performed writes (if it has, the kill fails with an UNKILLABLE error).
    • SHUTDOWN NOSAVE: force-stops the server without saving (for emergencies).

2. Script types and kill limits

  • Read-only scripts: no write commands executed yet; SCRIPT KILL can stop them.
  • Writing scripts: once a script has run a write (SET, DEL, ...), SCRIPT KILL cannot stop it; you can only wait for it to finish or use SHUTDOWN NOSAVE.

III. Scenarios and tuning

1. Production settings

  • Keep the default (5 seconds): it fits most workloads, neither killing legitimate scripts nor letting runaways block for long.

  • Special cases:

    • For genuinely heavy scripts (e.g. batch-processing millions of items), raise the limit (say 10000 ms), but monitor actual execution times to protect performance.
    • For maximum stability, lower it (say 2000 ms) to force scripts to stay cheap.

2. Script optimization

  • Avoid heavy logic: split large jobs into small batches, or replace the script with native commands (e.g. SCAN plus pipelining).

  • Use EVALSHA: cache the script by hash to avoid recompiling it on every call.

  • Watch script execution times: slow scripts show up in the slow query log (see the SLOW LOG section); inspect them with:

    redis-cli slowlog get 10

IV. Caveats

  1. The thread is still busy

    • Even with lua-time-limit set, a timed-out script keeps occupying the thread until it is killed or finishes, so control script complexity at the source.
  2. Combine with the slow log

    • Enable the slow log (slowlog-log-slower-than) to capture scripts exceeding a threshold:

      slowlog-log-slower-than 1000  # log commands (Lua scripts included) slower than 1 millisecond
  3. Distributed limits

    • A Lua script runs on a single Redis node and cannot span instances. For distributed logic, use Redis Cluster or an external coordinator (e.g. ZooKeeper).

V. Typical errors and responses

1. A timed-out script makes the server unavailable

The log shows a warning along the lines of: Lua script took too long to execute (5001 ms > 5000 ms)
  • Steps:

    1. Try redis-cli script kill (works while the script is still read-only).
    2. If the script has already written, redis-cli shutdown nosave and restart (make sure persistence can recover the data).
    3. Analyze the script and optimize or split the expensive part.

2. SCRIPT KILL fails

redis-cli script kill
(error) UNKILLABLE Sorry the script already executed write commands
  • Cause: the script has performed writes and cannot be stopped safely.

  • Options:

    • Wait for it to finish naturally (watch memory and CPU).
    • In an emergency, SHUTDOWN NOSAVE and restart, recovering from AOF/RDB (unsynced writes may be lost).

VI. Summary

lua-time-limit is Redis's guard against script-induced blocking; set it to balance flexibility against stability. In production:

  1. Keep expensive work (large scans, heavy computation) out of Lua scripts.
  2. Use monitoring and the slow log to catch problem scripts early.
  3. Prefer native commands; reserve Lua for cases that truly need atomicity.

These habits keep Lua scripting from becoming a Redis bottleneck.

REDIS CLUSTER

This is the Redis Cluster configuration, which joins multiple nodes into a distributed cluster with data sharding, high availability and horizontal scaling. Each directive is explained below.

I. Cluster switch (cluster-enabled)

# cluster-enabled yes

Purpose

  • Enables Redis Cluster mode.
    • yes: the node runs as a cluster node, speaking the cluster protocol and sharding data.
    • no (default): the node runs standalone and cannot join a cluster.

Requirements

  • Every node in the cluster must enable this, and nodes must be mutually reachable by IP and port.
  • Give each node its own cluster-config-file and port (7000, 7001, ...) in production.

II. Cluster config file (cluster-config-file)

# cluster-config-file nodes-6379.conf

Purpose

  • Names the node's cluster state file (generated and maintained by Redis; never edit it by hand).

  • Requirements:

    • The name must be unique per node (conventionally including the port, e.g. nodes-7000.conf).
    • Its location (the dir directory, e.g. /var/lib/redis) must be writable.

III. Node timeout (cluster-node-timeout)

# cluster-node-timeout 15000

Purpose

  • The node timeout (milliseconds) used to decide a node has failed:
    • A node unresponsive to heartbeats or commands for cluster-node-timeout is flagged as possibly failing (PFAIL).
    • Once enough nodes agree, it is marked failed (FAIL) and failover begins.

Default

  • 15000 ms (15 seconds), fine for most networks.

  • Tuning:

    • High-latency networks: raise it (e.g. 30000 ms).
    • Low-latency networks: lower it (e.g. 5000 ms) for faster failure response.

IV. Replica validity factor (cluster-replica-validity-factor)

# cluster-replica-validity-factor 10

Purpose

  • Decides whether a replica's data is fresh enough to qualify for failover:
    • A replica whose last interaction with its master is older than (node-timeout × factor) + repl-ping-replica-period is considered too stale to run for election.
    • factor=0: any replica may fail over, however stale (maximum availability, at the price of possible inconsistency).
    • factor>0 (default 10): replicas must have talked to the master recently enough, protecting post-failover consistency.

V. Migration barrier (cluster-migration-barrier)

# cluster-migration-barrier 1

Purpose

  • Governs when a replica may migrate to an orphaned master (a master left with no working replicas):
    • A replica migrates only if its original master keeps at least cluster-migration-barrier other replicas.
    • barrier=1 (default): the old master must retain at least one replica, so it is never left unprotected.
    • barrier=0: unconditional migration (dangerous; the old master may lose all backups).

Scenarios

  • One master, several replicas: set it to replica count - 1 (e.g. 2 with 3 replicas) so at least one stays behind.
  • Forbid migration: set a large value (e.g. 100), useful in cross-datacenter clusters to stop replicas drifting to masters in other centers.

VI. Full slot coverage (cluster-require-full-coverage)

# cluster-require-full-coverage yes

Purpose

  • Decides how the cluster behaves when some hash slots are uncovered:
    • yes (default): nodes refuse all requests while any slot is uncovered, keeping the cluster's view consistent.
    • no: nodes keep serving the slots they do cover, for partial-availability setups (e.g. read/write splitting).

Recommendation

  • Keep yes to preserve cluster integrity and avoid inconsistency.
  • Use no only in special situations such as disaster recovery.

VII. Disabling replica failover (cluster-replica-no-failover)

# cluster-replica-no-failover no

Purpose

  • Stops a replica from starting failover on its own (manual triggering only):
    • yes: the replica never auto-promotes; promotion requires a manual cluster failover command.
    • no (default): replicas may elect themselves when their master fails.

Scenarios

  • Cross-datacenter clusters: set yes so a replica in another datacenter never promotes automatically (a human confirms the failure first).
  • Test environments: temporarily disable auto-failover to simulate failure scenarios.

VIII. Cluster best practices

1. Base configuration

cluster-enabled yes            # enable cluster mode
cluster-config-file nodes.conf # state file (auto-generated; include the port in the name)
cluster-node-timeout 5000      # 5-second timeout, good for low-latency networks
cluster-replica-validity-factor 5 # shorter replica validity (e.g. 5 × 5s + 10s = 35s)
cluster-migration-barrier 2    # with 3 replicas, 1 may migrate and 2 stay
cluster-require-full-coverage yes # require full slot coverage (default)

2. High-availability layout

  • Node placement: give every master at least one replica, on different physical machines or racks.
  • Network: keep inter-node latency below 1 ms, and open the cluster bus port (client port + 10000, e.g. 16379) in firewalls.
  • Monitoring: run redis-cli --cluster check regularly, and track node health with Prometheus + Grafana.

IX. Building a cluster (example)

  1. Configure each node:

    • Edit redis.conf to enable cluster mode, one unique port per node (7000, 7001, 7002).
    port 7000
    cluster-enabled yes
    cluster-config-file nodes-7000.conf
  2. Start the nodes:

    redis-server /etc/redis/7000.conf
    redis-server /etc/redis/7001.conf
    redis-server /etc/redis/7002.conf
  3. Create the cluster:

    redis-cli --cluster create 192.168.1.100:7000 192.168.1.100:7001 192.168.1.100:7002 --cluster-replicas 1
    • --cluster-replicas 1: give each master one replica (assigned automatically).

X. Common problems and troubleshooting

  1. A node cannot join the cluster

    • Check cluster-enabled yes and port reachability (e.g. telnet <ip> <port>).
    • Make sure the config-file path is writable, with no permission issues.
  2. Failover does not happen

    • Check whether cluster-node-timeout is too small for replicas to respond to the election in time.
    • Check whether cluster-replica-validity-factor is excluding replicas as too stale.
  3. Abnormal cluster state (e.g. uncovered slots)

    • Inspect the slot layout with redis-cli --cluster check, then move slots manually:

      redis-cli --cluster reshard <host>:<port>

XI. Cluster versus standalone

| Property | Standalone Redis | Redis Cluster |
|---|---|---|
| Data capacity | Bounded by one node's memory | Scales horizontally, effectively unbounded |
| High availability | Needs replication + Sentinel | Built-in failover and automatic master election |
| Throughput | One thread, a clear ceiling | Nodes work in parallel; scales near-linearly |
| Complexity | Simple | Higher (sharding, slot migration, ...) |

With well-chosen cluster parameters you get a highly available, scalable distributed cache or database. Plan node counts around expected traffic, rehearse failures regularly, and monitor cluster state to keep it stable.

The CLUSTER DOCKER/NAT support section

These directives solve cluster address discovery for Redis Cluster under Docker or NAT. When nodes sit behind network address translation or port forwarding, the default automatic address detection fails, so each node must be told the public IP and ports to announce. Each directive is explained below.

I. Address announcement directives

1. cluster-announce-ip

# cluster-announce-ip 10.1.1.5
  • Purpose: the public IP the node announces to the cluster (what other nodes use to identify and connect to it).

  • Scenarios:

    • The node runs inside a Docker container whose IP the host NATs to the outside.
    • The node runs on a cloud server whose internal and public IPs differ (e.g. an AWS EC2 private IP vs. its Elastic IP).

2. cluster-announce-port

# cluster-announce-port 6379
  • Purpose: the public client port the node announces (default 6379).

  • Scenario:

    • A container or firewall forwards the client port (e.g. host port 30000 maps to container port 6379); announce the externally visible port (30000).

3. cluster-announce-bus-port

# cluster-announce-bus-port 6380
  • Purpose: the public cluster-bus port for node-to-node traffic (by default the client port + 10000, i.e. 16379).

  • Scenarios:

    • When NAT or a firewall remaps the bus port (e.g. container 16379 mapped to host 40000), announce the host's bus port (40000).
    • If unset, the bus port defaults to client port + 10000.

II. How it fits together, with examples

A typical Docker setup

Suppose a Redis node inside a Docker container looks like this:

  • Container IP: 172.17.0.2 (internal; other containers cannot reach it directly).
  • Host public IP: 192.168.1.100.
  • Host port mappings:
    • Client port: host 30000 → container 6379.
    • Bus port: host 40000 → container 16379 (6379 + 10000).

Matching configuration

cluster-enabled yes
cluster-announce-ip 192.168.1.100   # host public IP
cluster-announce-port 30000          # externally visible client port
cluster-announce-bus-port 40000      # externally visible bus port
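A sketch of how such a container might be started (the image tag, container name and config path are illustrative assumptions):

    docker run -d --name redis-node-1 \
      -p 30000:6379 -p 40000:16379 \
      -v /opt/redis/7000.conf:/usr/local/etc/redis/redis.conf \
      redis:5 redis-server /usr/local/etc/redis/redis.conf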

Cloud server behind NAT

If a cloud server has Elastic IP 52.198.10.50 but nodes would otherwise announce the internal IP 10.0.0.4:

cluster-announce-ip 52.198.10.50    # the Elastic (public) IP
cluster-announce-port 6379           # client port (keep the default if unmapped)
# with the bus port unchanged, cluster-announce-bus-port can be omitted (defaults to 16379)

III. Caveats

  1. Keep port mappings consistent
    • cluster-announce-port and cluster-announce-bus-port must match the real network mappings, or cluster nodes cannot communicate.
    • Example: if the host maps container 6379 to 30000 but cluster-announce-port still says 6379, other nodes will dial 6379 instead of 30000 and fail to connect.
  2. Firewall rules
    • Open the announced client and bus ports (or the default bus port) on the public IP, allowing two-way traffic between cluster nodes.
    • E.g. in an AWS security group, allow traffic to both ports from the other cluster nodes' public IPs.
  3. Minimum configuration
    • At minimum set cluster-announce-ip and cluster-announce-port; the bus port can rely on the default calculation (client port + 10000).
    • Only set cluster-announce-bus-port when the bus port is non-default.

IV. Verification and troubleshooting

  1. Inspect the cluster's view of the node
    Run redis-cli --cluster check <node-address> and confirm the node's IP and port are the announced public ones:

    redis-cli --cluster check 192.168.1.100:30000
    # the output's "IP:port" should be the announced public address and port
  2. Test port reachability

    • From another cluster node, check both ports on the public IP:

      telnet 192.168.1.100 30000  # client port
      telnet 192.168.1.100 40000  # bus port
  3. Read the logs
    If a node logs errors such as CLUSTERDOWN Cannot reach any node in the cluster, suspect a wrong announcement or a closed port:

    [12345] 01 Jan 00:00:00 * Node 192.168.1.101:30001 is not reachable (PING timeout)

V. Summary

Under Docker, NAT or cloud networking, cluster auto-discovery breaks because IPs and ports get translated. Announcing the public address and ports via cluster-announce-ip and cluster-announce-port lets nodes identify each other correctly and keeps the distributed system connected. Match the announcements to the real mapping rules exactly, and open the corresponding firewall ports, or the cluster may split or lose connectivity.

SLOW LOG

This is Redis's slow query log configuration, which records commands whose execution time exceeds a threshold, helping to locate performance bottlenecks. Each directive is explained below.

I. Core parameters

1. slowlog-log-slower-than

slowlog-log-slower-than 10000
  • Purpose: the slow-query threshold in microseconds; commands slower than this are recorded.

    • Unit: microseconds (1 second = 1,000,000 microseconds).
    • Special values:
      • 0: log every command (even ones taking 0 microseconds).
      • A negative value (e.g. -1): disable the slow log.
  • Default: 10000 microseconds (10 ms), i.e. commands slower than 10 ms are logged.

2. slowlog-max-len

slowlog-max-len 128
  • Purpose: the maximum number of slow-log entries kept. When a new entry is recorded at the cap, the oldest one is dropped.

  • Properties:

    • Entries are managed first-in first-out (FIFO).
    • Memory use grows with the length; each entry stores the command, its duration, client information and so on.

II. Example settings and tuning

1. A common production setup

slowlog-log-slower-than 1000  # log commands slower than 1 millisecond (1000 microseconds)
slowlog-max-len 1024         # keep the most recent 1024 entries
  • Rationale:

    • Lowering the threshold to 1 ms catches performance problems more sensitively.
    • A longer log (1024) helps analyze recurring slow queries.

2. Thresholds by scenario

| Scenario | Suggested threshold (microseconds) | Notes |
|---|---|---|
| Development/testing | 1000 (1 ms) | Strict monitoring; catch problems early. |
| Production, high concurrency | 5000 (5 ms) | Balances precision against log volume and memory. |
| Low-frequency workloads | 100000 (100 ms) | Log only clearly slow commands (e.g. big-dataset operations). |

III. Using and analyzing the slow log

1. Read the log

Fetch entries with SLOWLOG GET [n]:

redis-cli slowlog get 5  # the 5 most recent entries
  • Each entry looks roughly like:

    1) 1) (integer) 105            # unique, monotonically increasing entry id
       2) (integer) 1693456875     # UNIX timestamp (seconds) when the command ran
       3) (integer) 15000          # execution time in microseconds
       4) 1) "HGETALL"             # the command and its arguments
          2) "bighash"
       5) "127.0.0.1:52341"        # client address and port

2. Clear the log

Empty all entries with SLOWLOG RESET:

redis-cli slowlog reset

3. Analysis tools and techniques

  • Track log volume: SLOWLOG LEN reports how many entries are currently stored:

    redis-cli slowlog len
  • Count frequent slow commands: parse the log with an external tool (e.g. a small Python script) and tally the most common offenders:

    # Example: count slow-query command frequency (requires the redis-py package)
    import redis

    r = redis.Redis()
    command_counts = {}
    for entry in r.slowlog_get(128):                  # entries are dicts: id, start_time, duration, command
        cmd = entry['command'].split()[0].decode()    # first token is the command name, e.g. HGETALL
        command_counts[cmd] = command_counts.get(cmd, 0) + 1
    print(command_counts)

IV. Common causes of slow queries, and fixes

1. Expensive commands

  • Symptom: HGETALL, KEYS *, SORT and similar commands run long.

  • Fixes:

    • Avoid KEYS *; iterate in batches with SCAN instead.
    • Cache SORT results with its STORE option, or sort outside Redis.
    • Check hash size with HLEN before risking HGETALL on a huge hash.

2. Oversized data

  • Symptom: operations on lists, sets and similar structures with millions of elements (LPUSH, SADD, ...).

  • Fixes:

    • Split big collections into smaller ones (e.g. bucket by a hash of the user ID).
    • Use pipelining to cut client/server round trips.
    • Apply LFU or LRU eviction to frequently accessed big keys so they do not hog memory indefinitely.

3. Overloaded server

  • Symptom: slow queries cluster around periods of CPU utilization above 80%.

  • Fixes:

    • Look for long-running Lua scripts (bounded by lua-time-limit).
    • Add Redis nodes and spread the load across a cluster.
    • Upgrade hardware (faster CPU, SSDs).

V. Caveats

  1. The log itself has a cost
    • Logging happens on the main thread; a very large slowlog-max-len can hurt performance, especially in write-heavy workloads. Keep it within about 1000 entries in production.
  2. Sensitive data
    • Entries contain full command arguments (e.g. SET user:123 "secret"); secure wherever the log is stored or exported so sensitive data does not leak.
  3. Use alongside other monitoring
    • Correlate with Redis metrics (used_memory, rejected_connections) and OS metrics (top, iostat) for the full picture.

VI. Summary

The slow log is a key tool for Redis performance work; with the right threshold and length it pinpoints inefficient commands for targeted optimization. In production:

  1. Review the log regularly and turn findings into an optimization list (retire slow commands, restructure data).
  2. Use a low threshold (e.g. 1 ms) for business-critical commands to guarantee their performance.
  3. Alert automatically on spikes in slow-query volume.

Combined with other tuning work (memory optimization, cluster scaling), slow-log analysis markedly improves stability and response times.

LATENCY MONITOR

This is Redis's latency monitoring configuration, which samples and records latency across different kinds of operations in real time, helping isolate and analyze performance bottlenecks. The directive is explained below.

I. Core directive: latency-monitor-threshold

latency-monitor-threshold 0

Purpose

  • The latency monitor's threshold in milliseconds; only operations taking at least this long are recorded.
    • 0 (default): monitoring disabled (nothing is recorded).
    • A positive value (e.g. 1): record every operation at or above that many milliseconds (disk I/O, command execution, network handling, and so on).

How it works

  • Redis samples the duration of various internal operations (GET, SET, BGSAVE, AOF syncs, ...).
  • Anything over the threshold is filed under its event type, along with when it last happened and how the latencies were distributed.

II. Latency event types

Redis tracks latency under named events, along these lines:

| Event type | Meaning |
|---|---|
| command | Latency of command execution (excluding network I/O). |
| networking | Latency of receiving/sending data (request queuing, socket I/O). |
| aof_fsync | Latency of AOF fsync calls to disk. |
| rdb_save | Latency of RDB snapshot generation (SAVE/BGSAVE). |
| replication | Latency of master/replica data transfer (full or partial sync). |
| swap | Latency caused by OS swapping (Redis should run with swap disabled, so this rarely fires). |
| latency-monitor | Overhead of the latency monitor itself (normally negligible). |

III. Enabling and using the monitor

1. Enable at runtime (no restart)

redis-cli config set latency-monitor-threshold 1  # record every operation taking ≥ 1 ms

2. Inspect the data

  • Latest and maximum latency per event (LATENCY LATEST):

    redis-cli latency latest
    # each row gives: event name, timestamp of the latest spike,
    # its latency (ms), and the all-time maximum (ms), e.g.:
    # 1) 1) "aof_fsync" 2) (integer) 1693456875 3) (integer) 15 4) (integer) 15
  • Per-event history:

    redis-cli latency history aof_fsync  # recent (timestamp, latency) samples for aof_fsync
  • Reset the data:

    redis-cli latency reset  # clear recorded latency for all events

IV. Scenarios and threshold advice

1. When to use it in production

  • Intermittent slowness: when Redis slows down sporadically, enable monitoring with a low threshold to catch short-lived spikes.
  • Persistence impact: watch aof_fsync and rdb_save to judge whether disk I/O is the bottleneck.
  • Replication lag: watch the replication events to measure sync cost and improve the topology.

2. Threshold guidance (note the threshold is a whole number of milliseconds)

| Scenario | Threshold (ms) | Notes |
|---|---|---|
| Routine diagnosis | 1 | Catches clearly slow operations (big-key deletes, RDB saves). |
| Highly latency-sensitive (e.g. finance) | 1 | 1 ms is the finest granularity; sub-millisecond sampling is not supported. |
| Temporary troubleshooting | 1 | The lowest useful setting; 0 disables monitoring entirely rather than enabling full capture. |

V. Caveats

  1. Overhead
    • Monitoring adds a small cost (roughly 1-5% CPU); in production, enable it during investigations and keep it off (threshold 0) otherwise.
  2. Difference from the slow log
    • Slow log: per-command execution times (microsecond granularity), for command-level tuning.
    • Latency monitor: per-operation-type latency (millisecond granularity), for system-level bottlenecks (disk, network, replication).
  3. Combine with other tools
    • Cross-check against Redis metrics (INFO stats) and system tools (dstat, iotop); for example:
      • High aof_fsync latency: check disk write throughput.
      • High networking latency: check bandwidth and client connection pooling.

VI. Worked example: diagnosing AOF fsync latency

Symptom

Write performance drops suddenly, and INFO shows pending AOF fsyncs building up.

Steps

  1. Enable monitoring:

    redis-cli config set latency-monitor-threshold 1  # record operations taking ≥ 1 ms
  2. Check the aof_fsync event:

    redis-cli latency latest | grep -A3 aof_fsync
    # a maximum of e.g. 25 indicates fsync spikes of 25 ms
  3. Find the cause:

    • The disk is saturated (iotop shows the AOF disk's write queue backing up).
    • Fix: move the AOF onto an SSD, or keep appendfsync everysec (the default) to limit fsync frequency.

VII. Summary

Latency monitoring is an advanced diagnostic for systemic delays that the slow log cannot attribute. Principles for using it:

  1. Enable on demand: keep it off normally; switch it on only while investigating latency problems.
  2. Choose thresholds deliberately: low enough to catch the problem, without drowning in samples.
  3. Analyze across dimensions: combine the slow log, system monitoring and Redis metrics to locate the real root cause.

Used this way, it illuminates latency in command execution, persistence and networking, and grounds hardware and parameter decisions in data.

EVENT NOTIFICATION

This is Redis's event notification configuration, which pushes keyspace and keyevent notifications in real time over publish/subscribe (Pub/Sub). The directives are explained below.

I. Core directive

notify-keyspace-events ""

Purpose

  • Selects which event classes Redis publishes.
  • Format: a string of flag characters, each enabling one class of events (see the tables below).
  • Default: "" (event notification disabled).

II. Event classes

Redis distinguishes keyspace events from keyevent events, subdivided further by operation type:

1. Class flags

| Flag | Class | Meaning |
|---|---|---|
| K | Keyspace events | Published on __keyspace@<db>__:<key> channels; events about a key (e.g. it was modified). |
| E | Keyevent events | Published on __keyevent@<db>__:<event> channels; events about an operation (e.g. a DEL ran). |
| g | Generic commands | Non-type-specific commands (DEL, EXPIRE, RENAME, ...). |
| $ | String commands | String-type commands only (SET, GET, ...). |
| l | List commands | List-type commands only (LPUSH, LRANGE, ...). |
| s | Set commands | Set-type commands only (SADD, SMEMBERS, ...). |
| h | Hash commands | Hash-type commands only (HSET, HGETALL, ...). |
| z | Sorted-set commands | Sorted-set commands only (ZADD, ZRANGE, ...). |
| x | Expired events | Fired when a key expires (the expired event). |
| e | Evicted events | Fired when a key is evicted under memory pressure (the evicted event). |
| A | All classes | Alias for g$lshzxe (does not include K or E; add those explicitly). |

2. Combination rules

  • The string must include K or E, or nothing is published:

    • K: follow key changes (the channel name carries the key).
    • E: follow operations (the channel name carries the event).
  • Example combinations:

    | Configuration | Meaning |
    |---|---|
    | Ex | Publish key-expiry keyevents (channels like __keyevent@0__:expired). |
    | Kx | Publish expiry events in keyspace form (channels like __keyspace@0__:mykey, message expired). |
    | AKE | Publish every keyspace and keyevent class (equivalent to g$lshzxe plus KE). |

III. Examples and scenarios

1. Watching key expiry (a quick demo follows this list)

notify-keyspace-events Ex
  • Subscribing:

    redis-cli psubscribe "__keyevent@*__:expired"  # expiry events from every database
  • Uses:

    • Cache invalidation: when a cached key expires, tell downstream services to reload the data.
    • Lightweight timers: listen for expiry to decouple scheduled work (e.g. SET key value EX 30 plus a subscriber).
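A minimal end-to-end demo (two shells; demo:key is a made-up key name):

    # Shell 1: enable expiry notifications at runtime and subscribe
    redis-cli config set notify-keyspace-events Ex
    redis-cli psubscribe "__keyevent@0__:expired"

    # Shell 2: create a key that expires after 1 second
    redis-cli set demo:key "hello" ex 1
    # about a second later, shell 1 prints a pmessage carrying "demo:key"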

2. Watching specific command types

notify-keyspace-events Elh  # keyevents + list commands + hash commands
  • Example channels:
    • LPUSH mylist 1 publishes to __keyevent@0__:lpush with the key name mylist.
    • HSET myhash field value publishes to __keyevent@0__:hset with the key name myhash.

3. Watching every keyspace change

notify-keyspace-events KEA  # keyspace + keyevent + all command classes
  • Warning: this produces a torrent of events and can hurt Redis performance; use it only for development or dedicated monitoring.

IV. Performance impact and best practices

1. Performance notes

  • Memory and bandwidth: notifications cost Redis memory (subscription state) and network bandwidth (published messages).
  • High concurrency: a flood of events can stress the main thread; mitigate by:
    • Enabling only the classes you need (e.g. just x and e).
    • Running subscribers against a dedicated Redis instance so the primary is unaffected.

2. Production advice

  • Minimize the classes:

    notify-keyspace-events "Ex"  # expiry notifications only
  • Isolate by database:

    • Confine notification-driven workloads to a specific database (e.g. db 1) so the main data (db 0) is untouched.
  • Process events asynchronously:

    • Buffer events through middleware (e.g. Kafka) so slow consumers do not cause backlog at Redis.

V. How to consume notifications

1. Subscribing

  • By event type:

    # expiry events for every key in database 0 (keyevent form)
    redis-cli -h 127.0.0.1 -p 6379 subscribe "__keyevent@0__:expired"
  • By keyspace pattern:

    # all changes to keys matching "user:*" in database 0 (keyspace form)
    redis-cli -h 127.0.0.1 -p 6379 psubscribe "__keyspace@0__:user:*"

2. Message format

  • Keyspace events:

    PUBLISH __keyspace@<db>__:<key> <event>
    • Example: deleting user:123 publishes __keyspace@0__:user:123 del.
  • Keyevent events:

    PUBLISH __keyevent@<db>__:<event> <key>
    • Example: deleting user:123 publishes __keyevent@0__:del user:123.

VI. Common problems and troubleshooting

  1. No events arrive
    • Check that notify-keyspace-events includes K or E (a bare x does nothing, since neither K nor E is present).
    • Check the channel name (expiry events arrive on __keyevent@<db>__:expired).
  2. Performance drops
    • Trim the enabled classes, or cover part of the need with AOF-based auditing instead.
    • Under heavy event volume, spread subscribers across a Redis Cluster.

VII. Summary

Event notification underpins real-time data sync and cache-invalidation patterns, but it must be configured sparingly to protect performance. In production:

  1. Enable only the classes you need (e.g. Ex for expiry).
  2. Decouple event processing via a dedicated instance or middleware.
  3. Subscribe selectively with SUBSCRIBE and PSUBSCRIBE rather than listening to everything.

Used judiciously, notifications give the business real-time hooks into Redis with little intrusion, improving responsiveness and extensibility.

ADVANCED CONFIG

This is Redis's advanced configuration: data structure encodings, memory management, client buffer limits and other low-level parameters. The key directives and tuning advice follow.

I. Data structure encodings

Redis stores small collections (hashes, lists, sorted sets) in compact encodings (ziplist, intset) to save memory; these thresholds control when the encoding converts:

1. Hashes

hash-max-ziplist-entries 512  # use ziplist while the hash has at most 512 fields
hash-max-ziplist-value 64     # ... and every value is at most 64 bytes
  • Advice:
    • If hashes usually stay under 512 small fields (e.g. user attributes), keep the defaults.
    • For larger hashes, lowering the thresholds (e.g. 256) converts to the hashtable encoding earlier, trading memory for speed.

2. Lists

list-max-ziplist-size -2     # each list node holds at most 8 KB (-2 = 8 KB, -1 = 4 KB)
list-compress-depth 0         # list compression off (0 = none, 1 = compress all but the end nodes)
  • Advice:
    • -2 or -1 balances memory and performance for medium-length lists (e.g. message queues).
    • For rarely modified lists, enabling compression (list-compress-depth 1) saves more memory at some CPU cost.

3. Sorted sets

zset-max-ziplist-entries 128  # use ziplist while the sorted set has at most 128 elements
zset-max-ziplist-value 64     # ... and every member is at most 64 bytes
  • When it fits:
    • Small leaderboards (e.g. a Top 100) stay within the defaults; larger sets convert to the skiplist encoding.

4. Integer sets

set-max-intset-entries 512    # use intset while the set is all-integer with ≤ 512 elements
  • Note: a non-integer member, or exceeding the threshold, converts the set to a hashtable.

II. Memory and CPU

1. Active rehashing

activerehashing yes           # active hash table rehashing (default on)
  • Purpose: spends a little CPU (about 10 times per second) incrementally rehashing the main dictionaries, releasing old table memory promptly and limiting fragmentation.
  • Advice: keep yes in production; turn it off only for extremely latency-sensitive setups with memory to spare.

2. Background task frequency (hz)

hz 10                         # background tasks run 10 times per second (default)
dynamic-hz yes                # adapt hz dynamically (default on)
  • What they do:
    • hz drives expired-key cleanup, client timeout checks and similar housekeeping.
    • dynamic-hz raises hz automatically (up to 100) when many clients connect, improving responsiveness.
  • Advice:
    • High-concurrency setups can set hz to 20-50, balancing CPU against latency (watch CPU usage).
    • Lightly loaded setups should keep the default to save CPU.

III. Client output buffer limits

These bound client output buffers so slow clients cannot blow up Redis memory:

1. Replica buffers

client-output-buffer-limit replica 256mb 64mb 60
  • Meaning:
    • Hard limit: 256 MB (exceeding it disconnects the replica immediately).
    • Soft limit: 64 MB sustained for 60 seconds also disconnects.
  • When replication lag is high, raising the hard limit (e.g. 512 MB) helps, provided the master has memory for it.

2. Pub/Sub buffers

client-output-buffer-limit pubsub 32mb 8mb 60
  • Advice: for slow subscribers, raise the soft limit somewhat (e.g. 16 MB), or buffer messages through middleware such as Kafka.

IV. Persistence micro-tuning

1. Incremental fsync during AOF rewrite

aof-rewrite-incremental-fsync yes  # fsync every 32 MB of rewritten AOF data
  • Purpose: avoids one huge flush blocking disk I/O; useful for large rewrites.

2. Incremental fsync during RDB saves

rdb-save-incremental-fsync yes    # fsync every 32 MB of generated RDB data
  • Note: may lengthen RDB generation slightly, but smooths out I/O peaks.

V. LFU tuning

1. Counter log factor

# lfu-log-factor 10             # logarithmic growth factor of the access counter (default 10)
  • Effect: higher values make the counter grow more slowly, suiting workloads with large frequency differences (hot data).
    • Example: with factor 10, 100 hits yield a counter around 10; with factor 1, around 18.

2. Counter decay time

# lfu-decay-time 1              # minutes for the counter to halve (default 1)
  • Purpose: controls how fast old access history fades; larger values remember history longer.
    • decay-time 0 decays the counter on every scan, for data whose relevance expires quickly.

VI. Other advanced settings

1. HyperLogLog sparse threshold

hll-sparse-max-bytes 3000       # maximum size of the sparse encoding (default 3000 bytes)
  • Advice: for small cardinalities (under ~100k) keep the default; for consistently large ones, 0 forces the dense encoding.

2. Stream node sizing

stream-node-max-bytes 4096      # maximum bytes per stream macro node (default 4 KB)
stream-node-max-entries 100     # maximum entries per node (default 100)
  • Tuning: high-throughput streams can raise max-entries to 500-1000 to cut node counts; latency-sensitive ones keep the defaults.

VII. Production best practices

1. A memory-lean combination

hash-max-ziplist-entries 256    # convert hashes to hashtable earlier
zset-max-ziplist-entries 64     # lower sorted-set threshold, fits small leaderboards
activerehashing yes             # active rehashing keeps fragmentation down
hz 20                           # faster housekeeping, quicker expired-key cleanup

2. Replication-friendly tweaks

client-output-buffer-limit replica 512mb 128mb 120  # bigger buffers for slow replicas
repl-ping-replica-period 10                     # replica heartbeat interval (default 10 s)

3. Monitoring and verification

  • Use OBJECT ENCODING <key> to confirm which encoding a key actually uses and whether the compact encodings apply.
  • Watch mem_fragmentation_ratio via INFO memory; above roughly 1.5, consider a restart or revisit activerehashing.
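A small demonstration of watching an encoding convert as a hash grows past the threshold (h:demo is a throwaway key; looping redis-cli is slow but fine for a demo):

    redis-cli del h:demo
    redis-cli hset h:demo f1 v1
    redis-cli object encoding h:demo        # "ziplist": small hash, compact encoding
    for i in $(seq 1 600); do redis-cli hset h:demo "f$i" "v$i" >/dev/null; done
    redis-cli object encoding h:demo        # "hashtable": crossed hash-max-ziplist-entries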

VIII. Caveats

  1. Parameters interact: changing encoding thresholds shifts the memory/CPU balance (ziplist saves memory but slows lookups).
  2. Version compatibility: some settings (e.g. the stream ones) require Redis 5.0+; check your instance's version.
  3. Change little, measure much: avoid changing several advanced parameters at once in production; validate each with A/B tests.

Tuned carefully, these knobs find the sweet spot between memory footprint, CPU load and operation latency. Work from your data's actual shape and access patterns, optimize step by step, and watch the effect of each change with tools such as redis-cli --stat.

Appendix

SNAPSHOTTING

################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis

REPLICATION

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, which will be queued for the next RDB transfer, so the
# server waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to the server at a predefined interval. It's possible to
# change this interval with the repl_ping_replica_period option. The default
# value is 10 seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of the replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb
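
A rough sizing rule: the backlog should hold the replication stream produced during the longest disconnection you want a partial resync to survive. A back-of-the-envelope sketch; the traffic figure is an assumption, not a measured value:

# Assumed replication write traffic: 2 MB/s (hypothetical figure; measure your
# own, e.g. from master_repl_offset deltas in "INFO replication").
write_rate_bytes = 2 * 1024 * 1024
# Longest replica disconnection a partial resync should survive: 60 seconds.
max_disconnect_seconds = 60

backlog_bytes = write_rate_bytes * max_disconnect_seconds
print(backlog_bytes // (1024 * 1024), "MB")   # 120 MB -> repl-backlog-size 120mb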

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are fewer than
# N replicas connected, with a lag less than or equal to M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, which must be <= the specified value, is calculated from
# the last ping received from the replica, which is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
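
From a client's point of view the feature looks like this; a minimal sketch assuming redis-py and a master that currently has no healthy replicas:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)
r.config_set('min-replicas-to-write', 3)
r.config_set('min-replicas-max-lag', 10)

try:
    r.set('key', 'value')
except redis.exceptions.ResponseError as e:
    # With fewer than 3 good replicas the master refuses the write:
    # "NOREPLICAS Not enough good replicas to write."
    print(e)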

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The IP address and port normally reported by a replica are obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

SECURITY

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
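
Clients then authenticate before issuing commands; with redis-py this is just a constructor argument (the password below is illustrative and assumes the server was started with "requirepass s3cret"):

import redis

# redis-py sends AUTH automatically before any other command.
r = redis.Redis(host='127.0.0.1', port=6379, password='s3cret')
r.ping()

# Without the password every command would fail with:
# "NOAUTH Authentication required."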

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.

CLIENTS

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

MEMORY MANAGEMENT

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas is subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among the following eight behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# LRU, LFU and volatile-ttl are all implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune them for speed or
# accuracy. By default Redis will check five keys and pick the one that was
# used least recently; you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 approximates true LRU
# very closely but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
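
Putting the last few directives together, an instance can be turned into a bounded LRU cache at runtime; a minimal sketch with redis-py (the 100mb limit is an arbitrary example):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Cap memory at 100 MB and evict any key using approximated LRU.
r.config_set('maxmemory', '100mb')
r.config_set('maxmemory-policy', 'allkeys-lru')
r.config_set('maxmemory-samples', 10)   # trade a little CPU for better LRU accuracy

print(r.config_get('maxmemory-policy'))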

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end up using more
# memory than what is set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

LAZY FREEING

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and the ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# as if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way, as if UNLINK
# was called, using the following configuration directives:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
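
The user-controlled counterparts of these directives are UNLINK and the ASYNC flags; a minimal sketch with redis-py (the key name is illustrative):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# DEL reclaims memory synchronously and can block on huge keys;
# UNLINK returns immediately and frees the value in a background thread.
r.unlink('some:big:hash')

# The ASYNC variants of FLUSHDB / FLUSHALL behave the same way.
r.flushall(asynchronous=True)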

APPEND ONLY MODE

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result in a few minutes of writes being lost (depending
# on the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# goes wrong with the Redis process itself while the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OSes will really
# flush data on disk, others will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no
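
AOF can also be enabled and tuned at runtime instead of in the config file; a minimal sketch with redis-py:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Switch AOF on; Redis performs the initial rewrite by itself.
r.config_set('appendonly', 'yes')
r.config_set('appendfsync', 'everysec')   # the usual speed/durability compromise

# A rewrite can also be requested explicitly at any time.
r.bgrewriteaof()
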
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the base size by the specified percentage, the rewrite is
# triggered. You also need to specify a minimal size for the AOF file to be
# rewritten; this is useful to avoid rewriting the AOF file even if the
# percentage increase is reached but the file is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
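
With the two defaults above the trigger condition works out as follows; a small sketch of the logic (the sizes are illustrative, not taken from a real instance):

# Size of the AOF after the last rewrite (or at startup), e.g. 80 MB.
base_size_mb = 80
current_size_mb = 170

auto_aof_rewrite_percentage = 100
auto_aof_rewrite_min_size_mb = 64

growth_pct = (current_size_mb - base_size_mb) * 100 / base_size_mb
if (current_size_mb >= auto_aof_rewrite_min_size_mb
        and growth_pct >= auto_aof_rewrite_percentage):
    print("BGREWRITEAOF would be triggered")   # 112.5% growth >= 100%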

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle,
# the server will still exit with an error. This option only applies when
# Redis tries to read more data from the AOF file but not enough bytes
# are found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes

LUA SCRIPTING

################################ LUA SCRIPTING  ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that has not yet called any write commands. The second
# is the only way to shut down the server in case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
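
For reference, this is what running a server-side script looks like from a client; a minimal sketch with redis-py (key and value are illustrative):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# A trivial script: read a key on the server side and return its value.
script = "return redis.call('GET', KEYS[1])"
r.set('greeting', 'hello')
print(r.eval(script, 1, 'greeting'))   # b'hello'

# A script stuck past lua-time-limit can be stopped, provided it has not
# performed any write yet:
# r.script_kill()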

REDIS CLUSTER

################################ REDIS CLUSTER  ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# Point "2" can be tuned by the user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to fail over
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least one hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# is no longer covered) the whole cluster eventually becomes unavailable.
# It automatically becomes available again as soon as all the slots are covered.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to fail over their
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

CLUSTER DOCKER/NAT support

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster node address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following options are used for this purpose:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380

SLOW LOG

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
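
The slow log is queried and reset at runtime rather than from the config file; a minimal sketch with redis-py:

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Log every command slower than 10 ms and keep the last 128 entries.
r.config_set('slowlog-log-slower-than', 10000)
r.config_set('slowlog-max-len', 128)

for entry in r.slowlog_get(10):        # the ten most recent slow commands
    print(entry['duration'], entry['command'])

r.slowlog_reset()                      # reclaim the memory used by the log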

LATENCY MONITOR

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact that, while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
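
Enabling the monitor and reading its samples at runtime might look like this; a minimal sketch with redis-py (the 100 ms threshold is an arbitrary example):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

# Record every event that took 100 milliseconds or more.
r.config_set('latency-monitor-threshold', 100)

# Latest spike per event class, and the history of one class.
print(r.execute_command('LATENCY', 'LATEST'))
print(r.execute_command('LATENCY', 'HISTORY', 'command'))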

EVENT NOTIFICATION

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

ADVANCED CONFIG

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
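
The effect of these thresholds can be observed with OBJECT ENCODING; a minimal sketch with redis-py (key names are illustrative):

import redis

r = redis.Redis(host='127.0.0.1', port=6379)

r.delete('h')
r.hset('h', 'field', 'short value')
print(r.object('encoding', 'h'))       # b'ziplist' while the hash stays small

# Crossing either threshold converts it to the generic hash table encoding.
for i in range(600):                   # more than hash-max-ziplist-entries (512)
    r.hset('h', f'field:{i}', i)
print(r.object('encoding', 'h'))       # b'hashtable'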

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica  -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbounded memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or the like.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful, for instance, to avoid
# processing too many clients for each background task invocation, in
# order to avoid latency spikes.
#
# Since the default HZ value is conservatively set to 10, Redis offers,
# and enables by default, the ability to use an adaptive HZ value which
# will temporarily rise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# a baseline, but multiples of the configured HZ value will actually be
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When Redis saves an RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve performance and how the keys' LFU changes over time, which
# can be inspected via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key; its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if its value
# is <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
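
The probabilistic increment in steps 1-3 is easy to simulate; a minimal sketch of the idea (a simplification, not Redis' exact source, which also subtracts the initial value of 5 before computing P):

import random

def lfu_increment(counter, lfu_log_factor=10):
    """One probabilistic increment of the 8-bit LFU counter (max 255)."""
    if counter >= 255:
        return counter
    r = random.random()                        # step 1: random R in [0, 1)
    p = 1.0 / (counter * lfu_log_factor + 1)   # step 2: probability P
    return counter + 1 if r < p else counter   # step 3: increment only if R < P

counter = 5                                    # new keys start at 5
for _ in range(100_000):
    counter = lfu_increment(counter)
print(counter)                                 # grows logarithmically, far below 255
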
########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing memory to be reclaimed.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0, this process can happen at runtime
# in a "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys,
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enable active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5

# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000