Annotated redis.conf configuration

# Redis configuration file example
#
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
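The decimal-vs-binary unit rule above (bare k/m/g are powers of 1000, kb/mb/gb are powers of 1024, case insensitive) can be sketched with a small hypothetical helper; this is an illustration of the rule, not code from Redis itself:

```python
# Unit table following the rules above: bare suffixes are decimal,
# "b" suffixes are binary; lookups are done on the lowercased string.
UNITS = {
    "": 1,
    "k": 1000, "m": 1000**2, "g": 1000**3,
    "kb": 1024, "mb": 1024**2, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    """Parse a redis.conf-style memory size like '1k', '5GB', '4M'."""
    s = value.strip().lower()
    digits = s.rstrip("kmgb")          # everything before the unit suffix
    suffix = s[len(digits):]
    if suffix not in UNITS:
        raise ValueError("unknown unit: %r" % value)
    return int(digits) * UNITS[suffix]
```

So `parse_memory("1k")` yields 1000 while `parse_memory("1kb")` yields 1024, matching the table in the comments.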
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################ GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need a high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""

save 900 1
save 300 10
save 60 10000
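The save-point rule above (a snapshot triggers when any `save <seconds> <changes>` pair is satisfied) can be illustrated with a hypothetical helper; the function name and signature are for this sketch only:

```python
# The three default save points from the config above.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds: int, changes: int,
                    save_points=SAVE_POINTS) -> bool:
    """A BGSAVE is due if, for any (seconds, changes) save point, at
    least `changes` writes happened AND at least `seconds` elapsed
    since the last successful save."""
    return any(elapsed_seconds >= secs and changes >= chg
               for secs, chg in save_points)
```

For instance, 59 seconds with 10000 changes does not trigger a snapshot yet, while 60 seconds with 10000 changes does.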
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# Note that the slave copies data from the remote master, so locally it can
# use a different database file, bind to a different IP and listen on a
# different port than its master.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#    file on disk. Later the file is transferred by the parent
#    process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#    RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
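The batching effect of the delay can be sketched as follows. This is a simplified model (it assumes each transfer completes before the next group starts waiting), not the actual server logic:

```python
def batch_transfers(arrival_times, delay):
    """Group sorted slave arrival times (seconds) into transfers:
    slaves arriving within `delay` seconds of the first waiter share
    one RDB transfer; later arrivals wait for the next round."""
    batches = []
    i = 0
    while i < len(arrival_times):
        start = arrival_times[i] + delay   # transfer begins after the wait
        batch = [t for t in arrival_times[i:] if t <= start]
        batches.append(batch)
        i += len(batch)
    return batches
```

With `delay=5`, slaves arriving at t=0, 2 and 4 share one transfer, while a slave arriving at t=9 gets its own; with `delay=0` every slave triggers a separate transfer.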
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
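The partial-resync decision the backlog enables can be sketched as a simple offset check; this is an illustrative model, not Redis's internal code, and the byte offsets are made up:

```python
def can_partial_resync(slave_offset: int, master_offset: int,
                       backlog_size: int) -> bool:
    """A reconnecting slave can receive just the bytes it missed if
    the gap between its replication offset and the master's is still
    covered by the backlog buffer; otherwise a full resync (RDB
    transfer) is required."""
    missed = master_offset - slave_offset
    return 0 <= missed <= backlog_size
```

A bigger `backlog_size` therefore widens the window during which a disconnected slave can avoid a full resync, which is exactly the trade-off described above.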
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
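The selection rule above (lowest positive priority wins, 0 is never eligible) is small enough to state as a hypothetical helper:

```python
def pick_promotion_candidate(priorities):
    """Return the winning slave-priority among candidates, following
    the rule above: priority 0 is never eligible, and the lowest
    positive priority is preferred. None if nothing is eligible."""
    eligible = [p for p in priorities if p > 0]
    return min(eligible) if eligible else None
```

Using the example from the comments, among priorities 10, 100 and 25 the slave with priority 10 is chosen.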
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
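The acceptance check described above can be sketched as follows; this is an illustrative model of the rule, not the server's implementation:

```python
def accept_write(slave_lags, min_slaves: int, max_lag: int) -> bool:
    """The master accepts a write only if at least `min_slaves` online
    slaves reported a lag of at most `max_lag` seconds. Per the config
    comments, setting either limit to 0 disables the feature."""
    if min_slaves == 0 or max_lag == 0:
        return True
    good = sum(1 for lag in slave_lags if lag <= max_lag)
    return good >= min_slaves
```

With `min-slaves-to-write 3` and `min-slaves-max-lag 10`, three slaves lagging 1-3 seconds allow the write, while only one fresh slave out of three blocks it.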
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3
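The sampling idea above can be illustrated with a toy version of approximate LRU; this sketch only models the "sample a few keys, evict the least recently used of them" step, not Redis's actual eviction code:

```python
import random

def evict_candidate(last_used, samples=3, rng=random):
    """Approximate LRU: sample up to `samples` keys from the
    keyspace and return the one with the oldest last-access time.
    `last_used` maps key -> last-access timestamp."""
    population = list(last_used)
    sampled = rng.sample(population, min(samples, len(population)))
    return min(sampled, key=lambda k: last_used[k])  # least recently used
```

With a sample size covering the whole keyspace this degenerates to exact LRU; with a small sample it trades precision for memory and CPU, which is the compromise the comments describe.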
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
  648. # Automatic rewrite of the append only file.
  649. # Redis is able to automatically rewrite the log file implicitly calling
  650. # BGREWRITEAOF when the AOF log size grows by the specified percentage.
  651. #
  652. # This is how it works: Redis remembers the size of the AOF file after the
  653. # latest rewrite (if no rewrite has happened since the restart, the size of
  654. # the AOF at startup is used).
  655. #
  656. # This base size is compared to the current size. If the current size is
  657. # bigger than the specified percentage, the rewrite is triggered. Also
  658. # you need to specify a minimal size for the AOF file to be rewritten, this
  659. # is useful to avoid rewriting the AOF file even if the percentage increase
  660. # is reached but it is still pretty small.
  661. #
  662. # Specify a percentage of zero in order to disable the automatic AOF
  663. # rewrite feature.
  664. # Automatic rewrite of the append-only file.
  665. #
  666. # Redis can automatically rewrite the AOF log file by calling BGREWRITEAOF once it grows by the specified percentage.
  667. #
  668. # How it works: Redis remembers the size of the AOF file after the last rewrite (if no rewrite has happened since the restart, the size of the AOF at startup is used),
  669. # and compares that base size with the current size. If the current size exceeds the base by the given percentage, a rewrite is triggered.
  670. #
  671. # You also need to specify a minimum size for the AOF file to be rewritten; this avoids rewriting the file when the percentage is reached but it is still quite small.
  672. #
  673. # A percentage of zero disables the automatic AOF rewrite feature.
  674. auto-aof-rewrite-percentage 100
  675. auto-aof-rewrite-min-size 64mb
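The rewrite trigger described above boils down to simple arithmetic. A hedged sketch (the function name and signature are invented here; the defaults mirror the two directives above, and this is not Redis's actual C code):

```python
def should_trigger_rewrite(current_size: int, base_size: int,
                           percentage: int = 100,
                           min_size: int = 64 * 1024 * 1024) -> bool:
    """base_size is the AOF size after the last rewrite (or at startup).
    A rewrite fires when the file has grown by `percentage` percent over
    the base AND the file is at least `min_size` bytes."""
    if percentage == 0:          # 0 disables automatic rewrites
        return False
    if current_size < min_size:  # still too small to bother
        return False
    if base_size <= 0:           # no base yet: size check alone decides
        return True
    growth = (current_size - base_size) * 100 / base_size
    return growth >= percentage
```

With the defaults, a 100 MB AOF whose post-rewrite base was 50 MB (100% growth) triggers a rewrite, while 75 MB (50% growth) does not.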
  676. # An AOF file may be found to be truncated at the end during the Redis
  677. # startup process, when the AOF data gets loaded back into memory.
  678. # This may happen when the system where Redis is running
  679. # crashes, especially when an ext4 filesystem is mounted without the
  680. # data=ordered option (however this can't happen when Redis itself
  681. # crashes or aborts but the operating system still works correctly).
  682. #
  683. # Redis can either exit with an error when this happens, or load as much
  684. # data as possible (the default now) and start if the AOF file is found
  685. # to be truncated at the end. The following option controls this behavior.
  686. #
  687. # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
  688. # the Redis server starts emitting a log to inform the user of the event.
  689. # Otherwise if the option is set to no, the server aborts with an error
  690. # and refuses to start. When the option is set to no, the user is required
  691. # to fix the AOF file using the "redis-check-aof" utility before restarting
  692. # the server.
  693. #
  694. # Note that if the AOF file is found to be corrupted in the middle,
  695. # the server will still exit with an error. This option only applies when
  696. # Redis tries to read more data from the AOF file but not enough bytes
  697. # are found.
  698. aof-load-truncated yes
  699. ################################ LUA SCRIPTING ###############################
  700. # Max execution time of a Lua script in milliseconds.
  701. #
  702. # If the maximum execution time is reached Redis will log that a script is
  703. # still in execution after the maximum allowed time and will start to
  704. # reply to queries with an error.
  705. #
  706. # When a long running script exceeds the maximum execution time only the
  707. # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
  708. # used to stop a script that has not yet called write commands. The second
  709. # is the only way to shut down the server in the case a write command was
  710. # already issued by the script but the user doesn't want to wait for the natural
  711. # termination of the script.
  712. #
  713. # Set it to 0 or a negative value for unlimited execution without warnings.
  714. lua-time-limit 5000
  715. ################################## SLOW LOG ###################################
  716. # The Redis Slow Log is a system to log queries that exceeded a specified
  717. # execution time. The execution time does not include the I/O operations
  718. # like talking with the client, sending the reply and so forth,
  719. # but just the time needed to actually execute the command (this is the only
  720. # stage of command execution where the thread is blocked and can not serve
  721. # other requests in the meantime).
  722. #
  723. # You can configure the slow log with two parameters: one tells Redis
  724. # what is the execution time, in microseconds, to exceed in order for the
  725. # command to get logged, and the other parameter is the length of the
  726. # slow log. When a new command is logged the oldest one is removed from the
  727. # queue of logged commands.
  728. # The following time is expressed in microseconds, so 1000000 is equivalent
  729. # to one second. Note that a negative number disables the slow log, while
  730. # a value of zero forces the logging of every command.
  731. # The Redis slow log records queries that exceed a specified execution time. That time does not include I/O,
  732. # such as talking with the client or sending the reply; it only counts the time actually spent executing the command (the only stage of command execution where the thread is blocked and cannot serve other requests).
  733. #
  734. # You can configure the slow log with two parameters: one is the threshold, in microseconds, that a command must exceed to be logged;
  735. # the other is the length of the slow log. When a new command is logged, the oldest entry is removed from the queue.
  736. #
  737. # The time below is in microseconds, so 1000000 equals one second. Note that a negative value disables the slow log, while zero forces every command to be logged.
  738. slowlog-log-slower-than 10000
  739. # There is no limit to this length. Just be aware that it will consume memory.
  740. # You can reclaim memory used by the slow log with SLOWLOG RESET.
  741. # There is no limit to this length, just be aware that it consumes memory. You can reclaim the memory used by the slow log with SLOWLOG RESET. (Translator's note: yes, the log lives in memory.)
  742. slowlog-max-len 128
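The two slowlog parameters above can be modeled with a fixed-length queue. A minimal sketch (the `SlowLog` class is invented here for illustration; Redis's real slow log is an in-memory list on the server):

```python
from collections import deque

class SlowLog:
    """Entries whose execution time meets a microsecond threshold are
    kept in a fixed-length queue; the oldest entry is dropped when the
    queue is full, mirroring slowlog-log-slower-than / slowlog-max-len."""

    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # old entries fall off the front

    def record(self, command: str, duration_us: int):
        if self.slower_than_us < 0:               # negative: log disabled
            return
        if duration_us >= self.slower_than_us:    # zero logs every command
            self.entries.append((command, duration_us))
```

With `max_len=2`, logging a third slow command silently evicts the oldest one, just as the comment above describes.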
  743. ################################ LATENCY MONITOR ##############################
  744. # The Redis latency monitoring subsystem samples different operations
  745. # at runtime in order to collect data related to possible sources of
  746. # latency of a Redis instance.
  747. #
  748. # Via the LATENCY command this information is available to the user that can
  749. # print graphs and obtain reports.
  750. #
  751. # The system only logs operations that were performed in a time equal or
  752. # greater than the amount of milliseconds specified via the
  753. # latency-monitor-threshold configuration directive. When its value is set
  754. # to zero, the latency monitor is turned off.
  755. #
  756. # By default latency monitoring is disabled since it is mostly not needed
  757. # if you don't have latency issues, and collecting data has a performance
  758. # impact, that while very small, can be measured under big load. Latency
  759. # monitoring can easily be enabled at runtime using the command
  760. # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
  761. latency-monitor-threshold 0
  762. ############################# Event notification ##############################
  763. # Redis can notify Pub/Sub clients about events happening in the key space.
  764. # This feature is documented at http://redis.io/topics/notifications
  765. #
  766. # For instance if keyspace events notification is enabled, and a client
  767. # performs a DEL operation on key "foo" stored in the Database 0, two
  768. # messages will be published via Pub/Sub:
  769. #
  770. # PUBLISH __keyspace@0__:foo del
  771. # PUBLISH __keyevent@0__:del foo
  772. #
  773. # It is possible to select the events that Redis will notify among a set
  774. # of classes. Every class is identified by a single character:
  775. #
  776. # K Keyspace events, published with __keyspace@<db>__ prefix.
  777. # E Keyevent events, published with __keyevent@<db>__ prefix.
  778. # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
  779. # $ String commands
  780. # l List commands
  781. # s Set commands
  782. # h Hash commands
  783. # z Sorted set commands
  784. # x Expired events (events generated every time a key expires)
  785. # e Evicted events (events generated when a key is evicted for maxmemory)
  786. # A Alias for g$lshzxe, so that the "AKE" string means all the events.
  787. #
  788. # The "notify-keyspace-events" takes as argument a string that is composed
  789. # of zero or multiple characters. The empty string means that notifications
  790. # are disabled.
  791. #
  792. # Example: to enable list and generic events, from the point of view of the
  793. # event name, use:
  794. #
  795. # notify-keyspace-events Elg
  796. #
  797. # Example 2: to get the stream of the expired keys subscribing to channel
  798. # name __keyevent@0__:expired use:
  799. #
  800. # notify-keyspace-events Ex
  801. #
  802. # By default all notifications are disabled because most users don't need
  803. # this feature and the feature has some overhead. Note that if you don't
  804. # specify at least one of K or E, no events will be delivered.
  805. notify-keyspace-events ""
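The flag string is just a set of class characters, with "A" expanding to the classes g$lshzxe. A small sketch of that composition (the helper function is invented here; it is not part of Redis):

```python
def expand_event_flags(flags: str) -> set:
    """Expand a notify-keyspace-events flag string into the set of
    event classes it enables, per the class table above."""
    classes = set()
    for ch in flags:
        if ch == "A":                  # alias for all event classes
            classes.update("g$lshzxe")
        elif ch in "KEg$lshzxe":
            classes.add(ch)
        else:
            raise ValueError("unknown event class: " + ch)
    return classes
```

For instance, with `notify-keyspace-events "Ex"` set, a client could watch expirations with something like `redis-cli psubscribe '__keyevent@*__:expired'`.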
  806. ############################### ADVANCED CONFIG ###############################
  807. # Hashes are encoded using a memory efficient data structure when they have a
  808. # small number of entries, and the biggest entry does not exceed a given
  809. # threshold. These thresholds can be configured using the following directives.
  810. # Hashes are encoded with a memory-efficient data structure when they have a small number of entries
  811. # and the biggest entry does not exceed a given threshold. These thresholds can be configured with the following directives:
  812. hash-max-ziplist-entries 512
  813. hash-max-ziplist-value 64
  814. # Similarly to hashes, small lists are also encoded in a special way in order
  815. # to save a lot of space. The special representation is only used when
  816. # you are under the following limits:
  817. # Similarly to hashes, small lists are encoded in a special way to save a lot of space.
  818. # This special representation is only used when you stay under the following limits:
  819. list-max-ziplist-entries 512
  820. list-max-ziplist-value 64
  821. # Sets have a special encoding in just one case: when a set is composed
  822. # of just strings that happen to be integers in radix 10 in the range
  823. # of 64 bit signed integers.
  824. # The following configuration setting sets the limit in the size of the
  825. # set in order to use this special memory saving encoding.
  826. # There is one more special-encoding case: when a set consists solely of strings that are base-10 integers in the range of 64-bit signed integers.
  827. # The following setting limits the size of the set for this special memory-saving encoding to be used.
  828. set-max-intset-entries 512
  829. # Similarly to hashes and lists, sorted sets are also specially encoded in
  830. # order to save a lot of space. This encoding is only used when the length and
  831. # elements of a sorted set are below the following limits:
  832. # As in the cases above, sorted sets can also use a special encoding that saves a lot of space.
  833. # This encoding is only used when both the length and the elements of the sorted set are below the following limits:
  834. zset-max-ziplist-entries 128
  835. zset-max-ziplist-value 64
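All the encoding limits above follow the same pattern: stay small on both counts and the compact representation is kept. A sketch of that decision for hashes (illustrative only; the function is invented here and is not Redis's actual code):

```python
def pick_hash_encoding(entries: dict, max_entries: int = 512,
                       max_value: int = 64) -> str:
    """A hash keeps the compact "ziplist" encoding only while the
    number of fields and the length of every field and value stay
    within the configured limits; otherwise a real hash table is used.
    Defaults mirror hash-max-ziplist-entries / hash-max-ziplist-value."""
    if len(entries) > max_entries:
        return "hashtable"
    for field, value in entries.items():
        if len(field) > max_value or len(value) > max_value:
            return "hashtable"
    return "ziplist"
```

On a live server the actual encoding of a key can be inspected with `OBJECT ENCODING <key>`.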
  836. # HyperLogLog sparse representation bytes limit. The limit includes the
  837. # 16 bytes header. When an HyperLogLog using the sparse representation crosses
  838. # this limit, it is converted into the dense representation.
  839. #
  840. # A value greater than 16000 is totally useless, since at that point the
  841. # dense representation is more memory efficient.
  842. #
  843. # The suggested value is ~ 3000 in order to have the benefits of
  844. # the space efficient encoding without slowing down too much PFADD,
  845. # which is O(N) with the sparse encoding. The value can be raised to
  846. # ~ 10000 when CPU is not a concern, but space is, and the data set is
  847. # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
  848. hll-sparse-max-bytes 3000
  849. # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
  850. # order to help rehashing the main Redis hash table (the one mapping top-level
  851. # keys to values). The hash table implementation Redis uses (see dict.c)
  852. # performs a lazy rehashing: the more operation you run into a hash table
  853. # that is rehashing, the more rehashing "steps" are performed, so if the
  854. # server is idle the rehashing is never complete and some more memory is used
  855. # by the hash table.
  856. #
  857. # The default is to use this millisecond 10 times every second in order to
  858. # actively rehash the main dictionaries, freeing memory when possible.
  859. #
  860. # If unsure:
  861. # use "activerehashing no" if you have hard latency requirements and it is
  862. # not a good thing in your environment that Redis can reply from time to time
  863. # to queries with 2 milliseconds delay.
  864. #
  865. # use "activerehashing yes" if you don't have such hard requirements but
  866. # want to free memory asap when possible.
  867. # Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU time to rehash the main Redis hash table (the one mapping top-level keys to values).
  868. # The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run on a table that is rehashing, the more rehashing "steps" are performed;
  869. # conversely, if the server is idle the rehashing never completes and the table keeps using some extra memory.
  870. #
  871. # The default is to actively rehash the main dictionaries ten times per second, freeing memory when possible.
  872. #
  873. # If unsure:
  874. # use "activerehashing no" if you have hard latency requirements and occasional replies delayed by 2 milliseconds are not acceptable in your environment;
  875. # use "activerehashing yes" if you don't have such hard requirements but want to free memory as soon as possible.
  876. activerehashing yes
  877. # The client output buffer limits can be used to force disconnection of clients
  878. # that are not reading data from the server fast enough for some reason (a
  879. # common reason is that a Pub/Sub client can't consume messages as fast as the
  880. # publisher can produce them).
  881. #
  882. # The limit can be set differently for the three different classes of clients:
  883. #
  884. # normal -> normal clients including MONITOR clients
  885. # slave -> slave clients
  886. # pubsub -> clients subscribed to at least one pubsub channel or pattern
  887. #
  888. # The syntax of every client-output-buffer-limit directive is the following:
  889. #
  890. # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
  891. #
  892. # A client is immediately disconnected once the hard limit is reached, or if
  893. # the soft limit is reached and remains reached for the specified number of
  894. # seconds (continuously).
  895. # So for instance if the hard limit is 32 megabytes and the soft limit is
  896. # 16 megabytes / 10 seconds, the client will get disconnected immediately
  897. # if the size of the output buffers reach 32 megabytes, but will also get
  898. # disconnected if the client reaches 16 megabytes and continuously overcomes
  899. # the limit for 10 seconds.
  900. #
  901. # By default normal clients are not limited because they don't receive data
  902. # without asking (in a push way), but just after a request, so only
  903. # asynchronous clients may create a scenario where data is requested faster
  904. # than it can read.
  905. #
  906. # Instead there is a default limit for pubsub and slave clients, since
  907. # subscribers and slaves receive data in a push fashion.
  908. #
  909. # Both the hard or the soft limit can be disabled by setting them to zero.
  910. client-output-buffer-limit normal 0 0 0
  911. client-output-buffer-limit slave 256mb 64mb 60
  912. client-output-buffer-limit pubsub 32mb 8mb 60
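The hard/soft semantics described above (immediate disconnect at the hard limit; disconnect only after the soft limit is exceeded continuously for soft_seconds) can be sketched as follows. The function and its `state` dict are invented here for illustration:

```python
def should_disconnect(buf_bytes: int, now: float, state: dict,
                      hard: int, soft: int, soft_seconds: int) -> bool:
    """state["soft_since"] holds the time the soft limit was first
    exceeded, or None. A limit of 0 disables that check."""
    if hard and buf_bytes >= hard:           # hard limit: drop immediately
        return True
    if soft and buf_bytes >= soft:           # soft limit: drop only if held
        if state.get("soft_since") is None:  # continuously for soft_seconds
            state["soft_since"] = now
            return False
        return now - state["soft_since"] >= soft_seconds
    state["soft_since"] = None               # dropped below soft: reset timer
    return False
```

Using the slave defaults above (256mb hard, 64mb soft, 60 seconds): a 300 MB buffer is dropped at once, while a 100 MB buffer is dropped only if it stays above 64 MB for a full 60 seconds.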
  913. # Redis calls an internal function to perform many background tasks, like
  914. # closing connections of clients in timeout, purging expired keys that are
  915. # never requested, and so forth.
  916. #
  917. # Not all tasks are performed with the same frequency, but Redis checks for
  918. # tasks to perform according to the specified "hz" value.
  919. #
  920. # By default "hz" is set to 10. Raising the value will use more CPU when
  921. # Redis is idle, but at the same time will make Redis more responsive when
  922. # there are many keys expiring at the same time, and timeouts may be
  923. # handled with more precision.
  924. #
  925. # The range is between 1 and 500, however a value over 100 is usually not
  926. # a good idea. Most users should use the default of 10 and raise this up to
  927. # 100 only in environments where very low latency is required.
  928. hz 10
  929. # When a child rewrites the AOF file, if the following option is enabled
  930. # the file will be fsync-ed every 32 MB of data generated. This is useful
  931. # in order to commit the file to the disk more incrementally and avoid
  932. # big latency spikes.
  933. aof-rewrite-incremental-fsync yes

-------------------------------------------------------------------------------------------------------------------------------

Source: Wiz Note (为知笔记)

Date: 2024-10-10 17:33:03
