
diff --git a/manifests/init.pp b/manifests/init.pp
index 9d42055..c53f9b5 100644
--- a/manifests/init.pp
+++ b/manifests/init.pp
@@ -1,257 +1,259 @@
# This class installs redis
#
# @example Default install
# include redis
#
# @example Slave Node
# class { '::redis':
# bind => '10.0.1.2',
# slaveof => '10.0.1.1 6379',
# }
#
# @param [String] activerehashing Enable/disable active rehashing.
# @param [String] aof_load_truncated Enable/disable loading truncated AOF file
# @param [String] aof_rewrite_incremental_fsync Enable/disable fsync for AOF file
# @param [String] appendfilename The name of the append only file
# @param [String] appendfsync Adjust fsync mode. Valid options: always, everysec, no. Default: everysec
# @param [String] appendonly Enable/disable appendonly mode.
# @param [String] auto_aof_rewrite_min_size Adjust minimum size for auto-aof-rewrite.
# @param [String] auto_aof_rewrite_percentage Adjust percentage for auto-aof-rewrite.
# @param [String] bind Configure which IP address to listen on.
# @param [String] config_dir Directory containing the configuration files.
# @param [String] config_dir_mode Adjust mode for directory containing configuration files.
# @param [String] config_file_orig The location and name of a config file that provides the source of the main configuration file.
# @param [String] config_file Adjust main configuration file.
# @param [String] config_file_mode Adjust permissions for configuration files.
# @param [String] config_group Adjust filesystem group for config files.
# @param [String] config_owner Adjust filesystem owner for config files.
# @param [String] conf_template Define which template to use.
# @param [String] daemonize Have Redis run as a daemon.
# @param [String] default_install Configure a default install of redis
# @param [String] databases Set the number of databases.
# @param [String] dbfilename The filename where to dump the DB
# @param [String] extra_config_file Optional extra configuration file to include.
# @param [String] hash_max_ziplist_entries Set max ziplist entries for hashes.
# @param [String] hash_max_ziplist_value Set max ziplist values for hashes.
# @param [String] hll_sparse_max_bytes HyperLogLog sparse representation bytes limit
# @param [String] hz Set redis background tasks frequency
# @param [String] latency_monitor_threshold Latency monitoring threshold in milliseconds
# @param [String] list_max_ziplist_entries Set max ziplist entries for lists.
# @param [String] list_max_ziplist_value Set max ziplist values for lists.
# @param [String] log_dir Specify directory where to write log entries.
# @param [String] log_dir_mode Adjust mode for directory containing log files.
# @param [String] log_file Specify file where to write log entries.
# @param [String] log_level Specify the server verbosity level.
# @param [String] manage_repo Enable/disable upstream repository configuration.
# @param [String] manage_package Enable/disable management of package
# @param [String] managed_by_cluster_manager Choose if redis will be managed by a cluster manager such as pacemaker or rgmanager
# @param [String] masterauth If the master is password protected (using the "requirepass" configuration directive), tell the slave to authenticate before starting replication.
# @param [String] maxclients Set the max number of connected clients at the same time.
# @param [String] maxmemory Don't use more memory than the specified amount of bytes.
# @param [String] maxmemory_policy How Redis will select what to remove when maxmemory is reached.
# @param [String] maxmemory_samples Select as well the sample size to check.
# @param [String] min_slaves_max_lag The maximum replication lag, in seconds, allowed for min_slaves_to_write.
# @param [String] min_slaves_to_write Minimum number of slaves to be in "online" state
# @param [String] no_appendfsync_on_rewrite If you have latency problems turn this to 'true'. Otherwise leave it as 'false'.
# @param [String] notify_keyspace_events Which keyspace events to notify Pub/Sub clients about.
# @param [String] notify_service You may disable service reloads when config files change by setting this to false.
# @param [String] package_ensure Default action for package.
# @param [String] package_name Upstream package name.
# @param [String] pid_file Where to store the pid.
# @param [String] port Configure which port to listen on.
+# @param [String] protected_mode Whether protected mode is enabled or not. Only applicable when no bind is set.
# @param [String] ppa_repo Specify upstream (Ubuntu) PPA entry.
# @param [String] rdbcompression Enable/disable compression of string objects using LZF when dumping.
# @param [String] repl_backlog_size The replication backlog size
# @param [String] repl_backlog_ttl The number of seconds to elapse before freeing backlog buffer
# @param [String] repl_disable_tcp_nodelay Enable/disable TCP_NODELAY on the slave socket after SYNC
# @param [String] repl_ping_slave_period Slaves send PINGs to the server at a predefined interval, which can be changed with this option.
# @param [String] repl_timeout Set the replication timeout for bulk transfer I/O and data/ping responses during SYNC.
# @param [String] requirepass Require clients to issue AUTH <PASSWORD> before processing any
# other commands.
# @param [String] save_db_to_disk Set if save db to disk.
# @param [String] save_db_to_disk_interval Save the dataset every N seconds if there are at least M changes in the dataset.
# @param [String] service_manage Specify if the service should be part of the catalog.
# @param [String] service_enable Enable/disable daemon at boot.
# @param [String] service_ensure Specify if the server should be running.
# @param [String] service_group Specify which group to run as.
# @param [String] service_hasrestart Does the init script support restart?
# @param [String] service_hasstatus Does the init script support status?
# @param [String] service_name Specify the service name for Init or Systemd.
# @param [String] service_provider Specify the service provider to use
# @param [String] service_user Specify which user to run as.
# @param [String] set_max_intset_entries Sets the limit in the size of the set in order to use the
# special intset memory saving encoding.
# Default: 512
# @param [String] slave_priority The priority number for slave promotion by Sentinel
# @param [String] slave_read_only You can configure a slave instance to accept writes or not.
# @param [String] slave_serve_stale_data When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all kinds of commands
# except INFO and SLAVEOF.
#
# Default: true
#
# @param [String] slaveof Use slaveof to make a Redis instance a copy of another Redis server.
# @param [String] slowlog_log_slower_than Tells Redis the execution time, in microseconds, a command
# must exceed in order to get logged.
# Default: 10000
#
# @param [String] slowlog_max_len Sets the maximum length of the slow log; older entries are
# discarded once the limit is reached.
# Default: 1024
#
# @param [String] stop_writes_on_bgsave_error If false then Redis will continue to work as usual even if there
# are problems with disk, permissions, and so forth.
# Default: true
#
# @param [String] syslog_enabled Enable/disable logging to the system logger.
# @param [String] syslog_facility Specify the syslog facility.
# Must be USER or between LOCAL0-LOCAL7.
# Default: undef
#
# @param [String] tcp_backlog Sets the TCP backlog
# @param [String] tcp_keepalive TCP keepalive.
# @param [String] timeout Close the connection after a client is idle for N seconds (0 to disable).
# @param [String] ulimit Limit the use of system-wide resources.
# @param [String] unixsocket Define unix socket path
# @param [String] unixsocketperm Define unix socket file permissions
# @param [String] workdir The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# Default: /var/lib/redis/
# @param [String] workdir_mode Adjust mode for data directory.
# @param [String] zset_max_ziplist_entries Set max entries for sorted sets.
# @param [String] zset_max_ziplist_value Set max values for sorted sets.
# @param [String] cluster_enabled Enables redis 3.0 cluster functionality
# @param [String] cluster_config_file Config file for saving cluster nodes configuration. This file is never touched by humans.
# Only set if cluster_enabled is true
# Default: nodes.conf
# @param [String] cluster_node_timeout Node timeout
# Only set if cluster_enabled is true
# Default: 5000
class redis (
$activerehashing = $::redis::params::activerehashing,
$aof_load_truncated = $::redis::params::aof_load_truncated,
$aof_rewrite_incremental_fsync = $::redis::params::aof_rewrite_incremental_fsync,
$appendfilename = $::redis::params::appendfilename,
$appendfsync = $::redis::params::appendfsync,
$appendonly = $::redis::params::appendonly,
$auto_aof_rewrite_min_size = $::redis::params::auto_aof_rewrite_min_size,
$auto_aof_rewrite_percentage = $::redis::params::auto_aof_rewrite_percentage,
$bind = $::redis::params::bind,
$output_buffer_limit_slave = $::redis::params::output_buffer_limit_slave,
$output_buffer_limit_pubsub = $::redis::params::output_buffer_limit_pubsub,
$conf_template = $::redis::params::conf_template,
$config_dir = $::redis::params::config_dir,
$config_dir_mode = $::redis::params::config_dir_mode,
$config_file = $::redis::params::config_file,
$config_file_mode = $::redis::params::config_file_mode,
$config_file_orig = $::redis::params::config_file_orig,
$config_group = $::redis::params::config_group,
$config_owner = $::redis::params::config_owner,
$daemonize = $::redis::params::daemonize,
$databases = $::redis::params::databases,
$default_install = $::redis::params::default_install,
$dbfilename = $::redis::params::dbfilename,
$extra_config_file = $::redis::params::extra_config_file,
$hash_max_ziplist_entries = $::redis::params::hash_max_ziplist_entries,
$hash_max_ziplist_value = $::redis::params::hash_max_ziplist_value,
$hll_sparse_max_bytes = $::redis::params::hll_sparse_max_bytes,
$hz = $::redis::params::hz,
$latency_monitor_threshold = $::redis::params::latency_monitor_threshold,
$list_max_ziplist_entries = $::redis::params::list_max_ziplist_entries,
$list_max_ziplist_value = $::redis::params::list_max_ziplist_value,
$log_dir = $::redis::params::log_dir,
$log_dir_mode = $::redis::params::log_dir_mode,
$log_file = $::redis::params::log_file,
$log_level = $::redis::params::log_level,
$manage_package = $::redis::params::manage_package,
$manage_repo = $::redis::params::manage_repo,
$masterauth = $::redis::params::masterauth,
$maxclients = $::redis::params::maxclients,
$maxmemory = $::redis::params::maxmemory,
$maxmemory_policy = $::redis::params::maxmemory_policy,
$maxmemory_samples = $::redis::params::maxmemory_samples,
$min_slaves_max_lag = $::redis::params::min_slaves_max_lag,
$min_slaves_to_write = $::redis::params::min_slaves_to_write,
$no_appendfsync_on_rewrite = $::redis::params::no_appendfsync_on_rewrite,
$notify_keyspace_events = $::redis::params::notify_keyspace_events,
$notify_service = $::redis::params::notify_service,
$managed_by_cluster_manager = $::redis::params::managed_by_cluster_manager,
$package_ensure = $::redis::params::package_ensure,
$package_name = $::redis::params::package_name,
$pid_file = $::redis::params::pid_file,
$port = $::redis::params::port,
+ $protected_mode = $::redis::params::protected_mode,
$ppa_repo = $::redis::params::ppa_repo,
$rdbcompression = $::redis::params::rdbcompression,
$repl_backlog_size = $::redis::params::repl_backlog_size,
$repl_backlog_ttl = $::redis::params::repl_backlog_ttl,
$repl_disable_tcp_nodelay = $::redis::params::repl_disable_tcp_nodelay,
$repl_ping_slave_period = $::redis::params::repl_ping_slave_period,
$repl_timeout = $::redis::params::repl_timeout,
$requirepass = $::redis::params::requirepass,
$save_db_to_disk = $::redis::params::save_db_to_disk,
$save_db_to_disk_interval = $::redis::params::save_db_to_disk_interval,
$service_enable = $::redis::params::service_enable,
$service_ensure = $::redis::params::service_ensure,
$service_group = $::redis::params::service_group,
$service_hasrestart = $::redis::params::service_hasrestart,
$service_hasstatus = $::redis::params::service_hasstatus,
$service_manage = $::redis::params::service_manage,
$service_name = $::redis::params::service_name,
$service_provider = $::redis::params::service_provider,
$service_user = $::redis::params::service_user,
$set_max_intset_entries = $::redis::params::set_max_intset_entries,
$slave_priority = $::redis::params::slave_priority,
$slave_read_only = $::redis::params::slave_read_only,
$slave_serve_stale_data = $::redis::params::slave_serve_stale_data,
$slaveof = $::redis::params::slaveof,
$slowlog_log_slower_than = $::redis::params::slowlog_log_slower_than,
$slowlog_max_len = $::redis::params::slowlog_max_len,
$stop_writes_on_bgsave_error = $::redis::params::stop_writes_on_bgsave_error,
$syslog_enabled = $::redis::params::syslog_enabled,
$syslog_facility = $::redis::params::syslog_facility,
$tcp_backlog = $::redis::params::tcp_backlog,
$tcp_keepalive = $::redis::params::tcp_keepalive,
$timeout = $::redis::params::timeout,
$unixsocket = $::redis::params::unixsocket,
$unixsocketperm = $::redis::params::unixsocketperm,
$ulimit = $::redis::params::ulimit,
$workdir = $::redis::params::workdir,
$workdir_mode = $::redis::params::workdir_mode,
$zset_max_ziplist_entries = $::redis::params::zset_max_ziplist_entries,
$zset_max_ziplist_value = $::redis::params::zset_max_ziplist_value,
$cluster_enabled = $::redis::params::cluster_enabled,
$cluster_config_file = $::redis::params::cluster_config_file,
$cluster_node_timeout = $::redis::params::cluster_node_timeout,
) inherits redis::params {
contain ::redis::preinstall
contain ::redis::install
contain ::redis::config
contain ::redis::service
Class['redis::preinstall']
-> Class['redis::install']
-> Class['redis::config']
if $::redis::notify_service {
Class['redis::config']
~> Class['redis::service']
}
if $::puppetversion and versioncmp($::puppetversion, '4.0.0') < 0 {
warning("Puppet 3 is EOL as of 01/01/2017. The 3.x releases of this module are the last that will support Puppet 3.\nFor more information, see https://github.com/arioch/puppet-redis#puppet-3-support")
}
exec { 'systemd-reload-redis':
command => 'systemctl daemon-reload',
refreshonly => true,
path => '/bin:/usr/bin:/usr/local/bin',
}
}
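The `protected_mode` parameter introduced in this hunk can be set directly on the class. A minimal sketch (the values below are illustrative only, not part of this patch; leaving `bind` unset is what makes `protected_mode` relevant, per the parameter documentation above):

```puppet
# Hypothetical node definition: no explicit bind, so Redis listens on
# all interfaces. Keeping protected_mode at 'yes' (the module default)
# makes Redis refuse remote clients until a password is configured.
class { '::redis':
  bind           => undef,
  protected_mode => 'yes',
  requirepass    => 'example-password',
}
```

Setting `protected_mode => 'no'` without `bind` or `requirepass` would expose the instance to any client that can reach the port, so it should be reserved for trusted networks.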
diff --git a/manifests/instance.pp b/manifests/instance.pp
index 72badfc..d566139 100644
--- a/manifests/instance.pp
+++ b/manifests/instance.pp
@@ -1,342 +1,347 @@
# redis::instance
#
# This is a defined type that allows the configuration of
# multiple redis instances on one machine without conflicts
#
# @summary Allows the configuration of multiple redis configurations on one machine
#
# @example
# redis::instance {'6380':
# port => '6380',
# }
#
# @param [String] activerehashing Enable/disable active rehashing.
# @param [String] aof_load_truncated Enable/disable loading truncated AOF file
# @param [String] aof_rewrite_incremental_fsync Enable/disable fsync for AOF file
# @param [String] appendfilename The name of the append only file
# @param [String] appendfsync Adjust fsync mode. Valid options: always, everysec, no. Default: everysec
# @param [String] appendonly Enable/disable appendonly mode.
# @param [String] auto_aof_rewrite_min_size Adjust minimum size for auto-aof-rewrite.
# @param [String] auto_aof_rewrite_percentage Adjust percentage for auto-aof-rewrite.
# @param [String] bind Configure which IP address to listen on.
# @param [String] config_dir Directory containing the configuration files.
# @param [String] config_dir_mode Adjust mode for directory containing configuration files.
# @param [String] config_file_orig The location and name of a config file that provides the source of the main configuration file.
# @param [String] config_file Adjust main configuration file.
# @param [String] config_file_mode Adjust permissions for configuration files.
# @param [String] config_group Adjust filesystem group for config files.
# @param [String] config_owner Adjust filesystem owner for config files.
# @param [String] conf_template Define which template to use.
# @param [String] daemonize Have Redis run as a daemon.
# @param [String] databases Set the number of databases.
# @param [String] dbfilename The filename where to dump the DB
# @param [String] extra_config_file Optional extra configuration file to include.
# @param [String] hash_max_ziplist_entries Set max ziplist entries for hashes.
# @param [String] hash_max_ziplist_value Set max ziplist values for hashes.
# @param [String] hll_sparse_max_bytes HyperLogLog sparse representation bytes limit
# @param [String] hz Set redis background tasks frequency
# @param [String] latency_monitor_threshold Latency monitoring threshold in milliseconds
# @param [String] list_max_ziplist_entries Set max ziplist entries for lists.
# @param [String] list_max_ziplist_value Set max ziplist values for lists.
# @param [String] log_dir Specify directory where to write log entries.
# @param [String] log_dir_mode Adjust mode for directory containing log files.
# @param [String] log_file Specify file where to write log entries.
# @param [String] log_level Specify the server verbosity level.
# @param [String] masterauth If the master is password protected (using the "requirepass" configuration directive), tell the slave to authenticate before starting replication.
# @param [String] maxclients Set the max number of connected clients at the same time.
# @param [String] maxmemory Don't use more memory than the specified amount of bytes.
# @param [String] maxmemory_policy How Redis will select what to remove when maxmemory is reached.
# @param [String] maxmemory_samples Select as well the sample size to check.
# @param [String] min_slaves_max_lag The maximum replication lag, in seconds, allowed for min_slaves_to_write.
# @param [String] min_slaves_to_write Minimum number of slaves to be in "online" state
# @param [String] no_appendfsync_on_rewrite If you have latency problems turn this to 'true'. Otherwise leave it as 'false'.
# @param [String] notify_keyspace_events Which keyspace events to notify Pub/Sub clients about.
# @param [String] pid_file Where to store the pid.
# @param [String] port Configure which port to listen on.
+# @param [String] protected_mode Whether protected mode is enabled or not. Only applicable when no bind is set.
# @param [String] rdbcompression Enable/disable compression of string objects using LZF when dumping.
# @param [String] repl_backlog_size The replication backlog size
# @param [String] repl_backlog_ttl The number of seconds to elapse before freeing backlog buffer
# @param [String] repl_disable_tcp_nodelay Enable/disable TCP_NODELAY on the slave socket after SYNC
# @param [String] repl_ping_slave_period Slaves send PINGs to the server at a predefined interval, which can be changed with this option.
# @param [String] repl_timeout Set the replication timeout for bulk transfer I/O and data/ping responses during SYNC.
# @param [String] requirepass Require clients to issue AUTH <PASSWORD> before processing any
# other commands.
# @param [String] save_db_to_disk Set if save db to disk.
# @param [String] save_db_to_disk_interval Save the dataset every N seconds if there are at least M changes in the dataset.
# @param [String] service_enable Enable/disable daemon at boot.
# @param [String] service_ensure Specify if the server should be running.
# @param [String] service_group Specify which group to run as.
# @param [String] service_hasrestart Does the init script support restart?
# @param [String] service_hasstatus Does the init script support status?
# @param [String] service_user Specify which user to run as.
# @param [String] set_max_intset_entries Sets the limit in the size of the set in order to use the
# special intset memory saving encoding.
# Default: 512
# @param [String] slave_priority The priority number for slave promotion by Sentinel
# @param [String] slave_read_only You can configure a slave instance to accept writes or not.
# @param [String] slave_serve_stale_data When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all kinds of commands
# except INFO and SLAVEOF.
#
# Default: true
#
# @param [String] slaveof Use slaveof to make a Redis instance a copy of another Redis server.
# @param [String] slowlog_log_slower_than Tells Redis the execution time, in microseconds, a command
# must exceed in order to get logged.
# Default: 10000
#
# @param [String] slowlog_max_len Sets the maximum length of the slow log; older entries are
# discarded once the limit is reached.
# Default: 1024
#
# @param [String] stop_writes_on_bgsave_error If false then Redis will continue to work as usual even if there
# are problems with disk, permissions, and so forth.
# Default: true
#
# @param [String] syslog_enabled Enable/disable logging to the system logger.
# @param [String] syslog_facility Specify the syslog facility.
# Must be USER or between LOCAL0-LOCAL7.
# Default: undef
#
# @param [String] tcp_backlog Sets the TCP backlog
# @param [String] tcp_keepalive TCP keepalive.
# @param [String] timeout Close the connection after a client is idle for N seconds (0 to disable).
# @param [String] ulimit Limit the use of system-wide resources.
# @param [String] unixsocket Define unix socket path
# @param [String] unixsocketperm Define unix socket file permissions
# @param [String] workdir The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
# Default: /var/lib/redis/
# @param [String] workdir_mode Adjust mode for data directory.
# @param [String] zset_max_ziplist_entries Set max entries for sorted sets.
# @param [String] zset_max_ziplist_value Set max values for sorted sets.
# @param [String] cluster_enabled Enables redis 3.0 cluster functionality
# @param [String] cluster_config_file Config file for saving cluster nodes configuration. This file is never touched by humans.
# Only set if cluster_enabled is true
# Default: nodes.conf
# @param [String] cluster_node_timeout Node timeout
# Only set if cluster_enabled is true
# Default: 5000
define redis::instance(
$activerehashing = $::redis::activerehashing,
$aof_load_truncated = $::redis::aof_load_truncated,
$aof_rewrite_incremental_fsync = $::redis::aof_rewrite_incremental_fsync,
$appendfilename = $::redis::appendfilename,
$appendfsync = $::redis::appendfsync,
$appendonly = $::redis::appendonly,
$auto_aof_rewrite_min_size = $::redis::auto_aof_rewrite_min_size,
$auto_aof_rewrite_percentage = $::redis::auto_aof_rewrite_percentage,
$bind = $::redis::bind,
$output_buffer_limit_slave = $::redis::output_buffer_limit_slave,
$output_buffer_limit_pubsub = $::redis::output_buffer_limit_pubsub,
$conf_template = $::redis::conf_template,
$config_dir = $::redis::config_dir,
$config_dir_mode = $::redis::config_dir_mode,
$config_file = $::redis::config_file,
$config_file_mode = $::redis::config_file_mode,
$config_file_orig = $::redis::config_file_orig,
$config_group = $::redis::config_group,
$config_owner = $::redis::config_owner,
$daemonize = $::redis::daemonize,
$databases = $::redis::databases,
$dbfilename = $::redis::dbfilename,
$extra_config_file = $::redis::extra_config_file,
$hash_max_ziplist_entries = $::redis::hash_max_ziplist_entries,
$hash_max_ziplist_value = $::redis::hash_max_ziplist_value,
$hll_sparse_max_bytes = $::redis::hll_sparse_max_bytes,
$hz = $::redis::hz,
$latency_monitor_threshold = $::redis::latency_monitor_threshold,
$list_max_ziplist_entries = $::redis::list_max_ziplist_entries,
$list_max_ziplist_value = $::redis::list_max_ziplist_value,
$log_dir = $::redis::log_dir,
$log_dir_mode = $::redis::log_dir_mode,
$log_level = $::redis::log_level,
$minimum_version = $::redis::minimum_version,
$masterauth = $::redis::masterauth,
$maxclients = $::redis::maxclients,
$maxmemory = $::redis::maxmemory,
$maxmemory_policy = $::redis::maxmemory_policy,
$maxmemory_samples = $::redis::maxmemory_samples,
$min_slaves_max_lag = $::redis::min_slaves_max_lag,
$min_slaves_to_write = $::redis::min_slaves_to_write,
$no_appendfsync_on_rewrite = $::redis::no_appendfsync_on_rewrite,
$notify_keyspace_events = $::redis::notify_keyspace_events,
$managed_by_cluster_manager = $::redis::managed_by_cluster_manager,
$package_ensure = $::redis::package_ensure,
$port = $::redis::port,
+ $protected_mode = $::redis::protected_mode,
$rdbcompression = $::redis::rdbcompression,
$repl_backlog_size = $::redis::repl_backlog_size,
$repl_backlog_ttl = $::redis::repl_backlog_ttl,
$repl_disable_tcp_nodelay = $::redis::repl_disable_tcp_nodelay,
$repl_ping_slave_period = $::redis::repl_ping_slave_period,
$repl_timeout = $::redis::repl_timeout,
$requirepass = $::redis::requirepass,
$save_db_to_disk = $::redis::save_db_to_disk,
$save_db_to_disk_interval = $::redis::save_db_to_disk_interval,
$service_user = $::redis::service_user,
$set_max_intset_entries = $::redis::set_max_intset_entries,
$slave_priority = $::redis::slave_priority,
$slave_read_only = $::redis::slave_read_only,
$slave_serve_stale_data = $::redis::slave_serve_stale_data,
$slaveof = $::redis::slaveof,
$slowlog_log_slower_than = $::redis::slowlog_log_slower_than,
$slowlog_max_len = $::redis::slowlog_max_len,
$stop_writes_on_bgsave_error = $::redis::stop_writes_on_bgsave_error,
$syslog_enabled = $::redis::syslog_enabled,
$syslog_facility = $::redis::syslog_facility,
$tcp_backlog = $::redis::tcp_backlog,
$tcp_keepalive = $::redis::tcp_keepalive,
$timeout = $::redis::timeout,
$unixsocketperm = $::redis::unixsocketperm,
$ulimit = $::redis::ulimit,
$workdir_mode = $::redis::workdir_mode,
$zset_max_ziplist_entries = $::redis::zset_max_ziplist_entries,
$zset_max_ziplist_value = $::redis::zset_max_ziplist_value,
$cluster_enabled = $::redis::cluster_enabled,
$cluster_config_file = $::redis::cluster_config_file,
$cluster_node_timeout = $::redis::cluster_node_timeout,
$service_ensure = $::redis::service_ensure,
$service_enable = $::redis::service_enable,
$service_group = $::redis::service_group,
$service_hasrestart = $::redis::service_hasrestart,
$service_hasstatus = $::redis::service_hasstatus,
# Defaults for redis::instance
$manage_service_file = true,
$log_file = undef,
$pid_file = "/var/run/redis/redis-server-${name}.pid",
$unixsocket = "/var/run/redis/redis-server-${name}.sock",
$workdir = "${::redis::workdir}/redis-server-${name}",
) {
if $title == 'default' {
$redis_file_name_orig = $config_file_orig
$redis_file_name = $config_file
} else {
$redis_server_name = "redis-server-${name}"
$redis_file_name_orig = sprintf('%s/%s.%s', dirname($config_file_orig), $redis_server_name, 'conf.puppet')
$redis_file_name = sprintf('%s/%s.%s', dirname($config_file), $redis_server_name, 'conf')
}
if $log_dir != $::redis::log_dir {
file { $log_dir:
ensure => directory,
group => $service_group,
mode => $log_dir_mode,
owner => $service_user,
}
}
$_real_log_file = $log_file ? {
undef => "${log_dir}/redis-server-${name}.log",
default => $log_file,
}
if $workdir != $::redis::workdir {
file { $workdir:
ensure => directory,
group => $service_group,
mode => $workdir_mode,
owner => $service_user,
}
}
if $manage_service_file {
$service_provider_lookup = pick(getvar_emptystring('service_provider'), false)
if $service_provider_lookup == 'systemd' {
file { "/etc/systemd/system/${redis_server_name}.service":
ensure => file,
owner => 'root',
group => 'root',
mode => '0644',
content => template('redis/service_templates/redis.service.erb'),
}
~> Exec['systemd-reload-redis']
if $title != 'default' {
service { $redis_server_name:
ensure => $service_ensure,
enable => $service_enable,
hasrestart => $service_hasrestart,
hasstatus => $service_hasstatus,
subscribe => [
File["/etc/systemd/system/${redis_server_name}.service"],
Exec["cp -p ${redis_file_name_orig} ${redis_file_name}"],
],
}
}
} else {
file { "/etc/init.d/${redis_server_name}":
ensure => file,
mode => '0755',
content => template("redis/service_templates/redis.${::osfamily}.erb"),
}
if $title != 'default' {
service { $redis_server_name:
ensure => $service_ensure,
enable => $service_enable,
hasrestart => $service_hasrestart,
hasstatus => $service_hasstatus,
subscribe => [
File["/etc/init.d/${redis_server_name}"],
Exec["cp -p ${redis_file_name_orig} ${redis_file_name}"],
],
}
}
}
}
File {
owner => $config_owner,
group => $config_group,
mode => $config_file_mode,
}
file {$redis_file_name_orig:
ensure => file,
}
exec {"cp -p ${redis_file_name_orig} ${redis_file_name}":
path => '/usr/bin:/bin',
subscribe => File[$redis_file_name_orig],
refreshonly => true,
}
if $package_ensure =~ /^([0-9]+:)?[0-9]+\.[0-9]/ {
if ':' in $package_ensure {
$_redis_version_real = split($package_ensure, ':')
$redis_version_real = $_redis_version_real[1]
} else {
$redis_version_real = $package_ensure
}
} else {
$redis_version_real = pick(getvar_emptystring('redis_server_version'), $minimum_version)
}
if ($redis_version_real and $conf_template == 'redis/redis.conf.erb') {
case $redis_version_real {
/^2.4./: {
File[$redis_file_name_orig] { content => template('redis/redis.conf.2.4.10.erb') }
}
/^2.8./: {
File[$redis_file_name_orig] { content => template('redis/redis.conf.2.8.erb') }
}
+ /^3.2./: {
+ File[$redis_file_name_orig] { content => template('redis/redis.conf.3.2.erb') }
+ }
default: {
File[$redis_file_name_orig] { content => template($conf_template) }
}
}
} else {
File[$redis_file_name_orig] { content => template($conf_template) }
}
}
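Taken together, the hunks above mean a `redis::instance` inherits `protected_mode` from the main class and, when `package_ensure` carries a 3.2 version (with or without an epoch prefix, which the `split` on `:` strips), the new `redis.conf.3.2.erb` template is selected. A hypothetical sketch of both behaviors (the port and version values are illustrative, not from this patch):

```puppet
# Hypothetical second instance alongside the default one. With an
# epoch-qualified package_ensure such as '2:3.2.6-1', the version
# regex matches, the epoch is stripped, and the /^3.2./ case picks
# the redis.conf.3.2.erb template added in this diff.
class { '::redis':
  package_ensure => '2:3.2.6-1',
}

redis::instance { '6380':
  port           => '6380',
  protected_mode => 'no',
}
```

Non-default instances get their own config file (`redis-server-6380.conf`), systemd unit, and service, all derived from `$name`, while the `'default'` title keeps writing the main `config_file`.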
diff --git a/manifests/params.pp b/manifests/params.pp
index c5af7e8..2714fdd 100644
--- a/manifests/params.pp
+++ b/manifests/params.pp
@@ -1,322 +1,323 @@
# = Class: redis::params
#
# This class provides a number of parameters.
#
class redis::params {
# Generic
$manage_repo = false
$manage_package = true
$managed_by_cluster_manager = false
# redis.conf.erb
$activerehashing = true
$aof_load_truncated = true
$aof_rewrite_incremental_fsync = true
$appendfilename = 'appendonly.aof'
$appendfsync = 'everysec'
$appendonly = false
$auto_aof_rewrite_min_size = '64mb'
$auto_aof_rewrite_percentage = 100
$bind = '127.0.0.1'
$output_buffer_limit_slave = '256mb 64mb 60'
$output_buffer_limit_pubsub = '32mb 8mb 60'
$conf_template = 'redis/redis.conf.erb'
$default_install = true
$databases = 16
$dbfilename = 'dump.rdb'
$extra_config_file = undef
$hash_max_ziplist_entries = 512
$hash_max_ziplist_value = 64
$hll_sparse_max_bytes = 3000
$hz = 10
$latency_monitor_threshold = 0
$list_max_ziplist_entries = 512
$list_max_ziplist_value = 64
$log_dir = '/var/log/redis'
$log_file = '/var/log/redis/redis.log'
$log_level = 'notice'
$manage_service_file = false
$maxclients = 10000
$maxmemory = undef
$maxmemory_policy = undef
$maxmemory_samples = undef
$no_appendfsync_on_rewrite = false
$notify_keyspace_events = undef
$notify_service = true
$port = 6379
+ $protected_mode = 'yes'
$rdbcompression = true
$requirepass = undef
$save_db_to_disk = true
$save_db_to_disk_interval = {'900' =>'1', '300' => '10', '60' => '10000'}
$sentinel_auth_pass = undef
$sentinel_bind = undef
$sentinel_config_file_mode = '0644'
$sentinel_config_group = 'root'
$sentinel_config_owner = 'redis'
$sentinel_conf_template = 'redis/redis-sentinel.conf.erb'
$sentinel_down_after = 30000
$sentinel_failover_timeout = 180000
$sentinel_master_name = 'mymaster'
$sentinel_parallel_sync = 1
$sentinel_port = 26379
$sentinel_quorum = 2
$sentinel_service_name = 'redis-sentinel'
$sentinel_working_dir = '/tmp'
$sentinel_init_template = 'redis/redis-sentinel.init.erb'
$sentinel_pid_file = '/var/run/redis/redis-sentinel.pid'
$sentinel_notification_script = undef
$sentinel_client_reconfig_script = undef
$service_provider = undef
$set_max_intset_entries = 512
$slave_priority = 100
$slowlog_log_slower_than = 10000
$slowlog_max_len = 1024
$stop_writes_on_bgsave_error = true
$syslog_enabled = undef
$syslog_facility = undef
$tcp_backlog = 511
$tcp_keepalive = 0
$timeout = 0
$ulimit = 65536
$unixsocket = '/var/run/redis/redis.sock'
$unixsocketperm = 755
$zset_max_ziplist_entries = 128
$zset_max_ziplist_value = 64
# redis.conf.erb - replication
$masterauth = undef
$min_slaves_to_write = 0
$min_slaves_max_lag = 10
$repl_backlog_size = '1mb'
$repl_backlog_ttl = 3600
$repl_disable_tcp_nodelay = false
$repl_ping_slave_period = 10
$repl_timeout = 60
$slave_read_only = true
$slave_serve_stale_data = true
$slaveof = undef
# redis.conf.erb - redis 3.0 clustering
$cluster_enabled = false
$cluster_config_file = 'nodes.conf'
$cluster_node_timeout = 5000
case $::osfamily {
'Debian': {
$config_dir = '/etc/redis'
$config_dir_mode = '0755'
$config_file = '/etc/redis/redis.conf'
$config_file_mode = '0644'
$config_file_orig = '/etc/redis/redis.conf.puppet'
$config_owner = 'redis'
$daemonize = true
$log_dir_mode = '0755'
$package_ensure = 'present'
$package_name = 'redis-server'
$pid_file = '/var/run/redis/redis-server.pid'
$sentinel_config_file = '/etc/redis/sentinel.conf'
$sentinel_config_file_orig = '/etc/redis/redis-sentinel.conf.puppet'
$sentinel_daemonize = true
$sentinel_init_script = '/etc/init.d/redis-sentinel'
$sentinel_package_name = 'redis-sentinel'
$sentinel_package_ensure = 'present'
$service_manage = true
$service_enable = true
$service_ensure = 'running'
$service_group = 'redis'
$service_hasrestart = true
$service_hasstatus = true
$service_name = 'redis-server'
$service_user = 'redis'
$ppa_repo = 'ppa:chris-lea/redis-server'
$workdir = '/var/lib/redis'
$workdir_mode = '0750'
case $::operatingsystem {
'Ubuntu': {
$config_group = 'redis'
case $::operatingsystemmajrelease {
'14.04': {
# upstream package is 2.8.4
$minimum_version = '2.8.4'
}
'16.04': {
# upstream package is 3.0.3
$minimum_version = '3.0.3'
}
default: {
warning("Ubuntu release ${::operatingsystemmajrelease} isn't officially supported by this module, but we will give it a shot")
$minimum_version = '2.8.5'
}
}
}
default: {
$config_group = 'root'
# Debian standard package is 2.4.14,
# but the dotdeb repo provides 3.2.5
$minimum_version = '3.2.5'
}
}
}
'RedHat': {
$config_dir = '/etc/redis'
$config_dir_mode = '0755'
$config_file = '/etc/redis.conf'
$config_file_mode = '0644'
$config_file_orig = '/etc/redis.conf.puppet'
$config_group = 'root'
$config_owner = 'redis'
$daemonize = true
$log_dir_mode = '0755'
$package_ensure = 'present'
$package_name = 'redis'
$pid_file = '/var/run/redis/redis.pid'
$sentinel_config_file = '/etc/redis-sentinel.conf'
$sentinel_config_file_orig = '/etc/redis-sentinel.conf.puppet'
$sentinel_daemonize = false
$sentinel_init_script = undef
$sentinel_package_name = 'redis'
$sentinel_package_ensure = 'present'
$service_manage = true
$service_enable = true
$service_ensure = 'running'
$service_hasrestart = true
$service_hasstatus = true
$service_name = 'redis'
$service_user = 'redis'
$ppa_repo = undef
$workdir = '/var/lib/redis'
$workdir_mode = '0755'
case $::operatingsystemmajrelease {
'6': {
# CentOS 6 EPEL package was just updated to 3.2.10
# https://bugzilla.redhat.com/show_bug.cgi?id=923970
$minimum_version = '3.2.10'
$service_group = 'root'
}
'7': {
# CentOS 7 EPEL package is 3.2.3
$minimum_version = '3.2.3'
$service_group = 'redis'
}
default: {
fail("Not sure what Redis version is available upstream on your release: ${::operatingsystemmajrelease}")
}
}
}
'FreeBSD': {
$config_dir = '/usr/local/etc/redis'
$config_dir_mode = '0755'
$config_file = '/usr/local/etc/redis.conf'
$config_file_mode = '0644'
$config_file_orig = '/usr/local/etc/redis.conf.puppet'
$config_group = 'wheel'
$config_owner = 'redis'
$daemonize = true
$log_dir_mode = '0755'
$package_ensure = 'present'
$package_name = 'redis'
$pid_file = '/var/run/redis/redis.pid'
$sentinel_config_file = '/usr/local/etc/redis-sentinel.conf'
$sentinel_config_file_orig = '/usr/local/etc/redis-sentinel.conf.puppet'
$sentinel_daemonize = true
$sentinel_init_script = undef
$sentinel_package_name = 'redis'
$sentinel_package_ensure = 'present'
$service_manage = true
$service_enable = true
$service_ensure = 'running'
$service_group = 'redis'
$service_hasrestart = true
$service_hasstatus = true
$service_name = 'redis'
$service_user = 'redis'
$ppa_repo = undef
$workdir = '/var/db/redis'
$workdir_mode = '0750'
# pkg version
$minimum_version = '3.2.4'
}
'Suse': {
$config_dir = '/etc/redis'
$config_dir_mode = '0750'
$config_file = '/etc/redis/redis-server.conf'
$config_file_mode = '0644'
$config_group = 'redis'
$config_owner = 'redis'
$daemonize = true
$log_dir_mode = '0750'
$package_ensure = 'present'
$package_name = 'redis'
$pid_file = '/var/run/redis/redis-server.pid'
$sentinel_config_file = '/etc/redis/redis-sentinel.conf'
$sentinel_config_file_orig = '/etc/redis/redis-sentinel.conf.puppet'
$sentinel_daemonize = true
$sentinel_init_script = undef
$sentinel_package_name = 'redis'
$sentinel_package_ensure = 'present'
$service_manage = true
$service_enable = true
$service_ensure = 'running'
$service_group = 'redis'
$service_hasrestart = true
$service_hasstatus = true
$service_name = 'redis'
$service_user = 'redis'
$ppa_repo = undef
$workdir = '/var/lib/redis'
$workdir_mode = '0750'
# SUSE package version
$minimum_version = '3.0.5'
}
'Archlinux': {
$config_dir = '/etc/redis'
$config_dir_mode = '0755'
$config_file = '/etc/redis/redis.conf'
$config_file_mode = '0644'
$config_file_orig = '/etc/redis/redis.conf.puppet'
$config_group = 'root'
$config_owner = 'root'
$daemonize = true
$log_dir_mode = '0755'
$package_ensure = 'present'
$package_name = 'redis'
$pid_file = '/var/run/redis.pid'
$sentinel_config_file = '/etc/redis/redis-sentinel.conf'
$sentinel_config_file_orig = '/etc/redis/redis-sentinel.conf.puppet'
$sentinel_daemonize = true
$sentinel_init_script = undef
$sentinel_package_name = 'redis'
$sentinel_package_ensure = 'present'
$service_manage = true
$service_enable = true
$service_ensure = 'running'
$service_group = 'redis'
$service_hasrestart = true
$service_hasstatus = true
$service_name = 'redis'
$service_user = 'redis'
$ppa_repo = undef
$workdir = '/var/lib/redis'
$workdir_mode = '0750'
# pkg version
$minimum_version = '3.2.4'
}
default: {
fail("Operating system ${::operatingsystem} is not supported yet.")
}
}
}
diff --git a/spec/classes/redis_centos_6_spec.rb b/spec/classes/redis_centos_6_spec.rb
index 3da820c..b78ee6b 100644
--- a/spec/classes/redis_centos_6_spec.rb
+++ b/spec/classes/redis_centos_6_spec.rb
@@ -1,75 +1,79 @@
require 'spec_helper'
describe 'redis' do
context 'on CentOS 6' do
let(:facts) {
centos_6_facts
}
context 'should set CentOS specific values' do
context 'when $::redis_server_version fact is not present and package_ensure is a newer version (3.2.1) (older features enabled)' do
let(:facts) {
centos_6_facts.merge({
:redis_server_version => nil,
:puppetversion => Puppet.version,
})
}
let (:params) { { :package_ensure => '3.2.1' } }
it { should contain_file('/etc/redis.conf.puppet').without('content' => /^hash-max-zipmap-entries/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis.conf.puppet').with('content' => /^protected-mode/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is not present and package_ensure is a newer version (4.0-rc3) (older features enabled)' do
let(:facts) {
centos_6_facts.merge({
:redis_server_version => nil,
:puppetversion => Puppet.version,
})
}
let (:params) { { :package_ensure => '4.0-rc3' } }
it { should contain_file('/etc/redis.conf.puppet').without('content' => /^hash-max-zipmap-entries/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis.conf.puppet').without('content' => /^protected-mode/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is present but the older version (older features not enabled)' do
let(:facts) {
centos_6_facts.merge({
:redis_server_version => '2.4.10',
:puppetversion => Puppet.version,
})
}
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^hash-max-zipmap-entries/) }
it { should contain_file('/etc/redis.conf.puppet').without('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis.conf.puppet').without('content' => /^protected-mode/) }
it { should contain_file('/etc/redis.conf.puppet').without('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is present but a newer version (older features enabled)' do
let(:facts) {
centos_6_facts.merge({
:redis_server_version => '3.2.1',
:puppetversion => Puppet.version,
})
}
it { should contain_file('/etc/redis.conf.puppet').without('content' => /^hash-max-zipmap-entries/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis.conf.puppet').with('content' => /^protected-mode/) }
it { should contain_file('/etc/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
end
end
end
diff --git a/spec/classes/redis_spec.rb b/spec/classes/redis_spec.rb
index edc3858..6424525 100644
--- a/spec/classes/redis_spec.rb
+++ b/spec/classes/redis_spec.rb
@@ -1,1137 +1,1150 @@
require 'spec_helper'
describe 'redis', :type => :class do
on_supported_os.each do |os, facts|
context "on #{os}" do
let(:facts) {
facts.merge({
:redis_server_version => '3.2.3',
})
}
let(:package_name) { manifest_vars[:package_name] }
let(:service_name) { manifest_vars[:service_name] }
let(:config_file_orig) { manifest_vars[:config_file_orig] }
describe 'without parameters' do
it { is_expected.to create_class('redis') }
it { is_expected.to contain_class('redis::preinstall') }
it { is_expected.to contain_class('redis::install') }
it { is_expected.to contain_class('redis::config') }
it { is_expected.to contain_class('redis::service') }
it { is_expected.to contain_package(package_name).with_ensure('present') }
it { is_expected.to contain_file(config_file_orig).with_ensure('file') }
it { is_expected.to contain_file(config_file_orig).without_content(/undef/) }
it do
is_expected.to contain_service(service_name).with(
'ensure' => 'running',
'enable' => 'true',
'hasrestart' => 'true',
'hasstatus' => 'true'
)
end
case facts[:operatingsystem]
when 'Debian'
context 'on Debian' do
it do
is_expected.to contain_file('/var/run/redis').with({
:ensure => 'directory',
:owner => 'redis',
:group => 'root',
:mode => '2775',
})
end
end
when 'Ubuntu'
it do
is_expected.to contain_file('/var/run/redis').with({
:ensure => 'directory',
:owner => 'redis',
:group => 'redis',
:mode => '0755',
})
end
end
end
describe 'with parameter activerehashing' do
let (:params) {
{
:activerehashing => true
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/activerehashing.*yes/) }
end
describe 'with parameter aof_load_truncated' do
let (:params) {
{
:aof_load_truncated => true
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/aof-load-truncated.*yes/) }
end
describe 'with parameter aof_rewrite_incremental_fsync' do
let (:params) {
{
:aof_rewrite_incremental_fsync => true
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/aof-rewrite-incremental-fsync.*yes/) }
end
describe 'with parameter appendfilename' do
let (:params) {
{
:appendfilename => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/appendfilename.*_VALUE_/) }
end
describe 'with parameter appendfsync' do
let (:params) {
{
:appendfsync => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/appendfsync.*_VALUE_/) }
end
describe 'with parameter appendonly' do
let (:params) {
{
:appendonly => true
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/appendonly.*yes/) }
end
describe 'with parameter auto_aof_rewrite_min_size' do
let (:params) {
{
:auto_aof_rewrite_min_size => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/auto-aof-rewrite-min-size.*_VALUE_/) }
end
describe 'with parameter auto_aof_rewrite_percentage' do
let (:params) {
{
:auto_aof_rewrite_percentage => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/auto-aof-rewrite-percentage.*_VALUE_/) }
end
describe 'with parameter bind' do
let (:params) {
{
:bind => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/bind.*_VALUE_/) }
end
describe 'with parameter output_buffer_limit_slave' do
let (:params) {
{
:output_buffer_limit_slave => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/client-output-buffer-limit slave.*_VALUE_/) }
end
describe 'with parameter output_buffer_limit_pubsub' do
let (:params) {
{
:output_buffer_limit_pubsub => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with_content(/client-output-buffer-limit pubsub.*_VALUE_/) }
end
describe 'with parameter: config_dir' do
let (:params) { { :config_dir => '_VALUE_' } }
it { is_expected.to contain_file('_VALUE_').with_ensure('directory') }
end
describe 'with parameter: config_dir_mode' do
let (:params) { { :config_dir_mode => '_VALUE_' } }
it { is_expected.to contain_file('/etc/redis').with_mode('_VALUE_') }
end
describe 'with parameter: log_dir_mode' do
let (:params) { { :log_dir_mode => '_VALUE_' } }
it { is_expected.to contain_file('/var/log/redis').with_mode('_VALUE_') }
end
describe 'with parameter: config_file_orig' do
let (:params) { { :config_file_orig => '_VALUE_' } }
it { is_expected.to contain_file('_VALUE_') }
end
describe 'with parameter: config_file_mode' do
let (:params) { { :config_file_mode => '_VALUE_' } }
it { is_expected.to contain_file(config_file_orig).with_mode('_VALUE_') }
end
describe 'with parameter: config_group' do
let (:params) { { :config_group => '_VALUE_' } }
it { is_expected.to contain_file('/etc/redis').with_group('_VALUE_') }
end
describe 'with parameter: config_owner' do
let (:params) { { :config_owner => '_VALUE_' } }
it { is_expected.to contain_file('/etc/redis').with_owner('_VALUE_') }
end
describe 'with parameter daemonize' do
let (:params) {
{
:daemonize => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /daemonize.*yes/
)
}
end
describe 'with parameter databases' do
let (:params) {
{
:databases => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /databases.*_VALUE_/
)
}
end
describe 'with parameter dbfilename' do
let (:params) {
{
:dbfilename => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /dbfilename.*_VALUE_/
)
}
end
describe 'without parameter dbfilename' do
let(:params) {
{
:dbfilename => false,
}
}
it { is_expected.to contain_file(config_file_orig).without_content(/^dbfilename/) }
end
describe 'with parameter hash_max_ziplist_entries' do
let (:params) {
{
:hash_max_ziplist_entries => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /hash-max-ziplist-entries.*_VALUE_/
)
}
end
describe 'with parameter hash_max_ziplist_value' do
let (:params) {
{
:hash_max_ziplist_value => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /hash-max-ziplist-value.*_VALUE_/
)
}
end
describe 'with parameter list_max_ziplist_entries' do
let (:params) {
{
:list_max_ziplist_entries => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /list-max-ziplist-entries.*_VALUE_/
)
}
end
describe 'with parameter list_max_ziplist_value' do
let (:params) {
{
:list_max_ziplist_value => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /list-max-ziplist-value.*_VALUE_/
)
}
end
describe 'with parameter log_dir' do
let (:params) {
{
:log_dir => '_VALUE_'
}
}
it { is_expected.to contain_file('_VALUE_').with(
'ensure' => 'directory'
)
}
end
describe 'with parameter log_file' do
let (:params) {
{
:log_file => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /logfile.*_VALUE_/
)
}
end
describe 'with parameter log_level' do
let (:params) {
{
:log_level => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /loglevel.*_VALUE_/
)
}
end
describe 'with parameter: manage_repo' do
let (:params) { { :manage_repo => true } }
case facts[:operatingsystem]
when 'Debian'
context 'on Debian' do
it do
is_expected.to create_apt__source('dotdeb').with({
:location => 'http://packages.dotdeb.org/',
:release => facts[:lsbdistcodename],
:repos => 'all',
:key => {
"id"=>"6572BBEF1B5FF28B28B706837E3F070089DF5277",
"source"=>"http://www.dotdeb.org/dotdeb.gpg"
},
:include => { 'src' => true },
})
end
end
when 'Ubuntu'
let(:ppa_repo) { manifest_vars[:ppa_repo] }
it { is_expected.to contain_apt__ppa(ppa_repo) }
when 'RedHat', 'CentOS', 'Scientific', 'OEL', 'Amazon'
it { is_expected.to contain_class('epel') }
end
end
describe 'with parameter unixsocket' do
let (:params) {
{
:unixsocket => '/tmp/redis.sock'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /unixsocket.*\/tmp\/redis.sock/
)
}
end
describe 'with parameter unixsocketperm' do
let (:params) {
{
:unixsocketperm => '777'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /unixsocketperm.*777/
)
}
end
describe 'with parameter masterauth' do
let (:params) {
{
:masterauth => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /masterauth.*_VALUE_/
)
}
end
describe 'with parameter maxclients' do
let (:params) {
{
:maxclients => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /maxclients.*_VALUE_/
)
}
end
describe 'with parameter maxmemory' do
let (:params) {
{
:maxmemory => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /maxmemory.*_VALUE_/
)
}
end
describe 'with parameter maxmemory_policy' do
let (:params) {
{
:maxmemory_policy => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /maxmemory-policy.*_VALUE_/
)
}
end
describe 'with parameter maxmemory_samples' do
let (:params) {
{
:maxmemory_samples => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /maxmemory-samples.*_VALUE_/
)
}
end
describe 'with parameter min_slaves_max_lag' do
let (:params) {
{
:min_slaves_max_lag => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /min-slaves-max-lag.*_VALUE_/
)
}
end
describe 'with parameter min_slaves_to_write' do
let (:params) {
{
:min_slaves_to_write => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /min-slaves-to-write.*_VALUE_/
)
}
end
describe 'with parameter notify_keyspace_events' do
let (:params) {
{
:notify_keyspace_events => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /notify-keyspace-events.*_VALUE_/
)
}
end
describe 'with parameter notify_service' do
let (:params) {
{
:notify_service => true
}
}
let(:service_name) { manifest_vars[:service_name] }
it { is_expected.to contain_file(config_file_orig).that_notifies("Service[#{service_name}]") }
end
describe 'with parameter no_appendfsync_on_rewrite' do
let (:params) {
{
:no_appendfsync_on_rewrite => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /no-appendfsync-on-rewrite.*yes/
)
}
end
describe 'with parameter: package_ensure' do
let (:params) { { :package_ensure => '_VALUE_' } }
let(:package_name) { manifest_vars[:package_name] }
it { is_expected.to contain_package(package_name).with(
'ensure' => '_VALUE_'
)
}
end
describe 'with parameter: package_name' do
let (:params) { { :package_name => '_VALUE_' } }
it { is_expected.to contain_package('_VALUE_') }
end
describe 'with parameter pid_file' do
let (:params) {
{
:pid_file => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /pidfile.*_VALUE_/
)
}
end
describe 'with parameter port' do
let (:params) {
{
:port => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /port.*_VALUE_/
)
}
end
+ describe 'with parameter protected_mode' do
+ let (:params) {
+ {
+ :protected_mode => '_VALUE_'
+ }
+ }
+
+ it { is_expected.to contain_file(config_file_orig).with(
+ 'content' => /protected-mode.*_VALUE_/
+ )
+ }
+ end
+
describe 'with parameter hll_sparse_max_bytes' do
let (:params) {
{
:hll_sparse_max_bytes => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /hll-sparse-max-bytes.*_VALUE_/
)
}
end
describe 'with parameter hz' do
let (:params) {
{
:hz => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /hz.*_VALUE_/
)
}
end
describe 'with parameter latency_monitor_threshold' do
let (:params) {
{
:latency_monitor_threshold => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /latency-monitor-threshold.*_VALUE_/
)
}
end
describe 'with parameter rdbcompression' do
let (:params) {
{
:rdbcompression => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /rdbcompression.*yes/
)
}
end
describe 'with parameter repl_backlog_size' do
let (:params) {
{
:repl_backlog_size => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /repl-backlog-size.*_VALUE_/
)
}
end
describe 'with parameter repl_backlog_ttl' do
let (:params) {
{
:repl_backlog_ttl => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /repl-backlog-ttl.*_VALUE_/
)
}
end
describe 'with parameter repl_disable_tcp_nodelay' do
let (:params) {
{
:repl_disable_tcp_nodelay => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /repl-disable-tcp-nodelay.*yes/
)
}
end
describe 'with parameter repl_ping_slave_period' do
let (:params) {
{
:repl_ping_slave_period => 1
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /repl-ping-slave-period.*1/
)
}
end
describe 'with parameter repl_timeout' do
let (:params) {
{
:repl_timeout => 1
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /repl-timeout.*1/
)
}
end
describe 'with parameter requirepass' do
let (:params) {
{
:requirepass => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /requirepass.*_VALUE_/
)
}
end
describe 'with parameter save_db_to_disk' do
context 'true' do
let (:params) {
{
:save_db_to_disk => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /^save/
)
}
end
context 'false' do
let (:params) {
{
:save_db_to_disk => false
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /^(?!save)/
)
}
end
end
describe 'with parameter save_db_to_disk_interval' do
context 'with save_db_to_disk true' do
context 'default' do
let (:params) {
{
:save_db_to_disk => true
}
}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 900 1/)}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 300 10/)}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 60 10000/)
}
end
context 'default' do
let (:params) {
{
:save_db_to_disk => true,
:save_db_to_disk_interval => {'900' =>'2', '300' => '11', '60' => '10011'}
}
}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 900 2/)}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 300 11/)}
it { is_expected.to contain_file(config_file_orig).with('content' => /save 60 10011/)
}
end
end
context 'with save_db_to_disk false' do
context 'default' do
let (:params) {
{
:save_db_to_disk => false
}
}
it { is_expected.to contain_file(config_file_orig).without('content' => /save 900 1/) }
it { is_expected.to contain_file(config_file_orig).without('content' => /save 300 10/) }
it { is_expected.to contain_file(config_file_orig).without('content' => /save 60 10000/) }
end
end
end
describe 'with parameter: service_manage (set to false)' do
let (:params) { { :service_manage => false } }
let(:package_name) { manifest_vars[:package_name] }
it { should_not contain_service(package_name) }
end
describe 'with parameter: service_enable' do
let (:params) { { :service_enable => true } }
let(:package_name) { manifest_vars[:package_name] }
it { is_expected.to contain_service(package_name).with_enable(true) }
end
describe 'with parameter: service_ensure' do
let (:params) { { :service_ensure => '_VALUE_' } }
let(:package_name) { manifest_vars[:package_name] }
it { is_expected.to contain_service(package_name).with_ensure('_VALUE_') }
end
describe 'with parameter: service_group' do
let (:params) { { :service_group => '_VALUE_' } }
it { is_expected.to contain_file('/var/log/redis').with_group('_VALUE_') }
end
describe 'with parameter: service_hasrestart' do
let (:params) { { :service_hasrestart => true } }
let(:package_name) { manifest_vars[:package_name] }
it { is_expected.to contain_service(package_name).with_hasrestart(true) }
end
describe 'with parameter: service_hasstatus' do
let (:params) { { :service_hasstatus => true } }
let(:package_name) { manifest_vars[:package_name] }
it { is_expected.to contain_service(package_name).with_hasstatus(true) }
end
describe 'with parameter: service_name' do
let (:params) { { :service_name => '_VALUE_' } }
it { is_expected.to contain_service('_VALUE_').with_name('_VALUE_') }
end
describe 'with parameter: service_user' do
let (:params) { { :service_user => '_VALUE_' } }
it { is_expected.to contain_file('/var/log/redis').with_owner('_VALUE_') }
end
describe 'with parameter set_max_intset_entries' do
let (:params) {
{
:set_max_intset_entries => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /set-max-intset-entries.*_VALUE_/
)
}
end
describe 'with parameter slave_priority' do
let (:params) {
{
:slave_priority => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /slave-priority.*_VALUE_/
)
}
end
describe 'with parameter slave_read_only' do
let (:params) {
{
:slave_read_only => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /slave-read-only.*yes/
)
}
end
describe 'with parameter slave_serve_stale_data' do
let (:params) {
{
:slave_serve_stale_data => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /slave-serve-stale-data.*yes/
)
}
end
describe 'with parameter: slaveof' do
context 'binding to localhost' do
let (:params) {
{
:bind => '127.0.0.1',
:slaveof => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /^slaveof _VALUE_/
)}
end
context 'binding to external ip' do
let (:params) {
{
:bind => '10.0.0.1',
:slaveof => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /^slaveof _VALUE_/
)
}
end
end
describe 'with parameter slowlog_log_slower_than' do
let (:params) {
{
:slowlog_log_slower_than => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /slowlog-log-slower-than.*_VALUE_/
)
}
end
describe 'with parameter slowlog_max_len' do
let (:params) {
{
:slowlog_max_len => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /slowlog-max-len.*_VALUE_/
)
}
end
describe 'with parameter stop_writes_on_bgsave_error' do
let (:params) {
{
:stop_writes_on_bgsave_error => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /stop-writes-on-bgsave-error.*yes/
)
}
end
describe 'with parameter syslog_enabled' do
let (:params) {
{
:syslog_enabled => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /syslog-enabled yes/
)
}
end
describe 'with parameter syslog_facility' do
let (:params) {
{
:syslog_enabled => true,
:syslog_facility => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /syslog-facility.*_VALUE_/
)
}
end
describe 'with parameter tcp_backlog' do
let (:params) {
{
:tcp_backlog => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /tcp-backlog.*_VALUE_/
)
}
end
describe 'with parameter tcp_keepalive' do
let (:params) {
{
:tcp_keepalive => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /tcp-keepalive.*_VALUE_/
)
}
end
describe 'with parameter timeout' do
let (:params) {
{
:timeout => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /timeout.*_VALUE_/
)
}
end
describe 'with parameter workdir' do
let (:params) {
{
:workdir => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /dir.*_VALUE_/
)
}
end
describe 'with parameter zset_max_ziplist_entries' do
let (:params) {
{
:zset_max_ziplist_entries => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /zset-max-ziplist-entries.*_VALUE_/
)
}
end
describe 'with parameter zset_max_ziplist_value' do
let (:params) {
{
:zset_max_ziplist_value => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /zset-max-ziplist-value.*_VALUE_/
)
}
end
describe 'with parameter cluster_enabled-false' do
let (:params) {
{
:cluster_enabled => false
}
}
it { should_not contain_file(config_file_orig).with(
'content' => /cluster-enabled/
)
}
end
describe 'with parameter cluster_enabled-true' do
let (:params) {
{
:cluster_enabled => true
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /cluster-enabled.*yes/
)
}
end
describe 'with parameter cluster_config_file' do
let (:params) {
{
:cluster_enabled => true,
:cluster_config_file => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /cluster-config-file.*_VALUE_/
)
}
end
describe 'with parameter cluster_node_timeout' do
let (:params) {
{
:cluster_enabled => true,
:cluster_node_timeout => '_VALUE_'
}
}
it { is_expected.to contain_file(config_file_orig).with(
'content' => /cluster-node-timeout.*_VALUE_/
)
}
end
end
end
end
diff --git a/spec/classes/redis_ubuntu_1404_spec.rb b/spec/classes/redis_ubuntu_1404_spec.rb
index 4d4afa2..6244a27 100644
--- a/spec/classes/redis_ubuntu_1404_spec.rb
+++ b/spec/classes/redis_ubuntu_1404_spec.rb
@@ -1,109 +1,116 @@
require 'spec_helper'
describe 'redis' do
context 'on Ubuntu 1404' do
let(:facts) {
ubuntu_1404_facts
}
context 'should set Ubuntu specific values' do
context 'when $::redis_server_version fact is not present (older features not enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => nil,
})
}
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^tcp-backlog/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^protected-mode/) }
end
context 'when $::redis_server_version fact is not present and package_ensure is a newer version (3.2.1) (older features enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => nil,
})
}
let (:params) { { :package_ensure => '3.2.1' } }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^protected-mode/) }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is not present and package_ensure is a newer version (3:3.2.1) (older features enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => nil,
})
}
let (:params) { { :package_ensure => '3:3.2.1' } }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^protected-mode/) }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is not present and package_ensure is a newer version(4:4.0-rc3) (newer features enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => nil,
})
}
let (:params) { { :package_ensure => '4:4.0-rc3' } }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^protected-mode/) }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is not present and package_ensure is a newer version(4.0-rc3) (newer features enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => nil,
})
}
let (:params) { { :package_ensure => '4.0-rc3' } }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^protected-mode/) }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^tcp-backlog/) }
end
context 'when $::redis_server_version fact is present but the older version (newer features not enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => '2.8.4',
})
}
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^tcp-backlog/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').without('content' => /^protected-mode/) }
end
context 'when $::redis_server_version fact is present but a newer version (newer features enabled)' do
let(:facts) {
ubuntu_1404_facts.merge({
:redis_server_version => '3.2.1',
})
}
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^hash-max-ziplist-entries/) }
it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^tcp-backlog/) }
+ it { should contain_file('/etc/redis/redis.conf.puppet').with('content' => /^protected-mode/) }
end
end
end
end
diff --git a/spec/fixtures/facts/redis_server_3209_version b/spec/fixtures/facts/redis_server_3209_version
new file mode 100644
index 0000000..510d820
--- /dev/null
+++ b/spec/fixtures/facts/redis_server_3209_version
@@ -0,0 +1 @@
+Redis server v=3.2.9 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=67e0f9d6580364c0
\ No newline at end of file
diff --git a/spec/unit/redis_server_version_spec.rb b/spec/unit/redis_server_version_spec.rb
index 4e476f5..ec5e659 100644
--- a/spec/unit/redis_server_version_spec.rb
+++ b/spec/unit/redis_server_version_spec.rb
@@ -1,25 +1,32 @@
require 'spec_helper'
describe 'redis_server_version', type: :fact do
before { Facter.clear }
after { Facter.clear }
it 'is 2.4.10 according to output' do
Facter::Util::Resolution.stubs(:which).with('redis-server').returns('/usr/bin/redis-server')
redis_server_2410_version = File.read(fixtures('facts', 'redis_server_2410_version'))
Facter::Util::Resolution.stubs(:exec).with('redis-server -v').returns(redis_server_2410_version)
expect(Facter.fact(:redis_server_version).value).to eq('2.4.10')
end
it 'is 2.8.19 according to output' do
Facter::Util::Resolution.stubs(:which).with('redis-server').returns('/usr/bin/redis-server')
redis_server_2819_version = File.read(fixtures('facts', 'redis_server_2819_version'))
Facter::Util::Resolution.stubs(:exec).with('redis-server -v').returns(redis_server_2819_version)
expect(Facter.fact(:redis_server_version).value).to eq('2.8.19')
end
+ it 'is 3.2.9 according to output' do
+ Facter::Util::Resolution.stubs(:which).with('redis-server').returns('/usr/bin/redis-server')
+ redis_server_3209_version = File.read(fixtures('facts', 'redis_server_3209_version'))
+ Facter::Util::Resolution.stubs(:exec).with('redis-server -v').returns(redis_server_3209_version)
+ expect(Facter.fact(:redis_server_version).value).to eq('3.2.9')
+ end
+
it 'is nil if redis-server not installed' do
Facter::Util::Resolution.stubs(:which).with('redis-server').returns(nil)
expect(Facter.fact(:redis_server_version).value).to eq(nil)
end
end
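The fact spec above implies the version is pulled out of the `redis-server -v` banner (see the fixture contents). A minimal sketch of such a parser — the method name and regex here are illustrative, not the module's actual fact code:

```ruby
# Sketch: extract the version number from `redis-server -v` output.
# Illustrative only; the real fact lives in the module's lib/facter code.
def parse_redis_server_version(output)
  return nil if output.nil?

  # Banners look like: "Redis server v=3.2.9 sha=00000000:0 malloc=jemalloc-4.0.3 ..."
  m = output.match(/v=(\d+\.\d+(?:\.\d+)?)/)
  m && m[1]
end

puts parse_redis_server_version(
  'Redis server v=3.2.9 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64'
)
# => 3.2.9
```

Returning `nil` when `redis-server` is absent matches the final example in the spec.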
diff --git a/templates/redis.conf.3.2.erb b/templates/redis.conf.3.2.erb
new file mode 100644
index 0000000..3c6a97c
--- /dev/null
+++ b/templates/redis.conf.3.2.erb
@@ -0,0 +1,787 @@
+# Redis configuration file example
+
+# Note on units: when memory size is needed, it is possible to specify
+# it in the usual form of 1k 5GB 4M and so forth:
+#
+# 1k => 1000 bytes
+# 1kb => 1024 bytes
+# 1m => 1000000 bytes
+# 1mb => 1024*1024 bytes
+# 1g => 1000000000 bytes
+# 1gb => 1024*1024*1024 bytes
+#
+# units are case insensitive so 1GB 1Gb 1gB are all the same.
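The unit rules documented above (decimal for bare suffixes, binary for `b` suffixes, case insensitive) can be sketched as a small conversion table; this is illustrative only — Redis implements the parsing in C:

```ruby
# Sketch: Redis-style memory-size suffixes mapped to bytes (illustrative).
UNIT_BYTES = {
  'k'  => 1_000,         'kb' => 1_024,
  'm'  => 1_000_000,     'mb' => 1_024 * 1_024,
  'g'  => 1_000_000_000, 'gb' => 1_024 * 1_024 * 1_024,
}.freeze

def to_bytes(value)
  # Case insensitive, optional suffix; a bare number is plain bytes.
  m = value.to_s.downcase.match(/\A(\d+)([kmg]b?)?\z/)
  raise ArgumentError, "bad size: #{value}" unless m
  m[1].to_i * (m[2] ? UNIT_BYTES[m[2]] : 1)
end

puts to_bytes('1gb')  # => 1073741824
puts to_bytes('1g')   # => 1000000000
```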
+
+# By default Redis does not run as a daemon. Use 'yes' if you need it.
+# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
+daemonize <% if @daemonize -%>yes<% else -%>no<% end -%>
+
+# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
+# default. You can specify a custom pid file location here.
+pidfile <%= @pid_file %>
+
+# Protected mode is a layer of security protection, in order to avoid that
+# Redis instances left open on the internet are accessed and exploited.
+#
+# When protected mode is on and if:
+#
+# 1) The server is not binding explicitly to a set of addresses using the
+# "bind" directive.
+# 2) No password is configured.
+#
+# The server only accepts connections from clients connecting from the
+# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
+# sockets.
+#
+# By default protected mode is enabled. You should disable it only if
+# you are sure you want clients from other hosts to connect to Redis
+# even if no authentication is configured, nor a specific set of interfaces
+# are explicitly listed using the "bind" directive.
+protected-mode <%= @protected_mode %>
+
+# Accept connections on the specified port, default is 6379.
+# If port 0 is specified Redis will not listen on a TCP socket.
+port <%= @port %>
+
+# TCP listen() backlog.
+#
+# In high requests-per-second environments you need a high backlog in order
+# to avoid slow client connection issues. Note that the Linux kernel
+# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
+# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
+# in order to get the desired effect.
+tcp-backlog <%= @tcp_backlog %>
+
+# If you want you can bind a single interface, if the bind option is not
+# specified all the interfaces will listen for incoming connections.
+#
+bind <%= @bind %>
+
+# Specify the path for the unix socket that will be used to listen for
+# incoming connections. There is no default, so Redis will not listen
+# on a unix socket when not specified.
+#
+<% if @unixsocket %>unixsocket <%= @unixsocket %><% end %>
+<% if @unixsocketperm %>unixsocketperm <%= @unixsocketperm %><% end %>
+
+# Close the connection after a client is idle for N seconds (0 to disable)
+timeout <%= @timeout %>
+
+# TCP keepalive.
+#
+# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
+# of communication. This is useful for two reasons:
+#
+# 1) Detect dead peers.
+# 2) Take the connection alive from the point of view of network
+# equipment in the middle.
+#
+# On Linux, the specified value (in seconds) is the period used to send ACKs.
+# Note that to close the connection the double of the time is needed.
+# On other kernels the period depends on the kernel configuration.
+#
+# A reasonable value for this option is 60 seconds.
+tcp-keepalive <%= @tcp_keepalive %>
+
+# Set server verbosity to 'debug'
+# it can be one of:
+# debug (a lot of information, useful for development/testing)
+# verbose (many rarely useful info, but not a mess like the debug level)
+# notice (moderately verbose, what you want in production probably)
+# warning (only very important / critical messages are logged)
+loglevel <%= @log_level %>
+
+# Specify the log file name. Also 'stdout' can be used to force
+# Redis to log on the standard output. Note that if you use standard
+# output for logging but daemonize, logs will be sent to /dev/null
+logfile <%= @_real_log_file %>
+
+# To enable logging to the system logger, just set 'syslog-enabled' to yes,
+# and optionally update the other syslog parameters to suit your needs.
+syslog-enabled <% if @syslog_enabled %>yes<% else %>no<% end %>
+
+# Specify the syslog identity.
+# syslog-ident redis
+
+# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
+<% if @syslog_facility %>syslog-facility <%= @syslog_facility %><% else %># syslog-facility local0<% end %>
+
+# Set the number of databases. The default database is DB 0, you can select
+# a different one on a per-connection basis using SELECT <dbid> where
+# dbid is a number between 0 and 'databases'-1
+databases <%= @databases %>
+
+################################ SNAPSHOTTING #################################
+#
+# Save the DB on disk:
+#
+# save <seconds> <changes>
+#
+# Will save the DB if both the given number of seconds and the given
+# number of write operations against the DB occurred.
+#
+# In the example below the behaviour will be to save:
+# after 900 sec (15 min) if at least 1 key changed
+# after 300 sec (5 min) if at least 10 keys changed
+# after 60 sec if at least 10000 keys changed
+#
+# Note: you can disable saving at all commenting all the "save" lines.
+#
+# It is also possible to remove all the previously configured save
+# points by adding a save directive with a single empty string argument
+# like in the following example:
+#
+# save ""
+<% if @save_db_to_disk %>
+<%- @save_db_to_disk_interval.sort_by{|k,v|k}.each do |seconds, key_change| -%>
+save <%= seconds -%> <%= key_change -%> <%= "\n" -%>
+<%- end -%>
+<% end %>
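The save-point loop above can be exercised outside Puppet with plain ERB; the variable names mirror the template and the hash values are illustrative, not defaults from the module:

```ruby
require 'erb'

# Sketch: render save directives from a Puppet-style hash, as the
# template's loop does (values here are examples, not module defaults).
@save_db_to_disk = true
@save_db_to_disk_interval = { '900' => '1', '300' => '10', '60' => '10000' }

template = <<~'TPL'
  <% if @save_db_to_disk -%>
  <%- @save_db_to_disk_interval.sort_by { |k, _| k }.each do |seconds, changes| -%>
  save <%= seconds %> <%= changes %>
  <%- end -%>
  <% end -%>
TPL

# trim_mode '-' enables the <%- and -%> whitespace trimming used above.
puts ERB.new(template, trim_mode: '-').result(binding)
```

Note the keys sort as strings here, so `'300'` sorts before `'60'`; the template's `sort_by` has the same property.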
+# By default Redis will stop accepting writes if RDB snapshots are enabled
+# (at least one save point) and the latest background save failed.
+# This will make the user aware (in a hard way) that data is not persisting
+# on disk properly, otherwise chances are that no one will notice and some
+# disaster will happen.
+#
+# If the background saving process will start working again Redis will
+# automatically allow writes again.
+#
+# However if you have setup your proper monitoring of the Redis server
+# and persistence, you may want to disable this feature so that Redis will
+# continue to work as usual even if there are problems with disk,
+# permissions, and so forth.
+stop-writes-on-bgsave-error <% if @stop_writes_on_bgsave_error -%>yes<% else -%>no<% end %>
+
+# Compress string objects using LZF when dump .rdb databases?
+# By default that's set to 'yes' as it's almost always a win.
+# If you want to save some CPU in the saving child set it to 'no' but
+# the dataset will likely be bigger if you have compressible values or keys.
+rdbcompression <% if @rdbcompression -%>yes<% else -%>no<% end %>
+
+# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
+# This makes the format more resistant to corruption but there is a performance
+# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
+# for maximum performances.
+#
+# RDB files created with checksum disabled have a checksum of zero that will
+# tell the loading code to skip the check.
+rdbchecksum yes
+
+# The filename where to dump the DB
+<% if @dbfilename %>dbfilename <%= @dbfilename %><% else %># dbfilename dump.rdb<% end %>
+
+# The working directory.
+#
+# The DB will be written inside this directory, with the filename specified
+# above using the 'dbfilename' configuration directive.
+#
+# Also the Append Only File will be created inside this directory.
+#
+# Note that you must specify a directory here, not a file name.
+dir <%= @workdir %>
+
+################################# REPLICATION #################################
+
+# Master-Slave replication. Use slaveof to make a Redis instance a copy of
+# another Redis server. Note that the configuration is local to the slave
+# so for example it is possible to configure the slave to save the DB with a
+# different interval, or to listen to another port, and so on.
+#
+# slaveof <masterip> <masterport>
+<% if @slaveof -%>slaveof <%= @slaveof %><% end -%>
+
+# If the master is password protected (using the "requirepass" configuration
+# directive below) it is possible to tell the slave to authenticate before
+# starting the replication synchronization process, otherwise the master will
+# refuse the slave request.
+#
+# masterauth <master-password>
+<% if @masterauth -%>masterauth <%= @masterauth %><% end -%>
+
+# When a slave loses the connection with the master, or when the replication
+# is still in progress, the slave can act in two different ways:
+#
+# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
+# still reply to client requests, possibly with out of date data, or the
+# data set may just be empty if this is the first synchronization.
+#
+# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
+# an error "SYNC with master in progress" to all the kind of commands
+# but to INFO and SLAVEOF.
+#
+slave-serve-stale-data <% if @slave_serve_stale_data -%>yes<% else -%>no<% end %>
+
+# You can configure a slave instance to accept writes or not. Writing against
+# a slave instance may be useful to store some ephemeral data (because data
+# written on a slave will be easily deleted after resync with the master) but
+# may also cause problems if clients are writing to it because of a
+# misconfiguration.
+#
+# Since Redis 2.6 by default slaves are read-only.
+#
+# Note: read only slaves are not designed to be exposed to untrusted clients
+# on the internet. It's just a protection layer against misuse of the instance.
+# Still a read only slave exports by default all the administrative commands
+# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
+# security of read only slaves using 'rename-command' to shadow all the
+# administrative / dangerous commands.
+slave-read-only <% if @slave_read_only -%>yes<% else -%>no<% end %>
+
+# Slaves send PINGs to server in a predefined interval. It's possible to change
+# this interval with the repl_ping_slave_period option. The default value is 10
+# seconds.
+#
+# repl-ping-slave-period 10
+
+# The following option sets a timeout for both Bulk transfer I/O timeout and
+# master data or ping response timeout. The default value is 60 seconds.
+#
+# It is important to make sure that this value is greater than the value
+# specified for repl-ping-slave-period otherwise a timeout will be detected
+# every time there is low traffic between the master and the slave.
+#
+repl-timeout <%= @repl_timeout %>
+
+# Disable TCP_NODELAY on the slave socket after SYNC?
+#
+# If you select "yes" Redis will use a smaller number of TCP packets and
+# less bandwidth to send data to slaves. But this can add a delay for
+# the data to appear on the slave side, up to 40 milliseconds with
+# Linux kernels using a default configuration.
+#
+# If you select "no" the delay for data to appear on the slave side will
+# be reduced but more bandwidth will be used for replication.
+#
+# By default we optimize for low latency, but in very high traffic conditions
+# or when the master and slaves are many hops away, turning this to "yes" may
+# be a good idea.
+repl-disable-tcp-nodelay <% if @repl_disable_tcp_nodelay -%>yes<% else -%>no<% end -%>
+
+# Set the replication backlog size. The backlog is a buffer that accumulates
+# slave data when slaves are disconnected for some time, so that when a slave
+# wants to reconnect again, often a full resync is not needed, but a partial
+# resync is enough, just passing the portion of data the slave missed while
+# disconnected.
+#
+# The bigger the replication backlog, the longer the time the slave can be
+# disconnected and later be able to perform a partial resynchronization.
+#
+# The backlog is only allocated once there is at least a slave connected.
+#
+repl-backlog-size <%= @repl_backlog_size %>
+
+# After a master has no longer connected slaves for some time, the backlog
+# will be freed. The following option configures the amount of seconds that
+# need to elapse, starting from the time the last slave disconnected, for
+# the backlog buffer to be freed.
+#
+# A value of 0 means to never release the backlog.
+#
+repl-backlog-ttl <%= @repl_backlog_ttl %>
+
+# The slave priority is an integer number published by Redis in the INFO output.
+# It is used by Redis Sentinel in order to select a slave to promote into a
+# master if the master is no longer working correctly.
+#
+# A slave with a low priority number is considered better for promotion, so
+# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
+# pick the one with priority 10, that is the lowest.
+#
+# However a special priority of 0 marks the slave as not able to perform the
+# role of master, so a slave with priority of 0 will never be selected by
+# Redis Sentinel for promotion.
+#
+# By default the priority is 100.
+slave-priority <%= @slave_priority %>
+
+# It is possible for a master to stop accepting writes if there are less than
+# N slaves connected, having a lag less or equal than M seconds.
+#
+# The N slaves need to be in "online" state.
+#
+# The lag in seconds, that must be <= the specified value, is calculated from
+# the last ping received from the slave, that is usually sent every second.
+#
+# This option does not GUARANTEE that N replicas will accept the write, but
+# will limit the window of exposure for lost writes in case not enough slaves
+# are available, to the specified number of seconds.
+#
+# For example to require at least 3 slaves with a lag <= 10 seconds use:
+#
+# min-slaves-to-write 3
+# min-slaves-max-lag 10
+#
+# Setting one or the other to 0 disables the feature.
+#
+# By default min-slaves-to-write is set to 0 (feature disabled) and
+# min-slaves-max-lag is set to 10.
+min-slaves-to-write <%= @min_slaves_to_write %>
+min-slaves-max-lag <%= @min_slaves_max_lag %>
+
+################################## SECURITY ###################################
+
+# Require clients to issue AUTH <PASSWORD> before processing any other
+# commands. This might be useful in environments in which you do not trust
+# others with access to the host running redis-server.
+#
+# This should stay commented out for backward compatibility and because most
+# people do not need auth (e.g. they run their own servers).
+#
+# Warning: since Redis is pretty fast an outside user can try up to
+# 150k passwords per second against a good box. This means that you should
+# use a very strong password otherwise it will be very easy to break.
+#
+<% if @requirepass -%>requirepass <%= @requirepass %><% end -%>
+
+# Command renaming.
+#
+# It is possible to change the name of dangerous commands in a shared
+# environment. For instance the CONFIG command may be renamed into something
+# of hard to guess so that it will be still available for internal-use
+# tools but not available for general clients.
+#
+# Example:
+#
+# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
+#
+# It is also possible to completely kill a command renaming it into
+# an empty string:
+#
+# rename-command CONFIG ""
+
+################################### LIMITS ####################################
+
+# Set the max number of connected clients at the same time. By default
+# this limit is set to 10000 clients, however if the Redis server is not
+# able to configure the process file limit to allow for the specified limit
+# the max number of allowed clients is set to the current file limit
+# minus 32 (as Redis reserves a few file descriptors for internal uses).
+#
+# Once the limit is reached Redis will close all the new connections sending
+# an error 'max number of clients reached'.
+#
+maxclients <%= @maxclients %>
+
+# Don't use more memory than the specified amount of bytes.
+# When the memory limit is reached Redis will try to remove keys
+# according to the eviction policy selected (see maxmemory-policy).
+#
+# If Redis can't remove keys according to the policy, or if the policy is
+# set to 'noeviction', Redis will start to reply with errors to commands
+# that would use more memory, like SET, LPUSH, and so on, and will continue
+# to reply to read-only commands like GET.
+#
+# This option is usually useful when using Redis as an LRU cache, or to set
+# a hard memory limit for an instance (using the 'noeviction' policy).
+#
+# WARNING: If you have slaves attached to an instance with maxmemory on,
+# the size of the output buffers needed to feed the slaves are subtracted
+# from the used memory count, so that network problems / resyncs will
+# not trigger a loop where keys are evicted, and in turn the output
+# buffer of slaves is full with DELs of keys evicted triggering the deletion
+# of more keys, and so forth until the database is completely emptied.
+#
+# In short... if you have slaves attached it is suggested that you set a lower
+# limit for maxmemory so that there is some free RAM on the system for slave
+# output buffers (but this is not needed if the policy is 'noeviction').
+#
+# maxmemory <bytes>
+<% if @maxmemory -%>maxmemory <%= @maxmemory %><% end -%>
+
+# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
+# is reached. You can select among the following behaviors:
+#
+# volatile-lru -> remove the key with an expire set using an LRU algorithm
+# allkeys-lru -> remove any key according to the LRU algorithm
+# volatile-random -> remove a random key with an expire set
+# allkeys-random -> remove a random key, any key
+# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
+# noeviction -> don't expire at all, just return an error on write operations
+#
+# Note: with any of these policies, Redis will return an error on write
+# operations, when there are no suitable keys for eviction.
+#
+# At the date of writing these commands are: set setnx setex append
+# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
+# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
+# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
+# getset mset msetnx exec sort
+#
+# The default is:
+#
+# maxmemory-policy volatile-lru
+<% if @maxmemory_policy -%>maxmemory-policy <%= @maxmemory_policy %><% end -%>
+
+# LRU and minimal TTL algorithms are not precise algorithms but approximated
+# algorithms (in order to save memory), so you can select as well the sample
+# size to check. For instance by default Redis will check three keys and
+# pick the one that was used least recently, you can change the sample size
+# using the following configuration directive.
+#
+# maxmemory-samples 3
+<% if @maxmemory_samples -%>maxmemory-samples <%= @maxmemory_samples %><% end -%>
+
+############################## APPEND ONLY MODE ###############################
+
+# By default Redis asynchronously dumps the dataset on disk. This mode is
+# good enough in many applications, but an issue with the Redis process or
+# a power outage may result into a few minutes of writes lost (depending on
+# the configured save points).
+#
+# The Append Only File is an alternative persistence mode that provides
+# much better durability. For instance using the default data fsync policy
+# (see later in the config file) Redis can lose just one second of writes in a
+# dramatic event like a server power outage, or a single write if something
+# wrong with the Redis process itself happens, but the operating system is
+# still running correctly.
+#
+# AOF and RDB persistence can be enabled at the same time without problems.
+# If the AOF is enabled on startup Redis will load the AOF, that is the file
+# with the better durability guarantees.
+#
+# Please check http://redis.io/topics/persistence for more information.
+
+appendonly <% if @appendonly -%>yes<% else -%>no<% end -%>
+
+# The name of the append only file (default: "appendonly.aof")
+appendfilename <%= @appendfilename %>
+
+# The fsync() call tells the Operating System to actually write data on disk
+# instead of waiting for more data in the output buffer. Some OSes will really
+# flush data on disk, while others will just try to do it ASAP.
+#
+# Redis supports three different modes:
+#
+# no: don't fsync, just let the OS flush the data when it wants. Faster.
+# always: fsync after every write to the append only log. Slow, safest.
+# everysec: fsync only one time every second. Compromise.
+#
+# The default is "everysec" that's usually the right compromise between
+# speed and data safety. It's up to you to understand if you can relax this to
+# "no" that will let the operating system flush the output buffer when
+# it wants, for better performances (but if you can live with the idea of
+# some data loss consider the default persistence mode that's snapshotting),
+# or on the contrary, use "always" that's very slow but a bit safer than
+# everysec.
+#
+# For more details please check the following article:
+# http://antirez.com/post/redis-persistence-demystified.html
+#
+# If unsure, use "everysec".
+
+appendfsync <%= @appendfsync %>
+
+# When the AOF fsync policy is set to always or everysec, and a background
+# saving process (a background save or AOF log background rewriting) is
+# performing a lot of I/O against the disk, in some Linux configurations
+# Redis may block too long on the fsync() call. Note that there is no fix for
+# this currently, as even performing fsync in a different thread will block
+# our synchronous write(2) call.
+#
+# In order to mitigate this problem it's possible to use the following option
+# that will prevent fsync() from being called in the main process while a
+# BGSAVE or BGREWRITEAOF is in progress.
+#
+# This means that while another child is saving the durability of Redis is
+# the same as "appendfsync none", that in practical terms means that it is
+# possible to lose up to 30 seconds of log in the worst scenario (with the
+# default Linux settings).
+#
+# If you have latency problems turn this to "yes". Otherwise leave it as
+# "no" that is the safest pick from the point of view of durability.
+no-appendfsync-on-rewrite <% if @no_appendfsync_on_rewrite -%>yes<% else -%>no<% end -%>
+
+# Automatic rewrite of the append only file.
+# Redis is able to automatically rewrite the log file implicitly calling
+# BGREWRITEAOF when the AOF log size grows by the specified percentage.
+#
+# This is how it works: Redis remembers the size of the AOF file after the
+# latest rewrite (or if no rewrite happened since the restart, the size of
+# the AOF at startup is used).
+#
+# This base size is compared to the current size. If the current size is
+# bigger than the specified percentage, the rewrite is triggered. Also
+# you need to specify a minimal size for the AOF file to be rewritten, this
+# is useful to avoid rewriting the AOF file even if the percentage increase
+# is reached but it is still pretty small.
+#
+# Specify a percentage of zero in order to disable the automatic AOF
+# rewrite feature.
+
+auto-aof-rewrite-percentage <%= @auto_aof_rewrite_percentage %>
+auto-aof-rewrite-min-size <%= @auto_aof_rewrite_min_size %>
+
+# An AOF file may be found to be truncated at the end during the Redis
+# startup process, when the AOF data gets loaded back into memory.
+# This may happen when the system where Redis is running
+# crashes, especially when an ext4 filesystem is mounted without the
+# data=ordered option (however this can't happen when Redis itself
+# crashes or aborts but the operating system still works correctly).
+#
+# Redis can either exit with an error when this happens, or load as much
+# data as possible (the default now) and start if the AOF file is found
+# to be truncated at the end. The following option controls this behavior.
+#
+# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
+# the Redis server starts emitting a log to inform the user of the event.
+# Otherwise if the option is set to no, the server aborts with an error
+# and refuses to start. When the option is set to no, the user is required
+# to fix the AOF file using the "redis-check-aof" utility before restarting
+# the server.
+#
+# Note that if the AOF file is found to be corrupted in the middle
+# the server will still exit with an error. This option only applies when
+# Redis tries to read more data from the AOF file but not enough bytes
+# are found.
+aof-load-truncated <% if @aof_load_truncated -%>yes<% else -%>no<% end -%>
+
+################################ LUA SCRIPTING ###############################
+
+# Max execution time of a Lua script in milliseconds.
+#
+# If the maximum execution time is reached Redis will log that a script is
+# still in execution after the maximum allowed time and will start to
+# reply to queries with an error.
+#
+# When a long running script exceeds the maximum execution time only the
+# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
+# used to stop a script that has not yet called write commands. The second
+# is the only way to shut down the server in case a write command was
+# already issued by the script but the user doesn't want to wait for the
+# natural termination of the script.
+#
+# Set it to 0 or a negative value for unlimited execution without warnings.
+lua-time-limit 5000
+
+################################## SLOW LOG ###################################
+
+# The Redis Slow Log is a system to log queries that exceeded a specified
+# execution time. The execution time does not include the I/O operations
+# like talking with the client, sending the reply and so forth,
+# but just the time needed to actually execute the command (this is the only
+# stage of command execution where the thread is blocked and cannot serve
+# other requests in the meantime).
+#
+# You can configure the slow log with two parameters: one tells Redis
+# what is the execution time, in microseconds, to exceed in order for the
+# command to get logged, and the other parameter is the length of the
+# slow log. When a new command is logged the oldest one is removed from the
+# queue of logged commands.
+
+# The following time is expressed in microseconds, so 1000000 is equivalent
+# to one second. Note that a negative number disables the slow log, while
+# a value of zero forces the logging of every command.
+slowlog-log-slower-than <%= @slowlog_log_slower_than %>
+
+# There is no limit to this length. Just be aware that it will consume memory.
+# You can reclaim memory used by the slow log with SLOWLOG RESET.
+slowlog-max-len <%= @slowlog_max_len %>
+
+################################ LATENCY MONITOR ##############################
+
+# The Redis latency monitoring subsystem samples different operations
+# at runtime in order to collect data related to possible sources of
+# latency of a Redis instance.
+#
+# Via the LATENCY command this information is available to the user that can
+# print graphs and obtain reports.
+#
+# The system only logs operations that were performed in a time equal or
+# greater than the amount of milliseconds specified via the
+# latency-monitor-threshold configuration directive. When its value is set
+# to zero, the latency monitor is turned off.
+#
+# By default latency monitoring is disabled since it is mostly not needed
+# if you don't have latency issues, and collecting data has a performance
+# impact, that while very small, can be measured under big load. Latency
+# monitoring can easily be enabled at runtime using the command
+# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
+latency-monitor-threshold <%= @latency_monitor_threshold %>
+
+############################# Event notification ##############################
+
+# Redis can notify Pub/Sub clients about events happening in the key space.
+# This feature is documented at http://redis.io/topics/notifications
+#
+# For instance if keyspace events notification is enabled, and a client
+# performs a DEL operation on key "foo" stored in the Database 0, two
+# messages will be published via Pub/Sub:
+#
+# PUBLISH __keyspace@0__:foo del
+# PUBLISH __keyevent@0__:del foo
+#
+# It is possible to select the events that Redis will notify among a set
+# of classes. Every class is identified by a single character:
+#
+# K Keyspace events, published with __keyspace@<db>__ prefix.
+# E Keyevent events, published with __keyevent@<db>__ prefix.
+# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
+# $ String commands
+# l List commands
+# s Set commands
+# h Hash commands
+# z Sorted set commands
+# x Expired events (events generated every time a key expires)
+# e Evicted events (events generated when a key is evicted for maxmemory)
+# A Alias for g$lshzxe, so that the "AKE" string means all the events.
+#
+# The "notify-keyspace-events" directive takes as its argument a string
+# composed of zero or more of the characters above. The empty string means
+# that notifications are disabled.
+#
+# Example: to enable list and generic events, from the point of view of the
+# event name, use:
+#
+# notify-keyspace-events Elg
+#
+# Example 2: to get the stream of the expired keys subscribing to channel
+# name __keyevent@0__:expired use:
+#
+# notify-keyspace-events Ex
+#
+# By default all notifications are disabled because most users don't need
+# this feature and the feature has some overhead. Note that if you don't
+# specify at least one of K or E, no events will be delivered.
+notify-keyspace-events <% if @notify_keyspace_events -%><%= @notify_keyspace_events %><% else -%>""<% end -%>
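The single-character classes listed above compose into a flag string; a small Python model of that composition (the `CLASSES` table and `expand_flags` helper are illustrative assumptions, not part of Redis):

```python
# event classes as documented in redis.conf; 'A' is an alias for 'g$lshzxe'
CLASSES = set('KEg$lshzxe')

def expand_flags(flags):
    # expand the 'A' alias and validate each character against CLASSES
    out = set()
    for ch in flags:
        if ch == 'A':
            out.update('g$lshzxe')
        elif ch in CLASSES:
            out.add(ch)
        else:
            raise ValueError('unknown event class: %r' % ch)
    return out
```

Note that, as the comment above warns, a result without at least one of `K` or `E` delivers no events at all.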
+
+############################### ADVANCED CONFIG ###############################
+
+# Hashes are encoded using a memory efficient data structure when they have a
+# small number of entries, and the biggest entry does not exceed a given
+# threshold. These thresholds can be configured using the following directives.
+hash-max-ziplist-entries <%= @hash_max_ziplist_entries %>
+hash-max-ziplist-value <%= @hash_max_ziplist_value %>
+
+# Similarly to hashes, small lists are also encoded in a special way in order
+# to save a lot of space. The special representation is only used when
+# you are under the following limits:
+list-max-ziplist-entries <%= @list_max_ziplist_entries %>
+list-max-ziplist-value <%= @list_max_ziplist_value %>
+
+# Sets have a special encoding in just one case: when a set is composed
+# of just strings that happen to be integers in radix 10 in the range
+# of 64 bit signed integers.
+# The following configuration setting sets the limit in the size of the
+# set in order to use this special memory saving encoding.
+set-max-intset-entries <%= @set_max_intset_entries %>
+
+# Similarly to hashes and lists, sorted sets are also specially encoded in
+# order to save a lot of space. This encoding is only used when the length and
+# elements of a sorted set are below the following limits:
+zset-max-ziplist-entries <%= @zset_max_ziplist_entries %>
+zset-max-ziplist-value <%= @zset_max_ziplist_value %>
+
+# HyperLogLog sparse representation bytes limit. The limit includes the
+# 16 bytes header. When a HyperLogLog using the sparse representation crosses
+# this limit, it is converted into the dense representation.
+#
+# A value greater than 16000 is totally useless, since at that point the
+# dense representation is more memory efficient.
+#
+# The suggested value is ~ 3000 in order to have the benefits of
+# the space efficient encoding without slowing down too much PFADD,
+# which is O(N) with the sparse encoding. The value can be raised to
+# ~ 10000 when CPU is not a concern, but space is, and the data set is
+# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
+hll-sparse-max-bytes <%= @hll_sparse_max_bytes %>
+
+# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
+# order to help rehashing the main Redis hash table (the one mapping top-level
+# keys to values). The hash table implementation Redis uses (see dict.c)
+# performs lazy rehashing: the more operations you run against a hash table
+# that is rehashing, the more rehashing "steps" are performed, so if the
+# server is idle the rehashing is never complete and some more memory is
+# used by the hash table.
+#
+# The default is to use this millisecond 10 times every second in order to
+# actively rehash the main dictionaries, freeing memory when possible.
+#
+# If unsure:
+# use "activerehashing no" if you have hard latency requirements and it is
+# not a good thing in your environment that Redis can reply from time to
+# time to queries with a 2 millisecond delay.
+#
+# use "activerehashing yes" if you don't have such hard requirements but
+# want to free memory asap when possible.
+activerehashing <% if @activerehashing -%>yes<% else -%>no<% end -%>
+
+# The client output buffer limits can be used to force disconnection of clients
+# that are not reading data from the server fast enough for some reason (a
+# common reason is that a Pub/Sub client can't consume messages as fast as the
+# publisher can produce them).
+#
+# The limit can be set differently for the three different classes of clients:
+#
+# normal -> normal clients
+# slave -> slave clients and MONITOR clients
+# pubsub -> clients subscribed to at least one pubsub channel or pattern
+#
+# The syntax of every client-output-buffer-limit directive is the following:
+#
+# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
+#
+# A client is immediately disconnected once the hard limit is reached, or if
+# the soft limit is reached and remains reached for the specified number of
+# seconds (continuously).
+# So for instance if the hard limit is 32 megabytes and the soft limit is
+# 16 megabytes / 10 seconds, the client will get disconnected immediately
+# if the size of the output buffers reach 32 megabytes, but will also get
+# disconnected if the client reaches 16 megabytes and continuously overcomes
+# the limit for 10 seconds.
+#
+# By default normal clients are not limited because they don't receive data
+# without asking (in a push way), but just after a request, so only
+# asynchronous clients may create a scenario where data is requested faster
+# than it can be read.
+#
+# Instead there is a default limit for pubsub and slave clients, since
+# subscribers and slaves receive data in a push fashion.
+#
+# Both the hard and the soft limit can be disabled by setting them to zero.
+client-output-buffer-limit normal 0 0 0
+client-output-buffer-limit slave <%= @output_buffer_limit_slave %>
+client-output-buffer-limit pubsub <%= @output_buffer_limit_pubsub %>
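The disconnection rule described above (hard limit trips immediately, soft limit only when exceeded continuously) can be modelled in a few lines of Python; the function name and return convention are illustrative assumptions, not Redis internals:

```python
def check_output_buffer(buf_bytes, hard, soft, soft_seconds, soft_since, now):
    """Return (disconnect, soft_since): soft_since tracks when the soft
    limit was first exceeded, or None while the client is below it."""
    if hard and buf_bytes >= hard:
        return True, soft_since            # hard limit: disconnect immediately
    if soft and buf_bytes >= soft:
        if soft_since is None:
            return False, now              # start the soft-limit clock
        if now - soft_since >= soft_seconds:
            return True, soft_since        # continuously over the soft limit
        return False, soft_since
    return False, None                     # back below the soft limit: reset
```

With the 32 MB / 16 MB / 10 seconds example from the comment above, a client that holds 16 MB for ten straight seconds gets dropped even though it never touches the hard limit.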
+
+# Redis calls an internal function to perform many background tasks, like
+# closing connections of clients in timeout, purging expired keys that are
+# never requested, and so forth.
+#
+# Not all tasks are performed with the same frequency, but Redis checks for
+# tasks to perform according to the specified "hz" value.
+#
+# By default "hz" is set to 10. Raising the value will use more CPU when
+# Redis is idle, but at the same time will make Redis more responsive when
+# there are many keys expiring at the same time, and timeouts may be
+# handled with more precision.
+#
+# The range is between 1 and 500, however a value over 100 is usually not
+# a good idea. Most users should use the default of 10 and raise this up to
+# 100 only in environments where very low latency is required.
+hz <%= @hz %>
+
+# When a child rewrites the AOF file, if the following option is enabled
+# the file will be fsync-ed every 32 MB of data generated. This is useful
+# in order to commit the file to the disk more incrementally and avoid
+# big latency spikes.
+aof-rewrite-incremental-fsync <% if @aof_rewrite_incremental_fsync -%>yes<% else -%>no<% end -%>
+
+# Redis Cluster Settings
+<% if @cluster_enabled -%>
+cluster-enabled yes
+cluster-config-file <%= @cluster_config_file %>
+cluster-node-timeout <%= @cluster_node_timeout %>
+<% end -%>
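When `cluster_enabled` is true, the ERB conditional above renders plain cluster directives. With hypothetical parameter values `cluster_config_file => 'nodes.conf'` and `cluster_node_timeout => 5000`, the generated redis.conf fragment would read:

```
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
```

When the flag is false, nothing is emitted and the instance runs standalone.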
+
+
+################################## INCLUDES ###################################
+
+# Include one or more other config files here. This is useful if you
+# have a standard template that goes to all Redis servers but also need
+# to customize a few per-server settings. Include files can include
+# other files, so use this wisely.
+#
+# include /path/to/local.conf
+# include /path/to/other.conf
+<% if @extra_config_file -%>
+include <%= @extra_config_file %>
+<% end -%>
