diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 2240a97..f614fed 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,281 +1,280 @@ # Contribution guidelines ## Table of contents * [Contributing](#contributing) * [Writing proper commits - short version](#writing-proper-commits-short-version) * [Writing proper commits - long version](#writing-proper-commits-long-version) * [Dependencies](#dependencies) * [Note for OS X users](#note-for-os-x-users) * [The test matrix](#the-test-matrix) * [Syntax and style](#syntax-and-style) * [Running the unit tests](#running-the-unit-tests) * [Unit tests in docker](#unit-tests-in-docker) * [Integration tests](#integration-tests) This module has grown over time based on a range of contributions from people using it. If you follow these contributing guidelines your patch will likely make it into a release a little more quickly. ## Contributing Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. [Contributor Code of Conduct](https://voxpupuli.org/coc/). * Fork the repo. * Create a separate branch for your change. * We only take pull requests with passing tests and documentation. [travis-ci](http://travis-ci.org) runs the tests for us. You can also execute them locally. This is explained [in a later section](#the-test-matrix). * Check out [our docs](https://voxpupuli.org/docs/reviewing_pr/) that we use to review a module and the [official styleguide](https://puppet.com/docs/puppet/6.0/style_guide.html). They provide some guidance for new code that might help you before you submit a pull request. * Add a test for your change. Only refactoring and documentation changes require no new tests. If you are adding functionality or fixing a bug, please add a test. * Squash your commits down into logical components. Make sure to rebase against our current master. * Push the branch to your fork and submit a pull request. Please be prepared to repeat some of these steps as our contributors review your code. ## Writing proper commits - short version * Make commits of logical units. * Check for unnecessary whitespace with "git diff --check" before committing. * Commit using Unix line endings (check the settings around "crlf" in git-config(1)). * Do not check in commented-out code or unneeded files. * The first line of the commit message should be a short description (50 characters is the soft limit, excluding ticket number(s)), and should skip the full stop. * Associate the issue in the message. The first line should include the issue number in the form "(#XXXX) Rest of message". * The body should provide a meaningful commit message, which: * uses the imperative, present tense: `change`, not `changed` or `changes`. * includes motivation for the change, and contrasts its implementation with the previous behavior. * Make sure that you have tests for the bug you are fixing, or feature you are adding. * Make sure the test suite passes after your commit. * When introducing a new feature, make sure it is properly documented in the README.md. ## Writing proper commits - long version 1. Make separate commits for logically separate changes. Please break your commits down into logically consistent units which include new or changed tests relevant to the rest of the change. The goal of doing this is to make the diff easier to read for whoever is reviewing your code.
In general, the easier your diff is to read, the more likely someone will be happy to review it and get it into the code base. If you are going to refactor a piece of code, please do so as a separate commit from your feature or bug fix changes. We also really appreciate changes that include tests to make sure the bug is not re-introduced, and that the feature is not accidentally broken. Describe the technical detail of the change(s). If your description starts to get too long, that is a good sign that you probably need to split up your commit into more finely grained pieces. Commits which plainly describe the things which help reviewers check the patch and future developers understand the code are much more likely to be merged in with a minimum of bike-shedding or requested changes. Ideally, the commit message would include this information, in a form suitable for inclusion in the release notes for the version of Puppet that includes the change. Please also check that you are not introducing any trailing whitespace or other "whitespace errors". You can do this by running "git diff --check" on your changes before you commit. 2. Sending your patches To submit your changes via a GitHub pull request, we _highly_ recommend that you have them on a topic branch, instead of directly on `master`. It makes things much easier to keep track of, especially if you decide to work on another thing before your first change is merged in. GitHub has some pretty good [general documentation](http://help.github.com/) on using their site. They also have documentation on [creating pull requests](http://help.github.com/send-pull-requests/). In general, after pushing your topic branch up to your repository on GitHub, you can switch to the branch in the GitHub UI and click "Pull Request" towards the top of the page in order to open a pull request. 3. Update the related GitHub issue. If there is a GitHub issue associated with the change you submitted, then you should update the ticket to include the location of your branch, along with any other commentary you may wish to make. ## Dependencies The testing and development tools have a bunch of dependencies, all managed by [bundler](http://bundler.io/) according to the [Puppet support matrix](http://docs.puppetlabs.com/guides/platforms.html#ruby-versions). By default the tests use a baseline version of Puppet. If you have Ruby 2.x or want a specific version of Puppet, you must set an environment variable such as: ```sh export PUPPET_VERSION="~> 5.5.6" ``` You can install all needed gems for spec tests into the module's directory by running: ```sh bundle install --path .vendor/ --without development system_tests release --jobs "$(nproc)" ``` If you also want to run acceptance tests: ```sh bundle install --path .vendor/ --with system_tests --without development release --jobs "$(nproc)" ``` An all-in-one solution, if you don't know whether you need to install or update gems: ```sh bundle install --path .vendor/ --with system_tests --without development release --jobs "$(nproc)"; bundle update; bundle clean ``` As an alternative to the `--jobs "$(nproc)"` parameter, you can set an environment variable: ```sh BUNDLE_JOBS="$(nproc)" ``` ### Note for OS X users `nproc` isn't a valid command under OS X. As an alternative, you can do: ```sh --jobs "$(sysctl -n hw.ncpu)" ``` ## The test matrix ### Syntax and style The test suite will run [Puppet Lint](http://puppet-lint.com/) and [Puppet Syntax](https://github.com/gds-operations/puppet-syntax) to check syntax and style.
You can run these locally with: ```sh bundle exec rake lint bundle exec rake validate ``` It will also run some [RuboCop](http://batsov.com/rubocop/) checks against the code. You can run those locally ahead of time with: ```sh bundle exec rake rubocop ``` ### Running the unit tests The unit test suite covers most of the code; as mentioned above, please add tests if you're adding new functionality. If you've not used [rspec-puppet](http://rspec-puppet.com/) before, feel free to ask about how best to test your new feature. To run the linter, the syntax checker and the unit tests: ```sh bundle exec rake test ``` To run all of the unit tests: ```sh bundle exec rake spec ``` To run a specific spec test, set the `SPEC` variable: ```sh bundle exec rake spec SPEC=spec/foo_spec.rb ``` #### Unit tests in docker Some people don't want to run the dependencies locally or don't want to install Ruby. We ship a Dockerfile that enables you to run all unit tests and linting. You only need to run: ```sh docker build . ``` Please ensure that a Docker daemon is running and that your user has the permission to talk to it. You can specify a remote Docker host by setting the `DOCKER_HOST` environment variable. The build copies the content of the module into the Docker image, so it will not work if a Gemfile.lock exists. ### Integration tests The unit tests just check that the code runs, not that it does exactly what we want on a real machine. For that we're using [beaker](https://github.com/puppetlabs/beaker). This fires up a new virtual machine (using Vagrant) and runs a series of simple tests against it after applying the module. You can run this with: ```sh bundle exec rake acceptance ``` This will run the tests on the module's default nodeset. You can override the nodeset used, e.g., ```sh BEAKER_set=centos-7-x64 bundle exec rake acceptance ``` There are default rake tasks for the various acceptance test modules, e.g., ```sh bundle exec rake beaker:centos-7-x64 bundle exec rake beaker:ssh:centos-7-x64 ``` If you don't want to have to recreate the virtual machine every time, you can use `BEAKER_destroy=no` and `BEAKER_provision=no`. On the first run you will at least need `BEAKER_provision` set to yes (the default); an example invocation is sketched at the end of this guide. The Vagrantfile for the created virtual machines will be in `.vagrant/beaker_vagrant_files`. Beaker also supports Docker containers; we use this in our automated CI pipeline at [travis-ci](http://travis-ci.org). To use Docker instead of Vagrant: ```sh PUPPET_INSTALL_TYPE=agent BEAKER_IS_PE=no BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_debug=true BEAKER_setfile=debian10-64{hypervisor=docker} BEAKER_destroy=yes bundle exec rake beaker ``` You can replace the string `debian10` with any common operating system. The following strings are known to work: * ubuntu1604 * ubuntu1804 * debian8 * debian9 * debian10 -* centos6 * centos7 * centos8 The easiest way to debug in a Docker container is to open a shell: ```sh docker exec -it -u root ${container_id_or_name} bash ``` The source of this file is in our [modulesync_config](https://github.com/voxpupuli/modulesync_config/blob/master/moduleroot/.github/CONTRIBUTING.md.erb) repository.
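For example, a minimal sketch of an iterative acceptance-test workflow using the variables described above (the nodeset name is illustrative; pick one that exists under `spec/acceptance/nodesets`):

```sh
# First run: provision the virtual machine (BEAKER_provision defaults to yes) and keep it afterwards
BEAKER_destroy=no BEAKER_set=centos-7-x64 bundle exec rake acceptance

# Later runs: re-use the already provisioned virtual machine instead of rebuilding it
BEAKER_destroy=no BEAKER_provision=no BEAKER_set=centos-7-x64 bundle exec rake acceptance
```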
diff --git a/.sync.yml b/.sync.yml index 930d06c..9583dce 100644 --- a/.sync.yml +++ b/.sync.yml @@ -1,25 +1,24 @@ --- .travis.yml: docker_sets: - set: debian8-64 - set: debian9-64 - set: debian10-64 - set: ubuntu1604-64 - set: ubuntu1804-64 - - set: centos6-64 - set: centos7-64 secure: "j/Db/NnuJUwyFWGVwZEciC/0Xrhaes647UK49ZnlZTjUppUeTsqY/rKE8Pc4jpiW8DsfeGijCYP1O02tquH+KSKSwiwxIBjbToFjhNNJ6Qgh0DGIR29VZkiyirh5ZkK1yLMx9Ciyn8opwOXHqTRMk6JwAY05Gux1sD2T7Eu2c4w=" spec/acceptance/nodesets/ec2/amazonlinux-2016091.yml: delete: true spec/acceptance/nodesets/ec2/image_templates.yaml: delete: true spec/acceptance/nodesets/ec2/rhel-73-x64.yml: delete: true spec/acceptance/nodesets/ec2/sles-12sp2-x64.yml: delete: true spec/acceptance/nodesets/ec2/ubuntu-1604-x64.yml: delete: true spec/acceptance/nodesets/ec2/windows-2016-base-x64.yml: delete: true spec/acceptance/nodesets/archlinux-2-x64.yml: delete: true diff --git a/.travis.yml b/.travis.yml index 8c52ae2..d32a23e 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,104 +1,96 @@ --- os: linux dist: bionic language: ruby cache: bundler before_install: - yes | gem update --system - bundle --version script: - 'bundle exec rake $CHECK' jobs: fast_finish: true include: - rvm: 2.4.4 bundler_args: --without system_tests development release env: PUPPET_VERSION="~> 5.0" CHECK=test - rvm: 2.5.3 bundler_args: --without system_tests development release env: PUPPET_VERSION="~> 6.0" CHECK=test_with_coveralls - rvm: 2.5.3 bundler_args: --without system_tests development release env: PUPPET_VERSION="~> 6.0" CHECK=rubocop - rvm: 2.4.4 bundler_args: --without system_tests development release env: PUPPET_VERSION="~> 5.0" CHECK=build DEPLOY_TO_FORGE=yes - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=debian8-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=debian8-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=debian9-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=debian9-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=debian10-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=debian10-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=ubuntu1604-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=ubuntu1604-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=ubuntu1804-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=ubuntu1804-64 CHECK=beaker services: docker - - rvm: 2.5.3 - bundler_args: --without development release - env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=centos6-64 CHECK=beaker - services: docker - - rvm: 2.5.3 - bundler_args: --without development release - env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=centos6-64 CHECK=beaker - services: docker - rvm: 2.5.3 
bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet5 BEAKER_setfile=centos7-64 CHECK=beaker services: docker - rvm: 2.5.3 bundler_args: --without development release env: BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_setfile=centos7-64 CHECK=beaker services: docker branches: only: - master - /^v\d/ notifications: email: false webhooks: https://voxpupu.li/incoming/travis irc: on_success: always on_failure: always channels: - "chat.freenode.org#voxpupuli-notifications" deploy: provider: puppetforge username: puppet password: secure: "j/Db/NnuJUwyFWGVwZEciC/0Xrhaes647UK49ZnlZTjUppUeTsqY/rKE8Pc4jpiW8DsfeGijCYP1O02tquH+KSKSwiwxIBjbToFjhNNJ6Qgh0DGIR29VZkiyirh5ZkK1yLMx9Ciyn8opwOXHqTRMk6JwAY05Gux1sD2T7Eu2c4w=" on: tags: true # all_branches is required to use tags all_branches: true # Only publish the build marked with "DEPLOY_TO_FORGE" condition: "$DEPLOY_TO_FORGE = yes" diff --git a/README.md b/README.md index c52b74d..853c03a 100644 --- a/README.md +++ b/README.md @@ -1,133 +1,132 @@ # Kafka module for Puppet [![Build Status](https://travis-ci.org/voxpupuli/puppet-kafka.png?branch=master)](https://travis-ci.org/voxpupuli/puppet-kafka) [![Puppet Forge](https://img.shields.io/puppetforge/v/puppet/kafka.svg)](https://forge.puppetlabs.com/puppet/kafka) [![Puppet Forge - downloads](https://img.shields.io/puppetforge/dt/puppet/kafka.svg)](https://forge.puppetlabs.com/puppet/kafka) [![Puppet Forge - endorsement](https://img.shields.io/puppetforge/e/puppet/kafka.svg)](https://forge.puppetlabs.com/puppet/kafka) [![Puppet Forge - scores](https://img.shields.io/puppetforge/f/puppet/kafka.svg)](https://forge.puppetlabs.com/puppet/kafka) ## Table of Contents 1. [Overview](#overview) 1. [Module Description - What the module does and why it is useful](#module-description) 1. [Setup - The basics of getting started with Kafka](#setup) * [What Kafka affects](#what-kafka-affects) * [Setup requirements](#setup-requirements) * [Beginning with Kafka](#beginning-with-kafka) 1. [Usage - Configuration options and additional functionality](#usage) 1. [Reference - An under-the-hood peek at what the module is doing and how](#reference) 1. [Limitations - OS compatibility, etc.](#limitations) 1. [Development - Guide for contributing to the module](#development) ## Overview The Kafka module manages the installation and configuration of [Apache Kafka](http://kafka.apache.org). ## Module Description The Kafka module manages the installation and configuration of Apache Kafka: its brokers, producers and consumers. ## Setup ### What Kafka affects Installs the Kafka package and creates a new service. ### Setup requirements This module has the following dependencies: * [deric/zookeeper](https://github.com/deric/puppet-zookeeper) * [camptocamp/systemd](https://github.com/camptocamp/puppet-systemd) * [puppet/archive](https://github.com/voxpupuli/puppet-archive) * [puppetlabs/java](https://github.com/puppetlabs/puppetlabs-java) * [puppetlabs/stdlib](https://github.com/puppetlabs/puppetlabs-stdlib) ### Beginning with Kafka To successfully install Kafka using this module, you need to have Apache ZooKeeper already running at localhost:2181. You can specify another ZooKeeper host:port configuration using the config hash of the kafka::broker class.
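For instance, a minimal sketch that points the broker at a non-default ZooKeeper (the hostname is illustrative; adjust it to your environment):

```puppet
class { 'kafka::broker':
  config => {
    'broker.id'         => '0',
    # zookeeper.connect takes a host:port (or a comma-separated list for an ensemble)
    'zookeeper.connect' => 'zookeeper01.example.com:2181',
  },
}
```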
The default configuration installs Kafka 0.11.0.3 binaries with Scala 2.11: ```puppet class { 'kafka': } ``` If you want a Kafka broker server that connects to ZooKeeper listening on port 2181: ```puppet class { 'kafka::broker': config => { 'broker.id' => '0', 'zookeeper.connect' => 'localhost:2181' } } ``` ## Usage You can specify different Kafka binary package versions to install. Please take a look at the available Scala and Kafka version combinations on the [Apache Kafka website](http://kafka.apache.org/downloads.html). ### Installing Kafka version 1.1.0 with Scala 2.12 We first install the binary package with: ```puppet class { 'kafka': version => '1.1.0', scala_version => '2.12' } ``` Then we set a minimal Kafka broker configuration with: ```puppet class { 'kafka::broker': config => { 'broker.id' => '0', 'zookeeper.connect' => 'localhost:2181' } } ``` ## Reference The [reference][1] documentation of this module is generated using [puppetlabs/puppetlabs-strings][2]. ## Limitations This module only supports Kafka >= 0.9.0.0. This module is tested on the following platforms: * Debian 8 * Debian 9 * Debian 10 * Ubuntu 16.04 * Ubuntu 18.04 -* CentOS 6 * CentOS 7 It is tested with the OSS version of Puppet (>= 5.5) only. ## Development This module has grown over time based on a range of contributions from people using it. If you follow these [contributing][3] guidelines your patch will likely make it into a release a little more quickly. ## Author This module is maintained by [Vox Pupuli][4]. It was originally written and maintained by [Liam Bennett][5]. [1]: https://github.com/voxpupuli/puppet-kafka/blob/master/REFERENCE.md [2]: https://github.com/puppetlabs/puppetlabs-strings [3]: https://github.com/voxpupuli/puppet-kafka/blob/master/.github/CONTRIBUTING.md [4]: https://voxpupuli.org -[5]: https://www.opentable.com \ No newline at end of file +[5]: https://www.opentable.com diff --git a/metadata.json b/metadata.json index 187fd40..f7c0780 100644 --- a/metadata.json +++ b/metadata.json @@ -1,81 +1,79 @@ { "name": "puppet-kafka", "version": "7.0.1-rc0", "author": "Vox Pupuli", "summary": "Puppet module for Kafka", "license": "MIT", "source": "https://github.com/voxpupuli/puppet-kafka", "project_page": "https://github.com/voxpupuli/puppet-kafka", "issues_url": "https://github.com/voxpupuli/puppet-kafka/issues", "dependencies": [ { "name": "puppet/archive", "version_requirement": ">= 1.0.0 < 5.0.0" }, { "name": "puppetlabs/java", "version_requirement": ">= 1.4.2 < 6.0.0" }, { "name": "puppetlabs/stdlib", "version_requirement": ">= 4.22.0 < 7.0.0" }, { "name": "deric/zookeeper", "version_requirement": ">= 0.5.1 < 2.0.0" }, { "name": "camptocamp/systemd", "version_requirement": ">= 0.4.0 < 3.0.0" } ], "operatingsystem_support": [ { "operatingsystem": "CentOS", "operatingsystemrelease": [ - "6", "7" ] }, { "operatingsystem": "RedHat", "operatingsystemrelease": [ - "6", "7" ] }, { "operatingsystem": "Ubuntu", "operatingsystemrelease": [ "16.04", "18.04" ] }, { "operatingsystem": "Debian", "operatingsystemrelease": [ "8", "9", "10" ] }, { "operatingsystem": "SLES", "operatingsystemrelease": [ "11", "12", "15" ] } ], "requirements": [ { "name": "puppet", "version_requirement": ">= 5.5.8 < 7.0.0" } ], "tags": [ "kafka", "pubsub" ] } diff --git a/spec/acceptance/broker_spec.rb b/spec/acceptance/broker_spec.rb index 6f90278..4b9136f 100644 --- a/spec/acceptance/broker_spec.rb +++ b/spec/acceptance/broker_spec.rb @@ -1,245 +1,226 @@ require 'spec_helper_acceptance' -if
fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat' - user_shell = '/bin/bash' -else - case fact('osfamily') - when 'RedHat', 'Suse' - user_shell = '/sbin/nologin' - when 'Debian' - user_shell = '/usr/sbin/nologin' - end +case fact('osfamily') +when 'RedHat', 'Suse' + user_shell = '/sbin/nologin' +when 'Debian' + user_shell = '/usr/sbin/nologin' end describe 'kafka::broker' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, } -> kafka::topic { 'demo': ensure => present, zookeeper => 'localhost:2181', } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe 'kafka::broker::install' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::broker::config' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, } EOS apply_manifest(pp, catch_failures: true) end describe file('/opt/kafka/config/server.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } it { is_expected.to contain 'zookeeper.connect=localhost:2181' } end end context 'with custom config dir' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, config_dir => '/opt/kafka/custom_config' } EOS apply_manifest(pp, catch_failures: true) end describe file('/opt/kafka/custom_config/server.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } it { is_expected.to contain 'zookeeper.connect=localhost:2181' } end end context 'with specific version' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': kafka_version => '2.4.0', config => { 'zookeeper.connect' => 'localhost:2181', }, } EOS apply_manifest(pp, catch_failures: true) end describe file('/opt/kafka/config/server.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::broker::service' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, } EOS apply_manifest(pp, 
catch_failures: true) end - describe file('/etc/init.d/kafka'), if: (fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'root' } - it { is_expected.to be_grouped_into 'root' } - end - describe file('/etc/systemd/system/kafka.service'), if: (fact('operatingsystemmajrelease') == '7' && fact('osfamily') == 'RedHat') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } end describe service('kafka') do it { is_expected.to be_running } it { is_expected.to be_enabled } end end end describe 'kafka::broker::service' do context 'with log4j/jmx parameters' do it 'works with no errors' do pp = <<-EOS exec { 'create log dir': command => '/bin/mkdir -p /some/path/to/logs', creates => '/some/path/to/logs', } -> class { 'kafka::broker': config => { 'zookeeper.connect' => 'localhost:2181', }, heap_opts => '-Xmx512M -Xmx512M', log4j_opts => '-Dlog4j.configuration=file:/tmp/log4j.properties', jmx_opts => '-Dcom.sun.management.jmxremote', opts => '-Djava.security.policy=/some/path/my.policy', log_dir => '/some/path/to/logs' } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end - describe file('/etc/init.d/kafka'), if: (fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'root' } - it { is_expected.to be_grouped_into 'root' } - it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote"' } - it { is_expected.to contain 'export KAFKA_HEAP_OPTS="-Xmx512M -Xmx512M"' } - it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/tmp/log4j.properties"' } - end - describe file('/etc/init.d/kafka'), if: (fact('service_provider') == 'upstart' && fact('osfamily') == 'Debian') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain %r{^# Provides:\s+kafka$} } it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote"' } it { is_expected.to contain 'export KAFKA_HEAP_OPTS="-Xmx512M -Xmx512M"' } it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/tmp/log4j.properties"' } end describe file('/etc/systemd/system/kafka.service'), if: (fact('operatingsystemmajrelease') == '7' && fact('osfamily') == 'RedHat') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain "Environment='KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote'" } it { is_expected.to contain "Environment='KAFKA_HEAP_OPTS=-Xmx512M -Xmx512M'" } it { is_expected.to contain "Environment='KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/tmp/log4j.properties'" } it { is_expected.to contain "Environment='KAFKA_OPTS=-Djava.security.policy=/some/path/my.policy'" } it { is_expected.to contain "Environment='LOG_DIR=/some/path/to/logs'" } end describe service('kafka') do it { is_expected.to be_running } it { is_expected.to be_enabled } end end end end diff --git a/spec/acceptance/consumer_spec.rb b/spec/acceptance/consumer_spec.rb index 5813475..812abff 100644 --- a/spec/acceptance/consumer_spec.rb +++ b/spec/acceptance/consumer_spec.rb @@ -1,177 +1,165 @@ require 'spec_helper_acceptance' -if fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat' - user_shell = '/bin/bash' -else - case fact('osfamily') - 
when 'RedHat', 'Suse' - user_shell = '/sbin/nologin' - when 'Debian' - user_shell = '/usr/sbin/nologin' - end +case fact('osfamily') +when 'RedHat', 'Suse' + user_shell = '/sbin/nologin' +when 'Debian' + user_shell = '/usr/sbin/nologin' end describe 'kafka::consumer' do it 'works with no errors' do pp = <<-EOS class { 'kafka::consumer': service_config => { topic => 'demo', bootstrap-server => 'localhost:9092', }, } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe 'kafka::consumer::install' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::consumer': service_config => { topic => 'demo', bootstrap-server => 'localhost:9092', }, } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::consumer::config' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::consumer': service_config => { topic => 'demo', bootstrap-server => 'localhost:9092', }, } EOS apply_manifest(pp, catch_failures: true) end describe file('/opt/kafka/config/consumer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::consumer::config' do context 'with custom config_dir' do it 'works with no errors' do pp = <<-EOS class { 'kafka::consumer': service_config => { topic => 'demo', bootstrap-server => 'localhost:9092', }, config_dir => '/opt/kafka/custom_config', } EOS apply_manifest(pp, catch_failures: true) end describe file('/opt/kafka/custom_config/consumer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::consumer::service' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::consumer': service_config => { topic => 'demo', bootstrap-server => 'localhost:9092', }, } EOS apply_manifest(pp, catch_failures: true) end - describe file('/etc/init.d/kafka-consumer'), if: (fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'root' } - it { is_expected.to be_grouped_into 'root' } - it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9993"' } - it { is_expected.to contain 'export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties"' } - end - describe file('/etc/init.d/kafka-consumer'), if: (fact('service_provider') == 'upstart' && fact('osfamily') == 'Debian') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain %r{^# Provides:\s+kafka-consumer$} } it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9993"' } it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties"' } end describe file('/etc/systemd/system/kafka-consumer.service'), if: (fact('operatingsystemmajrelease') == '7' && fact('osfamily') == 'RedHat') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain 'Environment=\'KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9993\'' } it { is_expected.to contain 'Environment=\'KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties\'' } end describe service('kafka-consumer') do it { is_expected.to be_running } it { is_expected.to be_enabled } end end end end diff --git a/spec/acceptance/init_spec.rb b/spec/acceptance/init_spec.rb index 53c4db3..df552dc 100644 --- a/spec/acceptance/init_spec.rb +++ b/spec/acceptance/init_spec.rb @@ -1,223 +1,219 @@ require 'spec_helper_acceptance' -if fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat' - user_shell = '/bin/bash' -else - case fact('osfamily') - when 'RedHat', 'Suse' - user_shell = '/sbin/nologin' - when 'Debian' - user_shell = '/usr/sbin/nologin' - end +case fact('osfamily') +when 'RedHat', 'Suse' + user_shell = '/sbin/nologin' +when 'Debian' + user_shell = '/usr/sbin/nologin' end describe 'kafka' do it 'works with no errors' do pp = <<-EOS class { 'kafka': } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe 'kafka::init' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka': } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end context 'with specific kafka version' do it 'works with no errors' do pp = <<-EOS class { 'kafka': kafka_version => '2.4.0', } EOS apply_manifest(pp, 
catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.0') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.0') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end context 'with specific scala version' do it 'works with no errors' do pp = <<-EOS class { 'kafka': scala_version => '2.13', } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.13-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.13-2.4.1') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end context 'with specific config dir' do it 'works with no errors' do pp = <<-EOS class { 'kafka': config_dir => '/opt/kafka/custom_config', } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } end describe file('/opt/kafka/custom_config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end end diff --git a/spec/acceptance/mirror_spec.rb b/spec/acceptance/mirror_spec.rb index d5144e7..443d144 100644 --- a/spec/acceptance/mirror_spec.rb +++ b/spec/acceptance/mirror_spec.rb @@ -1,255 +1,243 @@ require 'spec_helper_acceptance' -if 
fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat' - user_shell = '/bin/bash' -else - case fact('osfamily') - when 'RedHat', 'Suse' - user_shell = '/sbin/nologin' - when 'Debian' - user_shell = '/usr/sbin/nologin' - end +case fact('osfamily') +when 'RedHat', 'Suse' + user_shell = '/sbin/nologin' +when 'Debian' + user_shell = '/usr/sbin/nologin' end describe 'kafka::mirror' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe 'kafka::mirror::install' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, } EOS apply_manifest(pp, catch_failures: true) end describe group('kafka') do it { is_expected.to exist } end describe user('kafka') do it { is_expected.to exist } it { is_expected.to belong_to_group 'kafka' } it { is_expected.to have_login_shell user_shell } end describe file('/var/tmp/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka-2.12-2.4.1') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka') do it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } end describe file('/opt/kafka/config') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/var/log/kafka') do it { is_expected.to be_directory } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::mirror::config' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe file('/opt/kafka/config/consumer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka/config/producer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end context 'with custom config_dir' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, config_dir => '/opt/kafka/custom_config', } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe file('/opt/kafka/custom_config/consumer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end 
describe file('/opt/kafka/custom_config/producer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end context 'with specific version' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': kafka_version => '2.4.0', consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end describe file('/opt/kafka/config/consumer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end describe file('/opt/kafka/config/producer.properties') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'kafka' } it { is_expected.to be_grouped_into 'kafka' } end end end describe 'kafka::mirror::service' do context 'with default parameters' do it 'works with no errors' do pp = <<-EOS class { 'kafka::mirror': consumer_config => { 'group.id' => 'kafka-mirror', 'bootstrap.servers' => 'localhost:9092', }, producer_config => { 'bootstrap.servers' => 'localhost:9092', }, service_config => { 'whitelist' => '.*', }, } EOS apply_manifest(pp, catch_failures: true) apply_manifest(pp, catch_changes: true) end - describe file('/etc/init.d/kafka-mirror'), if: (fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'root' } - it { is_expected.to be_grouped_into 'root' } - it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9991"' } - it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties"' } - end - describe file('/etc/init.d/kafka-mirror'), if: (fact('service_provider') == 'upstart' && fact('osfamily') == 'Debian') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain %r{^# Provides:\s+kafka-mirror$} } it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9991"' } it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties"' } end describe file('/etc/systemd/system/kafka-mirror.service'), if: (fact('operatingsystemmajrelease') == '7' && fact('osfamily') == 'RedHat') do it { is_expected.to be_file } it { is_expected.to be_owned_by 'root' } it { is_expected.to be_grouped_into 'root' } it { is_expected.to contain 'Environment=\'KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9991\'' } it { is_expected.to contain 'Environment=\'KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties\'' } end describe service('kafka-mirror') do it { is_expected.to be_running } it { is_expected.to be_enabled } end end end end diff --git a/spec/acceptance/producer_spec.rb b/spec/acceptance/producer_spec.rb deleted file mode 100644 index 2df0dce..0000000 --- 
a/spec/acceptance/producer_spec.rb +++ /dev/null @@ -1,162 +0,0 @@ -require 'spec_helper_acceptance' - -if fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat' - user_shell = '/bin/bash' -else - case fact('osfamily') - when 'RedHat', 'Suse' - user_shell = '/sbin/nologin' - when 'Debian' - user_shell = '/usr/sbin/nologin' - end -end - -describe 'kafka::producer', if: (fact('operatingsystemmajrelease') == '6' && fact('osfamily') == 'RedHat') do # systemd systems not supported by kafka::producer::service - it 'works with no errors' do - pp = <<-EOS - exec { 'create fifo': - command => '/usr/bin/mkfifo /tmp/kafka-producer', - user => 'kafka', - creates => '/tmp/kafka-producer', - } -> - class { 'kafka::producer': - service_config => { - 'broker-list' => 'localhost:9092', - topic => 'demo', - }, - input => '3<>/tmp/kafka-producer 0>&3', - } - EOS - - apply_manifest(pp, catch_failures: true) - apply_manifest(pp, catch_changes: true) - end - - describe 'kafka::producer::install' do - context 'with default parameters' do - it 'works with no errors' do - pp = <<-EOS - exec { 'create fifo': - command => '/usr/bin/mkfifo /tmp/kafka-producer', - user => 'kafka', - creates => '/tmp/kafka-producer', - } -> - class { 'kafka::producer': - service_config => { - 'broker-list' => 'localhost:9092', - topic => 'demo', - }, - input => '3<>/tmp/kafka-producer 0>&3', - } - EOS - - apply_manifest(pp, catch_failures: true) - apply_manifest(pp, catch_changes: true) - end - - describe group('kafka') do - it { is_expected.to exist } - end - - describe user('kafka') do - it { is_expected.to exist } - it { is_expected.to belong_to_group 'kafka' } - it { is_expected.to have_login_shell user_shell } - end - - describe file('/var/tmp/kafka') do - it { is_expected.to be_directory } - it { is_expected.to be_owned_by 'kafka' } - it { is_expected.to be_grouped_into 'kafka' } - end - - describe file('/opt/kafka-2.12-2.4.1') do - it { is_expected.to be_directory } - it { is_expected.to be_owned_by 'kafka' } - it { is_expected.to be_grouped_into 'kafka' } - end - - describe file('/opt/kafka') do - it { is_expected.to be_linked_to('/opt/kafka-2.12-2.4.1') } - end - - describe file('/opt/kafka/config') do - it { is_expected.to be_directory } - it { is_expected.to be_owned_by 'kafka' } - it { is_expected.to be_grouped_into 'kafka' } - end - - describe file('/var/log/kafka') do - it { is_expected.to be_directory } - it { is_expected.to be_owned_by 'kafka' } - it { is_expected.to be_grouped_into 'kafka' } - end - end - end - - describe 'kafka::producer::config' do - context 'with default parameters' do - it 'works with no errors' do - pp = <<-EOS - exec { 'create fifo': - command => '/usr/bin/mkfifo /tmp/kafka-producer', - user => 'kafka', - creates => '/tmp/kafka-producer', - } -> - class { 'kafka::producer': - service_config => { - 'broker-list' => 'localhost:9092', - topic => 'demo', - }, - input => '3<>/tmp/kafka-producer 0>&3', - } - EOS - - apply_manifest(pp, catch_failures: true) - apply_manifest(pp, catch_changes: true) - end - - describe file('/opt/kafka/config/producer.properties') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'kafka' } - it { is_expected.to be_grouped_into 'kafka' } - end - end - end - - describe 'kafka::producer::service' do - context 'with default parameters' do - it 'works with no errors' do - pp = <<-EOS - class { 'kafka::producer': - service_config => { - 'broker-list' => 'localhost:9092', - topic => 'demo', - }, - input => '3<>/tmp/kafka-producer 0>&3', - } - 
EOS - - apply_manifest(pp, catch_failures: true) - apply_manifest(pp, catch_changes: true) - end - - describe file('/etc/init.d/kafka-producer') do - it { is_expected.to be_file } - it { is_expected.to be_owned_by 'root' } - it { is_expected.to be_grouped_into 'root' } - it { is_expected.to contain 'export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9992"' } - it { is_expected.to contain 'export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka/config/log4j.properties"' } - end - - describe file('/etc/init.d/kafka-producer'), if: (fact('service_provider') == 'upstart' && fact('osfamily') == 'Debian') do - it { is_expected.to contain %r{^# Provides:\s+kafka-producer$} } - end - - describe service('kafka-producer') do - it { is_expected.to be_running } - it { is_expected.to be_enabled } - end - end - end -end