DevOps Series: Ansible Deployment of Consul
This Ansible role installs Consul, including the filesystem structure and server or client configuration.
It can also bootstrap a development or evaluation cluster of 3 server agents running in a Vagrant and VirtualBox based environment. It might work with other software versions, but is known to work with the following specific software and versions. This role uses a host inventory variable to define each server's role when forming a cluster. You can also specify client as the role, and Consul will be configured as a client agent instead of a server.
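For example, a hosts inventory for a simple three-server cluster might be defined along these lines. The variable name consul_node_role, the group name, and the hostnames are illustrative; check the role's documentation for the exact variable it expects:

```ini
[consul_instances]
consul1.local consul_node_role=bootstrap
consul2.local consul_node_role=server
consul3.local consul_node_role=server

; A client agent can be added by setting the role to "client":
; consul4.local consul_node_role=client
```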
The consul binary works on most Linux platforms and is not distribution specific. However, some distributions require installation of specific OS packages with different package names. Ansible requires GNU tar, and this role makes some local use of the unarchive module, so ensure that your system has gtar installed and in the PATH.
If you're on a system with a different (i.e. BSD) tar, like macOS, and you see odd errors during unarchive tasks, you could be missing gtar.
You can also pass variables in using the --extra-vars option to the ansible-playbook command. Basic support for ACLs is included in the role. Not all ACL settings are currently picked up from environment variables, but they do have some sensible defaults. The role now includes support for DNS forwarding with Dnsmasq, which you can enable with a role variable. Note that iptables forwarding and Dnsmasq forwarding cannot be used simultaneously; the execution of the role will stop with an error if such a configuration is specified.
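A sketch of both invocations follows. The variable name consul_dnsmasq_enable is an assumption modelled on the role's naming style, and the playbook and inventory filenames are placeholders, so verify the variable names against the role's defaults file:

```shell
# Pass a role variable on the command line with --extra-vars:
ansible-playbook -i hosts site.yml --extra-vars "consul_version=1.8.0"

# Enable Dnsmasq-based DNS forwarding (do not combine with iptables forwarding):
ansible-playbook -i hosts site.yml --extra-vars "consul_dnsmasq_enable=true"
```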
You can enable TLS encryption by dropping a CA certificate, server certificate, and server key into the role's files directory.
In this 17th article in the DevOps series, we discuss the Ansible deployment of Consul.
Consul is a tool written by HashiCorp that can be used for creating health checks for services and systems. Consul is distributed, highly available, and data centre aware. The recommended number of Consul nodes in a cluster is three or five, in order to handle failures.
Every data centre can contain a Consul cluster. Consul is released under the Mozilla Public License 2.0. The version of Ansible used is 2.x. On the host system, we will create a project directory structure to store the Ansible playbooks, inventory and configuration files. The other members of the Consul cluster, consul2 and consul3, belong to the server group.
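A hypothetical inventory for this layout could look like the following. The hostname consul1 and the group names are assumptions, so adjust them to match your own environment:

```ini
[bootstrap]
consul1

[server]
consul2
consul3
```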
The default Debian 9 installation does not have the sudo package installed. Log in as the root user, and install the sudo package on all three VMs. You can now test connectivity from Ansible to the individual Consul nodes, as well as collectively, using Ansible's ping module. The first step is to install Consul on all the nodes. The software package repository is updated, and a few network tools are installed. The execution of the binary is verified.
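For instance, with an inventory file named hosts (the filename is a placeholder for your own inventory path):

```shell
# Ping a single node, then the whole cluster:
ansible -i hosts consul2 -m ping
ansible -i hosts all -m ping
```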
The playbook to install Consul is provided below for reference. You can also now log in to any one of the nodes and check the version output by running consul version. The bootstrap Consul node is the first to be configured, via a JSON configuration file. The configuration specifies that this is a bootstrap node, and that the server should bind to any IP address. We specify a name for the data centre, and also the path to the data directory. An encryption key is specified to encrypt the traffic between the Consul nodes.
Finally, the log level and the use of syslog are specified in the configuration file. We also create a systemd unit file for starting the bootstrap Consul node. The playbook to set up and start the bootstrap node copies this configuration into place and starts the service. The last step is to configure the other Consul nodes in the cluster.
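A minimal sketch of such a bootstrap configuration, reconstructed from the description above. The data centre name, paths, and key are placeholders, and the encryption key should be generated with consul keygen:

```json
{
  "bootstrap": true,
  "server": true,
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "data_dir": "/var/consul",
  "encrypt": "REPLACE_WITH_OUTPUT_OF_consul_keygen",
  "log_level": "INFO",
  "enable_syslog": true
}
```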
The configuration specifies that this is not a bootstrap node but is part of the Consul cluster. The data centre, data directory and encryption key are specified, and the log level is also supplied. Finally, the IP addresses of all the Consul nodes are listed so that each node can join the cluster. A systemd unit file is also created to start the Consul service on these nodes.
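A corresponding sketch for the non-bootstrap servers. The IP addresses and paths are placeholders, and the encryption key must match the one used on the bootstrap node:

```json
{
  "bootstrap": false,
  "server": true,
  "datacenter": "dc1",
  "data_dir": "/var/consul",
  "encrypt": "SAME_KEY_AS_BOOTSTRAP_NODE",
  "log_level": "INFO",
  "start_join": ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
}
```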
The playbook for setting up the Consul nodes is given below. You can now verify the nodes that are part of the Consul cluster from any host with the consul members command.
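For example (the web UI port 8500 is Consul's default, and using it here is an assumption about this particular setup):

```shell
# List the members of the cluster from any node:
consul members

# Query the web UI endpoint on host1 with curl:
curl http://host1:8500/ui/
```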
The web UI listens on a port on host1, and you can make a curl request to it.

Checks may also be registered per node. Currently, there is no complete way to retrieve the script, interval or ttl metadata for a registered check. Without this metadata it is not possible to tell whether the data supplied to Ansible represents a change to a check.
As a result, this module does not attempt to determine changes and will always report that a change occurred. An API method is planned to supply this metadata, at which stage change management will be added. Ignored if part of a service definition. This means that Consul will check that the HTTP endpoint returns a successful HTTP status.
This is a number with an s or m suffix to signify units of seconds or minutes. If no suffix is supplied, m will be used by default.
Required if the script parameter is specified; scripts require an interval and vice versa. Unique name for the service on a node; must be unique per node, and required if registering a service.
May be omitted if registering a node-level check. Can optionally be supplied for registration of a service. A custom HTTP check timeout; the Consul default is 10 seconds. Similar to the interval, this is a number with an s or m suffix to signify units of seconds or minutes.
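The suffix convention described above can be made concrete with a small helper. This parser is an illustration of the stated rules (s for seconds, m for minutes, minutes assumed when no suffix is given), not code from the module itself:

```python
def parse_duration(value):
    """Parse a Consul-style duration such as '10s' or '5m' into seconds.

    Per the rules above, an 's' suffix means seconds, an 'm' suffix
    means minutes, and a bare number defaults to minutes.
    """
    value = str(value).strip()
    if value.endswith("s"):
        return int(value[:-1])
    if value.endswith("m"):
        return int(value[:-1]) * 60
    return int(value) * 60  # no suffix: default to minutes

print(parse_duration("10s"))  # 10
print(parse_duration("5m"))   # 300
print(parse_duration("5"))    # 300
```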
If it doesn't, the check will be considered failed. Required if registering a check and the script and interval are missing. Similar to the interval, this is a number with an s or m suffix to signify units of seconds or minutes. Required if standalone; ignored if part of a service definition.
This Ansible role installs Consul, including establishing a filesystem structure and server or client agent configuration, with support for some common operational features. It can also bootstrap a development or evaluation cluster of 3 server agents running in a Vagrant and VirtualBox based environment. Please note that the original design goal of this role was concerned primarily with the initial installation and bootstrapping of a Consul server cluster environment, so it does not currently concern itself much with performing ongoing maintenance of a cluster.
Many users have found that the Vagrant based environment makes getting a working local Consul server cluster up and running an easy process, so this role will continue to target that experience as a primary motivation for existing. The role might work with other OS distributions and versions, but is known to function well with the following software versions. Note that for the "local" installation mode (the default), this role will locally download only one instance of the Consul archive, unzip it and install the resulting binary on all desired Consul hosts.
Doing so requires that unzip is available on the Ansible control host, and the role will fail if it doesn't detect unzip in the PATH. This role does not fully support the limit option (ansible -l) to limit the hosts, as this will break the population of required host variables. If you do use the limit option with this role, you can encounter template errors. The role will not function properly if the label name is any other value.
Many role variables can also take their values from environment variables; those are noted in the description where appropriate. Notice that the dict object has to use precisely the names stated in the documentation, and all ports must be specified. One server should be designated as the bootstrap server, and the other servers will connect to this server.
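As an illustration only: the variable name and port keys below are assumptions modelled on Consul's standard port names, so copy the exact names from the role's documentation rather than from here:

```yaml
consul_ports:
  dns: 8600
  http: 8500
  https: -1
  serf_lan: 8301
  serf_wan: 8302
  server: 8300
```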
You can also specify client as the role, and Consul will be configured as a client agent instead of a server. There are two methods to set up a cluster: the first is to explicitly choose the bootstrap server; the other is to let the servers elect a leader among themselves.
Autopilot is a set of new features added in Consul 0. It includes cleanup of dead servers, monitoring the state of the Raft cluster, and stable server introduction. Dead servers will periodically be cleaned up and removed from the Raft peer set, to prevent them from interfering with the quorum size and leader elections.
This cleanup will also happen whenever a new server is successfully added to the cluster. The Consul snapshot agent takes backup snapshots on a set interval and stores them; it requires Consul Enterprise.
The consul binary works on most Linux platforms and is not distribution specific. However, some distributions require installation of specific OS packages with different package names. The node leave drain time is the dwell time for a server to honour requests while gracefully leaving. Ansible requires GNU tar, and this role performs some local use of the unarchive module for efficiency, so ensure that your system has gtar and unzip installed and in the PATH. If you don't, this role will install unzip on the remote machines to unarchive the ZIP files.
If you're on a system with a different (i.e. BSD) tar, like macOS, and you see odd errors during unarchive tasks, you could be missing gtar. These are already installed on Windows Server R2 and onward; if you're attempting this role on an earlier version of Windows Server, you'll want to install the extensions first. You can also pass variables in using the --extra-vars option to the ansible-playbook command.
I would like to pull KV information from Consul when running ansible-playbook to populate the inventory with the host and role assignment. The basic idea is to use --extra-vars to supply the hostname, and from there pull the information from Consul. I just need to know if it's possible using the built-in functions (and if so, which plugins would be appropriate), or if some sort of workaround is the only way. The answer to your question is to use the dynamic inventory mechanism; from there you can use any programming language you'd like, including bash, invoking the consul CLI to run whatever queries you want.
So long as the output is the JSON that Ansible is expecting, that contract is well-defined. And the answer appears to be "not very hard" (here I'm using the dig lookup, since I don't have consul nor python-consul available to test, but dig will do for our purposes).
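To make that JSON contract concrete, here is a minimal dynamic inventory script skeleton. The group name, hostnames, and the spot where a Consul KV query would go are all placeholders; Ansible only cares about the JSON shape emitted on stdout:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory skeleton.

Run with --list to emit the full inventory as JSON. In a real
script, the host/role data below would come from a Consul KV
query (e.g. via the consul CLI or python-consul).
"""
import json
import sys

def build_inventory():
    # Placeholder data standing in for a Consul KV lookup.
    hosts_by_role = {"consul_servers": ["consul2", "consul3"]}
    inventory = {"_meta": {"hostvars": {}}}
    for role, hosts in hosts_by_role.items():
        inventory[role] = {"hosts": hosts}
        for host in hosts:
            inventory["_meta"]["hostvars"][host] = {"role": role}
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # --host <name>: per-host vars (empty here; _meta covers them).
        print(json.dumps({}))
```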
Using Consul KV store for inventory details, possible?

Is it possible to use KV information from Consul to populate the inventory during runtime?
Thanks Matthew, I'll mark the first one as the answer since it was first, but I'll look into both and see which one I'll go for.
Sean: So today we're going to present to you our musings on the subject of how our tools work better together. So just kick back, enjoy the ride with us and hear us muttering amongst ourselves on how these work together. Dylan: And keep your hands up. How many of you are actually Ansible users as well? All right. A fair amount. Dylan: I'll apologize in advance. I've got a boring slide coming, but we'll get past that one pretty quickly.
Dylan: So, the first question. Well, we look at it this way, from Red Hat Ansible automation side, which really is the culmination of engine, tower, galaxy, Ansible vault, it's how do we take that tool set and take the community that comes from it and extend it out to the rest of the ecosystem.
So, occasionally you'll hear us mention Ansible as the glue of all that is automation, all that is the DevOps tool ecosystem that we all work with. Taking that step back, Ansible doesn't necessarily have to own and do every single task that it sets out to do.
So, being that glue or being the orchestrator, think of it as the composer of a nice symphonic piece, we can reach out and tell other tools and work with other tools to do the task that it's best suited for. So, we're not that big instrument that owns the whole piece.
There are other instruments that can do the job better than us, or can do it in a way that we wouldn't be able to tackle ourselves. So that being said...

Sean: Today we'll be showing you how three different HashiCorp tools can benefit the Ansible user.
First we'll take a look at HashiCorp Vault, our secrets management product, and how it compares to Ansible Vault. Next we'll show you how Ansible can be combined with Terraform or Packer to enable powerful and efficient build pipelines.
There are many products and projects that contain "vault" in their name.

If your Ansible inventory fluctuates over time, with hosts spinning up and shutting down in response to business demands, the static inventory solutions described in How to build your inventory will not serve your needs.
Ansible integrates all of these options via a dynamic external inventory system. Ansible supports two ways to connect with external inventory: Inventory Plugins and inventory scripts. Inventory plugins take advantage of the most recent updates to the Ansible core code. We recommend plugins over scripts for dynamic inventory. You can write your own plugin to connect to additional dynamic inventory sources. You can still use inventory scripts if you choose.
When we implemented inventory plugins, we ensured backwards compatibility via the script inventory plugin. The examples below illustrate how to use inventory scripts. If you would like a GUI for handling dynamic inventory, the Red Hat Ansible Tower inventory database syncs with all your dynamic inventory sources, provides web and REST access to the results, and offers a graphical inventory editor.
With a database record of all of your hosts, you can correlate past event history and see which hosts have had failures on their last playbook runs. Ansible integrates seamlessly with Cobbler, a Linux installation server originally written by Michael DeHaan and now led by James Cammarata, who works for Ansible. Run cobblerd any time you use Ansible, and use the -i command line option to point at the Cobbler inventory script. Add a Cobbler configuration file. For example:

You should see some JSON data output, but it may not have anything in it just yet.
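A sketch of what that configuration file might contain, assuming a Cobbler API endpoint on localhost. The section and option names follow the Cobbler inventory script's documented format, but verify them against your version:

```ini
[cobbler]
host = http://127.0.0.1/cobbler_api

# Cache settings to avoid hitting the Cobbler API on every run
cache_path = /tmp
cache_max_age = 900
```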
The script provides more than host and group info. You can still pass in your own variables as normal in Ansible, but variables from the external inventory script will override any that have the same name. If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts may come and go over time, be managed by external applications, or you might be using AWS autoscaling. For this reason, you can use the EC2 external inventory script.
You can use this script in one of two ways. You must also copy its accompanying configuration file alongside it. Then you can run ansible as you would normally. You can provide credentials in several ways, but the simplest is to export two environment variables:
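For the EC2 inventory script, those two variables are the standard AWS credential variables; the values below are obvious placeholders:

```shell
# Standard AWS credential environment variables read by the EC2 script:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
```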
An example profile might look like the following. You can then run the inventory script against that profile.
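A sketch of such a boto profile, as it might appear in a boto configuration file. The profile name and key values are placeholders:

```ini
[profile prod]
aws_access_key_id = <prod access key>
aws_secret_access_key = <prod secret key>
```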
Since each region requires its own API call, if you are only using a small set of regions, you can edit the script's configuration to list only the regions you are interested in. There are other config options in that file as well. At their heart, inventory files are simply a mapping from some name to a destination address, and the script's configuration controls which address is used as the destination. This is particularly important when running Ansible within a private subnet inside a VPC, where the only way to access an instance is via its private IP address. The script then makes information about each instance available as variables to your playbooks.
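For instance, restricting the script to two regions might look like this in its ini configuration. The option name is an assumption based on the EC2 inventory script's documented settings, so treat it as something to verify:

```ini
[ec2]
regions = us-east-1,us-west-2
```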
Here are some of the variables available:

Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable in the script's configuration file.