# DevNet Lab 16 -- Using Ansible to automate the installation of web services on Incus containers
[toc]
---
### Scenario
In this lab, you will first configure Ansible to communicate with a virtual machine hosting web servers in Incus system containers. You will create playbooks that automate the installation of Incus on the web server VM, and you will build a dynamic inventory with a Python script. You will also create a custom playbook that installs Apache with specific instructions on each container.

### Objectives
After completing the hands-on activities of this lab, you will be able to:
- Set up Ansible on the Devnet development VM.
- Start a target virtual machine for hosting Incus containers.
- Generate and manage user secrets for Ansible using a dedicated user account.
- Create a dynamic inventory using a Python script.
- Automate and troubleshoot Apache installation on a group of containers.
## Part 1: Launch the Web server VM
Starting from the hypervisor connection, we need to apply a tap interface configuration and then start a new virtual machine that will host Incus containers.
### Step 1: Apply hypervisor switch port configuration
Here is an example `lab16-switch.yaml` file:
```yaml=
ovs:
  switches:
    - name: dsw-host
      ports:
        - name: tapXXX # <-- YOUR TAP NUMBER
          type: OVSPort
          vlan_mode: access
          tag: OOB_VLAN_ID # <-- YOUR OUT-OF-BAND VLAN ID
```
Apply hypervisor switch port configuration.
```bash
switch-conf.py lab16-switch.yaml
```
### Step 2: Start the Incus container server
Next, we need to declare the properties of the new virtual server, especially its network configuration, which is specific to container hosting.
The design point here is that we want both the virtual server and its containers to be connected to the same VLAN. Therefore we set the network configuration with an internal switch named `c-3po`.
To create your virtual machine's YAML declaration file, such as `lab16-server.yaml`, you need to copy and edit the example below and use your own tap interface number.
We start by calculating the MAC address. Here is a sample code for tap interface number **300**:
```bash
python3
```
```python=
Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> tap_number = 300
>>> mac_address = f"b8:ad:ca:fe:{(tap_number // 256):02x}:{(tap_number % 256):02x}"
>>> print(mac_address)
b8:ad:ca:fe:01:2c
>>>
```
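If you prefer to stay in the shell, the same calculation can be done with `printf`. Here is a minimal sketch, again assuming tap interface number **300**:
```bash
# Compute the two low-order bytes of the MAC address from the tap number
tap_number=300
printf "b8:ad:ca:fe:%02x:%02x\n" $((tap_number / 256)) $((tap_number % 256))
# -> b8:ad:ca:fe:01:2c
```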
Now that we know the MAC address for our C-3PO internal switch, we can add it to the YAML declaration file.
```yaml=
kvm:
  vms:
    - vm_name: lab16-vm
      os: linux
      master_image: ubuntu-24.04-server.qcow2
      force_copy: false
      memory: 2048
      tapnum: XXX # <-- YOUR TAP INTERFACE NUMBER
      cloud_init:
        force_seed: false
        hostname: lab16
        packages:
          - openvswitch-switch
        write_files:
          - path: /etc/netplan/01-netcfg.yaml
            content: |
              network:
                version: 2
                bridges:
                  c-3po:
                    openvswitch: {}
                    interfaces: [enp0s1]
                    dhcp4: true
                    dhcp6: false
                    accept-ra: true
                    link-local: [ipv6]
                    macaddress: b8:ad:ca:fe:QQ:RR # <-- YOUR MAC ADDRESS
                ethernets:
                  enp0s1:
                    dhcp4: false
                    dhcp6: false
                    accept-ra: false
        runcmd:
          - [sh, -c, "rm -f /etc/netplan/enp0s1.yaml"]
          - [sh, -c, "rm -f /etc/netplan/50-cloud-init.yaml"]
          - [sh, -c, "netplan apply"]
```
Start the Incus container server virtual machine with:
```bash
lab-startup.py lab16-server.yaml
```
```bash=
Copying /home/etudianttest/masters/ubuntu-24.04-server.qcow2 to lab16-vm.qcow2...
done.
Creating OVMF_CODE.fd symlink...
Creating lab16-vm_OVMF_VARS.fd file...
Creating lab16-vm-seed.img...
done.
Starting lab16-vm...
Waiting a second for TPM socket to be ready.
~> Virtual machine filename : lab16-vm.qcow2
~> RAM size : 2048MB
~> SPICE VDI port number : 5902
~> telnet console port number : 2302
~> MAC address : b8:ad:ca:fe:00:XXX
~> Switch port interface : tap2, access mode
~> IPv6 LL address : fe80::baad:caff:fefe:XXX%vlanOOB_VLAN_ID
lab16-vm started!
```
### Step 3: Open a first SSH connection to the Incus container server
As usual, do not forget to change the tap interface number at the right end of the link-local IPv6 address.
```bash
ssh etu@fe80::baad:caff:fefe:XXX%vlanOOB_VLAN_ID
```
- XXX is the hexadecimal conversion of the tap interface number (see the example below).
- OOB_VLAN_ID is the out-of-band VLAN identifier.
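As a hedged example, assuming tap interface number **300**, the hexadecimal suffix can be computed with `printf`:
```bash
# Convert the decimal tap number to the hexadecimal address suffix
printf "fe80::baad:caff:fefe:%x\n" 300
# -> fe80::baad:caff:fefe:12c
```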
## Part 2: Configure Ansible on the Devnet VM
The Incus container server hosting VM is now ready for Ansible automation. First, we need to configure Ansible and check that we have access to the server from the Devnet virtual machine via SSH.
### Step 1: Create the Ansible directory and configuration file
1. Ensure the `~/labs/lab16` directory exists and navigate to it
```bash
mkdir -p ~/labs/lab16 && cd ~/labs/lab16
```
2. Install the **ansible** Python virtual environment
There are two main ways to set up a new Ansible workspace. Packages and Python virtual environments are both viable options. Here we choose to install Ansible in a Python virtual environment to take advantage of the latest release.
We start by creating a `requirements.txt` file.
```bash
cat << EOF > requirements.txt
ansible
ansible-lint
netaddr
EOF
```
Then we install the tools in a virtual environment called `ansible`.
```bash
python3 -m venv ansible
source ./ansible/bin/activate
pip3 install -r requirements.txt
```
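To confirm that the virtual environment is active and the tools are available, you can run:
```bash
# Both commands should resolve to binaries inside ./ansible/bin
ansible --version
ansible-lint --version
```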
3. Create a new `ansible.cfg` file in the `lab16` directory from the shell prompt
```bash=
cat << 'EOF' > ansible.cfg
# config file for Lab 16 Web Servers management
[defaults]
# Use inventory/ folder files as source
inventory=inventory/
host_key_checking = False # Don't worry about RSA Fingerprints
retry_files_enabled = False # Do not create them
deprecation_warnings = False # Do not show warnings
interpreter_python = /usr/bin/python3
[inventory]
enable_plugins = auto, host_list, yaml, ini, toml, script
[persistent_connection]
command_timeout=100
connect_timeout=100
connect_retry_timeout=100
EOF
```
### Step 2: Create a new Ansible vault file and set its password
Create a new vault file called `$HOME/.lab_passwd.yml` and enter the unique vault password that will protect all the user passwords to be stored.
```bash
ansible-vault create $HOME/.lab_passwd.yml
```
```bash=
New Vault password:
Confirm New Vault password:
```
This opens the default editor, which is defined by the `$EDITOR` environment variable.
There, we enter a variable that designates the Incus container server user account.
```bash
incus_admin_user: c-admin
```
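If you prefer another editor, you can point the `$EDITOR` variable at it before invoking `ansible-vault`; `vim` below is just an example choice:
```bash
# Use vim for this shell session when ansible-vault opens the file
export EDITOR=vim
```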
We now need to store the vault secret in a separate file at the user's home directory level.
```bash
echo "ThisVaultSecret" >$HOME/.vault.passwd
chmod 600 $HOME/.vault.passwd
```
:::warning
Don't forget to replace "ThisVaultSecret" with your own vault secret password.
:::
To avoid re-entering the vault password, we make sure that the `ANSIBLE_VAULT_PASSWORD_FILE` environment variable is set each time a new shell is opened.
```bash
touch $HOME/.profile
echo "export ANSIBLE_VAULT_PASSWORD_FILE=\$HOME/.vault.passwd" |\
tee -a $HOME/.profile
source $HOME/.profile
```
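As a quick sanity check, viewing the vault should no longer prompt for a password once the variable is set:
```bash
# No password prompt should appear now that the variable points at the secret file
ansible-vault view $HOME/.lab_passwd.yml
```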
### Step 3: Generate the Ansible user secret
The Ansible user is responsible for running tasks on the target container server. We need to generate and use the user secret in two different ways.
- This secret needs to be stored in the Ansible vault on the Devnet system.
- The same secret is used to create the Ansible user account on the target container server.
Here is a Bash shell script that performs the following key operations:
Secret Management
: Generates a random secret and stores it in an Ansible vault file, associating it with the `incus_admin_pass` variable.
Vault Handling
: Decrypts an existing Ansible vault file, adds the new secret entry, and re-encrypts the file using appropriate encryption techniques.
User Verification
: Checks for existing entries in the vault file and verifies that the target user already exists on the remote system before proceeding.
Remote User Creation
: Creates a new user on the remote host using the username extracted from the vault and assigns it the newly generated password.
Privilege Management
: Adds the created user to the `sudo` group on the remote system, enabling administrative privileges.
```bash=
#!/usr/bin/env bash
# Ansible vault constants
ANSIBLE_VAULT_PASSWD_FILE=${HOME}/.vault.passwd
ANSIBLE_VAULT_FILE=${HOME}/.lab_passwd.yml
ANSIBLE_VAULT_USER="incus_admin_user"
ANSIBLE_VAULT_NEW_PASS="incus_admin_pass"
# Target container server constants
TARGET_HOST="fe80::baad:caff:fefe:XXXX%enp0s1" # <-- YOUR OWN IPv6 LL ADDRESS
TARGET_USER="etu"
# Add error handling
set -e
# Function to display error messages
error_exit() {
    echo "ERROR: $1" >&2
    exit 1
}
# Function to decrypt the Ansible vault file with 2 parameters: password file and vault file
decrypt_vault() {
    ansible-vault decrypt --vault-password-file="${1}" "${2}" || error_exit "Failed to decrypt the vault file"
}
# Function to encrypt the Ansible vault file with 2 parameters: password file and vault file
encrypt_vault() {
    ansible-vault encrypt --encrypt-vault-id default --vault-password-file="${1}" "${2}" || error_exit "Failed to encrypt the vault file"
}
# Check if the Ansible password file exists
if [[ ! -f ${ANSIBLE_VAULT_PASSWD_FILE} ]]; then
    error_exit "Ansible password file does not exist: ${ANSIBLE_VAULT_PASSWD_FILE}"
fi
# Check if the Ansible vault file exists
if [[ ! -f ${ANSIBLE_VAULT_FILE} ]]; then
    error_exit "Ansible vault file does not exist: ${ANSIBLE_VAULT_FILE}"
fi
# New secret
echo "Generating a new secret..."
SECRET=$(openssl rand -base64 16) || error_exit "Failed to generate a new secret"
SECRET=$(echo "${SECRET}" | tr -d '=')
# Add the secret to the Ansible vault
echo "Decrypting the vault file..."
decrypt_vault "${ANSIBLE_VAULT_PASSWD_FILE}" "${ANSIBLE_VAULT_FILE}"
# Check if the new entry already exists
if grep -q "${ANSIBLE_VAULT_NEW_PASS}" "${ANSIBLE_VAULT_FILE}"; then
    echo "Encrypting the vault file with existing entries..."
    encrypt_vault "${ANSIBLE_VAULT_PASSWD_FILE}" "${ANSIBLE_VAULT_FILE}"
    error_exit "Entry already exists in the vault file"
else
    echo "Adding new entry to vault file..."
    echo "${ANSIBLE_VAULT_NEW_PASS}: ${SECRET}" >>"${ANSIBLE_VAULT_FILE}"
fi
# Check if the ${ANSIBLE_VAULT_USER} entry exists in the vault file
if ! grep -q "${ANSIBLE_VAULT_USER}" "${ANSIBLE_VAULT_FILE}"; then
    encrypt_vault "${ANSIBLE_VAULT_PASSWD_FILE}" "${ANSIBLE_VAULT_FILE}"
    error_exit "User ${ANSIBLE_VAULT_USER} not found in vault file"
fi
# Get the ${ANSIBLE_VAULT_USER} value from the vault file
VAULT_FILE_CONTENT=$(grep "${ANSIBLE_VAULT_USER}" "${ANSIBLE_VAULT_FILE}")
VAULT_FILE_CONTENT=$(echo "${VAULT_FILE_CONTENT}" | cut -d ' ' -f 2)
ANSIBLE_USER=$(echo "${VAULT_FILE_CONTENT}" | tr -d ' ')
echo "Encrypting the vault file..."
encrypt_vault "${ANSIBLE_VAULT_PASSWD_FILE}" "${ANSIBLE_VAULT_FILE}"
# Use the new secret to create a new user on the target host
echo "Creating user ${ANSIBLE_USER} on remote host..."
ADD_USER_CMD="sudo adduser --gecos '' --disabled-password ${ANSIBLE_USER} && echo '${ANSIBLE_USER}:${SECRET}' | sudo chpasswd"
if ! ssh -6 -tt "${TARGET_USER}@${TARGET_HOST}" "${ADD_USER_CMD}"; then
    error_exit "Failed to create user on remote host"
fi
# Check if ${ANSIBLE_USER} is already in the sudo group
CHECK_SUDO_CMD="groups ${ANSIBLE_USER} | grep -q sudo"
if ssh -6 -tt "${TARGET_USER}@${TARGET_HOST}" "${CHECK_SUDO_CMD}"; then
    error_exit "User ${ANSIBLE_USER} is already in the sudo group"
fi
# Add the new user to the sudo group
echo "Adding user ${ANSIBLE_USER} to the sudo group..."
ADD_USER_SUDO_CMD="sudo adduser ${ANSIBLE_USER} sudo"
if ! ssh -6 -tt "${TARGET_USER}@${TARGET_HOST}" "${ADD_USER_SUDO_CMD}"; then
    error_exit "Failed to add user to the sudo group"
fi
echo "Operation completed successfully."
exit 0
```
Save this script as `add-new-secret.sh`, then run it and review the results:
```bash
bash add-new-secret.sh
```
```bash=
Generating a new secret...
Decrypting the vault file...
Decryption successful
Adding new entry to vault file...
Encrypting the vault file...
Encryption successful
Creating user c-admin on remote host...
Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
[sudo] password for etu:
info: Adding user `c-admin' ...
info: Selecting UID/GID from range 1000 to 59999 ...
info: Adding new group `c-admin' (1002) ...
info: Adding new user `c-admin' (1002) with group `c-admin (1002)' ...
info: Creating home directory `/home/c-admin' ...
info: Copying files from `/etc/skel' ...
info: Adding new user `c-admin' to supplemental / extra groups `users' ...
info: Adding user `c-admin' to group `users' ...
Connection to fe80::baad:caff:fefe:2%enp0s1 closed.
Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
Connection to fe80::baad:caff:fefe:2%enp0s1 closed.
Adding user c-admin to the sudo group...
Warning: Permanently added 'fe80::baad:caff:fefe:XXX%enp0s1' (ED25519) to the list of known hosts.
[sudo] password for etu:
info: Adding user `c-admin' to group `sudo' ...
Connection to fe80::baad:caff:fefe:2%enp0s1 closed.
Operation completed successfully.
```
:::warning
Note that if you haven't configured passwordless SSH authentication between the Devnet VM and the target container server, you will be prompted many times for the default user password when you run the script.
:::
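If your lab policy allows it, one way to avoid those prompts is key-based authentication. A minimal sketch, assuming you reuse or create an ed25519 key pair:
```bash
# Generate a key pair (skip if one already exists), then install it on the target
ssh-keygen -t ed25519
ssh-copy-id etu@fe80::baad:caff:fefe:XXXX%enp0s1
```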
Use the following command to verify that your Ansible vault contains both the username and secret key-value pairs.
```bash
ansible-vault edit $HOME/.lab_passwd.yml
```
Here is a sample vault content:
```yaml=
incus_admin_user: c-admin
incus_admin_pass: ABCDEFGHIJKLMNOPQRSTUV
```
To complete this step, we can open an SSH connection to the target container server using the Ansible user account:
```bash
ssh -o StrictHostKeyChecking=accept-new c-admin@fe80::baad:caff:fefe:XXXX%enp0s1
```
```bash=
Warning: Permanently added 'fe80::baad:caff:fefe:XXXX%enp0s1' (ED25519) to the list of known hosts.
c-admin@fe80::baad:caff:fefe:XXXX%enp0s1's password:
Linux lab16 6.12.12-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.12-1 (2025-02-02) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
c-admin@lab16:~$
```
### Step 4: Create a new inventory file
Ensure the `inventory` directory is created:
```bash
mkdir -p $HOME/labs/lab16/inventory
```
Create the `inventory/hosts.yml` inventory file with the local link IPv6 address of your Container Server VM.
```bash=
cat << 'EOF' > inventory/hosts.yml
---
vms:
  hosts:
    c-server:
      ansible_host: fe80::baad:caff:fefe:XXXX%enp0s1
  vars:
    ansible_ssh_user: "{{ incus_admin_user }}"
    ansible_ssh_pass: "{{ incus_admin_pass }}"
    ansible_become_pass: "{{ incus_admin_pass }}"
    ansible_ssh_common_args: -o StrictHostKeyChecking=accept-new
all:
  children:
    vms:
    containers:
EOF
```
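Before going further, you can ask Ansible to display the inventory tree. At this stage, it should show the `c-server` host under the `vms` group:
```bash
ansible-inventory --graph
```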
### Step 5: Verify Ansible communication with the container server VM
Now we can use the `ping` Ansible module to connect to the `c-server` entry defined in the inventory file.
```bash
ansible c-server -m ping --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
c-server | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
Since the Ansible ping is successful, we can proceed with setting up container management inside the target VM.
## Part 3: Initialize Incus container management with Ansible
To be able to launch system containers and configure web services inside them, we must first initialize the Incus manager with an Ansible playbook.
### Step 1: Create the `01_incus_init.yml` playbook
Create the `01_incus_init.yml` file and add the following tasks to it.
You can use a bash heredoc to make sure that YAML indentation is correct.
```bash
cat << 'EOF' > 01_incus_init.yml
# Playbook file content
EOF
```
Here is a copy of the YAML content of the file `01_incus_init.yml`.
```yaml=
---
- name: INCUS INSTALLATION AND INITIALIZATION
  hosts: c-server
  pre_tasks:
    - name: INSTALL INCUS PACKAGE
      ansible.builtin.apt:
        name: incus
        state: present
        update_cache: true
      become: true
    - name: ADD USER TO INCUS SYSTEM GROUPS
      ansible.builtin.user:
        name: "{{ ansible_ssh_user }}"
        groups:
          - incus
          - incus-admin
        append: true
      become: true
      register: group_add
      notify:
        - RESET SSH CONNECTION
    # Force handlers to run now, before continuing with the rest of tasks
    - name: FLUSH HANDLERS
      ansible.builtin.meta: flush_handlers
  tasks:
    - name: CHECK IF INCUS IS INITIALIZED
      # If no storage pools are present, Incus is not initialized
      ansible.builtin.command: incus storage ls -cn -f yaml
      register: incus_status
      changed_when: false
      failed_when: incus_status.rc != 0
    - name: INITIALIZE INCUS
      ansible.builtin.shell: |
        set -o pipefail
        cat << EOT | incus admin init --preseed
        config: {}
        networks: []
        storage_pools:
        - config: {}
          description: ""
          name: default
          driver: dir
        profiles:
        - config: {}
          description: ""
          devices:
            eth0:
              name: eth0
              nictype: bridged
              parent: c-3po
              type: nic
            root:
              path: /
              pool: default
              type: disk
          name: default
        projects: []
        cluster: null
        EOT
      register: incus_init
      changed_when: incus_init.rc == 0
      failed_when: incus_init.rc != 0
      when: incus_status.stdout == '[]'
    - name: SHOW INCUS DEFAULT PROFILE
      ansible.builtin.command: incus profile show default
      register: incus_profile
      changed_when: false
      failed_when: incus_profile.rc != 0
    - name: COMPARE CURRENT PROFILE TO EXPECTED STATE
      ansible.builtin.assert:
        that:
          - incus_profile.stdout == 'config: {}
            description: Default Incus profile
            devices:
              eth0:
                name: eth0
                nictype: bridged
                parent: c-3po
                type: nic
              root:
                path: /
                pool: default
                type: disk
            name: default
            used_by: []
            project: default'
        msg: Incus profile does not match expected state
      changed_when: false
  handlers:
    - name: RESET SSH CONNECTION
      ansible.builtin.meta: reset_connection
      when: group_add.changed
```
This playbook is designed with idempotence as a core principle, ensuring consistent and predictable results across multiple executions. It implements proper state checking before performing actions, conditionally executes tasks only when necessary, and uses handlers for dependent operations.
The workflow includes a strategic `pre_tasks:` section that handles system prerequisites and group permissions, with appropriate connection handling via `flush_handlers` to ensure that permission changes take effect immediately.
Each subsequent task verifies the current state before making changes, with clear success/failure criteria and proper `changed_when` conditions to accurately report changes.
Install Incus Package (pre-task)
: Ensures the Incus platform is installed on the target server, updating package cache as needed.
Add User to Incus System Groups (pre-task)
: Adds the Ansible SSH user to the required Incus permission groups, notifying the SSH connection reset handler when group membership changes.
Flush Handlers (pre-task)
: Forces the execution of the SSH connection reset handler immediately after group changes, ensuring proper permissions for subsequent tasks.
Check Incus Initialization Status
: Verifies whether Incus is already initialized to avoid duplicate initialization attempts. We use the presence of the default storage pool as the initialization criterion.
Initialize Incus
: Configures Incus with predefined storage pools, network profiles, and default settings, executing only when Incus is not already initialized.
Verify Default Profile Configuration
: Inspects the created profile and compares it against expected values to confirm successful initialization and proper configuration.
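Since `ansible-lint` was installed in the virtual environment, it is worth running it against the playbook before the first execution:
```bash
# Static checks against ansible-lint rules (no changes are made to any host)
ansible-lint 01_incus_init.yml
```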
### Step 2: Run the `01_incus_init.yml` playbook twice
In this step, we run the Ansible playbook twice, looking specifically at the changed counter to assess task idempotence.
```bash
ansible-playbook 01_incus_init.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [INCUS INSTALLATION AND INITIALIZATION] ************************************************
TASK [Gathering Facts] **********************************************************************
ok: [c-server]
TASK [INSTALL INCUS PACKAGE] ****************************************************************
changed: [c-server]
TASK [ADD USER TO INCUS SYSTEM GROUPS] ******************************************************
changed: [c-server]
TASK [FLUSH HANDLERS] ***********************************************************************
RUNNING HANDLER [RESET SSH CONNECTION] ******************************************************
TASK [CHECK IF INCUS IS INITIALIZED] ********************************************************
ok: [c-server]
TASK [INITIALIZE INCUS] *********************************************************************
changed: [c-server]
TASK [SHOW INCUS DEFAULT PROFILE] ***********************************************************
ok: [c-server]
TASK [COMPARE CURRENT PROFILE TO EXPECTED STATE] ********************************************
ok: [c-server] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP **********************************************************************************
c-server : ok=7 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
After this first playbook run, the `changed` counter value is 3, which means that the Incus packages have been installed, the c-admin user has been added to the Incus-specific system groups, and Incus has been fully initialized.
Let's do a second run.
```bash
ansible-playbook 01_incus_init.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [INCUS INSTALLATION AND INITIALIZATION] *************************************************
TASK [Gathering Facts] ***********************************************************************
ok: [c-server]
TASK [INSTALL INCUS PACKAGE] *****************************************************************
ok: [c-server]
TASK [ADD USER TO INCUS SYSTEM GROUPS] *******************************************************
ok: [c-server]
TASK [FLUSH HANDLERS] ************************************************************************
skipping: [c-server]
TASK [CHECK IF INCUS IS INITIALIZED] *********************************************************
ok: [c-server]
TASK [INITIALIZE INCUS] **********************************************************************
skipping: [c-server]
TASK [SHOW INCUS DEFAULT PROFILE] ************************************************************
ok: [c-server]
TASK [COMPARE CURRENT PROFILE TO EXPECTED STATE] *********************************************
ok: [c-server] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP ***********************************************************************************
c-server : ok=6 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
We now have proof of idempotency: the `changed` counter value is zero, and the installation and configuration tasks have either reported no change or been skipped.
## Part 4: Instantiate containers with Ansible
In this part, we begin to manage web services on-demand with Incus container instantiation based on an Ansible playbook.
### Step 1: Create a lab inventory template
In Part 2, Step 4, we created the `inventory/hosts.yml` file, which defines all the parameters necessary to run Ansible playbooks on the container server VM.
Now we need to create a new inventory file called `inventory/lab.yml` that defines all the system container parameters. The purpose here is to be able to run Ansible playbooks inside these containers.
```bash=
cat << 'EOF' > inventory/lab.yml
---
containers:
  hosts:
    web[01:04]:
  vars:
    ansible_ssh_user: "{{ webuser_name }}"
    ansible_ssh_pass: "{{ webuser_pass }}"
    ansible_become_pass: "{{ webuser_pass }}"
EOF
```
> Note: This inventory file is incomplete because it does not define the `ansible_host` variable for each container. Once the containers are started, we will get their addresses and complete the inventory.
### Step 2: Add a new entry in Ansible vault for container access
In the previous step, we referenced a container user account whose name is stored in the `webuser_name` variable and whose password is stored in the `webuser_pass` variable.
We need to add the corresponding entries to the Ansible vault file named `$HOME/.lab_passwd.yml`. To do this, we open the vault file for editing.
```bash
ansible-vault edit $HOME/.lab_passwd.yml
```
There we will enter two variable names that will specify the name and password for each container user account.
```bash
webuser_name: web
webuser_pass: XXXXXXXXXX
```
As a reminder, you can use the `openssl` command to generate a somewhat strong secret for your container user account.
```bash
openssl rand -base64 16 | tr -d '='
```
### Step 3: Create the `02_incus_launch.yml` Ansible playbook to launch and configure access to containers
The purpose of this step is to automate the deployment and configuration of Incus containers with a focus on idempotency. It will provision multiple Debian Trixie containers, create user accounts, and install necessary packages to prepare the containers for SSH access.
Create the `02_incus_launch.yml` file and add the following tasks to it.
You can use a bash heredoc to make sure that YAML indentation is correct.
```bash=
cat << 'EOF' > 02_incus_launch.yml
# Playbook file content
EOF
```
Here is a copy of the YAML content of the file `02_incus_launch.yml`.
```yaml=
---
- name: LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE
  hosts: c-server
  vars:
    packages_to_install:
      - openssh-server
      - python3
      - python3-apt
      - apt-utils
    container_image: images:debian/trixie
  tasks:
    - name: CHECK CONTAINERS STATE
      ansible.builtin.shell:
        cmd: set -o pipefail && incus ls --format csv -c n,s | grep {{ item }}
        executable: /bin/bash
      register: container_states
      with_inventory_hostnames:
        - containers
      changed_when: false
      failed_when: false
      check_mode: false
    - name: LAUNCH INCUS CONTAINERS
      ansible.builtin.command:
        cmd: incus launch {{ container_image }} "{{ item.item }}"
      loop: "{{ container_states.results }}"
      when: item.rc != 0 # Container not found
      register: containers_created
      changed_when: containers_created is changed
    - name: START STOPPED CONTAINERS
      ansible.builtin.command:
        cmd: incus start "{{ item.item }}"
      loop: "{{ container_states.results }}"
      when: item.rc == 0 and 'STOPPED' in item.stdout
      register: containers_started
      changed_when: containers_started is changed
    - name: CREATE WEBUSER ACCOUNT IF NOT EXISTS
      ansible.builtin.command:
        cmd: >-
          incus exec "{{ item }}" -- bash -c
          "if ! grep -q {{ webuser_name }} /etc/passwd; then
          adduser --quiet --gecos \"\" --disabled-password {{ webuser_name }};
          fi"
      with_inventory_hostnames:
        - containers
      register: user_created
      changed_when: "'adduser' in user_created.stdout | default('')"
    - name: SET WEBUSER PASSWORD
      ansible.builtin.command:
        cmd: incus exec "{{ item }}" -- bash -c "chpasswd <<<\"{{ webuser_name }}:{{ webuser_pass }}\""
      with_inventory_hostnames:
        - containers
      no_log: true
      changed_when: false
    - name: ADD WEBUSER TO SUDO GROUP
      ansible.builtin.command:
        cmd: incus exec "{{ item }}" -- bash -c "if ! id {{ webuser_name }} | grep -qo sudo; then adduser --quiet {{ webuser_name }} sudo; fi"
      with_inventory_hostnames:
        - containers
      register: sudo_added
      changed_when: "'adduser' in sudo_added.stdout | default('')"
    - name: UPDATE APT CACHE
      ansible.builtin.command:
        cmd: incus exec "{{ item }}" -- apt update
      with_inventory_hostnames:
        - containers
      register: apt_update
      changed_when: false
      failed_when: apt_update.rc != 0
    - name: INSTALL REQUIRED PACKAGES
      ansible.builtin.command:
        cmd: incus exec "{{ item }}" -- apt install -y {{ packages_to_install | join(' ') }}
      with_inventory_hostnames:
        - containers
      register: packages_installed
      failed_when: packages_installed.rc != 0
      changed_when: >
        packages_installed.rc == 0 and 'Upgrading: 0, Installing: 0, Removing: 0' not in packages_installed.stdout
    - name: CLEAN APT CACHE
      ansible.builtin.command:
        cmd: incus exec "{{ item }}" -- apt clean
      with_inventory_hostnames:
        - containers
      changed_when: false
```
The playbook implements several idempotent checks to ensure that it can be run multiple times without causing unwanted changes. It verifies the existence of containers before creating new ones, checks if users already exist before adding them, and checks package installation status before installing packages. Each task includes appropriate `changed_when` conditions to accurately report when actual changes occur, allowing for reliable execution in CI/CD pipelines or repeated manual runs.
The key tasks are:
Container Management
: Checks existing container states and either launches new containers or starts stopped ones based on their current state.
User Account Management
: Creates a webuser account if it doesn't exist, sets the password, and adds the user to the sudo group for administrative privileges.
Package Management
: Updates APT cache and installs required packages including SSH server and Python, enabling remote connection and Ansible management of the containers.
Shell Command Safety
: Uses proper shell options like `set -o pipefail` to ensure reliable command execution and accurate error reporting as required by `ansible-lint` recommendations.
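As with the previous playbook, a quick pre-flight check costs nothing:
```bash
# Parse-only run: validates YAML and task structure without touching the server
ansible-playbook 02_incus_launch.yml --syntax-check
```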
### Step 4: Run the `02_incus_launch.yml` playbook twice
```bash
ansible-playbook 02_incus_launch.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE] ***************************
TASK [Gathering Facts] ***********************************************************************
ok: [c-server]
TASK [CHECK CONTAINERS STATE] ****************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [LAUNCH INCUS CONTAINERS] ***************************************************************
changed: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', 'start': '2025-03-10 09:35:05.401204', 'end': '2025-03-10 09:35:05.431715', 'delta': '0:00:00.030511', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web01', 'ansible_loop_var': 'item'})
changed: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', 'start': '2025-03-10 09:35:05.801397', 'end': '2025-03-10 09:35:05.831049', 'delta': '0:00:00.029652', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web02', 'ansible_loop_var': 'item'})
changed: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', 'start': '2025-03-10 09:35:06.209975', 'end': '2025-03-10 09:35:06.240345', 'delta': '0:00:00.030370', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web03', 'ansible_loop_var': 'item'})
changed: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', 'start': '2025-03-10 09:35:06.618957', 'end': '2025-03-10 09:35:06.656282', 'delta': '0:00:00.037325', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web04', 'ansible_loop_var': 'item'})
TASK [START STOPPED CONTAINERS] **************************************************************
skipping: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', 'start': '2025-03-10 09:35:05.401204', 'end': '2025-03-10 09:35:05.431715', 'delta': '0:00:00.030511', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web01', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', 'start': '2025-03-10 09:35:05.801397', 'end': '2025-03-10 09:35:05.831049', 'delta': '0:00:00.029652', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web02', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', 'start': '2025-03-10 09:35:06.209975', 'end': '2025-03-10 09:35:06.240345', 'delta': '0:00:00.030370', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web03', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 1, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', 'start': '2025-03-10 09:35:06.618957', 'end': '2025-03-10 09:35:06.656282', 'delta': '0:00:00.037325', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed_when_result': False, 'item': 'web04', 'ansible_loop_var': 'item'})
skipping: [c-server]
TASK [CREATE WEBUSER ACCOUNT IF NOT EXISTS] **************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [SET WEBUSER PASSWORD] ******************************************************************
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server]
TASK [ADD WEBUSER TO SUDO GROUP] *************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [UPDATE APT CACHE] **********************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [INSTALL REQUIRED PACKAGES] *************************************************************
changed: [c-server] => (item=web01)
changed: [c-server] => (item=web02)
changed: [c-server] => (item=web03)
changed: [c-server] => (item=web04)
TASK [CLEAN APT CACHE] ***********************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
PLAY RECAP ***********************************************************************************
c-server : ok=9 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```
For this first playbook run, the `changed` counter value is only 2 (container launch and package installation), because the remaining tasks are either idempotent checks or unconditional housekeeping reported with `changed_when: false`.
Let's do a second run to verify that the changed counter is zero.
```bash
ansible-playbook 02_incus_launch.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [LAUNCH INCUS CONTAINERS, SETUP USER ACCOUNT AND SSH SERVICE] ***************************
TASK [Gathering Facts] ***********************************************************************
ok: [c-server]
TASK [CHECK CONTAINERS STATE] ****************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [LAUNCH INCUS CONTAINERS] ***************************************************************
skipping: [c-server] => (item={'changed': False, 'stdout': 'web01,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', 'start': '2025-03-10 10:06:04.054026', 'end': '2025-03-10 10:06:04.094846', 'delta': '0:00:00.040820', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web01,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web01', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web02,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', 'start': '2025-03-10 10:06:04.461705', 'end': '2025-03-10 10:06:04.493866', 'delta': '0:00:00.032161', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web02,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web02', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web03,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', 'start': '2025-03-10 10:06:04.848409', 'end': '2025-03-10 10:06:04.884405', 'delta': '0:00:00.035996', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web03,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web03', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web04,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', 'start': '2025-03-10 10:06:05.240980', 'end': '2025-03-10 10:06:05.274389', 'delta': '0:00:00.033409', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web04,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web04', 'ansible_loop_var': 'item'})
skipping: [c-server]
TASK [START STOPPED CONTAINERS] **************************************************************
skipping: [c-server] => (item={'changed': False, 'stdout': 'web01,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', 'start': '2025-03-10 10:06:04.054026', 'end': '2025-03-10 10:06:04.094846', 'delta': '0:00:00.040820', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web01', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web01,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web01', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web02,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', 'start': '2025-03-10 10:06:04.461705', 'end': '2025-03-10 10:06:04.493866', 'delta': '0:00:00.032161', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web02', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web02,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web02', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web03,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', 'start': '2025-03-10 10:06:04.848409', 'end': '2025-03-10 10:06:04.884405', 'delta': '0:00:00.035996', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web03', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web03,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web03', 'ansible_loop_var': 'item'})
skipping: [c-server] => (item={'changed': False, 'stdout': 'web04,RUNNING', 'stderr': '', 'rc': 0, 'cmd': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', 'start': '2025-03-10 10:06:05.240980', 'end': '2025-03-10 10:06:05.274389', 'delta': '0:00:00.033409', 'msg': '', 'invocation': {'module_args': {'executable': '/bin/bash', '_raw_params': 'set -o pipefail && incus ls --format csv -c n,s | grep web04', '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['web04,RUNNING'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'web04', 'ansible_loop_var': 'item'})
skipping: [c-server]
TASK [CREATE WEBUSER ACCOUNT IF NOT EXISTS] **************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [SET WEBUSER PASSWORD] ******************************************************************
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server] => (item=None)
ok: [c-server]
TASK [ADD WEBUSER TO SUDO GROUP] *************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [UPDATE APT CACHE] **********************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [INSTALL REQUIRED PACKAGES] *************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
TASK [CLEAN APT CACHE] ***********************************************************************
ok: [c-server] => (item=web01)
ok: [c-server] => (item=web02)
ok: [c-server] => (item=web03)
ok: [c-server] => (item=web04)
PLAY RECAP ***********************************************************************************
c-server : ok=8 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
The idempotency check is on the last line of the output above: the `changed` counter value is indeed zero.
## Part 5: Complete a dynamic inventory
Now that the containers are started, it is time to get their network addresses to build a new inventory file that will allow Ansible to run playbooks in each of these containers.
We switch here to Python development to build the new YAML inventory file based on information provided by **Incus** on the container server VM.
### Step 1: Fetch container list and addresses
Here is a short new playbook named `03_incus_inventory.yml`, which retrieves the container configuration from the container server VM to the Devnet VM.
You can use a bash heredoc to make sure that YAML indentation is correct.
```bash=
cat << 'EOF' > 03_incus_inventory.yml
# Playbook file content
EOF
```
```yaml=
---
- name: BUILD CONTAINERS DYNAMIC INVENTORY
  hosts: c-server
  tasks:
    - name: ENSURE TRACE DIRECTORY EXISTS
      delegate_to: localhost
      ansible.builtin.file:
        path: trace
        state: directory
        mode: "0755"
    - name: GET INCUS CONTAINERS CONFIGURATION
      ansible.builtin.command:
        cmd: incus --format yaml ls
      register: container_config
      changed_when: false
    - name: SAVE CONTAINER CONFIG FOR REFERENCE
      ansible.builtin.copy:
        content: "{{ container_config.stdout }}"
        dest: trace/container_config.yml
        mode: "0644"
      delegate_to: localhost
      changed_when: false
    - name: ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
      delegate_to: localhost
      ansible.builtin.shell:
        cmd: set -o pipefail && cat trace/container_config.yml | python3 build_inventory.py
        executable: /bin/bash
      register: inventory_data
      changed_when: true
    - name: WRITE INVENTORY FILE
      delegate_to: localhost
      ansible.builtin.copy:
        content: "{{ inventory_data.stdout }}"
        dest: inventory/containers.yml
        mode: "0644"
```
This playbook runs a Python script called `build_inventory.py`, which will be developed in the next step.
### Step 2: Build a Python script for containers inventory
The main purpose here is to build a dynamic inventory with containers' actual network addresses. With our **Incus** network setup and random layer 2 MAC addresses, container IPv6 addresses are completely dynamic.
Therefore, we need to extract the network addresses from the YAML configuration file and create a new inventory file.
The order of the tasks in the `03_incus_inventory.yml` playbook is as follows:
GET INCUS CONTAINERS CONFIGURATION
: Retrieves detailed YAML configuration of all Incus containers from the host server
SAVE CONTAINER CONFIG FOR REFERENCE
: Stores a copy of container configuration locally for troubleshooting and documentation purposes
ADD INCUS CONTAINERS ADDRESSES TO INVENTORY
: Processes container configuration through a Python script to extract IPv6 link-local addresses for Ansible inventory
WRITE INVENTORY FILE
: Writes the generated inventory data to an Ansible-compatible YAML inventory file for container access
Here is the `build_inventory.py` Python script code:
```python=
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import sys
import yaml

# Read the YAML from stdin
try:
    containers = yaml.safe_load(sys.stdin)
except yaml.YAMLError as e:
    print(e, file=sys.stderr)
    sys.exit(1)

# Initialize the inventory dictionary
inventory = {"containers": {"hosts": {}}}

# Parse the container list and extract the IPv6 link-local address
for container in containers:
    container_name = container["name"]
    for addresses in container["state"]["network"]["eth0"]["addresses"]:
        if addresses["family"] == "inet6" and addresses["scope"] == "link":
            inventory["containers"]["hosts"][container_name] = {
                "ansible_host": f"{addresses['address']}%enp0s1"
            }

# Print the inventory as YAML
print(yaml.dump(inventory, default_flow_style=False, sort_keys=False))
sys.exit(0)
```
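Since the script reads from standard input, you can test it on its own once a previous playbook run has saved `trace/container_config.yml`:
```bash
# Standalone test: print the generated inventory without writing any file
python3 build_inventory.py < trace/container_config.yml
```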
Now that both the playbook and the script are in place, we can run `ansible-playbook`.
```bash
ansible-playbook 03_incus_inventory.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [BUILD CONTAINERS DYNAMIC INVENTORY] ***********************************************
TASK [Gathering Facts] ******************************************************************
ok: [c-server]
TASK [ENSURE TRACE DIRECTORY EXISTS] ****************************************************
ok: [c-server -> localhost]
TASK [GET INCUS CONTAINERS CONFIGURATION] ***********************************************
ok: [c-server]
TASK [SAVE CONTAINER CONFIG FOR REFERENCE] **********************************************
ok: [c-server -> localhost]
TASK [ADD INCUS CONTAINERS ADDRESSES TO INVENTORY] **************************************
changed: [c-server -> localhost]
TASK [WRITE INVENTORY FILE] *************************************************************
ok: [c-server -> localhost]
PLAY RECAP ******************************************************************************
c-server : ok=6 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
The `changed` counter value is 1 because the inventory-generation task is deliberately marked `changed_when: true`: the script rebuilds the `inventory/containers.yml` file every time the playbook runs.
### Step 3: Check Ansible inventory
We can now run the `ansible-inventory` command to verify that the container server VM and its containers are properly addressed.
```bash
ansible-inventory --yaml --list
```
```yaml=
all:
children:
containers:
hosts:
web01:
ansible_become_pass: '{{ webuser_pass }}'
ansible_host: fe80::216:3eff:fe00:404%enp0s1
ansible_ssh_pass: '{{ webuser_pass }}'
ansible_ssh_user: '{{ webuser_name }}'
web02:
ansible_become_pass: '{{ webuser_pass }}'
ansible_host: fe80::216:3eff:fecc:2fac%enp0s1
ansible_ssh_pass: '{{ webuser_pass }}'
ansible_ssh_user: '{{ webuser_name }}'
web03:
ansible_become_pass: '{{ webuser_pass }}'
ansible_host: fe80::216:3eff:fe75:84c5%enp0s1
ansible_ssh_pass: '{{ webuser_pass }}'
ansible_ssh_user: '{{ webuser_name }}'
web04:
ansible_become_pass: '{{ webuser_pass }}'
ansible_host: fe80::216:3eff:fedc:e80%enp0s1
ansible_ssh_pass: '{{ webuser_pass }}'
ansible_ssh_user: '{{ webuser_name }}'
vms:
hosts:
c-server:
ansible_become_pass: '{{ incus_admin_pass }}'
ansible_host: fe80::baad:caff:fefe:2%enp0s1
ansible_ssh_common_args: -o StrictHostKeyChecking=accept-new
ansible_ssh_pass: '{{ incus_admin_pass }}'
ansible_ssh_user: '{{ incus_admin_user }}'
```
### Step 4: Check Ansible SSH access to the containers
We are also now able to run the `ansible` command with its **ping** module to check SSH access to all containers.
```bash
ansible containers -m ping --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
web04 | SUCCESS => {
"changed": false,
"ping": "pong"
}
web03 | SUCCESS => {
"changed": false,
"ping": "pong"
}
web01 | SUCCESS => {
"changed": false,
"ping": "pong"
}
web02 | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
Another way to check SSH access to the containers is to use the **command** module instead of **ping**.
```bash
ansible containers -m command -a "/bin/echo Hello, World!" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
web04 | CHANGED | rc=0 >>
Hello, World!
web01 | CHANGED | rc=0 >>
Hello, World!
web03 | CHANGED | rc=0 >>
Hello, World!
web02 | CHANGED | rc=0 >>
Hello, World!
```
## Part 6: Create an Ansible playbook to automate Web service installation
In this part, you will automate the installation of the Apache web server software in each container instance. The idea here is to illustrate that a single process can be applied to as many instances as needed.
We will create a playbook named `04_install_apache.yml` and add tasks to it at each step.
1. Install `apache2`
2. Enable the `mod_rewrite` module
3. Change the listen port to 8081
4. Check the HTTP status code to validate our configuration
### Step 1: Create the `04_install_apache.yml` playbook
As before, use a bash heredoc to copy and paste the playbook content without breaking the indentation.
```bash=
cat << 'EOF' > 04_install_apache.yml
# Playbook file content
EOF
```
```yaml=
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE
  hosts: containers
  become: true
  tasks:
    - name: UPDATE APT CACHE
      # Updates package information to ensure latest versions are available
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600 # Consider cache valid for 1 hour
    - name: UPGRADE APT PACKAGES
      # Ensures all packages are up to date
      ansible.builtin.apt:
        upgrade: full
        autoclean: true
      register: apt_upgrade
      changed_when: apt_upgrade is changed
    - name: INSTALL APACHE2
      # Installs Apache2 HTTP server
      ansible.builtin.apt:
        name: apache2
        state: present
    - name: ENABLE MOD_REWRITE
      # Enables mod_rewrite module for Apache2
      community.general.apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2
  handlers:
    - name: RESTART APACHE2
      # Restarts Apache2 service
      ansible.builtin.service:
        name: apache2
        state: restarted
```
This playbook configures Apache web server on container hosts with the following tasks:
UPDATE APT CACHE
: Refreshes the package repository information to ensure the latest package versions are available, with a cache validity period of one hour.
UPGRADE APT PACKAGES
: Performs a full system upgrade to bring all installed packages up to date while cleaning up unnecessary files.
INSTALL APACHE2
: Ensures that the Apache2 web server package is installed on the target systems.
ENABLE MOD_REWRITE
: Activates the rewrite module for Apache2, which allows for URL manipulation and redirection.
The playbook includes a handler `RESTART APACHE2` that restarts the web server service whenever configuration changes are made, ensuring that modifications take effect immediately.
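To spot-check the result on one container, you can query the module state in ad hoc mode; this sketch relies on the Debian `a2query` helper shipped with the apache2 package:
```bash
# Should report the rewrite module as enabled on web01
ansible web01 -m command -a "a2query -m rewrite" --become \
  --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
```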
### Step 2: Run the `04_install_apache.yml` playbook
```bash
ansible-playbook 04_install_apache.yml --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
PLAY [INSTALL APACHE2, ENABLE MOD_REWRITE] **************************************************
TASK [Gathering Facts] **********************************************************************
ok: [web03]
ok: [web01]
ok: [web04]
ok: [web02]
TASK [UPDATE APT CACHE] *********************************************************************
changed: [web04]
changed: [web01]
changed: [web02]
changed: [web03]
TASK [UPGRADE APT PACKAGES] *****************************************************************
changed: [web01]
changed: [web03]
changed: [web02]
changed: [web04]
TASK [INSTALL APACHE2] **********************************************************************
changed: [web01]
changed: [web02]
changed: [web03]
changed: [web04]
TASK [ENABLE MOD_REWRITE] *******************************************************************
changed: [web02]
changed: [web03]
changed: [web01]
changed: [web04]
RUNNING HANDLER [RESTART APACHE2] ***********************************************************
changed: [web03]
changed: [web04]
changed: [web01]
changed: [web02]
PLAY RECAP **********************************************************************************
web01 : ok=6 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web02 : ok=6 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web03 : ok=6 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web04 : ok=6 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Compared to the playbooks in the previous parts, we can see that each task now runs on all four **Incus** system containers we added to the inventory.
If you run the same playbook again, you will see the `changed` counter drop from 5 to 0, since every task is idempotent.
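As an optional cross-check, an ad-hoc command can print the installed Apache version on every container; here is a sketch reusing the same vault-protected variables file as the playbook runs:
```bash
# Ad-hoc check: print the installed Apache version on each container
ansible containers -m command -a "apache2 -v" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
```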
### Step 3: Add tasks to verify Apache2 service status
We now want to verify that the `apache2` web server is up and running. To do this, we will add two tasks to the playbook. Let's make a copy of the playbook as `05_install_apache.yml`.
```bash=
cat << 'EOF' > 05_install_apache.yml
# Paste the playbook content shown below
EOF
```
```yaml=
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE
hosts: containers
become: true
tasks:
- name: UPDATE APT CACHE
# Updates package information to ensure latest versions are available
ansible.builtin.apt:
update_cache: true
cache_valid_time: 3600 # Consider cache valid for 1 hour
- name: UPGRADE APT PACKAGES
# Ensures all packages are up to date
ansible.builtin.apt:
upgrade: full
autoclean: true
register: apt_upgrade
changed_when: apt_upgrade is changed
- name: INSTALL APACHE2
# Installs Apache2 HTTP server
ansible.builtin.apt:
name: apache2
state: present
- name: ENABLE MOD_REWRITE
# Enables mod_rewrite module for Apache2
community.general.apache2_module:
name: rewrite
state: present
notify: RESTART APACHE2
- name: GET APACHE2 SERVICE STATUS
ansible.builtin.systemd:
name: apache2
register: apache2_status
- name: PRINT APACHE2 SERVICE STATUS
ansible.builtin.debug:
var: apache2_status.status.ActiveState
handlers:
- name: RESTART APACHE2
# Restarts Apache2 service
ansible.builtin.service:
name: apache2
state: restarted
```
Here we introduce the **systemd** module and the ability to **debug** within a playbook: the state of the **systemd** service is **registered** in a variable, and its value is then displayed.
When the playbook is run, the relevant part of the output is given below:
```bash=
TASK [GET APACHE2 SERVICE STATUS] ***********************************************************
ok: [web01]
ok: [web04]
ok: [web03]
ok: [web02]
TASK [PRINT APACHE2 SERVICE STATUS] *********************************************************
ok: [web01] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web02] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web03] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web04] => {
"apache2_status.status.ActiveState": "active"
}
```
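The same check can be done without a playbook; here is a quick ad-hoc sketch querying systemd through the **command** module:
```bash
# Ad-hoc equivalent of the status task: ask systemd directly
ansible containers -m command -a "systemctl is-active apache2" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
```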
### Step 4: Reconfigure Apache server to listen on port 8081
In this step, we will add two tasks that use the **lineinfile** Ansible module to edit configuration files.
Here are the new tasks to add to the `05_install_apache.yml` playbook.
```yaml=
- name: SET APACHE2 LISTEN ON PORT 8081
ansible.builtin.lineinfile:
dest: /etc/apache2/ports.conf
regexp: ^Listen 80
line: Listen 8081
state: present
notify: RESTART APACHE2
- name: SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
ansible.builtin.lineinfile:
dest: /etc/apache2/sites-available/000-default.conf
regexp: ^<VirtualHost \*:80>
line: <VirtualHost *:8081>
state: present
notify: RESTART APACHE2
```
SET APACHE2 LISTEN ON PORT 8081
: Modifies the main Apache2 configuration file `/etc/apache2/ports.conf` by searching for lines starting with `Listen 80` and replacing them with `Listen 8081`. The **lineinfile module** ensures that this change is idempotent, meaning that the substitution will only occur once regardless of how many times the playbook is run.
SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
: Updates the default VirtualHost configuration file `/etc/apache2/sites-available/000-default.conf` to change the VirtualHost directive from port 80 to port 8081, ensuring that the VirtualHost configuration matches the new listening port. The regular expression `^<VirtualHost \*:80>` targets the specific line to be changed, and the module ensures idempotent execution by only making the change if it hasn't already been made.
Both tasks notify the `RESTART APACHE2` handler, ensuring that the Apache service is restarted after configuration changes to take advantage of the new port settings.
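To convince yourself that these regular expressions match the stock Ubuntu configuration, you can test them locally. Here is a small sketch using `grep -E`, whose extended syntax is close enough to Python's for such simple patterns:
```bash
# The default ports.conf line matches the first pattern
echo "Listen 80" | grep -E '^Listen 80'

# The default VirtualHost line matches the second one
echo "<VirtualHost *:80>" | grep -E '^<VirtualHost \*:80>'
```
Each command prints its input line back, confirming that the pattern would select that line for replacement.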
When the playbook is run, the relevant part of the output is given below:
```bash=
TASK [SET APACHE2 LISTEN ON PORT 8081] *****************************************************
changed: [web02]
changed: [web04]
changed: [web03]
changed: [web01]
TASK [SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081] *****************************************
changed: [web02]
changed: [web01]
changed: [web03]
changed: [web04]
TASK [GET APACHE2 SERVICE STATUS] **********************************************************
ok: [web01]
ok: [web02]
ok: [web03]
ok: [web04]
TASK [PRINT APACHE2 SERVICE STATUS] ********************************************************
ok: [web01] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web02] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web04] => {
"apache2_status.status.ActiveState": "active"
}
ok: [web03] => {
"apache2_status.status.ActiveState": "active"
}
RUNNING HANDLER [RESTART APACHE2] **********************************************************
changed: [web01]
changed: [web04]
changed: [web02]
changed: [web03]
PLAY RECAP *********************************************************************************
web01 : ok=10 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web02 : ok=10 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web03 : ok=10 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web04 : ok=10 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
The `changed` counter value is 3 and all apache2 services were restarted.
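We can also spot-check the edited configuration files directly; here is a sketch using ad-hoc **command** calls with the same vault setup:
```bash
# Confirm the Listen directive was rewritten in ports.conf
ansible containers -m command -a "grep -H '^Listen' /etc/apache2/ports.conf" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml

# Confirm the VirtualHost directive now uses port 8081
ansible containers -m command -a "grep -H '^<VirtualHost' /etc/apache2/sites-available/000-default.conf" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
```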
Finally, we can list the TCP sockets open in the listening state on port 8081.
```bash
ansible containers -m command -a "ss -ltn '( sport = :8081 )'" --ask-vault-pass --extra-vars @$HOME/.lab_passwd.yml
Vault password:
```
```bash=
web01 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web03 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web04 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web02 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
```
In the results above, lines 3, 6, 9, and 12 show that each container has an open TCP socket listening on port 8081.
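Before automating the application-layer check in the next step, you can probe a single container by hand from the Devnet VM. In the sketch below, `CONTAINER_GUA` is a placeholder for one container's IPv6 global address, as shown for instance by `incus list` on the server VM:
```bash
# Fetch only the response headers over IPv6; expect "HTTP/1.1 200 OK"
curl -6 -I "http://[CONTAINER_GUA]:8081/"
```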
### Step 5: Verify access to the web services
When deploying new services, it is important to verify that they are actually reachable at the application layer.
To do this, we add a final set of tasks to the `05_install_apache.yml` playbook that make HTTP requests from the Devnet development VM and verify that the response code is `200 OK`.
```yaml=
- name: DEBUG IPV6 ADDRESS
ansible.builtin.debug:
msg: "IPv6 address for {{ inventory_hostname }}: {{ ansible_default_ipv6.address }}"
when: ansible_default_ipv6 is defined
- name: CHECK HTTP STATUS CODE FROM DEVNET VM # checkov:skip=CKV2_ANSIBLE_1:Using HTTP instead of HTTPS for internal testing
delegate_to: localhost
ansible.builtin.uri:
url: "http://[{{ ansible_default_ipv6.address }}]:8081"
status_code: 200
become: false
register: http_check
ignore_errors: true
when: ansible_default_ipv6 is defined
- name: SHOW HTTP RESULT
ansible.builtin.debug:
msg: "HTTP Status for {{ inventory_hostname }}: {{ http_check.status }} - {{ http_check.msg | default('Success') }}"
when: http_check is defined and http_check.status is defined
```
As our playbook starts with the **Gathering Facts** task, many Ansible variables are set during this first phase.
In the example above, we use the IPv6 address of each container in the HTTP URL and expect status code **200** as a successful result.
* **delegate_to: localhost** instructs Ansible to run the task from the Devnet VM.
* **become: false** runs the task with normal user privileges, without escalation.
If the playbook runs successfully, we first get the IPv6 Global Unicast Address (GUA) of each container, then a printout of the HTTP response code for each of them.
```bash=
TASK [DEBUG IPV6 ADDRESS] ****************************************************************
ok: [web01] => {
"msg": "IPv6 address for web01: 2001:678:3fc:34:216:3eff:fe00:404"
}
ok: [web02] => {
"msg": "IPv6 address for web02: 2001:678:3fc:34:216:3eff:fecc:2fac"
}
ok: [web03] => {
"msg": "IPv6 address for web03: 2001:678:3fc:34:216:3eff:fe75:84c5"
}
ok: [web04] => {
"msg": "IPv6 address for web04: 2001:678:3fc:34:216:3eff:fedc:e80"
}
TASK [CHECK HTTP STATUS CODE FROM DEVNET VM] *********************************************
ok: [web04 -> localhost]
ok: [web02 -> localhost]
ok: [web01 -> localhost]
ok: [web03 -> localhost]
TASK [SHOW HTTP RESULT] ******************************************************************
ok: [web01] => {
"msg": "HTTP Status for web01: 200 - OK (10703 bytes)"
}
ok: [web02] => {
"msg": "HTTP Status for web02: 200 - OK (10703 bytes)"
}
ok: [web04] => {
"msg": "HTTP Status for web04: 200 - OK (10703 bytes)"
}
ok: [web03] => {
"msg": "HTTP Status for web03: 200 - OK (10703 bytes)"
}
PLAY RECAP *******************************************************************************
web01 : ok=12 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web02 : ok=12 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web03 : ok=12 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web04 : ok=12 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
## Conclusion
In this lab, you automated the installation of web servers on Incus system containers with Ansible. Along the way, you set up the target virtual machine, created a dedicated Ansible user with securely managed secrets, built a dynamic inventory with a Python script, and wrote playbooks that install, configure, and validate Apache on every container.
If you've reached these lines, I hope you've enjoyed the trip :smiley: