# DevNet Lab 16 -- Using Ansible to automate the installation of web services on Incus containers

[toc]

---

> Copyright (c) 2026 Philippe Latu.
> Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".

### Scenario

In this lab, you will configure Ansible to communicate with a virtual machine hosting web servers in Incus system containers. Then, you will create playbooks to automate the installation of Incus on the web server virtual machine (VM). Next, you will build a dynamic inventory from Incus container manager runtime information. Lastly, you will create a custom playbook that installs Apache with specific instructions for each container.

![Lab 16 topology](https://md.inetdoc.net/uploads/7559f148-6474-4453-92d2-86016e7081b6.png)

### Objectives

After completing the hands-on activities of this lab, you will be able to:

- Set up Ansible on the DevNet development VM.
- Start a target virtual machine for hosting Incus containers.
- Generate and manage user secrets for Ansible using a dedicated user account.
- Create a dynamic inventory based on the current Incus container manager state.
- Automate and troubleshoot Apache installation on a group of containers.

## Part 1: Launch the Web server VM

This virtual machine's network connection must belong to the Out-of-Band VLAN for automation processes and to an In-Band VLAN for application traffic. Therefore, before starting the new virtual machine that will host Incus containers connected to the In-Band VLAN, you first need to set a tap interface in trunk mode with the correct VLAN list.
### Step 1: Apply hypervisor switch port configuration

Below is an example of a switch port configuration file that can be named `lab16-switch.yaml`. Edit this example to set your assigned switch port parameters.

- Your tap interface name.
- Your hypervisor out-of-band (OOB) VLAN identifier.
- This lab VLAN identifier must be set to **303** for user in-band traffic.

```yaml=
ovs:
  switches:
    - name: dsw-host
      ports:
        - name: tapYYY # <-- YOUR TAP NUMBER
          type: OVSPort
          vlan_mode: trunk
          trunks: [OOB_VLAN_ID, LAB_VLAN_ID]
```

Apply the hypervisor switch port configuration.

```bash
switch-conf.py --apply lab16-switch.yaml
```

### Step 2: Start the Incus container server

Next, you need to declare the properties of the new virtual server, especially its network configuration, which is specific to container hosting. The design point here is that we want the virtual server and its containers to be connected to different VLANs. This is why we choose to set up an internal switch named `c-3po`.

- The Out-of-Band (OOB) VLAN is dedicated to administration and automation from the DevNet VM.
- The lab VLAN is dedicated to container traffic.

To create your virtual machine's YAML declaration file, such as `lab16-server.yaml`, you must copy and edit the example below.

We start by calculating the MAC address. Here is a sample session for tap interface number **347**:

```bash
python3
```

```python=
Python 3.13.12 (main, Feb  4 2026, 15:06:39) [GCC 15.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> tap_number = 347
>>> mac_address = f"b8:ad:ca:fe:{(tap_number // 256):02x}:{(tap_number % 256):02x}"
>>> print(mac_address)
b8:ad:ca:fe:01:5b
>>>
```

Now that we know the MAC address for our C-3PO internal switch, we can add it to the YAML declaration file.
```yaml=
kvm:
  vms:
    - vm_name: lab16-vm
      os: linux
      master_image: debian-testing-amd64.qcow2
      force_copy: false
      memory: 2048
      tapnum: YYY # <-- YOUR TAP NUMBER
      cloud_init:
        force_seed: false
        hostname: lab16-server
        apt:
          # Preserve configuration files during package install
          # Add --no-install-recommends to incus install
          conf: |
            Dpkg::Options {
              "--force-confdef";
              "--force-confold";
            }
            APT::Install-Recommends "false";
            APT::Install-Suggests "false";
        packages:
          - openvswitch-switch
          - incus
        users:
          - name: admin
            shell: /bin/bash
            groups: users, adm, sudo
            sudo: ALL=(ALL) NOPASSWD:ALL
            ssh_authorized_keys:
              - ssh-ed25519 AAAA... # <-- YOUR CURRENT PUBLIC KEY
        ssh_pwauth: false
        netplan:
          network:
            version: 2
            renderer: networkd
            ethernets:
              enp0s1:
                dhcp4: false
                dhcp6: false
                accept-ra: false
            bridges:
              c-3po:
                openvswitch: {}
                interfaces: [enp0s1]
            vlans:
              vlan${OOB_VLAN}:
                id: OOB_VLAN_ID # <-- YOUR OOB VLAN IDENTIFIER
                link: c-3po
                dhcp4: true
                dhcp6: false
                accept-ra: true
                macaddress: b8:ad:ca:fe:ZZ:ZZ # <-- YOUR MAC ADDRESS
              vlan${LAB_VLAN}:
                id: 303 # <-- YOUR INBAND VLAN IDENTIFIER
                link: c-3po
                dhcp4: false
                dhcp6: false
                accept-ra: true
                routes:
                  - to: ::/0
                    via: fe80:12f::1
                    on-link: true
                    table: 303
                routing-policy:
                  - from: 2001:678:3fc:12f::/64
                    table: 303
                    priority: 103
        runcmd:
          - rm -f /etc/netplan/enp0s1.yaml
          - netplan apply
          - adduser admin incus
          - adduser admin incus-admin
```

Here is a short description of the main cloud-init options used in this declaration:

`hostname`:
: Sets the host name of the virtual machine, which will also appear in the shell prompt.

`apt.conf`:
: Provides custom APT options to preserve existing configuration files and disable the automatic installation of recommended and suggested packages. The goal is to limit Incus package dependency installations.

`packages`:
: Lists the Debian packages that must be installed at first boot (here: Open vSwitch and Incus).

`users`:
: Creates the `admin` user account, defines its shell and groups, and configures passwordless sudo rights.
`ssh_authorized_keys`:
: Adds the DevNet user public SSH key so that you can log in as `admin` without a password.

`ssh_pwauth`:
: Disables SSH password authentication to enforce key-based access only.

`netplan`:
: Defines the network configuration, including the physical interface, the Open vSwitch bridge, and VLAN interfaces for OOB and lab traffic.

:::info
The `routes` and `routing-policy` options are configured to manage IPv6 routing, ensuring that traffic from the lab subnet uses the correct gateway and routing table.
:::

`runcmd`:
: Runs additional commands at the end of the cloud-init process, such as cleaning up the old netplan file, applying the new configuration, and adding the `admin` user to the `incus` and `incus-admin` groups.

Once you have set your MAC address and VLAN identifiers in this YAML declaration file, you can start the Incus container server virtual machine with the `lab-startup.py` script.

```bash
lab-startup.py lab16-server.yaml
```

### Step 3: Open a first SSH connection to the Incus container server

As usual, do not forget to change the tap interface number at the right end of the link-local IPv6 address.

```bash
ssh admin@fe80::baad:caff:fefe:XXX%enp0s1
```

- The `admin` user is added via the cloud-init process during the first boot of the Incus server VM.
- XXX is the hexadecimal conversion of the tap interface number.
- OOB_VLAN_ID is the out-of-band VLAN identifier.

Once the first SSH connection is verified, add a new entry in your SSH client configuration file on the DevNet VM. As you already did before, replace the YYY placeholder with the hexadecimal value of the tap interface number you used to start this VM.

```bash
cat <<EOF >>$HOME/.ssh/config
Host lab16-server
        HostName fe80::baad:caff:fefe:YYY%%enp0s1
        User admin
EOF
```

Run another passwordless authentication test.
```bash
ssh lab16-server
```

```bash=
Linux lab16 6.18.9+deb14-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.18.9-1 (2026-02-07) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Feb 16 16:28:10 2026 from fe80::baad:caff:fefe:0%vlan52
admin@lab16-server:~$
```

## Part 2: Configure Ansible on the DevNet VM

The Incus container server hosting VM is now ready for Ansible automation. First, you need to configure Ansible and verify that you have access to the server from the DevNet virtual machine via SSH.

### Step 1: Create the Ansible directory and configuration file

1. Ensure the `~/labs/lab16` directory exists and navigate to this folder.

   ```bash
   mkdir -p ~/labs/lab16 && cd ~/labs/lab16
   ```

2. Install the Ansible Python virtual environment.

   There are two main ways to set up a new Ansible workspace: distribution packages and Python virtual environments are both viable options. Here we choose to install Ansible in a Python virtual environment to take advantage of the latest release.

   We start by creating a `requirements.txt` file.

   ```bash
   cat << EOF > requirements.txt
   ansible
   ansible-lint
   netaddr
   EOF
   ```

   Then we install the tools in a virtual environment called `ansible`.

   ```bash
   python3 -m venv ansible
   source ./ansible/bin/activate
   pip3 install -r requirements.txt
   ```

3. Create a new `ansible.cfg` file in the `lab16` directory from the shell prompt.
```bash=
cat << 'EOF' > ansible.cfg
# config file for Lab 16 Incus containers management
[defaults]
# Use inventory/ folder files as source
inventory=inventory/
host_key_checking = False   # Don't worry about RSA Fingerprints
retry_files_enabled = False # Do not create them
deprecation_warnings = False # Do not show warnings
interpreter_python = /usr/bin/python3

[inventory]
enable_plugins = auto, host_list, yaml, ini, toml, script

[persistent_connection]
command_timeout=100
connect_timeout=100
connect_retry_timeout=100
EOF
```

### Step 2: Create a new inventory file

Ensure the `inventory` directory is created:

```bash
mkdir -p $HOME/labs/lab16/inventory
```

Create the `inventory/hosts.yaml` inventory file for your Container Server VM.

```bash=
cat << 'EOF' > inventory/hosts.yaml
---
vms:
  hosts:
    lab16-server:
  vars:
    ansible_ssh_user: admin
all:
  children:
    vms:
    containers:
EOF
```

This `hosts.yaml` file contains a single variable that designates the Ansible user account used to communicate with the container server VM. Exchanges between the DevNet VM and the container server VM are secured via passwordless SSH authentication within the out-of-band VLAN.

### Step 3: Verify Ansible communication with the container server VM

Now we can use the `ping` Ansible module to connect to the `lab16-server` entry defined in the inventory file.

```bash
ansible lab16-server -m ping
```

```bash=
lab16-server | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```

Since the Ansible ping is successful, we can proceed with setting up container management inside the target VM.

## Part 3: Initialize Incus container management with Ansible

In order to be able to launch system containers and configure Web services in these containers, we first must initialize the Incus manager with an Ansible playbook.

### Step 1: Create the `01_incus_init.yaml` playbook

Create the `01_incus_init.yaml` file and add the following tasks to it.
Use a Bash heredoc to make sure that YAML indentation is correct.

```bash
cat << 'EOF' > 01_incus_init.yaml
# Playbook file content
EOF
```

Here is a copy of the YAML content of the file `01_incus_init.yaml`.

```yaml=
---
- name: INCUS INITIALIZATION
  hosts: lab16-server
  tasks:
    - name: CHECK IF INCUS IS INITIALIZED
      # If no storage pools are present, Incus is not initialized
      ansible.builtin.command: incus storage ls -cn -f yaml
      register: incus_status
      changed_when: false
      failed_when: incus_status.rc != 0

    - name: INITIALIZE INCUS
      ansible.builtin.shell: |
        set -o pipefail
        cat << EOT | incus admin init --preseed
        config: {}
        networks: []
        storage_pools:
        - config: {}
          description: ""
          name: default
          driver: dir
        profiles:
        - config: {}
          description: ""
          devices:
            eth0:
              name: eth0
              nictype: bridged
              parent: c-3po
              type: nic
              vlan: 303
            root:
              path: /
              pool: default
              type: disk
          name: default
        projects: []
        cluster: null
        EOT
      register: incus_init
      changed_when: incus_init.rc == 0
      failed_when: incus_init.rc != 0
      when: incus_status.stdout == '[]'

    - name: SHOW INCUS DEFAULT PROFILE
      ansible.builtin.command: incus profile show default
      register: incus_profile
      changed_when: false
      failed_when: incus_profile.rc != 0

    - name: COMPARE CURRENT PROFILE TO EXPECTED STATE
      ansible.builtin.assert:
        that:
          - "incus_profile.stdout is search('name: default')"
          - "incus_profile.stdout is search('parent: c-3po')"
          - 'incus_profile.stdout is regex(''vlan: "303"'')'
          - "incus_profile.stdout is search('pool: default')"
        fail_msg: Incus profile does not match expected state
      changed_when: false
```

This playbook is designed with idempotency as a core principle to ensure consistent and predictable results across multiple executions. The playbook focuses on initializing Incus only when necessary and verifying that the default profile aligns with the expected configuration.
The main tasks are:

**CHECK IF INCUS IS INITIALIZED**:
: Runs `incus storage ls -cn -f yaml` and inspects the output to determine whether any storage pool already exists. If the output is empty (`[]`), Incus is considered not initialized. This task never reports changes; it only performs a state check.

**INITIALIZE INCUS**:
: Uses `incus admin init --preseed` with an inline YAML configuration to create a default storage pool, a default profile, and associate the `eth0` interface with the `c-3po` Open vSwitch bridge on VLAN 303. This task runs only when Incus is not yet initialized and reports a change when the initialization succeeds.

**SHOW INCUS DEFAULT PROFILE**:
: Retrieves the current `default` profile configuration with `incus profile show default`. This task is read-only and does not change anything on the system.

**COMPARE CURRENT PROFILE TO EXPECTED STATE**:
: Uses `ansible.builtin.assert` to verify that the `default` profile contains the expected values: profile name `default`, parent bridge `c-3po`, VLAN `303`, and storage pool `default`. If any of these checks fail, the playbook stops with a clear error message indicating that the Incus profile does not match the expected state.

### Step 2: Run the `01_incus_init.yaml` playbook twice

In this step, we run the Ansible playbook twice, looking specifically at the `changed` counter to assess task idempotency.
```bash
ansible-playbook 01_incus_init.yaml
```

```bash=
PLAY [INCUS INITIALIZATION] *************************************************

TASK [Gathering Facts] ******************************************************
ok: [lab16-server]

TASK [CHECK IF INCUS IS INITIALIZED] ****************************************
ok: [lab16-server]

TASK [INITIALIZE INCUS] *****************************************************
changed: [lab16-server]

TASK [SHOW INCUS DEFAULT PROFILE] *******************************************
ok: [lab16-server]

TASK [COMPARE CURRENT PROFILE TO EXPECTED STATE] ****************************
ok: [lab16-server] => {
    "changed": false,
    "msg": "All assertions passed"
}

PLAY RECAP ******************************************************************
lab16-server : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

After this first playbook run, the `changed` counter value is 1, which means Incus has just been initialized. Let's do a second run.

```bash
ansible-playbook 01_incus_init.yaml
```

```bash=
PLAY [INCUS INITIALIZATION] *************************************************

TASK [Gathering Facts] ******************************************************
ok: [lab16-server]

TASK [CHECK IF INCUS IS INITIALIZED] ****************************************
ok: [lab16-server]

TASK [INITIALIZE INCUS] *****************************************************
skipping: [lab16-server]

TASK [SHOW INCUS DEFAULT PROFILE] *******************************************
ok: [lab16-server]

TASK [COMPARE CURRENT PROFILE TO EXPECTED STATE] ****************************
ok: [lab16-server] => {
    "changed": false,
    "msg": "All assertions passed"
}

PLAY RECAP ******************************************************************
lab16-server : ok=4 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
```

We now have proof of idempotency, as the `changed` counter value is zero and the initialization task was skipped.
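The guard on the initialization task boils down to a single predicate on the `incus storage ls` output. As a plain-Python illustration of that decision (not part of the playbook itself), the logic reads:

```python
def needs_init(storage_ls_output: str) -> bool:
    """Mirror of the playbook's `when` condition: `incus storage ls -cn -f yaml`
    prints an empty YAML list ("[]") when no storage pool exists yet, which is
    how the playbook decides to run `incus admin init --preseed`."""
    return storage_ls_output.strip() == "[]"


# Fresh server: no storage pool yet, so initialization must run.
assert needs_init("[]")
# Already initialized: the YAML list contains at least one pool name.
assert not needs_init("- name: default\n")
```

Keeping the check and the change in two separate tasks, linked by `register`/`when`, is what makes the second run skip the initialization instead of repeating it.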
## Part 4: Instantiate containers with Ansible

In this part, we begin to manage web services on demand with Incus container instantiation based on an Ansible playbook.

### Step 1: Create a lab inventory template

In Part 2, Step 2, we created the `inventory/hosts.yaml` file, which defines all the parameters necessary to run Ansible playbooks on the container server VM. Now we need to create a new inventory file called `inventory/lab.yaml` that defines all the system container parameters. The purpose here is to be able to run Ansible playbooks inside these containers.

```bash=
cat << 'EOF' > inventory/lab.yaml
---
containers:
  hosts:
    web[01:04]:
  vars:
    ansible_ssh_user: "{{ webuser_name }}"
    ansible_ssh_pass: "{{ webuser_pass }}"
    ansible_become_pass: "{{ webuser_pass }}"
EOF
```

> Note: This inventory file is incomplete because it does not define the `ansible_host` variable for each container. Once the containers are started, we will get their addresses and complete the inventory.

### Step 2: Add a new entry in the Ansible vault for container access

In the previous step, we referenced a user account whose name is stored in the `webuser_name` variable and whose password is stored in the `webuser_pass` variable. We need to add the corresponding entries to the Ansible vault file named `$HOME/.lab16_passwd.yaml`. To do this, we open the vault file for editing.

```bash
ansible-vault create $HOME/.lab16_passwd.yaml
New Vault password:
Confirm New Vault password:
```

There we enter the two variables that specify the name and password of the user account in each container.

```bash
webuser_name: web
webuser_pass: XXXXXXXXXX
```

As a reminder, you can use the `openssl` command to generate a reasonably strong secret for your container user account.

```bash
openssl rand -base64 16 | tr -d '='
```

### Step 3: Create the `02_incus_launch.yaml` Ansible playbook to launch and configure access to containers

The purpose of this step is to automate the deployment and configuration of Incus containers with a focus on idempotency.
It will provision multiple Debian Forky containers, create user accounts, and install the necessary packages to prepare the containers for SSH access.

Create the `02_incus_launch.yaml` file and add the following tasks to the file. You can use a Bash heredoc to make sure that YAML indentation is correct.

```bash=
cat << 'EOF' > 02_incus_launch.yaml
# Playbook file content
EOF
```

Here is a copy of the YAML content of the file `02_incus_launch.yaml`.

```yaml=
---
- name: LAUNCH INCUS CONTAINERS (HOST SIDE)
  hosts: lab16-server
  vars:
    container_image: images:debian/forky
    container_names: "{{ groups['containers'] }}"
    container_wait_timeout_seconds: "{{ (container_names | length) * 10 }}"
    packages_to_install:
      - openssh-server
      - python3
      - python3-apt
      - apt-utils
    webuser_password_marker: ./lab16_webuser_password_initialized
  tasks:
    - name: ENSURE CONTAINER IS PRESENT AND RUNNING
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          if incus info {{ item }} >/dev/null 2>&1; then
            state=$(incus list --format csv -c n,s | awk -F',' '$1=="{{ item }}"{print $2}')
            if [ "$state" = "STOPPED" ]; then
              incus start {{ item }}
              echo CHANGED_STARTED
            fi
          else
            incus launch {{ container_image }} {{ item }}
            echo CHANGED_LAUNCHED
          fi
        executable: /bin/bash
      loop: "{{ container_names }}"
      loop_control:
        label: "{{ item }}"
      register: container_launch_status
      changed_when: "'CHANGED_' in container_launch_status.stdout"

    - name: WAIT FOR CONTAINER READINESS
      ansible.builtin.shell:
        cmd: |
          for i in $(seq 1 {{ container_wait_timeout_seconds }}); do
            incus exec {{ item }} -- /bin/true >/dev/null 2>&1 && exit 0
            sleep 1
          done
          exit 1
        executable: /bin/bash
      loop: "{{ container_names }}"
      loop_control:
        label: "{{ item }}"
      changed_when: false

    - name: UPDATE APT CACHE IN CONTAINERS
      ansible.builtin.shell:
        cmd: incus exec {{ item }} -- apt update
        executable: /bin/bash
      loop: "{{ container_names }}"
      changed_when: false

    - name: ENSURE REQUIRED PACKAGES ARE INSTALLED
      ansible.builtin.shell:
        cmd: >-
          incus exec {{ item }} -- apt install -y
          {{ packages_to_install | join(' ') }}
        executable: /bin/bash
      loop: "{{ container_names }}"
      register: apt_install_status
      changed_when: "'Setting up' in apt_install_status.stdout"

    - name: ENSURE WEBUSER ACCOUNT EXISTS
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          incus exec {{ item }} -- id {{ webuser_name }} >/dev/null 2>&1 ||
          (incus exec {{ item }} -- adduser --quiet --gecos "" --disabled-password {{ webuser_name }} &&
          echo CHANGED_USER_CREATED)
        executable: /bin/bash
      loop: "{{ container_names }}"
      register: user_status
      changed_when: "'CHANGED_USER_CREATED' in user_status.stdout"

    - name: INITIALIZE WEBUSER PASSWORD ON FIRST RUN
      ansible.builtin.shell:
        cmd: |
          incus exec {{ item }} -- test -f {{ webuser_password_marker }} || (
            incus exec {{ item }} -- chpasswd <<<'{{ webuser_name }}:{{ webuser_pass }}'
            incus exec {{ item }} -- touch {{ webuser_password_marker }}
            echo CHANGED_PASSWORD_SET
          )
        executable: /bin/bash
      loop: "{{ container_names }}"
      no_log: true
      register: password_status
      changed_when: "'CHANGED_PASSWORD_SET' in password_status.stdout"

    - name: ENSURE WEBUSER IS IN SUDO GROUP
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          incus exec {{ item }} -- id -nG {{ webuser_name }} | grep -qw sudo ||
          (incus exec {{ item }} -- usermod -aG sudo {{ webuser_name }} &&
          echo CHANGED_SUDO_ADDED)
        executable: /bin/bash
      loop: "{{ container_names }}"
      register: sudo_group_status
      changed_when: "'CHANGED_SUDO_ADDED' in sudo_group_status.stdout"

    - name: ENSURE SSH SERVICE ENABLED AND STARTED
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          if ! incus exec {{ item }} -- systemctl is-enabled --quiet ssh ||
             ! incus exec {{ item }} -- systemctl is-active --quiet ssh; then
            incus exec {{ item }} -- systemctl enable --now ssh
            echo CHANGED_SSH_SERVICE
          fi
        executable: /bin/bash
      loop: "{{ container_names }}"
      register: ssh_service_status
      changed_when: "'CHANGED_SSH_SERVICE' in ssh_service_status.stdout"

    - name: CLEAN APT CACHE IN CONTAINERS
      ansible.builtin.shell:
        cmd: incus exec {{ item }} -- apt clean
        executable: /bin/bash
      loop: "{{ container_names }}"
      changed_when: false
```

This playbook is responsible for provisioning and preparing the Incus containers so they can be managed via SSH and Ansible. It is designed to be idempotent, so that you can run it multiple times without causing unnecessary changes once the containers are fully configured.

The key tasks are:

ENSURE CONTAINER IS PRESENT AND RUNNING:
: For each container name in the `containers` inventory group, this task checks whether the container already exists and what its current state is.
  - If the container does not exist, it is created from the `container_image` (`images:debian/forky`).
  - If it exists but is stopped, it is started.

  The task reports a change only when a container is created or started.

WAIT FOR CONTAINER READINESS:
: This task waits until each container can successfully execute a simple command (`/bin/true`) using `incus exec`. It does not report any changes; it only ensures that all containers are ready before continuing.

UPDATE APT CACHE IN CONTAINERS:
: Runs `apt update` inside each container to refresh the package index. This is treated as a read-only operation from Ansible's point of view and is always reported as unchanged.

ENSURE REQUIRED PACKAGES ARE INSTALLED:
: Installs the list of required packages (`openssh-server`, `python3`, `python3-apt`, `apt-utils`) inside each container. The `changed_when` condition inspects the command output to detect when packages are actually installed or upgraded, so that subsequent runs do not report changes if everything is already present.
ENSURE WEBUSER ACCOUNT EXISTS:
: Checks whether the `webuser_name` account already exists in each container. If it does not, the user is created and the task reports a change. If the user is already present, the task is marked as ok with no change.

INITIALIZE WEBUSER PASSWORD ON FIRST RUN:
: Sets the `webuser_pass` password only once per container, using a marker file (`lab16_webuser_password_initialized`) stored inside the container. On the first run, the password is set and the marker is created; on subsequent runs, the marker prevents the password from being reset, which is important for security and idempotency.

ENSURE WEBUSER IS IN SUDO GROUP:
: Verifies whether the web user is already a member of the `sudo` group. If not, the user is added to the group and the task reports a change. Otherwise, it remains unchanged.

ENSURE SSH SERVICE ENABLED AND STARTED:
: Checks that the `ssh` service is both enabled and running in each container. If the service is not enabled or not active, it is enabled and started, and the task reports a change; otherwise, it remains unchanged.

CLEAN APT CACHE IN CONTAINERS:
: Runs `apt clean` inside each container to remove cached package files and free disk space. This housekeeping task is always reported as unchanged.
### Step 4: Run the `02_incus_launch.yaml` playbook twice

```bash
ansible-playbook 02_incus_launch.yaml \
  --ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml
Vault password:
```

```bash=
PLAY [LAUNCH INCUS CONTAINERS (HOST SIDE)] **********************************

TASK [Gathering Facts] ******************************************************
ok: [lab16-server]

TASK [ENSURE CONTAINER IS PRESENT AND RUNNING] ******************************
changed: [lab16-server] => (item=web01)
changed: [lab16-server] => (item=web02)
changed: [lab16-server] => (item=web03)
changed: [lab16-server] => (item=web04)

TASK [WAIT FOR CONTAINER READINESS] *****************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [UPDATE APT CACHE IN CONTAINERS] ***************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [ENSURE REQUIRED PACKAGES ARE INSTALLED] *******************************
changed: [lab16-server] => (item=web01)
changed: [lab16-server] => (item=web02)
changed: [lab16-server] => (item=web03)
changed: [lab16-server] => (item=web04)

TASK [ENSURE WEBUSER ACCOUNT EXISTS] ****************************************
changed: [lab16-server] => (item=web01)
changed: [lab16-server] => (item=web02)
changed: [lab16-server] => (item=web03)
changed: [lab16-server] => (item=web04)

TASK [INITIALIZE WEBUSER PASSWORD ON FIRST RUN] *****************************
changed: [lab16-server] => (item=(censored due to no_log))
changed: [lab16-server] => (item=(censored due to no_log))
changed: [lab16-server] => (item=(censored due to no_log))
changed: [lab16-server] => (item=(censored due to no_log))

TASK [ENSURE WEBUSER IS IN SUDO GROUP] **************************************
changed: [lab16-server] => (item=web01)
changed: [lab16-server] => (item=web02)
changed: [lab16-server] => (item=web03)
changed: [lab16-server] => (item=web04)

TASK [ENSURE SSH SERVICE ENABLED AND STARTED] *******************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [CLEAN APT CACHE IN CONTAINERS] ****************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

PLAY RECAP ******************************************************************
lab16-server : ok=10 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

For this first playbook run, the `changed` counter value is 5 because many tasks perform the initial provisioning of containers, packages, users, and services. Let's do a second run to verify that the `changed` counter drops to zero.

```bash
ansible-playbook 02_incus_launch.yaml \
  --ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml
Vault password:
```

```bash=
PLAY [LAUNCH INCUS CONTAINERS (HOST SIDE)] **********************************

TASK [Gathering Facts] ******************************************************
ok: [lab16-server]

TASK [ENSURE CONTAINER IS PRESENT AND RUNNING] ******************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [WAIT FOR CONTAINER READINESS] *****************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [UPDATE APT CACHE IN CONTAINERS] ***************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [ENSURE REQUIRED PACKAGES ARE INSTALLED] *******************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [ENSURE WEBUSER ACCOUNT EXISTS] ****************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [INITIALIZE WEBUSER PASSWORD ON FIRST RUN] *****************************
ok: [lab16-server] => (item=(censored due to no_log))
ok: [lab16-server] => (item=(censored due to no_log))
ok: [lab16-server] => (item=(censored due to no_log))
ok: [lab16-server] => (item=(censored due to no_log))

TASK [ENSURE WEBUSER IS IN SUDO GROUP] **************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [ENSURE SSH SERVICE ENABLED AND STARTED] *******************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

TASK [CLEAN APT CACHE IN CONTAINERS] ****************************************
ok: [lab16-server] => (item=web01)
ok: [lab16-server] => (item=web02)
ok: [lab16-server] => (item=web03)
ok: [lab16-server] => (item=web04)

PLAY RECAP ******************************************************************
lab16-server : ok=10 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

On the second run, the `changed` counter value is 0. This confirms that the playbook is idempotent: once the containers are fully configured, running the same playbook again does not trigger any additional changes.

## Part 5: Complete a dynamic inventory

Now that the containers have started, we need their addresses to complete the Ansible inventory. This will allow us to run playbooks in each container. We use Ansible to create a new YAML inventory file based on the container list information that Incus provides for the server VM.
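As an aside, the global IPv6 addresses the containers obtain through SLAAC embed an EUI-64 identifier derived from each container's (randomly generated) MAC address, which is why the addresses cannot be known before the containers start. A minimal sketch of that derivation, assuming EUI-64-based SLAAC; the MAC `10:66:6a:d3:f7:c2` is inferred from the sample addresses shown in this part and is used here only for illustration:

```python
def eui64_suffix(mac: str) -> str:
    """Derive the EUI-64 interface identifier from a MAC address:
    flip the universal/local bit of the first byte, then insert ff:fe
    in the middle, and format the result as four IPv6 groups."""
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02  # flip the universal/local bit
    words = [
        (b[0] << 8) | b[1],
        (b[2] << 8) | 0xFF,
        (0xFE << 8) | b[3],
        (b[4] << 8) | b[5],
    ]
    return ":".join(f"{w:x}" for w in words)


# Matches the suffix of web01's sample address 2001:678:3fc:12f:1266:6aff:fed3:f7c2
assert eui64_suffix("10:66:6a:d3:f7:c2") == "1266:6aff:fed3:f7c2"
```

This is exactly why the next step queries Incus at runtime instead of hardcoding addresses in the inventory.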
### Step 1: Fetch container list and addresses

Here is a short new playbook named `03_incus_inventory.yaml` which retrieves configuration from the container server VM to the DevNet VM. You can use a Bash heredoc to make sure that YAML indentation is correct.

```bash=
cat << 'EOF' > 03_incus_inventory.yaml
# Playbook file content
EOF
```

```yaml=
---
- name: BUILD CONTAINERS DYNAMIC INVENTORY
  hosts: lab16-server
  tasks:
    - name: GET INCUS CONTAINERS IPV6 IN CSV
      # Retrieves compact CSV output: <name>,<ipv6> (<interface>)
      ansible.builtin.command:
        cmd: incus ls -cn6 --format csv
      register: container_csv
      changed_when: false

    - name: BUILD INVENTORY DATA FROM CSV
      # Builds an Ansible inventory map: containers.hosts.<container>.ansible_host
      delegate_to: localhost
      ansible.builtin.set_fact:
        containers_hosts: >-
          {{ (containers_hosts | default({}))
             | combine({
                 (item.split(',')[0]): {
                   'ansible_host': (item.split(',')[1].split(' ')[0])
                 }
               }) }}
      loop: "{{ container_csv.stdout_lines }}"
      changed_when: false

    - name: WRITE INVENTORY FILE
      # Writes an Ansible-compatible YAML inventory file
      delegate_to: localhost
      ansible.builtin.copy:
        content: >-
          {{ {'containers': {'hosts': (containers_hosts | default({}))}}
             | to_nice_yaml(indent=2, sort_keys=False) }}
        dest: inventory/containers.yaml
        mode: "0644"
```

The main purpose here is to build a dynamic inventory with each container's actual IPv6 address. Because our Incus network setup uses random MAC addresses, container IPv6 addresses are fully dynamic and cannot be hardcoded.

The tasks are organized as follows:

GET INCUS CONTAINERS IPV6 IN CSV:
: Runs `incus ls -cn6 --format csv` on the Incus server to obtain a compact list of containers in CSV format. Each line contains the container name and its IPv6 address (with the interface in parentheses). This task is read-only and never reports changes.

BUILD INVENTORY DATA FROM CSV:
: Runs on localhost (DevNet VM) and iterates over the CSV lines.
For each line, it extracts the container name and the IPv6 address, then builds a dictionary of the form: `containers.hosts.<name>.ansible_host = <IPv6 address>`. This task only prepares data in memory and does not modify any files. WRITE INVENTORY FILE: : Still running on localhost, this task converts the in‑memory dictionary into YAML and writes it to `inventory/containers.yaml`. The task reports a change each time it rewrites the inventory file, which is expected because the inventory must always reflect the current Incus state. ### Step 2: Run the dynamic inventory Ansible Playbook Now that the playbook is in place, we can run `ansible-playbook` to generate the dynamic inventory file. ```bash ansible-playbook 03_incus_inventory.yaml ``` ```bash PLAY [BUILD CONTAINERS DYNAMIC INVENTORY] ***************************************** TASK [Gathering Facts] ************************************************************ ok: [lab16-server] TASK [GET INCUS CONTAINERS IPV6 IN CSV] ******************************************* ok: [lab16-server] TASK [BUILD INVENTORY DATA FROM CSV] ********************************************** ok: [lab16-server -> localhost] => (item=web01,2001:678:3fc:12f:1266:6aff:fed3:f7c2 (eth0)) ok: [lab16-server -> localhost] => (item=web02,2001:678:3fc:12f:1266:6aff:fe2c:4314 (eth0)) ok: [lab16-server -> localhost] => (item=web03,2001:678:3fc:12f:1266:6aff:fec7:6d7e (eth0)) ok: [lab16-server -> localhost] => (item=web04,2001:678:3fc:12f:1266:6aff:fe87:cd37 (eth0)) TASK [WRITE INVENTORY FILE] ******************************************************* changed: [lab16-server -> localhost] PLAY RECAP ************************************************************************ lab16-server : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` The `changed` counter value is 1 because we expect the playbook to create or overwrite the `inventory/containers.yaml` inventory file every time it runs. 
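The Jinja expression in the `BUILD INVENTORY DATA FROM CSV` task is compact; the same transformation can be sketched in plain Python. The addresses below are placeholders, not real lab addresses.

```python
# Sketch of the set_fact logic: parse "name,ipv6 (iface)" CSV lines
# into the containers.hosts mapping used by the inventory file.
csv_lines = [
    "web01,2001:db8::11 (eth0)",  # illustrative sample lines
    "web02,2001:db8::12 (eth0)",
]

containers_hosts = {}
for line in csv_lines:
    name, addr_field = line.split(",", 1)
    # Drop the trailing " (eth0)" interface annotation
    ipv6 = addr_field.split(" ")[0]
    containers_hosts[name] = {"ansible_host": ipv6}

print(containers_hosts["web01"]["ansible_host"])  # 2001:db8::11
```

The `combine` filter in the playbook performs the same dictionary merge as the assignment inside the loop above, one container per iteration.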
### Step 3: Check Ansible inventory We can now run the `ansible-inventory` command to verify that the container server VM and its containers are properly addressed. ```bash ansible-inventory --yaml --list ``` ```yaml= all: children: containers: hosts: web01: ansible_become_pass: '{{ webuser_pass }}' ansible_host: 2001:678:3fc:12f:1266:6aff:fed3:f7c2 ansible_ssh_pass: '{{ webuser_pass }}' ansible_ssh_user: '{{ webuser_name }}' web02: ansible_become_pass: '{{ webuser_pass }}' ansible_host: 2001:678:3fc:12f:1266:6aff:fe2c:4314 ansible_ssh_pass: '{{ webuser_pass }}' ansible_ssh_user: '{{ webuser_name }}' web03: ansible_become_pass: '{{ webuser_pass }}' ansible_host: 2001:678:3fc:12f:1266:6aff:fec7:6d7e ansible_ssh_pass: '{{ webuser_pass }}' ansible_ssh_user: '{{ webuser_name }}' web04: ansible_become_pass: '{{ webuser_pass }}' ansible_host: 2001:678:3fc:12f:1266:6aff:fe87:cd37 ansible_ssh_pass: '{{ webuser_pass }}' ansible_ssh_user: '{{ webuser_name }}' vms: hosts: lab16-server: ansible_ssh_user: admin ``` This confirms that the containers group is now populated dynamically from the Incus runtime information instead of being hardcoded in a static inventory file. ### Step 4: Check Ansible SSH access to the containers We are also now able to run the `ansible` command with its **ping** module to check SSH access to all containers. ```bash ansible containers -m ping \ --ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml Vault password: ``` ```bash= web01 | SUCCESS => { "changed": false, "ping": "pong" } web03 | SUCCESS => { "changed": false, "ping": "pong" } web02 | SUCCESS => { "changed": false, "ping": "pong" } web04 | SUCCESS => { "changed": false, "ping": "pong" } ``` Another way to check SSH access to the containers is to use the **command** module instead of **ping**. ```bash ansible containers -m command -a "/bin/echo Hello, World!" \ --ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml Vault password: ``` ```bash= web04 | CHANGED | rc=0 >> Hello, World! 
web03 | CHANGED | rc=0 >>
Hello, World!
web02 | CHANGED | rc=0 >>
Hello, World!
web01 | CHANGED | rc=0 >>
Hello, World!
```

## Part 6: Create an Ansible playbook to automate Web service installation

In this part, you will automate the installation of the Apache web server in each container instance. The idea here is to illustrate that a single process can be applied to as many instances as needed.

We will build the playbook incrementally, starting with `04_install_apache.yaml`, and add tasks at each step to:

1. Install `apache2`
2. Enable the `mod_rewrite` module
3. Change the listen port to 8081
4. Check the HTTP status code to validate our configuration

### Step 1: Create the `04_install_apache.yaml` playbook

As before, use a Bash heredoc to copy and paste the playbook content without breaking the indentation.

```bash=
cat << 'EOF' > 04_install_apache.yaml
# Playbook content
EOF
```

```yaml=
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE
  hosts: containers
  become: true

  tasks:
    - name: UPDATE APT CACHE
      # Updates package information to ensure latest versions are available
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600 # Consider cache valid for 1 hour

    - name: UPGRADE APT PACKAGES
      # Ensures all packages are up to date
      ansible.builtin.apt:
        upgrade: full
        autoclean: true
      register: apt_upgrade
      changed_when: apt_upgrade is changed

    - name: INSTALL APACHE2
      # Installs Apache2 HTTP server
      ansible.builtin.apt:
        name: apache2
        state: present

    - name: ENABLE MOD_REWRITE
      # Enables mod_rewrite module for Apache2
      community.general.apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2

  handlers:
    - name: RESTART APACHE2
      # Restarts Apache2 service
      ansible.builtin.service:
        name: apache2
        state: restarted
```

This playbook installs and prepares the Apache HTTP server on all containers in the `containers` group. It is idempotent: once Apache and its configuration are in place, re-running the playbook does not trigger unnecessary changes.
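The idempotence comes from the desired-state pattern used by modules such as `apt`: compare the current state with the requested state and act only when they differ. This conceptual sketch in Python models the idea; it is not how Ansible is actually implemented.

```python
# Conceptual model of desired-state idempotence ("state: present"):
# act only when the current state differs from the requested state.
def ensure_present(installed: set, package: str) -> bool:
    """Return True ("changed") if the package had to be installed."""
    if package in installed:
        return False  # already present: report "ok", do nothing
    installed.add(package)
    return True  # state was converged: report "changed"

packages = set()
print(ensure_present(packages, "apache2"))  # True on the first run
print(ensure_present(packages, "apache2"))  # False on every later run
```

Running the same "ensure" operation twice changes nothing the second time, which is exactly why the playbook's `changed` counter drops to 0 on re-runs.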
The main tasks are: UPDATE APT CACHE: : Uses the `apt` module to refresh the package index with a cache validity of one hour. This ensures that subsequent package installations use up-to-date information without forcing an update on every run. UPGRADE APT PACKAGES: : Runs a full upgrade of existing packages and cleans obsolete packages. The `changed_when` condition tracks whether any packages were actually upgraded so that the task does not report changes when the system is already up to date. INSTALL APACHE2: : Installs the `apache2` package if it is not already present. This task is idempotent: if Apache is already installed, it will simply report `ok` with no change. ENABLE MOD_REWRITE: : Uses the `community.general.apache2_module` module to enable the `rewrite` module. If the module is already enabled, the task remains unchanged; otherwise, it enables the module and notifies the handler. RESTART APACHE2 (handler): : The `RESTART APACHE2` handler restarts the Apache service only when notified (for example, after enabling `mod_rewrite`). This ensures that Apache is restarted only when configuration changes require it. 
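The notify/handler mechanism described above can also be modeled in miniature. This is a sketch of the semantics only, not Ansible internals: however many changed tasks notify a handler during a play, the handler runs at most once, when handlers are flushed.

```python
# Minimal model of Ansible handler semantics: handlers notified by
# changed tasks run once at the end of the play, not per notification.
restart_log = []

def restart_apache2():
    restart_log.append("apache2 restarted")

notified = set()

def run_task(changed: bool, notify=None):
    # A task only queues its handler when it actually reports "changed"
    if changed and notify is not None:
        notified.add(notify)

run_task(changed=True, notify=restart_apache2)   # queues the handler
run_task(changed=True, notify=restart_apache2)   # already queued
run_task(changed=False, notify=restart_apache2)  # no change: no notify

for handler in notified:  # flush handlers at the end of the play
    handler()

print(len(restart_log))  # 1: the service restarts only once
```

This is why `RESTART APACHE2` restarts Apache a single time even when several configuration tasks report changes in the same run.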
### Step 2: Run the `04_install_apache.yaml` playbook You can now run the playbook on all containers: ```bash ansible-playbook 04_install_apache.yaml \ --ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml Vault password: ``` ```bash= PLAY [INSTALL APACHE2, ENABLE MOD_REWRITE] ********************************** TASK [Gathering Facts] ****************************************************** ok: [web03] ok: [web01] ok: [web04] ok: [web02] TASK [UPDATE APT CACHE] ***************************************************** changed: [web04] changed: [web01] changed: [web02] changed: [web03] TASK [UPGRADE APT PACKAGES] ************************************************* ok: [web02] ok: [web03] ok: [web04] ok: [web01] TASK [INSTALL APACHE2] ****************************************************** changed: [web01] changed: [web04] changed: [web02] changed: [web03] TASK [ENABLE MOD_REWRITE] *************************************************** changed: [web02] changed: [web03] changed: [web04] changed: [web01] RUNNING HANDLER [RESTART APACHE2] ******************************************* changed: [web01] changed: [web04] changed: [web03] changed: [web02] PLAY RECAP ****************************************************************** web01 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web02 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web03 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web04 : ok=6 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` On the first run, several tasks report changed because packages are installed, upgraded, and Apache is configured. On subsequent runs, if nothing has changed, the changed counter should remain at 0, which confirms idempotent behavior. ### Step 3: Add a task to get Apache2 service status We now want to verify that the `apache2` web server is up and running. To do this, we will add a task to the playbook. 
Let's make a copy of the playbook in `05_install_apache.yaml`.

```bash=
cat << 'EOF' > 05_install_apache.yaml
# Playbook file content
EOF
```

```yaml=
---
- name: INSTALL APACHE2, ENABLE MOD_REWRITE
  hosts: containers
  become: true

  tasks:
    - name: UPDATE APT CACHE
      # Updates package information to ensure latest versions are available
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600 # Consider cache valid for 1 hour

    - name: UPGRADE APT PACKAGES
      # Ensures all packages are up to date
      ansible.builtin.apt:
        upgrade: full
        autoclean: true
      register: apt_upgrade
      changed_when: apt_upgrade is changed

    - name: INSTALL APACHE2
      # Installs Apache2 HTTP server
      ansible.builtin.apt:
        name: apache2
        state: present

    - name: ENABLE MOD_REWRITE
      # Enables mod_rewrite module for Apache2
      community.general.apache2_module:
        name: rewrite
        state: present
      notify: RESTART APACHE2

    - name: GET APACHE2 SERVICE STATUS
      ansible.builtin.systemd:
        name: apache2
      register: apache2_status

    - name: PRINT APACHE2 SERVICE STATUS
      ansible.builtin.debug:
        var: apache2_status.status.ActiveState

  handlers:
    - name: RESTART APACHE2
      # Restarts Apache2 service
      ansible.builtin.service:
        name: apache2
        state: restarted
```

Here we introduce the **systemd** module and the ability to **debug** within a playbook by displaying the value of a variable after **registering** the state of a **systemd** service.
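The `status` dictionary registered by the **systemd** module holds the same unit properties that `systemctl show apache2` prints as `Key=Value` lines, `ActiveState` among them. A small parser sketch illustrates the shape of that data; the sample text below is illustrative, not captured from a real container.

```python
# Parse "Key=Value" lines in the format printed by `systemctl show <unit>`.
# The sample output below is a hypothetical excerpt for illustration.
sample = """\
Id=apache2.service
ActiveState=active
SubState=running
"""

status = {}
for line in sample.splitlines():
    key, _, value = line.partition("=")
    status[key] = value

print(status["ActiveState"])  # active
```

In the playbook, `apache2_status.status.ActiveState` reads exactly this kind of property from the registered result.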
When the playbook runs, the relevant part of the output appears below: ```bash= TASK [GET APACHE2 SERVICE STATUS] ******************************************* ok: [web03] ok: [web04] ok: [web01] ok: [web02] TASK [PRINT APACHE2 SERVICE STATUS] ***************************************** ok: [web01] => { "apache2_status.status.ActiveState": "active" } ok: [web02] => { "apache2_status.status.ActiveState": "active" } ok: [web03] => { "apache2_status.status.ActiveState": "active" } ok: [web04] => { "apache2_status.status.ActiveState": "active" } PLAY RECAP ****************************************************************** web01 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web02 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web03 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 web04 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ``` ### Step 4: Reconfigure Apache server to listen on port 8081 To avoid conflicts with other services and illustrate configuration management, you will now change the Apache listen port from 80 to 8081. In this step, we will add two tasks that use the **lineinfile** Ansible module to edit configuration files. Here is a copy of the new tasks to add to the playbook. ```yaml= - name: Configure Apache listen and virtual host port ansible.builtin.lineinfile: path: "{{ item.path }}" regexp: "{{ item.regexp }}" line: "{{ item.line }}" state: present loop: - path: /etc/apache2/ports.conf regexp: ^Listen 80$ line: "Listen {{ apache_port }}" - path: /etc/apache2/sites-available/000-default.conf regexp: ^<VirtualHost \*:80>$ line: "<VirtualHost *:{{ apache_port }}>" notify: Restart Apache ``` SET APACHE2 LISTEN ON PORT 8081 : Modifies the main Apache2 configuration file `/etc/apache2/ports.conf` by searching for lines starting with `Listen 80` and replacing them with `Listen 8081`. 
The **lineinfile** module ensures that this change is idempotent, meaning that the substitution occurs only once regardless of how many times the playbook is run.

SET APACHE2 VIRTUALHOST LISTEN ON PORT 8081
: Updates the default VirtualHost configuration file `/etc/apache2/sites-available/000-default.conf` to change the VirtualHost directive from port 80 to port 8081, ensuring that the VirtualHost configuration matches the new listening port. The regular expression `^<VirtualHost \*:80>$` targets the specific line to be changed, and the module only applies the change if it has not already been made.

Both loop items notify the `Restart Apache` handler, ensuring that the Apache service is restarted after configuration changes so that the new port settings take effect.

### Step 5: Verify access to the web services

When deploying new services, it is important to verify that they are actually reachable at the application layer. To do this, copy `05_install_apache.yaml` to `06_install_apache.yaml` and add a task that sends HTTP requests from the DevNet development VM and verifies that the response code is `200 OK`.
```yaml= --- - name: Install and configure Apache on port 8081 hosts: containers become: true gather_facts: true vars: apache_service: apache2 apache_port: 8081 tasks: - name: Update apt cache ansible.builtin.apt: update_cache: true cache_valid_time: 3600 - name: Install Apache package ansible.builtin.apt: name: - apache2 state: present - name: Enable Apache rewrite module community.general.apache2_module: name: rewrite state: present notify: Restart Apache - name: Configure Apache listen and virtual host port ansible.builtin.lineinfile: path: "{{ item.path }}" regexp: "{{ item.regexp }}" line: "{{ item.line }}" state: present loop: - path: /etc/apache2/ports.conf regexp: ^Listen 80$ line: "Listen {{ apache_port }}" - path: /etc/apache2/sites-available/000-default.conf regexp: ^<VirtualHost \*:80>$ line: "<VirtualHost *:{{ apache_port }}>" notify: Restart Apache - name: Ensure Apache service is enabled and running ansible.builtin.service: name: "{{ apache_service }}" enabled: true state: started - name: Apply pending handler changes before health check ansible.builtin.meta: flush_handlers - name: Check HTTP status over IPv6 # checkov:skip=CKV2_ANSIBLE_1:Using HTTP instead of HTTPS for internal testing delegate_to: localhost ansible.builtin.uri: url: "http://[{{ ansible_default_ipv6.address }}]:{{ apache_port }}" status_code: 200 become: false register: http_check failed_when: false changed_when: false when: ansible_default_ipv6 is defined - name: Show HTTP check result ansible.builtin.debug: msg: "HTTP Status for {{ inventory_hostname }}: {{ http_check.status }} - {{ http_check.msg | default('Success') }}" when: http_check is defined handlers: - name: Restart Apache ansible.builtin.service: name: "{{ apache_service }}" state: restarted ``` As our playbook starts with the **gather facts** job, a lot of ansible variables are set during this first phase. 
In the example above, we use the IPv6 address of each container in the HTTP URL and expect status code **200** as a successful result.

* **delegate_to: localhost** runs the task from the DevNet VM rather than inside the container.
* **become: false** runs the task as the normal user, without privilege escalation.

If the playbook runs successfully, we first see the Global Unicast Addresses (GUA) of the containers, then a printout of the HTTP response code for each container.

```bash=
PLAY [Install and configure Apache on port 8081] ****************************

TASK [Gathering Facts] ******************************************************
ok: [web01]
ok: [web04]
ok: [web03]
ok: [web02]

TASK [Update apt cache] *****************************************************
ok: [web01]
ok: [web03]
ok: [web04]
ok: [web02]

TASK [Install Apache package] ***********************************************
ok: [web02]
ok: [web04]
ok: [web03]
ok: [web01]

TASK [Enable Apache rewrite module] *****************************************
ok: [web04]
ok: [web03]
ok: [web01]
ok: [web02]

TASK [Configure Apache listen and virtual host port] ************************
changed: [web01] => (item={'path': '/etc/apache2/ports.conf', 'regexp': '^Listen 80$', 'line': 'Listen 8081'})
changed: [web04] => (item={'path': '/etc/apache2/ports.conf', 'regexp': '^Listen 80$', 'line': 'Listen 8081'})
changed: [web03] => (item={'path': '/etc/apache2/ports.conf', 'regexp': '^Listen 80$', 'line': 'Listen 8081'})
changed: [web02] => (item={'path': '/etc/apache2/ports.conf', 'regexp': '^Listen 80$', 'line': 'Listen 8081'})
changed: [web04] => (item={'path': '/etc/apache2/sites-available/000-default.conf', 'regexp': '^<VirtualHost \\*:80>$', 'line': '<VirtualHost *:8081>'})
changed: [web01] => (item={'path': '/etc/apache2/sites-available/000-default.conf', 'regexp': '^<VirtualHost \\*:80>$', 'line': '<VirtualHost *:8081>'})
changed: [web03] => (item={'path': '/etc/apache2/sites-available/000-default.conf', 'regexp':
'^<VirtualHost \\*:80>$', 'line': '<VirtualHost *:8081>'})
changed: [web02] => (item={'path': '/etc/apache2/sites-available/000-default.conf', 'regexp': '^<VirtualHost \\*:80>$', 'line': '<VirtualHost *:8081>'})

TASK [Ensure Apache service is enabled and running] *************************
ok: [web04]
ok: [web01]
ok: [web02]
ok: [web03]

TASK [Apply pending handler changes before health check] ********************

RUNNING HANDLER [Restart Apache] ********************************************
changed: [web03]
changed: [web04]
changed: [web01]
changed: [web02]

TASK [Check HTTP status over IPv6] ******************************************
ok: [web01 -> localhost]
ok: [web04 -> localhost]
ok: [web02 -> localhost]
ok: [web03 -> localhost]

TASK [Show HTTP check result] ***********************************************
ok: [web01] => {
    "msg": "HTTP Status for web01: 200 - OK (10703 bytes)"
}
ok: [web02] => {
    "msg": "HTTP Status for web02: 200 - OK (10703 bytes)"
}
ok: [web03] => {
    "msg": "HTTP Status for web03: 200 - OK (10703 bytes)"
}
ok: [web04] => {
    "msg": "HTTP Status for web04: 200 - OK (10703 bytes)"
}

PLAY RECAP ******************************************************************
web01 : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web02 : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web03 : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web04 : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

Finally, you can use the `ansible` ad-hoc command to list the TCP sockets open in the listening state.

```bash
ansible containers -m command -a "ss -ltn '( sport = :8081 )'" \
--ask-vault-pass --extra-vars @$HOME/.lab16_passwd.yaml
Vault password:
```

```bash=
web01 |
CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web02 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web04 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
web03 | CHANGED | rc=0 >>
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:8081 *:*
```

In the results above, the `LISTEN` lines show that each container has an open TCP socket listening on port 8081.

## Conclusion

In this lab, you automated the deployment of web services on Incus containers hosted on a dedicated container server VM. You configured the underlying network fabric with Open vSwitch, initialized Incus with a predictable default profile, and used Ansible to provision, configure, and verify multiple containers in an idempotent way.

You also built a dynamic inventory directly from the live Incus state, avoiding hardcoded IP addresses and making your playbooks resilient to changes in the container infrastructure.

Finally, you installed and configured Apache on all containers, validated service availability, and demonstrated that small, focused playbooks can be combined to automate repeatable and maintainable end-to-end scenarios.