# IaC Lab 3 -- Using GitLab CI to run Ansible playbooks and build new IOS XE Virtual Routers
[toc]
---
> Copyright (c) 2026 Philippe Latu.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
https://inetdoc.net
GitLab repository: https://gitlab.inetdoc.net/iac/lab03.git
### Scenario
This is the third lab in the series. The first lab ([Lab 01](https://md.inetdoc.net/s/f-mfjs-kQ)) focused on infrastructure automation by designing the declarative and procedural parts required to build new Debian GNU/Linux virtual machines from the [cloud.debian.org](https://cloud.debian.org) source. The second lab ([Lab 02](https://md.inetdoc.net/s/CPltj12uT)) introduced GitLab Continuous Integration (GitLab CI) by configuring a runner that executes the tasks from the first lab in a pipeline.
In this third lab, you will maintain the design decisions made in the first lab and continue to use the GitLab CI pipeline. However, you will replace the Linux virtual machines with Cisco IOS XE virtual routers, which expose a different set of network interfaces. The lab topology clearly separates an out-of-band VLAN for management and automation traffic from in-band VLANs for user traffic. Each virtual router uses three interfaces to mirror the ISR4321 hardware available in the physical lab, where the management interface is dedicated and isolated from user traffic at the chip level.
Unlike physical routers (where the management interface is isolated at the chip level), a virtual router relies on software configuration for isolation. This often confuses students who mistakenly try to configure the dedicated management interface as a user interface.
To address this, both physical and virtual routers are deployed using Zero Touch Provisioning ([ZTP](https://github.com/jeremycohoe/IOSXE-Zero-Touch-Provisioning)), and a Python startup script ensures that the management interface belongs to a dedicated Virtual Routing and Forwarding ([VRF](https://en.wikipedia.org/wiki/Virtual_routing_and_forwarding)) instance, isolated from all in-band interfaces. In this lab scenario, you perform hypervisor-level preparation, start two new virtual router instances, and configure in-band interfaces on each router within the same VLAN.
:::info
If you want to adapt this lab to your own environment, you will need to install and configure (or replace) the declarative management scripts used to build the examples.
On the network side, the `switch-conf.py` Python script reads a YAML declaration file to configure Open vSwitch tap port interfaces on the hypervisor.
On the virtual machine side, the `lab-startup.py` Python script reads another YAML declaration file to set virtual machine properties such as image selection, tap bindings, and network configuration.
All these tools are hosted at: https://gitlab.inetdoc.net/labs/startup-scripts
:::
### Objectives
After completing the steps in this lab, you will be able to:
- Set up an Ansible environment for managing Cisco IOS XE virtual routers
- Create declarative configurations for network devices using YAML files that describe both underlay and overlay aspects.
- Develop Ansible playbooks to automate virtual router provisioning, interface configuration, and default routing.
- Implement a GitLab CI pipeline that orchestrates the complete deployment process from hypervisor preparation to router configuration.
- Generate and consume dynamic inventories so that Ansible can automatically discover and manage newly launched virtual routers.
### Design decisions reminder
The purpose of this lab series is to maintain consistent design decisions when virtualizing Linux systems, Cisco IOS XE routers, or Cisco NX-OS switches. So, as in [Lab 01](https://md.inetdoc.net/s/f-mfjs-kQ#Part-2-Designing-the-Declarative-Part), you start with the bare-metal Type 2 hypervisor, which owns all the tap network interfaces.
The tap interfaces connect virtualized systems to an Open vSwitch switch named **dsw-host**. The preparation stage ensures that the switch port configuration conforms to the **underlay network**.
:arrow_right: The hypervisor `host_vars` file named `hypervisor_name.yaml` contains all the tap interface configuration declarations.
:arrow_right: The declaration of router interface configurations is split into separate files:
- The `group_vars/all.yaml` file contains the addressing plan shared by all virtual routers.
- The `host_vars/rXXX.yaml` file for each virtualized router contains its specific interface address part.
This allows the declarative part of the **overlay network** to be composed independently of the network addressing plan.
:arrow_right: The inventory of virtualized systems is dynamically built upon launch.
## Part 1: Configure Ansible on the DevNet VM
To begin, it is necessary to configure Ansible and verify SSH access to the hypervisor from the DevNet VM.
### Step 1: Create the Ansible directory and configuration file
Create the `~/iac/lab03` directory, for example, and navigate to it:
```bash
mkdir -p ~/iac/lab03 && cd ~/iac/lab03
```
Next, check that **ansible** is installed. There are two main ways to set up a new Ansible workspace: system packages and Python virtual environments are both viable options, but using a modern virtual environment manager such as `uv` gives you the fastest access to the latest tool versions. Create your environment by following these steps:
```bash
cat << EOF > pyproject.toml
[project]
name = "lab03"
version = "0.1.0"
description = "IaC Ansible labs"
readme = "README.md"
requires-python = ">=3.13"
dependencies = ["ansible", "ansible-lint", "ansible-pylibssh", "netaddr"]
EOF
```
As demonstrated in [Lab 1](https://md.inetdoc.net/s/f-mfjs-kQ), the `pyproject.toml` file acts as the centralised, standardised configuration for `uv`, defining your project's metadata and dependencies to enable `uv` to manage environments, installations, and locking in a reproducible way.
Both the project metadata and the associated toolchain must be installed for both the developer and GitLab Runner roles. Therefore, you need to create a `setup_ansible_toolchain.sh` script. A copy of this script is provided below.
```bash=
#!/usr/bin/env bash
set -euo pipefail
if [[ ${USER} == "gitlab-runner" ]]; then
readonly PROJECT_DIR="${HOME}"
readonly VENV_DIR="${PROJECT_DIR}/.venv"
else
readonly PROJECT_DIR="${PWD}"
readonly VENV_DIR="${PROJECT_DIR}/.venv"
fi
readonly PROJECT_FILE="pyproject.toml"
echo "Starting Ansible toolchain setup for user: ${USER}"
# 1. Create project metadata
cd "${PROJECT_DIR}"
if [[ -f ${PROJECT_FILE} ]]; then
rm -f "${PROJECT_FILE}"
fi
cat <<EOF >"${PROJECT_DIR}/${PROJECT_FILE}"
[project]
name = "lab03"
version = "0.1.0"
description = "IaC Ansible labs"
readme = "README.md"
requires-python = ">=3.13"
dependencies = ["ansible", "ansible-lint", "ansible-pylibssh", "netaddr"]
EOF
echo "${PROJECT_FILE} created in the ${PROJECT_DIR} directory."
# 2. Install 'uv' locally for the user if not present
if ! command -v uv >/dev/null 2>&1; then
echo "Installing 'uv'..."
curl -LsSf https://astral.sh/uv/install.sh | sh
# Add standard astral installation paths to PATH for immediate use
export PATH="${HOME}/.local/bin:${PATH}"
else
echo "'uv' is already installed."
fi
# 3. Create the virtual environment only if it does not already exist
if [[ -d ${VENV_DIR} ]]; then
echo "Existing virtual environment detected at ${VENV_DIR}."
else
echo "Creating new virtual environment at ${VENV_DIR}..."
uv venv "${VENV_DIR}"
fi
# 4. Activate the target virtual environment if needed, then sync project dependencies
if [[ ${VIRTUAL_ENV-} == "${VENV_DIR}" ]]; then
echo "Virtual environment is already activated. Syncing dependencies from ${PROJECT_FILE}..."
else
echo "Activating virtual environment at ${VENV_DIR}..."
# shellcheck source=/dev/null
source "${VENV_DIR}/bin/activate"
fi
echo "Upgrading dependencies from ${PROJECT_FILE}..."
uv sync --active --upgrade
echo "--------------------------------------------------------"
echo "Toolchain setup complete! Virtual environment is at:"
echo "${VENV_DIR}"
echo "To activate manually, run: source ${VENV_DIR}/bin/activate"
echo "--------------------------------------------------------"
```
1. Start by running the script to install or upgrade the Python virtual environment as the developer.
```bash
bash setup_ansible_toolchain.sh
```
2. Next, run the same script while logged in as the GitLab Runner user.
```bash
sudo -u gitlab-runner bash setup_ansible_toolchain.sh
```
This ensures that the same toolchain is available for both roles.
### Step 2: Configure Ansible and verify SSH access to the hypervisor
Create a new `ansible.cfg` file in the `lab03` Git working directory with the following content.
```=
[defaults]
# Use inventory/ folder files as source
inventory = inventory/
host_key_checking = False # Don't worry about RSA Fingerprints
retry_files_enabled = False # Do not create them
deprecation_warnings = False # Do not show warnings
interpreter_python = /usr/bin/python3
[inventory]
enable_plugins = auto, host_list, yaml, ini, toml
[persistent_connection]
command_timeout=90
connect_timeout=90
connect_retry_timeout=90
ssh_type = libssh
```
> Note that the `inventory` key in the `[defaults]` section points to the `inventory/` directory. The plugins listed in the inventory section enable the creation of a dynamic inventory by merging all files in the directory.
Next, you can verify that the Ansible configuration and modules are up to date.
```bash
ansible --version
```
```bash=
ansible [core 2.20.4]
config file = /home/etu/iac/lab03/ansible.cfg
configured module search path = ['/home/etu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/etu/iac/lab03/.venv/lib/python3.13/site-packages/ansible
ansible collection location = /home/etu/.ansible/collections:/usr/share/ansible/collections
executable location = /home/etu/iac/lab03/.venv/bin/ansible
python version = 3.13.12 (main, Feb 4 2026, 15:06:39) [GCC 15.2.0] (/home/etu/iac/lab03/.venv/bin/python)
jinja version = 3.1.6
pyyaml version = 6.0.3 (with libyaml v0.2.5)
```
Use the `ansible-galaxy` command to download and update the Cisco IOS collection from Ansible Galaxy. This makes the necessary external network modules available to your playbooks.
```bash
ansible-galaxy collection install cisco.ios --upgrade
```
```bash=
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/cisco-ios-11.3.0.tar.gz to /home/etu/.ansible/tmp/ansible-local-8499959j8vh9b/tmpfzj__pfm/cisco-ios-11.3.0-q0d6f_ua
Installing 'cisco.ios:11.3.0' to '/home/etu/.ansible/collections/ansible_collections/cisco/ios'
cisco.ios:11.3.0 was installed successfully
'ansible.netcommon:8.4.0' is already installed, skipping.
'ansible.utils:6.0.1' is already installed, skipping.
```
Since you have already completed the `~/.ssh/config` file (as done in [Lab 1](https://md.inetdoc.net/s/f-mfjs-kQ)), you are ready to test the SSH connection to the hypervisor called `hypervisor_name`.
Here is an extract from the user's SSH configuration file:
```bash=
Host hypervisor_name
HostName fe80::XXXX:1%%enp0s1
User etudianttest
Port 2222
```
Make a minimal SSH connection and check that it succeeds with a return code of 0.
```bash
ssh -q hypervisor_name exit
echo $?
0
```
The SSH connection parameters will be used to complete the inventory hosts file later.
### Step 3: Edit or create a vault file
Edit or create an Ansible vault file called `${HOME}/.iac_passwd` and enter the vault password that will be used for all credentials stored in this vault.
Replace `create` with `edit` in the command below if the vault file already exists.
```bash
ansible-vault create ${HOME}/.iac_passwd
New Vault password:
Confirm New Vault password:
```
This opens the default editor defined by the `$EDITOR` environment variable.
Enter two sets of variables: one for hypervisor access from your development VM called DevNet, and one for SSH access to all virtual routers created by Ansible playbooks.
```bash
hypervisor_user: XXXXXXXXXX
hypervisor_pass: YYYYYYYYYY
router_user: etu
router_pass: ZZZZZZZZZ
```
As you plan to integrate the Ansible playbooks of this lab into GitLab CI pipelines, you need to store the vault secret in a file and make it available to any Ansible command that follows.
- First, you store the vault secret in a file at the user's home directory level.
```bash
echo "ThisVaultSecret" >${HOME}/.vault_pass.txt
chmod 400 ${HOME}/.vault_pass.txt
```
:::warning
Don't forget to replace the `ThisVaultSecret` with your own secret password.
:::
- Second, you make sure that the `ANSIBLE_VAULT_PASSWORD_FILE` variable is set each time a new shell is opened.
```bash
touch ${HOME}/.profile
echo "export ANSIBLE_VAULT_PASSWORD_FILE=\$HOME/.vault_pass.txt" |\
tee -a ${HOME}/.profile
source ${HOME}/.profile
```
As mentioned in the previous lab, the architecture is simplified by avoiding the use of a centralised identity and secrets management solution. All identities and secrets are stored within the DevNet virtual machine.
In the context of the continuous integration pipeline, the `ansible` command is executed using the GitLab Runner user account. Consequently, the vault and its password must be copied from the developer user account to the GitLab Runner user account. Below is a copy of the `share_secrets.sh` script, similar to that in [Lab 2](https://md.inetdoc.net/s/CPltj12uT).
```bash=
#!/usr/bin/env bash
# The purpose of this script is to copy secrets from the developer's home
# directory to the GitLab Runner's home directory, and inject the runner's
# SSH public key into the Ansible Vault.
set -euo pipefail
# Constants
readonly HYPERVISOR_NAME="eve"
readonly DEV_USER="etu"
readonly RUNNER_USER="gitlab-runner"
readonly RUNNER_HOME="/home/${RUNNER_USER}"
readonly DEV_HOME="/home/${DEV_USER}"
readonly RUNNER_SSH_DIR="${RUNNER_HOME}/.ssh"
readonly VAULT_PASSWD_FILE=".vault_pass.txt"
readonly VAULT_FILE=".iac_passwd"
readonly ANSIBLE_VENV="${RUNNER_HOME}/.venv"
# Function to handle errors
error_exit() {
echo "ERROR: $1" >&2
exit 1
}
# Function to copy and set permissions securely in one atomic step
secure_copy() {
local src="$1"
local dest="$2"
local mode="${3:-0600}"
[[ -f ${src} ]] || error_exit "Source file ${src} not found"
install -m "${mode}" -o "${RUNNER_USER}" -g "${RUNNER_USER}" "${src}" "${dest}"
}
# Function to safely append the runner's public key to the Ansible Vault
update_vault_runner_key() {
local vault_file="$1"
local pass_file="$2"
local pub_key="$3"
local vault_bin="${ANSIBLE_VENV}/bin/ansible-vault"
local tmp_file
# Fallback to system ansible-vault if the venv isn't found
if [[ ! -x ${vault_bin} ]]; then
vault_bin="$(command -v ansible-vault || true)"
[[ -n ${vault_bin} ]] || error_exit "ansible-vault not found. Please run the toolchain setup script first."
fi
tmp_file="$(mktemp)"
echo "Decrypting vault to check for vm_runnerkey..."
"${vault_bin}" view --vault-password-file "${pass_file}" "${vault_file}" >"${tmp_file}" || {
rm -f "${tmp_file}"
error_exit "Failed to decrypt ${vault_file}"
}
# Check if the key already exists
if ! grep -qE '^[[:space:]]*vm_runnerkey[[:space:]]*:' "${tmp_file}"; then
# JSON-escape the value cleanly
local escaped_key
escaped_key="$(printf '%s' "${pub_key}" | sed 's/\\/\\\\/g; s/"/\\"/g')"
printf '\nvm_runnerkey: "%s"\n' "${escaped_key}" >>"${tmp_file}"
# Re-encrypt replacing the original file
"${vault_bin}" encrypt "${tmp_file}" \
--vault-password-file "${pass_file}" \
--output "${vault_file}" || {
rm -f "${tmp_file}"
error_exit "Failed to re-encrypt ${vault_file}"
}
# Restore proper ownership and permissions since root overwrote it
chown "${RUNNER_USER}":"${RUNNER_USER}" "${vault_file}"
chmod 0600 "${vault_file}"
echo "vm_runnerkey successfully added to the vault."
else
echo "vm_runnerkey is already present in the vault. No changes made."
fi
rm -f "${tmp_file}"
}
# Check if running as root
[[ ${EUID} -eq 0 ]] || error_exit "This script must be run as root"
echo "Setting up secrets for GitLab Runner..."
# Setup Vault files
secure_copy "${DEV_HOME}/${VAULT_PASSWD_FILE}" "${RUNNER_HOME}/${VAULT_PASSWD_FILE}"
secure_copy "${DEV_HOME}/${VAULT_FILE}" "${RUNNER_HOME}/${VAULT_FILE}"
# Setup profile
PROFILE="${RUNNER_HOME}/.profile"
echo "export ANSIBLE_VAULT_PASSWORD_FILE=${RUNNER_HOME}/${VAULT_PASSWD_FILE}" >"${PROFILE}"
chown "${RUNNER_USER}":"${RUNNER_USER}" "${PROFILE}"
chmod 0644 "${PROFILE}"
# Setup SSH directory using install (-d flag creates directories)
install -d -m 0700 -o "${RUNNER_USER}" -g "${RUNNER_USER}" "${RUNNER_SSH_DIR}"
# Copy SSH configuration file
if [[ -f "${DEV_HOME}/.ssh/config" ]]; then
secure_copy "${DEV_HOME}/.ssh/config" "${RUNNER_SSH_DIR}/config" 0644
fi
# Enable SSH passwordless configuration
echo "Setting up dedicated SSH identity for gitlab-runner..."
if [[ ! -f "${RUNNER_SSH_DIR}/id_ed25519" ]]; then
# Run ssh-keygen as the gitlab-runner user
su - "${RUNNER_USER}" -c "ssh-keygen -t ed25519 -C 'gitlab-runner@devnet-ci' -f ${RUNNER_SSH_DIR}/id_ed25519 -N ''"
echo "New SSH key pair generated for gitlab-runner."
else
echo "SSH key pair already exists for gitlab-runner. Skipping generation."
fi
# Read the generated public key into a bash variable
RUNNER_PUB_KEY=$(cat "${RUNNER_SSH_DIR}/id_ed25519.pub")
# Inject the public key into the runner's Ansible Vault
update_vault_runner_key "${RUNNER_HOME}/${VAULT_FILE}" "${RUNNER_HOME}/${VAULT_PASSWD_FILE}" "${RUNNER_PUB_KEY}"
# Push the public key to the hypervisor using the developer's existing SSH access
echo "Authorizing gitlab-runner on the hypervisor (${HYPERVISOR_NAME})..."
# Execute SSH as the developer, explicitly passing the config (-F) and identity (-i) files.
# Use curly braces { } to group the grep/echo logic safely.
REMOTE_CMD="mkdir -p ~/.ssh && chmod 700 ~/.ssh"
REMOTE_CMD+=" && { grep -q -F \"${RUNNER_PUB_KEY}\" ~/.ssh/authorized_keys"
REMOTE_CMD+=" || echo \"${RUNNER_PUB_KEY}\" >> ~/.ssh/authorized_keys; }"
REMOTE_CMD+=" && chmod 600 ~/.ssh/authorized_keys"
sudo -u "${DEV_USER}" ssh \
-F "${DEV_HOME}/.ssh/config" \
-i "${DEV_HOME}/.ssh/id_ed25519" \
-o StrictHostKeyChecking=accept-new \
"${HYPERVISOR_NAME}" \
"${REMOTE_CMD}"
# Verify that the key was added successfully
sudo -u "${RUNNER_USER}" ssh \
-o StrictHostKeyChecking=accept-new \
"${HYPERVISOR_NAME}" \
'echo Key is working!'
echo "Passwordless SSH access established successfully!"
exit 0
```
As new entries were added into the vault, you need to share them with the GitLab Runner user:
```bash
sudo bash ./share_secrets.sh
```
## Part 2: Prepare the lab environment and run the preparation stage on the hypervisor
To build and launch virtual routers, the first step is to prepare the required directories and gain access to virtual router images and launch scripts. The next step is to verify that all switch ports to which a virtual router will be connected are configured according to the declarative YAML file.
### Step 1: Set up lab directories for inventory and variables
As mentioned earlier, it is critical to distinguish between the inventory directory and the host variables directory when using Ansible.
You can start by creating the `inventory`, `group_vars`, `host_vars`, and `templates` directories
```bash
mkdir -p ~/iac/lab03/{inventory,group_vars,host_vars,templates}
```
The inventory directory files contain the necessary information, such as host names, groups, and attributes, required to establish network connections with these hosts. Here is a copy of the [**inventory/hosts.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/inventory/hosts.yaml?ref_type=heads) file.
```yaml=
---
hypervisors:
hosts:
hypervisor_name:
# ansible_host variable is defined in ${HOME}/.ssh/config file
# Host hypervisor_name
# HostName fe80::VVVV:1%%enp0s1
# User etudianttest
# Port 2222
ansible_host: hypervisor_name
vars:
ansible_ssh_user: "{{ hypervisor_user }}"
ansible_ssh_port: 2222
routers:
hosts:
rXXX:
rYYY:
all:
children:
hypervisors:
routers:
```
The YAML description above contains two groups: **hypervisors** and **routers**. Within the hypervisors group, `hypervisor_name` is currently the only member present with the necessary SSH network connection parameters.
The routers group comprises two members, namely **rXXX** and **rYYY**. At this stage, you do not know much except for the fact that you are going to instantiate two virtual routers.
The SSH network connection parameters for all virtual routers will be provided after they are started and the dynamic inventory Python script is executed.
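The transformation that turns launch output into inventory entries can be sketched in plain shell. The image name and link-local address below are hypothetical placeholder values; the real ones come from the launch script's JSON output.

```bash
# Sketch of the normalization used to build a dynamic inventory entry:
# the .qcow2 suffix is stripped from the VM image name to obtain the host
# name, and the link-local address is re-scoped to the DevNet VM interface
# (enp0s1) instead of the hypervisor-side interface.
vm_image="rXXX.qcow2"        # hypothetical image name
lladdr="fe80::XXXX:2%vnet0"  # hypothetical link-local address, hypervisor scope
host_name="${vm_image%.qcow2}"
ansible_host="${lladdr%\%*}%enp0s1"
echo "${host_name} ansible_host=${ansible_host}"
```

This prints `rXXX ansible_host=fe80::XXXX:2%enp0s1`, matching the shape of the inventory entries generated later in this lab.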
### Step 2: Create the group and host variable storage directories
The [**group_vars**](https://gitlab.inetdoc.net/iac/lab03/-/tree/main/group_vars?ref_type=heads) directory is created with an [**all.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/group_vars/all.yaml?ref_type=heads) file that contains the variable declarations common to almost all playbooks.
```yaml=
---
lab_name: iac-lab03
lab_path: "{{ ansible_env.HOME }}/labs/{{ lab_name }}"
masters_dir: /var/cache/kvm/masters
image_name: c8000v-universalk9.17.18.02.qcow2
oob_vlan: VVVV # out-of-band VLAN ID
```
The `oob_vlan:` key refers to the **out-of-band** network used for communications between the DevNet (Ansible + gitlab-runner) virtual machine and the routers to be started and configured.
Now examine the content of the [**host_vars**](https://gitlab.inetdoc.net/iac/lab03/-/tree/main/host_vars?ref_type=heads) directory, which holds the YAML description files for each host of the lab infrastructure.
Here are copies of:
- `host_vars/all.yaml`
- `host_vars/hypervisor_name.yaml`
- `host_vars/rXXX.yaml`
- `host_vars/rYYY.yaml`
:::warning
Be sure to edit this inventory file and replace **XXX**, **YYY**, and all other placeholders with the appropriate real names or values.
:::
- The [**all.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/host_vars/all.yaml?ref_type=heads) addressing plan file:
```yaml=
---
inband_vlans:
- vlan: 230
subnet4: 10.0.228.0/22
subnet6: 2001:678:3fc:e6::/64
default_routes:
ipv4_next_hop: 10.0.228.1
ipv6_next_hop: 2001:678:3fc:e6::1
- vlan: 800
subnet6: 2001:db8:320::/64
```
This file contains the addressing plan for the **in-band** networks. The example above is limited to two VLANs for simplicity. Virtual router interface addresses are computed natively by Ansible using these standard CIDR network blocks.
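As an illustration, the sketch below mimics in plain shell what the `ansible.utils.ipaddr` filter computes from these blocks. The host identifier 213 is a hypothetical placeholder; in this lab it is derived from the router's first tap interface number.

```bash
# Sketch: compose interface addresses from the VLAN 230 subnets and a host
# identifier, mimicking Ansible's ansible.utils.ipaddr filter.
# host_id=213 is a hypothetical placeholder value.
subnet4="10.0.228.0/22"
subnet6="2001:678:3fc:e6::/64"
host_id=213
prefix4="${subnet4#*/}"   # 22
base4="${subnet4%/*}"     # 10.0.228.0
IFS=. read -r o1 o2 o3 o4 <<<"${base4}"
# Only valid while host_id fits in the last octet, assumed here for simplicity
addr4="${o1}.${o2}.${o3}.$((o4 + host_id))/${prefix4}"
# The IPv6 interface identifier is the same number written in hexadecimal
addr6="${subnet6%%/*}$(printf '%x' "${host_id}")/64"
echo "IPv4: ${addr4}"
echo "IPv6: ${addr6}"
```

This yields `10.0.228.213/22` and `2001:678:3fc:e6::d5/64`: a single declared host identifier produces consistent IPv4 and IPv6 addresses.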
- The **hypervisor_name.yaml** defines the hypervisor distribution switch port configurations:
```yaml=
---
switches:
name: dsw-host
ports:
- name: tapAAA
type: OVSPort
vlan_mode: access
tag: "{{ oob_vlan }}"
- name: tapBBB
type: OVSPort
vlan_mode: access
tag: 230
- name: tapCCC
type: OVSPort
vlan_mode: access
tag: 800
- name: tapDDD
type: OVSPort
vlan_mode: access
tag: "{{ oob_vlan }}"
- name: tapEEE
type: OVSPort
vlan_mode: access
tag: 230
- name: tapFFF
type: OVSPort
vlan_mode: access
tag: 800
```
This file contains a list of tap interfaces to be configured, as specified in the design. The configuration parameters for each tap interface listed include the switch name, access mode port, and the tag or identifier of the connected VLAN.
- The YAML files for virtual routers reference the hypervisor tap interface connection and contain the network interface **in-band** VLAN configuration parameters.
Here is a copy of the first lab topology router **rXXX.yaml** with the default IPv4 and IPv6 routes for in-band network:
```yaml=
---
vrouter:
vm_name: rXXX
os: iosxe
master_image: "{{ image_name }}"
force_copy: false
tapnumlist: [AAA, BBB, CCC]
router_host_id: "{{ vrouter.tapnumlist[0] | int }}"
interfaces:
- interface_type: GigabitEthernet
interface_id: 2
description: --> VLAN 230
enabled: true
vlan_id: 230
- interface_type: GigabitEthernet
interface_id: 3
description: --> VLAN 800
enabled: true
vlan_id: 800
default_routes: "{{ inband_vlans[0].default_routes }}"
```
Each router's interface addresses are composed from two sources: the `all.yaml` file, which defines the network addressing plan, and the router's own `host_vars` file, which supplies the host identifier part of each interface address.
Here, the first tap interface number of each router is chosen as the identifier part of its IPv4 and IPv6 addresses.
- The copy of the second lab topology router **rYYY.yaml** is very similar to the previous one:
```yaml=
---
vrouter:
vm_name: rYYY
os: iosxe
master_image: "{{ image_name }}"
force_copy: false
tapnumlist: [DDD, EEE, FFF]
router_host_id: "{{ vrouter.tapnumlist[0] | int }}"
interfaces:
- interface_type: GigabitEthernet
interface_id: 2
description: --> VLAN 230
enabled: true
vlan_id: 230
- interface_type: GigabitEthernet
interface_id: 3
description: --> VLAN 800
enabled: true
vlan_id: 800
default_routes: "{{ inband_vlans[0].default_routes }}"
```
### Step 3: Verify Ansible configuration and perform hypervisor access test
You can now use the Ansible `ping` module to communicate with the `hypervisors` group members defined in the inventory file.
```bash
ansible hypervisors -m ping \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
hypervisor_name | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
Since the Ansible ping succeeds in this single-hypervisor lab, you can proceed with the playbooks that start new virtual routers.
### Step 4: Run the prepare stage playbook
The playbook uses a Jinja2 template file to create the target YAML switch port configuration file. This is why you first need to create the `templates/switch.yaml.j2` file containing the following:
```yaml=
# Based on template at:
# https://gitlab.inetdoc.net/labs/startup-scripts/-/blob/main/templates/switch.yaml
ovs:
switches:
{% for switch in switches %}
- name: {{ switch.name }}
ports:
{% for port in switch.ports %}
- name: {{ port.name }}
type: {{ port.type }}
vlan_mode: {{ port.vlan_mode }}
{% if port.vlan_mode == 'trunk' %}
trunks: {{ port.trunks }}
{% elif port.vlan_mode == 'access' %}
tag: {{ port.tag }}
{% endif %}
{% endfor %}
{% endfor %}
```
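For reference, with the `hypervisor_name.yaml` variables shown earlier, the rendered `switch.yaml` would look roughly like this (placeholders kept as-is, first two ports only):

```yaml
ovs:
  switches:
    - name: dsw-host
      ports:
        - name: tapAAA
          type: OVSPort
          vlan_mode: access
          tag: VVVV
        - name: tapBBB
          type: OVSPort
          vlan_mode: access
          tag: 230
```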
Here is a copy of the [**01_prepare.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/01_prepare.yaml?ref_type=heads) Ansible playbook:
```yaml=
---
# The purpose of this playbook is to prepare the lab environment for the VMs.
# 1. Check the permissions of the masters directory to determine if it is accessible.
# 2. Ensure the required directories exist.
# 3. Create symbolic links to the masters directory.
# 4. Create the switch configuration file.
# 5. Apply the hypervisor switch ports configuration.
# 6. Save and fetch the switch configuration output.
- name: PREPARE LAB ENVIRONMENT
hosts: hypervisors
vars:
masters_dir: /var/cache/kvm/masters
tasks:
- name: CHECK MASTERS DIRECTORY PERMISSIONS
ansible.builtin.stat:
path: "{{ masters_dir }}"
register: masters_stat
- name: ASSERT MASTERS DIRECTORY IS ACCESSIBLE
ansible.builtin.assert:
that:
- masters_stat.stat.exists
- masters_stat.stat.isdir
- masters_stat.stat.readable
- masters_stat.stat.executable
fail_msg: "Directory {{ masters_dir }} is not readable or executable"
- name: ENSURE REQUIRED DIRECTORIES EXIST
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ ansible_env.HOME }}/labs"
- "{{ lab_path }}"
- name: CREATE SYMBOLIC LINK
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/masters"
src: "{{ masters_dir }}"
state: link
follow: false
- name: CREATE YAML SWITCH CONFIGURATION
ansible.builtin.template:
src: templates/switch.yaml.j2
dest: "{{ lab_path }}/switch.yaml"
mode: "0644"
loop: "{{ hostvars[inventory_hostname].switches }}"
- name: CONFIGURE HYPERVISOR SWITCH PORTS
ansible.builtin.command:
cmd: "{{ ansible_env.HOME }}/masters/scripts/switch-conf.py --apply {{ lab_path }}/switch.yaml"
chdir: "{{ lab_path }}"
register: switch_conf_result
changed_when: "'changed to' in switch_conf_result.stdout"
failed_when: switch_conf_result.rc != 0
- name: SAVE AND FETCH SWITCH CONFIGURATION OUTPUT
block:
- name: SAVE SWITCH CONFIGURATION OUTPUT
ansible.builtin.copy:
content: "{{ switch_conf_result.stdout }}\n{{ switch_conf_result.stderr }}"
dest: "{{ lab_path }}/switch-conf.log"
mode: "0644"
- name: FETCH SWITCH CONFIGURATION OUTPUT
ansible.builtin.fetch:
src: "{{ lab_path }}/switch-conf.log"
dest: trace/switch-conf.log
flat: true
mode: "0644"
rescue:
- name: HANDLE ERROR IN SAVING OR FETCHING SWITCH CONFIGURATION OUTPUT
ansible.builtin.debug:
msg: An error occurred while saving or fetching the switch configuration output.
```
Verify the masters directory access:
: Checks that the `{{ masters_dir }}` directory exists, is readable and executable, and halts the playbook immediately with a clear error message if any condition is not met.
Create the required directory structure and symbolic link:
: Ensures the `~/labs` and `{{ lab_path }}` directories exist on the hypervisor, then creates a `~/masters` symbolic link pointing to `{{ masters_dir }}` for stable access to virtual machine master images.
Generate and apply the switch port configuration:
: Renders the `switch.yaml.j2` Jinja2 template into a `switch.yaml` file and runs `switch-conf.py` to configure each tap interface in access mode with the VLAN identifier declared in the host variables file.
Save and retrieve the switch configuration output:
: Writes the combined output of `switch-conf.py` to a `switch-conf.log` file on the hypervisor, fetches it to the local `trace/` directory, and reports any error via a rescue handler without masking configuration issues.
When you run the playbook, you get the following output.
```bash
ansible-playbook 01_prepare.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
PLAY [PREPARE LAB ENVIRONMENT] **********************************************
TASK [Gathering Facts] ******************************************************
ok: [hypervisor_name]
TASK [CHECK MASTERS DIRECTORY PERMISSIONS] **********************************
ok: [hypervisor_name]
TASK [ASSERT MASTERS DIRECTORY IS ACCESSIBLE] *******************************
ok: [hypervisor_name] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [ENSURE REQUIRED DIRECTORIES EXIST] ************************************
ok: [hypervisor_name] => (item=/home/etudianttest/labs)
changed: [hypervisor_name] => (item=/home/etudianttest/labs/iac-lab03)
TASK [CREATE SYMBOLIC LINK] *************************************************
ok: [hypervisor_name]
TASK [CREATE YAML SWITCH CONFIGURATION] *************************************
changed: [hypervisor_name] => (item={'name': 'dsw-host', 'ports': [{'name':
'tapAAA', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': VVVV}, {'name':
'tapBBB', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 230}, {'name':
'tapCCC', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 800}, {'name':
'tapDDD', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': VVVV}, {'name':
'tapEEE', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 230}, {'name':
'tapFFF', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 800}]})
TASK [CONFIGURE HYPERVISOR SWITCH PORTS] ************************************
ok: [hypervisor_name]
TASK [SAVE SWITCH CONFIGURATION OUTPUT] *************************************
changed: [hypervisor_name]
TASK [FETCH SWITCH CONFIGURATION OUTPUT] ************************************
changed: [hypervisor_name]
PLAY RECAP ******************************************************************
hypervisor_name : ok=9 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
The master image files for the virtual routers are now ready to be copied and the virtual routers can be launched with their respective network interface parameters.
## Part 3: Launch virtual routers on the Hypervisor
In this part, you will create the [**02_declare_run.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/02_declare_run.yaml?ref_type=heads) playbook that copies the virtual router master images, launches the router instances on the hypervisor, and generates a dynamic Ansible inventory from the JSON output of the launch script.
At the end of this process, two virtual routers are operational and ready to be configured by other Ansible playbooks.
The following diagram summarizes the logic of this playbook.

### Step 1: Create the `02_declare_run.yaml` Ansible playbook
As with the previous Ansible playbook, this new one uses two templates: one to prepare the `lab-startup.py` script's YAML declaration, and another to generate a dynamic inventory of the lab's routers when they are started.
Start by creating the YAML lab topology declaration template file named `templates/lab.yaml.j2`.
```yaml=
---
# Based on template at:
# https://gitlab.inetdoc.net/labs/startup-scripts/-/blob/main/templates/
kvm:
vms:
{% for router in groups['routers'] %}
- vm_name: {{ hostvars[router].vrouter.vm_name }}
os: {{ hostvars[router].vrouter.os }}
master_image: {{ image_name }}
force_copy: {{ hostvars[router].vrouter.force_copy | default(false) | bool | to_json }}
tapnumlist: {{ hostvars[router].vrouter.tapnumlist | to_json }}
{% endfor %}
```
The `lab.yaml.j2` template renders a unified YAML declaration file by iterating over the `routers` group, composing each router's properties — VM name, operating system, master image, copy policy, and tap interface list — from the `host_vars` declarations.
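For example, with two routers declared in `host_vars` using the document's placeholder conventions (rXXX/rYYY names, AAA–FFF tap numbers), the rendered declaration file would look similar to this sketch:

```yaml=
---
kvm:
  vms:
    - vm_name: rXXX
      os: iosxe
      master_image: c8000v-universalk9.17.18.02.qcow2
      force_copy: false
      tapnumlist: [AAA, BBB, CCC]
    - vm_name: rYYY
      os: iosxe
      master_image: c8000v-universalk9.17.18.02.qcow2
      force_copy: false
      tapnumlist: [DDD, EEE, FFF]
```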
Next, create the dynamic inventory template file named `templates/inventory_lab.yaml.j2`.
```yaml=
---
routers:
hosts:
{% for item in inventory_hosts %}
{{ item.name | regex_replace('\.qcow2$', '') }}:
ansible_host: {{ item.ansible_host | default('') | regex_replace('%.*$', '') }}%enp0s1
ansible_port: 2222
{% endfor %}
vars:
ansible_ssh_user: "{% raw %}{{ router_user }}{% endraw %}"
ansible_ssh_pass: "{% raw %}{{ router_pass }}{% endraw %}"
ansible_connection: network_cli
ansible_network_os: ios
```
The `inventory_lab.yaml.j2` template renders the `inventory/lab.yaml` Ansible inventory file from the parsed launch JSON, mapping each router name to its management IPv6 link-local address and SSH connection parameters so that subsequent playbooks can reach the newly started instances.
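Rendered against a hypothetical launch output, and keeping the document's AAA/DDD placeholder conventions, the resulting `inventory/lab.yaml` would resemble:

```yaml=
---
routers:
  hosts:
    rXXX:
      ansible_host: fe80::faad:caff:fefe:AAA%enp0s1
      ansible_port: 2222
    rYYY:
      ansible_host: fe80::faad:caff:fefe:DDD%enp0s1
      ansible_port: 2222
  vars:
    ansible_ssh_user: "{{ router_user }}"
    ansible_ssh_pass: "{{ router_pass }}"
    ansible_connection: network_cli
    ansible_network_os: ios
```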
Finally, copy the content below into a playbook file named `02_declare_run.yaml`.
```yaml=
---
- name: PULL, CUSTOMIZE, AND RUN CLOUD LAB
vars_files:
- host_vars/all.yaml
vars:
lab_config_path: "{{ lab_path }}/lab.yaml"
default_file_mode: "0644"
default_dir_mode: "0755"
hosts: hypervisors
tasks:
- name: CHECK IF VIRTUAL ROUTERS ALREADY RUN
ansible.builtin.shell:
cmd: |
set -o pipefail
if "$HOME/vm/scripts/my-vms.py" ls --running --name "{{ hostvars[item].vrouter.vm_name }}" | grep -q '^No VM found\.$'; then
exit 0
else
echo "{{ hostvars[item].vrouter.vm_name }} is already running!"
exit 1
fi
executable: /bin/bash
register: running_vm
changed_when: false
failed_when: false
loop: "{{ groups['routers'] }}"
tags: launch_lab
- name: SET FACT FOR VMS STATUS
ansible.builtin.set_fact:
all_vms_stopped: "{{ (running_vm.results | map(attribute='rc') | sum) == 0 }}"
- name: PROVISION AND LAUNCH LAB
when: all_vms_stopped
block:
- name: PREPARE LOCAL DIRECTORIES AND CLEAN OLD FILES
delegate_to: localhost
block:
- name: ENSURE TRACE AND INVENTORY DIRECTORIES EXIST
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "{{ default_dir_mode }}"
loop:
- trace
- inventory
- name: REMOVE OLD INVENTORY FILE
ansible.builtin.file:
path: inventory/lab.yaml
state: absent
rescue:
- name: HANDLE LOCAL DIRECTORY PREP ERROR
ansible.builtin.fail:
msg: A failure occurred while cleaning or preparing local directories.
- name: BUILD UNIFIED LAB DECLARATION YAML FILE
ansible.builtin.template:
src: templates/lab.yaml.j2
dest: "{{ lab_config_path }}"
mode: "{{ default_file_mode }}"
tags: yaml_labfile
- name: LAUNCH VIRTUAL MACHINES IN JSON MODE
ansible.builtin.command:
cmd: "$HOME/vm/scripts/lab-startup.py --json {{ lab_config_path }}"
chdir: "{{ lab_path }}"
register: launch
failed_when: launch.rc != 0 and 'already' not in (launch.stdout | default(''))
changed_when: "' started!' in (launch.stdout | default('')) or ((launch.stdout | default('') | trim) is match('^\\{'))"
tags: launch_lab
rescue:
- name: HANDLE PROVISIONING ERROR
ansible.builtin.fail:
msg: An error occurred during lab provisioning or VM launch. Playbook halted.
- name: GENERATE LAB INVENTORY FILE DIRECTLY FROM STDOUT
tags: launch_lab
delegate_to: localhost
# Only run if we actually launched VMs and got JSON back
when:
- all_vms_stopped
- launch.stdout is defined
- (launch.stdout | trim) is match('^\\{')
block:
- name: PARSE LAUNCH JSON
ansible.builtin.set_fact:
launch_json: "{{ launch.stdout | from_json }}"
- name: ASSERT LAUNCH JSON CONTAINS VMS
ansible.builtin.assert:
that:
- launch_json is mapping
- launch_json.vms is defined
- launch_json.vms is iterable
fail_msg: Launch output JSON is missing a valid 'vms' array.
- name: OPTIONAL - SAVE TRACE LOCALLY
ansible.builtin.copy:
content: "{{ launch_json | to_nice_json }}"
dest: trace/launch_output.json
mode: "{{ default_file_mode }}"
- name: GENERATE LAB INVENTORY FILE
ansible.builtin.template:
src: templates/inventory_lab.yaml.j2
dest: inventory/lab.yaml
mode: "{{ default_file_mode }}"
rescue:
- name: HANDLE LOCAL INVENTORY TEMPLATING ERROR
ansible.builtin.fail:
msg: Failed to parse the JSON stdout or template the lab inventory file.
```
Here are the key points of this playbook:
Check whether virtual routers are already running:
: Queries each router entry in the inventory using the `my-vms.py` script and sets a boolean fact `all_vms_stopped` to control whether the provisioning block should execute.
Prepare the local environment and build the lab declaration file:
: Creates the local `trace/` and `inventory/` directories, removes any stale inventory file from a previous run, then renders the `lab.yaml.j2` Jinja2 template into a unified lab declaration YAML file on the hypervisor.
Launch the virtual routers in JSON mode:
: Runs `lab-startup.py --json` with the generated declaration file and captures its JSON output, which contains the connection details — including IPv6 link-local addresses — of the newly started router instances. A rescue handler halts the playbook with a clear message if the launch fails.
Generate the dynamic inventory from the launch JSON output:
: Parses the captured JSON, asserts that a valid vms list is present, optionally saves a local trace file, then renders the `inventory_lab.yaml.j2` template to produce the `inventory/lab.yaml` file used by subsequent playbooks to connect to the routers.
Eliminate redundant image copying:
: The `ansible.builtin.copy` task for the `.qcow2` master image was removed. Because the `lab-startup.py` script now handles image copying natively (driven by the `master_image` and `force_copy` variables in the YAML declaration), Ansible no longer duplicates this work, which significantly speeds up the provisioning phase.
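The guard logic of the first two tasks can be sketched in plain Python. The `results` lists below are hypothetical stand-ins for the registered `running_vm` variable; each `rc` is 0 when the corresponding router is not running:

```python
# Mirror of the Jinja2 expression:
#   (running_vm.results | map(attribute='rc') | sum) == 0
def all_vms_stopped(results):
    # The sum is zero only if every check exited with rc=0,
    # i.e. no virtual router is already running.
    return sum(r["rc"] for r in results) == 0

print(all_vms_stopped([{"rc": 0}, {"rc": 0}]))  # True: provisioning proceeds
print(all_vms_stopped([{"rc": 0}, {"rc": 1}]))  # False: the block is skipped
```

A single already-running router is therefore enough to skip the whole provisioning block, which keeps the playbook idempotent across repeated runs.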
### Step 2: Run the `02_declare_run.yaml` Ansible playbook
Here is an example of the playbook execution.
```bash
ansible-playbook 02_declare_run.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
PLAY [PULL, CUSTOMIZE, AND RUN CLOUD LAB] ***********************************
TASK [Gathering Facts] ******************************************************
ok: [hypervisor_name]
TASK [CHECK IF VIRTUAL ROUTERS ALREADY RUN] *********************************
ok: [hypervisor_name] => (item=rXXX)
ok: [hypervisor_name] => (item=rYYY)
TASK [SET FACT FOR VMS STATUS] **********************************************
ok: [hypervisor_name]
TASK [ENSURE TRACE AND INVENTORY DIRECTORIES EXIST] *************************
ok: [hypervisor_name -> localhost] => (item=trace)
ok: [hypervisor_name -> localhost] => (item=inventory)
TASK [REMOVE OLD INVENTORY FILE] ********************************************
ok: [hypervisor_name -> localhost]
TASK [BUILD UNIFIED LAB DECLARATION YAML FILE] ******************************
changed: [hypervisor_name]
TASK [LAUNCH VIRTUAL MACHINES IN JSON MODE] *********************************
changed: [hypervisor_name]
TASK [PARSE LAUNCH JSON] ****************************************************
ok: [hypervisor_name -> localhost]
TASK [ASSERT LAUNCH JSON CONTAINS VMS] **************************************
ok: [hypervisor_name -> localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [OPTIONAL - SAVE TRACE LOCALLY] ****************************************
changed: [hypervisor_name -> localhost]
TASK [GENERATE LAB INVENTORY FILE] ******************************************
changed: [hypervisor_name -> localhost]
PLAY RECAP ******************************************************************
hypervisor_name : ok=11 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
With both virtual routers successfully started and the dynamic inventory generated, the Ansible ping module can now be used to verify connectivity before moving on to the configuration stage.
### Step 3: Check Ansible SSH access to the target virtual routers
Here you use the **ping** Ansible module directly from the command line.
```bash
ansible routers -m ping \
--extra-vars @$HOME/.iac_passwd
```
```bash=
rXXX | SUCCESS => {
"changed": false,
"ping": "pong"
}
rYYY | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
You can also check that the inventory contains the **rXXX** and **rYYY** entries, each with its own parameter set.
```bash
ansible-inventory --yaml --limit routers --list
```
```yaml=
all:
children:
routers:
hosts:
rXXX:
ansible_connection: network_cli
ansible_host: fe80::faad:caff:fefe:AAA%enp0s1
ansible_network_os: ios
ansible_port: 2222
ansible_ssh_pass: '{{ router_pass }}'
ansible_ssh_user: '{{ router_user }}'
default_routes: '{{ inband_vlans[0].default_routes }}'
image_name: c8000v-universalk9.17.18.02.qcow2
interfaces:
- description: --> VLAN 230
enabled: true
interface_id: 2
interface_type: GigabitEthernet
vlan_id: 230
- description: --> VLAN 800
enabled: true
interface_id: 3
interface_type: GigabitEthernet
vlan_id: 800
lab_name: iac-lab03
lab_path: '{{ ansible_env.HOME }}/labs/{{ lab_name }}'
masters_dir: /var/cache/kvm/masters
oob_vlan: VVVV
router_host_id: '{{ vrouter.tapnumlist[0] | int }}'
vrouter:
force_copy: false
master_image: '{{ image_name }}'
os: iosxe
tapnumlist:
- AAA
- BBB
- CCC
vm_name: rXXX
rYYY:
ansible_connection: network_cli
ansible_host: fe80::faad:caff:fefe:DDD%enp0s1
ansible_network_os: ios
ansible_port: 2222
ansible_ssh_pass: '{{ router_pass }}'
ansible_ssh_user: '{{ router_user }}'
default_routes: '{{ inband_vlans[0].default_routes }}'
image_name: c8000v-universalk9.17.18.02.qcow2
interfaces:
- description: --> VLAN 230
enabled: true
interface_id: 2
interface_type: GigabitEthernet
vlan_id: 230
- description: --> VLAN 800
enabled: true
interface_id: 3
interface_type: GigabitEthernet
vlan_id: 800
lab_name: iac-lab03
lab_path: '{{ ansible_env.HOME }}/labs/{{ lab_name }}'
masters_dir: /var/cache/kvm/masters
oob_vlan: VVVV
router_host_id: '{{ vrouter.tapnumlist[0] | int }}'
vrouter:
force_copy: false
master_image: '{{ image_name }}'
os: iosxe
tapnumlist:
- DDD
- EEE
- FFF
vm_name: rYYY
```
The inventory output confirms that the SSH connection parameters for each router — host address, port, credentials, and network OS — are fully resolved and ready for use.
The interface addresses shown for rXXX and rYYY are computed at runtime from the Jinja2 address expressions defined in `group_vars/all.yaml`, which means the addressing plan can be updated in a single place without modifying any playbook or host variable file.
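As an illustration, the `inband_vlans` structure that these expressions rely on could look like the following excerpt of `group_vars/all.yaml` (all subnet and next-hop values are placeholders, not the lab's actual addressing plan):

```yaml=
---
# Hypothetical excerpt — values are placeholders
inband_vlans:
  - vlan: 230
    subnet4: 10.0.230.0/24
    subnet6: 2001:db8:230::/64
    default_routes:
      ipv4_next_hop: 10.0.230.1
      ipv6_next_hop: 2001:db8:230::1
  - vlan: 800
    subnet4: 10.0.8.0/24
    subnet6: 2001:db8:800::/64
```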
At this stage, the infrastructure layer is complete: two virtual routers are running, reachable via SSH, and described by a self-consistent dynamic inventory. The next part will configure their network interfaces and default routes.
## Part 4: Configure the virtual routers
You have now reached the stage of configuring the overlay network. In this part, you will create the `03_configure_routers.yaml` Ansible playbook, which translates the VLAN-based interface declarations from the `host_vars` files into actual IOS XE router configurations and verifies end-to-end Internet connectivity using ICMP.
### Step 1: Create the `03_configure_routers.yaml` playbook
All of the elements used in this playbook are taken from the variables defined in the `rXXX.yaml` and `rYYY.yaml` files stored in the `host_vars` directory. These variables translate the declarative desired state of the network topology into the procedural steps of the playbook.
Here is a copy of the [**03_configure_routers.yaml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/03_configure_routers.yaml?ref_type=heads):
```yaml=
---
- name: CONFIGURE ROUTER INTERFACES AND IP ADDRESSING
gather_facts: false
vars_files:
- group_vars/all.yaml
- host_vars/all.yaml
hosts: routers
pre_tasks:
- name: WAIT FOR VMS TO BECOME ACCESSIBLE
ansible.builtin.wait_for_connection:
delay: 5
sleep: 5
timeout: 300
connect_timeout: 5
- name: GATHER FACTS
# Gather facts about the remote system once the connection is available
ansible.builtin.setup:
when: ansible_facts | length == 0
- name: BUILD VLAN LOOKUP TABLE
ansible.builtin.set_fact:
inband_vlans_by_id: "{{ dict(inband_vlans | map(attribute='vlan') | zip(inband_vlans)) }}"
- name: VALIDATE VLAN ASSIGNMENTS
ansible.builtin.assert:
that:
- item.vlan_id is defined
- inband_vlans_by_id[item.vlan_id] is defined
fail_msg: "Interface {{ item.interface_type }}{{ item.interface_id }} references unknown vlan_id={{ item.vlan_id }}"
loop: "{{ interfaces | selectattr('enabled', 'equalto', true) | list }}"
tasks:
- name: CONFIGURE HOSTNAME
cisco.ios.ios_system:
hostname: "{{ vrouter.vm_name }}"
- name: CONFIGURE INTERFACES
cisco.ios.ios_interfaces:
config:
- name: "{{ item.interface_type }}{{ item.interface_id }}"
description: "{{ item.description }}"
enabled: "{{ item.enabled }}"
with_items: "{{ interfaces }}"
- name: CONFIGURE IPv4 ADDRESSES
cisco.ios.ios_l3_interfaces:
config:
- name: "{{ item.interface_type }}{{ item.interface_id }}"
ipv4:
- address: "{{ inband_vlans_by_id[item.vlan_id].subnet4 | ansible.utils.nthhost(router_host_id) }}/{{ inband_vlans_by_id[item.vlan_id].subnet4 | ansible.utils.ipaddr('prefix') }}"
loop: >-
{{
interfaces
| selectattr('enabled', 'equalto', true)
| selectattr('vlan_id', 'defined')
| selectattr('vlan_id', 'in', inband_vlans_by_id.keys() | list)
| selectattr('vlan_id', 'in', inband_vlans | selectattr('subnet4', 'defined') | map(attribute='vlan') | list)
| list
}}
loop_control:
label: "{{ item.interface_type }}{{ item.interface_id }} - IPv4"
- name: CONFIGURE IPv6 ADDRESSES
cisco.ios.ios_l3_interfaces:
config:
- name: "{{ item.interface_type }}{{ item.interface_id }}"
ipv6:
- address: "{{ inband_vlans_by_id[item.vlan_id].subnet6 | ansible.utils.nthhost(router_host_id) }}/{{ inband_vlans_by_id[item.vlan_id].subnet6 | ansible.utils.ipaddr('prefix') }}"
loop: >-
{{
interfaces
| selectattr('enabled', 'equalto', true)
| selectattr('vlan_id', 'defined')
| selectattr('vlan_id', 'in', inband_vlans_by_id.keys() | list)
| selectattr('vlan_id', 'in', inband_vlans | selectattr('subnet6', 'defined') | map(attribute='vlan') | list)
| list
}}
loop_control:
label: "{{ item.interface_type }}{{ item.interface_id }} - IPv6"
- name: CONFIGURE DEFAULT ROUTES
cisco.ios.ios_static_routes:
config:
- address_families:
- afi: ipv4
routes:
- dest: 0.0.0.0/0
next_hops:
- forward_router_address: "{{ default_routes.ipv4_next_hop }}"
- afi: ipv6
routes:
- dest: ::/0
next_hops:
- forward_router_address: "{{ default_routes.ipv6_next_hop }}"
- name: CHECK IPV4 AND IPV6 DEFAULT ROUTE
cisco.ios.ios_ping:
dest: "{{ item.dest }}"
afi: "{{ item.afi }}"
count: 10
state: present
register: result
when: default_routes is defined
failed_when: result.packet_loss | int > 10
with_items:
- { dest: 9.9.9.9, afi: ip }
- { dest: 2620:fe::fe, afi: ipv6 }
```
The key points of this Ansible playbook are:
Ensure connection readiness and build the VLAN lookup table:
: Waits for each router to become reachable via SSH, gathers facts once the connection is established, then builds an `inband_vlans_by_id` dictionary from the `inband_vlans` list and asserts that every enabled interface references a VLAN ID that is actually declared in the addressing plan.
Configure the router hostname and interface descriptions:
: Sets the hostname on each router to match its `vm_name` using the `cisco.ios.ios_system` module, then applies descriptions and enabled/disabled status to all declared interfaces using `cisco.ios.ios_interfaces`.
Assign IPv4 and IPv6 addresses to enabled interfaces:
: For each enabled interface that references a declared VLAN, computes the interface address directly from the CIDR subnet block defined in `host_vars/all.yaml` using the `ansible.utils.nthhost` filter with the router's tap interface number as the host identifier. IPv4 and IPv6 addresses are applied in two separate tasks using `cisco.ios.ios_l3_interfaces`.
Configure default routes and verify Internet connectivity:
: Installs both an IPv4 (`0.0.0.0/0`) and an IPv6 (`::/0`) static default route using the next-hop addresses declared in `host_vars/all.yaml`, then sends 10 ICMP pings to the well-known [Quad9.net](https://quad9.net) service addresses `9.9.9.9` and `2620:fe::fe` using `cisco.ios.ios_ping`, failing the task if packet loss exceeds 10%.
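The address derivation used by the two `ios_l3_interfaces` tasks can be reproduced with Python's standard `ipaddress` module: indexing into a network mimics `ansible.utils.nthhost`, and `prefixlen` mimics `ipaddr('prefix')`. The subnets below are placeholders, not the lab's actual plan:

```python
import ipaddress

# Hypothetical addressing plan mirroring the inband_vlans structure
inband_vlans = [
    {"vlan": 230, "subnet4": "10.0.230.0/24", "subnet6": "2001:db8:230::/64"},
    {"vlan": 800, "subnet4": "10.0.8.0/24", "subnet6": "2001:db8:800::/64"},
]

# Equivalent of: dict(inband_vlans | map(attribute='vlan') | zip(inband_vlans))
inband_vlans_by_id = {v["vlan"]: v for v in inband_vlans}

def interface_address(vlan_id, router_host_id, family="subnet4"):
    # Equivalent of: subnetX | nthhost(router_host_id), suffixed with the
    # prefix length, as in the playbook's address expression.
    net = ipaddress.ip_network(inband_vlans_by_id[vlan_id][family])
    return f"{net[router_host_id]}/{net.prefixlen}"

print(interface_address(230, 42))             # 10.0.230.42/24
print(interface_address(230, 42, "subnet6"))  # 2001:db8:230::2a/64
```

Because `router_host_id` is derived from the router's first tap number, each router automatically receives a unique, predictable host address in every VLAN subnet it joins.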
### Step 2: Run the `03_configure_routers.yaml` playbook
Here is a sample of the playbook execution that illustrates the use of network interface variables for each router in the network topology.
```bash
ansible-playbook 03_configure_routers.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
PLAY [CONFIGURE ROUTER INTERFACES AND IP ADDRESSING] ************************
TASK [WAIT FOR VMS TO BECOME ACCESSIBLE] ************************************
[WARNING]: WorkerProcess for [rYYY/TASK: WAIT FOR VMS TO BECOME ACCESSIBLE] errantly sent data directly to stderr instead of using Display:
ssh_strict_fopen: Failed to open a file /etc/ssh/ssh_known_hosts for reading: No such file or directory
ok: [rYYY]
[WARNING]: WorkerProcess for [rXXX/TASK: WAIT FOR VMS TO BECOME ACCESSIBLE] errantly sent data directly to stderr instead of using Display:
ssh_strict_fopen: Failed to open a file /etc/ssh/ssh_known_hosts for reading: No such file or directory
ok: [rXXX]
TASK [GATHER FACTS] *********************************************************
ok: [rXXX]
ok: [rYYY]
TASK [BUILD VLAN LOOKUP TABLE] **********************************************
ok: [rXXX]
ok: [rYYY]
TASK [VALIDATE VLAN ASSIGNMENTS] ********************************************
ok: [rXXX] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 230",
"enabled": true,
"interface_id": 2,
"interface_type": "GigabitEthernet",
"vlan_id": 230
},
"msg": "All assertions passed"
}
ok: [rYYY] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 230",
"enabled": true,
"interface_id": 2,
"interface_type": "GigabitEthernet",
"vlan_id": 230
},
"msg": "All assertions passed"
}
ok: [rXXX] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 800",
"enabled": true,
"interface_id": 3,
"interface_type": "GigabitEthernet",
"vlan_id": 800
},
"msg": "All assertions passed"
}
ok: [rYYY] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 800",
"enabled": true,
"interface_id": 3,
"interface_type": "GigabitEthernet",
"vlan_id": 800
},
"msg": "All assertions passed"
}
TASK [CONFIGURE HOSTNAME] ***************************************************
changed: [rXXX]
changed: [rYYY]
TASK [CONFIGURE INTERFACES] *************************************************
changed: [rYYY] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230})
changed: [rXXX] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230})
changed: [rYYY] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800})
changed: [rXXX] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800})
TASK [CONFIGURE IPv4 ADDRESSES] *********************************************
changed: [rYYY] => (item=GigabitEthernet2 - IPv4)
changed: [rXXX] => (item=GigabitEthernet2 - IPv4)
TASK [CONFIGURE IPv6 ADDRESSES] *********************************************
changed: [rYYY] => (item=GigabitEthernet2 - IPv6)
changed: [rXXX] => (item=GigabitEthernet2 - IPv6)
changed: [rXXX] => (item=GigabitEthernet3 - IPv6)
changed: [rYYY] => (item=GigabitEthernet3 - IPv6)
TASK [CONFIGURE DEFAULT ROUTES] *********************************************
changed: [rYYY]
changed: [rXXX]
TASK [CHECK IPV4 AND IPV6 DEFAULT ROUTE] ************************************
ok: [rXXX] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [rXXX] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
ok: [rYYY] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [rYYY] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
PLAY RECAP *****************************************************************
rXXX : ok=10 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
rYYY : ok=10 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
With both routers successfully configured, their in-band interfaces are now active, addressed, and able to reach the Internet through their default routes. The declarative approach — where VLAN assignments, subnet prefixes, and next-hop addresses are all defined in YAML files and resolved at runtime by the playbook — means that the same `03_configure_routers.yaml` playbook can be reused without modification for any number of routers, simply by updating the `host_vars` declarations. The overlay network is now fully operational and all the Ansible playbooks are ready to be assembled into a GitLab CI pipeline in the next part.
## Part 5: Set up GitLab Continuous Integration (GitLab CI)
When you arrive at this final part of the lab, it's time to gather all the Ansible playbooks into a GitLab continuous integration pipeline.
This pipeline leads to the GitOps way of working with IDE tools like Visual Studio Code. Each time a file is edited, committed and pushed to the Git repository, a new GitLab pipeline is automatically launched and the entire process is evaluated.
### Step 1: Create the `.gitlab-ci.yml` pipeline file
Here is a copy of the [**.gitlab-ci.yml**](https://gitlab.inetdoc.net/iac/lab03/-/blob/main/.gitlab-ci.yml?ref_type=heads) file with the 5 stages described below:
```yaml=
variables:
VAULT_FILE: /home/gitlab-runner/.iac_passwd
VAULT_PASSWORD_FILE: /home/gitlab-runner/.vault_pass.txt
default:
before_script:
# Inject the persistent virtual environment into the execution path
# This automatically activates Ansible for all jobs below
- export PATH="/home/gitlab-runner/.venv/bin:$PATH"
- export PATH="/home/gitlab-runner/.local/bin:$PATH"
stages:
- Toolchain
- Ping
- Prepare
- DeclareRun
- Configure
Prepare Runner toolchain:
stage: Toolchain
script:
- uv sync --active --upgrade
Ping hypervisors:
stage: Ping
script:
- ansible hypervisors -m ping --extra-vars "@${VAULT_FILE}"
Prepare hypervisors environment:
stage: Prepare
needs:
- Ping hypervisors
script:
- ansible-playbook 01_prepare.yaml
--extra-vars "@${VAULT_FILE}"
--vault-password-file "${VAULT_PASSWORD_FILE}"
Startup routers:
stage: DeclareRun
artifacts:
paths:
- inventory/
needs:
- Prepare hypervisors environment
script:
- ansible-playbook 02_declare_run.yaml
--extra-vars "@${VAULT_FILE}"
--vault-password-file "${VAULT_PASSWORD_FILE}"
Configure routers:
stage: Configure
artifacts:
paths:
- inventory/
needs:
- Startup routers
script:
- ansible-playbook 03_configure_routers.yaml
--extra-vars "@${VAULT_FILE}"
--vault-password-file "${VAULT_PASSWORD_FILE}"
```
In the context of this lab, the first choice was to use a shell executor runner, as presented in [Lab 02](https://md.inetdoc.net/s/CPltj12uT). The second choice is to run Ansible from a persistent Python virtual environment.
The pipeline Python virtual environment must be persistent and shared by all stages. The same goes for Ansible secrets, which are stored in a vault file.
:::info
Remember that no external secret management service is used here. For this reason, secrets must be transferred from the development user account to the `gitlab-runner` account. A dedicated script called [**share_secrets.sh**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/share_secrets.sh?ref_type=heads) was introduced in Lab 02.
:::
The key points of this `.gitlab-ci.yml` file are:
Environment Setup for the GitLab shell executor
: - Defines the `VAULT_FILE` and `VAULT_PASSWORD_FILE` variables that point to the Ansible secrets stored in the `gitlab-runner` home directory
- Uses a default `before_script` to prepend the persistent Python virtual environment to the `PATH`, so that Ansible is available in every job
Pipeline Structure
: - Organizes the CI/CD pipeline into five stages: Toolchain, Ping, Prepare, DeclareRun, and Configure
Toolchain Maintenance
: - Runs `uv sync --active --upgrade` in the Toolchain stage to keep the shared virtual environment and its packages up to date
Hypervisor Interaction
: - Pings hypervisors to ensure connectivity
- Prepares the hypervisor environment using a dedicated playbook
Router Management
: - Starts up virtual routers using a specific playbook
- Configures routers with another playbook
- Virtual router-related jobs generate artifacts in the `inventory/` directory
Job Dependencies
: - Establishes a clear dependency chain between jobs using the `needs` keyword
- Ensures jobs are executed in the correct order
Vault Integration
: Uses a vault file (${VAULT_FILE}) to pass extra variables to Ansible commands for secure credential management
### Step 2: View the results of the continuous integration pipeline
Here is a screenshot of the CI pipeline with the dependencies between the 5 stages.

Each stage is associated with a job, the results of which can be viewed individually. Here is an example.

Inventory artifacts generated during the **Startup routers** stage are also available after the pipeline has completed.

When you want to analyze the router configuration job traces for the lab topology, you can extract the text from the GitLab web interface or from VSCode. Here is a sample.
```=
Running with gitlab-runner 18.10.0 (ac71f4d8)
on devnet26 tFYHRcNtF, system ID: s_7b7ec03a419a
Preparing the "shell" executor
Using Shell (bash) executor...
Preparing environment
Running on devnet26...
Getting source from Git repository
Gitaly correlation ID: 01KMTD5RXB7JGRYTCTWESKSSG8
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /home/gitlab-runner/builds/tFYHRcNtF/0/iac/lab03/.git/
Checking out 8b4d2923 as detached HEAD (ref is main)...
Removing inventory/lab.yaml
Removing trace/
Skipping Git submodules setup
Downloading artifacts
Downloading artifacts for Startup routers (24765)...
Runtime platform arch=amd64 os=linux pid=108277 revision=ac71f4d8 version=18.10.0
Downloading artifacts from coordinator... ok correlation_id=01KMTD5T4GPXBPPEN2R5YPZV15 host=gitlab.inetdoc.net id=24765 responseStatus=200 OK token=64_sSdxna
Executing "step_script" stage of the job script
$ export PATH="/home/gitlab-runner/.venv/bin:$PATH"
$ export PATH="/home/gitlab-runner/.local/bin:$PATH"
$ ansible-playbook 03_configure_routers.yaml --extra-vars "@${VAULT_FILE}" --vault-password-file "${VAULT_PASSWORD_FILE}"
PLAY [CONFIGURE ROUTER INTERFACES AND IP ADDRESSING] ***************************
TASK [WAIT FOR VMS TO BECOME ACCESSIBLE] ***************************************
[WARNING]: WorkerProcess for [r1/TASK: WAIT FOR VMS TO BECOME ACCESSIBLE] errantly sent data directly to stderr instead of using Display:
ssh_strict_fopen: Failed to open a file /etc/ssh/ssh_known_hosts for reading: No such file or directory
[WARNING]: WorkerProcess for [r2/TASK: WAIT FOR VMS TO BECOME ACCESSIBLE] errantly sent data directly to stderr instead of using Display:
ssh_strict_fopen: Failed to open a file /etc/ssh/ssh_known_hosts for reading: No such file or directory
ok: [r1]
ok: [r2]
TASK [GATHER FACTS] ************************************************************
ok: [r2]
ok: [r1]
TASK [BUILD VLAN LOOKUP TABLE] *************************************************
ok: [r1]
ok: [r2]
TASK [VALIDATE VLAN ASSIGNMENTS] ***********************************************
ok: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 230",
"enabled": true,
"interface_id": 2,
"interface_type": "GigabitEthernet",
"vlan_id": 230
},
"msg": "All assertions passed"
}
ok: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 800",
"enabled": true,
"interface_id": 3,
"interface_type": "GigabitEthernet",
"vlan_id": 800
},
"msg": "All assertions passed"
}
ok: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 230",
"enabled": true,
"interface_id": 2,
"interface_type": "GigabitEthernet",
"vlan_id": 230
},
"msg": "All assertions passed"
}
ok: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800}) => {
"ansible_loop_var": "item",
"changed": false,
"item": {
"description": "--> VLAN 800",
"enabled": true,
"interface_id": 3,
"interface_type": "GigabitEthernet",
"vlan_id": 800
},
"msg": "All assertions passed"
}
TASK [CONFIGURE HOSTNAME] ******************************************************
changed: [r1]
changed: [r2]
TASK [CONFIGURE INTERFACES] ****************************************************
changed: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230})
changed: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'vlan_id': 230})
changed: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800})
changed: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 800', 'enabled': True, 'vlan_id': 800})
TASK [CONFIGURE IPv4 ADDRESSES] ************************************************
changed: [r1] => (item=GigabitEthernet2 - IPv4)
changed: [r2] => (item=GigabitEthernet2 - IPv4)
TASK [CONFIGURE IPv6 ADDRESSES] ************************************************
changed: [r1] => (item=GigabitEthernet2 - IPv6)
changed: [r2] => (item=GigabitEthernet2 - IPv6)
changed: [r1] => (item=GigabitEthernet3 - IPv6)
changed: [r2] => (item=GigabitEthernet3 - IPv6)
TASK [CONFIGURE DEFAULT ROUTES] ************************************************
changed: [r2]
changed: [r1]
TASK [CHECK IPV4 AND IPV6 DEFAULT ROUTE] ***************************************
ok: [r2] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [r2] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
ok: [r1] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [r1] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
PLAY RECAP *********************************************************************
r1 : ok=10 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
r2 : ok=10 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Uploading artifacts for successful job
Uploading artifacts...
Runtime platform arch=amd64 os=linux pid=108947 revision=ac71f4d8 version=18.10.0
inventory/: found 3 matching artifact files and directories
Uploading artifacts as "archive" to coordinator... 201 Created correlation_id=01KMTDKX1A2VPZA0SHKD6DVJBV id=24766 responseStatus=201 Created token=64_sSdxna
Cleaning up project directory and file based variables
Job succeeded
```
## Conclusion
In this lab, you reused the declarative design and GitLab CI pipeline from the previous labs to build and configure Cisco IOS XE virtual routers instead of Linux VMs. You prepared the hypervisor, launched router instances from YAML declarations, and generated a dynamic inventory so that Ansible could automatically discover and manage the new routers. Finally, you ran an end‑to‑end GitLab CI pipeline that executed the Ansible playbooks non‑interactively, making the entire deployment repeatable and easy to adapt to new topologies.