# IaC Lab 3 -- Using GitLab CI to run Ansible playbooks and build new IOS XE Virtual Routers
[toc]
---
> Copyright (c) 2025 Philippe Latu.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
https://inetdoc.net
GitLab repository: https://gitlab.inetdoc.net/iac/lab-03.git
### Scenario
This is the third lab in the series. The first lab ([Lab 01](https://md.inetdoc.net/s/f-mfjs-kQ)) focused on automation by presenting the design of the declarative and procedural parts needed to build new Debian GNU/Linux virtual machines from the cloud.debian.org source. The second lab ([Lab 02](https://md.inetdoc.net/s/CPltj12uT)) introduced GitLab continuous integration (CI) by setting up a new session between the GitLab web service and a runner that executes the tasks from the first lab in a pipeline.
In this third lab, we keep the design decisions made in the first lab and continue to use the GitLab CI pipeline with the same stages as in the second lab. What's new is that the Linux virtual machines are replaced with Cisco IOS-XE routers, which come with a different set of network interfaces.

The division between an Out of Band VLAN for management and automation traffic and In Band VLANs for user traffic remains the same.
The virtual routers are configured with three network interfaces because the physical routers available in the real lab room are ISR4321 models, which have three interfaces.
Unfortunately, there is a difference between the physical router, where the management network interface is isolated from user traffic at the chip level, and the virtual router, where the use of a management interface is a configuration choice. This is a source of confusion for students, who often try to configure this dedicated management interface as a user interface.
To address this issue, both physical and virtual routers are deployed using Zero Touch Provisioning ([ZTP](https://github.com/jeremycohoe/IOSXE-Zero-Touch-Provisioning)). The Python script run at initial startup of both router types ensures that the management interface belongs to a Virtual Routing and Forwarding (VRF) instance that is isolated from the other In Band interfaces.
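To illustrate this isolation mechanism, here is a minimal sketch of what such a ZTP script can do with the on-box Python `cli` module available in the IOS-XE guest shell. The VRF name `mgmt-vrf` and the `GigabitEthernet1` interface are assumptions made for this example, not the exact code used in the lab.
```python
# Hypothetical excerpt of a ZTP script: isolate the management interface
# in a dedicated VRF. The VRF and interface names are illustrative.
from cli import configure  # on-box Python module provided by IOS-XE

configure([
    "vrf definition mgmt-vrf",
    " address-family ipv4",
    " address-family ipv6",
    "interface GigabitEthernet1",
    " vrf forwarding mgmt-vrf",
    " ip address dhcp",
    " no shutdown",
])
```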
We are now ready to begin this third lab with the following scenario: hypervisor-level preparation with the startup of two new virtual router instances and the configuration of an In Band interface on each router within the same VLAN.
:::info
If you want to adapt this lab to your own context, you will need to install and configure (or replace) the declarative management scripts used to write this document.
On the network side, we use the `switch-conf.py` Python script, which reads a YAML declaration file to configure Open vSwitch tap port interfaces on the hypervisor.
On the virtual machine side, we also use the `lab-startup.py` Python script, which reads another YAML declaration file to set the virtual machine properties such as its network configuration.
All these tools are hosted at this address: https://gitlab.inetdoc.net/labs/startup-scripts
:::
### Objectives
After completing the manipulation steps in this document, you will be able to:
- Set up an Ansible environment for managing Cisco IOS-XE virtual routers
- Create declarative configurations for network devices using YAML files
- Develop Ansible playbooks to automate router provisioning and configuration
- Implement a GitLab CI pipeline to orchestrate the entire deployment process
- Generate dynamic inventories for flexible and scalable network management
### Design decisions reminder
The purpose of this lab series is to maintain consistent design decisions when virtualizing Linux systems, Cisco IOS-XE routers, or Cisco NX-OS switches. So, as in [Lab 01](https://md.inetdoc.net/s/f-mfjs-kQ#Part-2-Designing-the-Declarative-Part), we start with the bare-metal Type 2 hypervisor, which owns all the tap network interfaces.
The tap interfaces connect virtualized systems to an Open vSwitch switch named **dsw-host**. The preparation stage ensures that the switch port configuration conforms to the **underlay network**.
:arrow_right: The hypervisor `host_vars` file named `hypervisor_name.yml` contains all tap interfaces configuration declarations.
:arrow_right: The declaration of router interface configurations is split into separate files:
- The `host_vars/all.yml` file contains the addressing plan shared by all virtual routers, while the `group_vars/all.yml` file holds the settings common to all playbooks.
- The `host_vars/rXXX.yml` file for each virtualized router contains its specific interface address part.
This allows the declarative part of the **overlay network** to be composed independently of the network addressing plan.
:arrow_right: The inventory of virtualized systems is dynamically built upon launch.
## Part 1: Configuration setup for Ansible on the DevNet VM
To begin, it is necessary to configure Ansible and verify SSH access to the hypervisor from the DevNet VM.
### Step 1: Create the Ansible directory and configuration file
1. Create the `~/iac/lab03` directory, for example, and navigate to it
```bash
mkdir -p ~/iac/lab03 && cd ~/iac/lab03
```
2. Check that **ansible** is installed
There are two main ways to set up a new Ansible workspace: distribution packages or a Python virtual environment. Both methods have their advantages and disadvantages.
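For reference, a package-based setup on a Debian GNU/Linux DevNet VM could look like the command below; the package names are assumed to follow the usual Debian naming.
```bash
sudo apt install ansible ansible-lint python3-netaddr
```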
To take advantage of the latest versions of tools in Python, we can create a virtual environment by following these steps:
```bash
cat << EOF > requirements.txt
ansible
ansible-lint
ansible-pylibssh
netaddr
EOF
```
```bash
python3 -m venv ansible
source ./ansible/bin/activate
pip3 install -r requirements.txt
```
Next, we can verify that the Ansible configuration and modules are up to date.
```bash
ansible --version
```
```bash=
ansible [core 2.18.2]
config file = /home/etu/iac/lab-03/ansible.cfg
configured module search path = ['/home/etu/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/etu/iac/lab-03/ansible/lib/python3.12/site-packages/ansible
ansible collection location = /home/etu/.ansible/collections:/usr/share/ansible/collections
executable location = /home/etu/iac/lab-03/ansible/bin/ansible
python version = 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (/home/etu/iac/lab-03/ansible/bin/python3)
jinja version = 3.1.5
libyaml = True
```
```bash
ansible-galaxy collection install cisco.ios --upgrade
```
```bash=
Starting galaxy collection install process
Process install dependency map
Starting collection install process
'cisco.ios:9.2.0' is already installed, skipping.
'ansible.netcommon:7.2.0' is already installed, skipping.
'ansible.utils:5.1.2' is already installed, skipping.
```
3. Create a new [**ansible.cfg**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/ansible.cfg?ref_type=heads) file in the `lab03` directory from the shell prompt
```toml=
[defaults]
# Use inventory/ folder files as source
inventory = inventory/
host_key_checking = False # Don't worry about RSA Fingerprints
retry_files_enabled = False # Do not create them
deprecation_warnings = False # Do not show warnings
interpreter_python = /usr/bin/python3
[inventory]
enable_plugins = auto, host_list, yaml, ini, toml, script
[persistent_connection]
command_timeout=100
connect_timeout=100
connect_retry_timeout=100
ssh_type = libssh
```
> Note that the inventory key entry refers to the inventory directory in the default section. The plugins listed in the inventory section enable the creation of a dynamic inventory by merging all files in the directory.
### Step 2: Check SSH access to the Hypervisor from the DevNet VM
Since we have already completed the `~/.ssh/config` file, we are ready to test the SSH connection to the hypervisor called `hypervisor_name`.
Here is an extract from the user's SSH configuration file:
```bash=
Host hypervisor_name
HostName fe80::XXXX:1%%enp0s1
User etudianttest
Port 2222
```
We make a minimal SSH connection and check for success with a return code of 0.
```bash
ssh -q hypervisor_name exit
echo $?
0
```
The SSH connection parameters will be used to complete the inventory hosts file later.
### Step 3: Create a new vault file
Back in the DevNet VM console, create a new vault file called `.iac_passwd.yml` and enter the unique vault password that will protect all the user passwords to be stored.
```bash
ansible-vault create $HOME/.iac_passwd.yml
New Vault password:
Confirm New Vault password:
```
This opens the default editor defined by the `$EDITOR` environment variable.
There we enter two sets of variables: one for hypervisor access from our development VM called DevNet, and one for SSH access to all virtual routers created by Ansible playbooks.
```bash
hypervisor_user: XXXXXXXXXX
hypervisor_pass: YYYYYYYYYY
vm_user: etu
vm_pass: ZZZZZZZZZ
```
As we plan to integrate the Ansible playbooks of this lab into GitLab CI pipelines, we need to store the vault secret in a file and make it available to any Ansible command that follows.
- First, we store the vault secret in a file at the user's home directory level.
```bash
echo "ThisVaultSecret" >$HOME/.vault.passwd
chmod 600 $HOME/.vault.passwd
```
:::warning
Don't forget to replace the "ThisVaultSecret" with your own secret password.
:::
- Second, we make sure that the `ANSIBLE_VAULT_PASSWORD_FILE` variable is set each time a new shell is opened.
```bash
touch $HOME/.profile
echo "export ANSIBLE_VAULT_PASSWORD_FILE=\$HOME/.vault.passwd" |\
tee -a $HOME/.profile
source $HOME/.profile
```
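To confirm that the vault password file is picked up, we can display the vault content. If the `ANSIBLE_VAULT_PASSWORD_FILE` variable is set correctly, no password prompt appears.
```bash
ansible-vault view $HOME/.iac_passwd.yml
```
```bash=
hypervisor_user: XXXXXXXXXX
hypervisor_pass: YYYYYYYYYY
vm_user: etu
vm_pass: ZZZZZZZZZ
```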
## Part 2: Preparing the lab environment and running the preparation stage on the Hypervisor
To build and launch virtual routers, the first step is to prepare directories and gain access to virtual router images and launch scripts. Next, it is necessary to verify that all switch ports to which a virtual router will be connected are configured according to the declarative YAML file.
### Step 1: Set up lab directories for inventory and variables
As mentioned earlier, it is critical to distinguish between the inventory directory and the host variables directory when using Ansible.
We can start by creating the `inventory`, `group_vars`, and `host_vars` directories:
```bash
mkdir -p ~/iac/lab03/{inventory,group_vars,host_vars}
```
The inventory directory files contain the necessary information, such as host names, groups, and attributes, required to establish network connections with these hosts. Here is a copy of the [**inventory/hosts.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/inventory/hosts.yml?ref_type=heads) file.
```yaml=
---
hypervisors:
hosts:
hypervisor_name:
# ansible_host variable is defined in $HOME/.ssh/config file
# Host hypervisor_name
# HostName fe80::XXXX:1%%enp0s1
# User etudianttest
# Port 2222
ansible_host: hypervisor_name
vars:
ansible_ssh_user: "{{ hypervisor_user }}"
ansible_ssh_pass: "{{ hypervisor_pass }}"
ansible_ssh_port: 2222
routers:
hosts:
rXXX:
rYYY:
all:
children:
hypervisors:
routers:
```
The YAML description above contains two groups: **hypervisors** and **routers**. Within the hypervisors group, `hypervisor_name` is currently the only member present with the necessary SSH network connection parameters.
The routers group comprises two members, namely **rXXX** and **rYYY**. At this stage, we know little except that we are going to instantiate two virtual routers.
The SSH network connection parameters for all virtual routers will be provided after they are started and the dynamic inventory Python script is executed.
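At this stage, we can already ask Ansible to display the merged inventory. Only the static entries from `hosts.yml` are present until the routers are launched; the output below is illustrative.
```bash
ansible-inventory --graph
```
```bash=
@all:
  |--@ungrouped:
  |--@hypervisors:
  |  |--hypervisor_name
  |--@routers:
  |  |--rXXX
  |  |--rYYY
```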
### Step 2: Create the group and host variable storage directories
The [**group_vars**](https://gitlab.inetdoc.net/iac/lab-03/-/tree/main/group_vars?ref_type=heads) directory is created with an [**all.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/group_vars/all.yml?ref_type=heads) file that contains the variable declarations common to almost all playbooks.
```yaml=
---
lab_name: iac-lab03
lab_path: "{{ ansible_env.HOME }}/labs/{{ lab_name }}"
masters_dir: /var/cache/kvm/masters
image_name: c8000v-universalk9.17.16.01a.qcow2
oob_vlan: VVV # out-of-band VLAN ID
```
The `oob_vlan:` key refers to the **out-of-band** network used for communications between the DevNet (Ansible + gitlab-runner) virtual machine and the routers to be started and configured.
The content of the [**host_vars**](https://gitlab.inetdoc.net/iac/lab-03/-/tree/main/host_vars?ref_type=heads) directory will now be examined.
YAML description files for each host of the lab infrastructure can be found there.
Here are copies of:
- `host_vars/all.yml`
- `host_vars/hypervisor_name.yml`
- `host_vars/rXXX.yml`
- `host_vars/rYYY.yml`
:::warning
Be sure to edit these files and replace **XXX**, **YYY**, and all other placeholders with the appropriate real names or values.
:::
- The [**all.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/host_vars/all.yml?ref_type=heads) addressing plan file:
```yaml=
---
inband_vlans:
- vlan: 230
prefix4: 10.0.228
prefix6: 2001:678:3fc:e6
mask4: /22
mask6: /64
default_routes:
ipv4_next_hop: 10.0.228.1
ipv6_next_hop: 2001:678:3fc:e6::1
```
This file contains the addressing plan for the **in-band** networks. The example above is limited to a single VLAN for simplicity. Virtual router interface addresses are computed from these network prefixes and masks.
- The Hypervisor `host_vars` file: [**hypervisor_name.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/host_vars/hypervisor_name.yml?ref_type=heads)
```yaml=
---
switches:
- name: dsw-host
ports:
- name: tap_AAA_
type: OVSPort
vlan_mode: access
tag: "{{ oob_vlan }}"
- name: tap_BBB_
type: OVSPort
vlan_mode: access
tag: 230
- name: tap_CCC_
type: OVSPort
vlan_mode: access
tag: 999
- name: tap_DDD_
type: OVSPort
vlan_mode: access
tag: "{{ oob_vlan }}"
- name: tap_EEE_
type: OVSPort
vlan_mode: access
tag: 230
- name: tap_FFF_
type: OVSPort
vlan_mode: access
tag: 999
```
This file contains a list of tap interfaces to be configured, as specified in the design. The configuration parameters for each tap interface listed include the switch name, access mode port, and the tag or identifier of the connected VLAN.
- The YAML files for virtual routers reference the hypervisor tap interface connection and contain the network interface **in-band** VLAN configuration parameters.
Here is a copy of one router `host_vars` file: [**rXXX.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/host_vars/r1.yml?ref_type=heads)
```yaml=
---
vrouter:
vm_name: rXXX
os: iosxe
master_image: "{{ image_name }}"
force_copy: false
tapnumlist: [AAA, BBB, CCC]
interfaces:
- interface_type: GigabitEthernet
interface_id: 2
      description: --> VLAN 230
enabled: true
ipv4_address: "{{ inband_vlans[0].prefix4 }}.{{ (vrouter.tapnumlist[0] | int) % 256 }}{{ inband_vlans[0].mask4 }}"
      ipv6_address: "{{ inband_vlans[0].prefix6 }}::{{ '%x' | format(vrouter.tapnumlist[0] | int) }}{{ inband_vlans[0].mask6 }}"
- interface_type: GigabitEthernet
interface_id: 3
description: --> VLAN 999
enabled: false
default_routes: "{{ inband_vlans[0].default_routes }}"
```
Each router YAML declaration file composes its addresses from `host_vars/all.yml`, which defines the network addressing plan, and from its own `host_vars` file, which specifies the identifier part of each interface address.
Here we chose to use the first router tap interface number as the identifier part of IPv4 and IPv6 addresses.
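We can preview the rendered values without contacting any router, because the `debug` module resolves the Jinja2 templates locally. Note that `host_vars/all.yml` must be passed explicitly with `-e`: Ansible only auto-loads `host_vars` files named after an inventory host, which is also why the playbooks below list it in `vars_files`. The output assumes the placeholders have been replaced and that the router `tapnumlist` starts at 3.
```bash
ansible rXXX -m ansible.builtin.debug -a "var=interfaces" -e @host_vars/all.yml
```
```bash=
rXXX | SUCCESS => {
    "interfaces": [
        {
            "description": "--> VLAN 230",
            "enabled": true,
            "interface_id": 2,
            "interface_type": "GigabitEthernet",
            "ipv4_address": "10.0.228.3/22",
            "ipv6_address": "2001:678:3fc:e6::3/64"
        },
        {
            "description": "--> VLAN 999",
            "enabled": false,
            "interface_id": 3,
            "interface_type": "GigabitEthernet"
        }
    ]
}
```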
### Step 3: Verify Ansible configuration and perform hypervisor access test
Now we are able to use the Ansible `ping` module to communicate with the `hypervisor_name` entry defined in the inventory file.
```bash
ansible hypervisor_name -m ping --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
hypervisor_name | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
Since the Ansible ping succeeds, we can move on to the playbooks that start new virtual routers.
### Step 4: Run the prepare stage playbook
Here is a copy of the [**01_prepare.yaml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/01_prepare.yaml?ref_type=heads) Ansible playbook:
```yaml=
---
# The purpose of this playbook is to prepare the lab environment for the VMs.
# It performs the following steps:
# 1. Check the permissions of the masters directory to determine if it is accessible.
# 2. Ensure the required directories exist.
# 3. Create symbolic link to the masters directory.
# 4. Create the switch configuration file using the template.
# 5. Configure the hypervisor switch ports using the configuration file.
# 6. Save and fetch the switch configuration output for debugging.
# The playbook is executed on the hypervisors group of hosts.
- name: PREPARE LAB ENVIRONMENT
vars_files:
- group_vars/all.yml
hosts: hypervisors
tasks:
- name: CHECK MASTERS DIRECTORY PERMISSIONS
ansible.builtin.shell: |
if [ -r "{{ masters_dir }}" ] && [ -x "{{ masters_dir }}" ]; then
exit 0
else
echo "Directory {{ masters_dir }} is not readable or executable"
exit 1
fi
changed_when: false
register: perms_check
failed_when: perms_check.rc != 0
- name: ENSURE REQUIRED DIRECTORIES EXIST
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "0755"
loop:
- "{{ lab_path }}"
- "{{ lab_path }}/fragments"
- name: CREATE SYMBOLIC LINK
ansible.builtin.file:
path: "{{ ansible_env.HOME }}/masters"
src: "{{ masters_dir }}"
state: link
follow: false
- name: CREATE YAML SWITCH CONFIGURATION
ansible.builtin.template:
src: templates/switch.yaml.j2
dest: "{{ lab_path }}/switch.yaml"
mode: "0644"
loop: "{{ hostvars[inventory_hostname].switches }}"
- name: CONFIGURE HYPERVISOR SWITCH PORTS
ansible.builtin.command:
cmd: "{{ ansible_env.HOME }}/masters/scripts/switch-conf.py {{ lab_path }}/switch.yaml"
chdir: "{{ lab_path }}"
register: switch_conf_result
changed_when: "'changed to' in switch_conf_result.stdout"
failed_when: switch_conf_result.rc != 0
- name: SAVE AND FETCH SWITCH CONFIGURATION OUTPUT
block:
- name: SAVE SWITCH CONFIGURATION OUTPUT
ansible.builtin.copy:
content: "{{ switch_conf_result.stdout }}\n{{ switch_conf_result.stderr }}"
dest: "{{ lab_path }}/switch-conf.log"
mode: "0644"
- name: FETCH SWITCH CONFIGURATION OUTPUT
ansible.builtin.fetch:
src: "{{ lab_path }}/switch-conf.log"
dest: trace/switch-conf.log
flat: true
mode: "0644"
rescue:
- name: HANDLE ERROR IN SAVING OR FETCHING SWITCH CONFIGURATION OUTPUT
ansible.builtin.debug:
msg: An error occurred while saving or fetching the switch configuration output.
```
- In the initial phase, the tasks ensure that all the required directories and symlinks are in place in the user's home directory to run a virtual machine.
These operations are familiar to all students as they are provided at the top of each Moodle course page as shell instructions.
- In the second phase, the configuration of the tap switch ports is adjusted to match the attributes specified in the YAML file. In this particular lab context, each switch port is configured in **access mode** and belongs to the VLAN defined in the YAML file.
When we run the playbook, we get the following output.
```bash
ansible-playbook 01_prepare.yaml --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
PLAY [PREPARE LAB ENVIRONMENT] *************************************************
TASK [Gathering Facts] *********************************************************
ok: [hypervisor_name]
TASK [CHECK MASTERS DIRECTORY PERMISSIONS] *************************************
ok: [hypervisor_name]
TASK [ENSURE REQUIRED DIRECTORIES EXIST] ***************************************
changed: [hypervisor_name] => (item=/home/etudianttest/labs/iac-lab03)
changed: [hypervisor_name] => (item=/home/etudianttest/labs/iac-lab03/fragments)
TASK [CREATE SYMBOLIC LINK] ****************************************************
ok: [hypervisor_name]
TASK [CREATE YAML SWITCH CONFIGURATION] ****************************************
changed: [hypervisor_name] => (item={'name': 'dsw-host', 'ports': [{'name': 'tap3', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 52}, {'name': 'tap4', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 230}, {'name': 'tap5', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 999}, {'name': 'tap6', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 52}, {'name': 'tap7', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 230}, {'name': 'tap8', 'type': 'OVSPort', 'vlan_mode': 'access', 'tag': 999}]})
TASK [CONFIGURE HYPERVISOR SWITCH PORTS] ***************************************
ok: [hypervisor_name]
TASK [SAVE SWITCH CONFIGURATION OUTPUT] ****************************************
changed: [hypervisor_name]
TASK [FETCH SWITCH CONFIGURATION OUTPUT] ***************************************
changed: [hypervisor_name]
PLAY RECAP *********************************************************************
hypervisor_name : ok=8 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
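Optionally, the resulting port configuration can be checked directly on the hypervisor with `ovs-vsctl`. The port name and tag below come from the sample output above, and we assume the account is allowed to query Open vSwitch.
```bash
ssh hypervisor_name ovs-vsctl get port tap4 tag
230
```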
The master image files for the virtual routers are now ready to be copied and the virtual routers can be launched with their respective network interface parameters.
## Part 3: Launch virtual routers on the Hypervisor
In this part, we create a playbook called [**02_declare_run.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/02_declare_run.yml?ref_type=heads) that copies the virtual router images, launches the routers, and runs the Python script that completes the dynamic inventory.
At the end of this process, two virtual routers are operational and ready to be configured by the next Ansible playbooks.
Here is a diagram of the playbook logic:
```mermaid
flowchart TD
A[START] --> B[CHECK IF A VIRTUAL MACHINE ALREADY RUNS]
B --> C[SET FACT FOR VMS STATUS]
C --> D{all_vms_stopped?}
D -->|YES| G[COPY MASTER IMAGE TO LAB ROUTER IMAGES]
D -->|NO| N[HANDLE INVENTORY GENERATION]
G --> H[DELETE LAUNCH OUTPUT MESSAGES LOG FILE]
H --> I[DELETE LOCAL LAUNCH TRACE FILES AND LAB INVENTORY]
I --> J[BUILD LAB DECLARATION YAML FILE block]
subgraph "BUILD LAB DECLARATION"
J1[ENSURE TRACE AND INVENTORY DIRECTORIES]
J2[CREATE FRAGMENTS DIRECTORY]
J3[CREATE YAML HEADER]
J4[CREATE YAML DECLARATION FOR EACH VM]
J5[CHECK FOR CHANGES IN VM DECLARATIONS]
J6[CHECK LAB CONFIG FILE STATUS]
J7[MERGE YAML DECLARATIONS]
J1 --> J2 --> J3 --> J4 --> J5 --> J6 --> J7
end
J --> J1
J7 --> K[LAUNCH VIRTUAL MACHINE]
K --> L[SET FACT FOR VMS STARTED]
L --> N
subgraph "HANDLE INVENTORY GENERATION"
N1[SAVE LAUNCH OUTPUT MESSAGES]
N2[FETCH EXISTING LAUNCH OUTPUT]
N3[GENERATE NEW INVENTORY FILE]
N1 -->|when: all_vms_started| N2
N2 --> N3
end
N --> N1
N3 --> Z[END]
```
### Step 1: Create the `02_declare_run.yml` Ansible playbook
Here is a copy of the playbook.
```yaml=
---
# The purpose of this playbook is to prepare and launch virtual routers on the hypervisor.
# It performs the following steps:
# 1. Check if any VMs are already running to prevent duplicate instances.
# 2. Only if all VMs are stopped:
# a. Copy master image for each router.
# b. Delete any previous launch logs and inventory files.
# c. Create a YAML declaration for each VM and assemble into a lab configuration.
# d. Launch the virtual machines using the lab configuration.
# 3. For all runs (whether VMs were started or were already running):
# a. Save launch output messages if VMs were just started.
# b. Fetch the launch output log for processing.
# c. Generate a new inventory file from the launch output.
# The playbook is executed on the hypervisors group of hosts and includes conditional
# execution of tasks based on VM running state.
- name: COPY MASTER IMAGE AND RUN ROUTERS
vars_files:
- host_vars/all.yml
vars:
lab_config_path: "{{ lab_path }}/lab.yaml"
fragments_dir: "{{ lab_path }}/fragments"
launch_output_file: "{{ lab_path }}/launch_output.log"
default_file_mode: "0644"
default_dir_mode: "0755"
hosts: hypervisors
tasks:
- name: CHECK IF A VIRTUAL MACHINE ALREADY RUNS
ansible.builtin.shell:
cmd: |
set -o pipefail
if $(pgrep -af -U ${USER} | grep -q "={{ hostvars[item].vrouter.vm_name }}\.qcow2 "); then
echo "{{ hostvars[item].vrouter.vm_name }} is already running!"
exit 1
fi
exit 0
executable: /bin/bash
register: running_vm
changed_when: running_vm.rc != 0
failed_when: false
with_inventory_hostnames:
- routers
tags:
- launch_lab
- name: SET FACT FOR VMS STATUS
ansible.builtin.set_fact:
all_vms_stopped: "{{ (running_vm.results | map(attribute='rc') | select('eq', 0) | list | length == running_vm.results | length) }}"
- name: COPY MASTER IMAGE TO LAB ROUTER IMAGES
ansible.builtin.copy:
src: "{{ masters_dir }}/{{ image_name }}"
dest: "{{ lab_path }}/{{ item }}.qcow2"
mode: "{{ default_file_mode }}"
remote_src: true
when: all_vms_stopped
with_inventory_hostnames:
- routers
- name: DELETE LAUNCH OUTPUT MESSAGES LOG FILE
ansible.builtin.file:
path: "{{ launch_output_file }}"
state: absent
tags:
- launch_lab
when: all_vms_stopped
- name: DELETE LOCAL LAUNCH TRACE FILES AND LAB INVENTORY
delegate_to: localhost
ansible.builtin.file:
path: "{{ item }}"
state: absent
loop:
- trace/launch_output.log
- inventory/lab.yml
when: all_vms_stopped
- name: BUILD LAB DECLARATION YAML FILE
tags:
- yaml_labfile
block:
- name: ENSURE TRACE AND INVENTORY DIRECTORIES EXIST
delegate_to: localhost
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "{{ default_dir_mode }}"
loop:
- trace
- inventory
- name: CREATE FRAGMENTS DIRECTORY
ansible.builtin.file:
path: "{{ fragments_dir }}"
state: directory
mode: "{{ default_dir_mode }}"
- name: CREATE YAML HEADER FOR LAB CONFIGURATION
ansible.builtin.copy:
content: |
# Based on template at:
# https://gitlab.inetdoc.net/labs/startup-scripts/-/blob/main/templates/
kvm:
vms:
dest: "{{ fragments_dir }}/00_header_decl.yaml"
mode: "{{ default_file_mode }}"
- name: CREATE A YAML DECLARATION FOR EACH VIRTUAL MACHINE
ansible.builtin.template:
src: templates/lab.yaml.j2
dest: "{{ fragments_dir }}/{{ item }}_decl.yaml"
mode: "{{ default_file_mode }}"
with_items: "{{ groups['routers'] }}"
vars:
tapnumlist: "{{ hostvars[item].vrouter.tapnumlist }}"
- name: CHECK FOR CHANGES IN VIRTUAL MACHINE DECLARATIONS
ansible.builtin.find:
paths: "{{ fragments_dir }}"
patterns: "*_decl.yaml"
register: fragment_files
- name: CHECK LAB CONFIG FILE STATUS
ansible.builtin.stat:
path: "{{ lab_config_path }}"
register: lab_config_file
- name: MERGE YAML DECLARATIONS INTO LAB CONFIGURATION
ansible.builtin.assemble:
src: "{{ fragments_dir }}"
dest: "{{ lab_config_path }}"
mode: "{{ default_file_mode }}"
rescue:
- name: HANDLE ERROR IN LAB CONFIGURATION
ansible.builtin.debug:
msg: An error occurred while building the lab configuration.
always:
- name: CLEANUP TEMPORARY FILES
ansible.builtin.file:
path: "{{ fragments_dir }}"
state: absent
when: all_vms_stopped
- name: LAUNCH VIRTUAL MACHINE
ansible.builtin.command:
cmd: "$HOME/vm/scripts/lab-startup.py {{ lab_config_path }}"
chdir: "{{ lab_path }}"
register: launch
when: all_vms_stopped
failed_when: launch.rc != 0 and 'already' not in launch.stdout
changed_when: "' started!' in launch.stdout"
tags:
- launch_lab
- name: SET FACT FOR VMS STARTED
ansible.builtin.set_fact:
all_vms_started: "{{ (launch.stdout is defined and ' started!' in launch.stdout) }}"
when: launch is defined | default(false)
- name: HANDLE LAB INVENTORY GENERATION
tags:
- launch_lab
block:
- name: SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE
ansible.builtin.copy:
content: "{{ launch.stdout | default('') }}\n{{ launch.stderr | default('') }}"
dest: "{{ launch_output_file }}"
mode: "{{ default_file_mode }}"
when: all_vms_started | default(false)
- name: FETCH EXISTING LAUNCH OUTPUT IF VMS ARE RUNNING
ansible.builtin.fetch:
src: "{{ launch_output_file }}"
dest: trace/launch_output.log
flat: true
mode: "{{ default_file_mode }}"
- name: GENERATE NEW INVENTORY FILE
delegate_to: localhost
ansible.builtin.command:
cmd: /usr/bin/env python3 ./build_lab_inventory.py
register: command_result
changed_when: command_result.rc == 0
rescue:
- name: HANDLE ERROR IN LAB INVENTORY GENERATION
ansible.builtin.debug:
msg: An error occurred while building the lab inventory.
```
Here are the key points of this playbook:
1. **VM State Detection**
- Checks if virtual routers are already running before attempting any operations
- Uses `pgrep` to detect running VM instances by their QCOW2 image name
- Sets the `all_vms_stopped` fact to control the operation flow of subsequent tasks
2. **Conditional Processing Strategy**
- Only performs resource-intensive tasks (image copying, VM configuration, launching) when VMs are not running
- Skips unnecessary steps when VMs are already operational
- Optimizes playbook runtime by avoiding redundant operations
3. **Automated Inventory Generation**
- Captures launch information containing IP addresses and connection details
- Processes this data to create a dynamic inventory file using a Python script
- Enables subsequent playbooks to connect to the newly created VMs
- Works whether VMs were just launched or were already running (the tags listed in the example below make selective reruns possible)
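Since several tasks carry the `launch_lab` and `yaml_labfile` tags, we can list them before attempting a selective run; the output below is illustrative.
```bash
ansible-playbook 02_declare_run.yml --list-tags
```
```bash=
playbook: 02_declare_run.yml

  play #1 (hypervisors): COPY MASTER IMAGE AND RUN ROUTERS      TAGS: []
      TASK TAGS: [launch_lab, yaml_labfile]
```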
### Step 2: Create the `build_lab_inventory.py` Python script
Here is a copy of the [**build_lab_inventory.py**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/build_lab_inventory.py?ref_type=heads) Python script:
```python=
#!/usr/bin/env python3
"""
Build Ansible inventory from virtual machines launch trace.
This script extracts router information from the launch trace file
and generates an Ansible inventory in YAML format.
"""
import logging
import os
import re
import sys
from pathlib import Path
import yaml
def setup_logging():
"""Configure logging for the script."""
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
return logging.getLogger(__name__)
def clean_ansi_codes(input_file, output_file):
"""Remove ANSI color codes from the trace file."""
logger.info(f"Cleaning ANSI codes from {input_file}")
ansi_escape = re.compile(r"\x9B|\x1B\[[0-?]*[ -/]*[@-~]")
try:
with open(input_file, "r", encoding="utf-8") as src:
with open(output_file, "w", encoding="utf-8") as dst:
for line in src:
dst.write(ansi_escape.sub("", line))
return True
except (IOError, OSError) as error:
logger.error(f"Error processing file: {error}")
return False
def extract_router_info(trace_file):
"""Extract router names and IPv6 addresses from the trace file."""
routers = {}
VM_PATTERN = "Router name"
ADDRESS_PATTERN = "mgmt G1 IPv6 LL address"
try:
with open(trace_file, "r", encoding="utf-8") as src:
lines = src.readlines()
vm_name = None
for line in lines:
line = line.strip()
if re.search(VM_PATTERN, line) and not vm_name:
parts = line.split(":", 1)
if len(parts) > 1:
vm_name = parts[1].strip().split(".")[0]
elif re.search(ADDRESS_PATTERN, line) and vm_name:
parts = line.split(" :", 1)
if len(parts) > 1:
address = parts[1].strip().split("%")[0]
vm_address = f"{address}%enp0s1"
routers[vm_name] = {
"ansible_host": vm_address,
"ansible_port": 2222,
}
vm_name = None
return routers
except (IOError, OSError) as error:
logger.error(f"Error reading trace file: {error}")
return {}
def generate_inventory(routers, output_file):
"""Generate the Ansible inventory file in YAML format."""
if not routers:
logger.error("No router information found. Cannot generate inventory.")
return False
inventory = {
"routers": {
"hosts": routers,
"vars": {
"ansible_ssh_user": "{{ vm_user }}",
"ansible_ssh_pass": "{{ vm_pass }}",
"ansible_connection": "network_cli",
"ansible_network_os": "ios",
},
}
}
try:
# Ensure the directory exists
os.makedirs(os.path.dirname(output_file), exist_ok=True)
with open(output_file, "w", encoding="utf-8") as dst:
yaml.dump(inventory, dst, sort_keys=False)
logger.info(f"Inventory successfully written to {output_file}")
return True
except (IOError, OSError) as error:
logger.error(f"Error writing inventory file: {error}")
return False
if __name__ == "__main__":
logger = setup_logging()
# Define file paths using pathlib for better path handling
trace_dir = Path("trace")
inventory_dir = Path("inventory")
trace_file = trace_dir / "launch_output.log"
clean_trace_file = trace_dir / "launch_output.save"
inventory_file = inventory_dir / "lab.yml"
# Check if trace file exists
if not trace_file.exists():
logger.error("Virtual machines launch trace file does not exist.")
logger.error("Are the virtual machines running?")
sys.exit(1)
# Remove existing clean trace file if it exists
if clean_trace_file.exists():
clean_trace_file.unlink()
# Clean ANSI codes from the trace file
if not clean_ansi_codes(trace_file, clean_trace_file):
sys.exit(1)
# Extract router information
routers = extract_router_info(clean_trace_file)
# Generate inventory file
if not generate_inventory(routers, inventory_file):
sys.exit(1)
logger.info("Inventory generation complete.")
sys.exit(0)
```
After the script is executed, a new file named `lab.yml` is created and added to the `inventory/` directory. Here is an example:
```yaml=
routers:
hosts:
rXXX:
ansible_host: fe80::faad:caff:fefe:3%enp0s1
ansible_port: 2222
rYYY:
ansible_host: fe80::faad:caff:fefe:6%enp0s1
ansible_port: 2222
vars:
ansible_ssh_user: '{{ vm_user }}'
ansible_ssh_pass: '{{ vm_pass }}'
ansible_connection: network_cli
ansible_network_os: ios
```
### Step 3: Run the `02_declare_run.yml` Ansible playbook
Here is an example of the playbook execution.
```bash
ansible-playbook 02_declare_run.yml --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
```
```bash=
PLAY [COPY MASTER IMAGE AND RUN ROUTERS] ***************************************
TASK [Gathering Facts] *********************************************************
ok: [hypervisor_name]
TASK [CHECK IF A VIRTUAL MACHINE ALREADY RUNS] *********************************
ok: [hypervisor_name] => (item=r1)
ok: [hypervisor_name] => (item=r2)
TASK [SET FACT FOR VMS STATUS] *************************************************
ok: [hypervisor_name]
TASK [COPY MASTER IMAGE TO LAB ROUTER IMAGES] **********************************
changed: [hypervisor_name] => (item=r1)
changed: [hypervisor_name] => (item=r2)
TASK [DELETE LAUNCH OUTPUT MESSAGES LOG FILE] **********************************
ok: [hypervisor_name]
TASK [DELETE LOCAL LAUNCH TRACE FILES AND LAB INVENTORY] ***********************
ok: [hypervisor_name -> localhost] => (item=trace/launch_output.log)
ok: [hypervisor_name -> localhost] => (item=inventory/lab.yml)
TASK [ENSURE TRACE AND INVENTORY DIRECTORIES EXIST] ****************************
changed: [hypervisor_name -> localhost] => (item=trace)
ok: [hypervisor_name -> localhost] => (item=inventory)
TASK [CREATE FRAGMENTS DIRECTORY] **********************************************
ok: [hypervisor_name]
TASK [CREATE YAML HEADER FOR LAB CONFIGURATION] ********************************
changed: [hypervisor_name]
TASK [CREATE A YAML DECLARATION FOR EACH VIRTUAL MACHINE] **********************
changed: [hypervisor_name] => (item=r1)
changed: [hypervisor_name] => (item=r2)
TASK [CHECK FOR CHANGES IN VIRTUAL MACHINE DECLARATIONS] ***********************
ok: [hypervisor_name]
TASK [CHECK LAB CONFIG FILE STATUS] ********************************************
ok: [hypervisor_name]
TASK [MERGE YAML DECLARATIONS INTO LAB CONFIGURATION] **************************
changed: [hypervisor_name]
TASK [CLEANUP TEMPORARY FILES] *************************************************
changed: [hypervisor_name]
TASK [LAUNCH VIRTUAL MACHINE] **************************************************
changed: [hypervisor_name]
TASK [SET FACT FOR VMS STARTED] ************************************************
ok: [hypervisor_name]
TASK [SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE] *********************************
changed: [hypervisor_name]
TASK [FETCH EXISTING LAUNCH OUTPUT IF VMS ARE RUNNING] *************************
changed: [hypervisor_name]
TASK [GENERATE NEW INVENTORY FILE] *********************************************
changed: [hypervisor_name -> localhost]
PLAY RECAP *********************************************************************
hypervisor_name : ok=19 changed=10 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Step 4: Check Ansible SSH access to the target virtual routers
Here we use the **ping** Ansible module directly from the command line.
```bash
ansible routers -m ping --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
rXXX | SUCCESS => {
"changed": false,
"ping": "pong"
}
rYYY | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
We can also check that the inventory contains **rXXX** and **rYYY** entries with their own parameter set.
```bash
ansible-inventory --yaml --limit routers --list
```
```yaml=
all:
children:
routers:
hosts:
rXXX:
ansible_connection: network_cli
ansible_host: fe80::faad:caff:fefe:3%enp0s1
ansible_network_os: ios
ansible_port: 2222
ansible_ssh_pass: '{{ vm_pass }}'
ansible_ssh_user: '{{ vm_user }}'
default_routes: '{{ inband_vlans[0].default_routes }}'
image_name: c8000v-universalk9.17.16.01a.qcow2
interfaces:
- description: --> VLAN 230
enabled: true
interface_id: 2
interface_type: GigabitEthernet
ipv4_address: '{{ inband_vlans[0].prefix4 }}.{{ (vrouter.tapnumlist[0]
| int) % 256 }}{{ inband_vlans[0].mask4 }}'
ipv6_address: '{{ inband_vlans[0].prefix6 }}::{{ ''%x'' | format(vrouter.tapnumlist[0]
| int) }}{{ inband_vlans[0].mask6 }}'
- description: --> VLAN 999
enabled: false
interface_id: 3
interface_type: GigabitEthernet
lab_name: iac-lab03
oob_vlan: 52
vrouter:
force_copy: false
master_image: '{{ image_name }}'
os: iosxe
tapnumlist:
- 3
- 4
- 5
vm_name: rXXX
rYYY:
ansible_connection: network_cli
ansible_host: fe80::faad:caff:fefe:6%enp0s1
ansible_network_os: ios
ansible_port: 2222
ansible_ssh_pass: '{{ vm_pass }}'
ansible_ssh_user: '{{ vm_user }}'
default_routes: '{{ inband_vlans[0].default_routes }}'
image_name: c8000v-universalk9.17.16.01a.qcow2
interfaces:
- description: --> VLAN 230
enabled: true
interface_id: 2
interface_type: GigabitEthernet
ipv4_address: '{{ inband_vlans[0].prefix4 }}.{{ (vrouter.tapnumlist[0]
| int) % 256 }}{{ inband_vlans[0].mask4 }}'
ipv6_address: '{{ inband_vlans[0].prefix6 }}::{{ ''%x'' | format(vrouter.tapnumlist[0]
| int) }}{{ inband_vlans[0].mask6 }}'
- description: --> VLAN 999
enabled: false
interface_id: 3
interface_type: GigabitEthernet
lab_name: iac-lab03
oob_vlan: 52
vrouter:
force_copy: false
master_image: '{{ image_name }}'
os: iosxe
tapnumlist:
- 6
- 7
- 8
vm_name: rYYY
```
The key point here is that the interface addresses are computed from the Jinja2 templates, illustrating that this playbook processing is independent of the network addressing plan.
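For instance, migrating the lab to another addressing plan only requires editing `host_vars/all.yml`; the playbooks themselves remain untouched. The values below are purely illustrative.
```yaml=
---
inband_vlans:
  - vlan: 240
    prefix4: 172.16.96
    prefix6: 2001:db8:acb8:f0
    mask4: /20
    mask6: /64
    default_routes:
      ipv4_next_hop: 172.16.96.1
      ipv6_next_hop: 2001:db8:acb8:f0::1
```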
## Part 4: Virtual routers configuration
We have now reached the stage of configuring the overlay network. Here we will create a new Ansible playbook to configure virtual router network interfaces and test Internet access using ICMP requests.
### Step 1: Create the `03_configure_routers.yml` playbook
All of the elements used in this playbook are taken from the variables defined in the `rXXX.yml` and `rYYY.yml` files stored in the `host_vars` directory. They are used to translate the declarative desired state of the network topology through the playbook procedure.
Here is a copy of the [**03_configure_routers.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/03_configure_routers.yml?ref_type=heads):
```yaml=
---
# The purpose of this playbook is to configure the network settings on the virtual routers.
# It performs the following steps:
# 1. Wait for the virtual routers to become accessible via SSH.
# 2. Configure the hostname for each router according to its vm_name.
# 3. Configure interfaces with descriptions and enabled/disabled status.
# 4. Set up IPv4 and IPv6 addresses on enabled interfaces.
# 5. Configure default routes for IPv4 and IPv6.
# 6. Verify connectivity by testing pings to external destinations.
# The playbook is executed on the routers group of hosts.
- name: CONFIGURE ROUTER INTERFACES AND IP ADDRESSING
gather_facts: false
vars_files:
- group_vars/all.yml
- host_vars/all.yml
hosts: routers
pre_tasks:
- name: WAIT FOR VMS TO BECOME ACCESSIBLE
ansible.builtin.wait_for_connection:
delay: 5
sleep: 5
timeout: 300
connect_timeout: 5
- name: WAIT FOR SSH SERVICE
ansible.builtin.wait_for:
port: 22
state: started
timeout: 300
- name: GATHER FACTS
ansible.builtin.setup: # Gather facts about the remote system once the connection is available
when: ansible_facts | length == 0
tasks:
- name: CONFIGURE HOSTNAME
cisco.ios.ios_system:
hostname: "{{ vrouter.vm_name }}"
- name: CONFIGURE INTERFACES
cisco.ios.ios_interfaces:
config:
- name: "{{ item.interface_type }}{{ item.interface_id }}"
description: "{{ item.description }}"
enabled: "{{ item.enabled }}"
with_items: "{{ interfaces }}"
- name: CONFIGURE IPv4 ADDRESSES
cisco.ios.ios_l3_interfaces:
config:
- name: "{{ item.interface_type }}{{ item.interface_id }}"
ipv4:
- address: "{{ item.ipv4_address }}"
ipv6:
- address: "{{ item.ipv6_address }}"
with_items: "{{ interfaces | selectattr('enabled', 'equalto', true) | list }}"
- name: CONFIGURE DEFAULT ROUTES
cisco.ios.ios_static_routes:
config:
- address_families:
- afi: ipv4
routes:
- dest: 0.0.0.0/0
next_hops:
- forward_router_address: "{{ default_routes.ipv4_next_hop }}"
- afi: ipv6
routes:
- dest: ::/0
next_hops:
- forward_router_address: "{{ default_routes.ipv6_next_hop }}"
- name: CHECK IPV4 AND IPV6 DEFAULT ROUTE
cisco.ios.ios_ping:
dest: "{{ item.dest }}"
afi: "{{ item.afi }}"
count: 10
state: present
register: result
when: default_routes is defined
failed_when: result.packet_loss | int > 10
with_items:
- { dest: 9.9.9.9, afi: ip }
- { dest: 2620:fe::fe, afi: ipv6 }
```
The key points of this Ansible playbook are:
Pre-task Connection Handling
: - Uses wait_for_connection to ensure VMs are accessible
- Waits for SSH service to be available on port 22
- Gathers facts only when ansible_facts is empty
Router Configuration
: - Configures hostname using **cisco.ios.ios_system** module
- Sets up interfaces with descriptions and enabled/disabled status using **cisco.ios.ios_interfaces**
IP Addressing
: - Configures both IPv4 and IPv6 addresses on enabled interfaces using **cisco.ios.ios_l3_interfaces**
- Uses a list comprehension to filter only enabled interfaces
Default Route Configuration
: - Sets up both IPv4 and IPv6 default routes using **cisco.ios.ios_static_routes**
- Uses a nested structure to define address families and routes
Connectivity Verification
: - Performs ping tests for both IPv4 (9.9.9.9) and IPv6 (2620:fe::fe) using **cisco.ios.ios_ping**
- Considers the test failed if packet loss exceeds 10% (an ad-hoc check example follows this list)
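We can also verify the result manually with an ad-hoc command. The **cisco.ios.ios_command** module runs arbitrary show commands over the same `network_cli` connection; only the command is shown here, as the output depends on the lab state.
```bash
ansible routers -m cisco.ios.ios_command -a "commands='show ip interface brief'" \
  --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
```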
### Step 2: Run the `03_configure_routers.yml` playbook
Here is a sample of the playbook execution that illustrates the use of the network interface variables for each router in the network topology.
```bash=
PLAY [CONFIGURE ROUTER INTERFACES AND IP ADDRESSING] ***************************
TASK [WAIT FOR VMS TO BECOME ACCESSIBLE] ***************************************
ok: [r1]
ok: [r2]
TASK [WAIT FOR SSH SERVICE] ****************************************************
ok: [r1]
ok: [r2]
TASK [GATHER FACTS] ************************************************************
ok: [r2]
ok: [r1]
TASK [CONFIGURE HOSTNAME] ******************************************************
ok: [r2]
ok: [r1]
TASK [CONFIGURE INTERFACES] ****************************************************
ok: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'ipv4_address': '10.0.228.6/22', 'ipv6_address': '2001:678:3fc:e6::6/64'})
ok: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'ipv4_address': '10.0.228.3/22', 'ipv6_address': '2001:678:3fc:e6::3/64'})
ok: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 999', 'enabled': False})
ok: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 3, 'description': '--> VLAN 999', 'enabled': False})
TASK [CONFIGURE IPv4 ADDRESSES] ************************************************
ok: [r1] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'ipv4_address': '10.0.228.3/22', 'ipv6_address': '2001:678:3fc:e6::3/64'})
ok: [r2] => (item={'interface_type': 'GigabitEthernet', 'interface_id': 2, 'description': '--> VLAN 230', 'enabled': True, 'ipv4_address': '10.0.228.6/22', 'ipv6_address': '2001:678:3fc:e6::6/64'})
TASK [CONFIGURE DEFAULT ROUTES] ************************************************
changed: [r1]
changed: [r2]
TASK [CHECK IPV4 AND IPV6 DEFAULT ROUTE] ***************************************
ok: [r1] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [r2] => (item={'dest': '9.9.9.9', 'afi': 'ip'})
ok: [r1] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
ok: [r2] => (item={'dest': '2620:fe::fe', 'afi': 'ipv6'})
PLAY RECAP *********************************************************************
r1 : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
r2 : ok=8 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
## Part 5: Continuous integration with GitLab CI
When we arrive at this final part of the lab, it's time to gather all the Ansible playbooks into a GitLab continuous integration (CI) pipeline.
This pipeline leads to the GitOps way of working with IDE tools like Visual Studio Code. Each time a file is edited, committed and pushed to the Git repository, a new GitLab pipeline is automatically launched and the entire process is evaluated.
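A typical iteration from the Visual Studio Code integrated terminal then looks like this:
```bash
git add 03_configure_routers.yml
git commit -m "Adjust in-band interface configuration"
git push
# A new pipeline appears in the project CI/CD view right after the push
```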
### Step 1: Create the `.gitlab-ci.yml` pipeline file
Here is a copy of the [**.gitlab-ci.yml**](https://gitlab.inetdoc.net/iac/lab-03/-/blob/main/.gitlab-ci.yml?ref_type=heads) file with the 5 stages described below:
```yaml=
variables:
PIP_CACHE_DIR: $CI_PROJECT_DIR/.cache/pip
VAULT_FILE: /home/gitlab-runner/.iac_passwd.yml
cache:
paths:
- .cache/pip
- ansible/
stages:
- Build
- Ping
- Prepare
- DeclareRun
- Configure
# This job creates a virtual environment and installs the required packages
# - ansible latest version
# - ansible-lint latest version which is compatible with Python >= 3.12
# - ansible-pylibssh as defined in ansible.cfg
# - cisco.ios collection from Ansible Galaxy install or upgrade
Build venv:
stage: Build
script:
- python3 -m venv ansible
- source ./ansible/bin/activate
- pip3 install --upgrade -r requirements.txt
- ansible-galaxy collection install cisco.ios --upgrade
Ping hypervisors:
stage: Ping
needs:
- Build venv
script:
- source ./ansible/bin/activate
- ansible hypervisors -m ping --extra-vars "@${VAULT_FILE}"
Prepare hypervisors environment:
stage: Prepare
needs:
- Ping hypervisors
script:
- source ./ansible/bin/activate
    - ansible-playbook 01_prepare.yaml --extra-vars "@${VAULT_FILE}"
Startup routers:
stage: DeclareRun
artifacts:
paths:
- inventory/
needs:
- Prepare hypervisors environment
script:
- source ./ansible/bin/activate
- ansible-playbook 02_declare_run.yml --extra-vars "@${VAULT_FILE}"
Configure routers:
stage: Configure
artifacts:
paths:
- inventory/
needs:
- Startup routers
script:
- source ./ansible/bin/activate
- ansible-playbook 03_configure_routers.yml --extra-vars "@${VAULT_FILE}"
```
In the context of this lab, the first choice was to use a shell executor runner, as presented in [Lab 02](https://md.inetdoc.net/s/CPltj12uT). The second choice made here is to use a version of Ansible that is pulled into a Python virtual environment.
The pipeline Python virtual environment must be persistent and shared by all stages. The same goes for Ansible secrets, which are stored in a vault file.
:::info
We remind you that we do not use any external secret management service. For this reason, secrets must be transferred from the development user account to the gitlab-runner account. A dedicated script called [**share_secrets.sh**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/share_secrets.sh?ref_type=heads) was introduced in Lab 02.
:::
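Without repeating Lab 02, the essential effect of that script is to copy the two secret files into the gitlab-runner home directory with restrictive permissions. A minimal sketch, assuming sudo rights on the DevNet VM:
```bash
sudo install -o gitlab-runner -g gitlab-runner -m 600 \
  $HOME/.iac_passwd.yml /home/gitlab-runner/.iac_passwd.yml
sudo install -o gitlab-runner -g gitlab-runner -m 600 \
  $HOME/.vault.passwd /home/gitlab-runner/.vault.passwd
```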
The key points of this `.gitlab-ci.yml` file are:
Environment Setup for GitLab shell executor
: - Uses a custom PIP_CACHE_DIR variable for pip caching
- Defines a VAULT_FILE variable for storing sensitive information
- Implements caching for pip and ansible directories
Pipeline Structure
: - Organizes the CI/CD pipeline into five stages: Build, Ping, Prepare, DeclareRun, and Configure
Virtual Environment Creation
: - Creates a Python virtual environment named "ansible"
- Installs required packages from a requirements.txt file
- Installs or upgrades the cisco.ios Ansible Galaxy collection
Hypervisor Interaction
: - Pings hypervisors to ensure connectivity
- Prepares the hypervisor environment using a dedicated playbook
Router Management
: - Starts up virtual routers using a specific playbook
- Configures routers with another playbook
- Virtual router-related jobs generate artifacts in the `inventory/` directory
Job Dependencies
: - Establishes a clear dependency chain between jobs using the "needs" keyword
- Ensures jobs are executed in the correct order
Vault Integration
: Uses a vault file (${VAULT_FILE}) to pass extra variables to Ansible commands for secure credential management
### Step 2: View the results of the continuous integration pipeline
Here is a screenshot of the CI pipeline with the dependencies between the 5 stages.

Each stage is associated with a job, the results of which can be viewed individually. Here is an example.

Inventory artifacts generated during the **Startup routers** stage are also available after the complete pipeline has ended.

## Conclusion
This lab demonstrates how to use GitLab CI to automate the deployment and configuration of Cisco IOS-XE virtual routers using Ansible playbooks. By leveraging Infrastructure as Code principles, we've created a repeatable and version-controlled process for network provisioning. The integration of declarative configurations, dynamic inventory generation, and a multi-stage CI pipeline showcases the power of automation in network management, enabling faster deployments and reducing manual errors.