# IaC Lab 1 -- Use Ansible to build new Debian GNU/Linux Virtual Machines
[toc]
---
> Copyright (c) 2026 Philippe Latu.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
https://inetdoc.net
GitLab repository: https://gitlab.inetdoc.net/iac/lab01/
### Scenario
In this lab, you will explore the basics of using Ansible to build and customize Debian GNU/Linux virtual machines. This is a first example of the **Infrastructure as Code** (IaC) **push** method, where the DevNet VM (the controlling server) uses Ansible to build a new target system *almost* from scratch.
The key design point here is to use VRF to create an out-of-band automation VLAN
channel that is isolated from the VM users' in-band traffic.

The main stages of the scenario are as follows:
1. Start at the hypervisor shell level by pulling a base virtual machine image from [cloud.debian.org](https://cloud.debian.org/images/cloud/). Then resize the main partition of each virtual machine image file copied from the cloud image.
2. Use **cloud-init** to provision all virtual machine configurations. These configurations are passed from the **host_vars** file for each virtual machine.
3. Use **Virtual Routing and Forwarding** (VRF) to set up permanent out-of-band access to the network configuration of the virtual machines.
4. Connect the VMs to any in-band VLANs using a hypervisor switch port in trunk mode, enabling VLAN tags to identify the relevant broadcast domains.
:::info
If you want to adapt this lab to your own context, you will need to install and configure (or replace) the declarative management scripts used to write this document.
On the network side, you use the `switch-conf.py` Python script, which reads a YAML declaration file to configure Open vSwitch tap port interfaces on the hypervisor.
On the virtual machine side, you also use the `lab-startup.py` Python script, which reads another YAML declaration file to set the virtual machine properties such as its network configuration.
All these tools are hosted at this address: https://gitlab.inetdoc.net/labs/startup-scripts
:::
### Objectives
After completing the manipulation steps in this document, you will be able to:
- Use Ansible to automate the creation and configuration of Debian GNU/Linux virtual machines from scratch.
- Implement Virtual Routing and Forwarding (VRF) for network isolation between management and user traffic.
- Create and manage Infrastructure as Code (IaC) using declarative YAML files to define desired infrastructure state.
- Configure out-of-band management access separate from in-band network traffic for virtual machines.
- Develop and execute Ansible playbooks to automate various aspects of infrastructure deployment and configuration.
## Part 1: Configure Ansible on the DevNet VM
Before running any playbook, you need to install the Ansible toolchain and configure a workspace.
### Step 1: Install and configure the Ansible toolchain
To begin, set up a dedicated Ansible workspace on the DevNet VM and install the Ansible toolchain in a Python virtual environment managed by **`uv`**.
Create the `~/iac/lab01` directory, for example, and navigate to this folder:
```bash
mkdir -p ~/iac/lab01 && cd ~/iac/lab01
```
Install **Ansible** in a Python virtual environment
There are many ways to set up a new Ansible workspace. Here, the choice has been made to install Ansible in a Python virtual environment managed by `uv` to take advantage of the latest release.
We start by creating a `pyproject.toml` file.
```bash
cat << EOF > pyproject.toml
[project]
name = "lab01"
version = "0.1.0"
description = "IaC Ansible labs"
readme = "README.md"
requires-python = ">=3.13"
dependencies = ["ansible>=13.4.0", "ansible-lint>=26.3.0"]
EOF
```
The `pyproject.toml` file serves as the central, standardized configuration for `uv`, defining your project’s metadata and dependencies so uv can manage environments, installs, and locking in a reproducible way.
Install the toolchain from the following `setup_ansible_toolchain.sh` script.
```bash=
#!/usr/bin/env bash
# Purpose: Sets up 'uv' and an Ansible virtual environment for the current user.
# This script should be executed directly by the target user (e.g., developer or gitlab-runner).
set -euo pipefail

if [[ ${USER} == "gitlab-runner" ]]; then
  readonly PROJECT_DIR="${HOME}"
  readonly VENV_DIR="${PROJECT_DIR}/.venv"
else
  readonly PROJECT_DIR="${PWD}"
  readonly VENV_DIR="${PROJECT_DIR}/.venv"
fi
readonly PROJECT_FILE="pyproject.toml"

echo "Starting Ansible toolchain setup for user: ${USER}"

# 1. Verify project metadata exists
cd "${PROJECT_DIR}"
if [[ ! -f ${PROJECT_FILE} ]]; then
  cat <<EOF >"${PROJECT_DIR}/${PROJECT_FILE}"
[project]
name = "lab01"
version = "0.1.0"
description = "IaC Ansible labs"
readme = "README.md"
requires-python = ">=3.13"
dependencies = ["ansible>=13.4.0", "ansible-lint>=26.3.0"]
EOF
else
  echo "${PROJECT_FILE} found in the ${PROJECT_DIR} directory."
fi

# 2. Install 'uv' locally for the user if not present
if ! command -v uv >/dev/null 2>&1; then
  echo "Installing 'uv'..."
  curl -LsSf https://astral.sh/uv/install.sh | sh
  # Add standard astral installation paths to PATH for immediate use
  export PATH="${HOME}/.local/bin:${PATH}"
else
  echo "'uv' is already installed."
fi

# 3. Create the virtual environment only if it does not already exist
if [[ -d ${VENV_DIR} ]]; then
  echo "Existing virtual environment detected at ${VENV_DIR}."
else
  echo "Creating new virtual environment at ${VENV_DIR}..."
  uv venv "${VENV_DIR}"
fi

# 4. Activate the target virtual environment if needed, then sync project dependencies
if [[ ${VIRTUAL_ENV-} == "${VENV_DIR}" ]]; then
  echo "Virtual environment is already activated. Syncing dependencies from ${PROJECT_FILE}..."
else
  echo "Activating virtual environment at ${VENV_DIR}..."
  # shellcheck source=/dev/null
  source "${VENV_DIR}/bin/activate"
fi

echo "Upgrading dependencies from ${PROJECT_FILE}..."
uv sync --active --upgrade

echo "--------------------------------------------------------"
echo "Toolchain setup complete! Virtual environment is at:"
echo "${VENV_DIR}"
echo "To activate manually, run: source ${VENV_DIR}/bin/activate"
echo "--------------------------------------------------------"
```
Run the script to install or upgrade the Python virtual environment.
```bash
bash setup_ansible_toolchain.sh
```
```bash=
Starting Ansible toolchain setup for user: etu
'uv' is already installed.
Creating new virtual environment at /home/etu/.venv...
Using CPython 3.13.12 interpreter at: /usr/bin/python3.13
Creating virtual environment at: /home/etu/.venv
Activate with: source /home/etu/.venv/bin/activate
Activating virtual environment at /home/etu/.venv...
Upgrading dependencies from pyproject.toml...
Resolved 33 packages in 389ms
Prepared 31 packages in 210ms
Installed 31 packages in 1.06s
+ ansible==13.4.0
+ ansible-compat==25.12.1
+ ansible-core==2.20.3
+ ansible-lint==26.3.0
+ attrs==26.1.0
+ black==26.3.1
+ bracex==2.6
+ cffi==2.0.0
+ click==8.3.1
+ cryptography==46.0.5
+ distro==1.9.0
+ filelock==3.25.2
+ jinja2==3.1.6
+ jsonschema==4.26.0
+ jsonschema-specifications==2025.9.1
+ markupsafe==3.0.3
+ mypy-extensions==1.1.0
+ packaging==26.0
+ pathspec==1.0.4
+ platformdirs==4.9.4
+ pycparser==3.0
+ pytokens==0.4.1
+ pyyaml==6.0.3
+ referencing==0.37.0
+ resolvelib==1.2.1
+ rpds-py==0.30.0
+ ruamel-yaml==0.19.1
+ ruamel-yaml-clib==0.2.15
+ subprocess-tee==0.4.2
+ wcmatch==10.1
+ yamllint==1.38.0
--------------------------------------------------------
Toolchain setup complete! Virtual environment is at:
/home/etu/.venv
To activate manually, run: source /home/etu/.venv/bin/activate
--------------------------------------------------------
```
### Step 2: Configure Ansible and verify SSH access to the hypervisor
Create a new `ansible.cfg` file in the `lab01` directory.
Here is a copy of the [**ansible.cfg**](https://gitlab.inetdoc.net/iac/lab01/-/blob/main/ansible.cfg?ref_type=heads) file.
```=
[defaults]
# Use inventory/ folder files as source
inventory = inventory/
# Don't worry about SSH fingerprints
host_key_checking = False
# Do not create retry files
retry_files_enabled = False
# Do not show deprecation warnings
deprecation_warnings = False
interpreter_python = /usr/bin/python3

[inventory]
enable_plugins = auto, host_list, yaml, ini, toml, script

[persistent_connection]
command_timeout = 100
connect_timeout = 100
connect_retry_timeout = 100
```
:::info
The key point of this configuration file is to use the `inventory/` directory for all files created during playbook processing. This is the way to achieve dynamic inventory at virtual machine startup.
:::
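With the virtual environment active, you can verify which settings differ from the built-in defaults using the standard `ansible-config` command. Run it from the `lab01` directory so that this `ansible.cfg` file is picked up:

```bash
ansible-config dump --only-changed
```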
Once you have completed your `${HOME}/.ssh/config` file, you can test the SSH connection to the hypervisor called `hypervisor_name`.
Here is an extract from the developer user SSH configuration file:
```bash=
Host hypervisor_name
    HostName fe80::XXXX:1%%enp0s1
    User etudianttest
    Port 2222
```
We make a minimal SSH connection and check for success with a return code of 0.
```bash
ssh -q hypervisor_name exit
echo $?
0
```
With the Ansible toolchain now installed and SSH access to the hypervisor confirmed, the DevNet VM is ready to automate provisioning of the lab infrastructure.
## Part 2: Designing the Declarative Part
Now that the Ansible automation tools have been installed, you can start planning the desired state of your lab infrastructure.
This is the most challenging part of the process, as you are starting from scratch. First, you must choose a starting point and provide a description of the expected results.
It is recommended that you start at the bare-metal hypervisor level before moving on to the virtual machine and its network configuration.
- When the hypervisor system starts, all provisioned **tap** interfaces are owned by this Type 2 hypervisor.
- These tap interfaces connect virtual machines to an Open vSwitch switch like a patch cable. The tap interface names are also used to refer to the switch ports. All switch port configuration instructions use these tap interface names.
> In this context, you will use a switch port in trunk mode to forward the traffic of multiple broadcast domains or VLANs. Each frame passing through this trunk port uses an IEEE 802.1Q tag to identify the broadcast domain to which it belongs.
- Each virtual machine uses a tap interface number to connect its network interface to the switch port.
> In this context, you want the virtual machine network interface to use a dedicated Virtual Routing and Forwarding (VRF) table for automation operations. This keeps the virtual machine network traffic and the automation traffic completely independent and isolated.
- The VLAN used for automation operations is referred to as the **out-of-band** network because it is reserved exclusively for management traffic and does not carry any user traffic.
- All other VLANs used for lab traffic are referred to as the **in-band** network because all user traffic passes through them.
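To make the VRF design concrete, the commands below show roughly what the netplan declaration used later in this lab sets up inside a guest. This is an illustrative sketch only (the table number 52 and the interface names are placeholders); netplan and systemd-networkd apply the equivalent configuration automatically at boot:

```bash
# Create a VRF device bound to its own routing table (placeholder table 52)
ip link add mgmt-vrf type vrf table 52
ip link set dev mgmt-vrf up
# Enslave the out-of-band VLAN interface: its routes now live in table 52,
# fully separated from the main routing table used by in-band traffic
ip link set dev vlan52 master mgmt-vrf
# Inspect the isolated routing table
ip route show vrf mgmt-vrf
```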
Now that all design decisions are in place, they need to be translated into YAML description files that reflect the desired state.
### Step 1: The inventory directory and its content
When using Ansible, it is important to distinguish between the inventory directory and the host variables directory.
We can start by creating the `inventory` and `host_vars` directories
```bash
mkdir ~/iac/lab01/{inventory,host_vars}
```
The files in the `inventory/` directory contain the information required to establish network connections with the managed hosts, such as hostnames, groups, and attributes. Here is a copy of the `inventory/hosts.yaml` file.
```yaml=
---
hypervisors:
  hosts:
    hypervisor_name:
      # ansible_host variable is defined in $HOME/.ssh/config file
      # Host hypervisor_name
      #     HostName fe80::XXXX:1%%enp0s1
      #     User etudianttest
      #     Port 2222
      ansible_host: hypervisor_name
  vars:
    ansible_ssh_user: "{{ hypervisor_user }}"
    # ansible_ssh_pass is useless when using passwordless authentication
    # ansible_ssh_pass: "{{ hypervisor_pass }}"
    ansible_ssh_port: 2222
vms:
  hosts:
    vmXX:
    vmYY:
all:
  children:
    hypervisors:
    vms:
```
The YAML description above contains two groups: **hypervisors** and **vms**. Within the **hypervisors** group, **hypervisor_name** is currently the only member, defined with the necessary SSH network connection parameters.
The **vms** group has two members, namely **vmXX** and **vmYY**. At this stage, you know little about them except that you are going to instantiate two virtual machines.
The SSH network connection parameters for all virtual machines will be provided after they are started and the dynamic inventory is built.
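You can check how Ansible interprets this inventory with the standard `ansible-inventory` command, run from the `lab01` directory with the virtual environment active. Since the **vms** members are declared without connection parameters at this stage, the group tree should resemble the following:

```bash
ansible-inventory --graph
```

```bash=
@all:
  |--@ungrouped:
  |--@hypervisors:
  |  |--hypervisor_name
  |--@vms:
  |  |--vmXX
  |  |--vmYY
```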
### Step 2: The host_vars directory and its content
Now examine the content of the `host_vars` directory. It holds a YAML description file for each host in the lab infrastructure. Here are copies of:
- `host_vars/hypervisor_name.yaml`
- `host_vars/vmXX.yaml`
- `host_vars/vmYY.yaml`
:::warning
Be sure to edit these host variable files and replace the placeholders **XXX** and **YYY** with the appropriate real tap interface numbers.
:::
- The hypervisor **`host_vars/hypervisor_name.yaml`** file:
```yaml=
---
lab_name: iac-lab01
lab_path: "{{ ansible_env.HOME }}/labs/{{ lab_name }}"
cloud_url: https://cloud.debian.org/images/cloud/forky/daily/latest/debian-14-genericcloud-amd64-daily.qcow2
image_name: debian-14-amd64.qcow2
filesystem_resize: 64 # GB
oob_vlan: _OOB_VLAN_ID_
switches:
  - name: dsw-host
    ports:
      - name: tapXXX
        type: OVSPort
        vlan_mode: trunk
        trunks: ["{{ oob_vlan }}", 230]
      - name: tapYYY
        type: OVSPort
        vlan_mode: trunk
        trunks: ["{{ oob_vlan }}", 230]
```
The first five parameters define the image pull source and sizing, which are common to all the virtual machines in this lab.
The hypervisor YAML file `hypervisor_name.yaml` also contains the list of tap interfaces to configure, as specified in the design. The configuration parameters for each listed tap interface include the switch port name, the port's trunk mode, and the list of VLANs allowed on the trunk.
- The virtual machine **`host_vars/vmXX.yaml`** file:
```yaml=
---
vm_name: vmXX
os: linux
master_image: debian-14-amd64.qcow2
force_copy: false
memory: 2048
tapnum: XX
cloud_init:
  force_seed: false
  users:
    - name: "{{ vm_username }}"
      groups: adm, sudo, users
      sudo: ALL=(ALL) NOPASSWD:ALL
      ssh_authorized_keys:
        - "{{ vm_userkey }}"
  hostname: "{{ vm_name }}"
  packages:
    - openssh-server
    - openvswitch-switch
    - qemu-guest-agent
  write_files:
    - path: /etc/ssh/sshd_config.d/99-custom-port.conf
      content: |
        Port 22
        Port 2222
      append: false
  netplan:
    network:
      version: 2
      renderer: networkd
      ethernets:
        enp0s1:
          dhcp4: false
          dhcp6: false
      vlans:
        "vlan{{ oob_vlan }}":
          id: "{{ oob_vlan }}"
          link: enp0s1
          dhcp4: true
          dhcp6: false
          accept-ra: true
        vlan230:
          id: 230
          link: enp0s1
          dhcp4: false
          dhcp6: false
          accept-ra: true
          addresses:
            - 10.0.{{ 228 + tapnum|int // 256 }}.{{ tapnum|int % 256 }}/22
          routes:
            - to: default
              via: 10.0.228.1
          nameservers:
            addresses:
              - 172.16.0.2
      vrfs:
        mgmt-vrf:
          table: "{{ oob_vlan }}"
          interfaces:
            - "vlan{{ oob_vlan }}"
```
The virtual machine YAML files reference the hypervisor tap interface connection and contain the **in-band** network interface VLAN configuration parameters.
They also contain all the **cloud-init** instructions needed to customize the virtual machine at startup.
Note that IPv4 addresses are calculated from the tap interface number, which prevents students from assigning the same IPv4 address to different virtual machines.
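The address derivation can be checked outside Ansible. This short Python sketch reproduces the Jinja2 expression `10.0.{{ 228 + tapnum|int // 256 }}.{{ tapnum|int % 256 }}/22` used in the netplan section above:

```python
def inband_address(tapnum: int) -> str:
    """Derive the unique in-band IPv4 address from a tap interface number,
    mirroring the Jinja2 expression in the host_vars netplan section."""
    return f"10.0.{228 + tapnum // 256}.{tapnum % 256}/22"

# Each tap number maps to a distinct host address inside 10.0.228.0/22.
print(inband_address(4))    # 10.0.228.4/22
print(inband_address(300))  # 10.0.229.44/22
```

Because the third and fourth octets are both functions of the tap number, two virtual machines attached to different tap interfaces can never end up with the same address.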
## Part 3: Access the hypervisor from the DevNet VM using Ansible
Now that the minimum inventory and host variables are in place, you need to verify that the hypervisor is accessible before pulling the virtual machine images.
To do this, you will create a vault in the home directory of the DevNet VM user to store all sensitive information, as you are not using an external service for this purpose.
### Step 1: Create a new Ansible project vault file
Back in the DevNet VM console, create a new vault file named `.iac_passwd` and enter the unique vault password that will be used for all user passwords you want to store.
```bash
ansible-vault create ${HOME}/.iac_passwd
New Vault password:
Confirm New Vault password:
```
This opens the default editor defined by the `$EDITOR` environment variable.
There you enter two sets of variables: one for hypervisor access from your development VM (DevNet), and one for passwordless SSH access to all the target VMs created by the Ansible playbooks.
```bash=
---
hypervisor_user: XXXXXXXXXX
# Useless when using passwordless authentication
# hypervisor_pass: YYYYYYYYYY
vm_username: admin
vm_userkey: ssh-ed25519 AAAA...
```
:::warning
Replace the `ssh-ed25519 AAAA...` placeholder with your own SSH public key, which you can obtain with the following command: `cat $HOME/.ssh/id_ed25519.pub`
:::
Save your vault password to a file for future use with Ansible playbooks. The example below uses a fake secret:
```bash
echo "v4ult53cr3t" >${HOME}/.vault_pass.txt
chmod 600 ${HOME}/.vault_pass.txt
```
### Step 2: Verify Ansible communication with the Hypervisor
Now you can use the Ansible `ping` module to connect to the `hypervisor_name` entry defined in the inventory file.
```bash
ansible hypervisors -m ping \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
hypervisor_name | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
Since the Ansible ping is successful, you can proceed with playbooks to create new virtual machines.
## Part 4: Designing the Procedural Part
In [Part 2](#Part-2-Designing-the-Declarative-Part), you translated the lab design decisions into declarative YAML files that describe the desired state of the two virtual machines and their network connections. Now you need to use this declarative information in procedures that effectively build, customize, and run the virtual machines.
To build and launch virtual machines, you must first prepare folders and obtain access to scripts that start virtual machines. Next, you need to configure the hypervisor switch ports used to connect virtual machines in **trunk** mode.
### Step 1: Preparation stage at the Hypervisor level
This is a copy of the first Ansible Playbook [**01_prepare.yaml**](https://gitlab.inetdoc.net/iac/lab01/-/blob/main/01_prepare.yaml), which includes two categories of procedures.
```yaml=
---
# The purpose of this playbook is to prepare the lab environment for the VMs.
# 1. Check the permissions of the masters directory to determine if it is accessible.
# 2. Ensure the required directories exist.
# 3. Create symbolic links to the masters directory.
# 4. Create the switch configuration file.
# 5. Apply the hypervisor switch ports configuration.
# 6. Save and fetch the switch configuration output.
- name: PREPARE LAB ENVIRONMENT
  hosts: hypervisors
  vars:
    masters_dir: /var/cache/kvm/masters
  tasks:
    - name: CHECK MASTERS DIRECTORY PERMISSIONS
      ansible.builtin.stat:
        path: "{{ masters_dir }}"
      register: masters_stat
    - name: ASSERT MASTERS DIRECTORY IS ACCESSIBLE
      ansible.builtin.assert:
        that:
          - masters_stat.stat.exists
          - masters_stat.stat.isdir
          - masters_stat.stat.readable
          - masters_stat.stat.executable
        fail_msg: "Directory {{ masters_dir }} is not readable or executable"
    - name: ENSURE REQUIRED DIRECTORIES EXIST
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - "{{ ansible_env.HOME }}/labs"
        - "{{ lab_path }}"
        - "{{ lab_path }}/fragments"
    - name: CREATE SYMBOLIC LINK
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/masters"
        src: "{{ masters_dir }}"
        state: link
        follow: false
    - name: CREATE YAML SWITCH CONFIGURATION
      ansible.builtin.template:
        src: templates/switch.yaml.j2
        dest: "{{ lab_path }}/switch.yaml"
        mode: "0644"
      loop: "{{ hostvars[inventory_hostname].switches }}"
    - name: CONFIGURE HYPERVISOR SWITCH PORTS
      ansible.builtin.command:
        cmd: "{{ ansible_env.HOME }}/masters/scripts/switch-conf.py --apply {{ lab_path }}/switch.yaml"
        chdir: "{{ lab_path }}"
      register: switch_conf_result
      changed_when: "'changed to' in switch_conf_result.stdout"
      failed_when: switch_conf_result.rc != 0
    - name: SAVE AND FETCH SWITCH CONFIGURATION OUTPUT
      block:
        - name: SAVE SWITCH CONFIGURATION OUTPUT
          ansible.builtin.copy:
            content: "{{ switch_conf_result.stdout }}\n{{ switch_conf_result.stderr }}"
            dest: "{{ lab_path }}/switch-conf.log"
            mode: "0644"
        - name: FETCH SWITCH CONFIGURATION OUTPUT
          ansible.builtin.fetch:
            src: "{{ lab_path }}/switch-conf.log"
            dest: trace/switch-conf.log
            flat: true
            mode: "0644"
      rescue:
        - name: HANDLE ERROR IN SAVING OR FETCHING SWITCH CONFIGURATION OUTPUT
          ansible.builtin.debug:
            msg: An error occurred while saving or fetching the switch configuration output.
```
The three most important points about the Ansible modules used in this playbook are:
File operations and checks:
: The `ansible.builtin.stat` module first verifies that the masters directory exists and is accessible, and the `ansible.builtin.file` module then creates the required lab directories and the symbolic link to this masters directory.
Template rendering:
: The `ansible.builtin.template` module is used to create a YAML switch configuration file, demonstrating Ansible's ability to generate dynamic content.
Command execution:
: The `ansible.builtin.command` module is used to run a custom Python script that configures the switch ports.
When you run the playbook, you get the following output.
```bash
ansible-playbook 01_prepare.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
PLAY [PREPARE LAB ENVIRONMENT] *******************************************
TASK [Gathering Facts] ***************************************************
ok: [hypervisor_name]
TASK [CHECK MASTERS DIRECTORY PERMISSIONS] *******************************
ok: [hypervisor_name]
TASK [ASSERT MASTERS DIRECTORY IS ACCESSIBLE] ****************************
ok: [hypervisor_name] => {
    "changed": false,
    "msg": "All assertions passed"
}
TASK [ENSURE REQUIRED DIRECTORIES EXIST] *********************************
ok: [hypervisor_name] => (item=/home/etudianttest/labs)
ok: [hypervisor_name] => (item=/home/etudianttest/labs/iac-lab01)
ok: [hypervisor_name] => (item=/home/etudianttest/labs/iac-lab01/fragments)
TASK [CREATE SYMBOLIC LINK] **********************************************
ok: [hypervisor_name]
TASK [CREATE YAML SWITCH CONFIGURATION] **********************************
ok: [hypervisor_name] => (item={'name': 'dsw-host', 'ports': [{'name': 'tap4', 'type': 'OVSPort', 'vlan_mode': 'trunk', 'trunks': [52, 230]}, {'name': 'tap5', 'type': 'OVSPort', 'vlan_mode': 'trunk', 'trunks': [52, 230]}]})
TASK [CONFIGURE HYPERVISOR SWITCH PORTS] *********************************
ok: [hypervisor_name]
TASK [SAVE SWITCH CONFIGURATION OUTPUT] **********************************
changed: [hypervisor_name]
TASK [FETCH SWITCH CONFIGURATION OUTPUT] *********************************
changed: [hypervisor_name]
PLAY RECAP ***************************************************************
hypervisor_name : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Step 2: Create a script to resize VM storage capacity
Once the generic cloud image file has been pulled from cloud.debian.org, the first thing you need to do is resize its main partition to increase the amount of space available.
Here is the [**resize.sh**](https://gitlab.inetdoc.net/iac/lab01/-/blob/main/resize.sh) script which is based on the `qemu-img` and `virt-resize` commands. It takes two input parameters: the filename of the virtual machine image and the amount of storage space to add.
```bash=
#!/bin/bash
set -euo pipefail

resize_cmd="virt-resize"

# Check required commands early.
for cmd in "${resize_cmd}" qemu-img jq cp mv grep basename; do
  if ! command -v "${cmd}" >/dev/null 2>&1; then
    printf "Command '%s' not found.\n" "${cmd}"
    exit 1
  fi
done

# Validate input arguments.
if [[ $# -ne 2 ]]; then
  printf "Usage: %s <vm.qcow2> <size_in_GiB_to_add>\n" "$0"
  exit 1
fi

vm="$1"
size_gib="$2"
backup="${vm}.orig"
tmp_vm="${vm}.tmp"

if [[ ! -f ${vm} ]]; then
  printf "Input VM image '%s' does not exist.\n" "${vm}"
  exit 1
fi

if [[ ! ${size_gib} =~ ^[0-9]+$ ]] || [[ ${size_gib} -eq 0 ]]; then
  printf "Size must be a positive integer (GiB).\n"
  exit 1
fi

cleanup() {
  rm -f "${tmp_vm}"
}
trap cleanup EXIT

# Keep an untouched copy as source for virt-resize.
cp --reflink=auto --sparse=always "${vm}" "${backup}"

# Compute new target virtual size and create destination image.
current_bytes=$(qemu-img info --output=json "${backup}" | jq '.["virtual-size"]')
if [[ -z ${current_bytes} ]]; then
  printf "Failed to read current virtual size from '%s'.\n" "${backup}"
  exit 1
fi
new_bytes=$((current_bytes + size_gib * 1024 * 1024 * 1024))
qemu-img create -f qcow2 "${tmp_vm}" "${new_bytes}" >/dev/null

# Expand the root partition while copying to the new image.
"${resize_cmd}" --expand /dev/sda1 "${backup}" "${tmp_vm}"

# Replace original image only after a successful resize.
mv -f "${tmp_vm}" "${vm}"
rm -f "${backup}"

info=$(qemu-img info "${vm}")
vm_name=$(basename "${vm}" .qcow2)
grep "virtual size:" <<<"${info}" >"${vm_name}_resized.txt"
printf "VM disk image resized successfully.\n"
```
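The size arithmetic in the script is worth spelling out: `qemu-img info --output=json` reports the current virtual size in bytes, and the script adds the requested number of GiB before creating the destination image. A minimal Python check of that computation:

```python
def new_virtual_size(current_bytes: int, add_gib: int) -> int:
    """Mirror the script's arithmetic: current size plus add_gib GiB."""
    return current_bytes + add_gib * 1024 ** 3

# A 3 GiB cloud image grown by 64 GiB yields a 67 GiB virtual disk.
print(new_virtual_size(3 * 1024 ** 3, 64))  # 71940702208
```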
### Step 3: Create an Ansible playbook to pull, resize, and launch VMs
We are now ready to pull the base virtual machine image, make a copy for each lab VM, resize each VM's main partition, and build the dynamic inventory.
In this step, you will develop a playbook called [**02_pull_customize_run.yaml**](https://gitlab.inetdoc.net/iac/lab01/-/blob/main/02_pull_customize_run.yaml) that calls the script created in the previous steps if necessary.
Here is a diagram of the playbook logic:
```mermaid
flowchart TD
A[Start] --> B[Check VM state<br/>my-vms.py per VM]
B --> C[all_vms_stopped?]
C -- Yes --> D[Provision + Launch]
C -- No --> H[Inventory generation]
D --> D1[Download/copy/resize images]
D1 --> D2[Clean trace + inventory]
D2 --> D3[Build lab_unified.yaml]
D3 --> D4[Launch with --json]
D4 --> D5{stdout JSON?}
D5 -- Yes --> D6[Save remote launch_output.json]
D5 -- No --> H
D6 --> H
H --> I[Fetch launch_output.json]
I --> J{fetch ok?}
J -- No --> K[Skip inventory generation]
J -- Yes --> L[Parse JSON]
L --> L2{parse/template ok?}
L2 -- Yes --> M[Generate inventory/vm_<name>.yaml per VM]
L2 -- No --> K2[Fail templating then rescue to debug skip]
M --> N[End]
K --> N
K2 --> N
```
Here is a copy of the playbook code:
```yaml=
---
- name: PULL, CUSTOMIZE, AND RUN CLOUD LAB
  vars:
    lab_config_path: "{{ lab_path }}/lab.yaml"
    launch_output_file: "{{ lab_path }}/launch_output.json"
    default_file_mode: "0644"
    default_dir_mode: "0755"
  hosts: hypervisors
  tasks:
    - name: CHECK IF VIRTUAL MACHINES ALREADY RUN
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          if "$HOME/vm/scripts/my-vms.py" ls --running --name "{{ hostvars[item].vm_name }}" | grep -q '^No VM found\.$'; then
            exit 0
          else
            echo "{{ hostvars[item].vm_name }} is already running!"
            exit 1
          fi
        executable: /bin/bash
      register: running_vm
      changed_when: false
      failed_when: false
      loop: "{{ groups['vms'] }}"
      tags: launch_lab
    - name: SET FACT FOR VMS STATUS
      ansible.builtin.set_fact:
        all_vms_stopped: "{{ (running_vm.results | map(attribute='rc') | sum) == 0 }}"
    - name: PROVISION AND LAUNCH LAB
      when: all_vms_stopped
      block:
        - name: DOWNLOAD DEBIAN CLOUD IMAGE QCOW2 FILE
          ansible.builtin.get_url:
            url: "{{ cloud_url }}"
            dest: "{{ lab_path }}/{{ image_name }}"
            mode: "{{ default_file_mode }}"
        - name: COPY CLOUD IMAGE TO VM IMAGE FILE FOR ALL VMS
          ansible.builtin.copy:
            src: "{{ lab_path }}/{{ image_name }}"
            dest: "{{ lab_path }}/{{ item }}.qcow2"
            remote_src: true
            mode: "{{ default_file_mode }}"
            force: false
          loop: "{{ groups['vms'] }}"
        - name: RESIZE EACH VM FILESYSTEM
          ansible.builtin.script:
            cmd: "./resize.sh {{ item }}.qcow2 {{ filesystem_resize }}"
            chdir: "{{ lab_path }}"
            creates: "{{ lab_path }}/{{ item }}_resized.txt"
          loop: "{{ groups['vms'] }}"
        - name: DELETE OLD LAUNCH OUTPUT
          ansible.builtin.file:
            path: "{{ launch_output_file }}"
            state: absent
          tags: launch_lab
        - name: PREPARE LOCAL DIRECTORIES AND CLEAN OLD FILES
          delegate_to: localhost
          block:
            - name: REMOVE OLD TRACE AND INVENTORY FILES
              ansible.builtin.file:
                path: "{{ item }}"
                state: absent
              loop:
                - trace/launch_output.json
                - inventory/lab.yml
            - name: DELETE PREVIOUS GENERATED VM INVENTORY FILES
              ansible.builtin.file:
                path: "{{ item }}"
                state: absent
              loop: "{{ lookup('ansible.builtin.fileglob', 'inventory/vm_*.yaml', wantlist=True) }}"
            - name: ENSURE TRACE AND INVENTORY DIRECTORIES EXIST
              ansible.builtin.file:
                path: "{{ item }}"
                state: directory
                mode: "{{ default_dir_mode }}"
              loop:
                - trace
                - inventory
          rescue:
            - name: HANDLE LOCAL DIRECTORY PREP ERROR
              ansible.builtin.fail:
                msg: A failure occurred while cleaning or preparing local trace and inventory directories.
        - name: BUILD UNIFIED LAB DECLARATION YAML FILE
          ansible.builtin.template:
            src: templates/lab_unified.yaml.j2
            dest: "{{ lab_config_path }}"
            mode: "{{ default_file_mode }}"
          tags: yaml_labfile
        - name: LAUNCH VIRTUAL MACHINE
          ansible.builtin.command:
            cmd: "$HOME/vm/scripts/lab-startup.py --json {{ lab_config_path }}"
            chdir: "{{ lab_path }}"
          register: launch
          failed_when: launch.rc != 0 and 'already' not in launch.stdout
          changed_when: "' started!' in (launch.stdout | default('')) or ((launch.stdout | default('') | trim) is match('^\\{'))"
          tags: launch_lab
        - name: SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE
          ansible.builtin.copy:
            content: "{{ launch.stdout | default('') }}\n{{ launch.stderr | default('') }}"
            dest: "{{ launch_output_file }}"
            mode: "{{ default_file_mode }}"
          when: "launch.stdout is defined and (launch.stdout | trim) is match('^\\{')"
      rescue:
        - name: HANDLE PROVISIONING ERROR
          ansible.builtin.fail:
            msg: An error occurred during lab provisioning, image customization, or VM launch. Playbook halted.
    - name: HANDLE LAB INVENTORY GENERATION
      tags: launch_lab
      block:
        - name: FETCH EXISTING LAUNCH OUTPUT
          ansible.builtin.fetch:
            src: "{{ launch_output_file }}"
            dest: trace/launch_output.json
            flat: true
            mode: "{{ default_file_mode }}"
          ignore_errors: true
          register: fetch_output
        - name: GENERATE INVENTORY FILES FROM TRACE
          delegate_to: localhost
          when: fetch_output is not failed
          block:
            - name: SET FACT FOR LAUNCH JSON
              ansible.builtin.set_fact:
                launch_json: "{{ lookup('file', 'trace/launch_output.json') | from_json }}"
            - name: GENERATE ONE INVENTORY FILE PER VM
              ansible.builtin.template:
                src: templates/inventory_vm.yaml.j2
                dest: "inventory/vm_{{ item.vm_name }}.yaml"
                mode: "{{ default_file_mode }}"
              loop: "{{ launch_json.vms }}"
              loop_control:
                label: "{{ item.vm_name }}"
          rescue:
            - name: HANDLE LOCAL INVENTORY TEMPLATING ERROR
              ansible.builtin.fail:
                msg: Failed to parse the trace JSON or template the individual VM inventory files.
      rescue:
        - name: HANDLE MISSING INVENTORY TRACE
          ansible.builtin.debug:
            msg: Inventory generation skipped. No valid launch_output.json found.
```
Here are the seven most important points about how this playbook uses Ansible modules:
Structured error handling:
: The playbook implements comprehensive error handling with `block`, `rescue`, and `always` sections across multiple stages, providing clear failure messages and graceful degradation when issues occur.
Idempotent operations:
: Tasks use parameters such as `creates`, `force: false`, and `when` conditions to ensure idempotency. The playbook can be run multiple times without causing unintended changes or duplicate operations.
Early validation checks:
: The playbook uses `ansible.builtin.shell` with pipes and `grep` to check whether VMs are already running before proceeding, preventing conflicts and unnecessary work.
Fact-based conditional execution:
: The playbook sets facts such as `all_vms_stopped` with `set_fact`, then uses them in `when` conditions to control the entire provisioning block, keeping the logic clear and maintainable.
Delegation for local operations:
: Tasks use `delegate_to: localhost` to perform local file operations (cleaning old traces, templating inventory files) while other tasks run on the remote hypervisors, optimizing the workflow.
Loop control with labels:
: The playbook uses `loop_control` with the `label` parameter to make loop output cleaner and more readable, especially when iterating over VM lists with complex data structures.
Return value capture and validation:
: Tasks use `register` to capture command output, then validate it with `changed_when` and `failed_when` conditions, examining both return codes and output content for precise state detection.
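As a minimal illustration of the first point, here is the `block`/`rescue`/`always` pattern in isolation. The task names and the failing command below are purely illustrative:

```yaml
- name: ILLUSTRATE STRUCTURED ERROR HANDLING
  block:
    - name: TASK THAT MAY FAIL
      ansible.builtin.command: /bin/false
  rescue:
    - name: REPORT A CLEAR FAILURE MESSAGE
      ansible.builtin.debug:
        msg: The task failed. Continuing gracefully.
  always:
    - name: RUN IN ALL CASES
      ansible.builtin.debug:
        msg: This task runs whether the block failed or not.
```

When a task inside `block` fails, Ansible jumps to the `rescue` section instead of aborting the play, and the `always` section runs in every case.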
### Step 4: Create the Jinja2 template files called by the playbook
Jinja2 templates serve as dynamic configuration generators that transform declarative inventory data into runtime-specific configuration files. The **`ansible.builtin.template`** module processes these templates by rendering Jinja2 expressions with actual values from host variables and facts, enabling infrastructure-as-code workflows where the same template adapts to different deployment contexts.
Here is the `templates/inventory_vm.yaml.j2` code:
```yaml=
---
vms:
hosts:
{{ item.vm_name }}:
ansible_host: {{ item.ipv6_link_local.split('%')[0] }}%enp0s1
ansible_port: 22
ansible_ssh_user: "{% raw %}{{ vm_username }}{% endraw %}"
```
This `inventory_vm.yaml.j2` template generates an individual Ansible inventory file for each virtual machine by extracting the VM name and IPv6 link-local address from the launch output JSON. It thereby creates the SSH connection parameters that allow Ansible to manage the newly provisioned VMs through their out-of-band management interface.
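For instance, assuming the launch trace contains an entry such as `{"vm_name": "vmXX", "ipv6_link_local": "fe80::baad:caff:fefe:XX%tap4"}` (the `%tap4` zone suffix is hypothetical), the rendered inventory file would read:

```yaml
---
vms:
  hosts:
    vmXX:
      ansible_host: fe80::baad:caff:fefe:XX%enp0s1
      ansible_port: 22
      ansible_ssh_user: "{{ vm_username }}"
```

Note how `split('%')[0]` strips the hypervisor-side zone identifier before the DevNet VM's own `%enp0s1` zone is appended, and how the `{% raw %}` block keeps `{{ vm_username }}` literal so that it is resolved at run time from the vault-protected variables.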
Here is the `templates/lab_unified.yaml.j2` code:
```yaml=
# Based on template at:
# https://gitlab.inetdoc.net/labs/startup-scripts/-/blob/main/templates/
kvm:
vms:
{% for item in groups['vms'] %}
{% set vm_template_vars = {
'tapnum': hostvars[item].tapnum,
'vm_name': hostvars[item].vm_name
} %}
- {{ lookup(
'ansible.builtin.template',
'host_vars/' ~ item ~ '.yaml',
template_vars=vm_template_vars
) | indent(6) }}
{% endfor %}
```
This slightly more complex `lab_unified.yaml.j2` template orchestrates a nested templating workflow: it iterates over the **vms** inventory group and dynamically includes each VM's `host_vars` file through the `lookup('ansible.builtin.template')` plugin. All VM declarations are thus consolidated into a single unified YAML configuration file that the `lab-startup.py` script consumes to launch the virtual machines with their complete cloud-init customizations at once.
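Assuming each `host_vars` file declares keys such as `vm_name` and `tapnum` (illustrative keys and values only), the unified file produced for two VMs would have the following shape. This also shows why the `indent(6)` filter is needed: the continuation lines of each included fragment must align under the content of its list item:

```yaml
kvm:
  vms:
    - vm_name: vmXX
      tapnum: 4
    - vm_name: vmYY
      tapnum: 5
```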
### Step 5: Run the `02_pull_customize_run.yaml` playbook
Here is a sample output of the playbook execution when the VMs are already started.
```bash
ansible-playbook 02_pull_customize_run.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
PLAY [PULL, CUSTOMIZE, AND RUN CLOUD LAB] ***********************************
TASK [Gathering Facts] ******************************************************
ok: [hypervisor_name]
TASK [CHECK IF VIRTUAL MACHINES ALREADY RUN] ********************************
ok: [hypervisor_name] => (item=vmXX)
ok: [hypervisor_name] => (item=vmYY)
TASK [SET FACT FOR VMS STATUS] **********************************************
ok: [hypervisor_name]
TASK [DOWNLOAD DEBIAN CLOUD IMAGE QCOW2 FILE] *******************************
skipping: [hypervisor_name]
TASK [COPY CLOUD IMAGE TO VM IMAGE FILE FOR ALL VMS] ************************
skipping: [hypervisor_name] => (item=vmXX)
skipping: [hypervisor_name] => (item=vmYY)
skipping: [hypervisor_name]
TASK [RESIZE EACH VM FILESYSTEM] ********************************************
skipping: [hypervisor_name] => (item=vmXX)
skipping: [hypervisor_name] => (item=vmYY)
skipping: [hypervisor_name]
TASK [DELETE OLD LAUNCH OUTPUT] *********************************************
skipping: [hypervisor_name]
TASK [REMOVE OLD TRACE AND INVENTORY FILES] *********************************
skipping: [hypervisor_name] => (item=trace/launch_output.json)
skipping: [hypervisor_name] => (item=inventory/lab.yml)
skipping: [hypervisor_name]
TASK [DELETE PREVIOUS GENERATED VM INVENTORY FILES] *************************
skipping: [hypervisor_name] => (item=/home/etu/labs/lab01/inventory/vm_vmXX.yaml)
skipping: [hypervisor_name] => (item=/home/etu/labs/lab01/inventory/vm_vmYY.yaml)
skipping: [hypervisor_name]
TASK [ENSURE TRACE AND INVENTORY DIRECTORIES EXIST] *************************
skipping: [hypervisor_name] => (item=trace)
skipping: [hypervisor_name] => (item=inventory)
skipping: [hypervisor_name]
TASK [BUILD UNIFIED LAB DECLARATION YAML FILE] ******************************
skipping: [hypervisor_name]
TASK [LAUNCH VIRTUAL MACHINE] ***********************************************
skipping: [hypervisor_name]
TASK [SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE] ******************************
skipping: [hypervisor_name]
TASK [FETCH EXISTING LAUNCH OUTPUT] *****************************************
ok: [hypervisor_name]
TASK [SET FACT FOR LAUNCH JSON] *********************************************
ok: [hypervisor_name -> localhost]
TASK [GENERATE ONE INVENTORY FILE PER VM] ***********************************
ok: [hypervisor_name -> localhost] => (item=vmXX)
ok: [hypervisor_name -> localhost] => (item=vmYY)
PLAY RECAP ******************************************************************
hypervisor_name : ok=6 changed=0 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0
```
### Step 6: Check Ansible SSH access to the target virtual machines
Here you will use the **ping** Ansible module directly from the command line.
```bash
ansible vms -m ping \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
> You use the **vms** group entry defined in `hosts.yaml`, the main inventory file set up in Part 1.
```bash=
vmXX | SUCCESS => {
"changed": false,
"ping": "pong"
}
vmYY | SUCCESS => {
"changed": false,
"ping": "pong"
}
```
You can also check that the inventory contains the **vmXX** and **vmYY** entries with their own parameters.
```bash
ansible-inventory --yaml --list
```
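The output should list both VMs under the **vms** group. Here is an abridged, illustrative result (host names and addresses depend on your lab):

```yaml
all:
  children:
    vms:
      hosts:
        vmXX:
          ansible_host: fe80::baad:caff:fefe:XX%enp0s1
          ansible_port: 22
          ansible_ssh_user: '{{ vm_username }}'
        vmYY:
          ansible_host: fe80::baad:caff:fefe:YY%enp0s1
          ansible_port: 22
          ansible_ssh_user: '{{ vm_username }}'
```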
This completes the **Infrastructure as Code** part of creating virtual machines from scratch. You can now control and configure these new virtual machines from the DevNet VM.
## Part 5: Analyze the network configuration of the new virtual machines
Now that you have access to your newly created virtual machines, let’s review the current network configuration. The main objective is to show that the VRF network configuration is working properly. Although you access each VM through out-of-band SSH, their default routing tables display only in-band network entries.
### Step 1: View the addresses assigned to the out-of-band VLAN
We just have to list the addresses with the `ip` command, using the Ansible **command** module from the DevNet VM.
Replace the `VVVV` placeholder with your out-of-band VLAN identifier in the command below.
```bash
ansible vms -m command -a "ip addr ls dev vlanVVVV" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmXX | CHANGED | rc=0 >>
5: vlanVVVV@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mgmt-vrf state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:04 brd ff:ff:ff:ff:ff:ff
inet 198.18.53.XX/22 metric 100 brd 198.18.55.255 scope global dynamic vlanVVVV
valid_lft 3864sec preferred_lft 3864sec
inet6 2001:678:3fc:34:baad:caff:fefe:XX/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86004sec preferred_lft 14004sec
inet6 fe80::baad:caff:fefe:XX/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
vmYY | CHANGED | rc=0 >>
5: vlanVVVV@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mgmt-vrf state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:05 brd ff:ff:ff:ff:ff:ff
inet 198.18.53.YY/22 metric 100 brd 198.18.55.255 scope global dynamic vlanVVVV
valid_lft 3867sec preferred_lft 3867sec
inet6 2001:678:3fc:34:baad:caff:fefe:YY/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86004sec preferred_lft 14004sec
inet6 fe80::baad:caff:fefe:YY/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
The most important point regarding network configuration is that each VM's `vlanVVVV` interface is enslaved to the `mgmt-vrf` management VRF and carries IPv4 and IPv6 addresses in the **out-of-band** network.
### Step 2: View both out-of-band and in-band routing tables
As with the network addresses, the `ip` command and the Ansible **command** module are used.
The default IPv4 routing table on each VM contains only in-band network entries.
```bash
ansible vms -m command -a "ip route ls" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmYY | CHANGED | rc=0 >>
default via 10.0.228.1 dev vlan230 proto static
10.0.228.0/22 dev vlan230 proto kernel scope link src 10.0.228.YY
vmXX | CHANGED | rc=0 >>
default via 10.0.228.1 dev vlan230 proto static
10.0.228.0/22 dev vlan230 proto kernel scope link src 10.0.228.XX
```
We get the same result with the default IPv6 routing table.
```bash
ansible vms -m command -a "ip -6 route ls" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmYY | CHANGED | rc=0 >>
2001:678:3fc:e6::/64 dev vlan230 proto ra metric 512 expires 2591979sec pref high
fe80::/64 dev enp0s1 proto kernel metric 256 pref medium
fe80::/64 dev vlan230 proto kernel metric 256 pref medium
default nhid 2775262729 via fe80:e6::1 dev vlan230 proto ra metric 512 expires 1779sec pref high
vmXX | CHANGED | rc=0 >>
2001:678:3fc:e6::/64 dev vlan230 proto ra metric 512 expires 2591979sec pref high
fe80::/64 dev enp0s1 proto kernel metric 256 pref medium
fe80::/64 dev vlan230 proto kernel metric 256 pref medium
default nhid 1156289287 via fe80:e6::1 dev vlan230 proto ra metric 512 expires 1779sec pref high
```
We have to specify the VRF context to get routing table entries for the management out-of-band network.
```bash
ansible vms -m command -a "ip route ls vrf mgmt-vrf" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmXX | CHANGED | rc=0 >>
default via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.XX metric 100
172.16.0.2 via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.XX metric 100
172.16.0.9 via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.XX metric 100
198.18.52.0/22 dev vlanVVVV proto kernel scope link src 198.18.53.XX metric 100
198.18.52.1 dev vlanVVVV proto dhcp scope link src 198.18.53.XX metric 100
vmYY | CHANGED | rc=0 >>
default via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.YY metric 100
172.16.0.2 via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.YY metric 100
172.16.0.9 via 198.18.52.1 dev vlanVVVV proto dhcp src 198.18.53.YY metric 100
198.18.52.0/22 dev vlanVVVV proto kernel scope link src 198.18.53.YY metric 100
198.18.52.1 dev vlanVVVV proto dhcp scope link src 198.18.53.YY metric 100
```
Within the management VRF, the IPv4 network configuration is provided by DHCP.
```bash
ansible vms -m command -a "ip -6 route ls vrf mgmt-vrf" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmYY | CHANGED | rc=0 >>
2001:678:3fc:VVVV::/64 dev vlanVVVV proto ra metric 100 expires 86347sec pref medium
fe80::/64 dev vlanVVVV proto kernel metric 256 pref medium
multicast ff00::/8 dev vlanVVVV proto kernel metric 256 pref medium
default nhid 890730963 via fe80::VVVV:1 dev vlanVVVV proto ra metric 100 expires 1747sec pref medium
vmXX | CHANGED | rc=0 >>
2001:678:3fc:VVVV::/64 dev vlanVVVV proto ra metric 100 expires 86347sec pref medium
fe80::/64 dev vlanVVVV proto kernel metric 256 pref medium
multicast ff00::/8 dev vlanVVVV proto kernel metric 256 pref medium
default nhid 2329389769 via fe80::VVVV:1 dev vlanVVVV proto ra metric 100 expires 1747sec pref medium
```
Within the management VRF, the IPv6 network configuration is provided by SLAAC through Router Advertisement (RA) messages.
### Step 3: Run ICMP and ICMPv6 tests to a public Internet address from the out-of-band VLAN
Here are example commands prefixed with `ip vrf exec` to send traffic from the out-of-band network interface. Only sudoers can run commands in the VRF context.
```bash
ansible vms -m command -a "sudo ip vrf exec mgmt-vrf ping -c2 9.9.9.9" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmXX | CHANGED | rc=0 >>
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=51 time=23.3 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=51 time=23.2 ms
--- 9.9.9.9 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 23.180/23.242/23.305/0.062 ms
vmYY | CHANGED | rc=0 >>
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=51 time=23.1 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=51 time=22.5 ms
--- 9.9.9.9 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 22.505/22.786/23.068/0.281 ms
```
```bash
ansible vms -m command -a "sudo ip vrf exec mgmt-vrf ping -c2 2620:fe::fe" \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
```bash=
vmYY | CHANGED | rc=0 >>
PING 2620:fe::fe (2620:fe::fe) 56 data bytes
64 bytes from 2620:fe::fe: icmp_seq=1 ttl=59 time=38.6 ms
64 bytes from 2620:fe::fe: icmp_seq=2 ttl=59 time=39.5 ms
--- 2620:fe::fe ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 38.578/39.035/39.493/0.457 ms
vmXX | CHANGED | rc=0 >>
PING 2620:fe::fe (2620:fe::fe) 56 data bytes
64 bytes from 2620:fe::fe: icmp_seq=1 ttl=59 time=37.6 ms
64 bytes from 2620:fe::fe: icmp_seq=2 ttl=59 time=35.7 ms
--- 2620:fe::fe ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 35.725/36.683/37.641/0.958 ms
```
All the above ICMP and ICMPv6 tests show that there is no packet loss and that packet routing is fully functional with IPv4 and IPv6.
## Part 6: Virtual machine system configuration
Finally, you can use the in-band VLAN network access to perform some system configuration tasks on the virtual machines.
### Step 1: Create a new virtual machine operating system configuration playbook
Create a new [**03_system_bits.yaml**](https://gitlab.inetdoc.net/iac/lab01/-/blob/main/03_system_bits.yaml?ref_type=heads) playbook file.
```yaml=
---
# Configure the system bits and pieces of the virtual machines.
# 1. Configure the VM locales.
# 2. Configure the VM timezone.
# 3. Install the Debian keyring.
- name: CONFIGURE SYSTEM BITS AND PIECES
hosts: vms
become: true
# Do not gather facts about the remote systems before they are accessible
gather_facts: false
pre_tasks:
- name: WAIT FOR VMS TO BECOME ACCESSIBLE
ansible.builtin.wait_for_connection:
delay: 5
sleep: 5
timeout: 300
connect_timeout: 5
- name: WAIT FOR SSH SERVICE
ansible.builtin.wait_for:
port: 22
state: started
timeout: 300
- name: GATHER FACTS
# Gather facts about the remote system once the connection is available
ansible.builtin.setup:
when: ansible_facts | length == 0
tasks:
- name: CONFIGURE VM LOCALES
community.general.locale_gen:
name: fr_FR.UTF-8
state: present
- name: CONFIGURE VM TIMEZONE
community.general.timezone:
name: Europe/Paris
- name: INSTALL DEBIAN KEYRING
ansible.builtin.apt:
name: debian-keyring
state: present
update_cache: true
```
This playbook addresses VM startup time through strategic pre-tasks.
It uses `wait_for_connection` to ensure that VMs are reachable, with:
- a 5-second initial delay
- a 5-second sleep interval between checks
- a 300-second timeout
It also uses `wait_for` to confirm the availability of the SSH service.
Fact gathering is initially disabled and only performed once the connection is established, optimizing the efficiency of the playbook.
These measures ensure that configuration tasks run only when the VMs are fully operational, eliminating potential errors due to premature execution.
### Step 2: Run the `03_system_bits.yaml` playbook
```bash
ansible-playbook 03_system_bits.yaml \
--extra-vars @$HOME/.iac_passwd \
--vault-password-file ${HOME}/.vault_pass.txt
```
Below is the output from a second run where all tasks had already been processed.
```bash=
PLAY [CONFIGURE SYSTEM BITS AND PIECES] *************************************
TASK [WAIT FOR VMS TO BECOME ACCESSIBLE] ************************************
ok: [vmXX]
ok: [vmYY]
TASK [WAIT FOR SSH SERVICE] *************************************************
ok: [vmXX]
ok: [vmYY]
TASK [GATHER FACTS] *********************************************************
ok: [vmXX]
ok: [vmYY]
TASK [CONFIGURE VM LOCALES] *************************************************
ok: [vmXX]
ok: [vmYY]
TASK [CONFIGURE VM TIMEZONE] ************************************************
ok: [vmYY]
ok: [vmXX]
TASK [INSTALL DEBIAN KEYRING] ***********************************************
ok: [vmYY]
ok: [vmXX]
PLAY RECAP ******************************************************************
vmXX : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
vmYY : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
## Conclusion
This lab has demonstrated the power of Infrastructure as Code (IaC) using Ansible to build and configure Debian GNU/Linux virtual machines from scratch. You explored key concepts including:
- Using declarative YAML files to define the desired infrastructure state
- Leveraging Ansible playbooks to automate VM creation and configuration
- Implementing Virtual Routing and Forwarding (VRF) for network isolation
- Configuring out-of-band management access separate from in-band traffic
By following this IaC approach, you were able to rapidly deploy consistent, customized VMs with proper networking setup. The skills and techniques covered provide a foundation for efficiently managing large-scale infrastructure deployments. Moving forward, these IaC practices can be extended to provision more complex environments, integrate with version control systems, and implement continuous deployment pipelines with GitLab. This is the horizon of the next document: [IaC Lab 2 – Using GitLab CI to run Ansible playbooks and build new Debian GNU/Linux virtual machines](https://md.inetdoc.net/s/CPltj12uT).