# IaC Lab 1 -- Use Ansible to build new Debian GNU/Linux Virtual Machines
[toc]
---
> Copyright (c) 2025 Philippe Latu.
Permission is granted to copy, distribute and/or modify this document under the
terms of the GNU Free Documentation License, Version 1.3 or any later version
published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included
in the section entitled "GNU Free Documentation License".
https://inetdoc.net
GitLab repository: https://gitlab.inetdoc.net/iac/lab-01-02/
### Scenario
In this lab, you will explore the basics of using Ansible to build and customize Debian GNU/Linux virtual machines. This is a first illustration of the **Infrastructure as Code** (IaC) **push** method, where the DevNet VM (controlling server) uses Ansible to build a new target system *almost* from scratch.
The key design point here is to use VRF to create an out-of-band automation VLAN
channel that is isolated from the VM users' in-band traffic.

The main stages of the scenario are as follows:
1. We start at the hypervisor shell level by pulling a base virtual machine image from [cloud.debian.org](https://cloud.debian.org/images/cloud/). We then resize the main partition of each virtual machine image file copied from the cloud image.
2. From a networking perspective, virtual machines are customized to use **Virtual Routing and Forwarding** (VRF).
3. The properties of all VMs are declared by using **cloud-init**. All the **cloud-init** parameters are passed from the `host_vars` file of each virtual machine.
We can connect them to any network or VLAN by declaring new interfaces. In this context, the Debian virtual machines must be connected to a switch port in **trunk** mode to enable the use of VLAN tags to identify the relevant broadcast domains.
:::info
If you want to adapt this lab to your own context, you will need to install and configure (or replace) the declarative management scripts used in this document.
On the network side, we use the `switch-conf.py` Python script, which reads a YAML declaration file to configure Open vSwitch tap port interfaces on the hypervisor.
On the virtual machine side, we also use the `lab-startup.py` Python script, which reads another YAML declaration file to set the virtual machine properties such as its network configuration.
All these tools are hosted at this address: https://gitlab.inetdoc.net/labs/startup-scripts
:::
### Objectives
After completing the manipulation steps in this document, you will be able to:
- Use Ansible to automate the creation and configuration of Debian GNU/Linux virtual machines from scratch.
- Implement Virtual Routing and Forwarding (VRF) for network isolation between management and user traffic.
- Create and manage Infrastructure as Code (IaC) using declarative YAML files to define desired infrastructure state.
- Configure out-of-band management access separate from in-band network traffic for virtual machines.
- Develop and execute Ansible playbooks to automate various aspects of infrastructure deployment and configuration.
## Part 1: Configure Ansible on the DevNet VM
We need to configure Ansible and verify that we can access the hypervisor from the DevNet VM via SSH. So, we start by creating the working directory, installing Ansible, and creating a project configuration file.
1. Make the `~/iac/lab01` directory, for example, and navigate to this folder
```bash
mkdir -p ~/iac/lab01 && cd ~/iac/lab01
```
2. Install **ansible** in a Python virtual environment
There are two main ways to set up a new Ansible workspace: distribution packages or a Python virtual environment. Here we choose to install Ansible in a Python virtual environment to take advantage of the latest release.
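For reference, the package-based alternative on a Debian system would look like the following. We do not use it in this lab; it is shown only to illustrate the other option.
```bash
# Alternative to the virtual environment: Debian packaged tools (versions may lag behind PyPI)
sudo apt install ansible ansible-lint
```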
We start by creating a `requirements.txt` file.
```bash
cat << EOF > requirements.txt
ansible
ansible-lint
EOF
```
Then we install the `ansible` and `ansible-lint` tools in a virtual environment called `ansible`.
```bash
python3 -m venv ansible
source ./ansible/bin/activate
pip3 install -r requirements.txt
```
3. Create a new `ansible.cfg` file in the `lab01` directory from the shell prompt
Here is a copy of the [**ansible.cfg**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/ansible.cfg?ref_type=heads) file.
```=
[defaults]
# Use inventory/ folder files as source
inventory=inventory/
host_key_checking = False # Don't worry about RSA Fingerprints
retry_files_enabled = False # Do not create them
deprecation_warnings = False # Do not show warnings
interpreter_python = /usr/bin/python3
[inventory]
enable_plugins = auto, host_list, yaml, ini, toml, script
[persistent_connection]
command_timeout=100
connect_timeout=100
connect_retry_timeout=100
```
:::info
The key point of this configuration file is to use the `inventory/` directory for all files created during playbook processing. This is the way to achieve dynamic inventory at virtual machine startup.
:::
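To fix ideas, here is the layout the `inventory/` directory will reach by the end of this lab: the static `hosts.yml` file written by hand in Part 2, and the `lab.yml` file generated later by the dynamic inventory script.
```
inventory/
├── hosts.yml   # static groups: hypervisors and vms
└── lab.yml     # generated later by build_lab_inventory.py (VM SSH parameters)
```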
## Part 2: Designing the Declarative Part
Now that the Ansible automation tools are installed, it is time to plan for the desired state of our lab infrastructure.
This is the most challenging aspect of the task, as we start from a blank page. A starting point must be chosen, followed by a description of the expected results.
The chosen approach is to start at the bare-metal hypervisor level and then move on to the virtual machine and its network configuration.
- When the hypervisor system starts, all provisioned **tap** interfaces are owned by this Type 2 hypervisor.
- These tap interfaces connect virtual machines to an Open vSwitch switch like a patch cable. The tap interface names are also used to refer to the switch ports. All switch port configuration instructions use these tap interface names.
> In our context, we use a switch port in trunk mode to forward the traffic of multiple broadcast domains or VLANs. Each frame passing through this trunk port uses an IEEE 802.1Q tag to identify the broadcast domain to which it belongs.
- Each virtual machine uses a tap interface number to connect its network interface to the switch port.
> In our context, we want the virtual machine network interface to use a dedicated Virtual Routing and Forwarding (VRF) table for automation operations. This keeps the virtual machine network traffic and the automation traffic completely independent and isolated (see the command sketch after this list).
- The VLAN used for automation operations is referred to as the **Out of Band** network because it is reserved exclusively for management traffic and does not carry any user traffic.
- All other VLANs used for lab traffic are referred to as the **In Band** network because all user traffic passes through them.
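To make the trunk and VRF statements above more concrete, here is a hedged sketch of the equivalent manual commands. The port name `tapXXX`, the VLAN IDs `52` (out-of-band) and `230` (in-band), and the interface name `vlan52` are placeholders taken from this lab; in practice these operations are performed by the `switch-conf.py` script on the hypervisor and by **cloud-init**/netplan inside the virtual machines.
```bash
# On the hypervisor: put the Open vSwitch port in trunk mode with the allowed VLANs
sudo ovs-vsctl set port tapXXX vlan_mode=trunk trunks=52,230

# Inside a virtual machine: create the management VRF and enslave the out-of-band VLAN interface
sudo ip link add mgmt-vrf type vrf table 52
sudo ip link set dev mgmt-vrf up
sudo ip link set dev vlan52 master mgmt-vrf
```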
Now that all statements are in place, they need to be translated into YAML description files that reflect the desired state.
### Step 1: The inventory directory and its content
When using Ansible, it is important to distinguish between the inventory directory and the host variables directory.
We can start by creating the `inventory` and `host_vars` directories
```bash
mkdir ~/iac/lab01/{inventory,host_vars}
```
The files in the `inventory/` directory contain the necessary information, such as host names, groups, and attributes, required to establish network connections with these hosts. Here is a copy of the `inventory/hosts.yml` file. See [**hosts.yaml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/inventory/hosts.yml) in the Git repository.
```yaml=
---
hypervisors:
  hosts:
    eve:
      # ansible_host variable is defined in $HOME/.ssh/config file
      # Host eve
      #   HostName fe80::XXXX:1%%enp0s1
      #   User etudianttest
      #   Port 2222
      ansible_host: eve
  vars:
    ansible_ssh_user: "{{ hypervisor_user }}"
    ansible_ssh_pass: "{{ hypervisor_pass }}"
    ansible_ssh_port: 2222
vms:
  hosts:
    vmXXX:
    vmYYY:
all:
  children:
    hypervisors:
    vms:
```
The YAML description above contains two groups: **hypervisors** and **vms**. Within the **hypervisors** group, **eve** is currently the only member, listed with the necessary SSH network connection parameters.
The **vms** group has two members, namely **vmXXX** and **vmYYY**. At this stage, we do not know much except for the fact that we are going to instantiate two virtual machines.
The SSH network connection parameters for all virtual machines will be provided after they are started and the dynamic inventory Python script is run.
### Step 2: The host_vars directory and its content
The content of the `host_vars` directory will now be examined.
YAML description files for each host in the lab infrastructure can be found there.
Here are copies of:
- `host_vars/eve.yml`
- `host_vars/vmXXX.yml`
- `host_vars/vmYYY.yml`
:::warning
Be sure to edit these files and replace the placeholders **XXX** and **YYY** with the appropriate real names.
:::
- The hypervisor [**host_vars/eve.yml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/host_vars/eve.yml?ref_type=heads) file:
```yaml=
---
lab_name: iac-lab-01-02
lab_path: "{{ ansible_env.HOME }}/labs/{{ lab_name }}"
cloud_url: cloud.debian.org/images/cloud/trixie/daily/latest/debian-13-genericcloud-amd64-daily.qcow2
image_name: debian-13-amd64.qcow2
filesystem_resize: 32 # GB
oob_vlan: _OOB_VLAN_ID_
switches:
  - name: dsw-host
    ports:
      - name: tapXXX
        type: OVSPort
        vlan_mode: trunk
        trunks: ["{{ oob_vlan }}", 230]
      - name: tapYYY
        type: OVSPort
        vlan_mode: trunk
        trunks: ["{{ oob_vlan }}", 230]
```
The first parameters define the lab paths and the image pull source that are common to all virtual machines in this lab.
The hypervisor YAML file `eve.yml` also contains the list of tap interfaces to configure, as specified in the design. The configuration parameters for each listed tap interface include the switch port name, the trunk port mode, and the list of allowed VLANs in the trunk.
- The virtual machine [**host_vars/vmXXX.yml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/host_vars/vmXXX.yml?ref_type=heads) file:
```yaml=
vm_name: vmXXX
os: linux
master_image: debian-13-amd64.qcow2
force_copy: false
memory: 2048
tapnum: XXX
cloud_init:
  force_seed: false
  users:
    - name: "{{ vm_username }}"
      groups: adm, sudo, users
      sudo: ALL=(ALL) NOPASSWD:ALL
      ssh_authorized_keys:
        - "{{ vm_userkey }}"
  hostname: "{{ vm_name }}"
  netplan:
    network:
      version: 2
      renderer: networkd
      ethernets:
        enp0s1:
          dhcp4: false
          dhcp6: false
      vlans:
        "vlan{{ oob_vlan }}":
          id: "{{ oob_vlan }}"
          link: enp0s1
          dhcp4: true
          dhcp6: false
          accept-ra: true
        vlan230:
          id: 230
          link: enp0s1
          dhcp4: false
          dhcp6: false
          accept-ra: true
          addresses:
            - 10.0.{{ 228 + tapnum|int // 256 }}.{{ tapnum|int % 256 }}/22
          routes:
            - to: default
              via: 10.0.228.1
          nameservers:
            addresses:
              - 172.16.0.2
      vrfs:
        mgmt-vrf:
          table: "{{ oob_vlan }}"
          interfaces:
            - "vlan{{ oob_vlan }}"
```
The virtual machine YAML files reference the hypervisor tap interface connection and contain the **in-band** network interface VLAN configuration parameters.
They also contain all the **cloud-init** instructions used to customize the virtual machine at startup.
Note that IPv4 addresses are calculated from the tap interface number. This prevents students from assigning the same IPv4 address to different virtual machines.
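As a quick sanity check of this arithmetic, here is a small example assuming a hypothetical tap interface number of 5:
```bash
# Reproduces the Jinja2 expression 10.0.{{ 228 + tapnum|int // 256 }}.{{ tapnum|int % 256 }}/22
tapnum=5
echo "10.0.$((228 + tapnum / 256)).$((tapnum % 256))/22"
# -> 10.0.228.5/22
```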
## Part 3: Access the hypervisor from the DevNet virtual machine using Ansible
Now that the minimal inventory and host variables are in place, it is necessary to verify hypervisor accessibility before pulling virtual machine images.
In this part, we will create a vault to store all secrets in the home directory of the DevNet virtual machine user, since we are not using an external service to handle sensitive information.
### Step 1: Check SSH access from DevNet VM to the Hypervisor
Since we have already completed the `~/.ssh/config` file, we are ready to test the SSH connection to the hypervisor called `eve`.
Here is an extract from the user's SSH configuration file:
```bash=
Host eve
    HostName fe80::XXXX:1%%enp0s1
    User etudianttest
    Port 2222
```
We make a minimal SSH connection and check for success with a return code of 0.
```bash
ssh -q eve exit
echo $?
0
```
### Step 2: Create a new Ansible project vault file
Back in the DevNet VM console, create a new vault file named `.iac_passwd.yml` and enter the vault password that will be used to protect all the secrets you want to store.
```bash
ansible-vault create $HOME/.iac_passwd.yml
New Vault password:
Confirm New Vault password:
```
This opens the default editor defined by the `$EDITOR` environment variable.
There we enter two sets of variables: one for hypervisor access from our development VM called DevNet, and one for passwordless SSH access to all target VMs created by Ansible playbooks.
```bash=
hypervisor_user: XXXXXXXXXX
hypervisor_pass: YYYYYYYYYY
vm_username: admin
vm_userkey: ssh-ed25519 AAAA...
```
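If these secrets need to be reviewed or changed later, the vault file can be reopened with the usual `ansible-vault` subcommands:
```bash
ansible-vault view $HOME/.iac_passwd.yml
ansible-vault edit $HOME/.iac_passwd.yml
```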
### Step 3: Verify Ansible communication with the Hypervisor
Now we can use the Ansible **ping** module to connect to the `eve` hypervisor entry defined in the inventory file.
```bash
ansible eve -m ping --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
eve | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
Since the Ansible ping is successful, we can proceed with playbooks to create new virtual machines.
## Part 4: Designing the procedural part
In [Part 2](#Part-2-Designing-the-Declarative-Part), we translated the lab design decisions into declarative YAML files to evaluate the desired state of the two virtual machines' network connections. Now we need to use this declarative information in procedures to effectively build, customize, and run the virtual machines.
To build and launch virtual machines, we must first prepare the folders and get access to the scripts that start virtual machines. Next, we need to configure the hypervisor switch ports used to connect virtual machines in **trunk** mode.
### Step 1: Preparation stage at the Hypervisor level
This is a copy of the first Ansible Playbook [**01_prepare.yml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/01_prepare.yml), which includes two categories of procedures: preparing the lab directories and configuring the hypervisor switch ports.
```yaml=
---
# The purpose of this playbook is to prepare the lab environment for the VMs.
# It performs the following steps:
# 1. Check the permissions of the masters directory to determine if it is accessible.
# 2. Ensure the required directories exist.
# 3. Create symbolic links to the masters directory.
# 4. Create the switch configuration file.
# 5. Configure the hypervisor switch ports.
# 6. Save and fetch the switch configuration output.
# The playbook is executed on the hypervisors group of hosts.
- name: PREPARE LAB ENVIRONMENT
  hosts: hypervisors
  vars:
    masters_dir: /var/cache/kvm/masters
  tasks:
    - name: CHECK MASTERS DIRECTORY PERMISSIONS
      ansible.builtin.shell: |
        if [ -r "{{ masters_dir }}" ] && [ -x "{{ masters_dir }}" ]; then
          exit 0
        else
          echo "Directory {{ masters_dir }} is not readable or executable"
          exit 1
        fi
      changed_when: false
      register: perms_check
      failed_when: perms_check.rc != 0
    - name: ENSURE REQUIRED DIRECTORIES EXIST
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        mode: "0755"
      loop:
        - "{{ ansible_env.HOME }}/labs"
        - "{{ lab_path }}"
        - "{{ lab_path }}/fragments"
    - name: CREATE SYMBOLIC LINK
      ansible.builtin.file:
        path: "{{ ansible_env.HOME }}/masters"
        src: "{{ masters_dir }}"
        state: link
        follow: false
    - name: CREATE YAML SWITCH CONFIGURATION
      ansible.builtin.template:
        src: templates/switch.yaml.j2
        dest: "{{ lab_path }}/switch.yaml"
        mode: "0644"
      loop: "{{ hostvars[inventory_hostname].switches }}"
    - name: CONFIGURE HYPERVISOR SWITCH PORTS
      ansible.builtin.command:
        cmd: "{{ ansible_env.HOME }}/masters/scripts/switch-conf.py {{ lab_path }}/switch.yaml"
        chdir: "{{ lab_path }}"
      register: switch_conf_result
      changed_when: "'changed to' in switch_conf_result.stdout"
      failed_when: switch_conf_result.rc != 0
    - name: SAVE AND FETCH SWITCH CONFIGURATION OUTPUT
      block:
        - name: SAVE SWITCH CONFIGURATION OUTPUT
          ansible.builtin.copy:
            content: "{{ switch_conf_result.stdout }}\n{{ switch_conf_result.stderr }}"
            dest: "{{ lab_path }}/switch-conf.log"
            mode: "0644"
        - name: FETCH SWITCH CONFIGURATION OUTPUT
          ansible.builtin.fetch:
            src: "{{ lab_path }}/switch-conf.log"
            dest: trace/switch-conf.log
            flat: true
            mode: "0644"
      rescue:
        - name: HANDLE ERROR IN SAVING OR FETCHING SWITCH CONFIGURATION OUTPUT
          ansible.builtin.debug:
            msg: An error occurred while saving or fetching the switch configuration output.
```
The three most important points about using Ansible modules in this playbook are:
File Operations
: The `ansible.builtin.file` module is used extensively for directory creation and symbolic link management.
Template Rendering
: The `ansible.builtin.template` module is used to create a YAML switch configuration file, demonstrating Ansible's ability to generate dynamic content.
Command Execution
: The `ansible.builtin.command` module is used to run a custom Python script to configure switch ports.
When we run the playbook, we get the following output.
```bash
ansible-playbook 01_prepare.yml --extra-vars @$HOME/.iac_passwd.yml
```
```bash=
PLAY [PREPARE LAB ENVIRONMENT] *************************************************
TASK [Gathering Facts] *********************************************************
ok: [eve]
TASK [CHECK MASTERS DIRECTORY PERMISSIONS] *************************************
ok: [eve]
TASK [ENSURE REQUIRED DIRECTORIES EXIST] ***************************************
ok: [eve] => (item=/home/etudianttest/labs)
changed: [eve] => (item=/home/etudianttest/labs/iac-lab-01-02)
changed: [eve] => (item=/home/etudianttest/labs/iac-lab-01-02/fragments)
TASK [CREATE SYMBOLIC LINK] ****************************************************
ok: [eve]
TASK [CREATE YAML SWITCH CONFIGURATION] ****************************************
changed: [eve] => (item={'name': 'dsw-host', 'ports': [{'name': 'tapXXX', 'type': 'OVSPort', 'vlan_mode': 'trunk', 'trunks': [52, 230]}, {'name': 'tapYYY', 'type': 'OVSPort', 'vlan_mode': 'trunk', 'trunks': [52, 230]}]})
TASK [CONFIGURE HYPERVISOR SWITCH PORTS] ***************************************
ok: [eve]
TASK [SAVE SWITCH CONFIGURATION OUTPUT] ****************************************
changed: [eve]
TASK [FETCH SWITCH CONFIGURATION OUTPUT] ***************************************
changed: [eve]
PLAY RECAP *********************************************************************
eve : ok=8 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Step 2: Develop scripts to resize VM images and complete inventory entries
A Bash script handles the image resizing and a Python script builds the dynamic inventory, keeping the two operations separate.
#### Increase virtual machine storage capacity
Once the generic cloud image file has been pulled from cloud.debian.org, the first thing we need to do is resize its main partition to increase the amount of space available.
Here is the [**resize.sh**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/resize.sh) script which is based on the `qemu-img` and `virt-resize` commands. It takes two input parameters: the filename of the virtual machine image and the amount of storage space to add.
```bash=
#!/bin/bash

resize_cmd="virt-resize"

# Check if virt-resize command is available
if ! command -v "${resize_cmd}" &>/dev/null; then
    printf "Command '%s' not found. Please install 'libguestfs-tools' package.\n" "${resize_cmd}"
    exit 1
fi

# Validate input arguments
if [[ $# -ne 2 ]]; then
    printf "Usage: %s <vm> <size>\n" "$0"
    exit 1
fi

vm="$1"
size="$2"

# Resize the VM disk image
if ! qemu-img resize "${vm}" "+${size}G"; then
    printf "Failed to resize the VM disk image.\n"
    exit 1
fi

# Backup the original VM disk image
if ! cp "${vm}" "${vm}.orig"; then
    printf "Failed to create a backup of the VM disk image.\n"
    exit 1
fi

# Expand the partition using virt-resize
if ! "${resize_cmd}" --expand /dev/sda1 "${vm}.orig" "${vm}"; then
    printf "Failed to expand the partition using %s.\n" "${resize_cmd}"
    rm "${vm}.orig"
    exit 1
fi

# Remove the backup
rm "${vm}.orig"

# Get VM info and extract the virtual size
if ! info=$(qemu-img info "${vm}"); then
    printf "Failed to get VM info.\n"
    exit 1
fi

vm_name=$(basename "${vm}" .qcow2)
grep "virtual size:" <<<"${info}" >"${vm_name}_resized.txt"

printf "VM disk image resized successfully.\n"
exit 0
```
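For example, a manual run on the hypervisor with the value used in this lab (`filesystem_resize: 32`) would look like this; the playbook in Step 3 issues the same call through the `ansible.builtin.script` module.
```bash
./resize.sh vmXXX.qcow2 32
```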
#### Build dynamic inventory
After starting the virtual machines, we need to update the Ansible inventory with a new YAML file created from the output messages of the startup operations. The YAML entry should contain the name of the virtual machine and its IPv6 link-local address within the out-of-band management VLAN.
Here is the source code of the [**build_lab_inventory.py**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/build_lab_inventory.py):
```python=
#!/usr/bin/env python3
import os
import re
import sys
import yaml

"""
VM launch trace file
. The .log file is generated by the ovs-startup.sh script with ANSI color codes
. The .save file is text only
"""
basename = "launch_output"
traceFile = "trace/" + basename + ".log"
clean_traceFile = "trace/" + basename + ".save"

GROUPNAME = "vms"
VM_PATTERN = "Virtual machine filename"
ADDRESS_PATTERN = "IPv6 LL address"
ANSIBLE_SSH_PORT = 22


def clean_ansi_codes(input_file, output_file):
    """Remove ANSI escape sequences from input file."""
    ansi_escape = re.compile(r"\x9B|\x1B\[[0-?]*[ -/]*[@-~]")
    with open(input_file, "r") as src, open(output_file, "w") as dst:
        for line in src:
            dst.write(ansi_escape.sub("", line))


def extract_vm_info(lines):
    """Extract VM name and address from trace file lines."""
    hosts = {}
    vm_name = None
    for line in lines:
        line = line.strip()
        if re.search(VM_PATTERN, line) and not vm_name:
            vm_name = line.split(":", 1)[1].strip().split(".")[0]
            continue
        if vm_name and re.search(ADDRESS_PATTERN, line):
            address = line.split(" :", 1)[1].strip().split("%")[0]
            vm_address = f"{address}%enp0s1"
            hosts[vm_name] = {
                "ansible_host": vm_address,
                "ansible_port": ANSIBLE_SSH_PORT,
            }
            vm_name = None
    return hosts


def main():
    if not os.path.isfile(traceFile):
        print("Virtual machines launch trace file does not exist.", file=sys.stderr)
        print("Are the virtual machines running?", file=sys.stderr)
        sys.exit(1)

    # Clean ANSI codes
    clean_ansi_codes(traceFile, clean_traceFile)

    # Read and process trace file
    with open(clean_traceFile, "r") as src:
        hosts = extract_vm_info(src)

    # Prepare inventory data
    inventory = {
        GROUPNAME: {
            "hosts": hosts,
            "vars": {
                "ansible_ssh_user": "{{ vm_username }}",
            },
        }
    }

    # Write inventory
    with open("inventory/lab.yml", "w") as dst:
        yaml.dump(inventory, dst, sort_keys=False)


if __name__ == "__main__":
    main()
```
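For illustration, with two virtual machines whose link-local addresses end in `:4` and `:5` (hypothetical values that happen to match the outputs shown later in this document), the generated `inventory/lab.yml` would look roughly like this:
```yaml
vms:
  hosts:
    vmXXX:
      ansible_host: fe80::baad:caff:fefe:4%enp0s1
      ansible_port: 22
    vmYYY:
      ansible_host: fe80::baad:caff:fefe:5%enp0s1
      ansible_port: 22
  vars:
    ansible_ssh_user: '{{ vm_username }}'
```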
### Step 3: Create an Ansible playbook to orchestrate the script operations
We are now ready to pull the base virtual machine image, make a copy for each lab VM, resize each VM's main partition, and build the inventory.
In this step, we develop a playbook called [**02_pull_customize_run.yml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/02_pull_customize_run.yml) that calls the scripts developed in the previous steps.
Here is a diagram of the playbook logic:
```mermaid
flowchart TD
start[Start of playbook] --> checkVM[Check if VMs are running]
checkVM --> setVMStatus[Set all_vms_stopped]
setVMStatus --> decision{all_vms_stopped?}
decision -- Yes --> downloadImage[Download Debian cloud image]
decision -- No --> skipToInventory[Skip to inventory generation]
downloadImage --> copyImage[Copy image for each VM]
copyImage --> resizeFS[Resize filesystems]
resizeFS --> deleteLogs[Delete log files]
deleteLogs --> deleteLocalFiles[Delete local trace/launch and inventory/lab.yml files]
deleteLocalFiles --> buildDeclaration[BUILD LAB DECLARATION YAML FILE]
buildDeclaration --> ensureDirs[Create trace and inventory directories]
ensureDirs --> createFragDir[Create fragments directory]
createFragDir --> createHeader[Create YAML header]
createHeader --> createDecl[Create declaration for each VM]
createDecl --> checkChanges[Check for changes in declarations]
checkChanges --> merge[Merge into lab.yaml]
merge --> cleanFragments[Clean temporary files]
cleanFragments --> launchVM[Launch virtual machines]
launchVM --> setStarted[Set all_vms_started]
setStarted --> inventoryGen[HANDLE LAB INVENTORY GENERATION]
skipToInventory --> inventoryGen
inventoryGen --> saveLogs[Save launch messages]
saveLogs --> fetchLogs[Fetch launch logs]
fetchLogs --> genInventory[Generate inventory file]
genInventory --> finish[End of playbook]
subgraph "Executed only if all_vms_stopped = true"
downloadImage
copyImage
resizeFS
deleteLogs
deleteLocalFiles
buildDeclaration
ensureDirs
createFragDir
createHeader
createDecl
checkChanges
merge
cleanFragments
launchVM
end
```
```yaml=
---
# The purpose of this playbook is to perform the following actions:
# - Check if VMs are already running
# - Download a virtual machine image from cloud.debian.org if needed
# - Make a copy of the image for each virtual machine if needed
# - Resize each lab VM filesystems if needed
# - Build the lab declaration YAML file
# - Launch the virtual machines if needed
# - Generate the lab inventory file
# The playbook is executed on the hypervisors group of hosts.
- name: PULL, CUSTOMIZE, AND RUN CLOUD LAB
  vars:
    lab_config_path: "{{ lab_path }}/lab.yaml"
    vm_fragments_dir: "{{ lab_path }}/fragments"
    launch_output_file: "{{ lab_path }}/launch_output.log"
    default_file_mode: "0644"
    default_dir_mode: "0755"
  hosts: hypervisors
  tasks:
    - name: CHECK IF A VIRTUAL MACHINE ALREADY RUNS
      ansible.builtin.shell:
        cmd: |
          set -o pipefail
          if $(pgrep -af -U ${USER} | grep -q "={{ hostvars[item].vm_name }}\.qcow2 "); then
            echo "{{ hostvars[item].vm_name }} is already running!"
            exit 1
          fi
          exit 0
        executable: /bin/bash
      register: running_vm
      changed_when: running_vm.rc != 0
      failed_when: false
      with_inventory_hostnames:
        - vms
      tags:
        - launch_lab
    - name: SET FACT FOR VMS STATUS
      ansible.builtin.set_fact:
        all_vms_stopped: "{{ (running_vm.results | map(attribute='rc') | select('eq', 0) | list | length == running_vm.results | length) }}"
    - name: DOWNLOAD DEBIAN CLOUD IMAGE QCOW2 FILE
      ansible.builtin.get_url:
        url: "https://{{ cloud_url }}"
        dest: "{{ lab_path }}/{{ image_name }}"
        mode: "{{ default_file_mode }}"
      when: all_vms_stopped
    - name: COPY CLOUD IMAGE TO VM IMAGE FILE FOR ALL VMS
      ansible.builtin.copy:
        src: "{{ lab_path }}/{{ image_name }}"
        dest: "{{ lab_path }}/{{ item }}.qcow2"
        remote_src: true
        mode: "{{ default_file_mode }}"
        force: false
      with_inventory_hostnames:
        - vms
      when: all_vms_stopped
    - name: RESIZE EACH VM FILESYSTEM
      ansible.builtin.script:
        cmd: "./resize.sh {{ item }}.qcow2 {{ filesystem_resize }}"
        chdir: "{{ lab_path }}"
        creates: "{{ lab_path }}/{{ item }}_resized.txt"
      with_inventory_hostnames:
        - vms
      when: all_vms_stopped
    - name: DELETE LAUNCH OUTPUT MESSAGES LOG FILE
      ansible.builtin.file:
        path: "{{ launch_output_file }}"
        state: absent
      tags:
        - launch_lab
      when: all_vms_stopped
    - name: DELETE LOCAL LAUNCH TRACE FILES AND LAB INVENTORY
      delegate_to: localhost
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - trace/launch_output.log
        - inventory/lab.yml
      when: all_vms_stopped
    - name: BUILD LAB DECLARATION YAML FILE
      tags:
        - yaml_labfile
      block:
        - name: ENSURE TRACE AND INVENTORY DIRECTORIES EXIST
          delegate_to: localhost
          ansible.builtin.file:
            path: "{{ item }}"
            state: directory
            mode: "{{ default_dir_mode }}"
          loop:
            - trace
            - inventory
        - name: CREATE FRAGMENTS DIRECTORY
          ansible.builtin.file:
            path: "{{ vm_fragments_dir }}"
            state: directory
            mode: "{{ default_dir_mode }}"
        - name: CREATE YAML HEADER FOR LAB CONFIGURATION
          ansible.builtin.copy:
            content: |
              # Based on template at:
              # https://gitlab.inetdoc.net/labs/startup-scripts/-/blob/main/templates/
              kvm:
                vms:
            dest: "{{ vm_fragments_dir }}/00_header_decl.yaml"
            mode: "{{ default_file_mode }}"
        - name: CREATE A YAML DECLARATION FOR EACH VIRTUAL MACHINE
          ansible.builtin.template:
            src: templates/lab.yaml.j2
            dest: "{{ vm_fragments_dir }}/{{ item }}_decl.yaml"
            mode: "{{ default_file_mode }}"
          with_items: "{{ groups['vms'] }}"
          vars:
            tapnum: "{{ hostvars[item].tapnum }}"
            vm_name: "{{ hostvars[item].vm_name }}"
        - name: CHECK FOR CHANGES IN VIRTUAL MACHINE DECLARATIONS
          ansible.builtin.find:
            paths: "{{ vm_fragments_dir }}"
            patterns: "*_decl.yaml"
          register: fragment_files
        - name: CHECK LAB CONFIG FILE STATUS
          ansible.builtin.stat:
            path: "{{ lab_config_path }}"
          register: lab_config_file
        - name: MERGE YAML DECLARATIONS INTO LAB CONFIGURATION
          ansible.builtin.assemble:
            src: "{{ vm_fragments_dir }}"
            dest: "{{ lab_config_path }}"
            mode: "{{ default_file_mode }}"
          when: >-
            fragment_files.matched > 0 and
            (
              not lab_config_file.stat.exists or
              fragment_files.files | map(attribute='mtime') | max > lab_config_file.stat.mtime
            )
      rescue:
        - name: HANDLE ERROR IN LAB CONFIGURATION
          ansible.builtin.debug:
            msg: An error occurred while building the lab configuration.
      always:
        - name: CLEANUP TEMPORARY FILES
          ansible.builtin.file:
            path: "{{ vm_fragments_dir }}"
            state: absent
          when: all_vms_stopped
    - name: LAUNCH VIRTUAL MACHINE
      ansible.builtin.command:
        cmd: "$HOME/vm/scripts/lab-startup.py {{ lab_config_path }}"
        chdir: "{{ lab_path }}"
      register: launch
      when: all_vms_stopped
      failed_when: launch.rc != 0 and 'already' not in launch.stdout
      changed_when: "' started!' in launch.stdout"
      tags:
        - launch_lab
    - name: SET FACT FOR VMS STARTED
      ansible.builtin.set_fact:
        all_vms_started: "{{ (launch.stdout is defined and ' started!' in launch.stdout) }}"
      when: launch is defined | default(false)
    - name: HANDLE LAB INVENTORY GENERATION
      tags:
        - launch_lab
      block:
        - name: SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE
          ansible.builtin.copy:
            content: "{{ launch.stdout | default('') }}\n{{ launch.stderr | default('') }}"
            dest: "{{ launch_output_file }}"
            mode: "{{ default_file_mode }}"
          when: all_vms_started | default(false)
        - name: FETCH EXISTING LAUNCH OUTPUT IF VMS ARE RUNNING
          ansible.builtin.fetch:
            src: "{{ launch_output_file }}"
            dest: trace/launch_output.log
            flat: true
            mode: "{{ default_file_mode }}"
        - name: GENERATE NEW INVENTORY FILE
          delegate_to: localhost
          ansible.builtin.command:
            cmd: /usr/bin/env python3 ./build_lab_inventory.py
          register: command_result
          changed_when: command_result.rc == 0
      rescue:
        - name: HANDLE ERROR IN LAB INVENTORY GENERATION
          ansible.builtin.debug:
            msg: An error occurred while building the lab inventory.
```
Here are the seven most important points about how this playbook uses Ansible modules:
Variety of modules
: The playbook uses various Ansible modules like `get_url`, `copy`, `script`, `file`, `template`, `find`, `stat`, `assemble`, `shell`, `command`, and `fetch` for different tasks.
Module parameters
: Each module is called with specific parameters tailored to its function, such as `url` and `dest` for `get_url`.
Idempotency
: Many tasks use parameters such as `creates` or `when` conditions to ensure idempotency.
Error handling
: The playbook implements error handling using `block`, `rescue`, and `always` sections.
Conditional execution
: Tasks use `when` statements to control execution based on conditions.
Looping
: The `with_items` and `with_inventory_hostnames` loops are used to apply tasks to multiple items.
Return value usage
: The playbook captures and uses return values from modules, such as in the `register` statements and subsequent tasks.
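Note also that some tasks carry the `launch_lab` and `yaml_labfile` tags, so a subset of the playbook can be re-run on its own, for example:
```bash
# Re-run only the VM launch related tasks (assumes the images are already prepared)
ansible-playbook 02_pull_customize_run.yml --tags launch_lab --extra-vars @$HOME/.iac_passwd.yml
```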
### Step 4: Run the `02_pull_customize_run.yml` playbook
Here is a sample output of the playbook execution.
```bash
ansible-playbook 02_pull_customize_run.yml --extra-vars @$HOME/.iac_passwd.yml
```
```bash=
PLAY [PULL, CUSTOMIZE, AND RUN CLOUD LAB] **************************************
TASK [Gathering Facts] *********************************************************
ok: [eve]
TASK [DOWNLOAD DEBIAN CLOUD IMAGE QCOW2 FILE] **********************************
changed: [eve]
TASK [COPY CLOUD IMAGE TO VM IMAGE FILE FOR ALL VMS] ***************************
changed: [eve] => (item=vmXXX)
changed: [eve] => (item=vmYYY)
TASK [RESIZE EACH VM FILESYSTEM] ***********************************************
changed: [eve] => (item=vmXXX)
changed: [eve] => (item=vmYYY)
TASK [CREATE FRAGMENTS DIRECTORY] **********************************************
ok: [eve]
TASK [CREATE YAML HEADER FOR LAB CONFIGURATION] ********************************
changed: [eve]
TASK [CREATE A YAML DECLARATION FOR EACH VIRTUAL MACHINE] **********************
changed: [eve] => (item=vmXXX)
changed: [eve] => (item=vmYYY)
TASK [CHECK FOR CHANGES IN VIRTUAL MACHINE DECLARATIONS] ***********************
ok: [eve]
TASK [CHECK LAB CONFIG FILE STATUS] ********************************************
ok: [eve]
TASK [MERGE YAML DECLARATIONS INTO LAB CONFIGURATION] **************************
changed: [eve]
TASK [CLEANUP TEMPORARY FILES] *************************************************
changed: [eve]
TASK [CHECK IF A VIRTUAL MACHINE ALREADY RUNS] *********************************
ok: [eve] => (item=vmXXX)
ok: [eve] => (item=vmYYY)
TASK [SET FACT FOR VMS STATUS] *************************************************
ok: [eve]
TASK [DELETE LAUNCH OUTPUT MESSAGES LOG FILE IF ALL VMS ARE STOPPED] ***********
ok: [eve]
TASK [LAUNCH VIRTUAL MACHINE] **************************************************
changed: [eve]
TASK [SET FACT FOR VMS STARTED] ************************************************
ok: [eve]
TASK [ENSURE TRACE AND INVENTORY DIRECTORIES EXIST] ****************************
changed: [eve -> localhost] => (item=trace)
ok: [eve -> localhost] => (item=inventory)
TASK [SAVE LAUNCH OUTPUT MESSAGES TO LOG FILE] *********************************
changed: [eve]
TASK [FETCH EXISTING LAUNCH OUTPUT IF VMS ARE RUNNING] *************************
changed: [eve]
TASK [GENERATE NEW INVENTORY FILE] *********************************************
changed: [eve -> localhost]
TASK [CLEANUP TEMPORARY FILES] *************************************************
ok: [eve]
PLAY RECAP *********************************************************************
eve : ok=21 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Step 5: Check Ansible SSH access to the target virtual machines
Here we use the **ping** Ansible module directly from the command line.
```bash
ansible vms -m ping --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
> We use the **vms** group entry defined in the main inventory file `hosts.yml` introduced in Part 2.
```bash=
vmXXX | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
vmYYY | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
We can also check that the inventory contains the **vmXXX** and **vmYYY** entries with their own parameters.
```bash
ansible-inventory --yaml --list
```
This completes the **Infrastructure as Code** part of creating virtual machines from scratch. We can now control and configure these new virtual machines from the DevNet VM.
## Part 5: Analyze the network configuration of the new virtual machines
Now that we have access to our newly created virtual machines, let's examine their current network configuration. The main point here is to show that the VRF network configuration works as expected. Even though we have out-of-band SSH access to each VM, the VMs' default routing tables only show the in-band network entries.
### Step 1: View the addresses assigned to the out-of-band VLAN
We just have to list the addresses with the `ip` command, using the Ansible **command** module from the DevNet virtual machine.
```bash
ansible vms -m command -a "ip addr ls" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmYYY | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:05 brd ff:ff:ff:ff:ff:ff
altname enxb8adcafe0005
inet6 fe80::baad:caff:fefe:5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: mgmt-vrf: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP group default qlen 1000
link/ether 5a:c2:c1:c4:4f:0d brd ff:ff:ff:ff:ff:ff
4: vlan230@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:05 brd ff:ff:ff:ff:ff:ff
inet 10.0.228.5/22 brd 10.0.231.255 scope global vlan230
valid_lft forever preferred_lft forever
inet6 2001:678:3fc:e6:baad:caff:fefe:5/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2591985sec preferred_lft 604785sec
inet6 fe80::baad:caff:fefe:5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: vlanVVV@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mgmt-vrf state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:05 brd ff:ff:ff:ff:ff:ff
inet 198.18.53.5/22 metric 100 brd 198.18.55.255 scope global dynamic vlanVVV
valid_lft 5717sec preferred_lft 5717sec
inet6 2001:678:3fc:34:baad:caff:fefe:5/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86145sec preferred_lft 14145sec
inet6 fe80::baad:caff:fefe:5/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
vmXXX | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:04 brd ff:ff:ff:ff:ff:ff
altname enxb8adcafe0004
inet6 fe80::baad:caff:fefe:4/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: mgmt-vrf: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP group default qlen 1000
link/ether a2:57:ca:25:7c:a4 brd ff:ff:ff:ff:ff:ff
4: vlan230@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:04 brd ff:ff:ff:ff:ff:ff
inet 10.0.228.4/22 brd 10.0.231.255 scope global vlan230
valid_lft forever preferred_lft forever
inet6 2001:678:3fc:e6:baad:caff:fefe:4/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2591985sec preferred_lft 604785sec
inet6 fe80::baad:caff:fefe:4/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
5: vlanVVV@enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master mgmt-vrf state UP group default qlen 1000
link/ether b8:ad:ca:fe:00:04 brd ff:ff:ff:ff:ff:ff
inet 198.18.53.4/22 metric 100 brd 198.18.55.255 scope global dynamic vlanVVV
valid_lft 4412sec preferred_lft 4412sec
inet6 2001:678:3fc:34:baad:caff:fefe:4/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 86145sec preferred_lft 14145sec
inet6 fe80::baad:caff:fefe:4/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
```
Here are the two most important points illustrated by this VRF network configuration:
Using VRF
: The `mgmt-vrf` interface is configured as the VRF master interface, allowing logical separation of management traffic.
Management VLAN
: The `vlanVVV` interface is enslaved to the management VRF `mgmt-vrf`, with IP addresses in the **out-of-band** network.
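As an optional cross-check (not part of the lab playbooks), the interfaces enslaved to the VRF can be listed directly with the `ip` command:
```bash
ansible vms -m command -a "ip -brief link show master mgmt-vrf" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
```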
### Step 2: View both out-of-band and in-band routing tables
As with the network addresses, the `ip` command and the Ansible **command** module are used.
The default IPv4 routing table on each VM shows only the in-band network entries.
```bash
ansible vms -m command -a "ip route ls" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmYYY | CHANGED | rc=0 >>
default via 10.0.228.1 dev vlan230 proto static
10.0.228.0/22 dev vlan230 proto kernel scope link src 10.0.228.YYY
vmXXX | CHANGED | rc=0 >>
default via 10.0.228.1 dev vlan230 proto static
10.0.228.0/22 dev vlan230 proto kernel scope link src 10.0.228.XXX
```
We get the same result with the IPv6 default routing table.
```bash
ansible vms -m command -a "ip -6 route ls" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmXXX | CHANGED | rc=0 >>
2001:678:3fc:e6::/64 dev vlan230 proto ra metric 512 expires 2591957sec pref high
fe80::/64 dev enp0s1 proto kernel metric 256 pref medium
fe80::/64 dev vlan230 proto kernel metric 256 pref medium
default nhid 1764829396 via fe80:e6::1 dev vlan230 proto ra metric 512 expires 1757sec pref high
vmYYY | CHANGED | rc=0 >>
2001:678:3fc:e6::/64 dev vlan230 proto ra metric 512 expires 2591957sec pref high
fe80::/64 dev enp0s1 proto kernel metric 256 pref medium
fe80::/64 dev vlan230 proto kernel metric 256 pref medium
default nhid 2529649889 via fe80:e6::1 dev vlan230 proto ra metric 512 expires 1757sec pref high
```
We have to specify the VRF context to get the routing table entries for the out-of-band management network.
```bash
ansible vms -m command -a "ip route ls vrf mgmt-vrf" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmYYY | CHANGED | rc=0 >>
default via 198.18.52.1 dev vlanVVV proto dhcp src 198.18.53.5 metric 100
172.16.0.2 via 198.18.52.1 dev vlanVVV proto dhcp src 198.18.53.5 metric 100
198.18.52.0/22 dev vlanVVV proto kernel scope link src 198.18.53.5 metric 100
198.18.52.1 dev vlanVVV proto dhcp scope link src 198.18.53.5 metric 100
vmXXX | CHANGED | rc=0 >>
default via 198.18.52.1 dev vlanVVV proto dhcp src 198.18.53.4 metric 100
172.16.0.2 via 198.18.52.1 dev vlanVVV proto dhcp src 198.18.53.4 metric 100
198.18.52.0/22 dev vlanVVV proto kernel scope link src 198.18.53.4 metric 100
198.18.52.1 dev vlanVVV proto dhcp scope link src 198.18.53.4 metric 100
```
The IPv4 network configuration is provided by DHCP.
```bash
ansible vms -m command -a "ip -6 route ls vrf mgmt-vrf" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmYYY | CHANGED | rc=0 >>
2001:678:3fc:34::/64 dev vlanVVV proto ra metric 100 expires 86241sec pref medium
fe80::/64 dev vlanVVV proto kernel metric 256 pref medium
multicast ff00::/8 dev vlanVVV proto kernel metric 256 pref medium
default nhid 430493200 via fe80::34:1 dev vlanVVV proto ra metric 100 expires 1641sec pref medium
vmXXX | CHANGED | rc=0 >>
2001:678:3fc:34::/64 dev vlanVVV proto ra metric 100 expires 86241sec pref medium
fe80::/64 dev vlanVVV proto kernel metric 256 pref medium
multicast ff00::/8 dev vlanVVV proto kernel metric 256 pref medium
default nhid 796208888 via fe80::34:1 dev vlanVVV proto ra metric 100 expires 1641sec pref medium
```
The IPv6 network configuration is provided by SLAAC through Router Advertisement (RA) messages.
### Step 3: Run ICMP tests to a public Internet address
At this stage of configuration, the out-of-band network connection is the only one available. Therefore, we need to run commands prefixed with `ip vrf exec` to send traffic from the out-of-band network interface. Also, only sudoers can run these commands in the VRF context.
```bash
ansible vms -m command -a "sudo ip vrf exec mgmt-vrf ping -c3 9.9.9.9" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmXXX | CHANGED | rc=0 >>
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=47 time=28.2 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=47 time=27.2 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=47 time=26.7 ms
--- 9.9.9.9 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 26.731/27.368/28.192/0.610 ms
vmYYY | CHANGED | rc=0 >>
PING 9.9.9.9 (9.9.9.9) 56(84) bytes of data.
64 bytes from 9.9.9.9: icmp_seq=1 ttl=47 time=27.5 ms
64 bytes from 9.9.9.9: icmp_seq=2 ttl=47 time=27.1 ms
64 bytes from 9.9.9.9: icmp_seq=3 ttl=47 time=26.8 ms
--- 9.9.9.9 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 26.840/27.171/27.538/0.286 ms
```
```bash
ansible vms -m command -a "sudo ip vrf exec mgmt-vrf ping -c3 2620:fe::fe" --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
vmYYY | CHANGED | rc=0 >>
PING 2620:fe::fe (2620:fe::fe) 56 data bytes
64 bytes from 2620:fe::fe: icmp_seq=1 ttl=59 time=40.2 ms
64 bytes from 2620:fe::fe: icmp_seq=2 ttl=59 time=39.4 ms
64 bytes from 2620:fe::fe: icmp_seq=3 ttl=59 time=38.9 ms
--- 2620:fe::fe ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 38.852/39.507/40.235/0.566 ms
vmXXX | CHANGED | rc=0 >>
PING 2620:fe::fe (2620:fe::fe) 56 data bytes
64 bytes from 2620:fe::fe: icmp_seq=1 ttl=59 time=39.6 ms
64 bytes from 2620:fe::fe: icmp_seq=2 ttl=59 time=39.7 ms
64 bytes from 2620:fe::fe: icmp_seq=3 ttl=59 time=38.5 ms
--- 2620:fe::fe ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 38.533/39.303/39.746/0.546 ms
```
All the above ICMP and ICMPv6 tests show that there is no packet loss and that packet routing is fully functional with both IPv4 and IPv6.
## Part 6: Virtual machine system configuration
Finally, we can use the in-band VLAN network access to perform some system configuration tasks on the virtual machines.
### Step 1: Create a new virtual machine operating system configuration playbook
Here, we create the [**03_system_bits.yml**](https://gitlab.inetdoc.net/iac/lab-01-02/-/blob/main/03_system_bits.yml?ref_type=heads) playbook file.
```yaml=
---
# The purpose of this playbook is to configure the system bits and pieces of the
# virtual machines.
# It performs the following basic steps:
# 1. Configure the VM locales.
# 2. Configure the VM timezone.
# 3. Install the Debian keyring.
# The playbook is executed on the vms group of hosts.
- name: CONFIGURE SYSTEM BITS AND PIECES
  hosts: vms
  become: true
  gather_facts: false # Do not gather facts about the remote systems before they are accessible
  pre_tasks:
    - name: WAIT FOR VMS TO BECOME ACCESSIBLE
      ansible.builtin.wait_for_connection:
        delay: 5
        sleep: 5
        timeout: 300
        connect_timeout: 5
    - name: WAIT FOR SSH SERVICE
      ansible.builtin.wait_for:
        port: 22
        state: started
        timeout: 300
    - name: GATHER FACTS
      ansible.builtin.setup: # Gather facts about the remote system once the connection is available
      when: ansible_facts | length == 0
  tasks:
    - name: CONFIGURE VM LOCALES
      community.general.locale_gen:
        name: fr_FR.UTF-8
        state: present
    - name: CONFIGURE VM TIMEZONE
      community.general.timezone:
        name: Europe/Paris
    - name: INSTALL DEBIAN KEYRING
      ansible.builtin.apt:
        name: debian-keyring
        state: present
        update_cache: true
```
This playbook addresses VM startup time management through strategic pre-tasks.
It uses `wait_for_connection` to ensure that VMs are reachable with:
- 5-second delay
- 5-second sleep interval
- 300-second timeout.
It also uses `wait_for` to confirm the availability of the SSH service.
Facts gathering is initially disabled and only performed when the connection is established, optimizing the efficiency of the playbook.
These measures ensure that configuration tasks are only performed when VMs are fully operational, eliminating potential errors due to premature execution.
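Because the modules used here are idempotent, the playbook can be re-run safely. If in doubt, a dry run can be requested first with Ansible's check mode (most of these modules support it):
```bash
ansible-playbook 03_system_bits.yml --check --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
```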
### Step 2: Run the `03_system_bits.yml` playbook
```bash
ansible-playbook 03_system_bits.yml --ask-vault-pass --extra-vars @$HOME/.iac_passwd.yml
Vault password:
```
```bash=
PLAY [CONFIGURE SYSTEM BITS AND PIECES] ****************************************
TASK [WAIT FOR VMS TO BECOME ACCESSIBLE] ***************************************
ok: [vmXXX]
ok: [vmYYY]
TASK [WAIT FOR SSH SERVICE] ****************************************************
ok: [vmXXX]
ok: [vmYYY]
TASK [GATHER FACTS] ************************************************************
ok: [vmYYY]
ok: [vmXXX]
TASK [CONFIGURE VM LOCALES] ****************************************************
ok: [vmYYY]
ok: [vmXXX]
TASK [CONFIGURE VM TIMEZONE] ***************************************************
ok: [vmYYY]
ok: [vmXXX]
TASK [INSTALL DEBIAN KEYRING] **************************************************
ok: [vmYYY]
ok: [vmXXX]
PLAY RECAP *********************************************************************
vmXXX : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
vmYYY : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
## Conclusion
This lab has demonstrated the power of Infrastructure as Code (IaC) using Ansible to build and configure Debian GNU/Linux virtual machines from scratch. We explored key concepts including:
- Using declarative YAML files to define the desired infrastructure state
- Leveraging Ansible playbooks to automate VM creation and configuration
- Implementing Virtual Routing and Forwarding (VRF) for network isolation
- Configuring out-of-band management access separate from in-band traffic
By following this IaC approach, we were able to rapidly deploy consistent, customized VMs with proper networking setup. The skills and techniques covered provide a foundation for efficiently managing larger-scale infrastructure deployments. Moving forward, these IaC practices can be extended to provision more complex environments, integrate with version control systems, and implement continuous deployment pipelines with GitLab. This is the horizon of the next document: [IaC Lab 2 – Using GitLab CI to run Ansible playbooks and build new Debian GNU/Linux virtual machines](https://md.inetdoc.net/s/CPltj12uT).