| Traffic Generator | Type | Version |
|---|---|---|
| IXIA | UHD | IxOS 9.10.2300.159 |
| IXIA | NOVUS | IxOS 9.12.2100.7 |
The VM image is provided as a gzip file. It should be decompressed to get the qcow2 image to deploy as a VM.
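As a sketch, the decompression step looks like the following; the archive name is a placeholder for the file you actually downloaded, and `qemu-img` (from the `qemu-utils` package) is only an optional sanity check:

```shell
# Decompress the archive to recover the qcow2 disk image.
# "ftas.qcow2.gz" is a placeholder; use the real download name.
gunzip -k ftas.qcow2.gz    # -k keeps the original .gz archive
qemu-img info ftas.qcow2   # optional: confirm the image is a valid qcow2
```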
The FTAS VM has Aviz ONES integrated into it and will take some time to initialise after the first boot.
You can connect to the console port of the VM to see the installation logs.
If your host server has Ubuntu Desktop and virt-manager installed you can use it to deploy the VM. Make sure you can start the Virtual Machine Manager and that it connects successfully to the local hypervisor.
Creating a VM with virt-manager is straightforward. Use the following steps to deploy the FTAS VM:
1. File -> New Virtual Machine -> Import existing disk image -> Forward
2. Browse to the FTAS disk image location and select Ubuntu as the OS name.
3. Click "Forward" and select the vCPU count (minimum 2 cores) and memory (4GB) for the VM.
4. Click "Forward", give your VM a name, and check "Customize configuration before install".
5. Select "NIC ...", and in "Network source" select the Linux bridge you created on the host machine.
6. Apply the configuration and start the VM.
Create an XML configuration file from the following template
The below lines can be changed to customize the VM installation:
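Since the original template is not reproduced here, the following is a minimal libvirt domain XML sketch under stated assumptions: the VM name (`ftas-vm`), image path, and resource sizes are placeholders, not the shipped template.

```xml
<!-- Sketch of a libvirt domain definition; all names, paths,
     and sizes are placeholders to be adjusted for your host. -->
<domain type='kvm'>
  <name>ftas-vm</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/ftas.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```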
Create a Linux bridge configuration file (bridged-network.xml) for libvirt from the following template
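A minimal sketch of such a file, assuming the host bridge is named `br0` (match it to the bridge you created earlier):

```xml
<!-- bridged-network.xml: a libvirt network backed by an existing
     host Linux bridge. "br0" must match the bridge on the host. -->
<network>
  <name>bridged-network</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```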
Define the Linux bridge for the VM
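Assuming the file is saved as `bridged-network.xml`, the bridge can be registered with libvirt like this:

```shell
virsh net-define bridged-network.xml   # register the network with libvirt
virsh net-start bridged-network        # activate it now
virsh net-autostart bridged-network    # start it automatically on host boot
```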
Start the VM
If you see a permission error, running the virsh command with sudo may fix the issue
Check the VM status
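Assuming the domain XML was saved as `ftas-vm.xml` and names the VM `ftas-vm` (both placeholders), a typical define/start/check sequence is:

```shell
virsh define ftas-vm.xml   # register the VM from the domain XML
virsh start ftas-vm        # start it (prefix with sudo on a permission error)
virsh list --all           # check status: the VM should show as "running"
```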
If there is a DHCP server on the management network, the VM will obtain its IP configuration from it.
If there is no DHCP server, or you want to configure the IP address statically, follow the steps below:
Enter VM console
The default username is 'oper' with the default password 'oper@123'
Check connections and devices
Release IP assigned by DHCP
Configure static IP for the connection
Set a default Gateway address
Set the IP configuration mode to manual
Reapply the configuration to the interface
Verify the IP address
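The steps above can be sketched with nmcli as follows; the connection name ("Wired connection 1"), interface, addresses, and gateway are placeholders for your environment:

```shell
virsh console ftas-vm                      # enter the VM console, log in as oper
nmcli connection show                      # check connections
nmcli device status                        # check devices
nmcli connection down "Wired connection 1" # release the IP assigned by DHCP
nmcli connection modify "Wired connection 1" \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.method manual                     # static IP, default gateway, manual mode
nmcli connection up "Wired connection 1"   # reapply the configuration
ip addr show                               # verify the IP address
```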
Test FTAS VM reachability from outside the VM. If the VM is not reachable, check the access rule at the location below. If the value there is 1, change it to 0; the reachability issue should then be resolved.
There are some scaling scripts that require multiple network service servers (NTP, SYSLOG, TACACS+, etc.). In order to simulate this, we can add a secondary IP address to the VM NIC.
To add a secondary IP address, use the command
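A hypothetical example (interface name and address are placeholders):

```shell
# Add 192.168.1.51/24 as a secondary address on enp1s0
sudo ip addr add 192.168.1.51/24 dev enp1s0
ip addr show enp1s0   # verify both addresses are now present
```

Note that `ip addr add` does not persist across reboots; to make the secondary address permanent, add it to the NetworkManager connection instead (e.g. `nmcli connection modify <name> +ipv4.addresses 192.168.1.51/24`).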
The FTAS VM has docker containers running and the following docker images installed:
DHCP container image: ztp_dhcp (DHCP service). The ztp_dhcp (DHCP) service is not run by default, as it might conflict with a DHCP server already running in the DC infra.
Net Services container image: netservices:v1 (NTP, SYSLOG, TACACS+ services). This container runs with the "--network=host" option. If you need to change the configuration of these services, edit the following configuration files.
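To inspect the container setup inside the VM, standard Docker commands apply; the container name passed to `docker exec` is a placeholder to be read from the `docker ps` output:

```shell
docker ps                        # containers currently running
docker images                    # installed images, e.g. ztp_dhcp and netservices:v1
docker exec -it <container> sh   # open a shell in a container to inspect its configs
```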
Docker containers running by default:
IXIA UHD
Chassis Type: Ixia UHD
Chassis Version: IxOS 9.10.2300.159
Protocol Build Number: 9.10.200
Card Type: UHD100T32
UHD: 1.5.49

IXIA NOVUS
Chassis Type: Ixia XGS2
Chassis Version: IxOS 9.12.2100.7
Protocol Build Number: 9.12.2009.10
Card Type: NOVUS100GE8Q28+FAN+25G+50G
CPU: x86_64 8 cores or more with Virtualization enabled
Memory: 8GB or more system memory
Disk Space: 256GB or more available disk space
Network Interface: 1 GbE NIC
For FTAS with ONES integration, more disk space and RAM are needed:
Memory: 16GB or more system memory
Disk Space: 512GB or more available disk space
Ubuntu 20.04 or later (64-bit)
Other flavours of Linux that support KVM should also be able to run the FTAS.
However, these alternative distributions have not been specifically tested for compatibility with FTAS, so users who opt for non-Ubuntu Linux systems may encounter compatibility issues and may need to perform additional configuration and testing on their own.
KVM (Kernel-based Virtual Machine) is the leading open-source virtualisation technology for Linux. It installs natively on all Linux distributions and turns underlying physical servers into hypervisors so that they can host multiple, isolated virtual machines (VMs).
We will be using KVM as the hypervisor for the FTAS VM because, as a type-1 hypervisor, it typically outperforms type-2 hypervisors and delivers near-metal performance.
Please refer to the following steps to install it on the host machine
Ensure that the latest Ubuntu packages are installed
Install KVM packages
Check if KVM acceleration is ready
Add user to libvirt group
Verify if the libvirt user group is available using the below command
If the libvirt group is not available, it can be created using the below command
Then add the current user to the group
Set user and group for qemu. Update the qemu config with your user and libvirt group
Restart the libvirtd service
Check the status of the libvirtd service
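The steps above can be sketched as the following command sequence, assuming a standard Ubuntu 20.04+ host (package names may differ on other distributions):

```shell
sudo apt update && sudo apt -y upgrade       # ensure the latest Ubuntu packages
sudo apt -y install qemu-kvm libvirt-daemon-system \
    libvirt-clients bridge-utils cpu-checker # install KVM packages
kvm-ok                                       # check KVM acceleration readiness
getent group libvirt                         # verify the libvirt group exists
sudo groupadd libvirt                        # only needed if the group is missing
sudo adduser "$USER" libvirt                 # add the current user to the group
# In /etc/libvirt/qemu.conf, set:  user = "<your-user>"  and  group = "libvirt"
sudo systemctl restart libvirtd              # restart the libvirtd service
systemctl status libvirtd                    # should report "active (running)"
```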
If your server has a GUI desktop installed, you may want to install virt-manager. The virt-manager application is a desktop graphical user interface for managing virtual machines through libvirt.
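On Ubuntu it can be installed from the standard repositories:

```shell
sudo apt -y install virt-manager   # desktop GUI for libvirt-managed VMs
```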
It is recommended that the virtual NIC of the VM be bridged with the physical NIC on the host machine.
In this sample configuration, enp1s0 is the physical NIC of the host machine, typically used for SSH (management).
Please assign the bridge the same static IP as on the physical management NIC (enp1s0).
After this step, the IP will be re-assigned to the bridge interface (br0) and the physical interface (enp1s0) will act as a Layer-2 interface.
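A netplan sketch of such a bridge follows, assuming the host uses netplan with the networkd renderer; the interface name, addresses, gateway, and DNS server are placeholders for your environment:

```yaml
# /etc/netplan/01-br0.yaml -- sketch: bridge br0 over physical NIC enp1s0
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: false          # the physical NIC becomes a plain L2 port
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [192.168.1.10/24]   # the static IP moves to the bridge
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8]
      dhcp4: false
```

The configuration is applied with `sudo netplan apply` (run it from the console, since the SSH session will be reset).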
Apply the above configuration
This step will reset the SSH connection and reassign the static IP from the physical interface(enp1s0) to the bridge interface(br0).
Users can log in to the FTAS VM using one of the following methods
Console login
SSH login
The default username is 'oper' with the default password 'oper@123'
After logging in, the user will be dropped into a Bash shell with the following pre-defined folders and files:
qjob.py - Script to schedule execution jobs.
testbeds - Directory to create and maintain testbed parameter files.
jobs - Directory containing the JSON file that holds the job queue. qjob.py script controls and edits this JSON file. Please don't edit the JSON file manually.
testsuites - Directory to maintain testsuite yaml files.
reports - Directory to store HTML reports of completed jobs.
configs - Directory to store test configs
jobs.py - Script to manipulate queue jobs. It is imported by the qjob.py utility.
logs - Maintains execution logs file of all jobs. Users can clean up the files in the logs and reports folder to regain disk space when needed.