User Login

Users can log in to the FTAS VM using one of the following methods:

  • Console login

virsh console <VM_domain_name>

#Example -  
#sonic@sonic-39:~$ virsh console FTAS_VM01
#Connected to domain FTAS_VM01
#Escape character is ^]
#oper@ftasvm:~$ 
  • SSH login

Note: The default username is 'oper' and the default password is 'oper@123'.

After logging in, the user is dropped into a Bash shell with the following pre-defined folders and files:

  • qjob.py - Script to schedule execution jobs.

  • testbeds - Directory to create and maintain testbed parameter files.

  • jobs - Directory containing the JSON file that holds the job queue. qjob.py script controls and edits this JSON file. Please don't edit the JSON file manually.

ssh <username>@<mgmt ip address of the VM>

#Example -
#sonic@sonic-39:~$ ssh oper@192.168.3.37
#oper@192.168.3.37's password: 
#Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-137-generic x86_64)

# * Documentation:  https://help.ubuntu.com
# * Management:     https://landscape.canonical.com
# * Support:        https://ubuntu.com/advantage

#This system has been minimized by removing packages and content that are
#not required on a system that users do not log into.

#To restore this content, you can run the 'unminimize' command.
#Last login: Tue Feb 21 08:24:31 2023
#oper@ftasvm:~$ 

  • testsuites - Directory to maintain testsuite YAML files.

  • reports - Directory to store HTML reports of completed jobs.

  • configs - Directory to store test configs

  • jobs.py - Script to manipulate queue jobs. It is imported by the qjob.py utility.

  • logs - Directory that maintains execution log files for all jobs. Users can clean up files in the logs and reports folders to regain disk space when needed.
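For example, a minimal cleanup of those two folders (this permanently deletes files; paths assume the default home-directory layout shown below):

```shell
# Reclaim disk space by removing completed-job logs and reports
rm -rf ~/logs/* ~/reports/*

# Confirm the reclaimed space
df -h ~
```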

    oper@ftasvm:~$ ls -lrth
    total 44K
    drwxrwxr-x  3 oper oper 4.0K Nov 18 16:54 ones
    -rwxrwxr-x  1 oper oper 1.9K Jan 27 10:19 qjob.py
    drwxr-xr-x  2 oper oper 4.0K Jan 31 07:26 jobs
    drwxr-xr-x  2 oper oper 4.0K Feb  7 11:03 configs
    -rwx------  1 oper oper 6.2K Feb  7 11:03 jobs.py
    drwxrwxr-x  2 oper oper 4.0K Feb 18 12:54 __pycache__
    drwxrw-rw-  2 oper oper 4.0K Feb 18 12:54 logs
    drwxr-xr-x  2 oper oper 4.0K Feb 18 14:06 testbeds
    drwxr-xr-x  2 oper oper 4.0K Feb 18 14:07 testsuites
    drwxrwxrwx 10 oper oper 4.0K Feb 20 04:58 reports
    oper@ftasvm:~$ 

    Deploy the VM

    The VM image is provided as a gzip file. It should be decompressed to get the qcow2 image to deploy as a VM.

    The FTAS VM has Aviz ONES integrated into it and will take some time to initialise after the first boot.

    You can connect to the console port of the VM to see the installation logs.

    Create the VM using GUI App virt-manager

    If your host server has Ubuntu Desktop and virt-manager installed you can use it to deploy the VM. Make sure you can start the Virtual Machine Manager and that it connects successfully to the local hypervisor.

    Creating a VM with virt-manager is straightforward. Use the following steps to deploy the FTAS VM:

    • File -> New Virtual Machine -> Import existing disk image -> Forward

    • Browse to the FTAS disk image location and select Ubuntu as the OS name

    • Click "Forward" and select vCPU (min 2 cores) and Memory (4GB) for the VM

    • Click "Forward", give your VM a name and check "Customize configuration before install"

    • Select "NIC ...", in the "Network source" select the Linux bridge you created on the host machine

    • Apply the configuration and start the VM

    Create the VM using XML configuration

    • Create an XML configuration file from the following template

    The below lines can be changed to customize the VM installation:

    • Create a Linux bridge configuration file (bridged-network.xml) for libvirt from the following template

    • Define the Linux bridge for the VM

    • Start the VM

    If you see a permission error, running the virsh command with sudo may fix the issue.

    • Check the VM status

    Configure the IP address on the VM

    1. If there is a DHCP server on the management network, the VM will obtain its IP configuration from the DHCP server.

    2. If there is no DHCP server, or you want to configure the IP address statically, follow the steps below:

    • Enter VM console

    Note: The default username is 'oper' and the default password is 'oper@123'.

    • Check connections and devices

    • Release IP assigned by DHCP

    • Configure static IP for the connection

    • Set a default Gateway address

    • Set the IP configuration mode to manual

    • Reapply the configuration to the interface

    • Verify the IP address

    Test FTAS VM reachability from outside the VM. If the VM is not reachable, check the access rule at the location below:

    If the above value is 1, change it to 0, and the reachability issue should be resolved.

    There are some scaling scripts that require multiple network service servers (NTP, SYSLOG, TACACS+, etc.). In order to simulate this, we can add a secondary IP address to the VM NIC.

    To add a secondary IP address, use the command

    Network services containers

    The FTAS VM runs Docker containers and has the following Docker images installed:

    • DHCP container image ztp_dhcp (DHCP service)

    Note: The ztp_dhcp (DHCP) service is not run by default, as it might conflict with DHCP already running in the DC infra.
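    If DHCP service is needed in an isolated lab, the container can be started manually. The invocation below is a sketch and an assumption (the image's entrypoint and required options are not documented here); verify it before use:

```shell
# Hypothetical invocation: start the ZTP DHCP container on the host network
# (image name per 'docker images'; run options are assumptions, verify before use)
docker run -d --name ztp_dhcp --network=host ztp_dhcp:v1

# Confirm the container is running
docker ps --filter name=ztp_dhcp
```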

    • Net Services container image netservices:v1 (NTP, SYSLOG, TACACS+ services). This container runs with the "--network=host" option. If you need to change the configuration of these services, edit the configuration files listed below.

    Dockers running by default:

    sonic@sonic-39:~$ gunzip -c ftas_ones_vmi_1.1.2.qcow2.gz > ftas_ones_vmi_1.1.2.qcow2
    sonic@sonic-39:~$ ls -l
    total 8302936
    -rw-rw-r-- 1 sonic sonic 4929683456 Feb 21 06:21 ftas_ones_vmi_1.1.2.qcow2
    -rw-rw-r-- 1 sonic sonic 3572510886 Feb 21 06:20 ftas_ones_vmi_1.1.2.qcow2.gz
    sonic@sonic-39:~$ 

    Line #25: the path to the qcow2 VM image file.

    Figure 2: Virt-Manager
    Figure 3: Create new VM from existing disk image
    Figure 4: Set vCPU and memory
    Figure 5: Network selection
    vi ftas.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>FTAS_VM01</name>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <vcpu placement='static'>4</vcpu>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-1.5'>hvm</type>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <clock offset='utc'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/home/oper/taas_vm/taas_vm_v3.qcow2' />
          <target bus='virtio' dev='vda'/>
        </disk>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target port='0'/>
        </serial>
        <!-- Management interface eth0 -->
        <interface type='network'>
          <model type='e1000'/>
          <source network='br0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
        </interface>
        <controller type='usb' index='0'/>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
    vi bridged-network.xml
    <network>
        <name>br0</name>
        <forward mode="bridge" />
        <bridge name="br0" />
    </network>
    #Execute the below command to attach the VM to the Linux Bridge 
    sonic@sonic-39:~$ virsh net-define bridged-network.xml
    sonic@sonic-39:~$ virsh net-start br0
    sonic@sonic-39:~$ virsh net-autostart br0
    sonic@sonic-39:~$ virsh net-list
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     br0                  active     yes           yes
    
    sonic@sonic-39:~$ 
    virsh create <VM XML configuration file>
    
    #sonic@sonic-39:~$ virsh create ftas.xml 
    #Domain FTAS_VM01 created from ftas.xml
    #sonic@sonic-39:~$ 
    sonic@sonic-39:~$ virsh list
     Id    Name                           State
    ----------------------------------------------------
     8     FTAS_VM01                      running
    sonic@sonic-39:~$ 
    sonic@sonic-39:~$ virsh console FTAS_VM01
    Connected to domain FTAS_VM01
    Escape character is ^]
    
    ftasvm login: 
    sudo nmcli con show
    
    oper@ftasvm:~$ sudo nmcli con show
    NAME                UUID                                  TYPE      DEVICE 
    Wired connection 1  782de6d4-3867-3c5e-95fb-061ae39e5fae  ethernet  eth0   
    oper@ftasvm:~$ 
    # Capture the connection NAME of eth0 device
    sudo dhclient -v -r
    
    oper@ftasvm:~$ sudo dhclient -v -r
    Internet Systems Consortium DHCP Client 4.4.1
    Copyright 2004-2018 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    
    Listening on LPF/veth1dcacbe/b6:bc:e5:4a:7e:1f
    Sending on   LPF/veth1dcacbe/b6:bc:e5:4a:7e:1f
    <..>
    Sending on   Socket/fallback
    oper@ftasvm:~$ 
    sudo nmcli con mod "Wired connection 1" ipv4.addresses <ip address>/<prefix>
    
    #Example - sudo nmcli con mod "Wired connection 1" ipv4.addresses 192.168.0.37/24
    sudo nmcli connection modify "Wired connection 1" ipv4.gateway <GW Address>
    
    #Example - sudo nmcli connection modify "Wired connection 1" ipv4.gateway 192.168.0.1
    sudo nmcli con mod "Wired connection 1" ipv4.method manual
    sudo nmcli device reapply <dev_name>
    
    #Example - sudo nmcli device reapply eth0
    #verify the IP address
    ip a
    
    oper@ftasvm:~$ ip a
    <..>
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 52:54:00:37:3c:5c brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.37/25 brd 192.168.0.255 scope global noprefixroute eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::70a4:9f2e:658c:4d29/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    <..>
    oper@ftasvm:~$ 
    
    
    #Verify IP method
    oper@ftasvm:~$ sudo nmcli -f ipv4.method con show "Wired connection 1"
    ipv4.method:                            manual
    oper@ftasvm:~$ 
    Host Machine
    sonic@sonic-39:~$ cat /proc/sys/net/bridge/bridge-nf-call-iptables 
    1
    sonic@sonic-39:~$ 
    Host Machine
    echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
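    The change above does not persist across reboots. A standard sysctl drop-in makes it permanent (the file name 99-bridge-nf.conf is arbitrary):

```shell
# Persist the bridge-netfilter setting across host reboots
echo 'net.bridge.bridge-nf-call-iptables = 0' | sudo tee /etc/sysctl.d/99-bridge-nf.conf

# Reload all sysctl configuration
sudo sysctl --system
```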
    FTAS VM
    sudo nmcli con mod "<con_name>" +ipv4.addresses <ip address>/<prefix>
    #Example - sudo nmcli con mod "Wired connection 1" +ipv4.addresses 192.168.0.42/24
    
    # Reapply config
    sudo nmcli device reapply <dev_name>
    #Example - sudo nmcli device reapply eth0
    
    # Show IP address to verify
    ip a
    # Restart docker containers so their services can listen on new IP addresses
    # (container name as listed by 'docker ps -a'; 'net_services' by default)
    docker restart net_services
    oper@ftasvm:~$ docker images
    REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
    ztp_dhcp      v1        599313a03bfb   41 hours ago   83.3MB
    netservices   v1        8a9c98506637   41 hours ago   259MB
    oper@ftasvm:~$
    # NTP configuration file:
    /etc/ntp.conf
    
    # SYSLOG Configuration file
    /etc/syslog-ng/conf.d/syslog_ng.conf
    
    # Log files location:
    /var/log/sonic_logs/<IP address of devices>.log
    
    # TACACS+ configuration
    /etc/tacacs+/tac_plus.conf
    oper@ftasvm:~$ docker ps -a
    CONTAINER ID   IMAGE            COMMAND                  CREATED        STATUS        PORTS     NAMES
    8add12060a57   netservices:v1   "/usr/bin/supervisord"   41 hours ago   Up 41 hours             net_services
    oper@ftasvm:~$ 

    Supported Traffic Generators

    • IXIA (Type: UHD)

        • Chassis Type: Ixia UHD

        • Chassis Version: IxOS 9.10.2300.159

        • Protocol Build Number: 9.10.200

        • UHD: 1.5.49

        • Card Type: UHD100T32

    • IXIA (Type: NOVUS)

        • Chassis Type: Ixia XGS2

        • Chassis Version: IxOS 9.12.2100.7

        • Protocol Build Number: 9.12.2009.10

        • Card Type: NOVUS100GE8Q28+FAN+25G+50G

    Installation

    • Host Requirements

    • Supported Traffic Generators

    • Deploy the VM

    • User Login

    Host Requirements

    Hardware

    • CPU: x86_64 8 cores or more with Virtualization enabled

    • Memory: 8GB or more system memory

    • Disk Space: 256GB or more available disk space

    • Network Interface: 1 GbE NIC

    Note: For FTAS with ONES integration, more disk space and RAM are needed:

    • Memory: 16GB or more system memory

    • Disk Space: 512GB or more available disk space
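    Whether a host meets this sizing can be sanity-checked with standard Linux tools (a sketch; point the disk check at the filesystem that will hold the VM image):

```shell
# Host sizing pre-checks (thresholds follow the sizing listed above)
nproc                                  # CPU cores; expect 8 or more
grep -cE '(vmx|svm)' /proc/cpuinfo     # non-zero means virtualization extensions are enabled
free -g | awk '/^Mem:/ {print $2}'     # total memory in GiB
df -BG --output=avail / | tail -n 1    # available disk space on /
```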

    Operating System

    • Ubuntu 20.04 or later (64-bit)

    Note: Other flavours of Linux that support KVM should also be able to run FTAS.

    However, these alternative distributions have not been specifically tested for compatibility with FTAS, so users who opt for non-Ubuntu Linux systems should be aware that they may encounter compatibility issues and may need to perform additional configuration and testing on their own.

    Hypervisor Software

    KVM (Kernel-based Virtual Machine) is the leading open-source virtualisation technology for Linux. It installs natively on all Linux distributions and turns underlying physical servers into hypervisors so that they can host multiple, isolated virtual machines (VMs).

    We will use KVM as the hypervisor for the FTAS VM because, as a type-1 hypervisor, it outperforms type-2 hypervisors and delivers near-metal performance.

    Please refer to the following steps to install it on the host machine:

    • Ensure that the latest Ubuntu packages are installed

    • Install KVM packages

    • Check if KVM acceleration is ready

    • Add user to libvirt group

    Verify if the libvirt user group is available using the below command

    If the libvirt group is not available, it can be created using the below command

    Then add the current user to the group

    • Set user and group for qemu. Update the qemu config with your user and libvirt group

    • Restart the libvirtd service

    Check the status of the libvirtd service

    • If your server has a GUI desktop installed, you may want to install virt-manager. The virt-manager application is a desktop Graphical user interface for managing virtual machines through libvirt

    Network Configuration

    It is recommended that the virtual NIC on the VM be bridged with the physical NIC on the host machine.

    In this sample configuration, enp1s0 is the physical NIC of the host machine, which is typically used for SSH (management).

    Note: Assign the same static IP as on the physical management NIC (enp1s0).

    After this step, the IP will be reassigned to the bridge interface (br0), and the physical interface (enp1s0) will act as a Layer-2 interface.

    Apply the above configuration

    Warning: This step will reset the SSH connection and reassign the static IP from the physical interface (enp1s0) to the bridge interface (br0).

    sudo apt-get update && sudo apt-get upgrade
    sudo apt install libvirt-clients libvirt-daemon-system libvirt-daemon virtinst bridge-utils qemu qemu-kvm
    kvm-ok
    
    # You should see a message like "KVM acceleration can be used"
    sonic@sonic-39:~$ kvm-ok
    INFO: /dev/kvm exists
    KVM acceleration can be used
    sonic@sonic-39:~$ 
    sudo getent group | grep libvirt
    
    sonic@sonic-39:~$ sudo getent group | grep libvirt
    libvirt:x:119:sonic,root
    libvirt-qemu:x:64055:libvirt-qemu
    libvirt-dnsmasq:x:120:
    sonic@sonic-39:~$ 
    sudo groupadd --system libvirt
    
    sonic@sonic-39:~$ sudo groupadd --system libvirt
    groupadd: group 'libvirt' already exists
    sonic@sonic-39:~$ 
    sudo usermod -a -G libvirt $(whoami)
    sudo vi /etc/libvirt/qemu.conf
    
    # Some examples of valid values are:
    #
    #       user = "qemu"   # A user named "qemu"
    #       user = "+0"     # Super user (uid=0)
    #       user = "100"    # A user named "100" or a user with uid=100
    #
    #user = "root"
    user = "<your host user>" 
    
    
    # The group for QEMU processes run by the system instance. It can be
    # specified in a similar way to the user.
    group = "libvirt"
    sudo systemctl stop libvirtd
    sudo systemctl start libvirtd
    sudo systemctl status libvirtd
    
    sonic@sonic-39:~$ sudo systemctl status libvirtd
    ● libvirtd.service - Virtualization daemon
       Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
       Active: active (running) since Sat 2023-02-18 10:16:26 UTC; 27s ago
         Docs: man:libvirtd(8)
               https://libvirt.org
     Main PID: 68774 (libvirtd)
        Tasks: 33 (limit: 32768)
       CGroup: /system.slice/libvirtd.service
               ├─54120 /usr/bin/qemu-system-x86_64 -name guest=ftas03,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-ftas03/master-key.aes -machine pc-i440fx-1.5,accel
               └─68774 /usr/sbin/libvirtd
    
    Feb 18 10:16:26 sonic-39 systemd[1]: Starting Virtualization daemon...
    Feb 18 10:16:26 sonic-39 systemd[1]: Started Virtualization daemon.
    lines 1-13/13 (END)
    sudo apt-get install virt-manager
    Netplan configuration for Linux bridge using DHCP
    #/etc/netplan/00-installer-config.yaml
    network:
      ethernets:
        enp1s0:
          dhcp4: no
      bridges:
        br0:
          interfaces: [enp1s0]
          dhcp4: yes
          mtu: 1500
          parameters:
            stp: true
            forward-delay: 4
          dhcp6: no
      version: 2
    Netplan configuration for Linux bridge using static IP
    #/etc/netplan/00-installer-config.yaml
    network:
      ethernets:
        enp1s0:
          dhcp4: no
      bridges:
        br0:
          interfaces: [enp1s0]
          addresses: [172.16.1.100/24]
          gateway4: 172.16.1.1
          mtu: 1500
          nameservers:
            addresses: [8.8.8.8, 8.8.4.4]
          parameters:
            stp: true
            forward-delay: 4
          dhcp4: no
          dhcp6: no
      version: 2
    sudo netplan apply
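    After applying, the bridge can be verified with standard tools (brctl comes from the bridge-utils package installed earlier):

```shell
ip addr show br0    # br0 should now hold the management IP
brctl show br0      # enp1s0 should appear as a member interface
```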