QoS
Stress testing
EVPN/L2VXLAN enhancements
More SNMP coverage
Fast reboot testing
Platform CLI coverage
More verifications in resilience testcase
STP
Storm control
Static Lag
Dynamically find the route scalability limits
Route scalability enhancements
Default route advertisement
Max MED verification
Runtime optimization
MCLAG with Layer2
MCLAG with Layer3
EVPN VxLAN
IPv6 prefix route(4K, 64K, 128K) Scalability
BGP Dual Stack (64K and 128K) Scalability
Core file detection during test execution
ECMP Scalability
BGP Dual Stack Scalability
BGP Graceful Restart
New BGP netops usecases
Dynamic Port Breakout
Support for chaos on routed ports (no port channel configuration)
Support for chaos on routed port-channel
Debuggability enhancements
Syslog capture for all devices for the testcase duration
Techsupport dump in case of test failure
Printing DOM information if ports fail to come up
Easier identification of Ixia sessions by appending test id to session name
Scalability enhancements:
Replaced test cases with fixed scale values by variants that cover the supported range.
Parameterized the scale value up to the maximum supported range (example: scale_val = 64/128/256/512)
BGP Netops coverage:
eBGP multi-AS config, adjacency, route convergence and data path using router interface
eBGP Multi-AS Route Convergence and data path using loopback
BGP Node Drain - Add route map to remove and restore SPINE nodes 1 and 2 using the Community list
Test Link drain - Apply Route-Map permit, for IPv6 Traffic/Prefixes, and prefix lists
Node drain: with IPv6 Traffic, Test Node Drain (Spine 1 and Spine 2) for IPv6 Traffic/Prefixes
Debuggability enhancements:
Added device status data before and after test cases and added log messages.
Added FTAS version display in final report log.
Platform-specific suite files:
Added suite files for Wistron, Nvidia and EC
eBay specific suite files:
Added suite files for EC4630, EdgeCore AS-97xx
Platform/Version compatibility check:
Parameterized variables were added to validate the topology file and the supported SONiC version.
Support for auto cleanup before the test run:
Using this variable devices can be cleaned up forcefully before the test run.
Cleans up the ACL, routes, ipv4 interfaces, VLANs, port-channel configurations if any on the devices and brings down all the ports before the test run starts.
Users can log in to the FTAS VM using one of the following methods
Console login
SSH login
The default username is 'oper' with the default password 'oper@123'
After logging in, the user is dropped into a Bash shell
with the following pre-defined folders and files
qjob.py - Script to schedule execution jobs.
testbeds - Directory to create and maintain testbed parameter files.
jobs - Directory containing the JSON file that holds the job queue. qjob.py script controls and edits this JSON file. Please don't edit the JSON file manually.
testsuites - Directory to maintain testsuite yaml files.
reports - Directory to store HTML reports of completed jobs.
configs - Directory to store test configs
jobs.py - Script to manipulate queue jobs. It is imported by the qjob.py utility.
logs - Maintains execution logs file of all jobs. Users can clean up the files in the logs and reports folder to regain disk space when needed.
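As a quick orientation, the commands below (a minimal sketch; the paths are the ones listed above) show how to look around the home folder and follow a job log:

```bash
# list the pre-defined folders and files in the home directory
ls ~

# follow the execution log of queued jobs
tail -f ~/logs/jobs.log

# list the generated HTML reports
ls ~/reports/
```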
CPU: x86_64 8 cores or more with Virtualization enabled
Memory: 8GB or more system memory
Disk Space: 256GB or more available disk space
Network Interface: 1 GbE NIC
For FTAS with ONES integration, more disk space and RAM are needed
Memory: 16GB or more system memory
Disk Space: 512GB or more available disk space
Ubuntu 20.04 or later (64-bit)
Other flavours of Linux that support KVM should also be able to run the FTAS.
However, it's important to note that these alternative distributions have not been specifically tested for compatibility with FTAS. Users who opt for non-Ubuntu Linux systems to run FTAS should therefore be aware that they may encounter compatibility issues and may need to perform additional configuration and testing on their own.
KVM (Kernel-based Virtual Machine) is the leading open-source virtualisation technology for Linux. It installs natively on all Linux distributions and turns underlying physical servers into hypervisors so that they can host multiple, isolated virtual machines (VMs).
We use KVM as the hypervisor for the FTAS VM because, as a type-1 hypervisor, it outperforms type-2 hypervisors and delivers near-metal performance.
Please refer to the following steps to install it on the host machine
Ensure that the latest Ubuntu packages are installed
Install KVM packages
Check if KVM acceleration is ready
Add user to libvirt group
Verify if the libvirt user group is available using the below command
If the libvirt group is not available, it can be created using the below command
Then add the current user to the group
Set user and group for qemu. Update the qemu config with your user and libvirt group
Restart the libvirtd service
Check the status of the libvirtd service
If your server has a GUI desktop installed, you may want to install virt-manager. The virt-manager application is a desktop Graphical user interface for managing virtual machines through libvirt
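A minimal command sketch for the steps above on Ubuntu (package names and group handling may vary slightly between releases; adjust as needed):

```bash
# ensure the latest Ubuntu packages are installed
sudo apt update && sudo apt -y upgrade

# install the KVM/libvirt packages (cpu-checker provides kvm-ok)
sudo apt -y install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils cpu-checker

# check if KVM acceleration is ready
kvm-ok

# verify the libvirt group exists, create it if missing, then add the current user
getent group libvirt || sudo groupadd libvirt
sudo usermod -aG libvirt "$USER"

# set user and group for qemu in /etc/libvirt/qemu.conf, e.g.:
#   user  = "<your-username>"
#   group = "libvirt"

# restart and check the libvirtd service
sudo systemctl restart libvirtd
systemctl status libvirtd

# optional: desktop GUI for managing virtual machines
sudo apt -y install virt-manager
```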
It is recommended that the virtual NIC on the VM should be bridged with the physical NIC on the host machine.
In this sample configuration enp1s0 is the physical NIC of the host machine, which is typically used for SSH (management).
Assign the bridge the same static IP that was configured on the physical management NIC (enp1s0).
After this step, the IP will be re-assigned to the bridge interface (br0) and the physical interface (enp1s0) will act as a Layer-2 interface.
Apply the above configuration
This step will reset the SSH connection and reassign the static IP from the physical interface(enp1s0) to the bridge interface(br0).
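If the sample configuration is not at hand, the following is a minimal netplan sketch for the bridge; the file name, addresses and gateway are placeholders, and the NIC name should match your host:

```bash
# create a netplan file that moves the static IP from enp1s0 to the bridge br0
sudo tee /etc/netplan/01-br0.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [192.168.1.10/24]   # placeholder: use the management NIC's static IP
      gateway4: 192.168.1.1          # placeholder gateway
EOF

# apply the configuration (this resets the SSH connection as described above)
sudo netplan apply
```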
Fabric Test Automation Suite (FTAS) is a comprehensive collection of test cases packaged as a virtual machine. The Objective of FTAS is to verify the necessary features and functions of SONiC NOS for Fabric Deployment Readiness. The test cases are primarily focused on qualifying the fabric for functions, features, scale, day-2 operations and chaos scenarios.
Traffic Generator | Type | Version |
---|---|---|
IXIA | UHD | Chassis Type: Ixia UHD; Chassis Version: IxOS 9.10.2300.159; Protocol Build Number: 9.10.200; Card Type: UHD100T32; UHD: 1.5.49 |
IXIA | NOVUS | Chassis Type: Ixia XGS2; Chassis Version: IxOS 9.12.2100.7; Protocol Build Number: 9.12.2009.10; Card Type: NOVUS100GE8Q28+FAN+25G+50G |
The VM image is provided as a gzip file. It should be decompressed to get the qcow2 image to deploy as a VM.
The FTAS VM has Aviz ONES integrated into it and will take some time to initialise after the first boot.
You can connect to the console port of the VM to see the installation logs.
If your host server has Ubuntu Desktop and virt-manager installed you can use it to deploy the VM. Make sure you can start the Virtual Machine Manager and that it connects successfully to the local hypervisor.
Creating a VM with virt-manager is very straightforward. Use the following steps to deploy the FTAS VM
File -> New Virtual Machine -> Import existing disk image -> Forward
Browse to the FTAS disk image location and select Ubuntu as the OS name
Click "Forward" and select vCPU (min 2 cores) and Memory (4GB) for the VM
Click "Forward", give your VM a name and check "Customize configuration before install"
Select "NIC ...", in the "Network source" select the Linux bridge you created on the host machine
Apply the configuration and start the VM
Create an XML configuration file from the following template
The below lines can be changed to customize the VM installation:
Create a Linux bridge configuration file (bridged-network.xml) for libvirt from the following template
Define the Linux bridge for the VM
Start the VM
If you see a permission error, running the virsh command with sudo may fix the issue
Check the VM status
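A minimal sketch of the bridged-network.xml content and the virsh commands for the steps above; the network name, the VM name and the domain XML file name (ftas-vm.xml) are placeholders for whatever you created from the template:

```bash
# bridged-network.xml - libvirt network that attaches guests to the host bridge br0
cat > bridged-network.xml <<'EOF'
<network>
  <name>bridged-network</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

# define and start the Linux bridge network for the VM
virsh net-define bridged-network.xml
virsh net-start bridged-network
virsh net-autostart bridged-network

# define the VM from the XML template and start it (prefix with sudo if you hit permission errors)
virsh define ftas-vm.xml
virsh start ftas-vm

# check the VM status
virsh list --all
```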
If there is a DHCP server on the management network the VM will obtain its IP configuration from the DHCP server
If there is no DHCP server, or you want to configure the IP address statically, follow the below steps
Enter VM console
The default username is 'oper' with the default password 'oper@123'
Check connections and devices
Release IP assigned by DHCP
Configure static IP for the connection
Set a default Gateway address
Set the IP configuration mode to manual
Reapply the configuration to the interface
Verify the IP address
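A minimal nmcli sketch for the static-IP steps above; the connection name "Wired connection 1", the VM name and the addresses are placeholders:

```bash
# enter the VM console from the KVM host
virsh console <vm-name>

# check connections and devices
nmcli connection show
nmcli device status

# release the DHCP-assigned IP by taking the connection down
sudo nmcli connection down "Wired connection 1"

# configure the static IP, default gateway and manual mode
sudo nmcli connection modify "Wired connection 1" \
    ipv4.addresses 192.168.1.20/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.method manual

# reapply the configuration to the interface
sudo nmcli connection up "Wired connection 1"

# verify the IP address
ip addr show
```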
Test FTAS VM reachability from outside the VM. If the VM is not reachable, please check the access rule at the below location,
If the above value is 1
please change it to 0
and the reachability issue should be resolved
There are some scaling scripts that require multiple network service servers (NTP, SYSLOG, TACACS+, etc.). In order to simulate this, we can add a secondary IP address to the VM NIC.
To add a secondary IP address, use the command
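For example (a sketch with placeholder interface/connection names and addresses), a secondary address can be added either transiently with ip or persistently with nmcli:

```bash
# transient secondary address (lost on reboot)
sudo ip addr add 10.10.10.2/24 dev enp1s0

# persistent secondary address via NetworkManager
sudo nmcli connection modify "Wired connection 1" +ipv4.addresses 10.10.10.2/24
sudo nmcli connection up "Wired connection 1"
```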
The FTAS VM has docker containers running and the following docker images installed:
DHCP container image ztp_dhcp (DHCP service)
ztp_dhcp(DHCP) services are not run by default as it might conflict with DHCP running in the DC infra.
Net Services container image netservices:v1
(NTP, SYSLOG, TACACS+ services). This container is run with the "--network=host" option. If you need to change the configurations of the services please find them in the following configuration files.
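To see what is installed and running inside the FTAS VM, the standard Docker commands can be used (container names are whatever the VM ships with):

```bash
# installed images (ztp_dhcp, netservices:v1, ...)
docker images

# containers currently running
docker ps

# follow the logs of a given container
docker logs -f <container-name>
```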
Dockers running by default:
The FTAS test_runner service collects the logs from the test case execution and saves them to the "~/logs/jobs.log" file.
FTAS also creates an HTML version of the reports, available at "~/reports/test_report_20230218_**/".
To view the HTML test report, visit the URL at http://<VM IP addr>:8090/
In the home folder of the logged-in user, there is a Python script named "qjob.py". This script handles test scheduling.
Following is a brief usage of the script:
The script can take the following actions:
Show the current queue
When no tests are scheduled the script will show an empty queue.
Adding test suite to the queue
We can add multiple test suites to the queue at any time to be executed.
Removing the test suite from the queue
Changing queue status
There are two statuses in the job queue:
When the job queue is in "paused" status, the test_runner service does not pick any job in the queue for execution.
When the job queue is in "running" status, the test_runner service picks the oldest job in the queue for execution. After the test case execution is complete, the test_runner service changes the queue status to "paused".
Changing the queue status from running to paused while a job is running won't stop the running job, but the test_runner service won't pick the next job for execution.
Kill or terminate a running job
After terminating the running job, the test_runner service pauses the queue.
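The exact command syntax is shown by the script's own usage text; as a hedged sketch:

```bash
cd ~
# running the script without arguments (or with its help option, if available)
# typically prints the built-in usage text covering the actions listed above
python3 qjob.py
```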
Before scheduling any jobs, validate the physical testbed to make sure all links are connected and operationally UP. You also need a clean configuration file with valid interface settings (breakout, speed, FEC, admin status, etc.), no IP interfaces, no BGP instances, and no QoS. The scripts use this clean configuration file to restore the DUT to its default configuration as part of the clean-up process.
The cleanup configuration file should be created at /etc/sonic/clean_config.json. The clean configuration file follows the config_db.json format. It must include the port-related settings for lane mapping, speed, and admin status.
For every DUT in the testbed, back up the default config (if needed)
Start with the default configuration after a fresh installation of SONiC
Alternatively, you can also create a clean config by editing the config_db.json as below
Edit config_db.json and remove the following configuration blocks, then save the file
VLAN
VLAN_MEMBER
PORTCHANNEL
PORTCHANNEL_MEMBER
BGP configuration
Loopback interfaces
Edit "DEVICE_METADATA
" in /etc/sonic/config_db.json
as below
Configure "hostname" for each device (Example: Leaf01, Leaf02, Spine01, Spine02)
Add "docker_routing_config_mode": "split" configuration
Configure the below list through config_db.json:
The management IP address for eth0 and gateway
Port breakout (if any)
Port speed, FEC, Auto-negotiation on links connected to other devices and Ixia
Configure "admin_status": "down"
Save config_db.json
Copy config_db.json to /etc/sonic/clean_config.json
clean_config.json should be built using either a text editor or the SONiC CLI, but not both.
Load the configuration on the device
Add the following line to /etc/sonic/frr/vtysh.conf
Cleanup the BGP configuration from FRR
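A hedged command sketch of the clean-configuration workflow above, run on each DUT; the BGP ASN is a placeholder, and the vtysh.conf line shown is the one commonly required for split routing-config mode, so verify it against your SONiC release:

```bash
# back up the default configuration (if needed)
sudo cp /etc/sonic/config_db.json /etc/sonic/config_db.json.bak

# after editing config_db.json as described above, copy it as the clean-up file
sudo cp /etc/sonic/config_db.json /etc/sonic/clean_config.json

# load the configuration on the device
sudo config load /etc/sonic/clean_config.json -y

# split routing-config mode commonly requires this line in /etc/sonic/frr/vtysh.conf
echo "no service integrated-vtysh-config" | sudo tee -a /etc/sonic/frr/vtysh.conf

# clean up any BGP configuration left in FRR
sudo vtysh -c "configure terminal" -c "no router bgp <ASN>"
```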
Build the Chaos testbed file using "~/testbeds/ftas_chaos_topo.py"
The chaos test suite loads a base configuration to all DUTs and Ixia for its test scripts. So ensure the following variables are set to False in the testbed file.
Please make sure DUTs have a clean or default configuration before the Chaos test run.
The Chaos suite generates a statistics report file at ~/reports/report.txt to track its execution status and metric data for all test scripts. Please make sure to remove this file before running the Chaos test to avoid Ixia library errors.
Test suite configuration is a text file where you can list all test functions (each test function is a test case) for a batch run. Test suite files are stored in the "~/testsuites" folder
Below is an example of how a test suite file is structured:
TEST***_FOLDER - Path to the test artefacts including testbeds, testsuites and reports
TEST_CONTACT - Email address of the test owner. This information is included in the test report
TESTSUITES - Define the sub-test suites in a key-value pair for execution
"./essential/taas_platform_Interface_test.py" - Location of the test script files.
SKIP - Defines whether a particular test script will be included or excluded from the execution. If set to "SKIP": true, then the tests will be excluded from execution.
COMMON_TESTBED - The testbed file all test cases use.
TESTCASES - A list of test functions (test cases) with the structure of "TESTCASES": [{"test_syslog_004": ""},{"test_syslog_002": ""}].
FTAS comes with the following predefined suites, which can be run directly on the applicable platform and release combinations. These suite files are present in the ~/testsuites directory.
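Rather than writing a suite file from scratch, it is usually easiest to start from one of the bundled files and adjust the keys described above (the target file name below is just an example):

```bash
ls ~/testsuites

# copy a bundled suite file and edit TESTSUITES / SKIP / COMMON_TESTBED / TESTCASES as needed
cp ~/testsuites/PI.suite ~/testsuites/my_run.suite
vi ~/testsuites/my_run.suite
```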
A Testbed parameter file is a Python script which defines the testbed parameters as variables. Testbed files are available in the ~/testbeds folder.
All test scripts except Chaos can be run with full mesh 4 DUTs topology
Use the following sample script and steps to create your own testbed file.
Update the above details in the testbed file with your DUTs - DUT connections, Management IP, Login credentials, Link Speed etc.
The "name" parameter is very important. Provide a string to identify the respective for this parameter. This name is displayed in logs for easy identification of devices.
Refer to the topology diagram to find the link variables and update their values
All testbed files can be found in the folder "~/testbeds/"
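Likewise, a new testbed file is easiest to create from one of the shipped samples (the target file name below is an example):

```bash
ls ~/testbeds/

# start from a shipped sample (e.g. 2dut_topo.py or ftas_chaos_topo.py) and edit it
cp ~/testbeds/2dut_topo.py ~/testbeds/my_topo.py
vi ~/testbeds/my_topo.py   # update DUT links, management IPs, credentials, link speed and "name"
```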
There are two variables to control the cleanup after each test run in the testbed file.
They are CLEANUP_BY_REBOOT and CLEANUP_BY_CFG_RELOAD:
CLEANUP_BY_REBOOT = True: the script will restore the switch's configuration from the /etc/sonic/clean_config.json file and then reboot the switch. This process consumes additional execution time, but it ensures that the DUTs (Devices Under Test) are consistently configured in a clean and proper manner for subsequent test scripts.
CLEANUP_BY_CFG_RELOAD = True: the script will restore the switch's configuration from the /etc/sonic/clean_config.json file and then issue the sudo config load command to load the clean config file. This method takes less time to have a clean configuration on switches but may not work correctly sometimes.
If both CLEANUP_BY_REBOOT and CLEANUP_BY_CFG_RELOAD are set to False, the scripts use the SONiC CLI procedure to un-configure whatever was configured on the switches by the scripts.
Ideally, the below three parameters should be set to False
CLEANUP_BY_REBOOT = False
CLEANUP_BY_CFG_RELOAD = False
CFG_RELOAD_BY_REBOOT = False
CLEANUP_BEFORE_TEST_RUN = False by default; set it to True to forcefully clean up all the devices before the test run.
The following configuration will be cleaned up.
Apart from these, the following variables are specified in the testbed file:
CFG_RELOAD_BY_REBOOT = True, The scripts initiate device reboots in instances where the config reload command fails on the DUT due to certain reasons. This measure is taken as a workaround for such situations.
REBOOT_WAIT_TIME = 0, Maximum wait time for the device to reboot
NTP_SERVER = <FTAS VM IP>, FTAS VM serves as NTP server.
SYSLOG_SRVS = {"Servers": ["<FTAS VM IP>", "10.4.5.6"], "Log_Folder": "/var/log/sonic_logs"}, the first list member should be set to the FTAS VM IP
MAX_V4_ACL, Maximum IPv4 ACL rules supported on the platform
MAX_V6_ACL, Maximum IPv6 ACL rules supported on the platform
MAX_SECONDARY_SUBNET, Maximum secondary subnets supported for SVI interface
MAX_IPV4_HOST_ROUTES, Maximum IPv4 host routes supported on the platform
MAX_IPV6_HOST_ROUTES, Maximum IPv6 host routes supported on the platform
MAX_IPV4_PREFIX_ROUTES, Maximum IPv4 prefix routes supported on the platform
MAX_IPV6_PREFIX_ROUTES, Maximum IPv6 prefix routes supported on the platform
TECHSUPPORT = True, Takes techsupport dump (if True) for the DUTs in case of failures
TECHSUPPORT_SINCE = "hour ago", Specifies the argument to the show techsupport command, in case TECHSUPPORT = True
TECHSUPPORT_TIMEOUT = 300, Specifies the worst-case timeout value for techsupport dump generation, in case TECHSUPPORT = True
ACCEPTABLE_DELTA = 0.5, Threshold value for acceptable packet/frame loss percentage.
CPU_MEM_THRESHOLD = 5 , Threshold value for acceptable change in CPU/memory utilization percentage
STRESS_AVAIL_CORES = 2 , Number of CPU cores reserved for system use; rest other CPU cores will undergo stress testing
STRESS_MEM_UTIL = 85 , Targeted percentage of total system memory to allocate for stress testing
SERVER_IP = "x.x.x.x" , IP address of the server hosting the stress-ng Docker image
SERVER_USER_ID = "oper" , User ID for SCP access to the server hosting the stress-ng Docker image
SERVER_PASSWORD = "oper@123" , Password for SCP access to securely transfer the stress-ng Docker image
Specify the port connected to the DUT like this: "s1_p1": "8", where the port number 8 is connected to Spine01_Port01 for Ixia traffic.
For example, Ixia port 8 can be configured like this: "localuhd/8", where localuhd refers to the Ixia chassis and 8 is the UHD port number.
Specify the port connected to the DUT like this: "s1_p1": "1;8", where the port number 8 on Ixia card 1 is connected to Spine01_Port01 for Ixia traffic.
For example, Ixia port 1;8 can be configured like this: "<chassis_ip>;<card_no>;<Port_no>", where <chassis_ip> refers to the Ixia chassis IP and 1;8 is the port with the card number.
The FTAS VM has been extensively tested with UHD version 1.3.3003.118; it is recommended to use the same UHD version for smooth operation.
Ixia port format varies and depends on the Ixia Chassis type
For testbeds where the links between DUTs have less bandwidth than the Ixia ports, traffic tests may fail due to traffic drops on the lowest-bandwidth links (e.g. DUTs may be connected with 1G ports while the Ixia link is 100G).
To avoid this situation, a global parameter for the Ixia traffic rate, "global_traffic_rate" (in percentage), can be used to enforce the traffic rate in all tests.
When the global_traffic_rate parameter is defined as a sub-parameter in "Ixia_Ports", all traffic streams in all test scripts override their own rate value with the configured global_traffic_rate value.
When global_traffic_rate is not defined, all traffic streams in all test scripts use their own static values for the traffic rate.
Here "global_traffic_rate": 25 refers to 25% speed for all Ixia ports
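As a sketch only (the exact key names and port-string formats should be taken from the shipped testbed files), the Ixia port mapping with a global traffic rate might look like this inside the testbed file:

```bash
# illustrative snippet appended to a copied testbed .py file (UHD chassis example)
cat >> ~/testbeds/my_topo.py <<'EOF'
Ixia_Ports = {
    "global_traffic_rate": 25,   # all streams run at 25% of port speed
    "s1_p1": "localuhd/8",       # UHD port 8 connected to Spine01_Port01
}
EOF
```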
CLI_TIMEOUT, set the integer value for this variable when the command execution is slow or taking more than 30 sec to respond. Set the default value to 0 in the testbed file.
If you don't have a 4-DUT topology and want to run scripts that require only 2 DUTs, you can use the 2dut_topo.py testbed parameter file.
There are two physical DUTs in the topology, but the script might pick the names Spine and Leaf interchangeably. So in the testbed file we should define parameters for both, but they point to the same physical Spine and Leaf DUTs.
ftas_chaos_topo.py (sample topology file) is the testbed parameter file used for Chaos test scripts only.
A new parameter, PCH_CONFIGURATION = False, has been added. This parameter determines whether the tests run on routed port (if False) or routed PortChannel (if True) configuration for the interfaces connecting DUTs.
Suitefile | Description | Number of testcases |
---|---|---|
PI.suite | All platform independent testcases | 105 |
PD.suite | All platform dependent testcases | 96 |
data_1dut.suite | All one DUT testcases applicable on data switches | 19 |
data_2dut.suite | All two DUT testcases applicable on data switches | 112 |
data_3dut.suite | All three DUT testcases applicable on data switches | 8 |
data_4dut.suite | All four DUT testcases applicable on data switches | 61 |
data_complete.suite | All testcases applicable on data switches | 201 |
mgmt_1dut.suite | All one DUT testcases applicable on management switches | 18 |
mgmt_2dut.suite | All two DUT testcases applicable on management switches | 77 |
mgmt_3dut.suite | All three DUT testcases applicable on management switches | 8 |
mgmt_complete.suite | All testcases applicable on management switches | 111 |
edgecore_4630_202111.suite | All testcases applicable on Edgecore 4630 platform for 202111 release | 142 |
edgecore_9716_202111.suite | All testcases applicable on Edgecore 9716 platform for 202111 release | 181 |
nvidia_202205.suite | All testcases applicable on NVIDIA platforms for 202205 release | 200 |
wistron_3200_ecs2.0.0.suite | All testcases applicable on Wistron 3200 platforms for ECS 2.0.0 release | 184 |
wistron_6512_ecs2.0.0.suite | All testcases applicable on Wistron 6512 platforms for ECS 2.0.0 release | 184 |
Variable with Default values | Description |
---|---|
INTF_UP_WAIT_TIME = 30 | Timeout for the interface to be 'Operationally UP' |
CLI_TIMEOUT = 0 | Assign an integer value to this variable if the command execution on the device experiences slowness or takes longer than 30 seconds to respond. If the value is not explicitly set, the default will be 0, implying a 30-second wait time for the command execution output. |
MAX_V4_ACL = 64 | The maximum number of supported IPV4 ACL rules. The variable used in the testcase test_v4_acl_scale_max_supported in test script scalability/taas_qual_scale.py |
MAX_V6_ACL = 64 | The maximum number of supported IPV6 ACL rules. The variable used in the testcase test_qual_v6_acl_scale_max_supported in test script scalability/taas_qual_scale.py |
MAX_SECONDARY_SUBNET = 25 | The maximum number of supported secondary subnets under a vlan. The variable used in the testcase test_max_secondary_subnet_under_vlan in test script scalability/taas_qual_scale.py |
MAX_IPV4_HOST_ROUTES = 1000 | The maximum number of IPv4 host routes supported. The variable used in the testcase test_v4_host_routes_scale_max_supported in test script scalability/taas_qual_scale.py |
MAX_IPV6_HOST_ROUTES = 1000 | The maximum number of IPV6 host routes supported. The variable used in the testcase test_v6_host_routes_scale_max_supported in test script scalability/taas_qual_scale.py |
MAX_IPV4_PREFIX_ROUTES = 1000 | The maximum number of IPV4 prefix routes supported. The variable used in the testcase test_v4_prefix_routes_scale_max_supported in test script scalability/taas_qual_scale.py |
MAX_IPV4_NEXTHOPS = 256 | The maximum number of IPV4 next-hop supported. The variable used in the testcase test_v4_nexthops_scale_max_supported in test script scalability/taas_qual_scale.py |
STRESS_AVAIL_CORES = 2 | Number of CPU cores reserved for system use. Rest other CPU cores will undergo stress testing |
STRESS_MEM_UTIL = 85 | Targeted percentage of total system memory to allocate for stress testing |
This category covers the validation of mandatory features and functions required for data center deployments.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
This section verifies the mandatory functions for management operation in a Fabric
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
The Aviz Network Support team can be reached by
Sending an email to
Submitting a Ticket at
Live Chat on
A ticket can be submitted with or without an account at the
Mandatory Fields:
Subject
Issue Type (Post Deployment, Pre-Deployment, General Query, RMA)
Priority (Low, Normal, High, Urgent)
Description
Optional Fields:
External ID (Community Request ID or Past Case Number)
Hardware (Switch Model)
ASIC vendor (chipset)
Serial Number
Host Name
Attachments (Tech Support Dump, Screenshots, Logs)
For Technical Issues, we recommend the description include the following:
Repro steps, if the issue is reproducible
The sequence of events that lead to the failure state
Artefacts - Tech Support dump (tar.gz file), Logs, Command Outputs, Topology Diagrams etc...
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Description | Test Case ID | PD | Topology |
---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
No. | Description | Test Case ID | PD | Topology |
---|---|---|---|---|
PD(Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.
Verify SPAN with source ports (and LAG) to the destination port in ingress/egress/both directions. | test_port_span_001 | Yes |
Verify ERSPAN by configuring a mirror with the list of source ports/LAG to destination IP in ingress/egress/both directions | test_port_span_002 | Yes |
Verify ERSPAN by configuring a mirror from ACL match to destination IP in ingress directions | test_port_span_003 | Yes |
Verify whether SNMP configurations | test_snmp_config | No |
Verify SNMP Get/GetNext/Walk requests MIBS: ENTITY, IF-MIB, IP-MIB, | test_snmp_commands | No |
Verify Config load for incremental configuration | test_config_load | No |
Verify IPv6 address configuration on Front Panel (Data) ports | test_front_panel_ports_ipv6 | No |
Verify config reload to restore configuration | test_cfg_backup_restore | No |
Verify syslogs are generated properly on link down/up | test_syslog_002 | No |
Verify syslogs are generated properly on LACP UP/Down | test_syslog_004 | No |
Verify SSH from host to SONIC on management interface | test_ssh_001 | No |
Verify SSH from host to SVI interface and routed port | test_ssh_002 | No |
Verify whether the session is successfully closed right after SSH disconnect from the client. | test_ssh_003 | No |
Verify Tacacs+ with AAA authentication | test_tacacs_001 | No |
Verify NTP server works as clock source correctly | test_ntp_007 | No |
Verify timezone can be manually configured. | test_timezone_001 | No |
Verify ping from SONIC SVI interface and routed port | test_ping_001 | No |
Verify that ping works properly with multiple parameter combination | test_ping_009 | No |
Verify that SONiC Version and serial information can be retrieved via SNMP_WALK command | test_snmp_walk_version_serial | No |
Verify that SONiC interface index information can be retrieved via SNMP_WALK command | test_snmp_walk_inf_index | No |
Verify that SONiC interface name information can be retrieved via SNMP_WALK command | test_snmp_walk_inf_name | No |
Verify that SONiC interface admin and oper status info can be retrieved via SNMP_WALK command | test_snmp_walk_inf_admin_oper | No |
Verify that SONiC interface type info can be retrieved via SNMP_WALK command | test_snmp_walk_inf_type | No |
Verify that tagged ports vlan id can be retrieved using the SNMP_WALK command | test_snmp_walk_vlan_tagged_ports | No |
Verify that untagged ports vlan id can be retrieved using the SNMP_WALK command | test_snmp_walk_vlan_untagged_ports | No |
Verify that LLDP neighbor info. can be retrieved via SNMP_WALK command | test_snmp_walk_lldp | No |
Verify that routing information can be retrieved via SNMP_WALK command | test_snmp_walk_ip_routing_info | No |
Verify that SONiC IP interface index and netmask info can be retrieved via SNMP_WALK command | test_snmp_walk_ip_inf_index_and_netmask | No |
Verify that ping works properly when using LACP | test_ping_011 | No |
Verifying IPv4 (1518) MTU | test_mtu_001 | Yes |
Verify MTU functionality for Jumboframe packets | test_ports_mtu_002 | Yes |
Enable LLDP globally and disable per-port basis | test_lldp_001 | No |
Verify that user can enable/disable LLDP globally | test_lldp_002 | No |
Verify LLDP neighbors are learnt properly with proper ChassisID, portID, system name, system | test_lldp_013 | No |
Verify LACP member addition and removal | test_lacp_003 | No |
Verify LACP functionality across reboot | test_lacp_005 | No |
Verify LACP functionality after link failover/failback of physical interface | test_lacp_011 | No |
Verify LACP functionality after removal and addition of port-channel member | test_lacp_012 | No |
Verify whether user can create/delete VLAN | test_vlan_001 | No |
Verify whether user can add/modify/delete ports to the VLAN as tagged/untagged members | test_vlan_002 | No |
Verify the ability to configure a port as untagged VLAN member | test_vlan_004 | No |
Verify the ability to configure a port as tagged VLAN member | test_vlan_005 | No |
Verify that the user can configure port-channel interface as untagged VLAN member | test_vlan_007 | No |
Verify that the user can configure port-channel interface as tagged VLAN members | test_vlan_008 | No |
Warm Reboot - Device configuration impact for VLAN Config | test_vlan_011 | Yes |
Verify whether user can configure port as untagged member of a VLAN | test_vlan_014 | No |
Verify whether known unicast traffic is forwarded to the destination port-channel | test_vlan_016 | No |
EVPN_VXLAN Configuration and show commands | test_bgp4_evpn_vxlan_001 | Yes |
EVPN VXLAN for known unicast, BUM traffic (eBGP) with RIF | test_bgp4_evpn_vxlan_002 | Yes |
EVPN VXLAN for known unicast, BUM traffic (eBGP) with SVI | test_bgp4_evpn_vxlan_003 | Yes |
EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - RIF | test_bgp4_evpn_vxlan_005 | Yes |
EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - SVI | test_bgp4_evpn_vxlan_006 | Yes |
EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - RPCH | test_bgp4_evpn_vxlan_007 | Yes |
Asymmetric IRB with EVPN eBGP | test_bgp4_evpn_vxlan_015 | Yes |
Asymmetric IRB with EVPN iBGP | test_bgp4_evpn_vxlan_016 | Yes |
MC-LAG L2 validation using port-channel configuration | test_mclag_layer2_steady_state | Yes |
MC-LAG L2 validation, Bring down the member link of SPINE01 | test_mclag_layer2_member_link_down | Yes |
MC-LAG L2 keepalive link down | test_mclag_layer2_peer_link_down | Yes |
MCLAG-L2 Active Reboot | test_mclag_layer2_active_reboot | Yes |
MCLAG-L2 Standby Reboot | test_mclag_layer2_standby_reboot | Yes |
Storm control CLI | test_storm_control_cli_verification | Yes |
DUT throws proper error for invalid storm-control input | test_storm_control_invalid_input | Yes |
Storm control with broadcast traffic | test_storm_control_broadcast | Yes |
Storm control with unknown-unicast traffic | test_storm_control_unknown_unicast | Yes |
Storm control with unknown-multicast traffic | test_storm_control_unknown_multicast | Yes |
Storm control configuration and behavior during warm-reboot | test_storm_control_warm_reboot | Yes |
Configure STP on the devices, check for loop-free topology with root bridge selection | test_configure_stp_validate | Yes |
Enable STP, ensure loop-free topology, configure priority and spine set as root bridge | test_stp_priority | Yes |
Edge port transition to forwarding state with portfast enabled | test_port_fast | Yes |
Create a static LAG and verify the traffic flow | test_pch_creation | Yes |
Add/delete members to static LAG and verify the traffic flow | test_pch_sec_member_add_del | Yes |
Static LAG recovers after restarting the teamd container | test_lag_docker_teamd_reboot | Yes |
Static LAG entry in redis | test_create_pch_check_rediscli | Yes |
Static LAG member entry in redis | test_mem_pch_rediscli_check | Yes |
Shut and no shut the static LAG | test_shut_noshut_pch | Yes |
Verify that ping works over ECMP | test_ping_013 | No |
Verify that IP address can be configured over SVI | test_IP_001 | No |
Verify that IP address can be configured over routed port | test_IP_002 | No |
Verify SVI and routed ports can be admin down or up | test_IP_005 | No |
Verify connected route gets created for the SVI subnet in the ip route table. | test_IP_006 | No |
Verify IP interface is operational for SVI with LACP portchannel members | test_IP_011 | No |
Verify ip address can be configured over routed PCH. | test_IP_014 | No |
Verify BGP AS configuration works properly | test_bgp_001 | No |
Verify BGP peering happens with nodes in same AS and iBGP neighbor table gets updated properly | test_bgp_002 | No |
Verify BGP peering happens with nodes in different AS and eBGP neighbor table gets updated properly | test_bgp_003 | No |
Verify BGP route learning using eBGP with routes injected from IXIA | test_bgp_004 | No |
Verify BGP route removal using eBGP with routes withdrawn from IXIA | test_bgp_005 | No |
Verify BGP route relearn over different neighbor when interface is shutdown | test_bgp_006 | No |
Verify unnumbered functionality with iBGP | test_qual_bgp_001 | Yes |
Verify unnumbered functionality with eBGP | test_qual_bgp_002 | Yes |
Verify BGP route redistribution in DUT | test_qual_bgp_003 | No |
Verify BGP6 functionality | test_qual_bgp_004 | No |
Verify BGPV6 Functionality in DUT | test_qual_bgp_ebgp_004 | No |
Verify BGP AS-PATH prepend functionality | test_qual_bgp_007 | No |
Verify BGP route map match prefix list, access-list deny and permit functionality | test_qual_bgp_008 | No |
Verify BGP route map match AS-PATH permit and deny functionality | test_qual_bgp_009 | No |
Verify BGP route map match community list permit and deny functionality | test_qual_bgp_010 | No |
Verify BGP max MED functionality | test_qual_bgp_011 | No |
Verify BGP maximum prefix limit per peer functionality | test_qual_bgp_013 | No |
Verify BGP communities functionality | test_qual_bgp_014 | No |
Verify BGP regexp match single and multi AS permit & deny action using AS-path access lists | test_qual_bgp_015 | No |
Verify BGP regexp match any AS permit and deny action using AS-path access lists | test_qual_bgp_016 | No |
Verify BGP regexp match range of BGP communities functionality | test_qual_bgp_017 | No |
Verify BGP peering working with BGP listen range | test_qual_bgp_019 | No |
Verify VRF functionality | test_qual_vrf | Yes |
Verify VLAN 1 support for Host connectivity | test_qual_vlan1 | Yes |
Verify IPV6 neighbor discovery | test_qual_ipv6_neighbor | No |
Verify L3 DROP ACL functionality with matching source IP and source port | test_acl_001 | No |
Verify L3 DROP ACL functionality with matching destination IP and destination port | test_acl_002 | No |
Verify L3 DROP ACL functionality with matching SIP, DIP, SPORT, DPORT | test_acl_003 | No |
Verify L3 DROP ACL with ACL rule having subnet mask | test_acl_004 | No |
Verify L3 DROP ACL - Test acl rule with protocol = TCP | test_acl_005 | No |
Verify L3 DROP ACL - Test acl rule with protocol = UDP | test_acl_006 | No |
Verify L3 PERMIT ACL functionality with matching source IP and source port | test_acl_007 | No |
Verify L3 PERMIT ACL functionality with matching destination IP and destination port | test_acl_008 | No |
Verify L3 PERMIT ACL functionality with matching SIP, DIP, SPORT, DPORT | test_acl_009 | No |
Verify L3 PERMIT ACL with ACL rule having subnet mask | test_acl_010 | No |
Verify L3 PERMIT ACL - Test acl rule with protocol = TCP | test_acl_011 | No |
Verify L3 PERMIT ACL - Test acl rule with protocol = UDP | test_acl_012 | No |
Verify Drop ACL (IPv6) for matching source IPv6/L4 address and source IPv6L4 port | test_qual_ip6_acl_001 | No |
Verify Drop ACL (IPv6) for matching destination IPv6/L4 port and source IPv6L4 port | test_qual_ip6_acl_002 | No |
Verify drop ACL - matching IPv6 params subnet, dst, src ports combined | test_qual_ip6_acl_003 | No |
Verify PERMIT ACL (IPv6) for matching source IPv6/L4 address and source IPv6L4 port | test_qual_ip6_acl_004 | No |
Verify Permit ACL (IPv6) for matching destination IPv6/L4 port and source IPv6L4 port | test_qual_ip6_acl_005 | No |
Verify drop ACL - matching IPv6 params subnet, dst, src ports combined | test_qual_ip6_acl_006 | No |
Verify whether static ARP entry can be configured | test_arp_003 | No |
Verify that the DUT will respond to an ARP Request for the SVI interface | test_arp_007 | No |
Verify whether clear ARP entries works properly | test_arp_011 | No |
Verify whether ARP entries are flushed after some time | test_arp_012 | Yes |
Verify eBGP multi-AS config, adjacency, route convergence and data path using router interface | test_bgp_netops_001 | No |
eBGP Multi-AS Route Convergence and data path using loopback | test_bgp_netops_002 | No |
BGP Node Drain - Add route-map to remove and restore SPINE node 1 and 2 using Community list | test_bgp_netops_003_004 | No |
Test Link drain - Apply Route-Map permit | test_bgp_netops_005_006 | No |
Node drain: with IPv6 Traffic Test Node Drain (Spine 1 and Spine 2) for IPv6 Traffic/Prefixes | test_bgp_netops_007 | No |
Link drain with IPv6 Traffic Test Link Drain for IPv6 Traffic/Prefixes | test_bgp_netops_008 | No |
Node drain using prefix-lists | test_bgp_netops_009 | No |
Test Link Drain with Prefix Lists | test_bgp_netops_010 | No |
Node drain/restore using AS path prepend list | test_bgp_netops_011_012 | No |
Link drain using AS path prepend | test_bgp_netops_014 | No |
MC-LAG L3 validation using port-channel configuration | test_mclag_layer3_steady_state | Yes |
MC-LAG L3 validation, Bring down the member link of SPINE01 | test_mclag_layer3_member_link_down | Yes |
MC-LAG L3 keepalive link down | test_mclag_layer3_keepalive_link_down | Yes |
MCLAG-L3 Active Reboot | test_mclag_layer3_active_reboot | Yes |
MCLAG-L3 Standby Reboot | test_mclag_layer3_standby_reboot | Yes |
Symmetric IRB with EVPN eBGP-RIF with s1_as_num=3000, s2_as_num=4000 | test_bgp4_evpn_vxlan_009 | Yes |
Symmetric IRB with EVPN iBGP-RIF with s1_as_num=5000, s2_as_num=5000 | test_bgp4_evpn_vxlan_010 | Yes |
Symmetric IRB with EVPN iBGP-SVI with s1_as_num=5000, s2_as_num=5000 | test_bgp4_evpn_vxlan_012 | Yes |
Symmetric IRB with EVPN eBGP-RPCH with s1_as_num=3000, s2_as_num=4000 | test_bgp4_evpn_vxlan_013 | Yes |
Symmetric IRB with EVPN iBGP-RPCH with s1_as_num=5000, s2_as_num=5000 | test_bgp4_evpn_vxlan_014 | Yes |
Apply QoS with DSCP-0 to TC-0 mapping | test_dscp_0_tc_0 | Yes |
Apply QoS with DSCP-8 to TC-1 mapping | test_dscp_8_tc_1 | Yes |
Apply QoS with DSCP-16 to TC-2 mapping | test_dscp_16_tc_2 | Yes |
Apply QoS with DSCP-24 to TC-3 mapping | test_dscp_24_tc_3 | Yes |
Apply QoS with DSCP-32 to TC-4 mapping | test_dscp_32_tc_4 | Yes |
Apply QoS with DSCP-40 to TC-5 mapping | test_dscp_40_tc_5 | Yes |
Apply QoS with DSCP-48 to TC-6 mapping | test_dscp_48_tc_6 | Yes |
Apply QoS with DSCP-56 to TC-7 mapping | test_dscp_56_tc_7 | Yes |
Validate whether the DUT applies QoS using DOT1P-0 to TC-0 mapping | test_dot1p0_tc0 | Yes |
Validate whether the DUT applies QoS using DOT1P-1 to TC-1 mapping | test_dot1p1_tc1 | Yes |
Validate whether the DUT applies QoS using DOT1P-2 to TC-2 mapping | test_dot1p2_tc2 | Yes |
Validate whether the DUT applies QoS using DOT1P-3 to TC-3 mapping | test_dot1p3_tc3 | Yes |
Validate whether the DUT applies QoS using DOT1P-4 to TC-4 mapping | test_dot1p4_tc4 | Yes |
Validate whether the DUT applies QoS using DOT1P-5 to TC-5 mapping | test_dot1p5_tc5 | Yes |
Validate whether the DUT applies QoS using DOT1P-6 to TC-6 mapping | test_dot1p6_tc6 | Yes |
Validate whether the DUT applies QoS using DOT1P-7 to TC-7 mapping | test_dot1p7_tc7 | Yes |
Verify basic EVPN VxLAN functionality | test_evpn_vxlan_feature | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN | test_evpn_vxlan_l3_traffic | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 16K routes | test_bgp_evpn_vxlan_l3_intra_vlan_scale_16k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 32K routes | test_bgp_evpn_vxlan_l3_intra_vlan_scale_32k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 64K routes | test_bgp_evpn_vxlan_l3_intra_vlan_scale_64k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 128K routes | test_bgp_evpn_vxlan_l3_intra_vlan_scale_128k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN | test_evpn_vxlan_l3_inter_vlan | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN for 16K routes | test_bgp_evpn_vxlan_inter_vlan_scale_16k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN for 32K routes | test_bgp_evpn_vxlan_inter_vlan_scale_32k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN for 64K routes | test_bgp_evpn_vxlan_inter_vlan_scale_64k | Yes |
Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN for 128K routes | test_bgp_evpn_vxlan_inter_vlan_scale_128k | Yes |
Scale up to 128 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_128 | Yes |
Scale up to 256 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_256 | Yes |
Scale up to 512 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_512 | Yes |
Scale up to max supported ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_max_supported | Yes |
Scale up to 128 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_128 | Yes |
Scale up to 256 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_256 | Yes |
Scale up to 512 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_512 | Yes |
Scale up to max supported ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_max_supported | Yes |
Verify IPv4 L3 Host Routes Scale for 2K | test_v4_host_routes_scale_2k | Yes |
Verify IPv4 L3 Host Routes Scale for 4K | test_v4_host_routes_scale_4k | Yes |
Verify IPv4 L3 Host Routes Scale for max | test_v4_host_routes_scale_max_supported | Yes |
Verify IPv6 L3 Host Routes Scale for 2K | test_v6_host_routes_scale_2k | Yes |
IPv6 host routes scale to 2K routes (iBGP) | test_v6_host_routes_scale_ibgp_2k | Yes |
Verify IPv6 L3 Host Routes Scale for 4k | test_v6_host_routes_scale_4k | Yes |
IPv6 host routes scale to 4K routes (iBGP) | test_v6_host_routes_scale_ibgp_4k | Yes |
Verify IPv6 L3 Host Routes Scale for MAX support | test_v6_host_routes_scale_max_supported | Yes |
IPv6 host routes scale to 32K routes (iBGP) | test_v6_host_routes_scale_ibgp_32k | Yes |
IPv6 host routes scale to 32K routes | test_v6_host_routes_scale_32k | Yes |
Verify IPv4 L3 prefix Routes scale for 2k | test_v4_prefix_routes_scale_2k | Yes |
Verify IPv4 L3 prefix Routes scale for 4k | test_v4_prefix_routes_scale_4k | Yes |
Verify IPv4 L3 prefix Routes scale for MAX SUPPORT | test_v4_prefix_routes_scale_max_supported | Yes |
Verify IPv6 L3 prefix routes scale | test_qual_scale_005 | Yes |
Verify IPv4 Nexthops scale up to 512 | test_v4_nexthops_scale_512 | Yes |
Verify IPv4 Nexthops scale up to 1024 | test_v4_nexthops_scale_1024 | Yes |
Verify IPv4 Nexthops scale max supported | test_v4_nexthops_scale_max_supported | Yes |
Verify IPv6 Nexthops scale | test_qual_scale_007 | Yes |
Scalability to 16 ECMP paths and 32K routes | test_16ecmp_32k_routes | Yes |
Scalability to 32 ECMP paths and 64K routes | test_32ecmp_64k_routes | Yes |
Scalability to 64 ECMP paths and 128K routes | test_64ecmp_128k_routes | Yes |
Scalability to "number of leaf1 ports" ECMP paths and 1K routes per path | test_ecmp_scale_routes | Yes |
BGP dual stack scale for 4K prefix routes | test_bgp_dual_stack_scale_4k | Yes |
BGP dual stack scale for 16K prefix routes | test_bgp_dual_stack_scale_16k | Yes |
BGP dual stack scale for 32K prefix routes | test_bgp_dual_stack_scale_32k | Yes |
BGP dual stack scale for 64K prefix routes | test_bgp_dual_stack_scale_64k | Yes |
BGP dual stack scale for 128K prefix routes | test_bgp_dual_stack_scale_128k | Yes |
BGP dual stack scale with 32K host routes with traffic | test_bgp_host_route_dual_stack_32k | Yes |
BGP dual stack scale with 32K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_32k | Yes |
BGP dual stack scale for 32K prefix routes (with 16 bit subnet mask for IPv4 and 64bit subnet mask for IPv6 routes) with traffic | test_bgp_route_dual_stack_32k | Yes |
BGP graceful restart with dual stack scale to 32K prefix routes | test_bgp_graceful_restart_preserve_fw_state_32k | Yes |
BGP graceful restart with dual stack scale to 32K prefix routes and docker restart | test_bgp_graceful_docker_restart_32k | Yes |
Enable/disable BGP graceful restart with dual stack scale to 32K prefix routes | test_bgp_graceful_feature_disable_enable_32k | Yes |
BGP graceful restart with dual stack scale to 32K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_32k | Yes |
BGP dual stack scale with 64K host routes with traffic | test_bgp_host_route_dual_stack_64k | Yes |
BGP dual stack scale with 128K host routes with traffic | test_bgp_host_route_dual_stack_128k | Yes |
BGP dual stack scale with 64K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_64k | Yes |
BGP dual stack scale with 128K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_128k | Yes |
BGP graceful restart with dual stack scale to 64K prefix routes | test_bgp_graceful_restart_preserve_fw_state_64k | Yes |
BGP graceful restart with dual stack scale to 128K prefix routes | test_bgp_graceful_restart_preserve_fw_state_128k | Yes |
BGP graceful restart with dual stack scale to 64K prefix routes and docker restart | test_bgp_graceful_docker_restart_64k | Yes |
BGP graceful restart with dual stack scale to 128K prefix routes and docker restart | test_bgp_graceful_docker_restart_128k | Yes |
BGP graceful restart with dual stack scale to 64K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_64k | Yes |
BGP graceful restart with dual stack scale to 128K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_128k | Yes |
Scalability to 4K IPv6 prefix routes with traffic. | test_ipv6_l3_prefix_routes_scale_4k | Yes |
Scalability to 64K IPv6 prefix routes with traffic | test_ipv6_l3_prefix_routes_scale_64k | Yes |
Scalability to 128K IPv6 prefix routes with traffic | test_ipv6_l3_prefix_routes_scale_128k | Yes |
128 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_128 | yes |
256 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_256 | yes |
512 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_512 | yes |
(MAX_V4_ACL) IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_max_supported | yes |
Scaling BGP dynamic host routes with traffic. | test_bgp_dynamic_route_scale | No |
BGP withdraw prefixes(5 times) with dual stack scale 160k(IPv4+IPv6) prefix routes. | test_bgp_dual_stack_scale_intf_flap_160k | yes |
BGP session down and docker restart with dual stack scale to 160k(IPv4+IP6) prefix routes | test_bgp_dual_stack_scale_clear_bgp_160k | yes |
BGP withdraw prefixes(5 times) with dual stack scale 160k(IPv4+IPv6) prefix routes. | test_bgp_dual_stack_convergence_withdrawal_160k | yes |
Default Route with BGP Advertised Routes | test_bgp_default_route | No |
Maximum max-MED Value on Startup | test_bgp_convergence_with_max_med_001 | No |
Stress device CPU and memory with 16K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_16k | Yes |
Stress device CPU and memory with 32K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_32k | Yes |
Stress device CPU and memory with 64K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_64k | Yes |
Stress device CPU and memory with 128K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_128k | Yes |
Stress device CPU and memory with 32K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_32k | Yes |
Stress device CPU and memory with 64K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_64k | Yes |
Stress device CPU and memory with 128K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_128k | Yes |
Stress device CPU and memory with 32K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_32k | Yes |
Stress device CPU and memory with 64K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_64k | Yes |
Stress device CPU and memory with 128K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_128k | Yes |
1 | Verify Chaos Base configuration | test_deploy_verify_base_config | Yes |
2 | Verify Route Scaling time on syncd restart without traffic for 2K routes | test_chaos_route_scaling_time_syncd_2k | Yes |
3 | Verify Route Scaling time on syncd restart without traffic for 8K routes | test_chaos_route_scaling_time_syncd_8k | Yes |
4 | Verify Route Scaling time on syncd restart without traffic for 16K routes | test_chaos_route_scaling_time_syncd_16k | Yes |
5 | Verify Route Scaling time on syncd restart without traffic | test_chaos_route_scaling_time_syncd_32k | Yes |
6 | Verify Route Scaling time on syncd restart without traffic | test_chaos_route_scaling_time_syncd_64k | Yes |
7 | Verify Route Scaling time on FRR restart without traffic for 2K routes | test_chaos_route_scaling_time_frr_2k | Yes |
8 | Verify Route Scaling time on FRR restart without traffic for 8K routes | test_chaos_route_scaling_time_frr_8k | Yes |
9 | Verify Route Scaling time on FRR restart without traffic | test_chaos_route_scaling_time_frr_16k | Yes |
10 | Verify Route Scaling time on FRR restart without traffic | test_chaos_route_scaling_time_frr_32k | Yes |
11 | Verify Route Scaling time on FRR restart without traffic | test_chaos_route_scaling_time_frr_64k | Yes |
12 | Verify Route Scaling time on orchagent restart without traffic for 2k routes | test_chaos_route_scaling_time_orchagent_2k | Yes |
13 | Verify Route Scaling time on orchagent restart without traffic for 8k routes | test_chaos_route_scaling_time_orchagent_8k | Yes |
14 | Verify Route Scaling time on orchagent restart without traffic for 16k routes | test_chaos_route_scaling_time_orchagent_16k | Yes |
15 | Verify Route Scaling time on orchagent restart without traffic for 32k routes | test_chaos_route_scaling_time_orchagent_32k | Yes |
16 | Verify Route Scaling time on orchagent restart without traffic for 64k routes | test_chaos_route_scaling_time_orchagent_64k | Yes |
17 | Verify Route Scaling time on syncd restart with traffic for 2K routes | test_chaos_route_scaling_time_syncd_traffic_2k | Yes |
18 | Verify Route Scaling time on syncd restart with traffic for 8K routes | test_chaos_route_scaling_time_syncd_traffic_8k | Yes |
19 | Verify Route Scaling time on syncd restart with traffic for 16k routes | test_chaos_route_scaling_time_syncd_traffic_16k | Yes |
20 | Verify Route Scaling time on syncd restart with traffic for 32k routes | test_chaos_route_scaling_time_syncd_traffic_32k | Yes |
21 | Verify Route Scaling time on syncd restart with traffic for 64k routes | test_chaos_route_scaling_time_syncd_traffic_64k | Yes |
22 | Verify Route Scaling time on FRR restart with traffic for 2k routes | test_chaos_route_scaling_time_frr_traffic_2k | Yes |
23 | Verify Route Scaling time on FRR restart with traffic for 8k routes | test_chaos_route_scaling_time_frr_traffic_8k | Yes |
24 | Verify Route Scaling time on FRR restart with traffic for 16k routes | test_chaos_route_scaling_time_frr_traffic_16k | Yes |
25 | Verify Route Scaling time on FRR restart with traffic for 32k routes | test_chaos_route_scaling_time_frr_traffic_32 | Yes |
26 | Verify Route Scaling time on FRR restart with traffic for 64k routes | test_chaos_route_scaling_time_frr_traffic_64k | Yes |
27 | Verify Route Scaling time on orchagent restart with traffic with 2k routes | test_chaos_route_scaling_time_orchagent_traffic_2k | Yes |
28 | Verify Route Scaling time on orchagent restart with traffic with 8k routes | test_chaos_route_scaling_time_orchagent_traffic_8k | Yes |
29 | Verify Route Scaling time on orchagent restart with traffic with 16k routes | test_chaos_route_scaling_time_orchagent_traffic_16k | Yes |
30 | Verify Route Scaling time on orchagent restart with traffic with 32k routes | test_chaos_route_scaling_time_orchagent_traffic_32k | Yes |
31 | Verify Route Scaling time on orchagent restart with traffic | test_chaos_route_scaling_time_orchagent_traffic_64k | Yes |
32 | Verify Spine failure and re-insertion impact with 2K routes. | test_deploy_spine_failure_insertion_impact_2k | Yes |
33 | Verify Spine failure and re-insertion impact with 8K routes. | test_deploy_spine_failure_insertion_impact_8k | Yes |
34 | Verify Spine failure and re-insertion impact with 16K routes. | test_deploy_spine_failure_insertion_impact_16k | Yes |
35 | Verify Spine failure and re-insertion impact with 32K routes. | test_deploy_spine_failure_insertion_impact_32k | Yes |
36 | Verify Spine failure and re-insertion impact with 64K routes. | test_deploy_spine_failure_insertion_impact_64k | Yes |
37 | Verify Link failure: LEAF-SPINE with 2K routes | test_deploy_spine_failure_link_impact_2k | Yes |
38 | Verify Link failure: LEAF-SPINE with 8K routes | test_deploy_spine_failure_link_impact_8k | Yes |
39 | Verify Link failure: LEAF-SPINE with 16K routes | test_deploy_spine_failure_link_impact_16k | Yes |
40 | Verify Link failure: LEAF-SPINE with 32K Routes | test_deploy_spine_failure_link_impact_32k | Yes |
41 | Verify Link failure: LEAF-SPINE with 64K routes | test_deploy_spine_failure_link_impact_64k | Yes |
42 | Verify stability with Continuous DUT reboot | test_chaos_continuous_reboot | Yes |
43 | Verify Continuous Route push and withdrawal with traffic for 2K routes | test_deploy_longevity_2k | Yes |
44 | Verify Continuous Route push and withdrawal with traffic for 8K routes | test_deploy_longevity_8k | Yes |
45 | Verify Continuous Route push and withdrawal with traffic for 16K routes | test_deploy_longevity_16k | Yes |
46 | Verify Continuous Route push and withdrawal with traffic for 32K routes | test_deploy_longevity_32k | Yes |
47 | Verify Continuous Route push and withdrawal with traffic for 64K routes | test_deploy_longevity_64k | Yes |
48 | Verify Warm Reboot - Device configuration impact with 2K routes | test_deploy_spine_warmreboot_impact_2k | Yes |
49 | Verify Warm Reboot - Device configuration impact with 8K routes | test_deploy_spine_warmreboot_impact_8k | Yes |
50 | Verify Warm Reboot - Device configuration impact with 16K routes | test_deploy_spine_warmreboot_impact_16k | Yes |
51 | Verify Warm Reboot - Device configuration impact with 32K routes | test_deploy_spine_warmreboot_impact_32k | Yes |
52 | Verify Warm Reboot - Device configuration impact with 64K routes | test_deploy_spine_warmreboot_impact_64k | Yes |
53 | Verify Stability with high Kernel CPU and observe its impact on the docker containers | test_chaos_high_kernel_cpu_utilization | Yes |
54 | Verify Routed PCH with 2k routes | test_deploy_impact_lacp_unconfig_2k | Yes |
55 | Verify Routed PCH with 8k routes | test_deploy_impact_lacp_unconfig_8k | Yes |
56 | Verify Routed PCH with 16k routes | test_deploy_impact_lacp_unconfig_16k | Yes |
57 | Verify Routed PCH with 32k routes | test_deploy_impact_lacp_unconfig_32k | Yes |
58 | Verify Routed PCH with 64k routes | test_deploy_impact_lacp_unconfig_64k | Yes |
59 | Fast reboot on spine with traffic and 2k host routes | test_deploy_spine_fastreboot_impact_2k | Yes |
60 | Fast reboot on leaf1 with traffic and 2k host routes | test_deploy_leaf1_fastreboot_impact_2k | Yes |
61 | Fast reboot on leaf2 with traffic and 2k host routes | test_deploy_leaf2_fastreboot_impact_2k | Yes |
62 | Fast reboot on spine with traffic and 8k host routes | test_deploy_spine_fastreboot_impact_8k | Yes |
63 | Fast reboot on leaf1 with traffic and 8k host routes | test_deploy_leaf1_fastreboot_impact_8k | Yes |
64 | Fast reboot on leaf2 with traffic and 8k host routes | test_deploy_leaf2_fastreboot_impact_8k | Yes |
Verify Platform Information | test_platform_001 | Yes |
Verify Platform Health Status | test_platform_002 | Yes |
Verify Platform CPU and Process Status | test_platform_003 | Yes |
Verify Platform PSU | test_platform_004 | Yes |
Verify Platform Tech-Support | test_platform_005 | Yes |
Verify Port Auto-negotiation | test_autoneg_001 | Yes |
Verify physical port operational down/up | test_ports_002 | No |
Verify port configuration across reboot | test_ports_005 | No |
Verify Port Information for status, description and transceiver information | test_ports_006 | Yes |
Verify Port Counters for framesize 128 | test_ports_009_14 | Yes |
Verify Port transceiver information | test_ports_020 | Yes |
Verify FEC Configuration for RS and None | test_ports_fec_001 | Yes |
Dynamic port breakout with supported breakout modes between leaf1 and leaf2 | test_port_breakout_001 | Yes |
Verify Secondary subnet under a VLAN upto 10 subnets | test_10_secondary_subnet_under_vlan | Yes |
Verify Secondary subnet under a VLAN upto 20 subnets | test_20_secondary_subnet_under_vlan | Yes |
Verify Secondary subnet under a VLAN upto MAX subnets | test_max_secondary_subnet_under_vlan | Yes |
Verify syslog servers scale | test_qual_scale_011 | No |
Verify NTP server scale | test_qual_scale_012 | No |