FTAS R3.4

Test Bed Configuration

A testbed parameter file is a Python script that defines the testbed parameters as variables. Testbed files are available in the ~/testbeds folder.

Full Mesh 4 DUTs Topology

All test scripts except Chaos can be run with the full mesh 4 DUTs topology.

Use the following sample script and steps to create your own testbed file.

  • Update the details in the sample testbed file with your DUTs' information: DUT-to-DUT connections, management IP, login credentials, link speed, etc.

  • The "name" parameter is important. Provide a string that identifies the respective device; this name is displayed in the logs for easy identification of devices.

  • Refer to the topology diagram to find the link variables and update their values.

  • All testbed files can be found in the folder "~/testbeds/"

Testbed File Variables

There are two variables in the testbed file that control the cleanup after each test run: CLEANUP_BY_REBOOT and CLEANUP_BY_CFG_RELOAD.

  • CLEANUP_BY_REBOOT = True, the script restores the switch's configuration from the /etc/sonic/clean_config.json file and then reboots the switch. This consumes additional execution time, but it ensures that the DUTs (Devices Under Test) are consistently left in a clean, known state for subsequent test scripts.

  • CLEANUP_BY_CFG_RELOAD = True, the script restores the switch's configuration from the /etc/sonic/clean_config.json file and then issues the sudo config load command to load the clean config file. This method takes less time to get a clean configuration on the switches, but it may occasionally not work correctly.
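For example, a minimal snippet of a testbed file (values are illustrative; normally only one of the two flags is set to True) could look like this:

# Cleanup behaviour after each test run
CLEANUP_BY_REBOOT = True       # restore /etc/sonic/clean_config.json, then reboot the DUTs (slower, most consistent)
CLEANUP_BY_CFG_RELOAD = False  # restore /etc/sonic/clean_config.json, then run "sudo config load" (faster, may occasionally fail)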

Apart from these, the following variables are specified in the testbed file:

  • CFG_RELOAD_BY_REBOOT = True, the scripts reboot the device as a workaround in cases where the config reload command fails on the DUT for some reason.

  • REBOOT_WAIT_TIME = 0, maximum wait time (in seconds) for the device to reboot

  • NTP_SERVER = <FTAS VM IP>, the FTAS VM serves as the NTP server.

Parameters specific to resilience testcases

  • TRAFFIC_THRESHOLD = 1, acceptable packet/frame loss percentage in traffic testcases

  • CPU_THRESHOLD = 5, acceptable change in CPU utilization percentage

  • MEM_THRESHOLD = 5, acceptable change in memory utilization percentage
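In a testbed file these thresholds appear as plain variables, for example:

# Thresholds used by the resilience test cases (values shown are the defaults described above)
TRAFFIC_THRESHOLD = 1  # acceptable packet/frame loss, in percent
CPU_THRESHOLD = 5      # acceptable change in CPU utilization, in percent
MEM_THRESHOLD = 5      # acceptable change in memory utilization, in percent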

Variables with default values

  • MAX_IPV4_HOST_ROUTES = 1000, The maximum number of IPv4 host routes supported. Used by the testcase test_v4_host_routes_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_IPV6_HOST_ROUTES = 1000, The maximum number of IPv6 host routes supported. Used by the testcase test_v6_host_routes_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_IPV4_PREFIX_ROUTES = 1000, The maximum number of IPv4 prefix routes supported. Used by the testcase test_v4_prefix_routes_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_IPV4_NEXTHOPS = 256, The maximum number of IPv4 next-hops supported. Used by the testcase test_v4_nexthops_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_V4_ACL = 64, The maximum number of supported IPv4 ACL rules. Used by the testcase test_v4_acl_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_V6_ACL = 64, The maximum number of supported IPv6 ACL rules. Used by the testcase test_qual_v6_acl_scale_max_supported in the test script scalability/taas_qual_scale.py.

  • MAX_SECONDARY_SUBNET = 25, The maximum number of supported secondary subnets under a VLAN. Used by the testcase test_max_secondary_subnet_under_vlan in the test script scalability/taas_qual_scale.py.

  • STRESS_AVAIL_CORES = 2, Number of CPU cores reserved for system use; all other CPU cores will undergo stress testing.

  • STRESS_MEM_UTIL = 85, Targeted percentage of total system memory to allocate for stress testing.

  • INTF_UP_WAIT_TIME = 30, Timeout for the interface to be 'Operationally UP'.

  • CLI_TIMEOUT = 0, Assign an integer value to this variable if command execution on the device is slow or takes longer than 30 seconds to respond. If the value is not explicitly set, the default is 0, implying a 30-second wait for command output.

Configuring Ixia Variables

For UHD Chassis:

  • Specify the port connected to the DUT like this: "s1_p1": "8", where port number 8 is connected to Spine01_Port01 for Ixia traffic.

  • For example, Ixia port 8 can be configured like this: "localuhd/8", where localuhd refers to the Ixia chassis and 8 is the UHD port number.

For Novus Chassis:

  • Specify the port connected to the DUT like this: "s1_p1": "1;8", where port number 8 on Ixia card 1 is connected to Spine01_Port01 for Ixia traffic.

  • For example, Ixia port 1;8 can be configured like this: "<chassis_ip>;<card_no>;<Port_no>", where <chassis_ip> refers to the Ixia chassis IP and 1;8 is the port with the card number.

Note: The FTAS VM has been extensively tested with UHD version 1.3.3003.118. It is recommended to use the same UHD version for smooth operation.

Warning: The Ixia port format varies depending on the Ixia chassis type.

Control Ixia traffic rate globally

For testbeds where the links between DUTs have less bandwidth than the Ixia ports, traffic tests may fail due to traffic drops on the lowest-bandwidth links (for example, DUTs may be connected with 1G ports while the Ixia links are 100G).

To avoid this situation, a global parameter for the Ixia traffic rate, "global_traffic_rate" (in percent), can be used to enforce the traffic rate in all tests.

When the global_traffic_rate parameter is defined as a sub-parameter in "IXIA_Ports", all traffic streams in all test scripts override their own rate value with the configured global_traffic_rate value.

When global_traffic_rate is not defined, all traffic streams in all test scripts use their own static traffic rate values.

Here "global_traffic_rate": 25 refers to 25% of the speed of all Ixia ports.

CLI_TIMEOUT: set an integer value for this variable when command execution is slow or takes more than 30 seconds to respond. The default value in the testbed file is 0.
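A minimal sketch of how these two settings appear in a testbed file (port numbers are placeholders; the complete IXIA_Ports structure is shown in the sample files later on this page):

CLI_TIMEOUT = 0  # 0 implies the default 30-second wait for command output

IXIA_Ports = {"l1_p1": "21", "l2_p1": "22",
              "media": "fiber", "speed": "100G",
              "global_traffic_rate": 25,  # cap all traffic streams at 25% of the port speed
              }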

2 DUTs Topology

If you don't have a 4 DUTs topology and want to run scripts that require only 2 DUTs, you can use the 2dut_topo.py testbed parameter file.

There are only two physical DUTs in this topology, but the scripts may use the Spine and Leaf names interchangeably. So, in the testbed file, define the parameters for all device names, pointing them to the same two physical Spine and Leaf DUTs.

Full Mesh 4 DUTs Topology for Chaos

The testbed parameter file ftas_chaos_topo.py (a sample topology file) is used for Chaos test scripts only.

A new parameter, PCH_CONFIGURATION = False, has been added. This parameter determines whether the tests run on a routed port (if False) or a routed PortChannel (if True) configuration for the interfaces connecting the DUTs.

# Sample TestBed file
LEAF01_Ports = {"s2_p1": "Ethernet32", "s2_p1_speed": 100000,
                "s2_p2": "Ethernet36", "s2_p2_speed": 100000,
                "s1_p1": "Ethernet0", "s1_p1_speed": 100000,
                "s1_p2": "Ethernet4", "s1_p2_speed": 100000,
                "l2_p1": "Ethernet16", "l2_p1_speed": 100000,
                "l2_p2": "Ethernet20", "l2_p2_speed": 100000,
                "ixia_p1": "Ethernet60", "ixia_p1_speed": 100000,
                "ixia_p2": "Ethernet48", "ixia_p2_speed": 100000,
                "port_mtu": 9100
                }
LEAF01 = {"IP": "10.4.4.66", 
          "PROTO": "http", "REST_PORT": 6002, "SSH_PORT": 22,
          "CLI_PROMPTS": params.CLI_PROMPTS,
          "cliErrors": params.CLI_ERROR_REGEXP, "cliWarnings": params.CLI_WARN_REGEXP,
          "ssh_user": "admin",
          "ssh_passwd": "YourPaSsWoRd",
          "Timeout": 30, "ports": LEAF01_Ports, "name": "MLNX-LEAF01",
          "backup_cfg_file": "clean_config.json"
          }

If both CLEANUP_BY_REBOOT and CLEANUP_BY_CFG_RELOAD are set to False, the scripts use the SONiC CLI procedure to un-configure whatever was configured on the switches by the scripts.

  • CFG_RELOAD_BY_REBOOT = False, the scripts save the running configuration and reboot the device wherever a config reload is required.

  • Ideally, the below three parameters should be set to False

    • CLEANUP_BY_REBOOT = False

    • CLEANUP_BY_CFG_RELOAD = False

    • CFG_RELOAD_BY_REBOOT = False

  • CLEANUP_BEFORE_TEST_RUN = False, can be set to True to force a clean-up of all the devices before the test run.

  • SYSLOG_SRVS = {"Servers": ["<FTAS VM IP>", "10.4.5.6"], ...}, list of syslog servers to use. The first list member should be set to the FTAS VM IP; the second IP can be a dummy IP.

  • Log_Folder: "/var/log/sonic_logs", the log folder on the FTAS VM to be used for syslog testing.

  • MAX_V4_ACL, Maximum IPv4 ACL rules supported on the platform

  • MAX_V6_ACL, Maximum IPv6 ACL rules supported on the platform

  • MAX_SECONDARY_SUBNET, Maximum secondary subnets supported for an SVI interface

  • MAX_IPV4_HOST_ROUTES, Maximum IPv4 host routes supported on the platform

  • MAX_IPV6_HOST_ROUTES, Maximum IPv6 host routes supported on the platform

  • MAX_IPV4_PREFIX_ROUTES, Maximum IPv4 prefix routes supported on the platform

  • MAX_IPV6_PREFIX_ROUTES, Maximum IPv6 prefix routes supported on the platform

  • TECHSUPPORT = True, takes a techsupport dump (if True) for the DUTs in case of failures

    • TECHSUPPORT_SINCE = "hour ago", specifies the argument to the show techsupport command while collecting the techsupport dump in case of failures. This is valid only when TECHSUPPORT = True. Valid values are: hour ago (default), yesterday, or any other value supported by the SONiC show techsupport --since option.

    • TECHSUPPORT_TIMEOUT, specifies the worst-case timeout value for techsupport dump generation, in case TECHSUPPORT = True

  • Stress testcases

    • STRESS_AVAIL_CORES = 2, number of CPU cores reserved for system use; all other CPU cores will undergo stress testing. Example: if the total number of cores is 16, then STRESS_AVAIL_CORES = 2 means that 14 cores will be stressed and only two cores remain available during the test case.

    • STRESS_MEM_UTIL = 85, the percentage of memory to be stressed during stress testing. In this case 85% of the memory would be stressed, leaving only 15% of the memory available.

    • SERVER_IP = <FTAS_VM IP>, IP address of the server hosting the stress-ng Docker image. The image is part of the FTAS VM.

  • SERVER_USER_ID = "oper" , User ID for SCP access to the server hosting the stress-ng Docker image

  • SERVER_PASSWORD = "oper@123" , Password for SCP access to securely transfer the stress-ng Docker image


    Figure 8: Links variables for full mesh 4 DUTs
    Figure 9: Link variables for Chaos testbed
    IXIA_Ports = {"s1_p1": "<port_no>", "s1_p2": "<port_no>", "s1_p3": "<port_no>", 
                  "media": "fiber", "speed": "100G",
                  "port_configs": {
                  "<localuhd>/<8>": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
    IXIA_Ports = {"s1_p1": "<card_no>;<Port_no>", "s1_p2": "<card_no>;<Port_no>", "s1_p3": "<card_no>;<Port_no>", 
                  "media": "fiber", "speed": "100G",
                  "port_configs": {
                  "<chassis_ip>;<card_no>;<Port_no>": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
    # Ixia Parameters #
    IXIA_Ports = {"s1_p1": "8", "s1_p2": "9", "s1_p3": "10", 
                  "s2_p1": "11", "s2_p2": "12",
                  "l1_p1": "8", "l1_p2": "9", "l1_p3": "10",
                  "l2_p1": "11", "l2_p2": "12",
                  "media": "fiber", "speed": "100G", "global_traffic_rate": 25,
                  ...
    full_mesh_topo.py
    """
        Description: Testbed information
    """
    from genlibs import params
    from genlibs import const
    
    gParams = params.GLOBAL_PARAMS
    pParams = params.PLATFORM_PARAMS
    # PLS. DON'T CHANGE ANYTHING FROM HERE TO TOP OF THE FILE
    # =================================================
    ALL_DUTS = ["LEAF01", "LEAF02", "SPINE01", "SPINE02"]
    # =================================================
    CLEANUP_BY_REBOOT = False
    CLEANUP_BY_CFG_RELOAD = False
    CFG_RELOAD_BY_REBOOT = False
    CHECK_COMPATIBILITY = False
    CLEANUP_BEFORE_TEST_RUN = False
    
    NET_SERVICES_CONTAINER_NAME = "net_services"
    NTP_SERVER = "10.4.5.245"
    INTF_UP_WAIT_TIME = 30
    REBOOT_WAIT_TIME = 300
    CLI_TIMEOUT = 0
    MAX_V4_ACL = 64
    MAX_INGRESS_V4_ACL = 64
    MAX_V6_ACL = 64
    MAX_SECONDARY_SUBNET = 25
    MAX_IPV4_HOST_ROUTES = 1000
    MAX_IPV6_HOST_ROUTES = 1000
    MAX_IPV4_PREFIX_ROUTES = 1000
    MAX_IPV6_PREFIX_ROUTES = 32000
    MAX_IPV4_ROUTES_PER_NEXTHOP = 256
    MAX_IPV4_NEXTHOPS = 2048
    TECHSUPPORT = True
    TECHSUPPORT_SINCE = "hour ago"
    TECHSUPPORT_TIMEOUT = 300
    STRESS_AVAIL_CORES = 2  # Number of CPU cores reserved for system use; all other cores will undergo stress testing
    STRESS_MEM_UTIL = 85    # Targeted percentage of total system memory to allocate for stress testing
    SERVER_IP = "10.20.0.75" # IP address of the server hosting the stress-ng Docker image
    SERVER_USER_ID = "oper"  # User ID for SCP access to the server hosting the stress-ng Docker image
    SERVER_PASSWORD = "oper@123" # Password for SCP access to securely transfer the stress-ng Docker image
    
    SYSLOG_SRVS = {"Servers": ["10.4.5.245", "10.4.5.6"], "Log_Folder": "/var/log/sonic_logs"}
    TACACS_SRVS = [{"address": "10.4.5.177", "secret_key": "T@c@csSonic123"},
                   {"address": "10.4.5.179", "secret_key": "T@c@csSonic123"}]
    TACACS_USERS = {"admin_user": "tacadmin", "admin_passwd": "sadmin@123", "oper_user": "tacuser",
                    "oper_passwd": "suser@123"}
    
    IXIA_Ports = {"l1_p1": "21", "l1_p2": "24",
                  "l2_p1": "22", "l2_p2": "23",
                  "global_traffic_rate": 80,
                  "media": "fiber", "speed": "100G",
                  "port_configs": {
                      "localuhd/21": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/22": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/23": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/24": {"speed": "100G", "auto_negotiation": False, "rs_fec": True}
                      }
                  }
    
    IXIA = {"IP": "10.4.4.10", "username": "aviz", "password": "aviz@123", "ports": IXIA_Ports}
    
    LOGSRV1 = {"IP": "10.1.1.11", "SSH_PORT": 22, "ssh_user": "aviz", "ssh_passwd": "IxiaAviz2020", "Timeout": 30,
               "name": "Syslog1", "CLI_PROMPTS": params.LINUX_PROMPTS}
    
    # Linux Server to host services likes: NTP, Syslog, Tac_plus, etc. avtest user is in sudo group and no password
    TESTSRV1 = {"IP": "10.109.9.112", "SSH_PORT": 22, "ssh_user": "avtest", "ssh_passwd": "avtest@123", "Timeout": 30,
                "name": "TestSrv1", "CLI_PROMPTS": params.LINUX_PROMPTS}
    
    SPINE01_Ports = {"l1_p1": "Ethernet0", "l1_p1_speed": 100000,
                     "l1_p2": "Ethernet4", "l1_p2_speed": 100000,
                     "l2_p1": "Ethernet24", "l2_p1_speed": 100000,
                     "l2_p2": "Ethernet28", "l2_p2_speed": 100000,
                     "s2_p1": "Ethernet32", "s2_p1_speed": 100000,
                     "s2_p2": "Ethernet36", "s2_p2_speed": 100000,
                     "port_mtu": 9100
                     }
    SPINE01 = {"IP": "10.4.4.65",
               "PROTO": "http", "REST_PORT": 9001, "SSH_PORT": 22,
               "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP, "cliWarnings": params.CLI_WARN_REGEXP,
               "ssh_user": "admin",
               "ssh_passwd": "YourPaSsWoRd",
               "Timeout": 30, "ports": SPINE01_Ports, "name": "MLNX-SPINE01",
               "backup_cfg_file": "clean_config.json"
               }
    
    SPINE02_Ports = {"l1_p1": "Ethernet16", "l1_p1_speed": 100000,
                     "l1_p2": "Ethernet20", "l1_p2_speed": 100000,
                     "l2_p1": "Ethernet0", "l2_p1_speed": 100000,
                     "l2_p2": "Ethernet4", "l2_p2_speed": 100000,
                     "s1_p1": "Ethernet32", "s1_p1_speed": 100000,
                     "s1_p2": "Ethernet36", "s1_p2_speed": 100000,
                     "port_mtu": 9100
                     }
    SPINE02 = {"IP": "10.4.4.67",
               "PROTO": "http", "REST_PORT": 6018, "SSH_PORT": 22,
               "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP, "cliWarnings": params.CLI_WARN_REGEXP,
               "ssh_user": "admin",
               "ssh_passwd": "YourPaSsWoRd",
               "Timeout": 30, "ports": SPINE02_Ports, "name": "MLNX-SPINE02",
               "backup_cfg_file": "clean_config.json"
               }
    
    LEAF01_Ports = {"s2_p1": "Ethernet16", "s2_p1_speed": 100000,
                    "s2_p2": "Ethernet20", "s2_p2_speed": 100000,
                    "s1_p1": "Ethernet0", "s1_p1_speed": 100000,
                    "s1_p2": "Ethernet4", "s1_p2_speed": 100000,
                    "l2_p1": "Ethernet32", "l2_p1_speed": 100000,
                    "l2_p2": "Ethernet36", "l2_p2_speed": 100000,
                    # provide the breakout modes for the supp ports in the below format
                    # "l2_p1": "Ethernet72", "l1_p3_speed": 100000,
                    # "l2_p1_breakout": "1x100G[40G], 2x50G, 4x25G, 4x10G",
                    "ixia_p1": "Ethernet60", "ixia_p1_speed": 100000,
                    "ixia_p2": "Ethernet48", "ixia_p2_speed": 100000,
                    "port_mtu": 9100
                    }
    LEAF01 = {"IP": "10.4.4.66",
              "PROTO": "http", "REST_PORT": 6002, "SSH_PORT": 22,
              "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP, "cliWarnings": params.CLI_WARN_REGEXP,
              "ssh_user": "admin",
              "ssh_passwd": "YourPaSsWoRd",
              "Timeout": 30, "ports": LEAF01_Ports, "name": "MLNX-LEAF01",
              "backup_cfg_file": "clean_config.json"
              }
    
    LEAF02_Ports = {"s2_p1": "Ethernet0", "s2_p1_speed": 100000,
                    "s2_p2": "Ethernet4", "s2_p2_speed": 100000,
                    "s1_p1": "Ethernet24", "s1_p1_speed": 100000,
                    "s1_p2": "Ethernet28", "s1_p2_speed": 100000,
                    "l1_p1": "Ethernet32", "l1_p1_speed": 100000,
                    "l1_p2": "Ethernet36", "l1_p2_speed": 100000,
                    # provide the breakout modes for the supp ports in the below format
                    # "l1_p3": "Ethernet72", "l1_p3_speed": 100000,
                    # "l1_p3_breakout": "1x100G[40G], 2x50G, 4x25G, 4x10G",
                    "ixia_p1": "Ethernet60", "ixia_p1_speed": 100000,
                    "ixia_p2": "Ethernet48", "ixia_p2_speed": 100000,
                    "port_mtu": 9100
                    }
    LEAF02 = {"IP": "10.4.4.68",
              "PROTO": "http", "REST_PORT": 6002, "SSH_PORT": 22,
              "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP, "cliWarnings": params.CLI_WARN_REGEXP,
              "ssh_user": "admin",
              "ssh_passwd": "YourPaSsWoRd",
              "Timeout": 30, "ports": LEAF02_Ports, "name": "MLNX-LEAF02",
              "backup_cfg_file": "clean_config.json"
              }
    2dut_topo.py
    """
        Description: Testbed information
    """
    from genlibs import params
    from genlibs import const
    
    gParams = params.GLOBAL_PARAMS
    pParams = params.PLATFORM_PARAMS
    
    # =================================================
    ALL_DUTS = ['SPINE01', 'SPINE02']
    # =================================================
    CLEANUP_BY_REBOOT = False
    CLEANUP_BY_CFG_RELOAD = False 
    CFG_RELOAD_BY_REBOOT = False
    CHECK_COMPATIBILITY = False
    CLEANUP_BEFORE_TEST_RUN = False
    
    NTP_SERVER = "10.4.5.4"
    
    INTF_UP_WAIT_TIME = 30
    REBOOT_WAIT_TIME = 120
    CLI_TIMEOUT = 0
    
    MAX_V4_ACL = 64
    MAX_V6_ACL = 64
    MAX_SECONDARY_SUBNET = 25
    MAX_IPV4_HOST_ROUTES = 1000
    MAX_IPV6_HOST_ROUTES = 1000
    MAX_IPV4_PREFIX_ROUTES = 1000
    MAX_IPV4_NEXTHOPS = 256
    
    ZTP_PARAMS = {"ZTP_HTTP_SRV_ADDR": "10.4.5.177", "ZTP_HTTP_SRV_PORT": "8090", "ZTP_FOLDER": "/home/oper/reports/ztp",
                  "DHCP_CONTAINER": "ztp_dhcp"}
    
    NET_SERVICES_CONTAINER_NAME = "net_services"
    
    SYSLOG_SRVS = {"Servers": ["<SYSLOG server IP1>", "<SYSLOG server IP2>"], 'Log_Folder': "/var/log/sonic_logs"}
    TACACS_SRVS = [{"address": "<IP address1>", "secret_key": "T@c@csSonic123"},
                   {"address": "<IP address2>", "secret_key": "T@c@csSonic123"}]
    TACACS_USERS = {"admin_user": "tacadmin", "admin_passwd": "sadmin@123",
                    "oper_user": "tacuser", "oper_passwd": "suser@123"}
    
    # Ixia Parameters #
    IXIA_Ports = {"s1_p1": "8", "s1_p2": "9", "s1_p3": "10",
                  "s2_p1": "11", "s2_p2": "12",
                  "l1_p1": "8", "l1_p2": "9", "l1_p3": "10",
                  "l2_p1": "11", "l2_p2": "12",
                  "media": "fiber", "speed": "100G",
                  "port_configs": {
                      "localuhd/8": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/9": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/10": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/11": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                      "localuhd/12": {"speed": "100G", "auto_negotiation": False, "rs_fec": True},
                  }}
    
    IXIA = {"IP": "<Ixia IP>", "username": "<ix username>", "password": "<Ixia passwd>", "ports": IXIA_Ports}
    
    # DUTs Parameters #
    SPINE01_Ports = {"s2_p1": "Ethernet0", "s2_p1_speed": 100000,
                     "s2_p2": "Ethernet8", "s2_p2_speed": 100000,
                     "ixia_p1": "Ethernet232",
                     "ixia_p2": "Ethernet240",
                     "ixia_p3": "Ethernet248"}
    
    SPINE01 = {"IP": "<SPINE1 IP>", 'SSH_PORT': 22, "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP,
               "cliWarnings": params.CLI_WARN_REGEXP,
               "climode": const.CliModes.SONiC_CLI, "ports": SPINE01_Ports,
               "ssh_user": "admin", "ssh_passwd": "Innovium123",
               "Timeout": 30, "name": "Spine1",
               "backup_cfg_file": "clean_config.json"}
    
    SPINE02_Ports = {"s1_p1": "Ethernet0", "s1_p1_speed": 100000,
                     "s1_p2": "Ethernet8", "s1_p2_speed": 100000,
                     "ixia_p1": "Ethernet240",
                     "ixia_p2": "Ethernet248"}
    SPINE02 = {"IP": "<SPINE2 IP>", 'SSH_PORT': 22, "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP,
               "cliWarnings": params.CLI_WARN_REGEXP,
               "climode": const.CliModes.SONiC_CLI, "ports": SPINE02_Ports,
               "ssh_user": "admin", "ssh_passwd": "Innovium123",
               "Timeout": 30, "name": "Spine2",
               "backup_cfg_file": "clean_config.json"}
    
    LEAF01_Ports = {"l2_p1": "Ethernet32", "l2_p1_speed": 100000,
                    "l2_p2": "Ethernet40", "l2_p2_speed": 100000,
                    "ixia_p1": "Ethernet240",
                    "ixia_p2": "Ethernet248"}
    
    LEAF01 = {"IP": "<SPINE1 IP>", 'SSH_PORT': 22, "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP,
              "cliWarnings": params.CLI_WARN_REGEXP,
              "climode": const.CliModes.SONiC_CLI, "ports": LEAF01_Ports,
              "ssh_user": "admin", "ssh_passwd": "Innovium123",
              "Timeout": 30, "name": "Leaf1",
              "backup_cfg_file": "clean_config.json"}
    
    LEAF02_Ports = {"l1_p1": "Ethernet32", "l1_p1_speed": 100000,
                    "l1_p2": "Ethernet40", "l1_p2_speed": 100000,
                    "ixia_p1": "Ethernet240",
                    "ixia_p2": "Ethernet248"}
    
    LEAF02 = {"IP": "<SPINE2 IP>", 'SSH_PORT': 22,
              "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP,
              "cliWarnings": params.CLI_WARN_REGEXP,
              "climode": const.CliModes.SONiC_CLI, "ports": LEAF02_Ports,
              "ssh_user": "admin", "ssh_passwd": "Innovium123",
              "Timeout": 30, "name": "Leaf2",
              "backup_cfg_file": "clean_config.json"}
    ftas_chaos_topo.py
    """
        Description: Testbed information
    """
    from genlibs import params
    from genlibs import const
    
    gParams = params.GLOBAL_PARAMS
    pParams = params.PLATFORM_PARAMS
    
    # =================================================
    ALL_DUTS = ['LEAF01', 'LEAF02', 'SPINE01', 'SPINE02']
    # =================================================
    result_dir = "/home/oper/reports"
    
    CLEANUP_BY_REBOOT = False
    CLEANUP_BY_CFG_RELOAD = False
    
    West_Ixia_Params = {"ports": [21], "ixmedia": "fiber", "ixspeed": "100G", "peer": "leaf1"}
    
    East_Ixia_Params = {"ports": [22], "ixmedia": "fiber", "ixspeed": "100G", "peer": "leaf2"}
    IXIA = {"IP": "10.4.4.10", "username": "aviz", "password": "aviz@123", "ixmedia": "fiber", "ixspeed": "100G"}
    INTF_UP_WAIT_TIME = 30
    REBOOT_WAIT_TIME = 300
    CLI_TIMEOUT = 0
    PCH_CONFIGURATION = False  # to disable the PCH configuration in chaos scripts, default is True
    TECHSUPPORT = True
    TECHSUPPORT_SINCE = "hour ago"
    TECHSUPPORT_TIMEOUT = 120
    ACCEPTABLE_DELTA = 0.5 #Threshold for acceptable packet/frame loss percentage
    CPU_MEM_THRESHOLD = 5 ## Threshold for acceptable change in CPU/memory utilization percentage
    
    # Use a network with prefixlen = 24
    MASTER_NETWORK = "172.16.1.0/24"
    
    S1_Ports = [
        # Links from Spine1 to Leaf1
        {"s1": "Ethernet0", "l1": "Ethernet0", "speed": 100000,
         "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf1"},
    #    {"s1": "Ethernet4", "l1": "Ethernet4", "speed": 100000,
    #     "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf1"},
        # Links from Spine1 to Leaf2
        {"s1": "Ethernet24", "l2": "Ethernet24", "speed": 100000,
         "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf2"}]
    #    {"s1": "Ethernet28", "l2": "Ethernet28", "speed": 100000,
    #     "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf2"}]
    
    S2_Ports = [
        # Links from Spine2 to Leaf1
        {"s2": "Ethernet16", "l1": "Ethernet16", "speed": 100000,
         "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf1"},
    #    {"s2": "Ethernet20", "l1": "Ethernet20", "speed": 100000,
    #     "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf1"},
        # Links from Spine2 to Leaf2
        {"s2": "Ethernet0", "l2": "Ethernet0", "speed": 100000,
         "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf2"}]
    #    {"s2": "Ethernet4", "l2": "Ethernet4", "speed": 100000,
    #     "netinfo": {"spine_ip": "", "leaf_ip": ""}, "peer": "leaf2"}]
    
    L1_Ixia_Ports = [
        # Links from Leaf1 to Ixia
        {"ixia": "21", "l1_ixia": "Ethernet60", "speed": 100000,
         "netinfo": {"ixia_ip": "", "leaf_ip": ""},
         "port_configs": {"localuhd/21": {"speed": "100G", "auto_negotiation": False, "rs_fec": True}}}]
    
    L2_Ixia_Ports = [
        # Links from Leaf2 to Ixia
        {"ixia": "22", "l2_ixia": "Ethernet60", "speed": 100000,
         "netinfo": {"ixia_ip": "", "leaf_ip": ""},
         "port_configs": {"localuhd/22": {"speed": "100G", "auto_negotiation": False, "rs_fec": True}}}]
    #    {"ixia": "10", "l2_ixia": "Ethernet100", "speed": 1000000,
    #     "netinfo": {"ixia_ip": "", "leaf_ip": ""}}]
    
    SPINE01 = {"IP": "10.4.4.65", 'SSH_PORT': 22,
               "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP,
               "cliWarnings": params.CLI_WARN_REGEXP,
               "climode": const.CliModes.SONiC_CLI,
               "ssh_user": "admin", "ssh_passwd": "YourPaSsWoRd",
               "Timeout": 30, "ports": S1_Ports, "name": "Spine1",
               "backup_cfg_file": "clean_config.json"}
    
    SPINE02 = {"IP": "10.4.4.67", 'SSH_PORT': 22,
               "CLI_PROMPTS": params.CLI_PROMPTS,
               "cliErrors": params.CLI_ERROR_REGEXP,
               "cliWarnings": params.CLI_WARN_REGEXP,
               "climode": const.CliModes.SONiC_CLI,
               "ssh_user": "admin", "ssh_passwd": "YourPaSsWoRd",
               "Timeout": 30, "ports": S1_Ports, "name": "Spine2",
               "backup_cfg_file": "clean_config.json"}
    
    LEAF01 = {"IP": "10.4.4.66", 'SSH_PORT': 22,
              "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP,
              "cliWarnings": params.CLI_WARN_REGEXP,
              "climode": const.CliModes.SONiC_CLI,
              "ssh_user": "admin", "ssh_passwd": "YourPaSsWoRd",
              "Timeout": 30, "ports": S1_Ports + S2_Ports, "name": "Leaf1",
              "backup_cfg_file": "clean_config.json"}
    
    LEAF02 = {"IP": "10.4.4.68", 'SSH_PORT': 22,
              "CLI_PROMPTS": params.CLI_PROMPTS,
              "cliErrors": params.CLI_ERROR_REGEXP,
              "cliWarnings": params.CLI_WARN_REGEXP,
              "climode": const.CliModes.SONiC_CLI,
              "ssh_user": "admin", "ssh_passwd": "YourPaSsWoRd",
              "Timeout": 30, "ports": S1_Ports + S2_Ports, "name": "Leaf2",
              "backup_cfg_file": "clean_config.json"}
    

    Layer 3

    Description | Test Case ID | PD

    Note: PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Scale up to 128 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_128 | Yes
    Scale up to 256 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_256 | Yes
    Scale up to 512 ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_512 | Yes
    Scale up to max supported ACL for matching source IP/port and destination IP/port | test_v4_acl_scale_max_supported | Yes
    Scale up to 128 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_128 | Yes
    Scale up to 256 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_256 | Yes
    Scale up to 512 ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_512 | Yes
    Scale up to max supported ACL for denying source IPv6/port and destination IPv6/port | test_qual_v6_acl_scale_max_supported | Yes
    Verify IPv4 L3 host routes scale for 2K | test_v4_host_routes_scale_2k | Yes
    Verify IPv4 L3 host routes scale for 4K | test_v4_host_routes_scale_4k | Yes
    Verify IPv4 L3 host routes scale for max supported | test_v4_host_routes_scale_max_supported | Yes
    Verify IPv6 L3 host routes scale for 2K | test_v6_host_routes_scale_2k | Yes
    IPv6 host routes scale to 2K routes (iBGP) | test_v6_host_routes_scale_ibgp_2k | Yes
    Verify IPv6 L3 host routes scale for 4K | test_v6_host_routes_scale_4k | Yes
    IPv6 host routes scale to 4K routes (iBGP) | test_v6_host_routes_scale_ibgp_4k | Yes
    Verify IPv6 L3 host routes scale for max supported | test_v6_host_routes_scale_max_supported | Yes
    IPv6 host routes scale to 32K routes (iBGP) | test_v6_host_routes_scale_ibgp_32k | Yes
    IPv6 host routes scale to 32K routes | test_v6_host_routes_scale_32k | Yes
    Verify IPv4 L3 prefix routes scale for 2K | test_v4_prefix_routes_scale_2k | Yes
    Verify IPv4 L3 prefix routes scale for 4K | test_v4_prefix_routes_scale_4k | Yes
    Verify IPv4 L3 prefix routes scale for max supported | test_v4_prefix_routes_scale_max_supported | Yes
    Verify IPv6 L3 prefix routes scale | test_qual_scale_005 | Yes
    Verify IPv4 nexthops scale up to 512 | test_v4_nexthops_scale_512 | Yes
    Verify IPv4 nexthops scale up to 1024 | test_v4_nexthops_scale_1024 | Yes
    Verify IPv4 nexthops scale max supported | test_v4_nexthops_scale_max_supported | Yes
    Verify IPv6 nexthops scale | test_qual_scale_007 | Yes
    Scalability to 16 ECMP paths and 32K routes | test_16ecmp_32k_routes | Yes
    Scalability to 32 ECMP paths and 64K routes | test_32ecmp_64k_routes | Yes
    Scalability to 64 ECMP paths and 128K routes | test_64ecmp_128k_routes | Yes
    Scalability to "number of leaf1 ports" ECMP paths and 1K routes per path | test_ecmp_scale_routes | Yes
    BGP dual stack scale for 4K prefix routes | test_bgp_dual_stack_scale_4k | Yes
    BGP dual stack scale for 16K prefix routes | test_bgp_dual_stack_scale_16k | Yes
    BGP dual stack scale for 32K prefix routes | test_bgp_dual_stack_scale_32k | Yes
    BGP dual stack scale for 64K prefix routes | test_bgp_dual_stack_scale_64k | Yes
    BGP dual stack scale for 128K prefix routes | test_bgp_dual_stack_scale_128k | Yes
    BGP dual stack scale with 32K host routes with traffic | test_bgp_host_route_dual_stack_32k | Yes
    BGP dual stack scale with 32K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_32k | Yes
    BGP dual stack scale for 32K prefix routes (with 16-bit subnet mask for IPv4 and 64-bit subnet mask for IPv6 routes) with traffic | test_bgp_route_dual_stack_32k | Yes
    BGP graceful restart with dual stack scale to 32K prefix routes | test_bgp_graceful_restart_preserve_fw_state_32k | Yes
    BGP graceful restart with dual stack scale to 32K prefix routes and docker restart | test_bgp_graceful_docker_restart_32k | Yes
    Enable/disable BGP graceful restart with dual stack scale to 32K prefix routes | test_bgp_graceful_feature_disable_enable_32k | Yes
    BGP graceful restart with dual stack scale to 32K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_32k | Yes
    BGP dual stack scale with 64K host routes with traffic | test_bgp_host_route_dual_stack_64k | Yes
    BGP dual stack scale with 128K host routes with traffic | test_bgp_host_route_dual_stack_128k | Yes
    BGP dual stack scale with 64K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_64k | Yes
    BGP dual stack scale with 128K LPM routes with traffic | test_bgp_lpm_route_dual_stack_scale_128k | Yes
    BGP graceful restart with dual stack scale to 64K prefix routes | test_bgp_graceful_restart_preserve_fw_state_64k | Yes
    BGP graceful restart with dual stack scale to 128K prefix routes | test_bgp_graceful_restart_preserve_fw_state_128k | Yes
    BGP graceful restart with dual stack scale to 64K prefix routes and docker restart | test_bgp_graceful_docker_restart_64k | Yes
    BGP graceful restart with dual stack scale to 128K prefix routes and docker restart | test_bgp_graceful_docker_restart_128k | Yes
    BGP graceful restart with dual stack scale to 64K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_64k | Yes
    BGP graceful restart with dual stack scale to 128K prefix routes and kill bgp process | test_bgp_kill_process_in_graceful_restart_128k | Yes
    Scalability to 4K IPv6 prefix routes with traffic | test_ipv6_l3_prefix_routes_scale_4k | Yes
    Scalability to 64K IPv6 prefix routes with traffic | test_ipv6_l3_prefix_routes_scale_64k | Yes
    Scalability to 128K IPv6 prefix routes with traffic | test_ipv6_l3_prefix_routes_scale_128k | Yes
    128 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_128 | Yes
    256 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_256 | Yes
    512 IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_512 | Yes
    (MAX_V4_ACL) IPv4 ACL rules (matching source IP/port and destination IP/port) | test_ingress_v4_acl_scale_max_supported | Yes
    Scaling BGP dynamic host routes with traffic | test_bgp_dynamic_route_scale | No
    BGP withdraw prefixes (5 times) with dual stack scale 160K (IPv4+IPv6) prefix routes | test_bgp_dual_stack_scale_intf_flap_160k | Yes
    BGP session down and docker restart with dual stack scale to 160K (IPv4+IPv6) prefix routes | test_bgp_dual_stack_scale_clear_bgp_160k | Yes
    BGP withdraw prefixes (5 times) with dual stack scale 160K (IPv4+IPv6) prefix routes | test_bgp_dual_stack_convergence_withdrawal_160k | Yes
    Default route with BGP advertised routes | test_bgp_default_route | No
    Maximum max-MED value on startup | test_bgp_convergence_with_max_med_001 | No
    Stress device CPU and memory with 16K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_16k | Yes
    Stress device CPU and memory with 32K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_32k | Yes
    Stress device CPU and memory with 64K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_64k | Yes
    Stress device CPU and memory with 128K (IPv4 + IPv6) prefix routes | test_bgp_dual_stack_stress_128k | Yes
    Stress device CPU and memory with 32K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_32k | Yes
    Stress device CPU and memory with 64K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_64k | Yes
    Stress device CPU and memory with 128K (IPv4 + IPv6) host routes | test_bgp_host_route_dual_stack_stress_128k | Yes
    Stress device CPU and memory with 32K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_32k | Yes
    Stress device CPU and memory with 64K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_64k | Yes
    Stress device CPU and memory with 128K (IPv4 + IPv6) LPM routes | test_bgp_lpm_route_dual_stack_stress_128k | Yes
    Scalability to 2 ECMP paths and 4K (IPv4 + IPv6) routes | test_dualstack_2_ecmp_4k_routes | Yes
    Scalability to 4 ECMP paths and 8K (IPv4 + IPv6) routes | test_dualstack_4_ecmp_8k_routes | Yes
    Scalability to 8 ECMP paths and 16K (IPv4 + IPv6) routes | test_dualstack_8_ecmp_16k_routes | Yes
    Scalability to 16 ECMP paths and 32K (IPv4 + IPv6) routes | test_dualstack_16_ecmp_32k_routes | Yes
    Scalability to 32 ECMP paths and 64K (IPv4 + IPv6) routes | test_dualstack_32_ecmp_64k_routes | Yes
    Scalability to 62 ECMP paths and 124K (IPv4 + IPv6) routes | test_dualstack_62_ecmp_124k_routes | Yes
    Scalability to 64 ECMP paths and 128K (IPv4 + IPv6) routes | test_dualstack_64_ecmp_128k_routes | Yes
    Shut/no shut interface between devices with 8 ECMP and 16K (IPv4 + IPv6) routes | test_dualstack_ecmp_routes_dut_int_flap | Yes
    Clear BGP sessions with 8 ECMP and 16K (IPv4 + IPv6) routes | test_dualstack_ecmp_routes_clear_bgp | Yes
    Shut/no shut host-side interface with 8 ECMP and 16K (IPv4 + IPv6) routes | test_dualstack_ecmp_routes_host_int_flap | Yes
    Advertise IPv4 and IPv6 routes over IPv4 neighbors | test_dualstack_ipv4_neighbors | Yes
    Scalability to 8 ECMP paths and 16K iBGP (IPv4 + IPv6) routes | test_dualstack_ecmp_ibgp | Yes
    Scalability up to 8 ECMP paths and 16K (IPv4 + IPv6) routes with support for warm reboot | test_ecmp_dualstack_warm_reboot | Yes
    BGP graceful restart with dual-stack 64K (IPv4+IPv6) prefix routes, and BGP disable | test_bgp_graceful_feature_disable_enable_64k | No
    BGP graceful restart with dual-stack 128K (IPv4+IPv6) prefix routes, and BGP disable | test_bgp_graceful_feature_disable_enable_128k | No

    Installation

    • Host Requirements
    • Supported Traffic Generators
    • Deploy the VM
    • User Login

    Introduction

    Fabric Test Automation Suite (FTAS) is a comprehensive collection of test cases packaged as a virtual machine. The objective of FTAS is to verify the necessary features and functions of the SONiC NOS for fabric deployment readiness. The test cases are primarily focused on qualifying the fabric for functions, features, scale, day-2 operations, and chaos scenarios.

    Figure 1: FTAS Architecture

    Supported Traffic Generators

    Traffic Generator | Type | Version

    IXIA | UHD | Chassis Type: Ixia UHD; Card Type: UHD100T32; Chassis Version: IxOS 9.10.2300.159; UHD: 1.5.49; Protocol Build Number: 9.10.200

    IXIA | NOVUS | Chassis Type: Ixia XGS2; Card Type: NOVUS100GE8Q28+FAN+25G+50G; Chassis Version: IxOS 9.12.2100.7; Protocol Build Number: 9.12.2009.10

    Host Requirements

    Hardware

    • CPU: x86_64 8 cores or more with Virtualization enabled

    • Memory: 8GB or more system memory

    • Disk Space: 256GB or more available disk space

    • Network Interface: 1 GbE NIC

    Note: For FTAS with ONES integration, more disk space and RAM are needed: 16GB or more system memory and 512GB or more available disk space.

    Operating System

    • Ubuntu 20.04 or later (64-bit)

    Note: Other flavours of Linux that support KVM should also be able to run FTAS. However, these alternative distributions have not been specifically tested for compatibility with FTAS, so users who opt for non-Ubuntu Linux systems should be aware that they may encounter compatibility issues and may need to perform additional configuration and testing on their own.

    Hypervisor Software

    KVM (Kernel-based Virtual Machine) is the leading open-source virtualisation technology for Linux. It installs natively on all Linux distributions and turns underlying physical servers into hypervisors so that they can host multiple, isolated virtual machines (VMs).

    We will use KVM as the hypervisor for the FTAS VM because, as a type-1 hypervisor, it generally outperforms type-2 hypervisors and provides near-metal performance.

    Please refer to the following steps to install it on the host machine (a command sketch follows this list):

    • Ensure that the latest Ubuntu packages are installed

    • Install KVM packages

    • Check if KVM acceleration is ready

    • Add user to libvirt group

    Verify if the libvirt user group is available using the below command

    If the libvirt group is not available, it can be created using the below command

    Then add the current user to the group

    • Set user and group for qemu. Update the qemu config with your user and libvirt group

    • Restart the libvirtd service

    Check the status of the libvirtd service

    • If your server has a GUI desktop installed, you may want to install virt-manager. The virt-manager application is a desktop Graphical user interface for managing virtual machines through libvirt
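    The following is a minimal command sketch for the steps above on Ubuntu 20.04; exact package names and the qemu.conf edit may differ slightly in your environment:

    # Ensure that the latest Ubuntu packages are installed
    sudo apt update && sudo apt upgrade -y

    # Install KVM packages
    sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils cpu-checker virtinst

    # Check if KVM acceleration is ready
    kvm-ok

    # Verify the libvirt group exists; create it if missing, then add the current user to it
    getent group libvirt || sudo groupadd libvirt
    sudo adduser "$USER" libvirt

    # Set user and group for qemu: edit /etc/libvirt/qemu.conf (user = "<your user>", group = "libvirt"),
    # then restart the libvirtd service and check its status
    sudo systemctl restart libvirtd
    sudo systemctl status libvirtd

    # Optional: GUI application for managing VMs through libvirt
    sudo apt install -y virt-manager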

    Network Configuration

    It is recommended that the virtual NIC on the VM should be bridged with the physical NIC on the host machine.

    In this sample configuration, enp1s0 is the physical NIC of the host machine, which is typically used for SSH (management).

    Note: Assign the same static IP as on the physical management NIC (enp1s0). After this step, the IP will be re-assigned to the bridge interface (br0) and the physical interface (enp1s0) will act as a Layer-2 interface.
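    A minimal netplan sketch for such a bridge is shown below (the interface name enp1s0, the file name, and the addresses are placeholders; adjust them to your network):

    # /etc/netplan/01-br0.yaml (example only)
    network:
      version: 2
      renderer: networkd
      ethernets:
        enp1s0:
          dhcp4: false
      bridges:
        br0:
          interfaces: [enp1s0]
          addresses: [192.168.3.10/24]
          gateway4: 192.168.3.1
          nameservers:
            addresses: [8.8.8.8]
          dhcp4: false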

    Apply the above configuration with sudo netplan apply.

    Warning: This step will reset the SSH connection and reassign the static IP from the physical interface (enp1s0) to the bridge interface (br0).

    Deploy the VM

    The VM image is provided as a gzip file. It should be decompressed to get the qcow2 image to deploy as a VM.

    sonic@sonic-39:~$ gunzip -c ftas_ones_vmi_1.1.2.qcow2.gz > ftas_ones_vmi_1.1.2.qcow2
    sonic@sonic-39:~$ ls -l
    total 8302936
    -rw-rw-r-- 1 sonic sonic 4929683456 Feb 21 06:21 ftas_ones_vmi_1.1.2.qcow2
    -rw-rw-r-- 1 sonic sonic 3572510886 Feb 21 06:20 ftas_ones_vmi_1.1.2.qcow2.gz
    sonic@sonic-39:~$ 

    The FTAS VM has Aviz ONES integrated into it and will take some time to initialise after the first boot.

    You can connect to the console port of the VM to see the installation logs.

    Create the VM using the GUI app virt-manager

    If your host server has Ubuntu Desktop and virt-manager installed you can use it to deploy the VM. Make sure you can start the Virtual Machine Manager and that it connects successfully to the local hypervisor.

    Creating a VM with virt-manager is very straightforward. Use the following steps to deploy the FTAS VM:

    • File -> New Virtual Machine -> Import existing disk image -> Forward

    • Browse to the FTAS disk image location and select Ubuntu as the OS name

    • Click "Forward" and select vCPU (min 2 cores) and Memory (4GB) for the VM

    • Click "Forward", give your VM a name and check "Customize configuration before install"

    • Select "NIC ...", in the "Network source" select the Linux bridge you created on the host machine

    • Apply the configuration and start the VM

    Create the VM using XML configuration

    • Create an XML configuration file from the following template

    The below lines can be changed to customize the VM installation:

    • Create a Linux bridge configuration file (bridged-network.xml) for libvirt from the following template

    • Define the Linux bridge for the VM (see the command sketch after this list)

    • Start the VM

    If you see a permission error, running the virsh command with sudo may fix the issue.

    • Check the VM status
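    A minimal virsh command sketch for the bridge and VM steps above (the XML file names and the VM domain name are placeholders):

    # Define and enable the Linux bridge network for libvirt (assumes bridged-network.xml references br0)
    sudo virsh net-define bridged-network.xml
    sudo virsh net-start bridged-network
    sudo virsh net-autostart bridged-network

    # Define the VM from its XML configuration file and start it
    sudo virsh define ftas_vm.xml
    sudo virsh start FTAS_VM01

    # Check the VM status
    sudo virsh list --all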

    Configure the IP address on the VM

    1. If there is a DHCP server on the management network, the VM will obtain its IP configuration from the DHCP server.

    2. If there is no DHCP server, or you want to configure the IP address statically, follow the steps below (a command sketch follows this list):

    • Enter VM console

    Note: The default username is 'oper' and the default password is 'oper@123'.

    • Check connections and devices

    • Release IP assigned by DHCP

    • Configure static IP for the connection

    • Set a default Gateway address

    • Set the IP configuration mode to manual

    • Reapply the configuration to the interface

    • Verify the IP address
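    The following sketch assumes the VM uses NetworkManager (nmcli); the connection name, device name, and addresses are placeholders:

    # Check connections and devices
    nmcli connection show
    nmcli device status

    # Configure a static IP, default gateway, and manual addressing mode for the connection
    sudo nmcli connection modify "Wired connection 1" ipv4.addresses 192.168.3.37/24
    sudo nmcli connection modify "Wired connection 1" ipv4.gateway 192.168.3.1
    sudo nmcli connection modify "Wired connection 1" ipv4.method manual

    # Reapply the configuration to the interface (this also releases the DHCP-assigned address)
    sudo nmcli device reapply ens3

    # Verify the IP address
    ip addr show ens3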

    Test FTAS VM reachability from outside the VM. If the VM is not reachable, check the access rule at the location below.

    If the value is 1, change it to 0 and the reachability issue should be resolved.

    There are some scaling scripts that require multiple network service servers (NTP, SYSLOG, TACACS+, etc.). In order to simulate this, we can add a secondary IP address to the VM NIC.

    To add a secondary IP address, use the command
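    A minimal sketch using iproute2 (the address and device name are placeholders; the exact command used by the original guide may differ):

    # Add a secondary IP address to the VM NIC
    sudo ip addr add 192.168.3.38/24 dev ens3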

    Network services containers

    The FTAS VM has Docker containers running and the following Docker images installed:

    • DHCP container image ztp_dhcp (DHCP service)

    Note: The ztp_dhcp (DHCP) service is not run by default, as it might conflict with a DHCP server already running in the DC infrastructure.

    • Net Services container image netservices:v1 (NTP, SYSLOG, TACACS+ services). This container is run with the "--network=host" option. If you need to change the configurations of the services please find them in the following configuration files.

    Docker containers running by default:

    Test Case Execution

    In the home folder of the logged-in user, there is a Python script named "qjob.py". This script handles test scheduling.

    Following is a brief usage of the script:

    oper@ftasvm:~$ ./qjob.py -h
    usage: qjob.py [-h] [-a {add,remove,show,kill_job} | -S {running,paused}] [-s SUITEFILE] [-V]
    
    Test Job Queue Submitter
    
    optional arguments:
      -h, --help            show this help message and exit
      -a {add,remove,show,kill_job}, --action {add,remove,show,kill_job}
                            add: Add job to queue; remove: remove job from queue; show: show queue; kill_job: kill running job
      -S {running,paused}, --status {running,paused}
                            Set queue execution status. ["running" or "paused"]
      -s SUITEFILE, --suitefile SUITEFILE
                            Yaml testsuite file to send to the execution queue
      -V, --version         Show FTAS VM version
    oper@ftasvm:~$ 

    The script can take the following actions:

    • Show the current queue

    oper@ftasvm:~$ ./qjob.py -a show
    Job_Queue:  []
    Queue Status:  paused
    oper@ftasvm:~$ 

    When no tests are scheduled the script will show an empty queue.

    • Adding a test suite to the queue

    We can add multiple test suites to the queue at any time to be executed.
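    For example, based on the qjob.py usage shown above (the suite file name is a placeholder):

    # Add a test suite (YAML file) to the execution queue
    ./qjob.py -a add -s ~/testsuites/<suite_file>.yaml

    # Set the queue status to "running" so the test_runner service starts picking up jobs
    ./qjob.py -S running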

    • Removing the test suite from the queue

    • Changing queue status

    There are two statuses in the job queue:

    When the job queue is in "paused" status, the test_runner service does not pick any job in the queue for execution.

    When the job queue is in "running" status, the test_runner service picks the oldest job in the queue for execution. After the test case execution is complete the queue test_runner service changes the queue status to "paused".

    Note: Changing the queue status from running to paused while a job is running won't stop the running job, but the test_runner service won't pick the next job for execution.

    • Kill or terminate a running job

    After terminating the running job, the test_runner service pauses the queue.

    Test Setup

    Before scheduling any jobs, validate the physical testbed to make sure all links are connected and operationally UP. You also need a clean configuration file with valid interface settings (breakout, speed, FEC, admin status, etc.), no IP interfaces, no BGP instances, and no QoS. The scripts use this clean configuration file to restore the DUTs to their default configuration as part of the clean-up process.

    The cleanup configuration file should be created at /etc/sonic/clean_config.json. The clean configuration file follows the config_db.json format. It must include the port-related settings for lane mapping, speed, and admin status.

    How to create a clean config file

    • For every DUT in the testbed, backup the default config(if needed)

    • Start with the default configuration after a fresh installation of SONiC

    • Alternatively, you can also create a clean config by editing the config_db.json as below

      • Edit config_db.json and remove the following configuration blocks and save the file

    • Configure the below list through config_db.json:

      • The management IP address for eth0 and gateway

    • Port breakout (if any)

    • Port speed, FEC, Auto-negotiation on links connected to other devices and Ixia

    • Configure "admin_status": "down"

    • Save config_db.json

    • Copy config_db.json to /etc/sonic/clean_config.json

    Note: clean_config.json should be built using either a text editor or the SONiC CLI, but not both.

    • Load the configuration on the device (see the command sketch after this list)

    • Add the following line to /etc/sonic/frr/vtysh.conf

    • Cleanup the BGP configuration from FRR
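    A minimal command sketch for the copy and load steps above (file names other than /etc/sonic/clean_config.json are placeholders):

    # Back up the default configuration (if needed)
    sudo cp /etc/sonic/config_db.json /etc/sonic/config_db.json.backup

    # Copy the edited config_db.json to the clean configuration location
    sudo cp config_db.json /etc/sonic/clean_config.json

    # Load the clean configuration on the device
    sudo config load -y /etc/sonic/clean_config.json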

    Chaos Test Setup

    TestBed File

    Build the Chaos testbed file using "~/testbeds/ftas_chaos_topo.py"

    Disable all cleanup options

    The Chaos test suite loads a base configuration to all DUTs and Ixia for its test scripts, so ensure the cleanup variables (for example, CLEANUP_BY_REBOOT and CLEANUP_BY_CFG_RELOAD) are set to False in the testbed file.

    Clean up DUTs Configuration Manually

    Please make sure DUTs have a clean or default configuration before the Chaos test run.

    Remove the test statistics report file

    The Chaos suite generates a statistics report file at ~/reports/report.txt to track its execution status and metric data for all test scripts. Please make sure to remove this file before running the Chaos test to avoid Ixia library errors.

    User Login

    Users can log in to the FTAS VM using one of the following methods:

    • Console login

    virsh console <VM_domain_name>
    
    #Example -  
    #sonic@sonic-39:~$ virsh console FTAS_VM01
    #Connected to domain FTAS_VM01
    #Escape character is ^]
    #oper@ftasvm:~$ 
    • SSH login

    ssh <username>@<mgmt ip address of the VM>
    
    #Example -
    #sonic@sonic-39:~$ ssh oper@192.168.3.37
    #oper@192.168.3.37's password: 
    #Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-137-generic x86_64)
    
    # * Documentation:  https://help.ubuntu.com
    # * Management:     https://landscape.canonical.com
    # * Support:        https://ubuntu.com/advantage
    
    #This system has been minimized by removing packages and content that are
    #not required on a system that users do not log into.
    
    #To restore this content, you can run the 'unminimize' command.
    #Last login: Tue Feb 21 08:24:31 2023
    #oper@ftasvm:~$ 

    Note: The default username is 'oper' and the default password is 'oper@123'.

    After logging in, the user is dropped into the Bash shell with the following pre-defined folders and files:

    • qjob.py - Script to schedule execution jobs.

    • testbeds - Directory to create and maintain testbed parameter files.

    • jobs - Directory containing the JSON file that holds the job queue. qjob.py script controls and edits this JSON file. Please don't edit the JSON file manually.

    What's New?

    Release 3.4

    • New IPv6 usecases

    • Graceful restart hardening

    • VXLAN - type 5 route scalability to 80000 routes

    • VXLAN - VM Mobility use case

    • VRRP coverage

    • Management VRF

    • Control plane ACL (CACL) coverage

    Release 3.3

    • VRRP

    • DHCP Relay

    • syslog TCPDump

    • TACACS passkey encryption

    Release 3.2

    Enhanced BGP Scalability Testing

    • Support for testing up to 64 ECMP paths and 128K (IPv4 + IPv6) routes

    • Enhanced BGP resilience testing

    • iBGP route testing with up to 8 ECMP paths and 16K routes

    Comprehensive L3 EVPN-VXLAN Testing

    • BGP unnumbered and numbered L3 EVPN-VXLAN fabric validation

    • Inter-VLAN and intra-VLAN L3 traffic forwarding tests with route scaling from 16K to 128K

    • Network resilience tests, including node drain and link drain scenarios

    Additional Improvements

    • Warm reboot support for ECMP scalability testing

    • QoS testing with DWRR scheduling algorithm verification

    • Dynamic route scale determination

    Release 3.1

    • QoS

    • Stress testing

    • EVPN/L2VXLAN enhancements

    • More SNMP coverage

    Release 3.0

    • STP

    • Storm control

    • Static Lag

    • Dynamically find the route scalability limits

    Release 2.3.0

    • MCLAG with Layer2

    • MCLAG with Layer3

    • EVPN VxLAN

    Release 2.2.0 (10/5/2023)

    • ECMP Scalability

    • BGP Dual Stack Scalability

    • BGP Graceful Restart

    Release 2.1.0 (7/7/2023)

    • Scalability enhancements:

      • Replaced test cases that used fixed values with variants that cover the supported range.

      • Parametrized the scale value up to the maximum supported range (example: scale_val = 64/128/256/512).

    Test Suite Configuration

    Test suite configuration is a text file where you can list all test functions (each test function is a test case) for a batch run. Test suite files are stored in the "~/testsuites" folder.

    Below is an example of how a test suite file is structured:

    • TEST***_FOLDER - Path to the test artefacts including testbeds, testsuites and reports

    Test Case Results Reporting

    The FTAS test_runner service collects the logs from the test case execution and saves them to the ~/logs/jobs.log file.

    FTAS also creates an HTML version of the reports, available at "~/reports/test_report_20230218_**/".

    To view the HTML test report, visit the URL http://<VM IP addr>:8090/

    Test Cases

    Layer 2


    Feature

    This category covers the validation of mandatory features and functions required for data center deployments.

    Management

    This section verifies the mandatory functions for management operation in a Fabric


    Platform


    BFD

  • QoS Buffer Testing

  • SNMP walk to loopback interface

  • IPv6 enhancements

  • Static Anycast Gateway (SAG) validation with bidirectional L3 traffic

  • MCLAG testcase hardening

  • NetOps testcase hardening

  • Fast reboot testing

  • Platform CLI coverage

  • More verifications in resilience testcase

  • Route scalability enhancements

  • Default route advertisement

  • Max MED verification

  • Runtime optimization

  • IPv6 prefix route (4K, 64K, 128K) Scalability

  • BGP Dual Stack (64K and 128K) Scalability

  • Core file detection during test execution

  • New BGP netops use cases

  • Dynamic Port Breakout

  • Support for chaos on routed ports (no port channel configuration)

  • Support for chaos on routed port-channel

  • Debuggability enhancements

    • Syslog capture for all devices for the testcase duration

    • Techsupport dump in case of test failure

    • Printing DOM information if ports fail to come up

    • Easier identification of Ixia sessions by appending test id to session name

  • BGP Netops coverage:

    • eBGP multi-AS config, adjacency, route convergence and data path using router interface

    • eBGP Multi-AS Route Convergence and data path using loopback

    • BGP Node Drain - Add route map to remove and restore SPINE nodes 1 and 2 using the Community list

    • Test Link drain - Apply Route-Map permit, with IPv6 Traffic/Prefixes and with prefix lists

    • Node drain: with IPv6 Traffic, Test Node Drain (Spine 1 and Spine 2) for IPv6 Traffic/Prefixes

  • Debuggability enhancements:

    • Added device status data before and after test cases and added log messages.

    • Added FTAS version display in final report log.

  • Platform-specific suite files:

    • Added suite files for Wistron, Nvidia and EC

  • eBay specific suite files:

    • Added suite files for EC4630, EdgeCore AS-97xx

  • Platform/Version compatibility check:

    • Parameterized variables were added to validate the topology file and the supported SONiC version.

  • Support for auto cleanup before the test run:

    • Using this variable, devices can be forcefully cleaned up before the test run.

    • Cleans up any ACL, route, IPv4 interface, VLAN, and port-channel configurations on the devices and brings down all ports before the test run starts.

  • Card Type: UHD100T32

  • UHD: 1.5.49

  • Card Type: NOVUS100GE8Q28+FAN+25G+50G


    Scalability

    oper@linux:~$ ./qjob.py -S running
  • VLAN

  • VLAN_MEMBER

  • PORTCHANNEL

  • PORTCHANNEL_MEMBER

  • BGP configuration

  • Loopback interfaces

  • Edit "DEVICE_METADATA" in /etc/sonic/config_db.json as below

    • Configure "hostname" for each device (Example: Leaf01, Leaf02, Spine01, Spine02)

    • Add "docker_routing_config_mode": "split" configuration

  • testsuites - Directory to maintain testsuite yaml files.

  • reports - Directory to store HTML reports of completed jobs.

  • configs - Directory to store test configs

  • jobs.py - Script to manipulate queue jobs. It is imported by the qjob.py utility.

  • logs - Maintains execution logs file of all jobs. Users can clean up the files in the logs and reports folder to regain disk space when needed.

  • sudo apt-get update && sudo apt-get upgrade
    sudo apt install libvirt-clients libvirt-daemon-system libvirt-daemon virtinst bridge-utils qemu qemu-kvm
    kvm-ok
    
    # You should see a message like "KVM acceleration can be used"
    sonic@sonic-39:~$ kvm-ok
    INFO: /dev/kvm exists
    KVM acceleration can be used
    sonic@sonic-39:~$ 
    sudo getent group | grep libvirt
    
    sonic@sonic-39:~$ sudo getent group | grep libvirt
    libvirt:x:119:sonic,root
    libvirt-qemu:x:64055:libvirt-qemu
    libvirt-dnsmasq:x:120:
    sonic@sonic-39:~$ 
    sudo groupadd --system libvirt
    
    sonic@sonic-39:~$ sudo groupadd --system libvirt
    groupadd: group 'libvirt' already exists
    sonic@sonic-39:~$ 
    sudo usermod -a -G libvirt $(whoami)
    sudo vi /etc/libvirt/qemu.conf
    
    # Some examples of valid values are:
    #
    #       user = "qemu"   # A user named "qemu"
    #       user = "+0"     # Super user (uid=0)
    #       user = "100"    # A user named "100" or a user with uid=100
    #
    #user = "root"
    user = "<your host user>" 
    
    
    # The group for QEMU processes run by the system instance. It can be
    # specified in a similar way to the user.
    group = "libvirt"
    sudo systemctl stop libvirtd
    sudo systemctl start libvirtd
    sudo systemctl status libvirtd
    
    sonic@sonic-39:~$ sudo systemctl status libvirtd
    ● libvirtd.service - Virtualization daemon
       Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
       Active: active (running) since Sat 2023-02-18 10:16:26 UTC; 27s ago
         Docs: man:libvirtd(8)
               https://libvirt.org
     Main PID: 68774 (libvirtd)
        Tasks: 33 (limit: 32768)
       CGroup: /system.slice/libvirtd.service
               ├─54120 /usr/bin/qemu-system-x86_64 -name guest=ftas03,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-8-ftas03/master-key.aes -machine pc-i440fx-1.5,accel
               └─68774 /usr/sbin/libvirtd
    
    Feb 18 10:16:26 sonic-39 systemd[1]: Starting Virtualization daemon...
    Feb 18 10:16:26 sonic-39 systemd[1]: Started Virtualization daemon.
    lines 1-13/13 (END)
    sudo apt-get install virt-manager
    Netplan configuration for Linux bridge using DHCP
    #/etc/netplan/00-installer-config.yaml
    network:
      ethernets:
        enp1s0:
          dhcp4: no
      bridges:
        br0:
          interfaces: [enp1s0]
          dhcp4: yes
          mtu: 1500
          parameters:
            stp: true
            forward-delay: 4
          dhcp6: no
      version: 2
    Netplan configuration for Linux bridge using static IP
    #/etc/netplan/00-installer-config.yaml
    network:
      ethernets:
        enp1s0:
          dhcp4: no
      bridges:
        br0:
          interfaces: [enp1s0]
          addresses: [172.16.1.100/24]
          gateway4: 172.16.1.1
          mtu: 1500
          nameservers:
            addresses: [8.8.8.8, 8.8.4.4]
          parameters:
            stp: true
            forward-delay: 4
          dhcp4: no
          dhcp6: no
      version: 2
    sudo netplan apply
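
    After applying the Netplan configuration, the bridge can be verified with the bridge-utils and iproute2 tools installed earlier (a quick sketch; the member interface name will differ per host):

    # br0 should list enp1s0 (or your uplink interface) as a member
    brctl show br0

    # br0 should hold the host's IP address (DHCP or static)
    ip -br addr show br0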
    oper@ftasvm:~$ ./qjob.py -a add -s testsuites/data_2dut.suite   
    oper@ftasvm:~$ ./qjob.py -a add -s testsuites/data_4dut.suite  
    
    oper@ftasvm:~$ ./qjob.py -a show
    Job_Queue:  ['/home/oper/testsuites/data_2dut.suite', '/home/oper/testsuites/data_4dut.suite']
    Queue Status:  paused
    oper@ftasvm:~$ 
    oper@ftasvm:~$ ./qjob.py -a remove -s /home/oper/testsuites/data_4dut.suite 
    [INFO]: Test job removed /home/oper/testsuites/data_4dut.suite
    oper@ftasvm:~$ 
    
    oper@ftasvm:~$ ./qjob.py -a show
    Job_Queue:  ['/home/oper/testsuites/data_2dut.suite']
    Queue Status:  paused
    oper@ftasvm:~$ 
    oper@linux:~$ ./qjob.py -a kill_job
    Trying to terminate running job...done
    [INFO]: The queue is paused. Please update its status after your debugging
    
    oper@linux:~$ ./qjob.py -a remove -s /home/oper/testsuites/data_2dut.suite 
    [INFO]: Test job removed /home/oper/testsuites/data_2dut.suite
    oper@ftasvm:~$ 
    cd /etc/sonic/
    cp config_db.json config_db.json.bak
    #reset to factory default
    sudo rm /etc/sonic/config_db.json
    sudo config-setup factory
    sudo reboot
    // Sample DEVICE_METADATA
        "DEVICE_METADATA": {
            "localhost": {
                "buffer_model": "traditional",
                "default_bgp_status": "up",
                "default_pfcwd_status": "disable",
                "docker_routing_config_mode": "split",
                "hostname": "INVM32K-02",
                "hwsku": "Wistron_sw_to3200k_32x100",
                "mac": "00:30:64:6f:61:ad",
                "platform": "x86_64-wistron_sw_to3200k-r0",
                "type": "not-provisioned"
            }
        },
        "MGMT_INTERFACE": {
            "eth0|10.4.4.65/23": {
                "gwaddr": "10.4.4.1"
            }
        },
    Sample port configuration block
            "Ethernet0": {
                "admin_status": "down",
                "alias": "etp1",
                "index": "1",
                "lanes": "0,1,2,3",
                "mtu": "9100",
                "speed": "100000"
            },
    sudo cp /etc/sonic/config_db.json /etc/sonic/clean_config.json
    sudo config reload -y -f
    service integrated-vtysh-config
    vtysh
    show run
    config t
    
    <remove all BGP configurations>
    
    write memory
    CLEANUP_BY_REBOOT = False
    CLEANUP_BY_CFG_RELOAD = False
    cd ~/reports/
    rm report.txt 
    oper@ftasvm:~$ ls -lrth
    total 44K
    drwxrwxr-x  3 oper oper 4.0K Nov 18 16:54 ones
    -rwxrwxr-x  1 oper oper 1.9K Jan 27 10:19 qjob.py
    drwxr-xr-x  2 oper oper 4.0K Jan 31 07:26 jobs
    drwxr-xr-x  2 oper oper 4.0K Feb  7 11:03 configs
    -rwx------  1 oper oper 6.2K Feb  7 11:03 jobs.py
    drwxrwxr-x  2 oper oper 4.0K Feb 18 12:54 __pycache__
    drwxrw-rw-  2 oper oper 4.0K Feb 18 12:54 logs
    drwxr-xr-x  2 oper oper 4.0K Feb 18 14:06 testbeds
    drwxr-xr-x  2 oper oper 4.0K Feb 18 14:07 testsuites
    drwxrwxrwx 10 oper oper 4.0K Feb 20 04:58 reports
    oper@ftasvm:~$ 

    Line #25: the path to the qcow2 VM image file

    Figure 2: Virt-Manager
    Figure 3: Create new VM from existing disk image
    Figure 4: Set vCPU and memory
    Figure 5: Network selection

  • TEST_CONTACT - Email address of the test owner. This information is included in the test report.

  • TESTSUITES - Define the sub-test suites in a key-value pair for execution

    • "./essential/taas_platform_Interface_test.py" - Location of the test script files.

    • SKIP - Defines whether a particular test script will be included or excluded from the execution. If set to "SKIP": true, then the tests will be excluded from execution.

    • COMMON_TESTBED - The testbed file all test cases use.

    • TESTCASES - A list of test functions (test cases) with the structure of "TESTCASES": [{"test_syslog_004": ""},{"test_syslog_002": ""}].

    hashtag
    Customer suites

    FTAS comes with the following predefined suites, which can be run directly on the applicable platform and release combinations. These suite files are present in the ~/testsuites directory; an example of queuing one of these suites with qjob.py follows the table.

    Suitefile
    Description
    Number of testcases

    PI.suite

    All platform independent testcases

    148

    PD.suite

    All platform dependent testcases

    209

    data_1dut.suite

    All one DUT testcases applicable on data switches

    82

    Detailed test logs for a test case

    hashtag
    Deleting Test Reports

    oper@ftasvm:~/reports/test_report_20230218_125505$ ls -lrth
    total 40K
    drwxr-xr-x 2 root root 4.0K Feb 18 13:04 taas_acl_test.py
    -rw-r--r-- 1 root root  22K Feb 18 13:04 ftas_2duts_topo.py.html
    -rw-r--r-- 1 root root 9.0K Feb 18 13:04 ftas_2dut_suite.yaml.html
    oper@ftasvm:~/reports/test_report_20230218_125505$ 
    Test suite summary report
    oper@ftasvm:~$ cd reports/
    oper@ftasvm:~/reports$ sudo rm -rf test_report_20230218_125505
    
    oper@ftasvm:~$ cd logs/
    oper@ftasvm:~/logs$ rm -rf jobs.log
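
    Before removing anything, it can be useful to check how much space the reports and logs actually consume (a simple sketch using standard Linux tools):

    # Disk usage of the reports and logs folders
    du -sh ~/reports ~/logs

    # Overall free space on the VM
    df -h /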

    Verify MTU functionality for Jumboframe packets

    test_ports_mtu_002

    Yes

    Enable LLDP globally and disable per-port basis

    test_lldp_001

    No

    Verify that user can enable/disable LLDP globally

    test_lldp_002

    No

    Verify LLDP neighbors are learnt properly with proper ChassisID, portID, system name, system description

    test_lldp_013

    No

    Verify LACP member addition and removal

    test_lacp_003

    No

    Verify LACP functionality across reboot

    test_lacp_005

    No

    Verify LACP functionality after link failover/failback of physical interface

    test_lacp_011

    No

    Verify LACP functionality after removal and addition of port-channel member

    test_lacp_012

    No

    Verify whether user can create/delete VLAN

    test_vlan_001

    No

    Verify whether user can add/modify/delete ports to the VLAN as tagged/untagged members

    test_vlan_002

    No

    Verify the ability to configure a port as untagged VLAN member

    test_vlan_004

    No

    Verify the ability to configure a port as tagged VLAN member

    test_vlan_005

    No

    Verify that the user can configure port-channel interface as untagged VLAN member

    test_vlan_007

    No

    Verify that the user can configure port-channel interface as tagged VLAN members

    test_vlan_008

    No

    Warm Reboot - Device configuration impact for VLAN Config

    test_vlan_011

    Yes

    Verify whether user can configure port as untagged member of a VLAN

    test_vlan_014

    No

    Verify whether known unicast traffic is forwarded to the destination port-channel

    test_vlan_016

    No

    EVPN_VXLAN Configuration and show commands

    test_bgp4_evpn_vxlan_001

    Yes

    EVPN VXLAN for known unicast, BUM traffic (eBGP) with RIF

    test_bgp4_evpn_vxlan_002

    Yes

    EVPN VXLAN for known unicast, BUM traffic (eBGP) with SVI

    test_bgp4_evpn_vxlan_003

    Yes

    EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - RIF

    test_bgp4_evpn_vxlan_005

    Yes

    EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - SVI

    test_bgp4_evpn_vxlan_006

    Yes

    EVPN VXLAN for known unicast traffic (eBGP) with link events and router failure - RPCH

    test_bgp4_evpn_vxlan_007

    Yes

    Asymmetric IRB with EVPN eBGP

    test_bgp4_evpn_vxlan_015

    Yes

    Asymmetric IRB with EVPN iBGP

    test_bgp4_evpn_vxlan_016

    Yes

    MC-LAG L2 validation using port-channel configuration

    test_mclag_layer2_steady_state

    Yes

    MC-LAG L2 validation, Bring down the member link of SPINE01

    test_mclag_layer2_member_link_down

    Yes

    MC-LAG L2 keepalive link down

    test_mclag_layer2_peer_link_down

    Yes

    MCLAG-L2Active Reboot

    test_mclag_layer2_active_reboot

    Yes

    MCLAG-L2 Standby Reboot

    test_mclag_layer2_standby_reboot

    Yes

    Storm control CLI

    test_storm_control_cli_verification

    Yes

    DUT throws proper error for invalid storm-control input

    test_storm_control_invalid_input

    Yes

    Storm control with broadcast traffic

    test_storm_control_broadcast

    Yes

    Storm control with unknown-unicast traffic

    test_storm_control_unknown_unicast

    Yes

    Storm control with unknown-multicast traffic

    test_storm_control_unknown_multicast

    Yes

    Storm control configuration and behavior during warm-reboot

    test_storm_control_warm_reboot

    Yes

    Configure STP on the devices check for loop free topology with root bridge selection

    test_configure_stp_validate

    Yes

    Enable STP, ensure loop-free topology, configure priority and spine set as root bridge

    test_stp_priority

    Yes

    Edge port transition to forwarding state with portfast enabled

    test_port_fast

    Yes

    Create a static LAG and verify the traffic flow

    test_pch_creation

    Yes

    Add delete members to static LAG and verify the traffic flow

    test_pch_sec_member_add_del

    Yes

    Static LAG recovers after restarting the teamd container

    test_lag_docker_teamd_reboot

    Yes

    Static LAG entry in redis

    test_create_pch_check_rediscli

    Yes

    Static LAG member entry in redis

    test_mem_pch_rediscli_check

    Yes

    Shut and no shut the static LAG

    test_shut_noshut_pch

    Yes

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Verify that ping works properly when using LACP

    test_ping_011

    No

    Verifying IPv4 (1518) MTU

    test_mtu_001

    Yes

    Verify ERSPAN by configuring a mirror from ACL match to destination IP in ingress directions

    test_port_span_003

    Yes

    Verify SNMP configurations

    test_snmp_config

    No

    Verify SNMP Get/GetNext/Walk requests MIBS: ENTITY, IF-MIB, IP-MIB,

    test_snmp_commands

    No

    Verify Config load for incremental configuration

    test_config_load

    No

    Verify IPv6 address configuration on Front Panel (Data) ports

    test_front_panel_ports_ipv6

    No

    Verify config reload to restore configuration

    test_cfg_backup_restore

    No

    Verify syslogs are generated properly on link down/up

    test_syslog_002

    No

    Verify syslogs are generated properly on LACP UP/Down

    test_syslog_004

    No

    Verify SSH from host to SONIC on management interface

    test_ssh_001

    No

    Verify SSH from host to SVI interface and routed port

    test_ssh_002

    No

    Verify whether the session is successfully closed right after SSH disconnect from the client.

    test_ssh_003

    No

    Verify Tacacs+ with AAA authentication

    test_tacacs_001

    No

    Verify NTP server works as clock source correctly

    test_ntp_007

    No

    Verify timezone can be manually configured.

    test_timezone_001

    No

    Verify ping from SONIC SVI interface and routed port

    test_ping_001

    No

    Verify that ping works properly with multiple parameter combination

    test_ping_009

    No

    Verify that SONiC Version and serial information can be retrieved via SNMP_WALK command

    test_snmp_walk_version_serial

    No

    Verify that SONiC interface index information can be retrieved via SNMP_WALK command

    test_snmp_walk_inf_index

    No

    Verify that SONiC interface name information can be retrieved via SNMP_WALK command

    test_snmp_walk_inf_name

    No

    Verify that SONiC interface admin and oper status info can be retrieved via SNMP_WALK command

    test_snmp_walk_inf_admin_oper

    No

    Verify that SONiC interface type info can be retrieved via SNMP_WALK command

    test_snmp_walk_inf_type

    No

    Verify that tagged ports vlan id can be retrieved using the SNMP_WALK command

    test_snmp_walk_vlan_tagged_ports

    No

    Verify that untagged ports vlan id can be retrieved using the SNMP_WALK command

    test_snmp_walk_vlan_untagged_ports

    No

    Verify that LLDP neighbor info. can be retrieved via SNMP_WALK command

    test_snmp_walk_lldp

    No

    Verify that routing information can be retrieved via SNMP_WALK command

    test_snmp_walk_ip_routing_info

    No

    Verify that SONiC IP interface index and netmask info can be retrieved via SNMP_WALK command

    test_snmp_walk_ip_inf_index_and_netmask

    No

    SNMP walk to loopback interface to retrieve the interface index and netmask

    test_snmp_walk_to_loopback_to_get_intfindex_netmask

    No

    SNMP walk to loopback interface to retrieve the mac address

    test_snmp_walk_to_loopback_to_get_macaddress

    No

    SSH over SVI interface

    test_ssh_004

    No

    Verify syslog capture in TCP dumps on link down/up events

    test_syslog_tcp_dump

    No

    TACACS passkey is encrypted in the configuration

    test_tacacs_passkey_encryption

    No

    Verify that a configured time zone is synchronized with the system time

    test_frr_timezone_sync

    No

    NTP configuration with mgmt VRF

    test_ntp_vrf

    No

    MGMT VRF configuration

    test_mgmt_vrf

    No

    SSH using control plane ACL

    test_ctrlplane_acl_ssh

    No

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Verify SPAN with source ports (and LAG) to the destination port in ingress/egress/both directions.

    test_port_span_001

    Yes

    Verify ERSPAN by configuring a mirror with the list of source ports/LAG to destination IP in ingress/egress/both directions

    test_port_span_002

    Yes

    Verify Platform CPU and Process Status

    test_platform_003

    Yes

    Verify Platform PSU

    test_platform_004

    Yes

    Verify Platform Tech-Support

    test_platform_005

    Yes

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Verify Platform Information

    test_platform_001

    Yes

    Verify Platform Health Status

    test_platform_002

    Yes

    Layer 3

    Description
    Test Case ID
    PD
    Topology

    Verify that ping works over ECMP

    test_ping_013

    No

    Verify that IP address can be configured over SVI

    test_IP_001

    No

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Layer 2

    Description
    Test Case ID
    PD
    Topology

    Verify Secondary subnet under a VLAN up to 10 subnets

    test_10_secondary_subnet_under_vlan

    Yes

    Verify Secondary subnet under a VLAN up to 20 subnets

    test_20_secondary_subnet_under_vlan

    Yes

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Ports

    Description
    Test Case ID
    PD
    Topology

    Verify Port Auto-negotiation

    test_autoneg_001

    Yes

    Verify physical port operational down/up

    test_ports_002

    No

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    Management

    Description
    Test Case ID
    PD
    Topology

    Verify syslog servers scale

    test_qual_scale_011

    No

    Verify NTP server scale

    test_qual_scale_012

    No

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    How to contact Aviz Networks Support?

    hashtag
    Contact Support

    The Aviz Network Support team can be reached by

    1. Sending an email to support@aviznetworks.com

    2. Submitting a Ticket at

    3. Live Chat on

    hashtag
    Submitting a Ticket

    A ticket can be submitted with or without an account at the

    Mandatory Fields:

    • Subject

    • Issue Type (Post Deployment, Pre-Deployment, General Query, RMA)

    • Priority (Low, Normal, High, Urgent)

    Optional Fields:

    • External ID (Community Request ID or Past Case Number)

    • Hardware (Switch Model)

    • ASIC vendor (chipset)

    For Technical Issues, we recommend the description include the following:

    • Repro steps, if the issue is reproducible

    • The sequence of events that lead to the failure state

    • Artefacts - Tech Support dump (tar.gz file), Logs, Command Outputs, Topology Diagrams etc...

    Resilience

    No.
    Description
    Test Case ID
    PD
    Topology

    1

    Verify Chaos Base configuration

    test_deploy_verify_base_config

    Yes

    2

    circle-info

    PD (Platform dependent) means that FTAS is designed to work with specific hardware and software configurations and may not be compatible with other platforms.

    vi ftas.xml
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>FTAS_VM01</name>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <vcpu placement='static'>4</vcpu>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-1.5'>hvm</type>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <clock offset='utc'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/home/oper/taas_vm/taas_vm_v3.qcow2' />
          <target bus='virtio' dev='vda'/>
        </disk>
        <serial type='pty'>
          <source path='/dev/pts/3'/>
          <target port='0'/>
        </serial>
        <!-- Management interface eth0 -->
        <interface type='network'>
    	<model type='e1000' />
            <source network='br0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
        </interface>
       <controller type='usb' index='0'/>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
    vi bridged-network.xml
    <network>
        <name>br0</name>
        <forward mode="bridge" />
        <bridge name="br0" />
    </network>
    #Execute the below command to attach the VM to the Linux Bridge 
    sonic@sonic-39:~$ virsh net-define bridged-network.xml
    sonic@sonic-39:~$ virsh net-start br0
    sonic@sonic-39:~$ virsh net-autostart br0
    sonic@sonic-39:~$ virsh net-list
     Name                 State      Autostart     Persistent
    ----------------------------------------------------------
     br0                  active     yes           yes
    
    sonic@sonic-39:~$ 
    virsh create <VM XML configuration file>
    
    #sonic@sonic-39:~$ virsh create ftas.xml 
    #Domain FTAS_VM01 created from ftas.xml
    #sonic@sonic-39:~$ 
    sonic@sonic-39:~$ virsh list
     Id    Name                           State
    ----------------------------------------------------
     8     FTAS_VM01                      running
    sonic@sonic-39:~$ 
    sonic@sonic-39:~$ virsh console FTAS_VM01
    Connected to domain ftas03
    Escape character is ^]
    
    ftasvm login: 
    sudo nmcli con show
    
    oper@ftasvm:~$ sudo nmcli con show
    NAME                UUID                                  TYPE      DEVICE 
    Wired connection 1  782de6d4-3867-3c5e-95fb-061ae39e5fae  ethernet  eth0   
    oper@ftasvm:~$ 
    # Capture the connection NAME of eth0 device
    sudo dhclient -v -r
    
    oper@ftasvm:~$ sudo dhclient -v -r
    Internet Systems Consortium DHCP Client 4.4.1
    Copyright 2004-2018 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/
    
    Listening on LPF/veth1dcacbe/b6:bc:e5:4a:7e:1f
    Sending on   LPF/veth1dcacbe/b6:bc:e5:4a:7e:1f
    <..>
    Sending on   Socket/fallback
    oper@ftasvm:~$ 
    sudo nmcli con mod "Wired connection 1" ipv4.addresses <ip address>/<prefix>
    
    #Example - sudo nmcli con mod "Wired connection 1" ipv4.addresses 192.168.0.37/24
    sudo nmcli connection modify "Wired connection 1" ipv4.gateway <GW Address>
    
    #Example - sudo nmcli connection modify "Wired connection 1" ipv4.gateway 192.168.0.1
    sudo nmcli con mod "Wired connection 1" ipv4.method manual
    sudo nmcli device reapply <dev_name>
    
    #Example - sudo nmcli device reapply eth0
    #verify the IP address
    ip a
    
    oper@ftasvm:~$ ip a
    <..>
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether 52:54:00:37:3c:5c brd ff:ff:ff:ff:ff:ff
        inet 192.168.0.37/25 brd 192.168.0.255 scope global noprefixroute eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::70a4:9f2e:658c:4d29/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    <..>
    oper@ftasvm:~$ 
    
    
    #Verify IP method
    oper@ftasvm:~$ sudo nmcli -f ipv4.method con show "Wired connection 1"
    ipv4.method:                            manual
    oper@ftasvm:~$ 
    Host Machine
    sonic@sonic-39:~$ cat /proc/sys/net/bridge/bridge-nf-call-iptables 
    1
    sonic@sonic-39:~$ 
    Host Machine
    echo 0 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
    FTAS VM
    sudo nmcli con mod "<con_name>" +ipv4.addresses <ip address>/<prefix>
    #Example - sudo nmcli con mod "Wired connection 1" +ipv4.addresses 192.168.0.42/24
    
    # Reapply config
    sudo nmcli device reapply <dev_name>
    #Example - sudo nmcli device reapply eth0
    
    # Show IP address to verify
    ip a
    # Restart docker containers so their services can listen on new IP addresses
    oper@ftasvm:~$ docker images
    REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
    ztp_dhcp      v1        599313a03bfb   41 hours ago   83.3MB
    netservices   v1        8a9c98506637   41 hours ago   259MB
    oper@ftasvm:~$
    # NTP configuration file:
    /etc/ntp.conf
    
    # SYSLOG Configuration file
    /etc/syslog-ng/conf.d/syslog_ng.conf
    
    # Log files location:
    /var/log/sonic_logs/<IP address of devices>.log
    
    # TACACS+ configuration
    /etc/tacacs+/tac_plus.conf
    oper@ftasvm:~$ docker ps -a
    CONTAINER ID   IMAGE            COMMAND                  CREATED        STATUS        PORTS     NAMES
    8add12060a57   netservices:v1   "/usr/bin/supervisord"   41 hours ago   Up 41 hours             net_services
    oper@ftasvm:~$ 
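
    After changing or adding an IP address on the VM, the net_services container shown above can be restarted so the NTP, syslog, and TACACS+ services listen on the new address (a minimal sketch; the container name is taken from the docker ps output above):

    docker restart net_services

    # Confirm the container is Up again
    docker ps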
    oper@ftasvm:~$ cd testsuites/
    oper@ftasvm:~/testsuites$ ls -lrth
    total 156K
    -rw-rw-r-- 1 oper oper  405 Jul  4 07:57 verify_yaml.py
    -rw-rw-r-- 1 oper oper  539 Jul  4 07:57 verify_testbed.suite
    -rw-rw-r-- 1 oper oper  808 Jul  4 07:57 mgmt_3dut.suite
    -rw-rw-r-- 1 oper oper 4.7K Jul  4 07:57 mgmt_2dut.suite
    -rw-rw-r-- 1 oper oper 1.8K Jul  4 07:57 mgmt_1dut.suite
    -rw-rw-r-- 1 oper oper  11K Jul  4 07:57 edgecore_9716_202111.suite
    -rw-rw-r-- 1 oper oper  839 Jul  4 07:57 data_3dut.suite
    -rw-rw-r-- 1 oper oper 1.8K Jul  4 07:57 data_1dut.suite
    -rw-rw-r-- 1 oper oper 6.9K Jul  4 07:57 PD.suite
    -rw-r--r-- 1 oper oper  12K Jul  6 16:13 wistron_6512_ecs2.0.0.suite
    -rw-r--r-- 1 oper oper  12K Jul  6 16:13 wistron_3200_ecs2.0.0.suite
    -rw-r--r-- 1 oper oper  12K Jul  6 16:13 nvidia_202205.suite
    -rw-r--r-- 1 oper oper 6.5K Jul  6 16:13 mgmt_complete.suite
    -rw-r--r-- 1 oper oper 7.7K Jul  6 16:13 edgecore_4630_202111.suite
    -rw-r--r-- 1 oper oper 6.4K Jul  6 16:13 data_2dut.suite
    drwxrwxr-x 2 oper oper 4.0K Jul  7 07:27 ebay
    -rw-r--r-- 1 oper oper 5.9K Jul  7 07:27 PI.suite
    -rw-r--r-- 1 oper oper  12K Jul 14 11:02 data_complete.suite
    -rw-r--r-- 1 oper oper 4.5K Jul 16 10:37 data_4dut.suite
    -rw-r--r-- 1 oper oper 8.2K Jul 17 05:58 copy_data_complete.suite
    oper@ftasvm:~/testsuites$ 
    PI.suite
    ---
     "TESTBED_ROOT_FOLDER": "/home/oper/testbeds"
     "TESTSCRIPT_ROOT_FOLDER": "/home/oper"
     "TESTSUITE_ROOT_FOLDER": ""
     "TEST_REPORT_ROOT_FOLDER": "/home/oper/reports"
     "CHECK_COMPATIBILITY": true
     "TEST_CONTACT": "phili@aviznetworks.com"
     "TESTSUITES": {
       # 2 DUT test suites/scripts
        "./feature/taas_acl_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_acl_001": ""}, {"test_acl_002": ""}, {"test_acl_003": ""},{"test_acl_004": ""},{"test_acl_005": ""},{"test_acl_006": ""},
                       {"test_acl_007": ""},{"test_acl_008": ""},{"test_acl_009": ""},{"test_acl_010": ""},{"test_acl_011": ""},{"test_acl_012": ""}]
       },
        "./feature/taas_mtu_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_mtu_001": ""}]
       },
        "./feature/taas_mgmt_Syslog_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_syslog_002": ""},{"test_syslog_004": ""}]
       },
        "./feature/taas_autoneg_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_autoneg_001": ""}]
       },
        "./feature/taas_layer2_Vlan_ixia_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_vlan_014": ""},{"test_vlan_016": ""}]
       },
        "./feature/taas_layer2_LACP_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_lacp_003": ""}, {"test_lacp_005": ""}, {"test_lacp_011": ""},{"test_lacp_012": ""}]
       },
        "./feature/taas_layer2_LLDP_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_lldp_001": ""}, {"test_lldp_002": ""}, {"test_lldp_013": ""}]
       },
         "./feature/taas_layer3_ARP_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_arp_003": ""}, {"test_arp_007": ""}, {"test_arp_011": ""},{"test_arp_012": ""}]
       },
         "./feature/taas_layer3_IP_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_IP_001": ""}, {"test_IP_002": ""}, {"test_IP_005": ""},{"test_IP_006": ""},{"test_IP_011": ""},{"test_IP_014": ""}]
       },
         "./feature/taas_mgmt_Ping_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_ping_001": ""}, {"test_ping_009": ""}, {"test_ping_011": ""}, {"test_ping_013": ""}]
       },
          "./feature/taas_platform_Interface_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_ports_002": ""}, {"test_ports_005": ""}, {"test_ports_006": ""},{"test_ports_008": ""},{"test_ports_009_14": ""}]
       },
        "./scalability/taas_qual_Scale.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_10_secondary_subnet_under_vlan": ""}, {"test_20_secondary_subnet_under_vlan": ""}, {"test_max_secondary_subnet_under_vlan": ""}, {"test_v4_host_routes_scale_2k": ""},
                       {"test_v4_host_routes_scale_4k": ""}, {"test_v4_host_routes_scale_max_supported": ""}, {"test_v6_host_routes_scale_2k": ""}, {"test_v6_host_routes_scale_4k": ""},
                       {"test_v6_host_routes_scale_max_supported": ""}, {"test_v4_prefix_routes_scale_2k": ""}, {"test_v4_prefix_routes_scale_4k": ""}, {"test_v4_prefix_routes_scale_max_supported": ""},
                       {"test_v4_nexthops_scale_512": ""}, {"test_v4_nexthops_scale_1024": ""}, {"test_v4_nexthops_scale_max_supported": ""}, {"test_qual_scale_007": ""}, {"test_qual_scale_011": ""},
                       {"test_v4_acl_scale_128": ""}, {"test_v4_acl_scale_256": ""}, {"test_v4_acl_scale_512": ""}, {"test_v4_acl_scale_max_supported": ""},
                       {"test_qual_v6_acl_scale_128": ""}, {"test_qual_v6_acl_scale_256": ""}, {"test_qual_v6_acl_scale_512": ""},
                       {"test_qual_v6_acl_scale_max_supported": ""}]
       },
        "./feature/taas_mgmt_SSH_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_ssh_001": ""},{"test_ssh_002": ""},{"test_ssh_003": ""}]
       },
       "./feature/taas_layer3_bgp_netops_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "ftas_2duts_topo.py",
         "TESTCASES": [{"test_bgp_netops_002": ""}, {"test_bgp_netops_005_006": ""}, {"test_bgp_netops_008": ""},
                       {"test_bgp_netops_010": ""}]
       },
        "./feature/taas_layer3_BGP_ixia_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_bgp_004": ""},{"test_bgp_005": ""}]
       },
       "./feature/taas_qual_PortCfg_test.py": {
         "SKIP":false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_ports_fec_001": ""},{"test_ports_mtu_002": ""},{"test_ports_counters": ""}]
       },
       "./feature/taas_qual_L3_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_qual_bgp_004": ""},{"test_qual_bgp_007": ""},{"test_qual_bgp_008": ""},
                       {"test_qual_bgp_009": ""},{"test_qual_bgp_010": ""},{"test_qual_bgp_013": ""},{"test_qual_bgp_014": ""},
                       {"test_qual_bgp_015": ""},{"test_qual_bgp_016": ""},{"test_qual_bgp_017": ""},
                       {"test_qual_bgp_019": ""},{"test_qual_vlan1": ""},{"test_qual_vrf": ""}]
       },
        "./feature/taas_qual_Mgmt_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_port_span_001": ""},{"test_port_span_002": ""},{"test_port_span_003": ""},
                       {"test_front_panel_ports_ipv6": ""}]
       },
    
         "./feature/taas_layer2_Vlan_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{ "test_vlan_004": "" },{ "test_vlan_005": "" },{ "test_vlan_007": "" }, {"test_vlan_008": ""},{"test_vlan_011": ""}]
       }, 
    
        "./feature/taas_qual_Security_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "full_mesh_topo.py",
         "TESTCASES": [{"test_qual_ip6_acl_001": ""},{"test_qual_ip6_acl_002": ""},{"test_qual_ip6_acl_003": ""},
                       {"test_qual_ip6_acl_004": ""},{"test_qual_ip6_acl_005": ""},{"test_qual_ip6_acl_006": ""}]
       },
        "./feature/taas_qual_SNMP_test.py": {
         "SKIP": false,
         "COMMON_TESTBED": "ftas_mesh_topo.py",
         "TESTCASES": [{"test_snmp_walk_inf_admin_oper": ""}, {"test_snmp_walk_ip_inf_index": ""}, {"test_snmp_walk_ip_to_mac": ""}]
       }
    }

    data_2dut.suite

    All two DUT testcases applicable on data switches

    162

    data_3dut.suite

    All three DUT testcases applicable on data switches

    18

    data_4dut.suite

    All four DUT testcases applicable on data switches

    115

    data_complete.suite

    All testcases applicable on data switches

    400

    mgmt_1dut.suite

    All one DUT testcases applicable on management switches

    22

    mgmt_2dut.suite

    All two DUT testcases applicable on management switches

    88

    mgmt_3dut.suite

    All three DUT testcases applicable on management switches

    8

    mgmt_complete.suite

    All testcases applicable on management switches

    126

    edgecore_4630_202111.suite

    All testcases applicable on Edgecore 4630 platform for 202111 release

    269

    edgecore_9716_202111.suite

    All testcases applicable on Edgecore 9716 platform for 202111 release

    344

    nvidia_202205.suite

    All testcases applicable on NVIDIA platforms for 202205 release

    235

    wistron_3200_ecs2.2.2.suite

    All testcases applicable on Wistron 3200 platforms for ECS 2.2.2 release

    257

    wistron_6512_ecs2.0.0.suite

    All testcases applicable on Wistron 6512 platforms for ECS 2.0.0 release

    284
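
    For example, one of these predefined suites can be queued and started with the qjob.py workflow shown earlier (a sketch assuming the suite files are under ~/testsuites):

    # Queue the platform-independent suite and start the queue
    ./qjob.py -a add -s testsuites/PI.suite
    ./qjob.py -S running

    # Verify the queue contents and status
    ./qjob.py -a show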

  • Description

  • Serial Number

  • Host Name

  • Attachments (Tech Support Dump, Screenshots, Logs)

  • https://support.aviznetworks.com/hc/en-us/requests/new

  • https://support.aviznetworks.com

  • support portal

    Verify that IP address can be configured over routed port

    test_IP_002

    No

    Verify SVI and routed ports can be admin down or up

    test_IP_005

    No

    Verify connected route gets created for the SVI subnet in the ip route table.

    test_IP_006

    No

    Verify IP interface is operational for SVI with LACP portchannel members

    test_IP_011

    No

    Verify ip address can be configured over routed PCH.

    test_IP_014

    No

    Verify BGP AS configuration works properly

    test_bgp_001

    No

    Verify BGP peering happens with nodes in same AS and iBGP neighbor table gets updated properly

    test_bgp_002

    No

    Verify BGP peering happens with nodes in different AS and eBGP neighbor table gets updated properly

    test_bgp_003

    No

    Verify BGP route learning using eBGP with routes injected from IXIA

    test_bgp_004

    No

    Verify BGP route removal using eBGP with routes withdrawn from IXIA

    test_bgp_005

    No

    Verify BGP route relearn over different neighbor when interface is shutdown

    test_bgp_006

    No

    Verify unnumbered functionality with iBGP

    test_qual_bgp_001

    Yes

    Verify unnumbered functionality with eBGP

    test_qual_bgp_002

    Yes

    Verify BGP route redistribution in DUT

    test_qual_bgp_003

    No

    Verify BGP6 functionality

    test_qual_bgp_004

    No

    Verify BGPV6 Functionality in DUT

    test_qual_bgp_ebgp_004

    No

    Verify BGP AS-PATH prepend functionality

    test_qual_bgp_007

    No

    Verify BGP route map match prefix list, access-list deny and permit functionality

    test_qual_bgp_008

    No

    Verify BGP route map match AS-PATH permit and deny functionality

    test_qual_bgp_009

    No

    Verify BGP route map match community list permit and deny functionality

    test_qual_bgp_010

    No

    Verify BGP max MED functionality

    test_qual_bgp_011

    No

    Verify BGP maximum prefix limit per peer functionality

    test_qual_bgp_013

    No

    Verify BGP communities functionality

    test_qual_bgp_014

    No

    Verify BGP regexp match single and multi AS permit & deny action using AS-path access lists

    test_qual_bgp_015

    No

    Verify BGP regexp match any AS permit and deny action using AS-path access lists

    test_qual_bgp_016

    No

    Verify BGP regexp match range of BGP communities functionality

    test_qual_bgp_017

    No

    Verify BGP peering working with BGP listen range

    test_qual_bgp_019

    No

    Verify VRF functionality

    test_qual_vrf

    Yes

    Verify VLAN 1 support for Host connectivity

    test_qual_vlan1

    Yes

    Verify IPV6 neighbor discovery

    test_qual_ipv6_neighbor

    No

    Verify L3 DROP ACL functionality with matching source IP and source port

    test_acl_001

    No

    Verify L3 DROP ACL functionality with matching destination IP and destination port

    test_acl_002

    No

    Verify L3 DROP ACL functionality with matching SIP, DIP, SPORT, DPORT

    test_acl_003

    No

    Verify L3 DROP ACL with ACL rule having subnet mask

    test_acl_004

    No

    Verify L3 DROP ACL - Test acl rule with protocol = TCP

    test_acl_005

    No

    Verify L3 DROP ACL - Test acl rule with protocol = UDP

    test_acl_006

    No

    Verify L3 PERMIT ACL functionality with matching source IP and source port

    test_acl_007

    No

    Verify L3 PERMIT ACL functionality with matching destination IP and destination port

    test_acl_008

    No

    Verify L3 PERMIT ACL functionality with matching SIP, DIP, SPORT, DPORT

    test_acl_009

    No

    Verify L3 PERMIT ACL with ACL rule having subnet mask

    test_acl_010

    No

    Verify L3 PERMIT ACL - Test acl rule with protocol = TCP

    test_acl_011

    No

    Verify L3 PERMIT ACL - Test acl rule with protocol = UDP

    test_acl_012

    No

    Verify Drop ACL (IPv6) for matching source IPv6/L4 address and source IPv6/L4 port

    test_qual_ip6_acl_001

    No

    Verify Drop ACL (IPv6) for matching destination IPv6/L4 port and source IPv6/L4 port

    test_qual_ip6_acl_002

    No

    Verify drop ACL - matching IPv6 params subnet, dst, src ports combined

    test_qual_ip6_acl_003

    No

    Verify PERMIT ACL (IPv6) for matching source IPv6/L4 address and source IPv6/L4 port

    test_qual_ip6_acl_004

    No

    Verify Permit ACL (IPv6) for matching destination IPv6/L4 port and source IPv6/L4 port

    test_qual_ip6_acl_005

    No

    Verify permit ACL - matching IPv6 params subnet, dst, src ports combined

    test_qual_ip6_acl_006

    No

    Verify whether static ARP entry can be configured

    test_arp_003

    No

    Verify that the DUT will respond to an ARP Request for the SVI interface

    test_arp_007

    No

    Verify whether clear ARP entries works properly

    test_arp_011

    No

    Verify whether ARP entries are flushed after some time

    test_arp_012

    Yes

    Verify eBGP multi-AS config, adjacency, route convergence and data path using router interface

    test_bgp_netops_001

    No

    eBGP Multi-AS Route Convergence and data path using loopback

    test_bgp_netops_002

    No

    BGP Node Drain - Add route-map to remove and restore SPINE node 1 and 2 using Community list

    test_bgp_netops_003_004

    No

    Test Link drain - Apply Route-Map permit

    test_bgp_netops_005_006

    No

    Node drain with IPv6 Traffic: Test Node Drain (Spine 1 and Spine 2) for IPv6 Traffic/Prefixes

    test_bgp_netops_007

    No

    Link drain with IPv6 Traffic: Test Link Drain for IPv6 Traffic/Prefixes

    test_bgp_netops_008

    No

    Node drain using prefix-lists

    test_bgp_netops_009

    No

    Test Link Drain with Prefix Lists

    test_bgp_netops_010

    No

    Node drain/restore using AS path prepend list

    test_bgp_netops_011_012

    No

    Link drain using AS path prepend

    test_bgp_netops_014

    No

    MC-LAG L3 validation using port-channel configuration

    test_mclag_layer3_steady_state

    Yes

    MC-LAG L3 validation, Bring down the member link of SPINE01

    test_mclag_layer3_member_link_down

    Yes

    MC-LAG L3 keepalive link down

    test_mclag_layer3_keepalive_link_down

    Yes

    MCLAG-L3 Active Reboot

    test_mclag_layer3_active_reboot

    Yes

    MCLAG-L3 Standby Reboot

    test_mclag_layer3_standby_reboot

    Yes

    Apply QoS with DSCP-0 to TC-0 mapping

    test_dscp_0_tc_0

    Yes

    Apply QoS with DSCP-8 to TC-1 mapping

    test_dscp_8_tc_1

    Yes

    Apply QoS with DSCP-16 to TC-2 mapping

    test_dscp_16_tc_2

    Yes

    Apply QoS with DSCP-24 to TC-3 mapping

    test_dscp_24_tc_3

    Yes

    Apply QoS with DSCP-32 to TC-4 mapping

    test_dscp_32_tc_4

    Yes

    Apply QoS with DSCP-40 to TC-5 mapping

    test_dscp_40_tc_5

    Yes

    Apply QoS with DSCP-48 to TC-6 mapping

    test_dscp_48_tc_6

    Yes

    Apply QoS with DSCP-56 to TC-7 mapping

    test_dscp_56_tc_7

    Yes

    Validate whether the DUT applies QoS using DOT1P-0 to TC-0 mapping

    test_dot1p0_tc0

    Yes

    Validate whether the DUT applies QoS using DOT1P-1 to TC-1 mapping

    test_dot1p1_tc1

    Yes

    Validate whether the DUT applies QoS using DOT1P-2 to TC-2 mapping

    test_dot1p2_tc2

    Yes

    Validate whether the DUT applies QoS using DOT1P-3 to TC-3 mapping

    test_dot1p3_tc3

    Yes

    Validate whether the DUT applies QoS using DOT1P-4 to TC-4 mapping

    test_dot1p4_tc4

    Yes

    Validate whether the DUT applies QoS using DOT1P-5 to TC-5 mapping

    test_dot1p5_tc5

    Yes

    Validate whether the DUT applies QoS using DOT1P-6 to TC-6 mapping

    test_dot1p6_tc6

    Yes

    Validate whether the DUT applies QoS using DOT1P-7 to TC-7 mapping

    test_dot1p7_tc7

    Yes

    Verify basic EVPN VxLAN functionality

    test_evpn_vxlan_feature

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 16K routes

    test_bgp_unnumbered_evpn_vxlan_inter_vlan_scale_16k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 32K routes

    test_bgp_unnumbered_evpn_vxlan_inter_vlan_scale_32k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 64K routes

    test_bgp_unnumbered_evpn_vxlan_inter_vlan_scale_64k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 128K routes

    test_bgp_unnumbered_evpn_vxlan_inter_vlan_scale_128k

    Yes

    Configure BGP L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 16K routes

    test_bgp_numbered_evpn_vxlan_inter_vlan_scale_16k

    Yes

    Configure BGP L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 32K routes

    test_bgp_numbered_evpn_vxlan_inter_vlan_scale_32k

    Yes

    Configure BGP L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 64K routes

    test_bgp_numbered_evpn_vxlan_inter_vlan_scale_64k

    Yes

    Configure BGP L3 EVPN-VXLAN and send L3 inter-VLAN traffic for 128K routes

    test_bgp_numbered_evpn_vxlan_inter_vlan_scale_128k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN

    test_bgp_evpn_vxlan_l3_ecmp_with_node_drain

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN

    test_bgp_evpn_vxlan_l3_ecmp_with_link_drain

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN

    test_bgp_evpn_vxlan_with_sag

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 intra-VLAN traffic for 16K routes

    test_bgp_unnumbered_evpn_vxlan_intra_vlan_scale_16k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 intra-VLAN traffic for 32K routes

    test_bgp_unnumbered_evpn_vxlan_intra_vlan_scale_32k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 intra-VLAN traffic for 64K routes

    test_bgp_unnumbered_evpn_vxlan_intra_vlan_scale_64k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 intra-VLAN traffic for 128K routes

    test_bgp_unnumbered_evpn_vxlan_intra_vlan_scale_128k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN

    test_evpn_vxlan_l2_traffic

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 16K routes

    test_bgp_evpn_vxlan_l3_intra_vlan_scale_16k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 32K routes

    test_bgp_evpn_vxlan_l3_intra_vlan_scale_32k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 64K routes

    test_bgp_evpn_vxlan_l3_intra_vlan_scale_64k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with intra-VLAN for 128K routes

    test_bgp_evpn_vxlan_l3_intra_vlan_scale_128k

    Yes

    Configure EVPN-VXLAN and send bidirectional L3 traffic with inter-VLAN

    test_evpn_vxlan_l3_inter_vlan

    Yes

    Configure EVPN-VXLAN and send bidirectional traffic with inter-VLAN for 16K routes

    test_bgp_evpn_vxlan_l2_inter_vlan_scale_16k

    Yes

    Configure EVPN-VXLAN and send bidirectional traffic with inter-VLAN for 32K routes

    test_bgp_evpn_vxlan_l2_inter_vlan_scale_32k

    Yes

    Configure EVPN-VXLAN and send bidirectional traffic with inter-VLAN for 64K routes

    test_bgp_evpn_vxlan_l2_inter_vlan_scale_64k

    Yes

    Configure EVPN-VXLAN and send bidirectional traffic with inter-VLAN for 128K routes

    test_bgp_evpn_vxlan_l2_inter_vlan_scale_128k

    Yes

    Configure BGP unnumbered L3 EVPN-VXLAN and send L3 inter-VLAN traffic with reboot

    test_bgp_l3_evpn_vxlan_route_reboot

    Yes

    QoS DWRR scheduling algorithm with the default configuration

    test_qos_dwrr_happy_baseline

    Yes

    Calculate the congestion buffer size based on the frame size

    test_single_queue_congestion_buffer_frame_size

    Yes

    Buffer test with multiple congested queues

    test_buffer_test_multiple_congested_queues

    Yes

    DHCP relay static MAC functionality

    test_dhcp_relay_static_mac_functionality

    No

    DHCP relay static mac functionality with SVI interface

    test_dhcp_relay_svi_functionality

    No

    DHCP relay functionality after a warm reboot of the relay agent

    test_dhcp_relay_warmreboot

    No

    Restart dhcp relay docker

    test_dhcp_relay_restart

    No

    Verify DHCP relay with the client and server using trunk VLAN configuration

    test_dhcp_relay_vlan_tagged

    No

    DHCP relay server not available

    test_dhcp_relay_server_not_available

    No

    DHCP relay with multiple DHCP servers

    test_run_dhcp_relay_multi_server_test

    No

    Test VRRP steady state configuration

    test_vrrp_steady_state_with_reboot

    Yes

    Test VRRP interface tracking functionality

    test_vrrp_interface_tracking

    Yes

    Test VRRP priority change scenario

    test_vrrp_change_priotity

    Yes

    Test VRRP configuration with non-preemptive mode enabled.

    test_vrrp_non_preemptive

    Yes

    Test VRRP behavior during link down scenario

    test_vrrp_link_down

    Yes

    Test configuration and behavior of multiple VRRP instances on the same VLAN.

    test_vrrp_multiple_instance

    Yes

    Test VRRP functionality in a VRF-aware environment.

    test_vrrp_with_vrf_with_reboot

    Yes

    Inject and withdraw 80000 type-5 routes

    test_inject_withdrawal_type5_routes

    Yes

    Inject 80000 type-5 routes and restart bgp container on the DUT

    test_vxlan_vrf_bgp_docker_restart

    Yes

    Restart swss docker on remote leaf to simulate VTEP failure and verify convergence

    test_vxlan_vrf_swss_docker_restart

    Yes

    Inject 80000 type-5 routes and reboot the peer

    test_vxlan_type5_routes_peer_reboot

    Yes

    VM mobility

    test_vmotion

    Yes

    IPv6 host routes scale to 64k route (iBGP)

    test_v6_host_routes_scale_ibgp_64k

    Yes

    Interface description CLI

    test_interface_description

    No

    IPv6 link-local address and PortChannel interface

    test_ipv6_portchannel_linklocal

    Yes

    IPv6 link-local address and VLAN interface

    test_ipv6_vlan_linklocal

    IPv6 route injection and withdrawal with BGP unnumbered

    test_bgpu_v6_inject_withdraw_255

    Verify Secondary subnet under a VLAN up to MAX subnets

    test_max_secondary_subnet_under_vlan

    Yes

    Verify port configuration across reboot

    test_ports_005

    No

    Verify Port Information for status, description and transceiver information

    test_ports_006

    Yes

    Verify Port Counters for framesize 128

    test_ports_009_14

    Yes

    Verify Port transceiver information

    test_ports_020

    Yes

    Verify FEC Configuration for RS and None

    test_ports_fec_001

    Yes

    Dynamic port breakout with supported breakout modes between leaf1 and leaf2

    test_port_breakout_001

    Yes

    Verify Route Scaling time on syncd restart without traffic for 2K routes

    test_chaos_route_scaling_time_syncd_2k

    Yes

    3

    Verify Route Scaling time on syncd restart without traffic for 8K routes

    test_chaos_route_scaling_time_syncd_8k

    Yes

    4

    Verify Route Scaling time on syncd restart without traffic for 16K routes

    test_chaos_route_scaling_time_syncd_16k

    Yes

    5

    Verify Route Scaling time on syncd restart without traffic for 32K routes

    test_chaos_route_scaling_time_syncd_32k

    Yes

    6

    Verify Route Scaling time on syncd restart without traffic for 64K routes

    test_chaos_route_scaling_time_syncd_64k

    Yes

    7

    Verify Route Scaling time on FRR restart without traffic for 2K routes

    test_chaos_route_scaling_time_frr_2k

    Yes

    8

    Verify Route Scaling time on FRR restart without traffic for 8K routes

    test_chaos_route_scaling_time_frr_8k

    Yes

    9

    Verify Route Scaling time on FRR restart without traffic for 16K routes

    test_chaos_route_scaling_time_frr_16k

    Yes

    10

    Verify Route Scaling time on FRR restart without traffic for 32K routes

    test_chaos_route_scaling_time_frr_32k

    Yes

    11

    Verify Route Scaling time on FRR restart without traffic for 64K routes

    test_chaos_route_scaling_time_frr_64k

    Yes

    12

    Verify Route Scaling time on orchagent restart without traffic for 2k routes

    test_chaos_route_scaling_time_orchagent_2k

    Yes

    13

    Verify Route Scaling time on orchagent restart without traffic for 8k routes

    test_chaos_route_scaling_time_orchagent_8k

    Yes

    14

    Verify Route Scaling time on orchagent restart without traffic for 16k routes

    test_chaos_route_scaling_time_orchagent_16k

    Yes

    15

    Verify Route Scaling time on orchagent restart without traffic for 32k routes

    test_chaos_route_scaling_time_orchagent_32k

    Yes

    16

    Verify Route Scaling time on orchagent restart without traffic for 64k routes

    test_chaos_route_scaling_time_orchagent_64k

    Yes

    17

    Verify Route Scaling time on syncd restart with traffic for 2K routes

    test_chaos_route_scaling_time_syncd_traffic_2k

    Yes

    18

    Verify Route Scaling time on syncd restart with traffic for 8K routes

    test_chaos_route_scaling_time_syncd_traffic_8k

    Yes

    19

    Verify Route Scaling time on syncd restart with traffic for 16k routes

    test_chaos_route_scaling_time_syncd_traffic_16k

    Yes

    20

    Verify Route Scaling time on syncd restart with traffic for 32k routes

    test_chaos_route_scaling_time_syncd_traffic_32k

    Yes

    21

    Verify Route Scaling time on syncd restart with traffic for 64k routes

    test_chaos_route_scaling_time_syncd_traffic_64k

    Yes

    22

    Verify Route Scaling time on FRR restart with traffic for 2k routes

    test_chaos_route_scaling_time_frr_traffic_2k

    Yes

    23

    Verify Route Scaling time on FRR restart with traffic for 8k routes

    test_chaos_route_scaling_time_frr_traffic_8k

    Yes

    24

    Verify Route Scaling time on FRR restart with traffic for 16k routes

    test_chaos_route_scaling_time_frr_traffic_16k

    Yes

    25

    Verify Route Scaling time on FRR restart with traffic for 32k routes

    test_chaos_route_scaling_time_frr_traffic_32

    Yes

    26

    Verify Route Scaling time on FRR restart with traffic for 64k routes

    test_chaos_route_scaling_time_frr_traffic_64k

    Yes

    27

    Verify Route Scaling time on orchagent restart with traffic with 2k routes

    test_chaos_route_scaling_time_orchagent_traffic_2k

    Yes

    28

    Verify Route Scaling time on orchagent restart with traffic with 8k routes

    test_chaos_route_scaling_time_orchagent_traffic_8k

    Yes

    29

    Verify Route Scaling time on orchagent restart with traffic with 16k routes

    test_chaos_route_scaling_time_orchagent_traffic_16k

    Yes

    30

    Verify Route Scaling time on orchagent restart with traffic with 32k routes

    test_chaos_route_scaling_time_orchagent_traffic_32k

    Yes

    31

    Verify Route Scaling time on orchagent restart with traffic with 64k routes

    test_chaos_route_scaling_time_orchagent_traffic_64k

    Yes

    32

    Verify Spine failure and re-insertion impact with 2K routes.

    test_deploy_spine_failure_insertion_impact_2k

    Yes

    33

    Verify Spine failure and re-insertion impact with 8K routes.

    test_deploy_spine_failure_insertion_impact_8k

    Yes

    34

    Verify Spine failure and re-insertion impact with 16K routes.

    test_deploy_spine_failure_insertion_impact_16k

    Yes

    35

    Verify Spine failure and re-insertion impact with 32K routes.

    test_deploy_spine_failure_insertion_impact_32k

    Yes

    36

    Verify Spine failure and re-insertion impact with 64K routes.

    test_deploy_spine_failure_insertion_impact_64k

    Yes

    37

    Verify Link failure: LEAF-SPINE with 2K routes

    test_deploy_spine_failure_link_impact_2k

    Yes

    38

    Verify Link failure: LEAF-SPINE with 8K routes

    test_deploy_spine_failure_link_impact_8k

    Yes

    39

    Verify Link failure: LEAF-SPINE with 16K routes

    test_deploy_spine_failure_link_impact_16k

    Yes

    40

    Verify Link failure: LEAF-SPINE with 32K Routes

    test_deploy_spine_failure_link_impact_32k

    Yes

    41

    Verify Link failure: LEAF-SPINE with 64K routes

    test_deploy_spine_failure_link_impact_64k

    Yes

    42

    Verify stability with continuous DUT reboot

    test_chaos_continuous_reboot

    Yes

    43

    Verify Continuous Route push and withdrawal with traffic for 2K routes

    test_deploy_longevity_2k

    Yes

    44

    Verify Continuous Route push and withdrawal with traffic for 8K routes

    test_deploy_longevity_8k

    Yes

    45

    Verify Continuous Route push and withdrawal with traffic for 16K routes

    test_deploy_longevity_16k

    Yes

    46

    Verify Continuous Route push and withdrawal with traffic for 32K routes

    test_deploy_longevity_32k

    Yes

    47

    Verify Continuous Route push and withdrawal with traffic for 64K routes

    test_deploy_longevity_64k

    Yes

    48

    Verify Warm Reboot - Device configuration impact with 2K routes

    test_deploy_spine_warmreboot_impact_2k

    Yes

    49

    Verify Warm Reboot - Device configuration impact with 8K routes

    test_deploy_spine_warmreboot_impact_8k

    Yes

    50

    Verify Warm Reboot - Device configuration impact with 16K routes

    test_deploy_spine_warmreboot_impact_16k

    Yes

    51

    Verify Warm Reboot - Device configuration impact with 32K routes

    test_deploy_spine_warmreboot_impact_32k

    Yes

    52

    Verify Warm Reboot - Device configuration impact with 64K routes

    test_deploy_spine_warmreboot_impact_64k

    Yes

    53

    Verify stability under high kernel CPU utilization and observe its impact on the docker containers

    test_chaos_high_kernel_cpu_utilization

    Yes

    54

    Verify Routed PCH with 2k routes

    test_deploy_impact_lacp_unconfig_2k

    Yes

    55

    Verify Routed PCH with 8k routes

    test_deploy_impact_lacp_unconfig_8k

    Yes

    56

    Verify Routed PCH with 16k routes

    test_deploy_impact_lacp_unconfig_16k

    Yes

    57

    Verify Routed PCH with 32k routes

    test_deploy_impact_lacp_unconfig_32k

    Yes

    58

    Verify Routed PCH with 64k routes

    test_deploy_impact_lacp_unconfig_64k

    Yes

    59

    Fast reboot on spine with traffic and 2k host routes

    test_deploy_spine_fastreboot_impact_2k

    Yes

    60

    Fast reboot on leaf1 with traffic and 2k host routes

    test_deploy_leaf1_fastreboot_impact_2k

    Yes

    61

    Fast reboot on leaf2 with traffic and 2k host routes

    test_deploy_leaf2_fastreboot_impact_2k

    Yes

    62

    Fast reboot on spine with traffic and 8k host routes

    test_deploy_spine_fastreboot_impact_8k

    Yes

    63

    Fast reboot on leaf1 with traffic and 8k host routes

    test_deploy_leaf1_fastreboot_impact_8k

    Yes

    64

    Fast reboot on leaf2 with traffic and 8k host routes

    test_deploy_leaf2_fastreboot_impact_8k

    Yes
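
    The chaos route-scaling test names listed above follow a regular naming pattern: the component being restarted (syncd, FRR, or orchagent), an optional "_traffic" infix, and the route scale. The following is a minimal sketch, assuming the naming shown in the table; the `chaos_route_scaling_tests` helper is hypothetical (not part of FTAS) and is shown only as one way to build a selection list of these test names, for example to feed into your test runner.

```python
# Hypothetical helper (not part of FTAS): generate the chaos route-scaling
# test names listed above from their naming pattern.
SCALES = ["2k", "8k", "16k", "32k", "64k"]   # route scales covered in the table
COMPONENTS = ["syncd", "frr", "orchagent"]   # component restarted by the test

def chaos_route_scaling_tests(with_traffic: bool = False) -> list:
    """Return names such as test_chaos_route_scaling_time_syncd_2k."""
    infix = "_traffic" if with_traffic else ""
    return [
        "test_chaos_route_scaling_time_{}{}_{}".format(component, infix, scale)
        for component in COMPONENTS
        for scale in SCALES
    ]

if __name__ == "__main__":
    # Print the "with traffic" variants, e.g. to paste into a runner's test list.
    for name in chaos_route_scaling_tests(with_traffic=True):
        print(name)
```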

    MCLAG 4DUT