Windows Server 2022 (VM) cannot reach 10Gbps throughput in iperf test

HCI_JR89 Lv1Posted 27 Feb 2024 12:37

Dear all,
I am using Sangfor HCI in a cluster environment with 3 hosts.
Unfortunately, none of the Windows Server VMs can reach 10Gbps throughput between each other. I suspect something is wrong with the network card drivers/adapters. Any help is appreciated. Thanks

Farina Ahmed has solved this question and earned 20 coins.



This may be related to network card drivers or adapter settings. First, ensure that the network cards on both the hosts and the VMs support 10Gbps speeds and are properly configured to use them. Check for driver updates or compatibility issues that might be impacting performance. Also verify that the network configuration within the HCI cluster is optimized for high-speed communication and that there are no bottlenecks or limitations in the network infrastructure.
CLELUQMAN Lv3Posted 04 Mar 2024 14:51
  
how much bandwidth did you get?
HCI_JR89 Lv1Posted 04 Mar 2024 23:45
  
4Gbps - 5Gbps. The network adapter is supposed to be able to reach 10Gbps.
mdamores Lv3Posted 05 Mar 2024 11:48
  
Hi,

You might be right about your hunch, but try some of the troubleshooting steps below:
1. Ensure the network adapters on the Windows VMs are configured for 10Gbps, and confirm that the network cards installed on the VMs are compatible with 10Gbps.
2. Use tools like iPerf3 to measure actual network throughput and check whether the issue is limited to specific VMs or affects the whole cluster.
3. Download and install the latest network card driver updates for your Windows VMs; also consider updating the firmware of the network adapters on the Sangfor HCI nodes themselves.
4. If all else fails, reach out to Sangfor support for immediate assistance and resolution.
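To make the iPerf3 runs in step 2 easy to compare across VM pairs, you can have iperf3 emit machine-readable results with `--json` and pull out the average rate. A minimal sketch, assuming iperf3's standard JSON layout (the `end.sum_received.bits_per_second` field); the numbers in the sample are made up for illustration:

```python
import json

# Shortened sample of what `iperf3 -c <server> --json` prints; only the
# field we read is shown, with made-up numbers.
sample = """
{
  "end": {
    "sum_received": {"bytes": 5500000000, "seconds": 10.0,
                     "bits_per_second": 4.4e9}
  }
}
"""

def gbps_from_iperf3_json(text: str) -> float:
    """Average received throughput in Gbps from iperf3 --json output."""
    result = json.loads(text)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

print(round(gbps_from_iperf3_json(sample), 2))  # 4.4
```

Running the same client command against each VM and comparing the parsed numbers quickly shows whether the 4-5Gbps ceiling is cluster-wide or tied to specific hosts.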
Newbie517762 Lv5Posted 05 Mar 2024 14:25
  
Hi,

Have you enabled "High Performance Mode - jumbo frame" in the Overlay Interface configuration? Please find below for your reference:
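Some rough arithmetic on why jumbo frames matter: a larger MTU raises payload efficiency only slightly, but it cuts the packet rate (and thus per-packet CPU/interrupt work, often the real bottleneck at 10Gbps) by roughly 6x. A back-of-the-envelope sketch, assuming the usual 14B Ethernet + 20B IP + 20B TCP headers and ignoring VLAN tags, preamble/IFG and TCP options:

```python
def payload_efficiency(mtu: int) -> float:
    """Fraction of each Ethernet frame that is TCP payload."""
    payload = mtu - 40   # MTU covers the IP (20B) + TCP (20B) headers
    frame = mtu + 14     # add the Ethernet header
    return payload / frame

def packets_per_second(mtu: int, gbps: float = 10.0) -> float:
    """Frames per second needed to fill the link at a given rate."""
    return gbps * 1e9 / ((mtu + 14) * 8)

for mtu in (1500, 9000):
    print(mtu, round(payload_efficiency(mtu) * 100, 1), "%",
          round(packets_per_second(mtu)), "pps")
```

At MTU 1500 a full 10Gbps stream needs roughly 825k packets/s; at MTU 9000 it needs about 139k, which is why enabling jumbo frames end to end (VM, vSwitch, physical switch) often lifts a 4-5Gbps ceiling.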
jerome_itable Lv2Posted 05 Mar 2024 16:32
  
Several factors beyond network card drivers/adapters can contribute to not achieving 10 Gbps bandwidth throughput between Windows Server VMs on your Sangfor HCI with 3 hosts. Here are some potential causes and troubleshooting steps to investigate:

1. Verify Hardware Configuration:

    Network interface card (NIC) type: Ensure all VMs and physical hosts involved in the communication have 10 Gbps capable NICs. Check the specifications of your Sangfor HCI model and the installed NICs.
    Cabling: Use Cat 6A or higher rated cables to support 10 Gbps speeds. Faulty or incompatible cables can bottleneck performance.
    Switch configuration: Verify that the switch ports connecting the hosts and VMs are configured for 10 Gbps full duplex mode. Consult your switch's documentation for specific configuration steps.

2. Explore Software Configuration:

    Virtual Switch (vSwitch) settings: Within the Sangfor HCI management interface, check the vSwitch settings for:
        MTU (Maximum Transmission Unit): Ensure it's set to 9000 bytes for optimal 10 Gbps performance.
        Teaming policy: If NIC teaming is enabled, verify the teaming mode is appropriate for your scenario (e.g., "balance-rr" for round-robin load balancing).
    Windows Server NIC driver updates: Ensure up-to-date network drivers are installed on all Windows Server VMs. Outdated drivers might not support full 10 Gbps functionality.

3. Identify Bottlenecks and Measure Performance:

    Storage performance: While unlikely the main culprit, consider if slow storage access on the Sangfor HCI is causing a bottleneck. Run storage benchmarks to assess performance.
    CPU and RAM limitations: Monitor CPU and RAM utilization on both VMs and hosts during attempted 10 Gbps transfers. High resource usage can impact network performance.
    Benchmarking tools: Use network benchmarking tools like iperf3 to measure actual achievable bandwidth between VMs. This can help isolate the issue to specific VMs or network segments.

4. Additional Considerations:

    Virtualization overhead: While minimal, virtualization can introduce some overhead and might not reach the exact 10 Gbps wire speed.
    Sangfor support: If you've exhausted the above and still encounter issues, consult Sangfor support. They can provide specific guidance and troubleshooting steps based on your Sangfor HCI model and configuration.
Zonger Lv4Posted 05 Mar 2024 20:29
  
Please follow the steps below to troubleshoot the 10Gbps bandwidth throughput issue between Windows Server VMs in your Sangfor HCI environment.

1. Verify network configuration:
        * Check if the network adapters in your Windows Server VMs are configured for 10Gbps speed and full-duplex mode.
        * Ensure that the network adapters are compatible with the 10Gbps network speed.
2. Update network card drivers/adapters:
        * Check for any available updates for the network card drivers/adapters on the Windows Server VMs.
        * Install the latest drivers/adapters to ensure optimal network performance.
3. Test network connectivity:
        * Use tools like Ping, Ping Plotter, or Wireshark to test the network connectivity between the VMs. This will help identify if there are any connectivity issues between the VMs.
4. Check network congestion:
        * Monitor the network traffic and resource utilization on the host servers and network switches. High congestion might limit the throughput between the VMs.
5. Verify Sangfor HCI configuration:
        * Check the network configuration and settings on the Sangfor HCI cluster, including MTU size, jumbo frames, and network QoS settings.
        * Ensure that the network settings on the HCI cluster are optimized for 10Gbps throughput.
6. Test with a different OS:
        * If possible, test the throughput between VMs running a different operating system to rule out any OS-specific issues.

Hopefully these steps will help you identify and resolve the 10Gbps bandwidth throughput issue between your Windows Server VMs in the Sangfor HCI environment.
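One more number worth checking alongside the congestion and MTU points above: a single TCP stream can only fill a link if its window covers the bandwidth-delay product. If the effective window is smaller, one stream will never reach line rate, which is one reason `iperf3 -P <n>` (parallel streams) often gets closer to 10Gbps. A quick sketch of the arithmetic, assuming typical sub-millisecond intra-cluster RTTs:

```python
def window_bytes(gbps: float, rtt_ms: float) -> int:
    """TCP window (bytes) needed to keep a link of `gbps` full at `rtt_ms`."""
    return int(gbps * 1e9 / 8 * rtt_ms / 1e3)

for rtt in (0.1, 0.5, 1.0):  # plausible intra-cluster RTTs in ms
    kib = window_bytes(10, rtt) / 1024
    print(f"RTT {rtt} ms -> {kib:.0f} KiB window needed for 10 Gbps")
```

If the advertised window (see `netstat` or a Wireshark capture) is well below these figures, enabling window scaling or adding parallel streams is the usual fix.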
Enrico Vanzetto Lv3Posted 05 Mar 2024 23:35
  
Hi, have you double-checked that no other VM is using the network while you are performing the speed test? Try using iperf in client/server mode and do a simple test between two VMs on the same HCI host.
After that, ensure that no other process (third-party backup or another program) is consuming network bandwidth, and redo the same test on two virtual machines on different HCI nodes.
pmateus Lv2Posted 06 Mar 2024 00:39
  
Hi,

If you reach 4 or 5 Gbps, I suspect the issue is related to jumbo frames not being enabled.
