TCP Window Size

Asked by loneknight on Server Fault, January 3, 2021

I'm hoping someone can clarify a question I have about TCP window size, and whether it could be contributing to the slow throughput I'm getting with iPerf.

I took a Wireshark capture on the client while running a standard iPerf test from the client (Windows Server 2016) to the server (a backup appliance, possibly Linux). The network is 10 Gbit/s, but I only seem to get 1-2 Gbit/s of throughput, and I'm trying to work out what the cause could be.

During the 3-way handshake, the client advertised a 64 KB window with a scaling factor of 4, and the server advertised 14 KB with a scaling factor of 128.
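To check that I'm reading the scaling right, here is a minimal sketch of the window-scale arithmetic (RFC 7323) using those values. The exact raw byte counts are my assumptions worked back from the rounded figures; Wireshark reports the multiplier (2^shift), not the shift count itself:

# Window scaling sketch: the 16-bit window field is multiplied by
# 2^shift from the SYN's window-scale option. Raw values below are
# assumed from the rounded figures in my capture.

def effective_window(raw_window: int, multiplier: int) -> int:
    # Wireshark's "scaling factor" is the multiplier (2^shift)
    return raw_window * multiplier

print(effective_window(65_535, 4))    # client ceiling: 262,140 (~256 KB, shift 2)
print(effective_window(24_547, 128))  # server: 3,142,016 (shift 7)

So with a shift of 2 the client can never advertise more than about 256 KB, while the server's shift of 7 lets its window grow well past 3 MB, which matches the figures below.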

I noticed that as the transfer progressed, the window size increased to 212,992 bytes for the client, while the server's reached 3,142,016 bytes. The maximum bytes in flight was 242,032.

I then repeated the same test directly between two Windows Server 2016 VMs, both connected to a single switch via 10 Gbit/s interfaces. The results were similar: the bytes in flight never seemed to exceed approximately 242 KB.

I always thought the window size advertised during the 3-way handshake was the maximum limit, but that doesn't seem to be the case: the window size progressively increased (driven by some congestion/auto-tuning algorithm?) until hitting some limit, as evidenced by the bytes-in-flight maximum of 242 KB. So I was never able to have more than 242 KB of un-ACKed bytes, even though my receive window (3 MB) had plenty of room.
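As a sanity check, that cap alone roughly predicts the throughput I'm seeing: bytes in flight divided by round-trip time gives the ceiling. The RTT values here are assumptions, I haven't measured them:

# Throughput ceiling implied by ~242 KB of un-ACKed data:
# inflight_bytes * 8 / RTT. The RTTs are assumed, not measured.

def ceiling_gbps(inflight_bytes: int, rtt_seconds: float) -> float:
    return inflight_bytes * 8 / rtt_seconds / 1e9

for rtt_ms in (0.2, 0.5, 1.0):
    print(f"RTT {rtt_ms} ms -> {ceiling_gbps(242_032, rtt_ms / 1000):.2f} Gbit/s")

# 0.2 ms -> ~9.7 Gbit/s, 0.5 ms -> ~3.9 Gbit/s, 1.0 ms -> ~1.9 Gbit/s

An RTT of around 1 ms would put the ceiling right in the 1-2 Gbit/s range I observed.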

I figure something in the path is causing the congestion window to be limited, and I guess this is what we are trying to work out. I will repeat the same iPerf test and capture between two Windows Server VMs on the same switch, just to see whether it can push any higher or shows similar symptoms.

Thanks in advance.

Output of the TCP global parameters is below. I did change the Receive Window Auto-Tuning Level to Experimental; it made some difference (throughput went up to about 4 Gbit/s), but still nowhere near 10 Gbit/s.

TCP Global Parameters
----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : disabled
NetDMA State                        : disabled
Direct Cache Access (DCA)           : disabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : default
ECN Capability                      : enabled
RFC 1323 Timestamps                 : disabled
Initial RTO                         : 3000
Receive Segment Coalescing State    : enabled
Non Sack Rtt Resiliency             : disabled
Max SYN Retransmissions             : 2
TCP Fast Open                       : disabled
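Running the same arithmetic the other way: to fill a 10 Gbit/s path, the sender needs bandwidth x RTT bytes in flight (again, the RTTs are assumptions for illustration):

# Bandwidth-delay product: bytes in flight needed to fill 10 Gbit/s.
# RTT values are assumed, not measured.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    return int(bandwidth_bps * rtt_seconds / 8)

for rtt_ms in (0.2, 0.5, 1.0):
    print(f"RTT {rtt_ms} ms -> {bdp_bytes(10e9, rtt_ms / 1000):,} bytes")

# 0.2 ms -> 250,000; 0.5 ms -> 625,000; 1.0 ms -> 1,250,000

Unless the RTT is well under a quarter of a millisecond, ~242 KB in flight cannot fill the pipe, which again points at a sender-side limit (congestion window or send buffer) rather than the receive window.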
