Understanding Network Latency in Ethernet Switches

Published on 21 September 18

In modern Ethernet switches, network latency is a key performance metric, especially for high-performance networks and clustered computing applications. So what is network latency in Ethernet switches, and what causes it? Let's find out below.

What Is Network Latency in Ethernet Switches?

Network latency refers to any delay that occurs in data communication over a network. Ethernet switch latency, or network latency in an Ethernet switch, is the time an Ethernet packet spends traversing a network switch. A network with small delays is called a low-latency network, while a network that suffers from long delays is called a high-latency network.

[Image: Understanding Network Latency in Ethernet Switches]

Switch latency can be reported in two ways: one-way latency and round-trip latency. The former is the total time it takes a packet to travel from its source to its destination, while the latter is the time taken for a packet to reach its destination and come back again. Round-trip delay is the more common measurement today and has a key impact on network performance.

Generally, network latency is measured from port to port on an Ethernet switch. How it is reported depends on the switching paradigm the switch adopts: cut-through or store-and-forward. The store-and-forward paradigm requires the entire packet to be received and buffered by the switch before a forwarding decision is made. Cut-through forwarding, on the other hand, allows transmission on the egress port to begin as soon as enough of the packet has been received to make the forwarding decision.
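To see why the paradigm matters, consider the serialization delay alone: the time needed just to clock bits in off the wire before forwarding can start. The sketch below (Python; the 64-byte header region for the cut-through decision is an illustrative assumption, not a fixed standard) compares the two paradigms on a 1 Gbps link:

```python
# Serialization delay: time to clock a given number of bytes off the wire.
def serialization_delay_us(nbytes, link_bps):
    return nbytes * 8 / link_bps * 1e6  # microseconds

FRAME = 1500   # a full-size Ethernet payload, bytes
HEADER = 64    # assumed header region needed for a forwarding decision

# Store-and-forward must receive and buffer the whole frame first...
sf_1g = serialization_delay_us(FRAME, 1e9)
# ...while cut-through can start forwarding once the header region is in.
ct_1g = serialization_delay_us(HEADER, 1e9)

print(f"store-and-forward, 1500 B @ 1 Gbps: {sf_1g:.1f} us")   # 12.0 us
print(f"cut-through, 64 B header @ 1 Gbps:  {ct_1g:.2f} us")   # 0.51 us
```

The gap also explains why store-and-forward latency grows with frame size while cut-through latency stays roughly constant.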

What Causes Network Latency?

Many factors can contribute to network latency, including the following:

  • The time it takes for a packet to physically travel from its source to its destination.
  • Anti-virus and similar security processes, which need time to reassemble and inspect messages before forwarding them.
  • Processing delay at routers and switches, since each gateway spends time examining and updating packet headers.
  • Storage delays, when packets are subject to buffering or disk-access delays at intermediate devices such as switches and bridges.
  • Software bugs on the user's side.
  • The transmission medium itself, since media from optical fiber to coaxial cable each take some time to carry a packet from source to destination.
  • Delays that occur even when packets travel from one node to another at the speed of light.
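The last point is easy to make concrete: light in silica fiber propagates at roughly two-thirds of its vacuum speed, so distance alone imposes a latency floor no switch can remove. A quick Python calculation (the 2/3 factor is a common engineering approximation, not an exact constant):

```python
# Propagation delay alone, ignoring every switch and router in the path.
C = 299_792_458           # speed of light in vacuum, m/s
FIBER_SPEED = C * 2 / 3   # rough propagation speed in silica fiber

def propagation_delay_ms(distance_km):
    return distance_km * 1000 / FIBER_SPEED * 1e3  # milliseconds

# Even before any device adds its own delay, distance sets a hard floor.
print(f"100 km link:  {propagation_delay_ms(100):.2f} ms")
print(f"4000 km link: {propagation_delay_ms(4000):.1f} ms")
```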

Network Latency Test

A network latency test can be performed to analyze switch latency. Network latency in Ethernet switches can be measured with various tools and methods, such as IETF RFC 2544, netperf, or Ping Pong.

Different methods measure latency at different points in the system software stack. IETF RFC 2544 provides an industry-accepted method for measuring the latency of store-and-forward devices. Netperf can test latency with request/response tests (TCP_RR and UDP_RR). Ping Pong is a method for measuring latency in a high-performance computing cluster: it measures the round-trip time of remote procedure calls (RPCs) sent through the Message Passing Interface (MPI).
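The idea behind a request/response test can be sketched with nothing more than an echo server and a timer. The Python sketch below is a minimal stand-in for tools like netperf, not a substitute for them: it runs over loopback, so no actual switch is in the path, but the measurement shape (send, wait for the echo, stop the clock) is the same:

```python
import socket
import threading
import time

def echo_once(listener):
    # Accept one connection and echo a single message back.
    conn, _ = listener.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Local listener on an ephemeral loopback port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_once, args=(listener,), daemon=True).start()

# Round-trip latency: time from sending the probe to receiving the echo.
client = socket.create_connection(listener.getsockname())
start = time.perf_counter()
client.sendall(b"probe")
client.recv(1024)
rtt = time.perf_counter() - start
client.close()
print(f"loopback round-trip time: {rtt * 1e6:.0f} microseconds")
```

To probe a real switch, the echo side would run on a host on the far side of the device under test, and many samples would be taken to average out scheduling noise.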

How to Minimize Network Latency in Ethernet Switches

In many cases, switch latency is a much stronger determinant of application performance and user experience than link bandwidth. For the most commonly used Gigabit Ethernet switches, switch latency typically ranges from 50 to 125 microseconds; for the also popular 10GbE switches, it usually ranges from 5 to 50 microseconds. Excessive network latency limits the performance of network applications by delaying packet arrival, so minimizing it matters a great deal. You can run a network latency test first, then reroute packets to reduce the delay as much as possible.
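The claim that latency can outweigh bandwidth is easy to check for chatty request/response workloads, where total time is roughly the number of sequential round trips times the per-trip delay. A hypothetical comparison in Python (the request count, request size, and per-hop latencies are illustrative assumptions):

```python
# Total time for N sequential request/response exchanges:
# waiting on round trips plus time spent actually moving bytes.
def chatty_transfer_ms(n_requests, rtt_us, bytes_per_req, link_bps):
    latency_ms = n_requests * rtt_us / 1e3
    transfer_ms = n_requests * bytes_per_req * 8 / link_bps * 1e3
    return latency_ms + transfer_ms

# 1000 small sequential requests at 10 Gbps, through a 50 us hop vs a 5 us hop.
slow = chatty_transfer_ms(1000, 50, 512, 10e9)
fast = chatty_transfer_ms(1000, 5, 512, 10e9)
print(f"50 us per trip: {slow:.2f} ms")
print(f"5 us per trip:  {fast:.2f} ms")
```

On these assumed numbers the wire time is well under a millisecond either way; nearly the entire difference comes from the per-trip latency, which is why cutting switch latency can matter more than adding bandwidth.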

Conclusion

All in all, network latency is a key factor influencing the performance of an Ethernet switch. It can be caused by various factors and measured in several ways. After running the appropriate network latency test, you can take steps to minimize switch latency.

Original source: http://www.fiberopticshare.com/network-latency-in-ethernet-switches.html

