Network Latency Analysis using ICMP under Varying Traffic Conditions

Computer Networks | VIT University, Chennai | Winter Semester 2025–2026

B.Tech CSE | SCOPE Department, VIT Chennai

Guided by Dr. SUBBULAKSHMI T

INTRODUCTION

Network performance is a critical aspect of modern communication systems. Among the various parameters that define network quality, latency — measured as Round Trip Time (RTT) — is one of the most important indicators of how efficiently data travels across a network.
This project investigates how ICMP (Internet Control Message Protocol) based latency behaves under three distinct traffic conditions: normal, medium, and heavy traffic. Using Python-based tools (Scapy) for automated packet transmission and RTT measurement, and hping3 for traffic generation, this study provides a detailed analysis of how congestion affects not just average delay, but also jitter (latency variation) and estimated throughput.
The work was carried out in two phases — DA1 involved manual RTT measurement using Wireshark, while DA2 and DA3 extended this to programmatic analysis with graphical visualization across multiple runs and traffic conditions. This blog documents the complete methodology, results, and inferences drawn from the experiment.

OBJECTIVES

  • To measure network latency (RTT) using ICMP Echo Request and Echo Reply packets under varying traffic conditions.
  • To analyze how network congestion affects jitter (latency variation) and estimated throughput.
  • To compare network behavior across normal, medium, and high traffic conditions using graphical analysis.
  • To study the relationship between packet queueing delay and traffic intensity using Scapy and hping3.
  • To draw meaningful inferences from multiple experimental runs and identify patterns in network performance degradation.

    REFERENCE

    This DA was inspired by ICMP-based latency analysis techniques demonstrated at SharkFest — the Wireshark Developer and User Conference, which provides real-world packet analysis case studies.

    Reference link: https://sharkfest.wireshark.org

    SOURCE DESCRIPTION

    The foundation of this project is ICMP packet analysis — specifically the use of Echo Request (Type 8) and Echo Reply (Type 0) packets to calculate Round Trip Time. The initial understanding was built through Wireshark-based manual analysis (DA1), where ICMP packets were captured and RTT was computed from timestamps. This was extended in DA2 and DA3 using Scapy to automate packet transmission, RTT measurement, jitter calculation, and throughput estimation. Traffic load was controlled using hping3, which allowed generation of precise packet rates to simulate medium and high congestion scenarios without opening multiple browser tabs.

    ARCHITECTURE 


    The above diagram illustrates the complete system architecture used in this experiment. The local machine runs three tools simultaneously — Wireshark for packet capture (DA1), Scapy for automated ICMP packet transmission and RTT measurement (DA2/DA3), and hping3 for generating controlled background traffic to simulate medium and high load conditions. The ICMP Echo Request packets are sent to the target host (8.8.8.8 — Google DNS), which responds with Echo Replies. The RTT values are computed from the time difference between request and reply. These values are then processed to calculate jitter and estimated throughput, which are visualized using Python's matplotlib library. All captured packets are saved as .pcapng files and uploaded to GitHub along with the Python scripts.
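    The post-processing step can be sketched with the standard library alone. Jitter here is taken as the absolute difference between consecutive RTTs, and the 64-byte packet size used for the throughput estimate is an illustrative assumption rather than a value from the actual scripts.

```python
# Sketch of the post-processing described above. Jitter is the absolute
# difference between consecutive RTTs; the 64-byte packet size for the
# throughput estimate is an illustrative assumption.
import statistics

def analyze(rtts_ms, packet_bytes=64):
    avg_rtt = statistics.mean(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    avg_jitter = statistics.mean(diffs) if diffs else 0.0
    rtt_stdev = statistics.stdev(rtts_ms) if len(rtts_ms) > 1 else 0.0
    # Effective bits transferred per second for each round trip
    throughput_bps = [packet_bytes * 8 / (r / 1000.0) for r in rtts_ms]
    return avg_rtt, avg_jitter, rtt_stdev, throughput_bps

avg, jit, sd, tput = analyze([10.0, 12.0, 11.0, 15.0])  # avg == 12.0 ms
```

    The same per-run lists feed directly into matplotlib for the comparison graphs.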

    PROCEDURE 

    Normal Traffic

    No background traffic was generated. ICMP packets were sent directly to 8.8.8.8 (Google DNS) using Scapy, and the RTT was recorded for 50 packets across 5 runs.

    Command used:

    sudo python3 latency_normal.py
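    A hedged sketch of how the 5-run, 50-packet measurement loop in latency_normal.py could be structured (collect_runs is an illustrative name; the actual script is in the linked GitHub repository):

```python
# Hedged sketch of the multi-run measurement loop (collect_runs is an
# illustrative name; the real latency_normal.py lives in the GitHub repo).
RUNS, PACKETS_PER_RUN = 5, 50

def collect_runs(measure_rtt, runs=RUNS, packets=PACKETS_PER_RUN):
    """Collect RTT samples for several runs, dropping lost probes (None)."""
    all_runs = []
    for _ in range(runs):
        samples = [measure_rtt() for _ in range(packets)]
        all_runs.append([s for s in samples if s is not None])
    return all_runs
```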

    Medium Traffic

    Background TCP traffic was generated using hping3 at a controlled rate while simultaneously running the Scapy latency script.

    Command used:

    sudo hping3 -S -p 443 -i u20000 -c 500 8.8.8.8
    sudo python3 latency_medium.py
    -S = SYN flag, -p 443 = port 443, -i u20000 = interval 20000 microseconds, -c 500 = 500 packets


    High Traffic

    Packet rate was increased significantly to simulate heavy congestion.

    Command used:

    sudo hping3 -S -p 443 -i u5000 -c 500 8.8.8.8
    sudo python3 latency_high.py



    -i u5000 = interval 5000 microseconds (4× faster than medium), creating significantly higher background load.

     INFERENCES

    1. Normal traffic shows a stable average RTT of ~13.96 ms across all 5 runs, indicating a consistent network baseline. The low variation between runs confirms that without background traffic, the network path to 8.8.8.8 remains stable. This stability is expected since no competing traffic is consuming bandwidth. The consistent RTT values across 50 packets per run demonstrate that Google's DNS server responds reliably under idle network conditions. This forms the baseline reference for comparing medium and high traffic behavior.




    2. The first packet in each normal run shows a spike of up to 250 ms due to ARP resolution and routing cache warm-up. When the first ICMP packet is sent, the system must resolve the MAC address of the next hop router through ARP, adding significant delay. Subsequent packets benefit from the cached ARP entry and settled routing tables, causing RTT to drop sharply. This pattern is consistent across all 5 normal runs, confirming it is a systematic measurement artifact. This first-packet anomaly should be excluded when calculating true average network latency.
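    A simple guard implementing this exclusion is to drop the first sample before averaging (true_average_rtt is an illustrative helper name, not part of the original scripts):

```python
# Guard suggested by the first-packet observation: exclude the
# ARP-inflated first sample before averaging (illustrative helper).
import statistics

def true_average_rtt(rtts_ms):
    """Average RTT excluding the first (ARP warm-up) sample."""
    trimmed = rtts_ms[1:] if len(rtts_ms) > 1 else rtts_ms
    return statistics.mean(trimmed)

# A 250 ms ARP spike on packet 1 no longer skews the baseline:
baseline = true_average_rtt([250.0, 14.0, 13.5, 14.5])  # 14.0 ms
```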





    3. Jitter under normal traffic remains below 6 ms, confirming minimal variation in packet delivery timing. Low jitter indicates that packets are being delivered at a consistent rate without queuing or congestion effects. This is the ideal network condition for real-time applications like VoIP and video conferencing. The jitter graph shows small fluctuations between consecutive packets, all within acceptable thresholds. Normal traffic conditions are therefore suitable for latency-sensitive applications.



    4. RTT standard deviation under normal traffic is the lowest among all conditions at ~2.30 ms, indicating strong network stability. Standard deviation measures how spread out the RTT values are from the average — a low value means consistent performance. This confirms that normal traffic produces predictable and reliable latency. Compared to medium (3.92 ms) and high traffic conditions, the normal traffic standard deviation is significantly lower. This metric reinforces that congestion directly increases RTT unpredictability.





    5. Medium traffic raises average RTT to ~23.24 ms — a 66% increase over the normal baseline of 13.96 ms. This increase is caused by the background TCP traffic generated by hping3 at a 20000-microsecond interval (-i u20000) competing for bandwidth. The network now has to handle both ICMP measurement packets and background SYN packets simultaneously. Queueing delay begins to appear as the router buffers accumulate packets before forwarding. This demonstrates how even moderate background traffic can significantly degrade latency.





    6. RTT peaks of up to 720 ms were observed in medium traffic Run 2, indicating occasional severe queueing bursts. These spikes occur when the router's buffer becomes temporarily full and packets must wait before being forwarded. The spike at packet 1 is followed by a return to moderate RTT values, suggesting the congestion is intermittent rather than sustained. This burst behavior is characteristic of bursty TCP traffic generated by hping3. Such spikes would cause noticeable degradation in real-time communication applications.





    7. Jitter under medium traffic reaches up to 120 ms — roughly 20 times higher than the normal traffic jitter of 6 ms. This dramatic increase in jitter is caused by the irregular arrival of ICMP reply packets due to competing TCP traffic. The jitter graph shows large swings between consecutive packets, indicating highly variable delivery timing. This level of jitter (>30 ms) would cause audible distortion in VoIP calls and visible lag in video conferencing. Medium traffic conditions are therefore unsuitable for real-time latency-sensitive applications.




    8. Average jitter under medium traffic (~54.33 ms) is dramatically higher than normal (~3.16 ms), confirming congestion-induced instability. This 17× increase in average jitter shows that congestion affects delivery consistency far more than it affects average delay. The jitter bar chart clearly shows the jump from normal to medium traffic conditions. This finding supports the conclusion that jitter is a more sensitive congestion indicator than average RTT. Network monitoring tools should prioritize jitter measurement alongside average latency for accurate performance assessment.






    9. Despite higher traffic, medium Run 3 shows an average RTT of 20.64 ms which is only slightly above the normal baseline, suggesting congestion effects are intermittent rather than sustained. This indicates that hping3-generated traffic at the 20000-microsecond (-i u20000) interval does not continuously saturate the network. There are windows of low congestion between bursts of TCP SYN packets where ICMP packets experience near-normal latency. This bursty nature of congestion makes average RTT an unreliable single metric for network health assessment. Multiple runs are therefore necessary to capture the true range of network behavior under medium traffic.



    10. High traffic produces average RTT values between 40–53 ms, with Run 4 peaking at 53.54 ms — nearly 4 times the normal baseline of 13.96 ms. The hping3 interval of u5000 microseconds generates packets 4 times faster than medium traffic, creating significantly higher competition for bandwidth. Router buffers fill more frequently and stay full for longer periods, causing sustained queueing delay. Unlike medium traffic where spikes were occasional, high traffic shows elevated RTT consistently across the entire run. This confirms that high traffic fundamentally changes the network's behavior from stable to congested.





    11. High traffic RTT graphs show large spikes distributed throughout the entire run, unlike normal traffic where spikes are confined only to the first few packets. In normal traffic, spikes appear at packet 1 due to ARP resolution and disappear quickly. Under high traffic, spikes appear at packets 1, 20, 25, 35, 40, 44 — scattered randomly throughout. This random distribution of spikes indicates sustained and unpredictable congestion rather than a one-time initialization artifact. This pattern makes high traffic conditions completely unreliable for any latency-sensitive network application.


    12. Jitter under high traffic exceeds 200 ms in multiple consecutive packets, indicating severe and sustained network instability. The jitter graph shows values climbing from ~95 ms at packet 2 to over 218 ms at packet 8, remaining elevated at 185-192 ms. This sustained high jitter is caused by persistent buffer overflow in the network path due to the high rate of hping3 packets. For reference, acceptable jitter for VoIP applications is below 30 ms — high traffic conditions exceed this by over 7 times. These conditions would render real-time communication applications completely unusable.



    13. The box plot reveals that RTT spread (interquartile range) widens significantly from normal to medium to high traffic conditions. The normal traffic box is narrow and low, indicating tight clustering of RTT values around the median. The medium traffic box is wider and higher, showing greater spread in RTT values. The high traffic box is the widest and highest, with outliers extending well above the main distribution. This progressive widening of the box plot visually confirms that congestion increases both average latency and RTT variability simultaneously.
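    For reference, a box plot like the one described can be generated with matplotlib as follows — the RTT samples below are placeholder values for illustration, not the measured data:

```python
# Sketch of the RTT box-plot comparison, using placeholder values
# (not the measured data) for the three traffic conditions.
import matplotlib
matplotlib.use("Agg")                      # render off-screen, no display needed
import matplotlib.pyplot as plt

normal = [13.0, 14.0, 15.0, 13.5, 14.2]   # placeholder RTTs in ms
medium = [20.0, 23.0, 28.0, 22.0, 35.0]
high = [40.0, 45.0, 53.0, 48.0, 70.0]

plt.boxplot([normal, medium, high])
plt.xticks([1, 2, 3], ["Normal", "Medium", "High"])
plt.ylabel("RTT (ms)")
plt.title("RTT distribution by traffic condition")
plt.savefig("rtt_boxplot.png")
```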


    14. Standard deviation of RTT increases from ~2.30 ms (normal) to ~3.92 ms (medium) to ~5.50 ms (high), quantifying the growth in variability. This progressive increase confirms that each additional level of traffic intensity adds measurable unpredictability to network performance. The standard deviation more than doubles from normal to high traffic conditions. This metric provides a single quantitative measure of how congestion affects network reliability. Network SLAs (Service Level Agreements) typically specify maximum allowable standard deviation — high traffic conditions would violate most SLA thresholds.



    15. The average RTT bar chart clearly shows a monotonic increase: 13.96 ms → 23.24 ms → 45.29 ms across normal, medium, and high traffic. This progressive increase confirms that RTT scales with traffic intensity in a predictable direction. The jump from normal to medium (9.28 ms increase) is smaller than the jump from medium to high (22.05 ms increase), indicating non-linear growth. This non-linearity suggests that beyond a certain traffic threshold, network performance degrades at an accelerating rate. The bar chart provides a clear visual summary of how congestion affects average network latency.




    16. Estimated throughput decreases as traffic load increases, since higher RTT means fewer effective bits are transferred per unit time. Under normal traffic, throughput is highest because RTT is low and packets are delivered quickly. As RTT increases under medium and high traffic, the effective throughput per ICMP packet drops proportionally. The throughput comparison graph shows normal traffic consistently outperforming medium and high traffic across all runs. This inverse relationship between RTT and throughput demonstrates the fundamental impact of congestion on network efficiency.




    17. The RTT line comparison graph shows that high traffic runs have overlapping RTT values with medium traffic in some runs, suggesting network capacity was not fully saturated. In runs 2 and 4, high traffic RTT values are close to medium traffic values rather than being dramatically higher. This overlap indicates that the network path to 8.8.8.8 has sufficient capacity to handle hping3 traffic without complete saturation. True network saturation would show a much larger and consistent gap between medium and high traffic RTT values. This finding suggests that even higher packet rates would be needed to fully saturate the network path.






    18. Jitter is a more sensitive congestion indicator than average RTT — average jitter increased roughly 17× from normal to medium traffic, while average RTT rose by only 66%. Average RTT went from 13.96 ms to 23.24 ms (a 1.66× increase) while average jitter went from 3.16 ms to 54.33 ms (a 17× increase). This disproportionate growth in jitter compared to RTT confirms that jitter responds earlier and more strongly to congestion. Network monitoring systems that only track average RTT would significantly underestimate the impact of congestion on application performance. Jitter should be the primary metric for detecting early-stage network congestion.



    19. ICMP packets continued to receive replies even under heavy traffic conditions, confirming that ICMP is not dropped by the target host (8.8.8.8 / Google DNS) under load. Despite hping3 generating 500 TCP SYN packets at 5000-microsecond (-i u5000) intervals, all ICMP Echo Requests received replies. This demonstrates the robustness of Google's DNS infrastructure in handling mixed traffic loads without dropping control plane packets. It also confirms that the latency increases observed are due to queueing delay in intermediate routers, not packet loss at the destination. This makes ICMP a reliable measurement tool even under congested network conditions.



    20. Network performance degradation is non-linear, and congestion's impact is felt most strongly in delay variation. From medium to high traffic, average RTT increased by about 95% and average jitter by about 85% — similar relative growth — but the absolute jitter increase (54 ms → 100 ms) represents a much larger impact on application experience than the RTT increase (23 ms → 45 ms). Taken together with the normal-to-medium results, where jitter grew 17× against a 66% rise in RTT, this means that small increases in traffic intensity beyond a threshold cause disproportionately large impacts on network stability. This is the most important finding of this experiment and has direct implications for network capacity planning.



     NEW FINDINGS

  • Jitter is a far more sensitive indicator of network congestion than average RTT — average jitter increased roughly 17× from normal to medium traffic while average RTT rose by only 66%.
  • The first ICMP packet in every run consistently shows an anomalously high RTT due to ARP cache resolution and routing table lookup — this is a systematic measurement artifact, not true network congestion.
  • Network performance degradation under congestion is non-linear — the impact on jitter grows disproportionately faster than the impact on average latency.
  • Even under heavy hping3-generated traffic, Google's DNS server (8.8.8.8) never dropped ICMP packets, demonstrating its robust traffic handling capability.
  • Medium traffic occasionally produced RTT values comparable to normal conditions, indicating that congestion effects are bursty and intermittent rather than uniform.

    RECOMMENDATIONS

  • Network monitoring systems should track jitter alongside average RTT for more accurate congestion detection.
  • The first packet RTT should be excluded from latency averages to avoid skewed results in automated measurement tools.
  • ICMP-based latency testing should be run across multiple iterations (minimum 5 runs) to account for natural variation in network conditions.
  • Traffic generation for testing should use packet-rate-based tools like hping3 rather than browser tabs to ensure controlled and reproducible conditions.
  • For real-time applications like VoIP or video conferencing, jitter thresholds should be set below 30 ms — values observed under high traffic (200+ ms) would cause severe degradation.

    USE OF AI

    AI tools — specifically Claude (Anthropic) and ChatGPT (OpenAI) — were used at multiple stages of this assignment:

    1. Code Debugging — AI helped identify and fix issues in the Scapy script related to packet timeout handling and RTT calculation accuracy.
    2. Graph Visualization — AI assisted in generating additional comparative graphs (box plots, standard deviation charts, throughput comparison) using matplotlib, which were beyond the scope of the original code.
    3. Conceptual Clarity — AI was used to understand the difference between latency, jitter, and throughput, and how each is affected by network congestion.
    4. Documentation — AI helped structure the blog content, expand inferences with technical reasoning, and improve the quality of written explanations.
    5. Data Interpretation — AI helped interpret why the first packet in each run showed anomalously high RTT values (ARP resolution artifact).

    The use of AI significantly improved the depth and quality of analysis while the core experimental work — packet capture, traffic generation, and data collection — was performed independently.

    CONCLUSION

    This experiment successfully demonstrated the impact of network congestion on latency, jitter, and throughput using ICMP-based measurement. Across three traffic conditions — normal, medium, and high — a clear and consistent pattern emerged: as traffic intensity increases, both average RTT and jitter increase, but jitter grows disproportionately faster, making it the more reliable indicator of congestion.

    The study confirmed that network performance degradation is non-linear and bursty in nature. Even under heavy traffic, ICMP packets were never dropped by the target host, highlighting the resilience of Google's DNS infrastructure. The combination of Wireshark (DA1), Scapy-based automation (DA2), and multi-run graphical analysis (DA3) provided a comprehensive view of network latency behavior from manual observation to programmatic analysis.

    This work reinforces the importance of measuring multiple network parameters simultaneously and running experiments across multiple iterations for statistically reliable conclusions.

    YOUTUBE VIDEO

    https://youtu.be/kuyTxe7NR7E


    GITHUB LINK

    All Python scripts and output screenshots used in this experiment are available in the GitHub repository linked below.

    https://github.com/varrunn560/DA3-Network-Latency-Analysis

    ACKNOWLEDGEMENT

    I would like to express my sincere gratitude to the following:

    • My Parents — for their constant support and encouragement throughout my academic journey.
    • VIT Chennai — for providing the infrastructure, laboratory facilities, and academic environment that made this experiment possible.
    • Dr. SUBBULAKSHMI T — for designing this assignment in a way that encouraged real experimentation and deeper understanding of network performance analysis, and for the detailed DA guidelines.
    • SCOPE Department, VIT Chennai — for the structured curriculum and lab facilities provided for the Computer Networks Lab, Semester 4, B.Tech CSE (2024–2025).
    • My friends and classmates — for their peer support, technical discussions, and feedback during the course of this assignment.
    • Wireshark & Scapy open-source communities — for providing powerful, freely available tools that made this analysis possible.

    PEER COMMENTS

    1. Good analysis of how jitter increases under heavy traffic conditions. The use of Scapy for automated RTT measurement is well implemented.

    2. Nice work on comparing latency across normal, medium and high traffic conditions. The graphs clearly show the impact of congestion on network performance.

    3. Interesting use of hping3 for traffic generation. The RTT values recorded under high traffic clearly demonstrate queueing delay effects.

    4. Good practical implementation of ICMP based latency measurement. The comparison graphs between all three traffic conditions are very clear and informative.

    5. Well documented experiment. The observation that jitter is a better indicator of congestion than average RTT is a key finding.

    6. The finding that jitter increased 17× while average RTT grew far less from normal to medium traffic is a strong quantitative result that shows deep analysis.

    7. Good observation about the first-packet ARP resolution artifact. This is often overlooked in basic latency measurements but correctly identified here as a systematic error.