Network Latency Analysis using ICMP under Varying Traffic Conditions
B.Tech CSE | SCOPE Department, VIT Chennai
Guided by Dr. SUBBULAKSHMI T
INTRODUCTION
This project investigates how ICMP (Internet Control Message Protocol) based latency behaves under three distinct traffic conditions: normal, medium, and heavy traffic. Using Python-based tools (Scapy) for automated packet transmission and RTT measurement, and hping3 for traffic generation, this study provides a detailed analysis of how congestion affects not just average delay, but also jitter (latency variation) and estimated throughput.
The work was carried out in three phases — DA1 involved manual RTT measurement using Wireshark, while DA2 and DA3 extended this to programmatic analysis with graphical visualization across multiple runs and traffic conditions. This blog documents the complete methodology, results, and inferences drawn from the experiment.
OBJECTIVES
REFERENCE
This DA was inspired by ICMP-based latency analysis techniques demonstrated at SharkFest — the Wireshark Developer and User Conference, which provides real-world packet analysis case studies.
Reference link: https://sharkfest.wireshark.org
Additional references used:
- Wireshark Official Documentation: https://www.wireshark.org/docs/wsug_html_chunked/
- ICMP Protocol — RFC 792: https://www.rfc-editor.org/rfc/rfc792
- Scapy Documentation: https://scapy.readthedocs.io/en/latest/
- hping3 Manual: http://www.hping.org/manpage.html
SOURCE DESCRIPTION
ARCHITECTURE
The above diagram illustrates the complete system architecture used in this experiment. The local machine runs three tools simultaneously — Wireshark for packet capture (DA1), Scapy for automated ICMP packet transmission and RTT measurement (DA2/DA3), and hping3 for generating controlled background traffic to simulate medium and high load conditions. The ICMP Echo Request packets are sent to the target host (8.8.8.8 — Google DNS), which responds with Echo Replies. The RTT values are computed from the time difference between request and reply. These values are then processed to calculate jitter and estimated throughput, which are visualized using Python's matplotlib library. All captured packets are saved as .pcapng files and uploaded to GitHub along with the Python scripts.
PROCEDURE
Normal Traffic
No background traffic was generated. ICMP packets were sent directly to 8.8.8.8 (Google DNS) using Scapy, and RTT was recorded for 50 packets across 5 runs. Command used:
sudo python3 latency_normal.py
Medium Traffic
Background TCP traffic was generated using hping3 at a controlled rate while the Scapy latency script ran simultaneously. Commands used:
sudo hping3 -S -p 443 -i u20000 -c 500 8.8.8.8
sudo python3 latency_medium.py
-S = SYN flag, -p 443 = port 443, -i u20000 = interval 20000 microseconds, -c 500 = 500 packets
High Traffic
The packet rate was increased significantly to simulate heavy congestion. Commands used:
sudo hping3 -S -p 443 -i u5000 -c 500 8.8.8.8
sudo python3 latency_high.py
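The difference between the medium and high traffic conditions comes down to the `-i` interval flag: `uN` means N microseconds between packets, so the offered rate is 1,000,000 / N packets per second. A quick sanity check (illustrative Python, not part of the experiment's scripts):

```python
# Offered load implied by the hping3 "-i uN" flag: N microseconds between
# packets, so rate = 1,000,000 / N packets per second.
def hping3_rate_pps(interval_us):
    return 1_000_000 / interval_us

medium_rate = hping3_rate_pps(20_000)  # -i u20000 -> 50 packets/s
high_rate = hping3_rate_pps(5_000)     # -i u5000  -> 200 packets/s
```

So the high-traffic condition offers four times the background packet rate of the medium condition.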
INFERENCES
12. Jitter under high traffic exceeds 200 ms in multiple consecutive packets, indicating severe and sustained network instability. The jitter graph shows values climbing from ~95 ms at packet 2 to over 218 ms at packet 8, remaining elevated at 185-192 ms. This sustained high jitter is caused by persistent buffer overflow in the network path due to the high rate of hping3 packets. For reference, acceptable jitter for VoIP applications is below 30 ms — high traffic conditions exceed this by over 7 times. These conditions would render real-time communication applications completely unusable.
14. Standard deviation of RTT increases from ~2.30 ms (normal) to ~3.92 ms (medium) to ~5.50 ms (high), quantifying the growth in variability. This progressive increase confirms that each additional level of traffic intensity adds measurable unpredictability to network performance. The standard deviation more than doubles from normal to high traffic conditions. This metric provides a single quantitative measure of how congestion affects network reliability. Network SLAs (Service Level Agreements) typically specify maximum allowable delay variation — high traffic conditions would violate most SLA thresholds.
15. The average RTT bar chart clearly shows a monotonic increase: 13.96 ms → 23.24 ms → 45.29 ms across normal, medium, and high traffic. This progressive increase confirms that RTT scales with traffic intensity in a predictable direction. The jump from normal to medium (9.28 ms increase) is smaller than the jump from medium to high (22.05 ms increase), indicating non-linear growth. This non-linearity suggests that beyond a certain traffic threshold, network performance degrades at an accelerating rate. The bar chart provides a clear visual summary of how congestion affects average network latency.
17. The RTT line comparison graph shows that high traffic runs have overlapping RTT values with medium traffic in some runs, suggesting network capacity was not fully saturated. In runs 2 and 4, high traffic RTT values are close to medium traffic values rather than being dramatically higher. This overlap indicates that the network path to 8.8.8.8 has sufficient capacity to handle hping3 traffic without complete saturation. True network saturation would show a much larger and consistent gap between medium and high traffic RTT values. This finding suggests that even higher packet rates would be needed to fully saturate the network path.
18. Jitter is a more sensitive congestion indicator than average RTT — jitter increased roughly 17 times from normal to medium traffic, while average RTT grew by less than a factor of two. Average RTT went from 13.96 ms to 23.24 ms (1.66× increase) while average jitter went from 3.16 ms to 54.33 ms (17× increase). This disproportionate growth in jitter compared to RTT confirms that jitter responds earlier and more strongly to congestion. Network monitoring systems that only track average RTT would significantly underestimate the impact of congestion on application performance. Jitter should be the primary metric for detecting early-stage network congestion.
19. ICMP packets continued to receive replies even under heavy traffic conditions, confirming that ICMP is not dropped by the target host (8.8.8.8 / Google DNS) under load. Despite hping3 generating 500 TCP SYN packets at u5000 microsecond intervals, all ICMP Echo Requests received replies. This demonstrates the robustness of Google's DNS infrastructure in handling mixed traffic loads without dropping control plane packets. It also confirms that the latency increases observed are due to queueing delay in intermediate routers, not packet loss at the destination. This makes ICMP a reliable measurement tool even under congested network conditions.
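The per-run metrics quoted in the inferences above (average RTT, standard deviation, jitter, estimated throughput) can be reproduced from raw RTT samples. The following is a minimal sketch using only Python's standard library; the jitter definition (mean absolute difference between consecutive RTTs) and the rough per-packet throughput estimate are assumptions about the original scripts, and the sample values are illustrative.

```python
# Minimal sketch of the per-run metric calculations. The jitter definition
# (mean absolute difference between consecutive RTTs) and the rough
# throughput estimate are assumptions, not taken from the actual scripts.
import statistics

def analyze(rtts_ms, payload_bytes=64):
    avg_rtt = statistics.mean(rtts_ms)
    std_rtt = statistics.stdev(rtts_ms)  # sample standard deviation
    # Jitter: mean absolute difference between consecutive RTT samples.
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = statistics.mean(diffs)
    # Rough throughput estimate: payload bits delivered per round trip.
    throughput_kbps = (payload_bytes * 8) / (avg_rtt / 1000.0) / 1000.0
    return {"avg_rtt_ms": avg_rtt, "std_rtt_ms": std_rtt,
            "jitter_ms": jitter, "throughput_kbps": throughput_kbps}
```

Running this once per traffic condition yields the bar-chart and standard-deviation comparisons discussed in the inferences.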
NEW FINDINGS
RECOMMENDATIONS
USE OF AI
AI tools — specifically Claude (Anthropic) and ChatGPT (OpenAI) — were used at multiple stages of this assignment:
- Code Debugging — AI helped identify and fix issues in the Scapy script related to packet timeout handling and RTT calculation accuracy.
- Graph Visualization — AI assisted in generating additional comparative graphs (box plots, standard deviation charts, throughput comparison) using matplotlib, which were beyond the scope of the original code.
- Conceptual Clarity — AI was used to understand the difference between latency, jitter, and throughput, and how each is affected by network congestion.
- Documentation — AI helped structure the blog content, expand inferences with technical reasoning, and improve the quality of written explanations.
- Data Interpretation — AI helped interpret why the first packet in each run showed anomalously high RTT values (ARP resolution artifact).
The use of AI significantly improved the depth and quality of analysis while the core experimental work — packet capture, traffic generation, and data collection — was performed independently.
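The first-packet ARP resolution artifact mentioned above can be handled by discarding the first sample of each run before computing statistics, since on a cold ARP cache the first Echo Request also pays the cost of resolving the gateway's MAC address. A hypothetical helper (the function name and structure are illustrative):

```python
# Discard the first RTT sample of each run: with a cold ARP cache, the
# first Echo Request also pays the ARP resolution cost for the gateway,
# inflating its RTT. Dropping it removes this systematic error.
def trim_arp_artifact(runs):
    """runs: list of per-run RTT lists (ms); returns each run without
    its first sample."""
    return [run[1:] for run in runs]
```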
CONCLUSION
This experiment successfully demonstrated the impact of network congestion on latency, jitter, and throughput using ICMP-based measurement. Across three traffic conditions — normal, medium, and high — a clear and consistent pattern emerged: as traffic intensity increases, both average RTT and jitter increase, but jitter grows disproportionately faster, making it the more reliable indicator of congestion.
The study confirmed that network performance degradation is non-linear and bursty in nature. Even under heavy traffic, ICMP packets were never dropped by the target host, highlighting the resilience of Google's DNS infrastructure. The combination of Wireshark (DA1), Scapy-based automation (DA2), and multi-run graphical analysis (DA3) provided a comprehensive view of network latency behavior from manual observation to programmatic analysis.
This work reinforces the importance of measuring multiple network parameters simultaneously and running experiments across multiple iterations for statistically reliable conclusions.
YOUTUBE VIDEO
GITHUB LINK
All Python scripts and output screenshots used in this experiment are available in the GitHub repository linked below.
ACKNOWLEDGEMENT
I would like to express my sincere gratitude to the following:
- My Parents — for their constant support and encouragement throughout my academic journey.
- VIT Chennai — for providing the infrastructure, laboratory facilities, and academic environment that made this experiment possible.
- Dr. SUBBULAKSHMI T — for designing this assignment in a way that encouraged real experimentation and deeper understanding of network performance analysis, and for the detailed DA guidelines.
- SCOPE Department, VIT Chennai — for the structured curriculum and lab facilities provided for the Computer Networks Lab, Semester 4, B.Tech CSE (2024–2025).
- My friends and classmates — for their peer support, technical discussions, and feedback during the course of this assignment.
- Wireshark & Scapy open-source communities — for providing powerful, freely available tools that made this analysis possible.