Network Conditions Explained: What Each Parameter Does and When to Use It

Updated March 2026

When you open a network throttling tool, you're typically presented with five parameters: download speed, upload speed, latency, packet loss, and DNS delay. Most developers treat them all as different ways to make the connection "slow" — pick any one, turn it up, and see what happens.

That instinct is wrong. Each parameter degrades the network in a fundamentally different way, causes different symptoms in your app, and maps to different real-world conditions. A high-latency connection doesn't behave like a low-bandwidth one. Packet loss doesn't feel like a slow download. And if you test with just one parameter — usually bandwidth — you're missing entire categories of bugs that your users will find for you.

This guide explains what each parameter actually does, both at the network level and in terms of what your users experience. It's the reference you'll want when you're setting up a custom network profile and asking yourself: "What values should I use, and why?"

Download speed

Download speed is the rate at which your device can receive data from the network, measured in Kbps or Mbps. When you limit download speed, you're capping the throughput of all incoming data — API responses, images, scripts, video streams, file downloads.

What happens at the network level

TCP fills the available pipe. When the download limit is lower than the server's send rate, incoming data queues up and is delivered at the throttled rate. A 1.25 MB (10 megabit) API response that takes 80 milliseconds on a fast connection takes over 13 seconds at 750 Kbps. The data arrives correctly — it just arrives slowly. There are no errors, no dropped packets, and no retransmissions. Everything works, just on a longer timeline.
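To make the arithmetic concrete, here is a minimal helper. It's a sketch that ignores TCP slow start and protocol overhead, so real transfers take somewhat longer; treat the result as a lower bound:

```python
def transfer_seconds(size_bytes: int, link_kbps: float) -> float:
    """Ideal time to move size_bytes over a link capped at link_kbps.

    Ignores slow start, protocol overhead, and latency, so real
    transfers take somewhat longer. Treat this as a lower bound.
    """
    bits = size_bytes * 8
    return bits / (link_kbps * 1000)

# A 1.25 MB response over a throttled 3G-class downlink:
print(round(transfer_seconds(1_250_000, 750), 1))  # 13.3 seconds
```

The same helper answers "how slow is too slow" questions for any payload size before you ever open a throttling tool.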

What it feels like in your app

Large assets take visibly longer to arrive. Images load progressively or not at all. API responses with big payloads — lists, search results, file metadata — take seconds instead of milliseconds. Video buffers. File downloads crawl. But small requests — a login check, a short JSON response, a redirect — still feel responsive, because the bottleneck is the volume of data, not the time it takes to start.

Bugs it exposes

Timeouts tuned for fast connections that fire before large responses finish. Missing loading states and progress indicators on flows that "always" completed instantly in development. Images without placeholders that shift layout as they trickle in. Infinite-scroll lists that request the next page before the current one has arrived.

Real-world scenarios

Rural and edge-of-coverage cellular connections, hotel and airplane WiFi, and metered plans that throttle speeds after a data cap. In all of these, throughput is the bottleneck while small requests still feel snappy.

When to use it

Test download speed in isolation when you're working on asset loading, image-heavy pages, file downloads, video streaming, or any flow that pulls large payloads from the server. It's the right parameter for answering: "What happens when data takes a long time to arrive?"

Upload speed

Upload speed is the rate at which your device can send data to the network. It's the mirror of download speed, but it's tested far less often — partly because developers rarely think about the outgoing direction, and partly because most app interactions are read-heavy.

What happens at the network level

When upload speed is limited, outgoing data — POST bodies, file uploads, form submissions — queues up and is sent at the throttled rate. The server doesn't receive the full request until the last byte clears the pipe. A 5 MB photo upload that takes half a second on WiFi takes over 40 seconds at 1 Mbps, and more than six minutes at 100 Kbps.
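One practical consequence: a fixed request timeout that ignores payload size will fire mid-upload on slow links. Below is a sketch of a size-aware timeout budget; the constants (`worst_case_kbps`, `base_seconds`, `safety_factor`) are illustrative assumptions, not recommendations:

```python
def upload_timeout_seconds(payload_bytes: int,
                           worst_case_kbps: float = 100.0,
                           base_seconds: float = 10.0,
                           safety_factor: float = 1.5) -> float:
    """Pick a request timeout that scales with the payload.

    worst_case_kbps is the slowest uplink you intend to support;
    base_seconds covers connection setup and server processing.
    """
    transfer = (payload_bytes * 8) / (worst_case_kbps * 1000)
    return base_seconds + transfer * safety_factor

# A 5 MB photo on a 100 Kbps uplink needs a budget of minutes, not seconds:
print(round(upload_timeout_seconds(5_000_000)))  # 610 seconds
```

Whether you scale the timeout or chunk the upload instead is a design choice; the point is that one hard-coded number cannot serve both a 2 KB form post and a 5 MB photo.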

What it feels like in your app

Form submissions with large payloads appear to hang — the user clicks "submit" and nothing happens for seconds. Photo and video uploads stall, sometimes without any progress indication. Real-time features that send data upstream — chat messages, collaborative editing, live location updates — feel laggy or unresponsive. The app feels like it's ignoring the user's input, because the outbound data is still in the queue.

Bugs it exposes

Submit buttons that give no feedback while a large request body drains, inviting double submissions. Upload flows with no progress indication and no way to cancel. Timeouts tuned for the download direction that fire mid-upload and silently discard the user's data.

Real-world scenarios

Upload speeds are almost always lower than download speeds. On cellular networks, upload is typically 3–5x slower than download. On many home connections, the asymmetry is even larger — a 50 Mbps download connection might have only 5 Mbps upload. This means the upload direction is often the bottleneck for any feature that sends user-generated content.

When to use it

Test upload speed in isolation when you're working on file upload flows, form submissions with large payloads (images, documents, attachments), or real-time features that send data upstream. It's the right parameter for answering: "What happens when the user's outgoing data is slow?"

Latency (round-trip delay)

Latency is the time it takes for a packet to travel from your device to the server and back, measured in milliseconds. It's the most misunderstood parameter, because people confuse it with bandwidth — but they measure completely different things.

What happens at the network level

Latency adds a delay to every round trip. Every TCP handshake, every TLS negotiation, every DNS lookup, every HTTP request-response cycle is slowed by the round-trip time. A page that makes 10 sequential API calls at 200ms latency adds 2 full seconds of delay — even if each response is only a few bytes. The data transfers quickly once it starts; the delay is in the waiting.

This is critical to understand: latency compounds with sequential requests. One request at 200ms latency is barely noticeable. Ten sequential requests at 200ms latency is a 2-second stall that your users will absolutely notice.
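The compounding effect is easy to demonstrate. The sketch below simulates a 200ms round trip with `asyncio.sleep` (a stand-in for real requests) and compares ten sequential calls against ten issued concurrently:

```python
import asyncio
import time

RTT = 0.2  # simulated 200 ms round trip; stands in for a real request

async def fake_request() -> None:
    await asyncio.sleep(RTT)  # only the round trip; payload is tiny

async def sequential(n: int) -> float:
    start = time.monotonic()
    for _ in range(n):
        await fake_request()  # each call waits for the previous one
    return time.monotonic() - start

async def concurrent(n: int) -> float:
    start = time.monotonic()
    await asyncio.gather(*(fake_request() for _ in range(n)))
    return time.monotonic() - start

seq = asyncio.run(sequential(10))   # ~2.0 s: latency compounds
par = asyncio.run(concurrent(10))   # ~0.2 s: round trips overlap
print(f"sequential: {seq:.1f}s, concurrent: {par:.1f}s")
```

Batching or parallelizing requests doesn't reduce latency, but it stops the round trips from stacking, which is usually the cheapest fix for a waterfall.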

What it feels like in your app

Everything feels slow to start, but once data begins arriving, it arrives quickly. The time-to-first-byte is high, but the throughput is normal. Screens that make a single request feel slightly sluggish. Screens that make many sequential requests — load user data, then fetch preferences, then load the feed, then resolve avatars — feel broken. The delay is additive across every round trip.

Latency is not bandwidth

This distinction matters enough to call out explicitly. You can have high bandwidth with high latency — like a satellite internet connection that transfers data fast once it starts, but has 500ms+ round trips. Everything streams quickly, but every new request takes half a second before the first byte arrives. Conversely, you can have low bandwidth with low latency — like a 3G connection where data trickles slowly but the initial response comes back promptly. These two scenarios produce completely different bugs.

If you only test by lowering bandwidth, you'll never find the bugs that latency causes. And latency bugs are everywhere, because modern apps tend to be chatty — making many small requests rather than few large ones.

Bugs it exposes

Request waterfalls, where each call waits on the previous one and the round trips add up. Spinners that flicker on and off for every small request. Missing optimistic updates, so every tap waits a full round trip before the UI responds. Timeouts that assume the first byte always arrives quickly.

Real-world scenarios

Satellite internet, users on another continent from your single-region backend, VPNs that route traffic through distant servers, and cellular connections where the radio takes time to wake from idle.

When to use it

Test latency in isolation when you're working on chatty apps, request waterfalls, apps with many sequential API calls, or anything using WebSockets or long-polling. It's the right parameter for answering: "What happens when every request takes longer to start?"

Packet loss

Packet loss is the percentage of network packets that are dropped in transit and never arrive at their destination. If latency is the most misunderstood parameter, packet loss is the most overlooked. Developers test bandwidth and latency but rarely packet loss — which is why packet-loss bugs are the ones users find first.

What happens at the network level

When a packet is lost, TCP detects the gap and retransmits it. This retransmission takes time — the sender has to notice the packet is missing (usually by waiting for acknowledgments that don't arrive), then send it again, then wait for the retransmitted packet to be acknowledged. Each retransmission cycle adds hundreds of milliseconds of delay, and TCP's congestion control algorithm responds to loss by slowing down the entire connection.

This is what makes packet loss so destructive: even 1–2% loss can cause significant degradation. It's not just the lost packets themselves — it's TCP's reaction to them. The connection slows down, backs off, and tentatively ramps back up, only to slow down again when the next packet is lost. The result is a connection that stutters unpredictably rather than being consistently slow.
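This interaction between loss rate and round-trip time is captured by the classic Mathis et al. approximation for Reno-style TCP throughput. It's an upper bound rather than an exact model (modern stacks like CUBIC and BBR behave differently), but the shape, throughput collapsing as loss rises, holds:

```python
import math

def tcp_throughput_bps(loss_rate: float, rtt_seconds: float,
                       mss_bytes: int = 1460) -> float:
    """Mathis et al. upper bound on steady-state TCP throughput:

        throughput <= (MSS / RTT) * (C / sqrt(p)),  C ~= 1.22

    An approximation for Reno-style congestion control, not a
    prediction for any particular modern stack.
    """
    C = 1.22
    return (mss_bytes * 8 / rtt_seconds) * (C / math.sqrt(loss_rate))

# Even 1% loss on a 50 ms path caps TCP near 3 Mbps,
# no matter how fast the underlying link is:
print(round(tcp_throughput_bps(0.01, 0.05) / 1e6, 2))  # ~2.85 Mbps
```

Note the formula's shape: halving the loss rate only buys you a factor of sqrt(2) in throughput, while the loss-triggered stalls remain just as jarring.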

What it feels like in your app

Intermittent, unpredictable failures. Requests that sometimes work and sometimes don't. A download that progresses smoothly, then stalls for several seconds, then resumes. A page that loads instantly on one attempt and takes 10 seconds on the next. The randomness is the defining characteristic — unlike latency or bandwidth, which degrade the connection consistently, packet loss makes the connection unreliable.

Why it's the hardest to test without a tool

You can simulate low bandwidth by tethering to a slow connection or moving far from a WiFi router. You can roughly simulate latency with a VPN to a distant server. But packet loss is inherently random, and you can't reliably reproduce specific percentages without a tool that drops packets deliberately. This is a big part of why packet-loss bugs go untested — there's no natural way to create the conditions.

Bugs it exposes

Retry logic that gives up after one failure, or retries immediately in a tight loop. Error handling that treats a transient drop as a permanent failure. WebSocket connections that die silently and never reconnect. Downloads and uploads that cannot resume and restart from zero after a stall.

Real-world scenarios

Crowded WiFi in cafés, airports, and conference venues; cellular at the edge of coverage; interference-heavy apartment buildings; walking or driving between access points and cell towers.

When to use it

Test packet loss when you're working on retry logic, error recovery, streaming connections, WebSockets, or any feature that needs to handle intermittent failures gracefully. It's the right parameter for answering: "What happens when the connection is unreliable?"
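A common starting point for that retry logic is capped exponential backoff with jitter. The sketch below is one illustrative shape, not the only valid design:

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5,
                       base_delay=0.5, max_delay=8.0):
    """Retry a flaky operation with capped exponential backoff and jitter.

    Jitter avoids thundering-herd retries when many clients fail at once.
    A sketch: production code should also distinguish retryable errors
    (timeouts, connection resets) from permanent ones (4xx responses).
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))

# Simulate a request that fails twice before succeeding:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("packet loss ate the response")
    return "ok"

print(retry_with_backoff(flaky))  # ok (after 2 retries)
```

With a throttling tool set to a few percent loss, you can watch this path actually execute instead of hoping it works the first time a user hits it.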

DNS delay

DNS delay is the time added to domain name resolution — the lookup that converts a hostname like api.example.com into an IP address. It's the most targeted parameter, affecting only the first request to each unique domain.

What happens at the network level

Before your app can connect to a server, it needs to resolve the hostname. Normally this takes a few milliseconds and the result is cached for subsequent requests. DNS delay extends this resolution time, simulating conditions where the DNS server is slow, overloaded, or far away.

The effect is front-loaded: the first request to a domain pays the full DNS delay, but subsequent requests to the same domain use the cached result and aren't affected. If your app talks to one API domain, DNS delay hits you once. If your page loads resources from 15 different domains — your API, your CDN, your analytics provider, your font service, your error tracker — DNS delay hits you 15 times.
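That front-loaded behavior is straightforward to model. The toy resolver below injects a fake lookup function (standing in for something like `socket.getaddrinfo`) so the example runs without network access:

```python
import time

class CachingResolver:
    """Toy resolver cache showing why DNS delay is front-loaded.

    resolve_fn stands in for a real lookup; it is injected here so
    the example runs without touching the network.
    """
    def __init__(self, resolve_fn, ttl_seconds=300):
        self._resolve = resolve_fn
        self._ttl = ttl_seconds
        self._cache = {}  # hostname -> (ip, expires_at)

    def lookup(self, hostname):
        hit = self._cache.get(hostname)
        if hit and hit[1] > time.monotonic():
            return hit[0]                    # warm: no DNS delay
        ip = self._resolve(hostname)         # cold: pays the full delay
        self._cache[hostname] = (ip, time.monotonic() + self._ttl)
        return ip

lookups = []
def slow_dns(host):
    lookups.append(host)   # in reality, this is the slow network hop
    return "192.0.2.1"     # documentation-range IP, stands in for a result

r = CachingResolver(slow_dns)
for _ in range(5):
    r.lookup("api.example.com")
print(len(lookups))  # 1 -- only the first request paid the DNS cost
```

Real resolver caches live in the OS and the browser, not your app, but the access pattern is the same: the cost lands on the first hit to each hostname and then disappears until the TTL expires.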

What it feels like in your app

Cold starts are slow. The first time a user opens your app or visits your site after a fresh DNS cache, every domain needs to be resolved. On a fast connection this is invisible. With DNS delay, it adds a noticeable pause before anything loads. But once the app is running and DNS entries are cached, everything feels normal.

Bugs it exposes

Cold starts that blow past splash-screen or watchdog timeouts. Pages that block first render on a slow-to-resolve third-party domain. Missing dns-prefetch or preconnect hints for domains you know you will need.

Real-world scenarios

DNS delay is more common than most developers realize. Users on ISP-provided DNS servers — which is the default for most people — can experience 50–200ms DNS lookups, especially for less common domains. Users on congested networks, VPNs, or networks with aggressive DNS filtering see even higher delays. Corporate networks with DNS-based security filtering can add 100ms+ to every new domain lookup.

When to use it

Test DNS delay when you're working on cold start performance, pages that load resources from many domains, or apps that talk to multiple API endpoints on different hostnames. It's the right parameter for answering: "What happens when DNS resolution is slow?"

Combining parameters for real-world conditions

In the real world, network degradation never comes as a single isolated parameter. A 3G connection has low bandwidth and high latency. Café WiFi has variable bandwidth and packet loss. A satellite connection has high bandwidth but extreme latency. Testing each parameter in isolation helps you understand what's happening, but testing combinations is what actually simulates what your users experience.

Common combinations

| Profile | Download | Upload | Latency | Packet loss | Notes |
| --- | --- | --- | --- | --- | --- |
| 3G | 750 Kbps | 250 Kbps | 100ms | — | Standard baseline for mobile testing |
| Lossy café WiFi | 2 Mbps | 1 Mbps | 50ms | 2% | Crowded coffee shop or coworking space |
| High-latency satellite | 10 Mbps | 2 Mbps | 550ms | — | High throughput but painful round trips |
| Congested cellular | 1 Mbps | 500 Kbps | 200ms | 3% | Edge-of-coverage mobile |
| Edge (2G) | 240 Kbps | 200 Kbps | 300ms | 1% | Worst common case, still found in developing markets |
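To see how the parameters interact, a rough first-load model (DNS once per new domain, one round trip per sequential request, plus raw transfer time) can be sketched; every constant here is an illustrative assumption:

```python
def load_time_seconds(total_bytes, download_kbps, rtt_ms,
                      sequential_requests=10, new_domains=3, dns_ms=50):
    """Crude first-load estimate: DNS per new domain, one round trip
    per sequential request, plus raw transfer time. Ignores loss, TLS,
    and parallelism, so treat the output as a rough lower bound."""
    dns = new_domains * dns_ms / 1000
    round_trips = sequential_requests * rtt_ms / 1000
    transfer = (total_bytes * 8) / (download_kbps * 1000)
    return dns + round_trips + transfer

one_mb = 1_000_000
# 3G: transfer time dominates (~12 s total)
print(f"3G: {load_time_seconds(one_mb, 750, 100):.1f}s")
# Satellite: round trips dominate even though the pipe is fast
print(f"satellite: {load_time_seconds(one_mb, 10_000, 550):.1f}s")
```

The two profiles land in the same rough ballpark for completely different reasons, which is exactly why a bandwidth-only test can pass while a latency-heavy one fails.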

How to approach it

Start by testing each parameter individually to understand which type of degradation your app is most vulnerable to. Then test the combinations that match your users' actual conditions. If your users are primarily on mobile, focus on 3G and lossy cellular. If they're on desktop, focus on congested WiFi and high-latency connections. The slow network simulation guide has more detail on matching profiles to scenarios.

Network Throttler lets you set all five parameters independently — or use built-in profiles that combine them for realistic conditions. One click to enable, auto-disable timer so you don't forget. Download it here.

You might also find this useful: How to Debug Network Issues in Your macOS App

See also: Building a Network Testing Habit