TL;DR
Time to First Byte (TTFB) is the latency between the moment your browser sends an HTTP request and the moment it receives the very first byte of the response. It’s a composite measurement covering DNS lookup, TCP handshake, TLS handshake, the server’s request-processing time, and the network round-trip back to your browser.
Good TTFB values:
- Excellent: under 200ms
- Good: 200-500ms
- Mediocre: 500-1000ms (Google considers anything over 800ms “needs improvement” for Core Web Vitals)
- Bad: over 1000ms
If a site’s TTFB is consistently over 1500ms, the user feels it as “this site is slow.” If it suddenly jumps from 300ms to 3000ms, that’s almost always a sign of an outage in progress.
What TTFB actually measures — every layer
TTFB is sometimes misunderstood as “server response time”, but it’s actually a sum of several distinct stages, each of which can fail or slow down independently:
│ 1. DNS lookup (10–200ms typical)
│ Resolves domain to IP address
│
│ 2. TCP connection (1 round trip)
│ SYN, SYN-ACK, ACK to establish the connection
│
│ 3. TLS handshake (1–2 round trips for HTTPS)
│ Certificate exchange and key negotiation
│
│ 4. HTTP request transmission (small, typically <10ms)
│ Headers and any request body sent
│
│ 5. Server processing time (highly variable)
│ App generates the response: DB queries, template rendering, API calls
│
│ 6. First byte of response sent back across the network (latency-bound)
│
└─► TTFB = total time from "request sent" to "first byte received"
Add round-trip latency on top: a user 100ms away from the server (a typical transcontinental round trip) starts at a baseline of ~250ms before any server work happens, just from DNS + TCP + TLS. Adding 200ms for server processing puts you at 450ms. That's why "good" TTFB starts at around 200ms: below that, the laws of physics start to push back.
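That arithmetic can be sketched as a quick back-of-the-envelope helper (the function name and the 50ms DNS figure are illustrative assumptions; TLS 1.3 needs one round trip for its handshake, TLS 1.2 needs two):

```python
def estimate_ttfb_ms(rtt_ms, server_ms, dns_ms=50, tls_round_trips=1):
    """Rough lower bound on TTFB: DNS + TCP handshake + TLS handshake + server time.

    Assumes TLS 1.3 (one round trip); pass tls_round_trips=2 for TLS 1.2.
    """
    handshake_ms = rtt_ms * (1 + tls_round_trips)  # 1 RTT for TCP, 1-2 for TLS
    return dns_ms + handshake_ms + server_ms

# A user 100ms from the origin with 200ms of server-side work:
print(estimate_ttfb_ms(rtt_ms=100, server_ms=200))  # 450
```

Note how the server's 200ms is less than half the total: the rest is pure network overhead that no amount of application tuning can remove.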
What’s a “good” TTFB?
Numbers in context:
- Static cached page from a CDN edge node 50ms away from you: 80-150ms TTFB. This is the floor.
- Database-backed dynamic page on a well-tuned origin: 200-400ms.
- Same-region request to a WordPress blog: 300-700ms.
- Cross-continental request to a heavily database-bound page: 500-1500ms.
- Page with cold database queries, several internal API calls, and no caching: 1500-5000ms. Users feel this clearly.
- Anything over 5000ms: usually a stuck request that will eventually time out.
Google recommends TTFB under 800ms as a baseline target. TTFB is not itself a Core Web Vital, but 800ms is roughly the bar at which the rest of the page-load pipeline (LCP, INP, CLS) can still be optimized to reach Google's "good" thresholds. Faster is always better.
Why TTFB matters for SEO and conversion
Two reasons:
- It’s a Core Web Vital input. Largest Contentful Paint (LCP) is heavily affected by TTFB — if the server takes 2 seconds to send the first byte, the LCP can’t possibly fire until at least 2 seconds in. Google uses LCP as a ranking signal.
- It's a perceptual signal. Users abandon slow sites. Industry studies suggest conversion rates drop roughly 7% for every additional 100ms of load time, and TTFB is a large slice of that.
A site with a 3-second TTFB doesn’t just feel slow — it ranks worse and converts less.
Why TTFB suddenly spikes during an outage
When a site is overloaded, every layer that contributes to TTFB starts queueing. Here’s the typical pattern of a slow-rolling outage:
Baseline: DNS 30ms │ TCP 60ms │ TLS 90ms │ Server 200ms │ ► TTFB 380ms
Load 50%: DNS 30ms │ TCP 60ms │ TLS 90ms │ Server 400ms │ ► TTFB 580ms
Load 80%: DNS 30ms │ TCP 80ms │ TLS 110ms│ Server 1200ms│ ► TTFB 1420ms
Load 95%: DNS 30ms │ TCP 200ms│ TLS 350ms│ Server 4000ms│ ► TTFB 4580ms
Load 100%: ──────── TIMEOUT, request never returns ────────
The pattern matters because it gives operators a head start. If you’re watching TTFB and it’s been rising steadily, you have minutes before it hits the timeout cliff. Our monitoring infrastructure records latency on every check, so a sustained TTFB rise on a /check/{domain} page often precedes a confirmed outage by 5-15 minutes.
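The nonlinear blow-up in that table is what basic queueing theory predicts: in the simple M/M/1 model, mean response time is service time divided by (1 - utilization), so latency merely doubles at 50% load but explodes past 90%. A sketch of that model, using the 200ms baseline server time from the table (the model is an illustrative simplification; real servers queue in messier ways):

```python
def mm1_response_ms(service_ms, utilization):
    """Mean response time of an M/M/1 queue: service / (1 - utilization).

    At 100% utilization the queue grows without bound -- the timeout cliff.
    """
    if utilization >= 1:
        return float("inf")
    return service_ms / (1 - utilization)

for load in (0.0, 0.5, 0.8, 0.95):
    print(f"{load:.0%} load -> {mm1_response_ms(200, load):.0f}ms of server time")
```

The model's 400ms at 50% and 4000ms at 95% line up closely with the server column in the table above.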
How to measure TTFB
As a visitor, in your browser
- Chrome DevTools: open the Network tab, click any request, and look at the "Waiting for server response" (TTFB) line in the timing breakdown. DNS, TCP, and TLS show up as separate rows in the same panel.
- Firefox DevTools: Network tab, click a request, then “Timings” tab.
- Safari Web Inspector: Network tab, similar breakdown under “Timing”.
The browser’s measurement starts after DNS+TCP+TLS, so DevTools-reported TTFB is closer to “server time + transit” than the full request lifecycle.
As an operator, in your monitoring
Most APM tools (Datadog, New Relic, Sentry Performance) emit TTFB as a first-class metric. If you’re tracking only one performance metric, this is the one — it’s the broadest indicator of server health.
Using curl from the command line
The classic one-liner:
```shell
curl -s -o /dev/null \
  -w "DNS:     %{time_namelookup}s\nConnect: %{time_connect}s\nTLS:     %{time_appconnect}s\nTTFB:    %{time_starttransfer}s\nTotal:   %{time_total}s\n" \
  https://example.com
```
This breaks out each stage so you can see which layer is slow.
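One subtlety worth knowing: curl's time_* variables are cumulative from the start of the request, not per-stage, so the individual layers come from subtracting adjacent values. A small helper to do that (the function name and sample numbers are illustrative):

```python
def curl_stages_ms(namelookup, connect, appconnect, starttransfer, total):
    """Convert curl's cumulative -w timings (in seconds) into per-stage milliseconds.

    Each curl variable is elapsed time since the request began, so e.g.
    TCP handshake time is time_connect - time_namelookup.
    """
    return {
        "dns": namelookup * 1000,
        "tcp": (connect - namelookup) * 1000,
        "tls": (appconnect - connect) * 1000,
        "server+transit": (starttransfer - appconnect) * 1000,
        "body": (total - starttransfer) * 1000,
    }

# Sample values from a healthy same-region request:
print(curl_stages_ms(0.030, 0.090, 0.180, 0.380, 0.410))
```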
How a website operator improves TTFB
In rough order of impact-per-effort:
1. Cache aggressively
Static assets behind a CDN. Database-backed pages cached at the application layer (Redis, Memcached) for at least 60 seconds. Even a 1-minute cache can cut average TTFB in half on read-heavy sites, because most requests never touch the database at all.
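The application-layer caching in step 1 is usually the cache-aside pattern. A minimal sketch with an in-process dict standing in for Redis (the function names and the 60-second TTL are illustrative assumptions):

```python
import time

_cache = {}  # key -> (expires_at, rendered_value)

def cached_render(key, render_fn, ttl_seconds=60):
    """Cache-aside with a TTL: serve from cache while fresh, else render once.

    On a hit, the expensive render_fn (DB queries, templating) is skipped
    entirely -- that skipped work is the TTFB saving.
    """
    now = time.monotonic()
    entry = _cache.get(key)
    if entry is not None and entry[0] > now:
        return entry[1]  # fresh hit: no server-side work
    value = render_fn()  # miss or expired: pay the full cost once
    _cache[key] = (now + ttl_seconds, value)
    return value
```

With Redis or Memcached the shape is identical; the dict just becomes a shared store, so every app process benefits from the same cached render.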
2. Use a CDN with edge presence near your users
Cloudflare, Akamai, Fastly, AWS CloudFront. The edge caches your static and (with some configuration) dynamic content. A user in Europe hitting your origin in Virginia drops from 150ms transcontinental latency to 20ms regional.
3. Optimize the slowest database queries
Run EXPLAIN ANALYZE on the queries that show up in your APM. Add the missing indexes. Most “the site got slow” incidents trace back to one or two queries that were always slow but only became user-visible at scale.
4. Move to faster hosting
Shared hosting → VPS → dedicated → cloud with a good network. Each step typically brings a substantial TTFB reduction, though the exact gain depends on the workload.
5. Use HTTP/2 or HTTP/3
Both cut per-request overhead by multiplexing many requests over one long-lived connection, so most requests skip the TCP and TLS handshakes entirely. HTTP/3 goes further: it runs over QUIC, which replaces TCP and folds the transport and TLS handshakes into a single round trip.
6. Compress responses
gzip or brotli. Smaller responses transfer faster, but more importantly, the server-side compression cost is usually outweighed by the network-time savings.
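The tradeoff in step 6 is easy to sanity-check with the standard library: repetitive HTML compresses severalfold, and those saved bytes come straight off the transfer time (the sample payload below is an illustrative assumption):

```python
import gzip

html = b"<html><body>" + b'<div class="row">Hello, TTFB readers!</div>' * 200 + b"</body></html>"
compressed = gzip.compress(html, compresslevel=6)

print(f"{len(html)} bytes -> {len(compressed)} bytes "
      f"({len(html) / len(compressed):.0f}x smaller)")
```

Brotli typically does a few percent better than gzip on text at similar CPU cost, which is why CDNs often prefer it when the client advertises support.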
7. Reduce template-rendering complexity
Some frameworks render very slowly under load. Profile your render times and optimize the slowest templates. Sometimes the answer is server-side rendering with caching; sometimes it’s static-site generation.
When TTFB diagnoses an outage
If you’re seeing rising TTFB but no error responses yet, the failure modes typically progress like this:
- TTFB rising, no errors yet — site is overloaded, queueing has started. Action: scale up before the cliff.
- TTFB very high (5+ seconds), some 503s — load shedding has begun. Action: identify whether it’s a specific endpoint or site-wide.
- Mix of 503s and 504s — some requests are being shed (503), some are timing out (504). Action: the queue is overflowing. Auto-scaling or emergency capacity needed immediately.
- 502s start appearing — backend is crashing under load. Action: the auto-scale didn’t keep up. Reduce traffic via WAF rules or rate-limiting if scaling can’t catch up.
- Requests time out entirely — full outage. Action: the site is functionally down for users.
Our /check/{domain} pages show the response-time chart for every monitored site. A sustained TTFB rise on the chart is a leading indicator that a real outage is coming, even before the status flips from green to yellow.
Related concepts
- HTTP 502 Bad Gateway — what happens when TTFB grows past the gateway’s timeout.
- HTTP 503 Service Unavailable — what overloaded servers return to protect themselves before TTFB hits the timeout.
- DNS resolution — slow DNS adds directly to user-perceived TTFB.
- SSL/TLS errors — handshake failures show up in TTFB diagnostics.
To see TTFB in context for any monitored site, search our homepage for the domain — every status page includes a 24-hour response-time chart. For a global view of sites currently showing latency anomalies, check the live outages feed.