Where's the fastest place to put my server? How much does it matter?

Using my own web server accesslogs and public latency data to get a quantitative answer, and a look at why roundtrips are such a pain.


As network latencies grow, strange things can happen: "fat" sites can become fast (especially if served completely from CDN) and "thin" sites that use APIs can become slow. A typical latency for a desktop/laptop user is 200ms; for a 4G mobile user, 300-400ms.

I've assumed 40 megabit bandwidth, TLS, latency to CDN of 40ms and no existing connections.

"Origin" here means the primary webserver (as opposed to "edge" CDN caches).

What's the fastest place to put my server? Beyond the time servers take to respond to requests, it takes time just to traverse the internet: just to get a packet from A to B.

To estimate the theoretically best physical place to put my own server, I've combined publicly available data on latencies with my own web server accesslogs. I'm aiming for a rough, quantitative answer that's based on a real data set.

Why location matters

Time taken to traverse the internet is added to the time taken to respond to a request. Even if your API can respond to a request in 1ms, if the user is in London and your API server is in California the user still has to wait ~130 milliseconds for the response.

It's a bit worse than just 130 milliseconds. Depending on what a user is doing they may end up making a number of those roundtrips. To download a web page usually requires five full roundtrips: one to resolve the domain name via DNS, one to establish the TCP connection, two more to set up an encrypted session with TLS and one, finally, for the page you wanted in the first place.

Subsequent requests can (but don't always) reuse the DNS, TCP and TLS setup but a new roundtrip is still needed each time the server is consulted, for example for an API call or a new page.
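
You can see these roundtrips for yourself. Here's a minimal sketch that times each setup stage of a fresh HTTPS request (example.com is a stand-in for whatever host you want to measure):

```python
import socket
import ssl
import time

host = "example.com"  # stand-in: any HTTPS host you want to measure

def stage(label: str, start: float) -> None:
    print(f"{label:<6}{(time.perf_counter() - start) * 1000:7.1f} ms")

t = time.perf_counter()
ip = socket.gethostbyname(host)                    # roundtrip 1: DNS
stage("DNS", t)

t = time.perf_counter()
sock = socket.create_connection((ip, 443))         # roundtrip 2: TCP handshake
stage("TCP", t)

t = time.perf_counter()
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=host)  # roundtrips 3-4: TLS (two on TLS 1.2)
stage("TLS", t)

t = time.perf_counter()
tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
tls.recv(1024)                                     # roundtrip 5: the actual request
stage("HTTP", t)
tls.close()
```

Run it against a distant server and you'll see roughly the same roundtrip time charged four or five times over.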

130ms sounded fast at first, but the rigmarole of getting a page and then making a couple of API calls can easily eat most of a second just waiting on the network: five roundtrips for the page plus one for each API call makes seven, and seven roundtrips at 130ms each is over 900ms. All the other time required - for the server to decide what response to send, for downloading the thing and for rendering whatever it is in your browser - is all extra.

The two kinds of "fast" for networks

One of the confusing things about networking is the unspecific way in which people talk of getting "faster" networking: "faster" residential broadband, for example, or "fast ethernet" (100 megabits per second, no longer impressive).

This kind of "faster" is not in fact talking about speed. Greater speed would be reduced latency - so faster roundtrips. Instead "faster" networking is really about greater bandwidth: more bytes per second.
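
A back-of-envelope calculation shows the difference. Assuming (my numbers, purely for illustration) a 500KB page, the five 130ms roundtrips from above and ever more bandwidth:

```python
# More bandwidth shrinks transfer time but leaves the roundtrips untouched.
PAGE_BYTES = 500 * 1024   # assumed page weight, for illustration
ROUNDTRIPS_MS = 5 * 130   # five roundtrips at 130ms each (see above)

for mbit in (10, 100, 1000):
    transfer_ms = PAGE_BYTES * 8 / (mbit * 1_000_000) * 1000
    print(f"{mbit:>4} Mbit/s: {transfer_ms:5.1f}ms transferring + "
          f"{ROUNDTRIPS_MS}ms waiting = {ROUNDTRIPS_MS + transfer_ms:6.1f}ms")
```

Going from 10 to 100 Mbit/s saves about 370ms here; going from 100 to 1000 saves less than 40ms. The waiting stays put.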

APIs or CDNs

One thing that does make things faster: a Content Distribution Network (or CDN). Instead of going all the way to California, perhaps you can retrieve some of the web page from a cache in central London. Doing this saves time - perhaps taking just 50 milliseconds, a saving of 60%. Caches work great for CSS files, images and javascript - stuff that doesn't change for each user. They don't work as well for API calls, where the response is different for each user and, sometimes, each time.
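
You can often tell whether a response came from a CDN cache by looking at the response headers. A sketch (Age is a standard header; X-Cache is common but nonstandard and varies by CDN; example.com is a stand-in):

```python
import requests

resp = requests.get("https://example.com/style.css")  # stand-in for a static asset
# Age says how many seconds the response has sat in a cache;
# X-Cache, where present, typically says HIT or MISS.
print(resp.headers.get("Age"), resp.headers.get("X-Cache"))
```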

A quantitative approach

A happy few can serve everything from their CDN. News sites, for example, show the exact same thing to everyone. Others are less lucky and can make only limited, or no, use of caching. These poor people have to pick a location for their main server to help them get their bytes to the users who want them as fast as possible. If they want to make that choice with the sole aim of reducing latency, where should they pick?

Here's what I did:

  1. I took my own accesslogs for a two week period in September just after I'd published something new. I got about a million requests during this period from 143k unique IPs. I excluded obvious robots (which was ~10% of requests).
  2. I used Maxmind's GeoIP database to geocode each IP address in those accesslogs to geographic co-ordinates.
  3. I then used WonderNetwork's published latency data for internet latencies between ~240 world cities.
  4. I mapped those cities (semi-manually, which was pretty painful) from their names to Geonames ids - which gave me co-ordinates for the cities.
  5. Then I loaded all of the above into a Postgres database with the PostGIS extension installed so I could do geographical queries.
  6. I queried to estimate how long, by percentile, requests would have taken if I'd had my server in each of those cities - roughly the query sketched below.
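
For the curious, step 6 came down to a query along these lines - a simplified sketch, with tidied-up stand-ins for the real table and column names:

```python
# Assumed (hypothetical) schema:
#   requests(ip inet, geom geometry)   -- geocoded accesslog entries
#   cities(name text, geom geometry)   -- WonderNetwork city locations
#   pings(from_city text, to_city text, latency_ms float)
import psycopg2

conn = psycopg2.connect("dbname=latency")
with conn, conn.cursor() as cur:
    cur.execute("""
        WITH nearest AS (
            -- match each request to its nearest WonderNetwork city
            SELECT r.ip,
                   (SELECT c.name
                      FROM cities c
                     ORDER BY c.geom <-> r.geom  -- PostGIS nearest-neighbour
                     LIMIT 1) AS user_city
              FROM requests r
        )
        SELECT p.to_city,
               percentile_cont(0.5)  WITHIN GROUP (ORDER BY p.latency_ms) AS p50,
               percentile_cont(0.75) WITHIN GROUP (ORDER BY p.latency_ms) AS p75,
               percentile_cont(0.99) WITHIN GROUP (ORDER BY p.latency_ms) AS p99
          FROM nearest n
          JOIN pings p ON p.from_city = n.user_city
         GROUP BY p.to_city
         ORDER BY p99
    """)
    for city, p50, p75, p99 in cur.fetchall():
        print(f"{city:<20}{p50:6.0f}{p75:6.0f}{p99:6.0f}")
```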

The results

In the table below I've recorded the outcome: how long users would take to complete a single roundtrip to my server if it were in each city. I've done this by percentiles, so you have the median (p50), the 75th percentile (p75) and the worst 1% of requests (p99).

All numbers are in milliseconds.


I've included a bit of Javascript in this page, so you can click on the headings to sort.

City p50 p75 p99
Manhattan 74 97 238
Detroit 89 115 245
Secaucus 71 96 246
Piscataway 75 98 251
Washington 82 105 253
Chicago 90 121 253
Kansas City 98 130 254
Indianapolis 96 125 254
St Louis 96 127 256
Cincinnati 92 121 257
Houston 104 134 257
Syracuse 77 102 257
Scranton 78 103 258
Quebec City 83 113 259
South Bend 92 118 259
Montreal 83 104 259
Charlotte 91 110 259
Salem 74 98 259
Buffalo 80 111 259
Albany 75 100 260
Monticello 94 123 260
Baltimore 80 105 260
Asheville 95 118 260
New York 77 103 261
Berkeley Springs 84 112 261
Minneapolis 102 133 261
Barcelona 102 148 261
Dallas 112 140 262
Des Moines 104 131 262
San Jose 139 165 263
Brunswick 77 101 264
Atlanta 88 113 264
San Francisco 136 168 264
Halifax 80 102 265
Philadelphia 77 100 266
Basel 97 146 267
Green Bay 103 131 267
Pittsburgh 88 117 267
Bern 99 147 267
Denver 112 141 267
Miami 103 129 267
Raleigh 88 111 268
Knoxville 114 135 268
Boston 77 105 268
Valencia 108 148 268
Jackson 105 132 268
Memphis 101 131 268
Jacksonville 95 122 268
Madrid 95 138 268
London 76 130 268
San Diego 138 162 269
San Antonio 112 138 269
Salt Lake City 120 151 269
Toronto 87 111 269
Cleveland 97 122 269
Austin 113 141 270
Colorado Springs 110 136 270
Orlando 103 126 270
Antwerp 93 137 271
Oklahoma City 114 147 271
Saskatoon 115 140 272
Lansing 98 127 272
Seattle 141 164 272
Columbus 92 120 273
Bristol 76 129 274
Tampa 104 130 274
Lausanne 95 139 274
Ottawa 85 111 274
Falkenstein 91 137 275
Maidstone 76 129 275
Paris 80 129 275
Toledo 102 129 275
Savannah 117 146 276
The Hague 82 138 276
Liege 87 136 277
Lincoln 100 124 277
New Orleans 115 142 278
Amsterdam 82 140 278
Las Vegas 136 163 279
Vienna 102 149 279
Coventry 80 132 279
Cromwell 80 106 280
Arezzo 109 160 280
Cheltenham 79 131 280
Sacramento 137 167 280
Alblasserdam 82 137 281
Vancouver 142 165 281
Fremont 131 157 283
Gosport 76 137 284
Frankfurt 93 136 284
Carlow 88 136 285
Phoenix 128 153 285
Portland 132 159 285
Cardiff 78 131 285
Luxembourg 87 137 285
Bruges 83 135 285
Eindhoven 85 133 285
Groningen 87 139 286
Manchester 80 137 286
Brussels 90 139 287
Brno 106 148 287
Edinburgh 84 136 287
Nuremberg 89 136 288
Albuquerque 125 159 289
Los Angeles 141 164 289
Ljubljana 110 152 289
Lugano 97 147 290
Zurich 103 146 290
Dronten 84 133 290
Newcastle 87 147 290
Rome 96 147 291
Dusseldorf 90 140 291
Munich 98 144 291
Venice 106 156 292
Edmonton 139 165 292
Copenhagen 96 145 292
St Petersburg 113 163 293
Dublin 85 143 293
Redding 142 178 293
Vilnius 110 162 293
Belfast 79 125 294
Nis 113 158 294
Douglas 87 143 294
Rotterdam 82 139 295
Bergen 107 157 295
Strasbourg 89 141 295
Roseburg 148 172 296
Graz 104 147 296
San Juan 117 141 298
Warsaw 108 161 299
Frosinone 105 153 299
Riyadh 159 206 300
Prague 103 152 301
Ktis 102 158 302
Mexico 139 164 302
Belgrade 113 160 302
Guadalajara 128 155 303
Milan 96 146 305
Bratislava 102 154 306
Osaka 181 240 307
Zagreb 103 150 308
Tallinn 108 162 308
Helsinki 105 156 308
Hamburg 127 166 309
Oslo 98 153 311
Bucharest 120 162 311
Riga 113 159 312
Panama 150 177 313
Tokyo 188 238 313
Kiev 119 168 313
Stockholm 102 153 314
Budapest 110 162 314
Kharkiv 128 169 315
Gothenburg 115 167 316
Pristina 122 167 316
Tirana 128 184 316
Geneva 96 142 316
Siauliai 113 163 317
Cairo 133 182 318
Sapporo 196 255 318
Bogota 170 188 319
Palermo 119 183 320
Gdansk 107 152 320
Caracas 149 176 320
Sofia 114 161 321
Westpoort 79 134 321
Honolulu 173 196 321
Roubaix 102 157 321
Kazan 138 190 322
Winnipeg 169 190 322
Varna 120 173 322
Tel Aviv 138 194 322
Lisbon 115 166 324
Jerusalem 145 198 324
Ankara 139 195 327
Heredia 164 188 327
Athens 128 183 329
Reykjavik 127 180 329
Paramaribo 166 194 330
Algiers 120 173 332
Chisinau 127 180 333
Bursa 135 188 334
Thessaloniki 134 187 336
Limassol 141 186 337
Lyon 95 145 340
Mumbai 204 248 340
Medellin 163 186 344
Valletta 120 176 345
Baku 160 205 346
Melbourne 227 269 346
Fez 149 198 348
Tunis 124 180 348
Koto 217 254 348
Dubai 192 243 350
Tbilisi 153 208 351
Malaysia 195 235 352
Hyderabad 214 260 354
Bangalore 212 252 355
Izmir 137 187 357
Adelaide 241 272 359
Chennai 221 248 359
Moscow 127 172 359
Lahore 217 270 361
Novosibirsk 163 206 362
Sydney 237 272 363
Karaganda 180 231 363
Vladivostok 223 264 364
Taipei 265 293 364
Lima 169 199 364
Istanbul 135 182 366
Hong Kong 199 223 366
Auckland 244 291 367
Jakarta 207 245 368
Seoul 231 277 371
Beirut 136 195 372
Accra 168 216 373
Singapore 190 246 374
Sao Paulo 193 213 375
Joao Pessoa 182 220 378
Perth 243 267 379
Ho Chi Minh City 253 287 380
Wellington 251 295 383
Brasilia 226 249 384
Manila 251 281 385
Pune 202 251 386
Dhaka 231 268 386
Phnom Penh 243 267 386
Santiago 202 230 390
Lagos 191 233 391
Quito 162 188 392
New Delhi 230 264 395
Johannesburg 237 283 398
Bangkok 222 254 401
Canberra 262 295 402
Dar es Salaam 214 267 407
Dagupan 239 268 408
Christchurch 257 309 409
Hanoi 235 264 415
Cape Town 216 262 417
Buenos Aires 232 253 417
Guatemala 217 249 418
Brisbane 261 288 422
Indore 304 352 457
Zhangjiakou 236 264 457
Nairobi 233 277 468
Kampala 244 287 480
Hangzhou 239 267 517
Shenzhen 242 275 523
Shanghai 300 367 551
Montevideo 738 775 902

You can also download the full results as a csv, if that's easier.

The result: east coast of North America good, right on the Atlantic better

The best places are all in North America, which is probably not a total surprise given that it's a pretty dense cluster of English speakers with another cluster not all that far away (in latency terms) in the UK/ROI and then a lot of English-as-a-second-language speakers in Europe. Being right on the Atlantic is best of all: New Jersey and New York state have many of the best places for p99 and it doesn't vary too much, at the top, between p50 and p99.

If you're wondering why small New Jersey towns like Secaucus and Piscataway are so well connected - they have big data centres used by America's financial sector.

As it stands, my server is currently in Helsinki. That's because, unusually for Finland, it was the cheapest option. I only pay about three quid a month for this server. If I moved it to somewhere in New Jersey, and spent more, users would definitely save time in aggregate: half of roundtrips would be completed in 75ms rather than 105ms, a saving of 30%. Over several roundtrips that would probably mount up to around a sixth of a second (30ms saved on each of roughly five roundtrips) off the average first-time page load, which is not too bad. In case you can't tell, this website isn't hugely taxing for web browsers to render, so cuts in the network wait time would make it considerably quicker.

Since I don't dynamically generate anything on this site, the truth is that I'd be best off with a CDN. That would really save a lot of time for everyone: it's nearly twice as fast to be served from a CDN (~40ms) as to be in the fastest place (71ms).

How this might change over time

Latencies aren't fixed and they might improve over time. Here's a table of roundtrip latencies from London to other world cities with more than 5 million people, compared with the theoretical best: a straight-line roundtrip at the speed of light.

City Distance (km) Real roundtrip (ms) Lightspeed roundtrip (ms) Slowdown factor
New York 5,585 71 37 1.9
Lima 10,160 162 68 2.4
Jakarta 11,719 194 78 2.5
Cairo 3,513 60 23 2.6
St Petersburg 2,105 38 14 2.7
Bangalore 8,041 144 54 2.7
Bogota 8,500 160 57 2.8
Buenos Aires 11,103 220 74 3.0
Lagos 5,006 99 33 3.0
Moscow 2,508 51 17 3.0
Sao Paulo 9,473 193 63 3.1
Bangkok 9,543 213 64 3.3
Hong Kong 9,644 221 64 3.4
Istanbul 2,504 60 17 3.6
Lahore 6,298 151 42 3.6
Tokyo 9,582 239 64 3.7
Hangzhou 9,237 232 62 3.8
Shanghai 9,217 241 61 3.9
Mumbai 7,200 190 48 4.0
Taipei 9,800 268 65 4.1
Dhaka 8,017 229 53 4.3
Seoul 8,880 269 59 4.5

(Please note, a correction: the above table previously compared real roundtrips with theoretical straight-line journeys - this has now been corrected. See these two comments for discussion, including how part of the slowdown is due to the nature of fibre optic cables and submarine cable curvature.)

As you can see, New York's latency is within a factor of 2 of lightspeed, but routes to other places like Dhaka and Seoul are much slower: more than 4 times what light would take. There are probably understandable reasons why the London to New York route has been so well optimised, though I doubt it hurts that it's mostly ocean between them, so undersea cables can run fairly directly. Getting to Seoul or Dhaka involves a more circuitous route.
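
The arithmetic behind the table is simple enough to check or extend yourself:

```python
# Slowdown factor: measured roundtrip vs a straight-line roundtrip at c.
C_KM_PER_S = 299_792  # speed of light in a vacuum

def slowdown(distance_km: float, real_roundtrip_ms: float) -> float:
    lightspeed_ms = 2 * distance_km / C_KM_PER_S * 1000
    return real_roundtrip_ms / lightspeed_ms

print(f"{slowdown(5_585, 71):.1f}")   # London to New York: 1.9
print(f"{slowdown(8_880, 269):.1f}")  # London to Seoul: 4.5
```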

I should probably mention that new protocols promise to reduce the number of roundtrips. TLS 1.3 can create an encrypted session with one roundtrip rather than two, and HTTP/3 can club together the connection roundtrip with the TLS one, meaning you now only need three: one for DNS, a single one for both a connection and an encrypted session, and finally a third for the subject of your request.
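
Whether you get the one-roundtrip handshake depends on both ends supporting TLS 1.3. A quick way to check what a given server negotiates (a sketch; example.com is a stand-in):

```python
import socket
import ssl

host = "example.com"  # stand-in: any HTTPS host
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```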

One false hope some people seem to have is that new protocols like HTTP/3 do away with the need for Javascript/CSS bundling. That is based on a misunderstanding: while HTTP/3 removes some initial roundtrips, it does not remove subsequent roundtrips for extra Javascript or CSS. So bundling is sadly here to stay.

Data weaknesses

While I think this is an interesting exercise - and hopefully indicative - I should be honest and say that the quality of the data I'm using is solidly in the "medium-to-poor" category.

Firstly, the GeoIP database's ability to predict the location of an IP address is mixed. Stated (ie: probably optimistic) accuracy ranges up to about 1000 kilometers in some cases, though for my dataset it thinks the average accuracy is 132km with a standard deviation of 276km - so not that accurate but I think still useful.
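
That stated accuracy is available per IP, if you want to inspect it for your own logs. A sketch using Maxmind's geoip2 Python library and their GeoLite2 City database file:

```python
import geoip2.database  # pip install geoip2

# GeoLite2-City.mmdb is Maxmind's freely downloadable database file
reader = geoip2.database.Reader("GeoLite2-City.mmdb")
resp = reader.city("8.8.8.8")  # well-known address for illustration; use IPs from your logs
loc = resp.location
print(loc.latitude, loc.longitude, loc.accuracy_radius)  # radius is in kilometres
```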

My source of latency data, WonderNetwork, is really reporting point-in-time latency from when I got it (30th November 2020) as opposed to long-term data. Sometimes the internet does go on the fritz in certain places.

WonderNetwork have a lot of stations but their coverage isn't perfect. In the West it's excellent - in the UK even secondary towns (like Coventry) are represented. Their coverage worldwide is still good but more mixed. They don't have a lot of locations in Africa or South America and some of the latencies in South East Asia seem odd: Hong Kong and Shenzhen are 140ms away from each other when they're only 50km apart - a slowdown factor, compared to the speed of light, of around four hundred times. Other mainland China pings are also strangely bad, though not on that scale. Perhaps the communists are inspecting each ICMP packet by hand?

The other problem with the latency data is that I don't have the true co-ordinates for the datacentres that the servers are in - I had to geocode that myself with some scripting and a lot of manual data entry in Excel (I've published that sheet on github to save anyone from having to redo it). I've tried hard to check these but there still might be mistakes.

By far the biggest weakness, though, is that I'm assuming that everyone is starting right from the centre of their closest city. This isn't true in practice and the bias this adds can vary. Here in the UK, residential internet access is a total hack based on sending high frequency signals over copper telephone lines. My own latency to other hosts in London is about 9ms - which sounds bad for such a short distance but is still 31ms better than average. Many consumer-level routers are not very good and add a lot of latency. The notorious bufferbloat problem is also a common source of latency, particularly affecting things that need a consistent latency level to work well - like videoconferencing and multiplayer computer games. Using a mobile phone network doesn't help either: 4G networks add circa 100ms of lag in good conditions and are of course much worse when the signal is poor and there are a lot of link-level retransmissions.

I did try adding a detour at the global average latency per kilometre (about 0.03ms/km) to compensate for users' distance from their closest city, but I found this just added noise to my results: for many IPs in my dataset the closest city I have isn't actually that close at all.

Generality

It's fair to wonder to what extent my results would change for a different site. It's hard to say but I suspect that the results would be approximately the same for other sites which are in English and don't have any special geographical component to them. This is because I reckon that people reading this blog are probably pretty uniformly distributed over the English speaking population of the world.

If I was writing in Russian or Italian the geographic base of readers would be pretty different and so the relative merits of different cities from a latency point of view would change.

It wasn't too hard for me to run this test and I've released all the little bits of code I wrote (mostly data loading and querying snippets), so you could rerun this on your own accesslogs without too much effort. Please write to me if you do that - I'd love to know what results you get.

Gratuitous roundtrips

Picking a good spot for your server only goes so far. Even in good cases you will still have nearly a hundred milliseconds of latency for each roundtrip. As I said above there can be as many as five roundtrips when you visit a page.

Having any unnecessary roundtrips will really slow things down. A single extra roundtrip would negate a fair chunk of the gains from putting your server in a fast place.

It's easy to add roundtrips accidentally. A particularly surprising source of roundtrips is cross-origin (CORS) preflight requests. For security reasons, to do with preventing cross-site attacks, browsers will "check" certain HTTP requests made from Javascript. This is done by sending a request to the same URL beforehand with the special OPTIONS verb; the response to this decides whether the original request is allowed. The rules for when exactly preflighting happens are complicated, but a surprising number of requests are caught in the net: notably JSON POSTs to subdomains (such as api.foo.com when you're on foo.com) and third party webfonts. CORS preflights also use a different set of caching headers from the rest of HTTP caching, which are rarely set correctly and anyway only apply to subsequent requests.
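
You can see a preflight from outside the browser by sending the OPTIONS request yourself. A sketch (api.example.com is a hypothetical endpoint):

```python
import requests

resp = requests.options(
    "https://api.example.com/things",  # hypothetical API endpoint
    headers={
        "Origin": "https://example.com",
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "content-type",
    },
)
# Access-Control-Max-Age is the preflight cache header; when it's missing,
# browsers only cache the preflight for a few seconds, so the extra
# roundtrip is paid again and again.
print(resp.status_code, resp.headers.get("Access-Control-Max-Age"))
```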

A lot of sites these days are written as "single page apps": you load some static bundle of Javascript (hopefully from a CDN), which then makes a (hopefully low) number of API requests from your browser to decide what to show on the page. The hope is that this is faster after the first load, as you don't have to redraw the whole screen when the user asks for a second page. Usually it doesn't end up helping much, because a single HTML page tends to get replaced with multiple chained API calls - and a couple of chained API calls to an origin server is almost always slower than redrawing the whole screen, particularly over a mobile network.

I always think it's a bit rubbish when I get a loading bar on a web page - you already sent me a page, why didn't you just send the one I wanted? One of the great ironies of the web is that while Google don't do a good job of crawling these single page apps, they certainly produce a lot of them. The "search console" (the website formerly known as "webmaster tools") is particularly diabolical. I suppose Google don't need to worry overly about SEO.

Bandwidth improves quickly but latency improves slowly

Internet bandwidth just gets better and better. You can shove a lot more bytes down the line per second than you could even a few years ago. Latency improvements, however, are pretty rare and as we get closer to the speed of light the improvement will drop off completely.

100 megawhats per second is less compelling when you still have to wait the same half a second for each page to load.


Contact/etc

Please do send me an email about this article, especially if you disagreed with it.



See also

Last year APNIC analysed CDN performance across the world and concluded that 40ms is typical. I wish they'd included percentile data in their post, but I can still get the vague impression that CDNs perform best in the West and less well in South America, China and Africa - which is a problem, given that most servers are based in the West.

While I was writing this post there was an outbreak of page-weight-based "clubs", like the "1MB club" and the, presumably more elite, 512K Club. While I suppose I approve of the sentiment (and it's all in the name of fun, I'm sure), I think they're over-emphasising the size of the stuff being transferred. If you're in London, asking for a dynamically generated page from California, it will still take most of a second (130ms times 5 roundtrips) regardless of how big the thing is.

The submarine cable map is always fun to look at. If you want to see a sign of the varying importance of different places: the Channel Islands (population 170 thousand) have 8 submarine cables, including two that simply connect Guernsey and Jersey. Madagascar (population 26 million) has just four. I also think it's funny that even though Alaska and Russia are pretty close there isn't a single cable between them.

If you want to reproduce my results I've published my code and data on Github. I'm afraid that does not include my accesslogs which I can't make public for privacy reasons. Please don't expect me to have produced a repeatable build process for you: that takes a lot more time and effort so it's provided on a "some assembly required" basis. :)