
Monitoring latency: Vercel Serverless Function vs Vercel Edge Function

Thibault Le Ouay Ducasse
5 min read
education

In our previous article, we compared the latency of various cloud providers but did not include Vercel. This article compares the latency of Vercel Serverless Functions with Vercel Edge Functions.

We will test a basic Next.js application using the App Router.

We have four routes: three using the Node.js runtime and one using the Edge runtime.

  • /api/ping uses the Node.js runtime
  • /api/ping/warm uses the Node.js runtime
  • /api/ping/cold uses the Node.js runtime
  • /api/ping/edge uses the Edge runtime

Each route has a different maxDuration; this is a trick to prevent Vercel from bundling the handlers into the same physical function.
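A minimal sketch of one such route handler (the real code lives in the linked repository; the response body here is illustrative):

```typescript
// app/api/ping/route.ts

// Run on the Node.js runtime.
export const runtime = "nodejs";
// Giving each route a distinct maxDuration keeps Vercel from bundling
// the handlers into the same physical function.
export const maxDuration = 10;

export function GET(): Response {
  return Response.json({ ping: "pong" });
}
```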

Here is the repository of the application.

Vercel Serverless Functions - Node.js runtime

These functions use the Node.js 18 runtime, which gives access to the full Node.js API. Our functions are deployed in a single location: iad1 - Washington, D.C., USA.

Upgrading to Node.js 20 could enhance cold start performance, but it's still in beta.

We analyzed the headers of each request and observed that every request is first handled by a data center near the probe's location before being routed to the serverless region:

  • ams -> fra1 -> iad1
  • gru -> gru1 -> iad1
  • hkg -> hkg1 -> iad1
  • iad -> iad1 -> iad1
  • jnb -> cpt1 -> iad1
  • syd -> syd1 -> iad1

We never encountered a request routed to a different data center, and we never hit the Vercel cache.
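The routing hops above can be read from the X-Vercel-Id response header. A small helper to extract them (the header format is an assumption based on the values we observed, e.g. region codes separated by "::" followed by a request id):

```typescript
// Extract the data-center hops from an X-Vercel-Id header value.
// Assumed format: "fra1::iad1::<request-id>" — region codes are
// three letters followed by a digit; the trailing segment is an id.
function parseVercelHops(xVercelId: string): string[] {
  return xVercelId
    .split("::")
    .filter((segment) => /^[a-z]{3}\d+$/.test(segment));
}

console.log(parseVercelHops("fra1::iad1::abcd-1234")); // ["fra1", "iad1"]
```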

Warm - /api/ping/warm

| uptime | fails | total pings | p50 | p75 | p90 | p95 | p99 |
| ------ | ----- | ----------- | ----- | ----- | ----- | ----- | ----- |
| 100% | 0 | 12,090 | 246ms | 305ms | 442ms | 563ms | 855ms |

Vercel warm p50 latency between Mar 10 and Mar 13, 2024, aggregated in a 1h window.

| Region | P50 | P95 | P99 |
| ------ | --- | --- | --- |
| 🇳🇱 Amsterdam, Netherlands (ams) | 173ms | 782ms | 869ms |
| 🇺🇸 Ashburn, Virginia, USA (iad) | 62ms | 358ms | 767ms |
| 🇭🇰 Hong Kong (hkg) | 287ms | 470ms | 959ms |
| 🇿🇦 Johannesburg, South Africa (jnb) | 374ms | 522ms | 1,003ms |
| 🇦🇺 Sydney, Australia (syd) | 248ms | 347ms | 886ms |
| 🇧🇷 Sao Paulo, Brazil (gru) | 190ms | 369ms | 705ms |

We ping this function every 5 minutes to keep it warm.

Cold - /api/ping/cold

| uptime | fails | total pings | p50 | p75 | p90 | p95 | p99 |
| ------ | ----- | ----------- | ----- | ----- | ------- | ------- | ------- |
| 100% | 0 | 2,010 | 859ms | 933ms | 1,004ms | 1,046ms | 1,156ms |

Vercel cold p50 latency between Mar 10 and Mar 13, 2024, aggregated in a 1h window.

| Region | P50 | P95 | P99 |
| ------ | --- | --- | --- |
| 🇳🇱 Amsterdam, Netherlands (ams) | 832ms | 958ms | 1,015ms |
| 🇺🇸 Ashburn, Virginia, USA (iad) | 719ms | 822ms | 894ms |
| 🇭🇰 Hong Kong (hkg) | 901ms | 1,024ms | 1,073ms |
| 🇿🇦 Johannesburg, South Africa (jnb) | 991ms | 1,128ms | 1,211ms |
| 🇦🇺 Sydney, Australia (syd) | 866ms | 996ms | 1,044ms |
| 🇧🇷 Sao Paulo, Brazil (gru) | 823ms | 994ms | 1,173ms |

We ping this function every 30 minutes to ensure it has been scaled down between requests.

Cold Roulette - /api/ping

| uptime | fails | total pings | p50 | p75 | p90 | p95 | p99 |
| ------ | ----- | ----------- | ----- | ----- | ----- | ----- | ------- |
| 100% | 0 | 6,036 | 305ms | 791ms | 914ms | 972ms | 1,086ms |

Vercel roulette p50 latency between Mar 10 and Mar 13, 2024, aggregated in a 1h window.

| Region | P50 | P95 | P99 |
| ------ | --- | --- | --- |
| 🇳🇱 Amsterdam, Netherlands (ams) | 225ms | 872ms | 986ms |
| 🇺🇸 Ashburn, Virginia, USA (iad) | 113ms | 777ms | 831ms |
| 🇭🇰 Hong Kong (hkg) | 295ms | 948ms | 1,063ms |
| 🇿🇦 Johannesburg, South Africa (jnb) | 385ms | 1,027ms | 1,139ms |
| 🇦🇺 Sydney, Australia (syd) | 258ms | 914ms | 1,027ms |
| 🇧🇷 Sao Paulo, Brazil (gru) | 269ms | 916ms | 1,040ms |

We ping this function every 10 minutes, an inflection point where we never know whether the function will be warm or cold.

Vercel Edge Function

Vercel Edge Functions use the Edge Runtime. They are deployed globally and executed in a data center close to the user.

They have limitations compared to the Node.js runtime, but they have a faster cold start.
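A minimal sketch of the Edge route (illustrative; the real code lives in the repository linked above):

```typescript
// app/api/ping/edge/route.ts

// Opt this route into the Edge runtime. Only web-standard APIs
// (fetch, Request, Response, etc.) are available — no Node.js built-ins.
export const runtime = "edge";

export function GET(): Response {
  return Response.json({ ping: "pong" });
}
```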

We analyzed the X-Vercel-Id response header and confirmed that each request is processed in a data center near the user:

  • ams -> fra1
  • gru -> gru1
  • hkg -> hkg1
  • iad -> iad1
  • jnb -> cpt1
  • syd -> syd1

Edge - /api/ping/edge

| uptime | fails | total pings | p50 | p75 | p90 | p95 | p99 |
| ------ | ----- | ----------- | ----- | ----- | ----- | ----- | ----- |
| 100% | 0 | 6,042 | 106ms | 124ms | 152ms | 178ms | 328ms |

Vercel edge p50 latency between Mar 10 and Mar 13, 2024, aggregated in a 1h window.

| Region | P50 | P95 | P99 |
| ------ | --- | --- | --- |
| 🇳🇱 Amsterdam, Netherlands (ams) | 132ms | 203ms | 373ms |
| 🇺🇸 Ashburn, Virginia, USA (iad) | 116ms | 168ms | 259ms |
| 🇭🇰 Hong Kong (hkg) | 111ms | 162ms | 272ms |
| 🇿🇦 Johannesburg, South Africa (jnb) | 125ms | 210ms | 349ms |
| 🇦🇺 Sydney, Australia (syd) | 96ms | 146ms | 347ms |
| 🇧🇷 Sao Paulo, Brazil (gru) | 112ms | 240ms | 348ms |

We ping this function every 10 minutes.

Conclusion

| Runtime | p50 (ms) | p95 (ms) | p99 (ms) |
| --------------------- | -------- | -------- | -------- |
| Serverless Cold Start | 859 | 1,046 | 1,156 |
| Serverless Warm | 246 | 563 | 855 |
| Edge | 106 | 178 | 328 |
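As a quick sanity check, the speedups follow directly from the p50 values in the table:

```typescript
// Latency ratios computed from the summary table above (all values in ms).
const cold = { p50: 859, p95: 1_046, p99: 1_156 };
const warm = { p50: 246, p95: 563, p99: 855 };
const edge = { p50: 106, p95: 178, p99: 328 };

// Ratio of two latencies, rounded to one decimal place.
const ratio = (a: number, b: number): number => Math.round((a / b) * 10) / 10;

console.log(ratio(cold.p50, edge.p50)); // 8.1 — cold serverless vs edge at p50
console.log(ratio(warm.p50, edge.p50)); // 2.3 — warm serverless vs edge at p50
```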

Globally, Edge Functions are roughly 8 times faster than Serverless Functions at p50 during cold starts, but only about 2 times faster when the function is warm.

Edge functions have similar latency regardless of the user's location. If you value your users and have a worldwide audience, you should consider Edge Functions.

Create an account on OpenStatus to monitor your API and get notified when your latency increases.