Choosing Your Backend: Onsite vs Cloud vs Serverless vs Edge

A thorough comparison of the popular deployment models

Aziz Nal
ITNEXT


Whether you're building the next blockchain app that'll change the world or creating yet another Twitter clone, you'll need to deploy your app somewhere.

In this article, I cover the models of Serverful, Serverless Functions, and Edge Functions: what they are, what they are not, and how they compare to one another.

After reading, you'll have a better idea of the difference between serverful and serverless, and more specifically, of how conventional serverless functions differ from edge functions.


The Differentiation and Hierarchy

There are two ways to group the models in this article:

  1. Serverful:
    which refers to long-running servers with mid to high-end hardware. This includes onsite and cloud servers.
  2. Serverless:
    which refers to low-end servers that are spawned per request (or batch of requests) and destroyed once the request is handled. This includes serverless and edge functions.

In a way, we can view these as a series of abstractions:

  1. Onsite Serverful: Base layer where everything — hardware, software, network — is managed directly.
  2. Cloud Serverful: Abstracts away physical hardware maintenance. While server configurations and software management remain, the need for on-premises hardware is eliminated.
  3. Serverless & Edge Functions: Removes the need to manage server configurations and scaling. Edge functions abstract further by running code closer to users for responsiveness.

The next section covers detailed breakdowns of each model.

Serverful: Onsite vs Cloud

Serverful — Centralized servers

With serverful, you have a couple of main options. First, you could host your servers yourself, on-site. That means you physically have the servers somewhere you can access them, and you configure their networking and app deployments mostly manually.

A server is anything from your 10-year-old laptop with a broken screen to a fully loaded server room.

From a humble local server to a dedicated server room with many servers.

The second way to do serverful is to deploy a server on the cloud. This way, you leave much of the complexity of managing a server to your cloud provider, and you're able to focus more on making use of the server.

Regardless of which one you choose, serverful grants you fine-grained control over your server and the type of technology you want to use on it with little limitation.

The main difference between onsite and cloud usually comes down to cost. You can get a feel for the costs with the following chart:

The onsite cost curve starts steep and then flattens out, while the cloud curve starts low but climbs rapidly.

In the above chart, only the costs of the servers themselves are included. Realistically, you would also have the salaries of the engineers who set up and maintain these servers, but that has more to do with the complexity of your setup than with using onsite vs cloud servers, so they're not included in this chart.

With onsite, you have to purchase all your hardware upfront. Other pains include making sure your hardware is compatible both with your current usage and with potential upgrades down the line. But once you've made the purchase, that's mostly it.

The cloud is a sort of trap. It starts dirt-cheap since you literally only rent the amount of hardware you need. As your traffic increases, you get to add more powerful hardware quite easily.

The issue here is that cloud providers' pricing ramps up dramatically past a certain threshold, to the point where even Amazon themselves found that moving away from it cut costs substantially.

Again, these costs don’t cover your DevOps team salaries. It’s just the rented hardware and services.

Code Example

// setting up your server (using Express here for illustration;
// any serverful framework follows the same pattern)
import express from "express";

const server = express();

// adding middleware
server.use(express.json()); // parse JSON request bodies
server.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`); // simple request logger
  next();
});

// setting up route handlers
server.get("/", handleRootGet);
server.post("/", handleRootPost);
server.get("/login", handleLoginGet);
server.post("/login", handleLoginPost);

// starting up the server
server
  .listen(3000, "0.0.0.0", () => console.log("Server listening on port 3000"))
  .on("error", (e) => console.error("Oh no. Server crashed. Who'da thunked it.", e));

// route handler function definitions
function handleRootGet(req, res) { res.send("root GET"); }
function handleRootPost(req, res) { res.send("root POST"); }

function handleLoginGet(req, res) { res.send("login GET"); }
function handleLoginPost(req, res) { res.send("login POST"); }

What is Serverless?

The saying goes that serverless is not really serverless, it's just someone else's server. That's true, but not really the point if you think about it.

What is Serverless?

Serverless is an auto-scaling, pay-per-execution, provider-managed architecture.

In serverless, you write your code as functions, which you then deploy to servers managed by your cloud provider.

How is a serverless architecture set up?

With a serverless setup, you would use a cloud provider's services, such as AWS Lambda, Google Cloud Functions, or Vercel, and the provider handles deploying your code as well as scaling it with your current traffic.

A function may be called programmatically from your code, or it could be associated with a URL.
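For the programmatic route, here's a hedged sketch using the AWS SDK v3 Lambda client; the function name "my-function" and the payload are placeholders.

// invoking a function programmatically; "my-function" is a placeholder name
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({ region: "us-east-1" });

const result = await lambda.send(
  new InvokeCommand({
    FunctionName: "my-function",
    Payload: new TextEncoder().encode(JSON.stringify({ userId: 42 })),
  })
);

// the response payload comes back as bytes
console.log(new TextDecoder().decode(result.Payload));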

Got 0 users? No worries, you're charged per execution, so your cloud provider won't contribute to your bankruptcy (this time ;) ).

Got a sudden influx of 10 million users compared to the 3 million you're used to, because it's Black Friday? Have more functions! Have all the functions, and then have some more 🤑

Code Examples

// Example 1: Your typical serverless function definition
export default async function handler(request, context) {
  // do your logic here

  return new Response("Done!", {
    status: 200,
  });
}

// Example 2: Serverless functions in Next.js are exported as the name of the
// HTTP method which they handle
export async function GET(request, context) {
  // do your logic here

  return new Response("Done!", {
    status: 200,
  });
}

export async function POST(request, context) {
  // do your logic here

  return new Response("Done!", {
    status: 200,
  });
}

When is a serverless function created and destroyed?

A serverless function is created when an event (e.g. a request) triggers it to be created. The startup of a new function is called a Cold Start. Once the function is started, it’s called warm.

A warm function stays active for a short period after its initial cold start, allowing it to handle incoming requests without the need for a new cold start. AWS, for example, can keep a few Lambdas warm for faster request handling, with the tradeoff of slightly higher costs.

As a warm function is handling a request, new requests are queued for handling after the current one is finished.

If more requests are received than a single function can handle, then a new function is cold-started and the cycle starts again.

If a function is idle for long enough, that is, if no requests are received, then it’s terminated.
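To make that lifecycle concrete, here's a minimal sketch (the variable names are hypothetical): module scope runs once per cold start, while the handler body runs for every request the warm instance serves.

// a minimal lifecycle sketch; variable names are hypothetical
// module scope runs once, at cold start
const coldStartedAt = new Date().toISOString();
let invocationCount = 0;

export default async function handler(request, context) {
  // the handler body runs on every request this warm instance serves
  invocationCount += 1;

  return new Response(
    `Cold-started at ${coldStartedAt}; invocation #${invocationCount}`,
    { status: 200 }
  );
}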

There are a few new challenges to consider when dealing with these short-lived functions, such as the number of connections currently open to your database. If set up incorrectly, each call to a function may open a new database connection, which would eventually saturate the database and cause you to self-DoS.

Of course, for each problem there is a solution. For the example database connection saturation issue, the solution is connection pooling.
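Here's a hedged sketch of that fix, assuming the pg Postgres driver and a DATABASE_URL environment variable: because the pool lives at module scope, a warm function reuses its connection across invocations instead of opening a new one per request.

// a connection pooling sketch; assumes the "pg" driver and a DATABASE_URL env var
import pg from "pg";

// module scope: created once per cold start, reused while the instance is warm
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // one connection per function instance keeps the total count predictable
});

export default async function handler(request, context) {
  // reuses the pooled connection instead of opening a fresh one per request
  const { rows } = await pool.query("SELECT now() AS time");

  return new Response(JSON.stringify(rows[0]), { status: 200 });
}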

Where do serverless functions live?

Serverless functions, just like a cloud server, live in a region of your choosing. Typically, you can choose to deploy all your functions in one region or use a different region for each function depending on your use case.

However, when choosing your deployment region, you should be mindful of where your other services live, such as your database.

Deploying services in different regions causes unnecessary delays as the request hops from one region to another.
Keeping services close together keeps response times significantly shorter.

The Edge

Edge functions are still serverless, but with a bunch of ups and downs.

Edge functions are functions that run on the edge of the network, as close to the caller as possible.

This may seem like a perfect drop-in solution for better performance at first, but there are quite a few caveats when it comes to edge functions.

For example, consider a setup where the edge function runs right next to the caller, while the database lives in a single faraway region.

It may not be obvious why this is a bad setup, because the naive picture, one request from the function to the database and one response back, is lying to us. In reality, the function talks to the database several times per request.

Think about it: you'd very likely need to query your database multiple times in a single request. Even a simple sign-up request needs a couple of queries (sketched in code right after this list):

  1. Confirm user doesn’t already exist
  2. Add user to the database
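Sketched out, that sign-up flow might look like this; db and hashPassword are hypothetical helpers, and each awaited query is a separate round trip between the function and the database.

// hypothetical sign-up handler; "db" and "hashPassword" are stand-in helpers
export async function POST(request, context) {
  const { email, password } = await request.json();

  // round trip 1: confirm user doesn't already exist
  const existing = await db.query("SELECT 1 FROM users WHERE email = $1", [email]);
  if (existing.rows.length > 0) {
    return new Response("User already exists", { status: 409 });
  }

  // round trip 2: add user to the database
  await db.query("INSERT INTO users (email, password_hash) VALUES ($1, $2)", [
    email,
    await hashPassword(password),
  ]);

  return new Response("Signed up!", { status: 201 });
}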

In this situation, the long distance between the edge function and the database can add significant latency to the response. A better setup would be to deploy the edge function close to the database:

Although we’ve added more latency between the caller and the edge function, the real latency is between the edge function and the database due to the many potential round-trip requests.

Whether this trade-off pays off depends on your particular edge function and how many round trips it makes.
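On some platforms, you can pin where a function runs. A hedged sketch, assuming the Next.js App Router route segment config; the region ID "fra1" is purely illustrative.

// assumes Next.js App Router route segment config; "fra1" is illustrative
export const runtime = "edge";
export const preferredRegion = "fra1"; // pin the function near the database

export async function GET(request) {
  // ...database queries happen close by, keeping round trips cheap...
  return new Response("Done!", { status: 200 });
}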

The Pros of Edge

The main advantage of edge is quicker response times. This is due to two main reasons:

  1. Edge is geographically close to the caller. The request and response don’t have to travel a long distance.
  2. Edge is much lighter than a server. For example, Vercel edge functions run on the V8 JavaScript engine without a full Node.js runtime, allowing for much quicker startups compared to conventional serverless.

The Cons of Edge

The biggest downside of edge is usually compatibility. Since edge runs in a very lightweight environment, you miss out on a lot of APIs and libraries.

For example, Vercel states that edge functions have no access to most native Node APIs. This extends to most database drivers as well, which you won't be able to use either.
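As a hedged illustration of that gap: Node built-ins generally can't be imported in an edge runtime, while Web-standard APIs still work.

// import fs from "node:fs"; // Node built-ins like fs typically fail to resolve on edge

export async function GET(request) {
  // Web-standard APIs like fetch, Request, Response, and crypto are available
  const upstream = await fetch("https://example.com/");
  return new Response(`Upstream responded with ${upstream.status}`, { status: 200 });
}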

Other limitations include:

  • Code size limit: Your function may not exceed a certain size.
  • Vendor lock-in: Every cloud provider has their own opinion about how functions should be written, making it difficult to change providers without rewriting code (see the sketch below).
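For instance, here's roughly the same endpoint written in two providers' styles; a hedged sketch, not an exhaustive comparison.

// AWS Lambda style: event/context in, plain result object out
export const awsHandler = async (event, context) => ({
  statusCode: 200,
  body: "Done!",
});

// Vercel style: Web-standard Request in, Response out
export default async function vercelHandler(request) {
  return new Response("Done!", { status: 200 });
}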

Comparisons Between The Deployment Models

In the table below, you can see a summary of the deployment models discussed in this article:

| Aspect | Onsite Serverful | Cloud Serverful | Serverless Functions | Edge Functions |
| --- | --- | --- | --- | --- |
| Hardware management | You | Provider | Provider | Provider |
| Scaling | Manual, hardware bought upfront | Rent more hardware easily | Automatic | Automatic |
| Pricing | Upfront purchase | Pay for what you rent | Pay per execution | Pay per execution |
| Lifetime | Long-running | Long-running | Spawned per request, destroyed when idle | Spawned per request, destroyed when idle |
| Location | Your premises | Region of your choosing | Region of your choosing | As close to the caller as possible |
| Control & compatibility | Full | High | Limited to the provider's runtimes | Most limited, few native Node APIs |
