Grug-Brained SSR with 20 Lines of Code

Charles Chen
Mar 10 · 15 min read


apex predator of grug is complexity

complexity bad

one day code base understandable and grug can get work done, everything good!

next day impossible: complexity demon spirit has entered code and very dangerous situation!


If that sums up how you feel about the renewed hype around SSR, then stick with me, because we’re about to do something just a little crazy and sidestep the SSR hype cycle with 20 lines of code and a decidedly Grug-Brained solution.

This article is the first part of my Turas Engineering series, where I dive into some of the engineering work I’ve been putting into Turas, my nights and weekends project, as a solo dev.


Before I dive in, it’s important to understand my mindset here.

Turas was started in the last days of 2022 and was up and running 3 days later. By day 7, I felt good enough about it to share it on Reddit, and I’ve been hacking away at it ever since.

This context is important because as a one-man nights and weekends project, my decision making heuristics are really simple:

  • No infrastructure. I don’t want to pay for it, I don’t want to maintain it, I don’t want to upgrade it; no, no, no.
  • Low friction. It may be days before I come back to the project, and friction makes it hard to jump in and do small bits and pieces. No decision is allowed to create friction; friction kills momentum.
  • Fast cycles. Because I’m working nights and weekends only, I don’t have much time to waste waiting for builds; every decision has to be oriented toward speed. The faster the better. The less build time, the better. The less tab switching, the better.

If those heuristics resonate with you, high five, my friend, and welcome to my TED talk!

With this background I can now tell you the story of why and how I came up with my 5-minute, 20-line SSR solution that might just work for your use case as well.

The Problem

Turas is built as a single page application because at the core it is a complex data/schedule management UI that interacts directly with Google Cloud Firestore.

One Saturday morning, as I started working on a trip story publishing feature — which wasn’t really in the cards when I started this whole thing — it became obvious that I needed a way to dynamically generate some content on the server for each distinct story URL. Otherwise, the client-generated HTML head <meta> tags simply wouldn’t work correctly when sharing the story link on a social platform.

A published story in action in a glorious 6fps gif.

Here’s an example of how a Turas story link is rendered in Discord prior to the hack using a static set of head <meta> tags:

As rendered in Discord before my SSR hack. It is using static head meta tags to generate this content.

You can see the problem: without generating the markup on the server, every story would receive the same social/SEO metadata which would be less than optimal. Ideally, every story that was published would have its own unique social/SEO metadata so that it displays like this:

The final output of my 5 minute SSR hack in Facebook. Note how this metadata is specific to this story because now we have dynamic head meta tags rendered on the server.

To better understand how this works, plug a published story URL into an OpenGraph preview tool to see the output:

<!-- HTML Meta Tags -->
<title>Big Sur, CA | Turas</title>
<meta name="description" content="Big Sur is a rugged stretch of California’s central coast between Carmel and San Simeon. Bordered to the east by the Santa Lucia Mountains and the west by the Pacific Ocean, it’s traversed by narrow, 2-lane State Route 1, known for winding turns, seaside cliffs and views of the often-misty coastline. The sparsely populated region has numerous state parks for hiking, camping and beachcombing. This itinerary takes you through a 5 day trip from beaches to waterfalls and various landmarks along California's coast!">

<!-- Facebook Meta Tags -->
<meta property="og:url" content="">
<meta property="og:type" content="website">
<meta property="og:title" content="Big Sur, CA | Turas">
<meta property="og:description" content="Big Sur is a rugged stretch of California’s central coast between Carmel and San Simeon. Bordered to the east by the Santa Lucia Mountains and the west by the Pacific Ocean, it’s traversed by narrow, 2-lane State Route 1, known for winding turns, seaside cliffs and views of the often-misty coastline. The sparsely populated region has numerous state parks for hiking, camping and beachcombing. This itinerary takes you through a 5 day trip from beaches to waterfalls and various landmarks along California's coast!">
<meta property="og:image" content="">

<!-- Twitter Meta Tags -->
<meta name="twitter:card" content="summary_large_image">
<meta property="twitter:domain" content="">
<meta property="twitter:url" content="">
<meta name="twitter:title" content="Big Sur, CA | Turas">
<meta name="twitter:description" content="Big Sur is a rugged stretch of California’s central coast between Carmel and San Simeon. Bordered to the east by the Santa Lucia Mountains and the west by the Pacific Ocean, it’s traversed by narrow, 2-lane State Route 1, known for winding turns, seaside cliffs and views of the often-misty coastline. The sparsely populated region has numerous state parks for hiking, camping and beachcombing. This itinerary takes you through a 5 day trip from beaches to waterfalls and various landmarks along California's coast!">
<meta name="twitter:image" content="">

<!-- Meta Tags Generated via -->

(Or just view source on the page)

With one weekend to get the whole publishing feature working, I was up against my first challenge: do I deploy a backend to do SSR? Just to generate some meta tags? Do I write a whole separate codebase to generate static stories?

A Grug-Brained Approach to SSR

My initial thought was to set up an instance of Astro.js and generate the whole thing (no Next.js; I use it professionally and it decidedly does not spark joy, as Marie Kondo might say). But as I started down this path, I could feel rule #1 creeping up on me: I’d need another codebase and another runtime. Not just to deploy, but also locally, every time I fired up the project, which meant more friction (rule #2). And thinking about it some more, it seemed silly to deploy a whole apparatus for my simple needs (rule #3).

I needed something simpler, lighter, and faster that somehow required fewer brain cells at the same time (not many left to spare, I tell ya).

It dawned on me that because Turas is an SPA stored in Google Cloud Storage, if I could just grab the latest index.html, modify it, and send it back I would be set.

To do this, all I needed was this stupidly simple Cloud Function:

import * as functions from "firebase-functions";
import axios from "axios";
import dayjs from "dayjs";

// Public URL of the deployed SPA's index.html (elided here).
const indexUrl = "";

export const generateStory = functions.https.onRequest(async (req, res) => {
  // Grab the latest deployed index.html of the SPA.
  const { data: originalHtml }: { data: string } = await axios(indexUrl, {
    responseType: "document",
  });

  // Swap a marker comment in the static HTML (the actual marker string
  // is elided in this copy) for a stamp proving the Function ran.
  const modifiedHtml = originalHtml.replace(
    "<!-- [GENERATED] -->",
    `[GENERATED BY CLOUD FUNCTIONS @ ${dayjs().toISOString()}]`
  );

  res.set("Cache-Control", "public, max-age=300, s-maxage=600");
  res.status(200).send(modifiedHtml);
});

Those 20 lines of code are the foundation of my SSR hack. It’s so simple, you don’t even need me to explain it to you.

Rather than figuring out how to get to the index.html file using the Storage APIs (and where exactly it lives in storage), why not just fetch it with axios? (Shameful, I know 🤣.) Now I will always have the latest deployed version; the same one that the user would be getting, without modifications.

There are a lot of things working in my favor here:

  1. Because it’s just routing back to static GCP Storage, even with an HTTP request, it should be relatively fast since it’s Google to Google.
  2. Because Google’s CDN is sitting in front of this, as long as the Cache-Control header is set for a sufficiently long window, the actual cost of getting this page the first time is negligible; the CDN is going to do the heavy lifting.
  3. Because of Google’s excellent Firebase emulator — which I’m already using to run Firestore locally— there’s no need to add any additional runtime to my local dev experience; I just fire up the emulator and I’m ready to go.

Big wins all around.

There are a few details that we also need to take care of in the application itself. For starters, requests for /story/{tripUid} (whether in production or at http://localhost.../story/{tripUid}) need to be mapped to this Cloud Function.

This is really easily done via the firebase.json configuration file:

"hosting": {
  "public": "web/dist",
  "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
  "rewrites": [
    {
      "source": "/story/**",
      "function": "generateStory"
    },
    {
      "source": "**",
      "destination": "/index.html"
    }
  ]
}

Now when a user hits a URL like /story/{tripUid}, it matches the first rewrite for /story/** and the request is routed to this Function to process first.

The Function loads the HTML, replaces a bit of text, and returns the HTML — which is just the modified static index.html of the latest published SPA.

Once the HTML is returned to the browser and the SPA starts up, the client-side application simply needs to map the same route:

{
  name: 'Story',
  path: '/story',
  component: () => import('@/layouts/StoryLayout.vue'),
  props: {
    // When we are showing in the Story path, we don't need container mode.
    // When we are showing it inline, we want container mode.
    containerMode: false,
  },
  children: [
    {
      name: 'TripStory',
      path: '/story/:uid/:slug?',
      component: () => import('@/features/story/StoryPanel.vue'),
    },
  ],
}

See how the routes line up?

# This route is first routed to the Function and the browser gets back HTML
/story/{tripUid}/{slug}

# Then when the HTML and JS are loaded, the client loads into the same route
path: '/story/:uid/:slug?'
The server gets hit first; the SPA takes over once the HTML is loaded in the browser.

The overall flow looks like this:

  1. User browser makes a request
  2. Google Firebase Hosting has a route mapped to the Function which receives the request
  3. The Function requests the index.html page from the application itself
  4. The Function receives the index.html contents and modifies it
  5. The Function sends the modified HTML to the browser
  6. The browser SPA matches the URL to the route in the SPA

This general approach will work with any backend, not just Firebase Hosting and Functions. If more throughput were ever needed, the same 20 lines could just as well run as a small .NET console app in Cloud Run (at the cost of a bit more dev/deploy overhead).
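To drive home how backend-agnostic this is, here’s the core of the trick as a plain function with no Firebase at all. The StoryMeta shape, tag patterns, and injectMeta name are illustrative placeholders of mine, not the real Turas markup:

```typescript
// A sketch of the same trick with no Firebase at all: fetch the static
// index.html, rewrite a few strings, return the result.
type StoryMeta = { title: string; description: string };

function injectMeta(originalHtml: string, meta: StoryMeta): string {
  return originalHtml
    // Swap the static <title> for the story-specific one.
    .replace(/<title>.*?<\/title>/, `<title>${meta.title}</title>`)
    // Swap the static description meta content, keeping the tag intact.
    .replace(
      /(<meta name="description" content=")[^"]*(")/,
      `$1${meta.description}$2`
    );
}

// Any backend (Node, .NET, Go, ...) just does this per request:
//   const html = await fetch(indexUrl).then((r) => r.text());
//   res.end(injectMeta(html, storyMeta));
const staticHtml =
  '<html><head><title>Turas</title>' +
  '<meta name="description" content="static"></head></html>';

console.log(
  injectMeta(staticHtml, {
    title: "Big Sur, CA | Turas",
    description: "A 5 day trip",
  })
);
```

The whole “SSR engine” is a fetch and two string replaces; everything else is plumbing.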

The Production Function

The initial Function was a proof of concept. And it worked!

In 20 lines of code and no additional infrastructure (save for 1 new serverless Function), I had a working SSR endpoint.

Now it just needs to grab the data and render that into the page. Ideally, it would also be able to work locally in the emulator so I could test the full flow locally without having to deploy (because deployments waste time; rule #3).

So let's add the rest of the logic (yes, we’re way over 20 lines here, but the extra lines are just formatting and replacing strings — YMMV):

import sanitizeHtml from "sanitize-html";
import * as functions from "firebase-functions";
import axios from "axios";
import dayjs from "dayjs";
import { Story } from "@/shared/story";
import { serverStorage } from "./../common/ServerStorage";
import { firebaseConnector } from "../common/ServerFirebaseConnector";

// index.html comes from the live site or the local emulator
// (the live URL is elided here).
const indexUrls = {
  live: "",
  local: "http://localhost:3090/index.html",
};

// sanitize-html options: strip every tag and attribute; text only.
const allowNone = {
  allowedTags: [],
  allowedAttributes: {},
};

export const generateStory = functions
  .runWith({
    minInstances: 1,
    maxInstances: 20,
  })
  .https.onRequest(async (req, res) => {
    const env = firebaseConnector.isEmulator ? "local" : "live";
    const indexUrl = indexUrls[env];

    const { data: originalHtml }: { data: string } = await axios(indexUrl, {
      responseType: "document",
    });

    // Stamp the page (the actual marker comment is elided in this copy).
    let modifiedHtml = originalHtml.replace(
      "<!-- [GENERATED] -->",
      `[GENERATED BY CLOUD FUNCTIONS @ ${dayjs().toISOString()}]`
    );

    const parts = req.url.split("/");

    let tripUid = parts[2]; // Index 0 is "", index 1 is "story"
    tripUid = tripUid.substring(0, 8); // Only take the first 8

    // Here we load the story JSON from Cloud Storage from a known bucket.
    const storyJson = await serverStorage.getStory(tripUid);
    const story = JSON.parse(storyJson) as Story;

    // Private stories don't get their JSON embedded in the page.
    // (The placeholder being replaced is elided in this copy.)
    if (story.isPrivate) {
      modifiedHtml = modifiedHtml.replace(
        "<!-- [STORY] -->",
        `<script id="__TURAS_STORY__">var __turasStoryJson = { isPrivate: true }</script>`
      );
    } else {
      modifiedHtml = modifiedHtml.replace(
        "<!-- [STORY] -->",
        `<script id="__TURAS_STORY__">var __turasStoryJson = ${storyJson}</script>`
      );
    }

    // The ?? fallbacks below were elided in this copy; story.title is assumed.
    const title = sanitizeHtml(story.shortTitle ?? story.title, allowNone);

    const htmlTitle = sanitizeHtml(
      (story.shortTitle ?? story.title) + " | Turas",
      allowNone
    );

    const description = sanitizeHtml(story.introText ?? "", allowNone);

    // A URL-safe slug for the story (its use is elided in this copy).
    const slug = title
      .replace(/[^\w\d-_]/gi, "-")
      .replace(/-+/g, "-");

    // Swap the static meta tag content for the story-specific values.
    // (The static strings below had the site name/URLs elided in this copy.)
    modifiedHtml = modifiedHtml
      .replace(
        /content=" - Travel. Organized."/g,
        `content="${title}"`
      )
      .replace(
        /content=" the collaborative trip planning tool for organized travelers."/g,
        `content="${description}"`
      )
      .replace(
        /content="og-image.png"/g, // static og:image (actual URL elided)
        `content="${story.headlineImageRef?.url ?? ""}"`
      )
      .replace("<title>Turas</title>", `<title>${htmlTitle}</title>`);

    res.set("Cache-Control", "public, max-age=3600, s-maxage=3600");
    res.status(200).send(modifiedHtml);
  });

(I’ll save the pretty 500 and 404 error handling for some other day.)

The Function simply determines if axios loads the index.html from the local emulator or from the live URL and then grabs the published JSON from Storage — again, either using the emulator or GCP Storage — and adds it to the response HTML.
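As a side note, the slug logic buried in the Function above can be pulled out as a standalone helper (slugify is my name for it, not the Turas one):

```typescript
// The slug logic from the Function, extracted. Any character that isn't a
// word character, digit, hyphen, or underscore becomes a hyphen, and runs
// of hyphens collapse to one.
function slugify(title: string): string {
  return title
    .replace(/[^\w\d-_]/gi, "-")
    .replace(/-+/g, "-");
}

console.log(slugify("Big Sur, CA")); // → "Big-Sur-CA"
```

Note that trailing punctuation leaves a trailing hyphen (e.g. "Hello World!" becomes "Hello-World-"), which is harmless in an optional slug segment.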


The __turasStoryJson global variable holds the static JSON contents of the published story, which is pulled from a known bucket. On the client side, the app now just needs to load this JSON into the SPA.

Without the minInstances set to 1, this SSR engine is effectively free. With the minInstances set to 1, it costs just under $2/month but there’s no cold start at all anymore (the free alternative would be to use Cloud Scheduler to keep my instances warm).

4 warm instances at just $7.63/mo., or about $1.91 per warm instance per month. Nice touch from Google Firebase to show the change in billing right inline.

Before we move on, you might be wondering: wait, is this fast? It looks too dumb to be fast! (Because, you know, I was thinking the same thing). The nice thing is that it actually doesn’t matter once the CDN has a hold of it (but yes, it is fast on its own as well):

Results from CDN. Yeah, that looks fast to me (YMMV based on which edge cache you hit).

What about the page performance? Does it suffer? Let’s see what Lighthouse says:

Wowzers! Despite the warning, this is a first run in Incognito mode. (The SEO score is low because I don’t have a robots.txt or crawlable links.)

That looks like a success to me!

If I need more speed, the next logical step is to template the HTML right into the Function’s .js at build time and skip the I/O entirely. But this has the downside of longer deploys (rule #3), since I would then have to deploy the Function each time I deploy the static assets. If more runtime speed is ever needed, this ace is still up my sleeve.

Addressing the SPA Developer Experience

To run this in the local emulator now requires running Hosting in the emulator as well since this is what performs the magic of routing the request to the Function. However, the Hosting emulator doesn’t support hot reload so that’s a no-go; it is meant to emulate loading from your production distribution, not for fast development cycles.

But because the application is simply loading back into the SPA itself, addressing this friction is exceedingly easy: locally, just load the page directly from the storage emulator when viewing it as a standalone page and otherwise, load it from the SPA’s state during editing:

if (props.fromStorage) {
  // This is in editing mode and we know the trip already has a story.
  resolvedStory = await storyStore.resolveFromStorage(
    route.params.uid as string
  )
} else if (route.name === 'TripStory') {
  // This is a standalone view; we load it from the global scope. This is
  // rendered into the page by the Function.
  if (window.__turasStoryJson) {
    // This is in production or testing with the Hosting emulator.
    resolvedStory = window.__turasStoryJson
  } else {
    // We try to load it from storage for local dev.
    resolvedStory = await storyStore.resolveFromStorage(
      route.params.uid as string
    )
  }
} else {
  // Load the story for the first time from the store.
  resolvedStory = await storyStore.resolveFromStore()
}

This made both the developer experience and the user experience magical: I get a fully functioning preview of the page without much extra work, and with no extra infrastructure I get an exact mirror of what Cloud Functions serves.

(There is an alternate approach, which is to configure Vite to proxy the request for this route to the emulated Function during dev.)
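That alternate approach would look something like this in vite.config.ts. The port (5001 is the Functions emulator default) and the target shape are assumptions on my part, not the actual Turas configuration:

```typescript
// Sketch of the alternate approach: let Vite's dev server proxy /story/*
// requests to the locally emulated Function. Check your own emulator
// output for the real port and path. In a real vite.config.ts this
// object would be passed to defineConfig().
const viteProxyConfig = {
  server: {
    proxy: {
      "/story": {
        // The emulator typically serves functions at
        // http://localhost:5001/<project-id>/<region>/<function-name>,
        // so a rewrite to that path would also be needed.
        target: "http://localhost:5001",
        changeOrigin: true,
      },
    },
  },
};

console.log(viteProxyConfig.server.proxy["/story"].target);
```

The tradeoff: this exercises the real Function locally, but it means the emulator has to be running during dev, which is exactly the friction the storage-fallback branch avoids.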

The logical flow of how this works in the SPA. Edit mode is the authoring mode in which case, we always load it from storage as the last published version. For the standalone page in production, we always load from the global window scope. In local development, the published story is loaded from the storage emulator. And finally, if the story hasn’t been published, it is always loaded from the Pinia store in edit mode. (Yes, I know, I can refactor this and remove a branch; brain cells)

What does this look like in practice?

I have an iframe-free, identical preview of the page that is connected to my reactive application store during editing and I can have zero-lag signaling from my editing experience to my preview experience. I can see exactly how the rendered page will look when it is served from Functions…without having to run the emulator.

This is a great DX. But on top of it:

  • The authoring user gets a real-time interface when editing (without an iframe).
  • Yet when the application loads from the server, it serves static HTML and JSON to the page.
  • Because the standalone story is rendered in a separate layout and route when loaded from the server, Vue Router’s automatic route splitting results in a production page that doesn’t have to carry the full weight of the editing experience in the standalone view so it loads pretty fast and I get the benefits of a single codebase.
  • At the same time, when I’m developing locally, I don’t need to run the Firebase Hosting emulator and I can just load it directly from the Storage emulator.

The “heavy” payloads are split from the “light” payloads. No need to write a separate application; it’s fast and light enough with the route splitting.

Of course, it’s not quite as good as rendering the full HTML on the server (and I’m sure there are groans that this isn’t even real SSR), but that would require more complexity and more infrastructure for minimal benefit in this case.

Do You _Need_ Full SSR?

So here’s the question du jour: do we even need full SSR? With this hack, I’m able to get my OpenGraph and Twitter cards to correctly render a unique preview per story:

How it looks in Twitter’s finicky cards.

This page is already so fast that full-blown SSR of the HTML seems unnecessary and of no benefit for my use case. In actuality, I don’t think the full machinery of Next.js or Astro.js can beat this — all things considered.

In some cases — for SEO — it would be helpful to have structured data markup (the stuff that tells Google your store hours, whether a product is in stock, your phone numbers, etc.) rendered into the page using Microdata, RDFa, or JSON-LD. If those are important, the solution is to just generate standalone islands of hidden markup or JSON (in the case of JSON-LD), and Google and Bing will be just as happy.
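For example, a JSON-LD island could be injected with the same string-replace trick the Function already uses; the schema.org fields and the injectJsonLd name below are a generic sketch of mine, not something Turas actually emits:

```typescript
// Sketch: injecting a JSON-LD island into the page with the same
// string-replace trick. The schema.org fields are generic placeholders.
type StorySummary = { title: string; description: string };

function injectJsonLd(html: string, story: StorySummary): string {
  const jsonLd = {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: story.title,
    description: story.description,
  };
  const script = `<script type="application/ld+json">${JSON.stringify(
    jsonLd
  )}</script>`;
  // Drop the island in just before the head closes.
  return html.replace("</head>", `${script}</head>`);
}

const page = "<html><head><title>Turas</title></head><body></body></html>";
console.log(injectJsonLd(page, { title: "Big Sur, CA", description: "A 5 day trip" }));
```

Crawlers read it straight out of the served HTML, so no rendering machinery is needed beyond the replace.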

Will it scale? The front end of Turas is entirely static and cached via Google CDN. Because this SSR backend is just a Google Cloud Function loading static HTML (the output of which is also cached via the CDN), it’ll scale as much as I need it to (though I capped it at 20 instances). The main issue I found is that cold starts are quite abysmal (which would also be the case with Next.js or Astro.js in Cloud Run). While I would have preferred to keep this totally free, having 1 warm instance for $1.91/30d isn’t too bad and provides a much better user experience.

While this solution is using Cloud Functions via Firebase, the same general approach would work with any backend like C#/.NET/Java/Go/Rust in a container in Cloud Run.

My conclusion? Most teams and projects probably don’t need a full-blown contraption like Next.js (death glares from Guillermo Rauch; man’s got lemons).

Next time you are tempted to reach for the heavy, complex solution that will slow you down, why not try something simpler? While this solution design is the result of a set of heuristics meant for a nights and weekends project, its lightweight and serverless nature means that it will scale even to the heaviest traffic without much infrastructure to operate.

One of my all-time favorite talks on serverless architectures.

Such approaches can mean that — whether you’re working alone or in a team — there’s less friction and more momentum. The faster you can build it, the lighter the solution, the more momentum you can accumulate. Not only that, the logic in my micro-SSR solution is so simple that anyone can understand it at a glance; no need to learn the complexities of Next.js or Nuxt.js just to eke out some extra performance and SEO/social <meta>. As an added bonus, I’ll rarely have to worry about patching, migrating, or upgrading this code; I can just leave it there and let it do its thing.

so grug say again and say often: complexity very, very bad


Every once in a while, if you step back, you’ll be surprised at what the complexity merchants are trying to peddle these days and just how much inertia their wares will create when the simple, stupid solution is often more than enough (remember “Keep It Simple Stupid”?).

If you enjoyed this story, subscribe for email updates and click follow for future entries in my Turas Engineering series, where I focus on how to build serverless solutions with less infrastructure, low friction, and fast cycle times. Follow @turasapp on Twitter, check out Turas for yourself, and let me know what you think!

Special thanks to Lefan Tan, whose questions inspired this; and to Lefan, Roberto Alarcon, and Arash Rohani for reviewing draft versions of this article and providing their feedback.



Charles Chen

Maker of software ▪ Currently: ▪ Open to contract projects: GCP, AWS, Fullstack, Postgres, NoSQL, JS/TS, React, Vue, Node, and C#/.NET