From 27 to 4 Minutes: How Concurrency Improvements Powered Storyblok's SSG Migration

Daniel Mendoza


Note:

We updated this post to reflect further improvements in build time. Read on to find out how.

Did navigating to this page feel snappy? That’s because, at the start of 2026, we migrated the Storyblok website from exclusively server-side rendering (SSR) to a hybrid of SSR and static site generation (SSG).

What was the motivation?

Simple. Our SSR setup scored poorly on the Time To First Byte (TTFB) Core Web Vitals metric.

Storyblok's website consumes content from multiple sources: a dedicated Storyblok space, Algolia, and Greenhouse. Migrating to SSG means rebuilding whenever content changes; with 300 editors, plus search and third-party content to support, that is difficult to implement. Some content therefore has to stay dynamic, served through the selective use of SSR and client-side rendering.

The website is built on Astro, which supports SSR at the page level even when the project is set to SSG. However, a single file cannot support both SSG and SSR: each page file must be entirely one or the other. This demands a clear distinction between dynamically (on the server) and statically generated pages. We can support a hybrid website, not hybrid pages.
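
In Astro, that per-file split is expressed with the prerender export. A minimal sketch, assuming a hypothetical job page (the file path and the gh_jid parameter name are placeholders):

```astro
---
// src/pages/careers/job.astro — illustrative path, not our exact layout.
// Opting this one page out of static generation; the rest of the site
// can still be pre-rendered at build time.
export const prerender = false;

// Because this page is server-rendered, query parameters are available
// at request time (they cannot be known during an SSG build).
const jobId = Astro.url.searchParams.get('gh_jid');
---

<h1>Job {jobId}</h1>
```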

All this influenced our architecture and how we manage content.

The challenges

Handling job openings with content from Greenhouse

There are two types of job pages within the Storyblok website. The first is the careers page that lists open positions. The page is statically generated, but the listings are rendered on the client. This setup improves perceived performance: users immediately get a rendered page with the static information while the listings themselves load.
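
As a sketch of that client-side step, assuming Greenhouse's public Job Board API and a placeholder board token and element ID:

```ts
// Illustrative client-side fetch of open positions. It runs in the browser
// after the statically generated page has already been painted.
// 'storyblok' as the board token and '#open-positions' are placeholders.
type GreenhouseJob = { id: number; title: string; absolute_url: string };

async function loadOpenPositions(): Promise<void> {
  const res = await fetch('https://boards-api.greenhouse.io/v1/boards/storyblok/jobs');
  const { jobs }: { jobs: GreenhouseJob[] } = await res.json();

  const list = document.querySelector('#open-positions')!;
  list.innerHTML = jobs
    .map((job) => `<li><a href="${job.absolute_url}">${job.title}</a></li>`)
    .join('');
}

loadOpenPositions();
```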

The second is the individual job page, which accepts a job ID via a query parameter, populates the job description, and embeds a Greenhouse iframe for submitting applications. However, SSG doesn't support query parameters, because the full route needs to be known at build time. One fix could be moving the ID into the slug, but third-party analytics tools rely on query parameters for referral links and social tracking, so that wasn't an option.

Job pages stay SSR.

Algolia indices vs. static pages

Our Algolia implementation relies on multiple indices: a general index for global search, specialized indices for case studies, and more. The actual pages, such as the case studies, are pre-rendered, while a Netlify serverless function keeps the Algolia indices in sync with Storyblok data. This means that search results can reflect changes without requiring a full rebuild.

Since the website index takes ~2 hours to regenerate, rebuilding it each time isn't an option. That created a consistency problem. The Algolia index is always live, but the pre-rendered pages are frozen at build time. Without guardrails, the listing component could surface a case study that doesn't have a pre-rendered page yet. To prevent this, we filter out Algolia records created after the build started. Since Algolia records don't have built-in timestamps, we added a first_published_at_timestamp field to all indices. This field's value is Storyblok's first_published_at property, which we filter against a PUBLIC_BUILD_STARTED_AT_TIMESTAMP_MS environment variable set at the start of each build.
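
In sketch form, the query-time guardrail is a numeric filter; the app credentials and index name are placeholders, and we assume the field stores a millisecond timestamp:

```ts
import algoliasearch from 'algoliasearch';

// Placeholder credentials and index name.
const client = algoliasearch('APP_ID', 'SEARCH_API_KEY');
const index = client.initIndex('case_studies');

// Set once at the start of the build; every record published before this
// moment is guaranteed to have a pre-rendered page.
const buildStartedAt = Number(process.env.PUBLIC_BUILD_STARTED_AT_TIMESTAMP_MS);

// Only surface records whose pages already exist in the current deploy.
const { hits } = await index.search('', {
  numericFilters: [`first_published_at_timestamp <= ${buildStartedAt}`],
});
```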

This prevents broken links, but not all inconsistencies. If a case study is renamed mid-build, the listing displays the new title while the case study page displays the old one until the next rebuild.

Data source consistency

Multiple components across different pages fetch the same data sources. If an editor published a data source change mid-build, some pages would be built with the old values and others with the new ones. We solved this with the CLI v4's ability to pull data sources into local JSON files, reading from those during the build instead of hitting the Content Delivery API. On the app level, we removed write access to data sources for non-developers and switched fields that content authors frequently update to text fields.
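
Conceptually, build-time code then reads the frozen snapshot instead of the API. A minimal sketch, with placeholder paths and datasource name:

```ts
import { readFileSync } from 'node:fs';

// Illustrative shape of a datasource entry pulled to disk before the build.
type DatasourceEntry = { name: string; value: string };

// The path mirrors wherever the CLI pull step writes its output;
// this exact location is a placeholder.
export function getDatasourceEntries(datasource: string): DatasourceEntry[] {
  const raw = readFileSync(`./.storyblok/datasources/${datasource}.json`, 'utf-8');
  return JSON.parse(raw) as DatasourceEntry[];
}

// Every component now reads the same frozen snapshot for the whole build:
const labels = getDatasourceEntries('ui-labels');
```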

The global layout

A single configuration story drives the header, footer, cookie banner, and more. Think of it as the global layout that wraps every page. With a 30-minute build, anyone could republish it midway through, leaving some pages built with the old config and others with the new one. To ensure the server-rendered pages use the same config as the last build, we needed a way to pin it.

The first instinct may be to use the cv parameter to pin a cached version. However, that's not how cv works: if a story isn't cached for that value, the API just returns the current version. The solution was to download the config story to a local JSON file at the start of every build, and read from that file in production, while the dev server and Visual Editor preview still fetch it live from the API.
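
A minimal sketch of that build-start step using storyblok-js-client; the story slug and output path are assumptions:

```ts
import StoryblokClient from 'storyblok-js-client';
import { writeFileSync } from 'node:fs';

const client = new StoryblokClient({ accessToken: process.env.STORYBLOK_TOKEN });

// Fetch the single configuration story once, before any page is built.
// 'config' as the slug is an assumption for illustration.
const { data } = await client.get('cdn/stories/config', { version: 'published' });

// Freeze it to disk; production code imports this file instead of the API,
// so every page in the build sees the identical config.
writeFileSync('./.storyblok/config.json', JSON.stringify(data.story));
```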

Hint:

Check the comprehensive Caching concept to learn more about best practices and optimization strategies.

Blog, events, and plugin listings with pagination

Listing pages for blog posts, events, and plugin descriptions are all paginated via query parameters, which don't work in SSG. We kept these pages in SSR rather than re-architecting URLs and routing all at once.

That introduced a new problem: the SSR listing page fetches currently published stories, but the individual entry pages are pre-rendered and frozen at build time. A blog post published after the last build would appear in the listing but lead to a 404 error page. The fix mirrors the Algolia solution: use the same PUBLIC_BUILD_STARTED_AT_TIMESTAMP_MS environment variable to filter out stories published after the build started.
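
In sketch form, the listing's SSR fetch excludes anything newer than the snapshot; Storyblok's stories endpoint accepts first-published-date filters (parameter formatting simplified here, folder slug is a placeholder):

```ts
import StoryblokClient from 'storyblok-js-client';

const client = new StoryblokClient({ accessToken: process.env.STORYBLOK_TOKEN });

// Convert the build-start timestamp (ms) into the date format the API expects.
const buildStartedAt = new Date(
  Number(process.env.PUBLIC_BUILD_STARTED_AT_TIMESTAMP_MS),
);

// Only list posts that existed when the current static pages were built.
const { data } = await client.get('cdn/stories', {
  starts_with: 'blog/',
  first_published_at_lt: buildStartedAt.toISOString(),
  version: 'published',
});
```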

The solution has the same limitations, though. It doesn't cover relation resolution, so a featured blog entry added after the last build could still surface as a broken link. And content can still go out of sync: rename a post mid-build, and the listing displays the new title while the post page displays the old one.

Keeping these pages in SSR also required creating a dedicated BlogListingPage content type, since the BlogListing component reads query parameters and couldn't use the generic, pre-rendered EnterprisePage type. We also had to make sure editors couldn't accidentally create a second story of that type, since it would fall through to the catch-all pre-rendered route instead of the dedicated SSR page file.

Storyblok doesn't have a concept of singleton content types, so we enforced this in the code.
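
The enforcement can be a build-time assertion. A sketch, with the failure mode simplified:

```ts
import StoryblokClient from 'storyblok-js-client';

const client = new StoryblokClient({ accessToken: process.env.STORYBLOK_TOKEN });

// Fail the build loudly if editors ever create a second listing story.
const { data } = await client.get('cdn/stories', {
  content_type: 'BlogListingPage',
  version: 'published',
});

if (data.stories.length > 1) {
  throw new Error(
    `Expected exactly one BlogListingPage story, found ${data.stories.length}`,
  );
}
```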

Content personalization

We already had a component that lets editors serve different content based on a query parameter. We used it to personalize landing pages per target account. But again, query parameters don't work in SSG. Since there was no clean way to replicate the behavior, we removed the component and handled personalization using separate pages.

Different error pages

The 404 error page showed different content depending on which path you tried to visit: a contextual experience that relied on knowing the original slug. With SSR, Astro makes this straightforward: a rewrite that attaches the original path as a header. But Astro rewrites don't work on pre-rendered pages.

We decided to switch to Netlify redirects, which support attaching custom request headers. The 404 page stayed SSR so it could read those headers and serve the right content. The tradeoff? Non-forced Netlify redirects only fire when no static file exists for that path, so any prefix we wire up this way is off-limits for SSR routes in the future.
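
For illustration, such a rule in netlify.toml might look like the following; the path prefix and header name are placeholders, not our actual configuration:

```toml
# Illustrative sketch only. Non-forced, so it fires only when no static
# file exists for the requested path.
[[redirects]]
  from = "/docs/*"
  to = "/404"
  status = 404
  headers = {X-Error-Context = "docs"}
```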

Triggering builds

Before the move to SSG, Netlify deployments were triggered automatically on every merge to main. With builds taking roughly 30 minutes and a high merge volume, that quickly became impractical. Development continued on main, but production deployments switched to a dedicated production branch. A script force-pushed main to production only when new commits existed. Otherwise, rebuilds were triggered via Netlify build hooks routed through existing serverless functions.
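
Those hook-based triggers are plain POST requests. A minimal sketch of such a function in Netlify's Functions v2 style, with an assumed environment variable name (the real functions also carry auth and routing logic):

```ts
// netlify/functions/trigger-rebuild.ts — illustrative sketch.
export default async (): Promise<Response> => {
  // Netlify build hooks are fired with an empty POST request.
  const res = await fetch(process.env.NETLIFY_BUILD_HOOK_URL!, { method: 'POST' });

  return new Response(res.ok ? 'Build triggered' : 'Trigger failed', {
    status: res.ok ? 202 : 502,
  });
};
```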

Infrastructure for cron jobs

Our setup didn't require permanent servers, so the infrastructure for scheduled jobs didn't exist. To avoid adding cost or delaying the project, GitHub Actions became the obvious choice. It works, but it comes with tradeoffs: jobs can run 10–15 minutes late, and occasional internal server failures are unavoidable.
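
A scheduled trigger of that kind fits in a few lines of workflow YAML. A sketch, with the schedule and secret name as placeholders:

```yaml
# .github/workflows/scheduled-rebuild.yml — illustrative sketch.
name: Scheduled rebuild
on:
  schedule:
    - cron: '0 * * * *'   # hourly; GitHub may start the job 10-15 minutes late

jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      # Netlify build hooks accept an empty POST.
      - run: curl -fsS -X POST "${{ secrets.NETLIFY_BUILD_HOOK_URL }}"
```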

Support for on-demand rebuilds

Storyblok has employees all over the world. 300 users with edit access, spread across time zones, means content changes happen every few minutes. With 30-minute builds, deploying on every merged commit was no longer viable. We redesigned how builds get triggered and, initially, made a deliberate call to prevent editors from triggering rebuilds themselves.

The key takeaway is that SSG turned a runtime performance problem into a build-time throughput and concurrency problem.

The concurrency and developer experience fixes that made it possible

Once we accepted that builds were our new bottleneck, the next step was reducing build time.

Here’s what made that possible:

Tier-based rate limits

Storyblok's js-client had a rate limit defaulting to 5 requests per second. This was easy to miss and meant users were unknowingly throttled far below what the API actually supports. Both our customers and the deployment platform reported rate limiting as a bottleneck. The most recent release, js-client 7.2.0, introduces a ThrottleQueueManager with a dynamic, tiered system that automatically selects the right limit based on request type: 1,000 requests per second for published stories, 50 for single stories or small listings, and so on, matching the real constraints of the Content Delivery API. The client now also parses X-RateLimit response headers to adjust limits in real time and bypass rate limiting entirely for in-memory cached published responses.

No manual configuration needed. This alone delivered about a 35% reduction in build time.
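
To make the tier idea concrete, here is a deliberately simplified sliding-window sketch; it is not the js-client's actual ThrottleQueueManager, and the tier names are illustrative:

```ts
// Deliberately simplified per-tier throttling; the real ThrottleQueueManager
// in storyblok-js-client is more sophisticated (header parsing, cache bypass).
type Tier = 'published-stories' | 'single-story';

// Requests per second per tier, echoing the limits described above.
const LIMITS: Record<Tier, number> = {
  'published-stories': 1000,
  'single-story': 50,
};

// Timestamps (ms) of requests made in the last second, per tier.
const windows = new Map<Tier, number[]>();

async function throttled<T>(tier: Tier, request: () => Promise<T>): Promise<T> {
  for (;;) {
    const now = Date.now();
    const recent = (windows.get(tier) ?? []).filter((t) => now - t < 1000);
    windows.set(tier, recent);
    if (recent.length < LIMITS[tier]) {
      recent.push(now);
      return request();
    }
    // Window is full: sleep until the oldest request ages out, then re-check.
    await new Promise((resolve) => setTimeout(resolve, 1000 - (now - recent[0])));
  }
}
```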

Exponential backoff with full jitter

Static builds ask for a lot of content in parallel. Scale that across multiple processes and machines, and things start to fail: bursts of traffic trip limits, then synchronized retries trip limits again.

Exponential backoff with full jitter, introduced in October 2025, breaks that cycle. Instead of every request retrying at the same intervals, retries are spread out randomly, reducing contention and improving success rates under load. This change also increased the retry ceiling from 3 to 12 attempts and reduced batch sizes in CLI migration workflows from 100 to 6, which lowers burst pressure on the API. The result is a client that degrades gracefully under contention rather than failing fast, and doesn't require any changes from the systems consuming it.

Learn:

About exponential backoff and jitter

  • Exponential backoff increases the delay between retries exponentially to reduce the risk of overloading the system.
  • Jitter adds randomness to those delays so that clients don't all retry at the same intervals; synchronized retries in distributed systems can cause repeated load spikes.
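
A minimal sketch of the combined technique; the base delay and cap are arbitrary here, and only the retry ceiling of 12 comes from the release described above:

```ts
// Retry with exponential backoff and full jitter: the wait before
// attempt n is drawn uniformly from [0, min(cap, base * 2^n)).
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 12,   // mirrors the raised retry ceiling
  baseMs = 100,       // illustrative base delay
  capMs = 30_000,     // illustrative upper bound on any single wait
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
      const delay = Math.random() * ceiling; // full jitter: anywhere in [0, ceiling)
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```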

Node streams in the CLI

The CLI v4's stream pipeline keeps 12 concurrent requests in flight continuously across all stages. While one stage is waiting on HTTP, another is writing to disk, and another is transforming data. The result? Up to a ~4x speed-up.

Each processing stage uses a semaphore to cap concurrency at 12 parallel requests. Crucially, each stage signals readiness for the next item immediately instead of waiting for its current work to finish; the pipeline keeps flowing, and requests are always in flight.

Built-in lifecycle hooks ensure nothing is dropped. Before any stage closes, it waits for all in-flight operations to settle. Backpressure is handled automatically by the pipeline, so each stage only receives new items as fast as it can process them.
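
As an illustration of the pattern, not the CLI's actual code, a stage that keeps up to 12 items in flight and yields results as they settle can be written as an async generator:

```ts
// Illustrative concurrent pipeline stage. Admits new items as soon as a
// slot frees up, so up to `limit` requests stay in flight continuously;
// errors propagate and end the stage (retries omitted for brevity).
async function* mapConcurrent<T, R>(
  source: AsyncIterable<T>,
  worker: (item: T) => Promise<R>,
  limit = 12,
): AsyncGenerator<R> {
  const inFlight = new Map<symbol, Promise<{ key: symbol; value: R }>>();

  const settleOne = async (): Promise<R> => {
    // Wait for whichever in-flight task finishes first, then free its slot.
    const { key, value } = await Promise.race(inFlight.values());
    inFlight.delete(key);
    return value;
  };

  for await (const item of source) {
    const key = Symbol();
    inFlight.set(key, worker(item).then((value) => ({ key, value })));
    // Only block on a result once all slots are occupied.
    if (inFlight.size >= limit) yield await settleOne();
  }
  // Lifecycle: drain every in-flight operation before the stage closes.
  while (inFlight.size > 0) yield await settleOne();
}
```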

Page sizes also went up: 100 per page for stories and assets, and 500 for migrations, reducing the number of round-trips needed to paginate through large datasets. All of these decisions shift the bottleneck from sequential HTTP latency to the actual throughput limits of the API.

Parallel page builds

At this point, we had already reduced the build time from 27 minutes to 19 minutes. A significant improvement, but with plenty of room to go further. Fortunately, the updated js-client introduced another opportunity: adding concurrency to the page builds themselves.

Astro offers a build.concurrency option that builds multiple pages simultaneously.

While it can be useful for speeding up overall build times, Astro recommends this option only when alternative optimization methods fail. The problem is that setting too high a value results in too many simultaneous page builds, which strains memory and ultimately slows the process.

After careful testing, the latest website update builds up to eight pages concurrently, resulting in a massive reduction in overall build time.
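
The setting itself is a single option in the Astro config:

```ts
// astro.config.mjs
import { defineConfig } from 'astro/config';

export default defineConfig({
  build: {
    // Build up to eight pages in parallel. Raise with care: too high a
    // value strains memory and can slow the build back down.
    concurrency: 8,
  },
});
```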

The result: from 27 minutes to 4 minutes

Working around the migration constraints and adjusting the concurrency behavior allowed us to cut build time from ~27 minutes to ~4 minutes.

The new setup delivered such significant improvements that we can now support automatic hourly redeployments while also allowing editors to manually trigger builds whenever needed.

And it wasn't just us. Customers like NordVPN experienced a 3-4x improvement after hitting the same bottleneck.

What’s next?

After resolving the biggest concurrency bottlenecks, our focus stays on further reducing build times:

  • We continue to refine the hybrid approach, identifying which SSR pages are good candidates for pre-rendering and which genuinely need to stay dynamic.
  • We need to implement reliable cron jobs—potentially good candidates for a scheduled trigger workflow in Storyblok’s FlowMotion.
  • We plan to investigate the best way to keep dynamic page listings synced with their static counterparts.

Just like any other web project, there are always opportunities for improvement.