As part of the announcement, Vercel unveiled a whole host of exciting new features: a native compiler written in Rust (usurping Babel!), middleware support, URL imports (like Deno!), React 18 and server component API support, and much more.
Instead of going through all these amazing new things, as many have done already, I wanted to take an in-depth look at the most wide-reaching difference-maker in this update: the Rust compiler. Every app built with Next.js will make use of this, so it’s important to understand what real-life impact this will have.
Feeling a little Rusty
Below is a performance comparison, sourced from the release announcement, found here:
Now, this is a release announcement, not a technical paper, so I’m not going to criticize that data presentation. It certainly looks impressive, but it lacks substance and depth, and it’s not an indicator of the real-world mileage you’d get from making this switch. I can understand engineering managers across the industry having doubts about moving to a new build stack from something as mature as Babel, especially without in-depth data indicating the real-world return on investment (that is, whether the reward justifies the risk of upgrading now).
I wanted to provide a proper build performance comparison for real-world use cases, alongside a “control”, so I picked three test cases, comparing Next.js 11.1 against Next.js 12:
- A brand new `create-next-app` app, with a very simple homepage (our control).
- A static website using SSG and ISR, featuring images, blog posts etc. (Actually my website, tpjnorton.com).
- A complex web application using SSR, Next API routes etc.
These three examples were chosen to provide meaningful numbers for some common use cases.
It’s easy to measure build time improvements meaningfully, as they’re well-reported (printed straight in the build output) and easy to reproduce. Getting data for Fast Refresh, however, is hard to do precisely, and presenting those results in a way that communicates improvements objectively is harder still. So, for the purposes of good science, I’m only going to collect and report build data. I’ll also clarify that all these apps use TypeScript, and that I’ve disabled `eslint` during builds to focus on actual compile performance. I measure both initial (no `.next` directory) and incremental build times. Oh, and I’m on Windows 10.
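The measurement loop itself is simple enough to script. Below is a minimal sketch of the procedure described above; the `time_build` helper name is mine, and the actual `yarn next build` invocations are shown as comments since they depend on the project being measured:

```shell
#!/usr/bin/env sh
# Hedged sketch of the measurement procedure. time_build times an
# arbitrary command and prints the elapsed wall-clock seconds.
time_build() {
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  end=$(date +%s)
  echo $(( end - start ))
}

# Initial build: remove the .next cache first, then build.
# rm -rf .next
# time_build yarn next build

# Incremental build: re-run the build with .next left intact.
# time_build yarn next build
```

Deleting `.next` before the first run gives the initial build time; leaving it in place gives the incremental one.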
Let’s get going!
Case 1: Starter App
First, we create a brand new next app using the following command:
`yarn create next-app --typescript`
This gets us set up with a very basic project using Next.js 12. To get a Next.js 11 version, we run the same command and then pin the dependencies back to Next.js 11, giving us the same starter on the older release.
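Concretely, pinning back boils down to a small package.json edit; the patch versions below are just the ones that were current around the time, so treat them as illustrative:

```json
{
  "dependencies": {
    "next": "11.1.2",
    "react": "17.0.2",
    "react-dom": "17.0.2"
  }
}
```

After editing, a fresh `yarn install` pulls the older toolchain, and the same build commands can then be timed against both versions.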
Case 2: Static Website
For this case, I ran the initial and incremental build tests against an existing Next.js 11.1 app first, then upgraded the site to Next.js 12 and re-ran the same tests.
Case 3: Complex SSR App
I took the same approach as for the static website: initial and incremental build times were measured on Next.js 11 first, then again after upgrading to Next.js 12.
Oh boy, you’re not going to like this.
Here is a chart showing my results. Columns are grouped by build type, and each color represents a different use case (Google Sheet here):
With the exception of the starter app, all the builds with Next.js 12 are slower than with Next.js 11. I’m extremely surprised. For the starter app, I could understand a slower build, as there’s a lot of non-code-compiling work to do to complete a `next build`. However, as we move from the static app to the SSR app, which is also a move to apps with increasing amounts of code (the static app is small-to-mid size, and the SSR app is large), this overhead should diminish comparatively speaking, and we should be seeing some healthy performance gains with Next.js 12.
At least, that’s what I would expect.
I think it’s difficult to make any real guesses about why this might be, and at best that’s all they’ll be for me for now: guesses. My instinct tells me that `swc` itself is much faster, but that something about how the new toolchain is invoked or used is causing the in-practice slowdown for me. Maybe it hasn’t been tested on Windows much yet; it’s hard to be sure.
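One cheap way to probe that guess: in Next.js 12, the presence of a custom Babel config opts the app out of the new Rust compiler, so adding a minimal `.babelrc` and re-running the builds would isolate whether `swc` itself is the slow part. The file below is just the standard Next.js Babel preset:

```json
{
  "presets": ["next/babel"]
}
```

If builds are just as slow with this file present, the regression likely lives elsewhere in the Next.js 12 build pipeline rather than in `swc`.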
It’s important to clarify that my experiments are not very rigorous. I mostly wanted a ballpark figure for how much faster builds would be, as I was extremely excited by the news of this new version. As always, software projects (especially Next.js) evolve constantly, and while major changes introduce kinks like those seen here, they are often resolved very quickly. I should also acknowledge a personal bias: I’m a huge fan of Next.js, I’ve used it for new projects and ported existing ones to it, and this move to native toolchains, while yet to bear fruit for me, is a brave and powerful innovation in the world of the web. So I’m keen to cut them some slack where I can 😄.
I think it’s very easy to jump on something new like this and write a half-baked hit piece for clicks about how rubbish something is because “it didn’t work well for you”. That’s not what I’m trying to do here. Though I acknowledge these results appear controversial, I feel it’s important to share them, as they teach a meaningful lesson to developers and managers alike about adopting shiny and exciting new tech: it’s often best to wait a bit first.
Thanks for reading!