995

April 13th, 2026

Next.js Vendor Lock-in No More


Transcript

Wes Bos

Welcome to Syntax. Today we have Tim and Jimmy on. Tim is, of course, the lead dev of Next.js, and Jimmy is the head of Next.js. And they're here to talk to us today about some cool stuff that they're rolling out around their adapters platform, which is going to allow you to host Next.js on different providers like Cloudflare and Netlify, as well as, like, different runtimes as well. I mean, like, if you wanna use Node or Bun or whatever, you can do that.

Wes Bos

The conversation takes a little bit to get warmed up, but trust me, it's gonna be worth it, because these are some super smart guys, and I actually really enjoyed this conversation. We talk about caching, Next.js, performance. We talked about Turbopack and all the Turbopack internals and the whole process of building that. We, of course, asked them, why don't you just use Vite like everybody else is asking? And they had a really interesting answer to that as well, which is: maybe, maybe one day. And the infrastructure, just, like, in general. Like, Next.js, of course, is code, but if you want it to run like it does on Vercel, it requires a whole bunch of infrastructure, so we kinda go through that as well. Let's get into

Guest 1

it.

Wes Bos

So the other day, you guys launched — what, the adapters platform? Tell us about what that is. Yeah. So we launched the adapters,

Guest 1

like, API as, like, stable, which has been a long time coming, as you said. We've been working on it for close to a year at this point. And so the idea, right, is that Next.js by itself, you know, it runs super well with, like, next start, you know, on a simple VPS. You can tap into, like, all the capabilities of Next.

Guest 1

But when it comes to kinda scaling it, it's always been maybe slightly more difficult, just because, like, Next.js capabilities — like, the features, like caching and ISR — just require, like, you know, synchronization across, like, you know, multiple nodes if you need it, or with your CDN if you require it. And we basically introduced a layer so that you can basically tap into this and, you know, have, like, somewhat of a stable contract so that you can easily adapt it wherever you kinda wanna host it. Like, the adapters API, it's useful for yourself if you're just, like, hosting, like, you know, just across, like, five simple servers — you might not even want, like, a VPN, like, sorry, like a CDN or something. But it's also useful if you're someone like Cloudflare or Netlify and you're, like, hosting Next.js in a specific way with, like, serverless. And in that case, like, you want potentially, like, to host your middleware in a different way. And so, yeah, we created that API to answer the needs of the community.

Guest 1

We also created, like, an ecosystem working group, which is basically composed of, like, us and partners across Google, Netlify, Cloudflare. We used to have AWS in the group, but they dropped out at some point. But the idea is that, since a lot of, like, the Next.js features require a certain level of, like, infrastructure work Yeah. To make it work at scale, we basically created this group so we can, like, share ideas in advance, collect their feedback, and kinda, you know, make sure they don't, you know, dislike supporting Next.js on their own platform. Yeah. Yeah. So you mentioned that you've been working on this for,

Scott Tolinski

quite a while now.

Scott Tolinski

So I take it, then, that this wasn't a knee-jerk reaction to the fork. Right?

Guest 1

Yeah. Not not really. No.

Guest 1

If you look back, we published the RFC a year ago, and — I don't know, the timeline is quite fuzzy to me at this point, too, because it's been so long. But, like, we had started engaging with, like, Netlify and Cloudflare and the OpenNext guys even a few months before that.

Guest 1

So not really. No.

Guest 1

Can't say it hasn't, like, helped short term. Yeah. I think one of the main points there, right, is that, like, the main reason why Cloudflare did this is that they kinda, like, wanted to avoid, like, you know, some sort of, like, vendor lock-in, and they wanted to make it, like, somewhat easier to support some version of Next.js on Cloudflare.

Guest 1

And so we did wanna signal to the community that we also — well, we had to Yeah. — had been cooking on this. It's a great timeline, though, because, internally, this matched up with, like, our own testing. We were like — the reason why it wasn't stable is just that we wanted all of our own websites to be working on this adapters API first before

Guest 3

before we published it. Yeah. Yeah. So we've been dogfooding it for quite some time, like, outside of, like, this release. Right? Like, this release is marked stable, but we introduced, I think it was, the beta at Next.js Conf last year, in October.

Guest 3

But the reason that we hadn't marked it stable yet is that we also had been dogfooding it on our own application. So, like, we built the adapter for Vercel, and we had been rolling it out to, like, all of Vercel's own applications, like, similar to how we do it for Next.js features, like, in general.

Guest 3

And that took some time because, like, we have a pretty large application, and, like, we wanted to iron out all the, like, issues that we had with this. Because, for reference, like — think about it — the adapter that we had before had already been, like, battle-tested across, like, eight years or so. Like, since we introduced, like, Now v2, when we, like, introduced serverless functions on Vercel — like, this is still the same adapter from back then.

Guest 3

So it hit all the, like, possible edge cases that you could find in, like, the Vercel version of Next.js. This new adapter is, like, built from the ground up on the adapters API.

Guest 3

So we're really, like, dogfooding this, like, in the same way that, like, every other platform is. And Okay. And because of that, like, we had to do, like, some extra checks to make sure that, like, we can roll it out to everyone in this way.

Guest 3

So, yeah, like, that's, like, why the timeline was also, like, spread out over a year instead of, like, hey, we're gonna have this new thing. Yeah. Because in practice, like, if you think about it — the adapters API inside of Next.js itself, like, that's not that complicated. It's like a couple of functions that have, like, a typed contract.

Guest 3

Like, the real point is, like, you now have an integration layer that you don't have to, like, reverse engineer on any side of things — like, not on Vercel, not on any other platform.
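To make the "couple of functions with a typed contract" idea concrete, here's a minimal sketch in TypeScript of what such an adapter contract could look like. All names here (`Adapter`, `RouteEntry`, `onBuildComplete`, and so on) are hypothetical illustrations, not the actual Next.js adapters API:

```typescript
// Hypothetical sketch of a framework adapter contract. Names are illustrative.
interface RouteEntry {
  page: string;                         // e.g. "/blog/[slug]"
  kind: "static" | "dynamic" | "api";   // how the route should be served
  regex: string;                        // matcher a platform can compile
}

interface AdapterContext {
  routes: RouteEntry[];
  outputDir: string;
}

interface Adapter {
  name: string;
  // Called once the build output and route metadata are finalized.
  onBuildComplete(ctx: AdapterContext): void | Promise<void>;
}

// A toy adapter that indexes routes by page path, the way a platform might
// when wiring routes to serverless functions or a CDN config.
function createRouteIndexAdapter(): Adapter & { seen: Map<string, RouteEntry> } {
  const seen = new Map<string, RouteEntry>();
  return {
    name: "toy-platform",
    seen,
    onBuildComplete(ctx) {
      for (const route of ctx.routes) seen.set(route.page, route);
    },
  };
}

const adapter = createRouteIndexAdapter();
adapter.onBuildComplete({
  outputDir: ".next",
  routes: [
    { page: "/", kind: "static", regex: "^/$" },
    { page: "/blog/[slug]", kind: "dynamic", regex: "^/blog/([^/]+)$" },
  ],
});
console.log(adapter.seen.get("/blog/[slug]")?.kind); // "dynamic"
```

The point of such a contract is exactly what the guests describe: the framework hands over its route metadata (matchers, static vs. dynamic, output files) in a typed shape, so a platform doesn't have to reverse engineer the build output.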

Guest 3

Because, like, typically, like, you could already host, like, Next.js on Netlify or Cloudflare or, like, through OpenNext on AWS or things like that.

Guest 3

But, basically, like, this new API makes it a lot easier for these platform teams to just integrate with Next.js directly instead of, like, having to, like, find which files need to be included in the serverless function Yeah. Things like that. Hack around it. It's like, I I was trying OpenNext

Wes Bos

on Cloudflare for quite a while, and I was, like, digging into it. This was during the early days of the OpenNext Cloudflare adapter, so I was doing, like, bug reports and whatnot. And I was looking into it, and, like, a lot of it was just, like, regexes and taking the bundled output and, like, un-compiling it. And I was like, oh, this really is just a bunch of hacks on the output. Like,

Guest 3

everything that Next.js, like, knows — like, basically, like, all this, like, metadata around, like, what is a route, how does it match, like, that kind of thing. Yeah. That's now all part of the adapters API. Yeah. And, like, let's talk about, like, the different pieces to this because

Wes Bos

I think one of the reasons why it was historically hard to put Next.js on other platforms is because it kinda blurs the line between, like — this is a Node app, but also there's all of this infrastructure that needs to come along with it. And, like you said, that special sauce of why it was so easy on Vercel was because, like, you guys have had your own adapter platform and have made it, like, line up with all the different pieces. Right? You got CDN and databases and caching and all that type of stuff. If somebody's, like, just looking at this right now, like, what are the different pieces of infrastructure that are needed for, like, a Next.js app that uses every single feature under the sun?

Guest 1

The thing is, something that, like, I kinda want to try to dispel a little bit is that a lot of what we're talking about — the sort of, like, complexity that comes with some of the Next.js features — those are, like, mostly additive. Compared to, like, the other frameworks, on its own, if you're just, like, you know, looking for a framework that kinda just, like, you know, server-side renders, or just, like, you know, serves API routes and serves some, like, cached content — that stuff just kinda works out of the box. Right? You don't need, like, crazy infra or anything to do so. You just need, like, a server that can return you a response. Where we go a bit further is when it comes to basically everything caching related. And we've made some updates to our docs to kinda clarify this, but those things are mostly, I'd say, performance optimizations.

Guest 1

Right? Like, partial prerendering, which is, like, one of the features that allows you to kinda serve, like, a static page and then kinda compose it with, like, some dynamic content. I love those. I want those features. Those are great features. Yeah. Yeah. Yeah. And it works with next start. Right? Like, you know, you're gonna hit your server. It's gonna return you, without rendering, just the static HTML, and then it's gonna do, like, the render. Right? And when it comes to Vercel, the way we kinda optimize for this is that this — sorry — this HTML shell can be served from the CDN rather. Right? Like, closer to the user, and then we'll, like, invoke the dynamic part in the background.

Guest 1

This is, like, purely just taking the primitives that we've put in the framework and then just, like, optimizing them. So I think that answers the question a little bit. Right? Like, I think to get, like, the experience — what I would recommend to anyone who's, like, adopting the adapters to get those features is: you want your server close to your database. Right? If you're doing, like, any requests — like, just the thing that makes sense in general. And then you want your static content always as close to the user as possible.

Guest 1

And you need something that kinda handles the connection between the two.

Guest 1

And it could be, like — on Cloudflare, it could be like a, you know, like a worker. Right? Mhmm. And that worker would just, like, execute at the edge, return you the CDN content, the static shell, and then, like, invoke another worker that is closer to the origin, basically.

Guest 3

Mhmm. And even for this case — or, like, if you're talking about PPR, for example — this static shell, like, even if you serve that from a server that does dynamic rendering, so, like, if you serve it from next start, it's still a win, because, like, you still have this, like, immediate response.

Guest 3

The only difference is that this response is not immediately coming from, like, a CDN per se, but it's instead, like, served from your, like, server that you have hosted.

Guest 3

And that's the same way if you go to any other platform.

Guest 3

It works in the same way. But what you can do, like, additively to that is, like, you can make it so that your CDN supports this, like, new primitive, like, PPR, for example, and then it can serve that shell from the CDN itself while stitching in the dynamic render as well. There's,

Guest 1

like, maybe, yeah — the PPR part is, like, an optimization.
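The serving flow the guests describe can be sketched as a toy model: the prerendered shell is flushed to the client immediately, and the dynamic hole is stitched in once the (slower) origin render resolves. This is a conceptual illustration, not Next.js internals — all names here are made up:

```typescript
// Toy model of partial prerendering's serving flow.
async function servePartiallyPrerendered(
  shell: string,
  renderDynamic: () => Promise<string>,
  write: (chunk: string) => void,
): Promise<void> {
  write(shell); // instant: from a CDN, or straight out of `next start`
  write(await renderDynamic()); // appended once the dynamic render finishes
}

const chunks: string[] = [];
await servePartiallyPrerendered(
  "<html><body><nav>site chrome</nav>",
  async () => "<main>personalized content</main></body></html>",
  (chunk) => chunks.push(chunk),
);
console.log(chunks.length); // 2 — shell first, dynamic part second
```

Whether the first `write` happens at a CDN edge or on your own box is exactly the optimization being discussed: the model is the same either way, only the latency of that first chunk changes.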

Guest 1

I will say, probably, like, something that I would consider kinda important and that we still wanna make improvements on is the cache synchronization story. Right? Let's say you're having, like, five servers, you're using Next, and, like, you might, you know, have, like, started with, like, next start on all your, like, instances.

Guest 1

Next.js caching, like — the pretty powerful thing about it is that, like, we allow you to kinda invalidate it whenever you want, and then we do it in the background.
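That synchronization requirement can be sketched as a toy model: when one node revalidates a tag, every node's local cache has to drop the matching entries. The `revalidateTagEverywhere` helper below stands in for whatever shared channel (pub/sub, a queue, etc.) a real multi-node deployment would use — all names are illustrative:

```typescript
// Each node keeps a local cache of rendered pages, tagged for invalidation.
class CacheNode {
  private store = new Map<string, { value: string; tags: Set<string> }>();

  set(key: string, value: string, tags: string[]): void {
    this.store.set(key, { value, tags: new Set(tags) });
  }

  get(key: string): string | undefined {
    return this.store.get(key)?.value;
  }

  invalidateTag(tag: string): void {
    for (const [key, entry] of this.store) {
      if (entry.tags.has(tag)) this.store.delete(key);
    }
  }
}

// Stand-in for the shared channel that fans a revalidation out to all nodes.
function revalidateTagEverywhere(nodes: CacheNode[], tag: string): void {
  for (const node of nodes) node.invalidateTag(tag);
}

const nodes = [new CacheNode(), new CacheNode(), new CacheNode()];
for (const n of nodes) n.set("/blog", "<html>cached</html>", ["posts"]);

revalidateTagEverywhere(nodes, "posts");
console.log(nodes.every((n) => n.get("/blog") === undefined)); // true
```

The hard part in production is the fan-out itself — making sure the invalidation reliably reaches every node and the CDN — which is the gap the adapters API is meant to give platforms a contract for.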

Guest 1

And, of course, whenever you call these sort of revalidation methods, one thing that is important for your provider, or yourself, is that, like, a revalidation is, like, shared across all nodes. And so that's one of the things that we're making better with the adapters API, but we still, you know, still got some work to do, I'd say. Okay. And, like, when when someone's, like, having a cache,

Wes Bos

like like, when, like, a page or component is cached, where is that typically stored? Is that thrown into, like, a key-value store or to a file, or does it matter?

Guest 1

So we have multiple layers of caching, as you might see from the memes about our docs. Right? And I don't know how familiar you are with, like, use cache or cache components.

Wes Bos

Oh, yeah. And I've many times said, like, I want that in every single Yeah. Thing. Like, one of the reasons I'm such a huge React Server Component fan is the ability to, like, fetch and cache and just do everything in the component rather than, like, everybody else's, like, route layer — except for Svelte.

Wes Bos

Svelte is along with the async stuff now. Yeah.

Guest 1

And see, the idea behind use cache — and why we chose, like, you know, a very, almost, like, generic kind of name for it and cache components — is that we kinda want you to think about caching at every possible layer. So it starts, like, purely at the client. So if you use use cache, you can actually add a property that will just say, you know, cache it on the browser, so that, like, across my session, the page is, like, stored there, and I can configure, like, how long it should be kept around. Then it can also happen at, like, runtime, when you server-side render your page.

Guest 1

Now this cache can also live, well, on your server across your requests, and you can have, like, a time that you wanna keep it around. We can also have this at build time, for static pages, where now this cached page should live in your CDN too. And so there are multiple layers to it, but we think, basically, it allows you to build, like, you know, the sort of most composable kinda app. Some pages, for example — yeah, it's fine if they live on the CDN, like, and they're never really sort of, like, revalidated. And if you do so, you're doing it, like, you know, kinda random — not randomly, but, like, occasionally, like from a CMS or something. Mhmm. But those things don't matter, right, if you're, like, a page, like, in an app like ChatGPT, where you would rather wanna cache on the client instead.

Guest 1

One thing I'm really excited about, for example, is that we wanna extend use cache to, like, another layer, so that we can have also, like, an offline kinda layer. So right now, like, we can cache it on clients across sessions, but the idea is, like, what if we just, like, also kept it across reloads, seamlessly? Like, then you would just be able to reload your page while offline and then still have, like, the data from before or something, out of the box, right, without having to do, like, potentially, like, a synchronization layer.
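The layered model being described — browser/session, server runtime, CDN/build-time, each with its own lifetime — can be sketched as a read-through lookup that walks from the layer closest to the user outward. The layer names and TTL values here are made up for illustration; this is not the use cache API itself:

```typescript
// Toy model of layered caching: consult the closest layer first.
interface CacheLayer {
  name: "browser" | "server" | "cdn";
  ttlMs: number; // how long entries in this layer stay fresh
  entries: Map<string, { value: string; storedAt: number }>;
}

function readThrough(
  layers: CacheLayer[],
  key: string,
  now: number,
): { value: string; from: string } | undefined {
  for (const layer of layers) {
    const hit = layer.entries.get(key);
    if (hit && now - hit.storedAt < layer.ttlMs) {
      return { value: hit.value, from: layer.name };
    }
  }
  return undefined; // miss everywhere: fall back to a fresh dynamic render
}

const layers: CacheLayer[] = [
  { name: "browser", ttlMs: 60_000, entries: new Map() },   // session cache
  { name: "server", ttlMs: 3_600_000, entries: new Map() }, // runtime cache
  { name: "cdn", ttlMs: 86_400_000, entries: new Map() },   // static/build-time
];

// Only the CDN holds this page, so the lookup falls through to it.
layers[2].entries.set("/docs", { value: "<html>docs</html>", storedAt: 0 });
console.log(readThrough(layers, "/docs", 1_000)?.from); // "cdn"
```

The offline idea mentioned above would, in this sketch, amount to making the "browser" layer survive reloads instead of living only for the session.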

Wes Bos

Yes, that'd be cool.

Guest 3

I like that a lot. So a lot of the APIs that we're adding are, like — especially with cache components — like, there's, like, many things that we haven't, like, really talked about, like, this offline thing that Jimmy's talking about.

Guest 3

Like, right now, they're building the foundation of what this, like, caching layer will look like. The other thing is, like — while Jimmy said that, like, we want you to think about caching in every layer,

Guest 3

The default is actually to be, like, as dynamic as possible. So if you use, like, dynamic APIs, it will basically, like, feel like how you were using, like, getServerSideProps before, or things like that. If you use, like, dates or things like that, things become dynamic still. And then, like, basically, the biggest change from, like, Next.js 13 and, like, the first iteration of all these caching APIs is that we want you to just, like, build the app first, and then you can start optimizing. Whereas previously, you were already in this, like, optimized state by default, and that was, like, kinda confusing.

Guest 3

So we really want you to, like, see these APIs as, like, additive. Right? Like, we want you to add use cache instead of, like, it's cached by default or, like, you have to reason about it — that kind of thing.

Guest 3

And then, like, when you want to add use cache, like Jimmy said, you can do it everywhere, in the sense that, like, you can do it at the page level. You can do it at the individual function level, or at, like, components even.

Guest 3

And that's something that was, previously, not even possible. Like, even in the first version, like, it was either it's fully static or it's fully dynamic.

Guest 3

And, like, it started with the static thing, and then, like, everything is cached — but now you have to opt into it. And,

Wes Bos

and it feels nicer in a way as well, like, when you're building the app. Yep. And, like, say somebody builds out a Next.js app, you know — bunch of pages, bunch of components, does a whole bunch of stuff — and they say, okay, I feel like the app is in a really good spot. Now I either want to move to using less resources or make the app a little bit faster. Now I'm gonna start looking into caching. Like, what would be your first, like, couple steps of attacking, like, the lowest-hanging fruit?

Guest 1

The nice thing about our model is that we kinda — we still try to nudge you toward, like, making the choice as you develop. I think other frameworks kinda have, like, this other sort of, like, maybe less opinionated way of Yeah. Developing. You can, like, sort of — you can start with, like, just client-side fetching.

Guest 1

And at some point, yeah, you do need to kinda invest into, like, optimizing your app, pulling it potentially into the server or the loader or something.

Guest 1

And, while we don't cache for you by default anymore, a big part of, like, the programming model now with cache components is that, like, you'll write your code as, like, intended, but then, like, Next.js will kinda mention, like, with, like, a nice little warning window: like, hey, either you need to add, like, a Suspense boundary here so that you keep this thing dynamic, or you just cache it now with, like, use cache.

Guest 1

And then, you know, it's like a simple drop-in directive, and you're sort of, like, set up from the get-go already. Right? Because then it allows us to progressively do it. And so, yeah, I think this is kinda, like, the nice part — that's why I really like Next.js by itself.

Guest 1

Right now, today, right, if you have, like, a slow Next.js app, the first thing I suggest, right, is — like, you're probably not on cache components, just because, like, it's somewhat very recent. Like, we haven't, like, put, you know, as much effort as we could have into the documentation and experience yet. Right? Yeah. But that's the first thing. Right? That's what opts you into this:

Guest 1

Hey, I'm gonna warn you about this little thing. And really, take it from there. Read the use cache docs. Right? Like, no one likes having to learn about, like, a new directive here and there. And, you know, for most people, performance is also somewhat fine.

Guest 1

But, I think, you know, our idea is just — whenever you're ready, we'll sort of, like, we'll be there.

Wes Bos

Mhmm. Waiting for us. That's great, Jimmy.

Wes Bos

I like that.

Scott Tolinski

Yeah. I'm curious — with your whole adapter pattern, is there, like, a minimum subset of features that adapters need to support before being considered, like, a blessed adapter by the Next.js team? Yeah. It's a great question.

Guest 1

Not really. So the idea is that, like, I don't wanna gatekeep necessarily on, like, whatever, you know, someone's platform is supporting or not. Sometimes, like — the thing with, like, PPR is that, like, it does require, like, a little bit of work if you want to do this at scale.

Guest 1

And there's no right and wrong here. It's more, like, just — the users kinda should make their own choice. But one thing we do require, as part of, like, the adapters working group, if they wanna be blessed, is that they should just run our test suite. So we've made some work to make this available for everyone, where you just, you know, plug in your API keys and, like, follow the pipeline on, like, how to just deploy it with, like, a few scripts here and there.

Guest 1

And then you can just participate in the same way that Next.js uses it, and just basically assess how good your platform is at supporting Next. And even then, it doesn't really cover — it's not very, like, opinionated about, like, whether or not you're running PPR from a CDN or not.

Guest 1

We don't really check for performance expectations here, rather just that, like, you load the page and it's working as intended.

Guest 1

Maybe it could be taking twenty seconds. I don't really care about this as long as the tests pass. And the other thing is, like, we're also not specifically gatekeeping on the amount of tests passing.

Guest 3

So, what we did is we give you the test suite.

Guest 3

The only thing we ask — so there's two sets of adapters. Right? Like, there's adapters that anyone could build, and there's the ones that are, like, part of the docs that we, like, explicitly mention.

Guest 3

And then we also publish the results from the test suite for those.

Guest 3

Yeah. So this doesn't mean that we'll, say, like, block an adapter for, like, any platform that doesn't pass, like, every single test, but it will be split out in the test suite. Like, we keep track of, like, which test is related to which feature, and then we publish, like, this adapter supports all features, like, for PPR, for example, or, like, it doesn't have, like, some other feature. And then you can choose for yourself, like, if you want to use the adapter or not.
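That per-feature reporting can be sketched as a small fold over tagged test results: a feature counts as supported only if every one of its tests passes. The names and feature tags below are illustrative, not the actual test-suite format:

```typescript
// Toy sketch of per-feature support reporting from a tagged test suite.
interface TestResult {
  name: string;
  feature: string; // which framework feature this test exercises
  passed: boolean;
}

function featureSupport(results: TestResult[]): Map<string, boolean> {
  const support = new Map<string, boolean>();
  for (const result of results) {
    // A single failing test marks the whole feature as unsupported.
    const soFar = support.get(result.feature) ?? true;
    support.set(result.feature, soFar && result.passed);
  }
  return support;
}

const results: TestResult[] = [
  { name: "serves static shell", feature: "ppr", passed: true },
  { name: "resumes dynamic stream", feature: "ppr", passed: false },
  { name: "revalidates tagged pages", feature: "isr", passed: true },
];

const report = featureSupport(results);
console.log(report.get("ppr"), report.get("isr")); // false true
```

Published this way, a platform can ship an adapter that skips a feature entirely, and users can see exactly which features it does and doesn't cover.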

Guest 3

Okay. Yeah. So that's, like, basically, like, how we set it up. Like, everyone in the working group is already working towards that. Like, they either published an adapter already or they are working on an adapter because they want to roll it out gradually as well.

Guest 3

And that set of adapters will be part of this set that we're going to call out in the docs. And if there's, like, other platforms that want to do the same, like, we have, like, a clear set of, like, this is what you have to do to set up the test suite and how you can,

Wes Bos

like, get your adapter part of this list as well. And this is not just for, like, different hosting providers. It's also for different runtimes as well. Right? Like, there's already a Bun adapter, which will then allow you to, like, obviously, work with Bun — I think it uses Bun's internal server.

Wes Bos

It has a Bun SQLite database for caching, things like that.

Wes Bos

So this is not just, like — only to run on Netlify or whatever — but it's also, like, if you literally have, like, a Bun server sitting in your closet, you can use this. Yep. Yeah. And so that's one case.

Guest 3

The one that is really interesting to me as well is the Kubernetes one that Google is working on right now.

Guest 3

Like, basically, that would allow you to scale the, like, the Next.js server, like, as is, using Kubernetes.

Guest 3

You know, we built, together with Jarred from Bun, the Bun adapter, also to prove out, like, that you can use the adapters API, like, for something different than just, like, a specific platform per se. Yeah. Yeah. I'm curious to see what, like, people build using this. Because, like, you can do, like, more — like, I saw someone try to use the adapters API, like, right after it came out, to create, like, single-file executables,

Guest 1

that spin up the server and things like that. Part of, like, why I was pretty excited about, like, pushing for the adapters API is that we kinda want to give you almost, like, full control over, like, everything Next.js can do and then what it allows you to do — like, so that at first, you know, that is targeted toward, like, platforms.

Guest 1

But one of the things, for example, that we are planning on working on next with the Cloudflare folks is, yeah, going deeper into the runtime story. Right now, it works mostly for build.

Guest 1

But when you're working with, like, workerd, for example, there's a bunch of APIs that are sort of not necessarily available, like, based on Node. Right? And so we wanna — I think Vite has done something similar with their environment API. We kinda wanna do something similar. Right? Allow you to have in dev as close of an experience as possible to what you're gonna get when you deploy it.

Wes Bos

So that's planned. Because that was my next question — like, one of the hugest pains in the butt with Cloudflare is that, like, if you are running Node locally and then you deploy the thing to Cloudflare, there's always these pain-in-the-ass little things that come up. And the solution to that is — like, debugging it sucks, because you gotta, like, sit there, commit, deploy, wait for the thing to build. You know? And they've solved a lot of that by allowing you to run, like, workerd locally, which means you can hit those issues right away before you deploy the thing. But you're saying that eventually will come to this, where you can use different runtimes in dev as well? Yeah. That's the idea. Right? Because as a user, I also hate this. Right? Like,

Guest 1

like, what we're driving really hard with the Next.js experience is that, like, next dev, next start, and when you push it on Vercel — it, you know, behaves exactly the same. Like, debugging capabilities are the same. And, like, I just kinda want to push that forward, and that too. Like, what we're selling, you know, with Next.js by itself is — like, I'm not married to the idea of, like, Node. I'm not married to, like, workerd or, like, Bun as a runtime itself. Like, what we're selling is APIs that just Yeah. Nicely Yeah. fit. And, like, hopefully — like, a lot of the stuff we do, obviously, is very targeted towards Node.

Guest 1

Mhmm. Just because that's the main default. But, like, if it came to a point where, like, Bun becomes, like, you know, the natural, like, platform on which to build JavaScript apps, then I would want Next to support that

Wes Bos

Yeah. As well. Right? Does that mean you have to have all of Turbopack running on these other runtimes as well?

Guest 1

No. It depends on, like — there might be some limitations there, right, in terms of, like, obviously, if you run, like, on a crazy OS that just doesn't support, like, our build of, like, Turbopack, then we do still kinda need some things.

Guest 1

But Turbopack — also, Turbopack is mostly written in Rust. Oh, yeah. So fully — Doesn't matter. Right? Yeah. Yeah. So Turbopack,

Guest 3

like, itself — you can see that as, like, a separate binary that we're running, and we're compiling it to every, like, mostly supported, like, OS, basically. Right? So if you're using, like, a convoluted OS, then, like, that might be a problem, but it works for, like, every, like, majorly used operating system. And that's just the operating system. Right? For the runtime — like, for example, like, Bun — like, you can use Bun for next dev today, because you replace Node with it.

Guest 3

And that knows how to run this. Like, it has the Node.js APIs to, like, interface with it. So we can just, like, run everything as is. If it needed something slightly different or anything like that, we could still run it. That's not an issue.

Guest 3

But, yeah, Next.js — like, Next.js itself, it's all mostly TypeScript, some JavaScript, and then the bundler is basically a 100% Rust at this point.

Guest 1

Mhmm. But, yeah, to be clear, we do rely on, like, a pretty good amount of Node-specific APIs — like, AsyncLocalStorage, for example. And those things are potentially either, like, you know, supported pretty well, or they might not be supported fully sometimes.

Guest 1

And so in those cases, like, that's where we need to do some work to either, like, try to abstract away some of our usage so that, like, a platform could then plug in their equivalent or something.

Guest 1

Oh, okay. I see. Yeah. I think, you know, as we work with, like, Cloudflare on supporting cache components better, I think we'll obviously run into sort of that kind of issue.

Wes Bos

Yeah. Oh, yeah. Yeah. There was one weird thing with Cloudflare. Like, their implementation of AsyncLocalStorage doesn't — I'm just looking it up.

Wes Bos

enterWith — you can't use enterWith or something weird, where you can't, like, bootstrap something with an existing object, which is kind of a bummer. But, like, that's also not really — it is a Next problem, but also, like, that's more of a Cloudflare problem, you know, if they don't support those APIs.

Guest 1

Yeah. Yeah. The idea is that we make, you know, as much work as possible on our end. But in the end, like, it's still up to the platform to, you know, choose to invest into making it work. They control the runtime. Right? So they really can also,

Guest 3

do some work in advance here. Mhmm. It is similar across frameworks as well. Because, like, if other frameworks use AsyncLocalStorage, then, you know, you run into the same problem. Right? Yeah.

Wes Bos

One question everybody has, and I'm sure you're sick of hearing this right now, so feel free to to punch me. But everybody's like, why doesn't Next just run on Vite? And do do you have any any response to that? Because, like, I know making Turbopack was was a huge feat, and I know it's edge cases all the way down, and that's a very frustrating I'm sure people think you can just boop, boop, boop, add Vite to it.

Wes Bos

But, like, what what are your opinions on that or your responses to that? If you're not watching, I just sat up straighter.

Guest 3

The yeah. So we started building TurboPack.

Guest 3

I can't remember which year. It's been some time. The reason we started building it, is, like, for context, like, we were we were using Webpack before, like most other frameworks at the time. And, basically, Webpack, it has some limitations. The main thing that the the main limitation it has is that it's, single threaded. So it runs in, it's mostly JavaScript, not TypeScript.

Guest 3

It has a very extensive plug in API, but really the biggest, like, problem that we ran into as applications kept scaling. So, like, the, like, think about it this way. Like, if, like, Wes or Scott starts building a a new application today, they're probably, like, pulling in, like, 10 x more code than they were ten years ago when we started Next.js.

Guest 3

Yeah. So over time, like, basically, you you start hitting this, this problem of, like, there's so much code to compile. Now everything gets slower and slower. There were some takes on how to solve that problem. And then, like, Next itself, also made this, like, this problem, like, two x worse by having two compilers. So you're not running one Webpack. You're running two Webpacks.

Guest 3

Like, one Webpack for the browser, like, your browser code, and one Webpack for your server code. So for, like, the server side rendering and all of that. Like, we had to do that because, like, otherwise, you can't you can't run browser code that's, like, bundled for the browser on the server. Like, your, like, typeof window would be replaced or things like that. You couldn't do as many optimizations, like, all of that.

Guest 3

So you were basically, like, what Next.js was doing was, like, it was, like, trying to orchestrate multiple instances of Webpack and trying to make sure that they compile at the same time. They finish at the same time. They run in parallel as much as possible.

Guest 3

But in practice, like, you can't really do that because, like, one might depend on the other.

Guest 3

And then when you have, React Server Components, now you have this, like, it's a scheduling problem, basically. Like, you have 'use client', and 'use client' could import a 'use server', like a server action.

Guest 3

And this, like the first file that you find is a, server component. So, like, that's server code, so that's running in the server compiler. But the moment it, finishes compiling, it has found all the like, it collects all the 'use client', and then it injects it into this, like, second Webpack instance. And it's like, hey. You know what? Now we're going to compile that.

Guest 3

And then when that second instance runs, and it finds all the 'use server', it now has to run, the the first compiler that we, like, triggered again to then, like, compile all this new code that we found.

Guest 3

And, basically, like, that means that you're going from, like, server, client, server, client, and and it can effectively recursively go through more than that, but we we blocked you from doing that. In practice, you should be able to do that, though. Like, you should be able to import more client components that have more server actions and and, like, and so forth.
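The ping-pong being described can be modeled as a tiny scheduler: each compiler's finished output feeds the other compiler's input queue until neither discovers anything new. This is a toy model for illustration only — the file names and the `boundary` field are invented, and this is not Next.js code:

```javascript
// Toy model of the Webpack-era orchestration: the server compiler
// finds "use client" files and hands them to the client compiler,
// which finds "use server" files and hands them back, and so on.
function orchestrate(modules) {
  const queues = { server: ["app/page.js"], client: [] };
  const seen = new Set();
  let rounds = 0;
  while (queues.server.length || queues.client.length) {
    rounds += 1;
    for (const env of ["server", "client"]) {
      const batch = queues[env].splice(0); // drain this compiler's queue
      for (const id of batch) {
        if (seen.has(id)) continue;
        seen.add(id);
        const mod = modules[id] || { boundary: [] };
        // a "use client" import found on the server (or "use server"
        // found on the client) crosses into the other compiler's queue
        const other = env === "server" ? "client" : "server";
        queues[other].push(...mod.boundary);
      }
    }
  }
  return { compiled: [...seen], rounds };
}

const graph = {
  "app/page.js": { boundary: ["components/button.js"] },     // has 'use client' import
  "components/button.js": { boundary: ["actions/save.js"] }, // has 'use server' import
  "actions/save.js": { boundary: [] },
};
console.log(orchestrate(graph).compiled.length); // 3
```

The point of the model: each boundary crossing forces another round of the loop, so the compilers serialize on each other instead of running truly in parallel.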

Guest 3

And this might be, like this the other thing that we want is to have, server components that you can, like, import and render from, like, 'use client', for example, in the future.

Guest 3

And, like, this would basically make that, like, a a compilation problem. Right? So, like, we had this kind of, like, slowness problem, and it it, like, got worse, like, as we added more, like, compilation, work to it and, like, this this orchestration, basically. So, So, like, that was the like, really quick, that was, like, the problem that we had besides, like, the application getting a lot larger and, like, that also being a problem. So Mhmm. You said really quick, and then you gave a whole explanation

Guest 1

of, of the whole server component architecture.

Guest 3

I know. Yeah. Yeah. So so that's, like, the problem or or, like, where we came from. Right? I just want to clear that up because, like, it might sound very simple to say, like, oh, yeah. Just, like, drop the bundler and do something else. Yeah. So at that point, like, we were like, okay. Let's have a look at, like, where things are at in the overall ecosystem. Like, what is everyone else doing? And what we found is, like, effectively, every other framework was doing the same thing that we were doing, which is, regardless of what bundler you were using, even if you were using more modern ones, like, like, Vite or, Rollup or, like, anything else Mhmm. Effectively, they were all like, if you're building a a a framework that has multiple layers, so, like, server and client or things like that, they were all doing this orchestration thing. Like, they were all Yeah.

Guest 3

Compiling like like, you had two Vite instances instead of two Webpack instances or things like that. I have my personal

Wes Bos

Waku website, which is React Server Components, I think it has four Vite instances.

Guest 3

Yeah. It works in the same like, like, everyone did the same thing because, like, that makes sense. Right? Like, if you, like, if you're not building the bundler yourself, like, you you hit this thing where you're like, oh, yeah. I need to run multiple of these because, like, I have multiple output targets or things like that.

Guest 3

When I explained this, like, I was only talking about server and browser, but you could also think about, like, an edge runtime or, like, all these other, like, output targets. Right? Bun maybe or things like that.

Guest 3

You would have to, like, like, basically, you get to this problem where you're running, like, many instances. So yeah. So we looked at it at the time, which, is is multiple years ago now before there was, like, any like, the the only other, I I will give credit to Parcel. Parcel had this API where they could do server, client, server, client. Like, that that was a thing that they already added. You have a Webpack application. Like, your your, like, existing Next.js application is, like, completely geared towards, like, you have all the Webpack specific APIs. So this means, like, __dirname works in a certain way, new URL works in a certain way. Yeah. Like, import meta works in a certain way, like, all of these things. All the resolvers of all the files work the same way. Yeah. You've probably hit this thing, like, in the past where, like, like, Vite, for example, like, they introduced, we're gonna use ES modules for everything.

Guest 3

So now when you're migrating, like, a, like, a, app that was built using Webpack and you're migrating to Vite, like, you would have the same problem. You had to rewrite some of your code or things like that. Yeah. There are more reasons for this, but, like, I feel like the the problem that we had was, like, there's multiple bundlers. There's, like, compat with the, like, the previous Next.js apps that existed, like and we had millions at the time already. Right? So Mhmm.

Guest 3

This isn't, like, a small scale problem. Like, we like, if we give you a lot of breaking changes, like, now, like, millions of people have to do work to actually get to the latest version or, like, to use these, like, improvements.

Guest 3

There were, like, no solutions out there at the time.

Guest 3

So, effectively, it it was a question of, like, do we take existing bundlers and try to cram these, like, new features into them to make it work in this way? And then, like, work with these, like, like, other bundlers or things like that? Or do we, start from like, take everything that we've learned from all these, like, the other bundlers? Because, like, a lot of the, like, ecosystem is just, like, built on, like, a lot of this, like, knowledge of, like, mistakes that were made in Webpack, for example. Right? Or Yeah. Or things like that.

Guest 3

So can we take, like, all of these learnings from building all these different platforms and then, like, a lot of people that that came into Vercel, like, people that came from Svelte and, and all of that? Like, take all of this knowledge and then build something new that, takes, like, learnings from from everything that came before, similar to how most new, open source projects are built.

Guest 3

So that's, like, where we started, and that's how we started building, TurboPack.

Guest 3

And then, we had this like, pretty quickly, we had, like, a a fully functional bundler.

Guest 3

You take inputs to create outputs. It runs transforms, like, all of this.

Guest 3

That was actually not the hard part. Like, building a a a, like, Bundler, like, we we know how to do this and, yeah, we we we hired all the right people to work on it, and and we we had that pretty quickly.

Guest 3

So a, like, create react app type app was, was, like, we we got to that point pretty quickly.

Guest 3

The problem, or, like, why it took, like, quite some time to to get, like, fully to, like, 100% of Next.js tests passing, is that, Next.js has a lot of tests. Like like, I think, like, 11 to 13,000 tests if you take, like, both of those. Nobody knows about those tests. Yeah.

Guest 3

So, like, this test suite is super large, but it tests all edge cases. That includes things that bundlers would usually be testing. So it includes HMR. It includes, Fast Refresh. It includes, like, all of these edge cases, every bundler feature.

Guest 3

Because, when, like, when I was building Next.js, like, very early on, we were just, like, every single edge case that we ran into, like, instead of, just, like, upstreaming every, like, test that we have for that, we also added a test just in general to make sure that it keeps working in Next.js because, like, we were doing this orchestration thing, like I explained. Mhmm. Yeah. And we you would hit issues with the orchestration itself, which means that it's not a bundler issue. It's actually a Next.js issue.

Guest 3

So we already had this, like, very extensive test suite of, like, how a bundler should behave.

Guest 3

We we basically have to go through and, like, fix every single edge case so that it behaves in the same way or super close to what it was doing before Mhmm. Including a lot of the behaviors that Webpack has.

Guest 3

So an example here is there's a comment called webpackIgnore: true that you can add into an import. I'm not sure if you've ever seen it. That makes it, like, ignore the import when, when bundling.

Guest 3

A lot of Npm packages have that, for example.

Guest 3

I Yeah. Never seen it before myself, but as we started building this, like, obviously, we ran into it because, like, Turbopack would start bundling it, for example. Mhmm. I found out, like, other frameworks also or, like, other bundlers also started bundling it. But then in Webpack, it would, like, just ignore it and, like, there would be no compiler error whatsoever.
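The magic comment being discussed looks like this in practice — a minimal sketch, where the `"./legacy/widget.js"` path is made up for illustration:

```javascript
// Webpack's `webpackIgnore` magic comment: the bundler sees the
// inline comment and leaves this dynamic import alone at build time,
// so nothing from ./legacy/ is pulled into the bundle and the path
// resolves at runtime instead. Without the comment, the bundler
// would try to statically include the target.
async function loadWidget() {
  return import(/* webpackIgnore: true */ "./legacy/widget.js");
}
```

A bundler that doesn't recognize the comment silently bundles (or fails on) the import — which is the compatibility gap described above.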

Guest 3

So we have, like, interop. Like, we have support for, like, a lot of these, like, Webpack specific, APIs as well.

Guest 3

And we make sure that, like, for the most part, with your application, like, you don't have to rewrite, like, a lot of, things to to get to this newer version.

Guest 3

So that covers this, like, millions of people case.

Guest 3

At the same time, what we've seen, like, while we were building this, like like, other, like, bundlers, they they kept innovating as well. There was Rspack that, tries to, like, implement, the whole, like, Webpack API, interface Yeah.

Guest 3

While giving you, this, like, parallelization and, like, better, like, like, slightly better bundling and, like, parallelization.

Guest 3

And, at the same time, also, you saw the Vite team build Rolldown, which is really cool as well, which has, like, a lot of the, like, it takes a lot of the same learnings that we had for for Turbopack. So, like, in practice, like, the the way I feel about it today is, like, that all of these bundlers have started to sort of, like, converge onto a similar type of feature set, and the way they work is actually very similar.

Guest 3

The only difference is, like, trade offs that you make around, like, caching, like, incremental compilation, like, things like that. So something I didn't talk about for Turbopack specifically is, like, we went all in on this, like, incremental compilation architecture.

Guest 3

So similar to, like, Salsa in Rust or or things like that, which is, like, you can annotate functions that are cached automatically and, parallelized automatically. And this was especially important for us because, like, we were also seeing besides just, like, the new apps are 10 x larger, the existing apps are 10 x larger as well.

Guest 3

And we want this incremental compilation. Like, basically, you want everything to be incremental compilation. So this means if you're in a build, like, the first build, like, the very first time might be slow or slower, than, than, like, any other bundler, for example. But after that, we have every single thing that it did cached.

Guest 3

And if you make another build, it's a lot faster. So it's, for example, like, if you have an application that's, like, super large, like, like, 100,000 plus modules or things like that, and it it might take, like, a minute to build or something like that, the incremental, compilation would be a few seconds to get to, like, this, the the, like, the next build after that, basically.

Guest 3

And it would be, like, every build after that is just faster than the very first one, and you, like, rarely hit the just, like, I'm starting from scratch case.
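The core idea — annotate functions so their results are cached and only recomputed when inputs change — can be shown in a toy sketch. Turbopack's real engine is in Rust (Salsa-style); everything here, including the `cached` helper and the stand-in `compileModule`, is invented for illustration:

```javascript
// Toy memoization wrapper: a "cached" function only does real work
// the first time it sees a given input, which is the essence of
// incremental compilation — a rebuild re-runs only what changed.
function cached(fn) {
  const memo = new Map();
  return (input) => {
    const key = JSON.stringify(input);
    if (!memo.has(key)) memo.set(key, fn(input));
    return memo.get(key);
  };
}

let compilations = 0;
const compileModule = cached((source) => {
  compilations += 1;            // count actual compilation work
  return source.toUpperCase();  // stand-in for parse/transform/bundle
});

compileModule("export default 1");
compileModule("export default 1"); // cache hit: no recompilation
console.log(compilations); // 1
```

In the real system the cache is persisted to disk and keyed on the whole dependency graph, which is why the second build and the next day's dev boot can skip most of the work.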

Guest 3

Yeah. And, similarly, we want that to be the same for development. Right? So, like, if you boot development, you, make your changes, you do, like, hundreds of HMRs, you quit the server, and then the next day you boot it up again, it should be as fast as, it was, like getting an HMR, for example, like a a fast refresh.

Wes Bos

That always drives me nuts when people post, like, oh, it booted up in, like, three milliseconds, and the first load of my Next app took, like, twenty seconds or something. Like like, that always drives me nuts when people don't understand how how that works.

Wes Bos

Like like, my own personal Waku website takes, like, thirty seconds before I can actually they even Yeah. Console log the URL. And then after that, it's fine. You know? But, that that always drives me nuts. I'm sure it drives you nuts as well when when people post stuff like that.

Guest 3

If you compare frameworks, if you're looking at the you boot up and you see a, URL. So for example, you you boot up, like, any other, framework, and and it says Yeah. Like, localhost 3000. Right? And then it says, like, ready in, like, sometime, for example, like, like, twelve milliseconds or fifty milliseconds or something like that.

Guest 3

In Next.js, it would say 500 milliseconds or, like, one second or something like that.

Guest 3

What we found is, or, like, what I found, like, looking into this because I was, like, confused about, like, those numbers. Like, no extra work happened. It's just just, like, for us, it was, like, a timing issue. Like, we would load the Next.js config, for example, which which might be including, like, all your, like like, external plug ins, like, Sanity or or things like that. It would be requiring all of those before it actually said, like, hey. The server is ready.

Guest 3

It took this much milliseconds.

Guest 3

So recently, we, we optimized that by first logging out all of this, like, the the server is ready type, like, logging, and then, like, requiring the config and things like that in in parallel, basically.

Guest 3

And there are, like, some, like, interesting optimizations of, yeah, how the, like, perceived performance is is there as well.
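The reordering described above is simple to sketch: print the "ready" banner before kicking off the slow work instead of after it. This is a toy model — `loadConfig` is a made-up stand-in for requiring `next.config.js` and its plugins, not actual Next.js code:

```javascript
// Stand-in for a slow synchronous-ish startup cost (config + plugins).
function loadConfig() {
  return new Promise((resolve) => setTimeout(() => resolve({ ok: true }), 100));
}

// Old ordering: the banner waits for the config, so "ready in"
// reports the config load time too (~100ms here).
async function bootOld(t0 = Date.now()) {
  await loadConfig();
  console.log(`ready in ${Date.now() - t0}ms`);
}

// New ordering: start the load, print the banner immediately,
// then await the config. Same total work, better perceived startup.
async function bootNew(t0 = Date.now()) {
  const pending = loadConfig(); // kicked off, not awaited yet
  console.log(`ready in ${Date.now() - t0}ms`);
  await pending;
}

bootNew();
```

The trade-off is honesty of the number: the server prints "ready" before it can actually serve a fully configured request, which is exactly the perceived-performance game being joked about next.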

Scott Tolinski

And then Have you considered just faking the numbers?

Wes Bos

Just, never. No. I would have guessed it. Dot j s? I think that was.

Wes Bos

Yeah.

Wes Bos

Yeah. That's great. By the way, I just looked it up, and, that Webpack magic comment was me.

Wes Bos

I opened the issue, on on that. I was running into a weird issue where I have, like, a dynamic import. And in Webpack, when you have dynamic imports, it Yeah. It gets everything. Right? And I have, like, a a video in the same folder, and it was, like, loading the entire video and and then crashing.
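The "it gets everything" behavior comes from how Webpack handles dynamic imports with a variable in the path. A sketch of the pattern, with hypothetical file names:

```javascript
// When the import path contains a variable, the bundler can't know
// at build time which file will be requested. Webpack's answer is a
// "context module": it bundles every file the pattern could match in
// that folder — which is how a video sitting next to the code can
// end up pulled into the build.
async function loadSlide(name) {
  // Matches anything under ./slides/ — notes.js, demo.js, and, with
  // a loose enough pattern, big-video.mp4 as well.
  return import(`./slides/${name}`);
}
```

Constraining the template (e.g. hard-coding an extension) narrows the context, and the `webpackIgnore` comment mentioned earlier opts the import out of bundling entirely.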

Guest 3

That was a a weird one. I'm sure you run into all people doing stupid stuff. Yeah. And there were on top of that, it was just, like, the test suite had a lot of these cases already. So, like, it might be that, like, Wes from, like, eight years ago, had already reported this issue, and, like, we had a test for it. Right? Like, there was a lot of cases like that as well.

Guest 3

Things that we would have never caught,

Guest 1

without this test suite. People using it in weird ways. There is a bit of pros and cons of, like, you know, sort of, like, maintaining a framework for so long, is that, like, the the battle tested, sort of features that we have, they're in a way, they're, like, the things that are almost, like, slowing us down. Because you could start, like, a new framework today. Right? And then you kinda ship it, and it would look like it works, and then you kinda don't have, like, the the burden of, like, sort of maintaining too much.

Guest 1

Mhmm. But eventually, at scale, like, what we what we found over the years is just that there there are sort of many cases that kinda tend, to pile up there.

Guest 1

And, I feel like the the big TLDR of, like, what we have gone for, to recap, is just, like, like, what's what's really important for our team is that we just keep, you know, shipping as fast as possible.

Guest 1

And so, you know, it could be like a symptom. Like, it could be I don't know if you heard about the term of, like, like, sort of, like, not invented here, kind of problem where, if you go at Google or Meta, you know, they they have, like, every version of, like, what you use externally, but, like, built purpose made for their own system because it's just actually if you have the resources, it is just always the fastest solution.

Guest 1

And and kinda like what we were saying earlier, right, is like Next.js on its own is somewhat somewhat simple. Right? You could take our test suite and try to recreate some some parts of it, and and and you could get, you know, somewhat pretty pretty far, on on on your own. Right? And like Tim said, we spent, like, just an insane amount of time just trying to make sure it wasn't like a breaking change.

Guest 1

Now maybe the the juicy part I I can share here is, like, you know, like, we're not married to one tool or another or anything. Like, what matters more is the experience that we can provide to our users in the same way. Like, for example, we used to ship with, like, a lint command, that used to kinda just, like, bundle ESLint because that was, like, the, you know, the the industry standard at the time. Yep.

Guest 1

We've kind of reconsidered and, like, now we're we're just not doing anything, but we might, you know, consider a world where maybe we should go and run, like, Biome or something like this. Yeah. And, in the same vein, transparently, like, every year, we ask ourselves, you know, like, do we do we wanna sort of, like, try exploring Vite? Do we wanna sort of continue on Turbopack? Or, you know, we're working with the Rspack folks. Like, what's the what's the best end result, right, for our users? And, the big blocker was really just the amount of work that it would take to kinda make this, like, a non breaking change. There was also just the fact that, like, there's there's nothing, I think, that, like, pushes a a a bundler as much as, like, Next.js does today because we we talked about, like, you know, server components, server actions, cache components.

Guest 1

Like you said, there's, like, Waku doing some stuff today. But, like, I don't think there's quite something that has to support quite, like, you know, as as many sort of, like, larger, websites out there.

Guest 1

Yeah. Yep. And so if we wanted to find out, I think we'd kinda need to spend, like, a lot of time exploring this ourselves.

Guest 1

And, and I think the very latest here is just, like, maybe in the age of AI.

Guest 1

Maybe this is, like, very doable.

Guest 1

Right? Maybe we can, like, finally have, like, an answer to, like, would Next work better on Vite.

Guest 1

I think I think it's, definitely still, like, an open question

Wes Bos

Oh, cool. On our end. Well, that that's good. That's good. And is Turbopack part of, like, the Next repo now? Like, are are other projects using Turbopack, or is this just just a Next thing now? Right now, it's part of the Next.js repository.

Guest 3

It it is a fully, standalone tool that you can run.

Guest 3

The only thing is that, we we don't have added like, we haven't added a public API for this yet. The reason for that is that we wanted to really, like, prove out that, like, this bundler has a right to exist, that it can, like, run on these, like, Next.js applications, and, that it doesn't have, like, any, like, bugs on bundling, like, npm packages and things like that. Mhmm.

Guest 3

We're past that point at this point, and we do have, like, a CLI for it right now. The the only thing is, like, figuring out, like, how to like, what the public API looks like.

Guest 3

And then, especially, like, one of the things that's that's missing is, like, we can run, like, Webpack loaders, for example, and things like that. Like, we don't have a plug in API specifically for Turbopack right now. And that's, like, what we're currently, exploring as well, like, what that will look like.

Guest 3

Then there shouldn't be a reason that you can't run, Turbopack.

Guest 3

But right now, the like, if you wanted to use it, for example, like, there there is, like, another, bundler called, utoo, like, u t o o. So utoo, is, like, another, like, framework slash bundler that's built on top of the Turbopack core.

Guest 3

And they, they're like, using it very successfully. They're contributing back to, Turbopack as well, because they're using it in in this way.

Guest 3

And they're building, like, pretty, like, large applications on this.

Guest 3

And it's, yeah. So, like, there's no reason that you shouldn't, like, that you wouldn't be able to use this, as a as a standalone bundler, like, in the same way that you would use esbuild or Vite or anything like that.

Guest 3

Mhmm. Really, the, like, it's on us to add, like, a public API for this.

Guest 3

And, really, the only reason that we haven't yet is, like, we want to be really intentional about it. Like, we want to support that thing when we put it out, and, and give the right level of support there.

Wes Bos

Cool.

Wes Bos

You should make the API just the Vite API.

Wes Bos

Do you have, like, a view on that?

Guest 3

We've been talking about that as well because, like, Vite has a a pretty decent API.

Guest 3

The only, like, the only thing that we we've been, like, thinking about there is, like, there there are some, like, serialization, concerns as well, regarding that. But it should be doable. Like, it's it's not like, the the individual pieces are very similar.

Guest 3

There's also the, like, unplugin, like, API.

Guest 3

That that might be good as well because there there's a lot of, like, plugins that are already written using using unplugin, which is, like it basically bridges.

Guest 3

You can basically think of unplugin as, like, OpenNext for, like, like, Webpack loaders and plug ins, or, like, like, a similar, API.

Guest 3

Because it bridges like this, like,

Wes Bos

every adapter to, like, one single unified API. Yeah. That would that'd be neat. Do you think we'll ever see that, like, you know, like like, web APIs and whatever have been fairly standardized across everything? Do you think we'll ever see that for, like, bundler tools? It kinda depends because, like, the the bundlers themselves

Guest 3

are, quite different. But as I just told you, like, they're a lot more aligned at this point in in the the way that they work. I think for, like, transforms, for example, like, transforms are are really, like, basic, like, source input, some metadata, and then, like, return an output.

Guest 3

Like Mhmm. Unplugin, like, gives you, like, a a standardized API for that. Like, having, like, one standardized API across, like, all, bundlers would be possible there. The only question is, like, getting it to like, getting everyone to agree on, like, this is the one thing that you should do. You know, standards.
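The "transforms are really basic" point can be made concrete: a bundler-agnostic transform is just source plus metadata in, new source out. The shape below is illustrative — it is not unplugin's or any specific bundler's actual plugin interface:

```javascript
// A transform in its most portable form: (source, metadata) -> source.
// Because the contract is this small, one transform definition can be
// adapted to each bundler's native plugin or loader shape.
function stripDebug(source, meta) {
  // Leave third-party code alone; strip console.debug from app code.
  if (meta.id.includes("node_modules")) return source;
  return source.replace(/console\.debug\([^)]*\);?\s*/g, "");
}

const out = stripDebug(
  'const x = 1; console.debug("x", x); export { x };',
  { id: "src/app.js" }
);
console.log(out); // 'const x = 1; export { x };'
```

Chunking and module-graph plugins are the hard part to standardize, as discussed next — a pure transform like this is the easy, already-converging case.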

Guest 1

And Never heard about web components?

Wes Bos

Yeah. Yeah. Yeah. Exactly.

Wes Bos

Good point.

Guest 3

So it's really yeah. It's it's that. And then plug ins is slightly different because, like, the the way that the bundlers work under the hood is, like, very different. Like like, Turbopack is nothing like Webpack under the hood. Like, it's, it works completely different. There's different phases. There's, like, the the way that it works is is just completely different. Mhmm. So, like, for plug ins, like, for example, you could write a plug in in Webpack, today that, like, re like, reorders chunks or something like that. Like, it gets, like, access to all your modules, and you can put the modules in different chunks or create, like, extra chunks or things like that.

Guest 3

Like, having a like, an inter like, we want to have, like, a more intentional API for this because otherwise, like, it's very easy to introduce, like, very large slowdowns or, like like, single point of failure bottlenecks, basically, where where things get a lot slower.

Guest 3

And then, like, naturally, because, like, having a plug in API, it's it allows everyone to like, add these plug ins, like, it would very quickly become, like, everything gets a lot slower for every app or things like that.

Guest 3

So we want to be very intentional about, like, what are the APIs, how are they surfaced. Like, similar to, like, we didn't talk about this, but adapters has a specific log line in the the build that will show you how long the adapter took instead of, like, that time being attributed to to Next.js internals or things like that.

Wes Bos

You got me diving into this utoo.

Guest 3

It's it's from Alipay Yes. By the way. Thank you. Yeah. I I forgot, which company built it. Yeah. That's really interesting. That's

Wes Bos

Chinese JavaScript is, like, a whole different world.

Wes Bos

You know?

Guest 1

Yeah. It's crazy. They have some pretty cool tools. This is kinda why we, had started working with them, like, a year or two ago, and it was like this we we had the same realization. Right? They they they just kinda introduced us to how, like, how ByteDance composes their own apps internally. And in in some ways, they have the they they might even have, like, the the largest set of, like, Next.js apps, that, like, the world will never see.

Guest 1

That that I don't I don't know how many engineers they have over there. They they kinda showed us, like, the other sort of frameworks, and it turns out there was, like, another, framework that was also, like, very much, like, very similar to to Next.js internally. And those companies are so big that they have, like, they can have, like, four web infrastructure teams.

Guest 1

It's always really fascinating, to see.

Guest 1

I wish I wish we we shared more there. Yeah. It's

Wes Bos

one of the best talks I've I've had, or I watched was Zack Jackson. He's one of the devs on Rspack, and he works for ByteDance. We had him on on the podcast as well. And, like, man, that was fascinating to just to get a peek into how some of these big companies work and and their infrastructure.

Wes Bos

And I'm sure you guys hear it from them as well, is, like, if you can save them, like, 1%, that's sometimes just, like, millions of dollars a year in productivity and and compute and server time, all that stuff. For sure.

Guest 3

There's also, the something that we, didn't didn't touch on is, like, this this whole strategy of, like, building Turbopack and, giving it out to, to everyone using Next.js, as as the default, and and, like, very compatible has actually paid off. Because, like, on Next.js 16, there's, like, 92% of the, like, development sessions are actually using Turbopack. Awesome. It's like the the rest is, is like Webpack customization, basically. Like, people that have, like, customized their Webpack config, and they they might still be able to use it today. Because, like, if you're only using Webpack, loaders, you can already add them to Turbopack, and they will just run for like, I I wanna say, like, every loader runs, but, like, like, most of these, like, simple input output transforms, they, they they work, for sure. Yeah. Beautiful.

Wes Bos

Cool. Anything else we didn't cover before we I know we're running up on an hour here, but I wanna make sure if we covered everything you wanted to touch on. No. No. I think we're good. So now is the part of the show where we get into sick picks.

Scott Tolinski

Tim, I know you've been on the show before, so you you know what a sick pick is. Jimmy, it's really just anything you're enjoying in life right now.

Scott Tolinski

Could be, like, a podcast or a YouTube channel or any type of product, a phone charger, a a hula hoop, whatever you want. Yeah.

Guest 1

I should have I should have asked for sponsors before.

Guest 1

I, I really like, I don't know if you guys are, like, much into, like, coffee beans or not. I feel like a lot of, like, engineers are Oh, yeah. For sure.

Guest 1

But there's this this service. I think it's based in California. It's called, like, Hydrangea, and their coffee beans are amazing. If you like, you know, sort of, like, really, really light, almost like experimental kind of coffee.

Guest 1

Yeah. Yeah. Very fruity.

Guest 1

Highly recommended.

Wes Bos

Oh, Hydrangea, it's like the Hydrangea Coffee Roasters. They're like the flower.

Guest 1

Yeah. Cool. This looks great. Beautiful back there.

Guest 3

Tim, what do you got for us? I still remember this section from last time. Last time, I I called out Apple TV, because, of sci fi mostly.

Guest 3

Oh, yeah. I will I will call it out again. If you're into sci fi, Apple TV is such a like, so good. But that's not the one I want to call out. So, I'll I'll do two. So the second one is the Acquired podcast.

Guest 3

Yeah. I I think y'all at, Sentry are are also sponsoring it, because I hear you hear the the ad, every time.

Guest 3

If you haven't heard about it, it's pretty popular. So you might have heard about it if you're into podcasts, but, they they do, like, four hour, five hour to, like, even longer, podcasts. They talk through the history of companies and how they started, where they're at now, like, the whole, thing. If you're into any of the history of, like, Microsoft or Google or, like, any other company, it's it's definitely a a nice listen. And I usually, like, put it on when doing chores or whatever, and then, yeah, time just flies by.

Guest 3

The the one that I found super interesting is the interview with, Steve Ballmer.

Guest 3

Because, like, Steve Ballmer has such, like, crazy energy. Like, I've never listened to anyone who has that much energy for, like, two, three hours straight, just, like, talking about everything.

Scott Tolinski

Yeah. Acquired is great. Yeah. This looks awesome. I really liked the Hermes episode and, what was another one? The Costco episode. And the Porsche one was really good, Wes, if you haven't listened to that, because Doug DeMuro is a guest on it. So Oh, cool. I'm definitely gonna check that out. Sick. Yeah. Really good. Awesome.

Wes Bos

Shameless plugs. Anything you'd like to plug to the audience before we head out?

Guest 1

I kinda would like for the audience to read the blog post that we're gonna publish about the adapters.

Guest 1

Yep. Because I feel like not a lot of people read those kinds of, like, blog posts, and we put a lot of effort into it. But also, I think it's really interesting, like, kinda how this came together. Right? Like, because Mhmm. Because it does sort of, like, you know, establish a little timeline about how we work. And, like, more importantly, I think, like, kinda the engagements that we're, you know, having with the group. Like, what are the exact roles that we wanna have, like, around the adapters, and, like, just, you know, sort of, like, how we do kinda care, like, actually care about making Next.js work well everywhere.

Guest 1

And, yeah, like, I think it's a nice blog post. Yeah. Definitely.

Wes Bos

We'll link it up in the show notes. Yeah. Awesome.

Guest 3

For me, it's Next.js 16.2, jokingly called Snow Leopard by a lot of people.

Guest 3

So it's this new release that we did two weeks ago. It includes adapters, but it includes so many other things. Like, it includes up to 60% faster rendering, and, like, the faster dev startup that we talked about.

Guest 3

The server actions are now logged when you call them in development, to make it easier to see what's happening there.

Guest 3

Like, we added so many things into this that we had to write three blog posts to cover it.

Guest 3

So,

Wes Bos

hydration diff indicator.

Guest 3

Yes. So we already have a hydration diff, but now it includes the text of, like, what is server and what is client. But it was good to call it out again because, like, apparently, not everyone had seen it before.

Guest 3

It's basically, like, if you have a hydration error, it will show you exactly what was the cause of the hydration error, or, like, at least the place where it happens in your code. Yeah.
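[Editor's note: to make the hydration-diff discussion concrete, here's a minimal sketch of why a hydration mismatch happens. The `renderTimestamp` function is a hypothetical stand-in, not Next.js internals: the server renders HTML from one value, the client re-renders from another, and a hydration diff indicator highlights exactly that difference.]

```typescript
// Hypothetical sketch of the root cause of a hydration error:
// the same component produces different output on server and client.
function renderTimestamp(now: () => number): string {
  return `<p>Loaded at ${now()}</p>`;
}

// Server render happens at request time with one clock value...
const serverHtml = renderTimestamp(() => 1700000000000);

// ...client hydration happens later, so a Date.now()-style value differs.
const clientHtml = renderTimestamp(() => 1700000005000);

// The framework compares the two trees; a hydration diff indicator
// would point at this exact text difference in your code.
if (serverHtml !== clientHtml) {
  console.log(`hydration mismatch: "${serverHtml}" vs "${clientHtml}"`);
}
```

The usual fix is to move non-deterministic values (clocks, random IDs, locale-dependent formatting) out of the initial render, so server and client produce identical markup.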

Guest 3

And then what else? Oh, yeah. So we're doing a lot of research into what frameworks look like with AI.

Guest 3

As it's probably on your mind as well. Like, you're using, like, AI agents, and, like, things keep improving quite rapidly, and we built this, like, first version that was, like, a Next.js MCP.

Guest 3

But now we're also looking at this, like, agent browser. If you haven't seen that yet, like, look up agent browser; it's really interesting as well. And, basically, we wrote a separate blog post called Next.js 16.2 AI improvements, and that talks about, like, a bunch of learnings that we had from, like, building frameworks for AI agents next to humans building on Next.js.

Guest 3

And it includes, like, one of the learnings is that, like, if you add an AGENTS.md that has, like, a small snippet that tells the agent where to look for the docs, it will suddenly become, like, significantly better at figuring out, like, where it has to await your searchParams, or how to, like, write the Next.js structure, or some code that it was previously adding use client to, it doesn't add use client to anymore. And we've been doing a lot of research into that,
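[Editor's note: the AGENTS.md he describes can be a very small file. The snippet below is a hypothetical sketch, not the official snippet from the Next.js team; the docs URL is the public Next.js documentation, and the conventions listed are the ones mentioned in this conversation.]

```markdown
# AGENTS.md — hypothetical example

## Docs
- Next.js docs live at https://nextjs.org/docs — check them before guessing APIs.

## Project conventions
- In recent Next.js versions, `params` and `searchParams` are async:
  `await` them in Server Components before reading properties.
- Components are Server Components by default; only add `"use client"`
  when a component actually needs state, effects, or event handlers.
```

A pointer like this gives the agent a place to look things up instead of pattern-matching from stale training data, which is the improvement he's describing.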

Wes Bos

which is super interesting. We should probably talk about it, like, another time. Yeah. But, yeah, that was my shameless plug. Yep. Sick. Awesome. Well, thank you guys both for coming on. Thank you for all your work on Next, and I'm pretty excited about this. Appreciate all your time. Thank you for having us. Peace. Yeah. Thank you so much.
