March 18th, 2026
Cloudflare’s Next.js Slop Fork
Transcript
Wes Bos
Welcome to Syntax. Today, we have Steve Faulkner on, and he is the creator of vNext. If you didn't hear about it, basically, Cloudflare took Next.js, and they created what is lovingly called a slop fork, which is they ported the entire Next.js framework to Vite. And then they posted it, and there was this big thing. And we're not here so much to talk about, like, the drama of all of this, and whether Next.js is hard to run on Cloudflare and whatnot. But I think we're more interested here in, like, how did you do this? You know? Like, how was this created? What's your process for tackling something like that? It's not like, oh, cool, you made a little app where you can drag and drop stuff around, or you made a photo booth app. You literally took an existing, I'm not gonna say spec, but, like, test suite and replicated most of the software
Guest 1
behind it. So super interesting one. I'm super happy to have him on. Welcome, Steve. Thanks so much for coming on. Yeah. Happy to be here. Like I said, I'm excited to tell the world more about this. I think it's a cool story. It's a cool story about AI and just the world we're living in right now and how rapidly it's changing. I think this is gonna be a wild year. Maybe a wild five or ten years for software development. Yeah. Buckle up. So give us a quick rundown of who you are and what you do. My name is Steve Faulkner. I am the director of engineering here for Workers. So pretty much the whole Workers org at Cloudflare. That includes other things like our agents product, containers, some of the stuff around the Wrangler CLI. I didn't list all the teams, apologies to who I missed, but it's roughly, like, 80 people. I have been here for a couple years, and, yeah, that's my role. I'm not writing code every day. That's not what I do. I've seen a lot of people say a lot of stuff about what I did and about the blog, and I think the only real correction that I want to make is a lot of people have been calling me a 100x engineer or something like that. I would use the title 100x engineering manager.
Guest 1
That's what I think would be the correct term. Yeah. Given the state of AI,
Scott Tolinski
isn't that, like, kind of where we're landing, though? Those are the people with the superpowers, the 100x
Guest 1
engineer manager. You know? And we'll jump right into that. Like, I think AI is an amplification factor. Right? If you know what you need to do, I think you can use AI to do it faster. It works in both directions. Right? I mean, I've seen it negatively amplify things when you don't know what you need to do or you have the wrong, you know, kind of direction. Right? It just helps you go, but the human still needs to set the direction. I was curious about the term,
Scott Tolinski
slop fork that has been thrown around, given the nature of AI writing it. How does that term, like, hit you? How do you feel about the term slop fork being used here? I think it's a funny term. I've kind of, like, embraced it. I mean, I've talked about,
Guest 1
slop forking other things now. Sort of after this came out, somebody was like, jokingly, oh, we should slop fork Kubernetes and rewrite it in Rust. And I was like, oh, that's an amazing idea. So I think all the terms that are coming out, like, vibe coding, like, clanker is the latest one, things like that. I find all these things funny. I don't take any offense at it at all. When I saw slop fork, I, like, almost dropped my phone. I was like, this is
Wes Bos
the greatest term ever coined, and I was like, I'm gonna start slop forking stuff just so I can say it. Oh, so, like, set it up for us. Before we get into how you did it,
Guest 1
let's just talk real quick about, like, why you did it. Like, why did you essentially fork Next.js to get it to run on Vite? Let me rewind time a little bit. You know, maybe about a year ago, we were trying to figure out how do we better support Next.js on Cloudflare. Right? Like, that was just this hot topic internally. I think there's been a lot written about this, so hopefully it's not too contentious. But, you know, Next.
Guest 1
Js has problems hosting, especially on other runtimes, other serverless providers. Right? Some of the things are very, like, bespoke to Node and to Vercel. And so, you know, generally, you can host it a lot of places, but I think when you hit the edges of the problem, right, like, you know, certain features or certain things don't work quite as well, that's where you get into trouble. And so we were looking at, like, paths forward here. And one of the paths we were talking about at the time was actually, well, what if we just wrote our own compiler for Next? Right? Like, what if we just took the Next API and did it ourselves, you know, as this API surface? Like, this is not new. The idea is almost a year old, probably even a little longer than that. We had somebody actually go try to do this for a bit and just do a POC. And the answer was, this is gonna be six months of work, five engineers, you know, replicating so much. It's just gonna take so much time. It just wasn't feasible. Right? And so at that time, we really doubled down on Open Next, and we're still very involved in the Open Next project. I keep telling people, like, if you want battle tested production code that's not three weeks old, please use Open Next. That's great. Right? Yeah. So that's kinda, like, where the idea probably started. And then, funny enough, we actually tried very briefly again. We had an intern who tried this because he thought it was a cool idea. And so I said, oh, just do pages router. Let's just see if we can get it working, you know. Very talented intern, and, you know, he couldn't get that done either. Right? So we tried twice, and then I think everything really just changed when, you know, like everybody else, December, January, suddenly these models just hit this next level. And Yeah.
I was doing a lot of manager stuff with these models. Right? Like, you know, my job here is not to write code. Right? So I'm using these models to, like, summarize meeting notes and to track Jira tickets and to pull in summaries from channels. And, you know, at Cloudflare, we have a lot of, like, internal MCPs now that we're using.
Guest 1
And so I was kinda using AI for manager brain, and I was also using it a little bit for code here and there, and I was like, you know what? These are really good now. I was like, I wonder if we could just do this with AI. Like, Ralph Wiggum was kind of a thing. I did a couple Ralph Wiggum style projects.
Guest 1
I was like, Next.js has this great test suite. What if I just throw it at that problem? Right? And that's what I did. I think it started on a Friday. It was like a Friday afternoon, Friday evening. Did a plan, spent a couple hours on that. That was the start of just going back and forth with Opus and just saying, like, hey, here's how I think this should work. Here's some things I think we should cover.
Guest 1
And I think I woke up the next morning and, like, was clicking around in the app router demo, like, the app router playground. And I was like, whoa. Like, this kinda works. Right? Like Yeah. It wasn't working perfectly, but enough that you're like, there's something here.
Wes Bos
Wow. And so, like, let's talk about that planning stage as well. Like, if someone were to tell me, alright, take Next.js and implement it in Vite, what is your process for tackling that? Do you say, like, oh, take the tests? Or do you ask it? How would you approach it? And, like, how much time do you spend building up this plan, and how much of that is that you actually understand how software engineering works?
Guest 1
Definitely, I think I was uniquely suited to this because of my background with, you know, Next.js, and so understanding the problem. And we now use Vite, by the way, we have our own Cloudflare plugin for Vite, and so we use Vite other places for other frameworks. And so I understood the shape of, like, what needed to happen. Right? Mhmm. So it was a couple hour long process for me to come up with that plan. And it was iterative. I'm working through it with OpenCode. OpenCode and Opus is, you know, the default stack for me. I definitely spent a lot of time going back and forth. I'm also a big voice to text guy. I use Superwhisper.
Guest 1
I don't think that there's, like, some magic really good prompting I did here. This is me just sort of, like, brain dumping at the computer using my voice for, you know, ten minutes.
Guest 1
It comes back and interprets what I said, kinda fills in the blanks, and then me saying, no, don't do that. Right? Don't do that. Don't do that. Like, those kinds of things. Right? Like, at some point, it even suggested, I think, sort of, like, ripping out React or something like that.
Guest 3
And I was like, no. No. I don't wanna do that. That's not what I hope for with this project. Right? So
Scott Tolinski
So when you're doing this type of planning work, are you creating just a bunch of,
Guest 1
markdown files? Are there any processes or skills that you're finding work best for you in this? All markdown files. I think this is the best tool we have at the moment. I'll be honest, I think it's kind of a local maximum. Like, my guess is that in, you know, two to three years, you're not gonna be writing a bunch of markdown files to a repo. Like, the LLMs seem to be very good at it today, but as we learn new techniques, I think we'll figure out something that's a little more, like, LLM native. Right? Mhmm. It just seems strange to me that this is, like, the best we've got. Right? But right now, it actually works pretty well. We had, like, a plan markdown file, and then I had another one around testing too. Probably one of the parts I spent the most time on was, like, guiding it on which tests to pull in from Next. The Next test suite is huge. I mean, it's 8,000 something tests.
Guest 1
And a lot of it is testing either, like, what Next itself emits, or testing things that maybe just weren't day one features I wanted. And so I did have to spend some time guiding it. One of the maybe, like, unlocks I had was instead of trying to get the Next test suite running, I actually just said, just port the tests. Right? Like, you can just, like, literally go test by test and figure out which ones. And then it used, like, a tracking document to track each one of those tests along the way. And by porting the tests, you mean, like, moving them to Vitest, or, like, actually implementing the code behind each of these tests? Yeah. Like, moving them to Vitest. Right? And both. Right? Like, you know, moving it to, like, my own Vitest setup. So the test setup from day one was Vitest and Playwright, using those together.
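He mentions the agent kept a tracking document of ported tests. As a hedged illustration (the episode doesn't show the actual file), a markdown checklist like that can be tallied with a few lines of TypeScript; the checkbox format and test file names here are assumptions, not vNext's real tracking document:

```typescript
// Hypothetical sketch: tally a markdown tracking document where each
// ported test is a checkbox line like "- [x] app-dir/navigation.test.ts".
// The format is an assumption, not vNext's actual tracking file.
function tallyPorted(markdown: string): { done: number; todo: number } {
  let done = 0;
  let todo = 0;
  for (const line of markdown.split("\n")) {
    const t = line.trim();
    if (t.startsWith("- [x]")) done++;      // ported
    else if (t.startsWith("- [ ]")) todo++; // still to port
  }
  return { done, todo };
}
```

Something this simple is enough for the agent to report progress against an 8,000-test suite without rerunning everything.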
Wes Bos
And then you just, like, what? Let it rip overnight? Or, like, how much of it was, like, it came back after twenty minutes and you had to, like, type more into it? I actually asked OpenCode about this. So I was curious
Guest 1
because I had the same question, and I The people or the app? No. No. The app. Right. Yeah. Okay. Sorry. Not the people. I did a little internal session that's similar to this about, like, how I did this, for Cloudflare folks.
Guest 1
And one of the things I did to prepare for that is I actually told OpenCode to go look at all its session data for the last week and to analyze it and figure out what I did when. Mhmm. And it had some really interesting things. So it said my peak token usage was at 3AM, which I am not awake at 3AM. Right? So I definitely did a lot of, like, setting it up with a lot of tasks to do overnight.
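The kind of analysis described here, finding the peak-usage hour from logged session events, is a small aggregation. A hedged sketch, where the `{timestamp, tokens}` row shape is an assumption and not OpenCode's actual SQLite schema:

```typescript
// Hypothetical sketch: find the peak-usage hour from agent session events.
// The event shape is assumed; OpenCode's real schema may differ.
type UsageEvent = { timestamp: number; tokens: number }; // unix ms, token count

function peakHour(events: UsageEvent[]): number {
  const byHour = new Array(24).fill(0);
  for (const e of events) {
    // bucket token usage by hour of day (UTC)
    byHour[new Date(e.timestamp).getUTCHours()] += e.tokens;
  }
  let best = 0;
  for (let h = 1; h < 24; h++) {
    if (byHour[h] > byHour[best]) best = h;
  }
  return best; // 0-23
}
```

In practice the agent ran a query like this itself against its own SQLite store; this is just the shape of the computation.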
Guest 1
And I I wouldn't say I was, like, full Ralph Wiggum, like, with a bash loop and stuff like that. This is more just, like, giving it a document that said, okay.
Guest 1
Hammer out these 10 things and then, you know, just keep going. You know, in my experience, it's pretty good at that. Right? I mean, sometimes it gets stuck, but pretty good. And then it said my habits were very barbell shaped. So there were either really short sessions that were, like, two, three, four minutes long or really long sessions that were, like, one to two hours.
Guest 1
And so I think that's where, like if I think back to that weekend, that kinda matches my recollection of what I was doing. Either these, like, little shots: go just go fix this thing, this thing doesn't work. Or, like, go off in the deep end and explore something super deep. I got two young kids at home, so, like, I wasn't spending, like, all weekend on this. Like, I remember, you know, we were going to the playground and whatnot, and I'd, like, run to my computer, just like, let me just kick it all off for a second, and then I'll, like, go back and do kid stuff. Right? How were you tracking that usage? This is all the OpenCode data. So OpenCode has all its own session data stored. I think they use SQLite, and they store all this, like, information. And so I just told it to go look in the SQLite database and figure it out. So there was no formal process. Like you said, you just I mean, for the loop. Right? You were just pointing the agent at it. Which agent or which model were you using for that? Opus 4,
Scott Tolinski
4.6?
Guest 1
4.5. So 4.6 actually came out when I was working on the project. So about halfway through the project, I just switched it all to 4.6. 99% of the code is written by Opus 4.6 or 4.5. Near the end stage, I started doing a little more, like, reviews, where there were times I was trying to get the code reviewed a little bit better, and then I would sometimes throw Codex at it just to get, like, another opinion. Yeah. How do you find the difference there, or have you not noticed much? That's a good question. And I know people online will say, like, oh, Opus writes the code and Codex reviews the code. I did that for a while.
Guest 1
I actually have kinda backed off that. I just haven't noticed that switching the model provides that much difference. Mhmm. It's almost just as good to just have the model review itself.
Guest 1
And I also find that reviewing the review is helpful. Right? Like, sometimes I'll just put it in a review loop, or say, like, review the code, fix the problems, then review yourself again, fix the problems. Like, it'll do two, three iterations of that before it, you know, doesn't find anything. And while you're doing all this, like, what's your actual, like, OpenCode setup? Do you have plugins, skills,
Wes Bos
agents, files, MCP servers hooked up? Like, I often think about these guys that just spend all day turning the knobs on their setup. That's me. Knob turner. Yeah. I just started getting into that, and it's, like, constant knob turning. So yeah. Yes. So very minimal. So I use the,
Guest 1
desktop app mostly.
Guest 1
I've kinda gone through the full iteration where I am a desktop app kinda guy. I'm not a terminal UI kinda guy. That's just, like, my background. I use VS Code.
Guest 1
I kinda got deep into the terminal UI for a while because it was just nice and good, and then the desktop app got better, and so I switched back to the desktop app. So almost all of this was desktop app.
Guest 1
As far as MCP servers and, you know, special skills and stuff like that, nope. Don't really use any of that. Like I said, there are internal MCPs, but even those I didn't really use for that. I had those turned off. No special agents.
Guest 1
We do now have a specific vNext sort of, like, agent that we use for some of our reviews on the repo. So we have found that getting an agent with, like, a bunch of context is helpful.
Guest 1
The agents.md file for this repo was generated by the agent itself when we started. And, you know, along the way, I would tell it, hey, please go update agents.md, make sure it's got everything it needs. I will add, there's two MCP servers that I have had a little bit of luck with, better results than just not using them. And that is Context7. I think it's by the Upstash folks. It really has a bunch of stuff indexed around open source libraries and things like that. And then also Exa, Exa search. Yep.
Guest 1
I have found that provides just a sort of general better search experience for the LLM. So I pretty much have both of those on when I work on this project. I have those on now by default. It's not like this massive bump, but I would say, you know, my total vibe take is it's 20% better just being able to have those two. And while the LLM is testing this stuff, is that all just, like, Vitest stuff? Or at any point, did it start opening up the browser and clicking links and doing things like that? Definitely. So I can talk about that. I talked about this in the blog post, but, coincidentally, agent browser by Vercel, which is sort of a wrapper around Playwright, but it's like a nice CLI tool that lets you do everything. Agent browser, very good. I used that a lot. So, at one point, I think I installed that skill. Skills, I should mention that. I don't really have a lot of skills either, but the agent browser skill is very good. Agent browser itself, very good. And I would say, here's the production app router playground, here's something I'm trying to do, and I would just give it instructions to go replicate it and figure out what's going on. Definitely did a lot of debugging with that.
Guest 1
And I I was honestly a little blown away by some of these things. Like, I remember one time I said, you know, the the scroll is janky. Right? Which is like Mhmm. Yeah.
Guest 1
Is the LLM gonna figure that out? And it did. It was like, oh, I see it now. Right? And then, you know, I was a little blown away. I was like, oh, okay.
Scott Tolinski
Along the lines of agent browser and Opus, I have an issue with Opus specifically.
Scott Tolinski
Always, like, just totally failing with screenshots from agent browser, and it coming back with, like, a "This screenshot is too large for Opus," and then I have to start a completely new session because it pollutes and kills that session. This has happened to me a couple times. Okay. I was gonna say, I was like, is this something that is only happening to me? But seeing as you use these same tools, I figured that was worth asking. Like, if I have a long running process and I have agent browser, it can kill that process and, like, ruin a long running thing for me. Yeah. Just a huge bummer. I've definitely hit this myself, and it really corrupts, at least in OpenCode for me, the whole session, and so I have to, like, start over.
Guest 1
And I will say that these sessions sometimes get really valuable. There's been times where I will stop and say, wow, what I'm doing here is actually, like, I really wanna save this. And I will say to it, like, save a compaction of this session to a markdown file so that I can kinda come back to it later.
Guest 1
Because, certainly, like, I've built up in the context, like, enough interesting stuff that I'm like, wow, there's, like, good things going on here. Yeah. And how closely are you monitoring that context? Are you sending things to subagents or anything like that, or not really? Not really. I'm not gonna say it's perfect. There's gonna be some days where, you know, you hit context compaction, and then you're like, oh, no. It's gonna start off on a weird foot, and that happens.
Guest 1
I would say this is where I've noticed OpenCode seems to have improved a lot in the last few weeks even, where I used to have more problems around this. And now, even just, like, in the last week, I don't really have many problems around context compaction, where it just sort of does it right, and whatever internal prompts they're using for this seem to be pretty good. Another thing that I forgot to mention is I started this file with the LLM called discoveries.md at some point. It may have even suggested this to me. I don't even know if it was, like, my idea. It was just a list of things that were like, oh, this version of the React DOM renderer doesn't work with Webpack, or this CJS module doesn't load with the Vite stuff, like, all that kind of stuff. And it just kept the log in there. So when it hit these bugs or hit these, like, ecosystem issues, it sort of, like, figured out what to do and moved on,
Wes Bos
rather than, like, just hitting it over and over again. I love that move as well. So I'm working on a TanStack Start app right now with the Cloudflare Vite plugin and all that stuff. And it keeps hitting this, like, loop where it realizes that it imported some server side code into a client module, and then that triggers a warning because it's trying to tree shake it out. And then it tries to, like, make it into dynamic imports, which is a nightmare.
Wes Bos
And then, like, it's just this constant spinning. And I hit it, like, three or four times, and I was, like, no. Like, we solved this already, and, like, you obviously don't know what to do here because you're just going in this crazy loop. You know when it goes, like, but wait, they already said this. Oh, but looking at it again, I did x, y, and z. That drives me nuts when it just spins forever like that. And at that point, I was like, alright. I was just throwing it in my agents.md, like, this is how you solve
Guest 1
this specific problem. Or even just, like, posting it in a gist or something like that and linking out to it. Kinda, like, piggybacking off that, one of the insights that I've had is that, like, agents are really good at taking feedback. Right? Like, humans aren't. Right? Like, if somebody writes a document and I say, that's a bad document, and they write it again, the next generation is not gonna be, like, light years better. Right? Mhmm. But agents, when you tell them to do something differently, you provide them new context, they really get better. And this is where I think there's a lot of people still kind of coming along on this journey. Right? Like, I mean, I have developer friends who sort of don't really even wanna touch this stuff. And what happens is their first interaction is they look at it and they say, oh, well, it didn't do the thing right. So Yeah. Therefore, it cannot do the thing right.
Guest 1
And I'm like, oh, you just gotta hold on a little bit longer, because if you just correct it, then on the fourth or fifth loop, it actually will start doing it right. It'll stop making that mistake. That's where I think it really trips people up. It's, like, wrapping their head around how good these agents are at interpreting course correction.
Scott Tolinski
Yeah. I do find that too. Like, people just send a prompt, it outputs something they didn't expect, and they say this tool is garbage,
Guest 1
rather than yeah. Well, as programmers, that's how our brains are trained. Right? Like, you got humans on the one end, which can take feedback, but are sort of very squishy and bad at dealing with it. Right? And then on the other end, you have programs, which are, you know, like, if you write a program and it fails, you expect that if you run it again, it will fail. Right? Mhmm. And then Mhmm. LLMs sit in this weird, squishy, in-between nondeterminism thing. And this is where the nondeterminism is a feature, not a bug. Right? Like, it's like, yeah, it'll output garbage Terraform that took down your production database, but, like, you can tell it not to do that again, and it probably won't. Probably.
Wes Bos
Probably, maybe. Yeah.
Guest 1
Probably, maybe.
Guest 1
Well, I'll say, like, I'll add on. Like, I am not, like, this AI maximalist. I have had my skeptical moments. I have also been coming on this journey the last few months like everybody else has. I'm really excited about where we're going, but, like, I'm also simultaneously terrified. Right? Like, I see the mistakes people are making. I see the mistakes that I'm making with it, and I know there's real gaps here. And so, you know, there's people saying all these terrible things are gonna happen because of AI.
Guest 1
And I'm like, you're yeah. You're probably right. I don't know. Like, you know, that's Yeah. But it's also amazing. It's both at the same time.
Wes Bos
Well, let's talk about, like, code quality a little bit on that as well. Like, the code that it was kicking out, was it of good quality? And did you ever hit these, like, areas where it would just go down a path and start doing awful stuff? Right? Like, the other day I spent, like, an hour writing up the most beautiful doc. It was a Friday night, and it was, like, 5:00.
Wes Bos
Hit the button. I put it on the Cursor long running agents, which can go for ten hours, you know, and I was, like, man. Came back on Monday, and it had written, like, direct SQL queries. It just sidestepped all of my ORM. And, like, I was just looking at it like, man, I spent, like, three hours just undoing a lot of the, like, bad stuff that it did, and I was like, I thought I did a good enough job planning for this. And, like, did you ever hit that, where the code quality or the direction that it went was just not it? For sure. I have definitely hit that. And I would say that every time I tried to look at the code, I definitely was
Guest 1
not super thrilled. It's not code I necessarily would have written directly. Like, it's verbose. And part of me on this project was having to let go of that a little bit, right, and say, okay, what's the goal here? The goal is to get compatibility, and the goal is to get the tests passing and, like, have confidence that, you know, that is the answer and that it's gonna work. Right? I mean, I framed this as an experiment. It's still an experiment. This is an experiment in how far you can push AI.
Guest 1
And it's uncomfortable for me, but I've had to sort of let go a little bit of, like, yeah, the code quality is maybe not great, but does that really matter? And if it does matter later, we'll fix it later. I'll give you a very specific example. One of the things that it went down the rabbit hole on was it did a lot of code generation here. So this is true today. If you look at the vNext code base, what is actually contained in, like, the client bundle is actually generated from template strings, right? Like, interpolated template strings.
Guest 1
And it it really rubs me the wrong way. Right? It's not type checked. It's not linted.
Guest 1
There's tests for, like, the end to end behavior, but there's not, like, unit tests for little bits and bobs.
Guest 1
And so this is something I've actually been working on with all the other contributors: we're trying to extract this out. So we're saying, hey, no, LLM, you kinda did a bad thing here. You went way too far down this path of doing something that's, I think, hard to maintain for humans and for LLMs. Right? Because, you know, massive interpolated strings of code are also hard for LLMs. And so we're trying to extract that out and get those into, like, proper, you know, linted, type checked code that is then pulled in in the right places.
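To make that complaint concrete, here's a hedged, hypothetical sketch (not actual vNext source) of what generating client code from an interpolated template string looks like. The function name and emitted code are invented; the point is that the string body is opaque to the type checker and linter:

```typescript
// Hypothetical illustration, not actual vNext source: emitting client
// bundle code as an interpolated template string. Everything inside the
// string is invisible to TypeScript and ESLint.
function renderClientEntry(routes: string[]): string {
  return [
    `const routes = ${JSON.stringify(routes)};`,
    `export function matchRoute(path) {`,
    `  // a typo in here would only surface at runtime in the browser`,
    `  return routes.includes(path) ? path : null;`,
    `}`,
  ].join("\n");
}
```

The refactor being described is to move that body into a real module that gets linted and type checked on its own, and then pull it into the bundle instead of interpolating raw strings.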
Scott Tolinski
And if you want to see all of the errors in your application, you'll want to check out Sentry at sentry.io/syntax.
Scott Tolinski
You don't want a production application out there that, well, you have no visibility into, in case something is blowing up and you might not even know it. So head on over to sentry.io/syntax.
Scott Tolinski
Again, we've been using this tool for a long time, and it totally rules. Alright.
Scott Tolinski
So I've been working with Py lately and writing extensions and orchestration loops and all that kind of stuff. And, like, the thought process is, like, every single feature, it takes different passes at it. Okay, here's a, you know, linting pass. Here's a whatever pass. Here's a style pass. Here's a UI pass, a UX pass, all that accessibility stuff. And each of those is, like, its own separate thing. It feels right now like a lot of extra work when, like, I could probably just say, oh, hey, this wasn't good, try again with these.
Scott Tolinski
So I'm still trying to figure out, like, what that optimal workflow is for, like, preventing those types of things, because ultimately
Guest 1
you do continually go back and forth and back and forth with it until it gives you something that's decent. This is where, like, I think guardrails are really important for AI. I mean, the test suites are really important, you know, linting, formatting, those kinds of things. But then they can also sort of box it in a little bit. Right? So it's like you all Mhmm. You almost wanna, like, have these small, nice tasks that are very easily contained and have those guardrails. But every once in a while, you wanna give the LLM sort of, like, a free pass and say, okay, well,
Scott Tolinski
what would you do if you could just redo this entirely or do something differently? Yeah. I found that to be helpful too. Yeah. I regularly run audits, like, audit this, audit this, tell me what you would change about it here. And sometimes it's just, like, always looking for things to change even if there aren't
Guest 1
things that it should change. But a lot of times, there are, like, valuable insights in there, things you didn't think about. What about security with this type of stuff? So I don't know if this is true. You can maybe tell me. But, like, the Vercel folks put in a bug bounty against the Cloudflare bug bounty program. I'm sure they took their whole backlog of security vulnerabilities and ran it against this. Can you talk about security a little bit and tell us, was that actually true? Did they pay out Vercel? Some of this is still, like, in process. Right? These things take a minute, like, for us to go and fix everything and triage reports and do all that. But we got a lot of reports outside of Vercel too. So they did send us reports. I'm very appreciative of that. Honestly, like, some people were framing it as, like, sort of a gotcha. And I'm like, it's a week old project. Of course there's security vulnerabilities. Right? Like, yeah. This is great. Please send us more. I wanna know what's a problem so I can send it to the LLM. Yeah. And shovel it right in. And this, again, this whole story is about AI and AI and more AI. So we have AI triaging security vulnerabilities. We have AI finding its own security vulnerabilities. I can't share too much about this yet, but we're going to have a blog post about this, where we actually did build our own AI agent and harness to find some security vulnerabilities in this project, because we saw what was coming in, and we were saying, well, we could probably just find these ourselves, because these are clearly AI written, so we should do this ourselves.
Guest 1
And then we did, and then we took that thing and started pointing it at other projects, and it started finding other vulnerabilities. And we were like, oh, this is really good. Right? So we're gonna have some more coming out about this in the future, but Oh, cool. We're using this as a learning opportunity. Right? How do we use AI? And what we've learned right now around security is that it's actually pretty good at some of those things too. So AI is triaging security vulnerabilities. AI is fixing them. AI is validating them. AI is responding to reports and going back and forth with people on, you know, what's going on. So we're trying to use all these tools everywhere. To be very specific, like, yes, we're very aware there's some security problems. I mean, you can go look at the repo and see the ones we fixed. I think we've done something like, maybe, the twenty-sixth or twenty-seventh release since we launched this thing, like, about, you know, two weeks ago. So we're in there fixing stuff. We're maintaining it. We're keeping going. I'm trying to map this out right now. Like, how do we get to the point where we'll remove the experimental label and call it stable or something like that, or call it beta Yeah. Just to give people a little more confidence that, yeah, like, this thing is ready for production. The end game is, like, you will hopefully release this thing, and it will be, like, usable for people that wanna run Next. We have customers doing that now. Right? I mean, we've told customers about, you know, where it has gaps and where the risks are. And especially, like, there's a lot of people out there that use Next in a very limited way. Right? Like, they sort of make a static site with Next, and maybe they've got a few random API endpoints, or they do maybe one page, you know, dynamically generated, but the rest of their site is all static.
And so we have people that have sort of that, I'd say, kind of, like, narrow feature set of Next, and they're having a pretty good time with this so far. Do you think that makes sense
Wes Bos
to literally port the whole thing? Obviously, it does. This is a dumb question to ask you, but does it make sense to port the framework versus just, like, telling it to move the website
Guest 1
to, like, another framework? That is a great question. And I think I said something about this in the blog post. I've said these exact words to customers. I said, if you like Next and you wanna use Next, then this is a good option. If you don't like Next, if that's your problem, then you can spend $10 in tokens and be on a different framework. Right? And we've seen customers do that. I mean, there's so many options out there. Astro, TanStack, Solid. Right? Mhmm. Like, if you wanna be on another framework, the cost of switching is so low, thanks to AI.
Guest 1
Especially if you've got any kind of good end-to-end test suite. So I have said this to people, and like I said, I didn't do this because, like, I love Next.js and I wanted to, like, that's not why I did this. I did this because I was, like, interested in AI. And I've definitely said to people, if you don't wanna use Next.js, spend the tokens, go try another framework. Totally. I mean, I
Wes Bos
I've been trying to move a fairly large project off of Express for a long time, and I've just never gotten to it. Express is one of those things where it doesn't bug me enough to wanna move off of it. You know? And then finally, I just let this thing rip for, like, an hour or whatever, and it was totally moved over to Hono. And I was like, man, that is amazing. The portability. I think, Scott, we should probably do an entire show on that. It's just, like, migrating
Guest 1
is the barrier to entry of migrating is so low right now. Totally. Yeah. I do wonder, I mean, I touched on it before, but how it's gonna change the incentives around, like, what we do and how we write software and where abstractions matter and where they don't. I don't claim to know the answer. I just know that the line is probably gonna change. Yeah. It's all very fascinating. And who knows, like,
Scott Tolinski
with the the way that frameworks are changing, CJ and I over here have been, like, talking quite a bit about, like, frameworks. And right now, you know, frameworks are designed to be authored by us, but, like, what types of changes would we make, in, like, a next generation of frameworks that were designed specifically to be authored by AI? Like, what types of optimizations would be needed there for a framework to perform or to for the AI to be able to write framework code better? It's just like an interesting thought experiment. We don't have anything to show for it, but we're both hacking on stuff just seeing if we could make anything interesting.
Guest 1
I think we'll see some AI-first frameworks. I think we'll probably see an AI-first programming language. Right? Somebody will come up with a way for Interesting. Something they think AI could just be better at. Now, all of these will suffer from the bootstrapping problem of, well, it's not in the training data, so, you know, it won't get recommended, or it won't understand how to do it. But I think that's just gonna get solved. That's actually my personal take, is that we cannot live in a universe where we're always, like, two years behind on what LLMs are telling us to do. And so I think there will be some part of the technology or the training or the post-training or something that will get solved here. Like, somebody will come up with some technique to say, like, we're gonna inject in, you know, all the relevant data for this new language, because it's super critical that AI uses it. What would an AI programming language look like? What do you think? I talked about guardrails. Right? So probably something that's typed. Yeah. I look at the programming languages now, you know, like, Rust is verbose but has, like, all of these guardrails.
Guest 1
Famously, Rust people will say, well, if it compiles, it probably runs. Right? And it probably works. So Rust kind of seems close to what you want, but almost like you want something that's verbose and simple, which is more like Go. Right? Like, Go is sort of designed so that, you know, there's only one way to do things, or a couple ways to do things. And so, I don't know. Something that has sort of more of the guardrails of Rust, but then, like, the simplicity of Go. I guess that's what I would say. It's, like, my guess.
Wes Bos
And do you think we would have, like, a rigid syntax similar to how those languages are? Or do you think it would be more free-flowing
Guest 1
English? I'm guessing rigid because I think it needs, like, those guardrails. Right? Like, a limited syntax.
Guest 1
I say all this, too, as I'm a TypeScript guy. Like, that's my background. I was a front end dev historically, before I kept getting put in charge of infrastructure teams.
Guest 1
And so I like writing TypeScript. And so if TypeScript, you know, doesn't make it in the AI world, I'll be a little sad about it. That's good. Actually, that was one more question I had
Wes Bos
about your setup. Did you have the TypeScript LSP hooked up in OpenCode? I think it's on by default, or was it just grep and strings?
Guest 1
No. It's on by default, you know, and so I think it was just working in the background. I don't know if it made much of a difference at all. I will say that it gets caught up on the LSP sometimes, where the LSP is out of date. I don't know if you ever see this in OpenCode. It'll say, like, oh, there's an LSP error, and then it will catch itself. It'll say, well, I know that the type check passed, so the LSP is just not caught up or something. Right? Like, it's in the background. And I noticed it hiccups on that sometimes. I don't have any data that says the LSP stuff is helpful, but I don't have any data, for me, that says it's not helpful. Right? Like, it was running and I got the project done, so, hey. Yeah. I'm curious to see, when the TypeScript Go stuff becomes a bit more mainstream, where they can type check your entire project in, like, twenty milliseconds, I'm curious to see if that will help it at all. We do use that here. So we use TypeScript Go, Oxlint, oxfmt. Mhmm. Yeah. Like, we use all three of those. And then Vite, Vitest. I prioritized, like, all the latest and greatest speedy tools, because I knew that that would be important. Right? Like, you want that, like, feedback loop. Right? Yeah. So Yeah. I agree. If it's, like, a three-second
Wes Bos
tsc compile every single time it does something to check that there's no errors,
Scott Tolinski
it really slows you down. Yeah. I do feel like Oxlint and oxfmt, those types of tools, I think once they hit
Guest 1
more people, I think people are gonna be really impressed with how great they are. I mean, that's all from VoidZero, from the Vite team. I mean, they are cooking over there. Right? They are. Vite 8, which is just Rolldown instead of Rollup, is not the default yet. It's still Vite 7, so vNext uses Vite 7 by default, and you can opt into Vite 8. I believe they're going to have something, you know, kind of beta soon-ish. I don't wanna commit to any timelines, because they haven't told me. But, you know, as soon as that's ready to even be in, like, a kinda public beta, we're gonna swap the default, because it improves build times by, like, another two x or something. It's pretty drastic. Holy smokes. Yeah. Crazy. Yeah. Those folks are cooking lately.
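For anyone who wants to try the Rolldown-powered Vite before it becomes the default, the path the Vite team documents is to alias the `vite` package to `rolldown-vite` in `package.json`. A minimal sketch with npm (the exact version tag is up to you):

```json
{
  "overrides": {
    "vite": "npm:rolldown-vite@latest"
  }
}
```

With Yarn the equivalent field is `resolutions`, and pnpm nests it under `pnpm.overrides`; check the Vite Rolldown guide for the variant that matches your package manager.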
Scott Tolinski
They definitely are. It it feels like the CloudFlare folks are are cooking too.
Scott Tolinski
And, also, I've noticed a lot of Vercel folks are even being scooped up by Cloudflare. I know Vercel often has the reputation for having, like, really good DX, and Cloudflare has always had the reputation of having, like, great tools. Is Cloudflare taking that DX part of it more seriously now? Is that the message?
Guest 1
100%. So, when I joined Cloudflare, that was a big part of what the goal was. Right? And so, how do we get the right people in that sort of, like, have this good taste for DX, and how can we just let them go? And, you know, it's a big company. It takes time.
Guest 1
I often get asked, you know, what's your job at Cloudflare? And I use the analogy of, like, I am not responsible for fighting fires. I decide where fire stations get built. Right? At my level, that's the kind of time horizon I'm on. So for me to understand if I'm being successful, you know, I have to look back two years to see, like, well, did I make the right decision there that resulted in, you know, a team being formed that created the right conditions for this to get better? I do think that is where things are headed at Cloudflare. We've invested a ton in hiring the right people. We have a whole new design engineering team that's responsible for our dashboard, and they are just burning through pages trying to, like, make them better.
Guest 1
So we got work to do, but I think it's way better than it was a couple years ago. So I hope other people think that's true. Every time I load up Cloudflare, something looks a little bit better here and there. So I think it's noticeable already, and then that effort, I think, will continue to pay dividends for y'all. Because that was, you know, I think, a rub with Cloudflare before, even navigating it. And we have a bunch of projects that I can't necessarily share, in the works, but we got a bunch of stuff in the pipe that I think is gonna make us even better. I think we're tackling it from, like, sort of multiple layers. We have people that are like, go fix the dashboard today. Make it good. Make it faster. Make it better. Right? And so we have people focused on that. And then we have even some people now focused on, well, what does the entire future of the platform look like? Like, how do we reinvent, like, the surface area of our platform such that it's not only good for humans, but good for agents? Right? Like, you know, how do you build tooling that is, like, agent-first? Because it's different. Right? Like, an agent needs good DX, but agentic DX is different than human DX. Right? It doesn't need to look nice, but it needs to still be, like, intelligible to an agent, so it understands, like, well, I need to click on this thing, or I need to go follow this path to figure out what to do. Cool.
Guest 1
Anything else we didn't touch upon before we wrap this up? I'll give you just, like, the super high level. Right? Like I said before, I'm both simultaneously, like, thrilled and terrified about all this stuff and where it's gonna go. Right? Like, I mean, it feels like we get to be alive at what feels like a big technology revolution. Right? I mean, this is like the printing press or the steam engine or something. Right? I mean, the closest thing we have in our lifetimes is mobile, which I think is still an order of magnitude smaller. Maybe the Internet. But even the Internet, I mean, it took a while for us to get all the pipes to everybody's houses. You know, it takes a while to roll out, whereas you get the latest and greatest on your phone within, like, twenty-four hours, the same as anybody else, right, when these things get released. And so Mhmm. Not only is it, like, the magnitude of the revolution that's happening, but also the compression of how fast it's happening. Some days, I feel so ahead of the curve with this stuff, and then other days, I'll look at what other people are doing, and I'm just like, wow, I'm barely scratching the surface. Yeah. Do you have any ideas of, like, what industries
Wes Bos
are sort of, like, next for this type of thing? Like, obviously, it's doing very well in coding, and it's doing very well in, like, white collar office jobs, you know, like accounting, Excel, email.
Wes Bos
It's being abused in marketing and whatnot. What's next? Like, what do you see? Probably the next big thing I imagine is
Guest 1
I have to think that medicine, right, is going to get Mhmm. You know, pretty upended by this. I think it'll be very similar to what's happening in coding, where there's a lot of things that can be done by an LLM, but then you need, like, an actual doctor with experience to, like, steer it along the way. Right? You know, it can't just do all of its own stuff. And so I have family members in medicine, and I just sort of see what they do and the direction they're headed. It's already there. I mean, some of these hospitals are already using LLMs and, like, have, like, you know, LLM-based voice transcripts and things like that. Like, some of the tech is there and in use. I think there's a lot of regulation in that industry, so it'll take a while to sort of, you know, permeate fully. But I think medicine is something that could get very, very upended just in terms of, like, how fast you can understand what's going on with a patient. Right? Mhmm. So.
Wes Bos
I'm kind of excited for it. Like, obviously, there's horrifying things that could happen in regards to, like, the wrong people having that data and not being able to cover your payments, things like that. But also, like, I have an Apple Watch on, and I have all of this data of how I slept, my heart rate, all of this stuff, you know. And then you take a blood test, and then take that and compare it against, what, 20,000 other people, a 100,000 other people, all these different cases. There's gotta be something there. Right? And I'm sure people are working on that. I think this is where, like, we as technologists and the people sort of inventing this stuff around us have to
Guest 1
try our best to sort of steer things in the right direction, because this technology gets used for good and bad stuff. Like, I mean, I brought up the example of the printing press before. I mean, the printing press changed the world. The printing press started wars and revolutions.
Guest 1
Right? You know? Like, all bad stuff happened too. And so I think both will be true in this case, and we just gotta try to keep our thumb a little bit on the scale of, like, use this for the right things to help people and make things better. Right? It's gonna be hard, but that's what I'm trying to do. So now's the part of the show where we're getting into the sick picks, which is just things that you're really enjoying in life,
Scott Tolinski
because all this stuff is scary and crazy, but we gotta have things that we like. So do you have something in life right now that is just you're enjoying could be a product, a a show, a podcast, anything?
Guest 1
Sick picks.
Guest 1
Here's the thing. I'm I'm I'm exploring becoming a watch guy.
Guest 1
Oh.
Guest 1
This is a relatively new thing for me in the last, like, few months. So, yeah, I actually inherited a watch from my grandfather that he got for retirement. So it's got, like, some sentimental value for me. Now, it is gold and flashy, and so it's not really my style. And so that sort of begged the question, well, what is your style if you were gonna be a watch guy? And so this is where I landed. This is the IWC Mark 20.
Guest 1
It is, made by, well, IWC.
Guest 1
And so this is just one of their kind of, like, standard pilot watches. It's not, like, the flashiest, most out-there watch, but I've been very happy with it so far. Yeah. How do you find out what your watch-guy style is? How do you develop that? It probably took me, like, three months to decide to buy this particular watch. So I went and tried on a bunch, just kind of felt what felt right. Honestly, I didn't know either until I went and tried some on. And then there's a few watches where you kinda just leave the store and you go, like, I feel like I forgot something there. Right? When I left the store. And I was like, what am I missing? And then I'm like, oh, wait. No. I just really liked how that watch felt, and this is sort of one of them. You can tell I've got a couple others in mind that maybe will come through the pipe. But this is my first, like, proper watch purchase. So, yeah, that's kind of been a fun thing. I also just like the mechanics. So I'm actually a mechanical engineer. That's what I went to school for, before I bumbled into software engineering sort of by accident. We didn't cover my background, but, you know, there's a whole story there. I worked on, like, the Ford Mustang for a little while at Ford. I kinda did some software and mechanical engineering. So I really like the mechanical stuff around watches. I mean, the fact that it is, like, on your wrist and there's little spinny things in there that just run all day, like, it's just very, very cool to me. Oh, that's cool. That's actually something I was building the other day. I use this three d library called Manifold.
Wes Bos
It's it's like a it's like a c library, but there's there's TypeScript bindings for it. You could just write JavaScript. And, essentially, it will just, like, create three d shapes that are, like, watertight. Right? And, I I do a lot of three d printing and whatnot. And and probably about a year ago, I made this website called bracket.engineer, which is is for making brackets that hold up, like, power supplies and whatnot. And then I I came back to it a couple months ago, whatnot, once the the models got much better.
Wes Bos
And I was like, I wonder if it can do things like, like, planetary gears and whatnot and do all the math behind that. And I thought that was kind of interesting. It made all the, like, actual mechanical engineers mad, because they're like, we have calculations for this already.
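For a flavor of the gear math an LLM would have to get right, here is a small sketch of the standard planetary (epicyclic) gear train checks. These are textbook mechanical engineering formulas, not code from the episode or from bracket.engineer:

```javascript
// Planetary (epicyclic) gear train basics, standard textbook formulas.

// For sun, planet, and ring gears of the same module to mesh at a
// common center distance, the ring's tooth count must be sun + 2 * planet.
function ringTeeth(sunTeeth, planetTeeth) {
  return sunTeeth + 2 * planetTeeth;
}

// N planets can be spaced evenly around the sun only if
// (sun + ring) is divisible by the planet count.
function canSpacePlanetsEqually(sunTeeth, ringTeeth, planetCount) {
  return (sunTeeth + ringTeeth) % planetCount === 0;
}

// Reduction ratio with the ring gear held fixed and the carrier
// as the output: 1 + ring / sun.
function carrierRatio(sunTeeth, ringTeeth) {
  return 1 + ringTeeth / sunTeeth;
}

// Example: a 12-tooth sun with 18-tooth planets.
const sun = 12;
const planet = 18;
const ring = ringTeeth(sun, planet);
console.log(ring);                                 // 48
console.log(canSpacePlanetsEqually(sun, ring, 3)); // true
console.log(carrierRatio(sun, ring));              // 5
```

If a generated design violates the first two constraints, the printed gears simply will not assemble, which is exactly the kind of thing the mechanical engineers have closed-form checks for.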
Wes Bos
But it worked pretty good. It took a little bit to get there, but I was very surprised. I've seen a little bit of, like, the LLMs kind of intruding in the, like, traditional engineering
Guest 1
category. What were you gonna say? Like, mechanical engineering. And it's kind of been fun to see, because there's a lot going on there. Right? Like, I love being a software engineer. I love writing software. But when you look at what it is like to engineer for the real world, right, it's just, like, entirely different. Right? Like, it's very different in terms of, like, what you have to think about and, like, sort of all the trade-offs you have to make, right, around, like, materials. And you might do these things to make something, like, even 5% better. Right? And that's, like, a huge success. Right? You have so many more constraints, and software just feels like this, like, sort of boundless, do-whatever-you-want thing. Right? I don't know. Just do whatever you want on the web page. Right? Put more RAM in it, and we'll be fine. Yeah. Yeah. It's part of why I got into software engineering, but I definitely miss some of the mechanical stuff. Totally. I haven't dipped into three d printing yet, because I know when I do, it will be bad for me, and I will get way too deep into it. Oh, man. Wow. Oh, you will. Yeah. That was my life. I didn't get one for probably five years, because I knew I'd be obsessed. And I've had it for a year, and I am obsessed.
Guest 1
We're gonna be moving next year. We have, like, this lofty condo thing now. We're gonna move into, you know, a big proper house, and I will have space for something like that, where I can have, like, my dedicated three d printer off, you know, in the corner workshop.
Guest 1
And I I'm gonna do it then.
Scott Tolinski
Oh, yeah. And then it won't ever stop printing. Yeah. It will just be printing twenty four seven. Yeah.
Scott Tolinski
So is there anything you would like to plug before we get out of here?
Guest 1
Obviously, vNext. Please go try it. See if it works. The best way to try it is to just point your LLM at it. Just say, migrate me to vNext, then point it at the repo, and there's a skill, and it will just, like, figure it out. I would love more feedback about what works, what doesn't work. I mean, I know it's not perfect, so we're trying to, you know, fix things as fast as they can come in. I think we've merged, like, a 150 PRs in the first few weeks. I mean, we're in there. We've got people working on fixing stuff. Right? So that's shameless plug one.
Guest 1
Shameless plug two is Cloudflare. I work at Cloudflare. I don't know, maybe I haven't mentioned that enough. You should go use Cloudflare and Cloudflare Workers. It's great. And there's so many other things beyond just, like, host-your-website that you can do on Cloudflare. We're really building out, like, an entire developer platform. It is, like, the cloud of the future, and it's gonna be amazing. And you should come play with all this stuff. Sick. Big Cloudflare fan here. Hold on. I got a Cloudflare blanket behind me.
Wes Bos
Nice. Nice. Hey. Good Cloudflare fans. Cool. I appreciate all your time. Thanks so much for coming on, and we'll have to catch up another time. Yeah. Of course. I really enjoyed it, so I hope it turns out really good. Alright.