791

July 5th, 2024 · #serverless #lambda #javascript

LLRT The Serverless Runtime w/ Richard Davison

Deep dive into LLRT, Amazon's new crazy fast JavaScript runtime tailored for serverless environments like Lambda. Covers background, implementation, benchmarks, use cases and more.

Topic 0 00:00

Transcript

Scott Tolinski

Welcome to Syntax. In this supper club, we're talking to Richard Davison.

Scott Tolinski

We get deep into all about the new LLRT runtime for JavaScript that they're working on over at Amazon.

Topic 1 00:15

LLRT runtime for JS is crazy fast, 30x improvement in cold start

Scott Tolinski

This thing is crazy fast. In in some cases, it can be over a 30 times improvement in cold start time. So, we get into all of the details about how this thing is able to be so incredibly fast, how he's putting this together, what you can use it for, what are the ideal use cases for serverless in general, and just all the ins and outs about building a runtime for a serverless platform.

Topic 2 00:39

Richard's background - early Node developer, ecommerce, serverless

Scott Tolinski

So without further ado, let's get to the interview.

Scott Tolinski

Welcome to the show, Richard. How's it going, man?

Guest 1

Thank you so much, Scott. Thanks for having me. It's it's it's great.

Guest 1

Beautiful weather here on the west coast of Sweden.

Guest 1

Cannot complain. It's about 25 degrees Celsius. Sun is shining. You know? Awesome.

Scott Tolinski

I love it. Love it. So maybe before we get into any of the LLRT stuff, let's maybe just give if you could give us a background on, you know, who you are, what you've been doing, and and how long you've been doing it.

Guest 1

Sure. So, my name is Richard Davison.

Guest 1

I'm a I wanna call it developer slash architect from Sweden.

Guest 1

So I've been building with web technologies basically my entire career. So I started out with Node, I think, almost 10 years ago at basically my first real job out of university.

Guest 1

So we were quite early back then. It was at CGI, so a big consultancy company. We were really early in the Node ecosystem. We started building when the version was something like 0.8.4.

Guest 1

I don't quite remember. But we were very early on there, so we kind of jumped straight into to JavaScript.

Guest 1

And in hindsight, you could argue whether the product or the tooling was mature and everything, but, still, quite fun. So I started doing that for a couple of years. Yeah. It was very challenging, of course, because it was quite different back then as opposed to how it is now. It was wild back then. There was no, like,

Wes Bos

ecosystem. Like, all of the packages were being made, and it's a totally different space now.

Guest 1

I don't even remember if there was, like, npm or not. I can't remember. I think that was maybe even before npm. I don't know. But, yeah, it was pretty wild. There was no ESLint, no Promises or anything like that. I think Promises were, like, third party. Promises started arriving, but there was no async await and not anything that was built into the specification or anything like that. So quite special. So I did that for a couple of years, and we focused on building apps. That was, you know, the big thing to do, writing and building mobile apps. And we used JavaScript to build them cross platform.

Guest 1

And there was a bunch of technologies that were used to do that. I think it was something called Titanium, if memory serves right. I don't even know if they're still around, but we used that technology to build cross platform.

Guest 1

That was way, way before React even existed or React Native or something like that.

Guest 1

Then I transitioned to the e-commerce industry. Also very fun, because you get to tackle a lot of challenges, because basically every customer had their own demands and their own requirements. The user experience was the center of everything in e-commerce, right? Because it directly translates to revenue. So if we could improve the user experience for our customers' websites, we could make them faster, more efficient.

Guest 1

It directly translated to increased revenue, which was also very fun and challenging, especially since the frameworks that we used at the company I worked with were a bit outdated. They had a bit of legacy. So I built some tooling around that to try to break them apart into more of a microservice architecture.

Guest 1

And finally, I ended up at AWS. I was recruited, I think, 2 and a half years ago, something like that. So a recruiter reached out to me, and I ended up here. So now I'm at AWS working predominantly with modern application, or app mod, technologies. So that means serverless and container technologies. And of course a lot of Lambda, which is very close to my heart, and JavaScript has been with me this entire journey.

Wes Bos

Yeah. So is it mostly JavaScript that you're working on at AWS or,

Guest 1

other types of languages? Yeah. It's a lot of technologies, since I came from this kind of, like, diverse background where basically every customer had different requirements. Yeah. Sometimes we also had projects that we took over or improved or something like that. So it kind of required you to have a very broad understanding of different technologies.

Wes Bos

Yeah. So we're here to talk about LLRT, which is a new JavaScript runtime that Amazon is building.

Wes Bos

And this is wild to me that we now have way more JavaScript runtimes on the server. You know? Like, we've obviously had Node, and then we've got Bun and Deno and Cloudflare Workers. And I did a whole talk. I think I counted up to 9 of them, and that was before Amazon came out with this LLRT.

Topic 3 05:19

Many JS runtimes now - Node, Deno, Bun, Cloudflare Workers - why another?

Wes Bos

So why is Amazon building yet another JavaScript runtime?

Guest 1

And what is it? Yeah.

Guest 1

Yeah. Sure.

Guest 1

So it's not actually, like, an official Amazon or AWS product right now. No. It's still more of an experimental package. So we're trying to, you know, build something that targets a specific set of features that we haven't seen anywhere else.

Guest 1

And this is kind of the reason why I came up with this idea internally.

Guest 1

And just to take a step back and talk about, you know, what it is and the rationale around it: my job also consists of building tooling for partners and customers that work with Amazon technologies.

Guest 1

And building tooling can involve everything from making their life easier to making their experience of their end customers better.

Guest 1

Sometimes that can even involve building runtimes.

Guest 1

Not everybody does that, but I'm a geek, so I like geeking out in things, in technology.

Guest 1

But I saw that there was a lack of technologies specifically tailored and built around serverless architecture, or specifically Lambda functions. Right? Lambda is a very capable service. It basically works by letting customers build nano services. Right? If you take your microservice and break it down into even smaller pieces, we call these functions.

Guest 1

So you you build small functions and then you can expose them to the public internet or react on events.

Guest 1

So react to things happening inside of the platform. Someone puts something in a database, or someone puts some data in object storage, or whatever. So it's kind of the event-driven architecture glue.

Guest 1

And these Lambda functions can all run different programming languages. And the great thing about them is that you don't have to worry about scaling, or about keeping them up and running, or resilience, or availability, or anything like that. We handle that for you. So as the load increases or as they process more events, they simply scale automatically.

Guest 1

So you can focus on building applications that solve your business problems rather than spending a lot of time on building complex infrastructure.

Guest 1

But to be honest, it's quite complicated to get that right, right? To set up scalable architectures yourself.

Wes Bos

Yeah. Well, I'll just give 1 quick example for anyone listening who's like, I'm not sure what that might be. Like, imagine you had an application which, at the end of the month, would email a PDF of, like, a receipt to every single 1 of your customers. Right? Like, the old way to do it would be you would have a server running that makes those PDFs, and you might have some, like, cron job where it takes 3 days to generate all of those and send them out. And then you're paying for, like, 28 days' worth of server just to sort of sit there while you're waiting for it. With, like, a Lambda or serverless, you can basically just let that thing sleep.

Wes Bos

And then, for however long you need, you could spin up a 1,000 of them and very quickly do all of that work, and you're simply just paying for the amount of time that you're using. And then, I don't know if you guys use the word spin down, but it sort of closes out, and now that computer is no longer running. Right?

Guest 1

Yep. Yeah. That's about it. And, you know, the great thing about them is that you're only paying for the compute that is consumed. And it's not like you're paying for the underlying machine that is consuming CPU resources or anything like that. It's actually the time spent on processing your event. So it's very fine grained. It's billed at the per millisecond level. Right? So as soon as you get an event in, and that event can be something like Wes described here where, you know, at the end of the month, you need to create a PDF or whatever. But it can also be an event that reacts to, like, an HTTP call, which is a synchronous event. So someone hitting an endpoint, posting some data or requesting some data via GET or whatever. And then you can respond to that with a Lambda function, and it will expose an API. This is a very common pattern as well. So it's not only processing of asynchronous stuff, meaning, like, reacting to things happening inside of AWS, but it's also synchronous stuff being attached to something like an API gateway.

Guest 1

You can even expose an endpoint directly to a Lambda function so you can invoke them without an API gateway, etcetera.

Guest 1

So it's a very versatile service.

Guest 1

And this kind of also works as a sort of an extension to a lot of other services. So if you want to add capabilities, for instance you want to add some custom authorization to an API in API gateway, you can use a Lambda function to have customized code to have that authorization.

Guest 1

We have a bunch of different services that integrate very well with AWS Lambda. So that's, like, 1 of the major benefits: you can get going quickly and start building real value in your product rather than focus on building, like, non-differentiating infrastructure that you just need. That is that kind of, you know, accidental complexity that you have to add to your application.
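For reference, here's a minimal sketch of the kind of handler being described: a function that receives a synchronous HTTP event from something like API Gateway and returns a response. The event and response shapes follow the standard Lambda proxy format; the payload handling is just illustrative.

```js
// Minimal sketch of a Lambda handler behind an API Gateway style endpoint.
export const handler = async (event) => {
  // For a proxy event, the raw request payload arrives as a string in event.body.
  const payload = event.body ? JSON.parse(event.body) : {};

  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ received: payload }),
  };
};
```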

Scott Tolinski

Yeah. In regards to LLRT, like, what makes LLRT specifically applicable to that context?

Topic 4 11:29

LLRT tailored for Lambda functions unlike other general purpose runtimes

Guest 1

Yeah.

Guest 1

I was just gonna get to that, and I'm glad you asked. Good. Cool. Yeah. So it's a good segue. So like I explained before here, when we talked about these different programming languages and sort of frameworks that can run on Lambda, you have a bunch of options.

Guest 1

So you can run Java applications. You can run Python applications, Node.js applications, .NET applications, and a whole heap of other predefined runtimes provided by AWS, as we call them.

Guest 1

You can also run custom runtimes. This means that you can build your own. So as long as it's a Linux program, you can basically ship whatever you want. You can even have a COBOL Lambda function running, right? As long as it communicates with the underlying API. And it's not really super advanced. It sounds advanced, but it's not really. As long as it can do HTTP, you can put it on there. And as long as it runs on Linux, you can put it on there. Right? But the thing is that these languages and frameworks, they are built for general purpose.

Guest 1

So you know, Java was created in the 90s.

Guest 1

Python was created in the 90s.

Guest 1

They all have a bunch of legacy. And back then in the 90s, the concept of serverless and these ephemeral, short-lived running instances that quickly spin up and, like you said, Wes, spin down again and are discarded, that concept didn't really exist.

Guest 1

So you basically you started a server.

Guest 1

Maybe you shut it down when you, you know, had to patch it or deploy an update or anything like that. You did that in the middle of the night when there were no users, and then you started it up again and pretty much left it alone. Right? So the concept of building for that, and that feature of having super fast startup, it didn't really exist. Mhmm.

Guest 1

The general purpose programming languages and frameworks that exist, they haven't been designed around that premise. Right? So they weren't optimized for fast startup. They were optimized for long running tasks and for having good performance when running over a longer period of time. That's not true for every runtime. C++ or C or Rust, or anything like that, are the exceptions.

Guest 1

But the line can kind of be drawn here between compiled languages versus interpreted languages, or languages that have a sort of cross platform approach, kind of the same thing. Those are not as suited to fast startup as compiled languages are. But those are actually the languages that most people use. I mean, there are a ton of people writing JavaScript applications and Python applications and Java applications.

Guest 1

But they all suffer from this sort of legacy and also from not being so fast to start. They still work really well, but for that low latency, there are not really any alternatives except for going to Rust or C++ or C, even Go, right, to have this more compiled approach. But then you sacrifice flexibility because you have to recompile in order to change something. And you don't see the source code anymore in the same way.

Guest 1

There's a value in your source code also being what you execute. Right? You run straight off the interpreter, as is.

Guest 1

So that's kind of the rationale, around this project. Awesome.

Wes Bos

Let's talk real quick about Sentry, the application monitoring software for your application.

Wes Bos

Throw it onto your site. It works on your front end. It works on the back end. It's gonna give you information and insights into why your application is breaking or why it is slow. All kinds of great information, just like some magic glasses for figuring out what's going on with your application. Check it out: sentry.io forward slash syntax.

Wes Bos

Right now, people run Node.js in, like, a serverless or in a Lambda. I'm actually curious if that's the most popular runtime, if you know that. Do you know that?

Node.js very popular on Lambda but has legacy concerns

Guest 1

It's 1 of the most popular ones. Right? I don't think I can share all the statistics, but I mean, there's basically a lot of Node.js functions and also a lot of Python functions as well. I think those 2 are the most popular ones, then Java and .NET and others.

Guest 1

But

Wes Bos

Node and Python are extremely popular to run on Lambda. Yes. Yeah. And so people that are running Node in that, like you said, what issues specifically are they hitting? Is it the time that it takes to initially spin up when you've not hit a function before?

Guest 1

Yeah. So yeah, it really depends on what you're trying to build. In most cases, this is not a huge deal. And you might experience it, to your point, like you mentioned, when you haven't invoked them in a while. So when you haven't sent an event to be processed, then it goes kind of into sleep mode, meaning that it conserves resources, because you only pay when it's running compute, right? So this is sometimes referred to as a cold start. When it needs to provision the resources again, it needs to download your code, the package that you published to Lambda, right? It needs to download that and spin up a new execution environment. So that is a VM.

Guest 1

So every Lambda function runs inside of a virtual machine.

Guest 1

This is a special virtual machine called Firecracker.

Guest 1

And it's optimized for fast startup and it's sort of a super lightweight virtual machine. So it loads the code into this virtual machine.

Guest 1

And then it starts it up. And all of this takes a bit of time. And along with your Lambda function code, you provision how much resources it's allowed to have, because there's a linear relationship between how much resources this Lambda function has versus the cost, right? So the more resources you give it, and this is known as the memory configuration.

Guest 1

So the more memory you provide your Lambda function, the more CPU it will have, but also the greater the cost. And it's a completely linear relationship there. So twice the memory will give you twice the CPU and also twice the cost. You wanna keep the cost down. But that also contributes to the cold start.

Guest 1

But what we've seen here in production metrics is that cold starts aren't really a super, super prominent challenge, right, or super prominent problem because it mainly exists during the development lifecycle.

Guest 1

So it typically occurs for between 1 to 2% of all invocations, and even less so. It really depends on your traffic patterns, etcetera. Mhmm. But they can still be a bit disruptive to this seamless user experience that we wanna provide to our end users. So it would be better if we could get rid of them for good, but there's obviously a tradeoff. We have to load these runtimes and languages that weren't specifically designed to have this low memory configuration. The lowest 1 is actually 128 megabytes.

Guest 1

So you can imagine Node, like, spinning up. Yeah.

Wes Bos

Spinning up with that. Oh, I didn't think of that. I was just thinking that, yeah, it's faster. Right? Like, we had somebody DM me the other day saying, hey, Wes. I'm from Germany.

Wes Bos

Sometimes the Syntax site is so slow to load, and then after that, it's super fast. And I thought, yep, it's probably the 1st guy from Germany for that day visiting, like, spinning that up, and it takes a second to get going, and then it's warm, people call it. But I didn't think of what you just said, which is, price wise, if you have to do things that don't necessarily need a ton of compute, then you can go for the really cheap

Guest 1

Lambda. Right? Mhmm. Exactly.

Wes Bos

What kind of stuff would that be doing? Maybe just, like, redirecting or checking auth or something like that?

Guest 1

Yeah. It's a ton of stuff actually, because what we see most customers doing inside of Lambda, they write integration code, or what's sometimes referred to as glue code. They do some data transformation.

Guest 1

They maybe massage the data or remove some unwanted fields or aggregate data from different sources.

Guest 1

And a lot of customers are using or communicating with the AWS SDK or calling downstream services, right? So this is what most people do. And if you think about that, those are not really compute intensive tasks. They're more IO intensive, right? You're waiting on a downstream dependency.

Guest 1

So configuring memory low, and also keeping costs low, would make sense here, since you don't want to be charged for compute you don't need. You basically don't need the CPU to do a network call. Right? That is, in theory. But in reality, it also consumes some.

Guest 1

It requires some CPU to do the TLS handshake. When you do a HTTPS call, there's TLS, there's encrypted traffic. It has to create a secure connection, and there's a bunch of data flowing back and forth. And there's like a ton of optimizations around that that you can do for the people that are really into Lambda and know about all that stuff.

Guest 1

They have optimized that heavily. But it's still required. So if I can remove a bunch of things and create a custom runtime that is super lightweight, that starts fast, and that has the most important things that a lot of customers are doing, if that can be created, it's maybe worth the trade off of not having the full API capabilities like Node or Deno have. Just to be clear, LLRT is not meant to replace Node, Deno, Bun, or anything like that. Those are very ambitious projects.

Guest 1

Yeah. And I'm very impressed by how far they've gotten in all of these projects.

Topic 6 21:39

LLRT not meant to replace Node/Deno, complements where low latency needed

Guest 1

There's no way that that this can ever achieve the full compatibility.

Guest 1

And that's not the design either. It's meant to complement them where you want to have this really low latency.

Guest 1

And to be honest, right, the simplest way of making things faster is to not do as much. So if I can remove APIs and create a lighter runtime, then I will have faster startup just by doing that. If Node removed a bunch of APIs, quite honestly, it would be much faster. Yeah. But they can't do that, you know, because people are dependent on those APIs.

Guest 1

They don't know what APIs are being used. So I have an advantage here. I can say, I only support this. And if it doesn't work for you, then you can use Node.

Scott Tolinski

Yeah. Yeah. No kidding. The benchmarks in the GitHub repo, at least, are really pretty wild, especially for the cold starts like you mentioned. Yep. When compared to Node, it's shocking.

Wes Bos

Yeah. Yeah. Like, for example, a cold start on an HTTP request is 155 milliseconds versus 48. Right? And that might not seem like a lot. I guess a 100 milliseconds is a lot when you're talking about somebody sitting there waiting for the page to load. But if you're talking about paying by the millisecond, I'm sure there's customers at Amazon who are salivating at this, being like, you're telling me we could cut our bill by a third? When you're paying by compute, you're really splitting hairs there. I'm sure they could save quite a bit of money. Is that true? I'm just guessing here. Yeah. And it's actually even more so. So if you look, there's benchmarks on the repo doing

Guest 1

a very simple thing but also a very common thing. It puts some data into our serverless database, DynamoDB. We have a multi column key value store called DynamoDB where you can store data in a serverless way as well. So it works really well with Lambda. So the example just takes the event that gets sent into the Lambda function.

Guest 1

Remember, we're processing events here. This can be HTTP or whatever. And it puts the data into DynamoDB and just returns an okay response.
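For reference, a handler like the benchmark he's describing might look roughly like this, using the AWS SDK v3 DynamoDB client. This is a sketch, not the actual benchmark code; the table name and item shape are made up.

```js
import { randomUUID } from "node:crypto";
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

// Created outside the handler so warm invocations reuse the client.
const client = new DynamoDBClient({});

export const handler = async (event) => {
  // Write the incoming event into DynamoDB (hypothetical table name "events").
  await client.send(
    new PutItemCommand({
      TableName: "events",
      Item: {
        id: { S: randomUUID() },
        payload: { S: JSON.stringify(event) },
      },
    })
  );

  return { statusCode: 200, body: "OK" };
};
```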

Guest 1

So running that with Node, with Node.js here, for a cold start inside of Lambda, it takes around 1400 milliseconds for the best case. And in LLRT, it takes 50. So that's like, yeah, 1400 divided by 50 is like 28 times faster, right, for the fastest case. For the worst case, it's something like 15 times.

Guest 1

And, you know, what's the trade off? The trade off is what I briefly explained before: you have a limited API.

Guest 1

Yeah. You have a very limited API. Right? And you have the savings.

Guest 1

The savings. Exactly. Yeah. So it's not only for cold starts. It's also for warm starts. And there's a couple of reasons why this can be so fast.

Guest 1

So the major reason is this is a completely, you know, new project built from the ground up. So I don't have to adhere to all the compliance and all the backwards compatibility that Node has. Right? So I've cheated a bit here. Right? I can build something from scratch, but I can also tailor it for serverless environments. I know what CPUs it's running on. I can enable features inside of the runtime that only work for those CPUs, like SIMD, that is single instruction, multiple data, very advanced, like, low level CPU instructions that can work effectively with lots of data at the same time. I can also use Rust, which is a compiled language that is also very optimal and very fast, instead of JavaScript.

Guest 1

So the runtime itself is actually implemented in Rust. And it uses another JavaScript engine that is written in C.

Topic 7 25:34

LLRT runtime itself written in Rust for speed, uses C JS engine QuickJS

Guest 1

So Node, for comparison, and also Deno, use V8.

Guest 1

And V8, we've talked about this many times on the show, but it's an engine from the Chrome browser.

Guest 1

It comes from Chrome.

Guest 1

Bun uses a different engine, from the Safari web browser, or WebKit.

Guest 1

But all of these engines, they are very complicated.

Guest 1

They're also very capable.

Guest 1

But they're also tailored for longer running tasks, or they have their best performance for things running for a longer time, because they were built for web browsers.

Guest 1

When you open YouTube or open a website, you're not gonna stay in it for 150 milliseconds and then close the tab, right? You don't care about that as much. Of course startup is important, but not really as important as that long running, sustained performance. So they're all built around that premise. That has some implications for startup. So I would rather save the compute that V8 or JavaScriptCore spends optimizing the code and run my code off the interpreter instead, meaning it runs straight off the source code.

Guest 1

There are a couple of ways that is done. I don't know if we should go into that, but there's a bunch of techniques being applied. But the main reason for the performance gains is basically this super lightweight engine written in C called QuickJS.

Guest 1

So it's less than 1 megabyte in size.

Guest 1

And it supports ES2023, so a very recent ECMAScript standard.

Guest 1

And in comparison, I think V8 is something around 20, 30, or even 40 megabytes.

Guest 1

So it's like 40 times bigger than QuickJS. Right? So just that in itself contributes a lot to the fast startup.

Wes Bos

And the developer behind QuickJS is Fabrice Bellard, who is also the author of FFmpeg, amongst a million other things. Like, just a genius. I remember a while ago when they started talking about QuickJS, I'm like, oh, it's kinda interesting. Like, maybe you can throw it on an embedded device. But, like, are they really gonna implement all of JavaScript in this thing? And it's amazing that it is. Right? Like, so they implement the JavaScript language, which is things like variables and functions and async await.

Wes Bos

And then it's it's your job then to implement runtime features like fetch. Is that right?

Guest 1

Yeah. Exactly right. So an engine is not a runtime. Similar to a car, I mean, an engine is what kind of powers the whole thing.

Guest 1

But it doesn't have a lot of APIs. So you have, like you said, the underlying, like, fundamental components. You have the variable declarations.

Guest 1

You have the whole scoping, you know, global scoping and function scopes and function execution and all of that. And you also have arrays and array buffers and all of those APIs that exist, you know, like the primitive types, and also some of the web kind of types, with typed arrays and all of that. But it doesn't contain anything other than that. So there's no fetch. There's no console even, there's, yeah, no file system, nothing. No sockets, no... Basically everything that you import from Node, when you have an import or a require, that doesn't exist. So basically, what you have in global scope is what you have inside of the JS engine.

Guest 1

QuickJS itself actually also has a small runtime that exists as a separate piece from the engine. So you can actually use it kind of standalone with a console, but that's not being used in LLRT. We're only using the engine, because the QuickJS runtime is very lightweight and it's not really tailored for the same use case. So all of those APIs are now implemented in Rust, which means that they are really, really fast. In comparison, with Node, most of the APIs are actually implemented in JavaScript.

Guest 1

It's only the low level stuff, like timers and the event loop and all of that, that is implemented in C++ for Node. But the majority of all APIs are in JS, which, you know, if you removed some of the features of the V8 engine, like the just in time compiler, would not perform very well, because it's built around the premise of that.

Scott Tolinski

I see. Richard, I gotta say, you've been hitting our show note questions.

Scott Tolinski

Like, we have a list of questions to ask you. And every single time I'm about to ask a question, you answer it already. So that's it's pretty amazing here. Just wanted to shout that out. Awesome.

Topic 8 30:25

QuickJS very fast lightweight JS engine by FFmpeg author

Wes Bos

Do you know what the other use cases are? Like, I've been trying to get Fabrice on the show because I wanna ask him. But I'm curious if you know as well, like, why did he make this? Why did he make QuickJS? Was there a certain problem, or did he just think, I wonder if I can write JavaScript in C?

Guest 1

Yeah. Oh, that's a very good question. This guy is a magician. I think he has to be 1 of the world's, if not the world's, best, like, programmer.

Guest 1

If he sees a problem, he's like, I'm gonna do it better than anyone else. And he creates FFmpeg. Yeah. He even created an emulator in the browser that can actually run Windows XP in JavaScript. Like, where would the world even be without FFmpeg?

Scott Tolinski

It it's like yeah.

Guest 1

Yeah. He's absolutely, you know, it's insane how talented that guy is. I can't even fathom it, you know, his skill level is beyond anything I've ever seen before. He has a website that is basically just like a text file of all of these amazing projects.

Guest 1

That is like, yeah, PC emulator in JavaScript. JS Linux, that's awesome. I mean, you should put that in the show notes.

Scott Tolinski

Yeah, yeah, definitely will do.

Guest 1

That actually boots Windows XP like the real thing. It's like a kernel in JavaScript.

Guest 1

And here we are, people struggling with understanding,

Wes Bos

effects in React. I did too. Right? And, yeah, I struggled with a setTimeout inside of a React hook the other day, and I was like, there's people running Windows XP in the browser.

Wes Bos

Yeah.

Wes Bos

We

Guest 1

are not the same. Kernels. Yeah. The kernel. I'm like, what is a kernel? Oh, yeah. Okay. And then this guy is, yeah.

Wes Bos

That's cool. It always makes me feel good, though, when people like that see JavaScript and go, yeah. Like, that is a good language for regular people to use. You know? Yeah. And now let me build something that is very hard so that regular people can write a JavaScript function and get some major benefit out of it. Yeah. And it can run, like, on embedded devices or whatever. I'm not sure if that was the

Guest 1

the reason why. I don't actually know. I think maybe he just likes doing these crazy challenges that nobody else has done before, and then just makes it, you know, super good.

Guest 1

But it's actually been used. I think Minecraft uses this inside of their TypeScript thing where you communicate with the Minecraft world or whatever.

Guest 1

It's being used, like, here and there. And nowadays, they even run, like, JavaScript in space. Right? The James Webb Space Telescope runs some weird runtime, I think. Not QuickJS, unfortunately, but some old thing. I don't know why, but it seems a bit risky. They get, like, undefined, and the whole thing crashes.

Guest 1

Yeah.

Guest 1

His face.

Scott Tolinski

So back to LLRT, you said it was built with Rust. What made you pick Rust specifically?

Guest 1

I wanted to have a compiled language. That was basically a requirement as a foundation. Right? I wanted to have a compiled language that was super fast, and also something that, you know, I could be effective with.

Topic 9 33:55

Rust picked for LLRT as fast, mature compiled language

Guest 1

So I picked Rust, and I kind of instantly regretted it because I'm like, oh, I can't.

Guest 1

I fought with the compiler, or with the tooling, as much as it helped me. So the learning curve is very steep.

Guest 1

And I've done some Rust projects before. This is not my 1st Rust project. But initially, when I started with Rust, like almost everybody else, I think, I struggled a lot. But then it kind of loosens up a bit.

Guest 1

And I really like the fact that Rust helps you write safe code, and safe as in being memory safe. A lot of these really high performance compiled languages have problems where they give you a lot of control, but you also have to make sure that you're doing the right things, otherwise you will get a runtime error. And the runtime errors are not like in JavaScript where it says, oops, this is undefined.

Guest 1

The whole thing crashes. Right? It's an unrecoverable thing. The most common 1 of those is a memory access error, right? You're trying to read or write to memory that has not been allocated or has been freed. So having to allocate memory and stuff, all of that, sounded a bit too risky and too complicated. I didn't wanna do that, but I still wanted to have performance.

Guest 1

And it's also quite mature. I think it's been around for like 10 years now. So it has a huge ecosystem. So those were some of the boxes I wanted to tick when picking a compiled language, or like a foundation, for this project. I want it to be compiled, to be super fast, because performance is key, right? And I want it to be mature. So that kind of ruled out, like, Zig.

Guest 1

Go is pretty mature, but it isn't as performant as Rust, arguably. And it also uses a garbage collector. I don't wanna have that because it can add pauses. Right? I wanna have consistent, fast performance.

Guest 1

So there was pretty much Rust left, and I had some prior experience. And I could be at least somewhat productive.

Guest 1

I think I'm a lot more productive now than I was when I started the project because I spent so much time in it. But it's still I think it's a great language.

Guest 1

Hard to learn, though. That's why, you know, I built LLRT, so people can still use JavaScript and have, yeah, similar performance. Not the same, but similar.

Wes Bos

Mhmm.

Wes Bos

Yeah. That's great. That's why, like, the higher you go in the ease of using a language, it's always a trade off. Right? You always give up some stuff in order to make it easy to use. Right? And here we are, me and Scott, at the end of the line, just using JavaScript because that's the best our little brains can do. But underneath, right, eventually it goes down to ones and zeros and runs on the silicon.

Guest 1

Yeah. It's crazy actually when you think about that. Like, how many, you know, layers of abstraction there actually are.

Guest 1

So nowadays, talking from the top of the stack, I don't know, is there something that outputs TypeScript? I don't think there is. But say that TypeScript is at the top of the stack. Then TypeScript is transpiled into JavaScript. So that's kind of a semi abstraction. I don't know. Right? And then you have JavaScript that sits on top of something in the browser, a runtime that is written probably in C or C++, which is an abstraction over assembly or other things, right? And assembly is also an abstraction over, like, the instruction set of the CPU, because assembly is also, believe it or not, an abstraction.

Guest 1

Then you have the actual, like, machine code being executed, which is all held together by silicon that is made in a factory at nanometer scale.

Guest 1

Yeah. I know, it's... It's wild. Amazing how this whole thing, you know, this house of cards or whatever, holds together. Yeah. Because it's so many layers. It's amazing anything works. I was talking to a guy who works at,

Wes Bos

1 of these camera companies that provides, like, very high end security cameras for, like, police cars and things like that. And he writes the, I don't know, the C code that runs on the camera, and he's talking about, like, just processing. He's like, we don't even talk milliseconds in the office. You know? It's nanoseconds here. And here I am on the opposite end being like, oh, we have WebRTC.

Wes Bos

You know? I can hook up my webcam and and get access to it in the browser. And it's crazy that they're they're so far apart from each other.

Guest 1

Yeah. It's similar with, like, the stock exchange. Right? They talk about how if they have a millisecond of latency, people lose a ton of money, right? They even move, like, the servers closer to the exchange because the data transfer is faster when it's physically closer. The difference is absolutely massive.

Guest 1

And I think a lot of those, like, stock exchanges are in C++. But also, I've heard that a lot of them are actually Java, because Java is, again, really good for these long running tasks where it kind of auto optimizes itself, thanks to the just in time compiler that is constantly running and optimizing the code. So if, like, a Java program has been running for a really, really long time, it has similar performance, sometimes even better, than C++ or C.

Wes Bos

Yeah. It can be. It's not always. Wait, sorry. Better over time? Like, it gets faster? You said if it's been running, it's like a fine wine. My Node apps just get slower if they run for a while. Yeah. You got a memory leak. Yeah. Exactly. They actually do get faster because it's,

Guest 1

of course, this, like, depends a bit on many, many things, but in general, it's the concept of a just in time compiler.

Guest 1

So Node also has 1 because it's inside V8, and similarly Deno uses the same thing. So a just in time compiler is exactly what it sounds like. It's a compiler that is built into the runtime itself. So Java has the Java virtual machine. It's a runtime.

Guest 1

It has this just in time compiler that takes the Java bytecode.

Guest 1

So similar to how JavaScript is executed in Node, it analyzes like the parameters sent to the function and the actual function code that is in there, what path it takes, what branches, like what if statements, etcetera.

Guest 1

And then it generates optimized machine code. So, like, recompiled code that it generates on the fly automatically.

Guest 1

And depending on what arguments are being passed to it and how it's executed. So say there's, like, an if statement that is almost never executed, it will get compiled away.

Guest 1

So it can be more efficient.

Guest 1

C++ and other compilers also do this, and you can give them hints and say that this thing here is, like, a cold path, so it's not very likely to happen, because the compiler doesn't always know, right? But this happens automatically for both JavaScript and Java running their code. So Node does this, Deno does this, Bun does this as well. So it constantly tries to optimize.

Guest 1

And so if you're running for a really, really long time, you will have huge performance benefits. That's what you see when you do these JavaScript benchmarks, like doing hashing or something in JavaScript and comparing that with something else. So running that in LLRT, for instance, if you compare doing, like, a million iterations of a loop, that would suck in LLRT; it would have horrible performance because it doesn't have a just in time compiler.

Guest 1

But on the other hand, doing hashing and things, all of that is implemented in Rust, so it'll have the most epic performance.

Guest 1

Whereas in Node, you know, many things are in C++, but many things aren't, right? So it might not even be fast. But a just in time compiler is generally much, much faster over time. Hence, the stock exchanges use these really powerful machines that have all of their, you know, methods and functions running for thousands or millions of iterations, and they have been super optimized automatically.
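A rough way to picture the distinction being drawn (not a rigorous benchmark, and timings will vary wildly by runtime): a tight pure-JS loop leans entirely on the engine, so a JIT helps enormously, while hashing through node:crypto lands in native code either way.

```js
import { createHash } from "node:crypto";

// Pure JavaScript hot loop: a JIT engine like V8 will optimize this,
// while an interpreter-only engine like QuickJS has to grind through it.
let sum = 0;
console.time("js-loop");
for (let i = 0; i < 1_000_000; i++) sum += i * 2;
console.timeEnd("js-loop");

// Hashing is handled by the runtime's native implementation (C++ in Node,
// Rust in LLRT), so the interpreter-vs-JIT difference matters far less here.
console.time("hashing");
let digest;
for (let i = 0; i < 1_000; i++) {
  digest = createHash("sha256").update("payload-" + i).digest("hex");
}
console.timeEnd("hashing");

console.log(sum, digest.slice(0, 8));
```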

Wes Bos

Okay. That makes sense. Man, when we talked to Jake from Fastly, he kinda does the opposite, which is they try to compile your top level code beforehand, so that you don't hit that just in time,

Guest 1

sort of benefit. You can actually do that. There's, like, a compile cache in V8. So you can run code, and then you can tell Node to dump the V8 compile cache. So, like, all of the compiled code that V8 has stored in memory, you can dump that to a file, and then you can restore that the next time you start the program. So that will save a bunch of time not having to analyze and try to figure out the optimal way to compile your JavaScript code.
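Here's a rough sketch of that idea using Node's node:vm API (newer Node versions also ship a built-in module compile cache). The file names are illustrative; the point is that V8's compiled bytecode can be persisted and handed back on the next start so V8 can skip recompiling.

```js
import { Script } from "node:vm";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Hypothetical file names: app.js is the source, app.cache holds V8's code cache.
const source = readFileSync("app.js", "utf8");
const cachedData = existsSync("app.cache") ? readFileSync("app.cache") : undefined;

// If a cache is supplied and still matches this source/V8 version, compilation is skipped.
const script = new Script(source, { cachedData });

if (!cachedData || script.cachedDataRejected) {
  // First run (or stale cache): dump the compiled code for next time.
  writeFileSync("app.cache", script.createCachedData());
}

script.runInThisContext();
```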

Guest 1

What can happen though, and it often happens in runtimes with a just in time compiler, is something called a deoptimization.

Guest 1

Since the runtime makes assumptions on how the code is being executed.

Guest 1

What if those assumptions are untrue? Well, then it has to recompile and throw away the old compiled code. And this is really expensive.

Guest 1

This is 1 of the reasons why you sometimes see, in Node, fluctuating response times for doing exactly the same thing. You can't figure out why. Oh, why does this take, you know... It took 200 milliseconds this time, but only 50 the next time. What is happening?

Guest 1

This is usually an effect of the deoptimization that is happening. And it's especially prominent if you have low resources or few resources, because it can't keep everything compiled and optimized.

Guest 1

It gets memory intensive, right? It has to evict things from its compile cache. So that's often what we see inside of Lambda, especially because, you know, you wanna keep the memory low.

Guest 1

So you tweak it down. And then the Node runtime has to struggle, trying to figure out, okay, what methods should I keep in my compile cache? This 1 is maybe not executed so frequently. Let's throw it away and then recompile it again. And now I'm sacrificing precious CPU, and it, yeah, costs money, and it takes time.
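As an illustration of the kind of assumption he means (engine behavior varies by version, so treat this as a sketch): V8 specializes a function for the object "shape" it keeps seeing, and changing that shape later forces a deoptimization.

```js
// The JIT specializes area() for one consistent object shape...
function area(shape) {
  return shape.width * shape.height;
}

for (let i = 0; i < 100_000; i++) {
  area({ width: i, height: 2 });
}

// ...then objects with a different shape (extra property, different key order)
// break that assumption, so the optimized code is thrown away and recompiled.
for (let i = 0; i < 100_000; i++) {
  area({ height: 2, width: i, label: "invoice" });
}
```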

Wes Bos

That's super interesting. Man, I'm I'm learning a lot about how, like, the lower level stuff works. I really appreciate that.

Wes Bos

I wanna ask 1 more thing about the node compat. You said, like, this obviously is not going to be a node replacement. It's good for things that it's good for.

Topic 10 45:00

LLRT supports subset of Node APIs, easy to switch between

Wes Bos

But if you look at the compat matrix on the docs, it's actually pretty good in terms of, like, what APIs are supported.

Wes Bos

What are some things that would not work well or at all in, LLRT?

Guest 1

Yeah. So first, if you look at that compat matrix, it's a bit, you know, it has a few exclamation marks.

Guest 1

So when we say that we support FS, right, we're not supporting the full FS API. It's partially supported.

Guest 1

We kind of also, by design, try to support things that our customers and users actually are using. And sometimes we get requests: I need to have this because, you know, reason a, b, or c. And then we try to consider it, because every API we add adds to the weight of the runtime. So it would make it slightly, slightly slower for every 1 of those APIs that we implement. And it also naturally takes time to implement them. That being said, we do support a lot of APIs. We support most of the things that you need from FS promises. But we don't support the regular callback FS module, right? So you would have to say, okay, I would have to change my code to use FS promises, which I think arguably you should, because I think it's a bit nicer to use async await rather than callbacks.

Guest 1

That being said, LLRT has designed its APIs with the same specification as Node. So when we have, like, a readFile, we use the same signature as Node.
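So, for example, something like this should run unchanged on either runtime, assuming the fs/promises subset he mentions; the file name is just illustrative. If you later need an API LLRT lacks, the same code moves back to the Node.js runtime as is.

```js
import { readFile } from "node:fs/promises";

// Same import, same signature on Node.js and LLRT (for the supported subset).
export const handler = async () => {
  const config = JSON.parse(await readFile("./config.json", "utf8"));
  return { statusCode: 200, body: JSON.stringify(config) };
};
```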

Guest 1

So when you wanna use, like, an API that is not supported, you can switch back to Node, and you haven't wasted any effort. And that's just a major decision that we took early, because we don't want people building specifically for LLRT. Ideally we want that, but we don't want them to take that risk. We need to earn the trust, right? We need to say, okay, try this out. If it doesn't work, or if you in the future have a new requirement, yeah,

Guest 1

and you can't rely on us, you know, implementing that, then you can switch back to Node, or move that out, or break that apart into 2 separate pieces. But you should not waste effort. That's a very important thing, right? It should be relatively simple to transition.

Guest 1

So that's 1 of the major

Wes Bos

downsides, of course. So no custom APIs in LLRT. You know? Like, whereas, like, there's no standard FS API, right? So Bun and Deno have implemented their own. Node has their own, and, like, you're not gonna come around and make a 5th API that we need to learn.

Topic 11 47:53

LLRT plans to support WinterCG spec

Wes Bos

Now, will you try to target the WinterCG spec, or is that too large of a thing for the project?

Guest 1

No. That's the plan. And, also, I forgot to answer this from your previous question: some of the things that don't work. Right? So, oh yeah, sorry. Yeah. So WinterCG is, like, the long term goal. Ideally, we could meet it this year. It depends; the spec is also still a work in progress. There's a few things in there which are a bit weird for our use case, like WebAssembly functions, right, executing WASM.

Guest 1

That would basically require us to embed, like, a WASM runtime in our runtime, a runtime in a runtime, and communicate between the 2.

Guest 1

That might be out of scope and a bit too complicated here.

Guest 1

The other thing that is currently not working very well is streams.

Topic 12 48:48

Streams API currently lacking full support in LLRT

Guest 1

So the web streams API, it's enormous. It's a huge API.

Guest 1

You can actually pull in, like, the streams from Node. It's actually a separate package on Npm that you can use for the same implementation that Node uses.

Guest 1

It's huge. It's like 8,000 lines of JS and a lot of edge cases.

Guest 1

And you can imagine that wouldn't run very well without the just in time compiler if you have a lot of, you know, bytes flowing through transform functions or read streams, write streams, and pipes in between and everything. So, ideally, that should be native, but it might not be feasible to implement that huge API in a native fashion.

Guest 1

It might be a bit too complicated. So we might have to find a hybrid in between. Right? So the streams API is obviously a must.

Guest 1

Otherwise, you can't, you know, download larger files than what you have in memory if you always read everything into memory. Right? So this is something that we're working on. But we try to find, like, a sensible approach to not go crazy on scope, but also have good performance.

Guest 1

So this is 1 of the trade offs right now. There's no full stream support.

Guest 1

You can use some features from streams. We embed that JS API, that 8,000 line JS, but not everything works. Like, you can't extend the streams. Some of the lower level APIs don't work really well, and they aren't properly tested. So it's still a, yeah, very experimental work in progress. The streams API is huge. And, like, if you start reading a stream, it's locked,

Wes Bos

and you have to tee a stream if you wanna be able to read it again, and, like, whoo, I do not envy the person who has to implement that.
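For reference, that locking and tee behavior comes from the standard web streams API, which is what any full implementation has to cover. A small sketch (runs on Node 18+; the URL is just a placeholder, and LLRT's streams support is still partial as discussed):

```js
const response = await fetch("https://example.com/large-file");

// Reading the body (e.g. response.text()) would lock and consume the stream,
// so to read it twice you tee it first into two independent branches.
const [branchA, branchB] = response.body.tee();

const bufferedCopy = await new Response(branchA).arrayBuffer(); // drain branch A
const reader = branchB.getReader();                             // read branch B manually
const { value, done } = await reader.read();

console.log(bufferedCopy.byteLength, done ? 0 : value.length);
```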

Wes Bos

Yeah.

Wes Bos

Yeah.

Guest 1

And even understanding how it works.

Guest 1

I mean, I thought I knew streams. Then I started looking at the specification, the under the hood stuff. And it's like, whoa, this is really complex. It has a flowing mode and a paused mode, and it depends on how much should be read into the buffer, and it has an internal buffer that gets full. And, yeah, depending on how much you read, that buffer will be... it's extremely complicated.

Guest 1

I made, like, a semi light version of it for child processes, not with all features.

Guest 1

But it's most of the things that you need, so you can react on data and things like that from child processes, because I think it's really important to be able to use that. So you can run FFmpeg in Lambda with LLRT and get low latency. Right? You can even stream the output from that. Yeah. From FFmpeg encoding. Right? You can do that. Yeah.
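A sketch of that child-process pattern (arguments are illustrative, and the ffmpeg binary would have to be bundled with, or layered onto, the function): spawn the process and react to output chunks as they arrive instead of buffering everything.

```js
import { spawn } from "node:child_process";

// Transcode input.mp4 to MP3 and emit the result on stdout (pipe:1).
const ffmpeg = spawn("ffmpeg", ["-i", "input.mp4", "-f", "mp3", "pipe:1"]);

ffmpeg.stdout.on("data", (chunk) => {
  // Each encoded chunk arrives here as it is produced.
  console.log("got", chunk.length, "bytes");
});

ffmpeg.on("close", (code) => {
  console.log("ffmpeg exited with code", code);
});
```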

Wes Bos

For sure. Yeah. That would be great. We do have to run it in...

Wes Bos

What do we run it in right now? AWS, I hope. Yes. It is in AWS. It's not a shell. Right? Yeah. Okay. No. It's not a media encoder. Why am I forgetting the name of this? Running native stuff inside of JavaScript? WASM. We're running it inside of WASM, and there's a performance hit there. And there's also limitations on file size, but the trade off was worth it. What about WebSockets? Is that implemented, or is that something that needs to be implemented?

Guest 1

Yeah. I think it needs to be implemented. I'm not sure if it's in the spec or not.

Guest 1

I think it is.

Guest 1

I have not started working on that or the team hasn't started working on that specifically.

Guest 1

But I think it should be in there because it's something that people use a lot. We do have some basic socket support, obviously with the lack of a full streams API that makes it a bit clunky to work with. But we have, not TLS sockets, but actually raw sockets. Right? So they're not running behind a TLS connection. So that means that they have to be unsecure. But you can expose, like, a simple HTTP server, like, internally and have TLS in another layer if you want to.

Guest 1

It's not super complicated to add in TLS, but that kind of builds on top of streams. So there's a lot of things that depend on having a complete, or more complete, streams API than we support right now. Wow.

Topic 13 53:10

Unsure if LLRT will become official AWS product in future

Scott Tolinski

So do you see a future where this will be promoted to be, like, an official AWS, you know, product at some point, or is it just an experiment? You know? Yeah. I sure hope so. It's a bit too early to tell right now.

Guest 1

Obviously, that comes with a lot of, like, more guarantees about, you know, SLAs and support mechanisms and structures and organizationally as well. But, I hope so.

Guest 1

But I couldn't tell right now if that will happen. Hopefully, it will. And I stay very positive. And, you know, we are discussing this obviously internally as well, because we've seen that there is obviously a need. People have been really receptive. And I think there's room for something specific.

Guest 1

And ideally, once WinterCG reaches a bit more maturity, people can pick and choose, right? So they can get more guarantees on their workloads. They can use LLRT when running on Lambda. They could take the same thing, run it on their local machine in something else, and it will behave and work the same way.

Guest 1

And by the way, LLRT also works on your machine, if it's a Linux or Mac machine. No Windows support yet. It could be added, but that's not a priority right now. Yeah. But, yeah.

Wes Bos

Man, I feel like my my brain grew like 3 sizes in this conversation.

Wes Bos

I was very I'm always very interested to learn about the stuff that's a little deeper than clicking buttons and making loops. You know? So that's been fantastic.

Wes Bos

Yeah. Let's move into the last section that we have here, which is sick picks and shameless plugs. Not sure if you came prepared with either of those. Oh, no. I didn't come prepared for that. That's all.

Wes Bos

I was listening to too many of your shows. Can we put you on the spot for a sick pick?

Guest 1

I guess you know what it is. I have to say, then, QuickJS is my sick pick. I mean, it's, or everything. Can I say everything that Bellard has done? It's a huge sick pick. Or JS Linux. There are too many sick picks there, right? So shameless plug, oof, that's a good 1.

Guest 1

I don't know, I don't know. Maybe people bashing on JavaScript too much, right? Because they say that, you know, it's not, like, performant, it's not that good, or why are people trying to build all of these really advanced tools around a language that wasn't purposely built for running on the web, or not running on the web, but running on back ends or running on whatever. And I think that's a bit unfair. I mean, people love to use the language. They're really productive in it. I think it's great. It's easy to learn, write, and read.

Guest 1

And it's evolving all the time. And there's a huge community around it, an ecosystem, on npm or JSR, and all of the innovation that is happening, I think it's fantastic. So, shameless plug on people bashing on JS.

Guest 1

Pretty weak 1, but yeah.

Wes Bos

No. Beautiful. I like it. I agree too.

Wes Bos

Awesome. Thank you so much for coming on. Really appreciate all your time. This was fantastic, and I learned a lot. Yeah. Thanks so much, Richard. Awesome.

Guest 1

Thank you, guys. See you. See you in the next 1.
