746

March 22nd, 2024 · #Kubernetes #Containers #WebAssembly #Infrastructure

Infrastructure for TS Devs: Kubernetes, WASM and Containers with David Flanagan

David Flanagan explains Kubernetes, containers, WebAssembly, and self-hosted infrastructure to Wes and Scott. He provides tips for managing your own servers and recommendations for learning more.

Topic 0 00:00

Transcript

Wes Bos

Welcome to Syntax.

Wes Bos

We found somebody to explain what the hell Kubernetes is to us. And we're really excited to talk to David Flanagan today, who is a Kubernetes engineer. And I'll start with a story really quickly before I introduce him. We had the DMARC and SPF episode a couple weeks ago that I was prepping for, and I was making sure all my ducks were in a row. And I was talking about how I had, like, domain aliases with 2 different domain names, and my SPF was working with one of them, but not with the other one. And I was like, is this normal? And David jumped in, like, hey, send me an email, and just analyzing the headers and checking things and, like, immediately understood everything about these, like, server DMARC records and all that crazy stuff. So, obviously, pretty smart guy. Does a lot of stuff on YouTube and on Twitter, and we'll talk about what all he does under Rawkode, but he's gonna explain to us Kubernetes, servers, hosting. Hopefully you, as a JavaScript or a full stack dev, have thought, maybe I should run my own servers.

Wes Bos

Maybe David will be able to help us today. So welcome, David. Thanks so much for coming on.

Guest 1

No. It's my pleasure.

Guest 1

Happy to help, happy to be here, and excited to share some bare metal Kubernetes containers infrastructure Node nonsense with everybody.

Scott Tolinski

Love that. Yeah.

Scott Tolinski

Yeah. Kubernetes is one of those things that has come up enough on this show that we used to joke about it. I mean, even Wes joked about it at the jump of this episode, that we have no idea what the hell it is. Or we just got in recently to, like, Coolify or self hosting. And Kubernetes keeps popping up in that whole world of self hosting. So really stoked to have you on to talk all about

Wes Bos

all of the things that we can't explain with this stuff. Maybe people are, like, listening in the first couple minutes and be like, should I keep going on this one? So, like, should people know, like, a regular web developer, should they know how to run their own servers? It seems like we're seeing this massive push back towards people going back to the server, and I think a lot of that has to do with this whole, you hear people say ZIRP, zero interest rate.

Wes Bos

What's the P of ZIRP? But what is it? ZIRP.

Topic 1 02:20

Cloud hosting costs are rising due to investor pressure to make money

Wes Bos

Zero interest rate policy.

Wes Bos

That's the actual name of it. It's that a lot of these services for hosting your websites, for hosting your databases, they are all raising their prices right now, and that's because a lot of the investors are saying, alright.

Wes Bos

Interest rates are up. I need you guys to start making money. So, like, what do you think, David? Do web developers need to know how to host their own stuff?

Guest 1

I mean, I think it's becoming definitely more prominent, and the answer may be yes.

Topic 2 02:52

Web developers may need to learn how to host their own infrastructure

Guest 1

Because, you know, we've always had this trade-off to be made. First and foremost, I mean, I'm not a web developer.

Guest 1

I've been a back end developer for over 20 years. Always worked with infrastructure and bare metal.

Guest 1

And only recently started to dabble in a little bit of web, but mostly TypeScript, and using it in other domains where it isn't traditionally applied.

Guest 1

But what I'm seeing is that that conversation about capital expenditure is changing.

Guest 1

You know, serverless used to be the thing that TypeScript developers and web developers could get away with for free forever, and it was just, I never have to pay for a server. I deploy to Vercel or Netlify or Render or Fly, whatever. Right? And it doesn't cost me any money. I could scale infinitely. But really, it doesn't work that way. These things get stupidly expensive the minute you go past that free tier, if you hit that free tier.

Guest 1

And the costs come charging in pretty quick.

Guest 1

So, you know, now, one little bit of a story on why it may be interesting for people to learn more about bare metal and hosting their own stack.

Guest 1

I used to work for a company called Equinix Metal, formerly Packet, and their product is cloud infrastructure, but you get a full bare metal machine.

Guest 1

Like, when I do demos for people, I'm running a one-meg binary written in Rust that needs about 2 kilobytes of RAM, and I run it on a machine with 128 gig of RAM and 78 cores. Right? Just why? Because it's fun. Right? So having the ability to use that hardware at that scale, controlling your stack from top to bottom, it's just yours. Now, the point of that story of why I worked at Equinix Metal is I used to help a company called LogDNA, who had a million-pound-per-month bill with AWS. And by moving to bare metal, they got that down to under 300,000 per month, which is still a lot of money, but, you know, it's a 70% reduction in cost. So once you get past those free tiers, maybe you need to analyze the market for where you need to be hosting your workloads.

Scott Tolinski

When people say bare metal, like, what does that mean explicitly, for people who might not know? Sounds sick.

Scott Tolinski

Yeah. It does.

Topic 3 05:04

Bare metal hosting provides direct access to hardware without virtualization

Guest 1

When you have compute and memory and IO on a little square box or maybe a rectangle box stuck into a rack and you SSH onto it with no virtualization whatsoever.

Guest 1

So, of course, every cloud provider has bare metal, but you don't interact with it. You work with a virtual machine, the hypervisor that you're given. Bare metal means there's none of that. You're working directly with the CPU, etcetera.

Wes Bos

So, oh, okay. So, like, when you sign up for a GoDaddy account or a DigitalOcean droplet or something like that, there is some beefy server slapped into a rack somewhere, and you're running your code on the same box. It's in its own container, virtualized Linux container, but it's actually sharing the CPU with other people who are on the same box. Right?

Guest 1

I mean, I'll say yes, but I'm gonna go into that with a little bit more nuance. Right? It's very rare, very, very, very rare, that you ever get a Linux container as your environment because, you know what? People don't trust you.

Guest 1

And I know that may come as a shock, but containers are great because we get access to Linux namespaces.

Guest 1

That means we don't get visibility of all the processes on the host. We don't have access to the, well, we shouldn't be able to see the entire file system or memory and all this other stuff.

Guest 1

But these providers can't run the risk that you're gonna go poking and find a breakout of that container. So what actually happens is, whenever you get a VPS or a virtual machine on EC2 or elsewhere, they're actually running a hypervisor layer, Xen or KVM or something else, which gives you a virtual machine, and then they stick your workload, perhaps still in a container, but within the virtual machine, because they need that kernel-level isolation.

Guest 1

Meaning, they don't trust you speaking to the host kernel. You're gonna get a little VM. There, they have a better sandbox.

Wes Bos

Oh, so it's 2 levels. You have a container inside of a VM on a Linux box. Is that right?

Guest 1

Very standard setup. Yeah. And this is how Chromebooks work as well. If you've ever tried to use the Linux mode on a Chromebook, it's actually a Linux host with a virtual machine with your containers inside of it. So

Wes Bos

Oh, interesting. And that's because they don't want anybody monkeying around with, like, the what? Like, what could you possibly do if you have straight access to the bare metal? Like, delete the CPU?

Guest 1

Well, not that you'll be able to delete the CPU. Are we talking about the Chromebook use case here or on the cloud? Just in general, like, if someone does give you access to that low level stuff, you can do

Wes Bos

nefarious things, right? Or you can break things that would cause a lot of support issues?

Guest 1

Oh, definitely. Right, let's go to the cloud use case. So let's assume I'm a cloud provider and I say, hey, here's a container on my bare metal. You've got access to the kernel.

Guest 1

Go nuts.

Guest 1

Now, in theory, I've done things right and I've mapped the root user away so that you don't have access to the root user. I've given you a confined file system. You've only got a restricted view. You've not got any process ID table stuff. All you see is a very small part of that. However, the code that you execute is still executed against the host kernel. Meaning, if you have a zero-day or some exploit that you've established on your own and you could speak to that kernel, you could ask for it to give you all the files. You could ask for all the memory. You could inject CPU instructions into other processes. There's a whole bunch of malicious and nefarious activity that you could do. And that's why that extra layer of Xen or KVM is brought in at the cloud provider level.

Scott Tolinski

When you're hosting bare, like, actual bare metal, are you still going to a service provider like Hetzner or something to host on bare metal? Or are there companies that prioritize bare metal over, you know, typical virtualized private servers?

Guest 1

Yeah. There's not that many cloud providers that are willing to give you a bare metal box, and for a few different reasons we can get into. And the biggest one is Equinix Metal. They were formerly called Packet. You may have heard of them under that name.

Guest 1

And they've built all of their own tooling to do this. So they have this project where, when you click a button and say give me some bare metal, it wipes the disks, installs the operating system, and gets it to you in about 90 seconds to 2 minutes, which is just unbelievable for what you're actually getting.

Guest 1

There are some other providers like Hetzner, Scaleway, and OVH. They all offer some bare metal that you can have access to. And recently, well, last year or 2 years ago, AWS started offering their metal instances as well, which are especially useful because you can get them with the Graviton ARM processors, which are wicked powerful.

Wes Bos

You can go even further. Right? Like, we've seen 37signals. They literally just went to Dell and bought a couple blade servers, and then they obviously have a rack somewhere at, like, a data center. Maybe you could run it in your office. I don't know. Can you?

Guest 1

Of course. Yeah. I mean, that's how things used to work. We'd always just have cupboards in our offices, and we'd run a small 16U rack with a bunch of machines in it, and you'd stick an IPv4 on it. Obviously, those are expensive these days. But that aside, yeah, you can run your own bare metal, but there's always the catch. It's economies of scale. 37signals, they can afford to do this. They've got such high costs in the cloud that they're always going to save money. LogDNA had such high costs, they're always going to save money.

Guest 1

A regular company building a startup that spends less than $10 a month on its cloud bill, that's the most cost-efficient way to do it, until you get to a certain scale, and then maybe you kind of step back and change things. At a certain point, it makes sense, like, oh, we can hire somebody at 100, 200, $300,000 a year and pay for the actual servers to sort of maintain this thing, because it's gonna be cheaper than

Wes Bos

shelling out for some other service.

Guest 1

Yeah.

Guest 1

But that's not to kind of dismiss the challenges of running bare metal either. Having done this a lot over the last 20 years, storage is really hard to get right. You know, we take things for granted these days with the cloud. Like, you know, having unlimited storage on S3, having super fast NVMe disks that we can disconnect and attach to any device on the cloud within a region, etcetera.

Guest 1

Doing that in a bare metal environment, very painful.

Guest 1

Very, very painful. And that's me not even getting into Kubernetes yet. That's just storage.

Guest 1

Yeah. Yeah. Or 1 or 2 servers. And then we bring in Kubernetes, where workloads become ephemeral and they transfer, they move around, and the storage has to go with it. Like, data has gravity, and that gravity pulls you in. So you've got to be really careful there. Man. Well, let's get into that then. What the hell is Kubernetes? Can you tell us lowly JavaScript developers what that weird word means? So, alright.

Topic 4 11:59

Kubernetes is a container orchestrator that manages and runs containers

Guest 1

Kubernetes is a container orchestrator.

Guest 1

It is a supervisor that you use to run your container for you, and its job is just to run that container. If your container exits or crashes, it will restart it. If it needs access to certain permissions, to certain resources, to certain networking policies, it will attach them.

Guest 1

And at the very basic level, that's all Kubernetes does.

Guest 1

You just say, hey, run my Node application in this container for me, and then you don't need to worry about it.
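That "run this container for me" instruction is what a Kubernetes Deployment manifest expresses. A hedged sketch, where the app name, image, and port are all hypothetical:

```yaml
# Hedged sketch of a Kubernetes Deployment; names, image, and port are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app
spec:
  replicas: 3                 # Kubernetes keeps three copies running at all times
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: web
          image: registry.example.com/my-node-app:1.0.0
          ports:
            - containerPort: 3000
```

If a pod crashes or a node dies, the controller manager notices the replica count has dropped below 3 and schedules a replacement, which is the "you don't need to worry about it" part.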

Wes Bos

Okay. And why would somebody need Kubernetes? Is it because they need to scale their infrastructure easily? Or, like, what's the benefit of using that versus just SSH into the box, curl whatever down, run the app, and you're off and running? And, obviously, sometimes I get people mad at me on Twitter because I say these things, and, obviously, I know why, but I'm asking for the audience here.

Guest 1

Alright. Let's imagine a world where you're both running a Syntax Cloud. Right? You've got your 2 machines. You've got 1 in your basement, another one in Scott's.

Guest 1

You wanna run your Node application, and it crashes. So you've just run container run, you go to sleep, it fails, your alerting kicks off, Sentry is going nuts.

Guest 1

Somebody's gotta fix that. Right? So who's getting paged? Which one of you is it? So someone has to wake

Wes Bos

Definitely me.

Guest 1

So someone has to wake up, and then they have to go SSH in, and they have to restart it, and then okay. Now, anyone listening who's familiar with containers will say, hey, there's restart policies. Alright. Cool. Or someone else is gonna go, hey, there's systemd. Like, we can use that as the supervisor. Right. Okay. Right. I get it as well. However, all of these are bound to the context of 1 single machine.
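The single-machine supervisors he mentions are a one-liner each: `docker run --restart=always my-app` for Docker's restart policy, or a systemd unit. A hedged sketch of the latter, with the service and image names hypothetical:

```ini
# /etc/systemd/system/myapp.service -- hedged sketch; names are hypothetical
[Unit]
Description=Run my app container via Docker
After=docker.service
Requires=docker.service

[Service]
Restart=always                 # systemd restarts the container if it exits
ExecStart=/usr/bin/docker run --rm --name myapp myapp:latest
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

Both approaches bring a crashed container back up, but only on the one box they live on, which is exactly the limitation he's describing.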

Guest 1

And Kubernetes doesn't make sense to run on 1 single machine. If Kubernetes makes sense, that's when you have more than 1 machine. When we then have to say, okay, we've got these workloads that we want to run. They can run in any of these machines. And if it crashes, it may be best just to move it to another machine, because maybe there's something wrong with this one, and we'll isolate it, we'll cordon it, whatever. We'll fix it when we wake up, because nobody likes getting woken up at 3 AM. And then we carry on our business and schedule it somewhere else. Kubernetes, as soon as you have more than 1 machine, you really need to start paying attention to it. And I know that in the front end world, the web world, it's not been something that's been coming charging at people. But if you look at the cloud landscape right now, look at AWS, look at Azure, look at Google Cloud, Render, DigitalOcean, Akamai, every one of them, they all offer a managed Kubernetes service. And that's because in the back end world, it has taken over. This is the way to run a container based workload at scale with resiliency and redundancy.

Guest 1

So we can't avoid it, unfortunately.

Scott Tolinski

Yeah. So so you have a central Kubernetes instance that is then orchestrating amongst other other machines essentially and other other hosts?

Guest 1

Yeah. Kind of. So Kubernetes is split into kind of 2 phases. There's the control plane and there's the worker nodes, the worker plane. The control plane consists of an API server, a data store, which is typically etcd, a controller manager, a scheduler, all these fancy bits that decide where things run and how to fix them if they go wrong.

Guest 1

You push that into the managed side, so you're using AKS, EKS, etcetera.

Guest 1

Then all you've got is a fleet of worker nodes that just want jobs to run, and those are the parts that you probably interact with as a developer that wants to schedule a container.

Wes Bos

Okay. And so we can use that was one of my questions, like, Kubernetes is not just for I'm running my own servers. But if you wanna go with some of the less featured, just raw hosts out there, Google Cloud Platform, AWS, IBM, all of the big ones out there, you can use Kubernetes to spin up instances on those different cloud providers. Is that a pretty common thing?

Guest 1

Yeah. So you can just ask any of these cloud providers to give you a managed Kubernetes control plane. They handle all the hard stuff. The scheduler and the data store, like, etcd itself is an absolute monster to operate, even beyond Kubernetes.

Wes Bos

So I'm sorry. What was that? Say that again. Etcd?

Guest 1

Etcd. E-t-c-d.

Guest 1

It's a key value store from HashiCorp.

Guest 1

I think they were... no.

Guest 1

They built Consul on top of etcd. Yeah.

Guest 1

And CoreOS, I think they built etcd.

Wes Bos

Okay.

Wes Bos

Man. And does Kubernetes help with, like, let's say you get a massive spike in traffic where you need to increase the amount of compute that you have? Or maybe you're doing machine learning and you're like, okay, we need to process this stuff for, like, 3 days. We need, like, 50 servers to be working on it just for 3 days, and then we'll spin back down to our normal workload. Does it help in those instances where you just sort of need compute on demand?

Guest 1

Yeah. Definitely.

Guest 1

There's a whole bunch of projects in this space. So let's again assume you're going with managed Kubernetes. Like, as much as I love doing Kubernetes on bare metal, I do not recommend it to anyone listening to this. Please don't. I actually, on my YouTube channel, I have a show called Clustered, where I spun up bare metal Kubernetes clusters every single week and gave them to random people on the Internet and said go break them, and then we fixed them live, which was scary but fun.

Guest 1

Anyway, yeah. Don't do it yourself. Use a managed service. And the way that those work is they have something called the metrics server that runs on Kubernetes that monitors all of the workloads, memory consumption, CPU.

Guest 1

It can also monitor for the scheduler being unable to schedule something. So if you say, hey, run me 12 of these, and it can't schedule them, these are all signals to the control plane, or at least to a cluster autoscaler, which goes, oh, we need more compute. And it will go away to the cloud provider and say give me 10 more servers, and schedule your workloads. And then when it's overprovisioned, it will start to scale back down over time as well. So you get to really define the scale up and scale down policies, and it depends on how big that credit card is and who you're working for. Yeah. I've been fortunate with some companies where I could say scale up a 100 nodes and then scale down 10 every hour, and that's okay. But some places, you know, it's like, oh, maybe just scale up 2, then we'll scale up 2 more if we need. But then it all comes down to how many nines you need at the end of the day for your application.
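The pod-level half of that scaling story is usually expressed as a HorizontalPodAutoscaler driven by the metrics server; the cluster autoscaler then adds nodes when pods can't be scheduled. A hedged sketch, with the target name and thresholds hypothetical:

```yaml
# Hedged sketch of a HorizontalPodAutoscaler; names and numbers are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-node-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-node-app
  minReplicas: 2               # the "maybe just scale up 2" floor
  maxReplicas: 100             # the "how big is that credit card" ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70%
```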

Guest 1

If it's user facing, maybe it's internal, can you just can you handle 8 minutes of downtime per year, or can you handle 80 minutes of downtime per year?

Wes Bos

Oh, man. That's actually one thing.

Wes Bos

Very rarely does my schooling come into play, but I went to school for information technology management, and that was one of the things, the nines. What is it? What's the 8 nines, or what's the 9 nines? What's the most popular one? You're not getting 9 nines, but, you know, 5 nines is probably Five nines. That was it. And 5 nines is how much downtime per year?

Guest 1

I think that's, like, 8 minutes.

Guest 1

Hold on. Let me Google that. I should know better. I should know my nines by now. Less than 5.25

Wes Bos

minutes per year, or 5 minutes and 15 seconds. So 5 nines is 99.999% uptime.

Wes Bos

So it's only okay for your application to be down for 5 minutes and 15 seconds per year, which, that's a lot to ask, man. Like, I can't think of any apps that haven't been down for more than that every single year. You know? Like, do you know any apps that have maybe not gone down? Like, I think even Google has gone down at certain points.
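The arithmetic behind the nines is simple enough to sketch. This hypothetical helper (not from the episode, and ignoring leap years) converts an availability target into a yearly downtime budget:

```python
# Downtime budget for an availability target ("the nines").
# Hypothetical helper, not from the episode; assumes a 365-day year.
def downtime_minutes_per_year(nines: int) -> float:
    availability = 1 - 10 ** -nines      # e.g. 5 nines -> 0.99999
    minutes_per_year = 365 * 24 * 60     # 525,600 minutes in a year
    return minutes_per_year * (1 - availability)

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} min/year of downtime")
```

Five nines works out to about 5.26 minutes per year, matching the "5 minutes and 15 seconds" figure above.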

Guest 1

Yeah. Well, I mean, measuring this stuff is all about kinda putting your finger in the air and hoping for the best. Status pages never accurately reflect what's actually happening with these applications. So their status page says they've got 5 nines, but realistically, they're doing worse than that. But then it comes down to, like, let me go slightly off on a tangent here. So it comes down to what you define to be an outage. Right? Now, in a microservice architecture, which you most likely have, because we call that the cloud native architecture if you're deploying to Kubernetes, then if a service is down for 10 minutes, but your end user doesn't see any regression in performance or accessibility or anything like that, it's not really an outage, so we don't count it.

Guest 1

Of course, if the Google.com front page is down and people can't search, that's an outage. So that's where we apply, like, S1, S2, S3. That's a severity rating. And outages, generally, if it's an S1, they'll wake you up. You fix them. Those go against your 5 nines. And everything else, you've built the resiliency. You architect for failures so that they don't affect that overall. Okay. Yeah. So I told the story a couple of months ago how one of my apps was

Wes Bos

crashing, but it was only crashing every 3 hours or so. And I run 3 instances of it, and it restarts in, like, 20 seconds or something like that. So, like, I was looking at it, and I was like, I'm gonna go to bed, because it wasn't actually ever down. One of them was down at some point for 20 seconds, but there was never any overlap where all of them were down at the exact same time. And that's, I guess, one of the benefits of doing this type of thing. Right? Like you said, if one goes down, then Kubernetes will know to maybe move that to another location, because maybe that server has gone bad.

Guest 1

Exactly.

Guest 1

You take your high value containers. And in Kubernetes, we call these deployments. Right? A deployment says, I'm going to run 1 or more containers for you. And they identify the things that do have some sort of uptime guarantee, or, you know, we can attach money to. Right? Every time a user gets this and it's down, like Amazon, that costs them millions if they're down for a minute. Right? So those things you're gonna over provision. You're gonna say, okay, we're gonna run 200 of them. We only need a 150, but we've got 25% capacity there if we need it in the event of an error. So yeah. Yeah. So when you say container, okay, to give the audience

Scott Tolinski

perspective here, can you give just a basic what is a container? Does that inherently mean we're talking about Docker, or is Docker just a type of container?

Guest 1

So containers existed before Docker, but Docker were the pioneers. Those are the ones that made the tooling easy.

Guest 1

So it's quite common for people to say a Docker container, or Docker, and they just mean a Linux container. And that's fine. Right? Call it whatever the hell you want.

Guest 1

But what a container actually is, let's try and imagine you're sitting at your terminal. Right? You've got your shell open, zsh, fish, Nushell, whatever you're using, and you type ls.

Guest 1

When you type that ls, you're listing all of the files that exist in a file system. That ls itself is a process that runs on the kernel, and it consumes some sort of CPU and memory.

Guest 1

And a container says we're gonna isolate that process so that it can't see other processes, and it can only see the file system that we give it. Even to the point where you can change the time zone within a container and say that it actually believes it's in Los Angeles instead of being in Glasgow, for whatever reason. These are all called namespaces in Linux. So, you know, there's process.

Guest 1

There's mount which is file system.

Guest 1

There's a user namespace. There's the UTS namespace, and a time namespace, which is like the time and date and all that stuff.

Guest 1

And all we do is give a very simplified view of a system. So to the point where you could run ls slash, and typically you'd see a root file system. But what if that root file system was only the root file system within your container? And that's what's actually happening under the hood.

Guest 1

Yeah. You could even be root in a container, which is UID 0, but on the host, you're actually UID 67,512.

Guest 1

So, yeah, there's loads of really cool things happening there within the container in the way that the namespaces work.

Wes Bos

And what do you like using for containers? Are you into Docker?

Guest 1

I run Docker on my Mac, for sure, because it handles the VM layer. So I get Linux native containers. It's got good support.

Guest 1

So, you know, I'm on an ARM Mac, which means I can't run a lot of container images that are built for AMD64.

Guest 1

So yeah. Docker's Docker's great. But in a server environment, I use container d. So we are talking about Kubernetes. Let's just provide some more background context for anyone who's not familiar. Kubernetes is a graduated project within the Cloud Native Computing Foundation, the CNCF.

Guest 1

Containerd is the container runtime for Kubernetes. It used to be Docker. It's not anymore.

Guest 1

But even Docker itself is using containerd under the hood, which in turn uses runc. So it's this hierarchy of layers of abstraction.

Guest 1

So, yeah, containerd on the server, Docker on the desktop, and then there are loads of other tools. You don't have to use these ones, but these are the main ones that most people are familiar with. The reason so many people use Docker is they literally invented the Dockerfile.

Guest 1

And building these containers was really hard prior to 2013.

Guest 1

The Dockerfile came along and made this a very simple text file, where we just do docker build and we get a container image popped out the other side, which is pretty cool.
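That "very simple text file" looks something like this for a Node app. A hedged sketch, with the base image tag and entry file hypothetical:

```dockerfile
# Hedged sketch of a minimal Dockerfile for a Node app; names are hypothetical.
FROM node:20-alpine
WORKDIR /app

# Copy the manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Then copy the application source
COPY . .
CMD ["node", "server.js"]
```

`docker build -t my-app .` turns this into the image that Kubernetes (via containerd) can then run anywhere.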

Wes Bos

Oh, that makes so much sense now. So Docker is just a way to interface and create a container, and then a container can be put anywhere via Kubernetes on different infrastructure. Is that correct?

Guest 1

Okay. Let's bring in some more vocabulary here. Right? Docker is a command line tool that you can execute commands with. If you do docker build, it builds something that we call an OCI artifact, or image. This is an Open Container Initiative project.

Guest 1

So because Docker made this popular, they kind of control the spec, and then other people were like, hey, we've got a container runtime too. So they all work together on OCI. And now we have OCI images, which can be executed or run by any compliant container runtime.

Guest 1

Okay. Yeah.

Guest 1

It gets complicated very, very quickly because there's so many moving parts underneath.

Topic 5 26:21

Containers provide isolated processes with limited system access

Wes Bos

Yeah. But probably not something, like, if you're gonna run your own Kubernetes, or hosted Kubernetes on AWS, is that something you really have to care about? Do you have to get into that? Just build that container image and ship it, that's all you need to care about? Cool. Yeah. Now I'm curious about WASM, because WebAssembly is a technology that allows us to run different languages kind of in a container in the browser.

Wes Bos

And then recently, we've been seeing people take the WASM tech and move it to, oh, you shouldn't have to run it just in the browser. It should be able to run anywhere, kind of like a container. So do you have any thoughts on that and the tech around that?

Guest 1

Yeah. I am fully invested in WebAssembly now. So let me try and, again, provide a bit more context.

Guest 1

There's a really great quote from Solomon Hykes. Solomon Hykes was the founder or at least one of the founders of Docker.

Guest 1

And I'm sure he's gonna yell at me for bringing this up, because I think he wants to forget he said it. But he said on Twitter, so it's public domain, that if WASI and WebAssembly had existed in 2013, there would have been no need to invent Docker, which is such a powerful statement. Right? Now, the reason I'm, you know, I'm still doing Kubernetes, I'm still doing containers, right, is there's a certain type of workload that has to be in that space.

Guest 1

But what we're seeing now is there are a lot of workloads that don't need everything that comes with a container, and those can run in a WebAssembly sandbox.

Guest 1

Now, WebAssembly runs in a browser, has no access to networking except for the fetch API. It has no access to file systems. It can't do anything that a real application or a Node application would need. But that's where WASI comes in. This is the WebAssembly System Interface, which provides a POSIX-like API for WebAssembly workloads.

Guest 1

And as of really recently, a couple of weeks ago even, there was the WASI preview 2 announcement.

Guest 1

And this is the 1st time the WASI spec has changed to support something called the component model which we'll get into.

Guest 1

The component model means I can take this WebAssembly binary and I can run it, but I can enrich it with new features. I can give it access to TCP and UDP sockets. I can give it access to file systems. I can give it access to whatever I want. And to that point, the component API is still flexible.

Guest 1

I could say I'm going to give you access to a key value API.

Guest 1

You can say it's get and set. And then you have no idea what powers that behind the scenes. It could be Redis.

Guest 1

Where this gets more powerful is the operator of that runtime and the components can swap out your Redis for Kafka, MongoDB, whatever they want, and you never need to know. So we get this kind of onion architecture that we can apply to our applications. And then, as developers, all we focus on is, our application needs these APIs to run, it speaks to something, we don't care what, and we store data, we fetch data, we do whatever we have to do.
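The key-value API he describes would be declared in WIT, the interface language of the component model. A hedged sketch, with the package and names entirely hypothetical:

```wit
// Hedged WIT sketch; package and names are hypothetical.
package example:storage;

interface key-value {
  get: func(key: string) -> option<string>;
  set: func(key: string, value: string);
}

world app {
  // The app imports get/set; the runtime decides whether Redis,
  // Kafka, or anything else actually satisfies the interface.
  import key-value;
}
```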

Guest 1

So WebAssembly and WASI, with preview 2, become a very powerful platform for running stuff, with a few key benefits over containers. Right? Now, I won't ask you for a show of hands, but I'm assuming you've both built a Docker container at one point in your life, hopefully.

Wes Bos

Unfortunately, yes. Yes.

Wes Bos

Scott and I are both kinda in the, ugh, Docker camp because it's just

Guest 1

a little bit painful every now and then, but, yes, we have. Right. So there's lots of things you have to keep in mind when you build a Dockerfile. Right? You've got something called layers, which affect the build cache, and those layers are additive. You can never delete something on a layer to reduce the size of the overall container artifact. So if you do, like, an apt install of some 17-gigabyte LLM model, and then you run your command, and then on the next layer you delete it, your image is still gonna be that huge massive size. And people learn that the hard way, unfortunately.
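For anyone who hasn't hit this the hard way yet, here's a minimal sketch of the layer trap David describes (the package and script names are invented for illustration):

```dockerfile
# Anti-pattern: each RUN commits its own additive layer, so the delete
# in the second layer does NOT shrink the final image at all.
RUN apt-get install -y some-17gb-model   # this layer carries the full 17 GB
RUN rm -rf /usr/share/some-17gb-model    # image size is unchanged

# Fix: install, use, and clean up inside a SINGLE RUN, so the big
# files never make it into a committed layer.
RUN apt-get update \
 && apt-get install -y some-17gb-model \
 && ./run-my-command.sh \
 && rm -rf /usr/share/some-17gb-model /var/lib/apt/lists/*
```

Multi-stage builds, where a second `FROM` stage copies out only the artifacts you need, are the other common escape hatch.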

Guest 1

On the other side, WebAssembly binaries are teeny tiny.

Guest 1

The last 1 I shipped to production was 5 meg in size, whereas the average container size is hundreds of megs, if not gigs, which I see all too commonly.

Guest 1

Then the really cool thing is the startup time. Now, if you've done Lambda or any serverless container-based environment, you've heard of the cold start problem. The cold start problem says that in the worst case, this container may take up to 200-300 milliseconds to start.

Guest 1

Do you know what the startup time for an invocation of a WebAssembly module is?

Wes Bos

What?

Guest 1

We measure it in nanoseconds.

Wes Bos

Seriously?

Guest 1

Yes.

Guest 1

Wow. So if you can ship binaries megabytes in size with a startup time measured in nanoseconds, your serverless platform just got a lot more interesting. Right? So

Wes Bos

So if I'm understanding like, I'm trying to understand, like, what would somebody want to use WASI for? Like like, a WASI container or or is that what they're called, WASI containers?

Guest 1

They're just WebAssembly

Topic 6 31:23

WASI allows running WebAssembly binaries with access to system resources

Wes Bos

binaries, artifacts. It's not really, like, an official name. So let's say, like, I have a Rust script that will do some processing of a file and then kick it out the other end.

Wes Bos

You're saying, like, it's not a Linux environment. Right? So you don't have access to TCP and file system and all that stuff, but these new things you're talking about are going to allow us to have new APIs for

Guest 1

saving data, file system, networking, etcetera. Is that right so far? Yeah. Those all now exist with WASI preview 2. We have components for networking. We have components for file and disk access. This is now standard, and you can do that today with Rust, with TypeScript, with Zig, etcetera.

Scott Tolinski

Okay. Wow. Are people shipping this, like, currently to production, or is this still

Guest 1

Oh, yeah. Definitely. There's some really amazing platforms out there to make this work.

Guest 1

I think one of the key selling points, you know, besides the stuff that I've already said about the size and the invocation speed, is that, you know, if you've worked with Docker on your local machine, it's a good experience, but it's not a great experience. Especially for Node applications where you have to mount in a file system and then the reloads can get really slow.

Guest 1

But those problems disappear. And also the architecture problem: I'm on an M1 Mac and, you know, the container image is built for AMD64. Everything's broken. I can't do anything.

Guest 1

With WebAssembly, all that disappears. There's no virtualization beyond the WebAssembly runtime itself. So things are near native, which means we can run them on any machine. The same WebAssembly workload I compile on my Mac, I can give to somebody running Windows or Linux on a different CPU architecture, and it still just runs as is.

Wes Bos

That's what blows my mind is that it's easier for me to npm install FFmpeg and run it in WASM than it is to brew install it. But if you're on Windows, do this and all these things, and it's a different version. And as long as somebody has built that process for WASM, it will run literally anywhere that runs WASM. Correct?

Guest 1

Exactly. Yeah.

Wes Bos

And, like, what other kinds of things would you put in Wasm? Like, maybe video processing would be one. Image resizing might be another one. But, like,

Guest 1

what else are people using it for? I mean, I'm just building standard applications. Like, the last thing I shipped to production was a URL shortener. And Yeah. Yeah. Just because it's so easy to do, and I get to work in my own like, I like writing Rust code, and I can compile that Rust to WebAssembly natively without any extra hoops to jump through.

Guest 1

It's quite nice. And then you just deploy it to a platform that supports WebAssembly, and then it's online and I can just use it. Yeah. It's just nice. And the toolchain's the same locally. I'm building a web application. I'm not building something for a container. I'm not building something for Cloudflare Workers. I'm just building a Rust app that happens to be shipped as a WebAssembly module.

Wes Bos

And we had Jake Champion on, who works at Fastly, and he took SpiderMonkey, which is a JavaScript engine, and compiled it to WASM. And then he takes your code and puts it in there as well. So you can write languages that cannot be precompiled, right, as long as you ship the compiler with it or, sorry, you ship the runtime with it. But can I put JavaScript in a WASM or WASI file?

Guest 1

So, I mean, yeah, you could. Definitely. However, you know, JavaScript has really good and TypeScript has really good support now and runtimes that compile that to WebAssembly without having to ship an interpreter with it. But for languages that don't, you know, let's go back to the classic PHP. Right? We've all run it at some point, and VMware has really led the charge on compiling the PHP interpreter to a WebAssembly module, and then you can just mount all of your PHP code into it and run it, and it runs in a WebAssembly module, which is just wild to me as well. To the point where you can actually do a script tag in a web page, pull in that exact PHP interpreter as a WebAssembly module, and then script with PHP code and it runs in your browser.

Guest 1

PHP in the browser. Yes. There's a script type, I think, for php-wasm. People have actually run Drupal 7 in the browser via the WebAssembly interpreter.

Scott Tolinski

Why?

Guest 1

I mean, why not?

Wes Bos

But, like, I know that stuff's silly, but it really makes me understand, like, what Wasm actually does. You know?

Guest 1

And people have done a lot of things in the browser. You can now run Git in the browser. Someone compiled Git into a WebAssembly module, and SQLite too. There's a whole bunch of really cool stuff happening.

Wes Bos

Yeah. Just the episode before this, we talked about the SQLite adapter that interfaces with IndexedDB, but it's running in Wasm.

Wes Bos

Yeah.

Wes Bos

Wow.

Wes Bos

That is so cool that you can do this. And, obviously, it doesn't just have to run in the browser. It can run anywhere. And that's what WASI is. Right?

Guest 1

Yeah. So WASI just gives us that component model where we can layer on the functionality that the browser is never gonna accept, because the browser is a secure sandbox for applications.

Guest 1

There are many runtimes as well. You know, there's Wasmer, Wasmtime, WasmEdge. All of these things run WASI workloads on your traditional laptop or server infrastructure.

Wes Bos

Wow. And how does a regular person go make a WASM image? Is that just something like, would Scott and I ever make a WASM image, or are there going to be, like, images out there that have already been created for the type of stuff we wanna do? Like, I'll give one example: we have, like, a Rust script in our code base that populates a testing database, if you wanna run it locally. Right? And Scott has to compile that in order for it to run, and he has to ship the compiled version. But, like, wouldn't it be cool just to, like, run the Rust code directly via WASM?

Guest 1

Yeah. You could definitely do that. You just update your Rust toolchain. You add the wasm32-wasi target support.

Guest 1

You change the target when you compile it. You get a module, and then you execute it with Wasmtime or wasmCloud, etcetera. And as long as you've got the right components in place, which admittedly is not as plug and play as we'd like right now, but that's Yeah. Like, this is very new. Right? WASI preview 2 is a few weeks old.
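Roughly, the flow David describes looks like this (the module name is hypothetical, and at the time of recording the stable Rust target was `wasm32-wasi`):

```shell
# Add the WASI compilation target to the Rust toolchain
rustup target add wasm32-wasi

# Build the same crate as a WebAssembly module instead of a native binary
cargo build --release --target wasm32-wasi

# Run the module with a WASI runtime such as Wasmtime
wasmtime run target/wasm32-wasi/release/seed-db.wasm
```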

Guest 1

The easiest way right now for you and Scott to get started would be to check out a project called Spin by Fermyon.

Guest 1

This is the batteries-included, easiest way for anybody to start writing WebAssembly today. And it's got an SDK that provides key value storage, networking requests, everything.

Guest 1

They're also helping lead and contribute to the WASI preview 2 spec. So they're implementing it as we go.
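For reference, the Spin happy path is roughly this (template names shift between versions, so treat it as a sketch):

```shell
# Scaffold an HTTP handler from one of Spin's templates (Rust here)
spin new http-rust my-app && cd my-app

# Compile the app to a WebAssembly component and serve it locally
spin build
spin up

# Ship it to Fermyon's cloud
spin deploy
```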

Wes Bos

But there's probably also, like, a Rust interpreter that has been built into WASM that we could run, you know, like, if we wanna run it on demand. Because we had somebody on from the Node.js project, and they said, yeah, like, we're pretty close to being able to just import a dot Rust file or something like that. And if you have everything set up properly, it would just interpret and run it without having the tooling set up on your computer.

Topic 7 38:25

Spin provides batteries-included WebAssembly development

Wes Bos

I just think that's pretty cool.

Guest 1

Possibly. I mean, I'm not familiar with that effort, but I'm not surprised either. You know? I mean, coming from my perspective, Rust is so easy to compile to Wasm. I've always just done it that way. But it makes sense to provide a very simple runtime for that, because anyone who has written more than one Rust program knows the Rust compiler is very slow, and we spend a lot of our time doing the XKCD thing, jousting on office chairs with swords. Yeah.

Guest 1

Oh, that's cool. Spin is pretty neat. I've put the link in our show notes here for anybody who wants to check this out. Yeah. They've really simplified that whole developer experience side of building WebAssembly applications. And you just do a spin deploy and it sticks it on their cloud and it's, like, readily available. And I actually wrote an application that was written in Rust, and I encoded all my information into it, and when you hit the URL, it spits out, like, a VCF contact card for when I go to conferences. It's just this WebAssembly module as my business card now, which is kinda cool. Right.

Scott Tolinski

Yeah.

Wes Bos

Can you I don't know if you know anything about this, but I'm always curious about the difference between, like, an Intel or AMD processor and, like, an ARM64 processor. Right? Like, our MacBooks are M1, M2, M3. Those are ARM chips. Right? Are servers like, what do servers use? Do they use both? Is there a benefit to using one over the other?

Guest 1

So traditionally, servers have always been x86, AMD64.

Guest 1

That is the instruction set that they kind of use, the architecture that they use, but, you know, ARM has really changed the way the world thinks about computing with RISC. RISC is reduced instruction set computing. They basically condensed everything and said we don't need all this stuff. Let's simplify it.

Guest 1

And they've just been leading the charge on making that fast, really, really fast. You know, there's something called SIMD, which is really pushing beyond the realm of possibility, which is where you have a single instruction that runs on multiple pieces of data. So, you know, if you get your compiler right, something that Rust does really well, it can optimize for these things, for the architecture you're building to. And then the real selling point of ARM over Intel is just the power consumption is drastically less. You know, we live in a world now where we have to be very careful about our consumption of the planet's resources, you know, green energy, and ARM really makes a significant dent in server infrastructure there, which is why it's been so heavily pushed.

Wes Bos

Interesting. Yeah. Because I I got a DM on Instagram after our last show on this stuff, and he said, hey. Check this out.

Wes Bos

He's running this thing called a Turing Pi, which is a cluster board, and he'd slotted in, like, 8 Raspberry Pis.

Wes Bos

The Turing Pi runs Kubernetes, and he's able to, like, have this, like, botnet of it's not a botnet, but, like, able to have, like, 8 different computers running all at once as one cluster. Instead of 1 beefy server, he has 8 Raspberry Pis running at once. Have you ever looked into that?

Guest 1

Yeah. One of my friends, Alex Ellis, he's very big into this. He's got his own Pi rack, and he runs 8 of them as his own home automation and infrastructure thing. It's never really something I've dabbled in. I've always had access to chunky Equinix Metal servers. I've been a bit spoiled in that regard.

Scott Tolinski

Yeah. But yeah.

Guest 1

There's a really strong person who's worked with ARM stuff that you should probably chat to. I'll drop his details with you later. His name is Daniel Mangum.

Guest 1

He can go into that in ridiculous detail if you wanna chat about ARM stuff at some point.

Wes Bos

Cool.

Scott Tolinski

Yeah. What are people running on these things if you have a Pi cluster in your home office or something like that? I know Wes and I both run, like, NAS drives with all sorts of containers. But what are people doing on this stuff out of their house?

Guest 1

A lot of what I see, like, you know, when I go to KubeCon and such, is people are running Home Assistant, and then they're getting very you know, they're having a lot of fun just automating as much of their home as possible.

Guest 1

Yeah. A lot of people also hook it up to webhooks from different, you know, cloud providers, GitHub, whatever services they're using, have them hit their home machines, and then they build more automations on top of that. Discord bots, Slack bots, whatever.

Guest 1

It's just a really simple way of experimenting and building

Wes Bos

cluster-style compute at home, which is quite nice. Yeah. Yeah. From this guy I was talking to, he was saying, like, yeah, I run all my home server stuff on it. It's a really great way to learn Kubernetes because you're not running a single server with a single process.

Wes Bos

You're running 8 different processes, and you can kinda understand how that works. And I could also see it being, like, really handy. Like, I run a home media server, and those things will transcode video as you're watching it. So my kids are watching it on their iPad, and it'll transcode it in real time, or pretranscode it. Right? And sometimes if you have a couple people streaming from it at once, you can get some lag. So wouldn't it be nice if I had an army of servers in my basement here that could just scale up, and, like, Kubernetes would say, oh, well, hey, move this streaming task to a different process.

Guest 1

Yeah. I mean, if somebody's streaming and the device supports, you know, 1080p, that's fine. Then, you know, everyone can knock themselves out. You can have 12 concurrent connections, but then you're always gonna have the 4K person, and then everything falls over, and then Yeah. Oh, crap. I need to scale this up now. And that's exactly what Kubernetes does. You know, it brings a whole bunch more to the table, but the easiest way to think about it is, you know, it runs 1 or more containers and scales them for you. So Mhmm. Next thing you know, you've got a home lab. You're on home lab YouTube Yeah. Watching, building a supercomputer.

Wes Bos

But, like, that's also what I've seen is people build supercomputers where, instead of doing one big computer, it's millions of low-power devices. Right? Like, imagine taking all the 8-year-old Android devices and plugging them in. I don't I'm talking out of my butt right now, but imagine you took, like, all the phones that nobody ever wants anymore, and they still have chips in them. Right? Like, imagine you slotted those all in and were able to orchestrate them. I'm sure somebody's doing that.

Guest 1

I mean, just all of that old hardware out there, I'd I'd hope someone's putting that to good use through some other means definitely.

Guest 1

Yeah.

Guest 1

When one of these browser testing companies offers you, like, a mobile view, maybe it's just running on, like, a 20-year-old Android device somewhere. Who knows?

Scott Tolinski

Yeah. Yeah.

Wes Bos

Do you have a home server setup yourself, or are you running everything somewhere else?

Guest 1

I've got a couple of mini PCs on my desk that run a combination of Linux and Windows. The Windows one doesn't really turn on too much, but now and again, I do like to try and do a bit of gaming.

Guest 1

And on the Linux side, I've really been experimenting with container-based desktops. You know? So traditional Linux would be, like, run Ubuntu or run Fedora.

Guest 1

But there's a really cool project called Universal Blue, and Bluefin, where the actual kernel starts and then runs a container which provides your desktop environment. And then if you ever want to modify it, you update the Dockerfile, essentially. Say, add this new package and reboot, and then you get a brand new fresh machine every time, which is kinda cool. That's cool.

Wes Bos

Yeah. I often wonder about running my own like, VS Code, a lot of people are running it in the browser, and they run it on their own infrastructure.

Wes Bos

And you obviously get the editor UI, but you also get, like, a VM with it where you can run your back end stuff. And I often wonder, like, wouldn't it be cool? I think it's called code-server or something like that. Wouldn't it be cool to, like, learn how to self-host my own version of that? Yeah.

Scott Tolinski

Hey. Might fit in well with the offline first stuff that we were talking about earlier today.

Scott Tolinski

I think, you know, because we are getting kinda close to the end here, I do wanna get to one quick topic really quick before we start to, you know, change course. But if people are interested in trying out Kubernetes, like, they hear this and they say this is something that's interesting to me, where do you even start with this stuff?

Guest 1

So kubernetes.io is probably the best place. The documentation is pretty feature complete. There is a challenge with Kubernetes.

Guest 1

It's a very fast moving project.

Guest 1

They used to do 4 releases, 4 major releases, every year. They've now dropped that to 3 because people kept complaining that was too fast.

Guest 1

But the documentation they've got a great team that works on that every quarter and keeps it up to date. So I would go there. The tutorials are fantastic.

Topic 8 47:33

Rawkode Academy YouTube channel has Kubernetes and cloud native content

Guest 1

You know? I'll also say people should just check out the Rawkode Academy, which is my YouTube channel. I've got, like, 400 hours of Kubernetes and cloud native content there.

Guest 1

So if you want it depends how deep you wanna go down the rabbit hole. Right? If you just wanna learn Kubernetes 101 and get your 1st deployment going, go through the documentation.

Guest 1

If you wanna understand how the container runtimes work and the control plane and etcd and what happens when these things break, then go check out my channel.

Guest 1

And there's another really good website called Kube Simplify, and one more called LearnK8s. These have phenomenal resources on them as well.

Wes Bos

Sweet. We'll link those up in the show notes for sure because it's rawkode.academy for anyone trying to type it in right now. Yeah. I mean,

Guest 1

Kubernetes is complex. Right? Yeah. There's a lot of moving parts. It's very volatile. It changes.

Guest 1

You have to learn at least a dozen resources that you deploy to a cluster just to get 1 workload working. Mhmm. Really, if you want to do it properly at production grade.

Guest 1

And the only way to learn that, really, is just to get hands on. So really, you just need to get your cluster. Use minikube, use Docker Desktop, whatever you want. Get a cluster locally and start kicking the tires on that thing. Write your 1st Deployment, write your 1st Pod, get your 1st Service, and, yeah, take it step by step from there.
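A 1st Deployment along those lines might look like this (the name and image are just placeholders):

```yaml
# deployment.yaml — run 2 replicas of an nginx container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`, then watch the Pods come up with `kubectl get pods`.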

Wes Bos

Nice. One more thing I wanted to to touch upon is this idea of infrastructure as code.

Wes Bos

And we've had Brian LeRoux on a couple times. He's big into that as well, where you define what your infrastructure looks like via code. Right? Like, you can check it into your code base. It's in some sort of config file. But you took it one step further to something I've never seen. Your DNS, all of your records for your domain name, is a text file on your GitHub, so you never lose it, and it cuts down on this thing called ClickOps, which is like, oh, to set it up, you have to log in and click these buttons and add these permissions. So how Yeah. Can you talk about infrastructure as code a little?

Guest 1

I can. I mean, I'm hoping to be at a conference in Europe, a JavaScript conference, this summer, where I have a talk called Deleting Rawkode off the Face of the Internet, which is where I delete every one of my DNS records and my domains live on stage.

Guest 1

And because I have a lot of confidence in my automation.

Guest 1

So I automate everything.

Guest 1

I actually wipe my machine every 30 days and restore it. So I wrote a Rust project called Comtrya that takes a YAML interface for all my applications and my dotfiles and provisions them for me. Woah.

Guest 1

But that just does my machine. I was like, oh, well, what about my email? What about my DNS? What about all my domains? So then I started using I used to work at Pulumi, so I was using Pulumi for a while.

Guest 1

But I'm now using Terraform CDK, which is using TypeScript with Terraform to do the same thing. So I've got it hooked up to Cloudflare.

Guest 1

I say here's all my domains.

Guest 1

I give them all my records, and literally I just do a cdktf apply or deploy, and it goes and checks everything is correct. And if not, it reconciles them and then shuts back down. Then I can run that 10 times a day. I could run it once a day. I could run it 14 times an hour. Whatever I want, it's always gonna make sure that what I want actually exists.

Guest 1

And it's just a really powerful way of doing it, to the point where I could go on stage at this conference, delete everything with a cdktf destroy, show people my website doesn't work, they can't email me, my URL shortener is broken. But within 10 minutes, have that all spun back up exactly as it was before.

Scott Tolinski

Amazing. Do you talk about that at all on your YouTube channel? Because I am very interested in learning more there.

Guest 1

Yeah. I've got a few links on infrastructure as code and using these tools. I'll definitely share them with you as well so you can check that out.

Guest 1

Cool. But it's really easy to get started. And what I love about this, right, is because it's TypeScript, you know. One of the things that bugs me about Terraform is when it comes to duplication, you have to do it all the time because it has no concept of really reusing stuff beyond modules, and modules are a huge pain. But with TypeScript, I can literally just say I actually have a function on my helper class called enableGoogleWorkspace, and that sets up the MX records. It sets up the DKIM, the DMARC, the SPF, all of that. It just takes the key as a parameter, and it's all done for me. I've got enable, you know, Cloudflare Pages, enable Fastly. I've got enable Sentry. Whatever I want to exist is just 1 function away, and then I can apply that to any domain within my stack. And it just becomes really powerful when you build your own abstractions that fit exactly the model that you have in your head and then apply that to the world.
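As a loose illustration of the kind of helper David describes (the record shapes and the body of `enableGoogleWorkspace` are invented here; his real version emits CDKTF resources against the Cloudflare provider):

```typescript
// Hypothetical sketch of an infrastructure-as-code helper: one function
// call fans out into all the DNS records Google Workspace needs.
type DnsRecord = { name: string; type: "MX" | "TXT"; value: string };

function enableGoogleWorkspace(domain: string, dkimKey: string): DnsRecord[] {
  return [
    // Mail routing
    { name: domain, type: "MX", value: "1 smtp.google.com." },
    // Sender authentication: SPF, DKIM, and DMARC in one go
    { name: domain, type: "TXT", value: "v=spf1 include:_spf.google.com ~all" },
    { name: `google._domainkey.${domain}`, type: "TXT", value: `v=DKIM1; k=rsa; p=${dkimKey}` },
    { name: `_dmarc.${domain}`, type: "TXT", value: "v=DMARC1; p=quarantine" },
  ];
}

// The same abstraction can be applied to any domain in the stack
const records = enableGoogleWorkspace("example.com", "MIGfMA0...");
```

The win is that the email setup lives in one function instead of being copy-pasted record by record for every domain.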

Wes Bos

Man. And that doesn't nuke, like, your data. Right? Like, your files in your database? No.

Guest 1

No? Okay. I mean, if I do a destroy and I have a database that it has provisioned, it's gone bye-bye. But then, you know, that's what backups are for.

Guest 1

And if you do it right, you can restore, you can do everything that you need to do, because you can define whatever workflow you want.

Guest 1

So it's a very powerful model, and you can do a lot with it. And the nice thing is, you know, there's a Terraform provider for everything. It doesn't matter if you're using Porkbun, Cloudflare, you know, GoDaddy, whatever.

Guest 1

There's probably a Terraform provider. I mean, there's a Terraform provider to order a pizza. But, I mean, I'm pretty sure it can manage your domains.

Guest 1

Yeah. Someone wrote a Domino's Terraform provider.

Guest 1

Highly amusing. Not practical whatsoever, but, you know, if you want a pepperoni pizza, why not infrastructure-as-code it?

Wes Bos

I think I saw some TikToks of a guy, like, live coding to order a pizza in, like, so many different ways.

Wes Bos

And, one time, 2 showed up. It was 2 separate orders, so, obviously, there was a bug somewhere. I'll see if I can find it. That was hilarious.

Guest 1

Yeah. I mean, a happy bug, he got 2 pizzas. But, yeah, I'm sure he spent twice the money. So always unit test your infrastructure as code as well. That's important.

Wes Bos

Oh, that's great, man. Like, this has been so enlightening in terms of, like, what this whole side of the world looks like. Which is funny because, like, I've been a Linux computer user, I've run servers, I've had home servers for more than half my life. But when it gets into, like, actually larger infrastructure stuff, like, real stuff that matters, it's a whole different world. So it's interesting to see that this stuff is so accessible, and also with all the WASM stuff getting better as well.

Guest 1

Well, let me give a closing thought, then let's cross the streams. You know? Kubernetes, like I said, it's not going anywhere. I still write containers. I still ship them to Kubernetes.

Guest 1

But there is a really growing presence of changes within Kubernetes to support WebAssembly workloads natively.

Guest 1

So Docker themselves wrote a shim called runwasi, which is a containerd shim that can execute WebAssembly modules as if they were containers, but they're not containers.

Guest 1

That can now be run on a Kubernetes cluster, and then you have 2 different types of workloads that can be scheduled: ask Kubernetes to run your containers here, here, and here, but also tell it to invoke WebAssembly modules on demand based on requests coming into your cluster as well. So hopefully, the future we get in 6, 12, 24 months is a single control plane that everyone should learn now, Kubernetes, that can run a multitude of workloads, whether that be WebAssembly, maybe even JVM. Yeah. There's a big movement right now with GraalVM native as well.

Guest 1

You know, so why not run that on our Kubernetes control plane with our WebAssembly modules and our containers, and things get particularly interesting. So

Scott Tolinski

After years of laughing about not understanding Kubernetes, I think I've reached the point where I want to learn it.

Scott Tolinski

So thank you, David. Job done. Awesome.

Guest 1

Yeah.

Wes Bos

Beautiful. Yeah. Thank you so much. We'll move into the last section, which is a shameless plug and a sick pick. Sorry, we'll start with the sick pick first. Did you bring anything that you'd like to sick pick?

Guest 1

Yeah. There's been a new project that I discovered just this week that kinda touches on everything that we're talking about. It's called Golem Cloud.

Topic 9 55:41

Golem Cloud offers durable executions for WebAssembly

Guest 1

And what they've done is they're leaning into the durable execution thing that's happening right now. Everybody seems to be talking about durable execution.

Guest 1

And they've said, okay, well, let's flip this on its head. You write your code, compile it to WebAssembly, we'll run it on our bespoke runtime and give you durable execution, to the point where you can say sleep for 2 years, and that WebAssembly module will be invoked in 2 years' time right where it left off, with the same state, which is just magic in my book.

Guest 1

It's by the people that built ZIO for Scala.

Guest 1

So these are people that are big into functional programming, and that comes through to Golem too. So you get, like, this actor-driven workflow system with durable execution, with WebAssembly, and my eyes just lit up when I found it. I was like, this is the coolest thing I've ever seen. It's super early. Nobody should go and adopt it in production right now, but definitely go check that out. So, like, you could, like, literally pause the thing in the middle of a function,

Wes Bos

a calculation, and whatever is in memory would all be saved, and you'd be able to resume that?

Guest 1

Yeah. Like, it's for those workflows where previously you'd have to say, okay, quit this workflow, and then in 2 years' time, start me a new one with this state. Now you just say sleep, and then the runtime understands, okay, let's just pause and checkpoint this and then come back when the time is up. And then your workflow is just a single function that defines everything that you need to do, even though it runs over days, weeks, months, or years, which is really cool.

Wes Bos

Yeah. That's cool. I even wonder if you could use that for, like, a queuing system as well. You know, sometimes you put something into the queue, and you say, hey, alright.

Wes Bos

A week from now, run this and check if they've done x, y, and z. And if not, then

Guest 1

do something. Yeah, that would be a good use case: it would queue them with their state, and it would be checkpointed. Another project that's trying to do exactly that, though, is called Restate, at restate.dev.

Guest 1

Written in Rust, because all these things are.

Guest 1

And it has something called a keyed service, which can act as a queue and process these things sequentially, 1 by 1, with whatever workflow characteristics you want. But now I feel like I'm cheating and giving you 2 sick picks.

Wes Bos

No. No. I'm, like, adding to the list left and right here. Yeah. There's lots of links in this episode. Me and Scott have been going crazy finding these links and putting them in there.

Scott Tolinski

I'm actually mad because you got both of these last 2 before me. I had them on my clipboard, and then Wes jumped them in there.

Guest 1

Cool. Yeah. And what's really interesting, all these things are written in Rust, but they all provide a TypeScript SDK, which I think is really cool.

Wes Bos

Alright. Shameless plugs. What would you like to plug to the audience? You can have as many as you like.

Guest 1

Really just my YouTube channel. Go check out the Rawkode Academy. Like, if you've suffered through this episode long enough to hear me say Kubernetes and WebAssembly, then my YouTube channel is the right place for you to come and learn more. So just check that out at rawkode.live or youtube.com/rawkodeacademy.

Wes Bos

Cool. Wicked. Yeah. We'll link that up. Thanks so much for your time. Appreciate all this, information.

Guest 1

No. Thank you for having me. It's been weird. Like, this is the 1st time I've ever heard both of you speak at 1x, because I listen at 2x.

Wes Bos

Yeah. We get that a lot, especially in person. People are like, wow, your voice sounds different IRL. You're not at 2x.

Guest 1

Cool. Alright. But, yeah, thank you so much. It's been fun.
