746

March 22nd, 2024 · #Kubernetes #Containers #WebAssembly #Infrastructure

Infrastructure for TS Devs: Kubernetes, WASM and Containers with David Flanagan

David Flanagan explains Kubernetes, containers, WebAssembly, and self-hosted infrastructure to Wes and Scott. He provides tips for managing your own servers and recommendations for learning more.

Topic 0 00:00

Transcript

Guest 1

No. It's my pleasure.

Guest 1

Happy to help, happy to be here, and excited to share some bare metal, Kubernetes, containers, infrastructure nonsense with everybody.

Scott Tolinski

Love that. Yeah.

Scott Tolinski

Yeah. Kubernetes is one of those things that's come up enough on this show that we used to joke about it. I mean, even Wes joked about it at the jump of this episode, that we have no idea what the hell it is. Or we just recently got into, like, Coolify or self hosting. And Kubernetes keeps popping up in that whole world of self hosting. So really stoked to have you on to talk all about

Topic 1 02:20

Cloud hosting costs are rising due to investor pressure to make money

Guest 1

I mean, I think it's becoming definitely more prominent, and the answer may be yes.

Topic 2 02:52

Web developers may need to learn how to host their own infrastructure

Guest 1

Because, you know, we've always had this trade off to be made. First and foremost, I mean, I'm not a web developer.

Guest 1

I've been a back end developer for over 20 years. Always worked with infrastructure and bare metal.

Guest 1

And only recently started to dabble in a little bit of web, but mostly TypeScript, and using it in other domains where it isn't traditionally applied.

Guest 1

But what I'm seeing is that that conversation about capital expenditure is changing.

Guest 1

You know, serverless used to be the thing that TypeScript developers and web developers could get away with for free forever, and it was just: I never have to pay for a server. I deploy to Vercel and Netlify or Render, fire up whatever. Right? And it doesn't cost me any money. I can scale infinitely. But really, it doesn't work that way. These things get stupidly expensive the minute you go past that free tier, if you hit that free tier.

Guest 1

And the costs come charging pretty quick.

Guest 1

So, you know, here's a little bit of a story on why it may be interesting for people to learn more about bare metal and hosting their own stack.

Guest 1

I used to work for a company called Equinix Metal, formerly Packet, and their product is cloud infrastructure, but you get a full bare metal machine.

Guest 1

Like when I do demos for people, I'm running a 1 meg binary written in Rust that needs about 2 kilobytes of RAM, and I run it on a machine with a 128 gig of RAM and 78 cores. Right? Just why? Because it's fun. Right? So having the ability to use that hardware at that scale, controlling your stack from top to bottom, it's just yours. Now the point of that story of why I worked at Equinix Metal is I used to help a company called LogDNA who had a 1,000,000 pound per month bill with AWS. And by moving to bare metal, they got that down to under 300,000 per month, which is still a lot of money, but, you know, it's a 70% reduction in cost. So once you get past those free tiers, maybe you need to analyze the market for where you need to be hosting your workloads.

Scott Tolinski

When people say bare metal, like, what does that mean explicitly, for people who might not know? Sounds sick.

Scott Tolinski

Yeah. It does.

Topic 3 05:04

Bare metal hosting provides direct access to hardware without virtualization

Guest 1

When you have compute and memory and IO on a little square box or maybe a rectangle box stuck into a rack and you SSH onto it with no virtualization whatsoever.

Guest 1

So, of course, every cloud provider has bare metal, but you don't interact with it. You work with a virtual machine, the hypervisor that you're given. Bare metal means there's none of that. You're working directly with the CPU, etcetera.

Guest 1

I mean, I I'll say yes, but I'm gonna go into that with a little bit more nuance. Right? It's very rare very, very, very rare that you ever get a Linux container as your environment because you know what? People don't trust you.

Guest 1

And I know that may come as a shock, but containers are great because we get access to Linux namespaces.

Guest 1

That means we don't get visibility of all the processes on the host. We don't have access to the — well, we shouldn't be able to see the entire file system or memory and all this other stuff.

Guest 1

But these providers can't run the risk that you're gonna go poking and find a breakout of that container. So what actually happens is, whenever you get a VPS or a virtual machine on EC2 or DigitalOcean, they're actually running a hypervisor layer, Xen or KVM or something else, which gives you a virtual machine, and then they stick your workload, perhaps still in a container, but within the virtual machine, because they need that kernel level isolation.

Guest 1

Meaning, they don't trust you speaking to the host kernel. You're gonna get a little virtual machine. There, they have a better sandbox.

Guest 1

Very standard setup. Yeah. And this is how Chromebooks work as well. If you've ever tried to use the Linux mode on a Chromebook, it's actually a Linux host with a virtual machine with your containers inside of it. So

Guest 1

But is it that you'd be able to, like, max out the CPU? Are we talking about a theoretical use case here or on the cloud? Just in general, like, if someone does give you access to that low level stuff, what can you do?

Guest 1

Yeah, definitely. Right, let's go to a use case. So let's assume I'm a cloud provider and I say, hey, here's a container on my bare metal. You've got access to the kernel.

Guest 1

Go nuts.

Guest 1

Now, in theory, I've done things right and I've mapped the root user away so that you don't have access to the root user. I've given you a confined file system. You've only got a restricted view. You've not got any process ID table stuff. All you see is a very small part of that. However, the code that you execute is still executed against the host kernel. Meaning, if you have a 0 day or some exploit that you've established on your own and you can speak to that kernel, you could ask it to give you all the files. You could ask for all the memory. You could inject CPU instructions into other processes. There's a whole bunch of malicious and nefarious activity that you could do. And that's why the isolation of Xen or KVM is brought in at the cloud provider level.

Scott Tolinski

When you're hosting bare — like, actual bare metal, are you still going to a service provider like Hetzner or something to host on bare metal, or are there companies that prioritize bare metal over, you know, typical virtualized private servers?

Guest 1

Yeah. There's not that many cloud providers that are willing to give you a bare metal box, and for a few different reasons we can get into. And the biggest one is Equinix Metal. They were formerly called Packet. You may have heard of them under that name.

Guest 1

And they've built all of their own tooling to do this. So they have this project so that when you click a button and say give me some bare metal, it wipes the disks, installs the operating system, and gets it to you in about 90 seconds to 2 minutes, which is just unbelievable for what you're actually getting.

Guest 1

There are some other providers like Hetzner, Scaleway, and OVH. They all offer some bare metal that you can have access to. And recently — well, last year, 2 years ago — AWS started offering their metal instances as well, which are especially useful because you can get them with the Graviton ARM processors, which are wicked powerful.

Guest 1

Of course. Yeah. I mean, like I said, this is how things used to work. We'd always just have cupboards in our offices, and we'd run a small 16U rack with a bunch of machines in it, and you'd stick an IPv4 address on it. Obviously, those are expensive these days, but that aside, yeah, you can run your own bare metal, but there's always the catch. It's economies of scale. 37signals, they can afford to do this. They've got such high costs in the cloud that they're always going to save money. LogDNA have such high costs. They're always going to save money.

Guest 1

A regular company building a startup that just spends less than $10 a month on its cloud bill, that's the most cost efficient way to do it. Then you get to a certain scale, and then maybe you kind of step back and change things. At a certain point, it makes sense. Like, oh, we can hire somebody at 100, 200, 300 grand a year and pay for the actual servers to sort of maintain this thing, because it's gonna be cheaper than

Guest 1

Yeah.

Guest 1

But that's not to kind of dismiss the challenges of running bare metal either. Having done this a lot over the last 20 years, storage is really hard to get right. You know, we take things for granted these days with the cloud. Like, you know, having unlimited storage on S3, having super fast NVMe disks that we can disconnect and attach to any device on the cloud within a region, etcetera.

Guest 1

Doing that in a bare metal environment, very painful.

Guest 1

Very, very painful. And that's me not even getting into Kubernetes yet. That's just storage.

Guest 1

Yeah. Yeah. Or 1 or 2 servers. And then we bring in Kubernetes, where workloads become ephemeral and they transfer, they move around, and the storage has to go with it. Like, data has gravity, and that gravity pulls you in. So you've got to be really careful there. Man. Well, let's get into that then. What the hell is Kubernetes? Can you tell us lowly JavaScript developers what that weird word means? So, alright.

Topic 4 11:59

Kubernetes is a container orchestrator that manages and runs containers

Guest 1

Kubernetes is a container orchestrator.

Guest 1

It is a supervisor that you ask to run your container for you, and its job is just to run that container. If your container exits or crashes, it will restart it. If it needs access to certain permissions, to certain resources, to certain networking policies, it will attach them.

Guest 1

And at the very basic level, that's all Kubernetes does.

Guest 1

You just say, hey, run my Node application in this container for me, and then you don't need to worry about it.

Guest 1

Alright. Let's imagine a world where you're both running a Syntax Cloud. Right? You've got your 2 machines. You've got 1 in your basement, another one in Scott's.

Guest 1

You wanna run your Node application, and it crashes. So you've just run container run, you go to sleep, it fails, your alerting kicks off, Sentry is going nuts.

Guest 1

Somebody's got to fix that. Right? So who's getting paged? Which one of you is it? So someone has to wake

Guest 1

So someone has to wake up, and then they have to SSH in, and they have to restart it, and then — okay. Now, anyone listening who's familiar with containers will say, hey, there's restart policies. Alright, cool. Or someone else is gonna go, hey, there's systemd. Like, we can use that as the supervisor. Right. Okay. Right. I get it as well. However, all of these are bound to the context of 1 single machine.
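
The single-machine supervision being described can be sketched like this; the image and service names are made up for illustration, and this assumes Docker and systemd on one Linux host:

```shell
# Keep one container alive on one machine with Docker's restart policy.
docker run -d --name myapp --restart unless-stopped myapp:latest

# Or let systemd be the supervisor instead (a minimal unit file sketch):
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp container
After=docker.service

[Service]
ExecStart=/usr/bin/docker start -a myapp
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF
```

Either way, if the machine itself dies, nothing reschedules the workload anywhere else, which is the gap the conversation turns to next.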

Guest 1

And Kubernetes doesn't make sense to run on 1 single machine. Kubernetes makes sense when you have more than 1 machine. When we then have to say, okay, we've got these workloads that we want to run. They can run in any of these machines. And if one crashes, it may be best just to move it to another machine, because maybe there's something wrong with this one, and we'll isolate it, we'll cordon it, whatever. We'll fix it when we wake up, because nobody likes getting woken up at 3 AM. And then we carry on our business and schedule it somewhere else. Kubernetes — as soon as you have more than 1 machine, you really need to start paying attention to it. And I know that in the front end world, the web world, it's not been something that's come charging at people. But if you look at the cloud landscape right now — look at AWS, look at Azure, look at Google Cloud, Render, DigitalOcean, Akamai, every one of them — they all offer a managed Kubernetes service, and that's because in the back end world, it has taken over. This is the way to run a container based workload at scale with resiliency and redundancy.

Guest 1

So we can't avoid it, unfortunately.

Scott Tolinski

Yeah. So you have a central Kubernetes instance that is then orchestrating amongst other machines, essentially, other hosts?

Guest 1

Yeah. Kind of. So Kubernetes is split into kind of 2 phases. There's the control plane and there's the worker nodes, the worker plane. The control plane consists of an API server, a data store which is typically etcd, a controller manager, a scheduler — all these fancy bits that decide where things run and how to fix them if they go wrong.

Guest 1

You push that into the managed side, so you're using AKS, EKS, etcetera.

Guest 1

Then all you've got is a fleet of worker nodes that just want jobs to run, and those are the parts that you probably interact with as a developer that wants to schedule a container.

Guest 1

Yeah. So you can just ask any of these cloud providers to give you a managed Kubernetes control plane. They handle all the hard stuff. The scheduler and the data store — etcd itself is an absolute monster to operate, even beyond Kubernetes.

Guest 1

Etcd? E-t-c-d?

Guest 1

It's a key value store from HashiCorp.

Guest 1

I think they were the — no.

Guest 1

They built Consul on top of etcd. Yeah.

Guest 1

And CoreOS, I think they built etcd.

Guest 1

Yeah. Definitely.

Guest 1

There's a whole bunch of projects in this space. So let's again assume you're just going with managed Kubernetes. Like, as much as I love doing Kubernetes on bare metal — like, I'm a complete — I do not recommend anyone listening to this do it. Please don't. I actually, on my YouTube channel, I have a show called Clustered, where I spun up bare metal Kubernetes clusters every single week and gave them to random people on the Internet and said go break them, and then we fixed them live, which was scary but fun.

Guest 1

Anyway, yeah. Don't do it yourself. Use a managed service. And the way that those work is they have something called the metrics server that runs on Kubernetes, that monitors all of the workloads, memory consumption, CPU.

Guest 1

It can also monitor for the scheduler being unable to schedule something. So if you say, hey, run me 12 of these, and it can't schedule them — these are all signals to the control plane, or at least to a cluster autoscaler, which goes, oh, we need more compute. And it will go away to the cloud provider and say give me 10 more servers, and schedule your workloads. And then when it's overprovisioned, it will start to scale back down over time as well. So you get to really define the scale up and scale down policies, and it depends on how big that credit card is and who you're working for. Yeah. I've been fortunate with some companies where I could say scale up a 100 nodes and then scale down 10 every hour, and that's okay. But some places, you know, it's like, oh, maybe just scale up 2, then we'll scale up 2 more if we need. But then it all comes down to how many nines you need at the end of the day for your application.

Guest 1

If it's user facing, maybe it's internal, can you just can you handle 8 minutes of downtime per year, or can you handle 80 minutes of downtime per year?

Guest 1

I think that's, like, 8 minutes.

Guest 1

Hold on. Let me Google that. I should know better. I should know my nines by now. Less than 2.25 —
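
For reference, the downtime budget for a given number of nines is simple arithmetic; a quick sketch:

```shell
# Downtime budget per year for a given availability target.
# Pure awk arithmetic, using 365.25 days per year.
for a in 0.999 0.9999 0.99999; do
  awk -v a="$a" 'BEGIN { printf "%s -> %.2f minutes/year\n", a, (1 - a) * 365.25 * 24 * 60 }'
done
```

Three nines works out to roughly 8.8 hours a year, four nines to about 53 minutes, and five nines to about 5.3 minutes — so "8 minutes versus 80 minutes of downtime per year" is roughly the difference of one nine.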

Guest 1

Yeah. Well, I mean, measuring this stuff is all about kinda putting your finger in the air and hoping for the best. Status pages never accurately reflect what's actually happening with these applications. So their status page says they've got 5 nines, but realistically, they're doing worse than that. But then it comes down to — like, let me go slightly off tangent here. It comes down to what you define to be an outage. Right? Now, in a microservice architecture, which you most likely have, because we call that the cloud native architecture if you're deploying to Kubernetes, then if a service is down for 10 minutes but your end user doesn't see any regression in performance or accessibility or anything like that, it's not really an outage, so we don't count it.

Guest 1

Of course, if the Google.com front page is down and people can't search, that's an outage. So that's where we apply, like, S1, S2, S3 — that's a severity rating. And outages, generally, if it's an S1, they'll wake you up. You fix them. Those go against your 5 nines. And everything else, you build in the resiliency. You architect for failures so that they don't affect that overall. Okay. Yeah. So I told the story a couple of months ago how one of my apps was

Guest 1

Exactly.

Guest 1

You take your high value containers. And in Kubernetes, we call these deployments. Right? A deployment says I'm going to run 1 or more containers for you. And you identify the things that do have some sort of uptime guarantee, or that we can attach money to. Right? Every time a user gets this and it's down — like Amazon, it costs them millions if they're down for a minute. Right? So those things, you're gonna over provision. You're gonna say, okay, we're gonna run 200 of them. We only need a 150, but we've got 25% extra capacity there if we need it in the event of an error. So yeah. Yeah. So when you say container, okay, to give the audience
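
That over-provisioned deployment can be sketched with kubectl; the name and image below are made up, and this assumes kubectl is already pointed at a cluster:

```shell
# Run 200 replicas of a high-value service. Kubernetes keeps that count,
# rescheduling replicas onto healthy nodes if containers or machines fail.
kubectl create deployment checkout \
  --image=registry.example.com/checkout:v1 --replicas=200

# Later, trim the headroom without redeploying anything:
kubectl scale deployment checkout --replicas=150
```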

Scott Tolinski

perspective here, can you give just a basic what is a container? Does that inherently mean we're talking about Docker, or is Docker just a type of container?

Guest 1

So containers existed before Docker, but Docker were the pioneers. They're the ones that made the tooling easy.

Guest 1

So it's quite common for people to say a Docker container, or Docker, and they just mean a Linux container. And that's fine. Right? Call it whatever the hell you want.

Guest 1

But what a container actually is — let's try and imagine you're sitting at your terminal. Right? You've got your shell open — Bash, Zsh, Fish, whatever you're using — and you type ls.

Guest 1

When you type that ls, you're listing all of the files that exist in a file system. That ls itself is a process that runs on the kernel, and it consumes some sort of CPU and memory.

Guest 1

And a container says we're gonna isolate that process so that it can't see other processes, and it can only see the file system that we give it. Even to the point where you can change the time zone within a container and say that it actually believes it's in Los Angeles instead of being in Glasgow, for whatever reason. These are all called namespaces in Linux. So, you know, there's process.

Guest 1

There's mount which is file system.

Guest 1

There's a user namespace. There's the UTS namespace and the time namespace, which is like the time and date and all that stuff.

Guest 1

And all we do is give a very simplified view of a system. To the point where you could run ls slash, and typically you'd see a root file system. But what if that root file system was only the root file system within your container? And that's what's actually happening under the hood.

Guest 1

Yeah. You could even be root in a container which is UID 0, but in the host, you're actually UID 67,512.
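You can poke at these namespaces directly with util-linux's unshare; a sketch, assuming a Linux box and root privileges:

```shell
# Start a shell in fresh PID and mount namespaces, with /proc remounted
# so that process listings reflect the new namespace.
sudo unshare --fork --pid --mount-proc /bin/sh -c 'ps aux'
# Inside, ps reports only PID 1 (the sh we started) and ps itself --
# the rest of the host's processes are invisible.
```

Container runtimes do essentially this, plus the mount, user, network, and other namespaces, every time they start a container.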

Guest 1

So, yeah, there's loads of really cool things happening there within the container in the way that the namespaces work.

Guest 1

I run Docker on my Mac for sure, because it handles the VM layer. So I get Linux native containers. It's got good support.

Guest 1

So, you know, I'm on an ARM Mac, which means I can't run a lot of container images that are built for AMD64.

Guest 1

So yeah. Docker's great. But in a server environment, I use containerd. So, we are talking about Kubernetes — let's just provide some more background context for anyone who's not familiar. Kubernetes is a graduated project within the Cloud Native Computing Foundation, the CNCF.

Guest 1

Containerd is the container runtime for Kubernetes. It used to be Docker. It's not anymore.

Guest 1

But even Docker itself is using containerd under the hood, which in turn uses runc. So it's this hierarchy of layers of abstraction.

Guest 1

So, yeah, containerd on the server, Docker on the desktop, and then there are loads of other tools. You don't have to use these ones, but these are the main ones that most people are familiar with. The reason so many people use Docker is they literally invented the Dockerfile.

Guest 1

And building these containers was really hard prior to 2013.

Guest 1

The Dockerfile came along and made this a very simple text file, where we just do docker build and we get a container image out the other side, which is pretty cool.
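
A minimal sketch of that workflow; the image name and the server.js file are hypothetical:

```shell
# A Dockerfile is just a text file describing the image, layer by layer.
cat > Dockerfile <<'EOF'
FROM node:20-alpine
WORKDIR /app
COPY server.js .
CMD ["node", "server.js"]
EOF

# One command turns it into an image any compliant runtime can run.
docker build -t myapp:latest .
```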

Guest 1

Okay. Let's bring in some more vocabulary, because there's a lot hiding under here. Right? Docker is a command line tool that you execute commands with. If you do docker build, it builds something that we call an OCI artifact, or image. OCI is the Open Container Initiative.

Guest 1

So because Docker made this popular, they kinda control the spec, and then other people were like, hey, we've got a container runtime too. So they all work together on OCI. And now we have OCI images, which can be executed or run by any compliant container runtime.

Guest 1

Okay. Yeah.

Guest 1

It gets complicated very, very quickly because there's so many moving parts underneath.

Topic 5 26:21

Containers provide isolated processes with limited system access

Guest 1

Yeah. I am fully invested in WebAssembly now. So let me try and again provide a bit more context.

Guest 1

There's a really great quote from Solomon Hykes. Solomon Hykes was the founder or at least one of the founders of Docker.

Guest 1

And I'm sure he's gonna yell at me for bringing this up, because I think he wants to forget he said it. But he said it on Twitter, so it's public domain: that if WASI and WebAssembly had existed in 2013, there would have been no need to invent Docker. Which is such a powerful statement. Right? Now, you know, I'm still doing Kubernetes. I'm still doing containers. Right? There is a certain type of workload that has to be in that space.

Guest 1

But what we're seeing now is there are a lot of workloads that don't need everything that comes with a container, and those can run in a WebAssembly sandbox.

Guest 1

Now, WebAssembly in the browser has no access to networking except for the fetch API. It has no access to file systems. It can't do anything that a real application or a Node application would need. But that's where WASI comes in. This is the WebAssembly System Interface, which provides a POSIX like API for WebAssembly workloads.

Guest 1

And as of really recently, a couple of weeks ago even, there was the WASI preview 2 announcement.

Guest 1

And this is the first time the WASI spec has changed to support something called the component model, which we'll get into.

Guest 1

The component model means I can take a WebAssembly binary and I can run it, but I can enrich it with new features. I can give it access to TCP and UDP sockets. I can give it access to file systems. I can give it access to whatever I want. And the component API is still flexible.

Guest 1

I could say I'm going to give you access to a key value API.

Guest 1

You can say it's get and set. And then you have no idea what powers up behind the scenes. It could be Redis.

Guest 1

Where this gets more powerful is the operator of that runtime and the components can swap out your Redis for Kafka, MongoDB, whatever they want, and you never need to know. So we get this kind of onion architecture that we can apply to our applications. And then, as developers, all we focus on is: our application needs these APIs to run, it speaks to something, we don't care what, and we store data, we fetch data, we do whatever we have to do.

Guest 1

So WebAssembly and WASI preview 2 become a very powerful platform for running stuff, with a few key benefits over containers. Right? Now, I wouldn't ask you for a show of hands, but I'm assuming you've both built a Docker container at one point in your life, hopefully.

Guest 1

A little bit painful every now and then, but, yes, we have. Right. So there's lots of things you have to keep in mind when you build a Dockerfile. Right? You've got something called layers, which affect the build cache, and those layers are additive. You can never delete something in a later layer to reduce the size of the overall container artifact. So if you do, like, an apt install or download some 17 gig LLM model, and then you run your command, and in the next layer you delete it, your image is still gonna be that huge massive size. And people learn that the hard way, unfortunately.
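
The layer gotcha described here looks like this; the URL and file names are made up for illustration:

```shell
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
# This layer bakes a multi-gigabyte file into the image...
RUN curl -LO https://example.com/huge-model.bin
# ...and this later layer only hides it; the bytes still ship.
RUN rm huge-model.bin
EOF

# The usual fix: download and delete within a single RUN, so the file
# never lands in any committed layer:
#   RUN curl -LO https://example.com/huge-model.bin && rm huge-model.bin
```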

Guest 1

On the other side, web assembly binaries are teeny tiny.

Guest 1

The last one I shipped to production was 5 meg in size. Whereas the average container size is hundreds of megs, if not gigs, which I see all too commonly.

Guest 1

Then the really cool thing is the startup time. For a container — now, if you've done Lambda or any serverless container based environment, you've heard of the cold start problem. The cold start problem says that, in the worst case, this container may take up to 200 to 300 milliseconds to start.

Guest 1

You know what the startup time for an invocation of a WebAssembly module is?

Guest 1

We measure it in nanoseconds.

Guest 1

Yes.

Guest 1

Wow. So if you can ship small, megabyte size binaries with a startup time measured in nanoseconds, your serverless platform just got a lot more interesting. Right? So

Guest 1

They're just WebAssembly

Topic 6 31:23

WASI allows running WebAssembly binaries with access to system resources

Guest 1

saving data, file system, networking, etcetera. Is that right so far? Yeah. Those all now exist with WASI preview 2. We have components for networking. We have components for file and disk access. This is now standard, and you can do that today with Rust, with TypeScript, with Zig, etcetera.

Scott Tolinski

Okay. Wow. Are are people shipping this, like, currently to production, or is this still

Guest 1

Oh, yeah. Definitely. There's some really amazing platforms out there to make this work.

Guest 1

I think one of the key selling points, you know, besides the stuff that I've already said about the size and the invocation speed, is that, you know, if you've worked with Docker on your local machine, it's a good experience, but it's not a great experience. Especially for Node applications, where you have to mount in a file system, and then the reloads can get really slow.

Guest 1

But those problems disappear. And also the architecture thing — I'm on an ARM Mac and, you know, the container image is built for AMD64. Everything's broken. I can't do anything.

Guest 1

With WebAssembly, all that disappears. There's no virtualization beyond the WebAssembly runtime itself. So things are native, which means we can run them on any machine. The same WebAssembly workload I compile on my Mac, I can give to somebody running Windows or Linux on a different CPU architecture, and it still just runs as is.

Guest 1

Exactly. Yeah.

Guest 1

What else are people using it for? I mean, I'm just building standard applications. Like, the last thing I shipped to production was a URL shortener. And yeah, just because it's so easy to do, and I get to work in my own — like, I like writing Rust code. And Rust compiles to WebAssembly natively without any extra hoops to jump through.

Guest 1

It's quite nice. And then you just deploy it to a platform that supports WebAssembly, and then it's online and I can just use it. Okay? Yeah. It's just nice. And the toolchain's the same locally. I'm building a web application. I'm not building something for a container. I'm not building something for Cloudflare Workers. I'm just building a Rust app that happens to be shipped as a WebAssembly module.

Guest 1

So, I mean, yeah, you could. Definitely. However, you know, JavaScript and TypeScript have really good support now, and runtimes that compile that to WebAssembly without having to ship an interpreter with it. But for languages that don't — you know, let's go back to the classic, PHP. Right? We've all run it at some point. But VMware has really led the charge on compiling the PHP interpreter to a WebAssembly module, and then you can just mount all of your PHP code into it and run it, and it runs in a WebAssembly module, which is just wild to me as well. To the point where you can actually do a script tag in a web page, pull in that exact PHP interpreter as a WebAssembly module, and then a script tag with PHP code, and it runs in your browser.

Guest 1

PHP in the browser. Yes. Just search for php-wasm. People have actually run Drupal 7 in the browser via the WebAssembly interpreter.

Scott Tolinski

Why?

Guest 1

I mean, why not?

Guest 1

And people are bringing a lot of things to the browser. You can now run Git in the browser — someone compiled Git to a WebAssembly module. Same with SQLite. There's a whole bunch of really cool stuff happening.

Guest 1

Yeah. So WASI just gives us that component model, where we can layer on the functionality that the browser is never gonna accept, because the browser is a secure sandbox for applications.

Guest 1

There are many runtimes as well. You know, there's Wasmer, Wasmtime, WasmEdge. All of these things run WASI workloads on your traditional laptop or server infrastructure.

Guest 1

Yeah. You could definitely do that. You just update your Rust toolchain. You add the wasm32-wasi target support.

Guest 1

You change the target when you compile it. You get a module, and then you execute it with Wasmtime or wasmCloud, etcetera. And as long as you've got the right components in place — which admittedly is not as plug and play as we'd like right now, but that's — yeah, like, this is very new. Right? WASI preview 2 is a few weeks old.
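
The Rust toolchain steps being described look roughly like this; the target was named wasm32-wasi at the time (newer toolchains call it wasm32-wasip1), and the binary name depends on your crate:

```shell
# Add the WASI target to the toolchain and build the crate against it.
rustup target add wasm32-wasi
cargo build --release --target wasm32-wasi

# Execute the resulting module with a standalone WASI runtime.
wasmtime target/wasm32-wasi/release/myapp.wasm
```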

Guest 1

The easiest way right now for you and Scott to get started would be to check out a project called Spin by Fermyon.

Guest 1

This is the batteries included, easiest way for anybody to start writing WebAssembly today. And it's got an SDK that provides key value storage, networking requests, everything.

Guest 1

They're also helping lead and contribute to the WASI preview 2 spec. So they're implementing it as they go.

Topic 7 38:25

Spin provides batteries-included WebAssembly development

Guest 1

Possibly. I mean, I'm not familiar with that effort, but I'm not surprised either. You know? I mean, coming from my perspective, Rust is so easy to compile to Wasm, I've always just done it that way. But it makes sense to provide a very simple runtime for that, because anyone who has written more than 1 Rust program knows the Rust compiler is very slow, and we spend a lot of our time doing the xkcd thing, riding about on office chairs with swords. Yeah.

Guest 1

Oh, that's cool. Spin is pretty neat. I've put the link in our show notes here for anybody who wants to check this out. Yeah. They've really simplified that whole developer experience side of building WebAssembly applications. And you just do a spin deploy, and it sticks it on their cloud and it's, like, readily available. And I actually wrote an application in Rust where I encoded all my information into it, and when you hit the URL, it spits out, like, a VCF contact card for when I go to conferences. And it's just this WebAssembly module as my business card now, which is kinda cool. Right.
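
For reference, the Spin workflow mentioned here is roughly the following; the template name is taken from Spin's Rust HTTP starter, and exact flags vary between Spin versions:

```shell
spin new http-rust my-app   # scaffold a Rust HTTP component
cd my-app
spin build                  # compile it to a WebAssembly component
spin up                     # serve it locally
spin deploy                 # push it to Fermyon Cloud
```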

Scott Tolinski

Yeah.

Guest 1

So traditionally, servers have always been, you know, AMD64.

Guest 1

That is the instruction set, the architecture, that they use. But, you know, ARM has really changed the way the world thinks about computing with RISC. RISC is reduced instruction set computing. They basically condensed everything and said we don't need all this stuff. Let's simplify it.

Guest 1

And they've just been leading the charge on making that fast, really, really fast. You know, there's something called SIMD, single instruction, multiple data, which is where you have a single instruction that runs on multiple pieces of data. So if you get your compiler right, something that Rust does really well, it can optimize for these things for the architecture you're building to. And then the real selling point of ARM over Intel is just that the power consumption is drastically less. You know, we live in a world now where we have to be very careful about our consumption of the planet's resources, you know, green energy, and ARM really makes a significant dent in server infrastructure there, which is why it's been so heavily pushed.

Guest 1

Yeah. One of my friends, Alex Ellis, he's very big into this. He's got his own Raspberry Pis, and he runs 8 of them as his own home automation and infrastructure thing. It's never really something I've dabbled in. I've always had access to chunky Equinix Metal servers. I've been a bit spoiled in that regard.

Scott Tolinski

Yeah. But yeah.

Guest 1

There's someone who's worked with ARM stuff that you should probably chat to. I'll drop his details in with you later. His name is Daniel Mangum.

Guest 1

He can go into that in ridiculous detail if you wanna chat about ARM stuff at some point.

Scott Tolinski

Yeah. What what are people running on these things if if you have a pie cluster in your home office or something like that? I know Wes and I both run, like, NAS drives with all sorts of containers. But what are people doing on this stuff out of their house?

Guest 1

A lot of what I see, like, you know, when I go to KubeCon and such, is people are running home assistant, and then they're getting very you know, they're having a lot of fun just automating as much as their home as possible.

Guest 1

Yeah. A lot of people also hook it up to webhooks from different, you know, cloud providers, GitHub, whatever services they're using, have them hit their home machines, and then they build more automations on top of that. Discord bots, Slack bots, whatever.

Guest 1

It's just a really simple way of experimenting and building

Guest 1

Yeah. I mean, if somebody's streaming and the device supports, you know, 720p, that's fine. Everyone can knock themselves out. You can have 12 concurrent connections, but then you're always gonna have that 4K person, and then everything falls over, and then — Yeah. Oh, crap. I need to scale this up now. And that's exactly what Kubernetes does. You know, it brings a whole bunch more to the table, but the easiest way to think about it is, you know, it runs 1 or more containers and scales them for you. So Mhmm. Next thing you know, you've got a home lab. You're on home lab YouTube Yeah. Watching people building supercomputers.

Guest 1

I mean, just all of that old hardware out there, I'd I'd hope someone's putting that to good use through some other means definitely.

Guest 1

Yeah.

Guest 1

You know how one of these browser testing companies offers you, like, a mobile view? Maybe it's just running on, like, a 20 year old Android device somewhere. You know?

Scott Tolinski

Yeah. Yeah.

Guest 1

I've got a couple of mini PCs on my desk that run a combination of Linux and Windows. The Windows one doesn't really turn on too much, but now and again, I do like to try and do a bit of gaming.

Guest 1

And on the Linux side, I've really been experimenting with container based desktops. You know? So traditional Linux would be, like, run Ubuntu or run Fedora.

Guest 1

But there's a really cool project called Universal Blue, and Bluefin, where the actual kernel starts and then runs a container which provides your desktop environment. And then if you ever want to modify it, you update the Dockerfile, essentially. Say, add this new package, and reboot, and then you get a brand new fresh machine every time, which is kinda cool. That's cool.

Scott Tolinski

Hey. Might fit in well with the offline first stuff that we were talking about earlier today.

Scott Tolinski

I think, you know, because we are getting kinda close to the end here, I do wanna get to 1 quick topic really quick before we start to, you know, change course. But if people are interested in trying out Kubernetes, like, they hear this and they say this is something that's interesting to me, where do you even start with this stuff?

Guest 1

So kubernetes.io is probably the best place. The documentation is pretty feature complete. There is a challenge with Kubernetes.

Guest 1

It's a very fast moving project.

Guest 1

They used to do 4 major releases every year. They've now dropped that to 3 because people kept complaining that was too fast.

Guest 1

But they've got a great team that work on the documentation every quarter and keep it up to date. So I would go there. The tutorials are fantastic.

Topic 8 47:33

Rawkode Academy YouTube channel has Kubernetes and cloud native content

Guest 1

You know? I'll also say people should just check out the Rawkode Academy, which is my YouTube channel. I've got, like, 400 hours of Kubernetes and cloud native content there.

Guest 1

So if you want it depends how deep you wanna go down the rabbit hole. Right? If you just wanna learn Kubernetes 101 and get your 1st deployment going, go through the documentation.

Guest 1

If you wanna understand how the container runtimes work and the control plane, and what happens when these things break, then go check out my channel.

Guest 1

And there's another really good website called Kube Simplify, and one more called Learnk8s. These have phenomenal resources on them as well.

Guest 1

Kubernetes is complex. Right? Yeah. There's a lot of moving parts. It's very volatile. It changes.

Guest 1

You have to learn at least a dozen resources that you deploy to a cluster just to get 1 workload working. Mhmm. Really, if you want to do it properly at production grade.

Guest 1

And the only way to learn that really is just to get hands on. So you just need to get a cluster. Use minikube, use Docker Desktop, whatever you want. Get a cluster locally and start kicking the tires on that thing. Write your 1st deployment, write your 1st pod, get your 1st service, and, yeah, take it step by step from there.
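A first deployment really is just one small manifest you apply to that local cluster. A minimal sketch, assuming the stock nginx image:

```yaml
# deployment.yaml — a minimal first Deployment: three replicas of a
# single container, scheduled and restarted for you by Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f deployment.yaml`, then watch the pods come up with `kubectl get pods` — killing one and watching Kubernetes replace it is the fastest way to see what the scheduler actually does for you.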

Guest 1

I can. I mean, I'm hoping to be at a conference in Europe, a JavaScript conference, this summer, where I have a talk called Deleting Rawkode Off the Face of the Internet, which is where I delete every one of my DNS records and my domains live on stage.

Guest 1

And because I have a lot of confidence in my automation.

Guest 1

So I automate everything.

Guest 1

I actually wipe my machine every 30 days and restore it. So I wrote a Rust project called Comtrya that takes a YAML interface for all my applications and my dotfiles and provisions them for me. Woah.

Guest 1

But that just does my machine. I was like, oh, well, what about my email? What about my DNS? What about all my domains? So then I started using I used to work at Pulumi, so I was using Pulumi for a while.

Guest 1

But I'm now using Terraform CDK, which is using TypeScript with Terraform to do the same thing. So I've got it hooked up to Cloudflare.

Guest 1

I say here's all my domains.

Guest 1

I give them all my records, and literally I just do cdktf apply or deploy, and it goes and checks everything is correct. And if not, it reconciles them and then shuts back down. I can run that 10 times a day. I could run it once a day. I could run it 14 times an hour. Whatever I want, it's always gonna make sure that what I want actually exists.
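That check-and-reconcile loop is the whole trick. Here's a toy TypeScript sketch of the idea — not CDKTF's actual code, every name here is hypothetical — showing why re-running it all day is safe: when nothing has drifted, the plan is empty.

```typescript
// A toy sketch of the reconcile idea behind "cdktf deploy": compare
// the records you declared against what the provider reports, and
// emit only the changes needed to converge. All names hypothetical;
// CDKTF and the real providers do the actual work.
interface DnsRecord {
  name: string;
  type: string;
  value: string;
}

interface Plan {
  create: DnsRecord[];
  update: DnsRecord[];
  delete: DnsRecord[];
}

export function reconcile(desired: DnsRecord[], actual: DnsRecord[]): Plan {
  const key = (r: DnsRecord) => `${r.type} ${r.name}`;
  const actualByKey = new Map(actual.map((r) => [key(r), r]));
  const desiredKeys = new Set(desired.map(key));

  // Records we want that don't exist yet.
  const create = desired.filter((r) => !actualByKey.has(key(r)));
  // Records that exist but whose value has drifted.
  const update = desired.filter((r) => {
    const existing = actualByKey.get(key(r));
    return existing !== undefined && existing.value !== r.value;
  });
  // Records that exist but are no longer declared.
  const del = actual.filter((r) => !desiredKeys.has(key(r)));
  return { create, update, delete: del };
}
```

Because the function only ever produces the diff, it's idempotent: running it ten times against an already-correct zone does nothing, which is exactly the property that makes "run it 14 times an hour" a reasonable thing to say.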

Guest 1

And it's just a really powerful way of doing it, to the point where I could go on stage at this conference, delete everything with a cdktf destroy, show people my website doesn't work, they can't email me, my URL shortener is broken. But within 10 minutes, have that all spun back up exactly as it was before.

Scott Tolinski

Amazing. Do you talk about that at all on your YouTube channel? Because I am very interested in learning more there.

Guest 1

Yeah. I've got a few videos on infrastructure as code and using these tools. I'll definitely share them with you as well so you can check that out.

Guest 1

Cool. But it's really easy to get started. And what I love about this, right, is that it's TypeScript, you know. One of the things that bugs me about Terraform is when it comes to duplication, you have to do it all the time, because it has no concept of really reusing stuff beyond modules, and modules are a huge pain. But with TypeScript, I can literally just say — I actually have a function on my helper class called enableGoogleWorkspace, and that sets up the MX records. It sets up the DKIM, the DMARC, the SPF, all of that. It just takes the key as a parameter, and it's all done for me. I've got enable Cloudflare Pages, enable Fastmail. I've got enable Sentry. Whatever I want to exist is just 1 function away, and then I can apply that to any domain within my stack. And it just becomes really powerful when you build your own abstractions that fit exactly the model that you have in your head and then apply that to the world.
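A helper like that might look something like this — a purely hypothetical sketch, not David's real code. The record values follow Google Workspace's published DNS setup, but the function name and record shape are invented for illustration.

```typescript
// Hypothetical sketch of the helper pattern described above: one
// function call expands into every DNS record Google Workspace needs
// for a domain. The record shape and names are assumptions.
interface DnsRecord {
  name: string;
  type: "MX" | "TXT";
  value: string;
  priority?: number;
}

export function enableGoogleWorkspace(domain: string, dkimKey: string): DnsRecord[] {
  return [
    // MX: route mail for the domain to Google's servers.
    { name: domain, type: "MX", value: "smtp.google.com.", priority: 1 },
    // SPF: authorize Google to send mail on the domain's behalf.
    { name: domain, type: "TXT", value: "v=spf1 include:_spf.google.com ~all" },
    // DKIM: publish the signing key Google generates for you.
    { name: `google._domainkey.${domain}`, type: "TXT", value: `v=DKIM1; k=rsa; p=${dkimKey}` },
    // DMARC: tell receivers what to do when SPF/DKIM checks fail.
    { name: `_dmarc.${domain}`, type: "TXT", value: "v=DMARC1; p=quarantine" },
  ];
}
```

Each record object would then be fed to the provider (Cloudflare, Porkbun, whatever), so applying the same four-record policy to a new domain is one function call instead of copy-pasted HCL.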

Guest 1

No? Okay. I mean, if I do a destroy and I have a database that it has provisioned, it's gone bye-bye. But then, you know, that's what backups are for.

Guest 1

And if you do it right, you can restore, you can do everything that you need to do, because you can define whatever workflow you want.

Guest 1

So it's a very powerful model and, you can do a lot with it. And the nice thing is, you know, there's a Terraform provider for everything. It doesn't matter if you're using Porkbun, Cloudflare, you know, GoDaddy, whatever.

Guest 1

There's probably a Terraform provider. I mean, there's a Terraform provider to order a pizza. So, I mean, I'm pretty sure it can manage your DNS.

Guest 1

Yeah. Someone wrote a Domino's Terraform provider.

Guest 1

Highly amusing. Not practical whatsoever, but, you know, if you want a pepperoni pizza, why not infrastructure-as-code it?

Guest 1

Yeah. I mean, a happy bug, he got 2 pizzas. But, yeah, I'm sure he spent twice the money. So always unit test your infrastructure as code as well. That's important.

Guest 1

Well, let's have a closing thought from myself, then let's cross the streams. You know? Kubernetes, like I said, it's not going anywhere. I still write containers. I still ship them to Kubernetes.

Guest 1

But there is a really growing presence of changes within Kubernetes to support WebAssembly workloads natively.

Guest 1

So Docker themselves wrote a shim called runwasi, which is a containerd shim that can execute WebAssembly modules as if they were containers, but they're not containers.

Guest 1

That can now be run on a Kubernetes cluster, and then you have 2 different types of workloads: you can tell Kubernetes to run your containers here, here, and here, but you can also tell it to invoke WebAssembly modules on demand based on routes coming into your cluster as well. So hopefully the future we get in 6, 12, 24 months is a single control plane that everyone should learn now, Kubernetes, that can run a multitude of workloads, whether that be WebAssembly, maybe even the JVM. Yeah. There's a big movement right now with GraalVM Native as well.
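On the Kubernetes side, that shim wiring is exposed through a RuntimeClass. A hedged sketch, assuming a runwasi-backed shim is already installed on the nodes — the handler name and image reference are placeholders, since both depend on how the shim was registered with containerd:

```yaml
# Sketch: a RuntimeClass pointing pods at a runwasi-backed containerd
# shim, plus a pod that opts into it. "wasmtime" and the image are
# placeholders; the real handler name depends on your node setup.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: wasmtime
---
apiVersion: v1
kind: Pod
metadata:
  name: wasm-workload
spec:
  runtimeClassName: wasm   # schedule onto the WASM shim instead of runc
  containers:
    - name: app
      image: registry.example.com/hello-wasm:latest  # hypothetical module image
```

The nice property is that everything above the runtime — scheduling, services, scaling — is identical for containers and WASM modules; only the `runtimeClassName` line differs.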

Guest 1

You know, so why not run that on our Kubernetes control plane with our WebAssembly modules and our containers? And then things get particularly interesting. So

Scott Tolinski

After years years of laughing about not understanding Kubernetes, I think I've reached the point where I want to learn it.

Scott Tolinski

So thank you, David. Job done. Awesome.

Guest 1

Yeah.

Guest 1

Yeah. There's been a new project that I discovered just this week that kinda touches on everything that we're talking about. It's called Golem Cloud.

Topic 9 55:41

Golem Cloud offers durable executions for WebAssembly

Guest 1

And what they've done is they're leaning into the durable execution thing that's happening right now. Everybody seems to be talking about durable execution.

Guest 1

And they've said, okay, let's flip this on its head. You write your code, compile it to WebAssembly, we'll run it on our bespoke runtime and give you durable execution, to the point where you can say sleep for 2 years, and that WebAssembly module will be invoked in 2 years' time right where it left off, with the same state, which is just magic in my book.

Guest 1

It's by the people that built ZIO for Scala.

Guest 1

So these are people that are big into functional programming, and that comes through in Golem too. So you get, like, this actor driven workflow system with durable execution and WebAssembly, and my eyes just lit up when I found it. I was like, this is the coolest thing I've ever seen. It's super early. Nobody should go and adopt it in production right now, but definitely go check that out. So, like, you could, like, literally pause the thing in the middle of a function,

Guest 1

Yeah. Like, with workflows, previously you'd have to say, okay, quit this workflow, and then in 2 years' time, start me a new one with this state. Now you just say process dot sleep, and then the runtime understands, okay, let's just pause and checkpoint this and then come back when the time is up. And then your workflow is just a single function that defines everything that you need to do, even though it runs over days, weeks, months, or years, which is really cool.
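The checkpoint-and-resume idea can be sketched in a few lines of plain TypeScript. This is a toy model, not Golem's or Restate's actual API: completed steps land in a journal, and a re-run replays the journal instead of redoing the work, so the function resumes right where it left off.

```typescript
// Toy sketch of durable execution: every completed step is journaled,
// so re-running the workflow after a "crash" replays recorded results
// instead of re-executing side effects. Real systems (Golem, Restate)
// persist this journal durably and handle sleeps/timers for you.
type Journal = Map<number, unknown>;

export function runStep<T>(journal: Journal, index: number, work: () => T): T {
  if (journal.has(index)) {
    // Replay: this step already ran before the crash — reuse its result.
    return journal.get(index) as T;
  }
  const result = work();
  journal.set(index, result); // Checkpoint the result.
  return result;
}

// A workflow is just a function; each side effect goes through runStep.
export function workflow(journal: Journal, sideEffects: string[]): string {
  const a = runStep(journal, 0, () => {
    sideEffects.push("charged card");
    return 42;
  });
  const b = runStep(journal, 1, () => {
    sideEffects.push("sent email");
    return a * 2;
  });
  return `done: ${b}`;
}
```

Running `workflow` twice with the same journal performs each side effect exactly once — the second run is pure replay, which is the property that lets a durable runtime "sleep for 2 years" and pick up mid-function.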

Guest 1

do something like a queued process, queue up their state, and it would be checkpointed. Another project that's trying to do exactly that, though, is called Restate, at restate.dev.

Guest 1

Built in Rust, because all these things are.

Guest 1

And it has something called a keyed service, which can basically queue and process these things sequentially 1 by 1 with whatever workflow characteristics you want. But now I feel like I'm cheating and giving you 2 sick picks.

Scott Tolinski

I'm actually mad because you got both of these last 2 before me. I had them on my clipboard, and you just jumped them in there.

Guest 1

Cool. Yeah. And what's really interesting, all these things are written in Rust, but they all provide a TypeScript SDK, which I think is really cool.

Guest 1

Really just my YouTube channel. Go check out the Rawkode Academy. Like, if you've suffered through this episode long enough to hear me say Kubernetes and WebAssembly, then my YouTube channel is the right place for you to come and learn more. So just check that out at rawkode.live or youtube.com/RawkodeAcademy.

Guest 1

No. Thank you for having me. It's been weird. Like, this is the 1st time I've ever heard both of you speak at 1x, because I usually listen at 2x.

Guest 1

Cool. Alright. But, yeah, thank you so much. It's been fun.
