634

June 30th, 2023 × #Python #Queues #Scale #Sentry

Supper Club × Messaging Queues and Workers with Armin Ronacher

Armin Ronacher discusses designing performant queues and backpressure systems to handle massive scale at Sentry. He also shares his views on Python, JavaScript, and Rust, and on staying up to date.

Topic 0 00:00

Transcript

Wes Bos

Welcome to Syntax, folks. We've got a very good episode for you today. We have Armin Ronacher. How did I do? Perfect. Pretty good? Yeah. Yeah. Perfect. Wow. I don't know if he's being nice, but I'm pretty stoked about that. So Armin is principal architect at Sentry, and I've been following his work for quite a while, even before Syntax joined Sentry. And so the other day, I was, like, in the chat, and I was like, we've got to do a show on message queues. Basically, like, how do you deal with getting lots of requests at once when you maybe can't handle them all at once? Messaging queues is kind of a general idea that's something we haven't really covered too much. So I was just like, hey.

Wes Bos

You know who gets a lot of requests and probably knows a lot about this stuff? Sentry. So I was like, who at Sentry can talk to us about message queues? And that's kind of one of the things that I'm excited about with joining Sentry: you have access to some really cool people. So, welcome, Armin. Thanks so much for coming on. Yeah, thank you for having me. It's a good topic to talk about. I like queues.

Wes Bos

Oh, good.

Wes Bos

That's good. So give us a quick rundown of who you are and what you do. I think you might be the first person on this podcast that has a Wikipedia

Scott Tolinski

page, which is unreal. Tom Preston-Werner has one, but... Oh, you're right. So maybe not the first one. Second. Yeah. Okay.

Guest 2

Yeah.

Guest 2

So I like open source. That's kinda why I'm at Sentry too.

Guest 2

Most of my background is in Python, and we also have a lot of Python at Sentry. I built a bunch of, let's call it, frameworks and utilities for developers.

Guest 2

Originally, Python web frameworks. Like, I built a WSGI library for building web apps in Python called Werkzeug.

Guest built Flask and other popular Python libraries

Guest 2

And then I built a web framework on top of it called Flask, which got quite popular.

Scott Tolinski

Mhmm.

Guest 2

And then I built a bunch of template engines over the years. I built one in Python called Jinja, and then a second version called Jinja2.

Guest 2

I also built the PHP version of it, called Twig, which later on turned into, I think, a Symfony project.

Guest 2

And these days, mostly, I have a strong interest in Rust, and on the side now I'm also doing sort of an attempt at fixing Python packaging.

Guest 2

But I've been, for, I think, 9 years now at Sentry, trying to make everything work in one way or another.

Guest 2

And so I'm based in Vienna, and we have the teams here which generally produce data. So it's the SDKs, and then it's also the ingestion pipeline. So everything that's getting event data into Sentry

Guest 2

is happening here, up to the point where it then hits the database, and then other folks take over.

Guest 2

So I basically have 9 years' worth of experience feeding event data into Sentry in one form or another, both from the client SDK side and the queuing side of things.

Wes Bos

That's awesome. Well, I'm glad to hear that. So it seems like you're the right person to talk about messaging queues. Yeah. In most of the applications that I've built, and it's probably the same for Scott and probably for a lot of people listening, they're just sending a request. They're sitting there. They're waiting for the request to come back, and they sort of deal with that. And they've never gotten into, sort of, like, queuing, or things that can't happen immediately, or hoping that something comes back at some point. So can you give us an idea: what are queues, and what are they used for?

Guest 2

So the short version of how people, I think, get into queuing stuff is: they build an app. It takes an HTTP request, and then it takes a while.

Topic 2 04:32

Queues allow postponing work by putting it into a queue to be processed later

Guest 2

And then for one reason or another, they really want to basically pretend that the work is already done, but it's not done yet.

Guest 2

And so the way this typically gets described is: someone goes to Stack Overflow and asks, like, how do I run this in the background? Yeah. This is, I think, how people get to queues.

Guest 2

And then, usually, the answer is, well, it sounds like the kind of problem for which you need a message queue. And the idea is that you take the kind of work that you want to do, you throw it on the message queue, and then some workers are going to pick it up, work on it, and then maybe eventually produce a result somewhere.

Topic 3 05:18

Queues help distribute work to workers and prevent request handler overload

Guest 2

So these are sort of the typical reasons. Rather than doing a thing right now, you want to postpone the problem for a little bit later, or very much later.

Guest 2

And so the idea is basically: take a bunch of work and distribute it to workers that are targeted to solve this kind of problem.

Guest 2

And one of the reasons why you want to be doing this is because, in a good situation, you free up your HTTP request handlers so that they can do HTTP request handling. And then, when a huge inflow of data comes, they are happy, because they are mostly done with their work after just getting the data into the queue, and the workers then sort of work off the backlog

Guest 2

as you have capacity. And you can kinda scale this up too. That's sort of the basic reason why you have a queue, I guess.

Topic 4 06:15

Queues can run on separate infrastructure but often run on same servers

Scott Tolinski

So when you have, you know, these processes that need to be moved into a queue, is that typically offloaded onto, like, totally separate servers, totally separate infrastructure, or can that often be run on the same?

Guest 2

You mean in Sentry's case or in the general case? Just in the general case, I'd say. I think in the general case, there are a bunch of different reasons why people build a queue. The biggest one is just the initial one:

Guest 2

I want to just have the ability to, like, create a backlog.

Guest 2

Mhmm. And then it's often the same code base.

Guest 2

So particularly in Python, you have systems like Celery that sort of advocate, or advertise, the idea that you just have a function here that's going to be invoked later with the data on the queue. And it's a different process running it, but you wouldn't really notice. Like, you can use all of your same Python code and that kind of stuff.
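A minimal sketch of the Celery pattern Armin describes, assuming RabbitMQ as the broker; the task name and broker URL are illustrative, not Sentry's actual setup:

```python
from celery import Celery

# Point Celery at a broker; here, a local RabbitMQ (illustrative URL).
app = Celery("tasks", broker="amqp://localhost")

@app.task
def generate_thumbnail(image_id: int) -> None:
    ...  # the slow work a request handler wants to postpone

# In the HTTP handler: .delay() serializes the arguments onto the queue
# and returns immediately; a worker process picks the job up later.
generate_thumbnail.delay(image_id=42)
```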

Topic 5 07:11

Python Celery runs queued functions as tasks on the same servers

Guest 2

And then there's something that has, I think, become quite popular in recent years.

Guest 2

I don't know if it's a good plan or not, but, like, a very popular design element has become building ridiculously tiny microservices.

Guest 2

And so message queues are a really good way to sort of make them talk to each other in one form or another. And so then you end up with a situation where maybe you're sending stuff from Python, picking it up in Node, sending it further to a Go backend, and you just build yourself into a crazy, unmaintainable mess this way.

Wes Bos

That's kinda why I'm glad I have you on, because at the end of the show here, I just have a whole bunch of questions of, what are your thoughts on X? Because I know from following you on Twitter and whatnot that you've got opinions on stuff. And I love when we have people on like that. It's like, tell me what you really think.

Wes Bos

So with queues, you're able to, like... if I publish something into a queue, would I be able to, like, listen in my Node app for when something gets added to the queue, or do I poll it, or how does that work?

Guest 2

It depends a little bit, because, like... so first of all, you need to pick a system. Well, you need a bunch of things. So first of all, you need a queue. That could be, for instance, RabbitMQ, it could be Redis. There are a bunch of different things with different qualities and benefits, and they behave in certain ways. And then you have newfangled stuff like Kafka, which is also queue-ish in kind, but actually behaves quite a bit differently.

Topic 6 08:41

Kafka queues require planning for scaling while RabbitMQ queues autoscale

Guest 2

And so the way you interact with the system on a low level is quite a bit different.

Guest 2

But generally speaking, you have, in addition to your queue, some sort of utility that helps you manage this a little bit better. And so in Python, for instance, and this is sort of what Sentry historically uses, you have RabbitMQ sitting behind an abstraction called Celery, which then sits on top of a sort of lower-level AMQP/kombu combo.

Guest 2

And the idea is that you don't really have to deal with all these integral parts of it, because there's a lot of things that you might want to deal with in a queue. Like, you need to serialize the data. You need to deserialize the data. There have to be some policies about the way you route this kind of stuff.

Guest 2

What happens if the task doesn't get acknowledged? Do you want to retry it? How do you want to compose these kinds of things together? Like, there's a lot of stuff that you can do. And so depending on the ecosystem that you're sitting in, you might have this kind of thing going on, where you have some sort of framework on top that helps you with this.

Guest 2

And so in Node, I'm actually not sure what the most popular way is of talking to queues. But in the Python ecosystem, I would say that historically, Celery was the way to go. And, typically, you have some sort of decorator that says, like, hey, this function is a task. And then something else says, like, hey, I want to produce

Wes Bos

an item on the queue that eventually gets handled by this task, and sort of magic makes it get picked up. Yeah. And let's talk more about, like, what types of stuff people generally put into a queue. So one example I have is Amazon. You buy something on Amazon.

Topic 7 10:17

Amazon queues transactions to smooth workload

Wes Bos

And I once had, like, an expired card in there. I tried to buy something, and it's like, great. It worked.

Wes Bos

Your order is done. And then, like, half an hour later, I got an email that says, hey, your credit card declined because it was expired or whatever. You have to go in. So I thought, oh, that's interesting. They don't process the transaction while I'm sitting there waiting for the request to come back. They probably throw it into some sort of queue and then process them as they have time, or... I'm not really sure why they do that. Do you have any other examples of common stuff that gets thrown into a queue? I would say, like, the most common thing is usually anything that talks to an external service that is kind of fire and forget.

Guest 2

So the classic example here is if I want to send you an email. Yep. Very often, I just say, like, send this email, but put it in my queue, because I don't really care if it goes out straight away. Like, it can go out a little bit later.

Guest 2

And my email delivery might depend on the availability of my own email server, or there might be something else going on. So for any sort of 'I need to notify an external service', like my local email server or some external service, on, like, an outgoing thing, a queue is a good example of how you would probably, relatively naturally, try to solve it.

Guest 2

I think anything that's sort of related to an external service, particularly around payments, very often goes through a queue. On payments in particular, it depends a little bit on how the abstraction goes, and Stripe in particular sort of hides a lot of this away from you. But in the past, I had to implement payment processing with a company called GlobalCollect, which I don't know if they're still around, but there was no abstraction. It was very, very leaky. And so even if you only did a credit card transaction, which typically would get processed immediately, that same kind of interface might also send you through PayPal.

Guest 2

And PayPal had this awesome payment flow at the time where, like, 90% of transactions would go through immediately, but then 10% of them might go to: the customer can wire money into a reference account, and that might take days to come back. Right? And so then you want to keep this state somewhere, like, hey, this transaction is still not done. And so you have to periodically check if the thing went through. So you might keep putting tasks into the queue until this thing eventually transfers. And so you start to maintain some sort of external state machine with this, and then you have these tasks sitting there trying to do stuff.
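A hypothetical sketch of that external state machine as a Celery task that re-enqueues itself until the transaction settles; check_status, record_result, and the one-hour interval are invented for illustration:

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")

def check_status(txn_id: str) -> str:
    ...  # stand-in for asking the payment provider about the transaction

def record_result(txn_id: str, status: str) -> None:
    ...  # stand-in for persisting the final state

@app.task(bind=True, max_retries=120)
def poll_transaction(self, txn_id: str) -> None:
    status = check_status(txn_id)
    if status == "pending":
        # Not settled yet: re-enqueue ourselves an hour from now instead
        # of blocking a worker while the customer wires money.
        raise self.retry(countdown=3600)
    record_result(txn_id, status)
```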

Wes Bos

Is that also very helpful for when an external provider could possibly go down? Because Amazon went down last week, and there were kind of 2 emails I got. One was: our service went down, and anything that happened in this 3-hour window is gone forever; they're probably not using a queue. And then the other one was, as soon as Amazon went back up, I got a bunch of, like, delayed emails of, like, x, y, and z is now done, or it's processed, or saved. And I thought, oh, interesting. They probably just filled up their queue, and then once whatever service they needed was back online, they were able to process it through. Is that what people do to avoid going offline?

Topic 8 13:34

Queues help retry failed external requests and smooth workload

Guest 2

So queues are a good way of doing that. At the time that you put a thing on the queue, you kinda have to figure out what it should do when it doesn't work. For instance, payment transactions typically are the kind of thing that you want to give a bunch of tries until you give up. Right? So, like, maybe they have 10 retries, and then maybe you space them out over multiple days.

Guest 2

A very classic example: even on credit cards, people might max out the card by the end of the month. And so if you only give it a single try, then if the card is maxed out, you're just going to lose out on the transaction. So if you keep trying a little bit more, maybe you try for 5 more days, once a day, and maybe eventually the card has balance on it and you can charge it. So that's a classic case where, even if the task fails on the queue, maybe you kinda put it back in, in one form or another. So some queues have a way to say, like, hey, actually, only run this later. At least with abstractions, you can sort of say, like, hey, run this again in, like, a day or something.
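A minimal sketch of that once-a-day retry policy in Celery; the CardDeclined exception and charge helper are assumptions for illustration:

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")

class CardDeclined(Exception):
    pass  # raised by the (hypothetical) charge helper when the card bounces

def charge(customer_id: int, amount_cents: int) -> None:
    ...  # stand-in for the actual payment provider call

# Retry up to 5 times, waiting a day (86400 s) between attempts, in the
# hope the card has balance again by then.
@app.task(autoretry_for=(CardDeclined,), max_retries=5,
          default_retry_delay=86400)
def charge_card(customer_id: int, amount_cents: int) -> None:
    charge(customer_id, amount_cents)
```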

Scott Tolinski

Typically, how does that type of task system work? Is that usually just done through, like, a job that's scheduled at various times to process the queue?

Guest 2

So it depends on the queue. For instance, Postgres actually is a pretty decent queue for the kind of behavior where you want strong persistence, if you have these tasks that take a really long time, because you might have multiple days of retries.

Guest 2

You can actually store it in Postgres. Postgres has a built-in sort of system that can be used for that. And for these kinds of things, where you have these long-running tasks, like jobs that you sort of want to address and introspect and that kind of stuff,

Guest 2

you would often use a system like this.

Guest 2

And, usually, you have some sort of extra component sitting around that helps you execute these long-scheduled things. In Python, for instance, when you use Celery, there's a system called Celery Beat. It can also be used to periodically schedule tasks onto the queue, like, run this once a minute, something like this.
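A small sketch of what a Celery Beat schedule looks like; the task name and interval are made up:

```python
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")

# The beat process reads this schedule and drops a task onto the queue
# once a minute; workers pick it up like any other task.
app.conf.beat_schedule = {
    "sweep-pending-transactions": {
        "task": "tasks.poll_pending_transactions",  # assumed task name
        "schedule": 60.0,  # seconds between enqueues
    },
}
```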

Guest 2

So there are very different ways in which you can do this, even if queues are not naturally able to delay. Like, Redis, for instance, doesn't have much of a queue, but because you can push onto a list, you can sort of build your own queuing behavior.

Guest 2

But then if you want to do things like execute something an hour in the future, you have to reach for more manual ways of implementing it yourself, and then there are certain systems on top that might help you with that.
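A sketch of both Redis patterns under assumed key names: a plain list for FIFO push/pop, and a sorted set scored by due time as one common way to hand-roll "run this an hour from now":

```python
import json
import time

import redis

r = redis.Redis()

# FIFO queue: the producer LPUSHes, a worker blocks on BRPOP.
r.lpush("jobs", json.dumps({"kind": "send_email", "to": "a@example.com"}))
_queue, raw = r.brpop("jobs")

# Delayed execution: score the job with the time it becomes due...
r.zadd("jobs:delayed", {json.dumps({"kind": "report"}): time.time() + 3600})

# ...and have a poller periodically move due jobs onto the real queue.
for raw in r.zrangebyscore("jobs:delayed", 0, time.time()):
    r.zrem("jobs:delayed", raw)
    r.lpush("jobs", raw)
```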

Guest 2

I'm not sure right now what the latest flavor of queuing brokers on top of Redis is. But there are different ways in which you can implement it. And Celery, for instance, has solutions depending on which queue you use; they will have something for you. You could almost do, like, a cron job in your queue, were you saying? Yeah, I think cron jobs on a queue are a very common kind of thing that you do. Yeah. That kinda answers one of the questions I was going to ask next, because a lot of the services you mentioned do seem,

Scott Tolinski

you know, maybe very specialized, whether that is Kafka, or, you mentioned, sometimes putting things in Redis. Like, do people put queues into databases? You did just mention that.

Guest 2

So that is kind of the answer: they do potentially put queues in databases sometimes as well. You can think of a queue as, like, a very simple thing. You put an item in on one end, and then it's first in, first out. That's sort of the idea. But depending on what you want to do with the things on it, that problem gets complicated really fast.

Guest 2

And so there's a whole bunch of very basic queuing theory that is worth having in mind before you actually go on an adventure of trying to build something, because at scale, all of these things matter a lot.

Guest 2

And so it depends on what you want to do with this. The very basic thing you have to keep in mind is, like: is my task idempotent? That means, if the task were to run twice, is it a problem if it runs a second time? If it's idempotent, then the second time it will not create a different result. Maybe it will not run at all because it detected it already ran, or it will do the same kind of action, but not in a destructive way. But there are certain tasks that maybe are hard to implement like this, and if you were to run them a second time, then it would actually count twice or something like this. And another very important part is: if I'm accumulating a large backlog of items because I cannot actually process all the items on the queue fast enough, what do I want to do then? Do I want to throw them away? Do I actually want to scale up and commit to processing this down? Do I want to slice the queue in half, where I say, actually, I've built up such a big backlog, I want to eventually drain it down, but I want to skip ahead and process the items from right now? There are many different ways. And depending on how you implement all of this, certain things become possible or not.
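A hedged sketch of an idempotency guard for the "would count twice" case: record a task ID before performing a non-repeatable side effect, so a redelivered message becomes a no-op. The Redis key scheme, TTL, and helper are assumptions:

```python
import redis

r = redis.Redis()

def apply_side_effect(payload: dict) -> None:
    ...  # stand-in for the destructive work, e.g. charging a card

def handle_once(task_id: str, payload: dict) -> None:
    # SET ... NX returns None if the key already existed, i.e. the task
    # was already handled; expire the marker after a day.
    if not r.set(f"done:{task_id}", 1, nx=True, ex=86400):
        return  # second delivery: skip the destructive work
    apply_side_effect(payload)
```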

Guest 2

And so depending on the kind of thing that you want to put on the queue, there are many different paths you can go down,

Wes Bos

and some solutions are better and some are worse. And how do you typically architect something like that? Is it like a state machine? Is it a bunch of code? Do you have, like, whiteboarding diagrams where you've got arrows pointing everywhere?

Guest 2

So I think the problem at Sentry is sort of: we have a hammer, so every problem looks like a nail, I guess. We have a very specific kind of problem, which is that we have a lot of incoming events.

Topic 9 18:44

Queue design depends on the specific problem to solve

Guest 2

We need to process all of those. And so all of our solutions, more or less, are built around the very fundamental fact that backlogs are terrible. Mhmm. Because if I press pause on our system for, like, a minute, and then I press play again,

Guest 2

the amount of incoming events that has accumulated in this 1 minute is a sizable backlog that's going to take me a while to crunch through.

Guest 2

And at scale, you have this kind of problem where backlogs are really, really bad, and you want to avoid them.

Guest 2

Whereas in many other systems, backlogs are actually kind of why you build this. You build it so that you can sweep up, like, maybe days' worth of backlog and sort of process it down, one item after another. So it really depends on what your problem looks like, what you want to do, and how you do it.

Guest 2

And so even the process of solving it, I guess, comes down to what the specific problem is that you have. Because a lot of those things come down to looking like one of a couple of different types of problems, and then you don't have to go that deep into it. You're just like, okay, this is this kind of problem, so I'm going to use this type of queue. And then from there, you maybe go more into the depths of it. But it's not that you have to overengineer the whole queuing story.

Guest 2

There are some very basic principles, and if your problem looks like one of those, then there are some best practices that you can follow.

Topic 10 20:22

Need to add queues when one queue overloads next queue downstream

Wes Bos

And do you typically configure concurrency? I'm thinking about, like, serverless functions. Like, let's say I'm generating PDFs, and all of a sudden my thing gets super popular, and my queue goes from 8 in the backlog to 8,000.

Wes Bos

How do you typically deal with that? Do you just go and turn the knob on your servers, or do you change the queuing number? I mean, it's really dependent, because, like, for instance, Kafka is very hard to scale up.

Guest 2

Kafka is not a real queue in that sense. But certain systems you can sort of almost naturally scale up. Rabbit is a really decent kind of system to autoscale, because there's one component that sort of dishes out tasks to a bunch of workers, and you can just scale the workers up. Right? Where it gets tricky is if the end result of a task then feeds into another thing.

Guest 2

Because let's say you said, okay, I'm autoscaling based on some sort of really primitive parameter like CPU load.

Guest 2

You might get into a situation where, well, now I have a backlog because of... whatever. Credit cards: like, I don't know, it's launch day, and a lot of people put their credit cards in. All of a sudden, you're going to spend all this time processing credit cards. Right? And so you're automatically scaling up the workers, because you want to handle all of those credit card transactions as quickly as possible.

Guest 2

And then let's say after you're done with the credit card transaction, you're triggering yet another task, and that other task is doing something else that's really slow. Let's say, I don't know...

Guest 2

I really have no idea what it does afterwards. Maybe it, I don't know, creates an account and provisions, like, virtual machines. I don't know what's happening, but there's something slow happening after the credit card. Now the problem that you might have is: you're burning down this backlog of credit card transactions really quickly now, because you actually scaled the whole thing up. But then you overwhelm the next system in line, and it turns out that one is impossible to scale to the same amount.

Guest 2

So scaling these kinds of queues can be tricky, because scaling up one thing and then having that thing produce a lot of items for the next queue might just move the problem downstream.

Guest 2

And so that's why one of the most important things here is sort of backpressure management, which is understanding that nothing is infinite.

Topic 11 22:46

Backpressure throttles incoming load to prevent downstream overload

Guest 2

And the way you can think of backpressure is sort of like a bathtub.

Guest 2

The water goes in and the water goes out. Right? You have water flowing in on the top, and then water leaves through a hole on the bottom.

Guest 2

And you can almost think of it like, after that bathtub there's another bathtub, so you have, like, a connected set of bathtubs. Right? So, for whatever reason, when the 1st bathtub empties out, it goes into the next bathtub. Scaling up means making the 1st bathtub bigger. Right? It doesn't mean much more than that. Or maybe, in this case, scaling up means the hole drains twice as fast. But if the next bathtub is smaller, then it doesn't help me to empty the entire first bathtub into the second one, because the second one is going to overflow.

Guest 2

And then I could sort of make the 2nd bathtub bigger so that it can hold the whole volume of the 1st bathtub. But, eventually, the size of my bathtub is finite.

Guest 2

I don't want to have an infinitely sized bathtub. Yeah. And backlogs are kinda like this: the idea that if I don't have backpressure, I have infinitely sized bathtubs.

Guest 2

And that's a problem, because it means that the first inflow never gets throttled.

Guest 2

And that means I'm committing myself to all of the water in all the bathtubs, and that's a bad idea. What you really have to do is eventually communicate to the beginning of the system: well, I'm actually overwhelmed. I don't want to deal with this right now. So that eventually the pain stops and someone just doesn't give you any more water. Right? The problem with queues is that you kinda get into this idea of committing yourself to all this work that you want to do. But sometimes, and in fact most of the time, it's really important to have a system in place that says: I'm actually overwhelmed.

Guest 2

I don't want any more water. And that sort of very important backpressure design is often forgotten in these kinds of systems. And it's especially bad if you have a system that has really big queues. Because, like, if you designed Sentry from scratch, and, Mhmm, you had an infinite queue, and let's say you had an hour and a half worth of downtime and you sent every single event into the system, you'd commit yourself to doing all this work. Right? Before you can get to any of the new events, you have to burn through an hour and a half worth of old events.
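A toy version of the finite bathtub: a bounded in-process queue where a full tub rejects new work instead of silently committing to it, so the caller can push back upstream (an HTTP handler would answer 429 or 503 here). Purely illustrative, not Sentry's design:

```python
import queue

tub = queue.Queue(maxsize=1000)  # the finite bathtub

def accept(event: dict) -> bool:
    try:
        tub.put_nowait(event)
        return True   # accepted: a worker will drain it later
    except queue.Full:
        return False  # overwhelmed: tell the producer "no more water"
```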

Guest 2

Right? Because that's what the system did: you have accepted all this work already. And maybe that's not what you want. Maybe you want to say, hey, I actually want to prioritize new events now, and then I want to have a 2nd system in place that burns through the backlog that we have accumulated.

Guest 2

There are all these kinds of really important basic ideas about how you deal with this when stuff is not working well. And, usually, the answer is not to blindly scale it up. You kinda have to understand if your system is actually capable of doing that.

Scott Tolinski

Like, what I'm getting is that there are typically, or can be, multiple queues. Right? You're not just, like, looking at 1 big line, essentially. Yeah. You usually have, like, multiple things that are working

Guest 2

in parallel, independent of each other, but then every once in a while, you spill from 1 queue into the next queue.

Scott Tolinski

Is there, like, an inflection point at which you would add another queue instead of

Guest 2

tossing more resources at it? Like, what is that inflection point? And does that go along with the bathtub metaphor you were using? I mean, if you have things that work completely independently of each other, you would try to keep them separate, at least from a code configuration point of view, so that you can split them onto independent queues. Whether you then actually keep them independent or throw them on one big thing... In Rabbit, for instance, you can sort of observe it. It depends a little bit on how these different things interact.

Guest 2

On Kafka, it's much more complicated. There, you generally have to separate this out from the beginning. You have to spend a lot of time thinking about how you're going to scale it up, because Kafka doesn't have the kind of fair distribution of work that you have going on with Rabbit. If I throw 1,000 tasks into Rabbit, they're going to get dished out one after another to workers as they become available.

Guest 2

And in Kafka, I predetermine which worker is going to get which items. So if a worker doesn't make progress on any one of those items on a partition, none of the items that are in line afterwards are going to be processed either. So it depends very much on what the workload is.
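A sketch of why that happens: in Kafka the producer, not a broker-side scheduler, decides the partition, typically by hashing a key, so everything behind a stuck item on that partition waits. The partitioner below is a stand-in, not Kafka's exact algorithm:

```python
import zlib

NUM_PARTITIONS = 16

def partition_for(key: bytes) -> int:
    # Stand-in for Kafka's default partitioner: the same key always
    # lands on the same partition.
    return zlib.crc32(key) % NUM_PARTITIONS

# All events for this customer land on one partition, in order. If the
# consumer of that partition stalls on one slow event, everything queued
# behind it on that partition waits; adding workers elsewhere won't help.
print(partition_for(b"customer-1234"))
```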

Wes Bos

Wow. So let's talk about how you do it at Sentry.

Wes Bos

Specifically, I'm curious.

Topic 12 27:09

Sentry processes 300,000 requests per second, filtering with SDK backpressure

Wes Bos

Do you even know how many events Sentry gets? Because I think about, like, I write 1 incorrect console.log, or I have one error on my thing.

Wes Bos

And if I have 1,000 people visiting my website, it's sending many events to Sentry. Right? Like, how do you possibly handle that much traffic and data coming your way?

Guest 2

So Sentry is... it's interesting, because if you talk about what an event is in Sentry, there are actually different kinds of things that can be sent to us.

Guest 2

There are errors, there are session replays, there are performance metrics, there's session data. Mhmm.

Guest 2

So they all are different.

Guest 2

And at any point in time, I think the load balancers for pure event ingestion, including all of these different kinds of things, handle, on a regular day at peak, I think around 300,000 requests a second.

Guest 2

A little bit more than that, I think.

Guest 2

So it's pretty large. Right? And this is why backlogs are really annoying: if you just wait a little bit, it's going to be a lot. Right? But not all of them are going to be immediate items that make it onto a queue. So as an example, every error typically makes it onto the queue, but not every error that makes it to our ingestion system is kept.

Guest 2

A good example: a customer is over their quota, or didn't pay, or anything like this.

Guest 2

We don't want to send this event onwards because it would be pointless.

Guest 2

The way I would put this is: we have a system in place where we extend our queue all the way to the client.

Guest 2

So we write our own client SDKs, and the client SDKs cooperate with the rest of the system to implement backpressure management all the way into the client. So as an example: if I go viral with a mobile app, I might only pay $29.99 to Sentry, or whatever our cheapest price plan is. But that app is viral, so it's installed on 2,000,000 devices. And, I don't know, 0.1% of them crash.

Guest 2

That's going to be a lot of traffic to us, and it will be more traffic to us than I'm probably willing to entertain for that small developer.

Guest 2

And so what we actually do in that case is: our ingestion system keeps track of what quota every customer has.

Guest 2

And for a particular customer that is already over quota, I will communicate this all the way to the client SDK, which then eventually stops sending until, let's say, 30 minutes in the future. Right? So I can sort of communicate backpressure all the way to the client to make the pain stop. Important rule number 1 is you kinda have to tell things to shut off, because if I were to not have that system in place, it would be way more than 300,000 requests a second. Right? This is already after we drop a lot of events.
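A hypothetical sketch of that client-side contract: on a 429 the SDK stops sending until the advertised deadline and drops events in the meantime, counting what it dropped (which comes up next). The endpoint, header handling, and counter are invented for illustration:

```python
import time

import requests

INGEST_URL = "https://ingest.example.com/api/store"  # made-up endpoint

disabled_until = 0.0
dropped = 0

def send_event(event: dict) -> None:
    global disabled_until, dropped
    if time.time() < disabled_until:
        dropped += 1  # per the contract: don't buffer, drop and count
        return
    resp = requests.post(INGEST_URL, json=event, timeout=5)
    if resp.status_code == 429:
        # Server says "over quota": back off for the advertised window.
        retry_after = float(resp.headers.get("Retry-After", 60))
        disabled_until = time.time() + retry_after
        dropped += 1
```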

Guest 2

In order for us to know how much we lose, we actually count.

Guest 2

Every SDK, when it doesn't send an event, counts it up, and they will periodically send us how much they didn't send, so we can extrapolate per customer what the true volume would be if we weren't doing this. Mhmm. But once all of those events make it into the first point of ingestion, which is a system called Relay, we split them up into different kinds of data. So errors will route differently from, for instance, session metrics.

Guest 2

And session metrics, in that case, would be pre-aggregated on Relay and then flushed out once every 10 seconds.

Topic 13 30:34

Sentry queues route events by type - errors go to Kafka, metrics aggregated

Guest 2

So we reduce the total amount of individual events that make it somewhere.

Guest 2

In any one of those cases, though, once they go through all of the system, they will end up on Kafka, which is its own beast.

Guest 2

But one of the things that Kafka is very good at is that it basically goes to disk. So if we have, for whatever reason, an extended downtime between two of those systems, we can buffer effectively without limits, like, until the disk eventually says, I'm full.

Guest 2

But we have a lot of space to keep data if something goes really wrong.

Guest 2

Rabbit, on the other hand, we don't have that benefit, because the way we operate Rabbit, it doesn't scale for us anymore if we had disk storage. So it has to work purely out of memory. And so we have to limit how many events can actually make it into Rabbit. So there's another system in place later on that sort of tries to prevent us from putting too many events into Rabbit, so that Rabbit can operate properly and is kept in a reasonable state.

Guest 2

And then all the events make it through a really elaborate processing pipeline where certain events go through immediately.

Guest 2

Other events, like minidumps, might require downloading debug files. Those can be gigabytes in size.

Guest 2

And so that means that for an individual customer, 1 crash comes in, but the 1st time we're going to handle that crash, we might have to download 2 gigabytes' worth of debug files.

Guest 2

And so then a worker has to pick that up, do all this stuff, and hopefully keep the caches around. And so it gets quite elaborate, because we don't know ahead of time how long an individual event is going to take. We can make some educated guesses that, for instance, a Python event will be quick, because it doesn't require a lot of processing.

Guest 2

And we can make the educated guess that a C++ event will always be slow. But the difference between cached and uncached on a lot of those events is multiple orders of magnitude.

Guest 2

So a good example, like, if you have a JavaScript event, you can sort of think of it this way quite easily.

Guest 2

It's a stack trace, and it's minified, and it looks like garbage.

Guest 2

So we have to find the source map for it.

Guest 2

And, in the worst case, we have to go to the Internet and download the source map. Because you have the minified JavaScript file sitting there on the Internet, we fetch that. And then there's a source map reference in it, and you didn't upload the source map to us, so we also have to go to your server and fetch that. And maybe the reason your website is crashing right now is because your server is overloaded. So we come in and try to fetch even more from your server. We try to get the source map, and it's going to take 30 seconds. Right? That means your one event doing this keeps us busy for 30 seconds, not doing anything valuable. Right? That's very unpredictable for us. Like, once we have this stuff cached, it will not take 30 seconds anymore. It will take milliseconds.
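A rough sketch of that worst-case uncached path: fetch the minified file, find its sourceMappingURL comment, fetch the map. The URLs and timeouts are illustrative; real processing would cache every step:

```python
import re

import requests

def fetch_source_map(minified_url: str) -> bytes:
    # Fetch the minified JS from the customer's (possibly overloaded) server.
    js = requests.get(minified_url, timeout=30).text
    m = re.search(r"//# sourceMappingURL=(\S+)", js)
    if m is None:
        return b""  # no reference: nothing to fetch
    map_url = requests.compat.urljoin(minified_url, m.group(1))
    # Potentially another slow round trip to the same struggling server.
    return requests.get(map_url, timeout=30).content
```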

Guest 2

But the difference between milliseconds and 30 seconds is, for this kind of system, really, really annoying.

Guest 2

And so it makes it annoyingly hard to predict how it's going to behave.

Scott Tolinski

Wow. With this much information coming in and out, are there, like, standard ways of visualizing it and being able to see it? Because any queue that I've worked on has been small enough that you could visualize it in a table. You could see: here are the things that are processing, here's their status, whatever. But with this many events, you can't possibly do that. So what is, like, an actually useful visualization of any of this? Yeah.

Guest 2

The way you would ideally visualize it, and this is not how we do it, this is, I think, how we wish we could do it, is a form of cohort tracking. Because, basically, the way you visualize these queues, unfortunately, is just a bunch of numbers.

Guest 2

It's time series: how many events do you have at the points in time you measured. And so, ideally, what you would be doing is you would sort of say, okay, this is the time when the thing was first put into the queue, like, when you first saw it. And then you'd say, okay, I'm going to take the timestamp modulo 30 or something, so I have 30 buckets per hour. Like, every 2 minutes, I have a bucket, and I would sort of track this. And then I could, in theory, see 30 segments over time making it through my system, and I can see if 1 of the cohorts is doing worse than the others, and it kind of tracks what my average latency through the whole thing is. There are ways in which you could do that at scale quite nicely (see the sketch below). We definitely don't do that. And part of the problem is we lose a lot of knowledge in the system. So I mentioned earlier, for instance, that in Relay we do this pre-aggregation, so metrics that come in basically get flushed every 10 seconds into something. But we cannot really flush every 10 seconds, because it depends a little bit on the project. So let's say we collect your data, and then after 10 seconds, we want to flush this project onwards: all the metrics captured for this project on this particular Relay should be sent forward.

Guest 2

It could be that we cannot actually forward this project because we're waiting for the config of that project.

Guest 2

Relay basically fetches, for every customer's project, the config that influences how it behaves. And so it might be that we have a network connection error between Relay and Sentry and that we cannot send this. And so we would have to, in theory, keep track of every single incoming metric and when it was sent to Relay, not just this 10-second bucket that we want to send onwards. Right? But every time we sort of buffer things together, we lose all of this information about how long something took. Because it's like, okay, this package of stuff contains data:

Guest 2

the oldest came from this minute, and the newest came from that minute, or something like this. But whenever we do this sort of batch processing, where we read multiple items together, we lose all the information that was there about how old it is and where it came from. So we have very, very questionable visibility into this really complex queuing system. And with the power of hindsight and many years of building this, I would probably spend more time making it visualizable, but it kinda is what it is. And you have only very crude tools to deal with this.
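A small sketch of the cohort idea from a moment ago: bucket each item by when it first entered the pipeline (two-minute buckets, 30 per hour) and count how many of each cohort are still in flight; the sample timestamps are made up:

```python
import time
from collections import Counter

BUCKET_SECONDS = 120  # 30 two-minute buckets per hour

def cohort(first_seen: float) -> int:
    # Bucket by when the item was first put into the queue.
    return int(first_seen // BUCKET_SECONDS) % 30

# first-seen timestamps of items currently in flight (sample data)
in_flight = [time.time() - age for age in (5, 70, 70, 400, 2000)]

# Plotting these counts over time would show whether an old cohort is
# stuck while newer cohorts flow through normally.
print(Counter(cohort(ts) for ts in in_flight))
```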

Topic 14 36:10

Sentry lacks visibility into complex queues and data flow

Wes Bos

Unbelievable. It just blows my mind how complex stuff can get when you start scaling. I can't imagine all the little edge cases. Like, even just... I had a doctor's appointment that got moved, and I got a text message about it being moved. And then an hour later, I got a text message for the old appointment as a reminder. And I was like, that's just one example of... I don't even know if that's message queues, but it's just one example of an edge case that someone has not taken care of. I can't even imagine the type of stuff that goes into even just sending an email notification; I'm sure the logic behind that is complex.

Topic 15 37:25

Mobile SDKs are distributed queues outside Sentry's control

Guest 2

So I'll give you a really cool example of something that happened many years ago, but that took us a really long time to figure out. So I mentioned earlier that we do this thing where we communicate from our ingestion system to the clients when they're out of quota. Yeah. Right. We say, like, hey.

Guest 2

You didn't pay enough. And the message we really want to give to the client is: until that point in time where quota is available again, don't even try to hold on to those messages. Throw them away. Right? This is the contract that we have. It's like, if for the next 30 seconds you cannot send data, then after 30 seconds, please don't send me 30-second-old data. That's the contract.

Guest 2

There was an SDK that didn't uphold this contract. It was basically retrying. Like, it said, okay, I'm going to buffer these 30 seconds' worth of data.

Guest 2

And then, when I get quota again, I will send it. Now the problem is this was a mobile app, and so all of those devices were buffering everywhere, and they were trying to send their ever-growing backlog of old event payloads. Right? But we never saw this. The only way we figured out that this was happening is that for some of the largest customers, the events that they saw in the dashboard got collectively older. Right? Eventually, everything was days old, because they had all of those devices with local buffers trying to send age-old event data. That was many years ago, but this is why we have so much more data today on what these clients are doing, because they are completely out of our control. Like, we deploy SDKs to them in one sense, but we're hoping that the customers update those SDKs.

Guest 2

But they are an integral part of our queuing system Because they literally are a cue on the client.

Guest 2

And if they misbehave, we can only guess. Oh, man. What that means for us.

Guest 2

But you can get them into horrible states, and then you're really screwed, because, yeah, they're just out there doing stuff to you. You can really hurt yourself if you do something stupid.

Wes Bos

You can't tell it to stop sending you stuff.

Wes Bos

It just keeps going. That's actually crazy, where for a part of the service, you have to just give it to somebody else and say: run this for me.

Wes Bos

Hope you got it right.

Guest 2

I guess that's how, like, JavaScript works too, but at least with JavaScript, you refresh the page, and you can at least update. Mobile is hard, because it's, like, multiple layers away from you. First of all, you can fix a bug in the SDK. Mhmm. And then you have to hope that your customer updates and puts it in their app, and then you hope that every single customer updates

Scott Tolinski

the app on their phone. And that it gets approved through the...

Guest 2

Through the app store. Yeah. So it takes a lot longer for this fix to go out. Oh, man. So the way we fixed this originally was: we lied. For this particular SDK version, we decided not to send a 429.

Guest 2

We just lied and said, like, hey, actually, it went through, just to drain out these distributed queues that we had everywhere. Mhmm.

Topic 16 40:13

Sentry lied to SDKs about quota to clear distributed queue backlog

Scott Tolinski

That's wild. I had a question about Rust; you mentioned that you're doing a lot of Rust right now. I was wondering, like, what kind of projects or things you're finding interesting in the Rust space right now, and what type of work you're doing there?

Guest 2

So at Sentry, Rust sort of came naturally, in a sense. Like, it worked. Maybe it didn't work perfectly, but it was good enough to solve a particular problem. The same way, I think, Python and TypeScript at Sentry were picked up early.

Guest 2

We kept using them.

Guest 2

And so we kept using them where they worked, and we didn't use them where they didn't work.

Guest 2

For us, I think the parts where Rust works really well are 2 pieces, or 3, actually. 1 is our client-side CLI tool, which is written in Rust. That makes it very easy to distribute.

Guest 2

Then we have the core ingestion system, where Rust, for performance reasons, is quite nice, and the typing makes it nice. There's a bunch of reasons why it's a pretty good choice, I would say.

Guest 2

And then, historically, Rust at Sentry was in a space where we needed to do native crash reporting.

Guest 2

And so for dealing with binary data, native PDBs, DWARF files, even source maps, Rust is just a really good language, because Rust is written in Rust, and so there's a lot of tooling around compilers and compiler ecosystems.

Guest 2

And really the only competitor in that space is C++, which we used earlier. We used LLVM for this, and it's not nearly as nice to use. The developer experience of Rust is so much better than the developer experience of C++. Mhmm. And so I'm mostly paying attention to anything in that space, which is high-throughput backend processing, data processing, that kind of stuff. I'm not paying too much attention to what Rust is doing in gaming, or even what it's doing in WebAssembly. While I find it incredibly fascinating and I'm toying with it a lot, I'm not paying that much attention to it compared to, say, distributed tracing, queue... Mhmm... services, that kind of stuff.

Guest 2

But I think what I find most interesting at the moment in the Rust ecosystem is that there is a growing set of projects reviving Python going on in the Rust ecosystem.

Guest 2

So there's a library called PyO3, and there's a project called Maturin, I think. Maturin. I don't know. I have no idea how to pronounce it. But basically, these 2 things together let you write Python extension modules in Rust.

Guest 2

And that is actually quite interesting, because Python is growing, I would say, mostly in the data science space, or the data processing kind of space. And historically, you wrote a lot of that stuff in C and C++. Like, SciPy, NumPy, they all have a lot of C and C++ code in them.

Guest 2

And now you can write Rust in it, which is a lot more fun.

Guest 2

And so there's a growing number of tools in the Python ecosystem that are written in Rust, and that, I think, is quite interesting. It lets you do a lot of interesting things where performance historically has been a problem and where writing in C++ wasn't that much fun.

Wes Bos

What do you think about Python in the, like, AI space? Because it seems that almost all the AI stuff is written in Python. Do you think that will continue to be so? Or why is everything in AI written in Python?

Guest 2

I don't know why everything is written in Python, but there's definitely a lot of Python code in that space, and that's just a reality. And so I think it's probably mostly just an effect of where all of the stuff is coming from, and that there were good libraries to do this kind of processing.

Guest 2

And so it's a case of: there was already stuff there.

Guest 2

And I feel like, while there were some competitors to Python, and there still are, Julia and others didn't catch on quite as much as everybody was thinking.

Guest 2

That's sort of my interpretation, at least, of what's going on.

Topic 17 44:25

Python leads in AI due to existing data science libraries

Guest 2

And so it's just a it's a fact that Python is used a lot in that space.

Guest 2

I think the warts of Python are clearly not problematic enough to completely ruin that experience, but... Mhmm... I mean, I hate packaging in Python. It makes me angry, and I even started my own Python packaging tool for that reason, because the developer experience around Python is just really, in some sense, from the last century.

Guest 2

And and I know that a lot of, like, AI folks are also complaining about this.

Guest 2

There are a lot of competing Python ecosystems now, with Anaconda, with the different kinds of packaging.

Guest 2

So some of the machine learning, AI kind of stuff is definitely splitting into many small communities as a result of this.

Wes Bos

Let's hear it straight from you. What do you think about JavaScript, and JavaScript on the server?

Wes Bos

Don't hold back on us.

Guest 2

It's not like I hate JavaScript, but I don't understand modern front-end JavaScript anymore. And the problem with modern front-end JavaScript is that it's basically back-end JavaScript, from what I can tell. It's React Server Components. I'm just so confused.

Guest 2

It's so complex. Like, I understand that it's supposed to make everything simple. And I feel like there's a certain type of programmer with whom this really, really resonates.

Guest 2

But if you've been writing applications in a certain way for multiple years, there are certain things where you feel like: okay, as someone writing this kind of application, that's the kind of problem that I keep in mind. Like, this is how you do authentication. This is, like, how you audit things. You write things in a certain way because of best practices. And then you look at React Server Components, which is, I guess, how you write back-end JavaScript these days.

Guest 2

And all of those things are just completely underexplored, and it's unclear how to do them. And it's so complex, and you try to integrate with this in ways that are nonobvious.

Guest 2

It's very complex, shockingly so. So, like, the hello world of React Server Components looks really nice, but I have no idea how it's going to work in practice.

Guest 2

The growing complexity of the JavaScript ecosystem is just really, really high. And I think now there's also a little bit of churn in that space. Like, I've definitely noticed that we have SDKs for Next.js and others, and there is a lot more breakage now compared to historically, because of how quickly this ecosystem is changing. Yeah. I always wonder about that, because it does feel like there are updates all the time to your SDKs.

Scott Tolinski

And I wonder, like, just what type of workforce it takes to keep all that stuff up. I mean, it's really exciting, but it's also

Wes Bos

shockingly complex. I would love to see, at, like, the end of the year, a roundup of the most broken

Guest 2

frameworks in Sentry. Like, I don't know if you can run stats on that. I think we asked David something like this. I would guess Next.js is, like, the one that has the most churn.

Guest 2

Not necessarily because it's the most unstable, but it also has a lot of users. Yeah. And so, like, the bugs scale with utilization too. Definitely, I think it is, oh yeah, by far. That would be my guess. Interesting.

Topic 18 47:56

JavaScript ecosystem has high churn increasing SDK maintenance

Wes Bos

Should we get into supper club questions here, Scott? Yep.

Wes Bos

Alright. These are a set of questions we ask everybody who comes on the episode.

Wes Bos

First one is, what computer, mouse, and keyboard do you use?

Guest 2

So I have a bunch of MacBooks.

Guest 2

I have Sentry-issued hardware, which is a 14-inch MacBook Pro, and at home I have a 16-inch MacBook Pro.

Guest 2

I also have a Windows computer, which is sort of a self-built thing.

Guest 2

Keyboard... usually, to the annoyance of my wife, because it's very clicky-clacky.

Guest 2

I think you can't make it more clicky-clacky than it is. And unless I'm playing computer games, I mostly use a Magic Trackpad thing from Apple as a mouse. What about your

Scott Tolinski

text editor, theme, and font? Text editor is

Guest 2

either Vim, or now I use a little bit of Helix, but I have my own fork that is a little bit more like Vim. Or VS Code with the Vim plugin.

Guest 2

Font, I use MonoLisa, because a friend of mine made it. So I quite like that font.

Guest 2

Theme: for Vim, I have, like, a really old one called fruity, which I've had for ages.

Guest 2

And in VS Code, I have, I think it's Night Owl or Midnight Owl or something. I don't know. It's pretty blue. Dark blue.

Wes Bos

Yeah. Yeah. That's Sarah Drasner's. We had her on the podcast.

Wes Bos

Yeah. I think that's the one. Night owl, it's called. That's a really nice one. What about terminal and shell?

Guest 2

Z-s-h, how do you call it? Zsh?

Wes Bos

Yeah. Zed-s-h. That's the Canadian way. Yeah.

Guest 2

On Mac, iTerm2. On Windows, whatever the Windows Terminal thing is called these days, the new one.

Guest 2

And I actually don't use Linux at all at the moment,

Wes Bos

But I used to just use GNOME Terminal. I'm surprised by that. From the first question, I was like, this guy definitely uses Linux. But no.

Guest 2

Yeah. I don't know.

Guest 2

Too many other things to worry about, I guess. No, the problem is, like, other than a little bit of open source hacking, I mostly use my computer for sharing pictures and stuff with my wife.

Guest 2

And Mac is impossible to beat, and Mac is also impossible to replace. Like, the Mac ecosystem of apps is impossible when using Linux. So... Yeah. Totally.

Scott Tolinski

Yeah. Totally. Because you think, oh, it'd be great running a Linux system full time, and then the moment you do, you're like, oh, wait... everything that I actually use. I think the only sort of family-friendly ecosystem that actually works on Linux is Google stuff.

Guest 2

And I got burned so hard, so many times, by Google things that I no longer trust the company.

Wes Bos

Yeah. Everybody's scared of that. Yeah. Topical too right now. I have a question here that's not on our list, but I'm very curious. So you have a GitHub that's one of those, like, walls of green. And obviously, part of that is your job; you have to commit code. But do you have any advice for staying motivated and continuing

Guest 2

with open source? I don't actually think that my GitHub is a particularly green wall. Maybe this year again, but, like, historically, it hasn't been. I'm very bad at... yeah, I think, like, very bad at keeping myself motivated. Most of the green is probably, like, pull request reviews and not actual commits. Oh.

Guest 2

I don't know. I'm very bad at staying motivated. Like, I probably have many more projects that I started and never pushed anywhere. Actually, I'm sort of embarrassed, so I don't put projects on GitHub anymore unless I feel like they're going somewhere, because I had a long stretch where there were just, like, dying pieces of uninteresting stuff on there. Mhmm. Yeah. I don't know. I still find Sentry's problems really interesting, because they have grown to be bigger problems, and they are sort of motivating out of principle.

Guest 2

Staying motivated for open source libraries is, to be honest, a function of adoption.

Topic 19 51:55

Open source motivation follows project adoption more than passion

Guest 2

And whether you're going to get that adoption for something or not does not correlate with how much you like the project. I have some projects that I would really like to work on, but nobody else cares, and then I lose my motivation too. So...

Wes Bos

Yeah. Yeah.

Guest 2

Where do you go to stay up to date with stuff? I don't stay up to date with stuff, I think, is the short answer. So I use Twitter, and I use Reddit. And I maintain a long mute list of topics on Twitter, because, yeah,

Guest 2

it kinda, like, makes me not emotionally healthy to read about certain things there.

Guest 2

Yeah. Yeah. The churn in certain ecosystems is really demotivating.

Guest 2

So, and a lot of modern open source projects, they're very pushy and, like, marketing-y and that kind of stuff.

Guest 2

And so I just mute this stuff out of personal health, and then I miss out on a lot of new things. So

Wes Bos

I don't know if that's a good answer, but I really try not to in some ways. Well, it's working for you, so that's good. I have a pretty long mute list as well, but I don't do it with tech topics because I'm

Guest 2

anxious that I'm gonna miss something. It's more just the other stuff. But your job is to stay, like, up to date with this kind of stuff. Yeah. I can't get away with not doing that. I think there's specific creators and companies that I'll mute just because I know

Scott Tolinski

the vibe is too in-your-face market-y to me, and it's like I'm not gonna gain anything from them. But, generally, yeah, my mute list is

Guest 2

straight up topics I don't wanna see. I have a theory of staying up to date, which is basically you get good at something. Let's say server side rendering. It doesn't matter which topic.

Guest 2

And it's going to get out of favor for 8 years. And then in 8 years, it's going to come back around, and you're going to still be well informed. So just ride the wave of whatever it is. It's right here.

Topic 20 53:55

Ride hype cycles - master one thing until it's back in favor again

Wes Bos

Mhmm. Oh, I like that. That's great.

Wes Bos

Alright. The last section we have here is sick picks and shameless plugs.

Wes Bos

Did you come prepared with a sick pick and shameless plug? Yeah. So I guess the

Guest 2

pick that I would make here is there's an Austrian band called Bilderbuch.

Guest 2

They just released a new song, I think, last Friday or something.

Guest 2

And they kinda do, like, songs that are the kind of music that you'd listen to in the summer. And I think they mostly release their albums around the summertime. But they released the song again, and I think the album is going to come next. And it's like whenever summer is coming, I'm listening to the latest stuff they release.

Guest 2

It's really good. What kind of music is that? I have no idea. It's like I should Google this.

Guest 2

Art pop? I don't know. Honestly, I don't know that many bands that have that kind of sound, so it's very hard to relate it to anything.

Guest 2

Yeah. I don't know. I just really like them. They're not well known, but really I mean, like, half the lyrics are German, so it doesn't really translate that well. But it's, like, it's more about the vibe. The lyrics are pointless anyway. So

Wes Bos

Yeah. Yeah. Well, Scott found it. Oh, yeah. I found it. Oh, hold on. We need to spell this for anyone listening: B I L D E R B U C H. Beautiful.

Guest 2

And shameless plug. Shameless plug, my package manager.

Guest 2

It's not Python.

Guest 2

So what's it called? It's called Rye, like the grain. Because we use Python a lot at Sentry, we use it in a specific way, and all that infrastructure isn't there for my hobby projects. I kept maintaining on the side, like, just a way to make pip suck less, or getting Python binaries onto your system in a way that doesn't involve Python.

Guest 2

And then I just I actually released it on GitHub for others to look at, but it's quite a bit older than that.

Guest 2

And it got quite popular in the last month or so.

Guest 2

And so, if you want Python packaging to be fixed, maybe look at the project, shout at me, and see if it's a good idea or not. But it's reasonably fun to maintain at the moment, I would say. That's great because I

Wes Bos

dipped into Python a couple of weeks ago because I needed something that was a Python project.

Wes Bos

And, like, it's wild that, by default, it just globally installs everything. And sometimes there's a list of dependencies that you need, but there's no, like,

Guest 2

package file. Is there, like, a package.json equivalent in Rye? There's now oh, it's not a Rye thing, it's a Python standard. It's called pyproject.toml.

Guest 2

But it's very, very light. I think the Python community likes to have opinions, but then it doesn't hold them very strongly. And so there's, like, a standard called pyproject.toml.

Guest 2

But rather than there being, like, a package.json and one tool that works with it, Python now has 13 tools working with it, and they're all incubated somewhere, like this Hatch, this whatever. Like, Hatch, PDM.

Guest 2

It's just I realized that there's an enormous list of them, and they all have slightly different interpretations of what you can do with this file format. But, technically, it's there now. Sounds familiar.
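For reference, a minimal pyproject.toml under that shared standard looks roughly like this; the project name and dependency below are made-up placeholders, not something from the episode:

    [project]
    name = "my-app"              # hypothetical project name
    version = "0.1.0"
    requires-python = ">=3.8"
    dependencies = [
        "requests>=2.31",        # example dependency
    ]

Hatch, PDM, and Rye all read this same [project] table; the differences Armin is describing live in the tool-specific sections and workflows layered on top of it.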

Guest 2

But it is wild. It's really wild.

Guest 2

Python has a lot of opportunity to make a better developer experience, I would say. Sweet. I'm gonna check that out because I was struggling

Topic 21 57:15

Python packaging is fragmented without clear standards

Wes Bos

with virtualenvs a couple weeks ago, and I finally just gave up and used a hosting service.
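For context, the standard-library route around those global installs is a virtual environment, roughly like this (requirements.txt is just the conventional file name, not something from the episode):

    python3 -m venv .venv              # create an isolated environment in ./.venv
    source .venv/bin/activate          # activate it (POSIX shells)
    pip install -r requirements.txt    # packages now land in .venv, not globally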

Wes Bos

Yeah.

Wes Bos

Just trying to get the right version of Python, and it had to run Python 3.7.

Guest 2

And there's the alias there. Great, because, like, the way Rye works, and this is why I built it, is because, like, I basically declare bankruptcy on everything. So you use Rye, and it manages Python for you in a way that you just say, like, I want 3.7, and you get 3.7. That was always the hardest part of Python to me.
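As a rough sketch of that workflow, assuming Rye's documented commands (check the project's README for the exact invocations):

    rye pin 3.7     # record which Python version the project wants
    rye sync        # fetch that interpreter and set up the project environment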

Wes Bos

Exactly. That's exactly what I need. I tried to emulate rustup and Cargo for Python. That's sort of what I did. Okay. Definitely gonna check this out. Because, like, I have patience for figuring issues out with Node.

Wes Bos

But, like, when you're in Python, it's like, this is not my space, you know? And then it's just like nothing works and

Guest 2

Stack traces everywhere. Okay. I wanna check this out. The thing is, it's so close to working now. Like, the whole ecosystem is converging onto, like, standards and opinions and stuff, but nothing wins. Like, everybody has their own little independent version of it, and nothing is, like, fully executed to, I don't know, to being good, I think.

Guest 2

I can't promise that this thing is going to eventually solve it. Maybe it can. Who knows? We'll see. Awesome. Well, thank you so much for coming on. Appreciate all your time and insights into this. You're welcome. Peace. Alright. Peace.

Scott Tolinski

Head on over to syntax.fm for a full archive of all of our shows, And don't forget to subscribe in your podcast player or drop a review if you like this show.
