774

May 27th, 2024 · #javascript #promises #concurrency

Promise Flow Control, Concurrency, Libraries, TypeScript and Deferreds - Part 3

Discussion of advanced promise concepts like Promise.withResolvers(), fetch promises, flow control, concurrency, throttling, TypeScript typing, and more.

Topic 0 00:00

Transcript

Scott Tolinski

Oh, welcome to Syntax. On this Monday hasty treat, we're gonna be hitting you with part 3 of 3 in our series on promises. We're gonna be talking about some new stuff, like Promise.withResolvers().

Scott Tolinski

We're gonna be talking about, you know, practical stuff with fetch. We're gonna be talking about flow control, concurrency, throttling, queues, all of those types of things. Basically, we got the basics. We got the intermediate kind of syntax stuff.

Topic 1 00:28

Going deeper into promises

Scott Tolinski

Now here's what we want. We wanna learn how to get even deeper with this. And if you're getting deeper into any type of new topic or code, you're gonna make mistakes. You're gonna cause bugs. Potentially, you're going to end up with some performance issues as well. And to track all of that stuff and make sure that you have a handle on it, you're gonna want to use Sentry at sentry.io/syntax.

Scott Tolinski

That's s e n t r y dot i o forward slash syntax.

Scott Tolinski

Sign up and get 2 months for free. And, seriously, just give it a rip, because once you get used to those dashboards in your project, in your application, it's really hard to live without that visibility and to just kind of go back to flying blind on what your users are actually hitting in your site. So let's get on into it. I have deferred the starting of this episode long enough. We're gonna be talking about the first thing, deferred promises.

Scott Tolinski

And talking about withResolvers, which is newly available. So, Wes, do you wanna... I've not used withResolvers. I'm super interested in what this is and how to use it.

Wes Bos

Yeah. So previously, if you wanted to access a promise and its resolve and reject methods outside of the promise body, what you had to do is create a variable outside of the scope of that promise. And then, when you get inside of it, you have to update the variable that's outside.
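A minimal sketch of that older deferred pattern, in TypeScript (the string type here is just an example):

```ts
// Hoist the resolve/reject functions out of the promise executor into outer variables.
let resolve!: (value: string) => void;
let reject!: (reason?: unknown) => void;

const promise = new Promise<string>((res, rej) => {
  // Inside the executor, copy the controls out to the outer scope.
  resolve = res;
  reject = rej;
});

// Now anything that can see these variables can settle the promise later:
// resolve('done');  or  reject(new Error('nope'));
```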

Topic 2 01:34

Using withResolvers for promise flow control

Wes Bos

That works really well. That whole pattern is called a deferred, and that's something jQuery has had for a while.

Wes Bos

Or it still does. It's not a new concept in programming, but it's not something we've had natively in JavaScript. And there are lots of libraries out there that were made simply for creating a deferred.

Wes Bos

And let's talk about, like, why might you want that? I've talked about this on the podcast before. It's if you want to... instead of having to pass your work into a promise. Mhmm. So often when you're doing a bunch of stuff, what you'll do is you'll make a function, you'll return a new promise, and then all of the work needs to happen inside of that. And sometimes it makes a little bit more sense, instead of passing your work into the promise, to pass the promise itself along with its controls, its resolve and reject methods.

Topic 3 02:47

Using withResolvers for streams

Wes Bos

You wanna return that from a function so that you can wire it up to buttons. You may wanna pass it to a success callback of another API, and Promise.withResolvers() will do that for you. It sort of surfaces the resolve and reject methods alongside the actual promise.
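A minimal sketch of what withResolvers gives you instead; it needs a 2024-era runtime and TS lib, and startButton and someApi below are hypothetical stand-ins for whatever you'd wire the controls to:

```ts
// Hypothetical things to wire the controls to.
declare const startButton: HTMLButtonElement;
declare const someApi: { onError: (err: Error) => void };

// One call hands back the promise plus its resolve/reject controls.
const { promise, resolve, reject } = Promise.withResolvers<string>();

startButton.addEventListener('click', () => resolve('started')); // wire the controls to anything
someApi.onError = reject;                                         // or hand them to another API

const result = await promise; // settles whenever something calls resolve or reject
```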

Wes Bos

Where this is really useful is in streams. The way that it works with streams is that you get a stream, and you listen for what's called chunks. I know, Scott, you're working with the getUserMedia webcam API right now to stream in video chunks. Is that what it is?

Scott Tolinski

Yeah. Yeah. And I'm sending them over HTTP to a process, which is then collecting them all in a temp file.

Scott Tolinski

And then, once the process is over, writing that temp file to a video file with FFmpeg.

Wes Bos

So, yeah, currently doing that. That's really cool, because in the browser with MediaRecorder, the way that it works is that you create a media recorder, you listen for chunks, which is basically every... I don't know, you can set how many seconds, but imagine it was 3 seconds of video. You get a chunk every 3 seconds. And then the way that it works is you put those chunks into an array, and then you wait for it to be finished. And then when it's finished, you take all those chunks, you put them together, and you get a video file at the other end. Right? Yeah. That streaming API is based on events. It's like on data. Right? Mhmm. And when you work with streaming APIs, it's not a promise. It's chunks. But if you just care about the output result, you can use Promise.withResolvers(). You can say, like, on error and on cancel of the media recorder is equal to reject. Oh. And then on finished, you give it the resolve method, and you can pass the chunks to that. So, again, it's not something we could never do before.

Wes Bos

It was very easy before. It was just, like, one more level of indentation inside of a promise. But that's one less nested promise, one less weird scoping issue to deal with, thanks to this sorta helper method.
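A rough sketch of the MediaRecorder pattern being described, assuming you already have a MediaStream; recordToBlob is a made-up helper name:

```ts
// Collect chunks via events, but expose the finished recording as one awaitable promise.
function recordToBlob(stream: MediaStream, chunkMs = 3000) {
  const { promise, resolve, reject } = Promise.withResolvers<Blob>();
  const chunks: Blob[] = [];
  const recorder = new MediaRecorder(stream);

  recorder.ondataavailable = (e) => chunks.push(e.data);          // a chunk roughly every chunkMs
  recorder.onerror = () => reject(new Error('Recording failed')); // failures go straight to reject
  recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));

  recorder.start(chunkMs);
  return { promise, stop: () => recorder.stop() };
}
```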

Scott Tolinski

Yeah. And it's actually funny, Wes, that you say that, because I've used libraries to collect a stream before as a promise, like collect-stream-to-promise. Because when working with Braintree, every single one of their search APIs returns a stream, rather than... you can't just, oh, async await. You can't just await for the data to come in. And that's obnoxious as hell when you're doing, like, oh, let me just do a quick HTTP call, and I don't wanna have to return the stream back to the UI. I just wanna collect the data and send it to the UI when it's done. Yeah.

Topic 4 05:33

Need a class to handle both streams and promises

Wes Bos

That's actually... I've been wanting to make this for a while. It's, like, some sort of custom class that extends the promise, and it gives you both the stream events, and you can await it. Yes. Because sometimes I care about both. I want the output, but I also wanna listen for the actual end. Like, ChatGPT would be a good example. Right? I care about the end text that I get from ChatGPT, but I also wanna display the text as it's being streamed in from the server.

Wes Bos

So I kinda want both. So I'm sure somebody has already built that. It wouldn't be too tricky.
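As a rough sketch only, not an existing package, that kind of helper might look like this; StreamedResult and its method names are invented here:

```ts
// Chunk events as data arrives, plus an awaitable final result.
class StreamedResult extends EventTarget {
  #settlers = Promise.withResolvers<string>(); // needs a 2024-era runtime / TS lib
  #chunks: string[] = [];

  push(chunk: string) {
    this.#chunks.push(chunk);
    this.dispatchEvent(new CustomEvent('data', { detail: chunk }));
  }
  end() { this.#settlers.resolve(this.#chunks.join('')); }
  fail(err: unknown) { this.#settlers.reject(err); }

  // Being "thenable" means an instance can be awaited directly.
  then<T>(onFulfilled: (value: string) => T, onRejected?: (reason: unknown) => T) {
    return this.#settlers.promise.then(onFulfilled, onRejected);
  }
}

// const result = new StreamedResult();
// result.addEventListener('data', (e) => console.log((e as CustomEvent<string>).detail));
// const fullText = await result; // resolves once end() is called
```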

Wes Bos

Let's talk about fetch. We know we've talked about fetch before. Fetch, you pass it a URL, it returns a promise. The kinda interesting thing here is that the first promise of a fetch resolves as soon as the server comes back with its headers. So the way that a web request works is that it requests something from a server, and the server is going to come back to you immediately with some information. It might not be all of the information, but you do get the initial headers back. And in that, you can check for the response status code.

Topic 5 06:11

Fetch returns two promises

Wes Bos

You can return... most commonly, we're all doing .json().

Wes Bos

Right? And people have often wondered, like, why is fetch two promises? The first one, and then immediately from that, you return response.json(), and then you have access to the actual data. And that's because that's the way servers work. You get the initial headers first. And then if you want the whole payload of the body of the response, you sometimes have to wait a little bit longer for all of that to sort of come in. So most likely, you're checking for a response code, you're getting .text() or .json(), but you are also... we did a whole episode on streams.
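For reference, the two-promise shape of fetch looks something like this (the URL is hypothetical):

```ts
const response = await fetch('https://api.example.com/items'); // first promise: resolves when the headers arrive
if (!response.ok) throw new Error(`HTTP ${response.status}`);   // status is available before the body

const data = await response.json(); // second promise: waits for the full body to stream in
console.log(data);
```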

Wes Bos

You may be streaming in the result or using server-sent events from the server. Whereas, like, ChatGPT, a lot of these things, as they are generating the actual output, they are streaming it to the browser. So I used to think, like, while you're using ChatGPT, they're just doing this, like, fake typing to make it feel real. Mhmm. And the actual answer to that is that's actually not true. It's literally coming up with the result, yeah, and firing it out the door before it even knows that it is done.

Scott Tolinski

And I even... I've been... sorry, go ahead. No. And that's one of the reasons why ChatGPT isn't great at commenting code if it's also creating the code. If you tell it to create code and comment the code, it always puts the comments below the code it just wrote. Because unless it's writing the comments first, and maybe you could tell it to do that, it's kind of like coming up with what it's trying to do as it's doing it. Yeah.

Wes Bos

I don't even know when I'm gonna be done, but here's what I've got so far. And I've noticed, I use Raycast.

Wes Bos

They just rolled out Claude Haiku support. Mhmm. And I noticed using the Claude one, sometimes it's, like, 8 or 10 words into the response, and then it just clears it and puts something different in. And it must be because, as it's coming up with the result and as it's imagining more of the sentence, it goes, you know what? Those first 8 words I said could be a bit better, and it replaces it. So I thought that was kinda interesting with streaming.

Wes Bos

Flow control.

Wes Bos

How do you flow through promises? First of all, looping over promises: you can use a for...of loop, and you can await inside of a for...of loop. This makes everything synchronous, meaning... or series is often a good word for that, because the word synchronous in JavaScript often means that it's blocking.

Topic 6 09:06

Controlling promise flow

Wes Bos

But in our case, you simply are just, like, pausing the for loop as you're doing it. And that is a good use case if you wanna do one thing after another. Like, we talked about me downloading 10,000 files a couple episodes ago. I initially had written a for loop, and that was kind of slow, because you download one file, you wait for it to finish, then you do the next one, then you do the next one. In reality, there's probably some sort of use case in between doing them in series, which is one after another, and doing them all at once with Promise.allSettled(), which is 10,000 at once. There's some sort of middle ground, and we'll talk about that in just a second.
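A quick sketch of the in-series version with for...of; urls and saveToDisk are hypothetical:

```ts
declare const urls: string[];
declare function saveToDisk(res: Response): Promise<void>;

// Each download waits for the previous one to finish before starting.
for (const url of urls) {
  const res = await fetch(url); // pauses the loop until this request resolves
  await saveToDisk(res);
}
```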

Scott Tolinski

Yeah. I can think of, like, some APIs too. They return things in pages.

Scott Tolinski

So you could say, hey, go off and fetch page 1. Alright, page 2 has come back. Sometimes what they do is they give you, like, a key for what the page is.

Scott Tolinski

So then you pass that into the next fetch call. Then, like, the data that precedes it matters for the data that you're trying to get next. So that'd be a good reason to do a follow-up.

Wes Bos

Yeah. Yeah. That's a great use case, because sometimes the API will tell you how many pages there are, and you can fetch them all at once. Mhmm. But often, they don't know. You just keep going until there's no more. And often, the only way for you to tell that is the next time you fetch one, the result is empty.

Wes Bos

And they're like, oh, well, there's nothing here. I must have reached the end of the road, and I no longer have to keep doing that. Right. And, also, cursors, that's something we haven't talked about a lot as well, where sometimes APIs are not based on pages. They're based on cursors.

Wes Bos

Meaning that in order to get the items after the ones you have, you need, like, a cursor ID. Mhmm. And you can't do it until you have the cursor from the page before it.

Wes Bos

With cursors, you can't use, like, a map. If you have an array of, like, names and you wanna fetch... like, you have Scott, Wes, and CJ, and you wanna go to the GitHub API and fetch all 3 of our profiles, often what people will do is you'll map over each of those strings and return a promise, like return a fetch request that goes to those 3 URLs, and then you'll wrap them all in a Promise.allSettled() to get all 3 of them. Those run concurrently, meaning they run at the same time. It's not, technically, we have a whole show on that, but you can think of it as that.
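A sketch of that concurrent map plus Promise.allSettled() approach; the usernames are just examples:

```ts
const usernames = ['wesbos', 'stolinski', 'w3cj'];

// Kick off every fetch at once, then wait for all of them to settle.
const results = await Promise.allSettled(
  usernames.map((name) =>
    fetch(`https://api.github.com/users/${name}`).then((res) => res.json())
  )
);
```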

Wes Bos

A while loop. Again, a while loop is not something I reach for all the time, probably a couple times a year, but you can await inside of a while loop, which can be useful if you want to just wait for the next item to come through.
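Putting the cursor idea and the awaited while loop together, a sketch might look like this; fetchPage and its return shape are hypothetical:

```ts
declare function fetchPage(cursor?: string): Promise<{ items: unknown[]; nextCursor?: string }>;

const allItems: unknown[] = [];
let cursor: string | undefined;

// Each request needs the cursor from the page before it, so we await inside the loop.
while (true) {
  const page = await fetchPage(cursor);
  allItems.push(...page.items);
  if (!page.nextCursor || page.items.length === 0) break; // nothing left: end of the road
  cursor = page.nextCursor;
}
```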

Scott Tolinski

While loops. Yeah. I don't find myself using while loops that often, but the other day, one did come up, and I while looped it. That's for sure.

Topic 7 12:20

Concurrency and throttling with promises

Wes Bos

Concurrency, throttling, queuing. So I find myself getting into this area a lot, where I need to make a whole bunch... it's almost always with fetch. You know? I need to download a whole bunch of files, I need to hit an API a bunch of times to get all of this data, and you find yourself in a situation where it is way too slow to do them in series, one after another.

Wes Bos

But it is not possible to run them all at once, because that will... like, your Internet provider will probably shut you down, or you'll hit some sort of, like, limit. If you send 10,000 requests to download files to an API, you're probably gonna get in trouble.

Wes Bos

So there are some really nice packages out there, specifically p-map and p-limit.

Wes Bos

Sindre Sorhus has, I don't know, probably 15 different p-dash-something packages, and they're all, like, little utility libraries. Quite honestly, I forget which one I need every single time, and I have to read the docs.

Wes Bos

But, like, something like p-map will allow you to create an array of possible promises.

Wes Bos

So loop over this list of 10,000 files and return a function that will download a file once it has been triggered.

Wes Bos

And you give all of those to p-map, and you say, run 5 at a time at most. And what it will do is it'll simply just fire off 5. And then as the first one finishes, it goes down to 4, and it starts the next one, and then it starts the next one. That's better than chunking, because chunking will do, like, 5 at a time, wait for them all to finish, and then another 5, wait for them all to finish. Yeah. Whereas, like, concurrent mapping will always be running 5 and always firing off a new one as soon as there's a single spot. Yeah.
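A sketch of that with p-map; files and downloadFile are hypothetical:

```ts
import pMap from 'p-map';

declare const files: string[];
declare function downloadFile(file: string): Promise<void>;

// Run at most 5 downloads at a time; as each one finishes, the next one starts.
await pMap(files, (file) => downloadFile(file), { concurrency: 5 });
```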

Wes Bos

Nice. Yep. It's pretty handy. I used to use this thing called Async.js for callback-based architecture, but I don't use it anymore, because between promises, Promise.allSettled(), and these little helper functions, I find them to be more than enough.

Wes Bos

There's also, like, p-limit, p-throttle, p-debounce.

Wes Bos

These will allow you to only run them after a certain amount of time, or at most every 2 minutes.

Wes Bos

So often you will only be allowed to make 60 requests per hour, or 60 requests per minute. So, like, okay, well, don't run more than 60 in this time span, or don't run this function, or run it at most once every 1 minute. You know? And that kind of stuff can be really, really funky. Or retrying is another one. There are libraries out there for: run this promise, and if it fails, retry it up to 6 times, and if it still fails, then finally reject.
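A sketch of what p-retry and p-throttle look like in use; fetchReport is a hypothetical function:

```ts
import pRetry from 'p-retry';
import pThrottle from 'p-throttle';

declare function fetchReport(): Promise<Response>; // hypothetical flaky request

// Retry up to 6 times before finally rejecting.
const report = await pRetry(() => fetchReport(), { retries: 6 });

// Allow at most 60 calls per minute, no matter how fast you ask for them.
const throttle = pThrottle({ limit: 60, interval: 60_000 });
const throttledFetch = throttle((url: string) => fetch(url));
```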

Scott Tolinski

And having that all built in is really, really nice. Yeah. I have not... I mean, it's funny. I have used throttling and debouncing, but I haven't used any of these p tools before. It's often scraping for me. It's funny.

Topic 8 15:14

Retry logic for promises

Wes Bos

So maybe that's where you'll have to hit it. Or, like, the OpenAI API only allows you to run it a certain amount in a given time window.

Wes Bos

And... Yeah. Sometimes these APIs will return to you how many calls you have left and when your call amount refreshes. And if that's the case, I'll just look at the header and say, alright, if I'm out of calls, throw a 500 millisecond wait in there, you know, and then fire it off.

Wes Bos

But if that's not the case, they don't tell you how many are allowed, they only tell you if you can't run it. If that's the case, then you have to, like, use one of these libraries and put in the hard limits so that it won't run too frequently.
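A sketch of the header-checking approach Wes describes; the header name here is GitHub's, and other APIs will differ, so check their docs:

```ts
const res = await fetch('https://api.github.com/users/wesbos');
const remaining = Number(res.headers.get('x-ratelimit-remaining'));

if (remaining === 0) {
  // Out of calls: wait a beat before firing off the next request.
  await new Promise((resolve) => setTimeout(resolve, 500));
}
```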

Wes Bos

TypeScript and promises. The last thing we have here is just how to use promises in TypeScript. Right? Because promises are a generic in TypeScript. It's built in, and you'll often see that in TypeScript. It's Promise, angle brackets, and then the actual type of the value that you want. And if you fetch something with a specific type, you'll see it's Promise<whatever>. And if you put an await in front of it, it will unwrap the promise and actually give you the real type.

Topic 9 16:52

Typing promises in TypeScript

Scott Tolinski

Yeah. And so to be clear, the reason why you do that Promise with the brackets around it, that is essentially saying, hey, this thing does return a promise, but then the promise itself will return that type.
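A small sketch of that typing; the User shape and endpoint are hypothetical:

```ts
type User = { id: number; name: string };

// The generic is what the promise will eventually resolve to.
async function getUser(id: number): Promise<User> {
  const res = await fetch(`/api/users/${id}`);
  return res.json();
}

const user = await getUser(1); // awaiting unwraps Promise<User> down to a plain User
```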

Wes Bos

Exactly. It'll eventually resolve to that value. And a couple times, I've needed to get the type from a function that returns a promise.

Wes Bos

And if that's the case, like, you're using the type of a specific function, sometimes you wanna take the promise off. Like, you wanna unwrap it. Like, this function returns a promise that is of a user.

Wes Bos

You're like, I don't care about the promise part. I want the type of user.

Wes Bos

And if that's the case, you can wrap it in the Awaited TypeScript helper, and that will take the promise part off and give you the actual type there.
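And a sketch of unwrapping with Awaited, reusing the hypothetical getUser from the previous sketch:

```ts
type GetUserReturn = ReturnType<typeof getUser>; // Promise<User>
type UserType = Awaited<GetUserReturn>;          // User: the promise wrapper is stripped off
```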

Wes Bos

Promise.allSettled() filtering.

Wes Bos

This is just, like, a TypeScript thing in general, and I bet this will be fixed soon, but I've also been saying this for a while.

Wes Bos

If you have an array in TypeScript and you try to filter it, by default, if you don't use any, like, patching libraries or anything like this, the filter does not actually narrow the types for the items that you filtered it down to, which is annoying. So you have to add a type guard to your filter to actually tell TypeScript, no, I am filtering it down. And I run into that a lot with Promise.allSettled(), because it will return, again, the fulfilled and the rejected values. And if you wanna just filter for the rejected to display some errors, and you wanna filter for the successes to display the data, then you'll have to write a quick little type guard. And I think there's a type called PromiseSettledResult, and then you can write a type guard that will check for either the reason or the status property on there.
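A sketch of the kind of type guards being described, using the built-in PromiseSettledResult types; the promises array is hypothetical:

```ts
declare const promises: Promise<string>[];

const results = await Promise.allSettled(promises);

// Type guards so TypeScript narrows the results correctly after .filter().
const isFulfilled = <T>(r: PromiseSettledResult<T>): r is PromiseFulfilledResult<T> =>
  r.status === 'fulfilled';
const isRejected = (r: PromiseSettledResult<unknown>): r is PromiseRejectedResult =>
  r.status === 'rejected';

const data = results.filter(isFulfilled).map((r) => r.value);   // the successes
const errors = results.filter(isRejected).map((r) => r.reason); // the failures
```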

Wes Bos

And lastly, there's no way to type the rejected value of a promise, and that's a little bit of a pain as well. That's where it can be nice to use the thing we talked about in the last episode: writing, like, a wrapper function that will do the catching and give you a tuple, error first, data second.

Wes Bos

You can extend that a little bit further and internally do a type assertion, and you can say, alright, well, if it does return an error, you can use a generic to say this is what the error will look like. And that is really nice, because then you're not doing the whole, like, if error is an instance of Error, or if error has a message property. You just know exactly what the error type is.
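A sketch of that tuple wrapper with an error generic; User, ApiError, and fetchUser in the usage comment are hypothetical:

```ts
// The `as E` assertion only tells TypeScript what to expect; nothing is validated at runtime.
async function to<T, E = Error>(promise: Promise<T>): Promise<[E, null] | [null, T]> {
  try {
    return [null, await promise]; // success: error slot is null
  } catch (err) {
    return [err as E, null];      // failure: data slot is null
  }
}

// const [err, user] = await to<User, ApiError>(fetchUser(1));
```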

Wes Bos

Wild. Yes. I think that's it for promises. Wild world. I'm impressed that we're able to get 3 episodes out of this, but there's a lot of funky stuff to learn about promises, and it's good to know.

Scott Tolinski

Can you imagine if we would have had to do this episode on promise libraries, before promises existed? Yeah.

Wes Bos

It would have been... there's so much... like, with the promise libraries back in the day, you had to, like, wrap everything. Even in Node.js, before, like, the file system APIs had promise versions, you had to, like, wrap their callback-based APIs in util.promisify.
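For reference, the old util.promisify dance looked something like this:

```ts
import { promisify } from 'node:util';
import { readFile } from 'node:fs';

// Wrap the callback-based API so you can await it.
const readFileAsync = promisify(readFile);
const pkg = await readFileAsync('package.json', 'utf8');
```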

Scott Tolinski

I did not get it. Yeah. I actually got to wait until promises landed in the browser before I had to deal with promises. I just used callbacks until then. I was like, you know what? This doesn't seem worth it to me. That too. Yeah.

Wes Bos

Awesome. Alright. Well, thanks everybody for tuning in. Tweet us your promise tips at @syntaxfm. We'd love to hear them.

Scott Tolinski

Take
