593

March 27th, 2023 · #OpenAI #ChatGPT #JavaScript

Coding with the OpenAI / ChatGPT API

Scott and Wes discuss how to work with the OpenAI API and ChatGPT in JavaScript. They go over the different APIs available, pricing, token limits, prompt tuning, and share tips for saving money.

Topic 0 00:00

Transcript

Scott Tolinski

Welcome to Syntax.

Scott Tolinski

On this Monday hasty treat, we're gonna be talking about the OpenAI and ChatGPT APIs. We're gonna be talking basically about what the heck the OpenAI API really is and what you can do with it.

Topic 1 00:45

Stumbling over saying OpenAI API

Scott Tolinski

And forgive me for stumbling over "OpenAI API."

Scott Tolinski

That's something that is going to be difficult to say the entire episode. My name is Scott Tolinski, a developer from Denver. With me, as always, is

Wes Bos

Wes Bos. Hey. I'm excited to talk about this. We had Travis on, and he did, like, a very high level.

Topic 2 01:06

Wes excited to talk about coding with the API

Wes Bos

This is what it means for our life.

Wes Bos

And this is the kind of the opposite show, which is going to be here's how you actually code it with JavaScript.

Wes Bos

So I'm pretty excited about getting into that. Let's talk about one of our sponsors first, though: Sentry at Sentry.io.

Topic 3 01:24

Sentry sponsor


Wes Bos

Sentry is the savior for figuring out what's going on with your website. Why is your website being slow? Why does your website have bugs? Why did it crash? I had one the other day. This was really interesting.

Wes Bos

I talked a while ago about how I had a couple crashes here and there, and then I had been running for 3 months without a single crash. And the other day, bam, it crashed. And I said, what happened? I had caught an error from an API. I was submitting some data to my email newsletter API, and I had caught an actual error from when their API was down. And for some reason, after catching the error, I just threw an error. That was my own error. Like, I caught whatever 503 error they had, and I threw an error like, looks like the Drip API is down. But I wasn't catching that one. And as you know, uncaught errors cause your Node process to crash.

So I jumped right into Sentry. I saw exactly what was happening. I saw, oh, good thing, it only happened once, because it was probably a freak thing with their API being down. And I was able to jump in and fix it. In 2 minutes, I had the thing fixed, marked it as fixed in Sentry, and now you hope it doesn't come back. So you're going to want to check that out for your website. Check it out at Sentry.io and use coupon code TASTYTREAT for 2 months free.

Nice. I love being able to see that it's not affecting a lot of people. Like, oh, yes, it can. But it would only affect, I think, 1 user. Okay, well, that's bad for that user, but at least it's not everybody. Yeah. Well, the flip side is when you see a spike, and it's like, none, none, none. It shows you, like, a 24 hour graph, and then your graph goes from none to, oh, this is affecting 200 people in the last hour. That's kind of, oh, drop whatever you are doing right now.

Wes Bos

Yeah.

Wes Bos

So let's talk about OpenAI API.

Topic 4 03:20

Start talking about the OpenAI API

Wes Bos

So it's so funny that when I started writing the notes for this episode,

Wes Bos

They had released an update to GPT called GPT-3.5 Turbo. And then by the time I finished writing these notes, they had released GPT-4. This stuff is moving super fast, and we'll sort of keep you updated. But the whole API around it is somewhat similar.

Wes Bos

It's getting very cheap to play with these APIs. I hadn't dipped into it for a while because one wrong loop or one rogue console.log could end up costing you $6 or $7. You know, it kind of gets expensive to play with. But the prices on OpenAI have come down significantly, so much so that you can just start to play around with it.

Topic 5 04:15

Transcripts summarized with OpenAI API

Wes Bos

And it really doesn't cost all that much. We'll talk more about how much it actually costs in just a second. I've been testing it with the Syntax transcripts. We talked a while ago about how we would like to surface more information around the podcast in terms of, like, descriptions and transcripts, but, like, more helpful information around the transcripts. Specifically, I was trying to summarize every major topic and give it a timestamp that's associated with that. And OpenAI is fantastic at that type of thing.

Topic 6 04:50

Scott playing around with the API

Scott Tolinski

So I've been playing around with that, feeding it a bunch of transcripts, and I thought we'd just talk about, like, how do you actually use this type of thing? Sick. Yeah. You know what? I've done a cursory glance at this stuff, so I'm really excited to play the role of the audience member who has not dove into these quite yet. So I'm gonna be asking you some questions as we go here. So, yeah, I'm interested to hear what you've done. So first of all, using the API is extremely refreshing.

Wes Bos

You simply just sign up. Like, I've been signed up for a while to test it out on their GUI.

Wes Bos

They have, like, a bunch of dev tools, but I thought, let's actually write some code against this. So they have a Python and a Node.js API, and it's awesome. It actually uses Axios behind the scenes, which is kind of interesting. So if you're familiar with Axios responses and whatnot, it's pretty easy to get up and running.

Wes Bos

And I thought it was kind of interesting how they use Axios, because they have to send the API key along with every single request, so they can kind of store it and whatnot. Why do you think they use Axios as opposed to just, like, straight up fetch? Yeah. I guess they could probably do it under the hood.

Wes Bos

But from the looks of it, there's a bunch of, like, custom error handling. It looks like they are using the streaming API. I guess you could do that with fetch. You know what? I don't really know. Probably just for a speed thing.

Wes Bos

But I would think, like, okay, you create a new OpenAI instance, you give it your API key, and then every single time that you create a fetch, you don't have to pass it your API key again. But you could also do that with straight up fetch. So I'm not really totally sure about that, but it's kind of interesting. You don't really see too much of the Axios API surface, but the API itself is very simple.

Wes Bos

There are TypeScript types for all of it.

Wes Bos

I really like that because a lot of APIs will just give you straight up rest API endpoints.

Wes Bos

And then they'll also give you, like, I don't know, 6 or 7 client secrets and OAuth and all this stuff, and it's just exhausting having to set it up. Actually, I have some notes on, like, OAuth APIs and using them in general, how that works, so we have another show on that coming. But this reminds me of the old days. You give it your API key, it has methods for things like chat completion, and boom, the thing works. It has really good error messages returned as to, like, what might have possibly gone wrong. So it's very easy to actually use and get up and running. That's what's funny. Whenever I use Apple's

Scott Tolinski

web APIs, I'm always like, why can't you just give me an API key? And, like, that's it. Like, what? Like, just let me deal with an API key. I can handle that.

Wes Bos

So it's nice to hear. Yeah, so obtuse. And they have lots of really good Node examples as well, so you can just copy paste those into your Node application. So it's really easy to get up and running.
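As a rough sketch of what getting up and running looks like, here is the same chat completion endpoint called with plain fetch (Node 18+) instead of the official SDK; the model name and prompts are placeholder assumptions, not anything from the episode:

```javascript
// Build the request body for the chat completions endpoint.
function buildChatRequest(systemPrompt, userPrompt) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
  };
}

// POST it with plain fetch. The official SDK wraps this same endpoint.
async function askChatGPT(systemPrompt, userPrompt) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(systemPrompt, userPrompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Only fires when a key is present, so the sketch is safe to run as-is.
if (process.env.OPENAI_API_KEY) {
  askChatGPT("You are a helpful assistant.", "Say hello.").then(console.log);
}
```

The SDK adds the key-storage and error handling discussed above, but the underlying request really is this simple.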

Topic 7 07:42

Most data sent to OpenAI is text

Wes Bos

So let's talk about actually running it. The first thing you need to understand is that most of the data sent to OpenAI is in the form of text. It also accepts images as well. We'll talk about that in a second, but most of it is in the form of text. And you are paying based on how much text you send it and how much text is returned to you, and those are referred to as tokens.

Wes Bos

And the way they count their tokens is kinda weird, but generally, about a 1,000 words, sorry.

Wes Bos

A 1,000 tokens is about 750 words. So if you think, okay, cool, I just spoke 750 words, that's about 1,000 tokens, and you pay per 1,000 tokens. A couple of weeks ago, GPT-3.5 Turbo came out, and the price is $0.002, meaning that it's one fifth of a cent per 1,000 tokens.
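That napkin math can be sketched as a tiny helper; the 750-words-per-1,000-tokens ratio and the $0.002 price are the figures from this conversation, and real token counts vary by tokenizer:

```javascript
// Rough per-request cost math: ~1,000 tokens per 750 words,
// at $0.002 per 1,000 tokens for gpt-3.5-turbo (price at time of recording).
const PRICE_PER_1K_TOKENS = 0.002; // USD

function estimateTokens(wordCount) {
  return Math.ceil(wordCount * (1000 / 750)); // ~1.33 tokens per word
}

function estimateCostUSD(wordCount) {
  return (estimateTokens(wordCount) / 1000) * PRICE_PER_1K_TOKENS;
}

console.log(estimateTokens(750)); // 1000 tokens
console.log(estimateCostUSD(750)); // 0.002 dollars, i.e. one fifth of a cent
```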

Topic 8 08:13

1000 tokens is about 750 words

Wes Bos

So to give you a concept of, like, how much that actually costs, I took a 25-minute podcast of us talking about logging.

Topic 9 08:44

Costs a fraction of a cent per 1000 tokens

Wes Bos

And it also includes timestamps: Scott Tolinski at 4 minutes and 32 seconds says this, and Wes Bos at 8 minutes says this, and it goes back and forth. So a 25-minute podcast, if I were to feed that in at a fifth of a cent per 1,000 tokens, that would cost us, like, it's extremely cheap if you think about it. Like, I could take a 25-minute podcast, put a little bit more text on top of it that says, please summarize the following podcast in bullet form while maintaining the timestamps, and out the other end it comes for $0.03.

Wes Bos

Sorry, the input is going to be $0.03, and then you are also paying based on the output, how much text it sends back to you.

Wes Bos

And that's really interesting, because it depends on how big the output is. What are you trying to get out of it? Are you taking these 2 sentences and making them into 3 paragraphs, or are you taking this 25-minute podcast and putting it into 8 bullet points? And I even went as far as to say, give me 3 hashtags that describe this episode. So in that case, it's returning almost nothing, like it's pretty much free.

Topic 10 09:51

Output length affects cost

Wes Bos

But you do have to remember that the return value also costs. So maybe 3, maybe 4 cents by the time

Scott Tolinski

we have finished all of it. Yeah. But you also have to consider, too, like, that is super cheap for one 25-minute podcast, and it's gonna be super cheap for 500 podcasts.

Scott Tolinski

But if you have, like, a company that's building on this and you're processing 500 podcasts for each user, well, it's never gonna be exactly that, but that's when you're gonna start to see these numbers get to a point where it's a more substantial cost in your business. Yeah. Yeah. Absolutely. That

Topic 11 10:38

Cheap for one podcast but adds up at scale

Wes Bos

that type of thing, maybe it scales. Well, it's kind of interesting to just see it now. Like, Raycast added me to their AI beta. Actually, they added you as well. Yeah.

Wes Bos

Which is really cool. I've been finding myself using the Raycast UI to ask quick questions rather than having to go to... I haven't. I should do that. OpenAI, the ChatGPT login, it makes you log in again. And also, I hate that ChatGPT, like, does the word-by-word typing for you. Like, I think it's trying to slow down so you don't ask it too much, but I hate that. Like, just give me all of the text immediately.

Wes Bos

Also, Warp the other day released their own AI feature, which is cool because it has the context of what you're running in your terminal, but you can also ask it questions.

Wes Bos

So it's really interesting to see all of these apps just implement it for free right now. And, you know, there's some VC footing the bill for that right now. But I'm very curious if it will get cheaper. They want you to be hooked, too. Totally.

Topic 12 11:45

Other companies adding OpenAI APIs

Wes Bos

Yeah.

Wes Bos

Yeah, I guess that's what GitHub did as well. But if you think about, like, GitHub Copilot, that's what, $11 a month? And it is running 8 hours a day. Every single line of code I write, it's sending my code and coming back with the answer. So I often wondered, well, $10, $11 for GitHub Copilot is kind of a lot, but now looking at this, they might be,

Scott Tolinski

Yeah.

Wes Bos

But I think with that kind of stuff, it will only get cheaper as the technology evolves. And Probably at a certain point, we'll be able to just run

Scott Tolinski

this powerful a thing on our computer. Maybe. Maybe. Yeah. You know, what's interesting here is that, like, as these languages and these models evolve, the pricing isn't just for using the tech. The price is on, like, a per-model basis.

Scott Tolinski

So GPT-4 is 15 times more expensive right now.

Topic 13 13:04

GPT pricing very fluid

Scott Tolinski

But it also just dropped. So that basically goes to show you that a lot of this stuff is very fluid in terms of pricing, you know, and that would make me both concerned as well as excited. One could come out and be, like, astronomically expensive, and maybe you've built your entire business on this thing, and you might just have to suck it up and pay it. You know, who knows? There's probably a case where it's just going to get cheaper every year as we go. I think it will be a wash eventually.

Wes Bos

Yeah. Totally. The thing about GPT-3.5 is, when we went from 3.5 to 3.5 Turbo, it got 10 times cheaper, and that's the point where I said, alright, I'm dipping into this thing. Ten times cheaper is almost nothing. And then GPT-4 comes out, and it's 15 times more. So it's really not that much more than it was 3 weeks ago, you know? Oh, right. Yeah, that tracks.

Scott Tolinski

And again, it's better, so you're getting a better product for that money. It's not like you're, exactly, yeah, paying for nothing. One of the benefits to

Wes Bos

GPT-4 is that there's a higher token limit per request. I think right now on 3.5, there's a 2,000 token limit or something like that, which means I was only able to send about 8 minutes worth of our transcript over to GPT before I hit that limit. And that's a bit of a bummer, because if you want to give it an entire book and say, please summarize this book for me, it's not entirely possible. You can't give it that much text at once. So the solution to that is summarizing, which is what I actually ended up doing for the podcast: you chunk up the podcast. What I was doing is chunking it up into every block that me and you were saying.

Wes Bos

And then I would say, please take this block and summarize it into 2 sentences.
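A sketch of that chunk-and-recursively-summarize idea; the `summarize` callback here is a stand-in for a real chat completion call, and the 2,000-token budget is the rough figure mentioned above:

```javascript
// Split text into word-based chunks that fit under a rough token budget.
const TOKEN_LIMIT = 2000; // rough per-request limit mentioned in the episode
const WORDS_PER_TOKEN = 750 / 1000; // ~750 words per 1,000 tokens

function chunkByWords(text, maxTokens = TOKEN_LIMIT) {
  const maxWords = Math.floor(maxTokens * WORDS_PER_TOKEN);
  const words = text.split(/\s+/);
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}

// Summarize each chunk, then recursively summarize the joined summaries
// until everything fits in a single request. Every extra pass costs more
// tokens, which is how a $0.03 job can grow to $0.09 or $0.10.
async function recursiveSummarize(text, summarize, maxTokens = TOKEN_LIMIT) {
  const chunks = chunkByWords(text, maxTokens);
  if (chunks.length === 1) return summarize(chunks[0]);
  const summaries = await Promise.all(chunks.map((c) => summarize(c)));
  return recursiveSummarize(summaries.join("\n"), summarize, maxTokens);
}
```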

Topic 14 15:11

Chunking transcripts to stay under token limit

Wes Bos

And what you're doing is you're pretty much compressing it. And then you recursively call that, meaning that you summarize as big of blocks as you can, then you pass maybe 2 of those blocks back into it, and it summarizes again. And basically, you can just keep summarizing the parts down until you reduce it. And every time you're doing that, you are paying for more tokens. So that $0.03 I said earlier might end up actually being $0.09 or $0.10 per podcast, because I'm running it multiple times over and over again. And the same thing is with the chat. So on ChatGPT, when you ask it a question and it replies to you, and then you ask it, like, a follow-up question that says, mhmm, no, I've tried that, please look for x, y, and z.

Wes Bos

You don't just send the updated text. You send the entire transcript every single time. So every time, like, if the first time it's 500 words, the second time it's 600 words, the third time it's more, and you also have to send it what it sent back to you. So basically you have to say, alright, Chat, I asked you this, you told me this, and now I'm asking you this, but you have to send all 3 parts to it. Otherwise, it has no context. I wonder if that will change at some point, where you get, like, a seed or a unique identifier and you say, okay, I'm following up on this conversation. But it's like a goldfish. As soon as it sends the data back, it has no recollection of you actually asking that unless you give it all of the information that it's told you so far. So I have a couple comments there. One,

Topic 15 16:09

Must send full context each request

Scott Tolinski

I actually didn't know that. I thought it was a token-based thing, so I thought there was some sort of storage, essentially, of the content. So that makes way more sense, essentially, from a technical aspect, how this works.

Scott Tolinski

The goldfish thing, though: I think you might need to update your brain matter, because the goldfish thing actually is fake. That's a fake thing. Goldfish do not have a 5 second memory or whatever. That is a lie that people say. And you know how you learn these little factoids in life and then you just hear them repeated over and over again? Like, I hear so many people say the goldfish thing, and I'm the type of person who's, like, listening to my podcast player being like, that's not right.

Scott Tolinski

So sorry, I had to. I don't like being a know-it-all in that way, but that's good. Yeah. I appreciate that. We could ask ChatGPT about the goldfish thing. I'm sure it would agree. I like that.

Wes Bos

Another thing that you can do with these models is you can fine-tune them, meaning that you can give it information that you know. Right? So, like, right now, it just has this wide body of knowledge. But wouldn't it be nice if you could feed it, for example, Syntax? Before we do anything, I would like to train it on every word we have ever said via all the transcripts, maybe even all of our tweets, as well as all of the show notes that we have, and maybe even, like, all the tweet replies. Like, there's all this information. Wouldn't it be nice to give it all that first? Mhmm. And then say, okay, given that you've listened to every episode, what do you think about x, y, and z? So, fine-tuning the model is not doable on GPT-3.5 and 4 right now. There are other models that you can fine-tune, and eventually I'm sure you will be able to do that type of thing. But if you do want to fine-tune, it's not possible with the latest and greatest.

Wes Bos

So I was like, okay, how much would that cost right now? Their most advanced fine-tunable model is called Davinci.

Topic 16 19:01

Fine tuning models not possible yet

Wes Bos

And I said I just did some back of the napkin math here.

Wes Bos

30,000 minutes of audio across every single podcast we've ever recorded, which comes out to roughly $400 to train it on every word we have ever spoken. That's a lot of text to send it. Yeah. That's a bargain, all things considered. Yeah.

Scott Tolinski

I have a fun idea that we can put into our back catalog. But, yeah, what if we, like, trained it on all of our potluck episodes specifically and then, like, asked it to write us questions for a potluck episode? I wonder if that would be a total disaster or not. Oh, it's a good question. Or

Wes Bos

Hey, you could also have it, like, ask us stumped questions.

Scott Tolinski

Oh, yeah.

Wes Bos

Like, we'd somehow feed it a bunch of facts, and then it would ask us, like... Interesting. Maybe that's even possible right now. Let's try it. Can you give me 5 questions about CSS

Wes Bos

That would be good for an interview.

Wes Bos

What is the box model and how does it work? How do you center an element horizontally and vertically? What is the difference between visibility: hidden and display: none?

Wes Bos

These are a little bit easier, but maybe we could ask it for ones that are a bit more technical and harder.

Wes Bos

What's the difference between nth-child and nth-of-type? Okay. How do you implement a CSS grid layout? How do you create a CSS animation that loops infinitely? Okay, that's pretty good. What is the cascade in CSS and how does it work? How do you optimize CSS performance on a website? That's actually pretty good. Maybe we should do a stumped robot version.

Scott Tolinski

Yeah. Robots stumped. I'm

Wes Bos

down for that. Yeah, that would actually be fun.

Wes Bos

So let's talk about the different APIs. OpenAI has 7 or 8 different APIs that are used for different things. Probably the most popular one is the chat completion API.

Wes Bos

And the way that it works is you give it 3 things. You give it a system prompt, Which is basically telling the robot what it is.

Topic 17

Chat completion most popular API

Wes Bos

So you could say: you are a web developer who interviews potential candidates. You ask questions to gauge how smart your users are and their experience in CSS. So you sort of set the stage for what the system is. You say, this is who you are, you little robot, you.

Wes Bos

And then you have user, which is your prompt. So you basically give it an array of prompts. The first one is usually system; you say what it is. The second one is your question to it. And then there is also assistant, which is what they give back to you. And then, like I said earlier, if you wanna pass an entire back-and-forth conversation to update it, you have to give it an array of system, user, assistant, user, assistant, etcetera, back and forth.
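That role structure can be sketched as a plain array you keep appending to and resend in full on every request; the helper names here are made up for illustration:

```javascript
// The chat completion API is stateless, so the whole history is resent
// each time: one system message, then the user/assistant back-and-forth.
function createConversation(systemPrompt) {
  const messages = [{ role: "system", content: systemPrompt }];
  return {
    messages,
    addUser(content) {
      messages.push({ role: "user", content });
    },
    addAssistant(content) {
      messages.push({ role: "assistant", content });
    },
  };
}

const convo = createConversation(
  "You are a web developer who interviews candidates about CSS."
);
convo.addUser("Ask me an interview question.");
convo.addAssistant("What is the box model and how does it work?");
convo.addUser("I've tried that one. Ask something harder.");

// convo.messages, all 4 entries, is what the next request would send.
console.log(convo.messages.map((m) => m.role)); // [ 'system', 'user', 'assistant', 'user' ]
```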

Wes Bos

There's also text completion, which is kind of the same thing as the chat. Basically, instead of having the chat interface of back and forth, you just give it straight up text, and then it will return a reply to you.

Topic 18 22:28

Text completion similar to chat completion

Wes Bos

I think they will probably do away with that eventually; they seem to be pushing people more in the direction of the chat API, because it kind of does the text completion but also has the context of going back and forth. So the chat completion is probably the one you're going to want to check out if you're checking this out yourself. There's also image generation, which is, I think, the DALL-E thing.

Topic 19 22:53

Image generation API

Wes Bos

It's actually pretty nifty. It's not very good, which, part of me is like, 3 months ago I was marveling at how amazing it was that it could create a thing at all. So you can ask it for a specific image of something and it will generate it.

Wes Bos

You can edit an image. So what I did is I gave it my YouTube thumbnail, and I said, give me some variations on this. And it gave me, like, photos of some guy sitting at a computer monitor with his mouth open. But it's, like, very clearly just some fever dream of a guy. And it also doesn't do text at all. None of these AI models do text. It's just random

Scott Tolinski

letters. And that makes sense when you think about the technology and how it's producing these images. Maybe, you know, the way that ends up working is inside of actual software like Figma or something, where it would be able to transform layers without text, and then understand text as a text layer rather than just pixels. Because text as pixels

Wes Bos

Yeah. It's very difficult, I think, in that same sort of way. I find it amazing that you can give it an image and say, make me smile, or replace this with a grandma.

Wes Bos

And, like, that's no problem. But you say, hey, put the text "Wes" on top? Impossible.

Wes Bos

You know? Impossible. Yeah.

Scott Tolinski

But who knows, Wes? You know, there could be a patch next week, a new version of something, and all of a sudden it'll do it, probably by the time this comes out.

Wes Bos

Next one is speech to text, which is, I think, called the Whisper API.

Topic 20 24:37

Speech to text API

Wes Bos

And this will just basically take audio and convert it into dictated text.

Wes Bos

It works pretty good. It does not do speaker detection, though.

Wes Bos

I was, like, looking at that for our transcripts, but I really want the speaker labels. In the examples I was generating, it says, Scott goes on to talk about how he really loved using Figma, for example. You know? It was doing really nice summaries, like, Scott said this, and then Wes asked Scott about his thoughts on x, y, and z. It was doing a really good summarization of that. And if we didn't give it who was speaking which words, it wouldn't be able to do that. So I just use this website, Otter.ai.

Wes Bos

You can also use Descript.

Wes Bos

Otter.ai seems to be more for, like, transcribing Zoom calls these days.

Wes Bos

But I signed up a couple of years ago and I have a really affordable plan, so I was just using that. Probably in the future, Whisper will be able to do that, but there are also examples where people use other AI models to detect who the speakers are, and then they feed those timestamps into Whisper, so they use them together. There's a bunch of examples out there. It seems a little bit complicated for me at the moment, so I'm just playing with it. So I just downloaded some transcripts from it.

Wes Bos

There's a moderation API for moderating comments and spam and whatnot. There's an embedding API. This one's really interesting to me.

Wes Bos

It's for measuring relatedness, which I think would be really cool because again, if we give it our entire back catalog of podcasts, Wouldn't it be cool to suggest related podcasts that actually made sense given that it has listened to everything?
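A sketch of how that relatedness measurement typically works: each episode's text is turned into an embedding vector, and cosine similarity between vectors scores how related two episodes are. The tiny 3-number vectors below are made-up stand-ins; real embeddings from the API have on the order of 1,500 dimensions:

```javascript
// Cosine similarity between two embedding vectors:
// 1 means identical direction, near 0 means unrelated.
function cosineSimilarity(a, b) {
  let dot = 0;
  let magA = 0;
  let magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Made-up toy vectors standing in for real episode embeddings.
const loggingEpisode = [0.9, 0.1, 0.2];
const debuggingEpisode = [0.8, 0.2, 0.3];
const cssEpisode = [0.1, 0.9, 0.1];

// A logging episode should score closer to a debugging episode than to CSS.
console.log(
  cosineSimilarity(loggingEpisode, debuggingEpisode) >
    cosineSimilarity(loggingEpisode, cssEpisode)
); // true
```

To suggest related episodes, you would embed every episode once, store the vectors, and sort by similarity to the current one.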

Topic 21 26:17

Embedding API for relatedness

Scott Tolinski

Right. We have all the audio. You have all that text.

Scott Tolinski

You should be able to say which episodes most closely relate, and what might be the most interesting to you.

Scott Tolinski

And when you think about it, you know, I don't know how YouTube does their algorithm. I'm sure it's not doing anything with text in that regard yet, but it does seem like it would be a huge upgrade to be able to scan the contents of a multimedia file to completely understand what it is about and what it's most closely related to. So I talked about summarizing,

Wes Bos

which is kind of cool, the recursive thing. The limit on GPT-4 is 8,000 tokens, so 4 times bigger. And then you can also go up to 32,000 tokens, which is much more expensive per token.

Wes Bos

But then again, you're not paying twice to summarize the stuff yourself. So it's kind of an up and down.

Wes Bos

Let's talk about Langchain.

Wes Bos

Somebody sent it over to me. It's a JavaScript library for working with multiple different LLVMs, large learning

Topic 22 27:38

Langchain JavaScript library

Scott Tolinski

Language? Language learning.

Wes Bos

What does that mean? We can ask,

Scott Tolinski

Yeah.

Scott Tolinski

GPT. What does LLVM mean? Low level virtual machine? No. That's not right.

Scott Tolinski

LLM stands for master of laws.

Scott Tolinski

It's a postgraduate academic degree.


Wes Bos

That's not correct, obviously. An LLVM can be used as a backend for compiling and optimizing machine learning models written in high level languages such as Python, allowing for efficient execution on various hardware targets.

Wes Bos

Additionally, LLVM can be used to accelerate the execution of certain operations in machine learning frameworks such as TensorFlow and PyTorch through just-in-time compilation.

Wes Bos

Interesting. Basically what it is, it's like a library for working with multiple of them. OpenAI is one; Hugging Face is another. There are several of them out there, and it interfaces with lots of them, which is nice. And then they also have handy methods, like summarizing. So there's one called RecursiveCharacterTextSplitter, which will essentially do that chunking for you, so you could just give it your entire thing and then it will summarize it for you. I'm not sure exactly how that works, because I did find I needed to massage the summarization a little bit; it was being very cold initially. And I was like, that's not how Scott and I talk, and I want the show notes to sound like us. So I said, like, be a bit more relaxed. Yeah. Don't be as verbose. And over time, you come up with these 6 or 7 sentences before you tell it to summarize, and you sort of nail the wording and whatnot. And also, it would say, like, Scott and Wes are talking about, and I was trying to get it to summarize from, like, the "I have tried this"

Topic 23 29:54

Prompt tuning takes work

Wes Bos

Wes said he had tried this.

Wes Bos

Correct. Yeah. And I was telling it who's the speaker, and I told it, please use the first person.

Wes Bos

And then it said, as a machine, I cannot do the first person. I got around it somehow, but you really do have to spend some time massaging the prompt

Scott Tolinski

before you get it going. Oh, no. It is interesting the types of things that it refuses to do. Just for fun, I was asking it questions about performance-enhancing drugs.

Scott Tolinski

And I would find that it wouldn't do it. I was asking as an athlete, because I was listening to something, I forget what it was, but they were talking about the types of performance-enhancing drugs that certain athletes are taking to avoid detection. Mhmm. So I wanted to see if ChatGPT would be able to express how to avoid detection by professional organizations, just to see if people could say, hey, I wanna take steroids in professional sports, how can I get around this? And ChatGPT is like, nah, I ain't doing that.

Scott Tolinski

I'm not dealing with anything illegal here. So then I was like, well, let's just say, let's just say you could do that.

Scott Tolinski

You know, it's like, no. No. No. No. I'm like, but well, what if I was a police officer? Yeah.

Scott Tolinski

And I'm just trying to understand a little bit more about how these athletes are getting away with it, and it, like, refused at all steps. So I was just trying to dance around that. That's hilarious. You know what I'm not looking forward to?

Topic 24 31:22

Anticipating future misinformation

Wes Bos

The, like, 5 years from now, where you get, like, a subset of people who are like, oh, you just believe anything OpenAI will tell you? Are you a sheep? You gotta use this x, y, and z one that's been tuned for no bias, and, like, there's been a... Tuned for the truth. Exactly. Yes.

Scott Tolinski

Only the truth has been passed into this. Oh, I am not looking forward to the

Wes Bos

All of the COVID stuff and opinions that went flying in the last couple of years, as soon as this goes mainstream.

Wes Bos

And then people are like, I don't know, there's a lot that is going to hit us at that point in time. So,

Scott Tolinski

Yeah. Buckle up. It's all very heavy in a lot of those regards, but I don't know. You know, there's only so many ways we can say that. Another gotcha, and this is just an interesting thing: make some sort of function

Wes Bos

to save your replies.

Wes Bos

Because if you want to rerun something... For example, I console logged a nested object, and the Node console will not print it unless you console.dir it with depth: null, or hook up VS Code or whatever. It's just a pain in the ass. So I wasted, like, a dollar.

Topic 25 32:24

Tip to save responses

Wes Bos

That's awesome. Honestly, the most expensive console.log I ever had, because I ran this super recursive thing to try to summarize 6 episodes at once, and out the other end came

Wes Bos

[object Object]. [object Object]. [object Object]. [object Object].

Wes Bos

So just make some sort of function, have it write to a file, a JSON file, so that you can go back and review it should you ever need to. That was my little gotcha.

Wes Bos

That's it for working with it. It is really fun to get up and running. If you've got $6 or $7 kicking around, you certainly should try to feed it some sort of info. I'm curious now. Like, I also have all my tweets downloaded.

Wes Bos

What would happen if we fed it those? Yeah.

Topic 26 33:33

Curious about training on old tweets

Wes Bos

Alright. That's it. Thanks for tuning in. If you're making anything with the OpenAI API, let us know at @syntaxfm. We'd love to hear your thoughts on working with it. Yeah. Likewise. I wanna hear what people are doing.

Scott Tolinski

And don't forget to subscribe in your podcast player or drop a review if you like this show.
