625

June 9th, 2023 × #AI#ChatGPT#OpenAI#Future of Work

Supper Club × OpenAI, Future of programming, LLMs, and Math with Andrey Mishchenko

Andrey Mishchenko from OpenAI discusses developing plugins for ChatGPT, training language models, the future of programming and work, and aiming for artificial general intelligence.

Topic 0 00:00

Transcript

Wes Bos

Welcome to Syntax, the podcast with the tastiest web development treats out there. Today, we've got a really good one for you. Today, we have Andrey Mishchenko on from OpenAI, and it kinda had, like, a friend of a friend introduce us. And I don't know a whole lot about him aside from the research I've done, but I thought it would be fun. Like, let's start off with the introduction. I asked ChatGPT to write an introduction for Andrey, and I'm gonna run through it, and we'll see what he thinks of it. So: today on the podcast, we're joined by Andrey Mishchenko, an engineer at OpenAI who has made significant contributions to the development of some of the most cutting-edge AI technologies. Andrey has worked on a range of products at OpenAI, including the development of the GPT-3 language model and the optimization of neural network training and inference.

Wes Bos

He also has contributed to the development of OpenAI's research platform, which provides researchers with powerful tools and infrastructure for conducting large-scale experiments in AI. We're excited to have him on the show today to discuss his work and insights on the future of AI. Welcome. How was that?

Guest 2

That was pretty inaccurate, unfortunately.

Wes Bos

I don't know where I got all that information from. It's funny because I asked it who you were, and it's like, I don't know. And then I was like, he works at OpenAI. And it's like, okay, I got this. And it kinda just made the rest up. Yeah.

Guest 2

Did you try with, like, the browsing plugin enabled, or the Bing plugin? Sorry.

Wes Bos

No. I didn't, actually. That probably would have been much better, because I did go through the web and find out a lot about you, including the fact that you went to U of M.

Wes Bos

But why don't you give us a quick rundown of of who you are and what you do at OpenAI?

Guest 2

I should say at this point, and maybe again later, that I do not speak for OpenAI.

Topic 1 02:30

Joined OpenAI 8 months ago, started plugins project

Guest 2

This is just, like, my own take on all of this stuff. Yeah. I am an engineer at OpenAI.

Guest 2

I joined about 8 months ago, and I started the plugins project, so, like, the third-party plugins project in particular, a few months ago. I'd say that's the biggest thing I've done here. I've actually done a bunch of work on the AI research platform as well.

Guest 2

So it did get that part right. Yeah. Before coming here, I did a bunch of different things. I did a math PhD. I played poker for a year in Zurich, which was fun. Really? Yeah. Okay. So maybe we'll start there. That's kind of one thing I was interested in as I went to your website, and I found that,

Wes Bos

you had done a bunch of algorithms. You had been involved in a bunch of papers around that.

Wes Bos

What specifically was it? Like, something with circles? Circles. Yeah. Circles.

Wes Bos

How how can you explain what that is? Circle packing is

Guest 2

the area that I studied in grad school, and at a high level, it studies the sorts of patterns that you can form with just, like, round discs, circles, if you arrange them in certain patterns. Like, if you start with a pattern, can you put circles in that pattern or not?

Topic 2 03:25

Studied circle packing in math PhD

Guest 2

And if you can, is the circle arrangement for that pattern unique? It's kinda hard without,

Wes Bos

You know, being able to draw it on a whiteboard or something? Yeah. Yeah. I actually remember that years ago, I had to do what's called a box-fitting algorithm, where we had, like, a bunch of keywords. It's kind of like a tag cloud. You know, you want to be able to fit words into specific areas. And I got into it. I was like, man, that's complicated with boxes, let alone

Scott Tolinski

circles. You know? Also, I see that you went to U of M. I'm also a Michigan grad. Went to the school of music, so a little go blue off the top. Go blue. Yeah. Coincidentally, I'm wearing my Blind Pig t-shirt today.

Scott Tolinski

Just totally off the wall. Did not expect that, but, yeah. So happy to have another Ann Arborite. Or,

Guest 2

are you originally from that area, or just went to the school because it was Michigan? I was born in Ukraine, actually, and grew up in New York and went to Michigan for grad school. I do love Ann Arbor, though. I have a lot of very good friends there and go back all the time. You know, the Blind Pig almost closed during the pandemic.

Scott Tolinski

Apparently, it was like a grassroots effort that kept it alive. That's actually where this t-shirt is from, their grassroots effort to keep it up. So big fan. I actually played at the Blind Pig when I was 18, so I have a big attachment to the Blind Pig overall. It's just one of those institutions of my life.

Guest 2

So you said you started at OpenAI when? About 8 months ago, in October, just a couple months before ChatGPT

Scott Tolinski

came out. Wow. And was your first project the plugin project, or did you work on other things before then? My first project was an internal

Guest 2

tool called Code Chat, which you can think of as just, like, a simpler version of the ChatGPT UI that doesn't have a lot of the stuff that you need in prod, like scalability, telemetry, auth.

Guest 2

It's just very, very thin, and it's a a sort of playground for researchers to prototype, new capabilities to be added to the model. So for example, like code interpreter and, browsing, and then also third party plug ins. All of those were, initially prototyped,

Wes Bos

in Code Chat and then sort of moved over to prod. Oh, that's cool. I always wondered, like, how do you even test these types of things? You know, like, if you want the... all of a sudden it can understand JavaScript or Python input. How does that work? Do you wanna explain a little bit about how you implement that into ChatGPT?

Guest 2

Which thing?

Wes Bos

What would be the most fun to talk through? Maybe let's just talk about, like, plugins in general. Or I guess that's something totally different, though, is it?

Guest 2

I think, actually, plug ins is a good one to to talk about. So Okay.

Guest 2

There's, like, different levels of the stack, and I guess I'll just, like, walk through a bunch of them.

Topic 3 06:26

Plugins allow adding integrations to ChatGPT

Guest 2

So there's, like, the actual interface with the model. So the completion model takes strings in and gives strings back out.

Guest 2

And in order to create the, like, conversational interface that ChatGPT has, where there's, like, messages in, like, a structured format,

Guest 2

We take the set of messages that's in the conversation at any given moment, serialize them, so, like, stringify them, basically, somehow, like, in some encoding format. Send that to the model, and the model has been trained on a lot of conversations that are serialized in that format, And it knows how to produce, sort of the next message in string format, and then we deserialize that back, to get, the next message to add to the conversation.
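As a rough sketch of that round trip: the actual serialization format OpenAI trains on is internal, so the role markers and delimiters below are invented purely for illustration.

```python
# Toy version of the conversation <-> string round trip described above.
# The real encoding is internal to OpenAI; these delimiters are made up.

def serialize(messages):
    """Stringify a list of {role, content} messages into one prompt,
    ending with an open assistant turn for the model to complete."""
    body = "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)
    return body + "<|assistant|>"

def deserialize(completion):
    """Parse the model's raw string completion back into a message."""
    return {"role": "assistant", "content": completion.split("<|end|>")[0]}

prompt = serialize([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi!"},
])
# A completion model would take `prompt` and return a raw string, which
# we deserialize back into the next message of the conversation:
reply = deserialize("Hello! How can I help?<|end|>")
```

The key point is that the model itself only ever sees and produces strings; the structured conversation lives entirely in this serialize/deserialize layer.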

Guest 2

So that part that covers, like, just having, like, a text message in, text message out conversation, with the model. But then if you wanna add plug ins, you basically need to give the model access to another syntax that then we know how to parse and interpret.

Guest 2

So the way that plugins actually work is that the model gets shown a function-calling interface. So it gets shown actually a bunch of TypeScript function signatures.

Guest 2

So in, like, a super simple example, it might get shown a TypeScript namespace called todo and a bunch of functions, like add. And, like, the add function will have, like, a todo entry string field, and then delete, which will have, like, a todo ID number field. So the model gets shown this list of function signatures, and then there's a syntax that we've taught it for how to compose a message calling one of those functions.
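A toy sketch of both halves of that: rendering TypeScript-style signatures to show the model, and parsing a function call back out of its reply. The namespace name, the call syntax, and all the helper names here are invented for illustration; the real format OpenAI trained on isn't public.

```python
import json
import re

def render_namespace(name, functions):
    """Render TypeScript-style function signatures the model could be shown.
    `functions` maps a function name to its {param: ts_type} fields."""
    lines = [f"namespace {name} {{"]
    for fn, params in functions.items():
        fields = ", ".join(f"{p}: {t}" for p, t in params.items())
        lines.append(f"  type {fn} = (_: {{ {fields} }}) => any;")
    lines.append("}")
    return "\n".join(lines)

TODOS = {}

def add(entry):
    """A plugin-side function executed on the model's behalf."""
    todo_id = len(TODOS) + 1
    TODOS[todo_id] = entry
    return {"todo_id": todo_id}

def dispatch(model_output):
    """Parse a call out of the model's completion, using an invented
    syntax like todo.add({...}), run it, and wrap the result as a
    message to append back into the conversation."""
    m = re.match(r"todo\.(\w+)\((.*)\)$", model_output.strip())
    fn, args = m.group(1), json.loads(m.group(2))
    result = {"add": add}[fn](**args)
    return {"role": "function", "name": f"todo.{fn}", "content": json.dumps(result)}

spec = render_namespace("todo", {"add": {"entry": "string"},
                                "delete": {"todo_id": "number"}})
msg = dispatch('todo.add({"entry": "buy milk"})')
```

The result message then gets serialized along with the rest of the conversation and sent back to the model, which is the loop described next.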

Guest 2

And then, so, yeah, like, we'll send the conversation so far, serialized, to the model, and then get its response back in string form, and deserialize that and parse it out, and basically read out a function call in the format that we expect, and then translate that and actually make the function call on the model's behalf, add the results of the function call to the conversation, and then proceed

Wes Bos

forward like that. Awesome. Can we talk a little bit about the model? I'm not sure how much of this is your space. You seem like a pretty smart guy, so I'm sure you can talk about it a little bit. But a lot of us developers are just using the API straight away. Right? And then you hear about all these different models.

Wes Bos

How do you actually train this model that we're sending questions to? And how does the model actually understand the questions that we're asking, and have the ability to

Guest 2

reply in natural language? I mean, I'm not an expert in this, so, yeah, I probably am not gonna be able to say anything you haven't heard before.

Guest 2

Okay. The way that I think of it is, like, fundamentally, it's a string completion model. So you could think of it as, like, it gets a bunch of words, and then it tries to predict what the next word is. So if it gets "roses are red, violets are," and then you ask it to predict the next word, it doesn't just give you one word back. It gives you, like, a distribution over what the most likely next words are gonna be. And in that case, it's just gonna predict "blue."
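A toy model of that distribution idea, using simple bigram counts over a made-up corpus instead of a neural network:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then turn the counts into a probability distribution over next words.
corpus = "roses are red violets are blue sugar is sweet and so are you".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

dist = next_word_distribution("are")
# "are" is followed once each by "red", "blue", and "you" in this corpus,
# so each gets probability 1/3
```

A real model does this over tokens with billions of learned parameters instead of a lookup table, but the output is the same kind of object: a probability distribution over what comes next.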

Topic 4 09:29

Models trained on predicting next word

Guest 2

And that's basically it. That's, like, one of the two levels of training that there are. There's fine-tuning, and then something called RLHF, which I can tell you a little bit about afterward if you want. But most of what I think is viewed as intelligence in the model comes from the fine-tuning phase.

Guest 2

And I guess it's just sort of, I don't know, a miracle, or I don't know what you call it, that, like, simply that very simple task of learning to predict the next word is enough to get the model to implicitly learn

Wes Bos

all sorts of facts about the world, about the way that various words actually, like, interact and correlate, and so it ends up picking up concepts as it goes through. That's wild, that the answer to that is it kinda just learns, you know, as it predicts the next word. And that process is all done through, like, a neural network? Is that

Scott Tolinski

the idea here? Is that the prediction is being done through a neural network? Yeah. So there's, like, a giant neural network

Guest 2

with billions of parameters, and you feed the initial chunk of text in, and then

Wes Bos

it sort of spits out the next word, and then the next one, and then the next one. And that requires, like, extremely heavy computing, both for it to learn as well as for it to process it in real time. Is that true? Yeah. That's right. Yeah. And have you ever dipped into any of what that looks like? Like, I'm assuming it's a data center somewhere with a bunch of GPUs running.

Guest 2

I haven't personally been to the data centers where this stuff runs. I, like, SSH into the machines.

Wes Bos

That's pretty cool. You could probably play, doom or something on those pretty smoothly, I bet. Yeah.

Scott Tolinski

Do you wanna talk about this RLHF and what that is? So,

Guest 2

the basic way that RLHF works is that you train two models. One is called a reward model, and one is the actual model you're trying to train. And what the reward model does is, given, like, a completed string or a completed set of messages, it says how good or how bad that rollout is. It just tries to estimate that. And the way that the reward model gets trained is with human feedback. That's what the HF in RLHF stands for.

Guest 2

So, basically, the base model will get used to generate conversations, like, over and over again, for the same initial prompt. And then human raters will be asked, which of these do you prefer? And those ratings feed into the reward model. They, like, are used to train the reward model.

Guest 2

And then, so, what this reward model learns is what humans like or what humans don't like, basically, which the string completion model doesn't really learn. Like, just completing strings doesn't necessarily get you stuff that people like to hear or that is going to be helpful. Like, the string completion model is trained on the text of the Internet, and there's plenty of text on the Internet that people don't actually really like. Yeah.
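One common way those pairwise preference ratings become a training signal is a Bradley-Terry-style loss, sketched below; this is a generic textbook formulation, not necessarily the exact objective OpenAI uses.

```python
import math

def preference_loss(reward_preferred, reward_rejected):
    """Pairwise preference loss for a reward model: small when the model
    scores the human-preferred completion above the rejected one,
    large when it gets the ordering wrong."""
    margin = reward_preferred - reward_rejected
    # Negative log of the sigmoid of the score margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model agrees with the human rater -> small loss:
agree = preference_loss(2.0, -1.0)
# Reward model disagrees -> large loss, pushing the scores to flip:
disagree = preference_loss(-1.0, 2.0)
```

Minimizing this over many rated pairs is what teaches the reward model the "what humans like" signal described above, which is then used to steer the main model.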

Topic 5 12:32

Models don't learn what people like without human feedback

Wes Bos

I bet. Okay. So it takes feedback. And where is that feedback generated from? Is it just things people find helpful? Like, if I've uploaded something on Stack Overflow, would it take that into account, or is it more like a thumbs up at the end of a chat window? I say this was a good input or this was a bad input? This is a,

Guest 2

like... we have labelers that we contract to do these evaluations for us. Okay. We collect that data. We, you know, could, for example, also use the thumbs up, thumbs down in the ChatGPT UI. Mhmm. I'm not sure whether I'm allowed to talk about whether we do that or don't do that. Oh, yeah. Yeah. Sure. Yeah. I was gonna ask... I'll refrain from asking. I was gonna ask if the thumbs down was wired up to nothing, like the elevator button. And, you know, like,

Scott Tolinski

on YouTube, the thumbs down button is, like, not wired up to anything. It's just for people to feel good about clicking thumbs down. That's not true, is it? Is that really true? I didn't know that. It sounds like a fake news rumor. I think it is, probably.

Scott Tolinski

But, you know, what they did is they stopped calculating, essentially, thumbs downs. I'm sure their algorithm behind the scenes utilizes them in some sort of way, but they only started showing positive feedback. So it's, this now gets, you know, 10,000 thumbs ups, and we're not going to let you know if there are thumbs downs for this at all, where you used to be able to see the numbers back and forth, and people would make all kinds of plugins for that. So let's talk about plugins specifically. Like, maybe explain, like, ELI5 for our audience, what plugins mean for ChatGPT, and then we can get into a little bit more of, you know, how they can be used and things like that.

Guest 2

Yeah. So the third-party plugin system is a mechanism for people to bring their product to ChatGPT and build integrations that users would access from the ChatGPT UI.

Guest 2

So, basically, they add an integration into ChatGPT.

Guest 2

The very, very basic way of thinking about it for our audience, since they're web developers, is that to create a plugin, all you have to do is stand up a REST web server.

Guest 2

And what the plugin system does is, essentially, you tell it what the API of your REST server is, and it gives the model the ability to, like, call your endpoints with whatever parameters your endpoints require. So if you have, like, a super simple REST API for, like, a to-do app, say... I keep going back to to-dos... like add to-do, delete to-do, then the model basically has access to that API.

Guest 2

Or if you're, like, weather.com, maybe you have an endpoint to, like, get the weather based on geospatial location, and then the model would see that API and basically be able to call it.
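Concretely, a plugin describes itself to ChatGPT with a small manifest served from the developer's domain, which points at an OpenAPI spec for that REST server. The field names below are from memory of the plugin documentation and should be checked against the official spec; the URL is a placeholder.

```python
import json

# Rough shape of the manifest a plugin serves at /.well-known/ai-plugin.json.
# Field names recalled from the plugin docs; verify against the official spec.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO Plugin",
    "name_for_model": "todo",
    "description_for_human": "Manage your to-do list.",
    "description_for_model": "Plugin for adding and deleting a user's to-dos.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

manifest_json = json.dumps(manifest, indent=2)
```

The `description_for_model` text is what the model reads to decide when your API is relevant, so it does a lot of the work in practice.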

Guest 2

So that's really the very, very basic

Wes Bos

capability that plugins expose. It also handles things like auth. Man, I'm looking at it right now. So Expedia is one of them. And I've got a flight coming up in a couple of days. And Expedia does not have anywhere on their website the ability to export to Google Calendar, or to export your itinerary as, like, a calendar invite, or log in with Google or anything like that. Sometimes it shows up in your Gmail and you can add it like that, but otherwise, it doesn't work very well. And now I'm thinking, like a sucker, I manually went in and added all my flights to my calendar with all the different time zones. But wouldn't it be awesome if you gave it access to Expedia? I'd say, hey, pull a list of my upcoming flights and then format them as, whatever the calendar ICS, whatever the calendar extension is. That kind of stuff would be so helpful, and probably would've saved me 20 minutes. Yep. Let's talk about the actual, like, code behind all this stuff. So when you're building day to day, what languages are you actually using to build this stuff? Mostly Python, then React on the front end. Okay. It seems that almost everything in the AI space is Python based. Is that because Python is very good at, like, data science and going fast, or why is Python the language for the AI space? It's because that's what people use and what people

Topic 6 16:39

Mostly Python and React for development

Guest 2

love. Or, I don't know if they love it, but it's what everyone uses. So all the major ML frameworks have Python client implementations, and it's a great language to get started on and spun up on. It's, like, very easy to get started. It integrates really well with C, so for the parts of your code that you do need to be performant, you can, like, escape hatch out

Wes Bos

to C or another compiled language. That's cool. We have somebody coming on the show in a couple weeks; she's working on, like, a new Python interpreter.
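That C escape hatch can be as direct as calling into an already-compiled library with the standard library's ctypes; here, libc's strlen. Passing None loads the running process's own symbols, which works on Linux and macOS.

```python
import ctypes

# Load the symbols already linked into the running process (dlopen(NULL)),
# which includes libc on Linux and macOS.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# This call executes compiled C code, not Python bytecode.
length = libc.strlen(b"hello")  # 5
```

Most real ML code takes a higher-level version of the same path: the hot loops live in compiled C/C++/CUDA, and Python just orchestrates them.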

Wes Bos

Oh, no. Sorry. It's from a company called Modular, and they're working on, like, basically a compiled version of Python, so that you can still write all of your ML stuff, but it will run as assembly code instead of actually having to be interpreted as Python. Seems pretty nifty. I'm excited to have them on. Yeah. That's cool.

Guest 2

Probably not possible to be fully language conformant.

Guest 2

No. Because... or why is that? Python lets you do so much dynamically at runtime with, like, the things in your program that appear to be statically defined, which for performance reasons you would want to compile into your binary. Like, you can edit functions and class definitions at runtime, technically, or even, like, module symbol tables. Yeah. It just seems like if you try to compile it, and you actually wanted to support the standard, you would end up having to, like, run an interpreter inside of your compiled binary. Yeah. You're right. You're right. We actually had that.
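The runtime dynamism he's describing looks like this; it's completely ordinary Python, and exactly the kind of behavior an ahead-of-time compiler would have to preserve:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
before = g.greet()  # "hello"

# Rebind the method on the class at runtime; even instances created
# before the patch pick up the new behavior on their next call.
Greeter.greet = lambda self: "howdy"
after = g.greet()  # "howdy"
```

Because any call site could be patched like this at any moment, a compiler can't safely freeze `Greeter.greet` into a direct jump without either restricting the language or carrying an interpreter along.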

Wes Bos

We were talking about that. We're mostly JavaScript devs, and we had that the other day, where people were complaining that, like, if you wanna ship JavaScript, at least right now, you have to ship the entire kitchen with it. Right? The whole runtime, to actually make it work. And the entire TypeScript kitchen as well, right, not just the JavaScript kitchen, if you want to be able to convert it from TypeScript. People are like, oh, why can't you compile it to assembly code? And it's because of reasons like that. You can literally monkey patch a function in the middle of running it and change what the function is. Probably shouldn't, but that's the dynamic nature of it. Doesn't TypeScript get stripped out at compile time? Yeah, the TypeScript does, but there is a runtime called Deno that we often use, which will

Scott Tolinski

strip it out at runtime, rather than you having to compile it out. But there's also source maps and stuff. I mean, you're not shipping those to the client, but you're shipping them in libraries and stuff. Now, because you mentioned that you started right before ChatGPT was available to the public... was that what you had mentioned? About 2 months before? When that, like, time period was going on, did you have any idea that when it was released, it was going to blow up the way it has, in a popular culture sort of way?

Guest 2

No. I don't think anybody did actually,

Wes Bos

Even internally. That's wild to hear, because, like, I'm sure, like, the AI space has been going on for many, many years. Right? And then it just seems like us normies are starting to get excited about it because it's accessible. It's cheap for us to get into it. I can write a fetch function and hit the API and be an AI developer at the end of the day.

Wes Bos

What was it like internally when this stuff started

Topic 7 20:05

Team working hard when ChatGPT blew up

Guest 2

to just blow up? Was it kinda like all hands on deck to be able to make this thing work? Well, when it was blowing up, the, like, the team that was running it was definitely all hands on deck to, like, keep up with the growth.

Guest 2

Mhmm.

Guest 2

And, yeah, I would say it was definitely a change to the culture internally when suddenly we had this, like, extremely popular product that, like, many, many people around the world were using. Like, we weren't really a consumer product company prior to that. Mhmm. And, yeah, there was a ton of attention. Like, when I joined the company, no one that I talked to outside of, like, the AI and computer science community really knew what OpenAI was, or at least, like, many people didn't. And now, like, everyone does, and that was a big change. Yeah. It really did seem like there were a few

Scott Tolinski

APIs that people had at least heard of. Or even, you know, ChatGPT is making the rounds as ChatGPT, and maybe people aren't even talking about OpenAI as a company necessarily, or as the parent of all this thing, besides the website you go to for access to ChatGPT or something.

Scott Tolinski

Did your background in math, like, really lead you into where you are in your development career, so to speak? Because I went to school for performing arts, essentially, at Michigan, and sure enough, I talk into a microphone all day, but I'm a developer. Right? And, yeah, for me, I wouldn't say necessarily that my degree has led me here, but I use my skills all the time. Is your degree and your work in the math field actually one of those things that has carried you through into your development career? I would say that,

Guest 2

like, a rigorous math education is really good for, like, thinking rigorously and carefully through technical stuff. I don't know if I would always have been that kind of person whether or not I had done the math degree. I think a lot of what math teaches you, actually, is vocabulary.

Guest 2

And, actually, this ties back in an interesting way, in my opinion, to to language models.

Guest 2

If you think about programming, right, like, why do we write functions and classes and, like, higher-level abstractions? It's because if you were just, like, working with the base, base, base level abstraction all the time, in one big flat imperative loop, that would be, like, too much complexity to manage. So we, like, bundle abstractions that are, you know, useful abstractions in the context of where we're working, and then we think at a higher level in terms of those abstractions.

Scott Tolinski

And that's what math is all about. And I think about that all the time, anytime I watch videos on higher-level math concepts, where it's just abstracting away math into, essentially, symbols. And before too long, I have no idea personally what I'm looking at, because the highest I went in math was calc 3. So for me, I'm looking at those symbols just like, I don't know about this stuff. So, you know, it does really resemble programming in a lot of ways and, like you mentioned, abstractions.

Guest 2

Yeah. I don't know. I just like thinking in abstractions, though. Like, that's why I liked math, and that's why I like programming. I don't really see a lot of, like, direct carryover, if I'm really being honest with myself.

Wes Bos

Mhmm. Oh, yeah. Where did you learn to program, then? Did you also take a comp sci degree, or did you just pick up Python? Or is programming part of a math degree?

Guest 2

These days, it probably is kind of part of a math degree, but I finished my PhD in 2012, so kind of a long time ago now. Okay.

Guest 2

Actually, when I went to high school, I went to a magnet school in New York. I started high school in 2000, and this was before high schools really had CS departments, or at least it was pretty uncommon. Now I guess they, like, all do. But actually, we had this guy, Mike Zamansky, who is awesome, who started a CS program at that school, and he was, like, a former Goldman software engineer who, like, rage quit Goldman and decided to teach high school computer science. So he had us doing things like building ray tracers and writing web servers and building, like, shells, and we were maintaining the school's computer networks, like all these Debian boxes. So I was doing all that stuff even back in high school and getting into programming and computers. And then when I went to school, I didn't really study CS.

Guest 2

I, like, took a math class and really liked it, a particular math class called set theory, and I just said, oh, this is awesome. And then, on the spur of the moment, I just put math instead of CS down on the line where it says major, on

Wes Bos

the entry form to the school. That's wild. I'm curious what the demand for math majors will be now that a lot of this AI stuff is really starting to pick up, because I talk to folks at random companies, and they're starting to hire people that know this type of stuff, because you can only npm install so many algorithms before somebody has to make a middle-out algorithm by themselves or something like that. You know? People in,

Guest 2

tech companies, or at least many tech companies, like to hire math majors, because you don't study math to, like, get rich. Generally, the people that are studying math are the people that, like, really love math or, like, thinking in abstract terms or solving technical puzzles, and it turns out a lot of that is very transferable to engineering work, software work, but also ML work.

Guest 2

So I think my friends from the math programs that I've been in, they seem to do okay

Wes Bos

in the tech sector. What other kinds of stuff are math grads working on?

Topic 8 25:49

Math teaches thinking in abstractions like programming

Guest 2

People used to go into consulting a good amount, like McKinsey, BCG. I think lately that's been a little bit less the case.

Guest 2

These days, it's really a lot of people going to places like, you know, Google and other tech companies. People do go to places like Goldman or, like, other hedge funds, on Wall Street. A lot of people go into things like teaching.

Guest 2

Obviously, a lot of people try to be professors.

Wes Bos

Yeah. But it's really hard. It's, like, insanely competitive. And, like, a lot of these big companies are hiring math folks so they can create algorithms or processes to find stuff that we would otherwise not be able to find. I know, like, a lot of my friends from school went into working at KPMG and stuff like that, and they would evaluate big spreadsheets and, just by their own smarts, try to look at the data and then make recommendations. And I'm thinking, like, boy, some of those folks are gonna be out of luck, because

Guest 2

AI is really good at that.

Scott Tolinski

Yeah. Yeah. Like, you can give it lots of data. Yeah. You can upload, like, a spreadsheet of

Wes Bos

your data, and it will find, like, occurrences. Like, what will that do? I don't know. I guess it'll find, like, trends and whatnot within the data. And I think some of my poor friends working for these consultancies are gonna be out of a job.

Guest 2

Like, I do think learning to leverage the model for what it's really good at, and learning to continue to do the stuff that it's not great at and that you are really good at, is gonna be an important thing as we transition over the next few years. So, like, I definitely use Copilot heavily, for example, when I'm writing code. And it's changed the way I write code, and it's, like, changed the things that I focus on.

Guest 2

It hasn't made it so that I don't have to write code anymore. Actually, I probably spend a smaller portion of my time actually with hands on keyboard, like, typing code in, and more of my time, like, thinking about how I want the code to be structured, and then, like, guiding Copilot to fill it in. And I think with your example with spreadsheets, it's gonna be similar. Like, getting an answer to, like, what are a couple of patterns in this spreadsheet, suddenly that'll be a fairly quick thing to do, but then there'll be, like, way more spreadsheets. There's always more data. It's like, where do you look? Where do you focus your attention? What questions do you ask? The work just sort of shifts. Yeah. That's one thing we were talking about as well. A lot of people are, like... I'm joking about, are we gonna be out of a job? But

Wes Bos

certainly the type of work is going to be changing, given that we're not figuring out how to center a button anymore. But now we're freed up from that work, and we have to think about larger, more complicated things at a larger scale. So that's kinda interesting. I also wanted to ask you... but before that, do you guys worry you're about to be out of a job? No. I honestly see this stuff as a massive, like, booster pack in terms of what we can get done, because a lot of people are looking at it like, okay, let's just stay where we are with technology, but AI is gonna do that for us, and then we're just going to have a 2-day work week or something like that. And unfortunately, that's not how it works. If we can get this AI to do a lot of the work that is sort of the heavy lifting and whatnot, then we are freed up, and we have the superpower of being able to do more complicated stuff. And I feel like we're gonna see a major jump in technology

Scott Tolinski

in the next couple years. I'm not sure what you think about that. Yeah. And I personally, too, like Wes... you know, I spend more time thinking about conceptual things rather than the technical details. Like, I've been working in NoSQL databases for so long now. And now that I have a MySQL database as my main stack, I don't have to worry about the fact that I haven't been working in MySQL for so many years. I can find the answers to the questions I have of what I'm doing at any particular time way faster, and worry about the conceptual things instead of the nitty-gritty of each individual thing that I'm doing. And I'm still having to generate the big questions, rather than it just outputting, you know, my entire app for me. I'm still having to direct

Guest 2

exactly what I wanted at all given times. You don't worry that it's gonna get better and better, and that even the things you're doing now,

Wes Bos

you won't have to do anymore? Yeah. That's the question. That's what people say: I'm a prompt engineer. And it's like, well, you know who else is a good prompt engineer? The AI.

Wes Bos

Yeah. Right. Exactly. You know, it's this circle of, well, if you can do it, can't the AI do that as well at a certain point? I don't know. Like, do you foresee any time where we are going to be out of work, or maybe any industries that are gonna be decimated?

Guest 2

I should say at this point, and maybe again later, that I do not speak for OpenAI. Yes.

Guest 2

This is just, like, my own take on all of this stuff. I think that, like, there definitely will be categories of labor that the AI will just do for us.

Guest 2

And, like, you know, when computers were first a thing, there were these rooms full of people, you know, hacking on punch cards with, like, their bare hands. Right? And, I'm like, we don't do that anymore.

Guest 2

We don't do arithmetic anymore because computers can do arithmetic for us. And there will be things like that that LLMs also automate. And I don't really wanna speculate about exactly what those are. And, of course, just the day-to-day jobs will shift over time.

Guest 2

What will people do as more and more work gets automated? I don't know. I think it's a little bit unclear.

Topic 9 31:30

Future could be crafts that don't scale like gardening

Guest 2

I think an optimistic future that I've personally started to think about is that people will start focusing on crafts that don't scale. Like, build my own home the way that I want it to look. Tend to a garden, cook really good food, you know, brew nice beer, serve it to people, things like that. Things that might not make economic sense to fully automate away. Do woodworking, write a poem. I guess GPT-4 is pretty good at writing poems. Oh, okay.

Wes Bos

Yeah. Well, that's really interesting. I've made the joke before that there's gonna be, like, Etsy coders: I'll code it by hand, you know? Like, I'm not using any of that. Exactly. Handcrafted, like, with a beard and all that good stuff. But it is actually interesting. It could be. Yeah. Focus on stuff that doesn't scale. I've never actually thought about that before. As someone who works in the space, does it

Scott Tolinski

get tiring to hear the "LOL, people will lose their jobs" or "everyone's job is toast" kinda deal? Like, is that tiring to hear over and over again? Yes and no.

Guest 2

Honestly, I don't personally follow the news. Like, I don't read Hacker News or Twitter.

Guest 2

So I guess I just don't hear the volume of that that someone would if they were following those things actively. Like, I don't read Reddit, for example. It's probably smart on all accounts. That's good. We get a lot of questions from people who are 1 to 2 years in

Wes Bos

on their web development career. And they're like, should I ditch this right now? Should I go learn plumbing or farming or something like that? And it's hilarious, because my brother-in-law is a farmer, and this stuff is shaking that up just as much as it is your HTML over there. So I don't think there are very many industries that are not going to be affected by this. How is farming being shaken up? Just curious about that. There are so many things. Like, there are lasers, freaking laser beams, that can shoot weeds, instead of just spraying pesticides.

Wes Bos

Instead of spraying everything with a pesticide, it can have a camera, detect what the weeds are, and just, like, zap them out of the air, because you have 6,000,000,000 images of what a weed looks like. It's able to detect the weeds and quickly zap them. So there's kind of interesting stuff like that. And then there's also the self-driving robots. These tractors are like $1,000,000, and they're huge. Like, I don't know, 50 feet wide.

Wes Bos

And then they're really heavy and it impacts the soil. You have lots of compaction.

Wes Bos

So they're saying, like, what happens if we just have 20 of these tiny little robots that do the farming instead of 1 major one? You know, there's a lot that is happening, and they already have GPS-guided systems that will plant

Guest 2

their rows perfectly straight. So there's a lot of shakeup in that industry as well. At the same time, farming is an interesting example, because it used to be, you know, 70, 80, 90% of the workforce, and now it's really a tiny portion. Like, in some sense, farming has already been shaken up by technology.

Wes Bos

Yeah. Yeah. It's true. It's been around for a while.

Wes Bos

What do you think about, like, the 2-year curve of this type of stuff? Do you foresee it going at the pace that we're currently seeing? How quickly things have gone, it feels like the last 6 months to a year has been a whirlwind. Is that gonna keep going at that pace? Is it gonna level off? Do you have any idea? I think the future is uncertain.

Guest 2

Yeah. I do think there's a lot of stuff that's gonna come in the wake of people building

Scott Tolinski

better plug-ins, or, like, tool integrations, basically. Yeah. Do you foresee the plug-in interface being a big, big shift in how people use these tools? Again, not speaking for OpenAI,

Guest 2

but just for myself.

Guest 2

If I think about, like, what is the interface that I want for a large class of interactions that I do with my mobile device or my computer, I would like to just be able to speak to my phone and say, I'm having a party tonight.

Guest 2

Can you make sure there's a large pepperoni pizza and, like, a 6-pack or whatever at 7 o'clock? And I want the phone to just respond to me and say, okay, I know that you like this pizza place.

Guest 2

Here's the order. Is that okay? And I say yes. Okay. Basically, like an Alexa-but-it-works, or a Siri-but-it-works, kind of interaction.

Guest 2

And plug-ins, it's basically just the interface that allows people to build those kinds of interactions with the model. Like, if you're a food delivery company, you can expose that interface of get menu, add item to order, finalize order.
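A minimal sketch of the kind of plugin surface being described here, for a hypothetical food delivery service. All of the names (`FoodDeliveryPlugin`, `getMenu`, `addItemToOrder`, `finalizeOrder`) are invented for illustration; this is not OpenAI's actual plugin API.

```typescript
// Hypothetical plugin surface for a food delivery service. The names and
// shapes here are invented for illustration, not OpenAI's real plugin spec.

interface MenuItem {
  id: string;
  name: string;
  price: number; // cents
}

interface Order {
  items: MenuItem[];
  finalized: boolean;
}

// The formal operations the model is allowed to invoke on the user's behalf.
class FoodDeliveryPlugin {
  private menu: MenuItem[] = [
    { id: "pep-lg", name: "Large pepperoni pizza", price: 1899 },
    { id: "six-pack", name: "6-pack", price: 1299 },
  ];
  private order: Order = { items: [], finalized: false };

  getMenu(): MenuItem[] {
    return this.menu;
  }

  addItemToOrder(itemId: string): Order {
    const item = this.menu.find((m) => m.id === itemId);
    if (!item) throw new Error(`Unknown item: ${itemId}`);
    this.order.items.push(item);
    return this.order;
  }

  finalizeOrder(): Order {
    this.order.finalized = true;
    return this.order;
  }
}
```

The "party tonight" request then becomes a short sequence of these calls: `getMenu`, a couple of `addItemToOrder` calls, then `finalizeOrder`.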

Guest 2

And it's still early days, but that's the direction I see things going, just because that's the interface that I think users will want; that's the interface that I would want. And, technically, I think with the LLMs getting as good as they are now, that's actually buildable. Like, that's not a very intellectually demanding task if you think about it. It's really just teaching the model a few very basic flows of how it should navigate that kind of an interaction with the user, that kind of a conversation. What about

Wes Bos

that thought, but on programming? Do you think that we will still be writing Python and JavaScript to build these types of things, or do you think that we're gonna be writing some sort of higher-level abstraction? Because right now, a lot of these tools are just, we talked to the guys from GitHub Copilot a couple of days ago, and they said these are just bolted-on solutions: this is how we do things now, how do we add AI into it? But then they're also thinking about, like, what if we did something totally different? Do you foresee that ever happening with code?

Guest 2

Well, when you're thinking about how to make it easier to write a program, like, you want your program to do x, y, z thing, how do you implement it? There are 2 angles you can attack from. One is you can increase the quality of your abstractions to make the program just easier to write, so you can build higher and higher level abstractions. The web sees a ton of this. Right? Like, we have TypeScript on top of JavaScript. We have React on top of TypeScript.

Topic 10 37:35

Higher level abstractions still need lower level code

Guest 2

And then you have, you know, React libraries on top of that.

Guest 2

Or the other thing you can do is Have an engine that, like, auto generates the code for you.

Guest 2

And working on those 2 things is a little bit orthogonal. So, like, I think we're constantly trying to make our abstractions better and better. Like, it's way easier to just build a basic web page with a button that does something when you click on it today than it was even 10 years ago. And in that question of, are we still gonna be writing JavaScript and Python, it's sort of implicit. Like, what you're really asking if you ask whether we are gonna keep writing JavaScript and Python is, can we make our abstractions even better? Mhmm. And on the answer to that, I'm a little bit more skeptical. Like, there are a lot of companies trying to do this with no-code or low-code stuff, and that's basically what that angle is.

Guest 2

And I think that there's a reason that very few really professional enterprises use no-code solutions for their production software. You actually wanna have your hands on all of the low-level knobs, like, really tune the behavior of your application.

Guest 2

What the LLM does is allow you to attack from the other side: making the process of auto-generating that code that can't be abstracted easier. Like, you need to spell it all out. Mhmm.

Guest 2

But it is pattern-matchy enough that a pattern-matching engine like the LLM can do a lot of work helping you auto-generate it. So, yeah, TLDR, yes. I think languages are still gonna be around, because it's nice to specify things formally. The LLM will continue to get better at helping you

Wes Bos

use programming languages. That's good. And do you foresee a future where AI will give us more expected outputs? Like, in math, 1 plus 1 is always 2. In programming, we have pure functions. And part of the frustrating thing for developers working with these things is you don't always get the same output.

Wes Bos

Do you foresee a future where we will be able to get reproducible outputs given the same inputs?

Guest 2

Well, you can do that now, actually. I'm not sure whether that's possible with OpenAI's public API, but you can configure the model to always give you the same thing back. I think the randomness is actually a feature, not a bug. Oh, okay.

Wes Bos

Is that what is that what seeds are?

Guest 2

Yeah. Seeds and, like, temperature.

Guest 2

So, like, what the model actually produces, in its string-to-string completion form, isn't the next string. It's a probability distribution over the next string, or the next word, say. The next token, really.

Guest 2

So the next token could be anything, but this token is more likely than that one. So, like, what is the likelihood of each of the possible next tokens? And then you could either sample according to that distribution, or you can just take the most likely one. Oh, I see. So it is

Wes Bos

the mathematical output is always the same. It's just that the fact that it's different every time is a feature of

Scott Tolinski

people wanting that to be. That's more natural. Right? Yeah. Exactly. Natural language.
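That sampling step can be sketched in a few lines. The distribution below is made up, and real models work over tens of thousands of tokens, but the mechanics are the same: greedy decoding is deterministic, while temperature sampling is where the variety comes from.

```typescript
// Toy next-token distribution (invented numbers for illustration).
type Distribution = Record<string, number>; // token -> probability

const nextToken: Distribution = { " pizza": 0.6, " beer": 0.3, " kale": 0.1 };

// Greedy decoding: always pick the most likely token. Same input, same output.
function greedy(dist: Distribution): string {
  return Object.entries(dist).reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0];
}

// Temperature sampling: sharpen or flatten the distribution, then draw from it.
// p^(1/T), renormalized, matches softmax(logits / T); T near 0 approaches greedy.
function sample(dist: Distribution, temperature: number, rand = Math.random): string {
  const weighted = Object.entries(dist).map(
    ([token, p]) => [token, Math.pow(p, 1 / temperature)] as [string, number]
  );
  const total = weighted.reduce((sum, [, w]) => sum + w, 0);
  let r = rand() * total;
  for (const [token, w] of weighted) {
    r -= w;
    if (r <= 0) return token;
  }
  return weighted[weighted.length - 1][0];
}
```

Fixing the random draw with a seed makes sampling reproducible too, which is the "seeds and temperature" point.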

Scott Tolinski

You had mentioned, you know, utilizing these tools for what they're good at. But before we wrap up or anything like that, I did want, you know, for the audience, people who might not be that up on the latest or deep into this stuff: what do these things do better than anything else? Language and pattern matching. So just, yeah, pattern matching?

Guest 2

Really, it's language.

Guest 2

The way I think of it, the LLMs are gonna be useful for a lot of stuff, but they smooth the transition between natural language and formal language.

Topic 11 41:40

Randomness in outputs is a feature

Guest 2

So that's, like, why the LLM is able to write code for you. But, also, if you have a DSL that exposes, like, the set of functions to control the lights in your house, that's a formal language.

Guest 2

Right? In order to use that interface, you need to call the functions syntactically correctly. The LLM can just do that, and that's the plug-in system. And you can imagine, if you're, you know, Microsoft and you have Excel, you can expose some formal DSL that controls Excel: like, how to configure the function that's gonna be computing the value of some cell, or, I don't know, resizing a column or something like that. And then, in response to a user command given in natural language, actually execute the correct formal language instruction to make that happen, and do it in a way that's actually reliable enough for consumers.
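A sketch of that bridge from natural language to a formal DSL. The spreadsheet operations here (`setCellFormula`, `resizeColumn`) are invented, not Excel's real API; the point is that the host validates every model-emitted call against the DSL's grammar before executing it.

```typescript
// Invented spreadsheet DSL: the structured calls a model might emit in
// response to a natural language command like "sum column A into B2".
type DslCall =
  | { op: "setCellFormula"; cell: string; formula: string }
  | { op: "resizeColumn"; column: string; width: number };

const CELL = /^[A-Z]+[0-9]+$/; // e.g. "B2"
const COLUMN = /^[A-Z]+$/;     // e.g. "C"

// Never trust the model's output blindly: check it against the grammar first.
function validate(call: DslCall): boolean {
  if (call.op === "setCellFormula") {
    return CELL.test(call.cell) && call.formula.startsWith("=");
  }
  return COLUMN.test(call.column) && call.width > 0;
}

function execute(call: DslCall, sheet: Map<string, string>): string {
  if (!validate(call)) {
    throw new Error(`Rejected malformed call: ${JSON.stringify(call)}`);
  }
  if (call.op === "setCellFormula") {
    sheet.set(call.cell, call.formula);
    return `Set ${call.cell} to ${call.formula}`;
  }
  return `Resized column ${call.column} to ${call.width}`;
}
```

The model only ever proposes `DslCall` values; the host owns the state and rejects anything that doesn't parse, which is what makes this reliable enough for consumers.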

Guest 2

To me, that's the immediate superpower of the LLMs.

Guest 2

Now, the real goal of OpenAI is to build AGI. LLMs are obviously gonna be an important part of that, and I think there's some debate over whether you'll need more than just a pure language model. Or, like, if you keep making the language model bigger and bigger and give it more and more data, can that scale all the way up to AGI?

Wes Bos

Can you explain what AGI is?

Guest 2

AGI stands for artificial general intelligence, and we have a definition of that in our charter. But off the top of my head, I think it's defined to be an autonomous system that's as good as or better than humans at all economically relevant tasks.

Topic 12 43:13

AGI is AI better than humans at all tasks

Guest 2

So, basically, an autonomous system that is better than any programmer at making some website for some company, or analyzing, you know, a stock portfolio. Yeah. Diagnosing a disease.

Scott Tolinski

On a personal level, what's your favorite AI-based tool or project going on right now? Other than plug-ins, which everyone should go and use. Other than your own stuff. Yeah. I like Copilot a lot. Copilot's definitely changed the way I code, as I said. Definitely. I just got access to the Copilot chat this morning. I think they announced it this morning at Microsoft's dev event.

Wes Bos

It's wild. The contexts are getting, actually, that's something we haven't even talked about: the context, the number of tokens that you can give it, is getting larger and larger. And the new chat is able to, they haven't said what it is, but it feels like it's reading more of your open files and passing more data to it. Like, I was able to get it to give me a response

Wes Bos

that was about 7,000 tokens, I think, which is still under GPT-4's limit, but pretty large. It gave you a response that was 7,000 tokens? I'm pretty sure, yeah. I was trying to measure it. I said: generate some types, give me a list of people. Each person has a job. Every job will have a manager, a start date, maybe an end date. Give me types, 50 or 20 examples, and write a library for it with documentation. And it went through the whole thing. And I tried pushing it up a little higher, and it started to just do dot dot dot, insert your data here. But I was pretty impressed with it.
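The actual Copilot output isn't shown in the episode, but the types that prompt describes (people, jobs, managers, start and end dates) would look roughly like this; the names and the helper are illustrative guesses.

```typescript
// Illustrative sketch of the types the prompt describes; not the real
// Copilot output, which isn't shown in the episode.

interface Job {
  title: string;
  manager: string; // every job has a manager
  startDate: Date; // every job has a start date
  endDate?: Date;  // maybe an end date
}

interface Person {
  name: string;
  job: Job;
}

// One helper in the spirit of "write a library for it with documentation":
// a job is current if it has no end date, or its end date is in the future.
function isCurrent(job: Job, now: Date = new Date()): boolean {
  return job.endDate === undefined || job.endDate > now;
}
```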

Wes Bos

It's cool. Something in your real life: it must be wild working in this every day and then having to go back to normal life. I've called people on the phone, and it's like, press 1 for x, y, or z, press 2 for... It must drive you nuts seeing all these things in the world that could be so much better solved. Are there any examples in your life that you feel like you cannot wait for AI to get ahold of? I guess, like, customer support lines. Yeah. Although, we'll see.

Guest 2

Right now, when you got a bot answering you on the phone, you're not, like, happy usually. Right? It's like

Wes Bos

it doesn't bode well. I guess we're hoping we can change that. That was the big thing a couple of years ago with Intercom.

Wes Bos

They had chatbots, and I would get so frustrated with them, because they never actually know what you're talking about, and it feels like they're just piping it into some knowledge base. But

Scott Tolinski

Maybe it's better now. Yeah. Well, it ends up being like a contact form with more steps. Here, I bring up this big chat. Alright.

Wes Bos

First, what's your email? Okay, this is a contact form. They're just trying to get you to not contact them. But, like, Netlify, so Netlify is a host.

Wes Bos

What they did is they took all their docs and every single forum post and whatever else, and they ran that through. I'm assuming they ran it through embeddings.

Wes Bos

And I asked it a couple of questions of things I've had problems with on Netlify, and it came up with the right answer every single time.

Wes Bos

And I was pretty impressed by that. I was like, that is way better than me going through support, because support would have had to find the person that knows about that specific area of serverless functions. And this is a really good example of customer service not trying to shoo me away from contacting them, but trying to help me faster than

Guest 2

contacting customer service. Yeah. What about, like, doing taxes? Have you ever tried to read the tax code, or the instructions for filing taxes? Okay. Yeah. That's a good example. It's like, what am I allowed to do? Actually, tax code is not natural language. It's basically its own little formal language. Yeah. So I'd like to be able to use the model to translate my natural language query of, do I have to file this form or not, into,

Scott Tolinski

like, can you read the tax code for me and actually give me a reliable answer? That would be great. Yeah. I was just using ChatGPT for something the other day on the business side of things, asking which forms do I file in Colorado. And the ease with which it was able to say, here are the forms, you know? And it's not like I wanna just blindly trust it. I want to verify everything, obviously, but I just find those types of interactions in the mundane stuff valuable. Otherwise I'm gonna have to go hunting on the government's website. The government's websites are all bad. You have to paw through them to find the right forms, and being able to use these tools in that sort of way just does feel like a big upgrade. It'd be nice to pass in all of your financial information and say, hey,

Scott Tolinski

what am I missing out on here? Like, there's the research and development credit. You are not taking advantage of that. File for that. You know?

Wes Bos

That really seems nice. Here's hoping. Yep. We had some lawyers on a couple episodes ago, and they were like, no,

Wes Bos

the AI stuff is cool, but you still need a lawyer to make sure you cover everything. And I wanted to be like, well, at a certain point, the AI is going to have a better understanding of all of the laws in your obscure state, or, like, county laws, than a lawyer might. You know? I'm not saying it's gonna replace them, but it'll certainly help. I feel like a lot of what lawyers do is have relationships with, like, the judges and

Guest 2

Yeah. Various people in the legal system as well. Oh, man. I watched

Wes Bos

We don't have Sick Picks right now, but I just finished the Murdaugh murders documentary on Netflix.

Wes Bos

That was very, very good. Have you seen that, Scott? I listened to some podcast about the exact same story. Oh, okay. I liked it a lot better, to be brief, but we did watch the Netflix one as well because, you know, Netflix. Alright. We're going to get into the next section of the episode, just before we wrap it up here, which is our Supper Club questions. These are questions that we ask everybody who comes on the show.

Wes Bos

The first question we have here is: what text editor

Guest 2

and terminal are you using? I used vanilla Vim for a really long time, and then I tried Neovim for a bit. I went through a phase of configuring it endlessly. Now I just use VS Code.

Guest 2

And, I use, like, kitty and, like, various other fancy terminals.

Guest 2

Now I just use iTerm. I gave up configuring everything endlessly.

Wes Bos

No more. That seems to be such a common response on these. Especially, like, the smarter the guest we have, they're just like, I just use the defaults. You know? I used to, like, remap my keyboard

Guest 2

And, like, futz around with what my semicolon key does and what my... oh, gosh... key does. Remap the whole thing. Now I just

Wes Bos

Oh, yeah. Everything default? We both remap our caps lock key. Caps lock is control.

Wes Bos

Okay. There you go. Yeah. I actually use it for hyper, so it sends shift, control, option, and command at once, and then you have a whole new set of keys available to you. What do you use hyper for? I have it paired to all my window management for recording.

Wes Bos

So I can just hit my hyper arrows and it will do all my things. And I do the same, man. Lots of just, like, shortcuts for development.

Wes Bos

Filling out the credit card on a Stripe payment form is a common one that I use, because I hate typing 4242

Scott Tolinski

over and over again. Yeah. Oftentimes, if I can't remember a mnemonic for a keyboard shortcut in VS Code, or I just don't like the shortcut, I'll reassign it to hyper plus the thing that sticks in my brain the most, just as a way to, you know, personally get a little bit more attached to my keyboard shortcuts.

Scott Tolinski

If you had to start coding from scratch today, given what you know about AI tools and everything like that, what would you choose to work on as a programmer? That's an interesting question.

Guest 2

I guess I'll say something.

Guest 2

I don't know how you'll take it, but I would just work on whatever was interesting. That's what I've done my whole career. I haven't had any long-term strategy or plan.

Guest 2

I just worked on the stuff that I thought was cool at any given time, and I went through lots of phases of, like, trying to learn stuff I should learn, like picking up a book on networking or whatever Or, like, taking a Coursera class on something. But, basically, every time I ever did that, I ended up being really frustrated and dropping it pretty quickly.

Guest 2

So I just learn what I need to get a project done, and I try to pick projects that will push me in at least some new direction, where I'll have to learn something that I didn't know before.

Scott Tolinski

I was just like, go with what's interesting. Yeah. Hell, yeah. That's largely how both Wes and I have gotten to where we are in our development careers. You know? Yeah. It's honestly such good advice.

Wes Bos

Like, people always try to lay out these massive roadmaps for web developers and say, you gotta learn x, y, and z. And you learn so much more by being curious and having fun and building shit that is cool than you would doing literally anything else. Obviously, you have to learn the stuff along the way, and there are some fundamentals there, but

Guest 2

there's something to be said for just focusing on what you're interested in. One thing I do try to do, though, is learn things quite deeply. So when I started using React for the first time, I went and bought a couple of books and actually read them thoroughly.

Guest 2

Or, like, when I had to build a front end for the first time, I actually wrote it in vanilla JS first, to learn what that actually is like, before bumping up to TypeScript and React. And I do think that you get a lot of mileage out of understanding the tools that you're working with, and the stack that you're on, a level or 2 deeper than the surface. And I feel like that's a mistake that I see a lot of beginning programmers make. They're very eager to build the finished product or build the demo, and then they don't take the time to learn things deeply.

Scott Tolinski

Yeah. I definitely see that a ton, because Wes and I both essentially teach programming for a living. That's been our career for a long time. And being able to always make sure people are aware of what's going on beneath the surface, even beneath that surface, usually provides more context, more understanding. People aren't going to be hitting an error message and be like, what the hell do I do now? I got an error. It doesn't work.

Scott Tolinski

Well, do you have any tips for learning things deeply? Is it just, you find

Guest 2

documentation, you find the books and you read, you just stick your mind to it? Or do you have any tips? The big thing that I try to do is not have anything that I'm staring at regularly that I don't know what it is or understand.

Guest 2

And if you're not actively trying to do this, your life will just be full of stuff you don't understand. Like, you know, if you're reading a fiction book and there's a word you don't know, your brain sort of just glosses over it. That's not a big deal; that's not what I'm talking about. But in a computer program that you're editing, if there's a concept or some class or some import or some little bit of syntax, and you have maybe a guess, but you don't really know what it is: actually taking the time to dig in and really nail down your understanding of that thing. That's the biggest thing that I personally lean on.

Guest 2

And I find that it has this compounding effect, because if you actually understand every character in the file, like, every line of code, and you really, really understand what it does, then it compounds the mental model that you're building of the program.

Guest 2

Like, that mental model is just gonna be a lot stronger, and it's building that understanding of the individual concepts that make the thing up as well. So, yeah, that's the main tip that I would give. That's fantastic advice.

Wes Bos

Alright. Last thing we'll do here: we want a shameless plug from you. Shameless plug: plug-ins. Everyone should go try plug-ins. Awesome. I've played with it a little bit here and there, but definitely gonna check it out a little bit more after this. Thank you so much for coming on. I really appreciate all your time and insights. It's fantastic. Thanks, guys. Yeah. Thank you so much. Alright. Peace.

Scott Tolinski

Head on over to syntax.fm for a full archive of all of our shows, And don't forget to subscribe in your podcast player or drop a review if you like this show.
