This first episode explains AI for you in human terms. If you want to get more grip on the AI tools available to you, it’s good to start out by understanding them better. Our guest Christopher Penn is renowned for making the complicated accessible – and, actually, it’s not even as complicated as you might think. Christopher gives us some easy analogies through which to understand artificial intelligence.
Grace Cartwright is your host – a Senior Content Specialist who has been chatting to folks and reading and writing about customer service and AI for Klaus.
Christopher Penn is your guest – a recognized thought leader on data science and machine learning, he is also co-founder and Chief Data Scientist at Trust Insights.
We highlight the importance of embracing AI as a tool for personal and professional growth, especially in customer service and various other fields. AI is here to complement and enhance our abilities, not replace them. If you’re not so technically minded, but want to fulfill your curiosity, we cover…
- What a large language model actually is.
- How to keep up with AI trends and come up with the best prompts.
- How to make AI your collaborator, not your competition.
- How to use AI as a coaching tool for yourself and others.
- Why the best prompt is ‘explain this to me in terms of pizza.’
You can also read the podcast transcript in full below, and we’ve gathered some of Christopher’s recommendations here:
- A free-to-use, locally running, privacy-aware chatbot that works a little like an offline personal assistant.
- A software coding tool that asks you follow-up questions about your prompt to help write the code you need.
Grace: Welcome to the third series of Quality Conversations with Klaus.
In this series, we’ll be tackling a matter on everyone’s minds, AI. But more specifically, how it relates to customer service careers. Maybe you’ve dipped your big toe in the fast flowing waters of AI, but the likelihood is you haven’t pushed the boat out.
I’m Grace, Klaus’s Senior Content Specialist, and in this season, I’ll be talking to customer service leaders to find out how support is changing. You’ll learn about new technologies and the practical advice needed to remain engaged. But in this first episode, we’re going to explain AI in human terms.
Christopher Penn is our guest. He’s a renowned thought leader on machine learning and data science, co-founder of Trust Insights and, most importantly for us, he’s fantastic at talking about tech in ways that are easy for everyone to understand. I learned a lot from talking to him and I hope you do too!
Grace: Okay, so hello Chris, welcome and thank you so much for being our guest on the third series of the podcast. The first question I suppose is such a broad one, but it’s something that I think people see a lot and don’t necessarily even know what the meaning is. What is an LLM?
Christopher: A large language model. So, let’s start with this: a model is just a fancy word for software. In the same way that Microsoft Word is a piece of software written by humans for humans, you have models, which are AI software written by AI for AI.
Grace: And does that mean that there are no humans in that entire chain then?
Christopher: It depends. The way these things are constructed, you take an enormous amount of text and put it through a series of processes, and these processes are pretty straightforward. What they do is distill all the words down into numbers (because machines can’t read), and then calculate the statistical relationships among all those words. In technical terms, it’s things like embeddings and positional encoding. But it’s basically just statistics to say: what’s the next logical word? Given a phrase like ‘I’d like a cup of…’, the statistical distributions in the model will lead a machine to predict words like tea or coffee or beer.
And so these companies build these really large models out of billions, if not trillions, of words. They distill that down into essentially a really big library of probabilities – that is the model. And then we have interfaces to that model to ask it questions.
The way I’ve explained it to people in the past: imagine you went around the world just eating pizza, right? Every country, every place, every city, eating pizza, and you’re taking notes all the time. I had this pizza in Prague, and I had this creamed corn and squid pizza in Tokyo, and so on and so forth. And then when you get home after eating all this pizza, assuming you’re not violently ill…
Grace: Yes, assuming you’ve got over the violent illness, after all of that…
Christopher: You write a cookbook. Pizzas of the World is the cookbook that you write. That is the process of constructing a large language model: you take all this text and distill it down, and it is essentially a cookbook. And then you, as the person who ate all this pizza, can make them yourself.
You make one pizza with basil, but you’re also gonna add mozzarella to it and add corn. Maybe you add pineapple and make half the internet angry.
Grace: And actually, I should say, many people in our own company too. We have a great debate on pineapple on pizza.
Christopher: That’s what these large language models are. They’re essentially, in a lot of ways, like a cookbook of food, which is important because a lot of people think, ‘Oh, these companies have stolen my content.’ They haven’t. They’ve read your content and distilled mathematical distributions from it, but your original content is not in these models – just the mathematical relationships among the words you’ve used.
And then when someone asks a model to ‘write me a blog post about B2B marketing’ – if you’ve written about B2B marketing, some tiny percentage of those distributions will be in the final product. But it’s so minuscule compared to all the text on the internet that it’s very difficult to say, ‘Yes, this is in there.’
Just as a cookbook contains no actual pizza, a large language model does not contain the original text, but it can replicate things that sound like the original texts it came from.
Grace: I love that analogy of a cookbook. But are you almost saying that it is just one incredibly intelligent predictive text, to a certain extent?
Christopher: I wouldn’t even call it incredibly intelligent. It is very predictive. It has a lot of probabilities, and it does exhibit some emergent properties – the ability to do primitive forms of reasoning. But these things have no self-awareness. They have no sentience; they have no awareness at all. As a result, they can’t perform actual tasks of reasoning. They can simulate it really well, and depending on the kind of model you’re using and how much you fine-tune it, it can get really good at some tasks, to the point where it’s uncanny. But under the hood, it is still just trying to figure out what the next word is in a sequence.
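The ‘what’s the next logical word?’ idea can be sketched in miniature. Real large language models use embeddings and attention rather than raw word counts, so this toy bigram model is only an analogy for the kind of statistics involved:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "an enormous amount of text".
corpus = (
    "i would like a cup of tea . "
    "i would like a cup of coffee . "
    "i would like a cup of coffee . "
    "i would like a glass of beer ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("of"))  # "coffee" follows "of" most often in this corpus
```

A real model does this over billions of documents, with far richer context than one preceding word, but the core move is the same: predict the next token from learned distributions.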
Grace: To put this in the terms of customer service, this is exactly what the really smart chatbots are able to do, right? They’re able to take information and put it in a certain context. It’s simulating the many conversations customer support folks have – but it’s probably not so intelligent that it can apply the same reasoning a human could for the really sophisticated or more complex cases.
Christopher: Exactly, like edge cases that these things have never seen before. They’re gonna have trouble doing some reasoning on them.
Imagine you made toasters, and you had a customer support chatbot for your toaster. It can absolutely answer things like, “Hey, my bread is stuck. What do I do?” Or, “Hey, this thing started smoking.” It’ll respond, “Please unplug it.”
Grace: Or stop talking to me and maybe leave the apartment if it’s smoking too much.
Christopher: But if someone said, “I tried to use this toaster as a morning star to attack someone who was invading my apartment”… that’s a very small edge case. Someone could take a toaster by the cord and swing it around, like a mace. That’s probably uncommon. And so the chatbot would have a harder time understanding what to do with that.
Grace: So in that respect, there’s only so much expectation that we can really put on these new AI capabilities.
What do you think is a common misconception that people have about AI?
Christopher: Oh, they think it’s magic, and it’s not. It’s mathematics. When you start building and deploying these systems – not just typing into ChatGPT, but actually downloading models, writing code, and watching the underpinnings of what’s happening scroll by in the terminal – that’s when you realize this is just a guessing machine, right?
It is just a prediction machine at its core, and that’s all it is. We can gussy it up with fancy interfaces and cool logos, but a large language model is just a word prediction machine. In some ways that takes all the magic out of it.
This thing is actually just guessing the next word; it’s not doing the things that you think it would be doing. You can have a large language model, for example, that can imitate a therapist. But then you realize it’s just doing probabilities. It doesn’t actually care about you.
It’s just doing word prediction. Think of these things almost like entry-level employees: they have some knowledge, some basic capabilities, but you have to be very clear, very specific, and very granular in what you ask of them so that you get the result you want.
Andrej Karpathy, who’s one of the AI scientists at OpenAI, had a great analogy for this. He said: imagine that you have this employee and you have to give them directions. You give them directions, but you are not allowed to follow up. You may not email them again until the task is done. How much detail would you put in those directions to make sure they got it right?
Would you just say, ‘Go summarize those meeting notes’? You might instead say, ‘Summarize these meeting notes in bullet-point format. Put five bullets per speaker. Put time codes next to each point,’ etc.
In this way, you would hand the new employee exactly what to do. And then they could probably do that successfully without needing to ask questions and without you having to give them follow-ups.
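Karpathy’s ‘detailed directions’ advice translates directly into how you build prompts. A minimal illustration of the difference (the requirement list here is invented for the example, and the model call itself is left out – any chat tool or API would accept prompts like these):

```python
# A vague prompt vs. a detailed one, built up the way you would brief
# a new employee who cannot ask follow-up questions.

vague_prompt = "Summarize these meeting notes."

requirements = [
    "Summarize these meeting notes in bullet-point format.",
    "Use five bullets per speaker.",
    "Put the time code next to each bullet.",
    "List any action items in a separate section at the end.",
]
detailed_prompt = "\n".join(requirements)

print(detailed_prompt)
```

The detailed version leaves far less room for the model to guess wrong on the first attempt.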
Grace: Absolutely. It’s something I’ve noticed myself over the last few months: we’ve all got a lot better at prompts from taking on ChatGPT and experimenting with it, if not using it for our actual work. The more detailed the prompt, the better, the more specific, and the more usable the response you’re actually going to get. Whereas if you write a one-liner, the chances of it coming up with what you’d like on the first iteration are relatively small.
I think that’s something that in customer service is certainly true when it comes to automated support or chatbots. The customer doesn’t necessarily want to come in and provide all of the context possible.
They want the quickest answer possible, or the most precise answer possible – not one where, in a sense, they have to train up the bot in return. They would like something that is tailored to them, not something that they have to tailor for themselves.
Christopher: And that is being addressed now with some of the most advanced technologies around these systems. What you’re seeing now at the cutting edge is what we call agent systems.
There’s a technology, for example, called LangChain that does this. If you’ve ever used ChatGPT and had a conversation with it – with follow-ups and additional information and questions – these systems do that in an automated fashion. They will spin up maybe two or three or four instances of themselves, and those instances talk to each other, asking each other additional questions.
There’s one, for example, called GPT Engineer, which is a software coding tool. It spins up several instances. You give it a prompt, and then it will ask you five or six follow-up questions immediately. It will ask you to clarify things, and then once it has enough information, it will go and write a bunch of code for you.
That’s the way these agent systems are probably going to function within customer service. The LangChain tools will allow you to pull data out of, say, a SQL database about the customer’s previous interaction, summarize them, and then say: “It looks like you’ve called in eight times in the last three days because your toaster keeps catching fire. Have you considered a microwave?”
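The data-pulling step Christopher describes can be sketched without LangChain itself. This toy example (the table and column names are invented) just shows the shape of the idea: fetch the customer’s history from a database, then turn it into text a language model could use as context:

```python
import sqlite3

# In-memory database standing in for a real customer-interaction store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE interactions (customer TEXT, day TEXT, issue TEXT)")
conn.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?)",
    [
        ("grace", "2023-07-01", "toaster caught fire"),
        ("grace", "2023-07-02", "toaster caught fire"),
        ("grace", "2023-07-03", "toaster caught fire again"),
    ],
)

rows = conn.execute(
    "SELECT day, issue FROM interactions WHERE customer = ?", ("grace",)
).fetchall()

# This summary would be prepended to the chatbot's prompt as context.
context = f"The customer has contacted support {len(rows)} times: " + "; ".join(
    f"{day}: {issue}" for day, issue in rows
)
print(context)
```

An agent framework automates exactly this kind of step: query a tool, summarize the result, and feed it back into the conversation.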
Grace: Or maybe just having bread?
Christopher: Exactly. Or asking additional questions right off the bat: ‘Hey Grace, I noticed that you’ve been a customer since 2021. It looks like you’ve had five interactions with our service department in the past, and the sentiment of those interactions has been neutral to slightly positive. What can I do to help you have a better experience today?’
Grace: That is a testament to the speed of change in AI, right? Years ago that wouldn’t have been thought possible, or certainly not by people who weren’t in the back end of the research. But even in the last year, it’s palpable how advancements have really sped up the process of change. And I think that’s also why there are many people who feel positive about The Future – capital T, capital F – and people who, understandably, feel pretty wary about it too.
Why do you think people should continue feeling optimistic about how things are going to change?
Christopher: Because these tools are just math tools at the end of the day. They’re very powerful tools. They’re like chainsaws, right? In the right hands, a chainsaw is incredibly useful, very powerful, and can do a lot of things. You can build a house with a chainsaw, right? Like a log cabin. You could build an entire house with a chainsaw.
Grace: I’d just like to say that I don’t think I could. I do believe that some people could.
Christopher: Yes, for sure. You can also just go around murdering people with it, right? The tool is agnostic. It has no morals, no guides, no moral safeties built in. There are physical safety features, like a hand guard and such, but the tool is what you make of it.
So how optimistic or pessimistic you should be is based on how you feel about your fellow humans because they are the ones using the tools. If you believe that humanity is generally good, then AI will be of general good to humanity. If you believe that humans are generally not good, then you are going to find that AI is a very scary proposition in the hands of your fellow humans.
Grace: I don’t want this to turn into a philosophy podcast, although it’s tempting to, but yeah, that is really the bigger question, isn’t it? It’s not necessarily about AI in general, it’s about what we are doing with it.
Therefore, I think it’s really important for more and more people to become okay with it – to make sure that they are, as much as possible, the ones in control of it. I think the best way to make sure AI doesn’t make your job obsolete is to tackle it yourself, and let it make you better at what you do.
Would you agree?
Christopher: AI is not gonna take your job, right? The machines have no self-awareness. What we say is this: people who are skilled with AI will take the jobs of people who are not. That is what is going to happen, because if you are skilled at AI, you are between 2x and 10x more productive than someone who is not.
For example, I’m in the midst of writing a piece of code. I need to summarize a bunch of articles and distill them down so that I can present a report to a client on what’s happening in their news coverage. I’m a moderately skilled programmer – well, not a very good programmer, but I’m decently skilled with AI. So what I’m doing is writing a program, with the help of AI, to send this data into a language model and return these summaries.
If I had to build this code from scratch, it would take me weeks, possibly months. I’m gonna be done with this in about 15 minutes, and I started about an hour ago.
So I am more productive because I can use AI to do 80% of the work. I still have to provide the ideas, the strategy, the guidance. But the machine can write the code: build the data frame, move this data, mutate this, filter this, etc.
Typing the actual things is low value work. Knowing what you want the machine to do is the high value work. And so if you can do that with the help of AI, you are vastly more productive.
Now, here’s where it gets dicey for humans. If you are skilled at AI, you are between 2x and 10x more productive – so a company doesn’t need 10 of you, or 50 of you, right? Three of you will do, because each of you does the work of 10 people.
So if you work, for example, in a large PR firm, there are a lot of junior people writing press releases. Press releases are not high-value content, right? One person can do the job of 50 in that particular role, because all you’ve got to do is copy and paste the different client names into your prompt template, run the prompt, and boom – there’s your press release. That in turn means that for positions composed of lots of low-value tasks, there will be fewer of them.
Grace: I completely agree. It’s something that I’ve certainly seen in my own role. I do a fair amount of writing, and it has made me so much better – or so much quicker, I should say; maybe not better, but I am absolutely faster at the lower-value, shorter, quicker content.
And I think it’s a wonderful tool if you’re able to use it like that, and if you train yourself how to use it.
That’s certainly what we want Klaus to be: a tool to help people do what they’re great at doing, while making sure they do less of the mundane stuff – that what they’re doing is of value, that they can make it influential, and that their customer service improves in quality overall.
I think the question that people would have is: okay, but how do I prepare? How do I make sure I am one of those people who can put in the high-value prompt, who can do that work and take it on for themselves?
Would you have advice for the person who’s wondering that?
Christopher: Get good at this stuff, right? There’s no substitute right now for practicing, for reading, for following along with the developer blogs – and the regular blogs – from OpenAI, from Microsoft, from Google: the foundational technology companies that are explaining this stuff and deploying the raw materials.
If you want to master how these tools really work you should be looking heavily at things like the open source options that are available.
There’s a great project called gpt4all.io. It’s a desktop application: when you load it, you download the models of your choice based on your needs, run them locally, and then you can test and experiment. Your data never leaves your computer, which matters particularly if you do business in the EU (which is looking very unfavorably at the bigger tech companies). These open-source models let you process data locally and ensure complete privacy; the data never leaves the machine.
Then look at your job: what are the tasks that you do that are highly repetitive, language-based, and relatively low value? Writing memos, writing status reports, writing summaries of things. Those are the things that machines are really good at, and you should be looking at every opportunity to ask: can I have a machine do this as well as me? Because this status email is not gonna win a Pulitzer.
Grace: Very rarely. I personally am holding out for when AI can do my laundry, because that’s my lowest-value work that I absolutely hate. But I love that recommendation to really look into what the technology means for yourself, instead of just adopting the tools and thinking that’s everything.
Would you say that people need to have a bit of technical training for that though? I know from my point of view that sometimes what seems a little bit intimidating is that I don’t have coding skills, for example.
Christopher: It depends on how deep down the rabbit hole you want to go. Certainly become skilled at the consumer-facing tools: ChatGPT, Bing, Bard, DALL-E, Midjourney. You should absolutely be doing that. Those require no technical skill whatsoever; you’re just using regular language and getting good outputs from them.
If you want to delve into the code stuff, there are ways to do that, including using tools like ChatGPT to teach you.
One of the things that large language models are capable of, that people just don’t realize, is that these things are outstanding coaches and mentors. If you ask them good questions, you will get good answers about the vast majority of topics.
And they’re very good at explaining concepts in ways that your brain will understand. For example, a large language model has things like parameters – the number of parameters: 10 billion, 70 billion, 500 billion, et cetera. And then there are things called model weights, which are how the parameters are executed at runtime.
If what I just said makes absolutely no sense to you – it sounded like words, but didn’t quite land – you can ask ChatGPT: ‘Explain this to me in terms of pizza.’ In terms of pizza, a large language model’s parameters are the different kinds of ingredients and toppings, and the weights are how much of each topping there is. Suddenly you know how the concept works: onions and anchovies and sardines and bell peppers, and the model’s weights say there are five times as many bell peppers as there are onions. Now I understand how these concepts relate.
Anything like that is an opportunity in the customer service realm. For example, if you’ve ever had a customer that you just don’t enjoy talking to, and you really just wanna send them an email that says <long strings of profanity, and you’re a jerk, and you’re an idiot, and you have no idea how anything works>…
Write that whole email, explain exactly what you feel, and then at the top of it use this prompt: ‘Rewrite this email in a professional tone of voice.’ Paste the whole thing into ChatGPT, and you will get a professional version of the FU memo you wanted to write to this customer, with all the key points you’ve made.
But you’re not going to cause damage to the customer relationship because you’ve had the tool rewrite what you really feel into a professionally acceptable tone of voice. Even things like that, in an automated fashion, I can see being very useful in customer service because we’ve all dealt with a customer service agent who’s had a bad day, right? Or week or month, or life – we’ve all dealt with people who are less skillful with language in their jobs.
When we implement large language models in a customer service setting, it basically takes the worst-performing employees, in terms of their ability to communicate, and brings them up to that slightly-better-than-mediocre level. So if you implement these tools universally, the best agents – the best customer service folks – will still be the best; you’re not gonna water them down. But for the others, you give them a button in their console that says, ‘Give me ideas about what to say to this customer,’ and it will do that.
And then you will deliver better service because you’ll be removing the lowest performing tier of service given to customers.
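The tone-rewrite trick is mostly about wrapping your draft in an instruction before pasting it into a chat tool. A minimal sketch (the draft text is invented for the example, and the actual model call is left out):

```python
# Sketch of the tone-rewrite trick: write what you really feel, then wrap
# it in a rewrite instruction before pasting it into ChatGPT (or sending
# it to any chat-completion API).

angry_draft = (
    "You have returned this toaster three times and ignored every "
    "instruction we sent you. This is entirely your fault."
)

def make_rewrite_prompt(draft, tone="professional"):
    """Wrap a draft reply in a tone-rewrite instruction."""
    return f"Rewrite this email in a {tone} tone of voice:\n\n{draft}"

prompt = make_rewrite_prompt(angry_draft)
print(prompt)
```

The same wrapper works for other tones – ‘empathetic and caring’, for instance – which is the skill gap Christopher mentions later.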
Grace: I think that’s absolutely true.
You talked about using it as a coach. Using it to train yourself, rather than seeing it as something that’s going to replace what you do, is quite underrated.
When you first start in a role, for example, and you want to get up to speed quickly, then using ChatGPT or these tools even as a simulated customer conversation is, I think, a really innovative way to start bringing this into very different modes of how you learn and interact.
Christopher: Yeah, absolutely. Think about this: if you’re a manager sitting with your team, and you’ve got some subject matter experts on your team, and you don’t know what one of them is saying, you might be worried about undermining the team’s confidence in you as a manager.
You can just have the ChatGPT app on your phone and ask it a question, like, ‘What does model parameters mean?’ A lot of people have concerns about appearing foolish in front of others by not knowing something. You now have the world’s smartest coach in your pocket to ask questions of, so you don’t have to ask them in public or in front of others. And you become a better manager and a better employee using these tools as guides.
Grace: Are there any other instances that you think it’s interesting for someone leading a team to use? Because I find that quite a unique way to look at it rather than just from a personal point of view.
Christopher: One of the things I did recently was take 20 emails from a colleague and condense them down into a single text file. I fed it to the GPT-4 model and said, ‘Perform a Big Five personality insights analysis on this person,’ and it spat back the scores and its assessment of the person’s personality.
And I read through it: okay, that seems legit. Then I fed it 20 of my own emails and it gave me my personality analysis. I’m like, okay, that is legit, because that’s who I am. Think about how powerful that is if you’re unsure about where a person is or how they’re feeling, because our personalities change in different contexts.
You can use these tools to understand other people around you better.
Think about this even in a customer service context: can you imagine a chatbot performing that analysis as the customer was talking? It could say: this person has a high degree of introversion, so don’t overwhelm them.
Let them do the talking. This person is extremely extroverted, so you can have a vibrant conversation with them and it’ll be okay. This person isn’t open to new ideas; this person is open to new ideas. Those guides can help you interact with people in a more skillful way if you’re not good at it yourself.
Grace: We already do sentiment analysis in terms of: ‘is the customer experiencing frustration or experiencing anger?’ It’s something we can analyze.
But I think going forward that even more niche understanding of communication for ourselves, and for the people we’re talking to is going to be more possible. Which just also opens this whole realm of opportunity as to how we interact with each other more efficiently, but also with more care and more empathy, hopefully. There is a weird notion that by learning from machines we’ll actually be better humans to an extent.
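The sentiment analysis Grace mentions is done in practice with trained models, but the underlying idea can be illustrated with a deliberately crude keyword scorer (the word lists here are invented for the example):

```python
# Crude keyword-based sentiment scorer, purely illustrative. Production
# sentiment analysis uses trained models, not word lists like this.
NEGATIVE = {"frustrated", "angry", "broken", "useless", "fire"}
POSITIVE = {"thanks", "great", "love", "helpful", "works"}

def sentiment(message):
    """Label a message positive, negative, or neutral by keyword counts."""
    words = set(message.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("my toaster is broken and i am angry"))  # negative
```

A language model replaces the word lists with learned context, which is what makes the more niche readings of tone and personality possible.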
Christopher: It is, because the machines are not making this stuff up. The machines are trained on human interactions, right? We give them human interactions to learn from. As a result, they train on that data and then they spit it back to us. They’re not creating anything new here; they’re just regurgitating the mathematical average of what they’ve been given.
That is, as long as you are skillful in your prompts – for example, ‘Help me write a reply to this that is empathetic and caring.’ Because you can easily see the problem with ‘Help me write a reply that is bold and direct,’ right? It will give you a very bold and direct reply that might not be the best choice.
You can absolutely direct machines to give you the outputs you want, in areas where you might have gaps in your skills. One of the things I think is so powerful about generative AI is that it gives us skills we don’t have.
I can’t compose music, but I can give ChatGPT some parameters and it will create sheet music for me. I cannot paint – I’m a very poor painter – but I can go to DALL-E and say, ‘Here’s my idea for a painting. Generate some options for this.’ With very detailed prompts, it will do a better job than I can. My skill level is very low, and so it brings my output up to that slightly-better-than-mediocre level.
Make an image of a blue frog sitting in a trash can. I could try and draw that, or I could have the machine do a good enough rendering of it.
These tools help patch gaps in our own capabilities.
Grace: Similarly, it might take a coach to tell you exactly how to convey empathy, or how to show through communication that you’re empathetic. It might take weeks to train someone in that, and yet this tool can do it for you instantly.
Similarly with what we do at Klaus: it’s very much looking back at conversations and analyzing – okay, what did the agent do well? What could the agent have done better? Or exactly how many customers are feeling frustrated when they contact support, and why? What are the root causes? Could a human do that? Absolutely. However, it would take them an awful lot longer. Whereas if they have that data there and then, and can go and do the actual analysis themselves, it’s going to be far easier for them to improve.
It’s just really a case of knowing what’s out there as well, which is sometimes slightly overwhelming. I feel like there are always new tools, and we all need to be good at deciphering which will be the best and how best to adopt them, right?
Christopher: Yes. There are a couple of different ways to tackle that. One, you should be doing your requirements analysis, right? You should understand what you need – what problems you have that AI is best suited for; things that are highly repetitive, for example. And two, when you’re evaluating vendors, if you have good subject matter expertise and at least some understanding of AI, the easiest thing to do is ask a vendor, ‘Hey, can I talk to your engineering team without supervision?’
I had this experience once. I was talking with a vendor and I said, ‘Let me talk to your tech team.’ They said okay. I went to the tech team and asked a question about the centrality algorithm. The sales guy had said the product did this, and the engineer said, ‘We don’t do that. He’s lying – our product can’t do that.’ I’m like, thank you.
In general, if a company’s unwilling to let you talk to engineering without supervision, they’ve got something to hide.
Grace: I love this. You’ve just advertised Klaus for us, because our data team happily and often jump on calls with potential customers to really explain it in full. What the salesperson and the marketing team are there for is to advertise everything that we do. Whereas if you want to get down to the nitty-gritty of what it means for something you’re hopefully signing up for long term, the people who really know are your data crew, who work with it day in, day out. At the moment they are very much in demand, and for good reason.
Grace: So I have two quick-fire questions for you, which you’ve slightly answered already, but maybe in a fast form: what would be your takeaway advice to someone working in customer service right now who wants to make sure they’re upskilling in a way that aligns with how AI is progressing?
Christopher: Figure out which parts of your job are repetitive, and figure out whether today’s generative tools can do those tasks. If they can, then you need to learn how to use the machines so that you are the one supervising them – so those tasks are not simply handed off to a machine and removed from your workload.
Because if that happens, your utilization rate goes down, and there’s a certain point at which someone’s gonna say, ‘We can just consolidate the number of employees we have doing these tasks, since there are fewer of them.’
Grace: Absolutely. And the final question is, will AI take our jobs?
Christopher: No, AI will not take our jobs. People who are skilled with AI will take the jobs of people who are not. If you are resistant to it, you are going to become less and less valuable to your company over time compared to a candidate or an existing employee who is skilled with it and who can 2x to 10x their output.
Here’s the thing: the fundamental drives of people are the same, right? Everyone wants better, faster, cheaper. Every for-profit company wants to make more money while spending less money. Every individual human consumer wants better, faster, and cheaper. And as one person put it, we are all motivated by the same 3 drives: greed, stupidity, and horniness. As a result, because we know how people will behave, we know what people are gonna do with these tools. We know that people are naturally going to want to pay people less, employ fewer people, and make greater profits, right?
So the way that you stay aligned with that is you become the conductor of the orchestra. You become the highly skilled employee so you can keep charging more money for what you do, and you don’t get replaced by someone who’s more skilled at doing those same things.
Grace: Amen. Thank you so much. You have enlightened me certainly. And I’ve been someone who’s been studying and writing about this for a few months now.
You absolutely fulfilled the brief of putting into more simple terms what AI means and what we all can do. Because it’s not something that we should be wary of. It’s something we should be excited about because if we embrace it, it is exciting.
Christopher: You should be excited and wary of it. I think it’s a balance. Again, it’s a tool, and people will use tools in different ways. There are ways to misuse AI. The easiest example to understand is a voice generation tool like Tortoise TTS or ElevenLabs.
I did this with a friend of mine recently. I cloned her voice and showed her a ransom message I made with it that sounded exactly like her, and I asked her, ‘Would this fool your mother?’ And she said the only giveaway was that I used a different term for mom than she usually uses. That was the only giveaway.
Grace: That’s so interesting. I just listened to a podcast which was on exactly this. It was a father who had received a voicemail from his son who was traveling in South America. I think it was saying, ‘dad, I need help’, blah, blah, blah.
And they spoke to a few people, one of whom had fallen for it, and another who didn’t. The one who didn’t said he just had to call his son back and ask, ‘was this you?’ Because, he said, ‘I was 50% convinced, but something was telling me that it wasn’t quite right.’
And we do usually have this sense, right? When it’s a machine, if the message is long enough, we can usually decipher it somehow. There are specific patterns of communication that we can usually pick up on. However, it’s not foolproof. If it’s a circumstance in which you’re gonna feel desperation and fear, then those powers of reasoning are probably switched off, and of course you’re gonna be fooled.
Christopher: You do the exact same thing that, hopefully, your parents taught you as a kid. When I was a kid, there was an agreement, a catchphrase, a password essentially, that you would give to somebody. Like, ‘hey, I’m gonna go pick up my kid’. The person who’s picking you up has gotta be able to say the phrase, ‘Vatican cameo’.
Is there something that wouldn’t come up in normal conversation to indicate ‘this is really me, and not someone pretending to be me’? So when you send your friend to go pick up your kid at the bus stop, you tell them that phrase so that the kid knows, ‘oh, my dad actually sent this person and it’s not someone trying to kidnap me.’
Grace: I don’t have any of those with anybody that I know. And now I feel like I need to create some really weird words.
Christopher: It’s a good idea.
Grace: A passcode-type thing, so that we’re actually sure we are who we say we are when we’re communicating. How bizarre.
So thank you so much for coming on, and I’m really looking forward to publishing this and sharing it with the world.
Christopher: Thank you for having me!