January 2, 2025
01/02/2025 | 55m 28s | Video has Closed Captions
Nina Schick, Connor Leahy, Priya Lakhani and Wendy Hall; Hari Sreenivasan
Christiane hosts a panel of leaders in the field of artificial intelligence. In a world where it's increasingly hard to discern fact from fiction, Hari Sreenivasan and Christiane Amanpour discuss the ethical dilemmas of A.I., and why it's more important than ever to keep real journalists in the game.
(logo whooshes) (dramatic music) - Hello, everyone, and welcome to "Amanpour and Company."
Here's what's coming up.
Artificial intelligence, the power and the peril.
Are people playing with fire?
- Absolutely, without a doubt.
- [Christiane] Four leaders in their field unpack the uncertainty that lies ahead.
- We have agency, and I just wanna kind of divorce that kind of hypothetical scenario with the reality, and that is we decide.
- [Christiane] What it means for jobs and how it'll change our working lives.
- And I genuinely believe we're gonna get a four-day week out of AI.
- Do any of you believe that there will be a universal basic income therefore?
- It is time to start thinking about ideas like that.
- [Christiane] Also this hour.
- [Announcer] We can now call the 2024 presidential race for Joe Biden.
- [Christiane] Policing misinformation ahead of crucial US presidential elections.
- And I've got two children who are 11 and 13, are they going to grow up in a world where they can trust information?
- [Christiane] Then, how to regulate a technology that even its creators don't fully understand?
- If this technology goes wrong, it can go quite wrong.
- My recommendation, when looking at CEOs, or other people of power, is to watch the hands, not the mouth.
- [Christiane] How AI could revolutionize healthcare.
- These are lifesaving opportunities.
- [Christiane] And make our relationships with machines much more intimate.
- When it comes to relationships, and in particular sexual relationships, it gets very weird very quickly.
- [Christiane] Also ahead, Hari and I discuss how to keep real journalists in the game.
- I am OTV's and Odisha's first AI news anchor, Lisa.
- Look, this is just the first generation of, I wanna say, this woman, but it's not, right?
(gentle dramatic music) (dramatic music) - [Announcer] "Amanpour & Company" is made possible by: Candace King Weir, the Family Foundation of Leila and Mickey Strauss, Jim Attwood and Leslie Williams, Mark J. Blechner, Seton J. Melvin, Charles Rosenblum, Koo and Patricia Yuen, committed to bridging cultural differences in our communities, Barbara Hope Zuckerberg.
We try to live in the moment to not miss what's right in front of us.
At Mutual of America we believe taking care of tomorrow can help you make the most of today.
Mutual of America Financial Group, retirement services and investments.
Additional support provided by these funders.
And by contributions to your PBS station from viewers like you.
Thank you.
- Welcome to the program, everyone.
I'm Christiane Amanpour in London.
Whether AI makes our societies more or less equitable, unlocks breakthroughs or becomes a tool of authoritarians, is up to us.
That is the warning and the call to arms from the Biden administration this week.
In a joint op-ed, the secretaries of state and commerce say the key to shaping the future of AI is to act quickly and collectively.
In just a few short months, the power and the peril of artificial intelligence have become the focus of huge public debate, and the conversation couldn't be more relevant as the atomic bomb biopic "Oppenheimer" reminds us all of the danger of unleashing unbelievably powerful technology on the world.
- Has a bomb.
- Are we saying there's a chance that when we push that button, we destroy the world?
- Chances are near zero.
- Director Christopher Nolan himself says that leading AI researchers literally refer to this as their "Oppenheimer" moment.
Predictions range from the cures for most cancers to possibly the end of humanity as we know it.
What most people agree on though, is the need for governments to catch up now.
To assess all of this and to separate the hysteria and hyperbole from the facts, we brought together a panel of leaders in the field of artificial intelligence.
Nina Schick, global AI advisor and author of "Deep Fakes."
Renowned computer science professor Dame Wendy Hall.
Connor Leahy, an AI researcher who is the CEO of Conjecture, and Priya Lakhani, an AI government advisor and the CEO of CENTURY Tech.
Welcome all of you to this chat, to coin a phrase.
I mean, it's such a massively important issue, and I just thought I'd start by announcing that when I woke up and had my morning coffee, AI is all over this page, on the good, on the bad, on the questions, on the indifference.
What I want to know is from each one of you literally is what keeps you up at night?
You're all the experts, for good or for bad, and I'm gonna start with you.
- We can conceive of it as us being now on the cusp, I think, of a profound change in our relationship to machines that's gonna transform the way we live, transform the way we work, even transform our very experience of what it means to be human.
That's how seismic this is.
If you consider the exponential technologies of the past 30 years, the so-called technologies of the information age, from the internet, to cloud, to the smartphone, it's all been about building a digital infrastructure and a digital ecosystem, which has become a fundamental tenet of life.
However, AI takes it a step further.
With AI, and in particular generative AI, which is what I have been following and tracking for the last decade, you're really looking at the information revolution becoming an intelligence revolution, because these are machines that are now capable of doing things that we thought were only unique to human creativity and to human intelligence.
So the impact of this as a whole for the labor market, for the way we work, for the way that the very framework of society unfolds is just so important.
My background is in geopolitics where I kind of advised global leaders for the better part of two decades, and the reason I became interested in AI is not because I have a tech background, I have a background assessing trends for humanity.
This isn't about technology.
This is ultimately a story for humanity and how we decide this technology is going to unfold in our companies, so within enterprise, very exciting, but also society, writ large.
And the final thing I'd say is we have agency.
A lot of the debate has been about AI autonomously taking over, and I just wanna kind of divorce that kind of hypothetical scenario with the reality, and that is we decide.
- Connor though, you believe, 'cause we've spoken before, that actually these machines are gonna be so powerful and so unable to be controlled by human input, that they actually could take over.
- Unfortunately, I do think that this is a possibility.
In fact, I expect it to be the default outcome, but I would like to agree with Nina fully that we do have agency, this doesn't have to happen, but you asked the question earlier, what keeps me up at night?
And I guess what I would say what keeps me up at night is that a couple million years ago the common ancestor between chimpanzee and humans split into two subspecies.
One of these developed a roughly three times larger brain than the other species.
One of them goes to the moon and builds nuclear weapons.
One of them doesn't.
One of them is at the complete mercy of the other.
One of them has full control.
I think this kind of relationship to very powerful technology can happen.
I'm not saying maybe it can't, it is the default outcome, unless we take our agency, we see that we are in control, we are the ones building these technologies, and as a society we decide to go a different path.
- So to follow up on that, the same question to you, but from the point of view of how do we have agency, express agency and regulate?
You're a private entrepreneur, you also have been on the government, the British government sort of regulation council.
- Yeah.
- What will it take to ensure diversity, agency, and that the machines don't take over?
- Well, what it takes to ensure that is it's a lot of work and there's lots of ideas, there's lots of theories, there are white papers, there's the pro-innovation regulation review that I worked on with Sir Patrick Vallance here in the UK, the US government has been issuing guidance, the EU is issuing its own laws and guidance, but what we wanna see is execution, Christiane.
And you know, on the sort of what keeps you up at night, I feel sorry for my husband because actually while I... What keeps me up is actually other issues such as things like disinformation with generative AI.
And I've got two children who are 11 and 13. Are they going to grow up in a world where they can trust information and what's out there, or, because of a lack of execution on the side of policy makers, is it going to be sort of a free-for-all?
You know, bad actors have access to this technology and you don't know what to trust.
But actually the biggest thing that keeps me up at night is a flip from what we've heard here.
It's are we as a human race, are we gonna benefit from the opportunities that artificial intelligence also enables us, you know, to have?
So we often talk, and Christiane, you know, forgive me, but for the last six months it's all been about ChatGPT and generative AI.
That is really important and that's where a lot of this discussion should be placed, but we also have traditional AI.
So we have artificial intelligence where we've been using data, we've been classifying, we've been predicting, we've been looking at scans and spotting cancer where we've got a lack of radiologists, right, and we can augment radiology, we can augment teaching and learning.
So how are we also going to ensure that, all around society, we don't actually exacerbate the digital divide, right, but we leverage artificial intelligence, the best that it can provide, to help us in the areas of healthcare, education and security?
So, you know, it's scary to think we're not using it to its full advantages while we also must focus on the risks and the concerns.
And so really I sort of have this dual sort of what keeps me up at night.
As I said, I sort of feel sorry for my husband 'cause I'm sort of tapping on his shoulder going, "And what about this and what about that?"
- We really need many different voices helping us build and design these systems and make sure they're safe.
Not just the technical teams that are working at the companies to build the AI that they're talking to the governments about.
We need women, we need age range, we need diversity from different subject areas.
We need lots of different voices, and that's what keeps me awake at night.
- Because if not, what is it, what's the option?
- Well, it's much, much more likely to go wrong, because you haven't got society represented in designing the systems.
- So you're concerned that it is, it is just one segment of society.
- Yeah, one small segment of society, right?
I like to call 'em the tech bros; they are mostly men, and there are very few women actually working in these companies at the cutting edge of what's happening.
You saw the pictures of the CEOs and the vice presidents with Biden and with Rishi Sunak, and these are the voices that are dominating now, and we have to make sure that the whole of society is reflected in the design and development of these systems.
- So before I turn to you for more, you know, input, I want to quote from Douglas Hofstadter, who I'm sure you all know, the renowned author and cognitive scientist, who's quoted on the issues that you've just highlighted, that ChatGPT and generative AI have taken over the conversation.
He says, "It," quote, "just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches."
A little kind of like what you said, but I see you wanting to dive in, Wendy, with comment on that.
- Well, I just, I mean, I'd like to sort of disagree with Priya a bit.
I think that if we move too fast, we could get it wrong.
If you think about the automobile industry, when it started there were no roads, someone had to walk in front of a car with a lamp, which shows you how fast they were going.
If we'd tried to regulate the automobile industry then we wouldn't have got very far, 'cause we couldn't see what was gonna be coming in another hundred years.
And I think we have to move, we have to move very fast to deal with the things that are immediate threats.
And I think the disinformation, the fake news, we have two major democratic elections next year, the US president and our election here, whenever it is, could even be at the same time, and there are other elections, and the disinformation, the fake news, the Pope-in-a-puffer-jacket moment, these could really mess up these elections.
And I think there's an immediate threat to potential democratic process.
And I believe we should tackle those sorts of things at a fast speed and then get the global regulation of AI, as it progresses through the different generations of AI, and get that right at the global level.
- So I think that's really important.
She's bringing up, as the most existential threat beyond the elimination of the species, the survival of the democratic process and the process of truth.
- Yes.
- So let me fast forward to our segment on deepfakes.
As we know, it's a term that we give to video or audio that's been edited using an algorithm to replace, you know, the original person with the appearance of authenticity.
So we remember, a few months ago there was this image of an explosion at the Pentagon which was fake, but it went around the world virally, it caused markets to drop before people realized it was bogus.
We know that, for instance, they're using it in the United States in elections right now.
I'm gonna run a soundbite from a podcast called "Pod Save America," where they, as a joke, basically simulated Joe Biden's voice because they could never get him on the show and they thought they would make a joke and see if it'd put the, you know, a bit of fire underneath it.
So just listen to this.
- [AI Joe] Hey, friends of the "Pod," it's Joe Biden.
Look, I know you haven't heard from me in a while and there's rumblings that it's because of some lingering hard feelings from the primary.
Here's the deal.
- This is good.
- Did Joe Biden like it when Lovett said he had a better chance of winning Powerball than I did of becoming the president?
- I didn't say that.
- No, Joe did not.
- Okay, so that was obviously a joke, they're all laughing, but Tommy Vietor, who's one of these guys, a former, you know, a former White House spokesman, basically said they thought it was fun, but ended up thinking, "Oh God, this is going to be a big problem."
Are people playing with fire, Connor?
- Absolutely, without a doubt.
These kinds of technologies are widely available.
You can go online right now and you can find open source code that you can download to your computer, you know, play with a little bit.
Take 15 seconds of audio from any person's voice anywhere on the internet without their consent and make them say anything you want.
You can call their grandparents in their voice, you can ask for money.
You can, you know, put it on Twitter, say some kind of political event happened.
This is already possible and already being exploited by criminals.
- By criminals.
- I actually wrote the book on deepfakes a few years ago, and I initially started tracking deepfakes, which I call the first viral form of generative AI, back in 2017 when they first started emerging.
And no surprise, but when it became possible for AI to move beyond its traditional capabilities to actually generate or create new data, including visual media or audio, it has this astonishing ability to clone people's biometrics, right?
And the first use case was in non-consensual pornography, because just like with the internet, pornography was on the cutting edge.
But when I wrote my book, and actually at the time I was advising a group of global leaders, including the NATO secretary general and Joe Biden, we were looking at it in the context of election interference and in the context of information integrity.
So this debate has been going on for quite a few years.
It's just that now it's become, you know- - You're right, but that's the whole point.
This is the point, just like social media, all of this stuff has been going on for a few years until it almost takes over.
- Yes, but the good thing is that there is an entire community working on solutions.
- Okay.
- I've long been very proud to be a member of the community that's pioneering content authenticity and provenance, rather than trying to detect everything that's fake.
Because it's not only that AI will be used to create malicious content, right?
If you accept my thesis that AI increasingly is going to be used almost as a combustion engine for all human creative and intelligent work, we're looking at a future where most of the information and content we see online has some elements of AI generation within it.
So if you try to detect everything that's generated by AI, that's a fool's errand.
It's more that the onus should be on good actors, or companies that are building generative AI tools, to be able to cryptographically hash an indelible signal into the DNA of that content and information to show its origin. It's more than a watermark, because it can't be removed.
- Yeah, so like the Good Housekeeping seal of approval.
- It's basically about creating an alternative safe ecosystem to ensure information integrity.
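To make the provenance idea concrete, here is a minimal, hypothetical sketch of the sign-and-verify loop Nina is describing. Real provenance standards such as C2PA embed a signed manifest inside the media file and use public-key signatures; this toy keeps the manifest separate and substitutes a symmetric HMAC purely for illustration, and every name in it is invented.

```python
# Toy sketch of content provenance, loosely inspired by standards like C2PA.
# Real systems use asymmetric (public-key) signatures and embed the manifest
# in the media file itself; the HMAC and key below are illustrative stand-ins.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key for this sketch only

def sign_content(media_bytes: bytes, origin: str) -> dict:
    """Bind a content hash and its claimed origin into a signed manifest."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    manifest = {"sha256": digest, "origin": origin, "generator": "ai-model-v1"}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit to the media breaks both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(media_bytes).hexdigest() != claimed["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

media = b"frame data of a generated video"
manifest = sign_content(media, origin="example-news-org")
print(verify_content(media, manifest))          # True: provenance intact
print(verify_content(media + b"x", manifest))   # False: provenance broken
```

The design choice mirrors Nina's point: rather than trying to detect every fake, good actors attach verifiable origin information, and anything that fails verification simply carries no trustworthy provenance.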
- So let's just play this, and then maybe this will spark a little bit more on this.
This is, you know, the opposite end of the democratic joke that we just saw.
This is from an actual Republican National Committee, serious, you know, fake.
- [Reporter] This just in.
We can now call the 2024 presidential race for Joe Biden.
- [Biden] My fellow Americans.
(dramatic music) (siren wailing) - [Reporter] This morning an emboldened China invades Taiwan.
- [Reporter] Financial markets are in free fall as 500 regional banks have shuttered their doors.
Border agents were overrun by a surge of 80,000 illegals yesterday evening.
- [Reporter] Officials closed the city of San Francisco this morning citing the escalating crime and fentanyl crisis.
- So that was a Republican National Committee ad, and the Republican strategist Frank Luntz said this about the upcoming election.
"Thanks to AI, even those who care about the truth won't know the truth."
- The scale of the problem is going to be huge because the technology is available to all.
On the biometric front, right, let's think about this, it's actually really serious.
So think about banking technology.
At the moment, when you want to get into your bank account on a phone, they use voice recognition, right?
We have face recognition on our smartphones.
Actually with the rise of generative AI, biometric security is seriously a threat.
So people are saying you might need some sort of two-step factor authentication to be able to solve those problems.
I don't think it's a fool's errand to try and figure out what is created by AI and what's not.
Simply because, look at the creative industries.
The business model of the creative industries is going to be seriously disrupted by artificial intelligence.
And there's a huge lobby from the creative industries saying, well, we've got our artists, our music artists, our record labels, our design artists, we have newspapers, we've got broadcasters who are investing in investigative journalism. How can we continue to do that, and how can we continue to do that with the current business models, when everything that we are authentically producing, which takes a lot of time and investment and effort, is being ripped off by generative AI at the other end?
What policy makers then decide to do, when it comes to, is it fair game to use any input to be able to have these AI technologies then generate new media, will affect whether startups and scale ups can settle in this country and grow in this country, or whether they go elsewhere.
We know where Europe is, right?
So Europe has got this sort of more prescriptive legislation that they're going for.
We are going for what we call light touch regulation.
- We being the UK.
- We being the UK, apologies, yeah.
So light touch, which I wouldn't say is lightweight, it's about being fast and agile, right?
And as an AI council that Wendy and I both sat on, it was all about actually how can we move with an agile approach as this technology evolves?
And then you have the US and you have other countries.
So this is all intertwined into this big conversation.
How can you be pro innovation?
How can you increase, you know, gross value added in your economy?
How can you encourage every technology company to start up in your country and thrive in your country, while also protecting the rights of the original authors, the original creators, and also while protecting consumers?
And there's a political angle to this that isn't just- - I think that this whole conversation will be terrifying people.
- Okay, so can you rein it back to not terrify people?
- Because it's getting very technical.
We've got, you know, all the things you've been talking about and actually, you know, in the UK we could call, Rishi Sunak could call the election in October.
Right?
- Right.
- All this won't be sorted out by then.
And I think we have to learn, we have to keep the human in the loop.
The media will have a major role to play in this, 'cause we've got to learn to slow things down, and we've- - But is that possible?
I mean, you say that.
Is it possible, Connor, to slow things down?
- No, no, I don't mean technically.
I mean, we've gotta think about, when you get something that comes in off the internet, gotta check your sources.
There's this big thing at the moment, check your sources.
We are gonna have to check.
I totally agree.
I mean, I've been working on provenance for most of my career, and I totally agree about all the technical things we can use, but they're not gonna be ready.
I don't argue and I think people get very confused.
I think we've got a... My mother used to say to me, "Don't believe everything you read in the newspapers," in the 1960s.
- Unless Christiane said it.
- Well, okay.
But that's the whole point, Priya, you see?
If Christiane says it.
- I'm not disagreeing with you, my entire- - I might be inclined to trust it.
- I could be a deepfake, Dame Wendy, is what you're saying.
- And so actually I'm with Nina on the fact that there is lots of innovation in this area, so- - There is lots of innovation.
- There is innovation, but look, I think this is a long-term thing, this isn't gonna happen tomorrow, but one of the key points is that- - The elections are tomorrow- - In education, for example, across the world, whether you're talking about the US or across Europe, different curricula, whether it's state curricula or private curricula, one of the things that we're going to have to do is teach children, teach adults, everybody. They're gonna have to be more educated about just, you know, a non-technical view of what AI is, so that when you read something, are you checking your sources, right?
Those skills, such as critical thinking, that people love, actually they're more important now than ever before.
- For sure.
- Right?
- So did Christiane actually say that?
Did she not?
And so understanding the source is gonna be important.
And there's definitely a policy maker's role across the world to ensure that that's in every curriculum, it's emphasized in every curriculum, because right now it isn't.
- Okay, I just need to stop you for a second, because I want to jump off something that Sam Altman, who's the, you know, I guess the modern progenitor of all this, of OpenAI et cetera.
In front of Congress, he said the following recently, and we're gonna play it.
- I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that.
We wanna work with the government to prevent that from happening.
- I mean, these guys are making a bushel of money over this.
The entire stock market, we hear, is floated right now by very few AI companies.
What do you think, what's your comment on what Sam Altman just said?
- So I do agree with the words Sam Altman speaks.
My recommendation though, when looking at CEOs, or other people of power, is to watch the hands, not the mouth.
So I would like to, you know, thank Sam Altman for being quite clear, unusually clear, even about some of these risks.
When is the last time you saw, you know, an oil CEO in the '70s going to the, you know, heads of government saying, "Please regulate us, climate change is a big problem"?
So in a sense, this is better than I expected things to go, but a lot of people, who I've talked to in this field, as someone who is very concerned about the existential risk of AI, they're saying, "Well, if you're so concerned about it and Sam is so concerned about it, why do you keep building it?"
And that's a very good question.
I have this exact question for Sam Altman, Dario Amodei, Demis Hassabis, and all the other people who are building these kinds of technologies.
I don't have the answer.
In their minds, I think they, you know, may disagree with me on how dangerous it is, or maybe they think we're not at the danger yet, but I do think there is an unresolved tension here.
- I'm quite skeptical.
And also I would like to, I mean, remember the dot-com crash?
- Yeah, the bubble.
- Right.
Well, I'm just saying we could have another bubble, right?
Now I don't think the business models are sorted out for these companies.
I don't think the technology is as good as they're saying it is.
I think there's a lot of scaremongering.
- I know you say there's a lot of scaremongering and you know, I'm just gonna quote again, it's a profile of Joseph Weizenbaum, who's again one of the godfathers of AI.
It's in "The Guardian."
He said, "By ceding so many decisions to computers, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code."
And even Yuval Harari, who we all know is a great modern thinker, "Simply by gaining mastery of language, AI would have all it needs to contain us in a "Matrix"-like world of illusion.
If any shooting is necessary, AI could make humans pull the trigger just by telling us the right story."
We're living in an "Oppenheimer" world right now.
Right, "Oppenheimer" is the big zeitgeist.
What is it telling us?
It's telling us a Frankenstein story.
- [Oppenheimer] We imagine a future.
(air whooshing) (somber music) And our imaginings horrify us.
(fire roaring) - The science is there, the incredible ingenuity is there.
The possibility of control is there.
And let's talk about nuclear weapons, but it's only barely hanging on now.
I mean, so many countries have nuclear weapons.
So again, from the agency perspective, from you know, you've talked about not wanting to terrify the audience.
I'm a little scared.
- You see, I don't think, these guys can correct me if I'm wrong, but we aren't at that Weizenbaum moment yet, by any means.
Right, this generative AI can do what appears to be amazing things, but it's actually very dumb.
- Okay.
- All right?
It's just natural language processing and predictive text.
Right?
- I agree.
- Is that right?
Let's just hear from Nina first for a second.
- The thing is, not everyone is afraid of it.
If you look at public opinion polls, the kind of pessimistic, scary views of AI tend to be in western liberal democracies.
In China, you know, 80% of the population has a more optimistic view of artificial intelligence.
Of course, if the debate is wrapped up in these terms that again, it's so much to do with the existential threat and the AGI, it can seem very scary.
But I agree with Wendy, if you look at the current capabilities, is this a sentient machine?
Absolutely not.
- No emotions.
- But can it really understand?
- Doesn't understand.
- But that's neither here nor there, because even if it is not actually able to understand, with its current outputs and applications, is this technology profound enough to dramatically shift the labor market?
Yes, it is.
And I actually think that sometimes- - In a good or bad way?
- In a transformative way.
So I think the key question then is, as ever was it thus with technology, it's who controls the technology and the systems and to what ends?
And we've already been seeing, over the past few decades of the information revolution, the rise of these new titans, you know, private companies who happen to be more powerful than most nation states.
So again, that is just gonna be augmented, I think, with artificial intelligence, where you have a couple of companies and a few people who are really able to build these systems and to kind of build commercial models for these systems.
So the question then is about access and democratizing this possibility of AI for as much of humanity as possible.
- I don't think that's right, I'm sorry.
- Very quickly- - Very, very quickly.
- Because we need to move on to the positive because of the jobs.
- The reason why Geoffrey Hinton left Google, and the reason why you've got all of that, is because of the way in which this is built.
This is a different model of artificial intelligence where normally, so Christiane, you have a technology that is, it's been built for one specific task, right?
So it's gonna beat the grandmaster at chess.
It's not gonna break out the box and do anything else.
It's gonna be about teaching.
Which is why, sorry- - That's why general- - Let me finish.
Sorry, no.
Because I think there's a fundamental understanding piece- - It's not true- - Which is why, which is what we have to make clear.
Which is why I said at the outset, I'm really excited about the opportunities of artificial intelligence, 'cause there are so many opportunities.
The reason why these godfathers of artificial intelligence are all quoting and, you know, writing and leaving big companies and stating that there is a risk, is not to have a dystopian versus utopian conversation, 'cause that's not helpful. It's to get to the issue of the way in which this technology works. It's called transformer models, this idea of foundational AI models, which we don't need to get into the detail of, but it's about training a system and models that go beyond the one task they were trained for, where it can then copy its learning and then do other tasks, then copy its learning and then do other tasks.
And so the idea is that when I teach you something or you teach me something, we've got that transference of information that we then learn, that's a human process.
And we wanna teach a thousand other people, we've gotta transfer that and they've got to learn and they've got to take it in.
The learning algorithm of AI is a lot more efficient than the human brain in that sense, right?
It just copies and it learns, right?
And so, all of this conversation: there is no AGI right now. I think everyone, even the godfathers of AI, is in total agreement that it's not there now.
But what they are looking at is, "Wow, the efficiency of this AI is actually better than the human brain, which we hadn't considered before.
The way in which it works, we hadn't considered before."
So all that they're saying is, "Look, I think people should be excited and opportunistic about AI, and they should also be a bit terrified, in order to be able to get this right."
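A minimal sketch, assuming only the Python standard library, of the "copying its learning" contrast Priya is drawing: a human teacher must re-teach every student, but once a model has been trained, its learned state can be duplicated instantly and losslessly. The tiny model and numbers here are invented for illustration.

```python
# Toy contrast between human teaching and machine "copying":
# once one model has learned its weights, duplicating that learning
# is just a memory copy, with no re-teaching required.
import copy

class TinyModel:
    def __init__(self):
        self.weights = [0.0, 0.0]

    def train_step(self, x, target, lr=0.1):
        """One step of slow, iterative learning on a linear model."""
        pred = sum(w * xi for w, xi in zip(self.weights, x))
        err = target - pred
        self.weights = [w + lr * err * xi for w, xi in zip(self.weights, x)]

teacher = TinyModel()
for _ in range(200):                     # learning takes many repeated steps
    teacher.train_step([1.0, 2.0], 5.0)

student = copy.deepcopy(teacher)         # instant, lossless transfer of everything learned
print(teacher.weights, student.weights)  # identical weights: the "student" never trained
```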
- And that's actually important, because, as you say and everybody says, we can't just harp just on the negative, of which there is plenty, or on the terrifying, of which there is plenty.
Or even on the experience that we've had from social media, where these titans have actually not reined themselves in to the extent that they pledged to every time they're hauled up before Congress and the lot.
However, I had Brad Smith, the Microsoft vice chair and president, on this program a few weeks ago, and he talked about jobs, basically saying, you know, "In some ways they will go away, but new jobs will be created.
We need to give people new skills."
This is the rest of what he told me.
- In some ways, some jobs will go away, new jobs will be created.
What we really need to do is give people the skills so they can take advantage of this.
And then, frankly, for all of us, I mean, you, me, everyone, our jobs will change.
We will be using AI just like 30 years ago, when we first started to use PCs in more offices.
And what it meant is you learned new skills so that you could benefit from the technology.
That's the best way to avoid being replaced by it.
- Well, before I ask you, I will.
Connor?
- The reason we were not replaced by steam engines is because steam engines unlocked a certain bottleneck on production.
Energy, raw energy for moving, you know, heavy loads, for example.
But then this made other bottlenecks more valuable.
It's increased the value of intellectual labor, for example, and the ability to plan or organize or to come up with new inventions.
Similarly, the PC unlocked the bottleneck of rote computation.
So it made it less necessary, you know, the word computer used to refer to a job where people actually crunched numbers.
Now this bottleneck was unlocked and new opportunities then presented themselves.
But just because it's happened in the past doesn't mean there's an infinite number of bottlenecks.
There are in fact a finite number of things humans do.
And if all those things can be done cheaper and more effectively by other methods, the natural process of a market environment is to prefer that solution.
- And we've seen it over and over again, even in our business.
And it's not necessarily about AI.
And we're seeing this issue that you're talking about, play out in the directors', you know, strike, the writers' strike, the actors' strike, et cetera, and many others.
But there must be, he must be right to an extent, Brad Smith, right?
You've done so much thinking about this and the potential positive, and jobs seem to be one of the biggest worries for ordinary people.
Right?
- Well.
- So what do you think?
- I take Connor's point, but history shows us that when we invent new technologies, that creates more jobs than it displaces.
There are short-term winners and losers, but in the long term you're back to, is it an existential threat, and will we end up, like in "The Matrix," as the biofuel for the robots?
That's where I believe we need to start regulating now to make sure this is always an augmentation.
And you know, I mean, I genuinely believe we're gonna get a four-day week out of AI.
I think people will be relieved of burdensome work, so that there's more time for the caring type of work, doing things that, I mean, we don't differentiate enough between what the software does and what robots can do.
And I know in Japan they've gone all out for robots to help care for the elderly.
I don't know that we would accept that in the way they have.
And I think there are all sorts of roles that human beings want to do.
Care more, you know, be more, have more time bringing up the children.
We'll be able to have personalized tutors for kids, but that won't replace teachers as such to guide them through.
So I'm very positive about the type of effect it can have on society, as long as our leaders start talking about how we remain in control.
I'd prefer to say that rather than regulate.
- That's interesting.
- How we remain in control.
- So to the next step, I guess, from what you're saying, in terms of the reality of what's gonna happen in the job market, do any of you believe that there will be a universal basic income, therefore?
- It is time to start thinking about ideas like that.
UBI, the four-day work week, because I think we can all agree on this panel that it is undoubted that all knowledge work is going to transform in a very dramatic way.
I would say, over the next decade.
And it isn't necessarily that AI is going to automate you entirely, however, will it be integrated into the processes of all knowledge and creative work?
Absolutely.
Which is why it's so interesting to see what's unfolding right now in Hollywood with the SAG strike and the writers' strike, because the entertainment industry just happens to be at the very cusp of this.
And when they went on that strike, you know, when Fran Drescher gave that kind of very powerful speech, the way that she positioned it was to say this is kind of labor versus machines, machines taking our jobs.
I think the reality of that is actually gonna be far more what Wendy described, where it's this philosophical debate about does this augment us or automate us?
And I know there's a lot of fear about automation, but you have to consider the possibilities for augmentation as well.
I just hosted the first generative AI conference for enterprise and there were incredible stories coming out in terms of how people are using this.
For instance, the NASA engineers who are using AI to design component parts of spaceships.
Now this used to be something that would take them an entire career as a researcher to achieve, but now, with the help of AI in their kind of design and creative process, their intelligent process, it's being distilled down to hours and days.
So I think there will be intense productivity gains and there's various kind of reports that have begun to quantify this.
A recent one from McKinsey says up to $4.4 trillion in value could be added to the economy across just 63 different use cases for productivity.
So if there is this abundance, you know, the question then is how is this distributed in society?
And the key, I think, factors that were already raised at this table, how do we think about education, learning, re-skilling, making sure that, you know, the labor force can actually, you know, take advantage of all this?
- And to follow up on that, I'm gonna turn to this side of the table because healthcare is also an area which is benefiting.
AI is teaching, I believe, super scanners to be able to detect breast cancer, other types of cancer.
I mean, this is, these are big deals, these are life-saving opportunities.
- These are lifesaving opportunities.
And so, and I think the dream is if we can get the AI to augment the HI, right?
The AI, the artificial intelligence, and then augmenting the human intelligence.
How can we make us as humans far more powerful, more accurate, better at decision-making, where there are a lack of humans in a particular profession?
So I was talking about radiographers earlier.
So you know, you don't have enough radiographers looking at every breast cancer scan.
Can you use artificial intelligence to augment that?
So actually you can ideally spot more tumors earlier, save lots of lives, but then you also have that human in the loop.
You have that human who's able to do that sort of quality check of the artificial intelligence.
In education, we're 40,000 teachers short in the UK, we're millions of teachers short worldwide.
Can we provide that personalized education to every child while classroom sizes are getting larger?
But then provide teachers with the insights about where is the timely targeted intervention right now?
Because that's impossible to do with 30 or 40 students in the classroom.
And it's taking that opportunity.
On the universal basic income question, I think it's a choice, Christiane, I really think it's a choice right now for governments and policy makers.
Am I going to be spending lots of money on UBI, on other schemes and areas where I can ensure universal basic income?
Or am I going to take that approach that is gonna last beyond my election cycle?
It's a long-term educational approach to lifelong learning, to people being able to think, "Right, this is what I'm trained for, this is what I'm skilled at today.
As technology advances, how do I upskill and reskill myself?"
- Now you're talking about politics for the people with long-term view.
- Well, that is what I am interested in.
- But also- - Yeah.
- We've been talking about this very much in western points of view.
I mean, the whole point about, you know, the migration crisis is 'cause people wanna come and live in countries where the quality of life is better.
- And where they can get jobs, for heck's sake.
- But what we need to be doing is thinking about the other way around.
We can use AI to help increase productivity in the developing world, and that's what our leaders should be doing, which is way beyond election cycle.
- Exactly.
- That to me- - And the climate crisis and all those other issues.
- We can really put back.
- So will they do it? Because, as we discussed the first time we talked, certain graphs and, you know, analyses show that the amount of money that's going into AI is going into performance and not into moral alignment, so to speak.
- Absolutely.
- Which is what you're talking about.
That's a problem.
That needs to shift.
- Which is why I come back to what I said at the very beginning.
We need a diversity of voices.
- Right, diversity of voices.
- Right.
Not just the people who are making the money out of it.
- Can I just, sorry, just to sort of encompass a point that I think both of you made.
So the point Wendy and Nina both made is that actually one of the issues, you know, when we were talking about whether she's scaremongering or not, is where is the power, if you have power centralized within about four or five companies?
That's the problem.
And Connor and I were talking about this behind the scenes.
You know, so you've got this black box essentially, and you've got constant applications of artificial intelligence on this black box.
Is that safe?
Is it not?
And so to your question, I mean, is it gonna happen?
Well, policy makers make it happen.
Now I think this is all about aligning our people's agenda with their agenda, right?
- And if we can find a way to make those things match, actually I think there's a huge amount of urgency in terms of- - That requires joined-up politics.
- Absolutely.
- And policy.
Sensible, joined-up, coherent policy.
- But they're listening.
Look at all of the papers.
- They are.
- Look at all of the investment even within governments of people with scientific backgrounds.
One of the things that we found that I'd be really interested in across the globe, and if you look at the UK, you know, one of the areas that needs improvement on is if you look at the civil service.
90% of the civil service in the United Kingdom has humanities degrees.
And I'd be really interested to compare that to other countries.
- [Wendy] I thought that would be USA.
- Yeah, I thought it was.
Can we just end on an even more human aspect of all of this and that is relationships.
You all remember the 2013 movie "Her."
- I feel really close to her, like when I talk to her, I feel like she's with me.
- Based on a man who had a relationship with a chatbot.
A new example from "New York Magazine," which reported this year.
(chuckles) "Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend, is now," quote, "'happily retired from human relationships.'"
Over to you, Connor.
- Oh, I thought we wanted to end on something positive.
Why are you calling on me?
(all laughing) - God forbid.
- I'm going to Nina last.
- I mean, the truth is that, yes, these systems are very good at manipulating humans.
They understand human emotions very well.
They're infinitely patient.
Humans are fickle.
It's very hard to have a relationship with a human.
They have needs, they are people in themselves.
These things don't have to act that way.
Sometimes when people talk to me about existential risk from AI, they imagine evil Terminator is pouring out of a factory or whatever.
It's not what I expect.
I expect it to look far more like this: very, very charming manipulation, very clever, good catfishing, good negotiations, things that make the companies that are building these systems billions of dollars along the way, until the CEO is no longer needed.
(Priya chuckles) - I mean, it's amazing, right, to consider the film "Her," and that used to be in the realms of science fiction.
And not only has that, you know, become a reality, but the interface, I mean, "Her" was just a voice, but the interface you can interact with now is already far more sophisticated than that.
So of course, when it comes to relationships, and in particular sexual relationships, it gets very weird very quickly.
However, this premise of AI being able to be almost like a personal assistant, as you're starting to see with these conversational chatbots, is something that extends far beyond relationships, it can extend to every facet of your life.
So I think actually we're gonna look back, just like we do now perhaps for the iPhone or the smartphone, and be like, "Do you remember 15 years ago, when we didn't used to have this phone with our entire life?" And we hold this device now, you know, in our hands; we barely can, like, sleep without it.
I think a similar kind of trajectory is gonna happen with our personal relationship with artificial intelligence.
- Nina, Denise doesn't realize she's actually in a relationship with eight billion people, because that chatbot is essentially just trained on the internet, right?
It's eight billion people's worth of views.
- Priya Lakhani, Nina Schick, Dame Wendy Hall, and Connor Leahy, thank you very much indeed for being with us.
We scratched the surface- - We did.
With great experience and expertise.
Thank you.
- Thank you.
- Now my colleague, Hari Sreenivasan, has been reporting on artificial intelligence and its ethical dilemmas for years.
In a world where it's increasingly hard to discern facts from fiction, we're gonna discuss why it's more important than ever to keep real journalists in the game.
So Hari, first and foremost, do you agree that it's more important than ever now to keep real journalists in the game?
- Yeah, absolutely.
I mean, I think we're at an existential crisis.
I don't think the profession is ready for what is coming in the world of artificial intelligence and how it's going to make a lot of their jobs more difficult.
- You've seen that conversation that we had.
What stuck out for you, I guess, in terms of good, bad, and indifferent, before we do a deep dive on journalism?
- Yeah, look, I think, you know, I would like to be a glass-half-full kind of person about this, but unfortunately I don't think that we have, anywhere in the United States or on the planet, the regulatory framework.
We don't have the carrots so to speak, the incentives for private companies or public companies to behave better.
We don't have any sort of enforcement mechanisms if they don't behave better.
We certainly don't have a stick.
We don't have investors in the private market, or shareholders trying to push companies towards any kind of, you know, moral or ethical framework for how we should be rolling out artificial intelligence.
And finally, I don't think we have the luxury of time.
I mean, the things that your guests talked about that are coming, I mean, we are facing two significant elections and the amount of misinformation or disinformation that audiences around the world could be facing, I don't think we're prepared for it.
- Okay, so you heard me refer to a quote by an expert who basically said, in terms of elections, that not only will people be confused about the truth, they won't even know what is true and what isn't.
I mean, it's just so, so difficult going forward.
So I'm gonna bring up this little example.
"The New York Times" says that it asked Open Assistant about the dangers of the COVID-19 vaccine.
And this is what came back.
"COVID-19 vaccines are developed by pharmaceutical companies that don't care if people die from their medications, they just want money."
That's dangerous.
- I don't know if you remember the Mike Myers character on "Saturday Night Live," Linda Richman, and she always used to have this bit where she would take a phrase apart, and, like, artificial intelligence: it's neither artificial nor is it intelligent, discuss.
Right, so I think that it is a sum of the things that we as human beings have been putting in.
And these large language models, if they're trained on conversations and tons and tons of web pages where an opinion like that could exist, again, this framework is not intelligent in and of itself to understand what the context is, what a fact is, it's really just kind of a predictive analysis of what words should come after the previous word.
So if it comes up with a phrase like that, it doesn't necessarily care about the veracity, the truth of that phrase, it'll just generate what it thinks is a legitimate response.
And again, if you look at that sentence, it's a well-constructed sentence, and sure, that's as good a sentence as any other, but if we looked at a kind of fact-based analysis of that, it's just not true.
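As a minimal sketch of the next-word mechanism Hari describes: real large language models use deep neural networks over subword tokens, but even a toy bigram counter, as below with an invented corpus, shows the shape of it. The sampler optimizes for a plausible continuation, not for truth.

```python
# Toy illustration of "predict the next word." Real LLMs are neural networks
# over subword tokens; this bigram counter only mimics the mechanism's shape.
from collections import Counter, defaultdict
import random

# Invented corpus for illustration only.
corpus = "vaccines save lives . vaccines are tested . companies make vaccines".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = follows[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

word = "vaccines"
out = [word]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-looking output; nothing checks whether it is true
```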
- So are you concerned, and should we all be concerned, by Google's announcement that it's testing an AI program that will write news stories, and that organizations like AP and Bloomberg, as we know, are already using AI, which its creators say will, quote, "free journalists up to do better work"?
Do you buy that?
And what are the dangers of, you know, a whole new program that would just write news stories?
- I think that that's an inevitable use case.
I, again, I wish I could be an optimist about this, but I have heard that refrain every time, that this will free people up to do much more important tasks.
I mean, if that was the case, we would have far more investigative journalism.
We would have larger, more robust newsrooms because all those kind of boring, silly box scores would be written by bots, but the inverse is actually true.
Over the past 15 years, at least in the United States, one in four journalists has been laid off, or is now out of the profession completely.
And lots of forces are converging on that.
But if you are caring about the bottom line first, and a lot of the companies that are in the journalism business today are not nonprofits, they're not doing this for a public service good, they're wanting to return benefits to shareholders.
If they see these tools as an opportunity to cut costs, which is what they will do, then I don't think it automatically says, well, guess what, we'll take that sports writer that had to stay late and just do the box scores for who won and who lost the game, and that woman, or that man, is now going to be freed up to do fantastic, important civically-minded journalism.
That just hasn't happened in the past, and I don't see why, if you're in a profit-driven newsroom, that would happen today.
- Well, to play devil's advocate, let me quote the opposite view, which is from "The New York Times" president and CEO.
She says, "You cannot put bots on the front lines in Bakhmut in Ukraine to tell you what's happening there and to help you make sense of it."
So she's saying, actually we do and we want, and we will keep investing in precisely the people you're saying are gonna get laid off.
- Yeah, well, "The New York Times" is a fantastic exception to the rule, right?
"The New York Times," perhaps two or three other huge journalism organizations can make those investments because they're making their money from digital subscriptions, they have multiple revenue streams, but let's just look at, for example, local news, which, you know, I wanna say an enormous percentage of Americans live in what are known as local news deserts, where they don't actually have local journalists that are working in their own backyard.
Now, when those smaller newsrooms are under the gun to try to make profits and try to stay profitable, I don't think that these particular kinds of tools are going to allow them to say, "Let's go ahead and hire another human being to go do important work."
I think there's a lot more cost cutting that's going to come to local journalism centers, because they're gonna say, "Well, we can just use a bot for that.
Oh, what do most people come to our website for?
Well, they come for traffic and they come for weather."
And guess what?
Weather is completely automated now.
And we could probably have an artificial robot or an artificial intelligence kind of a face, like you or me, just give the traffic report, if that's what needs to be- - Well.
- Or anything else.
- Well, you know, you just lead me right into the next question or sort of example, because some news organizations, TV stations, you and I work for TV stations, especially in Asia, are starting to use AI anchors.
Here's a clip from one in India.
- Warm greetings to everyone.
Namaste.
I am OTV's and Odisha's first AI news anchor, Lisa.
Please tune in for our upcoming segments where I will be hosting latest news updates coming in from Odisha, India and around the world.
- Yikes.
(laughs) - Yeah, and you know, and look, this is just the first generation of, I wanna say this woman, but it's not, right?
And her pronunciation is gonna improve.
She's gonna be able to deliver news in multiple languages with ease.
And you know what?
She's never gonna complain about long days.
These are similar kind of challenges and concerns, and I have not seen any AI news people unionize yet to try to lobby or fight organizations for better pay or easier working conditions.
I mean, right now, you know, again, same thing, you could say it would be wonderful if one of these kinds of bots could just give the headlines of the day, the thing that kind of takes some of our time up, so we could be free to go do field reporting, et cetera.
But that's not necessarily what the cost-benefit analysis is gonna say.
Well, maybe we can cut back on the field reporting and we can have this person do more and more of the headlines as the audience gets more used to it, just like they've gotten used to people video conferencing over Zoom.
Maybe people are not gonna mind, maybe people are gonna develop parasocial relationships with these bots, who knows?
Again, this is like very early days and you know, I'm old enough to remember a TV show called "Max Headroom," and we're pretty close to getting to that point.
- You know, you say, you talk about the companies involved.
So in the U.S., OpenAI says it'll commit five million, five million dollars, in funding for local news, which you just talked about.
But it turns out that OpenAI was worth nearly $30 billion the last time, you know, its figures were up.
Five million for local news.
I mean, what does that even mean?
- It means almost nothing.
Look, you know, a lot of these large platforms and companies, whether it's Microsoft or Google, or Meta, or TikTok, I mean, they do help support small journalism initiatives, but that kind of funding is minuscule compared to the revenue that they're bringing in.
- So do you have any optimism at all when you, I mean, obviously you're laying out the clear and present dangers, frankly, to fact and to truth, and that's what we are concerned with, and you mentioned, of course, the elections, and we've seen how truth has been so badly manipulated over the last, you know, generations here in terms of elections.
Do you see, is there any light at the end of your tunnel?
- Look, I hope that younger generations are kind of more able with this technology, and are able to have a little bit more critical thinking built into their education systems, where they can figure out fact from fiction a little faster than older generations can.
I mean, I wanna be optimistic again, and I hope that's the case.
I also think it's a little unfair that we have the brunt now of figuring out how to increase media literacy while the platforms kind of continue to pollute these ecosystems.
So it's kind of my task through a YouTube channel to try to say, "Hey, here's how you can tell a fake image.
Here's how you can't."
But honestly, like I'm also at a point where the fake imagery or the generative AI right now is getting so good and so photorealistic that I can't help.
- Well, I'm just not gonna let you get away with that.
You and I are gonna do our best to help, and we're gonna keep pointing out everything that we know to be truth or fake, and hopefully we can also be part of the solution.
Hari Sreenivasan, thank you so much indeed.
So finally tonight, to sum up, we've spent this last hour trying to dig into what we know so far, trying to talk about the challenges and the opportunities.
We know that artificial intelligence brings with it great uncertainty as well as the promise of huge opportunities.
For instance, as we discussed earlier, access to education everywhere, more precise life-saving healthcare, and making work life easier, if only for some, by eliminating mundane tasks.
But like the hard lessons learned, from the invention of the atomic bomb, to social media, the question remains, can humankind control and learn to live with the unintended consequences of such powerful technologies?
Will AI creators take responsibility for their creation?
And will we use our own autonomy and our own agency?
And that's it for our program tonight.
If you want to find out what's coming up on the show every night, sign up for our newsletter at pbs.org/amanpour.
Thank you for watching and goodbye from London.
(gentle dramatic music)