Thinking About AI: Part V - Implications
We continue our series on AI, focusing on the implications of this new technology for society.
Welcome Back to Win-Win Democracy
The AI frenzy continues in the media. Building on what we learned in the last issue about how these systems work, we turn now to the implications of this technology for society.
Current Impacts
Even in its current state, AI technology is exacerbating problems we already face. If the technology continues to improve as many anticipate, the impacts will intensify over time.
Disinformation Campaigns
Disinformation campaigns are rampant. Fox News has demonstrated its ability to get millions of Americans to believe lies. Russian operatives used Facebook, Twitter, Instagram, and other social media platforms to influence the outcome of the 2016 presidential election[1].
AI opens up new avenues for disinformation campaigns. ChatGPT and its ilk can be used to bombard social media platforms with disinformation for almost no cost.
More insidious is the problem of deep fakes: believable AI-generated recordings and images of events that never occurred.
This fake image, created as a demonstration by Eliot Higgins, founder of the investigative outlet Bellingcat, purports to show former President Trump being arrested by New York police. Of course, that never happened.
Look closely and it’s obviously fake. But someone inclined to believe this happened is unlikely to look closely. Indeed, Higgins’ tweets depicting this fake event have been viewed nearly 5 million times.
A more insidious deep fake, produced by right-wing provocateur Jack Posobiec[2], is a fake video of President Biden invoking the draft to meet the demand for US soldiers in response to Putin's occupation of Kyiv and an impending Chinese blockade of Taiwan, neither of which has happened.
Again, a cursory viewing is convincing. After about 45 seconds of the fake video, Posobiec goes on to say that this is "coming attractions, a glimpse into the world beyond," meaning that he made the fake to illustrate where he believes Biden could take us if we're not careful.
Perhaps these two deep fakes are not, themselves, a threat, because it is so easy to check the facts about such well-known people. But it doesn't take much imagination to see how deep fakes could be used in effective disinformation campaigns. Think back, for example, to the 1933 Reichstag fire, which many historians believe was a Nazi false flag operation, and the role it played in giving the Nazis dictatorial power in Germany.
Could deep fakes, used to place blame on some minority group for a similar event today, be used to justify "emergency" dictatorial powers in the US? Given the manner in which many in the US responded to former President Trump's lies about the 2020 election, it seems plausible, even likely, that deep fakes could become an important component of a disinformation campaign to overthrow our democracy.
Scams
As you saw in the video of fake President Biden speaking, AI can now generate audio that sounds like a particular individual. Perfect for scams.
Wall Street Journal columnist Joanna Stern created "AI Joanna" using several inexpensive online tools that process a few hours of uploaded recordings of the real person speaking to produce an "audio clone". AI Joanna was good enough to fool her bank's voice-authentication system.
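To make the mechanics concrete, here's a minimal sketch of that upload-and-synthesize workflow in Python. The service URL, endpoints, and field names are hypothetical stand-ins for the kind of inexpensive online tools Stern used; no real vendor's API is depicted.

```python
# Minimal sketch of the upload-and-synthesize workflow described above.
# The service URL, endpoints, and JSON fields are hypothetical stand-ins;
# this is not any real vendor's API.
import requests

API = "https://voice-cloning.example.com/v1"  # hypothetical service
API_KEY = "..."  # credential for the hypothetical service

def build_clone(sample_paths: list[str]) -> str:
    """Upload a few hours of recordings; return a voice-model id."""
    files = [("samples", open(path, "rb")) for path in sample_paths]
    resp = requests.post(f"{API}/voices", files=files,
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    return resp.json()["voice_id"]

def speak(voice_id: str, text: str, out_path: str) -> None:
    """Ask the service to say arbitrary text in the cloned voice."""
    resp = requests.post(f"{API}/voices/{voice_id}/synthesize",
                         json={"text": text},
                         headers={"Authorization": f"Bearer {API_KEY}"})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # audio bytes in the cloned voice
```

The point is how little an attacker needs: a few hours of audio, a cheap service, and a short script.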
Real Joanna believes that it would have fooled her sister if her sister hadn’t noticed that the clone didn’t pause to take a breath.
CNN reported on the use of AI-based voice cloning to convince a mother that her daughter was being held for ransom by kidnappers.
As voice cloning becomes more sophisticated, the scamming risks will grow.
Fabrication as a Business Model
As we’ve discussed previously, ChatGPT often writes confidently using “facts” that it fabricates, with the potential to cause great harm to individuals and organizations.
But what a great business opportunity: news websites, written at near-zero cost by ChatGPT (and other chatbots), bringing in revenue from programmatic advertising[3].
As reported by Bloomberg News, the news-rating group NewsGuard has identified 49 purported news sites that “appear to be almost entirely written by artificial intelligence software.” Some of the sites summarize (without attribution) content from elsewhere and some appear to fabricate content from prompts.
A few are so devoid of human supervision that their articles include error messages from their AI authors. Check out, for example, this article.
Employment Loss
The media is full of alarming headlines about AI’s impact on jobs:
Fortune: Goldman Sachs Predicts 300 Million Jobs Will Be Lost Or Degraded By Artificial Intelligence
CNN: 300 million jobs could be affected by latest wave of AI, says Goldman Sachs
The Guardian: US experts warn AI likely to kill off jobs – and widen wealth inequality
and many more
The history of technology adoption is that new technologies displace or eliminate some jobs while creating others; but the timing and the shifts in needed skills mean that many individuals' livelihoods are harmed in the transition.
We’ll discuss AI’s potential long-term impact on employment below.
What's happening in the short term? As a recent article in the Washington Post described, many industries are experimenting with using AI to augment humans in their work rather than to replace them. Even so, if AI helps humans be more productive, fewer humans may be needed to do the same amount of work.
Last week, IBM’s CEO, Arvind Krishna, said that IBM’s hiring in back-office functions, like human resources, will be suspended or slowed. In an interview, Krishna said that “these non-customer-facing roles amount to roughly 26,000 workers” and that “I could easily see 30% of that getting replaced by AI and automation over a five-year period.”
Be skeptical about that pace. Such technology transitions are always slower than predicted because there are many unforeseen or under-appreciated impediments to adopting new technologies.
For example, in 2016, Krishna's predecessor, former IBM CEO Ginni Rometty, in an interview about using IBM's Watson AI in cancer treatment, told CBS News that "I think in the next five years, you'll use this kind of technology to make almost any important decision." Rometty added, "and it could be around the weather, it could be around education, it could be around shopping, but at the other end, it will be about risk, finance, whether it's anything to do with anything complex in a system in our world that's out there."
Neither the use of Watson AI in cancer treatment nor her predictions about Watson AI's broader use have come to fruition[4].
Nevertheless, the mere anticipation that 7,800 positions (30% of those 26,000) could be turned over to AI automation over five years is leading to a hiring slowdown today, even before it has been demonstrated that AI can replace people in these roles.
Businesses Disrupted by AI
We know that new technologies can destroy old businesses. Think digital cameras and Kodak. Usually an old business's demise plays out over many years, and a few threatened companies even manage to pivot and remain relevant in the face of the new technology.
But even the threat of AI seems to have a quick impact on some companies. Bloomberg financial columnist Matt Levine reported that the stock of Chegg, Inc., a homework-assistance company, plummeted 42% on Tuesday after it filed a securities offering document that identified ChatGPT as a business risk.
Now, stock prices are not the same thing as the company’s business. But the company’s CEO said that “since March we saw a significant spike in student interest in ChatGPT. We now believe it’s having an impact on our new customer growth rate.”
Intellectual Property Issues
ChatGPT was trained on vast amounts of text and computer code written by humans, who never anticipated that the intellectual property they created would be incorporated into ChatGPT with neither attribution nor compensation. Likewise, generative AI programs that create images and videos were trained on art produced by humans, again without attribution or compensation. Who owns the intellectual property created by such programs?
None of those humans, nor the corporations to which their intellectual property rights had been assigned, gave permission for their work to be used in these ways.
The same AI technology that can clone voices for use in scams can similarly clone the voices of singers, voice actors, and other well-known personalities. A Washington Post article describes the situation of Michelle Clarke, a voice actor who learned that a cloned version of her voice was available inexpensively through an online service.
Similarly, the New York Times reported on the music track "Heart on My Sleeve," which went viral on TikTok, Spotify, and YouTube. It was created with AI-generated clones, used without permission, of the voices of two popular musicians, Drake and The Weeknd.
Future Impacts
The impacts we discussed in the last section, which are already underway, will continue and probably accelerate. The potential for AI-based disinformation campaigns to further destabilize our democracy is particularly concerning. All of these impacts are part of our future, even though I won't discuss them further in this section.
The big question is whether today's AI frenzy is the start of a revolution that will change everything quickly; the beginning of a decades-long, gradual evolution of our economic and social governance systems; or a flash in the pan that we'll all shake our heads about in a few years.
Old people like me remember shaking our heads about the Japanese Fifth Generation Computer Systems project of the 1980s, which was going to revolutionize computing and programming with AI. It was going to destroy the American computer industry and bring Japan worldwide economic dominance. Didn't happen.
Likewise, so-called expert systems, again in the 1980s, were going to replace skilled workers of many kinds, including doctors and lawyers. That never happened either. Instead, bits and pieces of the technology made their way into various products but never revolutionized anything.
Is this time different? Yes, I think so.
Despite today’s AI’s many flaws, which we’ve discussed in previous issues of the newsletter, AI technologies based on artificial neural networks are already solving real problems like speech recognition, language translation, image classification, and more. And, I don’t mean in research settings, but in practical implementations used by probably hundreds of millions of people. Not only aren’t they going away, they’re going to continue to improve.
Put another way, the success of AI technologies at solving particular, narrow problems, and as tools assisting human beings with various tasks, is assured. That success has been underway for a decade, and the history of technology diffusion suggests it will continue for a few more decades as companies and their employees learn how best to deploy the technologies.
The bigger question is whether AI technologies will prove revolutionary, in the sense of causing rapid, disruptive change throughout society, and, if so, whether that change can be managed in a way that most people would consider positive.
Let’s start that discussion with the worst-case scenario.
AI Annihilates Humanity
Science fiction has given us characters like Star Trek’s Lieutenant Commander Data, an android (synthetic humanoid) endowed with super-human artificial intelligence and superior physical capabilities, but lacking in human emotions. The Data character was a positive presence among his human colleagues, but his older brother Lore, created by the same fictional human scientist, was malevolent.
Could an army of self-replicating Lore-like androids use their superior intelligence and physical capabilities to destroy humanity? Perhaps, but today’s AI is far from giving us real-life Data and Lore androids, or even disembodied artificial brains, what some call artificial general intelligence.
A more likely annihilation scenario involves AI-controlled weaponry gone wrong or used malevolently. Humans already use remotely controlled robotic weapons in warfare. It is a relatively small step to use AI to automate weapon targeting and release, especially in warfare situations in which occasional mistakes might be tolerated.
Even an unintentional AI-launched attack on another world power could provoke a humanity-annihilating nuclear weapons exchange. Can we prevent that? Perhaps.
Fundamentally, this scenario is another instance of the global weapons-control problem. Humanity has managed to avoid nuclear conflagration for nearly 80 years using a variety of diplomatic, military-coordination, and economic approaches. It is essential that these approaches be updated in light of the potential for AI-controlled weaponry[5].
Long-Term Economic Impacts
Accurate predictions about the future are impossible. Most predictions of technology adoption are overly optimistic, but there are also plenty that have been wrong in the other direction. Nobel laureate Paul Krugman’s 1998 article Why most economists’ predictions are wrong, published in the now-defunct Red Herring magazine, makes amusing reading in retrospect.
Nevertheless, all of us are compelled to predict the future!
I mentioned earlier a Goldman Sachs report that was touted by the popular business press with headlines about 300 million jobs “lost or degraded.” Despite the media’s click-bait headlines, the report, available here, is actually a thoughtful attempt to predict the economic impact of AI.
The report's authors analyzed each occupation in the US Occupational Employment and Wage Survey and in the Eurostat Labor Force Survey, estimating for each occupation the share of its workload that could be replaced by AI. (Exactly which AI capabilities they assumed to be real is unclear.) They aggregated the results across occupations and then extended these US and European estimates globally, adjusting for variations in the occupational mix across emerging and developed markets.
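To illustrate the shape of that calculation, here's a small sketch in Python. The occupations, employment counts, and exposure shares below are invented for the example; they are not the report's actual inputs.

```python
# Illustrative sketch of the report's method: estimate, for each occupation,
# the share of its workload exposed to AI automation, weight by employment,
# and sum to get economy-wide exposure. All numbers below are invented.
occupations = {
    # occupation: (employment in millions, share of workload exposed to AI)
    "office and administrative support": (19.0, 0.45),
    "legal": (1.2, 0.45),
    "food preparation and serving": (12.5, 0.10),
    "construction and extraction": (7.7, 0.05),
}

total_employment = sum(emp for emp, _ in occupations.values())
exposed_ftes = sum(emp * share for emp, share in occupations.values())

print(f"Total employment: {total_employment:.1f}M")
print(f"Exposure-weighted work: {exposed_ftes:.1f}M full-time equivalents "
      f"({exposed_ftes / total_employment:.0%} of all work)")
```

Scaling exposure-weighted full-time equivalents up from national surveys to global employment is what produces headline numbers like 300 million.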
Here are their key conclusions:
In the US and Europe, "roughly two-thirds of current jobs are exposed to some degree of AI automation and generative AI could substitute up to one-fourth of current work." Extrapolating globally suggests that 300 million full-time jobs could be exposed to automation. (NB: "exposed" means affected, not necessarily eliminated.)
AI could raise annual US labor productivity growth by just under 1.5 percentage points over a 10-year period.
The boost to global labor productivity could eventually increase annual global GDP by 7% if AI delivers on its promise.
I wouldn’t bet on the specific numbers, but, directionally, this seems about right to me.
We'll see many jobs affected, just as we've seen many jobs affected over the last 50 years by the adoption of computers in pretty much all industries. Some occupations were eliminated, but many more were changed: there are still accountants, but they use spreadsheets and software, not paper ledgers.
And, over time, maybe decades, we’ll see increased productivity, which will lead to more economic growth.
But there will be other impacts:
Changes to the demand for various skills. The diffusion of computers into business increased the demand for high-skilled workers, especially college-educated workers, and reduced opportunities for lower-skilled workers[6]. We could see more of that as AI is adopted.
New job categories will open up. Already, companies are recruiting so-called “prompt engineers,” people who can coax useful results out of ChatGPT and other AI technology. And, of course, there will be increased investments in the AI technology itself.
The pace of adoption is a big question. If AI is adopted rapidly, the impacts on employment, businesses, skills, and so on could be extremely disruptive, even to the point of causing unrest. On the other hand, if the adoption is more like what happened with computer technology, there will be time for people and businesses to adapt, lessening the possibility of a resulting crisis.
Concentrating Power
The companies and people who control AI technology and its application will amass enormous wealth and power. We've already seen how the network effects of technology and social media have put the FAANG companies (Facebook, Amazon, Apple, Netflix, and Google) in the driver's seat for huge parts of our economy. I'd add the resurgent Microsoft to that list, especially given its huge investment in OpenAI.
Additionally, companies in other industries that figure out how to exploit AI in their own spaces will come to dominate those industries. The opportunity for vastly more corporate monopolization and concentration of wealth and power is a high-risk side effect of AI adoption.
Mitigating the Impacts
So, what do we do to mitigate AI’s threats? Most proposals take one of two approaches.
Pausing
The Future of Life Institute, backed by a group of prominent scientists, academics, and business leaders, as well as tens of thousands of other signatories, has called for pausing, for at least six months, all work on AI systems more advanced than the recently released GPT-4. During the pause, AI labs and independent experts should develop "safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt." The same group has published a policy brief, Policymaking in the Pause: What can policymakers do now to combat risks from advanced AI systems?, outlining its initial recommendations.
I see no sign that the major players are planning to pause. The commercial stakes of the AI race are so high that it would be an extraordinary risk for the leaders of the major AI companies to decide to pause.
Moreover, if-we-don’t-do-it-the-Chinese-will-do-it thinking is a powerful disincentive for the government to impose a pause, even if the government has the power to do that, which seems unlikely.
Nevertheless, the Future of Life Institute's policy proposals are worth pursuing even without a pause.
Regulating
As Ezra Klein reports in the New York Times, both the White House and the European Commission have promulgated draft regulatory policy frameworks for governing AI; and the Chinese government insists that “content generated through the use of generative A.I. shall reflect the Socialist Core Values, and may not contain: subversion of state power; overturning of the socialist system; incitement of separatism; harm to national unity; propagation of terrorism or extremism; propagation of ethnic hatred or ethnic discrimination; violent, obscene, or sexual information; false information; as well as content that may upset economic order or social order.” Now that’s a tall order.
Lina Khan, the chair of the US Federal Trade Commission, has also articulated the need to make different policy decisions than we did in the early days of the Internet.
Regulating AI to some degree is necessary, but regulation alone is not sufficient to protect society from the most important negative impacts of AI. Moreover, the wrong regulations could stifle growth of important capabilities.
What’s Next?
In the next issue of the newsletter, we’ll start by examining some of the regulatory and policy proposals being discussed and what effects they may have.
Then we’ll discuss the effect that AI might have on some of the broader problems we’ve discussed previously. For example, AI will probably drive more concentration of wealth and political power. We could try to regulate AI to prevent this from happening. Perhaps it would be better to focus directly on the concentration of wealth and power, using AI as a forcing function to move that conversation ahead.
Suggested Reading
You could spend the better part of each day reading about AI and its potential impact. It is easy to get overwhelmed. Here are some articles that I've found particularly insightful or thought-provoking:
Jaron Lanier, There is No A.I., The New Yorker, April 2023. Lanier is a long-time thinker/philosopher about computing and its interplay with society. This article makes the case that the “I” in AI is overstated. He says “By persisting with the ideas of the past—among them, a fascination with the possibility of an A.I. that lives independently of the people who contribute to it—we risk using our new technologies in ways that make the world worse. If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.”
Thomas L. Friedman, We Are Opening the Lids on Two Giant Pandora’s Boxes, the New York Times, May 2, 2023. Friedman asks that as we confront the impacts of both climate change and AI, “What kind of regulations and ethics must we put in place to manage what comes screaming out?”
Will Douglas Heaven, Geoffrey Hinton tells us why he's now scared of the tech he helped build, MIT Technology Review, May 2, 2023. Hinton is a 2018 Turing Award[7] winner for his work on machine learning, a professor at the University of Toronto, and, until last week, a member of Google's AI team. Hinton and his students helped pioneer backpropagation in the 1980s, now a key technique for training artificial neural networks. He resigned from Google to make it possible to speak freely about the future of AI.
Yuval Noah Harari, Yuval Noah Harari argues that AI has hacked the operating system of human civilisation, The Economist, April 28, 2023. Harari argues that since language and storytelling are at the core of human culture, AI’s mastery of language and storytelling will allow it to mass-produce intimate relationships with millions of people, influencing our opinions and worldviews.
Ezra Klein, The Surprising Thing A.I. Engineers Will Tell You if You Let Them, The New York Times, April 16, 2023. Klein tells us that when he talks with people working on AI, they tell him that they are desperate to be regulated, that we shouldn’t leave the future to a race among Microsoft, Google, Meta (formerly known as Facebook), and a few other companies. Klein summarizes regulatory efforts underway in the US and European Union.
[1] For an interesting study of how social media gives voice to people with no track record or reputation, see Social Media and Fake News in the 2016 Election by Hunt Allcott and Matthew Gentzkow, Journal of Economic Perspectives, Volume 31, Number 2, Spring 2017, pages 211-236.
[2] Wikipedia describes Posobiec as "an American alt-right political activist, television correspondent and presenter, conspiracy theorist, former United States Navy intelligence officer, and provocateur."
[3] Programmatic advertising works like this: a website that wants to earn advertising revenue uses an online service that runs a real-time auction to place ads on the site. Bidders are given some information about the person viewing the site, and the ad from the highest bidder is shown. These auctions are completely automated and run in a few milliseconds each time someone views the site. Google Ads is, by far, the biggest of these ad-placement services.
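As a toy illustration of that flow, here's a sketch of a single auction round in Python. The bidders, prices, and viewer-info fields are invented for the example, and real exchanges add many complications (bid floors, first- vs. second-price rules, fraud filtering) that are omitted here.

```python
# Toy sketch of one programmatic-ad auction round: the exchange shares limited
# viewer info with each bidder, collects bids, and serves the highest bidder's
# ad. All bidders, prices, and viewer-info fields are invented.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Bid:
    advertiser: str
    price_cents: float
    ad_html: str

Bidder = Callable[[dict], Optional[Bid]]

def run_auction(viewer_info: dict, bidders: list[Bidder]) -> Optional[Bid]:
    """Collect a bid from each participant; return the highest, if any."""
    bids = [b for bidder in bidders if (b := bidder(viewer_info)) is not None]
    return max(bids, key=lambda b: b.price_cents, default=None)

# Hypothetical bidders that price an impression from the viewer info they see.
def shoe_brand(info: dict) -> Optional[Bid]:
    if "running" in info.get("interests", []):
        return Bid("shoe_brand", 42.0, "<img src='shoes.png'>")
    return None

def bank(info: dict) -> Optional[Bid]:
    return Bid("bank", 15.0, "<img src='bank.png'>")

winner = run_auction({"interests": ["running", "news"]}, [shoe_brand, bank])
print(winner.advertiser if winner else "no ad")  # -> shoe_brand
```

The real-world version of this loop runs billions of times a day, once for each page view.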
[4] Yesterday, The Atlantic published an article by Mac Schwerin titled America Forgot About IBM Watson. Is ChatGPT Next?
[5] A better approach would be to prohibit direct AI control of weaponry. I don't, however, believe that is feasible in today's geopolitical reality because of the low levels of trust among the major world powers.
[6] Adam Zaretsky, Have Computers Made Us More Productive? A Puzzle, Federal Reserve Bank of St. Louis, 1998.
[7] The Turing Award is considered the Nobel Prize of computing.