Short Note: Win-Win Democracy Takes a Rest & AI Regulation
Win-Win Democracy will be taking a break for a while. Back to AI: Be skeptical of the requests from industry leaders for regulation.
Win-Win Democracy Takes a Rest
After almost a year and a half, 267 subscribers, 34 long-form posts, and almost 30,000 views total, I’ve decided to (at least temporarily) reduce my effort on writing Win-Win Democracy. Three factors drove me to this decision:
Writing this newsletter takes a lot of my time. I enjoy working on it — especially doing the research — but I’ve made little progress on other projects that are important to me and to my family.
I’ve failed to attract enough readers, especially people who are different from me, to have an impact commensurate with the amount of work writing the newsletter requires.
Prospects for win-win solutions are greatly diminished when the Republican Party nationwide is focused on using fascist political tactics to gain authoritarian power rather than engaging to solve the country’s real problems. For example, in 2023 alone, Republican legislators across the nation introduced more than 500 anti-LGBTQ bills, harming many without helping anyone. The Republican Party has lost its way and, in its current form, cannot be a partner in working toward common goals.
I expect to occasionally write new material, some about win-win ideas and some about other particularly important topics related to the viability of our democracy, as I did last time with Fascist Politics in America.
I appreciate all of the encouragement and feedback that I’ve received along the way.
Thinking About AI
When we last talked about AI, I argued that “regulating AI” itself is the wrong approach to controlling the impact of AI on our society. I suggested instead that we strengthen the laws and regulations related to disinformation, platform liability, verification of identity, unemployment, fair use, wealth inequality, etc.
AI does not create these problems, but it probably will worsen the problems that we already have.
NYU social psychologist Jonathan Haidt and former Google CEO Eric Schmidt, writing in the article AI Is About to Make Social Media (Much) More Toxic, put it this way:
“We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months — in time for the next presidential election — some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.”
Given that we as a society haven’t made progress solving the many problems we already have with disinformation, social media, etc., what would make one think that regulating AI would improve the situation? We must address the root issues, not just try to keep AI from making them worse.
I’ve already discussed some approaches to doing so in earlier posts.
Here, I want to address the highly visible push from industry leaders for Congress to regulate AI, whatever that might mean.
My short message: Be skeptical. Highly skeptical.
Industry’s Charm Offensive on Regulating AI
As reported by The Daily Beast in the article How Congress Fell for Sam Altman’s AI Magic Tricks, OpenAI’s CEO Sam Altman performed for Congress in May, showing off the gee-whiz capabilities of large language models and generally wowing the legislators, not to mention wining and dining them the evening before. Senator Richard Blumenthal told reporters that “Sam Altman is night and day compared to other CEOs, and not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action.”
Meanwhile, as reported by the Washington Post, Senate Majority Leader Charles Schumer has been meeting with industry leaders and representatives, as well as academics and critics of AI, as part of what he calls “an all-hands-on-deck effort in the Senate.” Schumer says “We need the best of the best sitting at the table: the top AI developers, top executives, scientists, advocates, community leaders, workers, national security experts all together in one room, doing years of work in a matter of months.”
Put all of this in the context of the call three months ago from prominent scientists and leaders for a six-month pause on advanced AI development. As far as I can tell, none of the industry leaders asking to be regulated have done anything to slow down their own work.
It is nearly impossible to know the true motives of the tech industry’s leaders, but we can get a hint by looking at how they interact with regulators in the EU, which is far ahead of the United States in protecting consumer privacy, regulating social media, and enforcing antitrust law against tech companies.
The EU has been working on regulations for AI for years, building on the regulatory framework it already has in place (which we completely lack). As reported in the Washington Post, Google, Microsoft, and OpenAI all declined to comment after the European Parliament overwhelmingly passed the EU AI Act last month, which includes limits on “tools that could sway voters to influence elections or recommendation algorithms, which suggest what posts, photos and videos people see on social networks.” Not a word from these companies in support of new regulations on AI; could it be that, when the rubber hits the road, they really don’t want to be restrained?
We should be wary of the tech industry’s charm offensive on regulating AI. Despite protestations to the contrary, it seems plausible, even likely, that tech leaders are really seeking protections that allow them to do what they want without being accountable for the results, much like what Section 230 has done for existing social media and digital platforms.
Meanwhile, we hear nothing from these same industry leaders about changing their current businesses to protect society from the extremely harmful effects of their existing platforms. Absent that, I can’t be sanguine about the true intent behind their pleas for AI regulation.
What’s Next?
I probably won’t be writing anything major before fall, although that could change depending on events.
I’d be interested in any advice you can offer on how to expand and broaden the audience and/or evolve the content. You can respond publicly in the comments or privately by replying to this email.