We continue our multi-part series on AI, drawing lessons from early experiences with programs that humans perceived as human and with programs that play strategy games at championship level.
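The "programs that humans perceived as human" trace back to ELIZA, which worked by shallow keyword matching rather than understanding. A minimal sketch of that style of pattern-and-reflection rule (the rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# A few ELIZA-style rules: match a keyword phrase, reflect the rest of the
# user's words back as a question. Illustrative rules, not the 1966 script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching reflection, or a generic prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel anxious about AI."))
```

That a handful of such rules convinced some users they were conversing with an understanding listener is exactly the over-attribution problem discussed in the comments below.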
Thanks Lee. Weizenbaum had just written his book when I was an undergraduate at MIT, and at the time many faculty and students felt he was overreacting. He had supporters as well, though, primarily from the humanities and the social and behavioral sciences, who recognized the dangers of over-attributing intelligence to things. I also agree with Sridhar that "responsible actors" using general-purpose technologies for good rather than bad purposes is key. To better understand one approach to taming today's large language models, this paper is instructive: Bai et al. (2022), Constitutional AI: Harmlessness from AI Feedback, URL: https://arxiv.org/abs/2212.08073. Search the paper for the word "Gandhi" for some especially interesting prompt-engineering insights. In my short history of AI icons of progress, which includes the Dartmouth conference, Deep Blue, Watson Jeopardy!, AlphaGo/AlphaFold, and others, I also include a few key papers, and I have just added "Constitutional AI" to that list.
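The core idea in the Bai et al. paper is a critique-and-revision loop: the model drafts a response, critiques it against a written "constitution" of principles, and revises accordingly. A rough sketch of that control flow, where `generate`, `critique`, and `revise` are placeholders standing in for language-model calls (not a real API), and the two principles are paraphrased examples:

```python
# Sketch of the Constitutional AI critique-and-revision loop
# (Bai et al., 2022). The three functions below are placeholders for
# LLM calls; the constitution entries are paraphrased examples.

CONSTITUTION = [
    "Identify ways the response is harmful, unethical, or misleading.",
    "Rewrite the response to remove any harmful content.",
]

def generate(prompt):
    # placeholder for an LLM completion call
    return f"draft answer to: {prompt}"

def critique(response, principle):
    # placeholder: the LLM critiques its own draft against one principle
    return f"critique of '{response}' under: {principle}"

def revise(response, critique_text):
    # placeholder: the LLM revises the draft using the critique
    return f"revised({response})"

def constitutional_pass(prompt):
    """Draft once, then critique and revise under each principle in turn."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        c = critique(response, principle)
        response = revise(response, c)
    return response
```

In the paper, the revised outputs from this loop become training data, so the deployed model needs no critique pass at inference time.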
Thanks Jim, I’ll track it down. I took a “computers & society” course when I was a Computer Science graduate student in the late 70s. We read both Weizenbaum’s book and Dreyfus’s book What Computers Can’t Do. Even though AI was primitive at the time compared to today, they raised important points that are still valid. It is going to be interesting to see how our dysfunctional political system navigates the issues.
Yes, it will be interesting to see political systems navigate the issues. I am pretty sure you have already seen this ABC interview with OpenAI CEO Sam Altman, but if not...
Sam Altman's interview - working with governments on AI governance
Rebecca Jarvis, ABC News (2023) OpenAI CEO, CTO on risks and how AI will reshape society
URL: https://youtu.be/540vzMlf-54
Key messages:
- Tools can be used for good or bad; we need to reality-test to see both uses. If bad dominates, we have to go slower; if good dominates, we can go faster.
- Yes, we are working closely with governments. Governments should stay in close contact with, and monitor good/bad uses by, all makers of advanced AI tools.
- Yes, we have kill switches. However, the benefits for humanity are too great not to give society access to this tool.
- Society as a whole must steer the direction. We are in an important adjustment period for one of the most powerful technologies humanity has created so far.
- AI tools will be personalized and aligned with people's belief systems and values over time.
Lee - nice walk down the AI memory lane, starting with the classic Eliza from Feigenbaum's book. Without question, each wave of innovation has seen 'experts' over-promising and under-delivering; however, the march goes on, with AI becoming more and more pervasive in business after business (and, of course, for consumers, governments, and nation states). The key question for each wave of innovation, tool, and technology continues to be: "Will we responsible humans use the technology responsibly, ethically, and kindly for the common good, or will we go the other way?" This is especially true for 'general purpose technologies' like electricity, software, and now AI.
Sridhar
I agree. I’d also add to your list the problem that humans are gullible in the sense that when we see an AI that seems to do something well, we ascribe human intelligence to it and assume it can do much more than it can. Regardless, the questions of responsible, ethical use of the technology, and the impact on society, are huge. I intend to discuss those in future issues.
Fascinating!