Wonderful. Thank you so much. Admittedly I was salving my back pain with Butter Chardonnay when I should have been taking better notes.
One of my first thoughts was of working as the TA to Ralph Gomory, 1980 to '82, and remembering his saying that the really hard algorithmic problems were classified as AI until solved, and then their solutions went into a new chapter in CS algorithms textbooks.
You suggest you avoided serious CS, but all you really left out was the taking of human-annotated speech and pictures and the slurping of them into neural network data structures. The listeners did not miss much at the conceptual level.
One of the interesting things missed was the improvement in playing the complex game Go. (I learned early, in the 1960s, that Monte Carlo simulations could really bootstrap you ahead. In my case I was working with a professor on the random motion of vacancies at the boundary of two metal crystals. My Monte Carlo simulation did not match their 1961 theory; the theory people had work to do.) Early Go-playing machines were good enough for the family game room, but they were not good players. Initially, people thought this complex game was a brick wall.
The trick to great Go machines was to add some randomness, take more risk of losing, and have a machine play itself or another Go machine. They were more experimental and lost some games, but a lost game is also a potential way to win. The Go machines trained very successfully playing themselves, soon beating masters with strategies unknown to humans.
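To make that concrete, here is a toy sketch of self-play with deliberate randomness, using tic-tac-toe instead of Go and a simple value table instead of anything like AlphaGo's neural networks and tree search. Everything here (names, parameters, the learning rule) is my own invention for illustration, not the actual method.

```python
import random
from collections import defaultdict

# board-state string -> estimated value for the player who just moved into it
values = defaultdict(float)
EPSILON = 0.1   # chance of a random, exploratory (riskier) move
ALPHA = 0.5     # learning rate

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def after(board, move, player):
    nxt = board[:]
    nxt[move] = player
    return "".join(nxt)

def play_one_game():
    board = ["."] * 9
    history = {"X": [], "O": []}   # states each player moved into
    player = "X"
    while True:
        options = legal_moves(board)
        if random.random() < EPSILON:   # take a risk: play a random move
            move = random.choice(options)
        else:                           # otherwise play the best-valued move
            move = max(options, key=lambda m: values[after(board, m, player)])
        board[move] = player
        history[player].append("".join(board))
        win = winner(board)
        if win or not legal_moves(board):
            # a lost game still teaches: losing states are pushed toward -1
            for p in ("X", "O"):
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                for state in history[p]:
                    values[state] += ALPHA * (reward - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):   # self-play training games
    play_one_game()
```

The EPSILON knob is the "take more risk of losing" part: without it, the machine keeps replaying the same games against itself and stops discovering anything new.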
Not mentioned was that the large language models are based on word-frequency sequence statistics. Their "brilliance" is that the vocabulary and sequences are so familiar that we get sucked in.
It takes some effort to get past the easy readability of the word sequences and notice the hallucinations, which quickly approach lies.
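The word-statistics point can be shown in miniature with a bigram model: pick each next word according to how often it followed the previous word in the training text. This is vastly simpler than a real LLM, and the corpus is made up, but the core "sequence statistics" mechanism is the same in spirit.

```python
import random
from collections import defaultdict

# Tiny made-up corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat and the cat ran under the mat".split()

follows = defaultdict(list)   # word -> every word observed right after it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:                   # no observed successor: stop
            break
        word = random.choice(options)     # frequent successors are more likely
        out.append(word)
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat on the mat and the cat"
```

The output reads smoothly because every adjacent pair of words really did occur somewhere, and that familiar flow is exactly what makes it easy to get sucked in.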
I completely buy Gomory's adage that you relate, for problems that have a clear answer. In the early 80s, a lot of the problems might not have had a single "correct" answer, but you could formulate a precise goal.
Some of the AI being attempted today doesn't have a clear answer: it is not as if there is one "best" response to a ChatGPT prompt, and there is a hope among some AI proponents that AI can even be "creative". Interestingly, introducing some randomness into the next-word predictions that the LLMs make is evidently important to get them to produce "interesting" outputs.
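That randomness knob is usually called "temperature". A small sketch of the idea, with made-up scores standing in for a model's next-word predictions: temperature 0 always picks the top-scoring word, while higher temperatures flatten the distribution and allow more surprising choices.

```python
import math
import random

def sample(logits, temperature):
    """Pick one word from raw scores; higher temperature = more randomness."""
    if temperature == 0:
        return max(logits, key=logits.get)   # greedy: always the top word
    # softmax with temperature, then sample proportionally to the weights
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    r = random.random() * sum(weights.values())
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word

# Hypothetical next-word scores, invented for illustration.
scores = {"dog": 2.0, "cat": 1.5, "banana": -1.0}
print(sample(scores, 0))     # always "dog"
print(sample(scores, 1.0))   # usually "dog" or "cat", occasionally "banana"
```

At temperature 0 the model is deterministic and tends toward safe, repetitive text; a moderate temperature is what lets the occasional unexpected word through, which is where much of the "interesting" (and creative-seeming) output comes from.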
Great talk, thank you!
Thanks for sharing your talk, Lee. Especially liked your comments on history and investment at the beginning.