Discussion about this post

Derrick:

I read another Substack from a prominent expert on the impact of AI. He shared an X post from a PhD researcher who was blown away by a conversation with an AI that allegedly came up with a novel way to advance a cancer treatment. To check this claim, I simply googled the parts of the approach that constituted the “novel idea,” and sure enough, the whole idea had been published in a paper months prior. So I came to the same conclusion as you: I don’t see any evidence that it can do anything novel. But its ability, in this example, to cut through all the recent research relevant to what someone is working on and catch them up without their having to sift through it all themselves is a pretty huge productivity boost.

James:

To quantify Kev's point, run a modified Turing test: a large sample of subjects each have conversations with two partners and, based on those conversations, grade how intelligent they consider each partner to be. Randomly substitute the latest AI for some of the partners in the study. How do you think they'd stack up? The more specialised the questions, the better the model's answers are compared with the average person's. Current models are better problem solvers than the vast majority of humans and ridiculously more knowledgeable. Mind Prison's claim that there has been no progress towards general intelligence is either shock-factor clickbait or naive. Probably the former.
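The blinded-grading setup described above can be sketched as a toy simulation. Everything here is hypothetical: the scoring model, the sample sizes, and the assumed rating distributions are placeholders, not results from any real study.

```python
import random
import statistics

def run_trial(rng, ai_fraction=0.5):
    """Return (partner_kind, score) for one graded conversation.

    The grader does not know whether the partner is human or AI;
    the score model below is a made-up placeholder distribution.
    """
    is_ai = rng.random() < ai_fraction
    # Hypothetical: assume graders rate both kinds similarly, with noise.
    # A real study would collect actual ratings instead.
    base = 7.0 if is_ai else 6.5
    score = min(10.0, max(0.0, rng.gauss(base, 1.5)))
    return ("ai" if is_ai else "human"), score

def run_study(n_graders=200, seed=42):
    """Each grader has two conversations; partners are randomly AI or human."""
    rng = random.Random(seed)
    scores = {"ai": [], "human": []}
    for _ in range(n_graders):
        for _ in range(2):
            kind, score = run_trial(rng)
            scores[kind].append(score)
    # Mean perceived-intelligence score per partner kind
    return {kind: statistics.mean(vals) for kind, vals in scores.items()}

print(run_study())
```

The point of the design is the blinding: graders rate conversations, not labels, so any systematic gap between the "ai" and "human" means would reflect perceived intelligence rather than prejudice about machines.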

WRT their developing anything new, they do that extremely well. They're better storytellers than I am, better artists than I am, and generally more creative than I am. They absolutely have novel thought. The point you're making, I think, is more about culturally significant invention. On that I agree with you: current AI doesn't do it yet. But I disagree that it isn't moving towards it, and I disagree that the reason is a lack of intelligence.

Things like the printing press and the airplane were not sudden inspirations that manifested in a revolution the next day. They were the result of a spark of thought followed by a long period of evolution: prototyping and experimentation develop an inventor's understanding, bringing them closer and closer to something eventually viable. LLMs can't do this, but let's explore why.

First, stating the obvious: they're language models with no body, so they lack any means of prototyping, and thought experiments only get you so far in the real world. Even as they evolve from language models into Human Intelligence Models (HIM), they need a physical counterpart to play the role you're expecting of them, not just intelligence.

Second, and probably the real crux of your point, they currently lack a viable long-term feedback loop. Context windows are getting bigger and bigger, but the foundational base models are not incrementally improved based on the output they themselves produce; they need an expensive and time-consuming training process to fundamentally improve. To match human intelligence, they need to be able to learn not just temporarily but permanently. This is, IMO, the biggest gap between where we are now and where we need to be for AI's creative thought to become culturally significant. But just as it took the Wright brothers a while to get flight figured out, it will take us a while to figure this out. That's how significant progress happens.

-j
