7 Comments
Derrick:

I read another Substack from a prominent expert on the impact of AI. He shared an X post from a PhD researcher who was blown away by a conversation with an AI that allegedly came up with a novel way to advance a cancer treatment. To check this claim, I simply googled the parts of the approach that constituted the "novel idea," and sure enough, the whole idea had been published in a paper months prior. So I came to the same conclusion as you: I don't see any evidence that it can do anything novel. But its ability, as in this example, to cut through all the recent research relevant to what someone is working on and catch them up without having to sift through it themselves is a pretty huge productivity boost.

Rene Bruentrup:

Great example!

James:

To quantify Kev's point, run a modified Turing test where a large sample of subjects each have conversations with two other subjects. Based on the conversation, they grade how intelligent they consider each partner to be. Randomly introduce the latest AI as some of the partners in the study. How do you think they'd stack up? The more specialised the questions, the better the model's answers are compared to the average person's. Current models are better problem solvers than the vast majority of humans and ridiculously more knowledgeable. Mind Prison saying there has been no progress towards general intelligence is either shock-factor clickbait or naive. Probably the former.
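The study James describes could be sketched as a toy simulation. Everything here is invented for illustration: the function names, the rating scale, and especially the Gaussian rating distributions are pure assumptions, not data from any real study.

```python
import random
import statistics

def simulate_study(n_judges, ai_fraction, rate_human, rate_ai, seed=42):
    """Toy version of the modified Turing test: each judge has two
    conversations and rates each partner's intelligence; partners are
    randomly either humans or the latest AI model."""
    rng = random.Random(seed)
    human_scores, ai_scores = [], []
    for _ in range(n_judges):
        for _ in range(2):  # two conversations per judge
            if rng.random() < ai_fraction:
                ai_scores.append(rate_ai(rng))
            else:
                human_scores.append(rate_human(rng))
    return statistics.mean(human_scores), statistics.mean(ai_scores)

# Hypothetical rating distributions on a 1-10 scale (assumptions only):
mean_human, mean_ai = simulate_study(
    n_judges=1000, ai_fraction=0.5,
    rate_human=lambda r: r.gauss(6.0, 1.5),
    rate_ai=lambda r: r.gauss(7.0, 1.0),
)
```

With enough judges, the comparison of the two mean ratings is what would settle the question; the point of the sketch is only the protocol, since the real rating distributions are exactly what is in dispute.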

As for developing anything new, they do that extremely well. They're better storytellers than I am, better artists than I am, and generally more creative than I am. They absolutely have novel thought. The point you're making, I think, is more about culturally significant invention. And there I agree with you: current AI doesn't do that yet. But I disagree that it's not moving towards it, and I disagree that the reason is a lack of intelligence.

Things like the printing press and airplanes were not sudden inspirations that manifested in a revolution the next day. They are the result of a spark of thought followed by a long period of evolution. Prototyping and experimentation develop an inventor's understanding, bringing them closer and closer to something eventually viable. LLMs can't do this, but let's explore why.

First, stating the obvious: they're language models with no bodies, so they lack a means of prototyping. Thought experiments get you only so far in the real world. Even as they evolve from language models into Human Intelligence Models (HIM), they need a physical counterpart to play the role you're expecting of them, not just intelligence.

Second, and probably the real crux of the point you're making, they currently lack a viable long-term feedback loop. Context windows are getting bigger and bigger, but the foundational base models are not incrementally improved based on the output they themselves produce. They need an expensive and time-consuming training process to fundamentally improve. To match human intelligence they need to be able to learn not just temporarily, but permanently. This is, IMO, the biggest gap between where we are now and where we need to be for AI's creative thought to become culturally significant. However, just as it took the Wright brothers a while to figure out flight, it will take us a while to figure this out. That's how significant progress happens.

-j

Rene Bruentrup:

Thank you for sharing your thoughts, James. I still maintain that this new generation of tools optimizes and accelerates our own intelligence rather than developing a new one. You assert they "absolutely have novel thought"; I'll believe it when I see it. The main point I want to make in this article, however, is that it doesn't really matter whether it's artificial intelligence or automated human intelligence. What matters to me is the realization that it's a powerful technology either way and will enable a lot of product innovation going forward.

James:

Sure, sure, it’s useful either way, but you’re not winning any novel thought awards for that. ;)

Ask GPT to tell you a bedtime story about two brothers going on an unlikely, unique adventure. While you can say everything in there is just a combination of variations on similar things someone else somewhere has written at some point, I'll bet you it's all new to you. Or draw a picture of yourself as a cucumber wearing a cowboy hat, then ask Midjourney to do the same. Who draws something less predictable?

I guess what it comes down to is: what's your eval for considering an idea new? All living things, including us, base their thoughts and actions on everything learned by themselves and their ancestors. That training data makes us who we are, and all our unique ideas are a product of that brain development. You can philosophize over where an idea actually comes from: soul, consciousness, understanding, or whatever you want to call it. But to use your phrase, does it really matter?

Kev:

How much I is in AI?

Prob more than in most humans.

Rene Bruentrup:

fair enough :D
