Discussion about this post

Will:

Holy Necro Post, Batman! But I wanted to drop a note that this is a fantastic article; it's more like a white paper.

I'm guessing Gary Marcus is an influential person; all I know of him is that he's a gigantic AI critic. He also seems to be doing it as much for the "I told you so" as for anything of real importance. I mean, in the tweet you show at the top of this article he's literally gloating "I won!"

Seems awfully contrarian for the sake of only his ... wallet, methinks. Anyway, great article with tons to digest, and I'm just happy to see someone who doesn't hold the ridiculously common but wholly uninformed opinion of LLMs as "glorified autocorrect" programs.

Sirsh:

Good article; it's clear you put a lot of work in. I like your bilateral brain reference — that's cool. Things like that, and e.g. predictive coding, are interesting to consider as alternative ways of doing things that are inspired by the brain but still aligned with machine learning practice.

It's probably not really worth asking the question, though: is it over? You could easily say yes or no as a matter of perspective. LLMs are already amazingly useful, and even if you just made them cheaper and faster, that would account for years of valuable research. And as you point out, people are doing — and should continue doing — some really cool research in the space (DL in general). Even in their expensive current state there is clearly still a place for LLMs, and there will be for years to come.

I think Gary Marcus, like others, seems to have certain stances wrapped up in his identity, so I take it with a pinch of salt. The frustrations are warranted, and often they are a useful counterbalance to the hype, right? As François Chollet said in one of your interviews (not an exact quote), LLMs have sucked the oxygen out of the room. I like thinking about the form of AGI that we "deserve" — deep learning and LLMs are not it, but they are a part of it.
