Discussion about this post

Sirsh:

Good article; it's clear you put a lot of work in. I like your bilateral brain reference; that's cool. Things like that, and predictive coding for example, are interesting to consider as alternative ways of doing things that are inspired by the brain but still aligned with machine learning practice.

It's probably not really worth asking the question, though: is it over? You could easily say yes or no as a matter of perspective. LLMs are already amazingly useful, and even if you just made them cheaper and faster, that would account for years of valuable research. And as you point out, people are doing, and should continue to do, some really cool research in the space (and in DL in general). Even in their expensive current state there is clearly still a place for LLMs, and there will be for years to come.

I think Gary Marcus, like others, seems to have certain stances wrapped up in his identity, so I take it with a pinch of salt. The frustrations are warranted, and they are often a useful counterbalance to the hype, right? As François Chollet said in one of your interviews (paraphrasing), LLMs have sucked the oxygen out of the room. I like thinking about the form of AGI that we “deserve”; deep learning and LLMs are not it, but they are a part of it.

bill coop:

So where do you stand on Turing's halting problem? And neurolinguistics shows that if natural languages mirrored the brain's symbolic language, then all natural languages would be very similar; so they are barking up the wrong tree.

