8 Comments
Nov 15 · Liked by Devansh

Good article; it's clear you put a lot of work in. I like your bilateral brain reference; that's cool. Things like that, and e.g. predictive coding, are interesting to consider as alternative ways of doing things that are inspired by the brain but still aligned with machine learning practice.

It's probably not really worth asking this question, though: is it over? You could easily say yes or no as a matter of perspective. LLMs are already amazingly useful, and even just making them cheaper and faster would account for years of valuable research. And, as you point out, people are doing (and should keep doing) some really cool research in the space (and in DL generally). Even in their expensive current state there is clearly still a place for LLMs, and there will be for years to come.

I think Gary Marcus, like others, seems to have certain stances wrapped up in his identity, so I take it with a pinch of salt. The frustrations are warranted, and often they're a useful counterbalance to the hype, right? As François Chollet said in one of your interviews (not an exact quote), LLMs have sucked the oxygen out of the room. I like thinking about the form of AGI that we "deserve", and deep learning and LLMs are not it. But a part of it.

author

Also: I've never interviewed Chollet. I think you're confusing me with Dwarkesh.


:) Yes, sorry about that. I realized it after I sent the message. Crossing my Ds or something silly like that.

author

Well said.

Nov 10 · Liked by Devansh

So where do you stand on Turing's halting problem? And neurolinguistics shows that if natural languages mirrored brain symbolic language, then all natural languages would be very similar; so they are barking up the wrong tree.

author

"So where do you stand on Turing's halting problem"- I didn't know there was somewhere to stand on this. Can you explain what you mean?

"neurolinguistics shows that if natural languages mirrored brain symbolic language, then all natural languages would be very similar; so they are barking up the wrong tree"- this is out of my area of expertise, so I'm not sure I have any comments. By brain symbolic language, do you mean something related to our brains, or the symbolic languages of Logic/Math?

Nov 11 · Liked by Devansh

Turing showed, using his universal machine, that some problems cannot be decided by any machine, but this has been misunderstood. Some people believe that even though a binary device cannot solve all maths problems (in fact there are more things it cannot do than it can), it can do enough to emulate a brain, and that this doesn't really matter. Hofstadter takes this view in his book Fluid Concepts and Creative Analogies, but he also debunks all attempts to create an intelligent machine with the present state of the art. Instead he takes things back to basics with his Copycat program, which implies that it could take hundreds of years, and will not be possible with von Neumann-type computers. Penrose, on the other hand, in The Emperor's New Mind, thinks it will never happen with a box of transistors, and makes a very powerful case.
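(The undecidability result mentioned above is Turing's diagonal argument, and it can be sketched in a few lines of Python. This is my own illustration, not something from the thread: hand any claimed halting oracle a program built to contradict it.)

```python
def diagonal_breaker(halts):
    """Given any claimed halting oracle `halts(f)` -- which is supposed
    to return True iff f() halts -- build a function on which the
    oracle must be wrong. A miniature of Turing's diagonal argument."""
    def g():
        if halts(g):
            # The oracle claims g halts, so loop forever.
            while True:
                pass
        # The oracle claims g loops, so halt immediately.
    return g

# An oracle that claims nothing halts is refuted by running g itself:
always_no = lambda f: False
g = diagonal_breaker(always_no)
g()  # returns immediately, so the oracle was wrong about g
```

Whatever the oracle answers, `g` does the opposite, so no total, always-correct `halts` can exist.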

Gates uses the term symbolic language, and I used it to differentiate between spoken language and what happens in a brain, but it would probably have been better to use the term pattern language, as nobody has a clue what is going on in a brain. And since biological systems use quantum effects, even that term is misleading, as the quantum world doesn't use patterns so far as we know.

Overriding all of this is the fact that Musk and Altman talk about a singularity, which has no basis in science and is just comic-book fiction.


Mm
