3 Comments

One takeaway from reading Ray Kurzweil's How to Create a Mind, which I've been meaning to reread, was the role of statistics in AI. If each successive result overwrites past results, I can imagine how biased conclusions might accumulate over generations. Applying a bilateral architecture to preserve the weights of prior knowledge seems like a good hedge.
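To make that concrete, here is a minimal toy sketch of one way to "preserve weights of prior knowledge": keep a frozen copy of previously learned parameters and penalize new training for drifting away from them, so new results nudge rather than overwrite old knowledge. This is my own illustration (a simple L2 "stay near the old weights" regularizer on a toy linear model), not anything taken from Kurzweil's book.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Prior knowledge": weights of a tiny linear model fit on old data (frozen copy).
w_prior = np.array([2.0, -1.0])

# New data that, on its own, would pull the weights somewhere else.
X_new = rng.normal(size=(100, 2))
y_new = X_new @ np.array([0.5, 0.5]) + rng.normal(scale=0.1, size=100)

w = w_prior.copy()
lr, lam = 0.05, 1.0   # lam controls how strongly the prior weights are preserved

for _ in range(200):
    grad_fit = 2 * X_new.T @ (X_new @ w - y_new) / len(y_new)  # pull toward new data
    grad_keep = 2 * lam * (w - w_prior)                        # pull back toward prior
    w -= lr * (grad_fit + grad_keep)

print("prior:", w_prior, "updated:", np.round(w, 3))
```

With lam = 0 the new data completely overwrites the old weights; with a larger lam the update is a compromise between old and new knowledge.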

Author: That's a great point.


Fifty years ago, Feynman said, "We don't even know how dogs' minds work."

Feynman wrote the book on computing and AI, and he was a Nobel Prize-winning physicist.

Today we still don't know how a dog's brain works, yet we try to build an artificial human brain.

Sure, we have robot dogs that walk, but they don't play, and they don't have individual personalities, the capacity for love, or loyalty.

The current AI (LLMs, or LSTMs with attention) has set us back 100 years on the road to AGI.

Certainly real AI will involve both electronics and biology, but we should get the dog brain down 100% before we even start to think we have the human brain working.

...

The problem is that we have dumbed down "intelligence." People are wowed by ChatGPT now, but ELIZA was doing that in the mid-1960s. Random text generation, organized into a form acceptable to English-speaking humans, is NOT AI; it's just a black box that generates text that looks like a human might have written it. Even ChatGPT is 50%+ bullshit on a good day, because while the grammar may be correct, the facts are artificially created.
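For anyone who hasn't seen how little machinery a convincing-looking chat response can take, here is a toy sketch in the spirit of ELIZA's keyword matching: a few regex rules plus canned reflections. The rules and names below are made up for illustration; this is not Weizenbaum's actual DOCTOR script.

```python
import random
import re

# Toy ELIZA-style responder: match a keyword pattern, echo part of the input back.
RULES = [
    (r"\bI need (.+)",  ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bI am (.+)",    ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that the real reason?", "What else does {0} explain?"]),
]
FALLBACK = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(text: str) -> str:
    for pattern, replies in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(replies).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACK)

print(respond("I am worried about AI."))  # e.g. "Why do you say you are worried about AI?"
print(respond("I need a vacation."))      # e.g. "Why do you need a vacation?"
```

No understanding anywhere, just pattern matching and reflection, yet people in the 1960s attributed intelligence to it.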

The real reason the mainstream media is all over AI today is that it's "woke": it pushes the narrative that the powers that be want the entire world to adopt. But that is not AI.
