When it comes to AI, and humans, there's a quote from Judea Pearl that always comes back to my mind, which basically says that faking intelligence is enough (i.e. you don't really need to be intelligent). This world is about appearances, half-truths, and sometimes fanatics defending totally bogus ideas. You're a breath of fresh air.
Alluded to The Society of the Spectacle in an earlier article. Very similar to this.
I had never heard of Guy Debord, but he was spot on.
This is like people chasing the hardware specifications of a new smartphone/PC, who are proud of the raw numbers without considering how exactly the tool will improve their productivity.
Great analogy. I used to be super into smartphones and specs until I realized my needs were super basic
Good question raised here when you write: “At what point was it ever a good idea to implement more and more scaling, as opposed to focusing on better data selection, using ensembles/mixture of experts to reduce errors, or constraining the system to handle certain kinds of problems to avoid errors?”
I’m wondering if Wikipedia will play a bigger role in data selection, or in constraints. For example, I understand that some users are using the new Wikipedia plug-in for ChatGPT (GPT-4) to constrain/filter ChatGPT's output to the current Wikipedia dataset.
Building from high-quality data sources has always been the better approach imo
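A minimal sketch of the grounding idea discussed above, assuming the third-party `wikipedia` and (pre-1.0) `openai` Python packages; the function name, model name, and prompt wording are illustrative guesses, not the plug-in's actual API:

```python
# Rough sketch of constraining an LLM's output to Wikipedia content,
# in the spirit of the plug-in described in the comment above.
# NOTE: this is NOT the plug-in's actual implementation; the prompts
# and model name are assumptions for illustration.
import openai
import wikipedia

def wikipedia_grounded_answer(question: str, n_pages: int = 3) -> str:
    # Retrieve candidate article summaries to serve as the only context.
    excerpts = []
    for title in wikipedia.search(question, results=n_pages):
        try:
            excerpts.append(wikipedia.summary(title, sentences=5))
        except wikipedia.exceptions.WikipediaException:
            continue  # skip disambiguation pages and missing articles
    context = "\n\n".join(excerpts)

    # Ask the model to answer strictly from the retrieved excerpts,
    # which filters its output toward the current Wikipedia dataset.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the Wikipedia excerpts "
                        "provided. If they don't contain the answer, "
                        "say you don't know."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Grounding like this doesn't eliminate errors, but restricting the model to a curated corpus is one concrete form of the "constraining the system" that the quoted question asks about.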
I definitely do like roundups like these. Very interesting reading. Thanks for including me.
I think you got the wrong post, mate. But glad you like this style of content.
OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.
Not what I intended at all.
— Elon Musk (@elonmusk) February 17, 2023
https://www.zerohedge.com/technology/legal-musk-chides-openais-profit-pivot-after-his-50-million-investment
Now 5 years old, LLMs prove what we CS people always knew: garbage in, garbage out.
Even ELIZA, the 1960s chatbot that played a Rogerian therapist, was not woke, and it was a lot more interesting.
We have really gone downhill with "AI".
I can't wait for the 'garbage out' phase...