I agree with your findings and think your framework for understanding — generalization, abstraction, judgement, and refinement — is very useful. It does raise a question for me: are all the technical underpinnings for understanding in place yet? I’ll give a couple of examples for clarification. First, memory in humans plays a key role in contextual understanding, judgement, refinement (the learning-unlearning-relearning cycle), and the ability to generalize. Current memory frameworks, techniques, and “infrastructure” are underdeveloped relative to the speed of improvement in other areas of AI. Improvements in the “memory sphere” could lead to step improvements in AI “understanding.” Second, much of human understanding is crafted via social, collaborative, and collective interactions. As we are in the early stages of machine-to-machine “collaborative interaction” via agent frameworks like AutoGen, that dynamic hasn’t had the time or space to evolve, both technically and in the larger context of AI development. Curious what your thoughts are, as this is just my quick two cents on the matter.
It's very interesting that you bring this up. There is a lot of potential in replicating biological phenomena to train more efficient AI. I'd love to see more of it going forward.
I agree with all your findings. I did a deep dive into ChatGPT and others yesterday for a case study, and when I gave it very structured prompts with clear instructions and simple math (I had to break the complex ask down into many smaller ones), it did just OK: useful as a time-saver for me, but not great. When I asked it to recommend other insights or analysis, it completely lost track of the original ask and gave me very canned answers that were off topic.
You started this piece with "Ever since Deep Learning started outperforming human experts on Language tasks..." I'd disagree with this, and I think your essay demonstrates it: Deep Learning hasn't come close to outperforming experts, as you showed it barely performs on the baseline examples.
Deep Learning is not LLMs. There are multiple classification and unsupervised tasks where machines do much better than people (that's why the field exists). The examples shown here are meant to probe markers of intelligence, not capability in specific tasks.
It's over
What do you mean?