I'm honored to guest author this important and timely article for Artificial Intelligence Made Simple, a fantastic publication created by the brilliant Devansh, a respected expert and star author. Thank you, Devansh!
Thank you for the fantastic contribution to our little cult!
Thanks for the excellent probing of o1 in this regard. It reinforces the concern that we will place AI into positions of responsibility long before it is ready. The usual counter is that humans will monitor the output. However, what if that output is significantly deceptive about its own accuracy?
The much stronger ability of o1 to rationalize an outcome will also come with other disturbing consequences. As has been shown with weaker AI models, they can be used effectively to alter individuals' beliefs with long-lasting effect.
A study I covered here: https://www.mindprison.cc/p/ai-instructed-brainwashing-effectively
Thanks for sharing.
I had one question.
Can these models be useful in at least reducing the search space, if we let them decode at various temperatures and other sampling settings?
Even the act of going through the reasoning can unblock someone who is diagnosing a problem.
Is this reasonably feasible?
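For concreteness, here is a minimal sketch of one way that idea could look: sampling the same diagnostic prompt at several temperatures to collect a diverse set of candidate hypotheses, then deduplicating them. This is not anything proposed in the article or thread, just an illustration; it assumes the OpenAI Python SDK with an API key in the environment, and the model name, prompt, and crude string-level dedup are all placeholders.

```python
# Sketch: sample one diagnostic prompt at several temperatures to widen,
# then narrow, the hypothesis space. All names below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder prompt; a real use case would include logs, symptoms, etc.
PROMPT = "List the three most likely root causes for intermittent 502 errors behind a load balancer."

def sample_candidates(prompt: str, temperatures=(0.2, 0.7, 1.0), n_per_temp: int = 2) -> list[str]:
    """Generate candidate diagnoses at multiple temperatures and deduplicate."""
    candidates = []
    for temp in temperatures:
        for _ in range(n_per_temp):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model choice
                messages=[{"role": "user", "content": prompt}],
                temperature=temp,
            )
            candidates.append(resp.choices[0].message.content.strip())
    # Crude exact-match dedup; a real pipeline might cluster semantically
    # or have a second pass rank/merge the surviving candidates.
    return list(dict.fromkeys(candidates))

if __name__ == "__main__":
    for i, candidate in enumerate(sample_candidates(PROMPT), start=1):
        print(f"--- candidate {i} ---\n{candidate}\n")
```

Even if none of the candidates is exactly right, reading through the model's differing lines of reasoning can be what unblocks the person doing the diagnosis, which seems to be the spirit of the question.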