Thanks for your comment. Your example isn't a disagreement with mine, since it speaks to the need for nuanced evaluations and inclusive design (both part of the good principles I talk about).
Notice that my argument is that the moral alignment of LLMs is useless, not that we should not have morally aligned systems. If that 11% is one demographic and you still push out AI that alienates them, then it's an issue with your evaluation and safety procedures (i.e., it's bad design). Fixing it requires changing your inputs, adding more representation... all components of good system design. That was exactly my point: any problem that moral alignment claims to solve is better solved with better design.
"Conversely, we need to recognize that there is a large group of users with unethical and immoral beliefs, and AGI companies cannot "compromise" or " accommodate" these types of users, and whether the end result is to alienate them or steer them toward a more reasoned view, they should be OK with that."- The so-called AGI companies build platforms. Safety and security should be left to developers. If these AGI companies start meddling here- you only create more errors and lesss powerful solutions. You can ofcourse build some checks in place, but my other argument was that any real harms caused by AGI were caused only b/c we had deeper problems that we needed to fix (AI Misnformation is a problem b/c we don't have the critical thinking to examine inputs). To me those are the issues worth dedicating putting our attention to.
I have mentioned this excellent write-up on moral alignment in my newsletter 'AI For Real' this week. Here's the link: https://aiforreal.substack.com/p/are-you-ai-positive-or-ai-negative
Thanks for sharing