Highest Rated Comments


metaetataa · 4 karma

Do you feel like "guard rails" or content moderation for AI are necessary, or even useful? Asimov's Three Laws were famously invented to show how very basic logic can be upended and have very bad unintended consequences. How do we avoid compromising these tools while still making sure they are broadly safe for use and consumption?

Anecdotally, it seems like since the mid-December update, ChatGPT has become a lot less helpful and, more importantly, less capable at some tasks it used to excel at, particularly coding. It also seems like people are becoming engaged in a sort of prompt engineering "arms race" to get around content moderation and still get the responses they seek.

metaetataa · 1 karma

[deleted]

metaetataa · -1 karma

Hi, thanks for the reply. However, I was really asking what considerations are being discussed around how to provide content moderation ethically, given the limitations it may impose on the technology's capabilities.

It really seems to be a missing part of the wider dialog: the question gets handwaved away under histrionics about racists, when there should be meaningful discussion of how moderation impacts the tool's usefulness.

Anyway, thanks for the AMA.