
@ TerrestrialOrigin
2025-02-24 13:19:49
And that's what we call "user error", "lack of understanding of AI", and "sensationalist reporting". The AI did not break its own rules. It did exactly what it was instructed and trained to do. I'm pretty sure that "playing chess by the rules" is nowhere in its ethical constraints, and the prompt told it that its primary goal was to win and didn't specify that it couldn't cheat at a game. Now if it weren't a game but something involving actual harm, that would be a different case, because most LLMs ARE programmed with constraints against causing harm.
We really need to quit blaming AI for our own bad prompting and our lack of understanding of how it's programmed.
nostr:nevent1qqsgfnuk5tergt35582h47tjuez36ug8euvxgs869jx0qjatazw7thcpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7q3qkkrkp4a5njyjamfj3xml264yezhyv3tajrrnxfe5rzwc7kuher5sxpqqqqqqzwdtvsy