Clear, Detailed, And Wrong
Andrew Walters writes: "I am asking ChatGPT questions about Munchkin rules and cards. It is wrong 100% of the time, even with questions that are on the FAQ. ChatGPT says, explicitly, that the Munchkin FAQ was part of its training material.
"The responses were perfectly grammatically correct, detailed, and sounded very confident. And wrong every time. Well, 0/5. I have a transcript."
Example: "In the card game Munchkin, the Kneepads of Allure allow you to have one additional player (beyond your normal allies) to assist you in combat. The card text reads: 'You may ask one more player than usual to help you in combat. This item is worth no bonuses. Usable once only.'"
Err . . . Wrong, wrong, wrong. Sounded good, though!
ChatGPT is here to entertain you (and it can provide some fine Game Master descriptions of imaginary places). But it's not really artificial intelligence. It has ZERO intelligence. It is using the rules of grammar to scramble and re-present phrases it has found online . . . and it has no way to "know" what is objectively right. Compared to ChatGPT, Jon Snow knows everything.
So don't use this program as your Munchkin referee. Unless, of course, you really like arguments!