Text created by ChatGPT is good. Really good. Good enough to write a high school essay, or for that matter a college one.
(But not a LowEndBox blog post, in case you’re wondering).
There are services that will analyze AI-generated text and score it. But if you've worked with one of these tools long enough, you start to see tell-tale signs that let you spot AI-generated text without needing to run it through a tool.
Em Dashes: An em dash looks like a hyphen but is longer.
- Hyphen: -
- Em dash: —
The frequent use of em dashes often indicates that the text was generated by AI. Really, any use of the em dash should be suspect unless it's being published by a professional news or content site. If you see one in a forum post, forget it. Your average Internet user does not know how to create an em dash on their keyboard. I don't. Granted, an em dash is often the correct punctuation, and sometimes the software you're writing with will autocorrect a hyphen to an em dash, but…a lot of em dashes usually means AI. It's a case of AI being outed for being more correct and proper than a human typically would be.
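As a toy illustration of this tell (not how the scoring services mentioned above actually work), you could count em dashes per hundred words and eyeball the rate. The threshold and the metric itself are made up for this sketch:

```python
# Toy heuristic: measure how often em dashes (U+2014) appear relative
# to word count. A high rate is one of the "tells" described above.
# This is an illustrative sketch, not a real AI detector.

def em_dash_rate(text: str) -> float:
    """Return em dashes per 100 words."""
    words = text.split()
    if not words:
        return 0.0
    return text.count("\u2014") / len(words) * 100

sample = ("It no longer seems like a question of profitability "
          "\u2014 it looks more like a path toward collapse.")
print(f"{em_dash_rate(sample):.2f} em dashes per 100 words")
```

A real detector would look at many signals at once; this just shows how mechanical one tell can be.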
Emojis: Delimiting sections with emojis is something ChatGPT seems to love.
🤔 Not sure why it does that.
🤖 Real humans don’t talk like that. Even teens.
🔎 If you see that kind of thing, it’s probably AI-generated.
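The emoji-delimited-sections pattern above is easy to check for mechanically. Here's a toy sketch that counts lines starting with an emoji; the code point range is a rough illustrative slice of the emoji blocks, not a complete emoji database:

```python
# Toy check: count lines that start with an emoji, the
# "emoji-delimited sections" pattern described above.
# The range 0x1F300-0x1FAFF is an illustrative subset of Unicode's
# emoji blocks, not an exhaustive test.

def emoji_led_lines(text: str) -> int:
    count = 0
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and 0x1F300 <= ord(stripped[0]) <= 0x1FAFF:
            count += 1
    return count

post = "\U0001F916 Robots!\nA normal sentence.\n\U0001F50E Another section."
print(emoji_led_lines(post))  # prints 2
```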
Ethical Considerations: AI creators are terrified their bots are going to do something unethical, so they often make sure there are strong ethical statements in their output, often at the end. It's like cyborg virtue-signaling. Ask a question about how to sell your used car and you'll get some practicalities sandwiched between a sermon about being honest and transparent. It's like ChatGPT has a reputation as a seducer of youth or something and needs to constantly prove to the world that it's pure of heart.
Lengthy Introductions: OMG, just shut up already. Ask ChatGPT how to do something and it often starts by babbling about how excited it is to help you and why it's important that this knowledge exists and how we're going to have an outline followed by some tips and then finally…gaaaaah! Just stop talking.
Inappropriate, misplaced emotion coupled with stock phrases: Here’s a ChatGPT sentence someone posted on LowEndTalk: “Given the current pricing and specs, it no longer seems like a question of profitability — it looks more like a path toward collapse.” Besides the em dash, the way it’s phrased is very ChatGPT. The language sounds dire, but also “stock”. If you’ve ever talked with someone who has Alzheimer’s, they often repeat stock phrases and words. ChatGPT is inevitably the same, because it’s vacuuming up millions of conversations, and the definition of “stock phrase” is that it’s used a lot. So ChatGPT often uses cliched expressions. It also has a tendency to marry these with dramatic emotions for some reason.
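Stock phrases are another tell you can sketch in code. The phrase list below is invented for illustration (a real detector would be trained on data, not hardcoded), but it shows the idea of matching clichéd constructions like the one quoted above:

```python
# Toy sketch: flag ChatGPT-ish stock phrasing by substring match.
# The phrase list is invented for illustration only; it is not a
# real or validated set of AI clichés.

STOCK_PHRASES = [
    "it's important to note",
    "in today's fast-paced world",
    "no longer seems like a question of",
]

def stock_phrase_hits(text: str) -> list[str]:
    """Return the stock phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [p for p in STOCK_PHRASES if p in lowered]

sentence = ("Given the current pricing and specs, it no longer seems "
            "like a question of profitability.")
print(stock_phrase_hits(sentence))  # prints ['no longer seems like a question of']
```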
What are some generative AI "tells" that clue you in to the fact that what you're reading wasn't written by a human? Let us know in the comments below.