
Topics: ChatGPT, Artificial intelligence, Reddit, Technology, Social Media, Life, Life Hacks

A ChatGPT user has revealed some of their top red flags that show something was written by AI, and it's hard to unsee them.
With the rapid advancement of artificial intelligence, it's becoming harder and harder to tell whether something was written by a human or generated by a computer.
Whether it's AI influencers popping up as you scroll through Instagram and TikTok or being unsure whether a social media post was actually penned by a human or a robot, there's no doubt that AI content has well and truly woven its way into our society.
Some people are even going to ChatGPT for dating advice and health diagnoses, which has prompted controversy.
While some love and rely on their trusty AI assistant, others have sworn off the technology altogether and want to be sure they're not using it or reading anything it generates.
Over on Reddit, in the subreddit r/ChatGPT, one user has helpfully rounded up nine details that give away something was written by AI - not including the em dash, which has unfortunately shifted from being just punctuation to a marker for ChatGPT.

According to the Reddit user, the first giveaway on the list is “And honestly?” being used as a sentence starter.
They added that it's usually 'followed by something that isn’t really that crazy honest' so it's more of a stock phrase if anything.
So, maybe avoid using this wording if you don't want to be mistaken for a robot...
Secondly, "you're not imagining it" appearing in an essay or text message could be a key red flag that it wasn't written by a human.
The Reddit user also listed, 'you’re not alone,' 'you’re not broken,' and 'you’re not weak' as other common examples, something they called 'therapist mode talk'.
This isn't too surprising as it comes at a time when people are using ChatGPT as their therapist, which is a controversial and quite scary concept in itself.

Remaining on the 'therapy talk' route, another question that's apparently commonly asked by AI is "Do you want to sit with that for a while?"
The Redditor added that another example is "Are you ready to go deeper?", which often comes 'as if you just confessed something life changing,' even if you didn't.
Other red flag sayings, again, come at the beginning of sentences and are apparently a clear ChatGPT indicator.
"Here's the kicker," "And the best part?" and "And here’s the part most people miss" are all on the list.
AI really loves an introduction before delving into the point.
The Redditor added that another popular one is: "I'm going to state this as clearly as possible".
They explained that it's usually followed by 'compulsive signposting paired with 600 words that could have been two sentences'.
Similarly, "Here's the breakdown" often comes before a large chunk of text.
It's used by artificial intelligence to break down complex information into manageable, organised points, making the facts more readable.
Interestingly, it seems that ChatGPT loves the word 'quiet'.
Whether it's "quiet truth", "quiet confidence", "quietly growing" or "quiet rebellion", the Redditor explained that AI loves a good adjective.
"It can't just simply say the thing," they added.

'Forced reassurance after pushback' is also on the list.
That is, when ChatGPT tells you something that's not quite right, and you tell it that it's wrong.
It seems that often, when its logic is being corrected, instead of doubling down, it will reassure and tell you that you're valid for questioning it.
Finally, AI can sometimes get a bit too big for its boots and attempt to write a metaphor.
However, as you'd expect, they don't quite make sense in the way a metaphor written by a human would.
The Reddit user explained that a red flag is 'odd comparisons that sound smart but feel slightly off, like the writer doesn't fully understand the thing they're describing'.