As we all come to terms with a world in which artificial intelligence is mainstream and no longer a fantasy confined to the movies, the need to know which parts of our day-to-day lives and experiences AI has influenced has never been greater.
OpenAI’s ChatGPT was the first mainstream example of how useful AI can be for a variety of people, businesses and industries, but that doesn’t mean it’s perfect, accurate or the ultimate assistant to our everyday needs.
ChatGPT comes with its fair share of controversy thanks to it occasionally providing incorrect, inaccurate or outright false responses to questions, landing people in hot water as a result. It’s even got its creator, OpenAI, in trouble.
In 2023, it falsely claimed an American law professor had made inappropriate comments and sexual advances towards a student while on a school trip, and even cited The Washington Post as the source of the information. There was no school trip and there was no article in The Washington Post. Put simply, it made the whole thing up.
That’s just one of several very alarming flaws in the chatbot’s very impressive capabilities. On the whole, it can be a very useful source of information, but what it says cannot and should not always be taken as fact.
But as more and more people use ChatGPT to complete everyday tasks online or as part of their work, it’s quite possible that every time you go online, you’re consuming at least one piece of information that was generated or written by AI, and likely ChatGPT itself.
The internet, as a result, is fast becoming a minefield where the lines between human and AI written text are blurred. There may be a time in the future when that won’t be an issue – and ChatGPT-5 could be around the corner – but, for now, there remains a need for people to be vigilant and know when something is written by ChatGPT.
What are the signs that something is written by ChatGPT?
Because it’s down to humans to prompt ChatGPT, the level of detail in the chatbot’s response is often decided by the level of detail in the prompt it’s given. If the prompt lacks instruction, especially on more complex subjects, the response is more likely to be vague or inaccurate.
To the untrained eye, such as someone with no particular background knowledge of the subject, that may not be obvious. But to others, it might be glaringly obvious that the text was written by ChatGPT.
These are the key things to look out for…
General use of language and repetition
ChatGPT is what’s known as ‘narrow’ AI: it cannot understand or replicate human emotion or behavior, and it cannot think for itself. As a result, its responses can often be devoid of personality or creative language.
Moreover, even though it does make mistakes, it’s trained to make as few as possible, and responding in a simple, somewhat robotic tone helps it minimize errors and inaccuracies.
This is most obvious if you were, for example, to ask it to write a review of your favorite film or product. It might leave out key information about what it’s reviewing, such as actors’ names or the product’s dimensions.
Therefore, if you’re reading something like this on a review site and it appears to omit what would normally be considered key information, there is a chance it was written by ChatGPT.
The same applies to the repetition of words or phrases. ChatGPT is trained on vast amounts of data and language but it might still use repetitive language, especially in larger blocks of text.
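As a rough illustration, repeated phrasing is something you can even flag automatically. The sketch below is a minimal, hypothetical heuristic, not the method any real AI detector uses: it scores a passage by the fraction of three-word phrases that appear more than once, so higher scores suggest more repetitive writing. The function name and threshold are our own inventions for illustration.

```python
from collections import Counter
import re

def repetition_score(text: str, n: int = 3) -> float:
    """Return the fraction of n-word phrases that occur more than once.

    A crude repetitiveness heuristic: 0.0 means every n-gram is unique,
    while higher values indicate recycled phrasing.
    """
    words = re.findall(r"[a-z']+", text.lower())
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# A deliberately repetitive review-style passage scores noticeably higher
# than one written with varied wording.
repetitive = ("The product is a great product. The product is easy to use "
              "and the product is a great value for anyone.")
varied = "Each sentence here uses fresh wording throughout the entire passage."
print(repetition_score(repetitive), repetition_score(varied))
```

This is obviously far simpler than a real detector, but it captures the idea: repeated stock phrases in a large block of text are a measurable signal, not just a gut feeling.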
Hallucinations
The aforementioned case involving a law professor is the perfect example of an AI hallucination, where it completely makes something up within its response. It remains one of the biggest issues with generative chatbots like ChatGPT.
AI experts recommend fact-checking ChatGPT responses, particularly when involving more niche information and subjects. There are many more cases of hallucinations, some more serious than others, so it’s always advisable to triple-check its responses with other sources of information before trusting its accuracy.
If you’re reading about a subject you know well, this will be much easier to spot. For example, in a match report on a recent soccer game you watched, you’ll likely find it easy to identify any factual inaccuracies. But if you’re researching, say, the thermic effect of food, it may be harder to tell whether the text is human- or AI-generated.
Copy-and-Paste Errors
This is arguably the easiest one to spot. People have been known to accidentally copy and paste ChatGPT’s response and its side comments like ‘Sure, here’s a movie review for….’
That’s a surefire way of knowing whether something you’re reading was written by ChatGPT or a human, based purely on human error rather than an error by the AI.
Read the text thoroughly
ChatGPT is trained to sound and respond like a human, so it will be near impossible to tell if something was written by AI if you only read one or two sentences. It will be clearer if you read the entire article thoroughly, which may highlight particular hallucinations, repetition, general language or copy-and-paste errors.
Of course, humans can edit ChatGPT’s responses to remove much of the above and make the text more human-like, but often that will involve a level of editing that would likely mean it would be easier to write the text themselves.
How to detect ChatGPT content
The rise of ChatGPT and similar AI chatbots has led to an array of AI content detectors popping up, which all claim to be able to detect when a piece of text is human or AI-written.
We’ve already looked at the best AI content detectors, which can even indicate exactly which parts of a text are human and which are AI. Some will give you a percentage estimate of human vs. AI writing in the text they’re given to analyze.
However, none of these AI content detectors are 100% perfect and there will always be occasions where it incorrectly detects human writing as AI, and vice versa.
Still, using an AI detector is a good way of seeing whether there are any signs of AI in what you’re reading, especially if you’ve still got suspicions after checking for repetition, hallucinations and general language.
But the key takeaway, especially when referring to niche subjects you’re not completely familiar with, is to always fact-check what ChatGPT tells you. Better to be safe than sorry.
The post “How to tell if something is written by ChatGPT” by James Jones was published on 04/07/2024 by readwrite.com