
AI-Generated Academic Science Writing Can Be Detected With Over 99% Accuracy

The introduction of ChatGPT, the artificial intelligence chatbot that can produce human-like text and conversation, has created a sensation. Yet a study published on June 7, 2023, in the journal Cell Reports Physical Science found a number of telltale signs that can help distinguish AI chatbots from human writers. Based on these signs, the researchers built a method that detects AI-generated academic science papers with over 99% accuracy.

“We tried hard to create an accessible method so that with little guidance, even high school students could build an AI detector for different types of writing,” says first author Heather Desaire, a professor at the University of Kansas. “There is a need to address AI writing, and people don’t need a computer science degree to contribute to this field.”

“Right now, there are some pretty glaring problems with AI writing,” says Desaire. “One of the biggest problems is that it assembles text from many sources and there isn’t any kind of accuracy check; it’s kind of like the game Two Truths and a Lie.”

Although many online AI text detectors work reasonably well, they weren’t built expressly for academic writing. To close that gap, the team set out to create a tool tuned specifically to this need. They focused on a type of article called a perspective, which offers an overview of a specific research topic written by a scientist.

To train the model, the team selected 64 human-written perspectives and generated 128 ChatGPT articles on the same research topics. Comparing the two sets, they found that predictability is a telltale indicator of AI writing.

Humans write with more complex paragraph structure than AI, varying their sentence lengths, overall word counts, and number of sentences per paragraph. Preferences in vocabulary and punctuation are also revealing.

For example, scientists gravitate toward words like “however,” “but,” and “although,” while ChatGPT leans on “others” and “researchers.” In all, the team settled on 20 characteristics for the model to look for.
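To make the idea concrete, here is a minimal Python sketch of this kind of stylometric detector. It computes a handful of the features described above (sentences per paragraph, sentence-length variability, word count, marker-word frequencies) and fits an off-the-shelf classifier. The exact feature set, the word lists, and the choice of scikit-learn’s LogisticRegression are illustrative assumptions, not the study’s published pipeline.

```python
# Illustrative sketch of a stylometric AI-text detector;
# not the study's actual feature set or model.
import re
import statistics

from sklearn.linear_model import LogisticRegression

HUMAN_MARKERS = {"however", "but", "although"}  # words the study ties to scientists
AI_MARKERS = {"others", "researchers"}          # words the study ties to ChatGPT

def paragraph_features(paragraph: str) -> list[float]:
    """Map one paragraph to a small numeric feature vector."""
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    lengths = [len(s.split()) for s in sentences] or [0]
    words = [w.strip(".,;:()\"'").lower() for w in paragraph.split()]
    total = max(len(words), 1)
    return [
        len(sentences),                # sentences per paragraph
        statistics.pstdev(lengths),    # sentence-length variability
        float(total),                  # overall word count
        sum(w in HUMAN_MARKERS for w in words) / total,
        sum(w in AI_MARKERS for w in words) / total,
        paragraph.count("(") + paragraph.count(")"),  # punctuation habit
    ]

# Toy labeled data: 1 = human-written, 0 = ChatGPT-generated.
train_paragraphs = [
    "Although the results differ, ...",  # stand-in human text
    "Researchers have shown that ...",   # stand-in AI text
]
train_labels = [1, 0]
clf = LogisticRegression().fit(
    [paragraph_features(p) for p in train_paragraphs], train_labels
)
```

In practice the training set would contain many labeled paragraphs, as in the team’s corpus of 64 perspectives and 128 ChatGPT articles.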

When put to the test, the model distinguished human-written full perspective articles from ChatGPT-generated ones with 100% accuracy. For identifying individual paragraphs within an article, its accuracy was 92%.
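One plausible way to reconcile those two numbers (an assumption for illustration, not the paper’s stated procedure) is that per-paragraph predictions are aggregated over a whole article, so occasional paragraph-level mistakes are averaged away. Continuing the sketch above, using the `clf` and `paragraph_features` defined there:

```python
def classify_article(article: str) -> int:
    """Score every paragraph, then average; returns 1 = human, 0 = AI.

    Averaging per-paragraph probabilities smooths out individual
    misclassifications, which is one reason whole-article accuracy
    can exceed paragraph-level accuracy.
    """
    paragraphs = [p for p in article.split("\n\n") if p.strip()]
    probs = clf.predict_proba(
        [paragraph_features(p) for p in paragraphs]
    )[:, 1]
    return int(probs.mean() >= 0.5)
```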

On comparable tests, the team’s model also significantly outperformed a commercially available AI text detector.

Next, the team plans to determine how broadly the model applies, testing it on more types of academic science writing and larger datasets. The researchers also want to see whether their approach holds up as AI chatbots evolve and grow more sophisticated.

“The first thing people want to know when they hear about the research is ‘Can I use this to tell if my students actually wrote their paper?’” says Desaire.

Desaire says that although the model is very good at distinguishing scientists from AI, it was not designed to catch student papers written by AI. She notes, however, that people can easily replicate the team’s methods to build models for their own purposes.
