Arab Canada News
Published: February 4, 2024
Over the past year, researchers have found clear evidence that AI-powered text generation tools were used in a number of scientific papers.
However, when it comes to the illicit use of AI-powered chatbots in education, we may have seen only the tip of the iceberg.
Some authors have been caught publishing work containing seemingly legitimate scientific references that were entirely invented by AI. Others have left snippets of text from the ChatGPT user interface in their manuscripts.
One mathematics paper was retracted after it was found to have inadvertently included a sentence from an AI chatbot's interface, according to a news article in the journal Nature in September 2023.
The use of AI-powered text generation tools poses the biggest dilemma when it is not disclosed transparently, in other words, when the work is passed off as the author's own.
Universities have been debating how to handle AI-generated text since ChatGPT emerged at the end of 2022.
Despite the global hype surrounding AI's ability to summarize information and produce text at a near-human level of language, experts say that programs such as OpenAI's ChatGPT and Google's Bard are still far from human intelligence.
Many teachers place their hopes on programs that claim they can detect AI-generated texts. Does this mean the end of cheating?
One expert says, "Not at all."
Debora Weber-Wulff of the HTW Berlin University of Applied Sciences says, "The hope for a simple software solution that can detect AI-generated text will not become a reality."
She added, "There are many programs that claim detection capabilities, but they do not do what they are supposed to do." Some of the companies behind these programs, she noted, have already admitted to their shortcomings.
Weber-Wulff took part in a study that tested 14 programs for detecting AI-generated text. According to the study, these programs did not deliver reliable results on the question of whether a text was written by a human or a machine.
The research team published its findings at the end of December last year in the International Journal for Educational Integrity.
Weber-Wulff explained that "in ambiguous cases, the systems tend to assume the text was written by a human." She added, "Of course, this is because people do not want to be falsely accused. That would be a disaster for the educational system."
The study pointed out, however, that the main problem is that roughly one in five AI-generated texts went undetected, and that the detection failure rate rises further when a human has edited the AI-generated text.
The tools' results are also hard for lay users to interpret: some report only a percentage indicating how likely it is that a text was generated by AI.
That is an indication, not solid proof, which means universities could find it difficult to establish misconduct on that basis. Weber-Wulff says, "Unlike plagiarism, it is impossible to compare the text with an original."
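To see why a percentage score makes for weak evidence, consider the following minimal Python sketch. It is purely illustrative: the detector function is a hypothetical stand-in for any commercial detection tool, its fixed score is invented, and the 80% threshold is an arbitrary policy choice, not something prescribed by the study.

```python
# Illustrative sketch only: "detector" is a hypothetical stand-in for a
# commercial AI-text detector; real tools differ in interface and accuracy.

def detector(text: str) -> float:
    """Hypothetical detector returning a claimed probability (0.0 to 1.0)
    that the text was AI-generated. Stubbed here with a fixed value."""
    return 0.62  # an ambiguous mid-range score

def verdict(text: str, threshold: float = 0.8) -> str:
    score = detector(text)
    # A score is not proof: unlike plagiarism, there is no original text
    # to compare against, so any threshold is an arbitrary policy choice.
    if score >= threshold:
        return f"flagged as likely AI-generated ({score:.0%})"
    return f"inconclusive ({score:.0%}); treated as human-written"

print(verdict("Sample student essay ..."))
# -> inconclusive (62%); treated as human-written
```

A 62% score flagged by one tool and cleared by another proves nothing either way, which is exactly the evidentiary gap the study describes.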
Weber-Wulff says she has encountered cases in which lecturers suspected AI use and students admitted to it. She also pointed out that chatbots are widely used, often without students realizing they are doing anything wrong.
Weber-Wulff explains, "We need to think carefully about how we assess performance," which may mean that future assignments will have to be designed very differently than in the past.
She said students should be encouraged to identify errors in AI tool responses as part of the task.
In the end, Weber-Wulff notes, all AI-powered chatbots are nothing more than parrots: they only repeat what they have heard. That makes it all the more important to teach students the standards of academic writing, such as the purpose of footnotes.
If AI systems are used, their use must be handled transparently. Weber-Wulff explained, "Full responsibility must be taken for the garbage these systems produce. There are no excuses."