“It can be difficult for a professor to tell the difference between a paper written by a student and one generated by ChatGPT, especially if the student has made minimal changes to the text generated by the model.” — ChatGPT answering the question, “Would a professor be able to tell the difference between a paper I wrote and a paper written by ChatGPT?”
The way Nick Bennett stumbled into the world of AI is a snapshot of modern teenagehood — an addiction to TikTok and a lack of interest in Shakespeare. (Nick Bennett is a pseudonym; Global News is not publishing the student's real name so he can speak candidly.)
It was 3 a.m. on the last night of his holiday break, with an essay on nihilism due in the morning. On the hunt for quotes to support his thesis, the 15-year-old remembered ChatGPT — an AI chatbot he had seen all over his TikTok For You Page. All he had to do was log in and give the bot a prompt.
Testing his luck and tight on time, Bennett typed into the blank chatbox: “Show me examples of Macbeth becoming nihilistic.” Within seconds, the bot had written an introductory sentence and provided three quotes with an analysis for each.
“I was so shocked,” Bennett said. “I would’ve had to read through so many useless articles where they didn’t get to the point, but ChatGPT did it for me extremely fast. It saved so many hours of research.”
Bennett decided to use the quotes but write his own analysis, worried that he would get caught for plagiarism. However, when another assignment popped up the following week, he copied the AI's answer word for word and submitted it. His teacher never caught on.
The future is here, and it can do your homework.
ChatGPT wasn’t invented solely for young people like Bennett, but its accessibility coupled with its newness (the software has only been available for two months) leaves it ripe for exploitation by students. It’s a free-to-use chatbot that can answer questions at lightspeed, with each response being totally original. It has many applications, like writing code, suggesting recipes, and improving the quality of life for seniors and those living with disabilities.
However, its ability to write in grammatically correct full sentences and price tag (free) make it a powerful weapon in the hands of kids with two hours until a midnight deadline. Intentional or not, this nascent AI software is altering decades-old educational structures and changing the way we think about learning.
“This will fundamentally change education,” said Alan Mackworth, the Canada Research Chair in Artificial Intelligence at the University of British Columbia. “I don’t see any other option.”
Although ChatGPT is just in its infancy, the company behind it has been working in this space for years. OpenAI is a California-based research firm that works on projects to expand the capabilities of artificial intelligence and explore its impacts on society. OpenAI itself is reportedly valued at an estimated US$29 billion.
With this type of value attached to such a new project, ChatGPT has Silicon Valley buzzing. It’s reported that Microsoft is set to invest another $10 billion in the viral tool, getting a 75 per cent share of OpenAI’s profits until it makes back the money on its investment — after which it will get around half.
Ask ChatGPT why this investment is a good idea, and it will modestly explain how it can improve efficiency, reduce workloads and cut costs for organizations.
While it may be beneficial to businesses looking to improve their practices, it doesn’t take an all-knowing robot to see that ChatGPT could cause major problems for universities looking to stave off plagiarism.
Schools assign thousands of research papers every semester. With each paper generated by the bot being unique, the opportunity to use ChatGPT to complete assignments is there for the taking. According to Mackworth, the language model behind ChatGPT has 175 billion parameters, leaving almost nothing out of its grasp.
“You prompt it with various suggestions like, ‘I’d like you to write an essay in the style of Charles Dickens’ and it would do it,” explained Mackworth. “It goes and looks at everything that Charles Dickens has ever written and synthesizes it based on that knowledge.”
Another professor concerned with how ChatGPT will impact university students is Toronto Metropolitan University’s Angela Misri. As a journalism instructor with a specialty in all things technology, Misri has been living at the intersection of AI and the written word for years. “What scares me as a human is we’re basically taking an industry of creation, and we’re just making it worth nothing. Like it’s going to make creating a piece of text almost free, right?”
Misri also acknowledged that the bot has forced her to rethink the structure of how she teaches. She said she’ll now consider emphasizing oral presentations and interviews rather than written work to try and minimize opportunities for ChatGPT to impact her classes. She added that a colleague at TMU is turning to a “fishbowl” strategy, where students’ names are put in a bowl at the front of the class and are drawn at random to explain their work.
Still, the tech may just be too fast and too smart. ChatGPT can generate thousands of words in seconds, with short-answer questions being a cakewalk for the bot.
“The tech is now trumping what the schools have,” said Marc Saltzman, a tech expert and journalist.
“Something has to be done. I feel we’re on the brink of a new wave of AI-powered software that’s going to make it very hard to determine if a human wrote [a piece of text] or not.”
But what about plagiarism detection software? Schools have been relying on services like Turnitin to nab cheaters for years. Can these systems detect if AI has been used to write a piece? Experts say it doesn’t take much to get a ChatGPT-laden Trojan horse through their walls.
“It’s reached a critical point,” Mackworth said. “It’s taking off and the effect on education in the short term is that all exams, essays and programs are suspect. You cannot easily determine if they’re written by who they claim to be written by.”
Turnitin is attempting to battle the bots, however: it recently released three manuals to help teachers determine whether AI wrote their students' assignments.
If software like Turnitin can be duped by the bot, forget about trying to discern if ChatGPT was used with the naked eye. “Do I think I could be fooled? I absolutely do,” Misri said. “Everyone’s going to get fooled, but quite frankly the students are losing out if they are relying on this to get through a degree program.”
To find out just how well people can spot the work of bots like ChatGPT, Mackworth looked at how reliably humans detect the presence of AI in written work. It turns out to be barely better than a shot in the dark. “Humans get it right about 52 per cent of the time,” he said. “So we’re just slightly better than guessing.”
That’s why Edward Tian, a 22-year-old Princeton student, has created an app to eliminate the guesswork — even for professors with the sharpest eyes.
Tian created his app, GPTZero, over his winter break after realizing that the risk of AI-related plagiarism in post-secondary schools was getting out of control. Demand was so high that shortly after launch, GPTZero’s website crashed under the traffic. More than 33,000 people used it in just seven days.
But how does GPTZero catch what many high-tech plagiarism detectors miss? It turns out human writing has a certain je-ne-sais-quoi that the bot can’t match. The young software engineer says human writers tend to go deeper into the context of their words than chatbots do. In essence, ChatGPT’s work lacks the humanness of writing.
In addition to complexity, Mackworth believes there’s another major puzzle piece that AI is missing when it comes to generating text: real understanding.
“Despite all these impressive successes, it really doesn’t understand what it’s doing,” he said. “All it’s doing is stringing words together. It doesn’t know what they mean. It doesn’t know what ‘truth’ is.”
While ChatGPT can seemingly pull anything from cyberspace at a moment’s notice, it isn’t perfect. It cannot complete bibliographies or include in-text citations in a paper that it’s written, but it can offer up primary sources that a student could then turn into a complete list of works cited. It also regularly reaches capacity, which bars new users from logging on until space opens up.
When ChatGPT reaches capacity, it offers a fun example of its work to keep users entertained.
As more and more people feed ChatGPT with new prompts, it continues to learn. This is a brutal irony not lost on Misri. As students eschew traditional learning for the shortcuts offered by ChatGPT, they in turn make the AI better, leading it to be more effective for future users. “It’s really sad that you’re giving up all this power to a machine that could steal your job,” she said. “The whole thing is circular.”
That is to say nothing of how ChatGPT can “help” with math problems. While there are millions of possible ways to write an essay, the rigidity of mathematics poses its own unique issues. Given that there is often only one right answer to many problems and ChatGPT is capable of showing its work, it could easily be used by a student with their instructor being none the wiser.
Even if the AI is used responsibly — say, to check grammar or sentence structure — its impacts could be felt on an artistic level. If the bot says that one way to do something is the correct way, then students may be disincentivized or hesitant to take risks with their work. Similarly, if ChatGPT is used to create a framework for an assigned essay or written piece, the student won’t get to experiment with structure or find their own voice.
This could lead to a stall-out in convention-changing ideas, and a shift towards centring work around the thought processes of a robot. “I think it might be the end of valuing original thought, sadly,” Misri said. “It’s not that we won’t create original thought, it’s that we won’t value it as much when we can get it for free.”
OpenAI says it’s working on implementing safeguards to protect academic integrity, and will be incorporating user feedback to improve its product. In a statement made to Global News, OpenAI said “we don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system. We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence.”
Bennett says that his use of ChatGPT won’t stop at Macbeth.
Many people at his high school continue to use the app too, finding ways to sneak robot-written text under their teachers' noses. In fact, Bennett believes that not a single teacher at his school has even heard of the tool.
He also doesn’t think using the bot is cheating. He believes the leap to artificial intelligence is similar to the jump from textbooks to search engines like Google.
“I’ll be careful with it but I’ll definitely continue to use [ChatGPT]. It’s just so efficient and it would be ridiculous not to, knowing what it can do.”