Playing Nicely With AI

As curious students venture into the uncharted territory of generative AI, education systems are tasked with ensuring its appropriate use while still delivering a quality education.
By Kai Xiang Lee

New toys and gadgets have always found their way into schools—from the Tamagotchi to calculators and smartphones. While some are relatively benign, others can be highly disruptive to a student’s education.


ChatGPT, one of the best-known generative artificial intelligence (GenAI) tools, is the latest to rapidly enter the education scene, thanks to its ability to converse and produce written text like a human. Within months of its initial release, millions of users explored ChatGPT’s abilities, ranging from content creation and text translation to even code debugging.


Beyond writing, other GenAIs, such as DALL-E and Midjourney, can generate original digital images from text prompts—some can even produce entire videos from a script or blog post. With such fascinating capabilities, ease of access and growing popularity, young and curious minds are inevitably finding ways to use these tools in the classroom.


School boards now face the difficult challenge of setting guidelines on how to use these technologies to enhance learning, while ensuring an even playing field for all students.


CAN THE AI DO MY HOMEWORK?


With a highly tech-literate generation, the launch of GenAIs saw students zealously applying these tools to their assignments. In a matter of minutes, they could churn out plausible essays, generate quick answers to quizzes and succinctly summarize reports. In a survey reported by Forbes, 89 percent of students confessed to using ChatGPT to complete homework assignments. Some state governments even blocked the service from their networks.


OpenAI, the developer of ChatGPT, is eager to work with educators on finding solutions. In response to the proliferation of AI-assisted cheating, the company released a preliminary classifier to identify text produced by its proprietary technology.


However, OpenAI’s classifier correctly identified only 26 percent of AI-generated content, while incorrectly flagging nine percent of human-written text as AI-generated. Plagiarism detection giants like Turnitin have also released AI detection software but face similar issues—flagging self-written essays as AI-fabricated. Moreover, outputs from ChatGPT and other GenAIs will likely only become harder to detect, as the models learn from the feedback of the very classifiers used to catch them.
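To see why these detection rates are problematic in practice, a quick Bayes-rule sketch helps. The 26 percent true-positive and nine percent false-positive rates below come from OpenAI’s reported figures; the 50 percent base rate of AI-written submissions is purely an illustrative assumption, not a figure from OpenAI.

```python
def p_ai_given_flagged(true_positive_rate: float,
                       false_positive_rate: float,
                       base_rate: float) -> float:
    """Probability that a flagged essay really is AI-written (Bayes' rule)."""
    flagged_ai = true_positive_rate * base_rate            # AI essays caught
    flagged_human = false_positive_rate * (1 - base_rate)  # humans falsely accused
    return flagged_ai / (flagged_ai + flagged_human)

# OpenAI's reported rates; the 0.5 base rate is a hypothetical assumption.
print(round(p_ai_given_flagged(0.26, 0.09, 0.5), 2))  # prints 0.74
```

Even under this assumption, roughly one in four flagged essays would be a false accusation against a human author—while nearly three-quarters of AI-written work would slip through undetected.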


LET THE MACHINES TAKE OVER


Dr. Toby Walsh, a professor of AI at the University of New South Wales, Australia, believes that these are signs that we are testing for the wrong things. “The funny thing is, we don’t set essays because there’s a shortage of essays, we set essays because that’s a way to measure people’s ability to build arguments, to think critically about a topic,” said Walsh in an interview with Supercomputing Asia.


When the task of content synthesis is surrendered to machines, we can focus more on thoughtful curation of the material. As an example, Walsh suggested that students could use ChatGPT to prepare an essay to analyze and critique. This directly tests the nuanced skills of presenting arguments, critical analysis and inquiry.


When the modern calculator was first introduced, it raised similar controversy among educators, but ultimately proved to be a positive force in the classroom. Routine arithmetic could be outsourced, amplifying productivity and allowing students to focus on more advanced mathematics. Just as the calculator is now a standard classroom tool, GenAI may one day earn a similar place.


Additionally, GenAI can provide individualized learning tailored to each student. Carnegie Learning, one company offering such services, uses AI to track a student’s progress and plan lesson activities accordingly. This level of personalization goes beyond what a single teacher can manage for a full classroom and can make the learning process more effective for students.


However, these tools are not without flaws. Currently, GenAIs remain prone to “hallucinating”: inventing plausible-sounding falsehoods and presenting them as facts. Both students and instructors need to carefully vet content synthesized by these tools before using it.


SHOULD WE BE WORRIED ABOUT GenAI?


Even as society embraces the new technology, we must be aware of broader concerns over its use. The promotion of GenAIs in education is surrounded by a haze of legal and ethical questions.


One issue concerns the confidentiality of information shared through these applications. For instance, ChatGPT, which explicitly states that conversations are recorded to improve the chatbot, suffered a data breach on March 20, 2023, leaking users’ conversations and payment information. In light of this and earlier security lapses, Italy has banned its use for now, while other European nations have imposed strict regulations.


In the US, there are ongoing class-action lawsuits over the use of copyrighted data to train these algorithms. Training models on such material raises concerns over artists’ intellectual property rights, including who owns the works that these models produce.


Nevertheless, experts are hopeful that a more sustainable solution will emerge. The music and film industries faced a similar problem in the past with the launch of the file-sharing service Napster. The resulting surge in internet piracy was eventually tackled by streaming services, which allowed users to access content conveniently while upholding copyright.


According to Walsh, “We’re going to have a similar evolution in terms of GenAI, in whether we’re returning value back to the people.”

“Even assuming that the rightful copyright owner is the person whose queries generated the AI work, the concept of independent creation may preclude two parties whose queries generated the same work from being able to enforce rights against each other.”
Margaret Esquenet
Partner at Finnegan law firm
