Friday, September 12, 2025

Professors ponder ethical uses of AI in academia

By Maureen Russel

 

Artificial intelligence is a weapon, and it has proven highly problematic when wielded in an academic setting.

As the school year begins, both students and teachers are reminded of its role in the classroom. Given the continuous advancements large language models have made, it can be safely assumed that they will persist in some fashion.

While ChatGPT is the artificial intelligence system that comes to mind first, other divisive versions of the technology compete with it. From Google Gemini to Grok AI, building new artificial intelligence bots has become the next big thing in the tech world, meaning it was only a matter of time before the technology infiltrated the academic universe. And while most automatically associate its use with cheating, others argue that it can be used ethically.

The main concern most people have regarding ChatGPT use in school is how little effort it allows students to put into their coursework. Students often feed a simple prompt into the system and receive a polished, instant response that is ready for submission.

The catch is that it is clear to most teachers when something has been completed by artificial intelligence. Dr. Fadi Helwanji, a professor in SUNY Plattsburgh's School of Business and Economics, can easily recognize the students who cheat on assignments because all of the responses sound alike. It also casts doubt on the effectiveness of the student's education. Colleen Lemza, a professor of public relations and journalism, argues this exact point.

“If someone is not firing neurons to complete a worksheet, it defeats the purpose of being in school, and forces you to question whether the student is learning from the assignment,” said Lemza.

Though wholesale plagiarism is universally regarded as cheating, using artificial intelligence as a starting point can be acceptable when done appropriately.

Dr. Helwanji encourages students to use such resources for the basic, menial aspects of certain projects, and even sanctions their use in his course syllabi.

“It’s something that has to be embraced, since it’s gonna be in every aspect of life. Actually, it already is,” said Helwanji.

Associate Professor of Computer Science Kevin McCullen values AI language models like any other source of information, but is wary of where they get their information. He argues it is important to cite them when they have been used, or face the consequences of academic dishonesty.

“With AI, you need to credit it. You also need to verify it, because it is an unreliable narrator,” said McCullen.

At its core, the technology is a tool like any other when used correctly. The difference lies in recognizing it as a tool rather than a replacement for actual writing and creating. Distinguishing the situations that warrant its use from those that do not is the hurdle that separates honesty from cheating.

“Large language models are definitely a threat, and an opportunity, and an enigma,” said McCullen.

The version of artificial intelligence that academia currently battles is the simplest, most rudimentary version of the technology we will see in our lifetimes. Its abilities will continue to advance, and the challenges will become more complex. To prevent artificial intelligence from clutching society in its fist, I believe we must work to find the balance between machine and thought.

