
Campus takes on AI

By Jesse Taylor

Artificial intelligence has surged in popularity with the introduction of ChatGPT, an AI chatbot that generates responses to prompts, released in November 2022. But what are the implications of an AI that can seemingly answer virtually any question? SUNY Plattsburgh’s Institute of Ethics in Public Life set out to answer these questions.

Members of the Institute of Ethics in Public Life held a discussion over Zoom about generative AI April 26. The group opened with Delbert Hart, professor of computer science, explaining how exactly generative AI such as ChatGPT works. Hart explained that AI programs are fed large amounts of data, which they use to recognize patterns and improve their performance.
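
To illustrate the pattern-learning idea Hart described, the following minimal Python sketch is a toy example rather than ChatGPT’s actual architecture: it counts which words tend to follow which in a small piece of text, then uses those counts to generate new text.

# Toy illustration of learning patterns from data, not how ChatGPT actually works.
import random
from collections import defaultdict, Counter

corpus = (
    "the model reads text and learns patterns "
    "the model then generates text from those patterns"
).split()

# Count how often each word follows each other word (a tiny "bigram" model).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # Pick the next word in proportion to how often it was observed.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))

Real systems like ChatGPT rely on far larger datasets and neural networks, but the basic loop is the same: ingest data, extract statistical patterns, and use those patterns to produce output.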

Additionally, a wide variety of AI tools can perform different tasks. Some, such as DALL-E and Midjourney, create images from the data they are fed, while others, like ChatGPT, formulate responses to virtually any question. However, the introduction of AI has also raised concerns about the technology’s potential pitfalls.

Kevin McCullen, associate professor of computer science, said that “people are starting to treat it like some kind of Greek oracle,” asking the program all kinds of questions and treating its answers as the utter truth. However, ChatGPT is perfectly capable of giving wrong answers, as the speakers pointed out.

If the datasets used to train an AI are inaccurate, then the answers it gives will be inaccurate as well. Many of these programs draw their data from open websites such as Stack Overflow and Reddit. In fact, the group brought up that Reddit is raising objections to ChatGPT drawing so much of its data from the site.

However, it is unknown where ChatGPT sources all of its data. Lonnie Fairchild, professor emerita of computer science at SUNY Plattsburgh, pointed out that we don’t even know how many networks make up ChatGPT.

“We are being asked to trust things that we don’t know anything about,” Fairchild said.

Tom Moran, founder of the Institute of Ethics in Public Life, brought up the idea that enemies of the United States may acquire this technology. 

“We worry that adversaries might acquire the technology, and it’s funny that’s very analogous to the dilemma that has existed since the creation of atomic weapons,” Moran said.

As of now, the government has not recognized AI as a source of further problems.

Another question brought up was whether ChatGPT actually possesses intelligence. One of the fears that opponents of AI, such as business magnate Elon Musk, have raised is that AI may one day match human intelligence and even surpass it. However, ChatGPT has been trained only on text.

For AI to reach that kind of intelligence, programmers would have to give it the ability to do so. It is possible that someday they will, but as of now, that remains to be seen.
