The University of Colorado Boulder faculty have been discussing concerns about the artificial intelligence chatbot ChatGPT through various assemblies and panels this semester.
ChatGPT was created by OpenAI, an artificial intelligence research company headquartered in San Francisco. Since the launch of the chatbot on Nov. 30, 2022, it has generated many conversations in the academic world about how this tool relates to education, particularly in terms of academic dishonesty.
Several public school systems in New York City and Seattle have banned the chatbot on school Wi-Fi networks; however, many colleges and universities have been reluctant to ban the tool in higher education, in part due to the challenges of enforcing this policy.
Though CU Boulder has hosted several panels and one faculty assembly regarding ChatGPT in recent months, the university has not announced any new policies regarding the technology.
Tiffany Beechy, an assistant professor at CU Boulder and the chair of the Boulder Faculty Assembly (BFA), said that faculty members have been sharing articles and discussing concerns about detecting the use of AI in their classes.
Here’s what you need to know about ChatGPT and how it has been impacting the academic environment at CU Boulder.
How ChatGPT works
OpenAI released ChatGPT last November as a free preview of their AI chatbot. At the most basic level, a chatbot is a computer program that uses AI and natural language processing to understand and respond to users’ questions, thus simulating human conversations. For example, ChatGPT can be prompted to compose a song in the style of a certain artist, recommend popular activities in a certain area or write essays on a variety of different topics.
As a high-tech AI chatbot, ChatGPT can generate complex answers to open-ended questions, using its training from large amounts of data and machine-learning algorithms to interpret and process questions and instantly provide human-like answers.
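At a much smaller scale than ChatGPT, the basic chatbot loop described above can be sketched in a few lines. This is a toy rule-based example, not anything resembling OpenAI's system: it simply matches a user's message against a handful of canned patterns and responds.

```python
# A minimal rule-based chatbot sketch -- far simpler than ChatGPT, which
# generates open-ended text with a large neural network, but it shows
# the basic loop: read a message, match it, respond.
RULES = {
    "hello": "Hi there! How can I help?",
    "what can you do": "I can answer a few canned questions.",
}

def respond(message):
    """Return a canned reply if the message matches a known pattern."""
    key = message.lower().strip("?!. ")
    return RULES.get(key, "Sorry, I don't understand that yet.")

print(respond("Hello!"))
```

Where a rule-based bot can only return pre-written answers, ChatGPT generates novel text, which is why it can compose songs and essays rather than just retrieve responses.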
A growing concern in the academic world
The discussions at CU Boulder’s faculty assembly have covered many of the concerns raised about ChatGPT. Questions have emerged about whether AI-generated responses should be considered plagiarism and how college courses can refine assignments to inspire original critical thinking from students.
“[Generative AI technology] has always been impressive, but most people have not been able to interact with it in this way,” said Casey Fiesler, an associate professor at CU Boulder who researches technology ethics, internet policy and communities. “Before ChatGPT, people didn’t realize how advanced this kind of technology was. Now, everyone has a very easy way to play with it and see what it can do.”
According to Beechy, administrators in the Honor Code Advisory Board and the Division of Student Affairs are looking for ways to update CU Boulder policies to include new rules regarding ChatGPT and other similar AI chatbot technologies. Specifically, they want to define what the standards of proof should be to confirm that a student’s work is their own.
“All students enrolled in a University of Colorado Boulder course are responsible for knowing and adhering to the Honor Code,” said university spokesperson Andrew Sorensen in an email statement to the CU Independent. “Violations of the policy include plagiarism, which covers the use of paper writing services and technology, such as essay bots, whether paid or unpaid.”
Although CU Boulder’s honor code refers to the use of academic writing technology under the umbrella of plagiarism, it does not specifically call out the submission of AI-generated work. However, Beechy stated that she believes turning in AI-generated work as one’s own aligns with the definition of plagiarism.
“I think the principles are completely covered by the honor code,” Beechy said. “It’s plagiarism if the work you’re representing as yours didn’t actually come from your own mind and hasn’t been cited.”
Fiesler said that professors should either adapt to the technology or clarify with their students how AI should and shouldn’t be used in the classroom.
“I would encourage instructors, if they’re concerned about students using [ChatGPT] to do a task in their class, to use the tool to do that task and see what happens,” Fiesler said. “Then, [they should] think about how they might need to modify that task.”
The future of AI chatbot technology
Despite fears of AI-generated responses dominating the academic world, ChatGPT is still a work in progress.
Before gaining access to the chatbot, users are warned by OpenAI that the chatbot may produce “incorrect or misleading information” and “offensive or biased content.” It is also described as a “free research preview,” rather than a complete, refined chatbot.
On ChatGPT’s website, OpenAI has a limitations section cautioning users that the chatbot may make up false but plausible-sounding information. According to OpenAI, this challenge is difficult to combat, as ChatGPT isn’t specifically trained to generate factually accurate information. The creators have said that encouraging the model to be more cautious may hinder its ability to answer appropriate questions.
“It’s extremely impressive, but it’s not intelligence,” Fiesler said. “It’s not magic. At the most simplistic level, ChatGPT is a statistical model of what word probably comes next.”
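Fiesler's point can be illustrated with a toy version of "what word probably comes next." The sketch below is an assumption-laden simplification: it counts word pairs (bigrams) in a tiny corpus and predicts the most frequent follower. ChatGPT does something analogous with billions of parameters over subword tokens, but the underlying idea of predicting the next word from statistics is the same.

```python
from collections import Counter, defaultdict

# Toy bigram model (not ChatGPT's actual architecture): count which word
# follows which in a tiny corpus, then predict the most likely next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))  # in this corpus, "on" is always followed by "the"
```

A model like this has no notion of truth, only of frequency — which is also why, at vastly larger scale, ChatGPT can produce fluent but factually wrong answers.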
AI technology is developing rapidly: revenue from the artificial intelligence industry is projected to grow at a yearly rate of roughly 37% from 2023 to 2030.
Given these developments, some people have become increasingly concerned about the impact AI will have on our society going forward, while others are more optimistic about the situation.
“New technologies have always been met with some measure of freaking out,” Beechy said. “They can, and often do, destabilize our experiences as a society. Then, we figure out ways, both good and bad, to live with them. We’ll inevitably have to do that with ChatGPT as well.”
Editor’s Note: A previous version of this story incorrectly stated that ChatGPT was based on GPT-3. OpenAI has not revealed which version of GPT ChatGPT is based on. The reference has since been removed.
A previous version of this story included a photo of students working in the Atlas computer lab with a caption mentioning the Technology, Arts and Media program, which has since been renamed Creative Technology and Design. The photo has since been removed.
Contact CU Independent Staff Writer Ann Marie Vanderveen at ann.vanderveen@colorado.edu.