Can ChatGPT rewrite the rules of higher education?
Recently, UNC’s History Department, the UNC Institute for the Arts and Humanities and the Center for Information, Technology, and Public Life (CITAP) convened a panel to discuss the implications of ChatGPT in higher education.
Heesoo Jang, a Ph.D. student at UNC Hussman, was among the four panel members who discussed this new technology and its impacts on the campus community.
Read through a recap of the discussion below by Hussman student Isabella Sherk ’23, who wrote about the event for the school’s Media Hub program.
Art by Denise Kyeremeh ’23.
Students and faculty slowly trickled into a room on UNC-Chapel Hill’s campus on a Friday afternoon in February. They exchanged quiet but eager conversations as they waited for a panel discussion to begin. Sentiments like “I just got on last week” and “It’s crazy what it can do” echoed through the small event space.
Alice Marwick, an associate professor of communications and the co-director of the Center for Information, Technology, and Public Life (CITAP) at UNC, opened the panel by telling the audience, “ChatGPT is a cutting-edge language model developed by OpenAI that has the ability to generate human-like text.”
If that quote didn’t sound natural, it’s because it’s not — Marwick asked the technology to write the introduction. Nodding heads and hums of approval followed. People were impressed.
When OpenAI released ChatGPT in late November 2022, it quickly became popular. The web app reached 100 million active users in January, making it the fastest-growing consumer app in history, according to Reuters.
When asked how ChatGPT works, the OpenAI chatbot responds, “ChatGPT is a type of language model based on transformer architecture developed by OpenAI. It uses deep learning algorithms and a massive amount of text data to generate human-like responses to natural language queries.”
Users ask the chatbot questions or prompt it to write something specific, and ChatGPT delivers. Prompts like “write 100 words on Jane Austen” or “write an email to Mary wishing her a happy birthday” scratch the surface of what it can do.
According to Gary Marchionini, dean of the UNC School of Information and Library Science, conversations about language modeling grew out of the information retrieval community, which built indexes cataloging each word in a document or web page. That’s how search engines like Google came to dominate the internet.
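A minimal sketch can make the idea Marchionini describes concrete: an inverted index maps each word to the set of documents containing it, so a search engine can look pages up by keyword. The tiny “pages” and their contents below are invented for illustration, not drawn from any panelist.

```python
from collections import defaultdict

# Toy corpus standing in for web pages; the contents are invented for illustration.
documents = {
    "page1": "jane austen wrote pride and prejudice",
    "page2": "happy birthday mary",
    "page3": "jane austen and mary shelley were novelists",
}

# Build the inverted index: each word maps to the set of pages that contain it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

# Retrieval is lookup, not generation: the index can only return pages that already exist.
print(sorted(index["austen"]))  # ['page1', 'page3']
print(sorted(index["mary"]))    # ['page2', 'page3']
```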
Unlike models that search for specific words, “generative” AI like ChatGPT creates material through a language model that predicts which words are most likely to follow one another.
“What language modeling did is it said, ‘Well, instead of just thinking about what the words are, what if we could imagine what all the possible combinations of word orders would be for all the words?’” Marchionini explained.
The difference between past information retrieval systems like Google and the language model ChatGPT is that instead of retrieving information that already exists, the AI is actually generating new material. That means students can input an essay prompt, and ChatGPT will generate a new piece of writing based on what they typed into the chatbot.
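To make that contrast concrete, here is a minimal next-word-prediction sketch: a toy bigram model that counts which word follows which in a training text, then produces new text by repeatedly sampling a likely next word. Real systems like ChatGPT use transformer neural networks trained on vastly more data; the training sentence and variable names below are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny training text; real language models train on billions of words.
text = ("the students asked the chatbot a question and the chatbot "
        "wrote an answer and the students read the answer").split()

# Count, for each word, which words follow it and how often.
next_words = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    next_words[current][following] += 1

# Generate: repeatedly sample a next word in proportion to its observed frequency.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    candidates = next_words[word]
    if not candidates:
        break
    word = random.choices(list(candidates), weights=candidates.values())[0]
    output.append(word)

# The result is assembled word by word rather than retrieved from anywhere.
print(" ".join(output))
```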
This ability has caught the attention of educators at UNC and across the country — and concerned some of them, as well.
They are worried that students will use the technology to cheat by having it write essays for them. The New York City Department of Education blocked ChatGPT from its networks and devices last month due to “negative impacts on student learning, and concerns regarding the safety and accuracy of content,” according to Chalkbeat.
Concerns about cheating remain, but some say banning the technology outright is the wrong decision.
Mohammad Jarrahi, an associate professor of information science at UNC who studies the use and implications of AI systems across professional fields, believes systems like ChatGPT should be worked with, not avoided.
“The blocking approach is not going to work,” Jarrahi said. “We cannot play this continuous, endless game of Whack-A-Mole.”
Mark McNeilly, a professor of the practice of marketing at UNC’s Kenan-Flagler Business School, said that students will continue to encounter this technology in their professional lives.
“These kinds of systems are going to be students’ futures, so to tell them they can’t use it I think is kind of short-sighted,” McNeilly said. “Students need to understand what this thing does, and what its limitations are.”
There are already tools to detect AI-generated writing, such as GPTZero, created by Princeton undergraduate Edward Tian, which teachers can use to expose cheating, Marchionini said.
Instead of banning the use of ChatGPT or embracing it fully, Jarrahi said that educators should find the middle ground. Making it mandatory for students to use the system would raise ethical concerns — using the technology trains the AI and provides it with free labor, he said.
But if students use ChatGPT, Jarrahi said professors should ask them to critique its responses. That is a human skill we should nurture, he said.
Humanity, or the lack of it, runs through the ChatGPT conversation. Many have observed the robotic tone and lack of feeling in AI-generated writing.
Tressie McMillan Cottom, an associate professor at UNC’s School of Information and Library Science and a 2020 MacArthur Fellow, wrote about this lack of humanity in a December edition of her New York Times newsletter.
“Voice, that elusive fingerprint of all textual communication, is a relationship between the reader, the world and the writer,” she wrote. “ChatGPT can program a reader but only mimic a writer. And it certainly cannot channel the world between them.”
While the technology may not be able to replicate human thought and emotion at the highest level, educators and students can use AI technology like ChatGPT to simplify tasks such as writing emails, proofreading or brainstorming. Jarrahi, whose first language is not English, said ChatGPT has been helpful for tasks like these.
“The way I use it is to help with some mundane writing tasks,” he said. “Phrasing emails to make them more professional, suggesting how I can rephrase parts of my articles. That’s a positive thing as an international.”
Beyond potential student misconduct, other aspects of ChatGPT will shape how people learn and operate in academia.
OpenAI admits that ChatGPT can give inaccurate responses. In the user onboarding process, there is a disclaimer acknowledging that the chatbot can “occasionally generate incorrect or misleading information and produce offensive or biased content.”
Jarrahi said that when he asks ChatGPT to cite the sources for its information, it has even generated references that do not exist.
OpenAI, founded as a non-profit, restructured around a for-profit arm in 2019. While access to ChatGPT is currently free, it may not stay that way. The company recently announced a $20-per-month subscription, ChatGPT Plus, in a blog post, which will give users “general access to ChatGPT, even during peak times, faster response times and priority access to new features and improvements.”
When AI technology has a price tag, issues of equity on campus arise, Marchionini said.
“What happens when it now starts costing $1?” he asked. “What about $100? What about $1,000?”
Another potential problem is the “biased content” ChatGPT warns its users about.
“If the data set it has been trained on is biased, or has biases in it, it will pop those out,” McNeilly said.
Heesoo Jang, a panel member and Ph.D. student at the UNC Hussman School of Journalism and Media, studies the intersection of technology, society and the media, and is writing her dissertation on the public discourse on AI ethics.
Bias in AI systems is not new, she said — AI models are trained on data from the internet that carry historical biases, and will subsequently perpetuate them.
“Even though we don’t know all the harms and all the biases that are embedded in the systems, even the smallest part of the harms we are seeing is very concerning,” Jang said. “The more people know about this, the more people talk about this, the more we can actually start a discussion to make this better.”
Sam Altman, the CEO of OpenAI, acknowledged this problem recently, tweeting, “We know that ChatGPT has shortcomings around bias, and are working to improve it.”
Back on campus, the panel reiterated the sentiments of many — continuing to talk about the implications of AI technology is important.
In one of UNC Chancellor Kevin Guskiewicz’s weekly messages to campus in January, he mentioned ChatGPT, saying he has been listening to faculty and student discussions about the technology.
“Like so many new technologies over the past few years, artificial intelligence holds huge promise for both progress and disruption,” he wrote. “We will have to answer a lot of questions about how we respond in higher education.”