Finding the sweet spot
By Scott Geier, Assistant Professor
On January 6, 2023, I used ChatGPT for the first time. I teach an introductory web development course at UNC Hussman, so for my first prompt, I wanted to find out if generative AI could write working code. I asked the chatbot to create a web page about our school, complete with a headline, an article, a photo of Dean Reis and a YouTube video.
ChatGPT nailed it. In a matter of seconds, it produced flawless code, wrote the article and found the images needed for the page. I just had to copy and paste the HTML and CSS into a code editor, and I was done. After the chill went up my spine and entered my brain, I thought:
“It’s like the automobile has just been invented. And I’m a horse and buggy salesman.”
I knew, immediately and instinctively, that I had to pivot. I couldn’t wait to see how this new technology evolved; I couldn’t weigh the pros and cons for a few months before taking action. I had to start teaching my students how to drive cars, not ride horses.
I had this in mind a few days later when I walked into my introductory coding class.
“Everything has changed,” I told the students, and then I gave them their first AI-driven assignment: a mockup of a simple web design that they were to build with AI. I imagined their minds, like mine, would be blown by the power of this new technology.
I thought some students would throw up their hands and ask, with existential angst, “If AI can do this, why are we even here?”
That’s not what happened. Instead, I discovered that most of them couldn’t solve the problem. They got frustrated and confused. They wrote prompt after prompt, getting nowhere. They couldn’t get AI to do the same thing I had done myself a few days prior.

That’s when I realized: my students couldn’t get the results they needed because they didn’t know how to ask. When I used ChatGPT, I had taken my own human-acquired, domain-specific knowledge for granted. In my prompt, I specified the coding languages, HTML and CSS, that were needed; I used web design keywords like “header” and “responsive,” and I told AI to fetch the URLs to include in the image tags. The students didn’t know any of this yet, so they couldn’t use that knowledge in their prompts.
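To illustrate what that domain knowledge buys you, here is a sketch of the kind of markup a well-specified prompt tends to produce — the headline text and image URL below are hypothetical placeholders, not what ChatGPT actually generated:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <!-- The keyword "responsive" maps to this viewport tag -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>UNC Hussman School of Journalism and Media</title>
</head>
<body>
  <!-- The keyword "header" maps to a semantic header element -->
  <header>
    <h1>Welcome to UNC Hussman</h1>
  </header>
  <article>
    <p>The generated article text would go here.</p>
    <!-- "Fetch the URLs to include in the image tags":
         a placeholder src attribute stands in for a real photo URL -->
    <img src="https://example.com/dean-reis.jpg" alt="Dean Reis">
  </article>
</body>
</html>
```

A student who has never seen terms like “header” or “viewport” has no way to ask for any of this, which is exactly the gap the class exposed.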
This pattern emerged in other AI exercises during the next few months. For example, during a graphic design class, I asked students to create an editorial illustration on a given topic. The results were disappointingly homogeneous. All of the images looked the same, with that now-infamous “AI style” (photorealistic but bizarre, stock photo meets YA fantasy novel), and none of the artwork was suitable for publication. I realized that students were facing the same problem that had stymied them in my coding class — they couldn’t get the results they wanted from AI because they didn’t know how to ask. So I had to teach them how to ask.
I discussed the value of different artistic mediums (e.g., watercolor, woodcut) and gave them terms to describe different styles (e.g., bold linework, minimalism, color block). I explained how to write an image-generation prompt like an ethnography, with thick description and granular detail. I showed them how to daisy-chain one prompt after another until AI had refined the results to their liking. I discussed the bias inherent in generative AI and the ethical questions to consider when creating images of people. The students tried again, and the results were amazing.
What I’ve learned during this journey is that AI doesn’t exist in a vacuum. It’s not a wonder-tool that can read our minds and do our bidding with ease (at least not yet). It can only do what we tell it to do, and it only works if we give the right instructions. And that takes good old-fashioned human know-how. In other words, using artificial intelligence requires actual intelligence, the kind of savvy we’ve always tried to foster in higher education.
In fact, one could argue that in the age of AI, humanistic pedagogy has truly found its place. Critical thinking has become even more critical now that our students are offloading the grunt work to machines. AI can be an amazing, tireless, loyal assistant, but we, the humans, must be the managers, and if you’ve ever worked in management, you know – it’s not easy. Good managers must understand the problem at hand, explain it clearly to others and set realistic expectations before assigning tasks. This is very much a skill that can be learned and practiced, so we must teach it to our students.