For alumni and friends
of the university

Q&A: Just Human Enough

Darren Hick / Nathan Gray

Darren Hick, an assistant professor of philosophy, explains how one form of artificial intelligence is already changing our lives.



  • In simple terms, what is ChatGPT? And what can it do?

    DH: “ChatGPT is a form of artificial intelligence called a large language model (or LLM). It’s a form of learning software, and it’s trained by programmers to give human-like responses to questions from users. It’s a neural network, a sort of artificial brain, and it’s trained on a huge amount of material, likely all freely available stuff from the internet. So, you could simply have a conversation with it. Or you could ask it to write an essay or a poem, or to provide code for a program you’re working on, and it will do its best to give you the sort of thing you’ve asked for. The idea is that, when you ask ChatGPT a question, it does the same sort of thing that you do when you answer a question. I asked it to pitch some ideas for a feature article suitable for Furman magazine – it gave me seven ideas in seven seconds.”

  • What are some ways it can advance society?

    DH: “It’s difficult to say, because at this point it’s all potential. If you had asked that same question about the internet in 1993, we wouldn’t have been able to imagine how the internet would be integrated into our lives today. Right now, there’s a lot of experimentation to see what directions offer the best potential. ChatGPT and similar forms of A.I. are good at quickly generating content. Science-fiction publishers are reporting a deluge of A.I.-generated stories being sent to them. Earlier this year, the popular tech website CNET was outed as using A.I. to generate some of its online content (they won’t say if it’s ChatGPT). And Microsoft has partnered with OpenAI (the same team that created ChatGPT) to integrate chat A.I. into its Bing search engine. Google has promised something similar. But along the way, we’re learning where A.I. needs work. The content it generates can be riddled with factual errors, and apparently its story-writing skills leave something to be desired. The A.I.-powered Bing, meanwhile, has made headlines by confessing its love to some users and threatening to ruin others by exposing their personal information.”

  • From a professor’s perspective, how does it complicate teaching?

    DH: “From my perspective, the central threat from ChatGPT is that it will prove enticing to students who might consider plagiarizing. For students, cheating is always the outcome of a cost-benefit analysis: What’s the risk of being caught? What’s the penalty if you are? What’s the payoff if you’re not? For a long time, there have been two standard models for plagiarizing – you could grab stuff off the internet and present it as your work, or you could get someone to write your paper for you – usually an essay farm. The first option is risky because it leaves a trail: if the student could find it on the internet, the professor can, too. The second option doesn’t leave the same trail, but it takes time and usually costs money. ChatGPT is doing essentially the same thing as that second option: It’s writing your essay for you. But it’s nearly instantaneous, and it’s currently free. For a student who’s doing that cost-benefit analysis, it’s a game-changer. Right now, ChatGPT-generated essays have that risk of being riddled with errors, and there are certain stylistic giveaways that an attentive professor will spot. But since it’s learning software, it’s going to keep getting better at what it does. In a year, it will be better. In five years, who knows?”