When Suzanne Ehst, associate academic dean, was writing her dissertation, she encountered documentation of the anxieties that arose when the pencil replaced the standard quill and ink:

“There were certain writers who were so scared that [the pencil] was going to change the quality of writing,” Ehst said. “[It] was going to change the rhythms of writing — it was going to make our writing dumber when we didn’t have those natural pauses to dip our quill and ink.”

As artificial intelligence becomes more prominent and accessible, academics are wrestling with these new tools and determining what is acceptable in classroom settings.

This year, Goshen College updated its academic integrity and grievance policies to explicitly label the use of AI as plagiarism when students misrepresent AI-generated work as their own. Beyond that, GC has given departments discretion over how they use AI.

Anna Groff, assistant professor of communication, said that the college gave faculty an optional AI policy that they could add to their syllabi. She said, “That was nice to have that language coming from the top — even though typically I want to be able to do things my way, in this arena, it’s nice to have some guidance and language.”

Groff said that while there have been efforts from people like Ehst to create supportive spaces around AI use, she sometimes feels alone in the process. “There’s so much autonomy with being faculty, which is what makes the job so awesome, but it also means I’m empowered to do what I want and figure it out on my own,” Groff said.

Andrew Hartzler, professor of accounting, referred to discussions he has had with Ehst on AI use in classroom settings. “It can be serious, like in theory, you can get kicked out of college if you keep cheating on things repeatedly,” Hartzler said. “I don’t think students fully realize that you can actually be dismissed from the college for doing it multiple times.”

Ehst recalls when she first heard about ChatGPT towards the end of the fall semester of 2022. Ehst said, “[ChatGPT] was doing things that other AI tools hadn’t done in such a publicly accessible way.”

Groff recalls being part of the first conversation around AI, led by Ehst, who was at that point the Core general education director, in January 2023. Groff said, “It was so preliminary, exploratory — just imagining what could be, and then I went back to my office and created a ChatGPT account and started testing it that day.”

In August 2023, Ehst hosted a meeting for faculty teaching the Identity, Culture and Community course. Hartzler remembers Ehst’s discussion around AI including a demonstration in which she entered the prompt for the first ICC essay, “How do communities shape our identities?” into ChatGPT. 

“We were like, this is vague, and it makes a lot of real generalized references but it’s pretty good,” Hartzler said. “In fact, it’s better than probably what over half the students will turn in.”

Ehst said, “I remember … thinking, oh boy, we have some work to do around these tools as they come out.”

When teaching accounting classes, Hartzler said, “[My classes] are mostly process-based with numbers, so it’s kind of hard to use AI to do that. … 11 years teaching here, I still force students to do things for me on paper to show me that they can do it.”

“I want them to break down the process so I can give them partial credit. … I want to see ‘Which parts of the process did you get right?’ ‘Where did you go off?’”

David Housman, professor of mathematics and department chair, said that the first time he used ChatGPT, he fed it a question that a student was working on. Housman said, “It thought for a while, and finally came back with a very nice layout. Had like two cases, and it looked very authoritative until I actually started reading it and almost every line was incorrect. The logic wasn’t there — the result wasn’t correct.”

After the 2023 fall semester of ICC, professors reconvened in December and once again ran the same prompt through ChatGPT. Hartzler said, “It was way better than it had been in August when [Ehst] did it. She’s like, I just want you to realize it’s getting better really fast.”

Luke Beck Kreider, assistant professor of religion and sustainability, said, “Not long after [generative AI] came out, I started noticing that some forms of student writing were changing — and I was noticing AI use in student assignments pretty quickly.”

“Students were using it in ways that I wasn’t really comfortable with, that seemed like it was allowing them to skip the types of work that I was wanting them to do — the types of thinking and reading and writing that I was hoping that they would do.”

Beck Kreider explained that in his classrooms, he does not allow students to use AI in any part of the writing process and discourages the use of AI to help read or interpret assigned texts. Beck Kreider said, “I’ve tried to explain — here’s why I want you to write this in your own voice, through your own process and without using ChatGPT.”

Groff shares a similar sentiment about writing. “In general, I would say like the majority of the time I want people not to use [AI] for the writing process. If they want to use it for brainstorming ideas or coming up with a definition that they might use some language from, that would be OK,” she said.

Groff also shared that the administration does not want AI to be used for grading. She referenced a point brought up by Beck Kreider in the Sept. 11 convocation on artificial intelligence about the vulnerability of the writing process: “It’s about connecting with your audience … We can only do that as individual writers.”

Beck Kreider emphasized that in assignments, he looks for students’ original thoughts and interpretations. “I want students to do the best they can at those processes and so I’ve erred on the side of not allowing it for most things,” he said, “and I haven’t really found ways yet of integrating it constructively in terms of the assignments — that feels still like a gap for me. I want to be better at demonstrating what I would think of as more positive or appropriate uses of it.”

In the course Principles of Public Relations, Groff said she incorporated AI into one assignment where she asked students to write a press release. “I wanted people to use [ChatGPT] and then show me how they did. Some people used it for AP style checking, other people used it for headlines, other people used it for writing a lead paragraph and then they had to debrief how it worked for them.”

“That was a really small thing, but I loved hearing back how students used it and what they liked about it.”

Housman explicitly names AI as a resource that students can use on “open resource assessments.” His only condition is that appropriate acknowledgment be made for every source used, whether that’s generative AI, a published work, the internet or another student. Housman said, “When they sit down to do the problems, they should be able to do it on their own without any resources — that’s what the expectation is.”

“I want them to have access to all those resources as they’re really trying to learn the material, but I want them to recognize when they’re using it and to say, ‘OK, yeah, I used this resource, I talked to this person, I used generative AI for this purpose,’ etc.”

Ehst said, “I think we are needing to do a little bit more work on … [asking] ‘What is the value added?’ ‘Why is there value in incorporating those tools into our lesson plans for our classes?’ … [discerning] when we use an advanced technological tool, or when an old school tool is better.”

“Now we think of that No. 2 pencil as like an old school tool, but even that gave people pause about ‘How are tools going to mediate the way that we think and communicate?’” Ehst said. “I think we’re asking those questions and we’re asking them with openness and curiosity and those are great questions to be asking.”