ChatGPT has been making waves in the education sector for months, as professors grapple with the effects the AI will have on their teaching and grading models.
Initially, colleges in the area defaulted to plagiarism policies, saying that work written by AI would be considered plagiarized. But this is a short-term fix, says Wesley Wildman, a professor in Computing and Data Sciences at Boston University.
“We actually have to find a way to incorporate generative AI into what we’re doing in the university education process,” he said.
Administrators are still trying to come up with alternative, long-term solutions to the issue.
Wildman teaches a class called Data, Society, and Ethics. His students developed guidance for instructors and their fellow students surrounding ChatGPT and similar AI. In March, the policy was unanimously adopted by the department, setting a new standard for grading, said Azer Bestavros, a professor and the associate provost for Computing and Data Sciences.
The policy states that, when given permission, students should be allowed to use AI on assignments. It also puts the burden on students to check whether their work could be flagged for plagiarism before submitting it.
Students are also expected to credit AI whenever it is used.
Bestavros called the policy a “baseline” for instructors and said the guidance is optional. It’s up to individual instructors to decide how they wish to incorporate AI into the classroom.
“This is a new technology that’s now disrupting how we teach and how we evaluate and assess learning outcomes,” he said, adding that the department understands “that different courses may have different constraints.”

BU at large has not changed its policy. And Associate Provost for Digital Learning and Innovation Chris Dellarocas said instituting official overall guidance might not be the best approach.
“I don’t think it’s prudent to have a university-wide policy for something which is changing so rapidly and whose exact use and importance and significance we don’t yet fully comprehend,” he said, adding that “there’s a lot of fluidity right now.”
Possibly the best step for those in higher education to take, Dellarocas added, is to experiment with the new tool and the policy surrounding it, then see what happens.
Other schools have not implemented any ChatGPT-specific grading practices, but there has been conversation about it.
An Emerson College spokesperson told Boston.com over email that the school has no AI-specific policy right now, but that its plagiarism policy encompasses work done by AI.
At Harvard, officials confirmed ongoing conversations about ChatGPT. They did not elaborate.
Northeastern University’s Vice President of Communications Renata Nyul told Boston.com in an email that the university was not willing to share details about any conversations regarding ChatGPT, but that it “takes academic integrity seriously and enforces a range of policies outlined in its Faculty Handbook and Student Code of Conduct.”
“We are continuously monitoring technological advancements that could be used to generate fraudulent work, and we make enhancements to our policies and practices as needed,” Nyul said. “Disclosing the details of these enhancements would render them less effective.”
Wildman said using a strict plagiarism policy to tackle AI concerns leaves holes in grading.
“How would you ever prove that plagiarism had occurred, when each time you give the same prompt to these generative AI tools you get different text?” he said.
Tools to detect AI are often unreliable, Wildman added, which can cause improper crackdowns. He said that students who have nothing to fear may “wind up on the wrong side of a false positive.”
Another variable in the ChatGPT conversation is student impact. For some, Bestavros said, AI can be a tool to assist with difficult assignments. He compared ChatGPT to Matlab, which students can use to help solve mathematical problems. BU offers a free license for the program to all faculty, staff, and students.
He also compared ChatGPT’s introduction to that of a calculator. When it was first introduced, he said, people were concerned that the world would forget how to manually multiply and divide. This has not been the case.
“What calculators allow us to do is not to waste our time doing all these manual things,” Bestavros said. “And as a result, actually, we up our game.”

Wildman said his students were interested in using the AI to assist them, but he stressed that the students did not want the tool to take over their education.
“They want training in how to use it well,” he said. “They also wanted to make sure that their skillsets weren’t damaged because generative AI gives them a lot of shortcuts. They still want to know how to write.”
Bestavros said it’s easy to argue that professors are affected adversely by the introduction of AI because it makes grading harder. Students, though, he said, are also seeing a direct impact.
“These students are the ones who are gonna go out in the workforce that is going to be disrupted by this new technology,” he said. “So in a way, this is not just about the class they’re taking, it’s really about how we are going to navigate this new world where technology can do things that may be perceived as creative, such as writing, or other forms of expression.”
Adjustment is necessary on both sides of the spectrum, Wildman said. His students, he added, were particularly intent on making sure that professors would be open to the idea of AI in the classroom.
“They really didn’t want professors to be all stodgy and attached to their centuries-old pedagogies about using writing to learn how to think,” he said. “They wanted us to be a bit more adaptable, a bit more agile.”