
The Radnor school board on Tuesday explicitly banned the use of artificial intelligence to create sexualized images of another person, after a student made deepfakes of female classmates.
But how the changes will be enforced remains an open question, given that the school district says it has limited authority over student conduct off campus.
“These policies identify the issue, but they do not yet ensure accountability or protection,” said Audrey Greenberg, whose daughter was victimized.
The policy changes come as schools increasingly grapple with students who make so-called deepfakes, using AI to create nude or otherwise inappropriate images. The issue has also emerged in the Council Rock School District and in Lancaster, where two boys who made hundreds of images of female classmates at Lancaster Country Day School were sentenced last month to probation and ordered to pay $12,000 in restitution.
Pennsylvania lawmakers have defined AI-generated explicit images of minors as child sexual abuse material, and have also moved to require school staff to report those images as child abuse.
Debate around Radnor’s response to deepfakes erupted after freshman girls were told in December that a male classmate had created sexual videos of them.
A police investigation didn’t turn up any videos, though parents said police didn’t subpoena social media companies or the app the boy admitted to using.
They also objected to the district’s framing of the incident: While police charged the boy with harassment, a January message to district families said that a student had created images of classmates that “move and dance,” and that police had not found evidence of “anything inappropriate.”
Policy changes approved Tuesday place new restrictions on students' use of AI.
“The non-consensual use of generative artificial intelligence (AI) to create, modify, distribute, or solicit sexualized, indecent, or intimate content involving another person is strictly prohibited and constitutes sexual harassment,” the new language reads.
But it isn’t clear whether the policy revisions would have changed the district’s handling of the December deepfakes.
A provision added to the bullying policy reads: “Examples of the specific circumstances in which the District has jurisdiction over a student’s conduct that takes place off campus will be outlined in the accompanying Administrative Regulation.” The revised harassment policy also includes language specifying that the district will investigate conduct that occurs “in the district’s education program or activity.”
The administrative regulations weren’t available for review Tuesday. A district spokesperson did not immediately respond to a question Wednesday about when they would be made public.
Greenberg questioned the lack of clarity around when Radnor would regulate off-campus activity.
“In cases like this, where the harm shows up in school every day, the distinction matters,” she said.
Greenberg said she appreciated the changes, including language noting that AI deepfakes could be criminal, but said the policies failed to specify what support victims could expect.
Another parent of a victim, Morgan Dorfman, said that Radnor’s response to the deepfakes sent “the wrong message … that unless something is seen in school, it doesn’t count. That if something is deleted, it never happened.”
“That is not how you protect students,” Dorfman told the board.