Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex in Westfield, NJ, with a scoreboard outside proudly welcoming visitors to the “Home of the Blue Devils” sports teams.
But it wasn’t business as usual for Dorota Mani.
In October, some 10th-grade girls at Westfield High School – including Ms. Mani’s 14-year-old daughter, Francesca – notified administrators that boys in their class had used artificial intelligence software to fabricate sexually explicit images of them and were circulating the fakes. Five months later, the Manis and other families say, the district has done little to publicly address the doctored images or update school policies to prevent exploitative AI use.
“It appears that the Westfield High School administration and the district are engaging in a master class to make this incident go away,” Ms. Mani, a local preschool founder, told board members during the meeting.
In a statement, the school district said it opened an “immediate investigation” upon learning of the incident, immediately notified and consulted with police and provided group counseling to the sophomore class.
“All school districts are grappling with the challenges and impact of artificial intelligence and other technologies available to students anytime, anywhere,” Westfield Public Schools Superintendent Raymond González said in the statement.
Blindsided last year by the sudden popularity of AI-powered chatbots like ChatGPT, schools across the United States rushed to crack down on text-generating bots in an effort to prevent students from cheating. Now a more disturbing AI image-making phenomenon is shaking up schools.
Boys in several states have used widely available “nudification” apps to transform real, recognizable photos of their clothed classmates – shown attending events such as school dances – into graphic, convincing AI-generated images of the girls with exposed breasts and genitalia. In some cases, the boys shared the fake images in the school lunchroom, on the school bus or through group chats on platforms such as Snapchat and Instagram, according to school and police reports.
Such digitally altered images – known as “deepfakes” or “deepnudes” – can have devastating consequences. Child sexual exploitation experts say the use of nonconsensual AI-generated images to harass, humiliate and intimidate young women can harm their mental health, reputation and physical safety, as well as jeopardize their college prospects and careers. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic AI-generated images of identifiable minors engaged in sexual conduct.
Yet the misuse of AI tools by students in schools is so new that some districts seem less prepared than others to address it. That can leave students vulnerable.
“This phenomenon has come on very suddenly and may be catching a lot of school districts unprepared and unsure of what to do,” said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who writes about legal issues related to computer-generated child sexual abuse imagery.
At Issaquah High School near Seattle last fall, a police detective investigating complaints from parents about explicit AI-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to the police, according to a report by the Issaquah Police Department. The school official then asked “what she should report,” the police document said, prompting the detective to inform her that schools are required by law to report sexual abuse, including possible child sexual abuse material. The school subsequently reported the incident to Child Protective Services, the report said. (The New York Times obtained the police report through a public records request.)
In a statement, the Issaquah School District said it spoke with students, families and police as part of its investigation into the deepfakes. The district also “shared our empathy,” the statement said, and provided support to affected students.
The statement added that the district had reported the “fake, AI-generated images to Child Protective Services out of an abundance of caution,” noting that, “per our legal team, we are not required to report fake images to the police.”
At Beverly Vista Middle School in Beverly Hills, California, administrators contacted the police in February after learning that five boys had created and shared explicit AI-generated images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California’s education code prohibited it from confirming whether the expelled students were the students who had made the images.)
Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not permit students to create and circulate sexually explicit images of their peers.
“That is extreme bullying when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and violative” to the girls and their families. “It’s something we absolutely will not tolerate here.”
Schools in the small, affluent communities of Beverly Hills and Westfield were among the first to publicly acknowledge deepfake incidents. The details of the cases – described in district communications with parents, school board meetings, legislative hearings and court filings – illustrate the variability of school responses.
The Westfield incident began last summer when a high school boy sent a friend request on Instagram to a 15-year-old classmate who had a private account, according to a lawsuit filed against the boy and his parents by the young woman and her family. (The Manis said they are not involved in the lawsuit.)
After accepting the request, the student copied photos of her and several other classmates from their social media accounts, court documents state. He then used an artificial intelligence app to construct sexually explicit, “fully identifiable” images of the girls and shared them with classmates through a Snapchat group, court documents say.
Westfield High began investigating in late October. While administrators quietly took some boys aside for questioning, Francesca Mani said, they summoned her and other 10th-grade girls who had been subjected to the deepfakes to the school office, announcing their names over the school intercom.
That week, Mary Asfendis, the principal of Westfield High, sent an email to parents alerting them to “a situation that resulted in widespread misinformation.” The email went on to describe the deepfakes as a “very serious incident.” It also said that, despite students’ concerns about possible image-sharing, the school believed that “any created images have been deleted and are not being circulated.”
Dorota Mani said Westfield administrators had told her that the district suspended the male student accused of fabricating the images for a day or two.
Soon after, she and her daughter began speaking out publicly about the incident, urging school districts, state lawmakers and Congress to enact laws and policies specifically banning explicit deepfakes.
“We need to start updating our school policy,” Francesca Mani, now 15, said in a recent interview. “Because if the school had AI policies, then students like me would have been protected.”
Parents, including Dorota Mani, also filed harassment complaints against Westfield High last fall over the graphic images. At the March meeting, however, Ms. Mani told school board members that the high school had not yet given parents a formal report about the incident.
Westfield Public Schools said it could not comment on any disciplinary action due to student confidentiality. In a statement, Dr. González, the superintendent, said the district is stepping up its efforts by “educating our students and establishing clear guidelines to ensure these new technologies are used responsibly.”
Beverly Hills schools have taken a firmer public stance.
When administrators learned in February that eighth-grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old classmates, they quickly sent a message – subject line: “Appalling Misuse of Artificial Intelligence” – to all district parents, staff members, and middle and high school students. The message urged community members to share information with the school to ensure that students’ “disturbing and inappropriate” use of AI “stops immediately.”
It also warned that the district was prepared to impose severe punishment. “Any student found to be creating, disseminating or in possession of AI-generated images of this nature will face disciplinary actions,” including a recommendation for expulsion, the message said.
Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because the abuse of AI was making students feel unsafe at school.
“You hear a lot about physical security in schools,” he said. “But what you don’t hear about is this invasion of students’ personal, emotional safety.”