
Writing by hand!?: Teachers are going old-school in the fight against AI

Educators are turning back to blue books to battle the threat of artificial intelligence eroding genuine learning

A return to writing by hand (Strelciuc Dumitru/Getty Images)

I’m waiting on a call back from someone at the Roaring Spring paper company in Roaring Spring, Pennsylvania, that probably isn’t coming. I get it; they’re busy. As the school year begins, the biggest manufacturer of blue books in the United States is in very high demand. A new status quo of laptops and tablets seemed to have made those flimsy, 24-page exam books with their robin’s-egg blue covers as obsolete as inkwells. Instead, blue books are being stockpiled by educators and institutions seeking ways to redirect students from the call of ChatGPT, Claude and other large language models willing and able to do everything students need.

Since the launch of OpenAI’s ChatGPT in late 2022, researchers have been scrambling to collect data on how many students are using AI regularly, what they’re using it for and how it’s affecting their education. In a May 2025 report, the Chronicle of Higher Education estimated that 86% of students across 16 countries use AI; the figure is 56% for American college students and a whopping 92% for students in the U.K. The year-over-year increase has been dramatic: A survey of K–12 students conducted in 2024 found that their use of LLMs had doubled since the year before. And a study of 558 college students conducted by Intelligent revealed that three out of every four believe that using AI to find answers to test questions, write essays and summarize textbooks is cheating, and that about 69% do it anyway.

Teachers are calling in the cavalry, and the cavalry is blue books. The folks at Roaring Spring, a family-owned company that’s been printing paper products since 1887, are probably tired of being on the blower all day with people like me wanting to know how many units they’ve moved this month. But the company is also taking a well-deserved victory lap: A huge banner on the landing page of the Roaring Spring website enthuses, “The Blue Book is Making Headlines.” And come on — a humble exam notebook becoming one of the biggest stories in U.S. manufacturing seems like the kind of feel-good story we could all use.

The advent of widespread LLM use among students has put educators in a difficult place: On the one hand, they’re aware that these are tools that can help students do research, draft outlines and consolidate data; on the other, they also know that many students are going well beyond that. The trick is to discourage students from becoming dependent on the tools to do their work without forcing themselves to moonlight as AI cops, and for many, blue books are the first line of defense.

Very few educators seem to want to demonize AI tools wholesale; rather, what they want is for students to understand what they lose by outsourcing their thinking, writing and imagination to them. And currently, many of them feel like they’re trying to hold the ocean back with a broom. “I hate that I’m teaching from a defensive place,” admits one adjunct professor I spoke with, who preferred not to be named. “It feels hopeless. You suspect your students are using ChatGPT or Copilot for their assignments, so you run their work through AI-detection software, which is also AI, and not always accurate.” Building more in-class discussions into the schedule has helped her develop what she believes is a generally accurate AI-dar: “You start recognizing student work where different papers will have things in common — certain words, certain sentence constructions. You get a sense for it. I’m not grading on vibes.” But that doesn’t mean the work isn’t tiring and even demoralizing. “I just don’t think they care,” she says. “And I don’t want to get worn down to a place where I stop caring.”

It’s a sentiment that pervades listservs, Reddit forums and other places where classroom professionals vent their frustrations. “I’m not some sort of sorcerer, I cannot magically force my students to put the effort in,” complains one Reddit user in the r/Professors subreddit. “Not when the crack-cocaine of LLMs is just right next to them on the table.” And for the most part, professors are on their own; most institutions have not established blanket policies about AI use, which means that teachers create and enforce their own. Becca Andrews, a writer who teaches journalism at Western Kentucky University, had “a wake-up call” when she had to fail a student who used an LLM to write a significant amount of a final project. She’s since reworked her classes to include more in-person writing and workshopping, and notes that her students — most of whom have jobs — seem grateful to have that time to complete assignments. Andrews also talks to her students about AI’s drawbacks, like its documented impact on critical-thinking faculties: “I tell them that their brains are still cooking, so it’s doubly important to think of their minds as a muscle and work on developing it.”


Last spring’s bleakest read on the landscape was New York Magazine’s article, “Everyone Is Cheating Their Way Through College,” which included a number of deeply unsettling revelations from reporter James D. Walsh — not just about how widespread AI dependence has already become, but about the speed with which it is changing what education means on an empirical level. (One example Walsh cites: a professor who “caught students in her Ethics and Technology class using AI to respond to the prompt ‘Briefly introduce yourself and say what you’re hoping to get out of this class.’”) The piece is bookended with the story of a Columbia student who invented a tool that allowed engineers to cheat on coding interviews, recorded himself using it in interviews with companies, and was subsequently put on academic leave. During that time, he invented another app that makes it easy to cheat on everything. He raised $5.3 million in venture capital.

Educators are at cross-purposes with AI companies because, well, they want students to actually learn. AI companies, by contrast, want to blanket every aspect of young people’s lives with AI products. When students are asked about AI use, one of the benefits they reliably point to is time efficiency; the research and writing that LLMs let them skip is work they consider a waste of time. The problem is that the more AI can do, the more assignments and processes students might decide are a waste of time.

It’s not coincidental that the biggest booster of LLMs as a blanket good is a man who, like many a Silicon Valley wunderkind before him, dropped out of college, invented an app and hopped aboard the venture-capital train. Sam Altman has been particularly vocal in encouraging students to adopt AI tools and prioritize “the meta ability to learn” over sustained study of any one subject. If that sounds like a line of bull, that’s because it is. And it’s galling that the opinion of someone who dropped out of college — because why would you keep learning when there’s money to be made and businesses to found? — is constantly sought out for comment on what tools students should and shouldn’t be using. Altman has brushed off educators’ concerns about the drawbacks of AI use in academia and has even suggested that the definition of cheating needs to evolve.

But Altman also regularly speaks out of both sides of his mouth, enthusing to media outlets that Gen Z is “the luckiest generation in all of history,” despite confessing his own reservations about the technology to the Senate Judiciary Subcommittee on Privacy, Technology, and the Law in 2023. In encouraging regulation of AI, he warned of AI’s potential to cause “significant harm to the world,” including by generating massive amounts of disinformation: “If this technology goes wrong, it can go quite wrong.” In more recent congressional testimony, he admitted, “I worry that as the models get better and better, the users can have sort of less and less of their own discriminating process.”

Founders like Altman have told us, implicitly and explicitly, that they see money as more valuable than education, and they have a lot invested in conflating the newness of the technology with the necessity of it; framing AI as a revolution rather than a product is their stock in trade. Suggesting that unethical behavior is suddenly not really unethical because it’s in service to this revolution isn’t about what’s good for students, but about what’s good for business. (Then again, should we really be surprised when tools that would not exist without the theft of copyrighted content are used to enable and justify further unethical behavior?)

Cheat-code culture is real, and students who see their peers using AI to get assignments done in a fraction of the time they spend on research, organization and writing are likely to end up feeling like suckers. But there’s also evidence that students recognize that they learn and retain more when the process is as important as the outcome. In a Substack piece titled “Blue Books Reimagined,” Danielle Kane, a professor of Sociology at Purdue University, recalls the semester she decided to open her mind to AI and let her students use it for drafting writing assignments. “It turned out to be my worst semester of teaching,” she wrote. “Whether due to the use of AI or unchecked device usage during class, students were completely disengaged, making meaningful discussions nearly impossible.”

Meeting a fellow professor who was using blue books less as exam repositories than as classwork and process journals was transformative for Kane: “[S]tudents were encouraged to see the blue books as a creative outlet to demonstrate their mastery of course readings, ideas and practice writing in a loosely structured format . . . This step away from specific writing conventions was intended to encourage students to focus on thoughts and to trust their own ideas and creativity.” Blue books led them to understand the difference between accumulating information and deciding what information is useful to their specific needs. When class ended for the semester, many students chose to keep their blue books.

The blue-book renaissance is a Band-Aid on what educators see as much deeper and more entrenched rot, but it’s a start. And there is something satisfying about the return of blue books as a bulwark against AI tools: a David-and-Goliath moment in which a small, family-run paper company is embraced as a corrective to the noisy, showboating industry that has invaded our lives with nonconsensual force. Maybe the blue book is a tangible reminder that technology in the classroom isn’t an either/or proposition, but a both/and one. One Reddit user sums it up: “I went back to blue book exams last year and am so happy I did. My students learn and they appreciate having to learn. And it’s like . . . you know you could learn all the time if you didn’t use ChatGPT, right?”

By Andi Zeisler

Andi Zeisler is a Senior Culture Writer at Salon. Find her on Bluesky at @andizeisler.bsky.social
