29 Nivôse 234
11 minutes / 2320 words

A Professor In Every Pocket

The scale of the cheating crisis in higher education is staggering. Depending on the survey, between 60% and 92% of college students now regularly use AI for their coursework. Professors are ill-equipped to respond, and administrators are broadly turning a blind eye to the entire situation. Treating this as wholly the fault of AI is, however, a grave mistake.

The truth is that education has been broken for a long time, and AI was the accelerant that made the bonfire bright enough to see from a distance. In a system where degrees now primarily function as an expensive IQ screening mechanism, where learning is wholly secondary to certification, and where industrial-age teaching methods clash with information-age realities, AI has finally revealed the plastered-over fractures that have been growing for decades.

This transformation forces us to confront three difficult realities: the collapse of traditional assessment, the fundamental mismatch between educational delivery and modern needs, and the urgent necessity of reimagining learning for an AI-enabled world.

The Credential-Knowledge Divorce

Put simply, the Academy has shifted from a place of learning to a place of job-skill accreditation. Students are not there to learn; they are there to get an “employable degree.” Most students dream of a simple, middle-class future and see everything that happens at university as a barrier between them and the magic piece of paper that makes this lifestyle possible.

This happened because employers eliminated on-the-job training for nearly all roles. Even the simplest positions now require a college degree. The credential has become a proxy for trainability, and the actual education is incidental.

Bryan Caplan made this argument rigorously in The Case Against Education: approximately 80% of the return to education is signaling, not skill-building. The degree signals intelligence, conscientiousness, and conformity to employers. The content of the education is largely irrelevant; what matters is that you finished. This is why the final year of college pays more than the first three years combined (the “sheepskin effect”), why students take easy courses over challenging ones, and why adults forget almost everything they learned in school within a few years.

Classes also cost far more than they did even five years ago, creating a strong economic incentive never to withdraw from a difficult course, let alone fail one. Many students rely on scholarships with GPA requirements to continue receiving funding. International students face visa implications for academic failure. The stakes are existentially high for students who fundamentally do not care about the material.

This combination of expensive university plus genuine indifference to content creates a prime environment for corner-cutting. Add administrators obsessed with the bottom line, gleefully cutting instructor hours and entire departments, and the incentive structure becomes clear. If your administration doesn’t care about the classics department, why would the students?

To be clear, cheating was rampant well before ChatGPT. Studies consistently found that 60-70% of students admitted to cheating behaviors even before generative AI became widely available. AI simply revealed the systemic issue:

Most students in university do not want to be there, and should not be.

This is not to say most students are dumb or malicious. It is simply to say that they are not suited to academia and would be better served by entering a stable career directly after high school. If education is merely a means to an end for students, this problem will remain perpetually unsolved.

Detection Has Failed

When ChatGPT dropped in late 2022, institutions panicked. The response was predictable: let’s catch the cheaters. Billions were poured into AI detection tools, proctoring software, and surveillance infrastructure.

It didn’t work.

The most comprehensive independent evaluation of AI detection tools, published in the International Journal for Educational Integrity, tested fourteen detection tools: twelve publicly available platforms and two commercial systems. The findings were unequivocal: all tools scored below 80% accuracy, and only five exceeded 70%. The study concluded that the available detection tools are “neither accurate nor reliable” and are biased toward classifying output as human-written.

The performance gets worse under realistic conditions. When students paraphrase AI output even slightly, detection accuracy drops significantly. Some detectors have been found completely ineffective against paraphrased text.

This isn’t a bug waiting for a patch. Researchers at the University of Maryland demonstrated mathematically that for sufficiently capable language models, even optimal detectors can only marginally outperform random classification. As AI models improve and their outputs converge with human writing patterns, the statistical signatures detection relies upon diminish. Detection is not temporarily inadequate. It is fundamentally constrained.
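The shape of that result is worth spelling out. In my notation (a paraphrase of the published bound, not a derivation), let H be the distribution of human text, M the distribution of model text, and D any detector. The best achievable AUROC is capped by the total variation distance between the two distributions:

\[
\mathrm{AUROC}(D) \;\le\; \frac{1}{2} + \mathrm{TV}(\mathcal{M},\mathcal{H}) - \frac{\mathrm{TV}(\mathcal{M},\mathcal{H})^{2}}{2}
\]

As model output converges on human writing, TV(M, H) shrinks toward zero and the cap collapses to 1/2: a coin flip.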

Beyond mere unreliability, these tools exhibit systematic bias that transforms their deployment into a civil rights concern. A landmark study found that GPT-based detection tools misclassified 61.3% of writing by non-native English speakers as AI-generated. The mechanism is algorithmic: detection systems measure “perplexity,” meaning how predictable word choices are. Non-native speakers naturally produce text with less linguistic diversity and more predictable word choices. The system penalizes exactly the linguistic patterns that characterize competent second-language writing.
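For the curious, here is a minimal sketch of perplexity scoring, the signal this family of detectors builds on. The model choice (GPT-2 via Hugging Face transformers) and the example sentence are illustrative assumptions on my part, not any vendor’s actual pipeline:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Exponentiated mean negative log-likelihood of the text under the model.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its own cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Crude detectors treat low perplexity (predictable word choices) as evidence
# of AI authorship. Competent second-language writing is often predictable in
# exactly this way, which is where the false positives come from.
print(perplexity("The results of the experiment were consistent with the hypothesis."))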

The institutions know this. Cornell University explicitly states that they “do not recommend using current automatic detection algorithms” to find academic integrity violations involving generative AI, given their unreliability. The University of Pittsburgh has declined to endorse any AI detection tools, citing “substantial risk of false positives.” Yet Broward County Public Schools in Florida is spending over $550,000 on a three-year contract with Turnitin anyway.

The institutions continue investing in detection because the alternative requires confronting uncomfortable truths about what education has become.

The Cheating Industry

While institutions chase detection, entrepreneurs have noticed the demand signal. The market response has been predictable and ruthless.

In April 2025, a 21-year-old Columbia student named Roy Lee raised $5.3 million in seed funding for Cluely, a tool explicitly marketed to “cheat on everything.” Lee had been suspended from Columbia after using an earlier version of the tool, originally called Interview Coder, to land internship offers from Amazon, Meta, and TikTok. Rather than hide in shame, he posted the entire saga on X, dropped out, and launched the company.

Cluely is a covert AI application that overlays the screen during exams, interviews, or any other assessment. It analyzes what’s shown and spoken, then feeds real-time answers back to the user. The tool bypasses keystroke tracking, disguises tab switching, and stays invisible during screen sharing. By launch, it had 70,000 users. By October, Cluely claimed over $3 million in annual recurring revenue.

The cheating tools are proliferating because the market rewards them. Students need credentials. Credentials require passing assessments. AI makes passing assessments trivially easy. The economic logic is irrefutable; the moral hand-wringing is ineffective.

Here’s the part no one wants to acknowledge: if everyone cheats, credential collapse follows. When hiring managers can no longer trust that a degree indicates capability, the degree becomes worthless. Cluely’s founders seem to believe they’re surfing a wave of technological inevitability, but they may be accelerating the destruction of the very system their product exploits. If the signal becomes noise, the signaling game ends.

Why the “Obvious” Solutions Don’t Work

When I discuss this with people outside the current education environment, several “why don’t you just” solutions are offered. I’d like to walk through the most popular and highlight why they are simply not feasible.

Why don’t you force everyone to take pen and paper exams?

Students hate it, and professors hate it even more. Class sizes have ballooned over the past several decades. For a professor managing 250 students per section (which is common in introductory courses), grading handwritten essay exams becomes a logistical nightmare. Many would simply refuse.

In the short term, handwriting requirements mean students mass-applying for accessibility accommodations that let them keep using their laptops. In the long term, word gets around about the requirement and students simply stop enrolling in the class. Classes that go untaken get cut, and teaching staff get fired.

Why don’t you just force them to use the computer lab instead of their own devices?

Bookable computer labs haven’t been a thing for years; frankly, many university libraries don’t even have books in them anymore. The modern university library is a study lounge with wifi, not a controlled testing environment.

Professors should just use AI detection software and cull any students whose work keeps getting flagged.

AI detection software simply does not work, as documented above. False positive rates can reach 50% for some platforms. Over 61% of non-native English writing gets flagged. If we could reliably catch every student using AI for coursework (which we can’t), we’d end up culling 60-90% of the cohort. Administrators obviously don’t love the idea of losing that much per-student revenue.

There should simply be a zero-tolerance policy for cheating.

See above. The numbers don’t work. Zero tolerance against 90% of your student body is institutional suicide.

Well, administrators need to change then.

Haha. Yeah.

A Professor in Every Pocket

Here’s how this plays out.

The old model assumed scarcity: scarce access to experts, scarce access to information, scarce opportunities for feedback. University was where you went to get these scarce resources. The new model recognizes abundance: infinite access to information, on-demand expert-level feedback, conversation available 24/7.

Every student now has a professor in their pocket who never gets tired of them. Office hours are always available. The question “how do I learn about X?” can be immediately answered with “let’s start: what do you already know?” Discovery happens through conversation with the interface.

The failure mode of self-guided study has always been getting stuck. You hit a concept you don’t understand, you have no one to ask, you lose momentum, you quit. The infinite-patience tutor solves this. You can ask the same question seventeen different ways until one explanation clicks. You can request analogies from domains you already understand. You can say “I’m completely lost” without embarrassment.
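The mechanics here are almost embarrassingly simple. As a toy illustration, the entire “pocket professor” loop fits in a few lines; this sketch assumes an OpenAI-compatible chat API, and the model name and system prompt are placeholder choices of mine, not a recommendation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_PROMPT = (
    "You are a patient tutor. Start by asking what the student already knows. "
    "Explain with analogies from domains they understand, and never show "
    "impatience, no matter how many times a question is rephrased."
)

history = [{"role": "system", "content": TUTOR_PROMPT}]

while True:
    question = input("you> ")
    if not question:
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"tutor> {answer}")

Everything the essay describes, including asking the same question seventeen different ways without embarrassment, is just more turns appended to that history list.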

Discovery, which universities claim as their unique value proposition, now happens through dialogue. The AI says “have you considered how this connects to X?” and suddenly you’re down a rabbit hole you didn’t know existed. The serendipitous encounter with ideas, the thing professors claim you can only get in a seminar room, now happens in your bedroom at 3 AM.

This changes the fundamental value proposition of education. If information is free and tutoring is infinite, what exactly is the university selling?

The honest answer: credentials and networking. The learning was always incidental for most students; now the pretense can drop entirely.

Two Paths Forward

For students who genuinely don’t care about learning (the majority), the path forward is direct job training. The bootcamp model pioneered in software development during the 2010s has already proven this works. Trade schools, healthcare certification programs, and corporate training partnerships all follow the same logic: specific training paired directly with employers looking to hire those skills.

Students get a focused, two-year education model leading directly into a high-demand job. Skill verification over time-served metrics is key, as is cost efficiency. This model succeeds because both parties are honest about what they want: employers want capable workers, students want stable jobs. No one pretends the goal is enlightenment.

For knowledge-seekers, the transformation is more interesting. University was never really the right place for them anyway; autodidacts have always found ways to learn. But the AI tutor changes the game.

The challenge for knowledge-seekers is structure and accountability. Without a curriculum, without deadlines, without someone checking your work, it’s easy to meander forever without mastery. This is where light-touch human facilitation adds value: study groups, mentorship relationships, periodic assessment to ensure you actually learned what you think you learned.

What emerges is something like a distributed learning community: self-directed study supported by AI tutoring, with human curation for discovery and human accountability for completion. Credentials could come from demonstrated competency rather than seat time. The expensive campus infrastructure becomes optional rather than mandatory.

The Stakes

The institutions that continue investing in detection alone will find themselves administering systems that harm the students they’re meant to serve: disproportionately flagging non-native speakers while sophisticated users evade detection entirely.

The question facing institutions is no longer whether to permit AI use. That question has been answered by the students who have already integrated these tools into their work. At least 13.5% of biomedical abstracts now show evidence of AI processing. Up to 22.5% of computer science papers show evidence of AI modification.

The question now is whether AI use will be pedagogically integrated or covertly substituted.

The students who genuinely want to learn will be fine. They now have tools their predecessors couldn’t imagine: an infinitely patient tutor who can explain concepts seventeen different ways, who remembers where they got stuck last time, who can connect disparate fields in ways no single professor could. The AI doesn’t replace the desire to learn; it amplifies it.

The students who never wanted to learn will game whatever system exists, as they always have. The institutions caught between these groups will either adapt or die.

The path through is honesty about what education is actually for. For some students, it’s job access: give them efficient, honest job training and stop pretending otherwise. For others, it’s genuine learning: give them the tools and communities that support self-directed study, freed from the artificial constraints of semester schedules and seat time.

The academy as we knew it is ending. Something better can replace it, if we’re willing to let go of the pretense.

