The 90% Problem: Why AI-Generated Code Passed Every Check Except Understanding
"The TAs and I noticed you are using AI to generate your lab reports… There has been a lot of incorrect and confusing material in your lab submissions. For example, postlab 2 with your error analysis, I'm not sure where this came from."
That email from Professor Humann arrived while I was in the middle of yet another Claude Code session, asking the AI to generate MATLAB scripts for my Feedback Control Systems homework.
The timing was brutal. These are core courses in my mechanical engineering program: Feedback Control Systems, Motion Control, Advanced Mechanisms. Failing to demonstrate competence here doesn't just mean a bad grade. It means graduating without actually understanding the material I'll need as a working engineer.
What I Thought I Was Doing Right
I had been diligent about transparency. Every submission included an AI disclaimer. I cited Claude Code as a tool. I thought disclosure was the ethical obligation, and I was meeting it.
The output impressed me too. Claude generated clean HTML reports with MathJax rendering. It derived transfer functions for PD, PI, and PID controllers. It produced C++ code for IIR filters running on embedded hardware. The work looked professional, better formatted than anything I'd submit by hand.
Here's a sample from my Motion Control pre-lab:
// First Order IIR Filter
// y(n) = 0.832448·y(n-1) + 0.167552·x(n)
double A = 0.832448;
double B = 0.167552;
double y_prev = 0.0;
// In the main loop:
double x_n = (adc_value / 65535.0) * 20.0 - 10.0; // Convert to voltage
double y_n = A * y_prev + B * x_n;
y_prev = y_n;
The filter coefficients are mathematically correct for a 20 Hz cutoff at 750 Hz sampling. The code compiles. The math checks out.
But the lab provided a specific template with predefined function signatures and a particular structure for interacting with the Sensoray 826 board. My submission ignored all of it. I didn't use the template because I didn't realize it existed: I'd asked Claude to generate a solution from the lab description without carefully reading what infrastructure was already provided.
What Actually Went Wrong
The errors fell into categories I couldn't see because I hadn't done the underlying work.
Missing context. In the Feedback Control homework, I asked Claude to derive transfer functions from block diagrams. Problems B.3 and B.4 came back wrong, and I had to return later and ask Claude to fix them. The issue was a misread of the feedback path topology. If I had traced through the block diagram by hand first, I would have caught this in seconds. Instead, I accepted output I couldn't verify.
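Tracing it by hand is fast because the check is a one-line formula. For a loop with forward path G(s) and feedback path H(s), negative feedback gives:

```
T(s) = G(s) / (1 + G(s)·H(s))
```

Misread the feedback sign and the denominator flips to 1 - G(s)·H(s), the kind of topology error that's obvious on paper and invisible in polished output.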
Wrong abstraction level. The Motion Control load cell code "wasn't anything close to what was expected," according to my professor. Claude generated a technically valid approach to reading a load cell, but the lab wanted us to use specific library functions and follow a particular signal flow. The AI optimized for correctness in a vacuum. The assignment required correctness within a constrained framework.
Plausible nonsense. The error analysis my professor mentioned? I still don't know exactly what went wrong. Claude generated statistical formulas and uncertainty propagation that looked reasonable. But "looked reasonable" isn't the same as "matched the methodology taught in class." I couldn't catch the error because I didn't know what correct looked like.
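For reference, knowing "what correct looks like" here starts with the standard first-order propagation formula most lab courses teach: for a result f(x1, …, xn) with independent uncertainties δxi,

```
δf = sqrt( (∂f/∂x1 · δx1)² + … + (∂f/∂xn · δxn)² )
```

Whether Claude's version matched this, or the specific method from lecture, is exactly the judgment I couldn't make.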
Why AI Couldn't Catch This
AI tools like Claude excel at generation. Given a problem description, they produce syntactically valid, often mathematically sound output. But validation requires context they don't have:
- What template did the instructor provide?
- What methodology was taught in lecture?
- What does "correct" mean for this specific assignment?
- What level of explanation demonstrates understanding versus parroting?
This is the 90% problem. Claude got me most of the way to a complete assignment. The last 10% required judgment I hadn't developed because I'd outsourced the foundational work: verifying against instructor expectations, checking that my approach matched the taught methodology, ensuring I actually understood what I submitted.
That 10% is where learning happens. It's also where grades get assigned.
The Uncomfortable Part
With Claude's help, I drafted a response to my professor:
"Moving forward from Prelab 4 onward, I will complete all lab reports, code, and calculations entirely on my own without AI assistance. I will follow the lab instructions more carefully and make sure my submissions align with what is expected."
Yes, I used AI to help write an apology email about over-using AI. I noticed the irony when I was doing it. I did it anyway.
That choice reveals something I'm still working through. The habit of reaching for AI assistance is deeply ingrained now. Even when composing a two-paragraph email, my instinct was to ask for help with phrasing. That instinct is exactly what my professor's email was pushing back against: not the tool itself, but the dependency that prevents me from developing my own competence.
What I'm Actually Changing
Intentions are cheap. Here's the concrete workflow I'm implementing:
- Hand-first for new concepts. Before asking Claude anything about a problem, I work through it on paper. Derive the transfer function. Trace the block diagram. Write pseudocode for the algorithm. This creates the mental model I need to evaluate AI output.
- Template audit. Before generating any code, I read all provided materials and identify what infrastructure already exists. Function signatures, expected file structure, required library functions: all documented before I write a single line.
- AI for debugging, not drafting. Once I have my own solution, even a broken one, I can use AI to help identify specific errors. "Why does this integral wind up?" is a different question than "Write me a PID controller."
- Explain before submitting. If I can't explain every line without referring back to Claude's output, it doesn't go in.
What Happens Next
I don't know if this approach will work.
The habits are strong. The pressure is real: these are difficult courses with significant workloads, and AI makes the impossible feel manageable. There's a reason I reached for these tools in the first place.
Professor Humann hasn't responded yet. There may be consequences beyond the warning.
What I do know is that I've been optimizing for the wrong metric. Completed assignments aren't the goal. Understanding is. And understanding doesn't come from reading AI output; it comes from the struggle I've been avoiding.
The remaining assignments won't be as polished. But they'll be mine, and I'll be able to defend every line.
That uncertainty feels more honest than anything I've submitted this semester.