Feedback Loops: How Verifying Against Solutions Changed My AI Workflow
My control systems homework came with partial solutions. I had a choice: peek first, or use them to verify my AI-assisted work after the fact.
I chose verification. The workflow was simple: ask Claude to solve each problem independently using transfer function analysis and root locus methods, compare results against the provided solutions, identify differences, then apply those learnings to the remaining problems. A homework assignment became a feedback loop where mistakes turned into teaching moments.
The Verification Workflow in Practice
The key instruction was explicit:
"Please complete the homework first without looking at the partial solutions, then compare where the partial solutions and my solutions differ and come up with a plan to fix it for that problem."
This forced Claude into a learning posture rather than a copying posture. When the AI discovers its own mistakes, it produces better explanations of why something went wrong, which is exactly what I needed to actually learn the material.
One problem asked for the steady-state error of a unity feedback system with a Type 1 plant. Claude's initial solution applied the final value theorem correctly but used the wrong error constant formula, treating the system as Type 0. Comparing against the solution revealed the gap: the number of free integrators in the loop determines the system type, which then determines which error constant (position, velocity, or acceleration) governs steady-state behavior.
That distinction, something I'd glossed over in lecture notes, stuck after seeing it fail in practice.
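To make the distinction concrete, here is a minimal sympy sketch with a hypothetical Type 1 plant (the transfer function and gain are illustrative, not from the actual assignment):

import sympy as sp

s = sp.symbols('s')
G = 10 / (s * (s + 2))  # hypothetical Type 1 plant: one free integrator

# Unity feedback error: E(s) = R(s) / (1 + G(s))
# Final value theorem: e_ss = lim_{s -> 0} s * E(s)
e_step = sp.limit(s * (1/s) / (1 + G), s, 0)     # step input, R(s) = 1/s
e_ramp = sp.limit(s * (1/s**2) / (1 + G), s, 0)  # ramp input, R(s) = 1/s^2

print(e_step)  # 0   -> a Type 1 system tracks a step with zero error
print(e_ramp)  # 1/5 -> ramp error is 1/Kv, with Kv = lim s*G(s) = 5

Adding one more free integrator (Type 2) would drive the ramp error to zero as well, and the acceleration constant would take over as the governing one.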
The output format mattered too. I had Claude generate HTML that gets printed to PDF via Chrome's headless mode:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
--headless --disable-gpu --print-to-pdf="hw8.pdf" \
--no-margins solutions.html
This avoided the LaTeX rabbit hole while producing clean, submission-ready documents. Sometimes the simplest approach is the right one.
Debugging a Ghost Instance on AWS
The same verification pattern surfaced later that day while debugging my Minecraft server's auto-shutdown feature. The symptom: the server should have been running, but SSH connections timed out.
Checking the CloudFormation outputs gave me an instance ID:
aws cloudformation describe-stacks --stack-name minecraft-server \
--query "Stacks[0].Outputs[?OutputKey=='InstanceId'].OutputValue" \
--output text
# Returns: i-0be2a78206b22947e
But querying EC2 directly told a different story:
aws ec2 describe-instances --instance-ids i-0be2a78206b22947e
# An error occurred (InvalidInstanceID.NotFound):
# The instance ID 'i-0be2a78206b22947e' does not exist
The CloudFormation stack showed CREATE_COMPLETE. Expected state: instance exists. Actual state: instance gone. Same verification pattern as the homework: compare expected against actual, investigate the difference.
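The comparison is also easy to script. Here is a rough boto3 sketch of the expected-versus-actual check, mirroring the two CLI commands above (error handling is simplified; the stack name and output key match my setup):

import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")
ec2 = boto3.client("ec2")

# Expected state: the instance ID that CloudFormation claims exists
outputs = cfn.describe_stacks(StackName="minecraft-server")["Stacks"][0]["Outputs"]
instance_id = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "InstanceId")

# Actual state: what EC2 says about that instance
try:
    reservation = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"][0]
    print(instance_id, reservation["Instances"][0]["State"]["Name"])
except ClientError as err:
    # A terminated instance eventually disappears: InvalidInstanceID.NotFound
    print(instance_id, err.response["Error"]["Code"])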
Digging into the CloudFormation template revealed the shutdown mechanism: a Lambda function triggered by CloudWatch alarms when player count drops to zero. The function was supposed to stop the instance, preserving it for later restart. But an earlier refactor had changed stop_instances to terminate_instances without updating the surrounding logic.
The auto-shutdown had worked. It just worked too well. Instead of a stoppable instance waiting for the next play session, I had a terminated instance and a CloudFormation stack pointing at nothing.
The fix was straightforward once identified: revert to stop_instances and add a check preventing termination of already-stopped instances. Finding it required the same discipline as the homework: don't assume the system matches its declared state; verify against reality.
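For reference, a minimal sketch of what the corrected handler looks like. This assumes the instance ID arrives through an environment variable; the variable name and return shape are my own, not the stack's:

import os
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    instance_id = os.environ["INSTANCE_ID"]
    state = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]["State"]["Name"]

    # Only stop a running instance: never terminate, and skip
    # instances that are already stopping or stopped.
    if state == "running":
        ec2.stop_instances(InstanceIds=[instance_id])
    return {"state": state}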
The Learning Happens in the Gap
Both situations followed the same pattern: form an expectation, check it against ground truth, learn from the delta.
For homework, the expectation was "Claude's solution is correct" and the ground truth was the partial solutions. For the infrastructure bug, the expectation was "CloudFormation says the instance exists" and the ground truth was the EC2 API.
Claude's wrong error constant formula taught me more about system types than the lecture did. The terminated-instead-of-stopped bug taught me to audit Lambda function changes more carefully. In both cases, the mismatch created the lesson.
What I'll Do Differently
Next time I have reference solutions available, whether for homework, debugging, or anything else, I'll build verification into the workflow from the start rather than reaching for it as a fallback. The extra structure creates the feedback loops where actual understanding develops.
The homework took longer this way. I could have copied the solutions and finished in twenty minutes. Instead, I spent an hour and a half working through problems, comparing, fixing, and re-solving. But I'll remember the steady-state error formulas now.
Thatâs the trade-off worth making.