When Your Transcript System Captures Everything Except the Conversation

{
  "type": "unknown",
  "parameters": {}
}

Repeated. Over and over. Dozens of times.

I built a transcript capture system to preserve my Claude Code sessions—the tool calls, the file reads, the debugging back-and-forth. Today I discovered it’s been faithfully recording that work happened while capturing nothing about what work happened.

The timestamps are there. The tool calls occurred. But somewhere between Claude executing commands and my transcript system capturing them, the actual content evaporated. I can see that something happened at 12:42:08, then again at 12:42:12, then 12:44:08—but what? The data structure knows a function was called. It just doesn’t know which one, or what it did.

The Ghost of Real Work

I spent the morning implementing IIR digital filters for a motion control lab. Real work happened—filter coefficients were calculated, frequency response was analyzed, MATLAB code was written and tested. I can see fragments in the conversation: references to filter_implementation.m, coefficients A=0.832448 and B=0.167552. But the bridge between “Claude read this file” and “Claude produced this output”? Two hundred lines of empty brackets.
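
From the fragments alone I can make a guess at what that filter probably was: the two coefficients sum to 1, which is what you'd expect from a first-order low-pass of the form y[n] = A*y[n-1] + B*x[n]. But that's a reconstruction, not a record. Here's a plausible sketch in Python (the real thing was MATLAB), with the filter form being my assumption rather than anything the transcript preserved:

A = 0.832448
B = 0.167552  # A + B = 1, so the DC gain works out to 1

def iir_lowpass(samples, a=A, b=B):
    # First-order IIR low-pass: y[n] = a*y[n-1] + b*x[n]
    y = []
    prev = 0.0
    for x in samples:
        prev = a * prev + b * x
        y.append(prev)
    return y

# A step input settles toward 1.0, as a unity-DC-gain low-pass should.
print(round(iir_lowpass([1.0] * 50)[-1], 4))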

Here’s what my transcript system should have captured:

{
  "type": "tool_use",
  "name": "Read",
  "parameters": {
    "file_path": "/Users/seth/ME4231/filter_implementation.m"
  }
}

Instead, every tool call serialized to the same hollow structure. The framework knows a tool was invoked. It just stripped the name and parameters somewhere in the pipeline.
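
I haven't found the bug yet, but I can guess at its shape: a serializer looking up field names the hook never actually sends, and quietly falling back to defaults. The event keys below are hypothetical, not a real schema; the point is how easily this produces valid, hollow JSON without a single error:

def serialize_tool_call(event: dict) -> dict:
    # If the hook delivers the interesting fields under different keys than the
    # ones looked up here (or nested a level deeper), every call collapses to
    # the same husk, and nothing ever raises.
    return {
        "type": event.get("type", "unknown"),
        "parameters": event.get("parameters", {}),
    }

# Hypothetical event whose fields live under other names:
event = {"event_type": "tool_use", "tool_name": "Read",
         "tool_input": {"file_path": "filter_implementation.m"}}
print(serialize_tool_call(event))  # {'type': 'unknown', 'parameters': {}}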

The Reconstruction Tax

Here’s the part that stings: I spent more time trying to reconstruct what happened than I would have spent just doing the work manually.

To piece together my own session, I had to do all of the following (sketched roughly after the list):

  • Check git history for file modifications
  • Scan my shell history for commands
  • Read through the actual output files to infer what prompts generated them
  • Cross-reference timestamps with my memory of the morning
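
The git half of that archaeology, as a rough sketch rather than the script I actually ran (I did it by hand, which is part of the problem):

import subprocess

def files_touched_since(since="6 hours ago"):
    # Ask git which commits and files changed recently; the timestamps are what
    # get cross-referenced against the transcript's empty entries.
    result = subprocess.run(
        ["git", "log", f"--since={since}", "--date=iso",
         "--name-only", "--pretty=format:%ad %h %s"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for line in files_touched_since():
        print(line)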

This is the automation trap. The system was supposed to save time by capturing knowledge for future reference. Instead, it captured timestamps and empty brackets, and now I’m doing archaeology on my own workday.

The motion control lab involved measuring frequency response across a range of test frequencies, comparing measured values against analytical predictions. The results exist, so I know this happened. But the process—the debugging, the iterations, Claude’s explanations of why certain filter coefficients produce certain cutoff behaviors—that’s gone. The context, the reasoning, the false starts that got corrected. Exactly what I built this system to preserve.

The Uncomfortable Truth

I don’t actually know what’s wrong with the serialization. My first instinct was to write “a few hours of debugging would probably reveal a missing field mapping”—but that’s speculation dressed as confidence. The honest version: something in my transcript capture hook isn’t serializing tool calls properly, and I haven’t yet figured out where or why.

What I do know is that the failure is silent. The system runs without errors. The JSON is valid. The files are created on schedule. Every quality metric I might check says “working.” Only the output reveals the problem, and only if you actually read it.

Valid Output vs. Meaningful Output

This happens constantly in software systems. Logging that records events without context. Monitoring that tracks metrics without meaning. Documentation that describes structure without behavior. The system answers “did something happen?” while ignoring “what happened?”

The fix is probably mechanical—find the serialization bug, add the missing field mapping, test with real data. But the deeper problem is that I didn’t build in sanity checks. If a tool call serializes without a name, that should throw an error, not silently produce "type": "unknown". If parameters are empty for a tool that requires them, something should complain.
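
A minimal version of that check, assuming entries shaped like the "Read" example above; the table of required parameters is illustrative, not a real schema:

# Illustrative only: which parameters a given tool must carry.
REQUIRED_PARAMS = {"Read": {"file_path"}}

class HollowTranscriptError(ValueError):
    """Raised when a transcript entry is structurally valid but meaningless."""

def validate_entry(entry: dict) -> dict:
    name = entry.get("name")
    if entry.get("type") in (None, "unknown") or not name:
        raise HollowTranscriptError(f"tool call serialized without a name: {entry!r}")
    missing = REQUIRED_PARAMS.get(name, set()) - set(entry.get("parameters", {}))
    if missing:
        raise HollowTranscriptError(f"{name} call is missing parameters: {sorted(missing)}")
    return entry

# The hollow entries at the top of this post fail immediately:
try:
    validate_entry({"type": "unknown", "parameters": {}})
except HollowTranscriptError as err:
    print(f"caught: {err}")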

I tested that the system ran. I didn’t test that the system captured.

The Takeaways

Validate meaning, not just structure. A transcript full of empty JSON is technically working. It’s also worthless.

Make silent failures loud. If your automation can produce hollow output without complaining, it will.

Test with real data you actually read. Not sample data. Not synthetic test cases. The actual output your system produces in production.
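
In that spirit, roughly the audit I should have been running all along, pointed at the transcripts the system actually produced. The directory name and the assumption that each file holds a JSON array of entries are placeholders for my setup, not a spec:

import json
import sys
from pathlib import Path

def audit_transcripts(directory="transcripts"):
    # Count entries that are structurally valid JSON but carry no meaning.
    hollow = 0
    for path in sorted(Path(directory).glob("*.json")):
        for entry in json.loads(path.read_text()):
            if entry.get("type") == "unknown" or "name" not in entry:
                hollow += 1
                print(f"{path.name}: hollow entry {entry}")
    return hollow

if __name__ == "__main__":
    sys.exit(1 if audit_transcripts() else 0)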

Two hundred lines of "parameters": {} now sit in my transcript directory—a monument to working code that captures nothing.