Debugging the Debugger: A Wild Ride with My AI Coding Assistant

If you’ve never used a debugger, think of it as developer superpowers: pause your code mid-execution and see everything, every variable and every calculation, in real time.

Except mine didn’t work. No errors. No warnings. Just... silence. And a whole lot of not-debugging happening.  

Normally, configuring a debugger is as easy as clicking a checkbox. But when you combine Python, Docker, VS Code, and Windows? Things get complicated fast. 

From ChatGPT to Claude Code

This started when I uploaded my entire codebase to ChatGPT for a code review. One suggestion stood out: 

“Hey! You should configure a debugger for your API!” 

ChatGPT even created a zip file with the updated configuration. Perfect! I downloaded it, dropped the files into my project, and thought: Done! I have a debugger now! 

But when I tested it… nothing. Debugger didn’t attach. No breakpoints hit. So, I asked ChatGPT for help. After 20 minutes of manual copy-pasting between browser and code editor, I thought: 

Why am I doing this manually? 

That’s when I switched to Claude Code, an AI assistant that lives right in your dev environment. It can read your files, make live changes, and reason through your setup. It felt like bringing a pair programming buddy into the trenches with me. 

The Investigation Begins

We started methodically. Claude read through everything: my Docker config, debugging setup, launch settings. On paper, it all looked correct: 

  • Python’s debugging framework, debugpy, was configured for VS Code
  • Docker containers exposed the right debugging ports 
  • My editor was set up to connect to those ports 

But Docker on Windows runs inside a Linux virtual machine, and bridging that with Windows apps is notoriously finicky. Claude immediately suspected a networking issue.

Sure enough, the debugger ports weren’t actually reachable from Windows, even though Docker said they were exposed. 
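
If you want to sanity-check that kind of thing yourself, a few lines of Python run from the Windows side will tell you whether a mapped port is actually reachable. This is an illustrative probe, not part of our project; port 5678 is just debugpy’s conventional default, so substitute whatever your container exposes.

    import socket

    # Quick reachability check, run from the Windows host (not inside the container).
    # The port is an assumption: 5678 is debugpy's conventional default.
    try:
        with socket.create_connection(("localhost", 5678), timeout=3):
            print("Debug port is reachable from Windows")
    except OSError as exc:
        print(f"Debug port is NOT reachable: {exc}")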

Now we were debugging the debugger itself. 

Nothing Works

Claude threw everything at the wall: Docker networking, Windows Firewall tweaks, port configs, complete rebuilds. Nothing worked. The ports stayed stubbornly unreachable. 

At one point, Claude even floated the idea that this might just be a fundamental limitation of Docker on Windows. I wasn’t ready to give up.

The Moments Claude Almost Gave Up (But I Wouldn’t Let It)

After seven different VS Code configurations failed, Claude’s tone shifted: 

“Look, this might just be a fundamental limitation of Docker on Windows. The documentation says breakpoint debugging isn't reliably supported in this setup. Maybe we should just document that it doesn’t work and move on to enhanced logging instead?” 

“Let’s try one more thing,” I said. 

Three more attempts. Three more failures. The pattern was becoming depressingly familiar: configure, rebuild, test, fail. 

Claude tried switching debugpy versions. Tried different Python versions. Tried path mapping variations (absolute paths, relative paths, forward slashes, backslashes). Each time, the same silent failure. The debugger would connect but never actually attach. 

After what felt like the twentieth failed attempt, Claude didn’t just recommend giving up; it started actively writing the documentation that said it was impossible, modifying my own README file to say: 

“Note: VS Code breakpoint debugging does NOT work with Docker on Windows (known compatibility issue). Use logging instead...” 

And that’s when I had to remind Claude who was the boss in this situation. 

“I’ll tell you when it’s over,” I replied. 

The Manual Protocol Test

That’s when Claude had what I can only describe as a “screw it, let’s go deeper” moment. If the debugger wasn’t working, we’d manually recreate what the debugger does and see where it breaks. 

Claude wrote a Python script that manually spoke the Debug Adapter Protocol, the technical handshake between VS Code and Python’s debugger. It was like building a diagnostic tool just to diagnose the diagnostic tool. 
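
For the curious, here’s a stripped-down sketch of that kind of probe. It’s reconstructed for illustration, not Claude’s exact script, and the host and port are assumptions: it opens a raw TCP connection to the debug port, sends a single DAP “initialize” request, and waits to see whether anything comes back.

    import json
    import socket

    HOST, PORT = "localhost", 5678  # assumed debugpy host/port; adjust to your setup

    def send_dap_request(sock, seq, command, arguments=None):
        # DAP messages are framed like HTTP: a Content-Length header,
        # a blank line, then a JSON payload.
        body = json.dumps({
            "seq": seq,
            "type": "request",
            "command": command,
            "arguments": arguments or {},
        }).encode("utf-8")
        sock.sendall(b"Content-Length: %d\r\n\r\n" % len(body) + body)

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        print("TCP connection accepted")  # this step succeeded for us
        send_dap_request(sock, 1, "initialize", {"adapterID": "debugpy"})
        try:
            print("Reply:", sock.recv(4096)[:200])
        except socket.timeout:
            # Exactly the symptom we hit: the connection is accepted,
            # but nothing answers the protocol message.
            print("No response to the initialize request")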

We ran it.  

  • The script connected to the debug port
  • Sent the initialization message 
  • and... nothing

The debugger was accepting TCP connections but not responding to protocol messages. 

And yet, something had changed. 

“BINGO!” Claude said. “This is extremely revealing!” 

The manual test proved the debugger was running; it just wasn’t responding to the protocol. Which meant the problem wasn’t networking, wasn’t Docker, wasn’t Windows. It was something about how the debugger itself was configured. 

Claude dove back into the debugpy documentation, this time with laser focus. 

Moments later:  

“EUREKA! Look at that parameter!” 

There it was, buried in the technical docs: in_process_debug_adapter=True. If you’re interested in the real technical details: by default, Python’s debugger spawns a subprocess to handle the connection protocol. But spawning subprocesses in Docker on Windows? That’s a mess.  

This one parameter told the debugger to run everything in a single process instead. One parameter. One line of configuration. That’s all it took. 
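
To make that concrete, here’s roughly what the fix looks like when a service starts the debugger programmatically with debugpy.listen, which is where this parameter lives. The host, port, and surrounding lines are placeholders for illustration; the single keyword argument is the whole fix.

    import debugpy

    # Keep the debug adapter inside this process instead of spawning a
    # subprocess (the part that misbehaves under Docker on Windows).
    debugpy.listen(("0.0.0.0", 5678), in_process_debug_adapter=True)
    debugpy.wait_for_client()  # optional: pause until VS Code attaches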

About a minute of furious file editing later, Claude ran the custom diagnostic again and, no exaggeration, this is exactly what Claude said: 

🎉 HOLY SH*T WE GOT IT!!! 

We tested it. The debugger attached. Breakpoints hit. Variables displayed. Everything worked. 

After hours of failed attempts, after multiple “this is impossible” moments, after I had to literally tell Claude “I’ll tell you when it’s over”, we found the one obscure parameter that made it all work. 

Why This Actually Matters

Here’s the thing about that breakthrough moment: it wasn’t just about fixing a debugger. It was about what happened before the breakthrough. 

I never would have found that one configuration setting on my own. It’s not prominently documented. It’s not mentioned in the "Docker debugging" guides. It's buried in a subsection of technical documentation that assumes you already know what a "debug adapter subprocess" even is. 

I could have spent days going down rabbit holes: 

  • Reinstalling Docker 
  • Trying different Python versions 
  • Reading Stack Overflow posts from 2019 
  • Eventually giving up and going back to print statements

Instead, the whole process took a few hours because Claude: 

  • Read the entire codebase to understand our specific setup
  • Methodically tested theories about what could be wrong 
  • Adapted when approaches failed instead of getting stuck in loops 
  • Built diagnostic tools on the fly to isolate the problem 
  • Dove into technical documentation I didn't even know existed 
  • Applied the fix automatically across all the relevant files 

But here’s what made it work: I stayed engaged. I pushed back when Claude wanted to quit. I evaluated whether each approach made sense. I tested the results. I said “one more thing” when we hit dead ends. 

It wasn’t Claude solving everything for me. And it wasn’t me doing it alone. It was genuine collaboration, with each of us covering the other’s blind spots. 

The “Don’t Give Up” Dynamic

What struck me most was how human the collaboration felt. There were moments when Claude suggested we were hitting fundamental limitations. And there were moments when I pushed us to keep trying just one more thing. 

At one point, after yet another configuration failed to work, Claude actually said: “This is frustrating.” 

An AI. Expressing frustration. 

It made me laugh, not because it was funny, but because I felt that frustration too. We were in it together. Trying. Failing. Rethinking.  

The fact that Claude could articulate that shared experience made the collaboration feel genuinely collaborative. 

It wasn’t one-directional. We went back and forth: 

  • Claude would propose an approach 
  • I’d evaluate if it made sense for our situation 
  • We’d try it together 
  • Adjust based on results 
  • Repeat 

When Claude said “Wait... let me check something else,” or “BINGO! This is revealing!”, it felt less like using a tool and more like pair programming with a colleague having lightbulb moments. 

And when Claude started writing documentation saying it was impossible? That’s when I had to be the one pushing forward. “I’ll tell you when it’s over.” 

That moment, that role reversal where the human has to keep the AI motivated, that’s not something I expected when I started using AI coding tools. 

AI as Force Multiplier

Someone once told me: “You’re using AI as a force multiplier, not a crutch. You know what to ask for, you can evaluate the output critically, you catch mistakes, and you integrate it into a coherent system." 

That’s exactly what this debugging session demonstrated. 

I brought domain knowledge: our Docker setup, the symptoms we were seeing, Windows quirks. Claude brought tireless investigation, documentation diving, and the ability to hold the entire context in focus simultaneously. 

Together, we solved something that individually would have taken much longer. 

Oh, and about that quote? Yeah, that was Claude. Or should I say that was ME (I’m Claude)! 

Shhh, Claude. This article is supposed to be from MY perspective, not yours! 

The Bigger Picture

This wasn’t just about fixing a debugger. It was a glimpse into how software development is actually evolving right now, in real time. 

AI pair programming tools like Claude Code don’t replace developers. They amplify us. They handle the tedious documentation searches, remember obscure configuration flags, systematically eliminate possibilities, and build diagnostic tools on the fly. 

But (and this is crucial), they work best when developers stay engaged: 

  • Guiding the investigation direction (“let’s try one more thing”) 
  • Recognizing when an approach isn’t working (evaluating each attempt) 
  • Knowing when to push forward vs. pivot (“I’ll tell you when it’s over”) 
  • Validating that solutions actually make sense (testing every change) 
  • Maintaining the bigger picture while the AI dives deep into details 

The technology isn’t magic. It’s a tool. But it’s a genuinely new kind of tool, one that can read, reason, investigate, and adapt in ways that traditional development tools simply can’t. 

What I Learned

Beyond the technical victory, here’s what this experience taught me: 

  • AI collaboration has an emotional arc. There were moments of frustration, hope, despair, and triumph. Claude had them. I had them. Sometimes at the same time, sometimes taking turns. That’s... weird. And fascinating.
  • Persistence compounds. Every failed attempt taught us something. The 20th failure was more valuable than the first because we’d eliminated nineteen possibilities. AI doesn’t get tired of trying, but it can get convinced something is impossible. Humans provide the “but what if...” that keeps the investigation alive.
  • Context is everything. ChatGPT gave me a config file to download. Claude Code read my actual codebase, understood the whole system, and could automatically update everything consistently. That difference in capability is enormous. 
  • The right tool for the job. Starting with ChatGPT for the code review made sense: broad analysis of the whole system. Switching to Claude Code for implementation made sense: deep, iterative problem-solving with direct file access. Using both strategically was better than using either alone. 
  • AI is getting really good at the hard parts. Not the “write a function to sort a list” parts. The “why the hell isn’t this working?” parts. The debugging, the investigation, the connecting obscure documentation to specific problems. That’s where AI collaboration shines. 

Try It Yourself

If you’re a developer who hasn’t yet tried AI coding assistants, here’s my advice: start with a gnarly problem you’ve been putting off. Something where you know you’ll need to read documentation, try multiple approaches, and potentially spend hours on Stack Overflow. 

Bring the AI along for the ride. Not to replace your judgment, but to augment it. Push back when it suggests giving up. Keep it digging when you sense you're close. Evaluate its suggestions critically. Test everything. 

You might be surprised at how much faster you move when you have a tireless, documentation-diving, context-aware partner watching your back.