For a while now, I had wanted to try out Claude Code but didn’t know how to start. I watched Anthropic’s “Mastering Claude Code in 30 minutes”, where the speaker (Claude Code’s creator) encouraged getting to know the tool by exploring an unfamiliar codebase. Seeing the way he used it - asking raw questions like “why does this work that way? look through git history” or “why did we fix issue #123 by doing XYZ?” - got me really excited to start a new session in an unfamiliar project and explore it with this CLI partner.
So I paused the video, and instead of trying to optimize my first session with hours of research and planning (my usual style), I decided to just try it out and see what happens. But where should I start?
My first thought was to try it out on a complex open-source project I’m not familiar with. I’ve contributed some bug fixes in the past to projects like TypeScript and VS Code, and to the music notation software MuseScore. Bugs are a great way to get to know a limited aspect of a complex codebase while also making a contribution - they are well defined, and the goal is easy to understand and test. Still, the first contribution takes a lot of effort: you have to learn the architecture, set everything up, and get to know the technology stack.
Remembering the joy and pain of open-source contribution, this felt like a great opportunity for Claude Code to shine. With that in mind, I got to work!
Putting Claude Code to the test
I use Bruno frequently as a lightweight open-source alternative to Postman, especially enjoying its offline and git-friendly approach to request collections.
Browsing Bruno’s “good first issue” list, I found a good candidate: “Allow SSL validation to be turned off at request level” (with no additional information).

Looks interesting - someone started working on it 1.5 years ago (no update since), with some likes and comments showing urgency
So after cloning the repo, running `npm install -g @anthropic-ai/claude-code`, and starting `claude`, my session officially began. The following is a recreation of the workflow and prompts I used during that session, to varying degrees of success.
Phase 1: the initial plan (and my doubts) - I set the stage with this prompt:
I'm working on issue #1325 called "Allow SSL validation to be turned off at request level" (no additional information). Initialize the working environment based on @contributing.md and give me an overview of the structure of the repository. Guide me into where to look to understand my task
Note: for Claude to open URLs or look for information online, I had to explicitly ask it to do so - just adding links wasn’t enough.
Claude churned for a while and requested approval to run some setup operations, finally giving me a nice, summarized implementation guide:

Claude Code’s initial implementation guide - gave me a good initial direction for exploration
While it churned away, I looked for references to “SSL” in Bruno’s docs and found the global setting for SSL/TLS certificate verification. Claude had already addressed that in the plan - nice!
Still, something felt off with the plan, so I started creating the new toggle manually. After a few edits I understood what I didn’t like - having both a global `preferences.shouldVerifyTls` toggle and a local `settings.sslValidation` toggle means that once I activate the local one, I can’t go back to the `undefined` value (following the global settings).
This is confusing UX! If the user sees an “SSL validation” toggle and turns it off, they expect validation to be disabled. But here, “off” would mean “follow the global config”, which could still have SSL validation turned on! I shared this observation with Claude and asked it to change the plan:
Change the behavior of the request-level flag: when the value is "true" it should TURN OFF ssl verification, and when the value is false/null the verification should behave as instructed by the global settings.
Phase 2 - implementing the better plan: While Claude improved the plan and started executing it, I kept browsing and found an older, similar issue with more traction, where one of the maintainers had suggested this exact behavior (using a request-level `disableSslVerification` flag), so we were on track. Nice again!
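To make the agreed-upon semantics concrete, here is a minimal sketch of the resolution logic - my own reconstruction with illustrative names, not Bruno’s actual code:

```ts
// Sketch of the agreed semantics (illustrative names, not Bruno's
// actual identifiers): the request-level flag can only opt OUT of
// verification; anything else falls back to the global preference.
function resolveTlsVerification(
  globalShouldVerifyTls: boolean,
  requestDisableSslVerification?: boolean
): boolean {
  if (requestDisableSslVerification === true) {
    return false; // explicit per-request opt-out
  }
  return globalShouldVerifyTls; // false/undefined: follow global settings
}
```

This sidesteps the tri-state problem from before: the request-level control can never re-enable verification, it only disables it.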

Claude Code’s Edit File prompt - I really like the options it gives, and I used each one of them during my session in different contexts and with various tools
It then continued with the implementation, and I guided it with some style preferences: reduce duplication with utility methods and use well-named variables to avoid unnecessary comments (it really loves adding comments).

The new disable-SSL-verification logic by Claude - clean and indicative names
Phase 3 - testing and taking the wheel: I then asked Claude to find relevant modules to add tests to, and we went back and forth on the testing implementation. It started with a very naive approach of testing the new utility function `shouldDisableSslVerification`, which isn’t useful at all. When testing the top-level `configureRequest` method like the other tests do, our new tests failed and I had to get involved.
Technical details: I had to debug deep into the code to find the reason - we tried asserting that `axiosInstance.defaults.httpsAgent` was passed with the expected `rejectUnauthorized: false` configuration (disabling SSL verification), but after extensive debugging of `makeAxiosInstance` I understood that it uses interceptors to change the configuration mid-flight, so it wasn’t visible in the test assertion.
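To illustrate the pattern that hid the agent from our assertion, here is a minimal sketch with made-up internals - not Bruno’s actual `makeAxiosInstance`:

```ts
import axios from "axios";
import { Agent } from "https";

// Minimal sketch (illustrative internals, not Bruno's actual code):
// the instance is created with plain defaults, and a request
// interceptor swaps in the https agent per request, "mid-flight".
function makeAxiosInstance() {
  const instance = axios.create();

  instance.interceptors.request.use((config) => {
    // The agent is chosen here, only when a request is dispatched
    config.httpsAgent = new Agent({ rejectUnauthorized: false });
    return config;
  });

  return instance;
}

// A test asserting on the instance defaults therefore sees nothing:
const instance = makeAxiosInstance();
console.log(instance.defaults.httpsAgent); // undefined
```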
Claude didn’t quite connect all the dots. This might have been because of the complex context it had already accumulated by that point, or maybe I just gave up too quickly and took control myself. I didn’t want to make an outbound HTTP request in this test, so I had to change course.
So I reverted the changes (a good reminder to commit frequently) and went with a middle-ground approach: test an intermediate method, making sure it modified the configuration when I passed the request-level SSL-disable flag.
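In Jest terms, the middle-ground test looked something like this - a hypothetical sketch in which `applySslSettings` stands in for the real intermediate method, whose name and signature differ:

```ts
import { describe, expect, it } from "@jest/globals";
import { Agent } from "https";

// Hypothetical stand-in for the intermediate method under test;
// the real Bruno function has a different name and signature.
function applySslSettings(
  config: { httpsAgent?: Agent },
  request: { settings?: { disableSslVerification?: boolean } }
) {
  if (request.settings?.disableSslVerification) {
    config.httpsAgent = new Agent({ rejectUnauthorized: false });
  }
  return config;
}

describe("request-level SSL verification", () => {
  it("injects a non-verifying agent when the flag is set", () => {
    const config = applySslSettings(
      {},
      { settings: { disableSslVerification: true } }
    );
    // Node's https.Agent keeps its constructor options on .options
    expect(config.httpsAgent?.options.rejectUnauthorized).toBe(false);
  });

  it("leaves the configuration untouched otherwise", () => {
    const config = applySslSettings({}, {});
    expect(config.httpsAgent).toBeUndefined();
  });
});
```

Testing this intermediate layer avoids both the useless unit test and the outbound HTTP request.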
For manual testing, I asked ChatGPT for help:

ChatGPT coming up with a great response. It was “primed” with this previous prompt from the start of our conversation: Explain this command shortly, I'm a software engineer: <some chown command>
Note: generally, I found myself using other assistants to ask questions that needed minimal context, and to avoid Claude’s churning and rate limiting.
I spun up the UI with `npm run dev`, tested it manually using the URL, and it worked!

The new request-level SSL toggle, implemented by me! (us?)
Finishing up: automating the PR - I asked Claude to wrap things up:
I created a fork of the repository at git@github.com:Git-Lior/bruno.git. Add it as "upstream" remote, checkout a branch and create a detailed commit matching the guidelines of this repository. In the commit description mention that this was tested manually using https://self-signed.badssl.com/. Push the new branch to the fork.
The end result, which took only a few hours: feat: add per-request SSL verification toggle!
Given that this was my first contribution to Bruno and my first interaction with Claude Code, I consider this session a great success! The $20 subscription held up during the entire session, but I reached the limit shortly after I was done (when I tried to summarize my work).
Reflection
I really enjoyed this first session with Claude Code and my spec-driven workflow - it truly feels like talking with an ever-patient teammate living in your CLI. Beyond the powerful LLM behind it, which I already use in tools like Cursor, the CLI itself is much more comfortable and easier to control than I expected.
Throughout the session, I noticed my dynamics with Claude Code falling into a few distinct roles, which worked well:
- The guide: more than just a smart search, Claude was able to dive into the complex codebase and create a summarized implementation guide tailored to my needs.
- The in-context automator: telling it to initialize the repo based on `contributing.md` or finish up the PR was amazing - it took over the tedious aspects of working in a new project and writing descriptive commits, letting me reserve my energy and reduce frustration.
- The pair programmer: here I used Claude more as my “hands” than my “brain”, telling it what to do and how I expected to see it. This, combined with a detailed plan, meant that Claude generated most of the code and I only needed to make small adjustments.
- The reviewer: on a few occasions I asked Claude to review the changes we made and make sure we didn’t forget anything, or to compare my implementation with other similar features to see if we missed something.
Still, there were a few things that went wrong:
- “Cutting corners” and hallucinations - Claude Code doesn’t always read the entire file content: it mostly searches for exact matches and reads some of the surrounding context. This means its plans may be flawed, or even worse - it “fills the gaps” with hallucinations that sound plausible. When working in familiar projects this is easier to notice, but here it wasn’t immediately obvious, and that could be dangerous.
- Inefficiency: it’s still hard for me to understand how to optimize cost and limit usage, but it seems like Claude Code itself isn’t too worried about that - it will churn away on mundane operations if you don’t monitor it closely.
- Context management - the context is opaque and compacts automatically, which I find annoying. I want to be able to tailor the context to my specific request, adding entire files for the planning and summarizing phases and then using just the plan for the implementation. I might be using it wrong, though; I’ll try to improve that in future sessions.
Final judgment
As I keep repeating, this was an enjoyable experience. Was it an ultra-efficient one, though? Am I a 10x AI-powered developer now? I didn’t feel that way, but I might change my mind after a few more sessions. I could have done this alone, and it would have taken me only one or two more hours (judging from past open-source contribution experiences).
But working with Claude Code did something incredible: it kept me energized and engaged, even more than the IDE-based assistants I already use. It (we?) did that by automating the boring and tedious aspects, and doing the plan-to-code conversion mostly autonomously. Without it, this PR wouldn’t have been created.
I’m still far from letting it loose in my codebase with no supervision. It seems to me that those who push for that don’t have to pay the bill - like Anthropic’s employees, who in their own words use Claude Code like a “slot machine”: commit, let it work for 30 minutes ($$), and accept the result or start fresh. Not my style of coding.
But with my intermittent supervision we managed to reach a great result with way less frustration and effort on my part, so this is a huge win overall!
A more comprehensive take on “vibe-coding production software” is on the way - stay tuned!