Developing with AI: How ChatGPT and Claude Accelerated My Latest Project
In a recent software project, I took a new approach by using AI tools as collaborators in the development process. I didn’t use them for everything—but I did use them extensively in the areas where they shine: architecture reasoning, rapid prototyping, code refinement, and automated testing.
Rather than working through the process in long, continuous stretches, I engaged with ChatGPT in shorter, intermittent conversations to shape the initial system architecture. Once the structure was clear, I used Claude Code to generate, refine, and test the implementation. It's the first time I've worked this closely with two different LLMs across a single workflow, and the results were impressive. (Throughout this article, "Claude" refers specifically to Claude Code, not to Claude in general.)
Architectural Reasoning with ChatGPT
ChatGPT served as a sounding board to shape the architecture of the project. The ability to pause and return to conversations made it ideal for reflecting on design decisions without committing immediately to code. I found this back-and-forth especially useful for exploring trade-offs, naming patterns, and layering strategies.
The fixed monthly cost also made it a natural choice for exploratory thinking. I could spend hours breaking down a design without worrying about token consumption or runtime constraints.
Implementation Acceleration with Claude
Once the architecture was in place, I brought Claude into the workflow. Claude’s strength is in context-rich implementation. It can take a directory structure and start to fill in files based on that plan. More importantly, it can catch compilation issues, run tests, and suggest fixes automatically once given permission to proceed.
The speed boost here was substantial. What would typically take weeks of solo development was compressed into a few days, thanks to Claude’s ability to iterate quickly and automate repetitive tasks.
Prompt Engineering as a Skill
There’s a lot of talk about prompt engineering, and for good reason: it matters. I had already developed a good sense of how to direct LLMs effectively, but I still found value in formal resources like the course from DeepLearning.ai.
Prompt engineering, like recipe writing, requires knowing not just what’s possible—but when and why. You may know what an API call is, or what a database does, but that doesn’t tell you when to use it or how it fits into a broader architecture. That’s where software design experience makes a real difference. The better you understand what you want to build, the better instructions you can give the AI to help build it.
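To make that concrete, here is a minimal sketch of what "structured" prompting can look like. The field names and wording here are my own illustration, not from any official prompting guide: the point is simply that separating role, context, task, and constraints beats a single vague sentence.

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from explicit sections.

    Keeping role, context, task, and constraints separate makes it
    easy to see what the model is being asked to do, and to tweak
    one part without rewriting the whole prompt.
    """
    lines = [
        f"You are {role}.",
        "",
        "Context:",
        context,
        "",
        "Task:",
        task,
    ]
    if constraints:
        lines += ["", "Constraints:"]
        lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


# Hypothetical example values, for illustration only.
prompt = build_prompt(
    role="an experienced Python reviewer",
    context="A Flask service with a repository layer over PostgreSQL.",
    task="Propose a caching strategy for the read-heavy endpoints.",
    constraints=["No new infrastructure", "Keep the repository interface stable"],
)
print(prompt)
```

The design experience mentioned above shows up in the *content* of those fields: knowing which constraints matter is exactly what the AI cannot supply for you.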
Choosing Between ChatGPT and Claude
Each model has its place in the workflow. ChatGPT is ideal for upfront reasoning and documentation-heavy planning because it has a fixed cost and works well across asynchronous sessions. Claude, on the other hand, excels at grounded implementation. It can interact with file structures, analyze diffs, and even reason about terminal output when things break.
Using them together let me take advantage of the strengths of both without being locked into the limitations of either.
Testing and Quality with AI
Claude’s integration of linters, style checkers, and test generation made it easier to maintain a high standard of code quality throughout the project. It could run a test suite with a single command and use the results to suggest fixes automatically.
This helped not just with correctness, but with consistency across the codebase. Claude could even generate scripts to refactor groups of files when a widespread issue was discovered.
Context Management and Versioning
One challenge with LLMs is keeping context between sessions. Claude addresses this by allowing you to export session context, which you can re-import the next time you pick up the project. Combined with Git for code versioning, this created a feedback loop: Claude could see what changed, compare it to prior outputs, and suggest fixes based on the diff.
That context continuity kept progress moving forward, even across multiple days and sessions.
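The loop described above can be sketched roughly like this. This is a simplified illustration of the pattern, not Claude Code's actual mechanism: carry forward notes from the previous session, fetch the current Git diff, and combine both into the prompt so suggestions stay grounded in the real repo state.

```python
import subprocess


def build_review_prompt(session_notes: str, diff_text: str) -> str:
    """Combine carried-over session context with the latest diff.

    The model sees both what was agreed previously and exactly what
    changed since, so its suggestions reflect the current code.
    """
    return (
        "Previous session notes:\n"
        f"{session_notes}\n\n"
        "Changes since last session (git diff):\n"
        f"{diff_text}\n\n"
        "Review the changes against the notes and suggest fixes."
    )


def current_diff() -> str:
    """Fetch the working-tree diff; assumes we are inside a Git repo."""
    return subprocess.run(
        ["git", "diff"], capture_output=True, text=True, check=True
    ).stdout
```

Pairing this with ordinary Git hygiene (small commits, descriptive messages) is what made the multi-day sessions feel continuous rather than restarted.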
Productivity Reflections
In raw output, Claude delivered a multi-week implementation in a matter of days. It wasn’t just faster—it also offloaded the mental load of compilation, linting, and basic test writing. I could stay focused on higher-order decisions while the AI handled the routine.
The result: more time spent on thinking, less time spent on syntax.
The Future: AI as a Pair Programming Partner
We often talk about pair programming as a best practice—but not everyone enjoys or benefits from the social dynamic. For introverts like me, AI fills a unique niche: a tireless partner that's always ready to suggest, refactor, or debug, but only when asked.
That said, AI isn’t magic. The quality of output still depends on the quality of input. If your prompts are vague, the results will be too. Experience still matters, and the more you know about the design and development process, the more effective your collaboration with AI will be.
This workflow isn’t one-size-fits-all, but it worked well for this project. In a future post, I’ll share more about the specific SaaS application that came out of this process. For now, I hope this glimpse into a human+AI development loop helps others explore new ways to build.