A Better Way to Use AI in Software Development

There’s a lot of hype right now around using AI in software development. And to be fair, the hype isn’t entirely misplaced. AI can write code. It can summarize, generate, and even refactor. But there’s a major problem with the way most people use it. The problem is guessing.

When AI Has to Guess

If your prompt is not sufficiently clear, the AI has no choice but to guess what you meant. And this guessing introduces subtle, and sometimes major, errors. It might guess wrong about your naming conventions, your error-handling preferences, your architectural style, or your end goals. Even if you think the prompt is clear, the AI may read it differently than you do.

Developing with AI: How ChatGPT and Claude Accelerated My Latest Project

In a recent software project, I took a new approach by using AI tools as collaborators in the development process. I didn’t use them for everything—but I did use them extensively in the areas where they shine: architecture reasoning, rapid prototyping, code refinement, and automated testing.

Rather than working through the process in one continuous stretch, I engaged with ChatGPT in intermittent conversations to shape the initial system architecture. Once the structure was clear, I used Claude Code to generate, refine, and test the implementation. It’s the first time I’ve worked this closely with two different LLMs across a single workflow, and the results were impressive. (The mention of Claude in this article refers specifically to Claude Code, not Claude in general.)

Chatting with LLMs: Exploring Local AI with Ollama

AI-powered large language models (LLMs) have become a big deal. They can generate text, assist with coding, and handle all sorts of conversational tasks. With local AI models becoming more accessible, developers no longer have to rely on cloud-based services to experiment with LLMs. One of the preferred ways to run these models locally is with Ollama, a tool designed to simplify working with AI on your own machine.

Running LLMs Locally with Ollama

Ollama makes it easy to download and run different LLMs without needing cloud access. This is great for privacy, performance, and flexibility. With a capable machine, you can run AI models just as easily as you would any local application.
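As a sketch of what talking to a local model looks like in code, here is a minimal Python client for Ollama’s HTTP API. It assumes Ollama is running on its default local endpoint (`localhost:11434`) and that a model has already been pulled; the model name used below is illustrative.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes a standard install, no custom port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    """Send a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a model downloaded via `ollama pull llama3`, a call like `generate("llama3", "Why is the sky blue?")` returns the model’s reply as a plain string, with no cloud service involved.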

Hello Again, Hugo

During the Covid years, I let my blog lapse and eventually deactivated the site. At the time, I wasn’t sure whether I would bring it back, but I kept a backup of the WordPress SQL database just in case. That decision proved useful when I recently decided to restore the blog.

Restoring the Old Content

With the original WordPress database backup in hand, the next step was getting access to the posts. Rather than setting up a full WordPress environment, I used a Docker-hosted MySQL database to load the backup. Once restored, I extracted the pages and posts and converted them into Markdown files for use with Hugo. I had played with Hugo long ago, but had not used it for my blog until now.
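The conversion step can be sketched in Python. This is a hypothetical, simplified version of that script: it takes fields already queried from the `wp_posts` table and writes each post as a Hugo page with TOML front matter (the function names and front-matter fields here are my assumptions, not the exact script used).

```python
from pathlib import Path

def post_to_markdown(title, date, content):
    """Render one WordPress post as a Hugo page with TOML front matter."""
    # Escape double quotes so the title stays valid TOML.
    safe_title = title.replace('"', '\\"')
    front_matter = (
        "+++\n"
        f'title = "{safe_title}"\n'
        f'date = "{date}"\n'
        "+++\n\n"
    )
    return front_matter + content

def write_post(out_dir, slug, title, date, content):
    """Write a converted post into Hugo's content directory."""
    path = Path(out_dir) / f"{slug}.md"
    path.write_text(post_to_markdown(title, date, content), encoding="utf-8")
    return path
```

One caveat: WordPress stores post bodies as HTML, so a full conversion also needs an HTML-to-Markdown pass (via a converter library or a manual cleanup) before the files are truly Hugo-ready.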

Python/Selenium: Waiting for combinations

Selenium has a feature where you can wait for a condition; see the explicit waits section of the Selenium documentation. The built-in expected conditions cover only a limited set of checks. In some cases, it is beneficial to wait for a combination of conditions. For example, after submitting a form, you might get either an error or a success message. Rather than waiting for one element to time out, it is better to proceed as soon as either appears. Here are a couple of expected condition classes that allow this.
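Here is a sketch of two such combinator classes. They only assume that each wrapped condition is a callable taking the driver, which is the contract Selenium’s `expected_conditions` follow, so the classes themselves contain no Selenium-specific code.

```python
class any_of_conditions:
    """Expected condition that passes as soon as any wrapped condition passes.

    Returns the first truthy result, so the caller gets the matched element.
    """
    def __init__(self, *conditions):
        self.conditions = conditions

    def __call__(self, driver):
        for condition in self.conditions:
            try:
                result = condition(driver)
                if result:
                    return result
            except Exception:
                # A condition that raises (e.g. NoSuchElementException)
                # simply has not been met yet; keep checking the others.
                pass
        return False


class all_of_conditions:
    """Expected condition that passes only when every wrapped condition passes."""
    def __init__(self, *conditions):
        self.conditions = conditions

    def __call__(self, driver):
        results = []
        for condition in self.conditions:
            try:
                result = condition(driver)
            except Exception:
                return False
            if not result:
                return False
            results.append(result)
        return results
```

Waiting for either outcome of a form submission then looks like `WebDriverWait(driver, 10).until(any_of_conditions(EC.presence_of_element_located((By.ID, "success")), EC.presence_of_element_located((By.ID, "error"))))` (the element IDs are illustrative). Worth noting: Selenium 4 later added built-in `EC.any_of` and `EC.all_of` helpers that cover the same ground.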