Over the last two months I’ve experimented extensively with AI for coding. First, I vibe coded a CRUD application for calorie counting and meal planning. Second, I let AI write Python code for a feature that is part of a larger framework at work.
For the CRUD app, I really liked AI. To be honest, I didn’t bother reading the code in detail, as the application was not critical. Instead, I just tested the app extensively through the frontend until I was satisfied with the result. Everything worked. Still, the development process was not fully hands-off: I had to hand-hold the AI through more complicated topics such as Azure deployment and GitHub Actions CI/CD.
At work, the use of AI blew up in my face. I followed a similar approach: letting the AI do most of the work, understanding how the code worked at a high level (but not in detail), and testing extensively. Although the feature worked flawlessly, I did not have detailed knowledge of how the code worked. That was fine, until today, when I had to modify part of the feature’s behavior. This forced me to study the code line by line and, oh my, was the code more convoluted than it needed to be! In the end, I’d rather have taken the time to write it myself. I’d have improved my own Python skills and would have had the code clear in my mind.
All in all, I just don’t see the benefit of using AI in serious applications. I definitely see its use in generating a bunch of boilerplate code, though. As for me, I’ll stick with asking Claude specific questions, but writing production code myself.