Essay v1.0 · Thinking in Progress

From Vibe Coding to Compound Engineering

Building Software Philosophies in the AI Era

January 1, 2026
7 years in AI/ML Tokyo → Compound

Mid-2025 came knocking, and there I was, sitting with my seven-year badge of "software engineer specializing in AI/ML" like some kind of Python wizard. Python was my weapon, my companion, my everything.

Then July happened. After spending five whole years at a startup in Tokyo (yes, Tokyo, very fancy), I joined my third company thinking I was ready for whatever came next.

And boom—life introduced me to something called Clean Architecture.

Now listen, I wasn't completely new to this world. I knew about dependency injection and all those fancy programming patterns that make you sound smart at tech meetups. But an entire software design philosophy? Like, a whole philosophy? That was new territory.

Here's the interesting part—I was that guy who topped many coding tests, especially in Python. Leetcode? Solved it. HackerRank? Done. But suddenly, I found myself staring at a simple problem in clean architecture, completely unable to formulate the solution. My brain just went error 404: pattern not found.

And this felt... different. Like discovering there's an entire ocean when you thought you knew all the rivers.

The Vibe Coding Era

Now here's the plot twist that makes everything make sense: I had been doing what people now call "vibe coding" long before the term even became a thing. For the last two and a half years, I wasn't really writing much code myself. So yeah, this context matters.

I asked Cursor to follow the clean architecture patterns, and my first PR went live. Success! Or so I thought.

Then the code review comments came rolling in like monsoon rain: so many issues, so many suggestions, so much feedback that I started wondering if there was a better way to do this. Back and forth we went. Many times. Multiple PRs. Each one teaching something new.

See, in my previous company, working on a project was simple arithmetic: prompt the feature, fix the bug when it breaks, repeat the cycle, call it a day. But scaling a project this way? Difficult doesn't even begin to describe it.

Adding new features broke something else. Always. Every single time. It was like playing Jenga with code: pull one piece and the entire structure wobbles.

So naturally, at first I wasn't a fan of clean architecture. But slowly, very slowly, the pieces started clicking together. The boundaries it draws, the patterns it suggests, they all started making sense. These weren't restrictions; they were guard rails keeping the project on track.

They helped reduce AI slop and actually ship better code into production. Turns out structure can be quite liberating when you give it a chance.
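To make the "guard rails" idea concrete, here is a minimal sketch of the kind of boundary clean architecture draws (all names here are illustrative, not from any real project): the business logic depends only on an interface it owns, and the concrete storage plugs in from outside.

```typescript
// Domain layer: the contract the use case relies on.
interface UserRepository {
  findName(id: string): string | undefined;
}

// Use case layer: business logic, oblivious to storage details.
function greetUser(repo: UserRepository, id: string): string {
  const name = repo.findName(id);
  return name ? `Hello, ${name}!` : "Hello, stranger!";
}

// Infrastructure layer: a swappable implementation (here, in-memory).
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, string>([["u1", "Aki"]]);
  findName(id: string): string | undefined {
    return this.users.get(id);
  }
}

const repo = new InMemoryUserRepository();
console.log(greetUser(repo, "u1")); // → "Hello, Aki!"
console.log(greetUser(repo, "u2")); // → "Hello, stranger!"
```

The point of the barrier: swapping the in-memory store for a real database touches only the infrastructure layer, so an AI regenerating one side can't quietly break the other.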

The Rabbit Hole

Now here's where things get really interesting. One of the fascinating things about AI is that you can do what took others a decade to do within just a few hours. Suddenly you're exploring territories that would have required years of experience.

So naturally, based on my experience with clean architecture, I fell down the rabbit hole of creating new software design philosophies. Because if we can understand existing ones, why not try building new ones?

Multiple attempts happened. Multiple iterations. Interactive chats with AI that went deep into software design theory. My goal was ambitious—create something that could scale to billions of lines of code without the issues that plague existing philosophies.

But then reality came knocking, this time wearing Gödel's face and carrying mathematics textbooks.

I realized something profound—it is not possible to build a logically consistent AND complete philosophy. You have to choose. Consistency or completeness. Pick one.

This is the realm of Gödel's incompleteness theorem, information theory, uncertainty principle, and the fundamental nature of our reality... okay fine, maybe I'm being a bit dramatic here, but yeah, it was a genuinely mind-bending realization.

The good news? I found out we can build useful ones. Not perfect, but useful. And suddenly, TypeScript became a crucial ally. Linting rules that seemed boring before? Now they're essential. Test cases? Not just a checkbox to tick anymore—they're the foundation that prevents your architectural ideas from becoming a wobbly tower of cards.

The Experimentation Phase

So here's where things get practical. I have created a few such philosophies and have been running experiments with them. And you know what? They seem to work well. Not just in theory, but in actual projects where the rubber meets the road.

Contract Interface & Implementation Philosophy

The idea is beautifully simple: the implementation layer is disposable AI-generated code. Yes, disposable. Like those paper plates you use at a picnic and then throw away. But the contract layer? That's the good china, the stuff you keep: human-reviewed test cases, interfaces, the promises your code makes to the world.

The magic happens when a change breaks nothing that already exists AND still improves the codebase against the new requirements. It's like having your cake and eating it too, except the cake is code and eating it means deploying to production.
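A minimal sketch of the contract/implementation split, with entirely made-up names: the interface and its human-reviewed checks form the contract you keep, while the object literal below is the paper-plate part an AI could regenerate from scratch at any time.

```typescript
// --- Contract layer: human-reviewed, kept like the good china ---
interface SlugService {
  /** Turn a title into a URL-safe slug. */
  slugify(title: string): string;
}

// Human-reviewed test cases ARE part of the contract.
function checkContract(svc: SlugService): void {
  console.assert(svc.slugify("Hello World") === "hello-world");
  console.assert(svc.slugify("  Spaces  ") === "spaces");
}

// --- Implementation layer: disposable, regenerate at will ---
const slugServiceV1: SlugService = {
  slugify: (title) =>
    title
      .trim()
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs to "-"
      .replace(/^-+|-+$/g, ""),    // trim leading/trailing dashes
};

checkContract(slugServiceV1); // the implementation may change; the contract may not
```

Any replacement implementation that passes `checkContract` can be swapped in without anyone re-reading the generated code line by line.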

Intent Driven Architecture

Instead of being a slave to rigid specifications, the generated code is continuously aligned with human intent. We spend so much time writing specs, but specs are just a translation of what we actually want. Why not cut out the middleman? Intent over specs. Desire over documentation.

For each philosophy, I create a Claude.md file that explains the philosophy, the rules, the patterns to follow. Then I build projects that actually follow these philosophies. Real projects, not just toy examples.
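For a rough idea of what such a file contains, here is an illustrative sketch of a Claude.md for the contract/implementation philosophy (the contents are my paraphrase of the idea, not a real file from these projects):

```markdown
# Philosophy: Contract Interface & Implementation

## Rules
- The contract layer (interfaces, human-reviewed tests) changes only with human sign-off.
- The implementation layer is disposable: regenerate it freely, never hand-patch it.
- Every new feature starts by extending the contract, then regenerating implementations.

## Patterns to follow
- Dependency inversion: implementations depend on contracts, never the reverse.
- All public behavior must be covered by a contract-level test before merge.
```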

This is very much in the process of experimentation right now—think of it as software design philosophy in beta version.

The Compound Engineering Revolution

Now, while watching content about vibe coding, I kept running into one critique everywhere. People said AI has frozen knowledge, and unlike an engineer who gradually gains context on a project and learns from their mistakes, AI can't do that.

Every session is a fresh start, tabula rasa, blank slate. The AI doesn't remember what happened yesterday, what patterns worked last week, what preferences you have. Fair criticism, honestly.

However—and this is a big however—some people created something called Compound Engineering where AI grows after every session. Not just pretends to grow, but actually grows.

It documents its processes and its interactions with the user. It records user preferences and engineering processes. It improves along with the project, like an actual team member who's been there since day one.

The AI becomes contextually aware, historically informed, progressively smarter about YOUR specific project. This was one of the best tricks I learned, in my opinion. Game changer doesn't even begin to describe it.
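The mechanical core of this loop can be sketched in a few lines; this is my own toy illustration under assumed names (`lessons.md`, `recordLesson`, `loadContext` are all hypothetical), not how any particular compound-engineering tool actually works. Lessons are appended to a memory file after each session and prepended to the next session's context.

```typescript
import * as fs from "fs";

const MEMORY_FILE = "lessons.md"; // hypothetical on-disk memory

// Record what this session taught the system.
function recordLesson(lesson: string): void {
  fs.appendFileSync(MEMORY_FILE, `- ${lesson}\n`);
}

// Load accumulated lessons to seed the next session's prompt.
function loadContext(): string {
  if (!fs.existsSync(MEMORY_FILE)) return "";
  return "Project lessons so far:\n" + fs.readFileSync(MEMORY_FILE, "utf8");
}

recordLesson("Prefer the repository pattern for all DB access");
recordLesson("User dislikes abbreviations in variable names");
console.log(loadContext()); // next session starts already knowing both
```

The interesting part isn't the file I/O, it's that the memory is project-specific: the same model behaves differently on your project than on anyone else's.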

The results?

Swift desktop apps, Flutter mobile apps, Cloudflare Workers, p5.js games.

Each project building on lessons from the previous one, each AI session smarter than the last, each iteration teaching the system something new about how I think, how I work, what quality means to me.

It's like having an engineering partner who actually pays attention and gets better over time, not just someone who nods politely and forgets everything by tomorrow.