4 Years of Engineering Experience in 4 Months
How AI shifted the job from line-by-line coding toward orchestration, review, and delivery management.
This archive organizes the site by topic instead of chronology, so related posts link to each other explicitly and the older essays are easier to interpret in context.
Some posts from 2019–2021 are preserved as historical snapshots. They are still useful, but they reflect the state of the field at the time they were written.
Field reports on shipping with coding agents, choosing stacks, and keeping quality high while moving faster.
How AI moved the work from writing code line by line to orchestration, code review, and delivery management.
A design-philosophy essay on why AI-assisted software still needs structure, memory, and explicit process.
What actually mattered while building a real-time voice translation app with AI tools across app and backend layers.
Experiments and essays on context length, token efficiency, prompting, and practical LLM engineering.
An experiment on compressing multiple tokens into one representation to reduce attention waste.
Extending transformer context length and documenting the tradeoffs that show up in practice.
An older but still useful note on prompting, question answering, and adapting pre-trained NLP models.
Mobile UI parsing, OCR evaluation, and machine learning features that shipped into real testing products.
A detailed OCR benchmark focused on rendered mobile UI text and the failure cases that matter.
How visual regression, self-healing, and app exploration fit together in a mobile testing workflow.
Behavior cloning, heatmaps, and uncertainty-aware tap prediction for mobile automation.
Production-oriented experiments using segmentation, graph learning, and RL inside a testing company.
Practical notes on deploying models, choosing infrastructure, and understanding cloud tradeoffs.
A hands-on deployment tutorial with a clear note that the hosted demo is no longer live.
A high-level introduction to why serverless containers mattered for ML application delivery.
A cost-oriented comparison of the cloud options that were available for deep learning at the time.
Older essays that are still worth keeping online, but should be read as time-bound snapshots rather than current guidance.