Hello!
Writing
Notes on fullstack engineering, AI in production, and shipping reliable software—newest first.
Learn to set up a self-contained local environment for LLM app development using Docker Compose. Deploy vector stores, open-source models, and FastAPI for a streamlined build process.
Discover practical strategies for integrating AI tools and LLMs into your Python/TypeScript development workflow. Automate tasks, enhance code quality, and accelerate project delivery with smart AI assistance.
Discover how Claude Code Routines streamline the orchestration of LLM-powered coding tasks, enabling Python developers to build robust, predictable, AI-driven applications.
Learn to apply the Miller Principle (7 ± 2) to simplify Python applications, FastAPI APIs, and LLM prompt design, effectively reducing cognitive load and improving maintainability.
Uncover the vulnerabilities and biases in current AI agent benchmarks and learn practical Python strategies to build more robust, secure, and trustworthy LLM evaluation frameworks.
Explore how to leverage Python and prompt engineering with Large Language Models to transform modern text into a distinct 'caveman speak' style, showcasing LLMs' flexibility for creative text generation.
Explore efficient local LLM deployment with Lemonade by AMD, leveraging GPU/NPU for speed and open-source flexibility. Learn practical integration into Python applications using FastAPI for powerful AI services.
Explore practical Python strategies and techniques for developing robust Qwen3.6-Plus-powered LLM agents that interact seamlessly with tools and APIs, tackling real-world deployment challenges.
How I moved from basic prompt usage to building structured multi-agent systems using LangChain and LangGraph, and what actually changed for me.