Hello!
Writing
Notes on fullstack engineering, AI in production, and shipping reliable software—newest first.
16 articles · 62 topics · Page 1 of 2
Learn to set up a self-contained local environment for LLM app development using Docker Compose. Deploy vector stores, open-source models, and FastAPI for a streamlined build process.
Learn practical strategies and tooling to build automated CI/CD pipelines for managing, versioning, and deploying machine learning models reliably from training to production.
Learn to architect and implement full-stack web applications with LLM agents, covering backend orchestration, tool usage, and frontend interaction patterns for intelligent, production-ready systems.
Discover practical strategies for integrating AI tools and LLMs into your Python/TypeScript development workflow. Automate tasks, enhance code quality, and accelerate project delivery with smart AI assistance.
Discover how Claude Code Routines streamline the orchestration of LLM-powered coding tasks, enabling Python developers to build robust, predictable AI-driven applications.
Learn to apply the Miller Principle (7 ± 2) to simplify Python applications, FastAPI APIs, and LLM prompt design, effectively reducing cognitive load and improving maintainability.
Uncover the vulnerabilities and biases in current AI agent benchmarks and learn practical Python strategies to build more robust, secure, and trustworthy LLM evaluation frameworks.
Explore the security and privacy implications for developers as national digital identity systems like Germany's eIDAS implementation rely on proprietary mobile platforms, and learn to build resilient, open authentication with Python/FastAPI.
Explore how to leverage Python and prompt engineering with Large Language Models to transform modern text into a distinct 'caveman speak' style, showcasing LLMs' flexibility for creative text generation.
Learn how to create optimized VS Code Dev Containers for your AI/ML projects, ensuring consistent development environments, faster onboarding, and reproducible results for your team.
Explore efficient local LLM deployment with Lemonade by AMD, leveraging GPU/NPU for speed and open-source flexibility. Learn practical integration into Python applications using FastAPI for powerful AI services.
Explore practical Python strategies and techniques for developing robust Qwen3.6-Plus-powered LLM agents that interact seamlessly with tools and APIs, tackling real-world deployment challenges.