Simplify Software Design: Apply the Miller Principle to Python, FastAPI, and LLM Prompts
Learn to apply the Miller Principle (7 ± 2) to simplify Python applications, FastAPI APIs, and LLM prompt design, effectively reducing cognitive load and improving maintainability.
The Human Element in Software Design: Why We Get Overwhelmed
Ever felt like you're drowning in code, unable to keep track of all the moving parts? You're not alone. As software systems grow, so does their inherent complexity, leading to increased cognitive load for developers. This isn't just an annoyance; it slows down development, introduces bugs, and makes maintainability a nightmare. Our brains, powerful as they are, have limits, especially when it comes to short-term memory.
This is where the Miller Principle comes in. Named for psychologist George A. Miller's seminal 1956 paper, "The Magical Number Seven, Plus or Minus Two," it suggests that the average person can hold only about 5 to 9 "chunks" of information in working memory at any given time. While Miller's original work was about memory, its implications for software design are profound. If we design interfaces (be they functions, APIs, or even LLM prompts) that require us to juggle too many independent concepts simultaneously, we're setting ourselves up for overload.
The good news? By strategically applying the Miller Principle, we can simplify our designs, reduce mental strain, and build more robust, easier-to-understand systems. Let's explore how.
The Miller Principle in Practice: Simplifying Interfaces
Applying the Miller Principle isn't about dumbing down your software; it's about smart abstraction and thoughtful interface design. The goal is to reduce the number of discrete pieces of information a user (another developer, an AI model, or even your future self) needs to hold in their head to understand and interact with your code.
Key strategies include:
- Chunking: Grouping related items into a single, cohesive unit.
- Abstraction: Hiding complex details behind a simpler facade.
- Sensible Defaults: Providing pre-configured values to reduce decision points.
- Reducing Arity: Minimizing the number of direct parameters an interface exposes.
Let's see these in action across different facets of software development.
Python Modules and Functions: Managing Complexity
In Python, functions and modules are the fundamental building blocks. They are also prime candidates for Miller Principle violations.
Problem:
- Functions with too many arguments: A function requiring 8, 10, or even more arguments forces you to remember the purpose and type of each one every time you call it.
- Modules exporting too much: A module exposing dozens of public functions, classes, and variables makes it hard to grasp its primary purpose and find what you need.
Solution:
For Functions:
- Group related arguments into objects: Instead of a long list of primitive types, pass a single `dataclass` or Pydantic model. This "chunks" related data together.

```python
# Before (high cognitive load)
def create_user(name: str, email: str, password: str, age: int,
                city: str, country: str, phone: str, is_admin: bool):
    ...

# After (chunked into a UserData object)
from pydantic import BaseModel

class UserData(BaseModel):
    name: str
    email: str
    password: str
    age: int
    city: str
    country: str
    phone: str
    is_admin: bool = False  # Sensible default

def create_user(user_data: UserData):
    ...
```
- Decompose complex logic: If a function performs multiple distinct operations, break it down into smaller, focused functions. Each new function then has fewer responsibilities and arguments.
- Use sensible default values: For optional parameters, provide good defaults to reduce the number of choices a caller has to make.
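As a quick sketch of decomposition plus defaults (the function names below are hypothetical, not from any library):

```python
def normalize_email(email: str) -> str:
    """Lowercase and strip whitespace so comparisons stay consistent."""
    return email.strip().lower()


def validate_age(age: int, minimum: int = 13) -> None:
    """Sensible default: callers rarely need to think about the minimum."""
    if age < minimum:
        raise ValueError(f"age must be at least {minimum}")


def register_user(name: str, email: str, age: int = 18) -> dict:
    """Each step is delegated, so this function stays small and readable."""
    validate_age(age)
    return {"name": name, "email": normalize_email(email), "age": age}
```

Each helper carries one responsibility and at most two parameters, so a reader never has to hold the whole registration flow in mind at once.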
For Modules:
- Adhere to the Single Responsibility Principle: A module should ideally do one thing well. If it grows too large, consider splitting it into sub-modules or separate files.
- Control public interfaces: Use a leading underscore `_` for internal functions and variables to signal they're not part of the public API. Leverage `__all__` in `__init__.py` files to explicitly declare what's imported when `from my_module import *` is used, reducing visual noise.
FastAPI Endpoints: Crafting Intuitive APIs
FastAPI excels at defining APIs, but it's still up to the developer to design endpoints that are easy to understand and use. A poorly designed API can quickly overwhelm its consumers.
Problem:
- Endpoints with too many path or query parameters: Similar to function arguments, an endpoint like `/products?category=X&min_price=Y&max_price=Z&sort_by=A&order=B&page=P&limit=L` quickly becomes a cognitive burden.
- Overly complex request bodies: Requiring many top-level fields in a JSON request can be daunting.
- Endpoints doing too much: A single endpoint trying to handle creation, update, and deletion based on different request methods or body content.
Solution:
- Leverage Pydantic models for request bodies: FastAPI naturally encourages this. Group all related input data for `POST` or `PUT` requests into a single Pydantic model. This is the ultimate "chunking" for API inputs.

```python
# Before (hypothetical, less common in FastAPI, but illustrates the point)
# @app.post("/items/")
# async def create_item(name: str = Body(...), description: str | None = Body(None), price: float = Body(...)):
#     ...

# After (standard FastAPI with Pydantic - lower cognitive load for API consumers)
from pydantic import BaseModel

class ItemCreate(BaseModel):
    name: str
    description: str | None = None
    price: float
    tax: float | None = None  # Additional fields don't inflate the endpoint signature

@app.post("/items/")
async def create_item(item: ItemCreate):
    return {"message": "Item created", "item": item}
```
- Sensible path and query parameter design: Keep path parameters to essential identifiers. For filtering, sorting, and pagination, use query parameters, but aim to keep the number of distinct concepts manageable. Consider a query parameter that accepts a JSON string for complex filtering if necessary, abstracting away many individual parameters.
- Resource-oriented design: Follow REST principles. Instead of one `/users` endpoint handling all actions via complex queries, use `/users/{id}` for specific user operations and `/users/{id}/orders` for user-specific orders. This creates clear, predictable patterns that are easier to chunk.
LLM Prompts: Streamlining Your AI Interactions
The rise of LLMs has introduced a new frontier for software design. Crafting effective prompts is an art, and the Miller Principle can significantly improve both your AI interactions and the maintainability of your prompt library.
Problem:
- Long, unstructured prompts: A monolithic block of text containing context, instructions, examples, and format requirements all mashed together is difficult for both humans and the LLM to parse consistently.
- Too many simultaneous constraints: Asking the LLM to remember too many specific rules or facts at once can lead to "hallucinations" or ignored instructions.
- Inconsistent output: Vague or overly complex prompts lead to varied, hard-to-predict responses.
Solution:
- Chunk your prompt into sections: Use clear headings or delimiters to separate different types of information. This helps the LLM (and you!) process the prompt piece by piece.

```text
### Persona
You are an expert technical writer for a software engineering blog.

### Context
The user wants an explanation of the Miller Principle applied to software design.

### Task
Explain the Miller Principle concisely and provide practical examples for Python, FastAPI, and LLM prompts. Focus on reducing cognitive load.

### Format
- Start with a clear introduction.
- Use Markdown headings (## and ###).
- Include short code examples where relevant.
- Conclude with a summary.
- Maintain a clear, practical, slightly warm tone.
```

This structured approach helps the LLM prioritize and process information more effectively, leading to more consistent and higher-quality outputs.
- Limit the number of core instructions/variables: Focus each prompt on a primary objective. If you need the LLM to perform multiple distinct sub-tasks, consider chaining prompts or using a multi-turn conversation.
- Use few-shot examples as "chunks": Instead of describing a complex output format in prose, provide one or two examples. The LLM can "chunk" the pattern from the examples much more efficiently than from abstract rules.
- Abstract with templates: For common prompt structures, create reusable templates that encapsulate the boilerplate, allowing you to focus on the specific variables for each use case. This reduces the number of unique elements you need to consider for each prompt instance.
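A minimal sketch of such a prompt template using only the standard library (the section names mirror the structured-prompt example above; the function is illustrative, not a library API):

```python
from string import Template

# A reusable skeleton: the structure is fixed, only three variables change.
PROMPT_TEMPLATE = Template(
    "### Persona\n$persona\n\n"
    "### Task\n$task\n\n"
    "### Format\n$output_format"
)


def build_prompt(persona: str, task: str, output_format: str) -> str:
    """Callers juggle three 'chunks' instead of the whole prompt text."""
    return PROMPT_TEMPLATE.substitute(
        persona=persona, task=task, output_format=output_format
    )
```

Each call site now states three variables; the boilerplate structure lives in exactly one place.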
Embrace Simplicity for Better Software
The Miller Principle isn't a silver bullet, but it's a powerful lens through which to examine your software design choices. By actively seeking opportunities to "chunk" information, abstract complexity, and reduce the number of items developers (and AI models) need to juggle in their minds, you'll be on your way to building systems that are not just functional, but genuinely understandable and maintainable.
It's about respecting the limits of human cognition, leading to less cognitive load, fewer bugs, and ultimately, a more pleasant and productive development experience for everyone involved. Start small: pick one function, one endpoint, or one prompt, and ask yourself, "Can I reduce the number of things I need to remember to use this?" You might be surprised by the impact.