Beginner · 10 min read · Module 8, Lesson 1

🗺️ How to Plan an AI Project

From idea to architecture — plan before you code


Why Planning Matters

Most developers make the same mistake: they get excited about an idea and immediately start writing code. With AI projects, this is especially dangerous because:

  • API costs add up fast — every wrong turn costs real money
  • Architecture changes are painful — refactoring AI pipelines is harder than regular code
  • Model selection matters early — switching models mid-project can break everything
  • Prompt engineering needs structure — random prompts lead to random results

The difference between a successful AI project and a failed one is almost always planning, not coding skill.

Think of it this way: you wouldn't build a house without blueprints. AI projects need blueprints too.


The 5-Step Planning Process

Step 1: Define the Problem

Before anything else, answer these questions clearly:

Output
1. What does the user need?
   → "Users need to summarize long PDF documents quickly"
2. What is the input?
   → "A PDF file, 1-100 pages"
3. What is the expected output?
   → "A structured summary with key points, 1-2 paragraphs"
4. What does success look like?
   → "Summary captures 90%+ of key information in under 30 seconds"
5. Who is the user?
   → "Business professionals who read many reports daily"

Common mistake: Defining the problem too broadly.

Bad: "I want to build an AI assistant"
Good: "I want to build a CLI tool that answers questions about a specific codebase"

The more specific your problem definition, the easier every other step becomes.
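One way to make the definition concrete is to write it down as a typed record in your project. This is just an illustrative sketch (the `ProblemSpec` name and fields are not from any library) — the point is that a spec you can't fill in completely is a spec that's still too vague.

```typescript
// problem-spec.ts — capture the five answers as a typed record
// (illustrative; field names are our own convention)
interface ProblemSpec {
  userNeed: string;
  input: string;
  output: string;
  successCriteria: string;
  targetUser: string;
}

const pdfSummarizer: ProblemSpec = {
  userNeed: "Summarize long PDF documents quickly",
  input: "A PDF file, 1-100 pages",
  output: "A structured summary with key points, 1-2 paragraphs",
  successCriteria: "Captures 90%+ of key information in under 30 seconds",
  targetUser: "Business professionals who read many reports daily",
};
```

If any field ends up as "everything" or "it depends", go back to Step 1 before writing more code.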

Step 2: Choose the Right Approach

Not every problem needs AI. Ask yourself:

Output
Decision tree:
├── Does this need AI at all?
│   ├── Can regex/rules solve it? → Use regex
│   ├── Can a database query solve it? → Use SQL
│   └── Does it need understanding/generation? → Use AI
├── Which AI approach?
│   ├── Simple text task → Claude API (direct call)
│   ├── Complex multi-step task → Claude Code (agentic)
│   ├── Image/vision task → Claude with vision
│   └── Structured data extraction → Claude with tool use
├── Which model?
│   ├── Simple/fast tasks → Claude Haiku (cheap, fast)
│   ├── General tasks → Claude Sonnet (balanced)
│   └── Complex reasoning → Claude Opus (powerful, expensive)
└── API vs Claude Code?
    ├── One-shot tasks → API
    ├── Multi-step with tools → Claude Code / Agentic
    └── Interactive sessions → Claude Code CLI

Key principle: Start with the simplest approach that could work. You can always upgrade later.

Step 3: Design the Architecture

Map out the flow of your application:

Output
Input → Processing → Output

Example for a PDF Summarizer:

┌──────────┐    ┌──────────────┐    ┌─────────────┐    ┌──────────┐
│ PDF File │───→│ Text Extract │───→│ Claude API  │───→│ Summary  │
└──────────┘    └──────────────┘    └─────────────┘    └──────────┘
                  (pdf-parse)        (with prompt)      (markdown)

For each component, decide:

Output
1. Tools and libraries needed:
   - PDF parsing: pdf-parse, pdfjs-dist
   - AI: @anthropic-ai/sdk
   - CLI interface: commander, inquirer
2. External APIs:
   - Anthropic API (Claude)
   - Any other services?
3. Data flow:
   - How does data move between components?
   - What format at each stage?
   - Where are the error points?
4. Error handling:
   - What if the PDF is too large?
   - What if the API is down?
   - What if the response is malformed?

Step 4: Set Up the Project Structure

A well-organized project saves hours of debugging later.

Output
my-ai-project/
├── src/
│   ├── index.ts          # Entry point
│   ├── config.ts         # Configuration & env vars
│   ├── prompts/
│   │   ├── system.ts     # System prompts
│   │   └── templates.ts  # Prompt templates
│   ├── services/
│   │   ├── claude.ts     # Claude API wrapper
│   │   └── parser.ts     # Input parsing logic
│   ├── utils/
│   │   ├── files.ts      # File handling utilities
│   │   └── validators.ts # Input validation
│   └── types/
│       └── index.ts      # TypeScript types
├── tests/
│   ├── unit/
│   └── integration/
├── .env.example          # Environment template
├── .gitignore            # Never commit .env!
├── package.json
├── tsconfig.json
└── CLAUDE.md             # Instructions for Claude Code

Critical files to create first:

TypeScript
// config.ts — centralize all configuration
import dotenv from "dotenv";

dotenv.config();

export const config = {
  anthropicApiKey: process.env.ANTHROPIC_API_KEY!,
  model: process.env.MODEL || "claude-sonnet-4-20250514",
  maxTokens: parseInt(process.env.MAX_TOKENS || "4096"),
  maxFileSize: parseInt(process.env.MAX_FILE_SIZE || "10485760"), // 10 MB
};
TypeScript
// prompts/system.ts — keep prompts organized
export const SYSTEM_PROMPT = `
You are a document summarizer. Your job is to:
1. Read the provided document text
2. Extract key points and main arguments
3. Produce a concise, structured summary

Rules:
- Keep summaries under 500 words
- Use bullet points for key findings
- Include a one-sentence TL;DR at the top
`;
Output
# .env.example — document required variables
ANTHROPIC_API_KEY=your-key-here
MODEL=claude-sonnet-4-20250514
MAX_TOKENS=4096
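It also pays to fail fast when a required variable is missing, instead of letting the first API call die with a cryptic auth error. A hypothetical `loadConfig` helper (our own sketch, mirroring the config example above):

```typescript
// config-check.ts — fail fast on missing required env vars (illustrative)
type Env = Record<string, string | undefined>;

function loadConfig(env: Env = process.env) {
  const apiKey = env.ANTHROPIC_API_KEY;
  if (!apiKey) {
    // Point the user at the fix, not just the symptom.
    throw new Error("ANTHROPIC_API_KEY is not set — copy .env.example to .env");
  }
  return {
    anthropicApiKey: apiKey,
    model: env.MODEL ?? "claude-sonnet-4-20250514",
    maxTokens: Number.parseInt(env.MAX_TOKENS ?? "4096", 10),
  };
}
```

Call `loadConfig()` once at startup; everything downstream can then assume the key exists.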

Step 5: Build Incrementally

Never try to build everything at once. Follow this order:

Output
Phase 1: Prove the concept (Day 1)
├── Hardcode a simple input
├── Make one successful API call
├── Verify the output format
└── Checkpoint: "Does this approach work at all?"

Phase 2: Build the pipeline (Day 2-3)
├── Add real input handling
├── Connect all components
├── Add basic error handling
└── Checkpoint: "Does it work end-to-end?"

Phase 3: Harden and polish (Day 4-5)
├── Add input validation
├── Handle edge cases
├── Add proper error messages
├── Add logging
└── Checkpoint: "Can a real user use this?"

Phase 4: Optimize (Day 6+)
├── Reduce API costs (prompt optimization)
├── Improve speed (caching, streaming)
├── Add tests
└── Checkpoint: "Is this production-ready?"

Golden rule: Each phase should produce something that works. If Phase 1 fails, stop and reconsider your approach.


Example: Planning a PDF Summarizer

Let's walk through the entire planning process for a real project.

Problem Definition

Output
Goal:    Build a CLI tool that summarizes PDF documents
User:    Developers and business professionals
Input:   PDF file path
Output:  Markdown summary printed to terminal
Success: Accurate summary in under 30 seconds

Approach Selection

Output
✓ Needs AI?           Yes — understanding and summarizing text
✓ Which approach?     Direct API call (single task)
✓ Which model?        Sonnet (balanced cost/quality)
✓ API vs Claude Code? API (building a standalone tool)

Architecture

Output
1. CLI input (file path)
2. Validate file (exists? is PDF? size OK?)
3. Extract text from PDF (pdf-parse library)
4. Chunk text if needed (for large documents)
5. Send to Claude API with system prompt
6. Format and display summary
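The chunking step deserves a sketch, because it is where large documents usually break a naive pipeline. A minimal character-based chunker, assuming paragraph-delimited text (a real version would count tokens, not characters — this is an illustration, not a library function):

```typescript
// Naive chunker: pack whole paragraphs into chunks of at most maxChars.
// A single paragraph longer than maxChars still becomes its own chunk.
function chunkText(text: string, maxChars: number): string[] {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks: string[] = [];
  let current = "";
  for (const p of paragraphs) {
    // +2 accounts for the "\n\n" separator we re-insert between paragraphs.
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      current = p;
    } else {
      current = current ? current + "\n\n" + p : p;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Splitting on paragraph boundaries (rather than a hard character cut) keeps each chunk coherent, which noticeably improves summary quality.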

Implementation Plan

TypeScript
// Phase 1: Prove the concept
// Just test: can we extract PDF text and get a summary?
import fs from "fs";
import pdf from "pdf-parse";
import Anthropic from "@anthropic-ai/sdk";

async function quickTest() {
  const dataBuffer = fs.readFileSync("test.pdf");
  const pdfData = await pdf(dataBuffer);

  const client = new Anthropic();
  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: `Summarize this document:\n\n${pdfData.text}`,
      },
    ],
  });

  console.log(response.content[0].text);
}

quickTest();

Checklist Before Starting Any AI Project

Use this checklist every time you begin a new project:

Output
□ Problem Definition
  □ Clear problem statement written down
  □ Input/output format defined
  □ Success criteria established
  □ Target user identified

□ Technical Decisions
  □ Confirmed AI is needed (not over-engineering)
  □ Model selected with reasoning
  □ API vs Claude Code decision made
  □ Cost estimate calculated

□ Architecture
  □ Data flow diagram created
  □ All components identified
  □ External dependencies listed
  □ Error scenarios mapped

□ Project Setup
  □ Folder structure created
  □ package.json initialized
  □ TypeScript configured
  □ .env.example created
  □ .gitignore includes .env
  □ CLAUDE.md written (if using Claude Code)

□ Development Plan
  □ Phases defined
  □ Phase 1 (proof of concept) scoped
  □ Each phase has a checkpoint
  □ Testing strategy decided

Common Mistakes to Avoid

1. Over-Engineering

Output
BAD:  "I'll build a microservices architecture with message queues,
       a database, and a React frontend"
GOOD: "I'll build a simple CLI script that does one thing well"

Start simple. Add complexity only when you have evidence you need it.

2. Not Testing Early

Output
BAD:  Write 500 lines of code, then test for the first time
GOOD: Write 20 lines, test, write 20 more, test again

With AI projects, you need to verify your prompts work before building the rest of the application around them.
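A cheap way to "test early" with prompts is a format check you can run against every response during development. This sketch assumes the rules from the system prompt example (TL;DR line, bullet points, under 500 words); the `checkSummaryFormat` name and thresholds are our own:

```typescript
// Return a list of format problems; empty array means the response
// matches the system prompt's rules. Illustrative, not exhaustive.
function checkSummaryFormat(summary: string): string[] {
  const problems: string[] = [];
  if (!/TL;DR/i.test(summary)) problems.push("missing TL;DR line");
  if (!/^\s*[-•*]/m.test(summary)) problems.push("no bullet points");
  if (summary.split(/\s+/).length > 500) problems.push("over 500 words");
  return problems;
}
```

Run it on a handful of real responses before building the rest of the pipeline: if the model ignores your formatting rules now, it will keep ignoring them at scale.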

3. Ignoring Costs

Output
Cost calculation example:
- Claude Sonnet: ~$3/M input tokens, ~$15/M output tokens
- Average PDF: ~5,000 tokens
- 100 PDFs/day = 500K tokens/day
- Monthly cost: ~$50-75 for input + output

Always estimate costs BEFORE building.
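The arithmetic above is easy to script so you can re-run it whenever your volumes or model change. Prices here are the example's assumptions, not current list prices — always check before relying on the numbers:

```typescript
// Back-of-envelope monthly cost estimate. Prices are per million tokens
// and are ASSUMPTIONS from the example above — verify current pricing.
interface Pricing {
  inputPerMTok: number;
  outputPerMTok: number;
}

function estimateMonthlyCost(
  docsPerDay: number,
  inputTokensPerDoc: number,
  outputTokensPerDoc: number,
  pricing: Pricing,
  daysPerMonth = 30,
): number {
  const inputTokens = docsPerDay * inputTokensPerDoc * daysPerMonth;
  const outputTokens = docsPerDay * outputTokensPerDoc * daysPerMonth;
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMTok +
    (outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}

const sonnet: Pricing = { inputPerMTok: 3, outputPerMTok: 15 };
// 100 PDFs/day, ~5,000 input + ~500 output tokens each
console.log(estimateMonthlyCost(100, 5000, 500, sonnet).toFixed(2)); // → 67.50
```

That $67.50 lands inside the $50-75 range quoted above; the useful part is seeing how the total moves when you swap in Haiku prices or double the daily volume.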

4. Hardcoding Prompts

Output
BAD:  Prompts scattered throughout the codebase
GOOD: All prompts in a /prompts directory, easy to find and modify

You will iterate on prompts constantly. Make them easy to change.
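In practice this means prompts live as exported functions, not inline strings. A hypothetical template for the `/prompts` directory (our own sketch, in the spirit of the `prompts/templates.ts` file from the project structure):

```typescript
// prompts/templates.ts — one function per prompt, parameters explicit.
// Illustrative template, not from any existing codebase.
export function buildSummaryPrompt(documentText: string, maxWords = 500): string {
  return [
    `Summarize the following document in under ${maxWords} words.`,
    "Use bullet points for key findings and start with a one-sentence TL;DR.",
    "",
    "<document>",
    documentText,
    "</document>",
  ].join("\n");
}
```

When you inevitably tweak the wording, there is exactly one place to edit, and the call sites (`buildSummaryPrompt(pdfData.text)`) never change.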

5. No Error Handling for API Failures

TypeScript
// BAD: No error handling
const response = await client.messages.create({...});

// GOOD: Proper error handling
try {
  const response = await client.messages.create({...});
} catch (error) {
  if (error.status === 429) {
    console.log("Rate limited. Waiting before retry...");
    await sleep(5000);
    // retry logic
  } else if (error.status === 500) {
    console.log("API error. Please try again later.");
  } else {
    throw error;
  }
}

Summary

The 5-step planning process:

| Step | Key Question | Output |
|------|--------------|--------|
| 1. Define the Problem | What does the user need? | Problem statement |
| 2. Choose Approach | Does this need AI? Which kind? | Technical decisions |
| 3. Design Architecture | How do the pieces fit together? | Data flow diagram |
| 4. Set Up Structure | Where does everything live? | Project folders |
| 5. Build Incrementally | What is the smallest working version? | Phase plan |

Remember: 30 minutes of planning saves 3 hours of debugging. Plan first, code second, and your AI projects will succeed.