Intermediate · 15 min read · Module 5, Lesson 2
# 💬 Messages API Deep Dive

Master conversations, system prompts, and response handling.
The Messages API is the foundation of everything you build with Claude.
## Anatomy of a Request

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const response = await client.messages.create({
  // REQUIRED
  model: "claude-sonnet-4-6", // Which model
  max_tokens: 1024,           // Max output length
  messages: [                 // Conversation
    { role: "user", content: "Hello!" }
  ],

  // OPTIONAL
  system: "You are a helpful teacher.", // System prompt
  temperature: 0.7,                     // Creativity (0-1)
  stop_sequences: ["END"],              // Custom stop words
});
```

## System Prompts — Setting Claude's Behavior
System prompts act as persistent instructions:
```ts
// A customer support bot
const response = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  system: `You are a customer support agent for TechCo.

Rules:
- Be friendly and professional
- If you don't know something, say so
- Never share internal policies
- Always offer to escalate to a human agent`,
  messages: [
    { role: "user", content: "My order hasn't arrived yet" }
  ]
});
```

## Multi-Turn Conversations
Build up conversations over time:
```ts
let conversationHistory = [];

async function chat(userMessage) {
  conversationHistory.push({
    role: "user",
    content: userMessage
  });

  const response = await client.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: conversationHistory
  });

  const assistantMessage = response.content[0].text;

  conversationHistory.push({
    role: "assistant",
    content: assistantMessage
  });

  return assistantMessage;
}
```
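One caveat: `conversationHistory` grows without bound, and every past turn is re-sent (and billed as input tokens) on each call. A common refinement, sketched here as an addition rather than taken from the lesson, is to cap the history to the last few turns before sending:

```ts
// Keep only the most recent `maxTurns` user/assistant exchanges.
// Pure helper, so it can be tested without calling the API.
type Message = { role: "user" | "assistant"; content: string };

function trimHistory(history: Message[], maxTurns: number): Message[] {
  // One turn = a user message plus the assistant reply (2 messages)
  return history.slice(-maxTurns * 2);
}
```

Inside `chat`, you would then pass `trimHistory(conversationHistory, 10)` as `messages` instead of the full array, at the cost of Claude forgetting the oldest turns.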
```ts
// Usage
await chat("What is JavaScript?");
await chat("How is it different from Python?"); // Claude remembers context!
```

## Temperature: Controlling Creativity
| Temperature | Effect | Good For |
|---|---|---|
| 0 | Near-deterministic, most consistent answers | Math, facts, code |
| 0.3 | Mostly consistent, slight variation | Customer support |
| 0.7 | Balanced | General conversation |
| 1.0 | Most creative and varied (the API default) | Creative writing, brainstorming |
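If you find yourself repeating these values across requests, the table can be encoded as a small lookup. The preset names below are made up for illustration, not part of the SDK:

```ts
// Temperature presets mirroring the table above (names are illustrative)
const TEMPERATURE = {
  deterministic: 0, // Math, facts, code
  support: 0.3,     // Customer support
  balanced: 0.7,    // General conversation
  creative: 1.0,    // Creative writing, brainstorming
} as const;

// Example: a low-variance request for a support bot
// const response = await client.messages.create({
//   model: "claude-sonnet-4-6",
//   max_tokens: 1024,
//   temperature: TEMPERATURE.support,
//   messages: [{ role: "user", content: "Where is my order?" }],
// });
```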
## Handling Responses Properly
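One detail worth knowing first: `response.content` is an array of content blocks, and features like tool use can produce more than one, so `content[0].text` is only safe for plain text replies. A small helper that concatenates every text block — a sketch, with the block shape assumed from the SDK's types:

```ts
// Concatenate all text blocks in a response's content array,
// skipping non-text blocks such as tool_use.
type ContentBlock = { type: "text"; text: string } | { type: string };

function extractText(content: ContentBlock[]): string {
  return content
    .filter((b): b is { type: "text"; text: string } => b.type === "text")
    .map((b) => b.text)
    .join("");
}
```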
```ts
const response = await client.messages.create({...});

// Get the text
const text = response.content[0].text;

// Check why it stopped
if (response.stop_reason === "max_tokens") {
  console.log("Response was cut off — increase max_tokens");
}

// Check token usage
console.log(`Input: ${response.usage.input_tokens} tokens`);
console.log(`Output: ${response.usage.output_tokens} tokens`);
```

## Streaming Responses
Get responses token-by-token (great for UIs):
```ts
const stream = await client.messages.stream({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a poem" }]
});

for await (const event of stream) {
  if (event.type === "content_block_delta" &&
      event.delta.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
```

Next up: Text generation — summaries, content, code, and more.