---
title: "Chat Applications"
description: "Interactive CLI chat example (TypeScript)"
---

Build conversational AI agents with the CLI chat example.

## Available Examples

| Example  | Language   | Location                   | Description               |
| -------- | ---------- | -------------------------- | ------------------------- |
| CLI Chat | TypeScript | `examples/chat/chat.ts`    | Interactive terminal chat |

---

## CLI Chat

This is the foundational example. It demonstrates:

- Runtime initialization with plugins
- Message handling and streaming
- Conversation flow management

```bash
cd examples/chat
cp .env.example .env
# Add at least one provider API key to .env
bun install
bun run start
```

**Key code:**

```typescript
// Assumed import paths; adjust to your project layout.
import { AgentRuntime } from "@elizaos/core";
import { sqlPlugin } from "@elizaos/plugin-sql";
import { openaiPlugin } from "@elizaos/plugin-openai";

const runtime = new AgentRuntime({
  character: { name: "Eliza", bio: "A helpful AI assistant." },
  plugins: [sqlPlugin, openaiPlugin],
});

await runtime.initialize();

// Handle messages with streaming
await runtime.messageService.handleMessage(
  runtime,
  message,
  async (content) => {
    process.stdout.write(content.text);
    return [];
  },
);
```

---

## Features Demonstrated

### Character Definition

Define your agent's personality:

```typescript
const character = {
  name: "Eliza",
  bio: "A helpful AI assistant who loves to learn.",
  system: "You are friendly, knowledgeable, and concise.",
  topics: ["technology", "science", "philosophy"],
  style: {
    tone: "casual",
    formality: "medium",
  },
};
```

### Message Memory

Messages are automatically stored for context:

```typescript
// Assumed import paths: `createMessageMemory` from elizaOS core, `uuidv4`
// from the `uuid` package; adjust to your project layout.
import { createMessageMemory } from "@elizaos/core";
import { v4 as uuidv4 } from "uuid";

// Create message memory
const message = createMessageMemory({
  id: uuidv4(),
  entityId: userId,
  roomId,
  content: { text: userInput },
});

// Previous messages are available in context
const memories = await runtime.getMemories({ roomId, count: 10 });
```
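
Retrieval with `count: 10` keeps only the most recent turns. A minimal, library-free sketch of that trimming logic (local stand-in types, not the real elizaOS API):

```typescript
// Stand-in for a stored message memory (illustrative only).
type Memory = { text: string; createdAt: number };

// Return the newest `count` memories, oldest first, mirroring
// the `count: 10` option used with runtime.getMemories above.
function recentMemories(all: Memory[], count: number): Memory[] {
  return [...all]
    .sort((a, b) => a.createdAt - b.createdAt) // oldest → newest
    .slice(-count);                            // keep the newest `count`
}

const all = Array.from({ length: 15 }, (_, i) => ({
  text: `msg ${i}`,
  createdAt: i,
}));
const recent = recentMemories(all, 10);
console.log(recent.length, recent[0].text); // prints: 10 msg 5
```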

### Streaming Responses

Display responses as they're generated:

```typescript
await runtime.messageService.handleMessage(
  runtime,
  message,
  async (content) => {
    if (content?.text) {
      // Write each chunk as it arrives
      process.stdout.write(content.text);
    }
    return [];
  },
);
```
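
If you also need the full reply after streaming (to store or log it), the chunk callback can accumulate text as it echoes. A self-contained sketch; `Content` here is a local stand-in for the runtime's content type:

```typescript
// Local stand-in for the runtime's streamed content chunks (illustrative).
type Content = { text?: string };

// Build a callback that echoes each chunk and keeps the full transcript.
function makeStreamCollector(write: (s: string) => void) {
  let full = "";
  const onChunk = async (content: Content) => {
    if (content?.text) {
      write(content.text);  // write each chunk as it arrives
      full += content.text; // accumulate for later use
    }
    return [];
  };
  return { onChunk, getFull: () => full };
}

// Usage: pass collector.onChunk as the streaming callback,
// then read getFull() once the message is handled.
const collector = makeStreamCollector((s) => process.stdout.write(s));
await collector.onChunk({ text: "Hello, " });
await collector.onChunk({ text: "world" });
console.log("\nfull reply:", collector.getFull()); // prints: full reply: Hello, world
```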

---

## Customization

### Use Different Models

```typescript
// Use Claude instead of GPT
import { anthropicPlugin } from "@elizaos/plugin-anthropic";

const runtime = new AgentRuntime({
  character,
  plugins: [sqlPlugin, anthropicPlugin],
});
```

### Add Conversation Persistence

```typescript
// Use PostgreSQL for production
const runtime = new AgentRuntime({
  character,
  plugins: [sqlPlugin, openaiPlugin],
  database: {
    url: process.env.POSTGRES_URL,
  },
});
```

### Multi-turn Conversations

```typescript
// The runtime automatically includes conversation history for a roomId.
// `handleMessage` below stands for your own wrapper around
// runtime.messageService.handleMessage; keep sending to the same roomId.
const roomId = stringToUuid("persistent-chat-room");

// Message 1
await handleMessage("Hello, I'm learning about AI");

// Message 2 - context from message 1 is included
await handleMessage("Can you explain transformers?");

// Message 3 - context from messages 1 & 2 included
await handleMessage("How do they relate to what we discussed?");
```
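
The `handleMessage` helper above is assumed rather than part of the API. One way to sketch it is a closure that binds a fixed `roomId`, so every call lands in the same conversation and sees the prior turns (all names here are illustrative, not the real runtime):

```typescript
// Stand-in for a stored turn and the runtime's message store (illustrative).
type Memory = { roomId: string; text: string };
const store: Memory[] = [];

// Bind a roomId once; every call then shares that room's history.
function makeRoom(roomId: string) {
  return async function handleMessage(text: string): Promise<Memory[]> {
    store.push({ roomId, text });
    // A real runtime would build the model prompt from these prior turns.
    return store.filter((m) => m.roomId === roomId);
  };
}

const handleMessage = makeRoom("persistent-chat-room");
await handleMessage("Hello, I'm learning about AI");
const context = await handleMessage("Can you explain transformers?");
console.log(context.length); // prints: 2 (both turns share the room)
```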

---

## Next Steps

<CardGroup cols={2}>
  <Card
    title="REST API Examples"
    icon="server"
    href="/examples-gallery/rest-apis"
  >
    Expose your chat agent via HTTP
  </Card>
  <Card title="Web Apps" icon="browser" href="/examples-gallery/web-apps">
    Build browser-based chat interfaces
  </Card>
</CardGroup>
