43 changes: 32 additions & 11 deletions apps/docs/integrations/ai-sdk.mdx
@@ -32,10 +32,13 @@ Automatically inject user profiles into every LLM call for instant personalization

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { withSupermemory } from "@supermemory/tools/vercel"
import { openai } from "@ai-sdk/openai"

const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123")
const modelWithMemory = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456"
})

const result = await generateText({
model: modelWithMemory,
@@ -44,11 +47,13 @@ const result = await generateText({
```

<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories:
**Memory saving is enabled by default.** The middleware automatically saves conversations to memory. To disable memory saving:

```typescript
const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123", {
addMemory: "always"
const modelWithMemory = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
addMemory: "never"
})
```
</Note>
@@ -58,27 +63,39 @@ const result = await generateText({
**Profile Mode (Default)** - Retrieves the user's complete profile:

```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "profile" })
const model = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "profile"
})
```

**Query Mode** - Searches memories based on the user's message:

```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "query" })
const model = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "query"
})
```

**Full Mode** - Combines profile AND query-based search:

```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "full" })
const model = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "full"
})
```

### Custom Prompt Templates

Customize how memories are formatted. The template receives `userMemories`, `generalSearchMemories`, and `searchResults` (raw array for filtering by metadata):

```typescript
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/ai-sdk"
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/vercel"

const claudePrompt = (data: MemoryPromptData) => `
<context>
@@ -91,7 +108,9 @@ const claudePrompt = (data: MemoryPromptData) => `
</context>
`.trim()

const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
const model = withSupermemory(anthropic("claude-3-sonnet"), {
containerTag: "user-123",
customId: "conv-456",
mode: "full",
promptTemplate: claudePrompt
})
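
The raw `searchResults` array mentioned above can be filtered by metadata before formatting. A minimal sketch of that idea, using a local stand-in for the `MemoryPromptData` shape — the result and metadata field names here are assumptions, not the package's actual types:

```typescript
// Sketch: filter raw search results by metadata inside a custom prompt template.
// `SearchResult` / `PromptData` are hypothetical stand-ins for the real types;
// check the `MemoryPromptData` type exported by the package for the true shape.
type SearchResult = { memory: string; metadata?: Record<string, unknown> }
type PromptData = {
  userMemories: string
  generalSearchMemories: string
  searchResults?: SearchResult[]
}

const projectScopedPrompt = (data: PromptData): string => {
  // Keep only results tagged with the project we care about.
  const scoped = (data.searchResults ?? []).filter(
    (r) => r.metadata?.project === "acme"
  )
  const lines = scoped.map((r) => `- ${r.memory}`)
  return ["<context>", data.userMemories, ...lines, "</context>"].join("\n")
}
```

Pass a function like this as `promptTemplate` to scope injected memories to a subset of your documents.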
@@ -100,7 +119,9 @@ const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
### Verbose Logging

```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
const model = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
verbose: true
})
// Console output shows memory retrieval details
623 changes: 37 additions & 586 deletions bun.lock

Large diffs are not rendered by default.

155 changes: 95 additions & 60 deletions packages/tools/README.md
@@ -57,44 +57,45 @@ const addTool = addMemoryTool(process.env.SUPERMEMORY_API_KEY!, {

#### AI SDK Middleware with Supermemory

- `withSupermemory` takes advantage of the supermemory profile v4 endpoint, personalized by container tag
- You can provide the Supermemory API key via the `apiKey` option to `withSupermemory` (recommended for browser usage), or fall back to `SUPERMEMORY_API_KEY` in the environment for server usage.
- `withSupermemory` wraps any language model with supermemory capabilities using the v4 profile endpoint
- You can provide the Supermemory API key via the `apiKey` option (recommended for browser usage), or fall back to `SUPERMEMORY_API_KEY` in the environment for server usage
- **Per-turn caching**: Memory injection is cached for tool-call continuations within the same user turn. The middleware detects when the AI SDK is continuing a multi-step flow (e.g., after a tool call) and reuses the cached memories instead of making redundant API calls. A fresh fetch occurs on each new user message turn.

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithMemory = withSupermemory(openai("gpt-5"), "user_id_life")
const modelWithMemory = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conversation-456",
})

const result = await generateText({
model: modelWithMemory,
messages: [{ role: "user", content: "where do i live?" }],
model: modelWithMemory,
messages: [{ role: "user", content: "where do i live?" }],
})

console.log(result.text)
```

#### Conversation Grouping
#### Configuration Options

Use the `conversationId` option to group messages into a single document for contextual memory generation:
The `withSupermemory` function accepts a model and a configuration object:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithMemory = withSupermemory(openai("gpt-5"), "user_id_life", {
conversationId: "conversation-456"
withSupermemory(model, {
containerTag: string, // Required: User/container identifier for memory scoping
customId: string, // Required: Conversation ID for grouping messages
mode?: "profile" | "query" | "full", // Memory retrieval mode (default: "profile")
addMemory?: "always" | "never", // Auto-save conversations (default: "always")
searchMode?: "memories" | "hybrid" | "documents", // Search mode (default: "memories")
searchLimit?: number, // Max search results for hybrid/documents mode (default: 10)
verbose?: boolean, // Enable detailed logging (default: false)
apiKey?: string, // Supermemory API key (falls back to env var)
baseUrl?: string, // Custom API base URL
promptTemplate?: (data: MemoryPromptData) => string, // Custom memory formatting
})

const result = await generateText({
model: modelWithMemory,
messages: [{ role: "user", content: "where do i live?" }],
})

console.log(result.text)
```

#### Verbose Mode
@@ -106,21 +107,23 @@
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithMemory = withSupermemory(openai("gpt-5"), "user_id_life", {
verbose: true
const modelWithMemory = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
verbose: true,
})

const result = await generateText({
model: modelWithMemory,
messages: [{ role: "user", content: "where do i live?" }],
model: modelWithMemory,
messages: [{ role: "user", content: "where do i live?" }],
})

console.log(result.text)
```

When verbose mode is enabled, you'll see console output like:
```
[supermemory] Searching memories for container: user_id_life
[supermemory] Searching memories for container: user-123
[supermemory] User message: where do i live?
[supermemory] System prompt exists: false
[supermemory] Found 3 memories
@@ -139,11 +142,10 @@
import { openai } from "@ai-sdk/openai"

// Uses profile mode by default - gets all user profile memories
const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")

// Explicitly specify profile mode
const modelWithProfile = withSupermemory(openai("gpt-4"), "user-123", {
mode: "profile"
const modelWithMemory = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "profile",
})

const result = await generateText({
@@ -158,8 +160,10 @@
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithQuery = withSupermemory(openai("gpt-4"), "user-123", {
mode: "query"
const modelWithQuery = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "query",
})

const result = await generateText({
@@ -174,8 +178,10 @@
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithFull = withSupermemory(openai("gpt-4"), "user-123", {
mode: "full"
const modelWithFull = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "full",
})

const result = await generateText({
@@ -184,38 +190,58 @@
})
```

#### Automatic Memory Capture
#### RAG with Hybrid Search

The middleware can automatically save user messages as memories:
Use `searchMode: "hybrid"` to search both memories AND document chunks (recommended for RAG applications):

**Always Save Memories** - Automatically stores every user message as a memory:
```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithAutoSave = withSupermemory(openai("gpt-4"), "user-123", {
addMemory: "always"
const ragModel = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "full",
searchMode: "hybrid", // Search both memories and document chunks
searchLimit: 15, // Return up to 15 results
})

const result = await generateText({
model: modelWithAutoSave,
messages: [{ role: "user", content: "I prefer React with TypeScript for my projects" }],
model: ragModel,
messages: [{ role: "user", content: "What's in my documents about quarterly goals?" }],
})
// This message will be automatically saved as a memory
```

**Never Save Memories (Default)** - Only retrieves memories without storing new ones:
#### Automatic Memory Capture

The middleware can automatically save conversations as memories:

**Always Save Memories (Default)** - Automatically stores conversations:
```typescript
const modelWithNoSave = withSupermemory(openai("gpt-4"), "user-123")
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithAutoSave = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
addMemory: "always",
})

const result = await generateText({
model: modelWithAutoSave,
messages: [{ role: "user", content: "I prefer React with TypeScript for my projects" }],
})
// This conversation will be automatically saved as a memory
```

**Combined Options** - Use verbose logging with specific modes and memory storage:
**Never Save Memories** - Only retrieves memories without storing new ones:
```typescript
const modelWithOptions = withSupermemory(openai("gpt-4"), "user-123", {
mode: "profile",
addMemory: "always",
verbose: true
const modelWithNoSave = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
addMemory: "never",
})
```

@@ -239,7 +265,9 @@
</user_memories>
`.trim()

const modelWithCustomPrompt = withSupermemory(openai("gpt-4"), "user-123", {
const modelWithCustomPrompt = withSupermemory(openai("gpt-4"), {
containerTag: "user-123",
customId: "conv-456",
mode: "full",
promptTemplate: customPrompt,
})
@@ -646,23 +674,30 @@ Without `strict: true`, optional fields like `includeFullDocs` and `limit` won't

### withSupermemory Middleware Options

The `withSupermemory` middleware accepts additional configuration options:
The `withSupermemory` middleware accepts a model and a configuration object:

```typescript
interface WithSupermemoryOptions {
conversationId?: string
verbose?: boolean
mode?: "profile" | "query" | "full"
addMemory?: "always" | "never"
/** Optional Supermemory API key. Use this in browser environments. */
apiKey?: string
interface WithSupermemoryConfig {
containerTag: string // Required: User/container identifier for memory scoping
customId: string // Required: Conversation ID for grouping messages
verbose?: boolean // Enable detailed logging (default: false)
mode?: "profile" | "query" | "full" // Memory retrieval mode (default: "profile")
searchMode?: "memories" | "hybrid" | "documents" // Search mode (default: "memories")
searchLimit?: number // Max search results for hybrid/documents mode (default: 10)
addMemory?: "always" | "never" // Auto-save conversations (default: "always")
apiKey?: string // Supermemory API key (falls back to SUPERMEMORY_API_KEY env var)
baseUrl?: string // Custom API base URL
promptTemplate?: (data: MemoryPromptData) => string // Custom memory formatting
}
```

- **conversationId**: Optional conversation ID to group messages into a single document for contextual memory generation
- **containerTag**: Required. The container tag/identifier for memory search (e.g., user ID, project ID)
- **customId**: Required. Custom ID to group messages into a single document for contextual memory generation
- **verbose**: Enable detailed logging of memory search and injection process (default: false)
- **mode**: Memory search mode - "profile" (default), "query", or "full"
- **addMemory**: Automatic memory storage mode - "always" or "never" (default: "never")
- **searchMode**: Search mode - "memories" (default), "hybrid" (memories + chunks), or "documents" (chunks only)
- **searchLimit**: Maximum number of search results when using hybrid/documents mode (default: 10)
- **addMemory**: Automatic memory storage mode - "always" (default) or "never"
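
The defaults listed above can be pictured as a simple merge over the user's config. A hypothetical sketch of that resolution — this mirrors the documented defaults and is not the package's own code:

```typescript
// Sketch: apply the documented defaults to a WithSupermemoryConfig.
// The interface mirrors the one documented above; `resolveConfig` is
// a hypothetical helper for illustration only.
interface WithSupermemoryConfig {
  containerTag: string
  customId: string
  verbose?: boolean
  mode?: "profile" | "query" | "full"
  searchMode?: "memories" | "hybrid" | "documents"
  searchLimit?: number
  addMemory?: "always" | "never"
}

const resolveConfig = (config: WithSupermemoryConfig) => ({
  // Defaults first, so any value the caller sets wins.
  verbose: false,
  mode: "profile" as const,
  searchMode: "memories" as const,
  searchLimit: 10,
  addMemory: "always" as const,
  ...config,
})
```

Only `containerTag` and `customId` must be supplied; everything else falls back to the defaults shown.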

## Available Tools
