{
  "interval": {
    "intervalStart": "2026-04-08T00:00:00.000Z",
    "intervalEnd": "2026-04-09T00:00:00.000Z",
    "intervalType": "day"
  },
  "repository": "elizaos/eliza",
  "overview": "From 2026-04-08 to 2026-04-09, elizaos/eliza had 0 new PRs (2 merged), 0 new issues, and 3 active contributors.",
  "topIssues": [],
  "topPRs": [
    {
      "id": "PR_kwDOMT5cIs7I6dg2",
      "title": "feat: Bring Odi logging, Memory lock down, banner, other core enh",
      "author": "odilitime",
      "number": 6562,
"body": "- banner, init hook, reply optimization, roles warnedUnnamedEntities, runtime (embedding cache, callerInfo, safeReplacer, onlyInclude, logPrompt/logResponse gating), logger file output\r\n- DISABLE_MEMORY_CREATION/ALLOW_MEMORY_SOURCE_IDS in message service; JSON5, parseBooleanFromText try/catch, formatPosts; add DISABLE_MEMORY_CREATION guard to runtime.evaluate() (was missed)\r\n- keepExistingResponses/BOOTSTRAP_KEEP_RESP, ANXIETY provider, null service guard, provider timeout, HandlerCallback actionName, hasRequestedInState in reply\r\n- Docs: CHANGELOG, ROADMAP, docs/DESIGN.md, README env/design section, WHY comments; fix ALLOW_MEMORY_SOURCE_IDS comment in message.ts\r\n- New: bootstrap banner (preview script), anxiety provider\r\n\r\nMade-with: Cursor\r\n\r\n<!-- CURSOR_SUMMARY -->\n---\n\n> [!NOTE]\n> **Medium Risk**\n> Touches core runtime/message pipeline behavior (provider timeouts, action-chain state recomposition, memory persistence gating, callback signature) and adds new logging side effects, so regressions could impact response latency, persistence, or integrations.\n> \n> **Overview**\n> **Core runtime hardening & observability.** `composeState` now enforces a per-provider 30s timeout (continuing with empty results on failure) and profiles provider timings; action chains recompose state using `onlyInclude` to refresh just `RECENT_MESSAGES`/`ACTION_STATE` while preserving other cached provider data; action-result caching uses `safeReplacer()`.\n> \n> **Logging & callbacks.** Adds opt-in file logging via `LOG_FILE` to write `output.log`, `prompts.log`, and `chat.log` (with ANSI stripping and prompt/response correlation), and extends `HandlerCallback` to accept an optional `actionName` (runtime and bootstrap `reply` now pass it).\n> \n> **Memory/pipeline controls & bootstrap behavior.** Implements `DISABLE_MEMORY_CREATION` with `ALLOW_MEMORY_SOURCE_IDS` allowlisting and updates race handling to use resolved `keepExistingResponses`; skips memory-dependent evaluators when memory creation is disabled. Adds a bootstrap startup banner (plugin `init` hook + preview script), a new `ANXIETY` provider (registered by default), budget-based truncation of action history/results via shared `sliceToFitBudget`, and reduces roles-provider warning spam with per-runtime bounded tracking. 
Documentation and examples are updated accordingly (README/CHANGELOG/ROADMAP/DESIGN, Telegram model selection, JSON5 dependency).\n> \n> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit d4e24be84caf81f419a4a5dd4b50652daf168a0e. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>\n<!-- /CURSOR_SUMMARY -->\n\n<!-- This is an auto-generated comment: release notes by coderabbit.ai -->\n## Summary by CodeRabbit\n\n* **New Features**\n  * Memory persistence controls (DISABLE_MEMORY_CREATION, ALLOW_MEMORY_SOURCE_IDS), shared config helpers, new anxiety provider, startup banner, and prompt/response file logging.\n\n* **Improvements**\n  * Per-provider timeouts and a 30s provider limit, first-action reply optimization, budgeted truncation of action/history context, safer JSON parsing from text, and handler callbacks include action names.\n\n* **Bug Fixes**\n  * Explicit timer cleanup on provider completion and hardened plugin registration.\n\n* **Tests & Docs**\n  * Expanded tests and new design/roadmap/README documentation.\n<!-- end of auto-generated comment: release notes by coderabbit.ai -->\n\n<!-- greptile_comment -->\n\n<h3>Greptile Summary</h3>\n\nThis PR is a broad hardening and observability pass on the core `@elizaos/typescript` runtime. 
Key changes include: per-provider 30 s timeouts in `composeState`, `onlyInclude` optimization for action-chain state recomposition, `DISABLE_MEMORY_CREATION` / `ALLOW_MEMORY_SOURCE_IDS` controls in both the message service and `evaluate()`, optional file logging (`LOG_FILE` → `output.log` / `prompts.log` / `chat.log`), an `ANXIETY` verbosity-guidance provider registered by default, a bootstrap ANSI banner on plugin init, JSON5-tolerant LLM output parsing, and `HandlerCallback` extended with an optional `actionName` parameter.\n\n- **Bug — IGNORE response persistence bypassed by `ALLOW_MEMORY_SOURCE_IDS`** (`message.ts:948–952`): the ignore path checks `allowedSources.includes(\"agent_response\")` — a hardcoded string users will never configure — so all IGNORE decisions are silently dropped when any source allowlist is active. The regular response path (line ~773) correctly skips this check; the ignore path should match that behavior.\n- **Bug — zero-vector fallback corrupts semantic memory** (`runtime.ts:4861`): on embedding failure the memory is persisted with a zero vector, making it unretrievable by cosine-similarity search while appearing normally stored. 
Silent data loss.\n- **`PROVIDERS_TOTAL_TIMEOUT_MS` default raised from 1 000 ms → 5 000 ms** (`message.ts:~2019`): existing deployments relying on the 1 s default will now allow providers up to 5 s before bailing, which may noticeably increase P99 response latency.\n- **Variable shadowing in ignore block** (`message.ts:949`): `allowedSources` is re-resolved inside `if (!disableMemoryCreation)`, shadowing the binding already computed on line 461 and adding a redundant runtime call.\n- **Dead `catch` block in `parseBooleanFromText`** (`utils.ts:941–950`): `String(value).trim().toUpperCase()` cannot throw after the early-return guards; the `logger.warn` is unreachable.\n\n<h3>Confidence Score: 3/5</h3>\n\n- Not safe to merge without addressing the IGNORE memory persistence bug and zero-vector embedding fallback.\n- Two confirmed logic bugs affect correctness in production: IGNORE responses are silently dropped from memory when ALLOW_MEMORY_SOURCE_IDS is set (contradicts stated design and the regular response path), and failed embeddings produce unretrievable zero-vector memories. The PROVIDERS_TOTAL_TIMEOUT_MS default change is a silent breaking change for latency-sensitive deployments. The rest of the PR is well-structured, well-commented, and tested.\n- packages/typescript/src/services/message.ts (IGNORE persistence logic at lines 948–974) and packages/typescript/src/runtime.ts (zero-vector embedding fallback at lines 4858–4865)\n\n<h3>Important Files Changed</h3>\n\n| Filename | Overview |\n|----------|----------|\n| packages/typescript/src/services/message.ts | Core message processing pipeline — adds DISABLE_MEMORY_CREATION / ALLOW_MEMORY_SOURCE_IDS filtering and keepExistingResponses race-check; the IGNORE path incorrectly applies the source allowlist to agent-generated responses, causing all IGNORE memories to be silently dropped when ALLOW_MEMORY_SOURCE_IDS is configured (contradicts stated design). Also contains variable shadowing and redundant comments. 
|\n| packages/typescript/src/runtime.ts | Major runtime hardening: per-provider 30s timeouts, onlyInclude optimization for action-chain state recomposition, DISABLE_MEMORY_CREATION guard for evaluate(), file-based prompt/response logging, caller-info stack traces. Zero-vector embedding fallback silently corrupts memory search for affected records. |\n| packages/typescript/src/logger.ts | Adds optional file logging (output.log, prompts.log, chat.log) gated by LOG_FILE env-var; prompt/response correlation now uses the same slug passed through metadata; all log files are placed in the same directory as LOG_FILE. Implementation is sound. |\n| packages/typescript/src/bootstrap/providers/anxiety.ts | New ANXIETY provider returning channel-specific verbosity guidance via random selection of 3 examples per turn; uses Math.random()-0.5 sort (biased shuffle) but non-critical for this use case. |\n| packages/typescript/src/bootstrap/providers/roles.ts | Switches from a module-level Set to a WeakMap<IAgentRuntime, Set<string>> for per-runtime warn-once tracking of unnamed entities; correctly addressed from previous review. |\n| packages/typescript/src/utils/json-llm.ts | New JSON5-based tolerant extraction helper for LLM output; handles code-fenced blocks and single-quote / trailing-comma JSON; throws on failure so callers (parseJSONObjectFromText) can catch and return null. |\n| packages/typescript/src/utils.ts | parseBooleanFromText accepts broader types, formatPosts adds metadata fallbacks and text delimiters, parseJSONObjectFromText delegates to JSON5-based json-llm helper. The try-catch in parseBooleanFromText is dead code but harmless. |\n| packages/typescript/src/types/components.ts | HandlerCallback extended with optional actionName parameter (backward compatible); allows callers to attribute responses to the producing action without content parsing. 
|\n\n</details>\n\n<h3>Flowchart</h3>\n\n```mermaid\n%%{init: {'theme': 'neutral'}}%%\nflowchart TD\n    A[Incoming Message] --> B{DISABLE_MEMORY_CREATION?}\n    B -- No --> C{ALLOW_MEMORY_SOURCE_IDS set?}\n    C -- No allowlist --> D[Persist message memory + queue embedding]\n    C -- Allowlist set --> E{sourceId in allowlist?}\n    E -- Yes --> D\n    E -- No --> F[Skip persistence, assign UUID]\n    B -- Yes --> F\n\n    D --> G[shouldRespond check]\n    F --> G\n\n    G -- IGNORE --> H[callback ignoreContent]\n    H --> I{DISABLE_MEMORY_CREATION?}\n    I -- No --> J{allowedSources includes 'agent_response'?}\n    J -- Yes --> K[Persist IGNORE memory]\n    J -- No → bug --> L[\"❌ IGNORE memory silently dropped\"]\n    I -- Yes --> L2[Skip IGNORE memory]\n\n    G -- Respond --> M[runSingleShotCore / LLM]\n    M --> N{canPersistMemory = !disableMemory}\n    N -- true --> O[Persist response memories]\n    N -- false --> P[Skip response memories]\n\n    O --> Q[run evaluators]\n    P --> Q\n    Q --> R{DISABLE_MEMORY_CREATION?}\n    R -- Yes --> S[Skip evaluators entirely]\n    R -- No --> T[Run evaluators with timeout]\n\n    style L fill:#ff4444,color:#fff\n    style L2 fill:#aaa,color:#fff\n```\n\n<sub>Last reviewed commit: [\"docs: add review dis...\"](https://github.com/elizaos/eliza/commit/d7d1ad5199b8fe12b302326d321ce50fede6912a)</sub>\n\n> Greptile also left **2 inline comments** on this PR.\n\n<!-- /greptile_comment -->",
      "repository": "elizaos/eliza",
      "createdAt": "2026-03-08T23:50:03Z",
      "mergedAt": "2026-04-08T23:27:47Z",
      "additions": 5218,
      "deletions": 259
    },
    {
      "id": "PR_kwDOMT5cIs7N-pIx",
      "title": "fix(core): consolidate StreamChunkCallback, remove dual-extractor CAUSING TTS garbling",
      "author": "odilitime",
      "number": 6690,
"body": "Eight inline `onStreamChunk` definitions across types/runtime.ts, types/model.ts, types/message-service.ts, streaming-context.ts, and runtime.ts are replaced by a single canonical `StreamChunkCallback` type alias in types/components.ts.\r\n\r\nThe callback gains `accumulated?: string` — the full extracted field text from ValidationStreamExtractor. WHY: handleMessage previously ran two independent XML extractors (ValidationStreamExtractor via dynamicPromptExecFromState + ResponseStreamExtractor via runWithStreamingContext). Both received raw LLM tokens in useModel and emitted independently, producing overlapping deltas that garbled TTS output. Providing accumulated text from the single remaining extractor eliminates the reassembly problem.\r\n\r\nhandleMessage no longer creates a ResponseStreamExtractor or wraps processMessage in runWithStreamingContext. Voice first-sentence detection wraps the caller's onStreamChunk callback and uses accumulated when available, falling back to a local buffer.\r\n\r\nDocs updated: streaming-responses guide, types-reference, messaging.mdx extractor table.\r\n\r\nMade-with: Cursor\r\n\r\n<!-- CURSOR_SUMMARY -->\n---\n\n> [!NOTE]\n> **Medium Risk**\n> Touches core streaming and message handling flow and widens a public callback signature, which could break custom integrations and affect streaming/TTS behavior if any downstream expects the old `(chunk, messageId)` contract.\n> \n> **Overview**\n> Fixes garbled streaming/TTS by removing `DefaultMessageService`’s extra `ResponseStreamExtractor`/`runWithStreamingContext` path so only the `ValidationStreamExtractor` pipeline emits chunks.\n> \n> Consolidates all streaming chunk callbacks into a single exported `StreamChunkCallback` type and extends it to `(chunk, messageId?, accumulated?)`, with `ValidationStreamExtractor` now providing authoritative per-field `accumulated` text and raw token streams explicitly passing `undefined`.\n> \n> Updates action streaming to maintain its own filtered accumulation, propagates the new signature through runtime/context/model/message-service types and tests, and refreshes docs to describe the new callback contract and architecture change.\n> \n> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit aa5621ef891a54bab4ac37843c1de8047e3caf3f. This will update automatically on new commits. 
Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>\n<!-- /CURSOR_SUMMARY -->\n\n<!-- This is an auto-generated comment: release notes by coderabbit.ai -->\n\n## Summary by CodeRabbit\n\n## Release Notes\n\n* **New Features**\n  * Streaming callbacks now receive accumulated text when available, enabling better handling of complete streamed responses.\n  * Callbacks now support both synchronous and asynchronous execution.\n\n* **Bug Fixes**\n  * Fixed garbled TTS output caused by extractor collision in the message pipeline.\n\n* **Documentation**\n  * Updated streaming callback documentation with expanded parameter details and architectural clarifications.\n\n<!-- end of auto-generated comment: release notes by coderabbit.ai -->\n\n<!-- greptile_comment -->\n\n<h3>Greptile Summary</h3>\n\nThis PR eliminates a **dual-extractor race condition** that caused TTS output garbling. Previously, `DefaultMessageService.handleMessage` wrapped `processMessage` in `runWithStreamingContext` with a `ResponseStreamExtractor`, while `dynamicPromptExecFromState` inside `processMessage` independently ran a `ValidationStreamExtractor`. Both extractors received the same raw LLM tokens from `useModel` (`paramsChunk` + `ctxChunk`), so consumers received two independent, overlapping delta streams — producing unintelligible TTS.\n\nThe fix removes the `ResponseStreamExtractor` + `runWithStreamingContext` layer entirely from `handleMessage`. A single canonical pipeline remains: VSE → MarkableExtractor → `wrappedOnStreamChunk`. Voice first-sentence detection moves into `wrappedOnStreamChunk`, which uses the new `accumulated` parameter (the authoritative full-field text from VSE) instead of re-assembling from deltas. 
Consolidating `StreamChunkCallback` into one type across eight previously-inconsistent call sites is a clean, forward-looking improvement.\n\n**Key changes:**\n- `StreamChunkCallback` in `types/components.ts` gains `accumulated?: string` and widens return to `void | Promise<void>`, replacing 8 inline duplicate signatures\n- `ValidationStreamExtractor.emitFieldContent` now passes `content` as `accumulated` in its `onChunk` call\n- `handleMessage` no longer creates `ResponseStreamExtractor` or calls `runWithStreamingContext`; voice detection wraps the user callback directly\n- `createStreamingContext` and `actionStreamingContext` in `runtime.ts` forward `accumulated` unchanged through their respective filters\n- Raw-token paths in `useModel` correctly pass `accumulated=undefined`\n\n**Minor observations:**\n- `createStreamingContext` forwards `accumulated` from the upstream source without transforming it, which is only semantically correct when the inner extractor is a passthrough. Currently always `MarkableExtractor` (safe), but the assumption is undocumented.\n- The `streamTextFallback` fallback buffer is never seeded from `accumulated`, so if a stream transitions from a VSE-based source to a raw-token source within one `handleMessage` call, the fallback path starts from empty. In practice `firstSentenceSent` is already `true` by that point, so voice won't re-trigger, but the invariant is fragile.\n\n<h3>Confidence Score: 4/5</h3>\n\nSafe to merge; correctly eliminates the dual-extractor TTS garbling bug with a clean, well-reasoned single-pipeline architecture\n\nThe root cause is correctly identified and fixed — removing runWithStreamingContext from handleMessage eliminates the second extractor path that caused overlapping deltas. The new wrappedOnStreamChunk pattern is equivalent to the old voice detection logic but operates on VSE-provided accumulated text instead of re-assembling from deltas. Type consolidation is a net positive. 
Two P2 style issues remain: the implicit passthrough assumption in createStreamingContext and the unsynchronized streamTextFallback buffer — neither affects correctness in the current code paths.\n\npackages/typescript/src/services/message.ts (wrappedOnStreamChunk fallback path) and packages/typescript/src/utils/streaming.ts (createStreamingContext accumulated forwarding)\n\n<h3>Important Files Changed</h3>\n\n| Filename | Overview |\n|----------|----------|\n| packages/typescript/src/services/message.ts | Core fix: removes dual-extractor pipeline by eliminating ResponseStreamExtractor+runWithStreamingContext; voice detection moved into wrappedOnStreamChunk wrapper with correct accumulated/fallback handling |\n| packages/typescript/src/utils/streaming.ts | ValidationStreamExtractorConfig.onChunk gains accumulated parameter; createStreamingContext updated to forward accumulated through MarkableExtractor passthrough; emitFieldContent correctly passes content as accumulated |\n| packages/typescript/src/types/components.ts | StreamChunkCallback consolidated to canonical form with accumulated parameter and widened return type void|Promise<void>; well-documented with inline WHY comments |\n| packages/typescript/src/runtime.ts | dynamicPromptExecFromState bridges VSE accumulated to StreamChunkCallback; actionStreamingContext correctly passes accumulated=undefined from raw-token context; inline type replaced with StreamingContext&{onStreamEnd} |\n\n</details>\n\n<h3>Sequence Diagram</h3>\n\n```mermaid\nsequenceDiagram\n    participant U as User / Client\n    participant HM as handleMessage\n    participant PM as processMessage\n    participant DPE as dynamicPromptExecFromState\n    participant VSE as ValidationStreamExtractor\n    participant ME as MarkableExtractor\n    participant WC as wrappedOnStreamChunk\n\n    U->>HM: handleMessage(options.onStreamChunk)\n    Note over HM: Creates wrappedOnStreamChunk<br/>(voice detection wrapper)\n    HM->>PM: 
processMessage(opts.onStreamChunk=wrapped)\n    Note over PM: createStreamingContext(MarkableExtractor,<br/>wrappedOnStreamChunk, responseId)\n    PM->>DPE: dynamicPromptExecFromState(onStreamChunk=streamingCtx)\n    Note over DPE: Creates ValidationStreamExtractor<br/>modelParams.onStreamChunk = chunk→VSE.push(chunk)\n    DPE->>VSE: raw LLM token (paramsChunk)\n    VSE->>ME: onChunk(delta, field, accumulated)\n    ME->>WC: passthrough delta + accumulated\n    WC->>WC: voice first-sentence detection<br/>(uses accumulated if available)\n    WC-->>U: userOnStreamChunk(delta, msgId, accumulated)\n\n    Note over HM: OLD (removed): runWithStreamingContext(RSE)<br/>caused ctxChunk to ALSO fire on same token → garbling\n```\n\n<sub>Reviews (1): Last reviewed commit: [\"fix(core): consolidate StreamChunkCallba...\"](https://github.com/elizaos/eliza/commit/c3c0422fc95af7bc825dc5e1faaf32ba4a1975fa) | [Re-trigger Greptile](https://app.greptile.com/api/retrigger?id=26545105)</sub>\n\n> Greptile also left **2 inline comments** on this PR.\n\n<!-- /greptile_comment -->",
      "repository": "elizaos/eliza",
      "createdAt": "2026-03-27T08:26:43Z",
      "mergedAt": "2026-04-08T23:37:19Z",
      "additions": 432,
      "deletions": 184
    }
  ],
  "codeChanges": {
    "additions": 5823,
    "deletions": 445,
    "files": 60,
    "commitCount": 26
  },
  "completedItems": [
    {
      "title": "fix(core): consolidate StreamChunkCallback, remove dual-extractor CAUSING TTS garbling",
      "prNumber": 6690,
      "type": "bugfix",
      "body": "Eight inline `onStreamChunk` definitions across types/runtime.ts, types/model.ts, types/message-service.ts, streaming-context.ts, and runtime.ts are replaced by a single canonical `StreamChunkCallback` type alias in types/components.ts.\r\n\r\n",
      "files": [
        "packages/docs/guides/streaming-responses.mdx",
        "packages/docs/runtime/messaging.mdx",
        "packages/docs/runtime/types-reference.mdx",
        "packages/typescript/src/__tests__/runtime.test.ts",
        "packages/typescript/src/runtime.ts",
        "packages/typescript/src/services/message.ts",
        "packages/typescript/src/streaming-context.ts",
        "packages/typescript/src/types/components.ts",
        "packages/typescript/src/types/message-service.ts",
        "packages/typescript/src/types/model.ts",
        "packages/typescript/src/types/runtime.ts",
        "packages/typescript/src/utils/streaming.ts",
        "packages/python/elizaos/types/components.py"
      ]
    },
    {
      "title": "feat: Bring Odi logging, Memory lock down, banner, other core enh",
      "prNumber": 6562,
      "type": "feature",
      "body": "- banner, init hook, reply optimization, roles warnedUnnamedEntities, runtime (embedding cache, callerInfo, safeReplacer, onlyInclude, logPrompt/logResponse gating), logger file output\r\n- DISABLE_MEMORY_CREATION/ALLOW_MEMORY_SOURCE_IDS in m",
      "files": [
        "anxiety.test.ts",
        "eliza-cloud-v2/services/gateway-discord/tests/logger.test.ts",
        "examples/telegram/typescript/telegram-agent.ts",
        "logger.test.ts",
        "message-service.test.ts",
        "packages/typescript/CHANGELOG.md",
        "packages/typescript/README.md",
        "packages/typescript/ROADMAP.md",
        "packages/typescript/docs/DESIGN.md",
        "packages/typescript/package.json",
        "packages/typescript/scripts/preview-banner.mjs",
        "packages/typescript/src/__tests__/message-service.test.ts",
        "packages/typescript/src/basic-capabilities/providers/actionState.ts",
        "packages/typescript/src/basic-capabilities/providers/recentMessages.ts",
        "packages/typescript/src/bootstrap/__tests__/anxiety.test.ts",
        "packages/typescript/src/bootstrap/__tests__/banner.test.ts",
        "packages/typescript/src/bootstrap/__tests__/banner.ts",
        "packages/typescript/src/bootstrap/__tests__/message-service.test.ts",
        "packages/typescript/src/bootstrap/__tests__/reply.ts",
        "packages/typescript/src/bootstrap/__tests__/roles.test.ts",
        "packages/typescript/src/bootstrap/actions/reply.ts",
        "packages/typescript/src/bootstrap/banner.ts",
        "packages/typescript/src/bootstrap/index.ts",
        "packages/typescript/src/bootstrap/providers/actionState.ts",
        "packages/typescript/src/bootstrap/providers/anxiety.test.ts",
        "packages/typescript/src/bootstrap/providers/anxiety.ts",
        "packages/typescript/src/bootstrap/providers/index.ts",
        "packages/typescript/src/bootstrap/providers/recentMessages.ts",
        "packages/typescript/src/bootstrap/providers/roles.ts",
        "packages/typescript/src/logger.test.ts",
        "packages/typescript/src/logger.ts",
        "packages/typescript/src/runtime.test.ts",
        "packages/typescript/src/runtime.ts",
        "packages/typescript/src/services/message.test.ts",
        "packages/typescript/src/services/message.ts",
        "packages/typescript/src/types/components.ts",
        "packages/typescript/src/types/message-service.ts",
        "packages/typescript/src/utils.test.ts",
        "packages/typescript/src/utils.ts",
        "packages/typescript/src/utils/defer-startup-work.ts",
        "packages/typescript/src/utils/index.ts",
        "packages/typescript/src/utils/json-llm.ts",
        "packages/typescript/src/utils/plugin-banner.ts",
        "packages/typescript/src/utils/plugin-config.ts",
        "packages/typescript/src/utils/slice-to-fit-budget.test.ts",
        "packages/typescript/src/utils/slice-to-fit-budget.ts",
        "packages/typescript/src/utils/text-similarity.ts",
        "reply.test.ts",
        "src/bootstrap/__tests__/anxiety.test.ts",
        "packages/typescript/src/bootstrap/__tests__/reply.test.ts",
        "packages/typescript/src/bootstrap/__tests__/actions.test.ts"
      ]
    }
  ],
  "topContributors": [
    {
      "username": "NubsCarson",
      "avatarUrl": "https://avatars.githubusercontent.com/u/192162056?u=d2be9082dbee60fcbad21d32bf6e662ab1af3674&v=4",
      "totalScore": 30.92671156116416,
      "prScore": 30.92671156116416,
      "issueScore": 0,
      "reviewScore": 0,
      "commentScore": 0,
      "summary": null
    },
    {
      "username": "0tabris",
      "avatarUrl": "https://avatars.githubusercontent.com/u/129533148?v=4",
      "totalScore": 14.346573590279972,
      "prScore": 14.346573590279972,
      "issueScore": 0,
      "reviewScore": 0,
      "commentScore": 0,
      "summary": null
    },
    {
      "username": "greptile-apps",
      "avatarUrl": "https://avatars.githubusercontent.com/in/867647?v=4",
      "totalScore": 9,
      "prScore": 0,
      "issueScore": 0,
      "reviewScore": 9,
      "commentScore": 0,
      "summary": null
    },
    {
      "username": "Yaqing2023",
      "avatarUrl": "https://avatars.githubusercontent.com/u/130617529?v=4",
      "totalScore": 0.2,
      "prScore": 0,
      "issueScore": 0,
      "reviewScore": 0,
      "commentScore": 0.2,
      "summary": null
    }
  ],
  "newPRs": 0,
  "mergedPRs": 2,
  "newIssues": 0,
  "closedIssues": 0,
  "activeContributors": 3
}