## Issue Triage — 2026-02-23

### 1) [Bug] API Explorer “Send request” fails with `api key is required` even when key is present (ElizaCloud) — ID: Discord-2026-02-20-ElizaCloud-APIExplorer-Key
- **Current Status:** Reported on Discord; forwarded to team; no public fix/issue link yet.
- **Impact Assessment:**
  - **User Impact:** **High** (blocks most users trying to validate API usage in-dashboard)
  - **Functional Impact:** **Yes** (breaks core onboarding workflow: testing requests)
  - **Brand Impact:** **High** (SaaS quality signal; “it’s broken” moment)
- **Technical Classification:**
  - **Category:** Bug / UX
  - **Component Affected:** Cloud Dashboard + API Explorer + Auth/API key handling
  - **Complexity:** Moderate effort
- **Resource Requirements:**
  - **Required Expertise:** Frontend (React/Next), API gateway/auth, browser storage/session debugging
  - **Dependencies:** Needs clarity on how API key is sourced (session vs stored key vs “use different key” feature)
  - **Estimated Effort:** **3/5**
- **Recommended Priority:** **P0**
- **Specific Actionable Next Steps:**
  1. Reproduce with fresh account + existing account; capture network traces and request headers.
  2. Verify where the API Explorer reads the key (localStorage/cookie/session state) vs what backend expects.
  3. Add UI-level validation showing *which key* is being used and where it was loaded from.
  4. Add an automated E2E test: “API Explorer can successfully send request with generated key.”
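
A minimal Playwright sketch of the E2E check in step 4; the route, selectors, and header names are assumptions about the Cloud dashboard, not its actual markup:

```typescript
// Hypothetical Playwright E2E check; route, selectors, and header names are
// placeholders and must be aligned with the real Cloud dashboard.
import { test, expect } from '@playwright/test';

test('API Explorer can send a request with the generated key', async ({ page }) => {
  await page.goto('/dashboard/api-explorer'); // assumed route; requires baseURL in config

  // Capture the outgoing request so we can assert the key is actually attached.
  const [request] = await Promise.all([
    page.waitForRequest((req) => req.url().includes('/api/')), // assumed API path
    page.getByRole('button', { name: 'Send request' }).click(),
  ]);

  // The key may travel as an Authorization header or a custom header; both are assumptions.
  expect(request.headers()['authorization'] ?? request.headers()['x-api-key']).toBeTruthy();
  await expect(page.getByText('api key is required')).toHaveCount(0);
});
```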
- **Potential Assignees:**
  - **borisudovicic** (cloud/product coordination history)
  - **odilitime** (infra + triage leadership)
  - Add a web frontend-focused contributor from the cloud team (if separate repo exists)

---

### 2) [UX/Bug] Broken mousepad scrolling on ElizaCloud dashboard (notably API Explorer); scrollbar hard to discover — ID: Discord-2026-02-20-ElizaCloud-Scroll
- **Current Status:** Reported on Discord; no tracked issue link yet.
- **Impact Assessment:**
  - **User Impact:** **High** (affects broad set of dashboard users, especially trackpads)
  - **Functional Impact:** **Partial** (workaround exists but is non-obvious)
  - **Brand Impact:** **High** (basic UI regression harms credibility)
- **Technical Classification:**
  - **Category:** UX / Bug
  - **Component Affected:** Cloud Dashboard UI layout/scroll containers
  - **Complexity:** Simple fix to Moderate effort (depending on CSS/layout structure)
- **Resource Requirements:**
  - **Required Expertise:** Frontend CSS/layout, browser compatibility (macOS trackpad)
  - **Dependencies:** None
  - **Estimated Effort:** **2/5**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Identify the scroll-locking container (nested overflow divs, modal panes, or a `wheel` handler calling `preventDefault`).
  2. Fix scroll container strategy (single primary scroll region; avoid nested `overflow: hidden`).
  3. Improve scrollbar visibility (macOS overlay-scrollbar considerations) and add a “scroll hint” if needed.
  4. Add UI regression checks (Playwright: wheel scroll changes scrollTop).
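
A minimal sketch of the regression check in step 4, assuming Playwright and a placeholder selector for the main scroll container:

```typescript
// Hypothetical regression check: wheel/trackpad input should scroll the main
// dashboard content. The route and selector are placeholders.
import { test, expect } from '@playwright/test';

test('mouse wheel scrolls the dashboard content', async ({ page }) => {
  await page.goto('/dashboard/api-explorer'); // assumed route
  const container = page.locator('[data-testid="main-scroll"]'); // placeholder selector

  const before = await container.evaluate((el) => el.scrollTop);
  await container.hover();
  await page.mouse.wheel(0, 500); // simulate a wheel/trackpad scroll gesture
  await page.waitForTimeout(200); // allow any smooth-scroll animation to settle

  const after = await container.evaluate((el) => el.scrollTop);
  expect(after).toBeGreaterThan(before);
});
```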
- **Potential Assignees:**
  - **borisudovicic**
  - A frontend maintainer (unlisted in provided data)

---

### 3) [Bug/Cost] URL in message triggers duplicate LLM calls (processed as both text and attachment) causing duplicated output (webapp) — elizaos/eliza #6486
- **Current Status:** **OPEN**
- **Impact Assessment:**
  - **User Impact:** **High** (any user who pastes links)
  - **Functional Impact:** **Partial** (responses still arrive, but are duplicated; can break downstream automation)
  - **Brand Impact:** **High** (visible glitch + doubled cost)
- **Technical Classification:**
  - **Category:** Bug / Performance
  - **Component Affected:** Webapp message ingestion + attachment/link preview pipeline + SSE streaming
  - **Complexity:** Moderate effort
- **Resource Requirements:**
  - **Required Expertise:** Message pipeline, SSE streaming, web client/server boundary, link preview/attachment handling
  - **Dependencies:** Needs agreement on canonical “URL handling” (preview as separate tool call vs merged context)
  - **Estimated Effort:** **3/5**
- **Recommended Priority:** **P0** (cost + correctness regression)
- **Specific Actionable Next Steps:**
  1. Reproduce and log server-side events: confirm two LLM invocations and their triggers.
  2. Implement a deduplication rule: a URL should be handled either as (a) inline text or (b) an attachment preview, never both for the same user message (see the sketch after this list).
  3. Add guard in SSE aggregator: do not concatenate two assistant responses for the same message ID unless explicitly multi-part.
  4. Add test: “message with URL produces exactly one model call” (unit + integration).
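
A minimal sketch of the deduplication rule from step 2; the message shape and field names are illustrative, not the actual elizaOS message type:

```typescript
// Illustrative message shape; not the real elizaOS type.
interface IncomingMessage {
  id: string;
  text: string;
  attachments: { url: string; kind: 'link-preview' | 'file' }[];
}

const URL_RE = /https?:\/\/\S+/g;

// A URL that already appears inline in the text should not also be expanded
// as a link-preview attachment for the same message; otherwise the pipeline
// can feed the same content to the model twice and trigger a second call.
function dedupeUrlAttachments(msg: IncomingMessage): IncomingMessage {
  const inlineUrls = new Set(
    (msg.text.match(URL_RE) ?? []).map((u) => u.replace(/[.,)]+$/, '')), // strip trailing punctuation
  );
  return {
    ...msg,
    attachments: msg.attachments.filter(
      (a) => !(a.kind === 'link-preview' && inlineUrls.has(a.url)),
    ),
  };
}
```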
- **Potential Assignees:**
  - **lalalune** (runtime/message flow changes in v2 work)
  - **odilitime** (core stability/perf focus)
  - **anchapin** (defensive coding patterns; good for safe guards)

---

### 4) [Security/Data Isolation] Align RLS isolation with v1 patterns; replace `application_name` context with parameterized `set_config` (SQL injection safety); rename `withEntityContext` → `withIsolationContext` — PR elizaos/eliza #6521
- **Current Status:** **PR open** (in review/rebase coordination with larger DB refactor work)
- **Impact Assessment:**
  - **User Impact:** **High** (affects any multi-entity / isolation-enabled deployment)
  - **Functional Impact:** **Yes** (incorrect isolation can leak or mix data across entities)
  - **Brand Impact:** **High** (data isolation issues are existential for trust)
- **Technical Classification:**
  - **Category:** Security / Bug
  - **Component Affected:** Plugin System (plugin-sql), Core DB access patterns, RLS/isolation context plumbing
  - **Complexity:** Complex solution (touches DB semantics + refactor conflicts)
- **Resource Requirements:**
  - **Required Expertise:** Postgres RLS, Node DB drivers, isolation context design, security review
  - **Dependencies:** **Conflicts/overlap with DB refactor PR #6509** (and Odilitime’s parallel refactor)
  - **Estimated Effort:** **4/5**
- **Recommended Priority:** **P0** (security + architectural correctness)
- **Specific Actionable Next Steps:**
  1. Establish a single “source of truth” for isolation context API in v2 (`withIsolationContext` naming and semantics).
  2. Rebase/deconflict: integrate #6521 into the DB refactor branch (or vice versa) with minimal churn.
  3. Add isolation regression tests: verify entity A cannot read/write entity B under RLS for all critical tables.
  4. Security review: ensure `set_config` usage is parameterized everywhere; confirm no string interpolation remains.
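
A minimal sketch of the parameterized `set_config` pattern from step 4, assuming node-postgres; the setting name (`app.entity_id`) and transaction handling are illustrative, not necessarily the PR's actual semantics:

```typescript
// Sketch only: parameterized, transaction-local isolation context with pg.
import { Pool, PoolClient } from 'pg';

const pool = new Pool();

async function withIsolationContext<T>(
  entityId: string,
  fn: (client: PoolClient) => Promise<T>,
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // set_config with a bound parameter: the entity id is never interpolated
    // into the SQL string, so it cannot escape the setting value.
    // `true` makes it transaction-local, so it resets on COMMIT/ROLLBACK.
    await client.query("SELECT set_config('app.entity_id', $1, true)", [entityId]);
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```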
- **Potential Assignees:**
  - **standujar** (author of #6521; best positioned to finalize intent)
  - **odilitime** (DB refactor owner; integration responsibility)
  - Optional reviewer: security-minded maintainer (not named in provided dataset)

---

### 5) [Breaking Upgrade Path] Removal of auto-migration from v1.4.x → v1.6.x may strand legacy users; need explicit error/guide before upgrading to v2.0.0 — ID: Migration-Guard-v1.4-to-v2 (from core-devs discussion)
- **Current Status:** Migration code removed (~2,600 LOC); the team is considering throwing a directed error for affected installs.
- **Impact Assessment:**
  - **User Impact:** **Medium** (count unknown, but those affected will be hard-blocked and confused)
  - **Functional Impact:** **Yes** (upgrade failure / potential data issues)
  - **Brand Impact:** **Medium** (upgrade pain but mostly for older installs)
- **Technical Classification:**
  - **Category:** Bug / Documentation
  - **Component Affected:** DB migration tooling, upgrade UX, release engineering
  - **Complexity:** Moderate effort
- **Resource Requirements:**
  - **Required Expertise:** DB migrations, version detection, CLI/server startup checks, docs
  - **Dependencies:** Needs a reliable way to detect schema/version state at runtime
  - **Estimated Effort:** **3/5**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Implement a startup guard: detect a “pre-1.6 schema” and fail fast with a clear message and a link to migration steps (see the sketch after this list).
  2. Provide a documented “Upgrade Ladder”: 1.4.x → 1.6.x → 2.0.0 (with verification commands).
  3. Add telemetry/log signature to help estimate how many users still hit the guard (if applicable).
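
A minimal sketch of the startup guard from step 1, assuming node-postgres; the marker table used to recognize a >=1.6 schema is a placeholder and would need to match the real migration tooling:

```typescript
// Sketch only: fail fast at startup when the database looks like a pre-1.6 schema.
import { Pool } from 'pg';

async function assertSupportedSchema(pool: Pool): Promise<void> {
  const { rows } = await pool.query(
    'SELECT to_regclass($1) IS NOT NULL AS present',
    ['public.__migrations'], // placeholder marker table for a >=1.6 schema
  );

  if (!rows[0]?.present) {
    throw new Error(
      [
        'This database appears to use a pre-1.6 schema, which 2.0.0 can no longer auto-migrate.',
        'Upgrade path: 1.4.x -> 1.6.x (runs the legacy migrations) -> 2.0.0.',
        'See the migration guide for verification steps.',
      ].join('\n'),
    );
  }
}
```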
- **Potential Assignees:**
  - **odilitime** (DB refactor workstream)
  - **standujar** (migration context from #6521)
  - **anchapin** (safe guards and clear error handling)

---

### 6) [Performance/Reliability] Hitting 200k token limits with many plugins; bootstrap providers + evaluations inflate context — ID: Perf-TokenLimit-200k (from core-devs discussion)
- **Current Status:** Actively being worked on; root causes identified (bootstrap providers/evaluations).
- **Impact Assessment:**
  - **User Impact:** **High** (power users running “many plugins” are a flagship use case)
  - **Functional Impact:** **Yes** (hard failures when token limits are exceeded)
  - **Brand Impact:** **High** (agents feel “unscalable” or “expensive”)
- **Technical Classification:**
  - **Category:** Performance
  - **Component Affected:** Plugin-bootstrap, provider aggregation, evaluation pipeline, prompt assembly
  - **Complexity:** Architectural change (budgeting + relevance filtering across providers/evals)
- **Resource Requirements:**
  - **Required Expertise:** Prompt construction, retrieval/relevance ranking, caching strategies, profiling
  - **Dependencies:** Leverage/align with existing work like ActionFilterService (PR #6475) and bootstrap optimizations (PR #6476)
  - **Estimated Effort:** **5/5**
- **Recommended Priority:** **P0** (core scalability + frequent hard limit)
- **Specific Actionable Next Steps:**
  1. Add instrumentation: per-provider/evaluator token contribution breakdown to logs/metrics.
  2. Enforce a context budget with graceful degradation: truncate or skip the lowest-value providers first (see the sketch after this list).
  3. Default-enable action/provider filtering (vector/BM25) where safe to reduce prompt bloat.
  4. Add performance tests: “N plugins” should stay under defined token budget for baseline chat turn.
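
A minimal sketch of the budget enforcement in step 2; the provider shape, priority scheme, and token estimator are assumptions, not the actual plugin-bootstrap API:

```typescript
// Illustrative provider output; not the real plugin-bootstrap types.
interface ProviderOutput {
  name: string;
  text: string;
  priority: number; // higher = more important, dropped last
}

// Stand-in estimate; a real implementation would use the target model's tokenizer.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

function assembleWithinBudget(outputs: ProviderOutput[], budgetTokens: number): string {
  const kept: ProviderOutput[] = [];
  let used = 0;
  // Admit highest-priority providers first; skip the rest and log their cost
  // so the per-provider breakdown from step 1 is visible in metrics.
  for (const out of [...outputs].sort((a, b) => b.priority - a.priority)) {
    const cost = estimateTokens(out.text);
    if (used + cost > budgetTokens) {
      console.warn(`[context-budget] skipping provider ${out.name} (${cost} tokens)`);
      continue;
    }
    kept.push(out);
    used += cost;
  }
  return kept.map((o) => o.text).join('\n\n');
}
```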
- **Potential Assignees:**
  - **odilitime** (already driving)
  - **lalalune** (runtime architecture + ActionFilterService familiarity)
  - **standujar** (bootstrap/evaluation integration reviewer)

---

### 7) [Bug] Ollama embeddings failing on Linux environments — elizaos-plugins/plugin-ollama #17
- **Current Status:** **OPEN** (investigation started per weekly summary)
- **Impact Assessment:**
  - **User Impact:** **Medium** (Linux self-hosters are common; embeddings are key for memory/RAG)
  - **Functional Impact:** **Yes** (breaks embedding-dependent features)
  - **Brand Impact:** **Medium** (plugin ecosystem reliability)
- **Technical Classification:**
  - **Category:** Bug
  - **Component Affected:** Model Integration (Ollama), embeddings pipeline
  - **Complexity:** Moderate effort (likely environment/path/model mismatch)
- **Resource Requirements:**
  - **Required Expertise:** Ollama API, embeddings models, Linux environment debugging
  - **Dependencies:** Needs reproducible environment details (distro, Ollama version, model name)
  - **Estimated Effort:** **3/5**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Add a diagnostics template to the issue: Ollama version, `ollama list` output, model tag, CPU/GPU, logs.
  2. Add a minimal Linux CI smoke test (container) for embedding call if feasible.
  3. Improve error messages: surface raw Ollama response + recommended fix (model pull, correct endpoint).
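
A minimal sketch of step 3: call the Ollama embeddings endpoint and surface the raw response plus a likely remediation on failure. The endpoint and payload follow the public Ollama REST API; the env var name and default URL are assumptions:

```typescript
// Sketch only: embedding call with actionable error surfacing (Node 18+ fetch).
const OLLAMA_URL = process.env.OLLAMA_API_URL ?? 'http://localhost:11434'; // assumed env var

async function embed(model: string, prompt: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA_URL}/api/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt }),
  });

  if (!res.ok) {
    const body = await res.text();
    // Surface the raw Ollama error plus the most common fix instead of a
    // generic failure, so Linux self-hosters can diagnose model/endpoint issues.
    throw new Error(
      `Ollama embeddings failed (${res.status}) for model "${model}": ${body}. ` +
        `Check that the model is pulled (\`ollama pull ${model}\`) and that ${OLLAMA_URL} is reachable.`,
    );
  }

  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}
```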
- **Potential Assignees:**
  - **mbatini** (reporter; can help validate)
  - **lalalune** (model integration breadth)
  - A plugin-ollama maintainer (if different from core)

---

### 8) [Feature Request] Support custom OpenAI-compatible endpoint URL in OpenAI provider — elizaos/eliza #6490
- **Current Status:** **OPEN**
- **Impact Assessment:**
  - **User Impact:** **Medium** (unblocks OpenAI-compatible providers like SiliconFlow)
  - **Functional Impact:** **Partial** (core works with OpenAI; limits interoperability)
  - **Brand Impact:** **Medium** (developer flexibility expectation)
- **Technical Classification:**
  - **Category:** Feature Request
  - **Component Affected:** Model Integration (OpenAI provider), configuration/env handling
  - **Complexity:** Simple fix to Moderate effort (config plumbing + docs + tests)
- **Resource Requirements:**
  - **Required Expertise:** Provider architecture, config schema, HTTP client
  - **Dependencies:** Need to ensure compatibility with auth headers and per-model routing
  - **Estimated Effort:** **2/5**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Add `OPENAI_BASE_URL` (or similar) with a safe default of the official endpoint (see the sketch after this list).
  2. Ensure URL joins are correct (`/v1` paths) and document expected formats.
  3. Add test using a mock OpenAI-compatible server to validate requests.
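
A minimal sketch of steps 1 and 2, assuming the official `openai` Node SDK; the env var name mirrors the suggestion above, and the compatible-endpoint example is illustrative:

```typescript
// Sketch only: configurable base URL for OpenAI-compatible providers.
import OpenAI from 'openai';

// The SDK expects the base URL to already include the version prefix
// (e.g. https://api.siliconflow.cn/v1), so document that format and strip a
// trailing slash to avoid double slashes in request paths.
const baseURL = (process.env.OPENAI_BASE_URL ?? 'https://api.openai.com/v1').replace(/\/+$/, '');

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL,
});
```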
- **Potential Assignees:**
  - **lalalune** (core/provider architecture)
  - **odilitime** (integration review)

---

### 9) [Bug/Config] Agent unexpectedly starts responding in Korean (openclaw agent) — ID: Discord-2026-02-21-OpenClaw-Locale
- **Current Status:** Reported on Discord; not yet tracked as a GitHub issue in provided data.
- **Impact Assessment:**
  - **User Impact:** **Low to Medium** (seems isolated, but alarming when it happens)
  - **Functional Impact:** **Partial** (agent still responds but incorrectly localized)
  - **Brand Impact:** **Medium** (perceived “unreliable / haunted” behavior)
- **Technical Classification:**
  - **Category:** Bug / UX
  - **Component Affected:** Prompting/character config, locale detection, memory contamination, model routing
  - **Complexity:** Complex solution (could be prompt injection, memory artifact, or model setting)
- **Resource Requirements:**
  - **Required Expertise:** Prompt debugging, memory inspection, model/provider config, reproducibility discipline
  - **Dependencies:** Requires logs/transcripts + config snapshot to reproduce
  - **Estimated Effort:** **3/5**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Create a GitHub issue with full repro template (model, provider, character file, last 30 messages).
  2. Check for: system prompt language instructions, user locale headers, plugin outputs in Korean, retrieved memories.
  3. Add a “language lock” option at agent/character level (e.g., `response_language=en`).
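
An illustrative sketch of the language-lock option from step 3; the field name and injection point are hypothetical, not an existing elizaOS setting:

```typescript
// Hypothetical character-level language lock; field name is illustrative.
interface CharacterConfig {
  name: string;
  system?: string;
  responseLanguage?: string; // e.g. 'en' — lock replies to one language
}

function applyLanguageLock(config: CharacterConfig): CharacterConfig {
  if (!config.responseLanguage) return config;
  const lock =
    `Always respond in ${config.responseLanguage}, regardless of the language of ` +
    `retrieved memories, plugin output, or prior messages.`;
  return { ...config, system: [config.system, lock].filter(Boolean).join('\n') };
}
```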
- **Potential Assignees:**
  - **borisudovicic** (prompt/agent quality focus historically)
  - **odilitime** (triage + system insight)

---

### 10) [Documentation/UX] Clarify “use a different key” (BYOK) and add per-model pricing breakdown in ElizaCloud dashboard — ID: Discord-2026-02-20-ElizaCloud-BYOK-Pricing
- **Current Status:** Open questions on Discord; not yet tracked as docs tasks.
- **Impact Assessment:**
  - **User Impact:** **High** (billing clarity and key management affect most paying users)
  - **Functional Impact:** **Partial** (product usable but confusing; increases support load)
  - **Brand Impact:** **High** (pricing ambiguity drives distrust)
- **Technical Classification:**
  - **Category:** Documentation / UX
  - **Component Affected:** Cloud Dashboard, billing UI, docs
  - **Complexity:** Moderate effort
- **Resource Requirements:**
  - **Required Expertise:** Product + frontend + billing knowledge
  - **Dependencies:** Must reflect actual implementation (whether BYOK exists and how it’s sandboxed)
  - **Estimated Effort:** **2/5**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Decide and document: is BYOK supported now, partially, or planned?
  2. Update UI copy: “Use a different key” should explicitly state whose key it is, its scope, and where it is stored.
  3. Add pricing table per model directly in-dashboard + link to canonical pricing page.
- **Potential Assignees:**
  - **borisudovicic** (product/docs coordination)
  - Cloud frontend owner (unlisted)

---

## Top Highest-Priority Issues (Address Immediately)
1. **P0:** ElizaCloud API Explorer “Send request” failing with `api key is required` — **Discord-2026-02-20-ElizaCloud-APIExplorer-Key**
2. **P0:** URL triggers duplicate LLM calls and duplicated streamed output — **elizaos/eliza #6486**
3. **P0:** RLS/isolation alignment + SQL-injection-safe context setting (merge path for PR) — **elizaos/eliza PR #6521**
4. **P0:** 200k token limit reached due to plugin/bootstrap provider + evaluation prompt bloat — **Perf-TokenLimit-200k**
5. **P1:** ElizaCloud scrolling broken across dashboard pages — **Discord-2026-02-20-ElizaCloud-Scroll**
6. **P1:** Legacy migration guard needed (1.4.x → 1.6.x → 2.0.0) — **Migration-Guard-v1.4-to-v2**
7. **P1:** Ollama embeddings failing on Linux — **elizaos-plugins/plugin-ollama #17**
8. **P1:** Clarify BYOK + pricing transparency — **Discord-2026-02-20-ElizaCloud-BYOK-Pricing**

---

## Patterns / Themes Suggesting Deeper Architectural Issues
- **Context assembly lacks budgets and relevance:** Multiple signals (200k token limits, duplicate URL processing) point to missing *canonical* rules for what enters the prompt, and how to cap/shape it.
- **Isolation and migrations are under heavy refactor pressure:** RLS context changes + removal of auto-migrations + concurrent “great DB refactor” increases risk of subtle regressions (security + upgrade UX).
- **Cloud UX regressions undermine developer trust:** Broken scrolling + API Explorer auth mismatch suggests insufficient E2E coverage for core dashboard flows.
- **Ecosystem/plugin velocity is outpacing guardrails:** More plugins increases prompt size, cost, and integration edge cases; needs standardized contracts, profiling, and compatibility checks.

---

## Recommendations (Process Improvements)
1. **Add “golden path” E2E tests for Cloud dashboard** (API Explorer send request, key switching, scroll behavior) and run on every deployment.
2. **Introduce prompt-budget instrumentation by default** (per provider/evaluator token contribution) and enforce a hard budget with graceful fallback.
3. **Create an explicit Upgrade Ladder policy** (supported starting versions; required intermediate migrations) and implement fail-fast startup guards.
4. **Security regression suite for isolation** (RLS tests that assert cross-entity access is impossible) as a required CI gate for DB-related PRs.
5. **Triage hygiene:** Convert recurring Discord-reported bugs into GitHub issues within 24 hours, with reproducibility templates and owner assignment.