## Issue Triage — 2026-02-19 (elizaOS)

### P0 / P1 Issues (Immediate Attention)

---

### 1) `[Bug] URL in message triggers duplicate LLM calls - processed as both text and attachment (webapp)` — elizaos/eliza **#6486**
- **Current Status:** Open (1 comment)
- **Impact Assessment**
  - **User Impact:** **High** (any webapp user sharing links; common behavior)
  - **Functional Impact:** **Partial** (chat still works, but produces duplicated output + cost blowups)
  - **Brand Impact:** **High** (visible “double response” bug + wasted tokens)
- **Technical Classification**
  - **Category:** Bug / Performance (cost amplification)
  - **Component Affected:** Webapp + Server message ingestion / attachment parsing / SSE streaming
  - **Complexity:** Moderate effort
- **Resource Requirements**
  - **Required Expertise:** TypeScript; server message pipeline; client SSE aggregation; attachment/URL preview handling
  - **Dependencies:** None hard, but needs clarity on intended URL-preview architecture (attachment vs text)
  - **Estimated Effort (1–5):** **3**
- **Recommended Priority:** **P0**
- **Actionable Next Steps**
  1. Add tracing logs/metrics to confirm where the second call originates (text handler vs attachment/preview handler).
  2. Define a single “URL handling” policy:
     - either *extract URL metadata without invoking LLM*, or
     - *treat URL as text only* unless an explicit attachment is present.
  3. Add regression test: message with URL must produce exactly **one** LLM request and one streamed response.
  4. Add safeguard in server aggregation: prevent streaming duplicate assistant content before `done`.
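   The regression test in step 3 hinges on the ingestion step collapsing the two URL paths into one inference payload. A minimal sketch, assuming a hypothetical `normalizeInbound` stand-in for the real pipeline (names and shapes are illustrative, not the actual elizaOS API):

   ```typescript
   // Sketch of the "one message → one inference" invariant for URL messages.
   // `normalizeInbound` is a hypothetical stand-in for the server ingestion
   // step: a URL found in the text must not ALSO be re-processed via the
   // attachment/preview path, which is what #6486 reports.

   type Inbound = { text: string; attachments: { url: string }[] };

   function extractUrls(text: string): string[] {
     return text.match(/https?:\/\/\S+/g) ?? [];
   }

   function normalizeInbound(msg: Inbound): { prompts: string[] } {
     const inText = new Set(extractUrls(msg.text));
     // Drop attachments that merely mirror a URL already present in the text.
     const extra = msg.attachments.filter((a) => !inText.has(a.url));
     // Exactly one prompt per inbound message; attachment metadata is merged
     // into that prompt rather than triggering a second model call.
     const merged =
       msg.text + extra.map((a) => `\n[attachment] ${a.url}`).join("");
     return { prompts: [merged] };
   }

   const { prompts } = normalizeInbound({
     text: "check https://example.com/post",
     attachments: [{ url: "https://example.com/post" }], // preview handler re-added it
   });
   console.log(prompts.length); // must be exactly 1
   ```

   The regression test then asserts `prompts.length === 1` for the URL, attachment, and URL-plus-preview cases.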
- **Potential Assignees**
  - **odilitime** (core/runtime/server changes)
  - **lalalune** (message flow architecture)
  - **borisudovicic** (cloud chat/api pathway familiarity)

---

### 2) `Embedding error on Ubuntu 24.04 despite Ollama working` — elizaos-plugins/plugin-ollama **#17**
- **Current Status:** Open (investigation started per weekly summary)
- **Impact Assessment**
  - **User Impact:** **Medium–High** (Linux users are a large share; Ubuntu 24.04 is a common baseline)
  - **Functional Impact:** **Partial** (blocks embeddings → retrieval/memory/search features degrade or fail)
  - **Brand Impact:** **Medium** (perception: “local model integration unreliable”)
- **Technical Classification**
  - **Category:** Bug / Compatibility
  - **Component Affected:** Model Integration (Ollama plugin embeddings path)
  - **Complexity:** Moderate effort
- **Resource Requirements**
  - **Required Expertise:** Node/TS plugin debugging; Ollama API; Linux env reproducibility; HTTP/JSON contracts
  - **Dependencies:** Repro steps + logs from reporter; confirm Ollama version + embedding model behavior
  - **Estimated Effort (1–5):** **3**
- **Recommended Priority:** **P1**
- **Actionable Next Steps**
  1. Reproduce in CI/devcontainer: Ubuntu 24.04 + latest Ollama + a known embedding model.
  2. Capture request/response payloads; verify endpoint (`/api/embeddings` vs `/v1/embeddings`) and schema.
  3. Add robust error messaging (surface underlying Ollama error + hints).
  4. Implement version-gated compatibility shim if Ollama API differs by version.
  5. Add integration test that runs only when `OLLAMA_HOST` is set (optional CI job).
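   Steps 2–4 can share one normalization point. A minimal sketch of the response-shape shim, assuming the two endpoint families return the shapes documented for them (native `/api/embeddings` → `{ embedding: number[] }`; OpenAI-compatible `/v1/embeddings` → `{ data: [{ embedding: number[] }] }`); the function name is illustrative:

   ```typescript
   // Sketch of a compatibility shim: accept either embedding response shape
   // and surface the raw payload on mismatch (step 3's error-messaging goal).

   type NativeResp = { embedding: number[] };
   type OpenAIResp = { data: { embedding: number[] }[] };

   function normalizeEmbedding(resp: unknown): number[] {
     const r = resp as Partial<NativeResp & OpenAIResp>;
     if (Array.isArray(r.embedding)) return r.embedding; // native Ollama shape
     if (Array.isArray(r.data) && r.data[0] && Array.isArray(r.data[0].embedding)) {
       return r.data[0].embedding; // OpenAI-compatible shape
     }
     // Include the payload so the underlying Ollama error is visible to users.
     throw new Error(`Unexpected embedding response shape: ${JSON.stringify(resp)}`);
   }
   ```

   With both shapes accepted, an endpoint or version mismatch degrades into a clear error instead of a silent failure.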
- **Potential Assignees**
  - **odilitime** (core integrations + plugin ecosystem stability)
  - **lalalune** (provider/model integration experience)

---

### 3) `Distribution issue: npx milady installs unrelated Alibaba tool` — milady-ai/milady **#324**
- **Current Status:** Open (raised on Discord; acknowledged as pre-release; workaround = GitHub release binaries)
- **Impact Assessment**
  - **User Impact:** **High** (any user attempting the "standard" `npx` install path)
  - **Functional Impact:** **Yes** (installs the wrong package → cannot use project)
  - **Brand Impact:** **High** (looks like a supply-chain / naming / publishing failure)
- **Technical Classification**
  - **Category:** Bug / Release Engineering / UX
  - **Component Affected:** CLI distribution (npm publishing / package naming / docs)
  - **Complexity:** Moderate effort
- **Resource Requirements**
  - **Required Expertise:** npm publishing; package naming/ownership; CLI release process; security awareness
  - **Dependencies:** npm package name availability; verification of official package scope/namespace
  - **Estimated Effort (1–5):** **2–3**
- **Recommended Priority:** **P1**
- **Actionable Next Steps**
  1. Decide canonical package name (prefer scoped, e.g. `@milady-ai/cli` or `@elizaos/milady`).
  2. Update docs immediately to remove/avoid `npx milady` until corrected.
  3. Publish a protected scoped package and add prominent install instructions.
  4. If ownership allows, claim or coordinate transfer of the conflicting name, or publish a "stub" package on it that warns and redirects; if ownership is not obtainable, do not engage with the conflicting package.
  5. Add “install verification” step in release checklist (fresh machine `npx` test).
- **Potential Assignees**
  - **odilitime** (already engaged on Discord; release guidance)
  - **milady maintainers** (repo owners/releasers)

---

### 4) `Spartan setup is brittle: bun install hangs; missing plugins require manual cloning; Docker files non-functional` — (No GitHub issue captured; Discord report on 2026-02-17)
- **Current Status:** Known pain point; not polished; Docker files exist but “don’t work”
- **Impact Assessment**
  - **User Impact:** **High** (blocks developer onboarding; repeated support overhead)
  - **Functional Impact:** **Yes** (prevents running Spartan without tribal knowledge)
  - **Brand Impact:** **High** (first-run experience; “it doesn’t install” is corrosive)
- **Technical Classification**
  - **Category:** Bug / DX / Documentation
  - **Component Affected:** Plugin System + Tooling (Bun), Repo bootstrapping, Docker
  - **Complexity:** Complex solution (likely multiple root causes)
- **Resource Requirements**
  - **Required Expertise:** Bun monorepo tooling; plugin registry/dependency graph; Docker; CI reproducibility
  - **Dependencies:** Ongoing plugin upgrades (noted as current blocker); clear list of required vs optional plugins
  - **Estimated Effort (1–5):** **4**
- **Recommended Priority:** **P1**
- **Actionable Next Steps**
  1. Create a tracking issue in the appropriate repo (Spartan repo or elizaOS main) with:
     - minimal reproducible steps,
     - required plugins list,
     - expected “happy path” install flow.
  2. Fix Docker baseline first (golden path): produce a working `docker compose up` for Spartan.
  3. Replace “manual clone plugins” with a scripted bootstrap:
     - plugin manifest (required/optional),
     - deterministic fetch/install step.
  4. Add CI job that validates fresh install (Linux) and fails on missing plugin deps.
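   The bootstrap in step 3 reduces to a deterministic install plan computed from a manifest. A minimal sketch, assuming a hypothetical `plugins.manifest.json` format (entry names and the SQL-plugin example are illustrative):

   ```typescript
   // Sketch of the scripted bootstrap: compare a plugin manifest against
   // what is installed, and fail hard only on missing *required* plugins
   // (the condition the CI job in step 4 would enforce).

   type ManifestEntry = { name: string; repo: string; required: boolean };

   function resolveInstallPlan(manifest: ManifestEntry[], installed: Set<string>) {
     const missing = manifest.filter((p) => !installed.has(p.name));
     const blocking = missing.filter((p) => p.required);
     return {
       toInstall: missing.map((p) => p.repo), // deterministic fetch list
       blocking: blocking.map((p) => p.name), // required plugins still absent
       ok: blocking.length === 0,             // CI gate: fail when false
     };
   }

   const manifest: ManifestEntry[] = [
     { name: "plugin-sql", repo: "elizaos-plugins/plugin-sql", required: true },
     { name: "plugin-extras", repo: "example-org/plugin-extras", required: false },
   ];
   const plan = resolveInstallPlan(manifest, new Set(["plugin-extras"]));
   console.log(plan.ok, plan.blocking); // false, ["plugin-sql"]
   ```

   Replacing "manually clone these repos" with a plan like this makes the happy path scriptable and the failure mode explicit.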
- **Potential Assignees**
  - **odilitime** (acknowledged state; core/tooling)
  - **lalalune** (architecture + multi-language/next work may touch bootstrap assumptions)
  - **hanzlamateen** (large-scale dependency/tooling work)

---

### 5) `Feature Request: Support custom OpenAI endpoint URL for OpenAI provider` — elizaos/eliza **#6490**
- **Current Status:** Open
- **Impact Assessment**
  - **User Impact:** **Medium–High** (unlocks OpenAI-compatible providers; common enterprise/self-host need)
  - **Functional Impact:** **Partial** (OpenAI works; third-party compatibles blocked)
  - **Brand Impact:** **Medium** (integration flexibility expectation for agent frameworks)
- **Technical Classification**
  - **Category:** Feature Request
  - **Component Affected:** Model Integration / Provider layer (OpenAI)
  - **Complexity:** Simple fix → Moderate effort (depends on config plumbing + docs)
- **Resource Requirements**
  - **Required Expertise:** Provider abstraction; configuration; security considerations (SSRF/allowlist if server-side)
  - **Dependencies:** Confirm whether provider code lives in core vs separate plugin repo; align config schema across runtimes
  - **Estimated Effort (1–5):** **2**
- **Recommended Priority:** **P2** (promote to **P1** if enterprise demand is active this sprint)
- **Actionable Next Steps**
  1. Add `OPENAI_BASE_URL` (or `OPENAI_ENDPOINT`) to provider config with sensible default.
  2. Ensure it propagates through CLI templates and cloud deployment env vars.
  3. Add docs + examples for SiliconFlow/other compatibles.
  4. Add validation to prevent malformed URLs; consider allowlisting for hosted/cloud mode.
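   Steps 1 and 4 together are a small amount of config plumbing. A minimal sketch, assuming `OPENAI_BASE_URL` as the variable name proposed in step 1 (the allowlist parameter and function name are illustrative):

   ```typescript
   // Sketch: resolve an optional OpenAI base-URL override, validate it,
   // and optionally enforce a host allowlist for hosted/cloud mode.

   const DEFAULT_BASE_URL = "https://api.openai.com/v1";

   function resolveOpenAIBaseUrl(
     env: Record<string, string | undefined>,
     allowlist?: string[],
   ): string {
     const raw = env.OPENAI_BASE_URL ?? DEFAULT_BASE_URL;
     let url: URL;
     try {
       url = new URL(raw); // rejects malformed values up front
     } catch {
       throw new Error(`OPENAI_BASE_URL is not a valid URL: ${raw}`);
     }
     if (url.protocol !== "https:" && url.protocol !== "http:") {
       throw new Error(`OPENAI_BASE_URL must use http(s): ${raw}`);
     }
     // Hosted/cloud mode: restrict to known-good hosts to reduce SSRF surface.
     if (allowlist && !allowlist.includes(url.host)) {
       throw new Error(`OPENAI_BASE_URL host not allowlisted: ${url.host}`);
     }
     return url.toString().replace(/\/$/, ""); // normalize trailing slash
   }
   ```

   Keeping the default inside the resolver means existing deployments see no behavior change unless they set the variable.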
- **Potential Assignees**
  - **odilitime** (noted activity on plugin-openai work in monthly contributor notes)
  - **lalalune** (provider abstractions)

---

## P2 / P3 Issues (Near-Term / Planned)

---

### 6) `Need a proper support/ticket system for coin migration and user issues` — (Community support gap; Discord 2026-02-18; also recurring migration disputes on 2026-02-16)
- **Current Status:** Unresolved; users seeking help routed to public posts; no ticket path
- **Impact Assessment**
  - **User Impact:** **Medium** (impacts affected users strongly; volume unclear but recurring)
  - **Functional Impact:** **No** (not a framework runtime bug)
  - **Brand Impact:** **High** (finance/token support is reputation-sensitive; scam risk)
- **Technical Classification**
  - **Category:** UX / Process / Security (social engineering surface)
  - **Component Affected:** Community Ops / Support tooling
  - **Complexity:** Moderate effort
- **Resource Requirements**
  - **Required Expertise:** Community operations; moderation; basic tooling (forms/helpdesk); security comms
  - **Dependencies:** Clear policy decisions (what can/can’t be recovered; migration deadlines)
  - **Estimated Effort (1–5):** **3**
- **Recommended Priority:** **P2**
- **Actionable Next Steps**
  1. Publish a single canonical “Token/Migration Support” page + pinned Discord message (what’s supported, official links, scam warnings).
  2. Stand up a lightweight ticket intake (e.g., a Google Form, GitHub Discussions, or Zendesk/HelpScout) with auto-responses.
  3. Add moderator SOP: never accept DMs; always redirect to official form; identity verification guidelines.
- **Potential Assignees**
  - **ElizaBAO** (community/business ops coordination)
  - **Kenk** (community guidance; docs pointers)
  - **Ops team** (policy + tooling)

---

### 7) `Evaluate and potentially adopt HackMD MCP tool for team-wide collaborative note-taking` — (Discord core-devs; repo: yuna0x0/hackmd-mcp)
- **Current Status:** Proposed; no decision
- **Impact Assessment**
  - **User Impact:** **Low** (end-users unaffected)
  - **Functional Impact:** **No**
  - **Brand Impact:** **Low**
- **Technical Classification**
  - **Category:** Documentation / Process tooling
  - **Component Affected:** Internal dev workflows (MCP tooling)
  - **Complexity:** Simple fix (pilot) → Moderate (org-wide rollout)
- **Resource Requirements**
  - **Required Expertise:** MCP integration; security review for tokens/permissions
  - **Dependencies:** Decide standard note repo and access model
  - **Estimated Effort (1–5):** **1–2**
- **Recommended Priority:** **P3**
- **Actionable Next Steps**
  1. Pilot with core-devs for 1 week using a shared HackMD space.
  2. Define guidelines for what belongs in HackMD vs GitHub issues/docs.
  3. If adopted, document setup + recommended MCP usage patterns.
- **Potential Assignees**
  - **Stan** (proposer; rollout driver)
  - **odilitime** (lightweight security sanity check)

---

## Summary — Top Priority (Next 5–10 Items to Address)

1. **P0:** elizaos/eliza **#6486** — URL messages causing **duplicate LLM calls** (cost + UX + quality).
2. **P1:** plugin-ollama **#17** — **Ubuntu 24.04 embedding failures** (breaks retrieval/memory features).
3. **P1:** milady-ai/milady **#324** — `npx milady` installs the **wrong package** (distribution trust issue).
4. **P1:** **Spartan onboarding/Docker broken** (create tracking issue; make one-click install path).
5. **P2:** elizaos/eliza **#6490** — custom OpenAI endpoint URL (unblocks OpenAI-compatible providers).
6. **P2:** **Support/ticket workflow** for migration/user issues (reduce scams + reputational risk).
7. **P3:** HackMD MCP adoption pilot (internal efficiency; lower priority than product stability).

---

## Cross-Issue Patterns / Themes (Potential Deeper Architectural Problems)

- **Message ingestion & multi-modal normalization is fragile:** #6486 indicates the pipeline can trigger multiple processing paths (text + attachment) leading to duplicated model calls and duplicated streaming output. This suggests missing “single source of truth” normalization before inference.
- **Ecosystem reliability is constrained by packaging/onboarding:** Spartan setup problems + milady `npx` confusion + Ollama compatibility issues all point to **DX bottlenecks** (fresh install parity, reproducibility, and release discipline).
- **Trust & safety expectations are rising:** Token migration support confusion and scam warnings highlight the need for clearer official support channels and hardened public guidance.

---

## Process Recommendations (Prevent Recurrence)

1. **Add “One message → one inference” invariants**
   - Introduce a test harness that asserts *exactly one* LLM request per inbound message under common cases (URL, attachments, previews).
2. **Golden-path install CI**
   - For key distributions (Spartan, major plugins), add CI that runs on a clean Ubuntu image and validates: install → run → smoke test.
3. **Release/distribution checklist**
   - Require “fresh machine install” verification for any CLI/`npx` path, plus namespace/ownership checks to avoid name collisions.
4. **Operational support surface hardening**
   - Centralize support intake + pinned scam warnings + “no DMs” policy enforcement; link from docs and Discord topic headers.
5. **Compatibility matrix for model providers**
   - Maintain a small matrix (Ollama versions, Ubuntu versions, embedding endpoints) and a standard logging/debug mode across providers/plugins.