## Issue Triage — 2026-02-26 (elizaOS)

### 1) **[Incident] `develop` branch contains v2.0.0 code / 1.x effectively lost**
- **Issue Title & ID:** `develop` branch unexpectedly contains 2.0.0 code (INC-2026-02-25-DEVBRANCH)
- **Current Status:** Mitigation partially in progress — the plan (per Discord) is to cut `v2-develop` to preserve 1.x for users in transition. Root cause unknown; the change is not traceable through normal PR history.
- **Impact Assessment**
  - **User Impact:** **Critical** (anyone tracking `develop` for 1.x gets unexpected breaking changes)
  - **Functional Impact:** **Yes** (breaks expected build/runtime compatibility for 1.x workflows)
  - **Brand Impact:** **High** (appears like repo is unmanaged / unsafe to depend on)
- **Technical Classification**
  - **Issue Category:** Bug / Release Engineering
  - **Component Affected:** Repo management, branching strategy, CI/CD
  - **Complexity:** **Architectural change** (branching + protections + release process)
- **Resource Requirements**
  - **Required Expertise:** Git/GitHub administration, release engineering, CI, monorepo maintainership
  - **Dependencies:** Understanding relationship to large v2 PRs (e.g., **PR #6351 “V2.0.0”**, PRs like **#6474/#6485 “next generation”**)
  - **Estimated Effort (1–5):** **4**
- **Recommended Priority:** **P0**
- **Specific Actionable Next Steps**
  1. Immediately **freeze `develop`** (temporary branch protection: restrict pushes, require PRs, require status checks).
  2. Create/verify **explicit long-lived branches**: `v1-develop` (or `1.x`), `v2-develop`, and document which is default.
  3. Add a **repo-wide announcement**: which branch to use for 1.x vs 2.x; update README + docs.
  4. Perform forensic check: compare commit graphs and identify when `develop` diverged (even if PR trail is missing).
  5. Update CI matrix to run on both branches and **fail fast** if incompatible artifacts appear.
- **Potential Assignees:** **Odilitime** (core maintainer), **lalalune** (v2 author), **Stan ⚡** (process/ops coordination)
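Step 1 above can be scripted against the GitHub branch-protection endpoint (`PUT /repos/{owner}/{repo}/branches/{branch}/protection`). A minimal sketch — the status-check name, review count, and team restriction are placeholder assumptions to adapt:

```typescript
// Sketch: freeze `develop` via the GitHub REST API. This only builds the
// request; applying it requires a token with admin rights on the repo.

interface ProtectionRequest {
  url: string;
  method: string;
  body: Record<string, unknown>;
}

function buildProtectionRequest(repo: string, branch: string): ProtectionRequest {
  return {
    url: `https://api.github.com/repos/${repo}/branches/${branch}/protection`,
    method: "PUT",
    body: {
      // all four top-level fields are required by this endpoint
      required_status_checks: { strict: true, contexts: ["ci/build"] }, // placeholder check name
      enforce_admins: true, // block direct pushes, even for admins
      required_pull_request_reviews: { required_approving_review_count: 1 },
      restrictions: null, // or restrict push access to a maintainer team
    },
  };
}

// To apply: fetch(req.url, { method: req.method, headers: { Authorization: ..., Accept: "application/vnd.github+json" }, body: JSON.stringify(req.body) })
const req = buildProtectionRequest("elizaOS/eliza", "develop");
```

The same payload works for `v1-develop`/`v2-develop` once those branches exist, so the protection rules stay identical across both lines.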

---

### 2) **GitHub ↔ Linear bidirectional sync is corrupting/duplicating issue tracking (“mess”)**
- **Issue Title & ID:** GitHub-Linear bidirectional sync causing issue tracking corruption (INC-2026-02-25-LINEAR-SYNC)
- **Current Status:** Confirmed (Discord); cleanup required; the sync is, or was until recently, configured bidirectionally.
- **Impact Assessment**
  - **User Impact:** **High** (contributors can’t reliably find the right issue/source of truth)
  - **Functional Impact:** **Partial** (doesn’t break runtime, but disrupts planning and delivery tracking)
  - **Brand Impact:** **Medium/High** (public tracker looks chaotic; slows external contribution)
- **Technical Classification**
  - **Issue Category:** UX / Process / Tooling
  - **Component Affected:** Project management tooling, GitHub Issues hygiene
  - **Complexity:** **Moderate effort**
- **Resource Requirements**
  - **Required Expertise:** GitHub/Linear admin, workflow design, automation rules
  - **Dependencies:** Branch/roadmap clarity (v1 vs v2) to correctly label/migrate issues
  - **Estimated Effort (1–5):** **3**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps**
  1. Switch to **one-way sync** (Linear → GitHub or GitHub → Linear) or disable until stable.
  2. Define “source of truth” and add a **standard issue template** linking Linear IDs if needed.
  3. Run a cleanup pass:
     - dedupe issues
     - restore correct states/labels
     - close or merge duplicates with canonical links
  4. Add automation safeguards: prevent reopen loops, prevent duplicate creation on label/state changes.
- **Potential Assignees:** **Stan ⚡** (confirmed sync details), **Odilitime** (maintainer), plus any org admin with Linear permissions

---

### 3) **[Bug] URL in message triggers duplicate LLM calls (processed as text + attachment)**
- **Issue Title & ID:** “[Bug] URL in message triggers duplicate LLM calls - processed as both text and attachment (webapp)” — **#6486**
- **Current Status:** **Open** (reported, minimal discussion; clear repro + expected behavior included)
- **Impact Assessment**
  - **User Impact:** **High** (common behavior: sending links)
  - **Functional Impact:** **Yes** (double responses, duplicated output, 2x token usage/cost)
  - **Brand Impact:** **High** (visible duplication + unexpected costs)
- **Technical Classification**
  - **Issue Category:** Bug / Performance
  - **Component Affected:** Webapp chat pipeline, message ingestion, SSE streaming, attachment/link preview handling
  - **Complexity:** **Moderate effort** (needs correct dedupe point in pipeline)
- **Resource Requirements**
  - **Required Expertise:** TypeScript runtime/server message flow, webapp message model, SSE streaming
  - **Dependencies:** Clarify intended behavior for URL previews/metadata (keep preview but avoid second LLM call)
  - **Estimated Effort (1–5):** **3**
- **Recommended Priority:** **P0**
- **Specific Actionable Next Steps**
  1. Add logging to confirm two separate “message items” are being queued (text + attachment).
  2. Implement a **single canonical message representation** before LLM invocation:
     - either strip URL preview attachments from LLM input, or
     - treat preview as metadata only, not a second message/event
  3. Add regression tests:
     - message containing URL produces exactly one LLM call
     - SSE stream emits one assistant message before `done`
  4. Validate with multiple clients (webapp + any other chat frontends) to ensure no reintroduction.
- **Potential Assignees:** **Odilitime** (core/runtime), **lalalune** (message/runtime refactors), **anchapin** (robustness fixes), **thewoweffect** (reporter for validation)
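The "single canonical message representation" from step 2 can be sketched as a normalization pass that runs before any LLM invocation. All type and function names below are hypothetical illustrations, not the actual elizaOS message model:

```typescript
// Sketch: collapse text + URL-preview attachment into one LLM input, so a
// message containing a URL yields exactly one LLM call.

interface Attachment {
  url: string;
  kind: "url-preview" | "image" | "file";
}

interface IncomingMessage {
  text: string;
  attachments: Attachment[];
}

interface LlmInput {
  text: string;
  // previews survive as metadata only; they never become a second LLM event
  previewUrls: string[];
  nonPreviewAttachments: Attachment[];
}

function normalizeForLlm(msg: IncomingMessage): LlmInput {
  const previews = msg.attachments.filter((a) => a.kind === "url-preview");
  const rest = msg.attachments.filter((a) => a.kind !== "url-preview");
  return {
    text: msg.text,
    previewUrls: previews.map((a) => a.url),
    nonPreviewAttachments: rest,
  };
}

// One incoming message with a URL -> one LlmInput -> one LLM call.
const input = normalizeForLlm({
  text: "check https://example.com",
  attachments: [{ url: "https://example.com", kind: "url-preview" }],
});
```

The regression tests in step 3 then reduce to asserting that the pipeline invokes the LLM once per `LlmInput`, which this shape makes easy to count.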

---

### 4) **OpenAI provider cannot target OpenAI-compatible third-party endpoints**
- **Issue Title & ID:** “Feature Request: Support custom OpenAI endpoint URL for OpenAI provider” — **#6490**
- **Current Status:** **Open**
- **Impact Assessment**
  - **User Impact:** **Medium/High** (blocks a meaningful set of users on OpenAI-compatible infra like SiliconFlow)
  - **Functional Impact:** **Partial** (core works on OpenAI; reduces portability/compat)
  - **Brand Impact:** **Medium** (framework seen as less flexible than competitors)
- **Technical Classification**
  - **Issue Category:** Feature Request
  - **Component Affected:** Model Integration / Provider system (OpenAI provider config)
  - **Complexity:** **Simple fix** to **Moderate effort** (config plumbing + docs + validation)
- **Resource Requirements**
  - **Required Expertise:** Provider abstraction, env/config handling, API client instantiation
  - **Dependencies:** Ensure no conflict with auth headers, Azure/OpenAI variants, and per-model routing
  - **Estimated Effort (1–5):** **2**
- **Recommended Priority:** **P2** (upgrade to P1 if cloud vendors become a primary onboarding path)
- **Specific Actionable Next Steps**
  1. Add `OPENAI_BASE_URL` (or provider-scoped `baseUrl`) to config schema.
  2. Ensure it propagates to the OpenAI client constructor for both chat + embeddings (if applicable).
  3. Document common third-party examples and pitfalls (headers, model names).
  4. Add a minimal integration test using a mock OpenAI-compatible server.
- **Potential Assignees:** **lalalune** (core/provider changes), **Odilitime**, **coolRoger** (reporter for acceptance testing)
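The config plumbing in steps 1–2 is small. A sketch of the resolution logic — the `OPENAI_BASE_URL` variable name and defaulting behavior are the proposal here, not the current elizaOS schema; the openai-node client does accept a `baseURL` constructor option:

```typescript
// Sketch: resolve a custom OpenAI-compatible endpoint from the environment
// and hand it to the provider (e.g. `new OpenAI({ apiKey, baseURL })`).

interface OpenAiProviderConfig {
  apiKey: string;
  baseURL: string;
}

function resolveOpenAiConfig(env: Record<string, string | undefined>): OpenAiProviderConfig {
  const baseURL =
    env.OPENAI_BASE_URL?.replace(/\/+$/, "") // tolerate a trailing slash
    ?? "https://api.openai.com/v1";
  if (!/^https?:\/\//.test(baseURL)) {
    throw new Error(`OPENAI_BASE_URL must be an absolute URL, got: ${baseURL}`);
  }
  return { apiKey: env.OPENAI_API_KEY ?? "", baseURL };
}

// e.g. pointing at a SiliconFlow-style OpenAI-compatible endpoint:
const cfg = resolveOpenAiConfig({
  OPENAI_API_KEY: "sk-...",
  OPENAI_BASE_URL: "https://api.siliconflow.cn/v1/",
});
```

Validating the URL at config time (rather than at first request) gives users a clear error, which covers most of the "pitfalls" documentation in step 3.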

---

### 5) **Payment infrastructure plugin discussion is fragmented; needs a single spec thread**
- **Issue Title & ID:** “Payment Infrastructure Plugin” — **#6443**
- **Current Status:** **Open** (Discord action item: “Add comment to GitHub issue #6443” to consolidate discussion)
- **Impact Assessment**
  - **User Impact:** **Medium** (important for agent-to-agent commerce use cases; not required for baseline chat)
  - **Functional Impact:** **No** (not blocking core runtime)
  - **Brand Impact:** **Medium** (key differentiator vs other agent frameworks if done well)
- **Technical Classification**
  - **Issue Category:** Feature Request
  - **Component Affected:** Plugin System, Wallet/Payments integrations, possibly Cloud
  - **Complexity:** **Complex solution** (security + custody model + chain support + UX)
- **Resource Requirements**
  - **Required Expertise:** Web3 payments, key management, threat modeling, plugin API design
  - **Dependencies:** Decide target rails (EVM, Solana, x402, custodial/non-custodial), authentication model (JWT/SAID)
  - **Estimated Effort (1–5):** **5**
- **Recommended Priority:** **P3** (keep warm; prioritize after current repo/process stability issues)
- **Specific Actionable Next Steps**
  1. Require all proposals to be summarized as: goals, non-goals, custody model, supported chains, minimal API.
  2. Add a “Phase 0” deliverable: **interface spec + reference plugin skeleton** with mock payments.
  3. Identify security review gates before any mainnet integration.
- **Potential Assignees:** **saoirse102345-blip** (requester), **Odilitime** (triage/spec), **Stan ⚡** (coordination), + a Web3/security-focused contributor
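The "Phase 0" deliverable from step 2 might look like the following interface-plus-mock skeleton. Every shape here is illustrative only — custody model, supported chains, and auth are exactly the open questions #6443 still needs to settle:

```typescript
// Sketch: minimal payment-plugin interface with a mock backend, so agent
// workflows can be built and tested before any real rail or security review.

interface PaymentRequest {
  to: string;     // recipient identifier (deliberately chain-agnostic here)
  amount: bigint; // smallest unit of the chosen asset
  asset: string;  // e.g. "USDC"; the supported set is an open question
}

interface PaymentReceipt {
  id: string;
  status: "confirmed" | "pending" | "failed";
}

interface PaymentPlugin {
  name: string;
  pay(req: PaymentRequest): Promise<PaymentReceipt>;
  getBalance(asset: string): Promise<bigint>;
}

class MockPaymentPlugin implements PaymentPlugin {
  name = "mock-payments";
  private balances = new Map<string, bigint>([["USDC", 1_000_000n]]);

  async pay(req: PaymentRequest): Promise<PaymentReceipt> {
    const bal = this.balances.get(req.asset) ?? 0n;
    if (bal < req.amount) return { id: "n/a", status: "failed" };
    this.balances.set(req.asset, bal - req.amount);
    return { id: `mock-${Date.now()}`, status: "confirmed" };
  }

  async getBalance(asset: string): Promise<bigint> {
    return this.balances.get(asset) ?? 0n;
  }
}
```

Freezing an interface this small first lets EVM/Solana/x402 backends compete as implementations behind it, while the security-review gate (step 3) applies per backend.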

---

### 6) **Release communication gaps: Babylon timing + “AI news release schedule” unanswered**
- **Issue Title & ID:** Babylon release timing clarification (DOC-2026-02-25-BABYLON); AI news schedule clarification (DOC-2026-02-25-AINEWS)
- **Current Status:** Open questions from Discord; no linked GitHub issues in provided data.
- **Impact Assessment**
  - **User Impact:** **Medium** (community confusion; partner/user planning impacted)
  - **Functional Impact:** **No**
  - **Brand Impact:** **Medium** (perception of disorganization, especially during restructuring)
- **Technical Classification**
  - **Issue Category:** Documentation / UX (community comms)
  - **Component Affected:** Project communications, docs site, release notes
  - **Complexity:** **Simple fix**
- **Resource Requirements**
  - **Required Expertise:** PM/comms, release planning
  - **Dependencies:** Internal roadmap alignment (Milady → Babylon/Hyperscape sequence mentioned in Discord)
  - **Estimated Effort (1–5):** **1**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps**
  1. Publish a short “What’s shipping next” note: Milady status, Babylon ramp plan, Hyperscape timeline (even if tentative).
  2. Add a single pinned Discord message linking to a living roadmap doc (date-stamped).
  3. If “AI news” is a recurring artifact, document cadence + owner.
- **Potential Assignees:** **Odilitime** (Discord owner/admin), **ElizaBAO** (raised questions; can help draft), **s** (project sequencing context)

---

### 7) **Competitive feature investigation: “trajectory compression” (Hermes Agent)**
- **Issue Title & ID:** Investigate trajectory compression for fitting training data into token budgets (R&D-2026-02-25-TRAJ-COMP)
- **Current Status:** Identified as interesting; investigation pending.
- **Impact Assessment**
  - **User Impact:** **Low/Medium** (future optimization; not immediate breakage)
  - **Functional Impact:** **No**
  - **Brand Impact:** **Low/Medium** (could become a differentiator if implemented well)
- **Technical Classification**
  - **Issue Category:** Feature Request / Performance (R&D)
  - **Component Affected:** Training pipeline, memory/reflection compression, scenario tooling
  - **Complexity:** **Complex solution**
- **Resource Requirements**
  - **Required Expertise:** ML/data pipeline, prompt/memory compression techniques, evaluation harness
  - **Dependencies:** Clarity on elizaOS training/ART examples and scenario evaluators; token usage profiling
  - **Estimated Effort (1–5):** **4**
- **Recommended Priority:** **P4** (park until P0/P1 stability items are resolved)
- **Specific Actionable Next Steps**
  1. Write a 1-page technical brief: what is compressed (trajectories, reflections, tool traces), expected wins, risks.
  2. Prototype on a small dataset; measure token reduction vs accuracy loss.
- **Potential Assignees:** **Odilitime** (raised item), **standujar** (workflow/tooling experience), ML-focused community contributor
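The prototype in step 2 is mostly a measurement loop. A toy sketch — the compression strategy (elide earlier tool output, keep the last step verbatim) and the chars/4 token estimate are stand-in assumptions, not the Hermes Agent technique:

```typescript
// Sketch: compress a trajectory, then compare estimated token costs to get a
// "token reduction" number; accuracy loss would be measured separately.

interface Step {
  thought: string;
  toolCall?: string;
  toolOutput?: string;
}

const estimateTokens = (s: string): number => Math.ceil(s.length / 4); // rough heuristic

function compress(traj: Step[]): Step[] {
  // keep full detail only for the final step; elide earlier raw tool output
  return traj.map((step, i) =>
    i === traj.length - 1
      ? step
      : { ...step, toolOutput: step.toolOutput ? "[output elided]" : undefined },
  );
}

function tokenCost(traj: Step[]): number {
  return traj.reduce(
    (sum, s) => sum + estimateTokens(s.thought + (s.toolCall ?? "") + (s.toolOutput ?? "")),
    0,
  );
}

const traj: Step[] = [
  { thought: "search docs", toolCall: "search()", toolOutput: "x".repeat(4000) },
  { thought: "read result", toolOutput: "final answer text" },
];
const saved = 1 - tokenCost(compress(traj)) / tokenCost(traj);
```

Running this over a real dataset of trajectories gives the "token reduction" half of the measurement; the accuracy-loss half needs an evaluation harness on the compressed data.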

---

## Highest Priority Summary (Top 5–10 to address now)
1. **P0:** INC-2026-02-25-DEVBRANCH — `develop` branch contaminated with v2.0.0 code; establish v1/v2 branch strategy + protections.
2. **P0:** **#6486** — URL messages cause **duplicate LLM calls** and duplicated SSE output (cost + UX regression).
3. **P1:** INC-2026-02-25-LINEAR-SYNC — GitHub↔Linear bidirectional sync is breaking issue hygiene; disable/repair and dedupe.
4. **P2:** **#6490** — Support custom OpenAI-compatible base URL for OpenAI provider (unblocks many infra choices).
5. **P2:** DOC-2026-02-25-BABYLON / DOC-2026-02-25-AINEWS — publish release timing + comms cadence to reduce community uncertainty.
6. **P3:** **#6443** — consolidate “payment infrastructure plugin” requirements into a spec and phased plan.
7. **P4:** R&D-2026-02-25-TRAJ-COMP — evaluate trajectory compression as a future token-budget optimization.

---

## Patterns / Themes Suggesting Deeper Architectural Problems
- **Release engineering and branch discipline are currently the biggest systemic risk.** The `develop` branch incident implies missing protections, unclear versioning policy (1.x vs 2.x), and insufficient CI gating for incompatible changes.
- **Message normalization boundaries are leaky (text vs attachments vs previews).** The duplicate-LLM-call URL bug strongly suggests the pipeline lacks a single canonical “message-to-LLM” representation.
- **Tooling/process fragmentation (GitHub vs Linear) is creating operational drag.** Even when engineering is active, contributors can’t reliably track what is real, current, or canonical.

---

## Process Improvement Recommendations
1. **Implement a hard versioning policy + branch protections**
   - Protected branches, required PR reviews, required CI checks, and explicit “v1” vs “v2” default branches.
2. **Add CI “guardrails” for costly regressions**
   - Track “LLM calls per message” and/or token usage in integration tests (especially for chat + SSE streaming paths).
3. **Define a single issue system of record**
   - If Linear is required, move to **one-way sync** and document the workflow; otherwise fully standardize on GitHub Issues.
4. **Introduce a lightweight release communication ritual**
   - Weekly pinned roadmap update (even if brief), and link it in Discord to reduce repeated unanswered questions and uncertainty.
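The "LLM calls per message" guardrail in recommendation 2 can be made concrete with a call-counting spy in the integration tests. A minimal sketch — `handleMessage` and all other names are hypothetical stand-ins for the real pipeline entry point:

```typescript
// Sketch: count LLM invocations per inbound message and fail CI if a single
// message ever triggers more than one call (the #6486 regression class).

type LlmCall = (prompt: string) => Promise<string>;

function makeCountingLlm(responses: string[]): { llm: LlmCall; calls: () => number } {
  let n = 0;
  const llm: LlmCall = async () => responses[Math.min(n++, responses.length - 1)] ?? "";
  return { llm, calls: () => n };
}

// Hypothetical handler under test: should invoke the LLM exactly once, even
// when the text contains a URL that also produces a preview attachment.
async function handleMessage(text: string, llm: LlmCall): Promise<string> {
  return llm(text); // a correct pipeline normalizes first, then calls once
}

async function guardrail(): Promise<void> {
  const { llm, calls } = makeCountingLlm(["ok"]);
  await handleMessage("see https://example.com", llm);
  if (calls() !== 1) {
    throw new Error(`Expected exactly 1 LLM call, got ${calls()}`); // fails CI
  }
}
```

The same counter can accumulate estimated token usage, turning cost regressions into ordinary test failures instead of production surprises.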