## Issue Triage — 2026-03-11

### 1) **Develop branch is broken; “next version” work required to unblock**
- **Issue Title & ID:** Develop branch broken / release-blocking regressions — **DISCORD-DEV-2026-03-10-01**
- **Current Status:** **Open / Investigating** (confirmed broken by Odilitime; fix pending)
- **Impact Assessment:**
  - **User Impact:** **High** (contributors + users tracking develop)
  - **Functional Impact:** **Yes** (blocks core development workflow + downstream tasks)
  - **Brand Impact:** **High** (signals instability; reinforces “missed deadlines” narrative)
- **Technical Classification:**
  - **Category:** Bug / Release Engineering
  - **Component Affected:** Core Framework / Monorepo / CI
  - **Complexity:** **Moderate effort** (root cause unknown; likely multiple failure points)
- **Resource Requirements:**
  - **Required Expertise:** TypeScript/Node monorepo, CI pipelines, dependency management, release process
  - **Dependencies:** May depend on ongoing v2.0.0 refactors (prompt batching, services lifecycle)
  - **Estimated Effort (1-5):** **4**
- **Recommended Priority:** **P0**
- **Specific Actionable Next Steps:**
  1. Freeze merges to develop (except fixes) and declare a “stabilization window.”
  2. Identify first failing CI job / failing package with a bisect (last green SHA).
  3. Produce a short “broken develop” status note + workaround (use main / last stable tag).
  4. Add a minimal smoke test matrix for agent boot + plugin load to prevent recurrence.
- **Potential Assignees:** **Odilitime** (core), **Stan ⚡** (monorepo integration)
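
Step 2's bisect is, at its core, a binary search for the first failing commit between the last green SHA and develop HEAD. A minimal sketch of that search, with `isGood` standing in for checking out a commit and running the smoke test (all names here are illustrative, not actual repo tooling):

```typescript
// Binary search for the first failing commit, the idea behind `git bisect`.
// Precondition: shas[0] is the last known-green SHA, the final entry is the
// known-broken develop HEAD.
function firstBadIndex(shas: string[], isGood: (sha: string) => boolean): number {
  let lo = 0;               // known-good boundary
  let hi = shas.length - 1; // known-bad boundary
  while (hi - lo > 1) {
    const mid = (lo + hi) >> 1;
    if (isGood(shas[mid])) lo = mid;
    else hi = mid;
  }
  return hi; // index of the first commit where the smoke test fails
}

// Example: commits 0..3 pass, 4..6 fail, so the regression landed at index 4.
const shas = ["a0", "a1", "a2", "a3", "a4", "a5", "a6"];
const good = new Set(["a0", "a1", "a2", "a3"]);
console.log(firstBadIndex(shas, (s) => good.has(s))); // → 4
```

In practice, `git bisect run <smoke-script>` automates exactly this loop once step 4's smoke test exists.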

---

### 2) **Milady: System permissions / capabilities configuration unresolved**
- **Issue Title & ID:** Milady system permissions/capabilities not enabling correctly — **DISCORD-MILADY-2026-03-10-02**
- **Current Status:** **Open** (question raised; unresolved in-thread)
- **Impact Assessment:**
  - **User Impact:** **Medium–High** (blocks Milady setups needing privileged capabilities)
  - **Functional Impact:** **Partial** (some installations/features blocked)
  - **Brand Impact:** **Medium** (setup friction; “it doesn’t work out of the box”)
- **Technical Classification:**
  - **Category:** Bug / UX (configuration)
  - **Component Affected:** Plugin/System Integration (Milady), Runtime/OS permissions
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Linux permissions, containerization/systemd, Node runtime security, docs
  - **Dependencies:** May depend on installation method (APT vs manual), environment config conventions
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Reproduce on a clean Debian/Ubuntu VM (both APT and non-APT install paths).
  2. Document the exact required capabilities (what feature needs what permission).
  3. Add a diagnostics command/flag to print permission checks and recommended fixes.
  4. Publish a minimal “known-good” config example (JSON env section + OS steps).
- **Potential Assignees:** **BinaryCookies** (reporter/integration), **Meme Broker** (Milady repo contributor)
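
The diagnostics command in step 3 could start as small as the sketch below: run a list of permission probes and print a hint for each failure. The paths and check names are illustrative assumptions, not the real Milady config surface.

```typescript
import { accessSync, constants } from "node:fs";

// One probe per required capability; `probe` throws when the check fails.
interface Check {
  name: string;
  probe: () => void;
  hint: string;
}

function runDiagnostics(checks: Check[]): string[] {
  const failures: string[] = [];
  for (const c of checks) {
    try {
      c.probe();
    } catch {
      failures.push(`${c.name}: FAIL - ${c.hint}`);
    }
  }
  return failures;
}

// Hypothetical checks for an APT-style install; adjust paths to the real layout.
const report = runDiagnostics([
  {
    name: "config readable",
    probe: () => accessSync("/etc/milady/config.json", constants.R_OK),
    hint: "chown/chmod the config file for the service user",
  },
  {
    name: "data dir writable",
    probe: () => accessSync("/var/lib/milady", constants.W_OK),
    hint: "create the directory and grant the service user write access",
  },
]);
console.log(report.length === 0 ? "all checks passed" : report.join("\n"));
```

Shipping this as a `--doctor` style flag would turn most "it doesn't work out of the box" reports into self-service fixes.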

---

### 3) **Milady: Neon database integration unclear beyond env placement; operational failures likely**
- **Issue Title & ID:** Milady Neon DB integration reliability & setup clarity — **DISCORD-MILADY-2026-03-10-03**
- **Current Status:** **Partially understood** (config location found; end-to-end not confirmed)
- **Impact Assessment:**
  - **User Impact:** **Medium** (teams attempting hosted Postgres/vector setups)
  - **Functional Impact:** **Partial** (data persistence and multi-agent state may fail)
  - **Brand Impact:** **Medium**
- **Technical Classification:**
  - **Category:** Bug / Documentation
  - **Component Affected:** Model/Agent persistence layer; DB connectors/config loader
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Postgres/Neon, connection pooling, secrets management, config schema validation
  - **Dependencies:** Clarify interplay with upcoming “in-memory persistence” and DB cleanup work
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Create a verified Neon quickstart (env JSON snippet + required tables/migrations).
  2. Add config validation (fail fast with actionable error messages).
  3. Add one integration test job that boots Milady against Neon in CI (if feasible).
- **Potential Assignees:** **BinaryCookies**, **Odilitime** (persistence architecture)
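
Step 2's fail-fast validation might look like the sketch below for a Neon connection string. The env key (`DATABASE_URL`) and the `sslmode=require` rule are assumptions about the config conventions; swap in whatever key the Milady config loader actually reads.

```typescript
// Validate a Postgres/Neon connection string and return actionable errors
// instead of failing deep inside the persistence layer.
function validateDatabaseUrl(raw: string | undefined): string[] {
  if (!raw) return ["DATABASE_URL is not set; add it to the env section of your config"];
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return [`DATABASE_URL is not a valid URL: ${raw}`];
  }
  const errors: string[] = [];
  if (!/^postgres(ql)?:$/.test(url.protocol))
    errors.push(`expected postgres:// or postgresql:// scheme, got ${url.protocol}//`);
  if (!url.hostname) errors.push("missing hostname");
  if (!url.pathname || url.pathname === "/") errors.push("missing database name in path");
  if (url.searchParams.get("sslmode") !== "require")
    errors.push("Neon connections generally need ?sslmode=require");
  return errors;
}

// A config loader would pass process.env.DATABASE_URL here and exit on errors.
const errs = validateDatabaseUrl("postgres://user:pass@ep-example.neon.tech/neondb?sslmode=require");
console.log(errs.length === 0 ? "config ok" : errs.join("\n"));
```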

---

### 4) **Plugin-Ollama: Embedding failures on Linux**
- **Issue Title & ID:** Embeddings fail on Linux environments — **elizaos-plugins/plugin-ollama #17**
- **Current Status:** **Open / Investigating** (noted in weekly summary)
- **Impact Assessment:**
  - **User Impact:** **High** (Linux is common for self-hosting/local-model users)
  - **Functional Impact:** **Yes** (breaks embeddings → RAG/memory/search workflows)
  - **Brand Impact:** **High** (local model support credibility)
- **Technical Classification:**
  - **Category:** Bug / Compatibility
  - **Component Affected:** Model Integration (Ollama), Embeddings pipeline
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Ollama APIs, Linux runtime quirks (libc/AVX), node bindings, vector libs
  - **Dependencies:** Might depend on Ollama version detection and model/embedding endpoint differences
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Collect failure signatures (logs, Ollama version, distro, CPU flags, model used).
  2. Add a minimal reproduction script in the issue (single embed call).
  3. Implement version/endpoint detection + better error messaging (suggest fixes).
  4. Add Linux CI coverage (at least Ubuntu) for embeddings smoke test.
- **Potential Assignees:** **Odilitime** (core integrations), a plugin maintainer from **elizaos-plugins** org (unlisted in provided data)
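
Step 2's minimal reproduction can be a single call against the local Ollama HTTP API. This sketch assumes the default port and the `/api/embeddings` endpoint; the model name is illustrative, so substitute whichever model fails for you.

```typescript
// Pull response validation out so the failure mode (HTTP error vs. malformed
// body vs. empty vector) is unambiguous in the logs attached to the issue.
function checkEmbeddingResponse(body: unknown): number[] {
  const embedding = (body as { embedding?: unknown }).embedding;
  if (!Array.isArray(embedding) || embedding.length === 0 || typeof embedding[0] !== "number")
    throw new Error("malformed or empty embedding in response");
  return embedding as number[];
}

async function embedOnce(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  return checkEmbeddingResponse(await res.json());
}

embedOnce("hello world")
  .then((v) => console.log(`ok: ${v.length} dims`))
  .catch((e) => console.error(`repro failed: ${e}`));
```

Capturing the printed failure alongside distro, CPU flags, and `ollama --version` covers most of step 1 in one paste.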

---

### 5) **Prompt batching subsystem (v2.0.0): risk of regressions without a spec + test harness**
- **Issue Title & ID:** Prompt batching scheduler needs spec/tests to avoid behavior regressions — **DISCORD-V2-2026-03-10-04**
- **Current Status:** **In progress** (design/implementation underway)
- **Impact Assessment:**
  - **User Impact:** **Medium** (impacts plugin authors + agent behavior consistency)
  - **Functional Impact:** **Partial** (could degrade autonomy/evaluators/init flows)
  - **Brand Impact:** **Medium–High** (erratic agent behavior is very visible)
- **Technical Classification:**
  - **Category:** Architecture / Performance / Bug-prevention
  - **Component Affected:** Core Framework (LLM orchestration), Plugin execution, Evaluators
  - **Complexity:** **Architectural change**
- **Resource Requirements:**
  - **Required Expertise:** Scheduling/queuing, LLM tooling, plugin lifecycle, benchmarking
  - **Dependencies:** Depends on dynamicPromptExecution/autonomous system semantics in 3.x
  - **Estimated Effort (1-5):** **5**
- **Recommended Priority:** **P1** (ship safely; don’t rush without guardrails)
- **Specific Actionable Next Steps:**
  1. Write an RFC/spec: ordering guarantees, concurrency limits, fairness, cancellation, retries.
  2. Define compatibility mode to preserve legacy behavior for existing plugins.
  3. Create a test harness with representative plugins (init + autonomous + evaluator mix).
  4. Add metrics hooks (latency, token usage, queue depth) for tuning frontier vs local.
- **Potential Assignees:** **Odilitime** (lead), **Stan ⚡** (monorepo/service integration)
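
One guardrail the spec in step 1 would pin down is the concurrency limit with FIFO fairness. A minimal sketch of that behavior, with all names illustrative (the real subsystem's semantics must come from the RFC, not this toy):

```typescript
type Task<T> = () => Promise<T>;

// FIFO scheduler with a fixed concurrency limit. Waiters queue in arrival
// order and each completion hands its slot to the oldest waiter.
class PromptQueue {
  private running = 0;
  private pending: Array<() => void> = [];
  constructor(private limit: number) {}

  async run<T>(task: Task<T>): Promise<T> {
    // Re-check after waking in case a concurrent caller took the slot.
    while (this.running >= this.limit)
      await new Promise<void>((resolve) => this.pending.push(resolve));
    this.running++;
    try {
      return await task();
    } finally {
      this.running--;
      this.pending.shift()?.(); // wake the oldest waiter, FIFO order
    }
  }
}

// Demo: four 10 ms tasks through a limit-2 queue; concurrency never exceeds 2.
(async () => {
  const queue = new PromptQueue(2);
  let active = 0, peak = 0;
  const job = async () => {
    active++;
    peak = Math.max(peak, active);
    await new Promise((r) => setTimeout(r, 10));
    active--;
  };
  await Promise.all([1, 2, 3, 4].map(() => queue.run(job)));
  console.log(`peak concurrency: ${peak}`);
})();
```

The test harness in step 3 would assert exactly this kind of invariant (plus ordering and cancellation) against representative plugin mixes.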

---

### 6) **Service lifecycle improvements (lazy loading, in-memory persistence, “serverless concepts”) lack a coordinated plan**
- **Issue Title & ID:** Service lifecycle refactor needs phased plan (lazy loading + persistence) — **DISCORD-V2-2026-03-10-05**
- **Current Status:** **Concept/Exploration**
- **Impact Assessment:**
  - **User Impact:** **Medium** (startup time, stability, cost; affects most deployments)
  - **Functional Impact:** **Partial** (risk of state loss/ordering bugs if done ad hoc)
  - **Brand Impact:** **Medium**
- **Technical Classification:**
  - **Category:** Performance / Architecture
  - **Component Affected:** Core Framework services initialization, persistence layer
  - **Complexity:** **Architectural change**
- **Resource Requirements:**
  - **Required Expertise:** Dependency injection/service graphs, caching/state design, cloud patterns
  - **Dependencies:** Interacts with prompt batching + DB cleanup; needs sequencing
  - **Estimated Effort (1-5):** **4**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Map service dependency graph and identify which services can be lazily instantiated safely.
  2. Define persistence tiers (in-memory vs durable DB) and when each is valid.
  3. Add lifecycle hooks and a standard “service readiness” contract for plugins/components.
- **Potential Assignees:** **Stan ⚡** (integration), **Odilitime** (architecture)

---

### 7) **Database cleanup/technical debt discovered during v2 work (risk of blocking releases)**
- **Issue Title & ID:** DB cleanup required; unknown scope debt surfaced — **DISCORD-DB-2026-03-10-06**
- **Current Status:** **Open / Ongoing**
- **Impact Assessment:**
  - **User Impact:** **Medium** (stability and migrations affect many)
  - **Functional Impact:** **Partial** (can become a release blocker if migrations/data integrity break)
  - **Brand Impact:** **Medium**
- **Technical Classification:**
  - **Category:** Bug / Maintenance
  - **Component Affected:** Persistence/DB schema, migrations, runtime state rebuild
  - **Complexity:** **Complex solution** (scope uncertain)
- **Resource Requirements:**
  - **Required Expertise:** DB migrations, schema versioning, backward compatibility
  - **Dependencies:** Coupled to “in-memory persistence” ideas and any Neon/Postgres guidance
  - **Estimated Effort (1-5):** **4**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Create an inventory of DB debt items (broken migrations, orphan tables, performance hot spots).
  2. Add migration tests (apply → rollback → reapply) in CI.
  3. Publish upgrade notes for users (how to back up + migrate safely).
- **Potential Assignees:** **Odilitime**
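
Step 2's apply → rollback → reapply check is a generic round-trip property. The sketch below runs it against an in-memory schema description; real CI would run the same shape of test against a scratch Postgres database, and the migration interface shown is an assumption.

```typescript
// A migration is an up/down pair over some schema representation S.
interface Migration<S> {
  up: (s: S) => S;
  down: (s: S) => S;
}

function roundTrips<S>(m: Migration<S>, schema: S, eq: (a: S, b: S) => boolean): boolean {
  const applied = m.up(schema);
  const rolledBack = m.down(applied);
  if (!eq(rolledBack, schema)) return false; // down must undo up
  return eq(m.up(rolledBack), applied);      // reapply must be deterministic
}

// Example migration: add a column to a table modeled as a column list.
const addColumn: Migration<string[]> = {
  up: (cols) => [...cols, "created_at"],
  down: (cols) => cols.filter((c) => c !== "created_at"),
};
const same = (a: string[], b: string[]) => JSON.stringify(a) === JSON.stringify(b);
console.log(roundTrips(addColumn, ["id", "name"], same)); // → true
```

Running this property over the whole migration chain in CI catches the "orphan tables after rollback" class of debt before release.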

---

### 8) **Model configuration inconsistencies across agents**
- **Issue Title & ID:** Model configuration differs across agents; confusing/buggy behavior — **DISCORD-MODEL-2026-03-09-01**
- **Current Status:** **Open** (reported; no resolution recorded)
- **Impact Assessment:**
  - **User Impact:** **High** (core workflow; many users run multiple agents)
  - **Functional Impact:** **Partial–Yes** (misconfigured models can effectively break agents)
  - **Brand Impact:** **High** (perceived as “framework is brittle”)
- **Technical Classification:**
  - **Category:** Bug / UX
  - **Component Affected:** Core Framework config system, Model Integration
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Configuration schema design, precedence rules, validation tooling
  - **Dependencies:** Interacts with v2 orchestration/prompt batching and plugin config expectations
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P1**
- **Specific Actionable Next Steps:**
  1. Define and document config precedence (global → agent → plugin → runtime overrides).
  2. Add schema validation + “effective config” debug dump per agent.
  3. Provide a migration path if config keys are being renamed/changed in v2.
- **Potential Assignees:** **Odilitime**, **BinaryCookies** (reporter, can help reproduce)
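
The precedence rule in step 1 reduces to a layered merge where later layers win key by key. The flat-record shape below is an assumption about the config format, but it makes the "effective config" dump from step 2 a one-liner:

```typescript
type Config = Record<string, unknown>;

// Merge layers in precedence order: global → agent → plugin → runtime.
// Later layers override earlier ones; undefined values never clobber.
function effectiveConfig(...layers: Array<Config | undefined>): Config {
  const merged: Config = {};
  for (const layer of layers)
    for (const [key, value] of Object.entries(layer ?? {}))
      if (value !== undefined) merged[key] = value;
  return merged;
}

const globalCfg = { model: "gpt-4o", temperature: 0.7 };
const agentCfg = { model: "llama3" };      // agent overrides the global model
const runtimeCfg = { temperature: 0.2 };   // runtime overrides temperature
console.log(effectiveConfig(globalCfg, agentCfg, undefined, runtimeCfg));
// → { model: "llama3", temperature: 0.2 }
```

Printing this merged object per agent (with the source layer of each key) is the debug dump that makes misconfiguration visible.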

---

### 9) **Voice provider cost pressure (ElevenLabs) + request for Google voice plugin**
- **Issue Title & ID:** Need affordable voice integration alternative (Google voice plugin) — **DISCORD-VOICE-2026-03-09-02**
- **Current Status:** **Open / Feature request**
- **Impact Assessment:**
  - **User Impact:** **Medium** (voice users; cost-sensitive builders)
  - **Functional Impact:** **No** (not blocking core, but blocks adoption for voice-heavy agents)
  - **Brand Impact:** **Medium** (competitiveness vs other frameworks)
- **Technical Classification:**
  - **Category:** Feature Request
  - **Component Affected:** Plugin System / Voice services
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Google TTS/STT APIs, auth, streaming audio, plugin packaging
  - **Dependencies:** Needs clear plugin interface standards + examples
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P3**
- **Specific Actionable Next Steps:**
  1. Confirm target Google service (Cloud TTS? Speech-to-Text? both?) and required features.
  2. Draft plugin skeleton + minimal “speak(text)->audio” demo agent.
  3. Provide cost comparison doc + recommended defaults.
- **Potential Assignees:** Community plugin contributors (unidentified), **BinaryCookies** (requester)
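
Step 2's plugin skeleton mostly needs a stable surface to code against. The interface name and the stub below are illustrative, not a confirmed elizaOS plugin contract; a real implementation would POST to the Google Cloud Text-to-Speech REST API with proper auth.

```typescript
// Minimal "speak(text) -> audio" surface for a voice provider plugin.
interface VoiceProvider {
  name: string;
  speak(text: string): Promise<Uint8Array>; // encoded audio bytes
}

// Stub provider so the demo agent runs without credentials or network access.
const stubGoogleTts: VoiceProvider = {
  name: "google-tts-stub",
  async speak(text: string): Promise<Uint8Array> {
    if (!text.trim()) throw new Error("nothing to speak");
    return new TextEncoder().encode(`FAKE_AUDIO(${text})`);
  },
};

stubGoogleTts.speak("gm").then((audio) => console.log(`${audio.byteLength} bytes of audio`));
```

Keeping the interface provider-agnostic is what makes the cost comparison in step 3 actionable: ElevenLabs and Google become swappable backends behind the same `speak`.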

---

### 10) **Milady repo PRs for GitHub issue #71 need review/merge to unblock users**
- **Issue Title & ID:** Review/merge PRs addressing Milady issue #71 — **milady-ai/milady #71**
- **Current Status:** **PR(s) submitted** (awaiting review/merge; details not captured in logs)
- **Impact Assessment:**
  - **User Impact:** **Medium** (users affected by #71 specifically)
  - **Functional Impact:** **Partial** (depends on #71 scope; likely installation/distribution related)
  - **Brand Impact:** **Medium**
- **Technical Classification:**
  - **Category:** Bug / Maintenance
  - **Component Affected:** Milady distribution/packaging
  - **Complexity:** **Simple fix–Moderate** (review + regression check)
- **Resource Requirements:**
  - **Required Expertise:** Repo maintainership, packaging, CI
  - **Dependencies:** APT distribution work may intersect
  - **Estimated Effort (1-5):** **2**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Triage #71: confirm reproduction + acceptance criteria.
  2. Review PR(s) from **Meme Broker**; run CI + minimal install test on Debian/Ubuntu.
  3. Merge + publish release notes; close #71 with verification steps.
- **Potential Assignees:** **Meme Broker** (author), Milady maintainers (unlisted)

---

### 11) **Operational: $elizaos “holders system” not functioning**
- **Issue Title & ID:** Restore $elizaos holders system functionality — **DISCORD-OPS-2026-03-10-07**
- **Current Status:** **Open** (commitment made; no ETA)
- **Impact Assessment:**
  - **User Impact:** **High** (token holders/community)
  - **Functional Impact:** **Partial** (not core framework, but core ecosystem promise)
  - **Brand Impact:** **High** (trust/credibility under current community tension)
- **Technical Classification:**
  - **Category:** Bug / Ops
  - **Component Affected:** External services / auth / gating / token-holder verification
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Web backend, wallet auth, indexers, infra/ops, security review
  - **Dependencies:** May depend on finalized migration rules and snapshots
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P1** (ecosystem trust is currently fragile)
- **Specific Actionable Next Steps:**
  1. Define what “holders system” includes (benefits, gating, endpoints, UI).
  2. Add uptime + correctness checks (snapshot verification, chain RPC reliability).
  3. Publish a public status page/ETA and a minimal “it works” validation flow.
- **Potential Assignees:** **Odilitime**, (support from **jin** for comms/status reporting)

---

### 12) **Operational: ai16z → elizaos token migration process is manual and unclear**
- **Issue Title & ID:** Finalize migration workflow (wallet DM + snapshot proof is not scalable) — **DISCORD-OPS-2026-03-10-08**
- **Current Status:** **Open / Manual process**
- **Impact Assessment:**
  - **User Impact:** **Medium–High** (affected holders; ongoing support load)
  - **Functional Impact:** **No** (not framework core)
  - **Brand Impact:** **High** (perceived disorganization; increases conflict)
- **Technical Classification:**
  - **Category:** UX / Documentation / Ops
  - **Component Affected:** Token tooling, support processes, verification pipeline
  - **Complexity:** **Moderate effort**
- **Resource Requirements:**
  - **Required Expertise:** Web form + automation, wallet verification, snapshot proof validation
  - **Dependencies:** Must align with policy decisions (eligibility rules, deadlines)
  - **Estimated Effort (1-5):** **3**
- **Recommended Priority:** **P2**
- **Specific Actionable Next Steps:**
  1. Publish official eligibility rules + required proof artifacts.
  2. Replace DMs with a form + automated verification queue.
  3. Provide users a tracking ID and expected turnaround SLA.
- **Potential Assignees:** **Odilitime** (owner), **jin** (automation/communications tooling)
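
Step 3's tracking ID can be derived deterministically from the submission itself, so a user who resubmits the same proof gets the same ID. The field names and ID scheme here are illustrative sketches of what the verification queue might store:

```typescript
import { createHash } from "node:crypto";

// One queue entry per migration submission, replacing the wallet-DM flow.
interface MigrationRequest {
  wallet: string;
  snapshotProof: string; // reference to the snapshot evidence artifact
  submittedAt: string;   // ISO timestamp
}

// Stable, user-quotable ID derived from the submission contents.
function trackingId(req: MigrationRequest): string {
  const digest = createHash("sha256")
    .update(`${req.wallet}:${req.snapshotProof}:${req.submittedAt}`)
    .digest("hex");
  return `MIG-${digest.slice(0, 8).toUpperCase()}`;
}

const req: MigrationRequest = {
  wallet: "ExampleWallet111",
  snapshotProof: "proof-artifact-ref",
  submittedAt: "2026-03-11T00:00:00Z",
};
console.log(trackingId(req)); // deterministic per submission
```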

---

## Highest-Priority Focus (Top 10 to address immediately)
1. **P0 — Develop branch broken** (**DISCORD-DEV-2026-03-10-01**)  
2. **P1 — Model configuration inconsistencies across agents** (**DISCORD-MODEL-2026-03-09-01**)  
3. **P1 — Plugin-Ollama embeddings failing on Linux** (**elizaos-plugins/plugin-ollama #17**)  
4. **P1 — Milady system permissions/capabilities not working** (**DISCORD-MILADY-2026-03-10-02**)  
5. **P1 — Prompt batching: add spec + tests to prevent regressions** (**DISCORD-V2-2026-03-10-04**)  
6. **P1 — Restore $elizaos holders system** (**DISCORD-OPS-2026-03-10-07**)  
7. **P2 — DB cleanup/technical debt (migrations/integrity)** (**DISCORD-DB-2026-03-10-06**)  
8. **P2 — Milady issue #71 PRs review/merge** (**milady-ai/milady #71**)  
9. **P2 — Token migration workflow finalization** (**DISCORD-OPS-2026-03-10-08**)  
10. **P2 — Service lifecycle refactor plan (lazy loading/persistence)** (**DISCORD-V2-2026-03-10-05**)

---

## Patterns / Themes Suggesting Deeper Architectural Problems
- **Config & lifecycle ambiguity:** Repeated friction around model configuration, service initialization, permissions, and DB setup indicates unclear **precedence rules**, weak **validation**, and insufficient **diagnostic tooling**.
- **Release discipline gaps:** “Develop is broken” plus large architectural work (v2 prompt batching/serverless concepts) suggests missing **stabilization gates** (smoke tests, feature flags, compatibility modes).
- **Operational tooling lag:** Manual token migration + broken holders system happening alongside community tension implies operational systems lack **automation, observability, and clear owner/ETA communication**.

---

## Process Improvement Recommendations
1. **Add mandatory “agent boot + plugin load” smoke tests** in CI for develop (Linux at minimum), including a minimal embeddings check for local-model integrations.
2. **Introduce RFCs for architectural subsystems** (prompt batching, service lifecycle, persistence tiers) with explicit backward-compat guarantees and rollout plans.
3. **Implement config validation + “effective configuration” debug output** across agents/plugins to reduce support load and invisible misconfiguration.
4. **Adopt a stabilization policy for develop** (merge freeze + release captain) whenever core refactors are underway.
5. **Operational changes:** replace DM-based workflows (migration/support) with form + queue + tracking IDs; publish a weekly status changelog to reduce repeated escalation in community channels.