## User Feedback Analysis — 2026-02-28 (based on 2026-02-25 to 2026-02-27 Discord + Feb GitHub issues/PR activity)

### Data note (for quantification)
Percentages below are based on **~14 distinct feedback signals** captured in the provided dataset (Discord questions/issues + GitHub issues), so they indicate **directional** frequency, not statistically robust community-wide metrics.

---

## 1) Pain Point Categorization (Top recurring friction areas)

### 1) Technical Functionality — **Version churn & broken plugins out-of-the-box** (~29%, 4/14)
**What users report**
- “Most plugins … are broken out-of-the-box in **1.7.2**” (e.g., `plugin-linear`, `plugin-rolodex`, `plugin-memory`) and require **manual patching** (Julio Holon).
- Multiple runtimes/branches create uncertainty: recommendation to use **`v2-develop`** for “mature 1.x code,” while **v2.0.0 is alpha** (Odilitime).
- Known **bcrypt issue** in v2.0.0 requiring patches (Julio Holon via Discord summary).

**Who it impacts most**
- Teams trying to adopt ElizaOS for real workflows quickly (enterprise automation, internal tools).
- Plugin-dependent users (Linear/GitHub/Google workflows).

---

### 2) Documentation — **“Which version/branch should I use?” + onboarding ambiguity** (~21%, 3/14)
**What users report**
- Repeated need to clarify alpha status and branch guidance (Julio Holon asking if the project is “alpha”; Odilitime explaining breaking changes and recommending `v2-develop`).
- Support interactions that start with “install + read docs” but don’t resolve specific beginner blockers (Jamie asking for help building an agent; Omid Sa pointing to docs/help channel).
- Troubleshooting stalls because no structured intake flow asks for critical environment info up front (Twitter input issue: the first response had to ask “which version/product?”).

**Who it impacts most**
- Newcomers (Jamie).
- Developers coming from other agent frameworks who expect stable “latest” install behavior.

---

### 3) Integration — **Provider/integration gaps & unclear configuration** (~21%, 3/14)
**What users report**
- Twitter input functionality issue (Jamie) and slow triage due to missing version/product details.
- GitHub issue: request for **custom OpenAI endpoint URL** to support OpenAI-compatible providers (e.g., SiliconFlow) (GitHub #6490).
- Open question about “code bot” behavior with **organizational repositories** (Odilitime).

**Who it impacts most**
- Users deploying agents into existing stacks (org GitHub, third-party inference providers, social integrations).

---

### 4) UX/UI — **Duplicated responses / confusing message handling** (~7%, 1/14; high-severity cost impact)
**What users report**
- GitHub bug: sending a URL triggers **duplicate LLM calls** (processed as both text and attachment), resulting in duplicated output and doubled cost (GitHub #6486).

**Who it impacts most**
- Webapp chat users; anyone sharing links (very common usage pattern).

---

### 5) Performance / Reliability — **Hidden cost multipliers & runtime fragility** (~14%, 2/14)
**What users report**
- Duplicate LLM calls doubling token usage (GitHub #6486).
- Release-to-release breaking changes causing “works yesterday, breaks today” plugin behavior (Discord discussion on multiple runtimes and frequent breaking changes).

**Who it impacts most**
- Cost-sensitive builders and production deployments (especially enterprise/financial use cases where predictability matters).

---

### 6) Community / Trust & Safety — **Scams + token confusion + unanswered governance questions** (~21%, 3/14)
**What users report**
- Scam warnings: “ticket links and DMs are scams” (Omid Sa).
- Token naming confusion: “$elizaOS not $eliza” (Odilitime).
- Token migration question after a stated deadline remained **unanswered** (Mario).
- Compliance question about credit dispute automation safeguards remained **unanswered** (Caesar).

**Who it impacts most**
- Newcomers and non-technical users who rely on community guidance.
- Builders operating in regulated environments who need clear safety/compliance positioning.

---

### 7) Legal/Compliance (Integration + Community Risk) — **Regulated automation & IP/legal uncertainty** (~14%, 2/14; very high severity)
**What users report**
- Credit-building plugin: need **FCRA compliance verification** and safeguards to prevent improper disputes (Caesar) — unanswered.
- Hyperscape/RuneScape-related project: fear that rightsholder (Jagex) could shut it down; uncertainty discourages contribution (Error P015-A).

**Who it impacts most**
- Plugin authors and adopters in regulated domains (credit/finance/legal).
- Contributors wary of investing time in legally uncertain projects.

---

## 2) Usage Pattern Analysis (actual vs intended usage)

### Observed real-world usage (emerging “jobs to be done”)
1) **Enterprise workflow automation** (strong signal)
- Converting **Google Meet minutes → Linear issues**
- Monitoring blocked work items autonomously
- Drafting code changes and opening PRs for human review (Julio Holon; Caesar’s guidance)

2) **Regulated process automation**
- Credit building/dispute letters with certified mail via Lob integration (credit-builder plugin).
- The community immediately extends the concept to **traffic ticket/citation dispute automation** (speeding/red-light cameras).

3) **Autonomy as scheduling/ops, not “full agent OS”**
- Users compare it to OpenClaw and ask what “autonomous” actually does.
- Practical pattern: cron-like polling + periodic analysis + human confirmation for high-stakes actions (Caesar recommended hourly polling + HITL).

4) **Cross-platform content ops**
- Automation for posting to Discord/X/Telegram (jin).
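The polling-plus-confirmation pattern in item 3 can be sketched roughly as below. All names (`Finding`, `route`, etc.) are hypothetical illustrations, not ElizaOS or plugin-autonomous APIs; the point is only the routing of high-stakes results to human confirmation.

```typescript
// Sketch of the cron-like polling + human-in-the-loop pattern described above.
// All names are hypothetical stand-ins, not actual ElizaOS APIs.

type Finding = { id: string; summary: string; highStakes: boolean };

// Stand-in analysis step: a real agent would poll Linear/GitHub/etc. here.
function analyze(items: string[]): Finding[] {
  return items.map((s, i) => ({
    id: `finding-${i}`,
    summary: s,
    highStakes: s.includes("dispute"), // e.g. dispute letters need approval
  }));
}

// High-stakes findings are queued for human confirmation instead of auto-acting.
function route(findings: Finding[]): { auto: Finding[]; needsApproval: Finding[] } {
  return {
    auto: findings.filter((f) => !f.highStakes),
    needsApproval: findings.filter((f) => f.highStakes),
  };
}

const { auto, needsApproval } = route(analyze(["blocked PR triage", "send dispute letter"]));
```

Run on an hourly schedule, this keeps routine monitoring autonomous while forcing every high-stakes action through an approval queue, which is the pattern Caesar recommended.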

### Mismatches vs intended/assumed usage
- Many users expect “install → plugins work” stability typical of mature frameworks, but encounter **alpha-level breakage** and runtime fragmentation.
- “Autonomous” is interpreted as OpenClaw-like full OS autonomy, but in ElizaOS it currently spans multiple implementations (plugin-autonomous vs v2 built-in vs milaidy project).

### Feature requests aligned with actual usage
- **Custom OpenAI endpoint URL** (GitHub #6490) aligns with teams standardizing on OpenAI-compatible inference backends.
- **Cron/task configuration via chat** (discussion noting the 1.x tasks system is not chat-accessible; plugin-pim may cover this) aligns with real ops workflows.
- **Workflow integrations** (Google Meet → Linear) align with enterprise adoption path.

---

## 3) Implementation Opportunities (2–3 concrete solutions per major pain point)

### A) Broken plugins / version churn (High impact, Medium–High difficulty)
1) **Publish a “Compatibility Matrix” + CI plugin smoke tests per runtime**
- Deliverable: a table mapping **runtime (1.7.2 / v2-develop / v2.0.0)** → known-good plugin versions.
- Add CI: for top plugins (`linear`, `github`, `memory`, `twitter`) run a minimal “create agent → tool call → response” test.
- Similar approach: Kubernetes “version skew policy”; Home Assistant “breaking changes + integration tests”.

2) **Introduce a plugin API stability layer (semver + deprecation windows)**
- Define a small stable interface (e.g., tool/action registration, memory, message schema).
- Enforce: breaking changes require migration notes + codemods.
- Similar approach: VS Code extension API stability; Terraform provider versioning.

3) **Ship “blessed bundles”**
- Example: `elizaos stack enterprise-v1` pins runtime + vetted plugin set for Linear/GitHub/Google.
- Lowers adoption friction for the exact workflow Julio described.
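A minimal “create agent → tool call → response” smoke test for item 1 could look like the sketch below. The `loadPlugin`/`smokeTest` helpers are hypothetical stand-ins, not the actual ElizaOS test API; in CI each would run against a pinned runtime version from the compatibility matrix.

```typescript
// Hypothetical smoke-test harness: verifies a plugin loads and answers one call.
// None of these helpers are real ElizaOS APIs; they stand in for the idea.

interface Plugin {
  name: string;
  handle(input: string): string;
}

function loadPlugin(name: string): Plugin {
  // Stand-in for dynamic plugin loading against a pinned runtime version.
  return { name, handle: (input) => `${name}: ok (${input})` };
}

function smokeTest(pluginName: string): boolean {
  const plugin = loadPlugin(pluginName);
  const reply = plugin.handle("ping");
  // Pass if the plugin loaded and produced a non-empty response.
  return typeof reply === "string" && reply.length > 0;
}

const results = ["linear", "github", "memory", "twitter"].map(smokeTest);
```

Even a test this shallow would have caught the “broken out-of-the-box in 1.7.2” cases before release, since those plugins fail at load or first invocation.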

**Priority**: #1 (fastest trust gain), then #3, then #2 (longer-term).

---

### B) Version/branch confusion + onboarding (High impact, Low–Medium difficulty)
1) **Single “Start Here” decision tree**
- “If you need stability → use v2-develop; if you’re testing multi-language core → v2.0.0; if using 1.7.2 → known plugin caveats.”
- Make it the first page in docs and the first output of `elizaos create`.

2) **Structured bug report prompts in Discord help flow**
- A bot/form that asks: runtime version, plugin version, provider, OS, logs snippet.
- Prevents stalls like the Twitter issue (“which version/product?”).

3) **Add “Known Issues” page linked from CLI**
- Include: v2.0.0 bcrypt patch status, common plugin breakages, URL duplication bug (#6486) until fixed.
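The structured intake idea in item 2 amounts to rejecting a help request until it carries the fields a triager needs. The sketch below uses illustrative field names, not an actual Discord form schema.

```typescript
// Sketch of structured bug-report intake: bounce a report that lacks the
// environment details triagers need. Field names are illustrative only.

const REQUIRED = ["runtimeVersion", "pluginVersion", "provider", "os", "logs"] as const;

function missingFields(report: Record<string, string>): string[] {
  return REQUIRED.filter((f) => !report[f]?.trim());
}

// "Twitter input doesn't work" with no version info gets bounced back
// with a concrete list of what to add:
const gaps = missingFields({ provider: "twitter", logs: "..." });
```

A bot that replies with `gaps` instead of a generic “which version/product?” turns the first exchange into a complete report.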

**Priority**: #1 and #3 immediately.

---

### C) Integration gaps (High impact, Medium difficulty)
1) **Add configurable OpenAI base URL**
- Implement `OPENAI_BASE_URL` / provider config field (GitHub #6490).
- Similar approach: LangChain/OpenAI SDKs often accept `baseURL`.

2) **Twitter plugin troubleshooting checklist + versioned docs**
- Given recurring Twitter issues, add a pinned doc: auth flow, rate limits, required env vars, supported versions.

3) **Org-repo “code bot” behavior documented + toggle**
- Clarify whether it follows org accounts; add an explicit setting (e.g., `FOLLOW_ORG_REPOS=false`).
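Resolving a configurable base URL for item 1 might look like the sketch below. `OPENAI_BASE_URL` is the variable name proposed in the #6490 discussion, and the fallback matches the public OpenAI API host; the resolver function itself is hypothetical.

```typescript
// Sketch: resolve an OpenAI-compatible endpoint from env config (see #6490).
// The resolver is hypothetical; OPENAI_BASE_URL is the proposed variable name.

function resolveBaseUrl(env: Record<string, string | undefined>): string {
  const url = env["OPENAI_BASE_URL"]?.trim();
  if (!url) return "https://api.openai.com/v1"; // default public endpoint
  // Strip trailing slashes so path joins stay predictable.
  return url.replace(/\/+$/, "");
}

const siliconFlow = resolveBaseUrl({ OPENAI_BASE_URL: "https://api.siliconflow.cn/v1/" });
const fallback = resolveBaseUrl({});
```

Because OpenAI-compatible providers only differ in host, this one config field unblocks SiliconFlow-style backends without touching the rest of the provider code.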

---

### D) Duplicate URL processing (High impact, Medium difficulty; clear ROI)
1) **Deduplicate message parts before LLM invocation**
- Decide: treat a URL as either text or attachment, not both, as the issue describes (#6486).
2) **Add regression test: “message with URL triggers single completion”**
- Prevents token-cost regressions.
3) **Expose per-message tool/LLM call counters in UI**
- Helps users detect cost anomalies quickly.

Similar approach: Slack/Discord bridge bots commonly normalize attachments/embeds into a single canonical message payload.
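The dedup fix can be sketched as normalizing a message into a single canonical payload before any LLM call. The `Part` shape below is hypothetical, not the actual ElizaOS message schema.

```typescript
// Sketch of pre-LLM dedup for #6486: a URL that arrives both as text and as an
// attachment should yield exactly one completion request. Types are hypothetical.

type Part = { kind: "text" | "attachment"; value: string };

function canonicalize(parts: Part[]): Part[] {
  const seen = new Set<string>();
  const out: Part[] = [];
  for (const p of parts) {
    if (seen.has(p.value)) continue; // drop the duplicate representation
    seen.add(p.value);
    out.push(p);
  }
  return out;
}

// A URL sent in chat currently arrives as both a text part and an attachment part.
const deduped = canonicalize([
  { kind: "text", value: "https://example.com" },
  { kind: "attachment", value: "https://example.com" },
]);
```

The same `canonicalize` step doubles as the regression test in item 2: assert that a message containing one URL produces exactly one part, hence one completion.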

---

### E) Compliance/safety for regulated automations (Very high impact, Medium–High difficulty)
1) **Add “Compliance Gate” middleware for high-risk actions**
- Before sending dispute letters: require checks + logging + optional human approval.
- Provide a default “human-in-the-loop required” mode for credit/legal plugins.

2) **Safety checklist + disclaimers template for plugin-form candidacy**
- If “plugin-form candidate” is a quality standard, make a “regulated automation” checklist part of it (FCRA/ECOA/FDCPA notes, audit logging, rate limiting, user attestations).

3) **Audit trail / provenance logging**
- Store: prompt, evidence inputs, rule checks, user approvals, final letter content hash.
- Similar approach: fintech automation tools and RPA platforms (UiPath) emphasize audit logs + approvals.
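Items 1 and 3 combine naturally: a high-risk action only proceeds after its checks pass (with human approval in HITL mode), and every attempt leaves an audit record keyed by a content hash. All names below are hypothetical, not an existing middleware API.

```typescript
// Sketch of a "compliance gate": a high-risk action proceeds only with recorded
// human approval, and its payload is hashed for the audit trail. All names are
// hypothetical stand-ins, not an existing ElizaOS middleware API.
import { createHash } from "node:crypto";

type Action = { type: string; payload: string; approved: boolean };

// Action types that default to human-in-the-loop mode.
const HIGH_RISK = new Set(["send_dispute_letter"]);

function gate(action: Action): { allowed: boolean; auditHash: string } {
  const needsApproval = HIGH_RISK.has(action.type);
  const allowed = !needsApproval || action.approved;
  // Provenance: hash the payload so the exact letter content can be verified later.
  const auditHash = createHash("sha256").update(action.payload).digest("hex");
  return { allowed, auditHash };
}

const blocked = gate({ type: "send_dispute_letter", payload: "letter v1", approved: false });
const passed = gate({ type: "send_dispute_letter", payload: "letter v1", approved: true });
```

The hash-keyed audit record is what lets a later FCRA review reconstruct exactly which letter content was approved and sent.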

---

### F) Trust/safety + token expectation management (Medium impact, Low difficulty)
1) **Pinned “Official Links & Token Info”**
- Include: $elizaOS naming, migration rules/status, and “team will never DM you first”.
2) **Auto-moderation for scam patterns**
- Block common “ticket link” domains; warn on new-account DMs.
3) **Answer-bank for recurring token questions**
- Reduce repeated confusion and rumor propagation.

---

## 4) Communication Gaps (expectations vs reality)

### Where expectations don’t match reality
- **“Latest release is stable”** expectation vs reality: frequent breaking changes, alpha v2.0.0, multiple runtimes.
- **“Autonomy” meaning**: users expect a single canonical autonomy system, but there are **three** (plugin-autonomous, v2 built-in, milaidy project).
- **Plugin maintenance expectation**: users ask if plugins are maintained because core upgrades break them.

### Recurring questions indicating doc/onboarding gaps
- “Which version/branch should I use?” (repeated in discussion).
- “What does plugin-autonomous actually do?” and “Is it true autonomy?”
- “How do I build an agent? I’m new.”
- “Is token migration still possible after the deadline?” (unanswered in dataset).

### Specific improvements
- Put a **version chooser** and **autonomy explainer** in docs and CLI output.
- Add a **“Production readiness” rubric** per runtime and per plugin (e.g., Stable / Beta / Alpha, plus last-tested commit).

---

## 5) Community Engagement Insights

### Power users / high-leverage contributors (and their needs)
- **Odilitime**: repeatedly clarifies versioning, branch strategy, plugin creep concerns (PR #6531); needs tooling to enforce architecture boundaries and plugin policies.
- **Caesar**: production perspective, enterprise workflow guidance, and compliance framing; needs clearer “approved patterns” (HITL, persistent storage, embeddings).
- **Julio Holon**: representative enterprise adopter; needs stable plugin stack and clear migration story.
- **Meme Broker**: building advanced regulated-industry plugin; needs compliance framework + review pathway for “plugin-form candidacy”.
- **standujar (GitHub)**: workflow automation stability work (n8n plugin); benefits from consistent integration test harnesses.

### Newcomer friction signals
- Jamie’s “help me build an agent” request and the Twitter issue suggest users don’t know:
  - which channel to use,
  - what info to include,
  - what’s supported in their version.

### Converting passive users into contributors
- Create “Good First Fix” issues specifically for:
  - plugin smoke tests,
  - docs decision tree,
  - URL dedup regression test (#6486),
  - OpenAI base URL config (#6490).
- Pair power users with newcomers via short “office hours” focused on one integration (Linear/GitHub/Twitter).

---

## 6) Feedback Collection Improvements

### Current channel effectiveness
- **Discord** is good for rapid context and brainstorming, but key questions remain unanswered (compliance verification; token migration).
- **GitHub issues** capture actionable bugs well (URL duplication; OpenAI endpoint), but are not being systematically linked back into Discord threads.

### Improvements for more structured, actionable feedback
1) **Weekly “Top 5 unresolved questions” triage**
- Pull from Discord unanswered questions and open issues; publish responses or “status: investigating”.

2) **Standardized issue templates for integrations**
- Require: runtime version, plugin version, provider, reproduction steps, logs.
- Would have prevented the Twitter issue from stalling at “which version/product?”

3) **Opt-in telemetry for cost/reliability**
- Track: duplicate LLM call detection, plugin load failures, action invocation errors (privacy-preserving).
- Helps quantify problems like #6486 beyond anecdote.

### Underrepresented segments
- Non-Discord enterprise teams (who may avoid public chats).
- Compliance/legal professionals (important given credit/ticket automation interest).
- Non-crypto builders who need clarity separate from token discussions.

---

## Prioritized High-Impact Actions (next 2–4 weeks)
1) **Ship a runtime/plugin compatibility matrix + CI smoke tests for top plugins** (stops “broken out-of-box” adoption failures).
2) **Fix and regression-test the “URL triggers duplicate LLM calls” bug (GitHub #6486)** (direct cost + UX win).
3) **Publish a single “Which version should I use?” decision tree (docs + CLI output)** (reduces repetitive confusion and support load).
4) **Add configurable OpenAI base URL to the OpenAI provider (GitHub #6490)** (unblocks many OpenAI-compatible inference backends).
5) **Create a “regulated automation safety/compliance gate” guideline + checklist for plugin-form candidacy** (enables credit/ticket automation without reputational risk).