# Council Briefing: 2025-02-02

## Monthly Goal

February 2025: Execution excellence. Complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize the flagship agents, and build developer trust through reliability and clear documentation.

## Daily Focus

- The fleet advanced plugin modularity and code quality (new market-data plugins plus Biome standardization), but urgent reliability threats persist in core setup and connection paths (the missing provider-utils export, Fetch regressions, Supabase and post-deploy connectivity) and could erode developer trust if they are not triaged as a priority block.

## Key Points for Deliberation

### 1. Topic: Reliability Frontline: Setup, Adapters, and Runtime Connectivity

**Summary of Topic:** Despite strong shipping velocity, recurring installation and runtime connection failures (Node version pinning, Supabase adapter friction, post-deploy connectivity) remain the primary trust risk for builders—directly conflicting with the Execution Excellence principle.

#### Deliberation Items (Questions):

**Question 1:** Do we enforce a stricter “supported runtime” policy (Node version + OS matrix) to reduce support entropy, even if it narrows immediate accessibility?

  **Context:**
  - `Discord 💻-coders: "Node.js 23.3.0 is specifically recommended for ElizaOS installation." (answered by infinityu1729)`
  - `Discord discussion: "Users frequently encounter issues with the latest v0.1.9 release, particularly on Windows/WSL systems."`

  **Multiple Choice Answers:**
    a) Yes—hard-support one runtime (e.g., Node 23.3.0) with automated checks and explicit refusal/warnings outside the matrix.
        *Implication:* Reduces debugging surface area and increases perceived reliability, but may slow adoption in conservative environments (see the sketch after this list).
    b) Partially—support an LTS baseline plus the recommended version, with best-effort support elsewhere.
        *Implication:* Balances accessibility with stability, but still leaves edge-case support load and inconsistent community outcomes.
    c) No—keep broad compatibility as the priority and absorb the support cost via docs and community troubleshooting.
        *Implication:* Maximizes reach, but risks ongoing “it doesn’t work” narratives that undermine developer trust.
    d) Other / More discussion needed / None of the above.
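
A minimal sketch of the automated check in option (a), assuming a hypothetical `scripts/check-runtime.ts` wired into a preinstall or prestart hook; the version prefixes and messages are illustrative, not a committed support matrix:

```typescript
// Hypothetical runtime guard: refuse (or warn) when the Node version is
// outside the supported matrix. The prefixes below are placeholders for the
// versions the council actually decides to support.
import os from "node:os";
import process from "node:process";

const HARD_SUPPORTED = ["23.3."];        // recommended runtime (option a)
const BEST_EFFORT = ["22.", "20."];      // LTS baseline, best-effort (option b)

function checkRuntime(): void {
  const version = process.versions.node; // e.g. "23.3.0"
  if (HARD_SUPPORTED.some((p) => version.startsWith(p))) return;

  if (BEST_EFFORT.some((p) => version.startsWith(p))) {
    console.warn(
      `[elizaos] Node ${version} on ${os.platform()} is supported best-effort; ` +
        `Node ${HARD_SUPPORTED[0]}x is the recommended runtime.`
    );
    return;
  }

  console.error(
    `[elizaos] Node ${version} is outside the supported matrix. ` +
      `Install Node ${HARD_SUPPORTED[0]}x before continuing.`
  );
  process.exit(1);                       // explicit refusal outside the matrix
}

checkRuntime();
```

Option (b) corresponds to keeping the warning branch and dropping the hard exit; option (c) corresponds to shipping no guard at all.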

**Question 2:** What is the correct strategic default for persistence: optimize for “it just works locally” (SQLite/PGlite) or “production-first” (Postgres/Supabase)?

  **Context:**
  - `GitHub issues (2025-02-02): "Users are facing setup challenges with the Supabase Adapter." (elizaos/eliza#3160)`
  - `Discord 💻-coders: "Discussions about Supabase vs. SQLite for database integration."`

  **Multiple Choice Answers:**
    a) Local-first default (SQLite/PGlite) with a clear migration path to Postgres/Supabase.
        *Implication:* Improves onboarding success rate and demo velocity, but risks a gap between local and production behavior.
    b) Production-first default (Postgres/Supabase) to align dev experience with deployment reality.
        *Implication:* Reduces production surprises, but increases initial setup failures and support burden.
    c) Dual-track: CLI prompts users to pick a profile (Local, Cloud, Enterprise) with pre-validated templates.
        *Implication:* Increases DX clarity and reduces misconfiguration, but requires investment in a “doctor” + templates ecosystem (see the sketch after this list).
    d) Other / More discussion needed / None of the above.
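
For option (c), a minimal sketch of how a guided setup could map a chosen profile to a pre-validated database template; the profile names, adapter identifiers, and environment variable names are assumptions for illustration, not the current adapter API:

```typescript
// Hypothetical profile -> database template mapping for a guided setup flow.
type Profile = "local" | "cloud" | "enterprise";

interface DatabaseTemplate {
  adapter: "sqlite" | "pglite" | "postgres" | "supabase";
  env: Record<string, string>;   // variables a setup "doctor" would validate
  notes: string;                 // shown to the user while configuring
}

// Illustrative defaults; real templates would live next to the deploy docs.
const TEMPLATES: Record<Profile, DatabaseTemplate> = {
  local: {
    adapter: "sqlite",
    env: { SQLITE_FILE: "./data/db.sqlite" },
    notes: "Zero-config local persistence; migrate to Postgres before deploying.",
  },
  cloud: {
    adapter: "supabase",
    env: { SUPABASE_URL: "", SUPABASE_ANON_KEY: "" },
    notes: "Requires a Supabase project; connectivity is checked before first run.",
  },
  enterprise: {
    adapter: "postgres",
    env: { POSTGRES_URL: "" },
    notes: "Bring-your-own Postgres; SSL and pooling settings validated at setup.",
  },
};

export function resolveTemplate(profile: Profile): DatabaseTemplate {
  return TEMPLATES[profile];
}
```

Option (a) is effectively the `local` template as the silent default; option (b) makes `cloud` or `enterprise` the default and accepts the heavier first-run checks.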

**Question 3:** Do we declare “blocking reliability incidents” that freeze feature merges until resolved (e.g., Fetch regressions, missing exports), or keep parallel shipping?

  **Context:**
  - `Needs Attention (2025-02-02): "@ai-sdk/provider-utils is not providing an export named 'delay'" (elizaos/eliza#3159)`
  - `Needs Attention (2025-02-02): "The Fetch method is exhibiting strange behavior again" (elizaos/eliza#3154)`
  - `Needs Attention (2025-02-02): "connection problems after going live" (elizaos/eliza#3162)`

  **Multiple Choice Answers:**
    a) Freeze: institute a reliability gate—no new features until P0 setup/connectivity issues are cleared.
        *Implication:* Maximizes trust-through-shipping and reduces churn, but slows visible roadmap progress (see the sketch after this list).
    b) Parallel lanes: allow feature work, but require a dedicated strike team for P0 incidents with SLAs.
        *Implication:* Preserves momentum while protecting stability, but needs strong coordination and enforcement.
    c) Keep shipping: rely on rapid patch cadence and community triage without formal gates.
        *Implication:* Maintains velocity, but amplifies reputational risk if onboarding remains fragile.
    d) Other / More discussion needed / None of the above.
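
If option (a) or (b) is adopted, the gate can be mechanical rather than procedural. A minimal sketch, assuming open blocking incidents carry a `P0` label in `elizaos/eliza` (the label name is an assumption) and a CI step fails while any remain open:

```typescript
// Hypothetical CI step: block feature merges while any P0 issue is open.
const REPO = "elizaos/eliza";
const LABEL = "P0"; // assumed label for blocking reliability incidents

async function reliabilityGate(): Promise<void> {
  const res = await fetch(
    `https://api.github.com/repos/${REPO}/issues?labels=${LABEL}&state=open`,
    { headers: { Accept: "application/vnd.github+json" } }
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);

  // The issues endpoint also returns pull requests; keep only real issues.
  const open = ((await res.json()) as Array<{
    number: number;
    title: string;
    pull_request?: unknown;
  }>).filter((item) => !("pull_request" in item));

  if (open.length > 0) {
    console.error(`Reliability gate: ${open.length} open ${LABEL} issue(s):`);
    for (const issue of open) console.error(`  #${issue.number} ${issue.title}`);
    process.exit(1); // freeze feature merges until the list is empty
  }
  console.log("Reliability gate: no open P0 issues; merge allowed.");
}

reliabilityGate();
```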

---


### 2. Topic: Composable Expansion: Plugin Growth with Quality Controls

**Summary of Topic:** The project is rapidly expanding its modular capabilities (CoinMarketCap/CoinGecko plugins, Solana/Twitter/Ton improvements) and investing in linting and testing (Biome plus coverage), signaling maturing engineering discipline, but this growth raises governance questions around plugin sprawl, compatibility, and CI cost.

#### Deliberation Items (Questions):

**Question 1:** Should we require baseline test coverage and formatting (Biome) for all plugin PRs before merge to protect reliability?

  **Context:**
  - `GitHub (2025-02-02): "Added the CoinMarketCap plugin with comprehensive test coverage." (PR #3134)`
  - `GitHub (2025-02-02): "Implemented test configuration and coverage for the CoinGecko plugin." (PR #3124)`
  - `GitHub (2025-02-01): "Resolved multiple issues across various plugins, including Biome linting and formatting." (PR #3181)`

  **Multiple Choice Answers:**
    a) Yes—enforce mandatory minimal coverage + Biome formatting for all plugins (with CI failing otherwise).
        *Implication:* Improves reliability and maintainability, but may reduce community contribution velocity (see the sketch after this list).
    b) Phase-in: require for core/flagship plugins now, and progressively enforce for long-tail plugins.
        *Implication:* Balances quality and community throughput, but may create a two-tier ecosystem.
    c) No—keep requirements light; focus on documentation and examples, and accept uneven plugin quality.
        *Implication:* Maximizes experimentation, but risks the ecosystem becoming noisy and unreliable for builders.
    d) Other / More discussion needed / None of the above.
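
For options (a) or (b), a minimal sketch of a per-plugin coverage gate, assuming Vitest 1.x with the v8 coverage provider; the threshold numbers are illustrative, not an agreed baseline. The same CI job would run `biome ci .` for formatting and linting:

```typescript
// Hypothetical vitest.config.ts for a plugin package. CI runs
// `vitest run --coverage` and fails the PR when coverage drops below
// these thresholds, alongside a `biome ci .` formatting/lint step.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      reporter: ["text", "lcov"],
      thresholds: {
        lines: 60,       // illustrative baseline only
        functions: 60,
        branches: 50,
        statements: 60,
      },
    },
  },
});
```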

**Question 2:** Where should the long-term boundary sit: a lean core with independently maintained plugin repos, or a monorepo-first model for tighter integration?

  **Context:**
  - `GitHub completed items (Feb 2025): "Delete all plugins... moved to https://github.com/elizaos-plugins and independently maintained." (PR #3342)`
  - `Discord (2025-02-01): "Plugin Problems... Solana and Twitter plugins... hanging during startup with the pyth-data plugin."`

  **Multiple Choice Answers:**
    a) Lean core + external plugin org as default; core only guarantees stable interfaces and a curated registry.
        *Implication:* Scales ecosystem breadth while keeping core stable, but needs strong versioning and compatibility tooling (see the sketch after this list).
    b) Hybrid: keep a “core plugins” set in-repo and push experimental/long-tail plugins to external repos.
        *Implication:* Maintains high-quality primitives while enabling experimentation, but adds governance overhead.
    c) Monorepo-first: keep most plugins in one repo to ensure synchronized releases and consistent CI.
        *Implication:* Reduces compatibility drift, but increases repo weight and slows independent iteration.
    d) Other / More discussion needed / None of the above.
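
Option (a) only works if the core publishes a contract that external plugin repos can test against. A minimal sketch of what that could look like; the type names and fields are illustrative and are not the existing plugin types in the codebase:

```typescript
// Illustrative contract the core could guarantee to externally maintained plugins.
// Compatibility tooling checks the plugin's declared core range against the
// installed core version before the plugin is listed or loaded.
import semver from "semver";

export interface PluginManifest {
  name: string;       // e.g. "@elizaos-plugins/plugin-coingecko" (hypothetical)
  version: string;    // semver of the plugin itself
  coreRange: string;  // semver range of core versions the plugin supports
}

export interface AgentPlugin {
  manifest: PluginManifest;
  // Lifecycle hooks the core promises not to break within a major version.
  init(runtime: unknown): Promise<void>;
  shutdown?(): Promise<void>;
}

export function isCompatible(plugin: AgentPlugin, coreVersion: string): boolean {
  return semver.satisfies(coreVersion, plugin.manifest.coreRange);
}
```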

**Question 3:** How do we prevent plugin sprawl from degrading DX: curated compatibility matrix, marketplace gating, or laissez-faire registry?

  **Context:**
  - `Discord 🥇-partners: "The development team is prioritizing building core infrastructure, including an agent marketplace/launchpad."`
  - `Discord discussion: "Create a directory/catalog of all apps built using ElizaOS." (requested by zircatpop and Seraph)`

  **Multiple Choice Answers:**
    a) Curated matrix: label plugins by support tier (Core/Verified/Community) with CI compatibility tests per tier.
        *Implication:* Improves trust and discoverability, but requires ongoing review capacity (see the sketch after this list).
    b) Marketplace gating: only marketplace-listed plugins must meet standards; registry remains open.
        *Implication:* Creates a quality funnel without blocking experimentation, but may fragment user expectations.
    c) Open registry: no formal tiers; rely on community ratings and usage signals.
        *Implication:* Minimizes governance overhead, but risks new users repeatedly encountering broken integrations.
    d) Other / More discussion needed / None of the above.
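
For option (a), a minimal sketch of a registry entry and the per-tier checks it could carry; the tier names, fields, and check identifiers are assumptions for illustration:

```typescript
// Hypothetical entry in a curated plugin registry with support tiers.
type SupportTier = "core" | "verified" | "community";

interface RegistryEntry {
  name: string;                 // npm package or repository name
  repo: string;                 // source repository URL
  tier: SupportTier;
  coreVersionsTested: string[]; // core versions the CI matrix passed against
  lastVerified: string;         // ISO date of the last successful tier check
}

// Checks a plugin must pass to hold (or keep) a tier; illustrative only.
const TIER_REQUIREMENTS: Record<SupportTier, string[]> = {
  core:      ["biome-ci", "unit-coverage", "integration-matrix", "owner-on-call"],
  verified:  ["biome-ci", "unit-coverage", "integration-matrix"],
  community: ["builds-against-latest-core"],
};

export function requirementsFor(entry: RegistryEntry): string[] {
  return TIER_REQUIREMENTS[entry.tier];
}
```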

---


### 3. Topic: Taming Information: Canonical Updates, Working Pipelines, and Public Presence

**Summary of Topic:** Multiple signals indicate that information flow and public surfaces are brittle (website 404, news JSON pipeline failures), while internal efforts (Discord summarization, FAQ books) are strong; the strategic gap is converting these into stable, canonical, easy-to-find channels that reinforce developer trust.

#### Deliberation Items (Questions):

**Question 1:** Which channel becomes the single canonical source of truth for releases, status, and docs: a dedicated elizaos.ai/news site, GitHub Pages, or Discord-native announcements?

  **Context:**
  - `Discord (2025-02-01): "Multiple users reported the elizas.com website being down with 404 errors."`
  - `Discord 🥇-partners: "Create a proper site for news/updates (mirror, elizaos.ai, or GitHub pages)." (mentioned by jin)`

  **Multiple Choice Answers:**
    a) Dedicated site (elizaos.ai) as canonical, with mirrored syndication to GitHub/Discord/X.
        *Implication:* Creates a stable external face and improves trust, but requires web ops ownership and uptime guarantees.
    b) GitHub Pages + repo release notes as canonical, with Discord/X as distribution only.
        *Implication:* Low operational burden and high reliability, but may feel less polished to non-GitHub-native builders.
    c) Discord-first canonical updates, augmented by bots that backfill to GitHub/feeds.
        *Implication:* Meets the community where they are, but risks continued fragmentation and discoverability issues.
    d) Other / More discussion needed / None of the above.

**Question 2:** Do we treat the AI news/summary pipeline as a production service with SLAs, or as an experimental internal tool until Cloud and core are fully stabilized?

  **Context:**
  - `Discord 3d-ai-tv: "data writes to SQLite but fails to write to JSON" (mentioned by jin)`
  - `Discord 3d-ai-tv: Outdated endpoint noted and new URL shared: "https://madjin.github.io/ai-news/json/daily.json"`

  **Multiple Choice Answers:**
    a) Productionize now: define ownership, monitoring, and a fixed schema; treat failures as P1 incidents.
        *Implication:* Improves information coherence and trust, but competes for engineering bandwidth with core stability work (see the sketch after this list).
    b) Semi-production: weekly cadence with best-effort uptime and clear “experimental” labeling.
        *Implication:* Provides value while limiting expectations, but may still confuse users if it intermittently breaks.
    c) Keep experimental: pause public dependency until core/Cloud milestones are met.
        *Implication:* Protects focus on core reliability, but delays progress on the “Taming Information” strategic pillar.
    d) Other / More discussion needed / None of the above.
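
Option (a) calls for a fixed schema, and the reported failure mode ("writes to SQLite but fails to write to JSON") suggests the export step also needs an atomic write. A minimal sketch, assuming the store is readable with `better-sqlite3` and a hypothetical `news` table; table and column names are assumptions:

```typescript
// Hypothetical export step for daily.json: read today's rows from SQLite,
// validate them against a fixed schema, then write atomically so a partial
// write never replaces the last good file.
import Database from "better-sqlite3";
import { writeFileSync, renameSync } from "node:fs";

interface NewsItem {
  date: string;   // ISO date the item was collected
  source: string; // e.g. "discord", "github"
  title: string;
  url: string;
}

export function exportDaily(dbPath: string, outPath: string): void {
  const db = new Database(dbPath, { readonly: true });
  const rows = db
    .prepare("SELECT date, source, title, url FROM news WHERE date = date('now')")
    .all() as NewsItem[];

  // Fail loudly on schema drift instead of silently publishing a broken feed.
  for (const row of rows) {
    if (!row.title || !row.url) {
      throw new Error(`Invalid row: ${JSON.stringify(row)}`);
    }
  }

  const payload = { generatedAt: new Date().toISOString(), items: rows };
  const tmp = `${outPath}.tmp`;
  writeFileSync(tmp, JSON.stringify(payload, null, 2));
  renameSync(tmp, outPath); // atomic swap keeps the previous file on failure
}
```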

**Question 3:** How should we operationalize “documentation as first-class”: embed answers in the product (CLI doctor + guided setup) or keep improving static docs and community support loops?

  **Context:**
  - `Discord (2025-02-01): "Jin analyzed 2 months of discord chat and summarized into a documentation book" (https://hackmd.io/@xr/elizaos-rpgf)`
  - `GitHub issues list (2025-02-01): multiple setup failures (e.g., "Initial setup not working" #1666, "Problems after running 'pnpm start'" #3151)`

  **Multiple Choice Answers:**
    a) Product-embedded guidance: prioritize a CLI “doctor” and guided setup flows that auto-detect misconfigurations.
        *Implication:* Directly reduces support load and onboarding failures, but requires sustained investment in DX tooling (see the sketch after this list).
    b) Docs-first: centralize and polish docs (quickstart, platform deploy guides, DB guides) with aggressive updates.
        *Implication:* Fast to execute and scalable, but still depends on users reading and interpreting correctly.
    c) Community-first: formalize support squads and templates; treat docs/tooling as secondary.
        *Implication:* Leverages community energy, but risks uneven quality and slower resolution for critical setup blockers.
    d) Other / More discussion needed / None of the above.
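
For option (a), a minimal sketch of the kind of checks a hypothetical `elizaos doctor` command could run; the command name, check list, and environment variable names are assumptions, not an existing CLI surface:

```typescript
// Hypothetical "doctor" command: run a fixed list of checks and print a report.
import process from "node:process";

interface CheckResult {
  name: string;
  ok: boolean;
  hint?: string; // what to do when the check fails
}

async function runChecks(): Promise<CheckResult[]> {
  const results: CheckResult[] = [];

  // 1. Runtime version (ties back to the runtime policy question in Topic 1).
  results.push({
    name: "node-version",
    ok: process.versions.node.startsWith("23.3."),
    hint: "Install the recommended Node version from the quickstart docs.",
  });

  // 2. Database configuration for the chosen profile.
  const dbConfigured = Boolean(process.env.POSTGRES_URL ?? process.env.SUPABASE_URL);
  results.push({
    name: "database-config",
    ok: dbConfigured,
    hint: "Set POSTGRES_URL or SUPABASE_URL, or pick the local profile.",
  });

  // 3. Outbound connectivity (catches many "problems after going live" reports).
  try {
    const res = await fetch("https://api.github.com", { method: "HEAD" });
    results.push({ name: "network", ok: res.ok });
  } catch {
    results.push({ name: "network", ok: false, hint: "Check proxy and firewall settings." });
  }

  return results;
}

runChecks().then((results) => {
  for (const r of results) {
    const hint = r.ok || !r.hint ? "" : `  -> ${r.hint}`;
    console.log(`${r.ok ? "PASS" : "FAIL"}  ${r.name}${hint}`);
  }
  if (results.some((r) => !r.ok)) process.exit(1);
});
```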