# Council Briefing: 2025-04-16

## Monthly Goal

December 2025: Execution excellence. Complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize the flagship agents, and build developer trust through reliability and clear documentation.

## Daily Focus

- The fleet advanced toward “trust through shipping” by hardening first-run UX (onboarding + GUI validation) and correcting core runtime semantics (entities vs agents), while community signals warn that V2 plugin migration and launch comms remain the primary trust bottlenecks.

## Key Points for Deliberation

### 1. Topic: V2 Transition Stability & Plugin Continuity

**Summary of Topic:** Operational chatter indicates V2 adoption is gated less by capability than by migration friction: V1 plugins do not yet function in V2, leading to broken setups (OpenAI plugin, env var recognition, provider selection) and developer confusion around version naming and “what works where.”

#### Deliberation Items (Questions):

**Question 1:** Do we freeze V2 “Gold” until critical plugin parity (OpenAI + core clients + DB adapters) is achieved, or ship with explicit “plugin-gap” guardrails?

  **Context:**
  - `💻-coders: “V1 plugins don't work with v2 yet.” (Odilitime)`
  - `💻-coders: “The plugins are yet to be migrated to V2, which will happen when V2 is released.” (Kenk)`

  **Multiple Choice Answers:**
    a) Hold V2 Gold until a minimum parity pack is migrated (OpenAI, Discord/Telegram, Postgres/PGlite).
        *Implication:* Reduces immediate churn and support load, but delays momentum and partner timelines.
    b) Ship V2 Gold on schedule with a clearly labeled “Core-Only” mode and a compatibility matrix (a sketch of such a matrix follows this list).
        *Implication:* Preserves shipping cadence while containing trust damage via explicit expectations.
    c) Ship V2 as “Beta-Extended” and redirect new builders to the most stable V1 track temporarily.
        *Implication:* Minimizes breakage for newcomers but risks fragmenting the ecosystem and slowing V2 plugin authoring.
    d) Other / More discussion needed / None of the above.
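
The compatibility matrix from option (b) can be a single typed table that both docs.eliza.how and the CLI render from one source, so “what works where” is answered in one place. A minimal TypeScript sketch; the plugin names, status values, and `renderMatrix` helper are illustrative assumptions, not the migration tracker’s actual contents:

```typescript
// Hypothetical compatibility matrix: one record per plugin, rendered by docs and CLI alike.
type MigrationStatus = "works-on-v2" | "migration-in-progress" | "not-planned";

interface PluginCompatibility {
  plugin: string;      // package or repo name (values below are illustrative)
  v1: boolean;         // functional on the V1/0.x line
  v2: MigrationStatus; // status on the V2/1.x line
  notes?: string;      // caveat surfaced verbatim to users
}

// Illustrative rows only; real statuses must come from the migration tracker.
const matrix: PluginCompatibility[] = [
  { plugin: "plugin-openai", v1: true, v2: "migration-in-progress", notes: "Part of the minimum parity pack." },
  { plugin: "plugin-discord", v1: true, v2: "works-on-v2" },
  { plugin: "adapter-postgres", v1: true, v2: "migration-in-progress", notes: "Part of the minimum parity pack." },
];

// A single rendering helper keeps the docs table and CLI output from drifting apart.
export function renderMatrix(rows: PluginCompatibility[]): string {
  return rows
    .map((r) => `${r.plugin}: V1 ${r.v1 ? "yes" : "no"} | V2 ${r.v2}${r.notes ? ` (${r.notes})` : ""}`)
    .join("\n");
}

console.log(renderMatrix(matrix));
```

Keeping the matrix in one typed structure means the answer to “what works where” only has to be updated in one place.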

**Question 2:** What is the Council’s chosen “single source of truth” for versioning and setup paths (starter vs manual vs beta), and how aggressively do we enforce it across docs + CLI messaging?

  **Context:**
  - `elizaOS Discord: “Users are experiencing difficulties with the transition between ElizaOS versions (V1/0.x and V2/1.x).”`
  - `Action item: “Improve clarity about version naming (V1/0.x vs V2/1.x).” (cardinal.error)`

  **Multiple Choice Answers:**
    a) Make docs.eliza.how the canonical path; the CLI prints targeted instructions based on the detected version and OS (see the sketch after this list).
        *Implication:* Maximizes DX consistency and reduces support surface area, but requires ongoing docs discipline.
    b) Adopt a “two-lane” policy: Stable Lane (V1) + Frontier Lane (V2), each with separate, minimal docs.
        *Implication:* Clarifies expectations and reduces confusion, but splits attention and may slow convergence.
    c) Defer strict consolidation until post-V2 Gold; rely on community support channels for nuance.
        *Implication:* Short-term convenience increases long-term trust erosion and repeated onboarding failures.
    d) Other / More discussion needed / None of the above.
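
If option (a) is adopted, the version- and OS-aware CLI messaging can be as small as deriving the right docs path from the installed package version and `os.platform()`. A minimal sketch, assuming Node’s built-in `os` module; the `setupHint` helper and its wording are hypothetical, and only docs.eliza.how comes from the option itself:

```typescript
import * as os from "node:os";

// Hypothetical helper: print a targeted setup hint from the detected
// package version and OS, always pointing back to the canonical docs path.
function versionLine(pkgVersion: string): "V1/0.x" | "V2/1.x" {
  return pkgVersion.startsWith("0.") ? "V1/0.x" : "V2/1.x";
}

export function setupHint(pkgVersion: string): string {
  const line = versionLine(pkgVersion);
  const platform = os.platform(); // "darwin", "win32", "linux", ...
  return [
    `Detected elizaOS ${line} on ${platform}.`,
    `Canonical setup guide: https://docs.eliza.how (follow the ${line} instructions for ${platform}).`,
  ].join("\n");
}

// Example usage: the same hint can be printed on startup and appended to setup errors.
console.log(setupHint("1.0.0"));
console.log(setupHint("0.25.0"));
```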

**Question 3:** Should we prioritize fixing configuration ergonomics (env var detection, provider ordering, default model selection) over new provider integrations (e.g., OpenRouter) until support volume drops?

  **Context:**
  - `💻-coders: “Configuration challenges include environment variables not being recognized properly.”`
  - `elizaOS Development: “Plugin order behavior… affects default model selection.” (0xbbjoker)`

  **Multiple Choice Answers:**
    a) Yes—declare a “Configuration Reliability Sprint” and postpone new provider plugins.
        *Implication:* Improves reliability and builder trust; slows ecosystem breadth temporarily.
    b) Balance—ship OpenRouter while also adding guardrails (preflight checks, better errors, templates).
        *Implication:* Maintains momentum and reduces pain if guardrails are real, but raises execution risk.
    c) No—expand providers first to maximize optionality and let builders self-select around issues.
        *Implication:* Increases feature surface while compounding support complexity and perceived instability.
    d) Other / More discussion needed / None of the above.
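
Option (b) above bundles “preflight checks” and “better errors”; a concrete starting point is a pure function that reports, before the agent boots, which provider will win default model selection under the current plugin order and which required keys are missing. A minimal sketch; `ProviderConfig`, `preflight`, and the exact env key names are assumptions rather than existing runtime APIs:

```typescript
// Hypothetical preflight: given the plugin load order and the environment,
// report which provider will serve defaults and which required keys are absent.
interface ProviderConfig {
  name: string;          // e.g. "openai", "anthropic", "openrouter"
  requiredEnv: string[]; // keys that must be present for the provider to boot
}

interface PreflightReport {
  defaultProvider?: string;          // first provider in load order with all keys present
  missing: Record<string, string[]>; // provider name -> missing env keys
}

export function preflight(order: ProviderConfig[], env: NodeJS.ProcessEnv): PreflightReport {
  const missing: Record<string, string[]> = {};
  let defaultProvider: string | undefined;
  for (const provider of order) {
    const absent = provider.requiredEnv.filter((key) => !env[key] || env[key]!.trim() === "");
    if (absent.length > 0) {
      missing[provider.name] = absent;
    } else if (!defaultProvider) {
      defaultProvider = provider.name; // load order decides the default, mirroring the reported behavior
    }
  }
  return { defaultProvider, missing };
}

// Example: surface an actionable error instead of a silent fallback.
const report = preflight(
  [
    { name: "openai", requiredEnv: ["OPENAI_API_KEY"] },
    { name: "anthropic", requiredEnv: ["ANTHROPIC_API_KEY"] },
  ],
  process.env,
);
if (!report.defaultProvider) {
  console.error("No model provider is fully configured:", report.missing);
}
```

Because the check is pure over the plugin order and the environment, the same function can power a CLI preflight step and a unit test.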

---


### 2. Topic: Trust Through Shipping: Auto.fun Launch Readiness & Comms Discipline

**Summary of Topic:** Auto.fun is positioned as a near-term ecosystem catalyst and revenue flywheel (buyback narrative), but community frustration around delays and ambiguous countdowns threatens the “developer trust” mandate; security posture (audits/bug bounties) is also being requested as credibility collateral.

#### Deliberation Items (Questions):

**Question 1:** What is the Council-approved minimum public launch packet for Auto.fun to protect trust (timeline, tokenomics clarity, security stance, and support coverage)?

  **Context:**
  - `🥇-partners: “Auto.fun would launch ‘this week’.” (shaw)`
  - `dao-organization: “poor communication around launch delays… damaging trust.” (Zolo / Chinese community feedback)`

  **Multiple Choice Answers:**
    a) Publish a full launch packet: exact window, token flow diagram, risk disclosures, and a support rota.
        *Implication:* Maximizes trust and reduces rumor volatility; costs time and forces uncomfortable specificity.
    b) Publish a minimal packet: confirmed launch window + FAQ + “what’s live day-1” + escalation path.
        *Implication:* Protects cadence while addressing the sharpest confusion; may still be seen as thin transparency.
    c) Stay flexible: announce only at partner signal, keep details primarily in Discord/Notion.
        *Implication:* Optimizes optionality but likely deepens trust debt and amplifies community backlash cycles.
    d) Other / More discussion needed / None of the above.

**Question 2:** How do we reconcile the buyback/flywheel narrative with the North Star (reliable dev platform) so that Auto.fun doesn’t overshadow framework credibility?

  **Context:**
  - `elizaOS Discord: “SOL used on autofun will go back to buy ai16z. completing the flywheel” (anon)`
  - `🥇-partners: “The plan is definitely to make the number go up…” (shaw)`

  **Multiple Choice Answers:**
    a) Frame Auto.fun explicitly as a funding engine for reliability (docs, CI, audits, Cloud) with a published allocation policy.
        *Implication:* Aligns incentives with execution excellence and reduces “meme-first” perception risk.
    b) Keep messaging dual-track: community hype in social channels, dev-first messaging in docs/blog.
        *Implication:* May preserve both audiences but risks brand incoherence and internal priority drift.
    c) Lean into the meme economy as the primary growth engine; treat framework as downstream.
        *Implication:* Could accelerate attention but endangers long-term developer trust and composability ethos.
    d) Other / More discussion needed / None of the above.

**Question 3:** What security signal is mandatory before/at launch (audit, bug bounty, hardened infra), given the platform handles token creation and trading flows?

  **Context:**
  - `elizaOS Discord: “Proposal for an Immunefi partnership for security audits and bug bounties.” (yikesawjeez)`
  - `Auto.fun summary: “concerns about transparency and security… discussions about implementing bug bounties and audits.”`

  **Multiple Choice Answers:**
    a) Launch only with a formal bug bounty (e.g., Immunefi) and a published threat model.
        *Implication:* Strong trust posture; may delay launch and requires operational readiness to triage reports.
    b) Launch with phased security: internal review + limited bounty scope day-1, expand post-volume.
        *Implication:* Balances time-to-market and credibility but must be communicated crisply to avoid backlash.
    c) Defer formal security programs until after traction proves value.
        *Implication:* High reputational risk in web3 contexts; a single exploit could invalidate the entire flywheel.
    d) Other / More discussion needed / None of the above.

---


### 3. Topic: Execution Excellence: UX Hardening, Observability, and Test Infrastructure

**Summary of Topic:** Code-level progress shows consistent investment in first-run UX (interactive onboarding tour, GUI validation, stop-agent controls) and platform integrity (entities/agents fix), while the ecosystem’s scale demands stronger instrumentation and cross-platform CI to prevent regressions from becoming support storms.

#### Deliberation Items (Questions):

**Question 1:** Which “reliability gate” do we enforce before the next major release: CLI test suite coverage, onboarding completion path, or plugin boot health checks?

  **Context:**
  - `Daily Update 2025-04-16: “Implemented an interactive onboarding tour…” (PR #4293)`
  - `GitHub PRs: “CLI Testing… implementing a test suite for the command-line interface (CLI).” (PR #4290, #4301)`

  **Multiple Choice Answers:**
    a) Gate on CLI test suite + cross-OS matrix (Mac/Win/Linux) as the non-negotiable baseline.
        *Implication:* Reduces breakage in the primary entry path; may slow feature velocity but pays down trust debt.
    b) Gate on onboarding + GUI reliability (creation/import/start/stop) to minimize newcomer drop-off.
        *Implication:* Improves funnel conversion but may leave infra regressions undetected until later stages.
    c) Gate on plugin boot health checks and preflight validation (keys, provider order, env sanity).
        *Implication:* Directly targets the most common support failures and lowers community friction rapidly.
    d) Other / More discussion needed / None of the above.
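
If the Council gates on option (c), the “plugin boot health checks” can be modeled as a startup gate: each plugin contributes a fast, named self-check (key present, client reachable, adapter reachable), and the CLI refuses to report a healthy agent while any check fails. A minimal sketch; the `HealthCheck` shape and the check names are hypothetical, not the actual plugin interface:

```typescript
// Hypothetical boot gate: plugins register quick self-checks and the CLI
// reports pass/fail before declaring the agent started.
interface HealthCheck {
  name: string;             // e.g. "openai:key", "discord:token", "db:connect"
  run: () => Promise<void>; // throws with an actionable message on failure
}

interface BootReport {
  ok: boolean;
  failures: { name: string; reason: string }[];
}

export async function bootGate(checks: HealthCheck[]): Promise<BootReport> {
  const failures: { name: string; reason: string }[] = [];
  for (const check of checks) {
    try {
      await check.run();
    } catch (err) {
      failures.push({ name: check.name, reason: err instanceof Error ? err.message : String(err) });
    }
  }
  return { ok: failures.length === 0, failures };
}

// Example usage: fail fast with readable output instead of a half-started agent.
async function main(): Promise<void> {
  const report = await bootGate([
    {
      name: "openai:key",
      run: async () => {
        if (!process.env.OPENAI_API_KEY) throw new Error("OPENAI_API_KEY is not set");
      },
    },
  ]);
  if (!report.ok) {
    console.error("Boot checks failed:", report.failures);
    process.exit(1);
  }
}

main();
```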

**Question 2:** How aggressively should we standardize observability (LLM instrumentation, model/provider logging) across core + plugins to support ElizaOS Cloud reliability targets?

  **Context:**
  - `Recent PRs: “LLM Instrumentation… adds LLM instrumentation capabilities.” (PR #4304)`
  - `May 2025 update: “Integrated Sentry logging for core logger errors.” (PR #4650)`

  **Multiple Choice Answers:**
    a) Mandate a unified tracing interface in core and require new/updated plugins to emit standard events.
        *Implication:* Enables Cloud-grade debugging and SLA posture; increases contributor overhead and review rigor.
    b) Adopt observability as “recommended,” focusing only on core and flagship plugins first.
        *Implication:* Faster adoption with less friction, but creates blind spots where many community issues occur.
    c) Keep observability decentralized per plugin to avoid coupling.
        *Implication:* Preserves plugin autonomy but undermines platform-level reliability and incident response speed.
    d) Other / More discussion needed / None of the above.
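
Option (a)’s “unified tracing interface” could be a small event contract in core that plugins emit into, with the transport (stdout, Sentry, an ElizaOS Cloud collector) supplied at runtime. A minimal sketch; the `ModelCallEvent` fields and the `traced` wrapper are assumptions, not the shape shipped in PR #4304:

```typescript
// Hypothetical core tracing contract: plugins emit structured model-call events;
// core decides where they go (stdout, Sentry, a Cloud collector).
export interface ModelCallEvent {
  provider: string;         // e.g. "openai"
  model: string;            // e.g. "gpt-4o-mini"
  latencyMs: number;
  promptTokens?: number;
  completionTokens?: number;
  error?: string;           // present when the call failed
}

export interface Tracer {
  emit(event: ModelCallEvent): void;
}

// Wrapper a plugin can place around its provider call so timing and errors
// are recorded uniformly without each plugin inventing its own logging.
export async function traced<T>(
  tracer: Tracer,
  meta: { provider: string; model: string },
  call: () => Promise<T>,
): Promise<T> {
  const start = Date.now();
  try {
    const result = await call();
    tracer.emit({ ...meta, latencyMs: Date.now() - start });
    return result;
  } catch (err) {
    tracer.emit({ ...meta, latencyMs: Date.now() - start, error: err instanceof Error ? err.message : String(err) });
    throw err;
  }
}

// Example transport: log to stdout; a Cloud deployment could swap in Sentry or a collector.
export const consoleTracer: Tracer = { emit: (event) => console.log(JSON.stringify(event)) };
```

Requiring new or updated plugins to route calls through a wrapper like `traced(...)` is what turns per-plugin logging into platform-level observability, which is the tradeoff option (a) asks the Council to accept.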

**Question 3:** Do we treat “information taming” (daily knowledge repo updates, tested docs) as a release artifact equal to code, with explicit owners and SLAs?

  **Context:**
  - `elizaOS Discord: “jin implemented daily updates for the knowledge repository with automation.”`
  - `elizaOS Development: “Create tutorials for v2… education is thin.” (shaw)`

  **Multiple Choice Answers:**
    a) Yes—ship releases only with updated docs + migration notes + verified quickstarts, owned by a rotating duty officer.
        *Implication:* Directly advances developer trust and reduces repeated support; requires disciplined ops cadence.
    b) Partially—enforce docs SLAs only for breaking changes and flagship workflows.
        *Implication:* Improves the worst pain points without overburdening engineers; some confusion persists.
    c) No—optimize for code shipping and let community documentation lag behind.
        *Implication:* Short-term velocity gain but long-term trust erosion and higher support burnout.
    d) Other / More discussion needed / None of the above.