# Council Briefing: 2025-12-16

## Monthly Goal

December 2025: Execution excellence—complete token migration with high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

## Daily Focus

- Trust is being stress-tested by token migration delays (notably at Bithumb) and an alleged compromise of the migration site, making incident response and clear communication the highest-leverage actions for execution excellence.

## Key Points for Deliberation

### 1. Topic: Token Migration Reliability & Security Posture

**Summary of Topic:** Community pressure is escalating around CEX-held migration delays and an alleged phishing/compromise of the migration site; both directly threaten the monthly directive of high-success migration and developer/user trust.

#### Deliberation Items (Questions):

**Question 1:** What is the Council's default public stance and action protocol when a credible user reports funds stolen via our migration surface?

  **Context:**
  - `💬-discussion: Forrest Jackson: "Is the ElizaOS migration site hacked/hijacked?"; Odilitime: "We're looking at it."`
  - `💬-discussion summary: "claiming it was compromised and requesting funds were stolen"`

  **Multiple Choice Answers:**
    a) Treat as SEV-0: temporarily pause/geo-fence the migration UI, publish a security advisory, and open an incident war-room with hourly updates.
        *Implication:* Maximizes user safety and long-term trust, at the cost of short-term migration throughput and market noise.
    b) Keep migration live but add prominent warnings, tighten allowlists, and publish a preliminary statement after internal validation.
        *Implication:* Balances continuity and caution, but risks accusations of minimizing harm if more users are affected.
    c) Avoid public escalation until confirmed; handle reports case-by-case via support channels.
        *Implication:* Reduces panic risk, but can severely damage credibility if the issue is real and spreads through community channels first.
    d) Other / More discussion needed / None of the above.

**Question 2:** How should we operationalize accountability for exchange-held migrations (e.g., Bithumb) without alienating users caught in the delay?

  **Context:**
  - `💬-discussion: Korean users frustrated about "delays with Bithumb exchange completing the AI16Z to ElizaOS token migration"`
  - `💬-discussion: Omid Sa: "This matter is in your exchange's hands"`

  **Multiple Choice Answers:**
    a) Create an Exchange Migration Scoreboard (status, ETAs, known blockers) and a formal escalation playbook per exchange.
        *Implication:* Turns frustration into transparent tracking and pressures CEX partners via public accountability.
    b) Run a dedicated concierge pathway for impacted users (templates, translations, support tickets) while keeping messaging neutral toward exchanges.
        *Implication:* Protects brand tone and user experience, but may reduce leverage over slow exchanges.
    c) Hard-line messaging: reiterate exchange responsibility and stop investing internal bandwidth into CEX-specific delays.
        *Implication:* Conserves team capacity, but risks reputational damage in key regions and lowers perceived reliability.
    d) Other / More discussion needed / None of the above.

**Question 3:** Do we extend or modify the migration window/UX to reduce user error and phishing exposure during the long tail of migration?

  **Context:**
  - `2025-12-14: Ledger users had visibility issues; DorianD: "connect the hardware wallets to various chrome Solana wallets... then connect to the site"`
  - `2025-12-15: Reports of "site prompts wallet connection then requests approval for valuable tokens" (Forrest Jackson report summarized)`

  **Multiple Choice Answers:**
    a) Extend the window and ship hardened UX: explicit token-approval guards, transaction previews, and hardware-wallet specific guidance.
        *Implication:* Improves safety and completion rates, reinforcing execution excellence as a product attribute.
    b) Keep the window fixed, but invest in documentation, verified links, and in-app warnings to reduce confusion.
        *Implication:* Maintains schedule discipline while addressing trust; completion rates may still suffer in edge cases.
    c) Shorten/close the window and move remaining users to manual support-only remediation.
        *Implication:* Reduces attack surface but creates operational burden and user resentment, especially for CEX-delayed cohorts.
    d) Other / More discussion needed / None of the above.
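Option (a)'s "token-approval guards" can be made concrete with a pre-sign check. The sketch below is purely illustrative: `ApprovalRequest`, `MigrationContext`, and `checkApproval` are hypothetical names, not part of the actual migration site, and the field layout is an assumption about what a wallet-approval interceptor would see.

```typescript
// Hypothetical pre-sign guard: flag approval requests that do not match
// the expected migration flow, a common phishing pattern ("site prompts
// wallet connection then requests approval for valuable tokens").
interface ApprovalRequest {
  token: string;   // token contract/mint the user is asked to approve
  amount: bigint;  // requested allowance, in raw units
  spender: string; // address that would receive the allowance
}

interface MigrationContext {
  expectedToken: string;   // the legacy token being migrated
  expectedSpender: string; // the audited migration contract
  userBalance: bigint;     // user's current legacy-token balance
}

function checkApproval(req: ApprovalRequest, ctx: MigrationContext): string[] {
  const warnings: string[] = [];
  if (req.token !== ctx.expectedToken) {
    warnings.push(`Approval targets unexpected token ${req.token}`);
  }
  if (req.spender !== ctx.expectedSpender) {
    warnings.push(`Spender ${req.spender} is not the migration contract`);
  }
  if (req.amount > ctx.userBalance) {
    warnings.push(
      "Requested allowance exceeds your balance; possible unlimited approval"
    );
  }
  return warnings; // empty array → proceed to the normal signing flow
}
```

An empty result lets the transaction preview render normally; any warning would block the signing prompt behind an explicit confirmation step.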

---


### 2. Topic: ElizaOS Cloud Launch Trajectory (Create → Publish → Monetize → Promote)

**Summary of Topic:** Cloud progress is strong with a full business-cycle vision and new integrations/partners, but the Council must decide how to stage access, how to message the value, and how to avoid overextending reliability during launch.

#### Deliberation Items (Questions):

**Question 1:** What is the launch gating criterion for Cloud, given that the monthly directive prioritizes execution excellence over feature breadth?

  **Context:**
  - `2025-12-14: shaw: "The cloud platform is progressing well" and focused on "create → publish → monetize → promote"`
  - `Repo: PR #6216: "Eliza Cloud Integration... tightly integrate... CLI should auto log them in, provision API key" (open/unmerged)`

  **Multiple Choice Answers:**
    a) Launch only after end-to-end onboarding is deterministic (CLI login/API key provisioning + deploy) and instrumented with success metrics.
        *Implication:* Optimizes first impressions and retention, but risks slipping the calendar if integration PRs lag.
    b) Soft-launch as 'Alpha' with clear constraints, focusing on builders who tolerate rough edges while we harden reliability.
        *Implication:* Accelerates feedback loops and community energy, but requires disciplined expectation-setting to protect trust.
    c) Launch publicly now to capture momentum (SEO/social/ad integrations) and fix forward.
        *Implication:* Maximizes reach, but any instability becomes a narrative that undermines 'most reliable framework' positioning.
    d) Other / More discussion needed / None of the above.

**Question 2:** Should we formalize a curated 'Eliza-Alpha' channel to coordinate Cloud previews, testing, and controlled leaks as a growth instrument?

  **Context:**
  - `🥇-partners: untitled, xyz: "repurposing... to preview cloud features... rename... 'Eliza-Alpha'"; shaw: "ya sounds good"`

  **Multiple Choice Answers:**
    a) Yes—convert partners into 'Eliza-Alpha' with invite criteria (builders, integrators, content operators) and NDA-lite rules.
        *Implication:* Creates a controlled pre-launch runway and a reliable tester corps, reducing launch risk.
    b) Partially—run time-boxed demo drops in a public channel, but keep real alpha coordination internal to avoid narrative drift.
        *Implication:* Keeps transparency while limiting coordination overhead; may reduce depth of feedback.
    c) No—avoid special channels and focus on formal documentation and release notes to reduce misinformation vectors.
        *Implication:* Simplifies comms, but may slow community-driven distribution and early adopter recruitment.
    d) Other / More discussion needed / None of the above.

**Question 3:** How should Cloud integrate monetization/promote features (SEO, ad network, social publishing) without diluting the 'developer-first infra' identity?

  **Context:**
  - `2025-12-14: "New integrations include SEO capabilities, advertising network connections, and social publishing features"; "An ad network partner has been secured"`

  **Multiple Choice Answers:**
    a) Position these as optional modules/APIs (composable), with a minimal core Cloud runtime that remains infra-pure.
        *Implication:* Protects the framework brand and composability while enabling business outcomes for builders.
    b) Bundle them into the default Cloud experience to demonstrate the full autonomous agent business loop.
        *Implication:* Strengthens flagship narrative and conversion, but raises reliability surface area and support burden.
    c) Defer monetization/promote to a later phase; launch Cloud with deployment/storage/cross-chain primitives only.
        *Implication:* Reduces launch complexity, but may weaken differentiation versus generic agent hosting platforms.
    d) Other / More discussion needed / None of the above.

---


### 3. Topic: Framework Stability & Developer Trust (DX, Testing Discipline, Data Integrity)

**Summary of Topic:** Recent GitHub momentum shows heavy refactors, dependency alignment, and Cloud-first CLI improvements, while Discord surfaces ongoing runtime friction (plugin registration errors, DB constraints) and a proposal to raise PR verification standards.

#### Deliberation Items (Questions):

**Question 1:** Do we mandate PR proof-of-function (screenshots/video + explicit test steps) as policy to reduce 'passes review, fails in prod' regressions?

  **Context:**
  - `core-devs: cjft: "include screenshots or short videos with PRs to demonstrate functionality" and "write tests and verify PR functionality in production"`

  **Multiple Choice Answers:**
    a) Yes—require a 'Proof' section for any user-facing or runtime-affecting PR (video/screenshot + test plan) and block merges without it.
        *Implication:* Raises quality bar and trust through shipping, but increases contributor friction and review time.
    b) Adopt selectively—only for high-risk areas (auth, migrations, DB/plugins, CLI) while keeping low-risk changes lightweight.
        *Implication:* Targets the reliability hotspots while keeping contribution throughput healthy.
    c) No—rely on automated tests/CI and keep PR process minimal to maximize velocity.
        *Implication:* Preserves speed, but risks repeating production failures that erode developer trust.
    d) Other / More discussion needed / None of the above.
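Options (a) and (b) both imply a CI gate rather than a manual reviewer habit. A minimal sketch of such a gate, assuming hypothetical high-risk path patterns and a required `## Proof` heading in the PR description (none of these names come from the actual repo policy):

```typescript
// Hypothetical merge gate: require a non-empty "## Proof" section in the
// PR body whenever the change touches high-risk paths (option b's scope:
// auth, migrations, DB/plugins, CLI). Path patterns are illustrative.
const HIGH_RISK = [/^packages\/core\//, /migrations\//, /^packages\/cli\//];

function needsProof(changedFiles: string[]): boolean {
  return changedFiles.some((f) => HIGH_RISK.some((re) => re.test(f)));
}

function hasProofSection(prBody: string): boolean {
  // A "## Proof" heading followed by non-empty content before the next heading.
  const m = prBody.match(/##\s*Proof\s*\n([\s\S]*?)(\n##|$)/i);
  return m !== null && m[1].trim().length > 0;
}

function gate(changedFiles: string[], prBody: string): "pass" | "fail" {
  if (!needsProof(changedFiles)) return "pass"; // low-risk PRs stay lightweight
  return hasProofSection(prBody) ? "pass" : "fail";
}
```

Running this in CI makes the policy self-enforcing: reviewers never have to remember to ask for screenshots, and low-risk PRs are untouched.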

**Question 2:** What is our stability priority: resolve DB integrity issues (Twitter reply FK constraints) or ship new platform primitives (JWT auth, unified API, Cloud integration) first?

  **Context:**
  - `💬-coders: soyrubio: "twitter replies causing database fail due to foreign key constraints"; Redvoid: "latest codebase... SQL fixes"`
  - `GitHub: PR #6200 "implement JWT authentication" (open/unmerged); PR #6201 unified API; PR #6216 Cloud integration (open/unmerged)`

  **Multiple Choice Answers:**
    a) Stability-first: freeze new features until FK/DB edge cases are reproducibly fixed and validated across supported DB modes.
        *Implication:* Reinforces execution excellence and reduces support load, but slows Cloud narrative and platform expansion.
    b) Parallelize: assign a dedicated stability strike team while continuing feature merges behind flags and staged releases.
        *Implication:* Maintains momentum without sacrificing reliability, but requires strong release management and ownership clarity.
    c) Feature-first: push JWT/unified API/Cloud onboarding to hit market windows; patch DB issues opportunistically.
        *Implication:* Maximizes short-term roadmap progress, but increases the probability of compounding reliability debt.
    d) Other / More discussion needed / None of the above.
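Whichever priority wins, the reported failure mode (a reply persisted before its parent message exists) can be neutralized with a defensive write. The sketch below is an assumption about the pattern, not the actual ElizaOS adapter code; `Store`, `saveReply`, and the table names are hypothetical.

```typescript
// Hypothetical defensive write: verify the parent row exists before
// inserting a reply, so the foreign-key constraint cannot fail mid-batch
// (the shape of the reported "twitter replies causing database fail").
interface Store {
  exists(table: string, id: string): Promise<boolean>;
  insert(table: string, row: Record<string, unknown>): Promise<void>;
}

async function saveReply(
  db: Store,
  reply: { id: string; parentId: string; text: string }
): Promise<"saved" | "orphaned"> {
  if (!(await db.exists("messages", reply.parentId))) {
    // Parent never ingested (e.g. reply to an unseen tweet): park it in a
    // side table instead of violating the FK and failing the whole batch.
    await db.insert("orphan_replies", { ...reply });
    return "orphaned";
  }
  await db.insert("messages", { ...reply });
  return "saved";
}
```

The orphan table can later be reconciled once parents arrive, which is friendlier than either dropping replies or letting the constraint abort the pipeline.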

**Question 3:** How do we reduce first-run friction for new builders encountering plugin/config errors (e.g., TEXT_LARGE / missing OpenAI plugin) while keeping the framework open and composable?

  **Context:**
  - `2025-12-13: Thirtieth reports TEXT_LARGE error; sayonara: "OpenAI or any other AI plugin is not registered it seems" and suggested "elizaos update"`
  - `2025-12-13: Unanswered: "Do I need to connect that [OpenAI API key] to elizacloud?"`

  **Multiple Choice Answers:**
    a) Improve onboarding defaults: CLI should detect missing inference plugins, prompt installation, and validate keys with actionable errors.
        *Implication:* Directly improves DX and reduces Discord support load, strengthening adoption flywheel.
    b) Ship a 'doctor' command and troubleshooting playbooks (docs-first) while leaving runtime behavior unchanged.
        *Implication:* Maintains composability and avoids opinionated defaults, but places more burden on users to self-debug.
    c) Move to Cloud-first inference: default new projects to ElizaOS Cloud provider to minimize local plugin complexity.
        *Implication:* Simplifies setup and boosts Cloud usage, but risks alienating builders who require local/alternative providers by default.
    d) Other / More discussion needed / None of the above.
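Option (a)'s "detect missing inference plugins" reduces to a startup preflight. A minimal sketch under stated assumptions: `PluginInfo` and `preflight` are invented for illustration and do not reflect the real ElizaOS plugin registry API.

```typescript
// Hypothetical preflight check: before the runtime tries to serve a model
// class like TEXT_LARGE, confirm some registered plugin provides it and
// return an actionable message instead of a bare handler error.
interface PluginInfo {
  name: string;
  models: string[]; // model classes the plugin registers, e.g. "TEXT_LARGE"
}

function preflight(plugins: PluginInfo[], required: string[]): string[] {
  const available = new Set(plugins.flatMap((p) => p.models));
  return required
    .filter((model) => !available.has(model))
    .map(
      (model) =>
        `No registered plugin provides ${model}. Install an inference ` +
        `plugin (e.g. an OpenAI-compatible one), set its API key, and ` +
        `re-run the CLI to validate the configuration.`
    );
}
```

Surfacing this list at `elizaos start` (or via a `doctor`-style command under option b) would have turned the Thirtieth/sayonara exchange into a self-service fix.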