# Council Briefing: 2025-12-28

## Monthly Goal

December 2025: Execution excellence. Complete the token migration with a high success rate, launch ElizaOS Cloud, stabilize flagship agents, and build developer trust through reliability and clear documentation.

## Daily Focus

- Operational momentum is pulling in two directions: public trust friction intensified around the snapshot-based token migration even as core engineering surfaced a bold Jeju/Eliza Cloud future, yet daily GitHub throughput dipped to near-idle, risking execution credibility at month-end.

## Key Points for Deliberation

### 1. Topic: Token Migration Clarity & Trust Repair

**Summary of Topic:** The snapshot-locked migration policy is generating repeated user failure modes ("max amount reached", "0 eligible") and compounding reputational damage during a major price drawdown. The Council must decide how to reconcile strict migration rules with the North Star imperative of reliability and developer/community trust.

#### Deliberation Items (Questions):

**Question 1:** Do we hold the line on a strict snapshot-only migration, or introduce a controlled remediation path for edge cases to protect trust?

  **Context:**
  - `Odilitime (💬-discussion / 🥇-partners): "As a policy we're not migrating any purchases after snapshot."`
  - `Odilitime (🥇-partners): "'max amount reached'... means that wallet is not in the snapshot."`

  **Multiple Choice Answers:**
    a) Maintain strict snapshot-only migration with improved tooling and messaging (no policy change).
        *Implication:* Preserves on-chain/accounting simplicity, but requires exceptional UX/docs to avoid continued trust erosion.
    b) Add a narrow remediation process (case review + proofs) for specific wallet/connectivity failures without opening post-snapshot buys.
        *Implication:* Reduces legitimate user harm and scam susceptibility, at the cost of operational overhead and policy complexity.
    c) Open a time-limited secondary window with broader eligibility rules to reset sentiment quickly.
        *Implication:* Short-term goodwill boost, but risks dilution narratives, legal/exchange complications, and precedent of policy reversals.
    d) Other / More discussion needed / None of the above.

**Question 2:** What is the minimum public-facing explanation we must ship now to reduce repeat support load and scam risk while tokenomics remain partially undisclosed?

  **Context:**
  - `Borko (🥇-partners): "You're mistaking silence for something we're not sharing yet externally."`
  - `User reports across Discord: repeated confusion about eligibility and migrator errors.`

  **Multiple Choice Answers:**
    a) Publish a concise Migration Canon (eligibility rules + error glossary + official links) without discussing future token plans; see the error-glossary sketch after this list.
        *Implication:* Cuts support churn immediately while keeping strategic token design confidential.
    b) Publish Migration Canon plus a high-level token utility statement (one paragraph) to anchor expectations.
        *Implication:* May stabilize narrative without overcommitting, but requires careful wording to avoid future contradiction.
    c) Delay new public docs until tokenomics and Jeju token alignment are ready to disclose together.
        *Implication:* Reduces rework risk, but prolongs confusion and increases attack surface for impersonation/scams.
    d) Other / More discussion needed / None of the above.
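
If option (a) or (b) is chosen, the error glossary can be as small as a published mapping from raw migrator errors to plain-language guidance. A minimal TypeScript sketch follows, built from the two error strings already reported in Discord; the entry shape and the exact wording are illustrative assumptions, not the migrator's actual schema.

```ts
// Hypothetical glossary entry shape; only the two error strings below are
// taken from the Discord reports, everything else is illustrative.
type MigrationErrorEntry = {
  error: string;    // raw error surfaced by the migrator UI
  meaning: string;  // plain-language explanation for the Migration Canon
  nextStep: string; // what the user should do next
};

const MIGRATION_ERROR_GLOSSARY: MigrationErrorEntry[] = [
  {
    error: "max amount reached",
    meaning: "The connected wallet is not in the migration snapshot.",
    nextStep: "Confirm you connected the exact wallet that held tokens at snapshot time.",
  },
  {
    error: "0 eligible",
    meaning: "The snapshot records no eligible balance for this wallet.",
    nextStep: "Use the official eligibility lookup before opening a support ticket.",
  },
];
```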

**Question 3:** How should we define the monthly directive's “high success rate” for migration: policy compliance or user-perceived fairness?

  **Context:**
  - `Monthly Directive: "complete token migration with high success rate"`
  - `Community: significant frustration around ineligibility and wallet snapshot constraints (Dec 25-27 logs).`

  **Multiple Choice Answers:**
    a) Success = % of eligible snapshot wallets migrated successfully (technical completion metric).
        *Implication:* Optimizes for execution excellence, but may ignore reputational cost among ineligible/edge-case users.
    b) Success = eligible migration rate plus reduction in support incidents and scam reports (trust metric); a minimal metric sketch follows this list.
        *Implication:* Aligns with North Star trust-building, but demands investment in support ops and comms immediately.
    c) Success = market-facing recovery indicators (sentiment/price stability) after migration completion.
        *Implication:* Targets narrative outcomes, but risks conflating product excellence with market forces outside control.
    d) Other / More discussion needed / None of the above.
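
For deliberation, note that options (a) and (b) differ only in what gets counted. A minimal TypeScript sketch of both metrics; every field name here is hypothetical rather than drawn from any existing elizaOS reporting API.

```ts
// Hypothetical inputs; none of these fields come from an actual elizaOS API.
interface MigrationStats {
  eligibleWallets: number;        // wallets in the snapshot
  migratedWallets: number;        // eligible wallets that completed migration
  supportIncidentsBefore: number; // weekly incidents before the Canon ships
  supportIncidentsAfter: number;  // weekly incidents after
}

// Option (a): pure technical completion rate over eligible snapshot wallets.
const completionRate = (s: MigrationStats) =>
  s.migratedWallets / s.eligibleWallets;

// Option (b): completion rate paired with whether support load actually fell.
const trustAdjustedSuccess = (s: MigrationStats) => ({
  completionRate: completionRate(s),
  incidentReduction:
    1 - s.supportIncidentsAfter / Math.max(s.supportIncidentsBefore, 1),
});
```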

---


### 2. Topic: ElizaOS Cloud Reliability & Developer Experience Gaps

**Summary of Topic:** Cloud is live and attracting projects, but edge-case failures (agent naming, deployment credentials, UI multistep streaming) are breaking the “seamless UX” promise. The Council should prioritize a small set of high-impact hardening fixes that convert early adopters into advocates.

#### Deliberation Items (Questions):

**Question 1:** Which reliability breach is most existential to developer trust this week: agent creation validation, deploy pipeline errors, or chat/streaming UX regressions?

  **Context:**
  - `DorianD (💬-coders): agent names like "null" and numeric values cause exceptions; "$" works.`
  - `DorianD (💬-coders): "ECR credentials error" during elizaos deploy.`

  **Multiple Choice Answers:**
    a) Fix agent creation/validation and schema constraints first (prevent corrupt or crashing states).
        *Implication:* Reduces user-facing fatal errors at the entry point, improving first-run success and retention.
    b) Fix deploy pipeline reliability first (ECR/registry/auth) to ensure “create → deploy” works end-to-end; see the preflight sketch after this list.
        *Implication:* Maximizes perceived platform viability for serious builders evaluating Cloud as infrastructure.
    c) Fix chat streaming + multistep UI parity with Otaku first to improve flagship experience.
        *Implication:* Improves product delight and demos, but may leave foundational breakages unresolved.
    d) Other / More discussion needed / None of the above.
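
To make option (b) concrete: the reported ECR credentials failure can usually be caught before a push starts. Below is a hedged sketch of a preflight credential check using the public AWS SDK v3 ECR client; the surrounding deploy-pipeline wiring and the error code are assumptions, not elizaos's actual implementation.

```ts
import { ECRClient, GetAuthorizationTokenCommand } from "@aws-sdk/client-ecr";

// Preflight: fail fast with an actionable message instead of surfacing a raw
// registry error mid-deploy. The DEPLOY_ECR_AUTH_FAILED code is hypothetical.
async function assertEcrCredentials(region: string): Promise<void> {
  const client = new ECRClient({ region });
  try {
    const res = await client.send(new GetAuthorizationTokenCommand({}));
    const auth = res.authorizationData?.[0];
    if (!auth?.authorizationToken) {
      throw new Error("ECR returned no authorization token.");
    }
  } catch (err) {
    throw new Error(
      "DEPLOY_ECR_AUTH_FAILED: cannot authenticate to the container registry " +
        `(${(err as Error).message}). Check cloud credentials before retrying.`,
    );
  }
}
```

Running a check like this at the top of the deploy command turns an opaque mid-pipeline failure into a documented, glossary-ready error.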

**Question 2:** Do we formalize and enforce naming/metadata constraints at the API boundary (server) or in the client UX layer first?

  **Context:**
  - `DorianD: numeric agent names produce client-side exceptions; "null" behaves inconsistently between save and edit.`

  **Multiple Choice Answers:**
    a) Enforce constraints server-side immediately (canonical validation + clear error codes); see the validation sketch after this list.
        *Implication:* Prevents bad states across all clients and aligns with reliability-first engineering.
    b) Patch the client UX first (friendly validation) while scheduling server hardening next sprint.
        *Implication:* Fastest perceived fix, but risks other clients/CLI still creating invalid states.
    c) Do both in one coordinated change with backward-compat migration for already-bad records.
        *Implication:* Most robust, but higher coordination cost and potential to delay urgent relief.
    d) Other / More discussion needed / None of the above.
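
For option (a), server-side enforcement can reduce to one canonical schema that every entry point (web client, CLI, API) shares. A minimal sketch using the zod validation library; the specific constraints (length limit, reserved word, numeric-only rejection) are assumptions inferred from the failure reports, not an agreed spec.

```ts
import { z } from "zod";

// Canonical agent-name rule, enforced at the API boundary so all clients hit
// the same validation. Constraints are illustrative: they encode the reported
// failures ("null", purely numeric names) rather than a finalized policy.
const agentName = z
  .string()
  .trim()
  .min(1, "AGENT_NAME_EMPTY")
  .max(64, "AGENT_NAME_TOO_LONG")
  .refine((n) => n.toLowerCase() !== "null", { message: "AGENT_NAME_RESERVED" })
  .refine((n) => !/^\d+$/.test(n), { message: "AGENT_NAME_NUMERIC_ONLY" });

// "$" passes (as observed); "null" and "12345" are rejected with stable error
// codes the client can map to friendly copy.
const result = agentName.safeParse("null");
if (!result.success) {
  console.log(result.error.issues.map((i) => i.message)); // ["AGENT_NAME_RESERVED"]
}
```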

**Question 3:** How do we convert emerging Cloud projects into ecosystem proof without slowing shipping velocity?

  **Context:**
  - `Discord: "Zoria has 'bonded' and is identified as an Eliza Cloud project."`
  - `Community: need projects to identify themselves as built on "elizaos cloud" for distribution.`

  **Multiple Choice Answers:**
    a) Launch a lightweight “Built on Eliza Cloud” badge + showcase channel now (manual curation).
        *Implication:* Creates immediate social proof with minimal engineering, reinforcing trust through visible adoption.
    b) Implement an automated identification system in Cloud (metadata + public directory); a metadata sketch follows this list.
        *Implication:* Scales distribution long-term, but adds near-term engineering load during stability push.
    c) Defer showcasing until Cloud error rates drop below a defined SLO threshold.
        *Implication:* Avoids amplifying a fragile product, but misses a window to rebuild narrative momentum.
    d) Other / More discussion needed / None of the above.
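
If option (b) is pursued, the identification layer can start as a tiny self-declared metadata contract. A hypothetical sketch of what a “Built on Eliza Cloud” directory entry might carry; none of these field names exist in Cloud today.

```ts
// Hypothetical directory entry for a "Built on Eliza Cloud" showcase.
// Zoria-style projects would self-declare this metadata at deploy time.
interface CloudProjectListing {
  name: string;            // e.g. "Zoria"
  cloudProjectId: string;  // opaque ID assigned by Eliza Cloud (assumed)
  badge: "built-on-eliza-cloud";
  homepage?: string;
  verifiedAt?: string;     // ISO timestamp set once verification happens
}
```

Option (a) is the same shape with `verifiedAt` set by a human curator; option (b) sets it automatically from deploy metadata, so the manual badge can graduate into the automated directory without rework.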

---


### 3. Topic: Jeju Distributed Cloud Trajectory vs. Month-End Execution Risk

**Summary of Topic:** Jeju’s vision (TEE-secured, proof-of-cloud, sharded KMS, serverless SQLite) is strategically aligned with cross-chain, unstoppable agents, but the immediate signal shows low day-to-day repo activity and unresolved Cloud ergonomics. The Council must decide how to sequence visionary platform work against December’s execution-excellence mandate.

#### Deliberation Items (Questions):

**Question 1:** What sequencing maximizes North Star alignment: accelerate Jeju R&D now, or pause Jeju scope to harden Cloud and flagship agents first?

  **Context:**
  - `shaw (core-devs): "jeju" described as a fully distributed cloud with TEE, proof-of-cloud, key sharding, distributed KMS; "eliza cloud" will run on it.`
  - `GitHub daily (Dec 27-28): "minimal activity... 0 merged PRs... 1 active contributor"`

  **Multiple Choice Answers:**
    a) Prioritize Cloud hardening and flagship stability through end-of-month; keep Jeju as design-only work.
        *Implication:* Improves near-term reliability and trust, but risks losing momentum on differentiated decentralization.
    b) Run dual-track: a small Jeju strike team while the main force focuses on Cloud reliability SLOs.
        *Implication:* Balances narrative and execution, but requires disciplined coordination to avoid fragmented delivery.
    c) Accelerate Jeju implementation immediately to create a major narrative catalyst, accepting short-term Cloud roughness.
        *Implication:* Could create a breakthrough story, but conflicts with “Execution Excellence” and may worsen developer churn.
    d) Other / More discussion needed / None of the above.

**Question 2:** What is the Council’s desired public posture on Jeju details while tokenomics remain partially undisclosed?

  **Context:**
  - `DorianD (🥇-partners): asked if elizaos will be the native token of Jeju.`
  - `Borko (🥇-partners): token plans exist but are not being shared externally yet.`

  **Multiple Choice Answers:**
    a) Share technical Jeju vision openly, but explicitly separate it from token commitments (no promises).
        *Implication:* Supports open-source credibility while reducing implied token guarantees.
    b) Keep Jeju details mostly internal until token alignment and Cloud reliability are ready for a unified launch message.
        *Implication:* Avoids mixed signals, but forfeits an opportunity to redirect sentiment toward engineering strength.
    c) Announce a firm token alignment position now (e.g., ElizaOS is Jeju’s native token) to quell uncertainty.
        *Implication:* May calm token debates short-term, but creates strategic lock-in before architecture and policy are final.
    d) Other / More discussion needed / None of the above.

**Question 3:** How should we handle the distributed SQLite initiative to avoid “cool tech” drift and instead deliver developer-visible value quickly?

  **Context:**
  - `shaw (core-devs): building a distributed SQLite; naming discussion ("sqlit", "sqliite", "ShawQLite", "sq-lit").`

  **Multiple Choice Answers:**
    a) Keep it as an internal dependency of Jeju/Cloud (no separate branding) until it powers a clear Cloud feature; see the storage-seam sketch after this list.
        *Implication:* Reduces distraction and aligns R&D with product outcomes.
    b) Open-source it as a standalone component with a crisp roadmap and benchmarks.
        *Implication:* Attracts contributors and credibility, but increases maintenance and support surface area.
    c) Defer distributed SQLite until core Cloud storage paths are stable; use existing managed stores short-term.
        *Implication:* Maximizes execution excellence now, but delays key decentralization and cost/latency advantages.
    d) Other / More discussion needed / None of the above.
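
Option (a) can be enforced mechanically by hiding the engine behind an internal storage seam, so the distributed implementation becomes a drop-in replacement once it powers a real Cloud feature. A sketch of that seam in TypeScript; the interface, names, and the choice of better-sqlite3 for the local implementation are all assumptions.

```ts
import Database from "better-sqlite3";

// Hypothetical internal storage seam: Cloud code depends only on this
// interface, so whether the backing store is local SQLite today or the
// distributed engine later stays invisible to callers.
interface KeyValueStore {
  get(key: string): Promise<string | undefined>;
  put(key: string, value: string): Promise<void>;
}

// Plain-SQLite implementation used until the distributed engine lands.
class SqliteStore implements KeyValueStore {
  private db = new Database("store.db");

  constructor() {
    this.db.exec("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)");
  }

  async get(key: string): Promise<string | undefined> {
    const row = this.db.prepare("SELECT v FROM kv WHERE k = ?").get(key) as
      | { v: string }
      | undefined;
    return row?.v;
  }

  async put(key: string, value: string): Promise<void> {
    this.db
      .prepare(
        "INSERT INTO kv (k, v) VALUES (?, ?) " +
          "ON CONFLICT(k) DO UPDATE SET v = excluded.v",
      )
      .run(key, value);
  }
}
```

Swapping in the distributed engine is then a second class implementing the same interface, requiring no caller changes and no public branding decision.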