
Whitepaper

Abstract

Narrative Sniper is an AI-powered token launchpad on Solana, built primarily for autonomous AI agents. Given a tweet URL or a watchdog rule, the service ingests context, distils 1–5 token interpretations from a frontier LLM, dedupes against the live market via DexScreener, generates a token image via Replicate, pins assets to IPFS, and returns a serialized but unsigned VersionedTransaction that the client signs locally. The warm-path median end-to-end latency is around eight seconds with default models. Private keys never reach the server.

Problem statement

Narrative-driven markets compress all alpha into the first seconds after publication. Human traders cannot reliably act inside that window; agent fleets can, but the infrastructure to do so is fragmented across a tweet-listening layer, an LLM, an image generator, an IPFS pinner, a token-launch SDK and a Solana RPC.

Each handoff is a place where the chain breaks: a brittle prompt, a flaky rate limit, a half-supported SDK. The result is that most agent fleets either never ship or ship something so unreliable they cannot trust their own signal-to-action pipeline.

Thesis

Speed is the only edge that compounds. Discipline is the only edge that compounds twice.

We hold two opinions. First, an agent-native rail is worth more than the sum of its components — every saved second compounds across thousands of launches. Second, the rail must not custody keys. We treat the wallet boundary as inviolable.

Architecture

The system is a small, opinionated TypeScript monorepo. Shipped today unless marked otherwise:

  • API server — Express HTTP service. Stateless workers behind a queue.
  • SDK — @narrative-sniper/sdk on npm. Wraps REST and performs client-side signing.
  • CLI — narrative-sniper binary. Useful in shell scripts and CI.
  • Skill — SKILL.md installable via npx skills add narrative-sniper/narrative-sniper; works in any agent runtime that supports the skills protocol (Claude Code, Cursor, Cline, Copilot and others).
  • MCP server (planned, Phase 2) — will expose fromTweet, watchdog.* and launches.* as MCP tools.
  • OpenClaw plugin (planned, Phase 3) — packaged for the OpenClaw agent marketplace.
  • Web dashboard (planned, Phase 3) — Next.js. API keys, usage, deploy history, demo UI.
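For flavor, here is a hypothetical sketch of the SDK surface described above. The names (`LaunchRequest`, the request fields) and the transport are illustrative assumptions, not the published API; the HTTP client is replaced by a stub so the sketch is self-contained. The real `fromTweet` flow would POST to the API server and then sign locally.

```typescript
// Hypothetical SDK shapes; field names are illustrative, not the real API.
interface LaunchRequest {
  tweetUrl: string;
  wallet: string;       // public address only: the key stays client-side
  model?: string;       // e.g. "gpt-5.4"
  imageModel?: string;  // e.g. "z-image-turbo"
}

interface LaunchResponse {
  unsignedTxBase64: string; // serialized VersionedTransaction
  mint: string;
}

// Stub transport standing in for the real HTTP client, so the sketch runs offline.
async function postLaunch(req: LaunchRequest): Promise<LaunchResponse> {
  return { unsignedTxBase64: Buffer.from(req.tweetUrl).toString("base64"), mint: "Mint111" };
}

// Wrap the REST call; a real client would then sign the returned tx and submit it.
async function fromTweet(req: LaunchRequest): Promise<LaunchResponse> {
  if (!req.tweetUrl.startsWith("https://")) throw new Error("tweet URL required");
  return postLaunch(req);
}

fromTweet({ tweetUrl: "https://x.com/user/status/123", wallet: "WaLLet111" })
  .then((r) => console.log(r.mint));
```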

Latency budget

Median end-to-end latency on the warm path is around 8 seconds with the default models (GPT-5.4 + z-image-turbo). The dominant cost is the LLM call; switching the language model to GPT-5.4 Mini brings the path to around 6 seconds at some cost in quality. Premium image models (e.g. nano-banana-pro) add 30–110 seconds. The P95 figures assume normal network conditions; mainnet congestion can push submit + confirm to 10–15 seconds.

  Stage                          Median   P95     Provider
  Tweet fetch                    0.4s     0.9s    X API v2
  Narrative analysis (default)   2.1s     3.6s    GPT-5.4
  Narrative analysis (premium)   5–12s    14s     Claude Opus 4.6
  Dedup guard                    0.3s     0.8s    DexScreener
  Image generation               1.6s     3.0s    Replicate · z-image-turbo
  IPFS pinning                   0.8s     2.4s    Pinata
  Build unsigned tx              0.2s     0.5s    Solana web3.js
  Submit + confirm               2.4s     5–15s   Solana RPC
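As a sanity check, the quoted warm-path median is reproducible by summing the per-stage medians from the table:

```typescript
// Warm-path median latencies per stage, in seconds (values from the table above).
const stageMedians: Record<string, number> = {
  tweetFetch: 0.4,
  narrativeAnalysis: 2.1, // default GPT-5.4 path
  dedupGuard: 0.3,
  imageGeneration: 1.6,
  ipfsPinning: 0.8,
  buildUnsignedTx: 0.2,
  submitAndConfirm: 2.4,
};

// Summing the stages recovers the quoted "around 8 seconds".
const totalSeconds = Object.values(stageMedians).reduce((a, b) => a + b, 0);
console.log(totalSeconds.toFixed(1)); // 7.8
```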

Wallet & signing model

Private keys never touch the server. The trust boundary mirrors the official Pump.fun SDK and any non-custodial Solana dApp:

  1. Client sends tweet URL, parameters and public wallet address.
  2. Server constructs a serialized VersionedTransaction.
  3. Client deserializes, signs locally, submits to its RPC.

The SDK accepts a Keypair or a Wallet Adapter. In watchdog mode, users provide a dedicated ephemeral keypair with limited balance for offline auto-deploys.
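The three-step flow can be sketched end to end. Everything here is a toy model (the "transaction" is a JSON payload, the "signature" a string); the real system serializes a web3.js VersionedTransaction. The point the sketch makes is only that the signature is produced on the client and the key never crosses the boundary:

```typescript
// Toy model of the trust boundary: the server returns a serialized but
// UNSIGNED transaction; only the client holds the key and signs.
interface UnsignedTx {
  base64: string; // serialized VersionedTransaction in the real system
}

// Step 2 (server side): build and serialize an unsigned transaction.
function buildUnsignedTx(walletAddress: string, params: object): UnsignedTx {
  const payload = JSON.stringify({ walletAddress, params, signatures: [] });
  return { base64: Buffer.from(payload).toString("base64") };
}

// Step 3 (client side): deserialize, sign locally, return the signed bytes.
// The "signature" is a stand-in; a real client signs with its Keypair.
function signLocally(tx: UnsignedTx, secretKey: string): string {
  const decoded = JSON.parse(Buffer.from(tx.base64, "base64").toString());
  decoded.signatures.push(`signed-with-${secretKey.slice(0, 4)}`);
  return Buffer.from(JSON.stringify(decoded)).toString("base64");
}

const unsigned = buildUnsignedTx("WaLLetPubkey111", { name: "DEMO" });
const signed = signLocally(unsigned, "secr3t-never-sent-to-server");
console.log(signed !== unsigned.base64); // true: signing happened client-side
```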

Launch providers

Two bonding-curve providers are in scope. Today we ship Pump.fun; Meteora DBC follows in Phase 2 (Q3 2026).

Pump.fun

Constant-product bonding curve, 1B supply (6 decimals), 800M on curve. Zero creation cost. 1.25% trading fee in the bonding phase. Graduation to PumpSwap AMM costs 0.015 SOL. Creators can configure fee sharing across up to 10 shareholders as a one-time setup on creation — a feature unique to Pump.fun. We use createV2Instruction exclusively, which also supports Token-2022 mints.
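The one-time fee-sharing setup lends itself to a small validator. The shareholder shape and the basis-point convention below are assumptions for the sketch, not the Pump.fun SDK's actual types; only the 10-shareholder cap comes from this section:

```typescript
// Illustrative validator for a creator fee-sharing config: Pump.fun allows
// up to 10 shareholders, set once at creation. Shares are expressed in
// basis points here (an assumption) and must cover the whole creator fee.
interface Shareholder {
  address: string;
  shareBps: number; // basis points, 1/100 of a percent
}

function validateFeeSharing(shareholders: Shareholder[]): void {
  if (shareholders.length === 0 || shareholders.length > 10) {
    throw new Error("fee sharing supports 1-10 shareholders");
  }
  const totalBps = shareholders.reduce((sum, s) => sum + s.shareBps, 0);
  if (totalBps !== 10_000) {
    throw new Error(`shares must sum to 10000 bps, got ${totalBps}`);
  }
}

// 70/30 split between creator and a collaborator.
validateFeeSharing([
  { address: "Creator111", shareBps: 7_000 },
  { address: "Collab222", shareBps: 3_000 },
]);
console.log("fee-sharing config ok");
```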

Meteora DBC (Phase 2)

Multi-segment bonding curve (up to 16 price-liquidity points). Supports SOL, USDC and JUP quote tokens. Configurable poolCreationFee (0–100 SOL) and minimum migration thresholds: 10 SOL, 750 USDC or 1,500 JUP depending on quote. Token-2022 pools incur an additional 0.01 SOL fee. Anti-sniper features: Fee Time Scheduler, Rate Limiter and Alpha Vault. Migrates to DAMM v1 or v2 on threshold hit. Detailed Meteora docs ship at launch of Phase 2.
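The per-quote migration thresholds reduce to a small lookup. The values are copied from this section; the function shape is illustrative, not Meteora's API:

```typescript
// Minimum migration thresholds per quote token, as listed above.
type Quote = "SOL" | "USDC" | "JUP";

const minMigrationThreshold: Record<Quote, number> = {
  SOL: 10,
  USDC: 750,
  JUP: 1_500,
};

// A pool migrates to DAMM once the raised quote amount hits its threshold.
function readyToMigrate(quote: Quote, raised: number): boolean {
  return raised >= minMigrationThreshold[quote];
}

console.log(readyToMigrate("SOL", 9.5));  // false
console.log(readyToMigrate("USDC", 800)); // true
```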

AI inference stack

Three LLMs and three image models are available. The user selects per launch (or sets a default). Inference is billed as credit multipliers.

Language models

  Model             Provider    $/M in   $/M out   Context   Speed
  GPT-5.4           OpenAI      2.50     15.00     400k      fast (default)
  GPT-5.4 Mini      OpenAI      0.75     4.50      400k      fastest
  Claude Opus 4.6   Anthropic   5.00     25.00     200k      slow (premium)

Image models

  Model             $/image   Typical latency
  z-image-turbo     0.003     ~1.6s (default)
  seedream-5-lite   0.035     ~15s
  nano-banana-pro   0.15      30–110s
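A launch's raw inference cost can be estimated directly from the two tables. The token counts below are made-up example inputs, and the credit-multiplier mapping is not specified here, so this computes provider cost only:

```typescript
// Raw provider cost for one launch: LLM tokens at per-million rates plus one
// image. Prices come from the tables above; token counts are example inputs.
interface LlmPrice { inPerM: number; outPerM: number }

const llmPrices: Record<string, LlmPrice> = {
  "gpt-5.4": { inPerM: 2.5, outPerM: 15.0 },
  "gpt-5.4-mini": { inPerM: 0.75, outPerM: 4.5 },
  "claude-opus-4.6": { inPerM: 5.0, outPerM: 25.0 },
};

const imagePrices: Record<string, number> = {
  "z-image-turbo": 0.003,
  "seedream-5-lite": 0.035,
  "nano-banana-pro": 0.15,
};

function launchCostUsd(llm: string, inTokens: number, outTokens: number, image: string): number {
  const p = llmPrices[llm];
  return (inTokens / 1e6) * p.inPerM + (outTokens / 1e6) * p.outPerM + imagePrices[image];
}

// Example: ~3k prompt tokens, ~800 completion tokens, default image model.
console.log(launchCostUsd("gpt-5.4", 3_000, 800, "z-image-turbo").toFixed(4)); // 0.0225
```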

X API integration

We rely on X API v2 in the new pay-per-use credit model. Tweet lookups go through GET /2/tweets/:id with the full set of expansions. Watchdogs use a filtered stream with a shared rule budget of 1,000 — sufficient for 50–200 monitored accounts per project.

Same-day rechecks of stream-delivered posts are deduplicated and therefore free under X's billing rules. Projected utilization is well within the pay-per-use 2M monthly post-read cap.

Key Filtered Stream constraints we design around:

  • Stream connection limit: 1 per project. Defines our sharding strategy across multiple X API app accounts.
  • Reconnection rate limit: 50 requests per 15 minutes. Watchdog reconnect logic respects this with exponential backoff.
  • No backfill on disconnect. Any tweet emitted while we are offline is lost; we mitigate via fast reconnect and redundant connections in Phase 2.
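The reconnection budget (50 requests per 15 minutes) can be enforced with a sliding-window counter in front of the exponential backoff. A minimal sketch, with an injected clock so it runs deterministically; this is not the production implementation:

```typescript
// Sliding-window limiter for stream reconnects: at most `limit` attempts in
// any `windowMs` span. Backoff decides WHEN to retry; this decides WHETHER
// a retry is allowed at all.
class ReconnectBudget {
  private attempts: number[] = [];
  constructor(
    private readonly limit = 50,
    private readonly windowMs = 15 * 60 * 1000,
    private readonly now: () => number = Date.now,
  ) {}

  tryAcquire(): boolean {
    const cutoff = this.now() - this.windowMs;
    this.attempts = this.attempts.filter((t) => t > cutoff); // drop expired attempts
    if (this.attempts.length >= this.limit) return false;
    this.attempts.push(this.now());
    return true;
  }
}

// Deterministic demo with a fake clock: 60 rapid attempts, only 50 granted.
let t = 0;
const budget = new ReconnectBudget(50, 15 * 60 * 1000, () => t);
let granted = 0;
for (let i = 0; i < 60; i++) if (budget.tryAcquire()) granted++;
console.log(granted); // 50
```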

Deduplication guarantees

Every candidate name and ticker is searched against DexScreener prior to the build step. Tokens with an existing Solana match are skipped or regenerated. The check is rate-limited to 60 requests per minute and cached for 30 seconds.
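The 30-second cache in front of the 60 rpm budget can be sketched as a TTL map around the lookup. The lookup function below is a stub, not the real DexScreener client:

```typescript
// TTL cache in front of the dedup lookup: identical name/ticker queries
// within the TTL are served from cache and consume no rate-limit budget.
type Lookup = (query: string) => Promise<boolean>; // true = duplicate exists

function cachedDedup(lookup: Lookup, ttlMs = 30_000, now: () => number = Date.now) {
  const cache = new Map<string, { value: boolean; expires: number }>();
  return async (query: string): Promise<boolean> => {
    const hit = cache.get(query);
    if (hit && hit.expires > now()) return hit.value; // fresh cache hit
    const value = await lookup(query);
    cache.set(query, { value, expires: now() + ttlMs });
    return value;
  };
}

// Stub lookup counting upstream calls; stands in for the DexScreener search.
let upstreamCalls = 0;
const stub: Lookup = async (q) => { upstreamCalls++; return q === "TAKEN"; };

const dedup = cachedDedup(stub);
(async () => {
  await dedup("TAKEN");
  await dedup("TAKEN"); // served from cache
  console.log(upstreamCalls); // 1
})();
```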

Watchdog (Phase 2)

A long-lived Filtered Stream consumer with a per-account rules engine: keywords, engagement thresholds, max launches per day, cooldowns. The consumer is sharded across X API app accounts to circumvent the single-connection constraint.

Additional details of the Phase 2 design:

  • Engagement threshold rechecks — delayed REST lookups (15–60 seconds after stream delivery) confirm that a tweet has crossed a configured likes / reposts threshold before launching.
  • Same-day rechecks are free under the X API dedup policy; next-day rechecks count against the post-read cap.
  • End-to-end latency — approximately 6–7 seconds P99 from tweet creation to delivery into the watchdog stream, before any recheck delay is applied.
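The engagement recheck reduces to a pure predicate over the rule and the refreshed metrics. The rule and metric shapes below are illustrative assumptions about the Phase 2 design:

```typescript
// Illustrative watchdog rule check: after the 15-60 s recheck delay, the
// refreshed engagement numbers must clear the configured thresholds before
// a launch is triggered.
interface WatchdogRule {
  minLikes?: number;
  minReposts?: number;
  maxLaunchesPerDay: number;
  cooldownMs: number;
}

interface TweetMetrics { likes: number; reposts: number }

function passesThresholds(rule: WatchdogRule, m: TweetMetrics): boolean {
  if (rule.minLikes !== undefined && m.likes < rule.minLikes) return false;
  if (rule.minReposts !== undefined && m.reposts < rule.minReposts) return false;
  return true;
}

const rule: WatchdogRule = { minLikes: 100, minReposts: 20, maxLaunchesPerDay: 5, cooldownMs: 600_000 };
console.log(passesThresholds(rule, { likes: 340, reposts: 55 })); // true
console.log(passesThresholds(rule, { likes: 340, reposts: 5 }));  // false
```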

Risks & mitigations

  • MEV / sniper bots — the dominant adversary for any memecoin launchpad. Mitigations: mayhemMode randomises the Pump.fun bonding curve to make timing-based snipes unprofitable; Meteora's Fee Time Scheduler and Rate Limiter play the equivalent role on DBC pools in Phase 2.
  • Solana RPC outages — every SDK call accepts an rpcUrl override. BYO RPC is the canonical mitigation; we ship a default for convenience only.
  • LLM provider outage — narrative analysis falls back across the chain GPT-5.4 → GPT-5.4 Mini → Claude Opus 4.6 on provider-side errors. Latency degrades; the pipeline does not fail-stop.
  • X API rate-limit drift — we hold engineering capacity to shard apps and to upgrade tiers if pricing changes.
  • Provider deprecations — every external call is fronted by an interface. Pump.fun ↔ Meteora is feature-parity at the API surface.
  • LLM hallucination of duplicate tickers — DexScreener dedup is mandatory and runs before the build step.
  • Wallet compromise via watchdog keypair — keys are ephemeral, balance-capped and rotated on quota or daily breach.
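The LLM fallback chain described above (GPT-5.4 → GPT-5.4 Mini → Claude Opus 4.6) is an ordered-attempt loop. A minimal sketch with stub providers; the real adapters and error classification are not shown:

```typescript
// Ordered fallback across LLM providers: try each in turn on provider-side
// errors, so latency degrades but the pipeline does not fail-stop.
type Provider = { name: string; analyze: (prompt: string) => Promise<string> };

async function analyzeWithFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.analyze(prompt);
    } catch (err) {
      lastError = err; // provider-side failure: fall through to the next model
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}

// Stub chain: the first provider is "down", the second answers.
const chain: Provider[] = [
  { name: "gpt-5.4", analyze: async () => { throw new Error("503"); } },
  { name: "gpt-5.4-mini", analyze: async () => "narrative: DEMO" },
  { name: "claude-opus-4.6", analyze: async () => "narrative: DEMO" },
];

analyzeWithFallback(chain, "tweet text").then((r) => console.log(r)); // narrative: DEMO
```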

Roadmap

  • Q2 2026 · Phase 1 — MVP (private beta). REST API, SDK, CLI and Skill shipped. Pump.fun launch pipeline live. MCP server, OpenClaw plugin and web dashboard are in progress and slip to Phases 2–3 as called out under Architecture.
  • Q3 2026 · Phase 2 — Watchdog. X API Filtered Stream watchdog with per-account rules engine. Engagement threshold rechecks. Ephemeral keypair for offline auto-deploys. Meteora DBC as second provider. MCP server.
  • Q4 2026 · Phase 3 — Community. Self-hostable API server in a separate repo. Plugin SDK for BYO launch providers. BYO RPC, BYO LLM, BYO image model. Web dashboard. OpenClaw plugin.

Document version 0.9.1 — generated 2026-05-12. Comments to hello@narrativesniper.online.