Week 1 — day-by-day
Publish-ready · fires at launch hour
Day 0 — We shipped RunGuard. The first loop it caught was ours.
The dogfood story: our own launch script looped against a shared upstream infra blocker. We instrumented the detector between retries; on the seventh attempt, the SDK opened the breaker before the API call went out.
Gate: a public-launch channel (X / Show HN / Reddit / first cold-DM-driven install) has fired.
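For readers who want the mechanic before the day-0 post lands, here is a minimal sketch of the pattern described above: hash each failed call plus its error into a signature, count repeats inside a sliding window, and refuse the next call once the count crosses a threshold. This is illustrative code, not the shipped RunGuard SDK API; every name and default below is an assumption.

```python
import hashlib
import time
from collections import deque

class LoopBreaker:
    """Trips when one failure signature repeats inside a sliding window.

    Illustrative sketch only: names and defaults are invented, not SDK behavior.
    """

    def __init__(self, max_repeats=5, window_seconds=300.0):
        self.max_repeats = max_repeats
        self.window_seconds = window_seconds
        self._hits = {}          # signature -> deque of monotonic timestamps
        self.open = False
        self.tripped_on = None   # signature that tripped the breaker

    @staticmethod
    def _signature(call, error):
        # Same call + same error text = same loop signature.
        return hashlib.sha256(f"{call}|{error}".encode()).hexdigest()[:12]

    def record_failure(self, call, error):
        sig = self._signature(call, error)
        now = time.monotonic()
        hits = self._hits.setdefault(sig, deque())
        hits.append(now)
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()       # forget failures outside the window
        if len(hits) >= self.max_repeats:
            self.open = True
            self.tripped_on = sig

    def allow(self):
        # Check this before every outbound call: once open, nothing goes out.
        return not self.open
```

Keying the signature on call plus error, rather than call alone, is what lets ordinary transient-retry traffic pass while an identical-failure loop trips.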
Gated · T+24h
Day 1 — Launch numbers without the gloss
Signups, installs, referrers, and star counts at the 24-hour mark — with delta columns against the launch hour. Honest about whether the launch sustained or fizzled.
Gate: 24 hours after day-0 publish. Values from the first 24 hours are the entire content of the post.
Gated · first non-self trip
Day 2 — The first non-self loop our SDK caught
A customer's agent looped. Our SDK opened the breaker. What the signature looked like, what the breaker defaults were, and what the customer's retry logic did next — anonymized, with permission.
Gate: ≥1 trip row in the SDK telemetry from a paying customer, with explicit opt-in to publish.
Gated · T+72h + ≥3 signatures
Day 3 — Three loop signatures we hadn't seen before
Pattern-matching across 72 hours of customer trips. Categorized by trigger kind (loop / budget / context) and then by signature shape. One redacted example per category — code blocks, not prose.
Gate: 72 hours after day-0 publish AND ≥3 distinct anonymized signatures across customer installs.
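The day-3 taxonomy can be made concrete with a toy classifier. The three trigger kinds (loop / budget / context) come from the plan above; the trip fields, thresholds, and priority order below are assumptions for illustration, not RunGuard's telemetry schema or defaults.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    # Hypothetical trip record; every field name here is an assumption.
    repeat_count: int    # identical-signature repeats in the detection window
    tokens_spent: int    # tokens consumed by the run so far
    token_budget: int    # configured spend ceiling for the run
    context_used: int    # tokens currently held in the model context
    context_limit: int   # model context-window size

def trigger_kind(trip, loop_threshold=5, budget_frac=1.0, context_frac=0.9):
    """Bucket a trip into the three trigger kinds, checked in priority order."""
    if trip.repeat_count >= loop_threshold:
        return "loop"
    if trip.tokens_spent >= trip.token_budget * budget_frac:
        return "budget"
    if trip.context_used >= trip.context_limit * context_frac:
        return "context"
    return "uncategorized"
```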
Gated · first FP or T+96h
Day 4 — The first false positive (and what we changed)
When the breaker shouldn't have opened. What the user's legitimate workflow looked like, which default caused the bad trip, and whether we're shipping a version bump or a doc clarification.
Gate: first user-reported false-positive trip OR 96 hours after day-0 publish, whichever arrives first.
Gated · both SDKs live 72h
Day 5 — TypeScript or Python? What our install ratio actually says
Five days of npm install @runguard/sdk vs pip install runguard. Two integers, one ratio, and three plausible explanations — not a "the Python community prefers X" take extrapolated from a week of data.
Gate: 120 hours after day-0 publish AND both npm + PyPI packages have been live ≥72 hours.
Gated · T+144h + ≥1 $-saved trip
Day 6 — $X in runaway runs we caught this week
The IDENTITY headline — "How we caught $X in runaway agent runs" — with the math shown. Customer-reported dollar figures where shared, token-pricing estimates otherwise, and every line tagged so readers can audit.
Gate: 144 hours after day-0 publish AND ≥1 customer trip with a verifiable $-saved estimate. We will not publish a cumulative figure that includes our own dogfood ($0).
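The "math shown" promise reduces to a one-line formula per trip, plus a provenance tag per line. A hedged sketch: the function names, the per-million price, and the figures in the usage note are all hypothetical, not real pricing or real telemetry.

```python
def runaway_cost_estimate(avoided_calls, tokens_per_call, usd_per_million_tokens):
    """Tokens the breaker kept from being spent, priced at a per-million rate."""
    return avoided_calls * tokens_per_call * usd_per_million_tokens / 1_000_000

def audit_line(source, usd, note):
    """One tagged line per trip so readers can check each figure's provenance."""
    return f"[{source}] ${usd:,.2f} :: {note}"
```

For example, 430 blocked calls at 12,000 tokens each, priced at a hypothetical $3 per million tokens, gives runaway_cost_estimate(430, 12_000, 3.0) == 15.48, which would appear as an audit_line tagged "token-estimate" rather than "customer-reported".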
Gated · T+168h
Day 7 — A week-1 retro that names what we got wrong
Three concrete things we'd do differently, with the planned fix for each. One thing we got right and want to keep. Honest about cadence — did the gates hold, did we slip a day, did we publish anyway and now regret it?
Gate: 168 hours after day-0 publish. No data gate beyond "the previous six posts have shipped."
Weeks 2–4 — weekly cadence
After day-7, the 30-day promise continues at a weekly cadence. Week-2 through week-4 stubs will be scaffolded once week-1 is actively publishing, so the structure matches what we've actually seen rather than what we imagined on day 0.
30-day soak
First post drops the hour RunGuard's launch channel fires. Stay close — or join the waitlist and we'll email when the SDK ships and the log starts.