# Quickstart

## Install

methodic requires Python 3.11+ and depends only on `requests`.
## Authenticate

You need a Bearer token. Two flavors:

- API key (`sk_user_...` or `sk_agent_...`) for headless agents and workers. Created via `POST /api-keys` (with an Auth0 access token), or injected into Chronicle-managed `menlo_park` workers as `CHRONICLE_API_KEY`.
- Auth0 access token for interactive researcher work — same token your CLI or web UI uses; pass it as the `api_key` argument.
See the authentication guide for details.
## Hello, Chronicle (researcher)

The top-level entry point is `Chronicle`. Every operation lives on a
namespace (`chronicle.experiments`, `chronicle.variations`, `chronicle.search`,
`chronicle.runs`, `chronicle.assets`); resource handles like `Experiment` and
`Variation` are sugar for chained drill-downs.
```python
from methodic import Chronicle

with Chronicle(
    server_url="https://api.methodiclabs.ai",
    api_key="sk_user_...",
) as chronicle:
    # Create, commit, add a variation, all in one chain
    var = (
        chronicle.experiments.create(
            hypothesis_summary="ripple effect on PDE solvers in 2D",
            config_yaml=open("experiment.yaml").read(),
            rationale="follow-up to arxiv:2024.123",
        )
        .commit()
        .variations.create(
            config_yaml=open("variation-2.yaml").read(),
            description="alt seed",
        )
    )
    var.commit()

    # Iterate every experiment visible to you (paginates server-side)
    for exp in chronicle.experiments.iter():
        print(exp.id, exp.hypothesis_summary)

    # Search across research docs + arxiv (Vertex-backed)
    from methodic import SearchFilters

    hits = chronicle.search.query(
        "ripple effect boundary layer",
        filters=SearchFilters(asset_types=["hypothesis_report", "research_report"]),
        experiment_context=[var.experiment_id],  # boost lineage-related docs
    )
    for r in hits.results:
        print(r.relevance_score, r.title, r.document_id)

    # Retract with a reason; outputs get auto-invalidated server-side
    chronicle.experiments.retract(var.experiment_id, reason="contaminated dataset")
```
Mutators on resource handles (`commit`, `conclude`, `retract`) return
`self` for chaining and invalidate the cached server data internally —
the next attribute access (e.g. `exp.committed_at`) re-fetches and
returns a fresh value. You don't need to call `refresh()`.
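To make the invalidation behavior concrete, here is one way such a handle could be structured — a sketch, not methodic's actual implementation (`Handle`, `_data`, and the stub client protocol are all illustrative names):

```python
class Handle:
    """Resource handle: mutators return self and drop the cached payload,
    so the next attribute access transparently re-fetches."""

    def __init__(self, client, resource_id):
        self._client = client
        self._id = resource_id
        self._data = None  # cached server representation, None = stale

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails: lazily (re)fetch.
        if self._data is None:
            self._data = self._client.fetch(self._id)
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def commit(self):
        self._client.post(f"/experiments/{self._id}/commit")
        self._data = None  # invalidate; next access re-fetches
        return self       # enable chaining
```

The key detail is that `commit` does not eagerly re-fetch — it only marks the cache stale, so a chain of mutations costs one refresh at most, paid on the next read.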
## Hello, run (worker)

The worker side uses a `Run` resource handle bound to a specific
(experiment, variation, run) triple. Asset uploads auto-populate
`output_of` from the bound context.
```python
from methodic import Chronicle

with Chronicle(server_url="https://api.methodiclabs.ai", api_key="sk_agent_...") as chronicle:
    run = chronicle.run("b3a8f4c2-...", variation=1, run=1)

    # 1. Pull the variation config (frozen at variation commit)
    config = run.get_variation_config()

    # 2. Mark the run as running
    run.start()

    # 3. Send heartbeats from a background thread (every ~60s).
    #    Chronicle's watchdog times out runs after 15 minutes of silence.
    run.heartbeat()

    # 4. Upload an output — small JSON inline
    run.upload_asset(
        asset_type="takeaways_report",
        content={"summary": "loss converged at step 12000"},
    )

    # 5. Mark the run as succeeded (waits for any pending async uploads)
    run.succeed()
```
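Step 3 above shows a single `heartbeat()` call; in a real worker you would drive it from a background thread on a timer. A minimal sketch, assuming only that the bound `Run` handle exposes `heartbeat()` (the `start_heartbeat` helper itself is hypothetical, not part of methodic):

```python
import threading

def start_heartbeat(run, interval_s: float = 60.0) -> threading.Event:
    """Call run.heartbeat() every interval_s seconds until the returned
    Event is set. 60s keeps well inside the 15-minute watchdog window."""
    stop = threading.Event()

    def _loop():
        # Event.wait returns False on timeout (keep beating), True once set.
        while not stop.wait(interval_s):
            run.heartbeat()

    threading.Thread(target=_loop, daemon=True).start()
    return stop
```

Usage: call `stop = start_heartbeat(run)` right after `run.start()`, then `stop.set()` just before `run.succeed()` (or in a `finally:` block so a crashed run stops beating and the watchdog can time it out).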
## Errors

Non-2xx responses raise typed exceptions: `BadRequestError` (400/422),
`AuthenticationError` (401), `PermissionDeniedError` (403),
`NotFoundError` (404), `ConflictError` (409), `ServerError` (5xx). All
inherit from `APIError` (which inherits from `ChronicleError`). Each
carries `status_code`, `message`, and the underlying `response`.

Search returns 503 on servers without Vertex AI Search configured (local
dev, CI without credentials) — catch `ServerError` and check
`status_code == 503` if you need to gate.
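The gating pattern can be sketched like this — the class bodies below only mirror the hierarchy described above (the real definitions ship inside methodic), and `search_or_none` is an illustrative wrapper, not a library function:

```python
# Stand-ins mirroring the documented hierarchy; import the real ones from methodic.
class ChronicleError(Exception): ...

class APIError(ChronicleError):
    def __init__(self, status_code, message, response=None):
        super().__init__(message)
        self.status_code = status_code
        self.message = message
        self.response = response

class ServerError(APIError): ...  # 5xx

def search_or_none(do_search):
    """Treat 503 (no Vertex backend: local dev, CI) as 'search unavailable';
    re-raise every other server error."""
    try:
        return do_search()
    except ServerError as e:
        if e.status_code == 503:
            return None
        raise
```

In application code this would wrap a call like `lambda: chronicle.search.query(...)`.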
## What methodic does not do

- Train models, manage checkpoints, or know about PyTorch/HuggingFace internals — that's `menlo-park`.
- Retry / backoff — wrap calls yourself.
- Async — the API is synchronous; binary uploads run on an internal thread pool.
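Since retry/backoff is explicitly left to the caller, a minimal exponential-backoff wrapper might look like this (the helper, its parameters, and the delay schedule are all illustrative; in practice you would pass methodic's `ServerError` as the retryable type):

```python
import time

def with_retries(call, *, attempts=3, base_delay=0.5, retry_on=(Exception,)):
    """Invoke call(); on a retryable exception, sleep base_delay * 2**attempt
    and try again, re-raising after the final attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

For example, `with_retries(lambda: chronicle.experiments.iter(), retry_on=(ServerError,))` would retry transient 5xx responses while letting 4xx errors propagate immediately.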