@roostjs/core
Why Roost has a service container, how the pipeline middleware model works, and what the Application class is actually doing.
Why a Service Container for Workers?
The obvious question is whether a dependency injection container is overkill for a Cloudflare Worker. A Worker deploys as a single script, executes in milliseconds, and keeps no long-running state. Why not just import what you need?
The answer is testability and composability. When a route handler imports a database
connection directly, tests cannot replace that connection with a fake without reaching into
the module system. When middleware needs to share resolved services with the route handler
below it, there is no clean mechanism to pass them along. The container solves both: every
service is resolved from a shared registry, tests can swap any binding before calling
app.handle(), and the scoped container per request is the standard mechanism
for sharing request-scoped data through the pipeline.
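The swap-before-handle pattern can be sketched with a minimal container. The Container and binding names below are illustrative assumptions, not Roost's actual API surface; the point is that a test replaces the binding, not the module:

```typescript
// Minimal sketch of swapping a binding before handling a request.
// Container shape here is illustrative, not Roost's real API.
type Factory<T> = () => T;

class Container {
  private bindings = new Map<string, Factory<unknown>>();
  bind<T>(key: string, factory: Factory<T>): void {
    this.bindings.set(key, factory);
  }
  resolve<T>(key: string): T {
    const factory = this.bindings.get(key);
    if (!factory) throw new Error(`No binding for ${key}`);
    return factory() as T;
  }
}

interface Db { query(sql: string): string }

const app = new Container();
app.bind<Db>("db", () => ({ query: (sql) => `real:${sql}` }));

// In a test, swap the binding before invoking the handler:
app.bind<Db>("db", () => ({ query: (sql) => `fake:${sql}` }));

// The handler resolves from the container, so it gets the fake
// without any module-system patching.
const handler = (c: Container) => c.resolve<Db>("db").query("select 1");
const result = handler(app);
```

Because resolution goes through the registry, the handler's code is identical in production and in tests; only the binding changes.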
The container is also what makes service providers composable. A third-party package can ship a service provider that registers its own bindings without knowing anything about the application it will be installed in. The application bootstraps everything by registering providers, not by wiring dependencies manually.
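The provider shape can be sketched as follows; the interface names and the register signature are assumptions for illustration, not Roost's exact contract:

```typescript
// Illustrative service-provider shape: a package registers bindings
// without knowing anything about the host application.
interface Container { bind(key: string, factory: () => unknown): void }
interface ServiceProvider { register(container: Container): void }
type Resolver = Container & { resolve(key: string): unknown };

// A third-party package ships a provider with its own bindings...
const mailProvider: ServiceProvider = {
  register(c) {
    c.bind("mailer", () => ({ send: (to: string) => `sent to ${to}` }));
  },
};

// ...and the application bootstraps by registering providers,
// not by wiring dependencies manually.
function bootstrap(providers: ServiceProvider[]): Resolver {
  const bindings = new Map<string, () => unknown>();
  const container: Resolver = {
    bind: (k, f) => void bindings.set(k, f),
    resolve: (k) => bindings.get(k)!(),
  };
  providers.forEach((p) => p.register(container));
  return container;
}
```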
DI Without a Long-Lived Process
Laravel's container assumes a PHP-FPM process that handles one request and exits, or a long-running Octane process. In both cases, singleton lifetimes are clear. Workers are more nuanced: an isolate is warm for a period, handling multiple requests, then may be discarded. Singleton bindings in Roost live for the lifetime of the warm isolate — across potentially thousands of requests. This is almost always the right lifetime for expensive resources like D1 database connections, but it means singletons should not hold mutable per-request state.
Roost's scoped container — a child container created per request — is the answer to per-request state. Bindings registered in a scoped container are invisible to other requests. The scoped container inherits all singletons from its parent, so it can still resolve the database connection or configuration, but it adds its own bindings (authenticated user, resolved organization, request-specific context) without touching the shared state.
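The lookup rule behind this (own bindings first, fall back to the parent) can be shown with a small sketch; the Scope class below is a simplification that binds values directly rather than factories:

```typescript
// Sketch of scoped-container lookup: a child checks its own bindings,
// then falls back to the parent's (warm-isolate) singletons.
class Scope {
  private bindings = new Map<string, unknown>();
  constructor(private parent?: Scope) {}
  bind(key: string, value: unknown): void { this.bindings.set(key, value); }
  resolve<T>(key: string): T {
    if (this.bindings.has(key)) return this.bindings.get(key) as T;
    if (this.parent) return this.parent.resolve<T>(key);
    throw new Error(`No binding for ${key}`);
  }
  scoped(): Scope { return new Scope(this); }
}

const appScope = new Scope();
appScope.bind("db", { name: "d1-connection" }); // shared singleton

const requestA = appScope.scoped();
requestA.bind("user", { id: 1 }); // visible only inside request A

const requestB = appScope.scoped();
// requestB can resolve "db" (inherited) but not "user" (request A's binding).
```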
Why Pipeline Middleware Instead of Nested Function Calls
The alternative to a pipeline is a manually nested chain: every middleware wraps the next in a closure. This works but is hard to compose at runtime — you cannot add middleware conditionally without restructuring the nesting. The pipeline model separates concerns: you declare a list of middleware, and the pipeline builds the nested chain for you. Adding, removing, or reordering middleware is a one-line change.
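The core trick can be shown in a few lines: a reduceRight over the middleware array builds the nested chain that you would otherwise write by hand. This is a generic sketch of the pattern, not Roost's Pipeline implementation:

```typescript
// Building the nested middleware chain from a flat array, so order
// is declared as data rather than hard-coded nesting.
type Handler = (req: string[]) => string;
type Middleware = (req: string[], next: Handler) => string;

function pipeline(middleware: Middleware[], destination: Handler): Handler {
  return middleware.reduceRight<Handler>(
    (next, mw) => (req) => mw(req, next),
    destination,
  );
}

const trace: Middleware = (req, next) => { req.push("trace"); return next(req); };
const auth: Middleware = (req, next) => { req.push("auth"); return next(req); };

// Reordering or removing middleware is a one-element change in the array.
const handle = pipeline([trace, auth], (req) => req.join(" -> "));
```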
Roost's Pipeline class also integrates with the container. Middleware classes
can be passed as constructor functions, and the pipeline resolves them from the scoped
container. This means middleware can have injected dependencies like any other service —
the pipeline instantiates them from the container rather than requiring manual construction.
Structured Logging and Request Tracing
Cloudflare Workers emit logs to a runtime-managed stream. There is no file system, no
process-level logger, and no log aggregator available at the language level. The only
output mechanism is console.log, which Workers forwards to the Cloudflare dashboard
and any configured log drain. Roost's Logger class leans into this: it serializes
every entry as a single-line JSON object and writes it with console.log.
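A minimal version of that approach looks like the following; the field names (level, message, time) are assumptions for illustration rather than Roost's exact entry schema:

```typescript
// Sketch of single-line JSON logging: console.log is the only sink in
// Workers, so each entry is one serialized object.
type Level = "debug" | "info" | "warn" | "error";

class JsonLogger {
  constructor(private context: Record<string, unknown> = {}) {}

  log(level: Level, message: string, fields: Record<string, unknown> = {}): string {
    const entry = JSON.stringify({
      level,
      message,
      time: new Date().toISOString(),
      ...this.context, // e.g. requestId, bound once per request
      ...fields,
    });
    console.log(entry); // forwarded to the dashboard / log drain
    return entry;
  }
}

const logger = new JsonLogger({ requestId: "abc-123" });
const line = logger.log("info", "user fetched", { userId: 42 });
```

Because the context object is merged into every entry, nothing downstream has to remember to include the request ID.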
Structured JSON logging matters because Workers often run thousands of requests
concurrently. Plain text messages are impossible to correlate across requests. By
embedding a requestId in every log entry, an operator can filter a log drain by
requestId and see the complete trace for a single request — across middleware,
handlers, and any background work — without any additional instrumentation.
RequestIdMiddleware is the source of truth for the request ID. It reads cf-ray
from the incoming request headers when present (Cloudflare sets this on every request
that passes through the edge network) or generates a UUID for direct-to-Worker traffic.
The middleware binds a pre-configured Logger into the request-scoped container so
that every downstream component resolves the same logger with the same requestId,
without any manual threading of context.
The FakeLogger exists because console.log is a side effect. Tests that assert on
logging must replace the logger with something that captures output rather than emitting
it. Logger.fake() returns a FakeLogger with the same interface as Logger but
stores entries in an entries array for inspection. assertLogged and assertNotLogged provide
concise assertion methods that produce readable failure messages.
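A capturing fake along these lines is straightforward to sketch; the entry shape and assertion signatures here are assumptions, not FakeLogger's exact interface:

```typescript
// Sketch of a capturing fake: same log() surface, but entries are stored
// rather than written, so tests can assert on them.
interface Entry { level: string; message: string }

class FakeLogger {
  entries: Entry[] = [];

  log(level: string, message: string): void {
    this.entries.push({ level, message });
  }

  assertLogged(message: string): void {
    if (!this.entries.some((e) => e.message === message)) {
      const seen = this.entries.map((e) => e.message).join(", ") || "(none)";
      throw new Error(`Expected "${message}" to be logged; saw: ${seen}`);
    }
  }

  assertNotLogged(message: string): void {
    if (this.entries.some((e) => e.message === message)) {
      throw new Error(`Did not expect "${message}" to be logged`);
    }
  }
}

const fake = new FakeLogger();
fake.log("info", "cache warmed");
fake.assertLogged("cache warmed");    // passes
fake.assertNotLogged("cache missed"); // passes
```

Listing the messages that were seen in the failure message is what makes a failed assertion readable at a glance.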
The Request Lifecycle
A request to a Roost application passes through four stages:
- Scope creation. Application.handle() calls container.scoped() to create a child container for this request. The scoped container inherits all application-level singletons but maintains its own bindings for request-specific data.
- Middleware pipeline. The scoped container is attached to the request object. Each middleware in registration order receives the request, optionally modifies it or the container, then calls next to continue. Middleware can short-circuit by returning a response without calling next.
- Dispatch. After all middleware, the dispatcher receives the final request and returns a response. This is where routing and handler logic lives.
- Background work. Promises registered with app.defer() run after the response is returned to the client, inside the Worker's waitUntil() budget. The runtime keeps the isolate alive until all deferred promises settle.
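The four stages can be condensed into a small end-to-end sketch. The waitUntil shape mirrors the Workers ExecutionContext API; the MiniApp internals are illustrative, not Roost's Application class:

```typescript
// Condensed lifecycle sketch: pipeline into dispatch, then hand deferred
// work to the runtime's waitUntil budget after the response is ready.
type Next = () => Promise<string>;
type Middleware = (next: Next) => Promise<string>;

class MiniApp {
  private deferred: Promise<unknown>[] = [];
  constructor(private middleware: Middleware[], private dispatch: Next) {}

  defer(work: Promise<unknown>): void { this.deferred.push(work); }

  async handle(ctx: { waitUntil(p: Promise<unknown>): void }): Promise<string> {
    // Stage 1 (scope creation) would happen here.
    // Stages 2 and 3: build the middleware chain ending in dispatch.
    const chain = this.middleware.reduceRight<Next>(
      (next, mw) => () => mw(next),
      this.dispatch,
    );
    const response = await chain();
    // Stage 4: background work runs inside the waitUntil budget.
    for (const p of this.deferred) ctx.waitUntil(p);
    return response;
  }
}
```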
defer() separates fire-and-forget side effects from the latency-sensitive response path.
Writing analytics, sending audit events, or warming a downstream cache are all candidates
for defer(). The client receives its response immediately; the background work completes
asynchronously without adding to response time.
Webhook Security
Webhooks are unauthenticated HTTP callbacks by default. Any party that knows your endpoint URL can POST to it. Signature verification is the standard mitigation: the provider signs each payload with a shared secret, and the receiver verifies the signature before processing the event.
Roost's verifyWebhook handles the cryptographic details — HMAC-SHA256, HMAC-SHA512,
and Ed25519 — and also validates the timestamp when the provider includes one. Timestamp
validation prevents replay attacks: a captured webhook payload cannot be replayed after
the tolerance window (default 300 seconds) has elapsed.
The WebhookPresets map provider-specific header names and payload formats onto the
generic WebhookVerifyOptions interface. Stripe embeds the timestamp inside the
signature header itself; GitHub uses a simple sha256= prefix; Svix uses separate
headers and Ed25519. The presets encapsulate this variation so application code stays
uniform across providers.
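The shape of that mapping can be sketched as data; the field names on WebhookVerifyOptions here are assumptions for illustration, though the header names themselves are the ones the providers actually use:

```typescript
// Illustrative preset table: provider quirks expressed as data against
// one generic options interface.
interface WebhookVerifyOptions {
  algorithm: "hmac-sha256" | "hmac-sha512" | "ed25519";
  signatureHeader: string;
  timestampHeader?: string; // absent for Stripe, which embeds the timestamp
                            // inside the signature header itself
  signaturePrefix?: string; // e.g. GitHub's "sha256=" prefix
}

const presets: Record<string, WebhookVerifyOptions> = {
  stripe: {
    algorithm: "hmac-sha256",
    signatureHeader: "stripe-signature",
  },
  github: {
    algorithm: "hmac-sha256",
    signatureHeader: "x-hub-signature-256",
    signaturePrefix: "sha256=",
  },
  svix: {
    algorithm: "ed25519",
    signatureHeader: "svix-signature",
    timestampHeader: "svix-timestamp",
  },
};
```

Application code picks a preset by name and never touches the header-parsing differences directly.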
WebhookMiddleware wraps verifyWebhook as a Middleware-compatible class. It catches
WebhookVerificationError and returns a 401 response with a JSON error body, so
routes that use it do not need to handle verification failures explicitly.