ZealPHP
The PHP Runtime for AI Web Apps
Stream AI responses in 5 lines. WebSocket, SSE, shared memory, task workers —
one server, one process. Coroutine-native concurrency with PHP's developer experience.
Upgrade your existing PHP codebase to async — start without rewriting, migrate at your own pace.
Why? covers the problem PHP-FPM can't solve, where ZealPHP fits vs Laravel Octane / FrankenPHP / RoadRunner, and when it's the wrong choice.
$app->route('/ai/chat', function($response) {
    $prompt = $_GET['prompt'] ?? '';          // superglobals work via uopz
    $response->sse(function($emit) use ($prompt) {
        foreach (call_ai_api($prompt) as $token) {
            $emit($token, 'token');
        }
    });
});
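On the wire, each `$emit` call becomes a standard Server-Sent Events frame. A minimal sketch of that framing, assuming `sse()` follows the SSE spec (the function name `sse_frame` is illustrative, not a ZealPHP API):

```php
<?php
// Illustrative: what $emit($data, 'token') puts on the wire, per the SSE spec.
function sse_frame(string $data, ?string $event = null): string {
    $frame = $event !== null ? "event: $event\n" : '';
    foreach (explode("\n", $data) as $line) {  // multi-line data → one data: field per line
        $frame .= "data: $line\n";
    }
    return $frame . "\n";                      // blank line terminates the frame
}
```

Any SSE client — a browser `EventSource`, curl, or another server — receives these frames and dispatches them by event name.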
And it's fast — here's the throughput on 4 workers, full middleware stack:
ab -n 50000 -c 200 -k, same machine, no DB · PERF.md · reproduce locally
| Framework | Raw text | JSON API | Template |
|---|---|---|---|
| Runtime (no framework, no middleware) | | | |
| OpenSwoole raw | 142k | 138k | — |
| Node.js raw http | 129k | 132k | — |
| Full framework (CORS + ETag + sessions + routing + templates) | | | |
| ZealPHP built-in | 117k | 106k | 50k |
| Express.js + 5 npm pkgs | 20k | 22k | 12k |
| Other PHP frameworks (community benchmarks) | | | |
| Slim 4 | ~4k req/s | | |
| Symfony 7 | ~2k req/s | | |
| Laravel 11 | ~500 req/s | | |
The runtime is already faster. OpenSwoole's bare HTTP server hits 142k req/s text · 138k JSON, versus Node's 129k · 132k — +10% on text, +5% on JSON, before any framework loads.
ZealPHP keeps 82% of that throughput with the full PSR-15 middleware stack on top; Express keeps 15% of raw Node's. End result — ZealPHP with full middleware reaches 91% of bare Node http's throughput.
Concurrency sweep, latency percentiles, methodology, reproduction recipes & caveats →
The PHP we love. The execution model we needed.
LAMP shipped the web — Apache, mod_php, then PHP-FPM. Twenty-five years of request-per-process: spawn a worker, run the code, die when done. Shared-nothing by design.
It still works. But it can't stream AI tokens. It can't push WebSocket events. It can't share state between requests without bolting on Redis. Every “real-time” feature your customers ask for needs another service.
ZealPHP keeps the PHP. Swaps the execution model. One process, coroutines, persistent state — and your existing PHP codebase still runs.
Try it — live AI chat, streaming on this server
Powered by the OpenAI Agents SDK + ZealPHP SSE streaming. Multi-agent with tool use, streamed token-by-token.
Why not just use...?
Bold claims. Real code. You decide.
Node.js needs 30 lines for what ZealPHP does in 5
AI token streaming — the core feature of every LLM app. Compare the implementations.
$app->route('/ai/stream', function($response) {
    $response->sse(function($emit) use ($apiUrl) {
        $ch = curl_init($apiUrl);
        // ... setup curl streaming
        curl_exec($ch);
    });
});
app.get('/ai/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();
  const response = await fetch(apiUrl, {
    method: 'POST', body: JSON.stringify({...}),
  });
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value);
    // parse SSE lines, extract tokens...
    res.write(`data: ${token}\n\n`);
  }
  res.end();
});
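Both versions ultimately do the same job: split the upstream SSE stream into `data:` payloads and re-emit them. A minimal parser sketch of that step (the function name is illustrative; the upstream API's exact payload shape is an assumption):

```php
<?php
// Illustrative: pull the data: payloads out of a raw SSE chunk.
function extract_sse_data(string $chunk): array {
    $payloads = [];
    foreach (explode("\n", $chunk) as $line) {
        if (str_starts_with($line, 'data: ')) {
            $payloads[] = substr($line, 6);   // strip the "data: " field prefix
        }
    }
    return $payloads;
}
```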
Expressive PHP with coroutine-grade concurrency
~106k req/s in our 4-worker JSON benchmark, with reflection-based injection, auto-serialization, and no boilerplate. Numbers vary by workload — see methodology below.
$app->route('/users/{id}', function($id) {
    // auto JSON. auto 200. done.
    return ['user' => User::find($id)];
});
func getUser(w http.ResponseWriter, r *http.Request) {
    id := chi.URLParam(r, "id")
    user, err := FindUser(id)
    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]any{
        "user": user,
    })
}
Multi-process workers, coroutines per worker
ZealPHP inherits OpenSwoole's architecture: N worker processes, each running thousands of coroutines on a single reactor loop. OpenSwoole is the runtime; ZealPHP is the framework layer. Real connection counts depend on workload, OS limits, and tuning — measure for your case.
// 16 workers × thousands of coroutines
// Shared memory across workers (no Redis)
// Each coroutine yields on I/O automatically
ZEALPHP_WORKERS=16 php app.php
// Store: cross-worker shared state
Store::set('cache', $key, $data);
$data = Store::get('cache', $key);
# Single process, async but not parallel
# Need Gunicorn + multiple workers
# Need Redis for any shared state
# Need Celery for background tasks
gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
# Shared state? Add Redis.
redis_client = redis.Redis()
redis_client.set(key, json.dumps(data))
Ubuntu/Debian/WSL2 · macOS · auto-detects unsupported distros and prints manual steps · inspect first
Quick Start
PHP installed? From zero to running server in 60 seconds.
- Your server is live at http://localhost:8080 — rerun php app.php after editing routes, then refresh http://localhost:8080.
- The bundled demo runs at http://localhost:9501 — admin, login, REST API all working.
Everything you need
Every feature is a live running example — click any card to explore.
Routing
Flask-style routes with reflection-based injection. Zero config, zero boilerplate.
Explore →
Responses · auto-serialize
Return int → status, array → JSON, Generator → stream. Framework does the right thing.
Explore →
Coroutines · go() + Channel
Fan out to multiple AI models in parallel. Merge responses. go() + Channel, zero callback hell.
Explore →
Streaming · yield · SSE
Stream AI tokens as they generate. yield is your streaming primitive. SSR, SSE, stream() built-in.
Explore →
WebSocket · App::ws()
Real-time agent-to-user comms. Multi-user AI sessions, live collaboration, binary frames.
Explore →
Middleware · PSR-15
CORS, ETag/304, gzip. PSR-15 compatible — drop in any middleware package.
Explore →
Sessions · drop-in
Coroutine-safe sessions. Your existing session_start() code just works via uopz.
Explore →
Store · OpenSwoole\Table
Share AI conversation state across workers. Cross-worker shared memory — no Redis needed.
Explore →
Timers · tick() · after()
Schedule recurring AI tasks. Polling, cleanup, model warmup, health checks.
Explore →
HTTP · HTTP/1.1
Full HTTP/1.1 compliance. HEAD, OPTIONS, Range, redirects, CORS, ETag, gzip — all built-in.
Explore →
Components · renderStream()
SSR streaming components. Compose views with yield from. renderStream() for progressive HTML.
Explore →
REST API · file-based
Drop a PHP file in api/. It becomes a route. File-based REST — the simplest API pattern.
Explore →
Legacy Apps · WordPress
Run WordPress unmodified. CGI worker provides true global scope. Apache mod_php compatibility.
Explore →
Bring your PHP codebase along
session_start(), header(), $_GET, echo —
overridden via uopz so existing code runs unchanged inside the coroutine runtime.
Move at your own pace: drop your whole app in, or rewrite endpoint-by-endpoint.
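What "runs unchanged" means in practice: a legacy-style page keeps using superglobals and echo with no framework APIs at all. A hedged sketch — the page content is invented for the demo; the claim that uopz redirects these calls into the coroutine runtime is the framework's, not verified here:

```php
<?php
// Hypothetical legacy page: plain superglobals + echo, no framework calls.
// ZealPHP's claim is that uopz overrides ($_GET, session_*, header, echo
// buffering) let a file like this run unchanged inside the coroutine runtime.
$_GET['name'] = 'world';                         // simulated query string for the demo
$name = htmlspecialchars($_GET['name'] ?? 'guest');
echo "<h1>Hello, $name</h1>";
```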
Today: Nginx + PHP-FPM + Redis + Socket.io + cron + …
6 services, 6 failure points, 6 sets of config.
On ZealPHP: php app.php
HTTP + WebSocket + SSE + sessions + shared memory + task workers — one process.
The migration ladder has 5 rungs (0 → 4). Rung 0 is "drop in your existing app, run php app.php." Rung 4 is full coroutine mode — 117k req/s on 4 workers. Most teams stay on rungs 1–3 indefinitely; the upgrade path is opt-in, not forced.
Return anything, get the right response
ZealPHP inspects your return type and does the right thing — no boilerplate.
| Return | Result | Example |
|---|---|---|
| int | HTTP status code | `return 404;` `return 201;` |
| array / object | Auto-serialized as JSON | `return ['users' => $list];` |
| string | HTML body | `return '<h1>Hello</h1>';` |
| Generator | SSR streaming (each yield sent immediately) | `yield '<head>'; yield $body;` |
| void + echo | Buffered output via ob_get_clean() | `echo "Hello"; echo " World";` |
| ResponseInterface | PSR-7 response used directly | `return new Response(...);` |
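The dispatch logic behind that table is a type switch on the return value. An illustrative sketch — not ZealPHP's internals, just the mapping the table describes, with a `respond` helper invented for the example:

```php
<?php
// Illustrative sketch of return-type dispatch, mirroring the table above.
function respond(mixed $value): array {
    if ($value instanceof Generator) {
        // each yield would be flushed immediately; buffered here for the sketch
        return ['status' => 200, 'body' => implode('', iterator_to_array($value, false))];
    }
    if (is_int($value)) {
        return ['status' => $value, 'body' => ''];                  // int → status code
    }
    if (is_array($value) || is_object($value)) {
        return ['status' => 200, 'body' => json_encode($value),
                'type'   => 'application/json'];                    // auto-JSON
    }
    return ['status' => 200, 'body' => (string)$value, 'type' => 'text/html'];
}
```

Note the Generator check comes first: a Generator is also an object, so the JSON branch would otherwise swallow it.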