ZealPHP
The PHP Runtime for AI Web Apps
Stream AI responses in 5 lines. WebSocket, SSE, shared memory, task workers —
one server, one process. Coroutine-native concurrency with PHP's developer experience.
Upgrade your existing PHP codebase to async — start without rewriting, migrate at your own pace.
```php
$app->route('/ai/chat', function($response) {
    $response->sse(function($emit) {
        $prompt = $_GET['prompt'] ?? '';   // superglobals work inside coroutines
        $tokens = call_ai_api($prompt);
        foreach ($tokens as $token) {
            $emit($token, 'token');
        }
    });
});
```
Benchmarks: `ab -n 50000 -c 200 -k`, same machine, no DB · PERF.md · reproduce locally
| Framework | Raw text | JSON API | Template |
|---|---|---|---|
| **Runtime (no framework, no middleware)** | | | |
| OpenSwoole raw | 205k | 213k | — |
| Node.js raw http | 260k | 264k | — |
| **Full framework (CORS + ETag + sessions + routing + templates)** | | | |
| ZealPHP built-in | 70k | 67k | 51k |
| Express.js + 5 npm pkgs | 45k | 42k | 15k |
| **Other PHP frameworks (community benchmarks)** | | | |
| Slim 4 | ~4k req/s | — | — |
| Symfony 7 | ~2k req/s | — | — |
| Laravel 11 | ~500 req/s | — | — |
Methodology: 4-core Linux container; each server tested sequentially with 4 workers after warm-up; `ab -n 50000 -c 200 -k -l`.
ZealPHP runs with the full PSR-15 stack (CORS + ETag + Range + sessions + reflection-injected routing). Express runs with cors + etag + express-session + session-file-store + ejs + body-parser.
Your results will vary with hardware, payload size, I/O, and tuning. Re-run on your own box before quoting.
Don't trust our numbers — run it yourself:
The benchmark script starts ZealPHP, Express, Node raw, and OpenSwoole raw; runs all three workloads against each; then cleans up. Set `WORKERS=8 CONCURRENCY=500` to customize.
Try it — live AI chat, streaming on this server
Powered by the OpenAI Agents SDK + ZealPHP SSE streaming. Multi-agent with tool use, streamed token-by-token.
Why not just use...?
Bold claims. Real code. You decide.
Node.js needs 30 lines for what ZealPHP does in 5
AI token streaming — the core feature of every LLM app. Compare the implementations.
```php
$app->route('/ai/stream', function($response) {
    $response->sse(function($emit) {
        $ch = curl_init($apiUrl);
        // ... set up curl streaming options
        curl_exec($ch);
    });
});
```
```javascript
app.get('/ai/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();
  const response = await fetch(apiUrl, {
    method: 'POST', body: JSON.stringify({...}),
  });
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value);
    // parse SSE lines, extract tokens...
    res.write(`data: ${token}\n\n`);
  }
  res.end();
});
```
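Both handlers ultimately write the same wire format. As a minimal sketch (the helper name is ours, not a ZealPHP or Express API), here is what one Server-Sent Event frame looks like, per the SSE specification:

```php
<?php
// Hypothetical helper: format one SSE frame.
// A frame is an optional "event:" line plus one "data:" line per
// payload line, terminated by a blank line.
function sse_frame(string $data, ?string $event = null): string {
    $frame = '';
    if ($event !== null) {
        $frame .= "event: {$event}\n";
    }
    // multi-line payloads become multiple data: lines
    foreach (explode("\n", $data) as $line) {
        $frame .= "data: {$line}\n";
    }
    return $frame . "\n";
}
```

Whether `$emit($token, 'token')` or `res.write(...)` produces it, the browser's `EventSource` sees the same framing on the wire.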
Expressive PHP with coroutine-grade concurrency
~90k req/s in our 4-worker JSON benchmark, with reflection-based injection, auto-serialization, and no boilerplate. Numbers vary by workload — see methodology below.
```php
$app->route('/users/{id}', function($id) {
    // auto JSON. auto 200. done.
    return ['user' => User::find($id)];
});
```
```go
func getUser(w http.ResponseWriter, r *http.Request) {
	id := chi.URLParam(r, "id")
	user, err := FindUser(id)
	if err != nil {
		http.Error(w, err.Error(), 500)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]any{
		"user": user,
	})
}
```
Multi-process workers, coroutines per worker
ZealPHP inherits OpenSwoole's architecture: N worker processes, each running thousands of coroutines on a single reactor loop. OpenSwoole is the runtime; ZealPHP is the framework layer. Real connection counts depend on workload, OS limits, and tuning — measure for your case.
```shell
# 16 workers × thousands of coroutines; each coroutine yields on I/O automatically
ZEALPHP_WORKERS=16 php app.php
```

```php
// Store: cross-worker shared state (no Redis)
Store::set('cache', $key, $data);
$data = Store::get('cache', $key);
```

```shell
# Single process, async but not parallel: need Gunicorn + multiple workers,
# Redis for any shared state, Celery for background tasks
gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker
```

```python
# Shared state? Add Redis.
redis_client = redis.Redis()
redis_client.set(key, json.dumps(data))
```
Migrate your PHP codebase to async
Your existing code works unchanged. session_start(), header(), $_GET — all overridden via uopz to work inside the coroutine runtime.
Before — several runtime services
- Nginx / Apache
- PHP-FPM (cold start every request)
- Redis (sessions, cache, pub/sub)
- Socket.io / Ratchet (WebSocket)
- Supervisor / cron (background tasks)
- SSE proxy or polling
After — 1 process
php app.php
- HTTP + WebSocket + SSE server
- Coroutine-safe sessions (no Redis)
- Shared memory across workers
- Task workers (no cron/supervisor)
- Persistent connections, no cold starts
- Many WordPress sites run via the CGI worker bridge — see the showcase repo
Depending on the app, ZealPHP can collapse several of these into a single OpenSwoole process. Not all stacks fit.
The migration ladder — go at your own pace
1. Compat mode: `App::superglobals(true); $app->setFallback(fn() => App::includeFile('index.php'));` Everything you know works: `$_GET`, `session_start()`, `echo`.
2. File-based pages: drop files under `public/` (`public/about.php` → `/about`, `public/users/list.php` → `/users/list`).
3. File-based API: drop files under `api/` (`api/users/get.php` → `GET /api/users`, `api/users/post.php` → `POST /api/users`).
4. Native features: `$app->route('/ws/chat', ...)`, `$response->sse(...)`, `yield $html`.
5. Full coroutine mode: `App::superglobals(false);` with `G::instance()` for per-coroutine isolation. Each worker handles thousands of concurrent requests without blocking.

Quick Start
From zero to running server in 60 seconds.
- Start the server with `php app.php` and open http://localhost:8080; restart after editing routes.
- The full demo runs at http://localhost:9501 with admin, login, and the REST API all working.

Everything you need
Every feature is a live running example — click any card to explore.
- Routing: Flask-style routes with reflection-based injection. Zero config, zero boilerplate.
- Responses (auto-serialize): Return int → status, array → JSON, Generator → stream. The framework does the right thing.
- Coroutines (`go()` + `Channel`): Fan out to multiple AI models in parallel. Merge responses. Zero callback hell.
- Streaming (`yield` · SSE): Stream AI tokens as they generate. `yield` is your streaming primitive. SSR, SSE, `stream()` built-in.
- WebSocket (`App::ws()`): Real-time agent-to-user comms. Multi-user AI sessions, live collaboration, binary frames.
- Middleware (PSR-15): CORS, ETag/304, gzip. PSR-15 compatible, so you can drop in any middleware package.
- Sessions (drop-in): Coroutine-safe sessions. Your existing `session_start()` code just works via uopz.
- Store (`OpenSwoole\Table`): Share AI conversation state across workers. Cross-worker shared memory, no Redis needed.
- Timers (`tick()` · `after()`): Schedule recurring AI tasks. Polling, cleanup, model warmup, health checks.
- HTTP (HTTP/1.1): Full HTTP/1.1 compliance. HEAD, OPTIONS, Range, redirects, CORS, ETag, gzip all built-in.
- Templates (`renderStream()`): SSR streaming templates. Compose views with `yield from`. `renderStream()` for progressive HTML.
- ZealAPI (file-based): Drop a PHP file in `api/`. It becomes a route. File-based REST, the simplest API pattern.
- Legacy Apps (WordPress): Run WordPress unmodified. The CGI worker provides true global scope. Apache mod_php compatibility.
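The `yield from` composition mentioned in the Templates feature works in plain PHP; this sketch (our own example, not ZealPHP's template API) shows the pattern, collecting chunks into a string where a streaming server would flush each one:

```php
<?php
// Compose partial templates with generators; each yield is one chunk.
function header_section(): Generator {
    yield "<head><title>Demo</title></head>";
}

function page(): Generator {
    yield "<html>";
    yield from header_section();   // delegate to a partial
    yield "<body><h1>Hello</h1></body>";
    yield "</html>";
}

$html = '';
foreach (page() as $chunk) {
    $html .= $chunk;               // a streaming server would flush here
}
```

With SSR streaming, the browser starts rendering the `<head>` chunk before the body is even generated.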
Return anything, get the right response
ZealPHP inspects your return type and does the right thing — no boilerplate.
| Return | Result | Example |
|---|---|---|
| `int` | HTTP status code | `return 404;` `return 201;` |
| `array` / `object` | Auto-serialized as JSON | `return ['users' => $list];` |
| `string` | HTML body | `return '<h1>Hello</h1>';` |
| `Generator` | SSR streaming (each `yield` sent immediately) | `yield '<head>'; yield $body;` |
| `void` + `echo` | Buffered output via `ob_get_clean()` | `echo "Hello"; echo " World";` |
| `ResponseInterface` | PSR-7 response used directly | `return new Response(...);` |
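The dispatch table above can be sketched in plain PHP. This is our illustration of the pattern, not ZealPHP's actual internals, and it omits the PSR-7 and `ob_get_clean()` cases for brevity:

```php
<?php
// Sketch: map a handler's return value to [status, content-type, body].
function dispatch(mixed $result): array {
    if (is_int($result)) {                        // int -> bare status code
        return [$result, 'text/plain', ''];
    }
    if ($result instanceof Generator) {           // Generator -> streamed body
        $body = '';                               // (checked before is_object:
        foreach ($result as $chunk) {             //  a Generator is an object)
            $body .= $chunk;                      // a real server flushes per yield
        }
        return [200, 'text/html', $body];
    }
    if (is_array($result) || is_object($result)) {// array/object -> JSON
        return [200, 'application/json', json_encode($result)];
    }
    return [200, 'text/html', (string) $result];  // string -> HTML body
}
```

Note the ordering: the `Generator` check must precede the generic object check, or streamed responses would be JSON-encoded instead.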
Try it — convert your config to ZealPHP
Paste an Apache .htaccess or nginx config. AI converts it to `app.php` in real time.