Alpha — ZealPHP is early-stage and under active development. APIs may change between minor versions until v1.0. Feedback and bug reports are welcome on GitHub.

ZealPHP

The PHP Runtime for AI Web Apps

Stream AI responses in 5 lines. WebSocket, SSE, shared memory, task workers —
one server, one process. Coroutine-native concurrency with PHP's developer experience.

Upgrade your existing PHP codebase to async — start without rewriting, migrate at your own pace.

app.php — stream AI tokens
$app->route('/ai/chat', function($response) {
    $response->sse(function($emit) {
        $tokens = call_ai_api($prompt); // $prompt comes from the request (elided for brevity)
        foreach ($tokens as $token) {
            $emit($token, 'token');
        }
    });
});
Browser output
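On the wire, each `$emit($token, 'token')` becomes a standard Server-Sent Events frame: an `event:` line, a `data:` line, then a blank line. The frame layout is the SSE spec; the exact field mapping of `$emit` is our assumption about ZealPHP's output. A minimal client-side parse, sketched in Python:

```python
def parse_sse(stream: str) -> list[dict]:
    """Split raw SSE text into frames and extract event/data fields.

    Frames are separated by a blank line, per the SSE specification.
    """
    events = []
    for frame in stream.split("\n\n"):
        if not frame.strip():
            continue
        record = {"event": "message", "data": ""}  # "message" is the SSE default
        for line in frame.split("\n"):
            if line.startswith("event:"):
                record["event"] = line[len("event:"):].strip()
            elif line.startswith("data:"):
                record["data"] += line[len("data:"):].strip()
        events.append(record)
    return events

raw = "event: token\ndata: Hello\n\nevent: token\ndata: world\n\n"
print(parse_sse(raw))
# [{'event': 'token', 'data': 'Hello'}, {'event': 'token', 'data': 'world'}]
```

In a browser, `EventSource` does this parsing for you; the sketch only shows what crosses the wire.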
Method: 4 workers, full middleware stack, ab -n 50000 -c 200 -k, same machine, no DB. See PERF.md to reproduce locally.
70k req/s text · avg 2.9 ms
67k req/s JSON · avg 3.0 ms
51k req/s template · avg 3.9 ms
0 failures / 150k reqs
Framework                                   Raw text   JSON API   Template
Runtime (no framework, no middleware):
  OpenSwoole raw                            205k       213k       —
  Node.js raw http                          260k       264k       —
Full framework (CORS + ETag + sessions + routing + templates):
  ZealPHP built-in                          70k        67k        51k
  Express.js + 5 npm pkgs                   45k        42k        15k
Other PHP frameworks (community benchmarks):
  Slim 4                                    ~4k req/s
  Symfony 7                                 ~2k req/s
  Laravel 11                                ~500 req/s

Methodology: 4-core Linux container, each server tested sequentially, 4 workers, warmed-up, ab -n 50000 -c 200 -k -l. ZealPHP runs with the full PSR-15 stack (CORS + ETag + Range + sessions + reflection-injected routing). Express runs with cors + etag + express-session + session-file-store + ejs + body-parser.
Your results will vary with hardware, payload size, I/O, and tuning. Re-run on your own box before quoting.

Don't trust our numbers — run it yourself:

$ scripts/bench_vs_express.sh

Starts ZealPHP, Express, raw Node, and raw OpenSwoole; benchmarks all three workloads; cleans up after itself. Set WORKERS=8 CONCURRENCY=500 to customize.

Try it — live AI chat, streaming on this server

Powered by the OpenAI Agents SDK + ZealPHP SSE streaming. Multi-agent with tool use, streamed token-by-token.

ZealPHP AI Chat Demo
Hi! I'm running on ZealPHP's SSE streaming. Ask me anything — watch the tokens stream in real-time.
View source code → The full backend powering this chat
# examples/agents/chat_agent.py (excerpt)
import json

from agents import Agent, Runner, function_tool, SQLiteSession

@function_tool
def get_zealphp_reference(query: str) -> str:
    """Look up ZealPHP docs — routing, streaming, store, etc."""
    return match_sections(reference, query)

agent = Agent(
    name="ZealPHP Assistant",
    model="gpt-4.1-mini",
    instructions="You are a ZealPHP expert. Output raw HTML.",
    tools=[get_zealphp_reference],
)

# Persistent conversation threads via SQLiteSession
session = SQLiteSession(db_path=DB_PATH, session_id=thread_id)

# Stream tokens as SSE data frames to stdout (the embedded "\n" plus
# print's own newline gives the blank line that terminates each frame)
result = Runner.run_streamed(agent, input=message, session=session)
async for event in result.stream_events():
    if event.data.type == "response.output_text.delta":
        print(f"data: {json.dumps({'token': event.data.delta})}\n")

Why not just use...?

Bold claims. Real code. You decide.

Node.js needs 25+ lines for what ZealPHP does in 7

AI token streaming — the core feature of every LLM app. Compare the implementations.

ZealPHP — 7 lines
$app->route('/ai/stream', function($response) {
    $response->sse(function($emit) {
        $ch = curl_init($apiUrl);
        // ... setup curl streaming
        curl_exec($ch);
    });
});
Node.js — 25+ lines
app.get('/ai/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  const response = await fetch(apiUrl, {
    method: 'POST', body: JSON.stringify({...}),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value);
    // parse SSE lines, extract tokens...
    res.write(`data: ${token}\n\n`);
  }
  res.end();
});

Expressive PHP with coroutine-grade concurrency

~67k req/s in our 4-worker JSON benchmark, with reflection-based injection, auto-serialization, and no boilerplate. Numbers vary by workload — see the methodology above.

ZealPHP — return anything
$app->route('/users/{id}', function($id) {
    return ['user' => User::find($id)];
    // auto JSON. auto 200. done.
});
Go — manual everything
func getUser(w http.ResponseWriter, r *http.Request) {
    id := chi.URLParam(r, "id")
    user, err := FindUser(id)
    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]any{
        "user": user,
    })
}

Multi-process workers, coroutines per worker

ZealPHP inherits OpenSwoole's architecture: N worker processes, each running thousands of coroutines on a single reactor loop. OpenSwoole is the runtime; ZealPHP is the framework layer. Real connection counts depend on workload, OS limits, and tuning — measure for your case.

ZealPHP — true parallelism
// 16 workers × thousands of coroutines
// Shared memory across workers (no Redis)
// Each coroutine yields on I/O automatically
ZEALPHP_WORKERS=16 php app.php

// Store: cross-worker shared state
Store::set('cache', $key, $data);
$data = Store::get('cache', $key);
FastAPI — single process limits
# Single process, async but not parallel
# Need Gunicorn + multiple workers
# Need Redis for any shared state
# Need Celery for background tasks
gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker

# Shared state? Add Redis.
redis_client = redis.Redis()
redis_client.set(key, json.dumps(data))

Migrate your PHP codebase to async

Your existing code works unchanged. session_start(), header(), $_GET — all overridden via uopz to work inside the coroutine runtime.

Before — several runtime services

  • Nginx / Apache
  • PHP-FPM (cold start every request)
  • Redis (sessions, cache, pub/sub)
  • Socket.io / Ratchet (WebSocket)
  • Supervisor / cron (background tasks)
  • SSE proxy or polling

After — 1 process

php app.php
  • HTTP + WebSocket + SSE server
  • Coroutine-safe sessions (no Redis)
  • Shared memory across workers
  • Task workers (no cron/supervisor)
  • Persistent connections, no cold starts
  • Many WordPress sites run via the CGI worker bridge — see the showcase repo

Depending on the app, ZealPHP can collapse several of these into a single OpenSwoole process. Not all stacks fit.

The migration ladder — go at your own pace

0
Drop in your entire app
App::superglobals(true); $app->setFallback(fn() => App::includeFile('index.php'));
Most existing PHP apps — WordPress, Drupal, custom — run unchanged on OpenSwoole through the CGI worker bridge.
1
Write LAMP-style PHP in public/
public/about.php → /about  ·  public/users/list.php → /users/list
File-based routing. $_GET, session_start(), echo — everything you know works.
2
Add REST APIs with api/
api/users/get.php → GET /api/users  ·  api/users/post.php → POST /api/users
Drop a PHP file, get a REST endpoint. ZealAPI auto-routes by filename. Zero config.
3
Use framework routes for new features
$app->route('/ws/chat', ...); $response->sse(...); yield $html;
WebSocket, SSE streaming, coroutines — available when you're ready, not forced upfront.
4
Full coroutine mode
App::superglobals(false); // thousands of concurrent requests per worker
Replace superglobals with G::instance(). Per-coroutine isolation. Each worker handles many concurrent requests without blocking.
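Step 2's filename-to-route convention (api/users/get.php → GET /api/users) can be sketched as a pure mapping function. This is illustrative only — not ZealPHP's actual router code — showing how the directory path becomes the URL and the filename becomes the HTTP verb:

```python
import os

def zealapi_route(path: str) -> tuple[str, str]:
    """Map an api/ file path to (HTTP method, route) per the ZealAPI convention.

    api/users/get.php  -> ("GET",  "/api/users")
    api/users/post.php -> ("POST", "/api/users")
    """
    rel = path[len("api/"):] if path.startswith("api/") else path
    base, _ = os.path.splitext(rel)   # "users/get"
    parts = base.split("/")           # ["users", "get"]
    method = parts[-1].upper()        # the filename is the HTTP verb
    route = "/".join(parts[:-1])      # the remaining directories form the path
    return method, "/api/" + route

print(zealapi_route("api/users/get.php"))  # ('GET', '/api/users')
```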
Why ZealPHP? → WordPress on ZealPHP →

Quick Start

From zero to running server in 60 seconds.

$ composer create-project sibidharan/zealphp-project:^0.2.0 my-app
$ cd my-app && php app.php
Server running at http://localhost:8080
Includes CLAUDE.md for AI-assisted development. Restart with php app.php after editing routes.
$ git clone https://github.com/sibidharan/zealphp.git
$ cd zealphp && composer install && php app.php
This very site, running locally at http://localhost:8080
The framework repo IS the OSS website — every page is a live, working example of a feature.
$ git clone https://github.com/sibidharan/zealphp-wordpress.git
$ cd zealphp-wordpress && composer install
$ php app.php
WordPress at http://localhost:9501 — admin, login, REST API all working
Zero WordPress modifications. CGI worker provides Apache mod_php compatibility. See Legacy Apps.
Requires: PHP 8.3+ · OpenSwoole 22.1+ · uopz · Composer. Install help →

Everything you need

Every feature is a live running example — click any card to explore.

route()

Routing

Flask-style routes with reflection-based injection. Zero config, zero boilerplate.

Explore →
📦
auto-serialize

Responses

Return int → status, array → JSON, Generator → stream. Framework does the right thing.

Explore →
🔀
go() + Channel

Coroutines

Fan out to multiple AI models in parallel. Merge responses. go() + Channel, zero callback hell.

Explore →
📡
yield · SSE

Streaming

Stream AI tokens as they generate. yield is your streaming primitive. SSR, SSE, stream() built-in.

Explore →
🔌
App::ws()

WebSocket

Real-time agent-to-user comms. Multi-user AI sessions, live collaboration, binary frames.

Explore →
🛡️
PSR-15

Middleware

CORS, ETag/304, gzip. PSR-15 compatible — drop in any middleware package.

Explore →
🗄️
drop-in

Sessions

Coroutine-safe sessions. Your existing session_start() code just works via uopz.

Explore →
🗃️
OpenSwoole\Table

Store

Share AI conversation state across workers. Cross-worker shared memory — no Redis needed.

Explore →
⏱️
tick() · after()

Timers

Schedule recurring AI tasks. Polling, cleanup, model warmup, health checks.

Explore →
🌐
HTTP/1.1

HTTP

Full HTTP/1.1 compliance. HEAD, OPTIONS, Range, redirects, CORS, ETag, gzip — all built-in.

Explore →
📝
renderStream()

Templates

SSR streaming templates. Compose views with yield from. renderStream() for progressive HTML.

Explore →
🔗
file-based

ZealAPI

Drop a PHP file in api/. It becomes a route. File-based REST — the simplest API pattern.

Explore →
🏗️
WordPress

Legacy Apps

Run WordPress unmodified. CGI worker provides true global scope. Apache mod_php compatibility.

Explore →

Return anything, get the right response

ZealPHP inspects your return type and does the right thing — no boilerplate.

Return              Result                                        Example
int                 HTTP status code                              return 404; return 201;
array / object      Auto-serialized as JSON                       return ['users' => $list];
string              HTML body                                     return '<h1>Hello</h1>';
Generator           SSR streaming (each yield sent immediately)   yield '<head>'; yield $body;
void + echo         Buffered output via ob_get_clean()            echo "Hello"; echo " World";
ResponseInterface   PSR-7 response used directly                  return new Response(...);
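The dispatch rules above are language-agnostic. Here is a minimal sketch of the same type-driven negotiation in Python — illustrative of the idea, not ZealPHP's implementation:

```python
import json
from types import GeneratorType

def dispatch(result):
    """Map a handler's return value to (status, content_type, body),
    mirroring the table above: int -> status code, dict/list -> JSON,
    str -> HTML, generator -> streamed chunks."""
    if isinstance(result, int):
        return result, "text/plain", ""
    if isinstance(result, (dict, list)):
        return 200, "application/json", json.dumps(result)
    if isinstance(result, str):
        return 200, "text/html", result
    if isinstance(result, GeneratorType):
        # A real server flushes each chunk as it's yielded; joined here for brevity
        return 200, "text/html", "".join(result)
    raise TypeError(f"unsupported return type: {type(result)!r}")

print(dispatch({"users": []}))  # (200, 'application/json', '{"users": []}')
```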

Try it — convert your config to ZealPHP

Paste Apache .htaccess or nginx config. AI converts it to app.php in real-time.

Powered by gpt-4.1-mini · Cached for 1hr · Full docs →