Alpha: ZealPHP is early-stage and under active development. APIs may change between minor versions until v1.0. Feedback and bug reports are welcome on GitHub.

ZealPHP

The PHP Runtime for AI Web Apps

Stream AI responses in 5 lines. WebSocket, SSE, shared memory, task workers —
one server, one process. Coroutine-native concurrency with PHP's developer experience.

Upgrade your existing PHP codebase to async — start without rewriting, migrate at your own pace.

Why? covers the problem PHP-FPM can't solve, where ZealPHP fits vs Laravel Octane / FrankenPHP / RoadRunner, and when it's the wrong choice.

app.php — stream AI tokens
$app->route('/ai/chat', function($response) {
    $prompt = $_GET['prompt'] ?? '';
    $response->sse(function($emit) use ($prompt) {
        $tokens = call_ai_api($prompt);
        foreach ($tokens as $token) {
            $emit($token, 'token');
        }
    });
});
Browser output

And it's fast — here's the throughput on 4 workers, full middleware stack:

Setup: 4 workers, full middleware stack, ab -n 50000 -c 200 -k, same machine, no DB · full details in PERF.md · reproduce locally

117k req/s raw text · avg 1.7 ms
106k req/s JSON · avg 1.9 ms
50k req/s template · avg 4.0 ms
0 failures across 150k requests
Framework | Raw text | JSON API | Template

Runtime (no framework, no middleware)
OpenSwoole raw | 142k | 138k |
Node.js raw http | 129k | 132k |

Full framework (CORS + ETag + sessions + routing + templates)
ZealPHP built-in | 117k | 106k | 50k
Express.js + 5 npm pkgs | 20k | 22k | 12k

Other PHP frameworks (community benchmarks)
Slim 4 | ~4k req/s | |
Symfony 7 | ~2k req/s | |
Laravel 11 | ~500 req/s | |

The runtime is already faster. OpenSwoole's bare HTTP server hits 142k req/s text · 138k JSON — versus Node's 129k · 132k. +10% on text, +5% on JSON, before any framework loads.

ZealPHP keeps 82% of that with full PSR-15 middleware on top. Express keeps 15% of raw Node's. End result — ZealPHP with full middleware reaches 91% of bare Node http's throughput.

Concurrency sweep, latency percentiles, methodology, reproduction recipes & caveats →

The PHP we love. The execution model we needed.

LAMP shipped the web — Apache, mod_php, then PHP-FPM. Twenty-five years of request-per-process: spawn a worker, run the code, die when done. Shared-nothing by design.

It still works. But it can't stream AI tokens. It can't push WebSocket events. It can't share state between requests without bolting on Redis. Every “real-time” feature your customers ask for needs another service.

ZealPHP keeps the PHP. Swaps the execution model. One process, coroutines, persistent state — and your existing PHP codebase still runs.

Try it — live AI chat, streaming on this server

Powered by the OpenAI Agents SDK + ZealPHP SSE streaming. Multi-agent with tool use, streamed token-by-token.

ZealPHP AI Chat Demo
Hi! I'm running on ZealPHP's SSE streaming. Ask me anything — watch the tokens stream in real-time.
View source code → The full backend powering this chat
# examples/agents/chat_agent.py
import json

from agents import Agent, Runner, function_tool, SQLiteSession

@function_tool
def get_zealphp_reference(query: str) -> str:
    """Look up ZealPHP docs — routing, streaming, store, etc."""
    return match_sections(reference, query)

agent = Agent(
    name="ZealPHP Assistant",
    model="gpt-4.1-mini",
    instructions="You are a ZealPHP expert. Output raw HTML.",
    tools=[get_zealphp_reference],
)

# Persistent conversation threads via SQLiteSession
session = SQLiteSession(db_path=DB_PATH, session_id=thread_id)

# Stream tokens as SSE events to stdout
result = Runner.run_streamed(agent, input=message, session=session)
async for event in result.stream_events():
    # only raw response events carry token deltas
    if event.type == "raw_response_event" and event.data.type == "response.output_text.delta":
        print(f"data: {json.dumps({'token': event.data.delta})}\n")

Why not just use...?

Bold claims. Real code. You decide.

Node.js needs 25+ lines for what ZealPHP does in 7

AI token streaming — the core feature of every LLM app. Compare the implementations.

ZealPHP — 7 lines
$app->route('/ai/stream', function($response) {
    $response->sse(function($emit) {
        $ch = curl_init($apiUrl);
        // ... setup curl streaming
        curl_exec($ch);
    });
});
Node.js — 25+ lines
app.get('/ai/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders();

  const response = await fetch(apiUrl, {
    method: 'POST', body: JSON.stringify({...}),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value);
    // parse SSE lines, extract tokens...
    res.write(`data: ${token}\n\n`);
  }
  res.end();
});

Expressive PHP with coroutine-grade concurrency

~106k req/s in our 4-worker JSON benchmark, with reflection-based injection, auto-serialization, and no boilerplate. Numbers vary by workload — see methodology below.

ZealPHP — return anything
$app->route('/users/{id}', function($id) {
    return ['user' => User::find($id)];
    // auto JSON. auto 200. done.
});
Go — manual everything
func getUser(w http.ResponseWriter, r *http.Request) {
    id := chi.URLParam(r, "id")
    user, err := FindUser(id)
    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(map[string]any{
        "user": user,
    })
}

Multi-process workers, coroutines per worker

ZealPHP inherits OpenSwoole's architecture: N worker processes, each running thousands of coroutines on a single reactor loop. OpenSwoole is the runtime; ZealPHP is the framework layer. Real connection counts depend on workload, OS limits, and tuning — measure for your case.

ZealPHP — true parallelism
// 16 workers × thousands of coroutines
// Shared memory across workers (no Redis)
// Each coroutine yields on I/O automatically
ZEALPHP_WORKERS=16 php app.php

// Store: cross-worker shared state
Store::set('cache', $key, $data);
$data = Store::get('cache', $key);
FastAPI — single process limits
# Single process, async but not parallel
# Need Gunicorn + multiple workers
# Need Redis for any shared state
# Need Celery for background tasks
gunicorn app:app -w 4 -k uvicorn.workers.UvicornWorker

# Shared state? Add Redis.
redis_client = redis.Redis()
redis_client.set(key, json.dumps(data))
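To make the shared-state point concrete, here is a hedged sketch that caches an upstream AI reply in the Store using the same Store::set()/Store::get() calls shown above; the 'cache' table name and the call_ai_api() helper are placeholders.

// Any worker can serve a cached reply; no Redis round-trip needed.
$app->route('/ai/cached', function() {
    $prompt = $_GET['prompt'] ?? '';
    $key = md5($prompt);

    if ($cached = Store::get('cache', $key)) {
        return ['reply' => $cached, 'cache' => 'hit'];
    }

    $reply = call_ai_api($prompt);        // placeholder upstream call
    Store::set('cache', $key, $reply);    // visible to every worker process
    return ['reply' => $reply, 'cache' => 'miss'];
});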
Fresh machine? One line installs PHP 8.3 + OpenSwoole + uopz + composer:
$ curl -fsSL https://php.zeal.ninja/install.sh | sudo bash

Ubuntu/Debian/WSL2 · macOS · auto-detects unsupported distros and prints manual steps · inspect first

Quick Start

PHP installed? From zero to running server in 60 seconds.

$ composer create-project sibidharan/zealphp-project:^0.2.7 my-app
$ cd my-app && php app.php
Server running at http://localhost:8080
Includes CLAUDE.md for AI-assisted development. Restart with php app.php after editing routes.
$ git clone https://github.com/sibidharan/zealphp.git
$ cd zealphp && composer install && php app.php
This very site, running locally at http://localhost:8080
The framework repo IS the OSS website — every page is a live, working example of a feature.
$ git clone https://github.com/sibidharan/zealphp-wordpress.git
$ cd zealphp-wordpress && composer install
$ php app.php
WordPress at http://localhost:9501 — admin, login, REST API all working
Zero WordPress modifications. CGI worker provides Apache mod_php compatibility. See Legacy Apps.
Requires PHP 8.3+ · OpenSwoole 22.1+ · uopz · composer · Install help →

Everything you need

Every feature is a live running example — click any card to explore.

route()

Routing

Flask-style routes with reflection-based injection. Zero config, zero boilerplate.

Explore →
📦
auto-serialize

Responses

Return int → status, array → JSON, Generator → stream. Framework does the right thing.

Explore →
🔀
go() + Channel

Coroutines

Fan out to multiple AI models in parallel. Merge responses. go() + Channel, zero callback hell (sketched below the cards).

Explore →
📡
yield · SSE

Streaming

Stream AI tokens as they generate. yield is your streaming primitive. SSR, SSE, stream() built-in.

Explore →
🔌
App::ws()

WebSocket

Real-time agent-to-user comms. Multi-user AI sessions, live collaboration, binary frames.

Explore →
🛡️
PSR-15

Middleware

CORS, ETag/304, gzip. PSR-15 compatible — drop in any middleware package.

Explore →
🗄️
drop-in

Sessions

Coroutine-safe sessions. Your existing session_start() code just works via uopz.

Explore →
🗃️
OpenSwoole\Table

Store

Share AI conversation state across workers. Cross-worker shared memory — no Redis needed.

Explore →
⏱️
tick() · after()

Timers

Schedule recurring AI tasks. Polling, cleanup, model warmup, health checks.

Explore →
🌐
HTTP/1.1

HTTP

Full HTTP/1.1 compliance. HEAD, OPTIONS, Range, redirects, CORS, ETag, gzip — all built-in.

Explore →
📝
renderStream()

Components

SSR streaming components. Compose views with yield from. renderStream() for progressive HTML.

Explore →
🔗
file-based

REST API

Drop a PHP file in api/. It becomes a route. File-based REST — the simplest API pattern.

Explore →
🏗️
WordPress

Legacy Apps

Run WordPress unmodified. CGI worker provides true global scope. Apache mod_php compatibility.

Explore →
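As referenced in the Coroutines card above, here is a hedged sketch of the fan-out pattern, assuming OpenSwoole's Coroutine\Channel plus the go() helper the card names; the model-call functions are placeholders.

// Query two models in parallel and merge both answers.
$app->route('/ai/compare', function() {
    $prompt = $_GET['prompt'] ?? '';
    $chan = new OpenSwoole\Coroutine\Channel(2);

    go(function() use ($chan, $prompt) {
        $chan->push(['model' => 'a', 'reply' => call_model_a($prompt)]);  // placeholder
    });
    go(function() use ($chan, $prompt) {
        $chan->push(['model' => 'b', 'reply' => call_model_b($prompt)]);  // placeholder
    });

    return [$chan->pop(), $chan->pop()];  // array return is auto-serialized as JSON
});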

Bring your PHP codebase along

session_start(), header(), $_GET, echo — overridden via uopz so existing code runs unchanged inside the coroutine runtime. Move at your own pace: drop your whole app in, or rewrite endpoint-by-endpoint.
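A minimal sketch of what running unchanged looks like, assuming a classic script dropped straight into the project; uopz intercepts these calls so they behave as they did under mod_php or PHP-FPM.

<?php
// legacy.php: classic shared-nothing-style code, run as-is inside the coroutine runtime
session_start();
header('Content-Type: text/html; charset=utf-8');

$name = $_GET['name'] ?? 'world';
$_SESSION['visits'] = ($_SESSION['visits'] ?? 0) + 1;

echo '<h1>Hello, ' . htmlspecialchars($name) . '</h1>';
echo '<p>Visits this session: ' . $_SESSION['visits'] . '</p>';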

Today: Nginx + PHP-FPM + Redis + Socket.io + cron + …

6 services, 6 failure points, 6 sets of config.

On ZealPHP: php app.php

HTTP + WebSocket + SSE + sessions + shared memory + task workers — one process.

The migration ladder has 5 rungs (0 → 4). Rung 0 is "drop in your existing app, run php app.php." Rung 4 is full coroutine mode — 117k req/s on 4 workers. Most teams stay on rungs 1–3 indefinitely; the upgrade path is opt-in, not forced.

See the full migration path → WordPress on ZealPHP →

Return anything, get the right response

ZealPHP inspects your return type and does the right thing — no boilerplate.

Return | Result | Example
int | HTTP status code | return 404; return 201;
array / object | Auto-serialized as JSON | return ['users' => $list];
string | HTML body | return '<h1>Hello</h1>';
Generator | SSR streaming (each yield sent immediately) | yield '<head>'; yield $body;
void + echo | Buffered output via ob_get_clean() | echo "Hello"; echo " World";
ResponseInterface | PSR-7 response used directly | return new Response(...);
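A quick sketch putting the table to work; handler bodies and paths are illustrative.

// One route per return type from the table above.
$app->route('/healthz', function() {
    return 204;                                        // int: HTTP status code
});
$app->route('/users', function() {
    return ['users' => ['ada', 'linus']];              // array: auto-serialized JSON
});
$app->route('/page', function() {
    yield '<head><title>Streamed</title></head>';      // Generator: each yield flushed immediately
    yield '<body><h1>Progressive HTML</h1></body>';
});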