NGINX HTTP Push Module is now obsolete. It has been remolded, reworked, and rebooted into Nchan, which is 99% backwards compatible with the Push Module. You should really upgrade, unless you don't want to.
This module turns Nginx into an adept HTTP Push and Comet server. It takes care of all the connection juggling and exposes a simple interface to broadcast messages to clients via plain old HTTP requests. This lets you write live-updating asynchronous web applications as easily as their old-school synchronous counterparts, since your code does not need to manage requests with delayed responses.
NHPM fully implements the Basic HTTP Push Relay Protocol, a no-frills publisher/subscriber protocol centered on uniquely identifiable channels. It is an order of magnitude simpler and more basic than similar protocols (such as Bayeux). However, this basic functionality together with the flexibility of the server configuration make it possible to reformulate most HTTP Push use cases in Basic HTTP Push Relay Protocol language with very little application- and client-side programming overhead.
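To illustrate how little configuration a basic pub/sub setup needs, here is a minimal sketch. The location paths and the query-string parameter are illustrative, not prescribed by the module:

```nginx
# Minimal pub/sub setup: anything sent to /publish?id=... is
# broadcast to clients waiting on /subscribe?id=... with the same id.
http {
    server {
        listen 80;

        location /publish {
            set $push_channel_id $arg_id;  # channel identified by ?id=
            push_publisher;
        }

        location /subscribe {
            set $push_channel_id $arg_id;  # same id joins the same channel
            push_subscriber long-poll;     # hold the request open until a message arrives
        }
    }
}
```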
You're writing a live-updating web application. Maybe it's some sort of chat, a multiplayer Flash game, a live feed reader, or maybe it's a realtime HTCPCP teapot controller. Either way, you won't have status updates come only when the user refreshes a page, and polling the server every couple of seconds seems to you ugly and insufficient. But you don't quite want to commit to writing your application in any of the available asynchronous scripted web server frameworks. You're also not crazy about CometD, maybe because you think the Bayeux protocol is overkill.
Nginx HTTP Push Module is distributed under the MIT License.
Modules are added by compiling them along with the Nginx source.
Download the push module, untar it, and run:
./configure --add-module=path/to/nginx_http_push_module ...
NHPM is compatible with Nginx versions 0.8 and above (tested up to 1.8).
A token uniquely identifying a communication channel. Must be present in the context of the push_subscriber and push_publisher directives. Example:
set $push_channel_id $arg_id;  # channel id is now the "id" url query string parameter (/foo/bar?id=channel_id_string)
Defines a server or location as a subscriber. This location represents a subscriber's interface to a channel's message queue. The queue is traversed automatically via entity-caching request headers (If-Modified-Since and If-None-Match), beginning with the oldest available message. Requests for upcoming messages are handled in accordance with the setting provided. See the protocol documentation for a detailed description.
The directive value controls server behavior when a subscriber requests a message that has yet to arrive. Naturally, long-poll long-polls the request, while interval-poll is responded to immediately with a 304 Not Modified status code until the requested message becomes available.
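For instance, a channel could be exposed through both mechanisms at different locations (the paths here are illustrative):

```nginx
location /activity/long-poll {
    set $push_channel_id $arg_id;
    push_subscriber long-poll;      # request is held open until a message arrives
}

location /activity/interval-poll {
    set $push_channel_id $arg_id;
    push_subscriber interval-poll;  # 304 Not Modified until a message is ready
}
```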
Controls how multiple subscriber requests to a channel (identified by some common channel id) are handled. The values work as follows:
Defines a server or location as a message publisher. Requests to a publisher location are treated as messages to be sent to subscribers. See the protocol documentation for a detailed description.
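A publisher location can be as simple as the following sketch (path and parameter name are illustrative):

```nginx
location /publish {
    set $push_channel_id $arg_id;  # publish to the channel named by ?id=
    push_publisher;
}
```

A message can then be published with an ordinary HTTP request, e.g. `curl -d 'hello' 'http://localhost/publish?id=mychannel'`; the request body becomes the message delivered to subscribers.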
Whether or not message queuing is enabled. "Off" is equivalent to the setting
The size of the memory chunk this module will use for all message queuing and buffering.
The minimum number of messages to store per channel. A channel's message buffer will retain at least this many most recent messages.
The maximum number of messages to store per channel. A channel's message buffer will retain at most this many most recent messages.
When enabled, the oldest message in a channel's message queue is deleted as soon as it has been received by a subscriber, provided the channel's message buffer holds more than push_min_message_buffer_length messages. Avoid this directive if possible, as it violates subscribers' assumptions of GET request idempotence.
The length of time a message may be queued before it is considered expired. If you do not want messages to expire, set this to 0. Applicable only if a push_publisher is present in this or a child context.
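Taken together, the memory and buffering directives might be tuned like this (the values are illustrative, not recommendations):

```nginx
push_max_reserved_memory       32M;  # shared memory for all message queuing
push_min_message_buffer_length 5;    # retain at least the 5 most recent messages
push_max_message_buffer_length 20;   # retain at most 20 messages per channel
push_message_timeout           2h;   # messages expire after 2 hours (0 = never)
```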
Whether or not a subscriber may create a channel by making a request to a push_subscriber location. If set to on, a publisher must send a request before a subscriber can request messages on the channel. Otherwise, all subscriber requests to nonexistent channels will get a 403 Forbidden response.
Because settings are bound to locations and not individual channels, it is useful to be able to have channels that can be reached only from some locations and never others. That's where this setting comes in. Think of it as a prefix string for the channel id.
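For example, two applications could share a server without their channel ids ever colliding (group names and paths are illustrative):

```nginx
location /chat/sub {
    push_channel_group "chat";     # channel ids here live in the "chat" namespace
    set $push_channel_id $arg_id;
    push_subscriber;
}

location /game/sub {
    push_channel_group "game";     # id "42" here is distinct from chat's "42"
    set $push_channel_id $arg_id;
    push_subscriber;
}
```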
Maximum permissible channel id length (number of characters). Longer ids will be truncated.
Maximum concurrent subscribers. Pretty self-explanatory.
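Both limits can be set in one place; a sketch, assuming the directive names push_max_channel_id_length and push_max_channel_subscribers (values illustrative):

```nginx
push_max_channel_id_length   64;   # longer channel ids are truncated to 64 characters
push_max_channel_subscribers 100;  # cap concurrent subscribers per channel
```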
For an overview and some configuration examples, take a look at this excellent (but slightly outdated) post by Ilya Grigorik.
This section will be expanded further. Sit tight.
Some browsers mishandle cached responses that rely on the Vary header. (I'm looking at you, Internet Explorer.) For maximum cross-browser compliance, you may wish to forward the Etag headers manually (see dumbchat.js for an example).
When using $arg_PARAMETER variables for a $push_channel_id, keep in mind that Nginx does not parse the value if it is url-encoded.
This section will be expanded further. Sit tight.
Nginx is quite excellent at dealing with open connections. Using this module adds a small amount of overhead -- a dozen or so bytes per channel, and around a hundred per open subscriber request (depending, of course, on the size of the request).
Channel messages are stored either in memory or, when they exceed some reasonable size threshold, in temporary files. If you intend to have millions of channels open with millions of messages that take weeks to expire and be purged, consider increasing the push_max_reserved_memory setting.
CPU usage should be completely unnoticeable. Channels are stored in a red-black tree, and have an O(log(N)) retrieval cost. All other NHPM operations are constant-time, so things should scale quite well.
Oh, hi there. I'm Leo.
Got questions? A successful large-scale deployment? A failing one? Found a bug?
Want to submit a patch? Send money? ...Cake?