=encoding utf-8


=head1 Name

resty.limit.req - Lua module for limiting request rate for OpenResty/ngx_lua.


=head1 Synopsis

    # demonstrate the usage of the resty.limit.req module (alone!)
    http {
        lua_shared_dict my_limit_req_store 100m;

        server {
            location / {
                access_by_lua_block {
                    -- well, we could put the require() and new() calls in our own Lua
                    -- modules to save overhead. here we put them below just for
                    -- convenience.

                    local limit_req = require "resty.limit.req"

                    -- limit the requests under 200 req/sec with a burst of 100 req/sec,
                    -- that is, we delay requests under 300 req/sec and above 200
                    -- req/sec, and reject any requests exceeding 300 req/sec.
                    local lim, err = limit_req.new("my_limit_req_store", 200, 100)
                    if not lim then
                        ngx.log(ngx.ERR,
                                "failed to instantiate a resty.limit.req object: ", err)
                        return ngx.exit(500)
                    end

                    -- the following call must be per-request.
                    -- here we use the remote (IP) address as the limiting key
                    local key = ngx.var.binary_remote_addr
                    local delay, err = lim:incoming(key, true)
                    if not delay then
                        if err == "rejected" then
                            return ngx.exit(503)
                        end
                        ngx.log(ngx.ERR, "failed to limit req: ", err)
                        return ngx.exit(500)
                    end

                    if delay >= 0.001 then
                        -- the 2nd return value holds the number of excess requests
                        -- per second for the specified key. for example, number 31
                        -- means the current request rate is at 231 req/sec for the
                        -- specified key.
                        local excess = err

                        -- the current request exceeds 200 req/sec but stays below
                        -- 300 req/sec, so we intentionally delay it here a bit to
                        -- conform to the 200 req/sec rate.
                        ngx.sleep(delay)
                    end
                }

                # content handler goes here. if it is content_by_lua, then you can
                # merge the Lua code above in access_by_lua into your content_by_lua's
                # Lua handler to save a little bit of CPU time.
            }
        }
    }


=head1 Description

This module provides APIs to help OpenResty/ngx_lua user programmers limit
the request rate using the "leaky bucket" method.

If you want to use multiple different instances of this class at once, or use
one instance of this class together with instances of other classes (like
L<resty.limit.conn>), then you I<must> use the L<resty.limit.traffic> module
to combine them.

This Lua module's implementation is similar to NGINX's standard module
L<ngx_limit_req|http://nginx.org/en/docs/http/ngx_http_limit_req_module.html>.
But this Lua module is more flexible in that it can be used in almost
arbitrary contexts.


=head1 Methods


=head2 new

B<syntax:> C<obj, err = class.new(shdict_name, rate, burst)>

Instantiates an object of this class. The C<class> value is returned by the
call C<require "resty.limit.req">.

This method takes the following arguments:

=over

=item *

C<shdict_name> is the name of the
L<lua_shared_dict|https://github.com/openresty/lua-nginx-module#lua_shared_dict>
shm zone.

It is best practice to use separate shm zones for different kinds of limiters.

=item *

C<rate> is the specified request rate (number per second) threshold.

Requests exceeding this rate (and below C<burst>) will get delayed to conform
to the rate.

=item *

C<burst> is the number of excessive requests per second allowed to be delayed.

Requests exceeding this hard limit will get rejected immediately.

=back

On failure, this method returns C<nil> and a string describing the error
(like a bad C<lua_shared_dict> name).
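As the comment in the L</Synopsis> suggests, the C<require()> and C<new()>
calls can live in your own Lua module so that the object is created only once
per worker rather than on every request. Below is a minimal sketch of that
pattern; the module path C<my_service/ratelimit.lua> and the shm zone name are
placeholders, not part of this library:

    -- my_service/ratelimit.lua  (hypothetical module path, not part of this library)
    local limit_req = require "resty.limit.req"

    -- "my_limit_req_store" must be declared with lua_shared_dict in nginx.conf
    local lim, err = limit_req.new("my_limit_req_store", 200, 100)
    if not lim then
        error("failed to instantiate resty.limit.req: " .. tostring(err))
    end

    return {
        limiter = lim,
    }

An C<access_by_lua_block> can then do
C<local lim = require("my_service.ratelimit").limiter> and call
C<lim:incoming()> per request, as shown in the L</Synopsis>.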
=head2 incoming

B<syntax:> C<delay, err = obj:incoming(key, commit)>

Fires a new request incoming event and calculates the delay needed (if any)
for the current request upon the specified key, or whether the user should
reject it immediately.

This method accepts the following arguments:

=over

=item *

C<key> is the user specified key to limit the rate.

For example, one can use the host name (or server zone) as the key so that we
limit rate per host name. Otherwise, we can also use the client address as the
key so that we can prevent a single client from flooding our service.

Please note that this module does not prefix nor suffix the user key, so it is
the user's responsibility to ensure the key is unique in the
C<lua_shared_dict> shm zone.

=item *

C<commit> is a boolean value. If set to C<true>, the object will actually
record the event in the shm zone backing the current object; otherwise it
would just be a "dry run" (which is the default).

=back

The return values depend on the following cases:

=over

=item 1.

If the request does not exceed the C<rate> value specified in the L</new>
method, then this method returns C<0> as the delay and the (zero) number of
excessive requests per second at the current time.

=item 2.

If the request exceeds the C<rate> limit specified in the L</new> method but
not the C<rate> + C<burst> value, then this method returns a proper delay (in
seconds) for the current request so that it still conforms to the C<rate>
threshold as if it came a bit later rather than now.

In addition, this method also returns a second return value indicating the
number of excessive requests per second at this point (including the current
request). This 2nd return value can be used to monitor the unadjusted incoming
request rate.

=item 3.

If the request exceeds the C<rate> + C<burst> limit, then this method returns
C<nil> and the error string C<"rejected">.

=item 4.

If an error occurred (like failures when accessing the C<lua_shared_dict> shm
zone backing the current object), then this method returns C<nil> and a string
describing the error.

=back

This method never sleeps itself. It simply returns a delay if necessary and
requires the caller to later invoke the
L<ngx.sleep|https://github.com/openresty/lua-nginx-module#ngxsleep> method to
sleep.


=head2 set_rate

B<syntax:> C<obj:set_rate(rate)>

Overwrites the C<rate> threshold value as specified in the L</new> method.


=head2 set_burst

B<syntax:> C<obj:set_burst(burst)>

Overwrites the C<burst> threshold value as specified in the L</new> method.


=head2 uncommit

B<syntax:> C<ok, err = obj:uncommit(key)>

This tries to undo the commit of the L</incoming> call. This is simply an
approximation and should be used with care. This method is mainly intended for
use in the L<resty.limit.traffic> Lua module when combining multiple limiters
at the same time.


=head1 Instance Sharing

Each instance of this class carries no state information but the C<rate> and
C<burst> threshold values. The real limiting states based on keys are stored
in the C<lua_shared_dict> shm zone specified in the L</new> method. So it is
safe to share instances of this class
L<on the NGINX worker process level|https://github.com/openresty/lua-nginx-module#data-sharing-within-an-nginx-worker>
as long as the combination of C<rate> and C<burst> does not change.

Even if the C<rate> and C<burst> combination I<does> change, one can still
share a single instance as long as one always calls the L</set_rate> and/or
L</set_burst> methods I<right before> the L</incoming> call.
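For example, a single shared instance can serve different rate plans per
request. The following is a minimal sketch only: the C<X-Plan> header, the
rate table, and the C<my_service.ratelimit> module (sketched earlier under
L</new>) are hypothetical application details, not part of this library.

    access_by_lua_block {
        -- one shared instance per worker, created once in our own module
        -- (the hypothetical my_service.ratelimit sketched under "new" above)
        local lim = require("my_service.ratelimit").limiter

        -- hypothetical per-plan rates; real code might read these from a
        -- configuration module or a datastore instead
        local plan_rates = { free = 50, paid = 500 }
        local plan = ngx.var.http_x_plan or "default"
        local rate = plan_rates[plan] or 200

        -- adjust the shared instance right before the incoming() call
        lim:set_rate(rate)

        local delay, err = lim:incoming(ngx.var.binary_remote_addr, true)
        if not delay then
            if err == "rejected" then
                return ngx.exit(503)
            end
            ngx.log(ngx.ERR, "failed to limit req: ", err)
            return ngx.exit(500)
        end

        if delay >= 0.001 then
            ngx.sleep(delay)
        end
    }

Because the instance itself carries only the C<rate> and C<burst> values,
adjusting them per request this way does not disturb the per-key limiting
state stored in the shm zone.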
=head1 Limiting Granularity

The limiting works on the granularity of an individual NGINX server instance
(including all its worker processes). Thanks to the shm mechanism, we can
share state cheaply across all the workers in a single NGINX server instance.

If you are running multiple NGINX server instances (like running multiple
boxes), then you need to ensure that the incoming traffic is (more or less)
evenly distributed across all the different NGINX server instances (or boxes).
So if you want a limit rate of C<N> req/sec across all the servers, then you
just need to specify a limit of C<N/n> req/sec in each server's configuration,
where C<n> is the number of servers. This simple strategy can save all the
(big) overhead of sharing a global state across machine boundaries.


=head1 Installation

Please see the installation instructions in the
L<lua-resty-limit-traffic|https://github.com/openresty/lua-resty-limit-traffic>
library documentation.


=head1 Community


=head2 English Mailing List

The L<openresty-en|https://groups.google.com/group/openresty-en> mailing list
is for English speakers.


=head2 Chinese Mailing List

The L<openresty|https://groups.google.com/group/openresty> mailing list is for
Chinese speakers.


=head1 Bugs and Patches

Please report bugs or submit patches by

=over

=item 1.

creating a ticket on the
L<GitHub Issue Tracker|https://github.com/openresty/lua-resty-limit-traffic/issues>,

=item 2.

or posting to the mailing lists listed in the L</Community> section.

=back


=head1 Author

Yichun "agentzh" Zhang (章亦春) E<lt>agentzh@gmail.comE<gt>, CloudFlare Inc.


=head1 Copyright and License

This module is licensed under the BSD license.

Copyright (C) 2015-2016, by Yichun "agentzh" Zhang, CloudFlare Inc.

All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

=over

=item *

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

=item *

Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

=back

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.


=head1 See Also

=over

=item *

module L<resty.limit.conn>

=item *

module L<resty.limit.count>

=item *

module L<resty.limit.traffic>

=item *

library L<lua-resty-limit-traffic|https://github.com/openresty/lua-resty-limit-traffic>

=item *

the ngx_lua module: https://github.com/openresty/lua-nginx-module

=item *

OpenResty: https://openresty.org/

=back