ratelimit Module

Ovidiu Sas

Bogdan Vasile Harjoc

Hendrik Scholz

Razvan Crainea

Edited by

Ovidiu Sas

Edited by

Bogdan Vasile Harjoc

Edited by

Hendrik Scholz

Edited by

Razvan Crainea

Revision History
Revision 5901, 2009-07-21 10:45:05 +0300 (Tue, 21 Jul 2009)

Table of Contents

1. Admin Guide
1.1. Overview
1.2. Use Cases
1.3. Static Rate Limiting Algorithms
1.3.1. Tail Drop Algorithm (TAILDROP)
1.3.2. Random Early Detection Algorithm (RED)
1.3.3. Network Algorithm (NETWORK)
1.4. Dynamic Rate Limiting Algorithms
1.4.1. Feedback Algorithm (FEEDBACK)
1.5. Dependencies
1.5.1. OpenSIPS Modules
1.5.2. External Libraries or Applications
1.6. Exported Parameters
1.6.1. timer_interval (integer)
1.6.2. expire_time (integer)
1.6.3. hash_size (integer)
1.6.4. default_algorithm (string)
1.6.5. cachedb_url (string)
1.6.6. db_prefix (string)
1.7. Exported Functions
1.7.1. rl_check(name, limit[, algorithm])
1.7.2. rl_dec_count(name)
1.7.3. rl_reset_count(name)
1.8. Exported MI Functions
1.8.1. rl_list
1.8.2. rl_reset_pipe
1.8.3. rl_set_pid
1.8.4. rl_get_pid

List of Examples

1.1. Set timer_interval parameter
1.2. Set expire_time parameter
1.3. Set hash_size parameter
1.4. Set default_algorithm parameter
1.5. Set cachedb_url parameter
1.6. Set db_prefix parameter
1.7. rl_check usage
1.8. rl_dec_count usage
1.9. rl_reset_count usage

Chapter 1. Admin Guide

1.1. Overview

This module implements rate limiting for SIP requests. In contrast to the PIKE module, it limits the flow on a per-SIP-request-type basis rather than per source IP. The latest sources also allow you to dynamically group several messages into entities and limit the traffic per entity. The MI interface can be used to change tunables while OpenSIPS is running.

This module is also integrated with the OpenSIPS Key-Value Interface, providing support for distributed rate limiting using Redis or Memcached CacheDB backends.

1.2. Use Cases

Limiting the rate at which messages are processed on a system directly influences the load. The ratelimit module can be used to protect a single host or to protect an OpenSIPS cluster when run on the dispatching box in front.

Distributed limiting is useful when the rate limit should be performed not only on a specific node, but on the entire platform. The internal limiting data will no longer be kept on each OpenSIPS instance. It will be stored in a distributed Key-Value database and queried by each instance before deciding if a SIP message should be blocked or not.

NOTE: this behavior only makes sense when the pipe algorithm used is TAILDROP or RED.

A sample configuration snippet might look like this:

...
	if (!rl_check("$rU", "50", "TAILDROP")) {
		sl_send_reply("503", "Service Unavailable");
		exit;
	};
...
	

For every incoming request, rl_check is invoked and the entity identified by the R-URI user is checked. It returns success if the current request load is below the configured threshold. If the limit is exceeded, the function returns an error and the administrator can discard the request with a stateless reply.

1.3. Static Rate Limiting Algorithms

The ratelimit module supports three different static algorithms that rl_check can use to determine whether a message should be blocked or not.

1.3.1. Tail Drop Algorithm (TAILDROP)

This is a trivial algorithm that imposes some risks when used in conjunction with long timer intervals. At the start of each interval an internal counter is reset and incremented for each incoming message. Once the counter hits the configured limit rl_check returns an error.

The downside of this algorithm is that it can lead to SIP client synchronization. During a relatively long interval, only the first requests (e.g. REGISTERs) would make it through; subsequent messages (e.g. re-REGISTERs) will all hit the SIP proxy at the same time once a common Expires timer runs out. Other requests will be retransmitted after a fixed time, which is the same on all devices with the same firmware or from the same vendor.

1.3.2. Random Early Detection Algorithm (RED)

Random Early Detection tries to circumvent the synchronization problem caused by the tail drop algorithm by measuring the average load and adapting the drop rate dynamically. When running with the RED algorithm, OpenSIPS returns an error to the routing engine for every n'th packet, trying to spread the measured load of the last timer interval evenly over the current interval. For instance, if the previous interval saw roughly twice the allowed number of messages, roughly every second message is rejected during the current interval. As a negative side effect, OpenSIPS might drop messages even though the limit would not be reached within the interval; decrease the timer interval if you encounter this.

1.3.3. Network Algorithm (NETWORK)

This algorithm relies on information provided by the network interfaces. The total number of bytes waiting to be consumed on all network interfaces is retrieved once every timer_interval seconds. If the returned amount exceeds the configured limit, rl_check returns an error.
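
A minimal usage sketch from the routing script, assuming the limit is expressed in bytes as described above (the pipe name and threshold are illustrative):

...
	# reject requests while more than ~1 MB is waiting on the
	# network interfaces (illustrative threshold, in bytes)
	if (!rl_check("pipe_NETWORK", "1048576", "NETWORK")) {
		sl_send_reply("503", "Service Unavailable");
		exit;
	};
...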

1.4. Dynamic Rate Limiting Algorithms

When running OpenSIPS on different machines, one has to adjust the drop rates for the static algorithms to keep the load average below 100%, or packets start getting dropped in the network stack. While this is not in itself difficult, it is neither accurate nor trivial: another server taking a notable fraction of the CPU time will require re-tuning the parameters.

While tuning the drop rates from the outside based on a certain factor is possible, having the algorithm run inside ratelimit permits tuning the rates based on internal server parameters and is somewhat more flexible (or it will be when support for external load factors, as opposed to CPU load, is added).

1.4.1. Feedback Algorithm (FEEDBACK)

Using the PID Controller model (see Wikipedia page), the drop rate is adjusted dynamically based on the load factor so that the load factor always drifts towards the specified limit (or setpoint, in PID terms).

As reading the CPU load average is relatively expensive (opening /proc/stat, parsing it, etc.), this only happens once every timer_interval seconds, and consequently the FEEDBACK drop rate is recomputed only at these intervals. This in turn makes it difficult for the drop rate to adjust quickly. The worst case is a request rate that goes up or down instantly by thousands: it can take up to 20 seconds for the controller to adapt to the new request rate.

Generally, though, since real-life request rates drift more gradually, adaptation should happen much faster.
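
A minimal usage sketch, assuming the limit passed to rl_check is interpreted as the target load percentage (the setpoint described above); the pipe name and value are illustrative:

...
	# try to keep the measured load around the 80% setpoint
	if (!rl_check("pipe_FEEDBACK", "80", "FEEDBACK")) {
		sl_send_reply("503", "Service Unavailable");
		exit;
	};
...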

1.5. Dependencies

1.5.1. OpenSIPS Modules

The following modules must be loaded before this module:

  • No dependencies on other OpenSIPS modules.

1.5.2. External Libraries or Applications

The following libraries or applications must be installed before running OpenSIPS with this module loaded:

  • None.

1.6. Exported Parameters

1.6.1. timer_interval (integer)

The initial length of a timer interval, in seconds. Message counts are divided by this value to obtain a messages-per-second rate.

IMPORTANT: A value that is too small may lead to performance penalties due to timer process overloading.

Default value is 10.

Example 1.1. Set timer_interval parameter

...
modparam("ratelimit", "timer_interval", 5)
...

1.6.2. expire_time (integer)

This parameter specifies how long, in seconds, a pipe should be kept in memory before it is deleted.

Default value is 3600.

Example 1.2. Set expire_time parameter

...
modparam("ratelimit", "expire_time", 1800)
...

1.6.3. hash_size (integer)

The size of the hash table internally used to keep the pipes. A larger table is much faster but consumes more memory. The hash size must be a power of 2.

Default value is 1024.

Example 1.3. Set hash_size parameter

...
modparam("ratelimit", "hash_size", 512)
...

1.6.4. default_algorithm (string)

Specifies which algorithm should be assumed in case it isn't explicitly specified in the rl_check function.

Default value is "TAILDROP".

Example 1.4. Set default_algorithm parameter

...
modparam("ratelimit", "default_algorithm", "RED")
...

1.6.5. cachedb_url (string)

Enables distributed rate limiting and specifies the backend that should be used by the CacheDB interface.

Default value is "disabled".

Example 1.5. Set cachedb_url parameter

...
modparam("ratelimit", "cachedb_url", "redis://root:root@127.0.0.1/")
...
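
A matching CacheDB backend module also has to be loaded; a minimal sketch for a Redis deployment (assuming the cachedb_redis module is installed and the URL matches your Redis server) might look like this:

...
loadmodule "cachedb_redis.so"
modparam("ratelimit", "cachedb_url", "redis://root:root@127.0.0.1/")
...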

1.6.6. db_prefix (string)

Specifies what prefix should be added to the pipe name. This is only used when distributed rate limiting is enabled.

Default value is "rl_pipe_".

Example 1.6. Set db_prefix parameter

...
modparam("ratelimit", "db_prefix", "ratelimit_")
...

1.7. Exported Functions

1.7.1.  rl_check(name, limit[, algorithm])

Checks the current request against the pipe identified by name and updates the pipe's limit. If no pipe is found, a new one is created with the specified limit and algorithm, if given. If the algorithm parameter is missing, the default one is used.

NOTE: A pipe's algorithm cannot be dynamically changed. Only the one specified when the pipe was created will be considered.

The function returns an error code if the limit for the matched pipe has been reached.

Meaning of the parameters is as follows:

  • name - this is the name that identifies the pipe which should be checked. This parameter accepts both strings and pseudovariables.

  • limit - this specifies the threshold of the pipe; its exact meaning is strongly related to the algorithm used. This parameter accepts an integer or a pseudovariable. Note that the limit should be specified per second, not per timer_interval; for example, with the default timer_interval of 10 seconds, a limit of 50 allows roughly 500 messages per interval.

  • algorithm - this parameter is optional and refers to the algorithm used to check the pipe. If it is not set, the default value is used. It accepts a string or a pseudovariable.

This function can be used from REQUEST_ROUTE.

Example 1.7. rl_check usage

...
	# perform a pipe match for all INVITE methods using RED algorithm
	if (is_method("INVITE")) {
		if (!rl_check("pipe_INVITE", "100", "RED")) {
			sl_send_reply("503", "Service Unavailable");
			exit;
		};
	};
...
	# use default algorithm for each different gateway
	$var(limit) = 10;
	if (!rl_check("gw_$ru", "$var(limit)")) {
		sl_send_reply("503", "Service Unavailable");
		exit;
	};
...

1.7.2.  rl_dec_count(name)

This function decreases a counter that may have been previously increased by the rl_check function.

Meaning of the parameters is as follows:

  • name - identifies the name of the pipe.

This function can be used from REQUEST_ROUTE.

Example 1.8. rl_dec_count usage

...
	if (!rl_check("gw_$ru", "100", "TAILDROP")) {
		exit;
	} else {
		rl_dec_count("gw_$ru");
	};
...

1.7.3.  rl_reset_count(name)

This function resets a counter that may have been previously increased by the rl_check function.

Meaning of the parameters is as follows:

  • name - identifies the name of the pipe.

This function can be used from REQUEST_ROUTE.

Example 1.9. rl_reset_count usage

...
	if (!rl_check("gw_$ru", "100", "TAILDROP")) {
		exit;
	} else {
		rl_reset_count("gw_$ru");
	};
...

1.8. Exported MI Functions

1.8.1.  rl_list

Lists the parameters and variables in the ratelimit module.

Name: rl_list

Parameters:

  • pipe - indicates the name of the pipe. This parameter is optional; if it is omitted, all active pipes are listed, otherwise only the specified one.

MI FIFO Command Format:

		:rl_list:_reply_fifo_file_
		gw_10.0.0.1
		_empty_line_
		
		:rl_list:_reply_fifo_file_
		_empty_line_
		
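For instance, assuming the standard opensipsctl tool is used to drive the FIFO interface, the pipe above could be listed with:

		opensipsctl fifo rl_list gw_10.0.0.1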

1.8.2.  rl_reset_pipe

Resets the counter of a specified pipe.

Name: rl_reset_pipe

Parameters:

  • pipe - indicates the name of the pipe whose counter should be reset.

MI FIFO Command Format:

		:rl_reset_pipe:_reply_fifo_file_
		gw_10.0.0.1
		_empty_line_
		

1.8.3.  rl_set_pid

Sets the PID Controller parameters for the Feedback Algorithm.

Name: rl_set_pid

Parameters:

  • ki - the integral parameter.

  • kp - the proportional parameter.

  • kd - the derivative parameter.

MI FIFO Command Format:

		:rl_set_pid:_reply_fifo_file_
		0.5
		0.5
		0.5
		_empty_line_
		
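Equivalently, assuming opensipsctl is used (the parameter order is ki, kp, kd, as above):

		opensipsctl fifo rl_set_pid 0.5 0.5 0.5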

1.8.4.  rl_get_pid

Gets the PID Controller parameters currently in use.

Name: rl_get_pid

Parameters: none

MI FIFO Command Format:

		:rl_get_pid:_reply_fifo_file_
		_empty_line_