
Development: Distributed User Location Design



Table of Contents

  1. "User facing" topology
    1.1 "SIP driven" user facing topology
    1.2 "Cluster driven" user facing topology
  2. "Homogeneous cluster" topology
    2.1 "Basic" OpenSIPS homogeneous cluster topology
    2.2 "Advanced" OpenSIPS homogeneous cluster topology



This page offers high-level information on the development of several distributed user location models to be included in the OpenSIPS 2.4 release. Drawing on several community discussions (the 2013 "users" mailing list, the 2015 public meeting) along with our own experience on this topic, we present two models which simultaneously address needs such as horizontal scalability, geo-distribution, high availability and NAT traversal.

1.  "User facing" topology

Below is a set of features specific to this model:

We present two solutions for achieving this setup: a "SIP driven" solution and a "cluster driven" one.

1.1  "SIP driven" user facing topology

This solution is ideal for SMBs or as a proof of concept. With the SIP-driven solution, after saving an incoming registration, the registrar node records itself in a Path header, then replicates the REGISTER to all cluster nodes across all locations. This makes the user globally reachable while ensuring that calls only reach it through its "home box" (a mandatory NAT requirement in most cases). NAT pinging is performed only by the "home box".
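As a rough illustration of this mechanism, here is a simplified Python model (not OpenSIPS script; node names, AORs and contacts are invented): each registrar saves the binding, records itself as the return route, and replicates the registration to its peers.

```python
# Simplified model of "SIP driven" replication (illustrative only; a real
# deployment replicates via SIP REGISTER messages, not memory writes).
class Registrar:
    def __init__(self, name, cluster):
        self.name = name            # this node's identity (the "home box")
        self.cluster = cluster      # shared list of all cluster nodes
        self.location = {}          # AOR -> (contact, home box)
        cluster.append(self)

    def register(self, aor, contact):
        # Record ourselves as the Path-like return route, so that calls
        # from any node are relayed through this "home box" (NAT-safe).
        binding = (contact, self.name)
        self.location[aor] = binding
        for node in self.cluster:   # replicate the REGISTER everywhere
            if node is not self:
                node.location[aor] = binding

    def lookup(self, aor):
        # Any node can resolve the user, but the returned route points
        # back through the home box that owns the NAT binding.
        return self.location[aor]

cluster = []
us = Registrar("us-east", cluster)
eu = Registrar("eu-west", cluster)
us.register("sip:alice@example.com", "sip:alice@203.0.113.5:5060")
contact, home_box = eu.lookup("sip:alice@example.com")
# eu-west resolves Alice, but must relay the call through "us-east".
```

Note how every node ends up holding the full location set: that is the price of the global reachability this model provides.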

PROs:

CONs:

Development:

1.2  "Cluster driven" user facing topology

This solution is a heavily optimized version of the previous one, in three respects: performance, network link redundancy and scripting simplicity. As with the above, the end results, as seen from outside the platform, stay the same: global reachability, NAT traversal and pinging.

However, the difference is that we now use the OpenSIPS clusterer layer for all inter-node communication. This immediately reduces the number of messages sent ("alice is reachable here" rather than "Alice's contact 'deskphone' is now present here"), the size of the messages (metadata only, rather than full-blown SIP) and the parsing overhead (binary data vs. SIP syntax). Furthermore, by using cluster-based communication, the platform becomes resilient to the loss of some of its cross-location data links: as long as the "platform graph" stays connected, the cluster-based distributed location service remains unaffected.
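The link-redundancy claim can be pictured with a small connectivity check (plain Python with invented node names; the actual clusterer layer tracks this state internally):

```python
from collections import deque

def is_connected(nodes, links):
    """True if every node can reach every other over the remaining links."""
    adjacency = {n: set() for n in nodes}
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:                       # breadth-first search from node 0
        for peer in adjacency[queue.popleft()]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return len(seen) == len(nodes)

nodes = ["us-east", "eu-west", "ap-south"]
# The direct us-east <-> ap-south link is down, yet the platform graph
# stays connected through eu-west, so the location service is unaffected.
surviving_links = [("us-east", "eu-west"), ("eu-west", "ap-south")]
```

With the SIP-driven variant, by contrast, each registrar must reach every peer directly to replicate a REGISTER, so a lost link immediately costs reachability.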

PROs

CONs

Development:

2.  "Homogeneous cluster" topology

The homogeneous cluster solves the following problems:

We present two solutions for achieving this setup: a "basic" solution and an "advanced" one.

2.1  "Basic" OpenSIPS homogeneous cluster topology

This solution is an appropriate choice for a single site with a medium-sized subscriber population (on the order of millions), which can fit entirely into a single OpenSIPS box (all cluster boxes are mirrored). The NAT bindings are tied to the SBC layer, with the cluster nodes routing both call and ping traffic out through this layer. With the help of the cluster layer, which signals whenever a node joins or leaves the network, each node can determine its own "pinging slice" by computing hash(AOR) modulo current_no_of_cluster_nodes.
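The slicing rule can be sketched in a few lines of Python (the hash function below is an arbitrary stand-in; OpenSIPS computes its own internal AOR hash):

```python
import hashlib

def ping_slice(aors, node_index, cluster_size):
    """Return the AORs this node must ping: hash(AOR) % cluster_size."""
    def aor_hash(aor):
        # Stand-in hash; any function all cluster nodes agree on works.
        return int(hashlib.md5(aor.encode()).hexdigest(), 16)
    return [a for a in aors if aor_hash(a) % cluster_size == node_index]

aors = ["sip:user%d@example.com" % i for i in range(1000)]
# With 4 nodes, the four slices partition the AOR set: every binding is
# pinged by exactly one node, and the slices are recomputed whenever the
# cluster layer reports a node joining or leaving.
slices = [ping_slice(aors, i, 4) for i in range(4)]
```

Because each node derives its slice from the same shared inputs (the AOR set and the current cluster size), no coordination beyond the join/leave signal is needed.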

PROs:

CONs:

Development:

2.2  "Advanced" OpenSIPS homogeneous cluster topology

This solution is intended for single sites with large subscriber populations (on the order of tens or hundreds of millions). At these magnitudes of data, we can no longer rely on OpenSIPS to manage the user location data itself (unless we kickstart an "OpenSIPS DB") - instead, we hand this task over to a specialized, cluster-oriented NoSQL database offering data partitioning and redundancy.


Similar to the "basic" solution, the NAT bindings are tied to the SBC layer, with the cluster nodes routing both call and ping traffic out through this layer. With the help of the cluster layer, which signals whenever a node joins or leaves the network, each node can determine its own "pinging slice" by applying a hash(AOR) modulo current_no_of_cluster_nodes filter to the DB cluster query.
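A minimal sketch of pushing that filter into the query follows; the document schema, field names and query shape are invented (the real NoSQL backend and its query syntax depend on the deployment), and it assumes each stored binding carries a precomputed integer hash of its AOR.

```python
def build_slice_filter(node_index, cluster_size):
    # Hypothetical filter description for a hypothetical "aor_hash"
    # field stored alongside each binding.
    return {"modulus": cluster_size, "remainder": node_index}

def run_query(documents, slice_filter):
    # Stand-in for the DB cluster evaluating the filter server-side,
    # so each node fetches only its own pinging slice, never the full
    # (tens-of-millions-sized) location data set.
    m, r = slice_filter["modulus"], slice_filter["remainder"]
    return [d for d in documents if d["aor_hash"] % m == r]

db = [{"aor": "sip:user%d@example.com" % i, "aor_hash": i * 2654435761}
      for i in range(100)]
my_slice = run_query(db, build_slice_filter(2, 5))
```

The key difference from the "basic" variant is where the filter runs: in the database, not in OpenSIPS, so node memory stays bounded regardless of population size.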

PROs:

CONs:

Development:

Retrieved from https://www.opensips.org/Development/Design-Distributed-User-Location
Page last modified on February 17, 2018, at 07:04 PM