
Documentation.Tutorials-Distributed-User-Location-Full-Sharing History


November 04, 2019, at 04:04 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one.

to:

This is the ultra-scalable version of the OpenSIPS user location, allowing you to support subscriber pools numbering in the millions. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one.

November 04, 2019, at 04:00 PM by liviu -
Changed line 222 from:
    # store the registration into the NoSQL DB
to:
    # store the registration, along with the Path header, into the NoSQL DB
November 04, 2019, at 03:59 PM by liviu -
Changed line 129 from:
  • a NoSQL DB instance, such as Cassandra or MongoDB, to hold all registrations (you can upgrade them into a cluster later)
to:
  • a NoSQL DB instance, such as Cassandra or MongoDB, to hold all registrations (you can upgrade it into a cluster later)
November 04, 2019, at 12:11 PM by liviu -
Changed lines 35-36 from:

Active/backup "full sharing" setup

to:

Active/passive "full sharing" setup

Changed line 112 from:

To prevent any "permission denied" error logs on the passive node that's trying to originate NAT pings, make sure to hook the nh_enable_ping MI command into your active->backup and backup->active transitions of the VIP:

to:

To prevent any "permission denied" error logs on the passive node that's trying to originate NAT pings, make sure to hook the nh_enable_ping MI command into your active->passive and passive->active transitions of the VIP:

November 04, 2019, at 11:54 AM by liviu -
Changed line 12 from:

Tip: For a broader view on the "full sharing" setups, see this blog post.

to:

Tip: For a broader view on the "full sharing" topology, see this blog post.

November 02, 2019, at 10:46 AM by liviu -
Changed line 33 from:

Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool.

to:

Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool.

November 01, 2019, at 10:40 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage. This, in turn, allows each system to be scaled without wasting resources or affecting the other one.

to:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one.

November 01, 2019, at 10:39 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so.

to:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage. This, in turn, allows each system to be scaled without wasting resources or affecting the other one.

November 01, 2019, at 10:35 PM by liviu -
Changed line 20 from:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

to:

The "full sharing" clustering strategy for the OpenSIPS 2.4+ user location service is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

November 01, 2019, at 10:35 PM by liviu -
Changed line 20 from:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services (a.k.a. "native full sharing") is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

to:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

November 01, 2019, at 07:33 PM by liviu -
Changed line 12 from:

Tip: For an overall view on the "full sharing" setups, see this blog post.

to:

Tip: For a broader view on the "full sharing" setups, see this blog post.

November 01, 2019, at 07:33 PM by liviu -
Changed line 12 from:

Tip: For an overall view on the "full sharing" strategy, see this blog post.

to:

Tip: For an overall view on the "full sharing" setups, see this blog post.

November 01, 2019, at 07:28 PM by liviu -
Changed line 82 from:

"full sharing" clusterer table example

to:

Native "full sharing" clusterer table

Changed line 197 from:

"full sharing" clusterer table example

to:

NoSQL "full sharing" clusterer table

November 01, 2019, at 07:27 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so.

to:

This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so.

November 01, 2019, at 07:23 PM by liviu -
Changed line 134 from:

On the backend layer, here is the relevant opensips.cfg sections for the cluster nodes:

to:

On the backend layer (cluster instances), here are the relevant opensips.cfg sections:

November 01, 2019, at 07:22 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each system according to the current need, without wasting resources or affecting the other one by doing so.

to:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so.

November 01, 2019, at 07:21 PM by liviu -
Changed line 121 from:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale system horizontally, according to the current need.

to:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each system according to the current need, without wasting resources or affecting the other one by doing so.

November 01, 2019, at 07:21 PM by liviu -
Added lines 120-121:

This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale system horizontally, according to the current need.

November 01, 2019, at 07:18 PM by liviu -
Changed line 119 from:

NoSQL "full sharing" with a SIP front-end

to:

NoSQL "full sharing" cluster with a SIP front-end

November 01, 2019, at 07:16 PM by liviu -
Changed line 198 from:

NAT pinging

to:

Shared NAT pinging

November 01, 2019, at 07:15 PM by liviu -
Deleted line 142:

modparam("usrloc", "shared_pinging", 1)

November 01, 2019, at 07:15 PM by liviu -
Deleted line 162:

modparam("clusterer", "seed_fallback_interval", 5)

November 01, 2019, at 07:14 PM by liviu -
Changed line 194 from:
14 | 1 | 1 | bin:10.0.0.177 | 1 | 3 | 50 | NULL | seed | NULL
to:
14 | 1 | 1 | bin:10.0.0.177 | 1 | 3 | 50 | NULL | NULL | NULL
November 01, 2019, at 07:14 PM by liviu -
Added line 222:
    # store the registration into the NoSQL DB
November 01, 2019, at 07:13 PM by liviu -
Added line 173:
    # store the registration into the NoSQL DB
Deleted line 221:
    # store the registration into the NoSQL DB
November 01, 2019, at 07:13 PM by liviu -
Added line 221:
    # store the registration into the NoSQL DB
November 01, 2019, at 07:11 PM by liviu -
Changed line 127 from:
  • a NoSQL DB instance, such as Cassandra or MongoDB (you can upgrade them into a cluster later)
to:
  • a NoSQL DB instance, such as Cassandra or MongoDB, to hold all registrations (you can upgrade them into a cluster later)
November 01, 2019, at 07:11 PM by liviu -
Changed lines 35-36 from:

Basic active/backup setup

to:

Active/backup "full sharing" setup

Added lines 117-226:

@]

NoSQL "full sharing" with a SIP front-end

Configuration

For the smallest possible setup, you will need:

  • a SIP front-end proxy sitting in front of the cluster, with SIP Path support
  • two backend OpenSIPS instances, forming the cluster
  • a NoSQL DB instance, such as Cassandra or MongoDB (you can upgrade them into a cluster later)
  • a MySQL instance, for provisioning


On the backend layer, here is the relevant opensips.cfg sections for the cluster nodes:


listen = sip:10.0.0.177
listen = bin:10.0.0.177

loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "working_mode_preset", "full-sharing-cachedb-cluster")
modparam("usrloc", "shared_pinging", 1)
modparam("usrloc", "location_cluster", 1)

# with Cassandra, make sure to create the keyspace and table beforehand:
# CREATE KEYSPACE IF NOT EXISTS opensips WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}  AND durable_writes = true;
# USE opensips;
# CREATE TABLE opensips.userlocation (
#     aor text,
#     aorhash int,
#     contacts map<text, frozen<map<text, text>>>,
#     PRIMARY KEY (aor));
loadmodule "cachedb_cassandra.so"
modparam("usrloc", "cachedb_url", "cassandra://10.0.0.180:9042/opensips.userlocation")

# with MongoDB, we don't need to create any database or collection...
loadmodule "cachedb_mongodb.so"
modparam("usrloc", "cachedb_url", "mongodb://10.0.0.180:27017/opensipsDB.userlocation")
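
Whichever backend you pick, the record layout follows the Cassandra schema above: one row per AOR, with "contacts" as a map of contact URI to a map of attribute name to text value. A purely illustrative sketch of that shape (the attribute names and the hash standing in for aorhash are hypothetical, not the exact values OpenSIPS serializes):

```python
# Illustrative data model only: one row per AOR, with "contacts" as
# map<text, map<text, text>> as in the Cassandra schema above. The
# attribute keys ("expires", "q") and the CRC32 placeholder for
# "aorhash" are stand-ins, not what OpenSIPS actually writes.
import zlib

def make_row(aor, contacts):
    return {
        "aor": aor,
        "aorhash": zlib.crc32(aor.encode()),  # placeholder for OpenSIPS' own AOR hash
        "contacts": {
            uri: {k: str(v) for k, v in attrs.items()}  # every value stored as text
            for uri, attrs in contacts.items()
        },
    }

row = make_row("alice@example.com",
               {"sip:alice@192.168.1.10:5060": {"expires": 3600, "q": "1.0"}})
```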

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 1) # node number #1
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

loadmodule "proto_bin.so"

...

route {
    ...

    if (!save("location", "p1v")) {
        send_reply("500", "Server Internal Error");
        exit;
    }

    ...
}
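
Since the front-end proxy has Path support, every REGISTER reaching a cluster node carries a Path header pointing back at the front-end, and save() persists it next to the contact so later lookups are routed through the front-end. A sketch of what such a forwarded REGISTER might look like (the addresses and tags are illustrative, reusing the example IPs from this page):

```python
# Sketch of a REGISTER as forwarded by a hypothetical Path-capable
# front-end at 10.0.0.150; the Path header is what save() stores
# alongside the contact. All values are illustrative.
def build_register(aor, contact, path_uri):
    return "\r\n".join([
        "REGISTER sip:example.com SIP/2.0",
        "Via: SIP/2.0/UDP 10.0.0.150;branch=z9hG4bK.1",
        f"Path: <{path_uri};lr>",          # inserted by the front-end proxy
        f"From: <sip:{aor}>;tag=1",
        f"To: <sip:{aor}>",
        "Call-ID: reg-1@10.0.0.150",
        "CSeq: 1 REGISTER",
        f"Contact: <{contact}>",
        "Expires: 3600",
        "Content-Length: 0",
        "", "",
    ])

msg = build_register("alice@example.com", "sip:alice@192.168.1.10:5060",
                     "sip:10.0.0.150")
```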

Provisioning

INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);


id | cluster_id | node_id | url            | state | no_ping_retries | priority | sip_addr | flags | description
14 | 1          | 1       | bin:10.0.0.177 | 1     | 3               | 50       | NULL     | seed  | NULL
15 | 1          | 2       | bin:10.0.0.178 | 1     | 3               | 50       | NULL     | NULL  | NULL

"full sharing" clusterer table example

NAT pinging

loadmodule "nathelper.so"
modparam("nathelper", "natping_interval", 30)
modparam("nathelper", "sipping_from", "sip:pinger@localhost")
modparam("nathelper", "sipping_bflag", "SIPPING_ENABLE")
modparam("nathelper", "remove_on_timeout_bflag", "SIPPING_RTO")
modparam("nathelper", "max_pings_lost", 5)

# partition pings across cluster nodes
modparam("usrloc", "shared_pinging", 1)

We then enable these branch flags for some or all contacts before calling save():

[@

    ...

    setbflag(SIPPING_ENABLE);
    setbflag(SIPPING_RTO);

    if (!save("location", "p1v")) {
        sl_reply_error();
        exit;
    }

    ...
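
The shared_pinging setting above spreads the pinging workload, so each contact is pinged by exactly one cluster node. The actual distribution algorithm is internal to OpenSIPS; as a purely conceptual sketch, such a partition can be pictured as a deterministic hash of each contact over the set of node IDs (names and hashing choice below are illustrative):

```python
# Conceptual sketch only (NOT OpenSIPS's actual algorithm): each node
# deterministically claims a disjoint share of the contacts, so every
# contact ends up pinged by exactly one cluster node.
import hashlib

def owner_node(contact, node_ids):
    h = int(hashlib.sha1(contact.encode()).hexdigest(), 16)
    return sorted(node_ids)[h % len(node_ids)]  # stable across nodes

contacts = [f"sip:user{i}@10.0.0.{i}" for i in range(1, 21)]
nodes = [1, 2]
shares = {n: [c for c in contacts if owner_node(c, nodes) == n] for n in nodes}
```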
November 01, 2019, at 06:08 PM by liviu -
Changed lines 83-117 from:


to:


NAT pinging

Some setups require periodic SIP OPTIONS pings originated by the registrar towards some of the contacts in order to keep the NAT bindings alive. Here is an example configuration:

loadmodule "nathelper.so"
modparam("nathelper", "natping_interval", 30)
modparam("nathelper", "sipping_from", "sip:pinger@localhost")
modparam("nathelper", "sipping_bflag", "SIPPING_ENABLE")
modparam("nathelper", "remove_on_timeout_bflag", "SIPPING_RTO")
modparam("nathelper", "max_pings_lost", 5)

We then enable these branch flags for some or all contacts before calling save():

    ...
    setbflag(SIPPING_ENABLE);
    setbflag(SIPPING_RTO);

    if (!save("location"))
        sl_reply_error();
    ...


To prevent any "permission denied" error logs on the passive node that's trying to originate NAT pings, make sure to hook the nh_enable_ping MI command into your active->backup and backup->active transitions of the VIP:

    opensipsctl fifo nh_enable_ping 1 # run this on the machine that takes over the VIP (new active)
    opensipsctl fifo nh_enable_ping 0 # run this on the machine that gives up the VIP (new passive)
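
If the VIP is managed by keepalived (as suggested earlier on this page), these two commands can be wired into a notify script, which keepalived invokes with TYPE, NAME and STATE arguments on each transition. A hypothetical sketch (the opensipsctl path and the script wiring are assumptions, adjust to your setup):

```shell
#!/bin/sh
# Hypothetical keepalived notify script: invoked as "notify.sh TYPE NAME STATE".
# We toggle nh_enable_ping so that only the current VIP owner originates NAT pings.
OPENSIPSCTL="${OPENSIPSCTL:-opensipsctl}"

on_vip_transition() {
    case "$1" in
        MASTER)        $OPENSIPSCTL fifo nh_enable_ping 1 ;;  # took over the VIP (new active)
        BACKUP|FAULT)  $OPENSIPSCTL fifo nh_enable_ping 0 ;;  # gave up the VIP (new passive)
    esac
}

on_vip_transition "$3"
```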
November 01, 2019, at 06:03 PM by liviu -
Changed line 29 from:

IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) should be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated.

to:

IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated.

November 01, 2019, at 06:02 PM by liviu -
Changed lines 53-54 from:

listen = bin:10.0.0.178

to:

listen = bin:10.0.0.177

Changed lines 82-132 from:

clusterer table example

Advanced "full sharing" + SBC setup

Configuration

For the smallest possible setup, you will need:

  • an SBC frontend, with SIP Path support
  • two OpenSIPS instances
  • a MySQL instance


The relevant opensips.cfg sections:


listen = sip:10.0.0.150 # virtual IP (same on both nodes)
listen = bin:10.0.0.178

loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
modparam("usrloc", "location_cluster", 1)

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 1) # node number #1
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

loadmodule "proto_bin.so"

Provisioning

INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);


id | cluster_id | node_id | url            | state | no_ping_retries | priority | sip_addr | flags | description
14 | 1          | 1       | bin:10.0.0.177 | 1     | 3               | 50       | NULL     | seed  | NULL
15 | 1          | 2       | bin:10.0.0.178 | 1     | 3               | 50       | NULL     | NULL  | NULL

clusterer table example

to:

"full sharing" clusterer table example

November 01, 2019, at 05:58 PM by liviu -
Changed lines 10-11 from:

Description

to:

Description

Changed lines 35-38 from:

Configuration

For the smallest possible setup (e.g. an active/backup), you will need:

to:

Basic active/backup setup

Configuration

For the smallest possible setup (a 2-node active/passive with a virtual IP in front), you will need:

Changed lines 42-43 from:
  • a MySQL instance
to:
  • a working shared/virtual IP between the instances (e.g. using keepalived, vrrpd, etc.)
  • a MySQL instance, for provisioning
Changed lines 53-54 from:

listen = bin:10.0.0.178 # the

to:

listen = bin:10.0.0.178

Changed line 61 from:

modparam("clusterer", "current_id", 3) # node number #3

to:

modparam("clusterer", "current_id", 1) # node number #1

Changed lines 68-71 from:


Example clusterer table:

to:

Provisioning

Changed lines 85-91 from:

Call Flows

TODO

NAT pinging

TODO

to:

Advanced "full sharing" + SBC setup

Configuration

For the smallest possible setup, you will need:

  • an SBC frontend, with SIP Path support
  • two OpenSIPS instances
  • a MySQL instance


The relevant opensips.cfg sections:


listen = sip:10.0.0.150 # virtual IP (same on both nodes)
listen = bin:10.0.0.178

loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
modparam("usrloc", "location_cluster", 1)

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 1) # node number #1
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

loadmodule "proto_bin.so"

Provisioning

INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);


id | cluster_id | node_id | url            | state | no_ping_retries | priority | sip_addr | flags | description
14 | 1          | 1       | bin:10.0.0.177 | 1     | 3               | 50       | NULL     | seed  | NULL
15 | 1          | 2       | bin:10.0.0.178 | 1     | 3               | 50       | NULL     | NULL  | NULL

clusterer table example

November 01, 2019, at 05:46 PM by liviu -
Changed lines 37-38 from:

For the smallest possible setup (e.g. active/backup), you will need:

to:

For the smallest possible setup (e.g. an active/backup), you will need:

Added lines 49-51:

listen = sip:10.0.0.150 # virtual IP (same on both nodes)
listen = bin:10.0.0.178 # the

Added lines 61-62:

loadmodule "proto_bin.so"

Changed lines 72-73 from:

(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL), \
(NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, NULL, NULL, NULL);

to:

(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);

Deleted line 79:
16 | 1 | 3 | bin:10.0.0.179 | 1 | 3 | 50 | NULL | NULL | NULL
November 01, 2019, at 05:39 PM by liviu -
Added lines 42-43:


Changed line 78 from:

"full sharing" clusterer table example

to:

clusterer table example

November 01, 2019, at 05:37 PM by liviu -
Changed lines 35-36 from:

Configuration (with cluster sync)

to:

Configuration

Changed lines 40-41 from:
to:
  • a MySQL instance
Deleted lines 50-76:

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 3) # node number #3
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
@]


Configuration (with MySQL)

For the smallest possible setup (e.g. active/backup), you will need:

  • two OpenSIPS instances
  • two MySQL instances, one on each OpenSIPS box

The relevant opensips.cfg sections:


[@
loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "cluster_mode", "full-sharing")
modparam("usrloc", "restart_persistency", "load-from-sql")
modparam("usrloc", "location_cluster", 1)
modparam("usrloc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

November 01, 2019, at 05:28 PM by liviu -
November 01, 2019, at 05:26 PM by liviu -
Changed lines 35-41 from:

Configuration

TODO

Registration Flows

TODO

to:

Configuration (with cluster sync)

For the smallest possible setup (e.g. active/backup), you will need:

  • two OpenSIPS instances

The relevant opensips.cfg sections:


loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
modparam("usrloc", "location_cluster", 1)

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 3) # node number #3
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")


Configuration (with MySQL)

For the smallest possible setup (e.g. active/backup), you will need:

  • two OpenSIPS instances
  • two MySQL instances, one on each OpenSIPS box

The relevant opensips.cfg sections:


loadmodule "usrloc.so"
modparam("usrloc", "use_domain", 1)
modparam("usrloc", "cluster_mode", "full-sharing")
modparam("usrloc", "restart_persistency", "load-from-sql")
modparam("usrloc", "location_cluster", 1)
modparam("usrloc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 3) # node number #3
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")


Example clusterer table:

INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL), \
(NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, NULL, NULL, NULL);


id | cluster_id | node_id | url            | state | no_ping_retries | priority | sip_addr | flags | description
14 | 1          | 1       | bin:10.0.0.177 | 1     | 3               | 50       | NULL     | seed  | NULL
15 | 1          | 2       | bin:10.0.0.178 | 1     | 3               | 50       | NULL     | NULL  | NULL
16 | 1          | 3       | bin:10.0.0.179 | 1     | 3               | 50       | NULL     | NULL  | NULL

"full sharing" clusterer table example

November 01, 2019, at 04:24 PM by liviu -
Changed lines 20-21 from:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

to:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services (a.k.a. "native full sharing") is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

Changed line 25 from:
  • good horizontal scalability
to:
  • good horizontal scalability, capped by the maximum amount of data that a single node can handle
November 01, 2019, at 04:22 PM by liviu -
Changed line 16 from:

http://opensips.org/pub/images/full-sharing.png

to:

http://opensips.org/pub/images/full-sharing.png

November 01, 2019, at 04:22 PM by liviu -
Changed line 16 from:

http://opensips.org/pub/images/full-sharing.png

to:

http://opensips.org/pub/images/full-sharing.png

November 01, 2019, at 04:22 PM by liviu -
Changed line 16 from:

http://opensips.org/pub/images/full-sharing.png

to:

http://opensips.org/pub/images/full-sharing.png

November 01, 2019, at 04:18 PM by liviu -
Added lines 13-16:


http://opensips.org/pub/images/full-sharing.png

November 01, 2019, at 04:08 PM by liviu -
Changed lines 12-13 from:

Tip: For an overall view on the "full sharing" working mode, see this blog post.

to:

Tip: For an overall view on the "full sharing" strategy, see this blog post.

Changed lines 16-18 from:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:

  • high availability (any cluster node can serve the incoming traffic)
to:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:

  • high availability (any cluster node can properly serve the incoming SIP traffic)
Changed lines 25-26 from:

IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated.

to:

IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) should be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated.

Changed line 29 from:

Building upon this setup, the clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data between multiple points of presence, allowing you to scale each POP according to the size of its subscriber pool.

to:

Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool.

November 01, 2019, at 03:52 PM by liviu -
Changed line 16 from:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the cluster nodes. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:

to:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:

November 01, 2019, at 03:51 PM by liviu -
Added lines 23-24:


Added lines 26-27:

\\

November 01, 2019, at 03:51 PM by liviu -
Added lines 13-14:

\\

November 01, 2019, at 03:51 PM by liviu -
Added lines 11-12:

Tip: For an overall view on the "full sharing" working mode, see this blog post.

November 01, 2019, at 03:47 PM by liviu -
Changed lines 12-21 from:

TODO

to:

The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the cluster nodes. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:

  • high availability (any cluster node can serve the incoming traffic)
  • distributed NAT pinging support (NAT pinging origination can be spread across cluster nodes)
  • restart persistency for all cluster nodes
  • good horizontal scalability

IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated.

Building upon this setup, the clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data between multiple points of presence, allowing you to scale each POP according to the size of its subscriber pool.

June 07, 2018, at 11:21 AM by liviu -
Added lines 12-13:

TODO

Added lines 16-17:

TODO

Added lines 20-21:

TODO

Changed lines 24-28 from:

NAT pinging

to:

TODO

NAT pinging

TODO

May 31, 2018, at 03:15 PM by liviu -
Added lines 1-18:
Documentation -> Tutorials -> How To Configure a "Full Sharing" User Location Cluster


How To Configure a "Full Sharing" User Location Cluster

by Liviu Chircu

(:toc-float Table of Content:)


Description

Configuration

Registration Flows

Call Flows

NAT pinging


Page last modified on November 04, 2019, at 04:04 PM