Documentation |
Documentation.Tutorials-Distributed-User-Location-Full-Sharing HistoryHide minor edits - Show changes to markup November 04, 2019, at 04:04 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one. to:
This is the ultra-scalable version of the OpenSIPS user location, allowing you to support subscriber pool sizes exceeding the order of millions. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one. November 04, 2019, at 04:00 PM
by
- Changed line 222 from:
# store the registration into the NoSQL DB to:
# store the registration, along with the Path header, into the NoSQL DB November 04, 2019, at 03:59 PM
by
- Changed line 129 from:
to:
November 04, 2019, at 12:11 PM
by
- Changed lines 35-36 from:
Active/backup "full sharing" setupto:
Active/passive "full sharing" setupChanged line 112 from:
To prevent any "permission denied" error logs on the passive node that's trying to originate NAT pings, make sure to hook the nh_enable_ping MI command into your active->backup and backup->active transitions of the VIP: to:
To prevent any "permission denied" error logs on the passive node that's trying to originate NAT pings, make sure to hook the nh_enable_ping MI command into your active->passive and passive->active transitions of the VIP: November 04, 2019, at 11:54 AM
by
- Changed line 12 from:
Tip: For a broader view on the "full sharing" setups, see this blog post. to:
Tip: For a broader view on the "full sharing" topology, see this blog post. November 02, 2019, at 10:46 AM
by
- Changed line 33 from:
Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool. to:
Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool. November 01, 2019, at 10:40 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage. This, in turn, allows each system to be scaled without wasting resources or affecting the other one. to:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling and data storage systems. This, in turn, allows each system to be scaled without wasting resources or affecting the other one. November 01, 2019, at 10:39 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so. to:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage. This, in turn, allows each system to be scaled without wasting resources or affecting the other one. November 01, 2019, at 10:35 PM
by
- Changed line 20 from:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: to:
The "full sharing" clustering strategy for the OpenSIPS 2.4+ user location service is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: November 01, 2019, at 10:35 PM
by
- Changed line 20 from:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services (a.k.a. "native full sharing") is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: to:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way of performing full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: November 01, 2019, at 07:33 PM
by
- Changed line 12 from:
Tip: For an overall view on the "full sharing" setups, see this blog post. to:
Tip: For a broader view on the "full sharing" setups, see this blog post. November 01, 2019, at 07:33 PM
by
- Changed line 12 from:
Tip: For an overall view on the "full sharing" strategy, see this blog post. to:
Tip: For an overall view on the "full sharing" setups, see this blog post. November 01, 2019, at 07:28 PM
by
- Changed line 82 from:
"full sharing" clusterer table example to:
Native "full sharing" clusterer table Changed line 197 from:
"full sharing" clusterer table example to:
NoSQL "full sharing" clusterer table November 01, 2019, at 07:27 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so. to:
This is the ultra-scalable version of the OpenSIPS user location. By letting an external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so. November 01, 2019, at 07:23 PM
by
- Changed line 134 from:
On the backend layer, here is the relevant opensips.cfg sections for the cluster nodes: to:
On the backend layer (cluster instances), here are the relevant opensips.cfg sections: November 01, 2019, at 07:22 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each system according to the current need, without wasting resources or affecting the other one by doing so. to:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each of these systems according to the current need, without wasting resources or affecting the other one by doing so. November 01, 2019, at 07:21 PM
by
- Changed line 121 from:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale system horizontally, according to the current need. to:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale each system according to the current need, without wasting resources or affecting the other one by doing so. November 01, 2019, at 07:21 PM
by
- Added lines 120-121:
This is the ultra-scalable version of the OpenSIPS user location. By letting a external, specialized database cluster manage all the registration data, we are able to decouple the SIP signaling from data storage, thus being able to scale system horizontally, according to the current need. November 01, 2019, at 07:18 PM
by
- Changed line 119 from:
NoSQL "full sharing" with a SIP front-endto:
NoSQL "full sharing" cluster with a SIP front-endNovember 01, 2019, at 07:16 PM
by
- Changed line 198 from:
NAT pingingto:
Shared NAT pingingNovember 01, 2019, at 07:15 PM
by
- Deleted line 142:
modparam("usrloc", "shared_pinging", 1) November 01, 2019, at 07:15 PM
by
- Deleted line 162:
modparam("clusterer", "seed_fallback_interval", 5) November 01, 2019, at 07:14 PM
by
- Changed line 194 from:
to:
November 01, 2019, at 07:14 PM
by
- Added line 222:
# store the registration into the NoSQL DB November 01, 2019, at 07:13 PM
by
- Added line 173:
# store the registration into the NoSQL DB Deleted line 221:
# store the registration into the NoSQL DB November 01, 2019, at 07:13 PM
by
- Added line 221:
# store the registration into the NoSQL DB November 01, 2019, at 07:11 PM
by
- Changed line 127 from:
to:
November 01, 2019, at 07:11 PM
by
- Changed lines 35-36 from:
Basic active/backup setupto:
Active/backup "full sharing" setupAdded lines 117-226:
@] NoSQL "full sharing" with a SIP front-endConfigurationFor the smallest possible setup, you will need:
listen = sip:10.0.0.177 listen = bin:10.0.0.177 loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "working_mode_preset", "full-sharing-cachedb-cluster") modparam("usrloc", "shared_pinging", 1) modparam("usrloc", "location_cluster", 1) # with Cassandra, make sure to create the keyspace and table beforehand: # CREATE KEYSPACE IF NOT EXISTS opensips WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true; # USE opensips; # CREATE TABLE opensips.userlocation ( # aor text, # aorhash int, # contacts map<text, frozen<map<text, text>>>, # PRIMARY KEY (aor)); loadmodule "cachedb_cassandra.so" modparam("usrloc", "cachedb_url", "cassandra://10.0.0.180:9042/opensips.userlocation") # with MongoDB, we don't need to create any database or collection... loadmodule "cachedb_mongodb.so" modparam("usrloc", "cachedb_url", "mongodb://10.0.0.180:27017/opensipsDB.userlocation") loadmodule "clusterer.so" modparam("clusterer", "current_id", 1) # node number #1 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips") loadmodule "proto_bin.so" ... route { ... if (!save("location", "p1v")) { send_reply("500", "Server Internal Error"); exit; } ... } ProvisioningINSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \ (NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \ (NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);
"full sharing" clusterer table example
NAT pingingloadmodule "nathelper.so" modparam("nathelper", "natping_interval", 30) modparam("nathelper", "sipping_from", "sip:pinger@localhost") modparam("nathelper", "sipping_bflag", "SIPPING_ENABLE") modparam("nathelper", "remove_on_timeout_bflag", "SIPPING_RTO") modparam("nathelper", "max_pings_lost", 5) # partition pings across cluster nodes modparam("usrloc", "shared_pinging", 1) We then enable these branch flags for some or all contacts before calling save(): [@ ... setbflag(SIPPING_ENABLE); setbflag(SIPPING_RTO); if (!save("location", "p1v")) { sl_reply_error(); exit; } ... November 01, 2019, at 06:08 PM
by
- Changed lines 83-117 from:
to:
NAT pingingSome setups require periodic SIP OPTIONS pings originated by the registrar towards some of the contacts in order to keep the NAT bindings alive. Here is an example configuration: loadmodule "nathelper.so" modparam("nathelper", "natping_interval", 30) modparam("nathelper", "sipping_from", "sip:pinger@localhost") modparam("nathelper", "sipping_bflag", "SIPPING_ENABLE") modparam("nathelper", "remove_on_timeout_bflag", "SIPPING_RTO") modparam("nathelper", "max_pings_lost", 5) We then enable these branch flags for some or all contacts before calling save(): ... setbflag(SIPPING_ENABLE); setbflag(SIPPING_RTO); if (!save("location")) sl_reply_error(); ...
opensipsctl fifo nh_enable_ping 1 # run this on the machine that takes over the VIP (new active) opensipsctl fifo nh_enable_ping 0 # run this on the machine that gives up the VIP (new passive) November 01, 2019, at 06:03 PM
by
- Changed line 29 from:
IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) should be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated. to:
IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated. November 01, 2019, at 06:02 PM
by
- Changed lines 53-54 from:
listen = bin:10.0.0.178 to:
listen = bin:10.0.0.177 Changed lines 82-132 from:
clusterer table example
Advanced "full sharing" + SBC setupConfigurationFor the smallest possible setup, you will need:
listen = sip:10.0.0.150 # virtual IP (same on both nodes) listen = bin:10.0.0.178 loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "working_mode_preset", "full-sharing-cluster") modparam("usrloc", "location_cluster", 1) loadmodule "clusterer.so" modparam("clusterer", "current_id", 1) # node number #1 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips") loadmodule "proto_bin.so" ProvisioningINSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \ (NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \ (NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);
clusterer table example to:
"full sharing" clusterer table example November 01, 2019, at 05:58 PM
by
- Changed lines 10-11 from:
Descriptionto:
DescriptionChanged lines 35-38 from:
ConfigurationFor the smallest possible setup (e.g. an active/backup), you will need: to:
Basic active/backup setupConfigurationFor the smallest possible setup (a 2-node active/passive with a virtual IP in front), you will need: Changed lines 42-43 from:
to:
Changed lines 53-54 from:
listen = bin:10.0.0.178 # the to:
listen = bin:10.0.0.178 Changed line 61 from:
modparam("clusterer", "current_id", 3) # node number #3 to:
modparam("clusterer", "current_id", 1) # node number #1 Changed lines 68-71 from:
to:
ProvisioningChanged lines 85-91 from:
Call FlowsTODO NAT pingingTODO to:
Advanced "full sharing" + SBC setupConfigurationFor the smallest possible setup, you will need:
listen = sip:10.0.0.150 # virtual IP (same on both nodes) listen = bin:10.0.0.178 loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "working_mode_preset", "full-sharing-cluster") modparam("usrloc", "location_cluster", 1) loadmodule "clusterer.so" modparam("clusterer", "current_id", 1) # node number #1 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips") loadmodule "proto_bin.so" ProvisioningINSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \ (NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \ (NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL);
clusterer table example
November 01, 2019, at 05:46 PM
by
- Changed lines 37-38 from:
For the smallest possible setup (e.g. active/backup), you will need: to:
For the smallest possible setup (e.g. an active/backup), you will need: Added lines 49-51:
listen = sip:10.0.0.150 # virtual IP (same on both nodes) listen = bin:10.0.0.178 # the Added lines 61-62:
loadmodule "proto_bin.so" Changed lines 72-73 from:
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL), (NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, NULL, NULL, NULL); to:
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL); Deleted line 79:
November 01, 2019, at 05:39 PM
by
- Added lines 42-43:
Changed line 78 from:
"full sharing" clusterer table example to:
clusterer table example November 01, 2019, at 05:37 PM
by
- Changed lines 35-36 from:
Configuration (with cluster sync)to:
ConfigurationChanged lines 40-41 from:
to:
Deleted lines 50-76:
loadmodule "clusterer.so" modparam("clusterer", "current_id", 3) # node number #3 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips") @]
Configuration (with MySQL)For the smallest possible setup (e.g. active/backup), you will need:
The relevant opensips.cfg sections:
November 01, 2019, at 05:28 PM
by - November 01, 2019, at 05:26 PM
by
- Changed lines 35-41 from:
ConfigurationTODO Registration FlowsTODO to:
Configuration (with cluster sync)For the smallest possible setup (e.g. active/backup), you will need:
The relevant opensips.cfg sections:
loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "working_mode_preset", "full-sharing-cluster") modparam("usrloc", "location_cluster", 1) loadmodule "clusterer.so" modparam("clusterer", "current_id", 3) # node number #3 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
Configuration (with MySQL)For the smallest possible setup (e.g. active/backup), you will need:
The relevant opensips.cfg sections:
loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "cluster_mode", "full-sharing") modparam("usrloc", "restart_persistency", "load-from-sql") modparam("usrloc", "location_cluster", 1) modparam("usrloc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips") loadmodule "clusterer.so" modparam("clusterer", "current_id", 3) # node number #3 modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries, priority, sip_addr, flags, description) VALUES \ (NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, NULL, 'seed', NULL), \ (NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, NULL, NULL, NULL), \ (NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, NULL, NULL, NULL);
"full sharing" clusterer table example
November 01, 2019, at 04:24 PM
by
- Changed lines 20-21 from:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: to:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services (a.k.a. "native full sharing") is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers: Changed line 25 from:
to:
November 01, 2019, at 04:22 PM
by
- Changed line 16 from:
http://opensips.org/pub/images/full-sharing.png to:
http://opensips.org/pub/images/full-sharing.png November 01, 2019, at 04:22 PM
by
- Changed line 16 from:
http://opensips.org/pub/images/full-sharing.png to:
http://opensips.org/pub/images/full-sharing.png November 01, 2019, at 04:22 PM
by
- Changed line 16 from:
http://opensips.org/pub/images/full-sharing.png to:
http://opensips.org/pub/images/full-sharing.png November 01, 2019, at 04:18 PM
by
- Added lines 13-16:
November 01, 2019, at 04:08 PM
by
- Changed lines 12-13 from:
Tip: For an overall view on the "full sharing" working mode, see this blog post. to:
Tip: For an overall view on the "full sharing" strategy, see this blog post. Changed lines 16-18 from:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:
to:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any SIP UA registered to the cluster. This type of clustering offers:
Changed lines 25-26 from:
IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated. to:
IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) should be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated. Changed line 29 from:
Building upon this setup, the clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data between multiple points of presence, allowing you to scale each POP according to the size of its subscriber pool. to:
Building upon this setup, the federated user location clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data across different points of presence, allowing you to scale each POP according to the size of its subscriber pool. November 01, 2019, at 03:52 PM
by
- Changed line 16 from:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the cluster nodes. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers: to:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the nodes of an OpenSIPS cluster. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers: November 01, 2019, at 03:51 PM
by
- Added lines 23-24:
Added lines 26-27:
\\ November 01, 2019, at 03:51 PM
by
- Added lines 13-14:
\\ November 01, 2019, at 03:51 PM
by
- Added lines 11-12:
Tip: For an overall view on the "full sharing" working mode, see this blog post. November 01, 2019, at 03:47 PM
by
- Changed lines 12-21 from:
TODO to:
The "full sharing" clustering strategy for OpenSIPS 2.4+ user location services is a way to perform full-mesh data replication between the cluster nodes. Each node will hold the entire user location dataset, thus being able to serve lookups for any registered SIP UA. This type of clustering offers:
IMPORTANT: a mandatory requirement of the full sharing clustering strategy is that any node must be able to route to any registered SIP UA. With simple full sharing setups, such as active/passive, this can be achieved by using a shared virtual IP address between the two nodes. If dealing with larger cluster sizes or if the endpoints register via TCP/TLS, then a front-ending entity (e.g. a SIP load balancer) must be placed in front of the cluster, with enabled Path header support, so any network routing restrictions are alleviated. Building upon this setup, the clustering strategy ensures similar features as above, except it will not full-mesh replicate user location data between multiple points of presence, allowing you to scale each POP according to the size of its subscriber pool. June 07, 2018, at 11:21 AM
by
- Added lines 12-13:
TODO Added lines 16-17:
TODO Added lines 20-21:
TODO Changed lines 24-28 from:
NAT pingingto:
TODO NAT pingingTODO May 31, 2018, at 03:15 PM
by
- Added lines 1-18:
Documentation -> Tutorials -> How To Configure a "Full Sharing" User Location ClusterThis page has been visited 8793 times. How To Configure a "Full Sharing" User Location Clusterby Liviu Chircu (:toc-float Table of Content:) DescriptionConfigurationRegistration FlowsCall FlowsNAT pinging |