Release 2.4.0 Overview

OpenSIPS 2.4 philosophy

The OpenSIPS 2.4 version is built around the clustering concept. Today's VoIP world is getting more and more dynamic: services are moving into clouds, and applications need increasing flexibility to fully exploit such environments. But let's pinpoint the main reasons for going with a clustered approach:

  • scaling up with the processing/traffic load
  • geographical distribution
  • redundancy and High-Availability


For OpenSIPS 2.4 we laid down a roadmap that addresses clustering both in the clustering engine itself (the underlayer) and in the functionalities that run on top of the clustering layer to share data and state, to synchronize and to correlate.

This OpenSIPS 2.4 release is the star of the OpenSIPS Summit in Amsterdam, May 2018 - besides presentations and workshops around the cool new things in this version, OpenSIPS 2.4 will also be the subject of several interactive demos of its clustering capabilities.


The best of

Before even thinking of building clustering support for high-level services like User Location, Dialog Tracking or SIP Presence, it is mandatory to have in place a powerful and flexible clustering engine. Such an engine becomes the reliable foundation for approaching more complex clustering scenarios. The OpenSIPS 2.4 clustering engine was completely reworked in order to address more needs at the topology layer (more dynamic), at the capabilities layer (more flexible) and at the application layer (full data syncing and action partitioning support). For a detailed description of the clustering engine capabilities, please see the blog post.
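
Getting the engine running takes little wiring in the script. Below is a minimal sketch; the parameter names ("db_url", "current_id") follow our recollection of the 2.4 clusterer module docs and should be double-checked, and the node topology itself is provisioned in the "clusterer" DB table:

    # minimal clusterer setup sketch -- the node topology lives in
    # the "clusterer" DB table, not in the script
    loadmodule "clusterer.so"
    modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
    # the id of this node, as provisioned in the clusterer table
    modparam("clusterer", "current_id", 1)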


Clustering User Registrations

This is a very complex topic, as it exceeds the simple concept of data sharing. By the nature of the data (the user registrations), you may have different constraints on how data roams within a cluster - registrations may be tied to a node due to network constraints.
The User Location engine in OpenSIPS 2.4 approaches the clustering topic by considering:

  • the amount of shared data - full data sharing versus data federating / partitioning
  • the mechanism for sharing the data - via the built-in clustering engine or via an external NoSQL database
  • the coordination of the pinging effort - the nodes partition the pinging work among themselves

The module offers pre-defined clustering modes that cover scenarios like active-backup High Availability, multi-active Load Balancing, geo-distributed federation and data partitioning for scaling.
This topic is covered in more depth in this recent blog post.
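
As an illustration, enabling a full-sharing cluster comes down to a couple of module parameters. This is a minimal sketch, assuming the "working_mode_preset" and "location_cluster" parameters of the 2.4 usrloc module (check the module docs for the full list of presets):

    loadmodule "usrloc.so"
    # pick one of the pre-defined clustering modes; other presets
    # cover the federation / partitioning scenarios
    modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
    # the cluster used for sharing the registrations
    modparam("usrloc", "location_cluster", 1)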


Anycast support

A common Anycast setup is to assign the anycast IPs to the nodes at the edge of your platform, facing the clients. This setup ensures that all three features (load balancing, geo-distribution and high-availability) are provided for your customers' inbound calls. Building full-featured anycast support (addressing both redundancy and balancing) requires OpenSIPS to replicate/share transaction state across the nodes in the cluster (the nodes sharing the same anycast IP).
Our full anycast solution aims to always keep the anycast IPs in the route for the entire call. This means that your clients will always have one single IP to provision, the anycast IP. And when a node goes down, all sequential messages will be re-routed (by the router) to the next available node. Of course, this node needs to have the entire call information to be able to properly close the call, but that can be easily done in OpenSIPS using dialog replication.
The anycast support added in the Transaction module ensures that all the nodes in the cluster can correlate the parts of the SIP traffic they receive, so that all the transaction events (replies, retransmissions, CANCELs) aggregate and work as expected.
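A minimal sketch of the wiring is shown below; it assumes the 2.4 "anycast" listener flag, the tm "tm_replication_cluster" parameter and the t_anycast_replicate() function (verify all three against the docs), and the IP is just an example:

    # the shared anycast IP, flagged as such
    listen = udp:203.0.113.10:5060 anycast

    loadmodule "tm.so"
    # share transaction state with the other nodes behind the anycast IP
    modparam("tm", "tm_replication_cluster", 1)

    route {
        if (is_method("CANCEL") && !t_check_trans()) {
            # the INVITE transaction lives on a sibling node --
            # hand the CANCEL over to the cluster
            t_anycast_replicate();
            exit;
        }
        # ... regular routing logic ...
    }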
More details on the Anycast capabilities are provided in this excellent blog post.


Clustering Ongoing Calls

In order to fully cluster the dialogs (ongoing calls) you need more than simple data replication. First, you need full data sharing - that means the ability to bulk-replicate the entire data set, at any moment, between the nodes. A freshly started node will get synchronized (in terms of ongoing calls) in no time, being ready to handle traffic on the spot.
Secondly, OpenSIPS 2.4 introduces the concept of dialog ownership, in order to correlate the nodes in terms of which node triggers actions for a certain dialog - if you have the dialog data shared across 6 nodes, you definitely want to avoid getting 6 timeout events with 6 separate CDRs. How the dialog clustering works is explained in detail in this blog post.
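
Enabling the sharing is, again, a matter of module parameters. A minimal sketch, with the parameter names taken from our reading of the 2.4 dialog module docs (double-check before use):

    loadmodule "dialog.so"
    # share the dialog state with the other nodes in cluster 1
    modparam("dialog", "dialog_replication_cluster", 1)
    # optionally share the dialog profiles (counters) as well
    modparam("dialog", "profile_replication_cluster", 1)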


Clustering Presence Services

By using the clustering engine, the presence module provides new ways of distributing and correlating the presence data:

  • data can be fully shared between nodes (via a database) or it can be federated / partitioned across the nodes by using the clustering engine (with broadcasts and node-2-node querying)
  • to keep consistency over the actions triggered by data, a data ownership concept is implemented. Even if data is shared across several nodes, there is only one action triggered at the cluster level. For example, when a presentity expires, there is only one set of notifications sent to the subscribers.

Several clustering scenarios are possible with the presence module, addressing High Availability, Load Balancing and Geo-distribution. More details on presence clustering and on how to implement the above scenarios can be found in this release blog post.
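
As a rough illustration only, a clustered presence setup comes down to pointing the module at a cluster. The parameter names below are assumptions from memory, so treat this strictly as a sketch and check the 2.4 presence module docs:

    loadmodule "presence.so"
    # hypothetical parameter names -- verify against the module docs
    modparam("presence", "cluster_id", 1)
    # enable the federated (partitioned data) behavior
    modparam("presence", "cluster_federation_mode", 1)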


And more on OpenSIPS 2.4

There is a long list of things that were added or improved in OpenSIPS 2.4, but sticking to the most relevant ones, a few deserve a special mention.

SIPREC-based call recording

SIPREC is an IETF standard that describes how to do call recording to an external recorder. It contains specifications about how to send both call metadata and RTP streams to the recorder in a send-only mode, without any impact on the ongoing call. The SIPREC module in OpenSIPS implements this protocol, offering the ability to do call recording for any call that is proxied through it. Being a standard, it can be used to integrate OpenSIPS with any call recorder that implements the protocol, like the Oreka OSS recorder. See more here.
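
In script, starting a recording session is a single call. A minimal sketch, assuming the call's media is already relayed through an RTP proxy, dialog support is loaded, and the SRS URI is just an example:

    loadmodule "siprec.so"

    route {
        if (is_method("INVITE") && !has_totag()) {
            create_dialog();
            # stream a copy of the call (metadata + media) to the SRS
            siprec_start_recording("sip:recorder.example.com:5060");
        }
        # ... regular routing logic ...
    }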

Scripting Advanced FreeSWITCH Integration

We wanted to go beyond a simple load-balancing-driven integration, and actually offer the OpenSIPS script writer the power to work with bi-directional, generic communication primitives between OpenSIPS and the FreeSWITCH ESL: (a) subscribe to generic FreeSWITCH events via DB, MI or modparam, (b) catch and manipulate FreeSWITCH event information within an event_route and (c) run a FreeSWITCH ESL command on any FreeSWITCH node, from any route. All the details can be found here.
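
A sketch of primitives (a)-(c) is given below; the ESL URL format, the event name and the parameter names follow our recollection of the 2.4 freeswitch_scripting docs and should be verified before use:

    loadmodule "freeswitch.so"
    loadmodule "freeswitch_scripting.so"
    # (a) subscribe to FreeSWITCH HEARTBEAT events at startup
    modparam("freeswitch_scripting", "fs_subscribe", "fs://:ClueCon@10.0.0.20:8021?HEARTBEAT")

    # (b) catch the event data in an event_route
    event_route [E_FREESWITCH] {
        xlog("FreeSWITCH event $param(name) from $param(sender)\n");
    }

    route {
        # (c) run an ESL command on a given FreeSWITCH box, from any route
        freeswitch_esl("api status", "fs://:ClueCon@10.0.0.20:8021", "$var(fs_reply)");
    }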

JSON RPC support

The new EVENT_JSONRPC module in OpenSIPS 2.4 implements a transport protocol for the OpenSIPS Event Interface. Using this module, you can notify applications about OpenSIPS internal events using the JSON-RPC protocol.
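For instance, an external application can be subscribed at startup. A sketch, assuming the core subscribe_event() function and the "jsonrpc:" socket schema of the new module:

    loadmodule "event_jsonrpc.so"

    startup_route {
        # push usrloc contact-insert events to a JSON-RPC listener
        subscribe_event("E_UL_CONTACT_INSERT", "jsonrpc:127.0.0.1:8888");
    }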
There is also a new JSONRPC module in OpenSIPS 2.4 that provides functions to run JSON-RPC commands against a remote JSON-RPC server and to retrieve the response of the call.
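A usage sketch, with the signature (destination, method, params, return variable) as we recall it from the 2.4 jsonrpc module docs:

    loadmodule "jsonrpc.so"

    route {
        # query an external JSON-RPC server and use the reply
        if (jsonrpc_request("127.0.0.1:8888", "get_rate", "[\"$fU\"]", "$var(rate)"))
            xlog("rate for $fU is $var(rate)\n");
    }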
We love JSON-RPC because it is flexible and powerful but, most importantly, because it is a standard way of integrating with external applications.

RTPEngine gets better

There are two important enhancements to the RTPengine module in OpenSIPS 2.4:

  • the ability to fetch any statistic from the RTPengine relay, which gives you access to the MOS values reported by the endpoints;
  • DB-based provisioning of the RTPengine endpoints, so you can dynamically add, remove and reload the set of RTPengine instances you use (sketched below).
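
The DB provisioning part, sketched; the "db_url" parameter and the "rtpengine_reload" MI command are taken from our reading of the 2.4 module docs:

    loadmodule "rtpengine.so"
    # load the RTPengine sets from the "rtpengine" DB table ...
    modparam("rtpengine", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
    # ... and reload them at runtime with:
    #   opensipsctl fifo rtpengine_reload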

Internal Load statistics

OpenSIPS 2.4 comes with a complete re-design of the statistics that give information about the OpenSIPS internal load. The load is defined as the percentage of time spent doing processing versus the total time. Following the model of "top", there are three load values, calculated over different periods of time: (a) realtime, (b) last minute and (c) last 10 minutes. The load can be accessed per process, per OpenSIPS core (covering only the core/SIP processes) and for the entire OpenSIPS instance (covering all processes, core and modules). See the full list here.
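
Since these are plain statistics, they can also be read from the script. A sketch, assuming "load" and "load1m" are among the exported statistic names (check the statistics list for the exact names):

    route {
        # log the realtime and last-minute load of the core processes
        xlog("core load: $stat(load) / last minute: $stat(load1m)\n");
    }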

Pinging Latency

Starting with OpenSIPS 2.3, using the nathelper module, you could keep track of the sent pings and detect disconnected endpoints. Taking a step further, OpenSIPS 2.4 can calculate the ping latency and use this information in multiple ways. For example, it can fire an event whenever the latency variation exceeds a given threshold; or you can instruct lookup(location) to skip contacts with a latency above a given threshold, or to order the contacts based on their latency (versus Q or time-based ordering).
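
As a rough sketch only: the parameter names below are hypothetical illustrations of the idea, not the documented ones, so check the 2.4 nathelper and registrar docs for the real names:

    # hypothetical parameter names, for illustration only
    modparam("nathelper", "sipping_latency_flag", "latency_cflag")
    # hypothetical: order looked-up contacts by measured latency
    modparam("registrar", "latency_based_ordering", 1)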


The full list of goodies offered by OpenSIPS 2.4 (and a more technical one, too), together with migration instructions, can be found on the OpenSIPS 2.4 release notes page.


