Deploying and running distributed SIP services in various clouds is increasingly becoming the default approach these days. For this reason, the upcoming OpenSIPS 3.2 release will focus on increasing OpenSIPS's ability to integrate with cloud-specific services / backends, and it will bring more OpenSIPS capabilities for building distributed (multi PoP/location/DC/zone) SIP services. After all, the two concepts - distributed architecture and in-cloud support - go hand in hand: the biggest advantage of running in clouds is the possibility to scale and distribute organically.
Starting with version 2.4, OpenSIPS has had solid support for clustering, which enables the design and implementation of distributed SIP services with OpenSIPS. Nevertheless, the clustering chapter is a large one that needs to evolve continuously under the pressure of the requirements/demands coming from real-world situations. For OpenSIPS 3.2 we are targeting work on the clustering engine itself, but also on adding clustering support to more modules.
The plan is to improve the clustering support (or the BIN protocol) in order to secure the cluster and enhance its management:
For the Call Center (or call queuing) module we plan to add clustering support and data replication for the call queue - this is extremely important for achieving High-Availability. At the same time, we are looking to add support for a distributed call center - a geo-distributed single call queue which receives calls via different OpenSIPS instances and which has distributed agents connected to different OpenSIPS instances.
There are several modules which may require clustering support in order to be used in distributed deployments - modules that have to share data between all the OpenSIPS nodes in order to achieve a global understanding of the clustered service. Such modules are:
For aggregating the presence state in a distributed system, a multi-level subscription setup may be envisioned. This means a local Presence Server (which a partition of the users subscribe to) may subscribe further to a central/master Presence Server. This will considerably reduce the SUBSCRIBE / NOTIFY traffic and will also offload the notification effort from the central Presence Server.
In a distributed system you will definitely use several media/RTP relays - either in the same location for load-balancing purposes, or in different locations for distribution/short-path purposes. In both cases there is a need to migrate/re-anchor an ongoing call to a different RTP relay. This may be needed for failover reasons or for re-balancing/offloading purposes. We are looking to add this re-anchoring support in OpenSIPS, without any extra requirement from the actual media relay, by using SIP re-INVITEs to re-negotiate the SDPs. This approach will work with RTPproxy, RTPengine and MediaProxy.
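To make the re-INVITE idea concrete: re-anchoring a call to a new relay essentially means re-sending the SDP with the new relay's address and media ports in it. The sketch below (a simplified illustration, not OpenSIPS code - the function name and single-audio-stream assumption are ours) rewrites the two fields a relay substitutes in an SDP body.

```python
# Hypothetical illustration of the SDP rewrite a re-INVITE-based
# re-anchoring would carry: point the connection ("c=") line and the
# audio media port at the new RTP relay.

def reanchor_sdp(sdp: str, new_ip: str, new_audio_port: int) -> str:
    """Rewrite c= lines and the audio m= port to advertise the new relay."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("c=IN IP4 "):
            line = f"c=IN IP4 {new_ip}"
        elif line.startswith("m=audio "):
            parts = line.split(" ")
            parts[1] = str(new_audio_port)   # media port is the 2nd token
            line = " ".join(parts)
        out.append(line)
    return "\r\n".join(out) + "\r\n"

sdp = (
    "v=0\r\n"
    "o=- 1 1 IN IP4 10.0.0.5\r\n"
    "s=-\r\n"
    "c=IN IP4 10.0.0.5\r\n"
    "t=0 0\r\n"
    "m=audio 4000 RTP/AVP 0\r\n"
)
print(reanchor_sdp(sdp, "192.0.2.10", 5000))
```

Sending this rewritten SDP in a re-INVITE makes both endpoints redirect their media to the new relay, with no cooperation needed from the relay itself.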
For an in-cloud distributed system, it is a huge advantage to be able to make use of the different services or functionalities provided by the cloud itself. This means more integration capabilities for OpenSIPS 3.2.
Apache Kafka is an open-source distributed event streaming platform used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. A new Kafka backend is being considered for the Event Interface.
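As a rough idea of what such a backend would carry: OpenSIPS events are name-plus-parameters records, which map naturally onto JSON values on a Kafka topic. The sketch below only builds such a record (the payload shape and topic name are our assumption, not the format of the future backend; producing it requires a broker, e.g. via a kafka-python `KafkaProducer`).

```python
import json

# Hypothetical sketch: an OpenSIPS event serialized as a Kafka record value.
# E_CORE_THRESHOLD is a real OpenSIPS event; the JSON layout is illustrative.

def build_event(event: str, params: dict) -> bytes:
    """Serialize an event as a JSON-encoded Kafka record value."""
    return json.dumps({"event": event, "params": params}).encode("utf-8")

record = build_event("E_CORE_THRESHOLD",
                     {"source": "msg processing", "time": 120})

# With a broker available, a kafka-python producer would then do roughly:
#   KafkaProducer(bootstrap_servers="kafka:9092").send("opensips-events", record)
```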
MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. A new MQTT backend is being considered for the Event Interface.
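What makes MQTT attractive for an event backend is its topic-based routing: subscribers pick events with topic filters using the standard `+` (one level) and `#` (multi-level) wildcards. The sketch below implements that matching rule to show how consumers could filter published OpenSIPS events; the topic names are hypothetical.

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """MQTT topic filter matching: '+' matches one level, '#' the rest."""
    flevels = filter_.split("/")
    tlevels = topic.split("/")
    for i, f in enumerate(flevels):
        if f == "#":                      # multi-level: matches everything below
            return True
        if i >= len(tlevels):
            return False
        if f != "+" and f != tlevels[i]:  # '+' matches exactly one level
            return False
    return len(flevels) == len(tlevels)

print(topic_matches("opensips/+/events", "opensips/proxy1/events"))  # True
print(topic_matches("opensips/#", "opensips/proxy1/stats/tm"))       # True
```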
Prometheus is an open-source engine for collecting and crunching statistics. We are considering building a native Prometheus connector in OpenSIPS for the Statistics Interface.
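Prometheus pulls metrics by scraping a plain-text exposition format over HTTP, so a native connector would essentially publish OpenSIPS statistics in that format. A minimal sketch - the statistic-to-metric name mapping here is our assumption, though `core:rcv_requests` is a real OpenSIPS statistic:

```python
def to_prometheus(stats: dict, prefix: str = "opensips") -> str:
    """Render a dict of statistics in the Prometheus text exposition format."""
    lines = []
    for name, value in stats.items():
        # Prometheus metric names may not contain ':' in this position or '-',
        # so sanitize the OpenSIPS "group:stat" naming.
        metric = f"{prefix}_{name.replace('-', '_').replace(':', '_')}"
        lines.append(f"# TYPE {metric} gauge")
        lines.append(f"{metric} {value}")
    return "\n".join(lines) + "\n"

print(to_prometheus({"core:rcv_requests": 1024, "tm:inuse_transactions": 7}))
```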
Add support for DynamoDB, the noSQL database from Amazon, to improve the experience when it comes to running OpenSIPS in the AWS cloud.
The AWS SSM (Systems Manager) may be used as a centralized secret manager for handling the various credentials to be used by OpenSIPS. For example, you can use SSM to dynamically change (across multiple OpenSIPS instances) the DB credentials in use.
Support for pushing events to, and receiving events from, the AWS-specific event broker. This will be a new backend for the Event Interface.
A Beats plugin for Logstash or Elasticsearch. This will allow OpenSIPS to push data directly into Elasticsearch.
A secure version of the protocol used to communicate with the RTPengine - this will allow the integration of OpenSIPS with RTPengine even across open/public Internet.
Similar to the FreeSWITCH integration, the goal is to make OpenSIPS query Asterisk for load information in real time, in order to adjust the dispatching and load-balancing processes.
Instead of using the XML scenario to drive the B2B logic (the mixing between the calls), we want to use OpenSIPS scripting for this purpose. This will eliminate all the limitations of the XML language (in both logic and actions) and it will tremendously increase the level of integration of the B2B engine with the rest of the OpenSIPS functionalities. In short, more complex B2B logic will be possible, and it will be better integrated with the rest of OpenSIPS.
Instead of applying regexp-based changes over the SDP, we envision a structured way of accessing and modifying the SDP payload, using simple script variables. All changes will be visible on the spot. This will allow multiple changes over the SDP, from script or from modules, while keeping a single, consistent data set. For example, if you change an "a" line in the SDP at script level, the change will be visible to rtpengine. Furthermore, the new SDP from rtpengine will be visible (and changeable) at script level.
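To illustrate the difference from regex rewriting: structured access means the SDP is parsed once into session-level and per-media sections, and every consumer reads or edits fields of that one parsed object. This is only a conceptual sketch - the actual OpenSIPS variable names and interface are not decided, and the parser below is deliberately minimal.

```python
# Illustrative sketch of "structured" SDP access: parse once, edit fields,
# serialize once - instead of regex substitutions on the raw body.

def parse_sdp(sdp: str) -> dict:
    """Split an SDP body into session-level lines and per-media sections."""
    session, streams = [], []
    current = session
    for line in sdp.splitlines():
        if line.startswith("m="):          # an m= line opens a media section
            streams.append([line])
            current = streams[-1]
        else:
            current.append(line)
    return {"session": session, "streams": streams}

def serialize_sdp(parsed: dict) -> str:
    lines = parsed["session"] + [l for s in parsed["streams"] for l in s]
    return "\r\n".join(lines) + "\r\n"

sdp = ("v=0\r\no=- 1 1 IN IP4 10.0.0.5\r\ns=-\r\n"
       "c=IN IP4 10.0.0.5\r\nt=0 0\r\n"
       "m=audio 4000 RTP/AVP 0\r\na=sendrecv\r\n")
parsed = parse_sdp(sdp)

# Change an "a=" attribute of the first media stream, field-style; any other
# consumer of `parsed` (e.g. an rtpengine hook) would see this change.
parsed["streams"][0] = [("a=sendonly" if l == "a=sendrecv" else l)
                        for l in parsed["streams"][0]]
print(serialize_sdp(parsed))
```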
Explore the options of using GnuTLS, LibreTLS or wolfSSL as alternatives to OpenSSL, which has proved to be a quite disruptive library, incompatible with the multi-processing model of OpenSIPS.
Explore options for invoking MI commands from the OpenSIPS script.
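For context on what such an in-script invocation would wrap: in OpenSIPS 3.x, MI commands are carried as JSON-RPC 2.0 requests (this is how opensips-cli and the mi_http module talk to OpenSIPS). The helper below only builds such a request body; the helper name is ours, while `ds_list` is a real MI command (from the dispatcher module).

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need unique ids

def mi_request(command: str, params=None) -> str:
    """Build a JSON-RPC 2.0 request body for an OpenSIPS MI command."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": command}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

print(mi_request("ds_list"))  # dispatcher's MI command listing destinations
```

An in-script `mi()`-style function would effectively short-circuit this round trip, executing the command directly instead of going through an external transport.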
While right now we can trace SIP traffic (and logs) via HEP or to a DB, a syslog backend may be envisioned for simple tracing needs/scenarios.
We are running an OpenSIPS 3.2 Feature Survey (due 11th January 2021), and we would like to gather opinions on the currently chosen feature set, as well as any additional ideas you may have. Your feedback will help us prioritize the work that will go into the upcoming 3.2 release. Thank you!
Many thanks to all of you who voted in this poll! Please find the poll results below. Regarding the additional feature suggestions we received, we will go through them and pick the most popular / interesting ones in a future announcement.
We will try to keep this list updated with the development status of each feature, so you can have a clear view of the 3.2 progress. Nevertheless, we strongly recommend that you also check the full 3.2 feature list.
| Feature Code | Feature Name | Score (1-5) | Implementation Status |
|--------------|--------------|-------------|-----------------------|
| Misc-2 | Structured SDP manipulation | 4.31 | no-go |
| Cluster-3 | Clustering more modules | 4.20 | done |
| Cloud-8 | Secure RTPEngine (NG protocol) | 4.18 | invalid* |
| Misc-5 | Tracing to log | 4.13 | done |
| Misc-1 | Script driven B2B | 3.95 | done |
| Cluster-5 | RTP stream re-anchoring | 3.91 | done |
| Misc-4 | MI from script | 3.81 | done |
| Cluster-2 | Distributed Call Center | 3.30 | no-go |
| Cloud-6 | AWS CloudWatch, SQS, SNS | 2.92 | no-go |
| Cluster-4 | Multi-level presence subscription | 2.81 | no-go |
| Cloud-5 | AWS System Manager (SSM) | 2.62 | no-go |
* There is no secure way to communicate with RTPEngine - the NG protocol is the actual protocol we are using, and it is basically BSON over TCP.