Performance Tests 3.4 -- Scenarios

Details about each test scenario.


1.  Basic Calling Scenarios

1.1  [T1] Minimal Stateful SIP Proxy

In the first test, OpenSIPS behaved as a minimal proxy, just statefully passing messages from the UAC to the UAS (a simple t_relay()). The purpose of this test was to see what performance penalty is introduced by making the proxy stateful. The actual script used for this scenario can be found here.
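As a rough illustration only (the exact script is the one linked above), such a minimal stateful relay boils down to something like the following; the listening socket is an arbitrary example:

    # example listening socket (IP/port are arbitrary)
    socket=udp:10.0.0.10:5060

    loadmodule "proto_udp.so"
    loadmodule "tm.so"        # transaction module, provides t_relay()

    route {
        # statefully relay every request towards the UAS
        t_relay();
    }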

Find the test results in this table, where this particular test is marked as test 1.

1.2  [T2] Stateful proxy with loose routing

In the second test, the OpenSIPS script also implements the "Record-Route" mechanism, recording the path in initial requests and then making sequential requests follow the recorded path. The purpose of this test was to see what performance penalty is introduced by the record-routing and loose-routing mechanism.

The actual script used for this scenario can be found here.
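In broad strokes (a sketch on top of the T1 setup, not the benchmarked script itself), the record/loose routing logic looks like this:

    loadmodule "rr.so"          # record_route() / loose_route()
    loadmodule "sipmsgops.so"   # has_totag()

    route {
        if (has_totag()) {
            # sequential request: follow the path stored via Record-Route
            loose_route();
        } else {
            # initial request: record this proxy in the path
            record_route();
        }
        t_relay();
    }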

Find the test results in this table, where this particular test is marked as test 2.

1.3  [T3] Stateful proxy with loose routing and dialog support

In the third test, we additionally made OpenSIPS dialog-aware. The purpose of this particular test was to determine the performance penalty introduced by the dialog module.

The actual script used for this scenario can be found here.
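Conceptually, the only addition over the T2 sketch is loading the dialog module and starting dialog tracking on the initial INVITE (again, a sketch rather than the actual script):

    loadmodule "dialog.so"

    route {
        if (has_totag()) {
            loose_route();
        } else {
            record_route();
            if (is_method("INVITE"))
                create_dialog();   # make the proxy aware of this call's dialog
        }
        t_relay();
    }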

Find the test results in this table, where this particular test is marked as test 3.

1.4  [T4] Default Script

The fourth test had OpenSIPS running with the default script (provided with the OpenSIPS distribution). In this scenario, OpenSIPS can act as a SIP registrar, properly handle CANCELs, and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used. The purpose of this test was to see what the performance penalty of a more advanced routing logic is, taking into account that the script used by this scenario is an enhanced version of the script used in the 3.2 test.

The actual script used for this scenario can be found here.
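The registrar/USRLOC part of the default script roughly reduces to the fragment below (a simplified sketch; the distributed default script also handles CANCEL, loop detection and more):

    loadmodule "signaling.so"   # needed by registrar to send replies
    loadmodule "usrloc.so"
    loadmodule "registrar.so"

    route {
        if (is_method("REGISTER")) {
            # store the subscriber's contact in the in-memory location table
            save("location");
            exit;
        }

        # route initial requests based on the registered contact
        if (lookup("location"))
            t_relay();
    }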

Find the test results in this table, where this particular test is marked as test 4.

1.5  [T5.1] Default Script with dialog support

This scenario added dialog support on top of the previous one. The purpose of this scenario was to determine the performance penalty introduced by the dialog module.

The actual script used for this scenario can be found here.

Find the test results in this table, where this particular test is marked as test 5.1.

1.6  [T5.2] Default Script with dialog + Topology Hiding

This scenario added topology hiding support on top of the previous one, with Call-ID concealment. The purpose of this scenario was to determine the performance penalty introduced by the topology_hiding module.

The actual script used for this scenario can be found here.
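Script-wise, the addition is the topology_hiding module and a topology_hiding() call on the initial INVITE; the "C" flag is, per the module documentation, the option that also conceals the Call-ID (sequential-request handling is omitted from this sketch):

    loadmodule "topology_hiding.so"

    route {
        if (!has_totag() && is_method("INVITE")) {
            create_dialog();
            # hide the topology towards the callee; "C" additionally
            # conceals the Call-ID
            topology_hiding("C");
        }
        t_relay();
    }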

Find the test results in this table, where this particular test is marked as test 5.2.

1.7  [T6] Default Script with dialog + authentication

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and a local MySQL database served as the DB back-end. The purpose of this test was to see the performance penalty introduced by having the proxy authenticate users.

The actual script used for this scenario can be found here.
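The authentication step is sketched below; the db_url is the stock example value and the challenge arguments vary between OpenSIPS versions, so treat them as placeholders rather than the exact values used in the test:

    loadmodule "db_mysql.so"
    loadmodule "auth.so"
    loadmodule "auth_db.so"
    modparam("auth_db", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

    route {
        if (is_method("INVITE") && !has_totag()) {
            # check credentials against the "subscriber" table in MySQL
            if (!proxy_authorize("", "subscriber")) {
                proxy_challenge("", "auth");   # arguments are illustrative
                exit;
            }
            consume_credentials();
        }
        t_relay();
    }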

Find the test results under the Authenticated Calls table, where this particular test is marked as test 6.

1.8  [T7] Default Script with dialog + authentication (10k sub)

This test used the same script as the previous one, the only difference being that there were 10,000 users in the subscriber table. The purpose of this test was to see how the USRLOC module scales with the number of registered users.

Find the test results under the Authenticated Calls table, where this particular test is marked as test 7.

1.9  [T8] Subscriber caching

Building on the previous test, one critical change was made: we used the OpenSIPS cachedb_local module in order to fully eliminate DB queries during call setup. The cache expiry time was set to 20 minutes, so all 10k registered subscribers remained in the cache for the entire duration of the test. The purpose of this test was to see how much DB queries affect OpenSIPS performance, and how much caching can help.

The actual script used for this scenario can be found here.
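The caching strategy itself is in the linked script; purely as an illustration of the cachedb_local primitives involved, credentials could be cached along these lines (the key layout, query and variable names are invented for this example):

    loadmodule "cachedb_local.so"
    loadmodule "avpops.so"
    modparam("avpops", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

    # fragment from the authentication logic:
    if (!cache_fetch("local", "ha1_$fU", $avp(ha1))) {
        # cache miss: read the credential from MySQL ...
        avp_db_query("SELECT ha1 FROM subscriber WHERE username='$fU'",
                     "$avp(ha1)");
        # ... and keep it cached for 20 minutes (1200 seconds)
        cache_store("local", "ha1_$fU", "$avp(ha1)", 1200);
    }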

Find the test results under the Authenticated Calls table, where this particular test is marked as test 8.

1.10  [T9] CDR Accounting

This test had OpenSIPS running with 10k subscribers, with authentication (no caching), dialog awareness and CDR accounting. The purpose of this test was to see the performance penalty introduced by having OpenSIPS do both READ and WRITE queries to the database.

The actual script used for this scenario can be found here.
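CDR accounting in OpenSIPS is dialog-based; with the dialog module loaded as in the earlier sketches, its setup is roughly the following (the acc db_url is again the stock example value):

    loadmodule "acc.so"
    modparam("acc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")

    route {
        if (is_method("INVITE") && !has_totag()) {
            create_dialog();
            # generate one CDR per call and write it to the database
            do_accounting("db", "cdr");
        }
        t_relay();
    }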

Find the test results under the Authenticated Calls table, where this particular test is marked as test 9.

1.11  [T10] CDR accounting + Auth Caching

In this test, OpenSIPS generated CDRs just as in the previous test, but it also cached the 10k subscribers it had in the MySQL database.

The actual script used for this scenario can be found here.

Find the test results under the Authenticated Calls table, where this particular test is marked as test 10.

1.12  [T11] DB_Flatstore CDR accounting

This test is similar to T9, except that the DB back-end used to write the CDRs is db_flatstore, which writes them directly to disk as CSV records in an optimized fashion.

The actual script used for this scenario can be found here.
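Switching the accounting back-end to db_flatstore is mostly a matter of pointing the acc db_url at a flatstore directory; the path below is an arbitrary example:

    loadmodule "db_flatstore.so"

    # write CDRs as flat CSV-style files instead of going through MySQL
    modparam("acc", "db_url", "flatstore:/var/log/opensips/acc")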

Find the test results under the Authenticated Calls table, where this particular test is marked as test 11.

1.13  [T12] DB_Flatstore CDR accounting + Auth Caching

This last test completely eliminated the need for querying MySQL during call setup, as we now also pre-cached the authentication information for all 10k subscribers.

The actual script used for this scenario can be found here.

Find the test results under the Authenticated Calls table, where this particular test is marked as test 12.

2.  Advanced Calling Scenarios

2.1  [T13.1] B2B Topology Hiding

The purpose of this test was to gauge the performance difference of dialog-based topology hiding vs. B2B topology hiding.

The actual script used for this scenario can be found here.

Find the test results under the Complex Calling Scenarios table, where this particular test is marked as test 13.1.

2.2  [T13.2] B2B REFER Handling

The purpose of this test was to assess the maximum CPS possible in the B2B REFER scenario.

The actual script used for this scenario can be found here.

Find the test results under the Complex Calling Scenarios table, where this particular test is marked as test 13.2.

2.3  [T13.3] B2B Marketing Scenario

The purpose of this test was to assess the maximum CPS possible in the B2B Marketing scenario.

The actual script used for this scenario can be found here.

Find the test results under the Complex Calling Scenarios table, where this particular test is marked as test 13.3.

3.  TCP Tests

The objective here was to compare the TCP and UDP engines under a high volume of SIP traffic. As the nature of the SIP scenario was not too relevant, we went with the simple T1 setup of just performing t_relay() from UAC to UAS, effectively stress-testing the TCP engine to the limit.
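For reference, the TCP variant of that setup only needs a TCP listening socket and the TCP transport module; the values below are arbitrary examples (a sketch, not the tested configuration):

    # example TCP listening socket (IP/port are arbitrary)
    socket=tcp:10.0.0.10:5060

    loadmodule "proto_tcp.so"
    loadmodule "tm.so"

    route {
        # same minimal stateful relay as in T1, now over TCP
        t_relay();
    }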

3.1  [T14.1] Single Connection, parallel read: OFF

In this test, the UAC/UAS sipp instances were configured in single-connection mode, routing all calls through a single TCP connection.

The actual script used for this scenario can be found here.

Find the test results under the TCP Engine Tests table, where this particular test is marked as test 14.1.

3.2  [T14.2] Single Connection, parallel read: 1

WIP: this test uncovered a TCP connection sharing issue at high traffic volumes which is currently being addressed.

3.3  [T14.3] Single Connection, parallel read: 2

WIP: this test uncovered a TCP connection sharing issue at high traffic volumes which is currently being addressed.

3.4  [T14.4] N Connections, parallel read: OFF

In this test, the UAC/UAS sipp instances were configured in multi-connection mode, routing calls through multiple TCP connections. Overall, OpenSIPS was managing TCP connections on the order of thousands.

The actual script used for this scenario can be found here.

Find the test results under the TCP Engine Tests table, where this particular test is marked as test 14.4.

3.5  [T14.5] N Connections, parallel read: 1

WIP: this test uncovered a TCP connection sharing issue at high traffic volumes which is currently being addressed.

3.6  [T14.6] N Connections, parallel read: 2

WIP: this test uncovered a TCP connection sharing issue at high traffic volumes which is currently being addressed.

