
Performance Tests



A collection of performance tests and measurements performed on OpenSIPS 3.4, covering various subsystems: database, transactions, dialogs, etc. These tests should give you a broad idea of what you could achieve on your own OpenSIPS setup using similar hardware!


1.  Purpose

The objective of the stress tests was to re-assess the performance of various OpenSIPS subsystems, ahead of the upcoming 3.4 beta release. Apart from putting updated maximum capacity numbers on these modules, the tests also pinpointed various performance bottlenecks in each scenario, thanks to code profiling.

2.  Overview

  • the stress-tests were broken down into three categories: calling tests, B2B tests and TCP engine tests
  • within each category, we gradually increased the amount of features (code) run through by each test
  • the upper limit of each test was determined by various metrics: maxed-out CPU usage on the OpenSIPS box, error logs appearing at the capacity limit, or an accumulating UDP/TCP Recv-Queue (see the monitoring sketch after this list)
  • once the CPS limit was found, we performed profiling, analyzed the CPU usage map and tried to spot bottlenecks
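
As a reference point, this is roughly how one can watch for Recv-Queue build-up during such a test (a minimal sketch, assuming the standard 5060 SIP port and a 1-second refresh):

    # print the Recv-Q of the UDP and TCP SIP listening sockets every second;
    # a steadily growing Recv-Q means OpenSIPS can no longer keep up
    watch -n 1 "ss -nlu 'sport = :5060'; ss -nlt 'sport = :5060'"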

2.1  Setup Description


  • all tests used the F_MALLOC memory allocator (the default in all public builds). A performance comparison between F_MALLOC, Q_MALLOC and HP_MALLOC can be found in a separate set of tests below
  • the CPU-bound tests (1-6) used a maximum of 8 UDP workers (typically 4), in order to minimize context-switching (since the OpenSIPS system was a quad-core -- 1:1 worker/CPU mapping)
  • starting with test #7, the SIP workers were bumped to 8, to cope with the added I/O operations (8 workers were enough to sustain the required ~6k CPS); see the configuration sketch after this list
  • in tests 1-6, the proxy was pushed to the maximum possible CPU load, while in tests 7-14 the traffic was kept constant at 6000 CPS and we instead monitored the CPU load penalty as we progressed through the tests
  • average call duration: 30 seconds
  • UDP was used as the transport protocol for the majority of tests, unless stated otherwise
  • latest git revision the tests were run on: b0068befd (May 9th, master branch)
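
Worker-count-wise, such a setup boils down to a couple of opensips.cfg global parameters (a minimal sketch; the listening IP address is illustrative):

    # tests 1-6: 1:1 SIP worker / CPU core mapping on the quad-core box
    socket=udp:10.0.0.10:5060
    udp_workers=4    # bumped to 8 starting with test #7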

For all SIP traffic generation purposes, sipp was the main tool which got the job done. Since sipp is a single-threaded application, both the UAC and the UAS sides were found to reach their capacity limit at around 2500 - 3000 CPS. So we simply scaled them horizontally, by launching more clients and servers, as sketched below!
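
Such a horizontally scaled run could look roughly like this (a sketch only: the IP addresses, ports and per-instance rates are made up, with each instance kept below its single-process ceiling):

    # start 3 sipp servers (UAS), each listening on its own port
    for port in 5070 5071 5072; do
        sipp -sn uas -i 10.0.0.20 -p $port -bg
    done

    # start 3 sipp clients (UAC), each pushing 2000 CPS through the proxy,
    # with 30-second calls (-d is expressed in milliseconds)
    for port in 6070 6071 6072; do
        sipp -sn uac -i 10.0.0.21 -p $port -r 2000 -d 30000 -bg 10.0.0.10:5060
    done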

2.2  Hardware

  • Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (4 cores, 8 threads, launch date: Q1'17)
  • 16 GB DDR4 (Kingston)
  • SSD 850 EVO 250GB (Samsung)

3.  Raw Results

The following tables show the raw CPS data obtained in each scenario. Notes:

  • the Avg. CPU column represents the average CPU usage of the SIP worker processes, as shown by top
  • the Load-1m column represents the average OpenSIPS load over 1 minute of the SIP worker processes only, extracted from the load: statistic (see the query example after these notes)
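
For example, the load group of statistics can be pulled at runtime over the MI interface (a sketch using opensips-cli; statistic names as per the core statistics documentation):

    # fetch the whole "load" group of statistics
    opensips-cli -x mi get_statistics load:
    # or just the 1-minute average over the SIP workers
    opensips-cli -x mi get_statistics load1m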

3.1  Basic Calling Scenarios (transactions, dialogs)


Unauthenticated Calls
Test ID | Description       | CPS   | Avg. CPU | Load-1m | Avg. IN/OUT Traffic | Profiling | Notes
1       | TM                | 13000 | 77%      | 80%     | 43 MB/s             | PDF       |
2       | 1 + RR            | 12500 | 83%      | 84%     | 42 MB/s             | PDF       |
3       | 2 + DIALOG        | 10000 | 95%      | 94%     | 36 MB/s             | PDF       |
4       | DEF. Script       | 10500 | 82%      | 64%     | 36 MB/s             | PDF       |
5.1     | 4 + DIALOG        | 10000 | 86%      | 73%     | 36 MB/s             | PDF       |
5.2     | 5.1 + TH(Call-ID) | 6250  | 91%      | 88%     | 20 MB/s             | PDF       |



Authenticated Calls
Test ID | Description       | CPS  | Avg. CPU | Load-1m | Avg. IN/OUT Traffic | Profiling | Notes
6       | 5.1 + AUTH 1k     | 6000 | 54%      | 65%     | 26 MB/s             | PDF       | MySQL 60%+ CPU usage
7       | 5.1 + AUTH 10k    | 6000 | 59%      | 65%     | 26 MB/s             | PDF       | MySQL 65%+ CPU usage
8       | 7 + Auth-Caching  | 6000 | 65%      | 57%     | 26 MB/s             | PDF       | MySQL 0% CPU usage
9       | 7 + CDR           | 6000 | 55%      | 73%     | 26 MB/s             | PDF       | MySQL 110%+ CPU usage
10      | 9 + Auth-Caching  | 6000 | 58%      | 71%     | 26 MB/s             | PDF       | MySQL 70%+ CPU usage
11      | 7 + CDR-flat      | 6000 | 58%      | 67%     | 26 MB/s             | PDF       | MySQL 70%+ CPU usage
12      | 11 + Auth-Caching | 6000 | 65%      | 55%     | 26 MB/s             | PDF       | MySQL 0% CPU usage


3.2  Complex Calling Scenarios (B2B)

Test ID | Description     | CPS  | Avg. CPU | Load-1m | Avg. IN/OUT Traffic | Profiling
13.1    | B2B - TH        | 1200 | 64%      | 60%     | 8 MB/s              | PDF
13.2    | B2B - REFER     | 1000 | 66%      | 61%     | 6 MB/s              | PDF
13.3    | B2B - Marketing | 900  | 68%      | 63%     | 5 MB/s              | PDF


3.3  TCP Test Scenarios

Test ID | Description     | CPS   | Avg. CPU | Load-1m | Avg. IN/OUT Traffic | Profiling | Notes
14.1    | TM-Con-1-Read-0 | 12500 | 66%      | 58%     | 42 MB/s             | PDF       | Test start: conn balancing
14.2    | TM-Con-1-Read-1 | -     | -        | -       | -                   | PDF       | Note: conn READ bug at high volumes, WIP
14.3    | TM-Con-1-Read-2 | -     | -        | -       | -                   | PDF       | Note: conn READ bug at high volumes, WIP
14.4    | TM-Con-N-Read-0 | 4000  | 52%      | 20%     | 12 MB/s             | PDF       |
14.5    | TM-Con-N-Read-1 | -     | -        | -       | -                   | PDF       | Note: conn READ bug at high volumes, WIP
14.6    | TM-Con-N-Read-2 | -     | -        | -       | -                   | PDF       | Note: conn READ bug at high volumes, WIP

4.  Conclusions

  • the newly introduced load: statistic is critical for monitoring the behavior and performance of your OpenSIPS instance. It can help you spot which workers are busy and which are not, or tell you when your instance needs extra capacity, whether it is CPU-bound or I/O-bound.
    • recap: this statistic monitors the "idleness" of your OpenSIPS workers. If they are doing anything other than waiting for a new SIP job, they are "busy"; otherwise, they are "idle". For example, if an OpenSIPS worker is running a sleep(1000) in your opensips.cfg, its load: value will be 100% (fully busy).
    • a low CPU usage on your OpenSIPS instance does not necessarily mean it is not loaded: it could be stuck in I/O operations and in need of more SIP workers.
  • when adding DB query caching to your OpenSIPS instance, do not be surprised if OpenSIPS itself shows a higher CPU usage: the database will be at 0% CPU usage afterwards, resulting in an overall net gain of CPU resources, as well as dramatically reduced I/O wait time (again, watch the load: statistic).
  • the B2B modules currently have a lower CPS performance, due to the internal complexity of the code. We are still evaluating whether there is room for optimization in the current shape of the codebase.
  • the new OpenSIPS TCP connection balancing is based on the load: statistic, so when doing TCP engine stress-testing in single-connection mode (on the clients' side), make sure to start the UACs gradually, one by one, giving the load: statistic a bit of time to update, so that the new high-throughput connections do not all end up in the same TCP worker! A ramp-up sketch follows below.
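
To make the last point concrete, a gradual single-connection TCP ramp-up could be scripted along these lines (a sketch only; the addresses, rates and the 5-second settle time are arbitrary):

    # launch 4 single-connection TCP UACs (-t t1: one TCP socket per sipp
    # instance), pausing between starts so the proxy's load: statistic can
    # refresh and each new connection lands on an idle TCP worker
    for port in 6080 6081 6082 6083; do
        sipp -sn uac -t t1 -i 10.0.0.21 -p $port -r 1500 -d 30000 -bg 10.0.0.10:5060
        sleep 5
    done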
