Resources.StressTests

Several stress tests were performed with OpenSIPS 1.6.4 to simulate some real-life scenarios, to get an idea of the load OpenSIPS can handle, and to see the performance penalty incurred when using OpenSIPS features such as dialog support, DB authentication, DB accounting, memory caching, etc.


What hardware was used for the stress tests

The OpenSIPS proxy was installed on an Intel i7 920 CPU @ 2.67 GHz with 6 GB of RAM. The UAS and UACs resided on the same LAN as the proxy, to avoid network limitations.

What script scenarios were used

The base test was that of a simple stateful SIP proxy. We then kept adding features on top of this very basic configuration: loose routing, dialog support, authentication and accounting. In every test the proxy ran with 32 children, and the database back-end used was MySQL.

Performance tests

A total of 11 tests were performed, using 11 different scenarios. The goal was to reach the highest possible calls-per-second (CPS) rate in each scenario, store load samples from the OpenSIPS proxy, and then analyze the results.


Simple stateful proxy

In this first test, OpenSIPS acted as a simple stateful proxy, just statefully passing messages from the UAC to the UAS (a simple t_relay()). The purpose of this test was to see the performance penalty introduced by making the proxy stateful. The actual script used for this scenario can be found here.
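
The linked script itself is not reproduced on this page, but a minimal stateful configuration in OpenSIPS 1.6 syntax would look roughly like the sketch below (the module path is an illustrative placeholder, not a value from the test setup):

```
# Hypothetical minimal stateful proxy (OpenSIPS 1.6.x syntax)
children=32                   # worker processes, as used in the tests
mpath="/usr/local/lib/opensips/modules/"

loadmodule "tm.so"            # transaction module: makes the proxy stateful
loadmodule "sl.so"            # stateless replies, used on relay errors

route {
    # statefully relay every request towards its destination
    if (!t_relay()) {
        sl_reply_error();
    }
}
```

Compared to a stateless forward(), t_relay() keeps transaction state in shared memory, which is exactly the overhead this scenario measures.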

In this scenario we stopped the test at 13000 CPS, with an average load of 19.3% (actual load as reported by htop).

See chart, where this particular test is marked as test A.

Stateful proxy with loose routing

In the second test, the OpenSIPS script also implements the "Record-Route" mechanism, recording the path in initial requests and then making sequential requests follow the determined path. The purpose of this test was to see the performance penalty introduced by the record- and loose-routing mechanism.

The actual script used for this scenario can be found here.
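
In outline, the loose-routing logic added on top of the previous scenario would look roughly like this (a sketch, not the exact test script):

```
loadmodule "rr.so"            # Record-Route / loose routing support

route {
    # sequential requests carry Route headers: follow the recorded path
    if (loose_route()) {
        t_relay();
        exit;
    }
    # initial INVITEs: record this proxy in the path
    if ($rm == "INVITE")
        record_route();
    t_relay();
}
```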

In this scenario we stopped the test at 12000 CPS, with an average load of 20.6% (actual load as reported by htop).

See chart, where this particular test is marked as test B.

Stateful proxy with loose routing and dialog support

In the 3rd test we additionally made OpenSIPS dialog aware. The purpose of this particular test was to determine the performance penalty introduced by the dialog module.

The actual script used for this scenario can be found here.
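
A rough sketch of the dialog-aware variant (again illustrative, not the exact test script): the only addition over the loose-routing scenario is loading the dialog module and marking initial INVITEs for tracking.

```
loadmodule "dialog.so"        # per-call (dialog) state on top of tm/rr

route {
    if (loose_route()) {
        t_relay();
        exit;
    }
    if ($rm == "INVITE") {
        record_route();
        create_dialog();      # start tracking this call as a dialog
    }
    t_relay();
}
```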

In this scenario we stopped the test at 9000 CPS, with an average load of 20.5% (actual load as reported by htop).

See chart, where this particular test is marked as test C.

Default Script

The 4th test had OpenSIPS running with the default script (provided with the OpenSIPS distribution). In this scenario, OpenSIPS can act as a SIP registrar, properly handle CANCELs, and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used. The purpose of this test was to see the performance penalty of a more advanced routing logic, taking into account that the script used in this scenario is an enhanced version of the script used in test 3.2.

The actual script used for this scenario can be found here.
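
The registrar/USRLOC part of the default script, which drives the routing in this test, boils down to something like the following sketch:

```
loadmodule "usrloc.so"        # in-memory location database
loadmodule "registrar.so"     # REGISTER processing on top of usrloc

route {
    if ($rm == "REGISTER") {
        save("location");     # store the contact binding
        exit;
    }
    # route by looking the callee up in the location table
    if (!lookup("location")) {
        sl_send_reply("404", "Not Found");
        exit;
    }
    t_relay();
}
```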

In this scenario we stopped the test at 9000 CPS, with an average load of 17.1% (actual load as reported by htop).

See chart, where this particular test is marked as test D.

Default Script with dialog support

This scenario added dialog support on top of the previous one. The purpose of this scenario was to determine the performance penalty introduced by the dialog module.

The actual script used for this scenario can be found here.

In this scenario we stopped the test at 9000 CPS, with an average load of 22.3% (actual load as reported by htop).

See chart, where this particular test is marked as test E.

Default Script with dialog support and authentication

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and a local MySQL server was used as the DB back-end. The purpose of this test was to see the performance penalty introduced by having the proxy authenticate users.

The actual script used for this scenario can be found here.
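
A sketch of the digest-authentication part (the DB URL is an illustrative placeholder, not the one from the tests):

```
loadmodule "auth.so"
loadmodule "auth_db.so"
modparam("auth_db", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
modparam("auth_db", "calculate_ha1", 1)
modparam("auth_db", "password_column", "password")

route {
    if ($rm == "INVITE") {
        # challenge the caller against the MySQL subscriber table
        if (!proxy_authorize("", "subscriber")) {
            proxy_challenge("", "0");
            exit;
        }
        consume_credentials();   # strip our Proxy-Authorization header
    }
    # ... routing continues as in the previous scenario
}
```

Each authenticated INVITE costs at least one blocking MySQL query here, which is where the CPS drop from 9000 to 6000 comes from.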

In this scenario we stopped the test at 6000 CPS, with an average load of 26.7% (actual load as reported by htop).

See chart, where this particular test is marked as test F.

10k subscribers

This test used the same script as the previous one, the only difference being that there were 10,000 users in the subscriber table. The purpose of this test was to see how the USRLOC module scales with the number of registered users.

In this scenario we stopped the test at 6000 CPS, with an average load of 30.3% (actual load as reported by htop).

See chart, where this particular test is marked as test G.

Subscriber caching

In this test, OpenSIPS used the localcache module in order to issue fewer database queries. The cache expiry time was set to 20 minutes, so for the duration of the test all 10k registered subscribers stayed in the cache. The purpose of this test was to see how much DB queries affect OpenSIPS performance, and how much caching can help.

The actual script used for this scenario can be found here.
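
The exact caching script is not shown here; one plausible shape, assuming the avpops module for the MySQL lookup and the auth module's pseudo-variable authorization (pv_proxy_authorize()), is sketched below. The cache key name, DB URL and query are our assumptions, not values from the tests.

```
loadmodule "auth.so"
loadmodule "avpops.so"
loadmodule "localcache.so"    # in-memory cache, registered as system "local"
modparam("localcache", "cache_expire", 1200)   # 20 minutes, as in the test
modparam("avpops", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
modparam("auth", "password_spec", "$avp(password)")
modparam("auth", "calculate_ha1", 1)

route {
    if ($rm == "INVITE") {
        # try the cache first; fall back to MySQL and cache the result
        if (!cache_fetch("local", "pwd_$fU", $avp(password))) {
            avp_db_query("SELECT password FROM subscriber WHERE username='$fU'",
                         "$avp(password)");
            cache_store("local", "pwd_$fU", "$avp(password)");
        }
        if (!pv_proxy_authorize("")) {
            proxy_challenge("", "0");
            exit;
        }
        consume_credentials();
    }
}
```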

In this scenario we stopped the test at 6000 CPS, with an average load of 18% (actual load as reported by htop).

See chart, where this particular test is marked as test H.

Accounting

This test had OpenSIPS running with 10k subscribers, with authentication (no caching), dialog aware, and doing the old type of accounting (two DB entries per call, one for the INVITE and one for the BYE). The purpose of this test was to see the performance penalty introduced by having OpenSIPS do the old type of accounting.

The actual script used for this scenario can be found here.
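
The accounting part of such a script is typically flag-driven; a sketch of the old, per-message style (DB URL illustrative):

```
loadmodule "acc.so"
modparam("acc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
modparam("acc", "db_flag", 1)    # write a DB record when flag 1 is set

route {
    # one accounting entry for the INVITE and one for the BYE
    if ($rm == "INVITE" || $rm == "BYE")
        setflag(1);
    t_relay();
}
```

Each flagged transaction triggers its own blocking INSERT, so a completed call costs two accounting queries.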

In this scenario we stopped the test at 6000 CPS, with an average load of 43.8% (actual load as reported by htop).

See chart, where this particular test is marked as test I.

CDR accounting

In this test, OpenSIPS directly generated a CDR for each call, as opposed to the previous scenario. The purpose of this test was to see how the new type of accounting compares to the old one.

The actual script used for this scenario can be found here.
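
A sketch of the dialog-based CDR accounting introduced in 1.6.4; the cdr_flag parameter name is our assumption about that release's acc/dialog integration, so treat this as an outline rather than the exact test script:

```
loadmodule "dialog.so"
loadmodule "acc.so"
modparam("acc", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
modparam("acc", "cdr_flag", 2)   # assumed 1.6.4 parameter: one CDR per call

route {
    if ($rm == "INVITE") {
        create_dialog();         # the CDR is built from dialog start/end
        setflag(2);              # account this call as a single CDR
    }
    t_relay();
}
```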

In this scenario we managed to achieve 6000 CPS, with an average load of 38.7% (actual load as reported by htop).

See chart, where this particular test is marked as test J.

CDR accounting + Auth Caching

In the last test, OpenSIPS was generating CDRs just as in the previous test, but it was also caching the 10k subscribers it had in the MySQL database.

In this scenario we stopped the test at 6000 CPS, with an average load of 28.1% (actual load as reported by htop).

See chart

Load statistics graph

Each test had a different CPS value, ranging from 13000 CPS in the first test down to 6000 in the last ones. To give an accurate overall comparison of the tests, we scaled all results up to the 13000 CPS of the first test, adjusting the load at the same time. So, while the X axis shows time, the Y axis represents a function of the actual CPU load and the CPS.
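
The scaled figures in the penalty table are consistent with a simple linear normalization of the measured load to a common 13000 CPS rate:

```
scaled_load = measured_load * 13000 / CPS

e.g. test B:  20.6 * 13000 / 12000 ≈ 22.3      (the "Scaled Avg load" column)

penalty, relative to the test it builds on:

penalty % = (scaled_new - scaled_prev) / scaled_prev * 100

e.g. test B vs. test A:  (22.3 - 19.3) / 19.3 * 100 ≈ 15.5 %
```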

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg

Test naming convention:

Each particular test is described in the following way:

[ PrevTestID + ] Description ( TestId )

Example: a test adding dialog support on top of a previous test labeled X would be labeled:

X + Dialog ( Y )

See full size chart


Performance penalty table


Test ID | Description      | CPS   | Avg Load | Scaled Avg load | Penalty percent
3.1     | TM               | 13000 | 19.3     | 19.3            | 0% vs 3.1
3.2     | 3.1 + RR         | 12000 | 20.6     | 22.3            | 15.5% vs 3.1
3.3     | 3.2 + DIALOG     | 9000  | 20.5     | 29.6            | 32.7% vs 3.2
3.4     | DEF. Script      | 9000  | 17.1     | 24.7            | 10.7% vs 3.2
3.5     | 3.4 + DIALOG     | 9000  | 22.3     | 32.2            | 8.7% vs 3.3
3.6     | 3.5 + AUTH 1k    | 6000  | 26.7     | 57.8            | 79.6% vs 3.5
3.7     | 3.6 + 10k        | 6000  | 30.3     | 65.6            | 13.4% vs 3.6
3.8     | 3.7 + Caching    | 6000  | 18.0     | 39.0            | -40.5% vs 3.7
3.9     | 3.7 + ACC        | 6000  | 43.8     | 94.9            | 44.6% vs 3.7
3.10    | 3.9 + CDR        | 6000  | 38.7     | 83.8            | -11.6% vs 3.9
3.11    | 3.8 + ACC + CDR  | 6000  | 28.1     | 60.8            | 55.8% vs 3.8


Conclusions

  • Database operations have a big impact on the load and on the amount of CPS that OpenSIPS can handle. Building the queries is quite CPU intensive and, on top of that, in the current design DB queries are blocking. Use caching whenever possible to improve performance.
  • The CDR type of accounting newly added in 1.6.4 is better than the old type of accounting in two ways: it automatically generates the CDR in the back-end, and it is less CPU intensive because it only has to build and block on one query, as opposed to two queries with the old type of accounting.
  • The CPU is very rarely the actual bottleneck in the system. Slow database queries, DNS lookups, and even a slow network are the things that will ultimately degrade OpenSIPS performance.
to:

(:redirect About.PerformanceTests-StressTests quiet=1:)

March 08, 2011, at 01:18 PM by vlad_paiu -
Changed lines 155-156 from:
3.2TM + RR1200020.622.315.5% vs 3.1
3.3TM + RR + DIALOG900020.529.632.7% vs 3.2
to:
3.23.1 + RR1200020.622.315.5% vs 3.1
3.33.2 + DIALOG900020.529.632.7% vs 3.2
Changed lines 158-164 from:
3.5DEF. Script + DIALOG900022.332.28.7% vs 3.3
3.6DEF. Script + DIALOG + AUTH 1k600026.757.879.6% vs 3.5
3.7DEF. Script + DIALOG + AUTH 10k600030.365.613.4% vs 3.6
3.8DEF. Script + DIALOG + AUTH 10k + Caching600018.039.0-40.5% vs 3.7
3.9DEF. Script + DIALOG + AUTH 10k + ACC600043.894.944.6% vs 3.7
3.10DEF. Script + DIALOG + AUTH 10k + ACC CDR600038.783.8-11.6% vs 3.9
3.11DEF. Script + DIALOG + AUTH 10k + Caching + ACC CDR600028.160.855.8% vs 3.8
to:
3.53.4 + DIALOG900022.332.28.7% vs 3.3
3.63.5 + AUTH 1k600026.757.879.6% vs 3.5
3.73.6 + 10k600030.365.613.4% vs 3.6
3.83.7 + Caching600018.039.0-40.5% vs 3.7
3.93.7 + ACC600043.894.944.6% vs 3.7
3.103.9 + CDR600038.783.8-11.6% vs 3.9
3.113.8 + ACC + CDR600028.160.855.8% vs 3.8
March 08, 2011, at 01:15 PM by vlad_paiu -
Changed lines 154-164 from:
3.1130001300019.319.30% vs 3.1
3.2130001200020.622.315.5% vs 3.1
3.313000900020.529.632.7% vs 3.2
3.413000900017.124.710.7% vs 3.2
3.513000900022.332.28.7% vs 3.3
3.613000600026.757.879.6% vs 3.5
3.713000600030.365.613.4% vs 3.6
3.813000600018.039.0-40.5% vs 3.7
3.913000600043.894.944.6% vs 3.7
3.1013000600038.783.8-11.6% vs 3.9
3.1113000600028.160.855.8% vs 3.8
to:
3.1TM1300019.319.30% vs 3.1
3.2TM + RR1200020.622.315.5% vs 3.1
3.3TM + RR + DIALOG900020.529.632.7% vs 3.2
3.4DEF. Script900017.124.710.7% vs 3.2
3.5DEF. Script + DIALOG900022.332.28.7% vs 3.3
3.6DEF. Script + DIALOG + AUTH 1k600026.757.879.6% vs 3.5
3.7DEF. Script + DIALOG + AUTH 10k600030.365.613.4% vs 3.6
3.8DEF. Script + DIALOG + AUTH 10k + Caching600018.039.0-40.5% vs 3.7
3.9DEF. Script + DIALOG + AUTH 10k + ACC600043.894.944.6% vs 3.7
3.10DEF. Script + DIALOG + AUTH 10k + ACC CDR600038.783.8-11.6% vs 3.9
3.11DEF. Script + DIALOG + AUTH 10k + Caching + ACC CDR600028.160.855.8% vs 3.8
March 08, 2011, at 01:13 PM by vlad_paiu -
Changed lines 153-164 from:
Test IDCPSAvg LoadScaled Avg loadPenalty percent
3.11300019.319.30% vs 3.1
3.21200020.622.315.5% vs 3.1
3.3900020.529.632.7% vs 3.2
3.4900017.124.710.7% vs 3.2
3.5900022.332.28.7% vs 3.3
3.6600026.757.879.6% vs 3.5
3.7600030.365.613.4% vs 3.6
3.8600018.039.0-40.5% vs 3.7
3.9600043.894.944.6% vs 3.7
3.10600038.783.8-11.6% vs 3.9
3.11600028.160.855.8% vs 3.8
to:
Test IDDescriptionCPSAvg LoadScaled Avg loadPenalty percent
3.1130001300019.319.30% vs 3.1
3.2130001200020.622.315.5% vs 3.1
3.313000900020.529.632.7% vs 3.2
3.413000900017.124.710.7% vs 3.2
3.513000900022.332.28.7% vs 3.3
3.613000600026.757.879.6% vs 3.5
3.713000600030.365.613.4% vs 3.6
3.813000600018.039.0-40.5% vs 3.7
3.913000600043.894.944.6% vs 3.7
3.1013000600038.783.8-11.6% vs 3.9
3.1113000600028.160.855.8% vs 3.8
March 07, 2011, at 08:27 PM by vlad_paiu -
Added lines 166-167:


Changed line 170 from:
  • Database operations have quite an impact on CPU load when it comes to building the queries and on top of that, in the current design, DB queries are blocking. Use caching whenever possible to improve performance.
to:
  • Database operations have a big impact on the load and on the amount of CPS that OpenSIPS can handle. Building the queries is pretty CPU intensive and on top of that, in the current design, DB queries are blocking. Use caching whenever possible to improve performance.
March 07, 2011, at 08:26 PM by vlad_paiu -
Added lines 149-150:


March 07, 2011, at 08:25 PM by vlad_paiu -
Changed lines 151-162 from:
Test IDAvg LoadScaled Avg loadPenalty percent
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.6
3.818.039.0-40.5% vs 3.7
3.943.894.944.6% vs 3.7
3.1038.783.8-11.6% vs 3.9
3.1128.160.855.8% vs 3.8
to:
Test IDCPSAvg LoadScaled Avg loadPenalty percent
3.11300019.319.30% vs 3.1
3.21200020.622.315.5% vs 3.1
3.3900020.529.632.7% vs 3.2
3.4900017.124.710.7% vs 3.2
3.5900022.332.28.7% vs 3.3
3.6600026.757.879.6% vs 3.5
3.7600030.365.613.4% vs 3.6
3.8600018.039.0-40.5% vs 3.7
3.9600043.894.944.6% vs 3.7
3.10600038.783.8-11.6% vs 3.9
3.11600028.160.855.8% vs 3.8
March 07, 2011, at 08:13 PM by vlad_paiu -
Changed lines 152-159 from:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.6
3.818.039.0-59.4% vs 3.7
to:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.6
3.818.039.0-40.5% vs 3.7
3.943.894.944.6% vs 3.7
3.1038.783.8-11.6% vs 3.9
3.1128.160.855.8% vs 3.8
March 07, 2011, at 08:03 PM by vlad_paiu -
Changed lines 158-159 from:
3.730.365.613.4% vs 3.5
3.818.039.0-59.4% vs 3.5
to:
3.730.365.613.4% vs 3.6
3.818.039.0-59.4% vs 3.7
March 07, 2011, at 08:02 PM by vlad_paiu -
Changed lines 152-158 from:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.5
to:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.5
3.818.039.0-59.4% vs 3.5
March 07, 2011, at 08:00 PM by vlad_paiu -
Changed lines 152-156 from:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
to:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
3.626.757.879.6% vs 3.5
3.730.365.613.4% vs 3.5
March 07, 2011, at 07:56 PM by vlad_paiu -
Changed lines 152-154 from:
3.119.319.30
3.220.619.3x
to:
3.119.319.30% vs 3.1
3.220.622.315.5% vs 3.1
3.320.529.632.7% vs 3.2
3.417.124.710.7% vs 3.2
3.522.332.28.7% vs 3.3
March 07, 2011, at 06:29 PM by vlad_paiu -
Changed lines 152-153 from:
3.119.319.30
3.220.619.3x
to:
3.119.319.30
3.220.619.3x
March 07, 2011, at 06:15 PM by vlad_paiu -
Changed lines 152-153 from:
cell 1cell 2cell 3cell 4
to:
3.119.319.30
3.220.619.3x
March 07, 2011, at 06:13 PM by vlad_paiu -
Changed line 151 from:
Test IDActual LoadScaled loadPenalty percent
to:
Test IDAvg LoadScaled Avg loadPenalty percent
March 07, 2011, at 06:12 PM by vlad_paiu -
Changed lines 150-153 from:

TODO

to:
Test IDActual LoadScaled loadPenalty percent
cell 1cell 2cell 3cell 4
March 07, 2011, at 06:11 PM by vlad_paiu -
Changed line 146 from:





to:


March 07, 2011, at 06:10 PM by vlad_paiu -
Changed line 132 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 07, 2011, at 06:10 PM by vlad_paiu -
Added lines 147-150:

Performance penalty table

TODO

March 07, 2011, at 06:03 PM by bogdan -
Changed line 45 from:
 The actual script used for this scenario can be found  here .
to:

The actual script used for this scenario can be found here .

March 07, 2011, at 05:28 PM by vlad_paiu -
Changed lines 33-34 from:

In the second test, OpenSIPS script implements also the "Record-Route" mechanism, recording the path in initial requests, and then making sequential requests follow the determined path. The purpose of this test was to see what is the performance penalty introduce by the mechanism of record and loose routing.

to:

In the second test, OpenSIPS script implements also the "Record-Route" mechanism, recording the path in initial requests, and then making sequential requests follow the determined path. The purpose of this test was to see what is the performance penalty introduced by the mechanism of record and loose routing.

Changed lines 43-44 from:

In the 3rd test we additionally made OpenSIPS dialog aware. The purpose of this particular test was to determin the performance penalty introduced by the dialog module.

to:

In the 3rd test we additionally made OpenSIPS dialog aware. The purpose of this particular test was to determine the performance penalty introduced by the dialog module.

Changed lines 63-64 from:

This scenario added dialog support on top of the previous one. The purpose of this scenario was to determin the performance penalty induced by the dialog module.

to:

This scenario added dialog support on top of the previous one. The purpose of this scenario was to determine the performance penalty introduced by the the dialog module.

Changed line 101 from:

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The purpose of this test was to see the performance penalty introduced by having OpenSIPS do accounting.

to:

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The purpose of this test was to see the performance penalty introduced by having OpenSIPS do the old type of accounting.

March 07, 2011, at 05:25 PM by vlad_paiu -
Changed line 146 from:






to:





March 07, 2011, at 05:25 PM by vlad_paiu -
Changed line 146 from:


to:






March 07, 2011, at 05:25 PM by vlad_paiu -
Changed lines 146-147 from:
to:


March 07, 2011, at 05:24 PM by vlad_paiu -
Added lines 145-147:
March 07, 2011, at 05:23 PM by vlad_paiu -
Changed line 132 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 07, 2011, at 05:23 PM by vlad_paiu -
Deleted lines 144-145:

March 07, 2011, at 05:22 PM by vlad_paiu -
Changed line 140 from:

Example: A test adding dialog on top of a previous test labeled as X would be labeled :

to:

Example: A test adding dialog support on top of a previous test labeled as X would be labeled :

March 07, 2011, at 05:21 PM by vlad_paiu -
Changed line 146 from:
to:

March 07, 2011, at 05:20 PM by vlad_paiu -
Changed line 140 from:

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as :

to:

Example: A test adding dialog on top of a previous test labeled as X would be labeled :

March 07, 2011, at 05:18 PM by vlad_paiu -
Changed line 132 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 07, 2011, at 05:18 PM by vlad_paiu -
Changed line 132 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

March 07, 2011, at 05:18 PM by vlad_paiu -
Changed line 132 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

March 07, 2011, at 05:14 PM by vlad_paiu -
Changed lines 136-138 from:

Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId ).

to:

Each particular test is described in the following way :

[ PrevTestID + ] Description ( TestId ).

March 07, 2011, at 05:13 PM by vlad_paiu -
Added line 139:
March 07, 2011, at 05:12 PM by vlad_paiu -
Changed lines 135-136 from:
  • Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId ).
to:

Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId ).

March 07, 2011, at 05:12 PM by vlad_paiu -
Changed lines 135-136 from:
  • Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId )
  • Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as :
to:
  • Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId ).

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as :

March 07, 2011, at 04:56 PM by vlad_paiu -
Changed line 133 from:
to:
Deleted lines 137-138:
March 07, 2011, at 04:54 PM by vlad_paiu -
Changed lines 132-134 from:

Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId )

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as :

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Chart

Test naming convention:

  • Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId )
  • Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as :
Deleted lines 138-139:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 07, 2011, at 04:52 PM by vlad_paiu -
Changed line 138 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 07, 2011, at 04:52 PM by vlad_paiu -
Added line 137:
Added line 139:
March 07, 2011, at 04:51 PM by vlad_paiu -
Changed line 24 from:

In this first test, OpenSIPS behaved as a simple stateful scenario, just statefully passing messages from the UAC to the UAS ( a simple t_relay()).

to:

In this first test, OpenSIPS behaved as a simple stateful scenario, just statefully passing messages from the UAC to the UAS ( a simple t_relay()). The purpose of this test was to see what is the performance penalty introduced by making the proxy stateful.

Changed lines 63-64 from:

This scenario added dialog support on top of the previous one. The actual script used for this scenario can be found here .

to:

This scenario added dialog support on top of the previous one. The purpose of this scenario was to determin the performance penalty induced by the dialog module.

The actual script used for this scenario can be found here .

Changed lines 73-74 from:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and a local MYSQL was used as the DB back-end. The actual script used for this scenario can be found here.

to:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and a local MYSQL was used as the DB back-end. The purpose of this test was to see the performance penalty introduced by having the proxy authenticate users.

 The actual script used for this scenario can be found here.
Changed lines 83-84 from:

This test used the same script as the previous one, the only difference being that there were 10 000 users in the subscribers table.

to:

This test used the same script as the previous one, the only difference being that there were 10 000 users in the subscribers table. The purpose of this test was to see how the USRLOC module scales with the number of registered users.

Changed lines 91-92 from:

In the test, OpenSIPS used the localcache module in order to do less database queries. The cache expiry time was set to 20 minutes, so during the test, all 10k registered subscribers have been in the cache. The actual script used for this scenario can be found here.

to:

In the test, OpenSIPS used the localcache module in order to do less database queries. The cache expiry time was set to 20 minutes, so during the test, all 10k registered subscribers have been in the cache. The purpose of this test was to see how much DB queries are affecting OpenSIPS performance, and how much can caching help.

 The actual script used for this scenario can be found here.
Changed lines 101-102 from:

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The actual script used for this scenario can be found here.

to:

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The purpose of this test was to see the performance penalty introduced by having OpenSIPS do accounting.

The actual script used for this scenario can be found here.

Changed lines 111-112 from:

In this test, OpenSIPS was directly generating CDRs for each call, as opposed to the previous scenario. The actual script used for this scenario can be found here.

to:

In this test, OpenSIPS was directly generating CDRs for each call, as opposed to the previous scenario. The purpose of this test was to see how the new type of accounting compares to the old one.

The actual script used for this scenario can be found here.

Changed lines 137-139 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

See full size chart

March 07, 2011, at 04:46 PM by vlad_paiu -
Deleted lines 53-54:
March 07, 2011, at 04:46 PM by vlad_paiu -
Changed lines 53-55 from:

The 4th test had OpenSIPS running with the default script (provided by OpenSIPS distros). In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used. The purpose of this test was to see what is the performance penalty of a more advanced routing logic, taking into account the fact that the script used by this scenario is an enhanced version of the script used in 3.2.

to:

The 4th test had OpenSIPS running with the default script (provided by OpenSIPS distros). In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used. The purpose of this test was to see what is the performance penalty of a more advanced routing logic, taking into account the fact that the script used by this scenario is an enhanced version of the script used in the 3.2 test .

March 07, 2011, at 04:44 PM by vlad_paiu -
Changed lines 33-34 from:

In the second test, OpenSIPS script implements also the "Record-Route" mechanism, recording the path in initial requests, and then making sequential requests follow the determined path.

to:

In the second test, OpenSIPS script implements also the "Record-Route" mechanism, recording the path in initial requests, and then making sequential requests follow the determined path. The purpose of this test was to see what is the performance penalty introduce by the mechanism of record and loose routing.

Changed lines 43-44 from:

In the 3rd test we additionally made OpenSIPS dialog aware. The actual script used for this scenario can be found here .

to:

In the 3rd test we additionally made OpenSIPS dialog aware. The purpose of this particular test was to determin the performance penalty introduced by the dialog module.

 The actual script used for this scenario can be found  here .
Changed line 53 from:

The 4th test had OpenSIPS running with the default script (provided by OpenSIPS distros). In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used.

to:

The 4th test had OpenSIPS running with the default script (provided by OpenSIPS distros). In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used. The purpose of this test was to see what is the performance penalty of a more advanced routing logic, taking into account the fact that the script used by this scenario is an enhanced version of the script used in 3.2.

March 07, 2011, at 04:33 PM by bogdan -
Changed lines 6-7 from:

Several stress tests were performed using OpenSIPS 1.6.4 to simulate some real life scenarios, to get an idea on how much real life traffic can OpenSIPS handle and to see what is the performance penalty you get when using some OpenSIPS features like dialog support, authentication, accounting, etc.

to:

Several stress tests were performed using OpenSIPS 1.6.4 to simulate some real life scenarios, to get an idea on the load that can be handled by OpenSIPS and to see what is the performance penalty you get when using some OpenSIPS features like dialog support, DB authentication, DB accounting, memory caching, etc.

Changed line 24 from:

In this first test, OpenSIPS behaved as a simple stateful scenario, just passing messages from the UAC to the UAS.

to:

In this first test, OpenSIPS behaved as a simple stateful scenario, just statefully passing messages from the UAC to the UAS ( a simple t_relay()).

Changed lines 27-28 from:

In this scenario we managed to achieve 13000 CPS with an average load of 19.3 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 13000 CPS with an average load of 19.3 % ( actual load returned by htop )

Changed line 33 from:

In the second test, OpenSIPS behaved like a loose router, recording the path in initial requests, and then making sequential requests follow the determined path.

to:

In the second test, OpenSIPS script implements also the "Record-Route" mechanism, recording the path in initial requests, and then making sequential requests follow the determined path.

Changed lines 36-37 from:

In this scenario we managed to achieve 12000 CPS with an average load of 20.6 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 12000 CPS with an average load of 20.6 % ( actual load returned by htop )

Changed lines 42-45 from:

The 3rd test has OpenSIPS dialog aware. The actual script used for this scenario can be found here .

In this scenario we managed to achieve 9000 CPS with an average load of 20.5 % ( actual load returned by htop )

to:

In the 3rd test we additionally made OpenSIPS dialog aware. The actual script used for this scenario can be found here .

In this scenario we stopped the test at 9000 CPS with an average load of 20.5 % ( actual load returned by htop )

Changed lines 50-51 from:

The 4th test had OpenSIPS running with the default script. In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used.

to:

The 4th test had OpenSIPS running with the default script (provided by OpenSIPS distros). In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used.

Changed lines 54-55 from:

In this scenario we managed to achieve 9000 CPS with an average load of 17.1 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 9000 CPS with an average load of 17.1 % ( actual load returned by htop )

Changed lines 62-63 from:

In this scenario we managed to achieve 9000 CPS with an average load of 22.3 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 9000 CPS with an average load of 22.3 % ( actual load returned by htop )

Changed lines 68-71 from:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and MYSQL was used as the DB back-end. The actual script used for this scenario can be found here.

In this scenario we managed to achieve 6000 CPS with an average load of 26.7 % ( actual load returned by htop )

to:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and a local MYSQL was used as the DB back-end. The actual script used for this scenario can be found here.

In this scenario we stopped the test at 6000 CPS with an average load of 26.7 % ( actual load returned by htop )
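Digest authentication against the MySQL subscriber table is usually done with the auth_db module, along these lines. The realm (empty, i.e. taken from the request) and table name are assumptions, and this is an illustration rather than the actual test script:

```
loadmodule "auth.so"
loadmodule "auth_db.so"

route {
    if (is_method("INVITE") && !has_totag()) {
        # challenge the caller if no valid credentials are
        # present, checking against the "subscriber" table
        if (!proxy_authorize("", "subscriber")) {
            proxy_challenge("", "0");
            exit;
        }
        # drop the credentials before relaying upstream
        consume_credentials();
    }
    t_relay();
}
```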

Changed lines 78-79 from:

In this scenario we managed to achieve 6000 CPS with an average load of 30.3 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 6000 CPS with an average load of 30.3 % ( actual load returned by htop )

Changed lines 86-87 from:

In this scenario we managed to achieve 6000 CPS with an average load of 18 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 6000 CPS with an average load of 18 % ( actual load returned by htop )

Changed lines 94-95 from:

In this scenario we managed to achieve 6000 CPS with an average load of 43.8 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 6000 CPS with an average load of 43.8 % ( actual load returned by htop )

Changed line 110 from:

In this scenario we managed to achieve 6000 CPS with an average load of 28.1 % ( actual load returned by htop )

to:

In this scenario we stopped the test at 6000 CPS with an average load of 28.1 % ( actual load returned by htop )

March 04, 2011, at 06:43 PM by vlad_paiu -
Changed line 132 from:
  • TODO3
to:
  • Very rarely is the CPU the actual bottleneck in the system. Slow database queries, slow DNS lookups and even a slow network are the things that will ultimately degrade OpenSIPS performance.
March 04, 2011, at 06:18 PM by vlad_paiu -
Changed line 130 from:
  • Database operations have quite an impact on CPU load when it comes to building the queries and on top of that, in the current design, DB queries are blocking. Use caching to improve performance.
to:
  • Database operations have quite an impact on CPU load when it comes to building the queries, and on top of that, in the current design, DB queries are blocking. Use caching whenever possible to improve performance.
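As an example of such caching, the localcache module lets the script keep DB results in memory and skip the blocking query on a hit. This is a hypothetical sketch: the key, AVP name and query are illustrative, and pseudo-variable support in cache keys is assumed:

```
loadmodule "localcache.so"

route {
    # try the in-memory cache first; fall back to the
    # database only on a miss, then cache the result
    if (!cache_fetch("local", "cc_$fU", $avp(cost))) {
        avp_db_query("select cost from tariffs where user='$fU'",
                     "$avp(cost)");
        # keep the value for 20 minutes (1200 seconds)
        cache_store("local", "cc_$fU", "$avp(cost)", 1200);
    }
}
```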
March 04, 2011, at 06:17 PM by vlad_paiu -
Added lines 125-126:
March 04, 2011, at 06:17 PM by vlad_paiu -
Changed lines 128-129 from:
  • TODO
  • TODO2
to:
  • Database operations have quite an impact on CPU load when it comes to building the queries and on top of that, in the current design, DB queries are blocking. Use caching to improve performance.
  • The CDR type of accounting newly added in 1.6.4 is better than the old type of accounting from two points of view: it automatically generates the CDR in the back-end, and it is less CPU intensive because it only has to build and block for one query, as opposed to the two queries of the old type of accounting.
March 04, 2011, at 06:10 PM by vlad_paiu -
Changed lines 127-130 from:

TODO

to:
  • TODO
  • TODO2
  • TODO3
March 04, 2011, at 06:10 PM by vlad_paiu -
Changed line 119 from:

Each particular test is described in the following way : Description [ + PrevTestID ] ( TestId )

to:

Each particular test is described in the following way : [ PrevTestID + ] Description ( TestId )

March 04, 2011, at 06:09 PM by vlad_paiu -
Changed lines 121-126 from:

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as >>blue<< X + Dialog ( Y )

to:

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as : X + Dialog ( Y )

Deleted lines 124-125:
March 04, 2011, at 06:09 PM by vlad_paiu -
Changed lines 121-122 from:

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as

X + Dialog ( Y )>><<
to:

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as >>blue<< X + Dialog ( Y )

March 04, 2011, at 06:08 PM by vlad_paiu -
Changed line 122 from:

X + Dialog ( Y )

to:
X + Dialog ( Y )>><<
March 04, 2011, at 06:07 PM by vlad_paiu -
Added lines 118-124:

Each particular test is described in the following way : Description [ + PrevTestID ] ( TestId )

Example : A test that adds dialog support on top of a previous test labeled as X would appear in the chart as X + Dialog ( Y )

March 04, 2011, at 05:55 PM by vlad_paiu -
Changed line 119 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Graph

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg |

March 04, 2011, at 05:55 PM by vlad_paiu -
Changed lines 119-120 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Detailed Load Chart

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Load Graph

March 04, 2011, at 05:54 PM by vlad_paiu -
Changed line 119 from:

to:

March 04, 2011, at 05:53 PM by vlad_paiu -
Changed lines 119-121 from:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg"

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg | Detailed Load Chart

March 04, 2011, at 05:52 PM by vlad_paiu -
Changed line 119 from:

<img src="http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg" />

to:

http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg"

March 04, 2011, at 05:51 PM by vlad_paiu -
Changed line 119 from:
to:

<img src="http://www.opensips.org/uploads/Resources/PerformanceTests/LoadGraph.jpg" />

March 04, 2011, at 05:50 PM by vlad_paiu -
Changed line 119 from:
to:
March 04, 2011, at 05:49 PM by vlad_paiu -
Changed lines 34-35 from:

The actual script used for this scenario can be found at TODO .

to:

The actual script used for this scenario can be found here .

Changed lines 42-43 from:

The 3rd test has OpenSIPS dialog aware. The actual script used for this scenario can be found at TODO .

to:

The 3rd test has OpenSIPS dialog aware. The actual script used for this scenario can be found here .

Changed lines 52-53 from:

The actual script used for this scenario can be found at TODO .

to:

The actual script used for this scenario can be found here .

Changed lines 60-61 from:

This scenario added dialog support on top of the previous one. The actual script used for this scenario can be found at TODO .

to:

This scenario added dialog support on top of the previous one. The actual script used for this scenario can be found here .

Changed lines 68-69 from:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and MYSQL was used as the DB back-end. The actual script used for this scenario can be found at TODO.

to:

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and MYSQL was used as the DB back-end. The actual script used for this scenario can be found here.

Changed lines 84-85 from:

This test used the same script as the previous one, the only difference being that OpenSIPS used the localcache module in order to do less database queries. The cache expiry time was set to 20 minutes, so during the test, all 10k registered subscribers have been in the cache.

to:

In this test, OpenSIPS used the localcache module in order to issue fewer database queries. The cache expiry time was set to 20 minutes, so during the test all 10k registered subscribers were in the cache. The actual script used for this scenario can be found here.

Changed lines 90-93 from:

Regular accounting

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The actual script used for this scenario can be found at TODO.

to:

Accounting

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The actual script used for this scenario can be found here.

Changed lines 100-101 from:

In this test, OpenSIPS was directly generating CDRs for each call, as opposed to the previous scenario. The actual script used for this scenario can be found at TODO.

to:

In this test, OpenSIPS was directly generating CDRs for each call, as opposed to the previous scenario. The actual script used for this scenario can be found here.

Changed lines 106-108 from:

CDR accounting

In the last test, OpenSIPS was generating CDRs just as in the previous test, but it was also caching the 10k subscribers it had in the MYSQL database. The actual script used for this scenario can be found at TODO.

to:

CDR accounting + Auth Caching

In the last test, OpenSIPS was generating CDRs just as in the previous test, but it was also caching the 10k subscribers it had in the MYSQL database.

March 04, 2011, at 05:44 PM by vlad_paiu -
Changed line 25 from:

The actual script used for this scenario can be found at TODO .

to:

The actual script used for this scenario can be found at here .

March 04, 2011, at 05:18 PM by vlad_paiu -
Changed line 6 from:

Several stress tests were performed using OpenSIPS 1.6.4 to emulate some real life scenarios, to get an idea on how much real life traffic can OpenSIPS handle and to see what is the performance penalty you get when using some OpenSIPS features like dialog support, authentication, accounting, etc.

to:

Several stress tests were performed using OpenSIPS 1.6.4 to simulate some real life scenarios, to get an idea of how much real life traffic OpenSIPS can handle and to see what performance penalty you get when using some OpenSIPS features like dialog support, authentication, accounting, etc.

March 04, 2011, at 05:12 PM by vlad_paiu -
Changed line 119 from:

TODO - Insert PIC

to:
March 04, 2011, at 05:04 PM by vlad_paiu -
Added lines 121-123:

Conclusions

TODO

March 04, 2011, at 05:03 PM by vlad_paiu -
Changed lines 115-120 from:

TODO

to:

Each test had different CPS values, ranging from 13000 CPS in the first test to 6000 in the last tests. To give you an accurate overall comparison of the tests, we have scaled all the results up to the 13000 CPS of the first test, adjusting the load at the same time. So, while the X axis shows time, the Y axis represents a function based on actual CPU load and CPS.

TODO - Insert PIC

March 04, 2011, at 04:45 PM by vlad_paiu -
Changed lines 29-30 from:

See chart, where this particular test is marked as test A.

to:

See chart, where this particular test is marked as test A.

Changed lines 38-39 from:

See chart, where this particular test is marked as test B.

to:

See chart, where this particular test is marked as test B.

Changed lines 46-47 from:

See chart, where this particular test is marked as test C.

to:

See chart, where this particular test is marked as test C.

Changed lines 56-57 from:

See chart, where this particular test is marked as test D.

to:

See chart, where this particular test is marked as test D.

Changed lines 64-65 from:

See chart, where this particular test is marked as test E.

to:

See chart, where this particular test is marked as test E.

Changed lines 72-73 from:

See chart, where this particular test is marked as test F.

to:

See chart, where this particular test is marked as test F.

Changed lines 80-81 from:

See chart, where this particular test is marked as test G.

to:

See chart, where this particular test is marked as test G.

Changed lines 88-89 from:

See chart, where this particular test is marked as test H.

to:

See chart, where this particular test is marked as test H.

Changed lines 96-97 from:

See chart, where this particular test is marked as test I.

to:

See chart, where this particular test is marked as test I.

Changed lines 104-105 from:

See chart, where this particular test is marked as test J.

to:

See chart, where this particular test is marked as test J.

Changed line 112 from:

See chart

to:

See chart

March 04, 2011, at 04:42 PM by vlad_paiu -
Changed lines 19-20 from:

A total of 11 tests were performed, using 11 different scripting scenarios. The goal was to achieve the highest possible CPS in the given scenario, store load samples from the OpenSIPS proxy and then analyze the results.

to:

A total of 11 tests were performed, using 11 different scenarios. The goal was to achieve the highest possible CPS in the given scenario, store load samples from the OpenSIPS proxy and then analyze the results.

Changed lines 98-112 from:
to:

CDR accounting

In this test, OpenSIPS was directly generating CDRs for each call, as opposed to the previous scenario. The actual script used for this scenario can be found at TODO.

In this scenario we managed to achieve 6000 CPS with an average load of 38.7 % ( actual load returned by htop )

See chart, where this particular test is marked as test J.

CDR accounting

In the last test, OpenSIPS was generating CDRs just as in the previous test, but it was also caching the 10k subscribers it had in the MYSQL database. The actual script used for this scenario can be found at TODO.

In this scenario we managed to achieve 6000 CPS with an average load of 28.1 % ( actual load returned by htop )

See chart

March 04, 2011, at 04:36 PM by vlad_paiu -
Deleted line 24:
Deleted line 33:
March 04, 2011, at 04:36 PM by vlad_paiu -
Changed lines 28-29 from:

In this scenario we managed to achieve 13000 CPS with an average load of 19.3% ( actual load returned by htop )

to:

In this scenario we managed to achieve 13000 CPS with an average load of 19.3 % ( actual load returned by htop )

Changed lines 38-39 from:

In this scenario we managed to achieve 12000 CPS with an average load of 20.6 ( actual load returned by htop )

to:

In this scenario we managed to achieve 12000 CPS with an average load of 20.6 % ( actual load returned by htop )

Changed lines 46-47 from:

In this scenario we managed to achieve 9000 CPS with an average load of 20.5 ( actual load returned by htop )

to:

In this scenario we managed to achieve 9000 CPS with an average load of 20.5 % ( actual load returned by htop )

Changed lines 52-53 from:

The 4th test had OpenSIPS running with the default script. In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops.

to:

The 4th test had OpenSIPS running with the default script. In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops. OpenSIPS routed requests based on USRLOC, but only one subscriber was used.

Changed lines 56-57 from:

In this scenario we managed to achieve 9000 CPS with an average load of 29.4 ( actual load returned by htop )

to:

In this scenario we managed to achieve 9000 CPS with an average load of 17.1 % ( actual load returned by htop )

Added lines 59-100:

Default Script with dialog support

This scenario added dialog support on top of the previous one. The actual script used for this scenario can be found at TODO .

In this scenario we managed to achieve 9000 CPS with an average load of 22.3 % ( actual load returned by htop )

See chart, where this particular test is marked as test E.

Default Script with dialog support and authentication

Call authentication was added on top of the previous scenario. 1000 subscribers were used, and MYSQL was used as the DB back-end. The actual script used for this scenario can be found at TODO.

In this scenario we managed to achieve 6000 CPS with an average load of 26.7 % ( actual load returned by htop )

See chart, where this particular test is marked as test F.

10k subscribers

This test used the same script as the previous one, the only difference being that there were 10 000 users in the subscribers table.

In this scenario we managed to achieve 6000 CPS with an average load of 30.3 % ( actual load returned by htop )

See chart, where this particular test is marked as test G.

Subscriber caching

This test used the same script as the previous one, the only difference being that OpenSIPS used the localcache module in order to do less database queries. The cache expiry time was set to 20 minutes, so during the test, all 10k registered subscribers have been in the cache.

In this scenario we managed to achieve 6000 CPS with an average load of 18 % ( actual load returned by htop )

See chart, where this particular test is marked as test H.

Regular accounting

This test had OpenSIPS running with 10k subscribers, with authentication ( no caching ), dialog aware and doing old type of accounting ( two entries, one for INVITE and one for BYE ). The actual script used for this scenario can be found at TODO.

In this scenario we managed to achieve 6000 CPS with an average load of 43.8 % ( actual load returned by htop )

See chart, where this particular test is marked as test I.

March 04, 2011, at 04:19 PM by vlad_paiu -
Changed lines 48-49 from:

See chart, where this particular test is marked as test B.

to:

See chart, where this particular test is marked as test C.

Changed line 58 from:
to:

See chart, where this particular test is marked as test D.

March 04, 2011, at 04:18 PM by vlad_paiu -
Changed lines 22-27 from:

Simple stateless proxy

In this first test, OpenSIPS behaved as a simple statefull scenario, just passing messages from the UAC to the UAS. The actual script used for this scenario can be found TODO .

In this scenario we managed to achieve 13000 CPS with an average load of 19.3% ( actual load as returned by htop )

to:

Simple stateful proxy

In this first test, OpenSIPS behaved as a simple stateful scenario, just passing messages from the UAC to the UAS.

The actual script used for this scenario can be found at TODO .

In this scenario we managed to achieve 13000 CPS with an average load of 19.3% ( actual load returned by htop )

Added lines 31-57:

Stateful proxy with loose routing

In the second test, OpenSIPS behaved like a loose router, recording the path in initial requests, and then making sequential requests follow the determined path.

The actual script used for this scenario can be found at TODO .

In this scenario we managed to achieve 12000 CPS with an average load of 20.6 ( actual load returned by htop )

See chart, where this particular test is marked as test B.

Stateful proxy with loose routing and dialog support

The 3rd test has OpenSIPS dialog aware. The actual script used for this scenario can be found at TODO .

In this scenario we managed to achieve 9000 CPS with an average load of 20.5 ( actual load returned by htop )

See chart, where this particular test is marked as test B.

Default Script

The 4th test had OpenSIPS running with the default script. In this scenario, OpenSIPS can act as a SIP registrar, can properly handle CANCELs and detect traffic loops.

The actual script used for this scenario can be found at TODO .

In this scenario we managed to achieve 9000 CPS with an average load of 29.4 ( actual load returned by htop )

March 04, 2011, at 04:07 PM by vlad_paiu -
Changed line 28 from:
to:

See chart, where this particular test is marked as test A.

March 04, 2011, at 04:06 PM by vlad_paiu -
Changed line 32 from:

TODo

to:

TODO

March 04, 2011, at 04:05 PM by vlad_paiu -
Changed lines 15-16 from:

The database back-end used was MYSQL

to:

Each time the proxy ran with 32 children and the database back-end used was MYSQL.

Changed lines 25-27 from:

The actual script used for this scenario can be found here .

to:

The actual script used for this scenario can be found TODO .

In this scenario we managed to achieve 13000 CPS with an average load of 19.3% ( actual load as returned by htop )

March 04, 2011, at 04:00 PM by vlad_paiu -
Added line 21:

Added lines 23-26:

In this first test, OpenSIPS behaved as a simple statefull scenario, just passing messages from the UAC to the UAS. The actual script used for this scenario can be found here .

March 04, 2011, at 03:57 PM by vlad_paiu -
Added lines 16-25:

Performance tests

A total of 11 tests were performed, using 11 different scripting scenarios. The goal was to achieve the highest possible CPS in the given scenario, store load samples from the OpenSIPS proxy and then analyze the results.

Simple stateless proxy

Load statistics graph

TODo

March 04, 2011, at 02:30 PM by vlad_paiu -
Changed lines 9-15 from:

What scenarios were used

to:

What hardware was used for the stress tests

The OpenSIPS proxy was installed on an Intel i7 920 @ 2.67GHz CPU with 6 Gb of available RAM. The UAS and UACs resided on the same LAN as the proxy, to avoid network limitations.

What script scenarios were used

The base test used was that of a simple stateful SIP proxy. Then we kept adding features on top of this very basic configuration, features like loose routing, dialog support, authentication and accounting. The database back-end used was MYSQL.

March 04, 2011, at 02:22 PM by vlad_paiu -
Added lines 5-6:

Several stress tests were performed using OpenSIPS 1.6.4 to emulate some real life scenarios, to get an idea on how much real life traffic can OpenSIPS handle and to see what is the performance penalty you get when using some OpenSIPS features like dialog support, authentication, accounting, etc.

March 04, 2011, at 02:14 PM by vlad_paiu -
Changed lines 4-7 from:

to:


What scenarios were used

March 04, 2011, at 02:12 PM by vlad_paiu -
Changed lines 1-4 from:

TODO

to:

Resources -> Performance Tests -> Stress Tests

This page has been visited 16674 times. (:toc-float Table of Content:)


March 04, 2011, at 02:09 PM by vlad_paiu -
Added line 1:

TODO


Page last modified on April 24, 2013, at 01:16 PM