Performance testing your WordPress site

You can hire a company to performance test a website, or you can do it yourself. This article explains how to do it yourself using freely available tools.

Testing is an ongoing activity that you need to plan for in every phase of the development cycle you are working on. "What do you need to test, and why?" is a question I hear a lot. In my view, you need at least tests for the following:

  • Test performance to understand what throughput your server environment can handle.
  • Test to see what happens if you lose your caching layer.
  • Test to see what happens when you lose a server, in a scenario where you have fail-over servers.
  • Test your staging server; do not limit the testing to your development server.
  • Test what happens when you use an external API: will it slow down your server?
  • Test WordPress to see which plugins take longer to load than others.
  • Test third-party services that you are using, with different timeout settings.
  • Test to simulate failure.

How to get started testing WordPress

The only productive way to load test WordPress with Apache is to test real WordPress pages: loading and processing multiple WordPress PHP files, establishing multiple MySQL connections, and performing multiple table reads. Test different parts of your WordPress website. An almost empty, static page will not tell us much about how the web server holds up under stress; we need to be able to expose the weakest link in the chain, and to know how that web server setup will handle real-world concurrent connections.

Ideally, a performance test would also:

  • GET all page assets (CSS, JS, images), and
  • simulate traffic of which 10% is DB writes.
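As a hypothetical sketch of the first point: you can pull the asset URLs out of a saved copy of a page, so that a tool such as siege can replay them from a urls file. The HTML below is illustrative sample input, standing in for a real `curl http://www.test.com/ > page.html` capture.

```shell
# Create a small sample page (stands in for a real curl capture).
cat > page.html <<'EOF'
<link rel="stylesheet" href="/wp-content/themes/x/style.css">
<script src="/wp-includes/js/jquery/jquery.js"></script>
<img src="/wp-content/uploads/logo.png">
EOF

# Extract every href="..." and src="..." value into a urls file.
grep -oE '(href|src)="[^"]+"' page.html | sed -E 's/^(href|src)="//; s/"$//' > urls.txt
cat urls.txt
```

A file like urls.txt can then be fed to a load tool that accepts URL lists, so the test covers page assets and not just the HTML document.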

What tools to use for testing

Now we are getting into the tools we can use to test various scenarios.

Apache Bench (ab)

The first tool is Apache Bench, or ab as you type it on the command line. ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs, and especially shows you how many requests per second your Apache installation is capable of serving. ab is included with each Apache version in its bin directory.

The ab test won’t be the most extensive test, but it will quickly show you:

  • If there is an immediate problem with the setup (this problem will manifest itself in Apache crashing).
  • How far you can push the Apache, PHP, and MySQL web-server (with concurrent connections and page request load).
  • And what Apache and PHP settings you should modify to get better performance and eliminate the crashes.

What can ab not do

  • ab will not parse HTML to get the additional assets of each page (CSS, images, etc).
  • ab can start to error out and break the test as the number of requests to perform is increased: more connections are established but not returned, the load increases, and more time passes (see ab -h for an explanation of the -r switch).
  • ab is an HTTP/1.0 client, not an HTTP/1.1 client, and “Connection: KeepAlive” requests (the ab -k switch) of dynamic pages will not work. Dynamic pages don’t have a predetermined “Content-Length” value, and using “Transfer-Encoding: chunked” is not possible with HTTP/1.0 clients.

KeepAlive – Apache Directive

A Keep-Alive connection with an HTTP/1.0 client can only be used when the length of the content is known in advance. This implies that dynamic content will generally not use Keep-Alive connections to HTTP/1.0 clients. A persistent connection with an HTTP/1.0 client cannot make use of the chunked transfer-coding and therefore MUST use a Content-Length for marking the ending boundary of each message.
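To make the constraint concrete, here is a sketch of the only shape an HTTP/1.0 keep-alive response can take: a fixed Content-Length tells the client where this response ends and the next may begin, which is exactly what a dynamically generated page cannot promise up front.

```http
HTTP/1.0 200 OK
Connection: Keep-Alive
Content-Length: 5
Content-Type: text/plain

Hello
```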

Request Floods

ab will flood the Apache server with requests – as fast as it can generate them (not unlike that of a DDoS attack). ab has no option to set a delay between these requests.

And given that these requests are generated on the same local system they are sent to (i.e., the network layer is bypassed), this will create a peak level of requests that will cause Apache to stop responding and the OS to start blocking/dropping additional requests.

The bigger the ab -c (concurrent number of requests to do at the same time) is, the lower your -n (total number of requests to perform) should be… Even with a -c of 5, -n should not be more than 200.

These are the error messages displayed by ab:

  • apr_socket_recv: An existing connection was forcibly closed by the remote host. (730054)
  • apr_pollset_add(): Not enough space (12)
  • and the crash dialog displayed by Windows.

When this happens (the above message is displayed, suggesting that Apache has crashed), just ignore it, as Apache is still running. Keep repeating the test until “Failed requests:” is reported as “0”, AND the “Percentage of the requests served within a certain time (ms)” spread between the 50% and 99% marks is about 2-20x (and not 200x). Otherwise, the test is not reliable, due to the issues that present themselves when ab floods Apache on loopback.
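That 50%-to-99% spread can be checked mechanically from a captured run (`ab ... > ab.out`). The percentile table below is illustrative sample output, not a real run.

```shell
# Sample ab percentile table (stands in for a real "ab ... > ab.out" capture).
cat > ab.out <<'EOF'
Percentage of the requests served within a certain time (ms)
  50%     12
  66%     14
  75%     16
  80%     18
  90%     25
  95%     40
  98%     70
  99%     95
 100%    120 (longest request)
EOF

# Pull out the 50% and 99% latencies and compute the spread between them.
p50=$(awk '$1 == "50%" {print $2}' ab.out)
p99=$(awk '$1 == "99%" {print $2}' ab.out)
ratio=$((p99 / p50))
echo "99%-to-50% spread: ${ratio}x"   # reliable runs stay roughly in the 2-20x band
```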

Here is the command for a good test of a simple index.php page:

ab -l -r -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/

Make sure that:

  • You’ve rebooted the system and don’t have anything extra open/running.
  • No extra PHP extensions are loaded: not Zend OPcache, APC, or XDebug.
  • You wait 4 minutes before performing another ab test, to avoid TCP/IP port exhaustion.
  • And in a test where KeepAlive works (it doesn’t in ab tests getting dynamic pages), the number of Apache Worker Threads is set to be greater than the number of concurrent users/visitors/connections.

If Apache or PHP crashes, reboot the computer or VM before performing another test (some things get stuck and continue to persist even after Apache and/or mod_fcgid’s PHP processes are restarted).

1 concurrent user doing 100 page hits

This is 100 sequential page loads by a single user:

ab -l -r -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/blog/

This shows you how well the web-server will handle a simple load of 1 user doing a number of page loads.

5 concurrent users each doing 10 page hits

This is 50 page loads by 5 different concurrent users, each user doing 10 sequential page loads.

ab -l -r -n 50 -c 5 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/blog/

This represents a peak load of a website that gets about 50,000+ hits a month.

10 concurrent users each doing 10 page hits

This is 100 page loads by 10 different concurrent users, each user doing 10 sequential page loads.

ab -l -r -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/blog/

10 concurrent (simultaneous) users is a lot of traffic; most websites will be lucky to see 1 or 2 users (visitors) a minute.
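The back-of-the-envelope arithmetic behind that claim, using the 50,000 hits a month figure mentioned earlier:

```shell
# A site doing 50,000 hits/month averages only about one hit per minute.
hits_per_month=50000
minutes_per_month=$((30 * 24 * 60))           # ~30-day month
avg=$((hits_per_month / minutes_per_month))   # integer average
echo "minutes/month: ${minutes_per_month}, average hits/minute: ${avg}"
```

So sustained concurrency of 10 represents a large multiple of the average traffic most sites ever see, which is the point of the peak-load tests.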

30 concurrent users each doing 20 page hits

This is 600 page loads by 30 different concurrent users, each user doing 20 sequential page loads.

ab -l -r -n 600 -c 30 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/blog/

This is the edge of what a non-cached WordPress setup will be able to handle without crashing or timing out the web-server.

90 concurrent users each doing 30 page hits

This is 2700 page loads by 90 different concurrent users, each user doing 30 sequential page loads.

ab -n 2700 -c 90 -k -H "Accept-Encoding: gzip, deflate" http://www.test.com/blog/

Only a fully cached (using mod_cache) Apache setup will be able to handle this type of load.

Analyze the ab Results

We only care about 3 things:

  1. How many Requests Per Second are we seeing?
  2. Are there any errors in the website’s or Apache’s (general) error logs or PHP logs?
  3. At what concurrency level does Apache crash and/or time-out?

Siege

Another tool you can use, similar to ab, is Siege. Siege is an HTTP load testing and benchmarking utility. It supports basic authentication, cookies, and the HTTP, HTTPS, and FTP protocols, and lets its user hit a server with a configurable number of simulated clients. Those clients place the server “under siege.”

Running one user for 30 seconds

siege -v -c 1 -t 30S http://www.test.com/

Apache JMeter

Apache JMeter can be used to test performance of both static and dynamic resources. It can simulate a heavy load on a server, group of servers, network, or object, to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance, or to test your server/script/object behavior under heavy concurrent load.

Install Apache JMeter on a Mac:

brew install jmeter

To run it:

open /usr/local/bin/jmeter

Other tools to mention

  1. Bees with Machine Guns! (Python tool)
  2. Multi-Mechanize – Performance Test Framework (Python tool)

Client-Side performance testing

It’s becoming more and more important to understand the throughput in a browser or on a mobile device.

Track your test performance

It’s not enough to test once in a while; make a habit of testing monthly, or whenever you upgrade or add new functionality. You also need tools to track your testing, so you can see historical patterns. For example, if a third-party API slows you down, you want to know when it started, so you need some tools for tracking.
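A minimal sketch of such tracking, assuming nothing more than a CSV file: append each run's requests-per-second figure so regressions show up over time. The 245.31 value is a placeholder standing in for a number parsed out of a real ab run.

```shell
# Append today's requests/sec to a running history file (created on first use).
logfile=perf-history.csv
[ -f "$logfile" ] || echo "date,requests_per_second" > "$logfile"
echo "$(date +%Y-%m-%d),245.31" >> "$logfile"
cat "$logfile"
```

Even this crude log is enough to plot, or to diff against the date a new plugin or third-party API was introduced.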

sitespeed.io

Sitespeed.io helps you analyze your website’s speed and performance based on performance best practices and timing metrics. It collects data from multiple pages on your website, analyzes them using the rules, and outputs the result as HTML or sends the metrics to Graphite.

You can analyze one site, analyze and compare multiple sites or let your continuous integration server break your build when your performance budget is exceeded.

$ sitespeed.io -u http://yoursite.com  -b firefox

You can throttle the connection when you are fetching metrics using the browser. Choose between:

  • mobile3g – 1.6 Mbps/768 Kbps – 300 RTT
  • mobile3gfast – 1.6 Mbps/768 Kbps – 150 RTT
  • cable – 5 Mbps/1 Mbps – 28 RTT
  • native – the current connection

And run it like this:

$ sitespeed.io -u http://yoursite.com -b chrome --connection mobile3g

By default, the browser will collect data until window.performance.timing.loadEventEnd happens, plus approximately 2 seconds more. That is perfectly fine for most sites, but if you do Ajax loading and you mark it with user timings, you probably want to include them in your test. Do that by changing the script that ends the test (--waitScript). The browser will close when the script returns true, or when the timeout (default 60 seconds) is reached:

sitespeed.io -u https://www.sitespeed.io -b chrome --waitScript 'return window.performance.timing.loadEventEnd>0'

For more details on how to use sitespeed.io check out the documentation.

Commercial sites

If you are not testing yourself, here are some commercial sites.

Worth Mentioning

Netflix Simian Army.

The Simian Army is a suite of tools for keeping your cloud operating in top form. Chaos Monkey is a resiliency tool that helps ensure that your applications can tolerate random instance failures.

What’s Next

When you know, or have an idea of, what is causing your issues, you can start looking at how to gain more performance from your server.

So let us take a look at what can be done. 99% of all performance gains will come from utilizing Apache’s caching mechanisms and using PHP’s Zend OPcache, and, once the bottleneck has moved from Apache and PHP to MySQL, from improving MySQL performance by tuning my.ini settings and optimizing/restructuring MySQL queries with the help of MySQL’s Slow Query log.

Switching from 32-bit to 64-bit Apache, PHP, and MySQL versions provides only limited/marginal performance gains.

The order in which you should proceed to gain performance:

  1. Apache’s mod_cache module to cache page requests/results. This will produce 5-10x the performance gains over all other methods combined.
  2. PHP’s Zend OPcache extension to cache PHP scripts as compiled objects. This will produce a 3-5x Requests Per Second speed up.
  3. memcached + php_memcache setup to cache PHP script’s or web app’s internal data and results. This can produce a good 50%-100% performance gain.
  4. Cache plugins and/or setting adjustments specific to the web-app: Cache plugins for WordPress, etc.
  5. mod_expires to make the client’s (visitor’s) Browser cache pages and page assets for a given time, instead of re-getting those pages and assets on each page load.
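As an illustration of step 5, a minimal mod_expires fragment; the content types and lifetimes are examples to adapt, not recommendations:

```apache
# Cache static assets in the visitor's browser for a while (example values).
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```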

Conclusion

Testing is an ongoing activity: you need to select the tools to use, and then continuously test and track performance gains.

I hope this was useful; I appreciate comments.

