One of my clients received an email from Heart Internet recently stating that a database on their account had exceeded its limit of 500MB and had ballooned to 3,561MB. This was a bit of a shock, as the database holds a WordPress install with only 82 posts.
My initial thought was that the comments hadn't been disabled and were most likely filling up the database, but when I checked phpMyAdmin I found that every table appeared to be reasonably small except for `wp_options`, which, although it only had 370 records, was somehow using up 3.4GB by itself.
When I looked into the table I found the usual `transient` records at the end; these look to be temporary records with a short TTL. The space being used appears to be overhead, although it doesn't show as such, and rebuilding the table via `OPTIMIZE TABLE` brings the table size down to a mere 1.5MB.
As this was the second time this client had received an email from Heart about this issue, I put together a scheduled task which optimises the `wp_options` table each day. Hopefully it will be useful for you too.
I placed the scheduled task file above the `public_html` folder, within a folder called `crons` and called the file `optimise-db.php`. You can change the location and name of the file but you'll need to update the paths used in the code and control panel.
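As a rough sketch, `optimise-db.php` can look something like this. The environment variable names, credentials handling, and the bare `wp_options` table name are all placeholders I've assumed for illustration – swap in your own connection details and table prefix:

```php
<?php
// optimise-db.php – rebuilds the wp_options table to reclaim overhead.
// Connection details are placeholders: set them via environment variables
// (or hard-code your own) before pointing a daily cron at this file.

// Build the OPTIMIZE TABLE statement, stripping backticks from the name.
function build_optimise_sql(string $table): string
{
    return 'OPTIMIZE TABLE `' . str_replace('`', '', $table) . '`';
}

// Connect and run the optimisation; returns true on success.
function optimise_table(string $host, string $user, string $pass, string $db, string $table): bool
{
    mysqli_report(MYSQLI_REPORT_OFF); // handle errors manually below

    $mysqli = new mysqli($host, $user, $pass, $db);
    if ($mysqli->connect_error) {
        return false;
    }

    $ok = (bool) $mysqli->query(build_optimise_sql($table));
    $mysqli->close();

    return $ok;
}

// Only runs when the placeholder environment variables are configured.
$host = getenv('OPTIMISE_DB_HOST');
if ($host !== false) {
    optimise_table(
        $host,
        (string) getenv('OPTIMISE_DB_USER'),
        (string) getenv('OPTIMISE_DB_PASS'),
        (string) getenv('OPTIMISE_DB_NAME'),
        'wp_options'
    );
}
```

Pointing a daily cron entry at this file (e.g. `php /home/user/crons/optimise-db.php`) then keeps the overhead from building back up.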
When you want to measure the actual load time of a PHP page, most developers grab the microtime at the top of the script. While this does measure how long the scripts on the page take to run, it misses everything that happens before the first line of your code executes.
To capture that, you can use `$_SERVER['REQUEST_TIME_FLOAT']` (available since PHP 5.4.0) to get the actual start time of the page request, which can differ from a top-of-script microtime by 0.01 seconds or so.
The following error occurred on CentOS 6 with cPanel 11.48.4 when using the CLI version of PHP:
SourceGuardian requires Zend Engine API version 220131226. The Zend Engine API version 220121212 which is installed, is outdated.
Googling for a way to safely update Zend Engine API resulted in 3+ year old threads with no answers or clues on how to update with WHM installed.
If you're looking to update the Zend Engine API, then this guide is not for you. Instead, I chose to remove Zend from the CLI PHP config as the error message being displayed on all PHP crons was breaking the scripts.
Here's how to remove Zend from your PHP CLI config...
First, find the config file for the CLI PHP. You can do this by checking your existing config with `php -i` – use grep to grab the path: `php -i | grep 'Configuration File'`.
For me, `php.ini` was located at `/usr/local/lib/php.ini`.
Find the line `zend_extension = "/usr/local/sourceguard/ixed.x.x.lin"` and comment it out. Restart Apache and you should find that the error no longer displays. I'm not entirely sure what the Zend Engine is used for under CLI PHP, but I haven't encountered any issues yet.
If you do encounter issues, just uncomment the line and restart Apache. These instructions should be a last resort if you cannot update the Zend Engine API – follow them at your own risk.
Sending the right HTTP protocol with your headers from PHP is easy; you just need to find out which protocol your server is using. Most servers will be using HTTP/1.1, but letting PHP supply the protocol version for you means safer headers on older systems and greater cross-platform compatibility.
Rather than sending
header( 'HTTP/1.1 404 Not Found' );
we're going to send
header( $_SERVER['SERVER_PROTOCOL'] . ' 404 Not Found' );
`$_SERVER['SERVER_PROTOCOL']` gives us the protocol the server is actually using, in the same 'HTTP/x.x' format as before.
This can be used with any header that requires the HTTP protocol to be sent; it is not limited to 404 errors.
Available from PHP 5.4.0, the `http_response_code()` function is an alternative to this; it sends the correct protocol along with the correct status code description.
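As a sketch, here are the two approaches side by side; `status_line()` is a hypothetical helper I've added that falls back to HTTP/1.0 when no protocol is available (e.g. on the CLI):

```php
<?php
// Build a raw status line using whatever protocol the server reports.
function status_line(int $code, string $reason): string
{
    $protocol = $_SERVER['SERVER_PROTOCOL'] ?? 'HTTP/1.0';
    return $protocol . ' ' . $code . ' ' . $reason;
}

// Pre-5.4 style: send the raw status line ourselves.
header(status_line(404, 'Not Found'));

// PHP 5.4.0+: PHP picks the protocol and reason phrase for us.
http_response_code(404);
```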
`CURLOPT_RETURNTRANSFER` is a simple way of copying a file from a remote server onto your own. However, if you're downloading a large file you may hit memory limits, because the entire contents of the download are read into memory before being saved.
Note: Even if your memory limit is set extremely high, you would be putting unnecessary strain on your server by reading a large file straight into memory.
Instead, you can write the download straight to a file stream using `CURLOPT_FILE`.
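Here's a rough sketch of that approach; the `download_to_file()` helper name, the example URL, and the destination path are illustrative, not a fixed API:

```php
<?php
// Stream a remote file straight to disk with CURLOPT_FILE, so the
// full download is never held in memory.

function download_to_file(string $url, string $destination): bool
{
    $fp = fopen($destination, 'wb');
    if ($fp === false) {
        return false;
    }

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fp);            // write the body to this stream
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_FAILONERROR, true);    // treat HTTP >= 400 as failure

    $ok = curl_exec($ch); // returns true/false here, not the body

    curl_close($ch);
    fclose($fp);

    return $ok !== false;
}

// Example usage (URL and path are placeholders):
// download_to_file('https://example.com/large-file.zip', '/tmp/large-file.zip');
```

Because `curl_exec()` writes chunks to the stream as they arrive, memory usage stays flat no matter how large the file is.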
This script allows you to perform a SQL Server operation in batches, which should help keep the database's transaction log down whilst performing large operations.
`SET ROWCOUNT` sets how many records will be affected per statement. `SET @intFlag = 1` starts the loop counter at 1. `WHILE (@intFlag <= 100)` loops through the code X times (for this example, 100). `SET @intFlag = @intFlag + 1` adds 1 to the loop counter on each pass.
SET ROWCOUNT 500000;
DECLARE @intFlag INT
SET @intFlag = 1
WHILE (@intFlag <= 100)
BEGIN
    DELETE FROM tblTemp WHERE tmpDate < '2012-10-01'
    SET @intFlag = @intFlag + 1
END
SET ROWCOUNT 0; -- reset, so later statements aren't limited
Here the ROWCOUNT is set to 500,000. On this server, deleting 1,000,000 records in 2 batches of 500,000 was 35 seconds faster than a single 1,000,000-record delete statement, but running 10 batches of 100,000 was also slower than 2 batches of 500,000. This may vary from server to server.