Heart Internet :: wp_options table creates lots of transient records

One of my clients received an email from Heart Internet recently stating that a database on their account had exceeded its 500MB limit and had ballooned to 3,561MB. This was a bit of a shock, as the database holds a WordPress install with only 82 posts.

My initial thought was that comments hadn't been disabled and were most likely filling up the database, but when I checked phpMyAdmin I found that every table appeared to be a reasonable size except for `wp_options`, which, despite holding only 370 records, was somehow using up 3.4GB by itself.

When I looked into the table I found the usual `transient` records at the end; these are temporary records with a short TTL. The space being used appears to be overhead, although it doesn't show as such, and rebuilding the table via `OPTIMIZE TABLE` brought the size down to a mere 1.5MB.
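
If you want to check the damage for yourself first, something like the following will do it (a minimal sketch, assuming the default `wp_` table prefix):

-- Count the transient rows (assumes the default wp_ prefix)
SELECT COUNT(*) FROM wp_options WHERE option_name LIKE '%_transient_%';

-- Rebuild the table and reclaim the overhead
OPTIMIZE TABLE wp_options;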

As this was the second time this client had received an email from Heart about this issue, I put together a scheduled task which optimises the `wp_options` table each day. Hopefully this will be useful for you.

I placed the scheduled task file above the `public_html` folder, within a folder called `crons`, and called the file `optimise-db.php`. You can change the location and name of the file, but you'll need to update the paths used in the code and the control panel.

File: ~/crons/optimise-db.php

<?php

// Pull in the DB_* constants and $table_prefix from the WordPress config
require_once $_SERVER['HOME'] . '/public_html/wp-config.php';

// Connect to Database
//
try {
    $dbh = new PDO( 'mysql:host=' . DB_HOST . ';dbname=' . DB_NAME . ';charset=' . DB_CHARSET, DB_USER, DB_PASSWORD );
    $dbh->setAttribute( PDO::ATTR_DEFAULT_FETCH_MODE, PDO::FETCH_OBJ );
} catch( PDOException $e ) {
    echo 'Connection failed: ' . $e->getMessage();
    exit;
}

// Optimise wp_options table
//
try {
    $result = $dbh->query( 'OPTIMIZE TABLE `' . $table_prefix . 'options`' )->fetch();
    print_r( $result );
} catch( PDOException $e ) {
    echo 'Query failed: ' . $e->getMessage();
    exit;
}

The file uses the table prefix as set within `wp-config.php`, so it should just be a case of copy and paste. If you want to test that the file works, you can use an include file within `public_html`.

File ~/public_html/test.php

<?php

require_once '../crons/optimise-db.php';

Next you'll need to add the `optimise-db.php` file to a scheduled task.

The command you use should be similar to the one below, but you'll need to update the path to the file.

/usr/bin/php56 -f /home/sites/example.org/crons/optimise-db.php

Here I'm using PHP 5.6 as I know Heart will continue to support this for some time, and something as simple as running a single MySQL query won't require PHP 7.

You can set the task to run once per week or once per day (usually at midnight). I opted for midnight each day.


With this set, you can click on the `Test Command` button and you should receive output similar to the following:
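
The exact result row depends on the table's storage engine, but for a MyISAM table the `print_r` call prints something along these lines (`exampledb` is a placeholder for your database name):

stdClass Object
(
    [Table] => exampledb.wp_options
    [Op] => optimize
    [Msg_type] => status
    [Msg_text] => OK
)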


And that's it. This scheduled task will stop the overhead left behind by transient records from bloating your database table.

If you have any questions I might be able to help, so tweet me @WilliamIsted.

Measuring PHP page load time

When you want to measure the actual load time of a PHP page, most developers will take the microtime at the top of the page. While this measures the running time of the scripts on the page, it misses the time spent before your code started executing, so it doesn't accurately measure the load time of the request.

To do so, you can use $_SERVER['REQUEST_TIME_FLOAT'] to get the actual start time of the page request, which can differ from a microtime taken at the top of the script by 0.01 seconds or so.

$total_time = microtime( true ) - $_SERVER['REQUEST_TIME_FLOAT'];

If you're just looking for a timer function, the class below is a simple and accurate way to do so.

class Timer {
	private $start;

	public function __construct() {
		$this->start = microtime( true );
	}

	// Seconds elapsed since construction (or the last reset)
	public function stop() {
		return microtime( true ) - $this->start;
	}

	public function reset() {
		$this->start = microtime( true );
	}
}

$timer = new Timer();
$total_time = $timer->stop();
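
As a quick sanity check, simulating a quarter-second of work with `usleep` measures as expected:

$timer = new Timer();
usleep( 250000 );                           // simulate 0.25s of work
printf( "%.4f seconds\n", $timer->stop() ); // prints roughly 0.2500 seconds
$timer->reset();                            // start timing again from now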

SourceGuardian - The Zend Engine API is outdated

The following error occurred on CentOS 6 with cPanel 11.48.4 when using the CLI version of PHP:

SourceGuardian requires Zend Engine API version 220131226. The Zend Engine API version 220121212 which is installed, is outdated.

Googling for a way to safely update the Zend Engine API resulted in 3+ year old threads with no answers or clues on how to update with WHM installed.

If you're looking to update the Zend Engine API, then this guide is not for you. Instead, I chose to remove Zend from the CLI PHP config, as the error message being printed by every PHP cron was breaking the scripts.

Here's how to remove Zend from your PHP CLI config...

First, find the config file for the CLI PHP. You can do this by checking your existing config file using `php -i`, and using grep to grab the path: `php -i | grep 'Configuration File'`.
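
The output will look something like this (these are the paths from the server in question):

$ php -i | grep 'Configuration File'
Configuration File (php.ini) Path => /usr/local/lib
Loaded Configuration File => /usr/local/lib/php.ini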

For me, php.ini was located at /usr/local/lib/php.ini.

Find the line zend_extension = "/usr/local/sourceguard/ixed.x.x.lin" and comment it out. Restart Apache and you should notice that the error no longer displays. I'm not entirely sure what the Zend extension does under CLI PHP, but I haven't encountered any issues yet.
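
In php.ini a leading semicolon comments a line out, so after the edit the line should read (the `x.x` version placeholder is as in the original path):

; zend_extension = "/usr/local/sourceguard/ixed.x.x.lin"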

If you encounter issues, just uncomment the line and restart Apache. Following these instructions should be a last resort if you cannot update Zend Engine API, so you do so at your own risk.

Sending dynamic status headers from PHP

Sending the right HTTP protocol for headers from PHP is easy; you just need to find out which HTTP protocol your server is using. Most servers will be using HTTP/1.1, but letting PHP supply the protocol version for you means safer headers for older systems and greater cross-platform compatibility.

Rather than sending

header( 'HTTP/1.1 404 Not Found' );

we're going to send

header( $_SERVER['SERVER_PROTOCOL'] . ' 404 Not Found' );

where

$_SERVER['SERVER_PROTOCOL']

gives us the HTTP protocol, which matches the previous 'HTTP/x.x' format.

This can be used with any header that requires the HTTP protocol to be sent; it is not limited to 404 errors in particular.
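
For example, the same pattern covers a permanent redirect (the destination URL here is just a placeholder):

header( $_SERVER['SERVER_PROTOCOL'] . ' 301 Moved Permanently' );
header( 'Location: https://www.example.com/new-path/' ); // placeholder destination
exit;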

EDIT 2015-10-10:

Available from PHP 5.4.0, the function 'http_response_code' is an alternative to this and sends the correct protocol along with the correct status code description.

Usage: http_response_code(404);
Output: HTTP/1.1 404 Not Found

Recursively zip files with PHP

This function is useful for zipping files/directories when you only have FTP access to a site, and for when you are using a web host ‘File Manager’ which bugs out on permissions or otherwise.

function Zip($source, $destination)
{
    if (!extension_loaded('zip') || !file_exists($source)) {
        return false;
    }
    $zip = new ZipArchive();
    if (!$zip->open($destination, ZipArchive::CREATE)) {
        return false;
    }
    // Normalise to forward slashes so the relative-path replacements below work on Windows too
    $source = str_replace('\\', '/', realpath($source));
    if (is_dir($source) === true)
    {
        $files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($source), RecursiveIteratorIterator::SELF_FIRST);
        foreach ($files as $file)
        {
            $file = str_replace('\\', '/', $file);
            // Ignore "." and ".." folders
            if( in_array(substr($file, strrpos($file, '/')+1), array('.', '..')) )
                continue;
            $file = realpath($file);
            if (is_dir($file) === true)
            {
                $zip->addEmptyDir(str_replace($source . '/', '', $file . '/'));
            }
            else if (is_file($file) === true)
            {
                // Store the file under a path relative to $source
                $zip->addFromString(str_replace($source . '/', '', $file), file_get_contents($file));
            }
        }
    }
    else if (is_file($source) === true)
    {
        $zip->addFromString(basename($source), file_get_contents($source));
    }
    return $zip->close();
}

Zip('/folder/to/compress/', './compressed.zip');

Recursively copy files with PHP

This function is useful for copying files when you only have FTP access to a site, and for when you are using a web host 'File Manager' which bugs out on permissions or otherwise.

function recurse_copy($src, $dst) {
    $dir = opendir($src);
    @mkdir($dst); // create the destination folder, ignoring the warning if it already exists
    while (false !== ($file = readdir($dir))) {
        if (($file != '.') && ($file != '..')) {
            if (is_dir($src . '/' . $file)) {
                // Recurse into subdirectories
                recurse_copy($src . '/' . $file, $dst . '/' . $file);
            }
            else {
                copy($src . '/' . $file, $dst . '/' . $file);
            }
        }
    }
    closedir($dir);
}

recurse_copy('./big-directory', './big-directory-new');

Download file with cURL & PHP

CURLOPT_RETURNTRANSFER is a simple way of copying a file from a remote server onto your own. However, if you're downloading a large file you may hit memory limits, because the entire contents of the download have to be read into memory before being saved.

Note: Even if your memory limit is set extremely high, reading a large file straight into memory puts unnecessary strain on your server.

Instead, you can write the download straight to a file stream using CURLOPT_FILE.

$url  = 'http://www.example.com/a-large-file.zip';
$path = '/path/to/a-large-file.zip';

$fp = fopen($path, 'w');             // open the local file for writing
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_FILE, $fp); // stream the response body straight into the file
$success = curl_exec($ch);           // returns true/false when CURLOPT_FILE is set
curl_close($ch);
fclose($fp);