Category: JavaScript

Canvas Reaction

Screenshot from Canvas Reaction game

Play Canvas Reaction

Click here to play Canvas Reaction

What is it?

On a rather lengthy car journey, I developed a bit of an addiction to Zwigglers’ “Chain Rxn” game on a friend’s iPhone. It’s also available as an online Flash version.

The idea of the game is to seed an initial explosion – creating a chain reaction. The goal is to explode a given number of balls. Each collision generates an additional score. The further away from the original explosion a ball hits, the more points it is worth (exponentially increasing). Obtaining a high score means timing the initial explosion correctly in order to maximise your score – as well as exploding the required minimum number of balls.

This demo is pretty much a clone of the last level of Chain Rxn: explode 54 out of 60 balls whilst obtaining the highest score possible.


This version is written using pure HTML5 features with JavaScript. It uses the canvas element for the rendering and the audio element for sound. No Flash here! Chain Rxn seemed a prime candidate to demo the features of canvas in modern browsers so, after a bit of late night hacking, here it is.

Because Canvas Reaction uses HTML5 elements it does require a modern browser. I believe it will work in most versions of Firefox 3.0+ as well as recent versions of Safari and Chrome. There may or may not be any sound (when the balls explode) depending on the availability of the audio element and its ability to play Ogg files. It also uses the canvas text APIs, which have only recently become available on most platforms.

Performance seems pretty good in most modern browsers with average hardware. I don’t plan to develop this any further – it’s really just for fun and a chance to play with some HTML5 features. Plus, of course, it’s just a ripoff anyway. Not that Chain Rxn is exactly original of course!

Percent encoding (aka URL encoding) all characters

Occasionally I get the urge to perform a bit of (disclaimer: amusing and non-malicious!) cross-site scripting (XSS) against the odd site I find which is just begging to be abused. Here’s a tool to percent-encode all characters in a URL parameter.

URL encoding (percent-encoding) is used to escape reserved characters in a URL when passing parameters around. For example, a GET parameter containing an ampersand must be escaped, since that character would otherwise be treated as the start of the next parameter.

The standard URL encoders take an input and replace all reserved characters with their percent-encoded equivalents. I couldn’t find an online tool to encode all characters so I knocked up a quick bit of JavaScript to do the job.

Why would I want to do this? When playing around with XSS it’s nicer to hide the full payload in the URL rather than giving away hints as to what’s going to happen with all the unreserved characters still human readable.
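
The idea boils down to very little code. Here’s a minimal sketch of such an encoder (illustrative only, not the actual tool’s source); it handles single-byte code points only, so multi-byte input should be run through encodeURIComponent first:

```javascript
// Percent-encode EVERY character of a string, not just the reserved ones.
// Illustrative sketch -- assumes code points below 256.
function encodeAll(str) {
    var out = '';
    for (var i = 0; i < str.length; i++) {
        var hex = str.charCodeAt(i).toString(16).toUpperCase();
        out += '%' + (hex.length < 2 ? '0' + hex : hex);
    }
    return out;
}
```

For example, encodeAll('a&b') gives '%61%26%62' – the unreserved characters are hidden right along with the reserved ones.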

So here it is: a JavaScript Percent-Encoder.

Efficient caching of versioned JavaScript, CSS and image assets for fun and profit

“The new image is showing but I think it’s using the old stylesheet!”

Sound familiar?


Caching of a web page’s assets, such as CSS and image files, can be a double-edged sword. Done right, it leads to much faster load times with less strain on the server. Done incorrectly, or, worse, not considered at all, it opens developers up to all kinds of synchronisation issues whenever files are modified.

In a typical web application, certain assets rarely change; common theme images and JavaScript libraries are good examples. CSS files and the site’s core JavaScript, on the other hand, are prime candidates for frequent change. Predicting which files will change, and when, is not an exact science.

Caching of assets is the browser’s default behaviour. If an expiry time is not explicitly set, it is up to the browser to decide how long to wait before checking the server for a new version. Once a file is in a browser’s cache, you’re at the mercy of the browser as to when the user will see the new version. Minutes? Hours? Days? Who knows. Your only option is to rename the asset in order to force the new version to be fetched.

So caching is evil, right? Well, no. With a little forethought, caching is your friend. And the user’s friend. And the web server’s friend. Treated right, it’s the life of the party.

Imagine your site is deployed once and nothing changes for eternity. The optimal caching strategy here is to instruct the browser to cache everything indefinitely. This means that, after the first visit, a user may never have to contact the server again. Load times are speedy. Your server’s relaxed. All is well. The problem, of course, is that any changes you do inevitably make will never be shown to users who have the site in their cache. At least, not without renaming the changed asset so the browser considers it a new file.

So the problem is that we want the browser to cache everything forever. Unless we change something. And we want the browser to know when we do this. Without asking us. And it’d be nice if this was automated. Ideas?

Option One – Set an expiry date in the past for all assets

Never cache anything!

Not really an option, but it does solve half of the problem. The browser will never cache anything and so the user will always see the latest version of all site assets. It works, but we’re completely missing out on one of the main benefits of caching – faster loading for the user and less stress on the server. Next.

Option Two – Include a site version string in every URL

One commonly used strategy is to include a unique identifier in every URL, changed whenever the site is deployed. For example, an image at the following URL:

logo.81.png

Would become:

logo.82.png

Here, 82 is a unique identifier. With some Apache mod_rewrite trickery, we can transparently map this to the original URL. As far as the browser is concerned, this is a different file to the previous logo.81.png image and so any existing cache of this file is ignored.

Generally, this technique is employed in a semi-automated way. The version number can either be set manually in a configuration file (for example) or pulled from the repository version number. With this technique, all assets can be set to cache indefinitely.

The above is a pretty good solution. I’ve used it myself. But it’s not optimal. Every time a new version of the site is deployed, any assets in the user’s cache are invalidated. The whole site needs to be downloaded again. If site updates are infrequent, this isn’t too much of a problem. It sure as hell beats never caching anything or, worse, leaving the browser to decide how long to cache each item.

Option Three – Fine grained caching + Automated!

Clearly, the solution is to include a unique version string per file. This means that every file is considered independently and will only be re-downloaded if it has actually changed. One technique for doing this is to use the file’s last-modified timestamp, which gives a unique ID that changes whenever the file contents change. If the file is under version control (your projects are versioned, right?) we can’t use the modified timestamp as-is, since it will change whenever the file is checked out. But we can find out which revision the file last changed in (under SVN at least) so we’re still good to go.

The goal is as follows: To instruct the browser to cache all assets (in this case, JavaScript, CSS and all image files) indefinitely. Whenever an asset changes, we want the URL to also change. The result of this is that whenever we deploy a new version of the site, only assets that have actually changed will be given a new URL. So if you’ve only changed one CSS file and a couple of images, repeat visits to the site will only need to re-download these files. We’d also like it to be automated. Only a masochist would attempt to manually change URLs whenever something changes on any sufficiently complex site.

Presented here is an automated solution for efficient caching using a bit of PHP and based on a site in an SVN repository. It’s also based around Linux. It could easily be adapted to other scripting languages, operating systems and/or version control systems – these technologies are merely presented here as an example.

To achieve the automated part, we need to run a script over the checked-out version of the site prior to deployment. The script will search the project for URLs (for a specific set of assets) and rewrite any it finds to include a unique identifier. In our case, we’ll use the svn info command to find the last revision in which the file actually changed. Another approach would be to simply take a hash of the file contents (md5 would be a good candidate) and use this as its last-changed identifier.

Rather than renaming each file to match the included identifier we set in the URL, we’ll use mod_rewrite within Apache to match a given format of URL back to its original. So myasset.123.png will be transparently mapped back to its original myasset.png filename.

Here’s a quick script I knocked up in PHP to facilitate this process. It should be run on a checked-out working copy. It scans a given directory for files of a given type (in my case, “.tpl” (HTML templates) and .css files). Within each file it finds, it looks for any assets of a given type referenced in applicable areas (href and src attributes in HTML, url() in CSS). It then converts each URL to a filesystem path and checks the working copy for its existence. If the file exists, the URL is rewritten to include the last-changed version number (pulled from svn info). Once this is done we just need to include an Apache mod_rewrite rule as discussed above.


<?php
// config
$arr_config = array(
    // file types to check within for assets to version
    'file_extensions' => array('tpl', 'css'),
    // asset extensions to version
    'asset_extensions' => array('jpg', 'jpeg', 'png', 'gif', 'css', 'ico', 'js', 'htc'),
    // filesystem path to the webroot of the application (so we can translate
    // relative urls to the actual path on the filesystem)
    'webroot' => dirname(__FILE__) . '/../www',
    // regular expressions to match assets
    'regex' => array(
        '/(?:src|href)="(.*)"/iU', // match assets in src and href attributes
        '/url\((.*)\)/iU'          // match assets in CSS url() properties
    )
);

// arguments
// we require just one argument, the root path to search for files
if(!isset($_SERVER['argv'][1])) {
    die("Error: first argument must be the path to your working copy\n");
}

// execute
version_assets($_SERVER['argv'][1], $arr_config);

/**
 * Checks each file in the passed path recursively to see if there are any assets
 * to version.
 * Only file extensions defined in the config are checked and then only assets matching
 * a particular filetype are versioned.
 * If an asset referenced is not found on the filesystem or is not under version control
 * within the working copy, the asset is ignored and nothing is changed.
 *
 * @param str $str_search_path    Path to begin scanning of files
 * @param arr $arr_config         Configuration params determining which files to check, which
 *                                asset extensions to check etc.
 * @return void
 */
function version_assets($str_search_path, $arr_config) {
    // pull in filenames to check
    $arr_files = get_files_recursive($str_search_path, $arr_config['file_extensions']);
    foreach($arr_files as $str_file) {
        // load the file into memory
        $str_file_content = file_get_contents($str_file);
        // look for any matching assets in the regex list defined in the config
        $arr_matches = array();
        foreach($arr_config['regex'] as $str_regex) {
            if(preg_match_all($str_regex, $str_file_content, $arr_m)) {
                $arr_matches = array_merge($arr_matches, $arr_m[1]);
            }
        }
        // filter out any matches that do not have an extension defined in the asset list
        $arr_matches_filtered = array();
        foreach($arr_matches as $str_match) {
            $arr_url = parse_url($str_match);
            $str_asset = $arr_url['path'];
            if(preg_match('/\.(' . implode('|', $arr_config['asset_extensions']) . ')$/i', $str_asset)) {
                $arr_matches_filtered[] = $str_asset;
            }
        }
        // if we've found any matches, process them
        if(count($arr_matches_filtered)) {
            // flag to determine if we need to write any changes back once we've processed
            // each match
            $boo_modified_file = false;
            foreach($arr_matches_filtered as $str_url_asset) {
                // use parse_url to extract just the path
                $arr_parsed = parse_url($str_url_asset);
                $str_url_path = $arr_parsed['path'];
                // if this is a relative url (e.g. beginning ../) then work out the filesystem path
                // based on the location of the file containing the asset
                if(strpos($str_url_path, '../') === 0) {
                    $str_fs_path = $arr_config['webroot'] . '/' . dirname($str_file) . '/' . $str_url_path;
                }
                else {
                    $str_fs_path = $arr_config['webroot'] . '/' . $str_url_path;
                }
                // normalise path with realpath (returns false if the file does not exist)
                $str_fs_path = realpath($str_fs_path);
                // only proceed if the file exists
                if($str_fs_path) {
                    // execute the svn info command to retrieve the change information
                    $str_svn_result = @shell_exec('svn info ' . escapeshellarg($str_fs_path));
                    $arr_svn_matches = array();
                    // extract the last changed revision to use as the version
                    preg_match('/Last Changed Rev: ([0-9]+)/i', $str_svn_result, $arr_svn_matches);
                    // only proceed if this file is in version control (i.e. we retrieved a valid match
                    // from the regex above)
                    if(count($arr_svn_matches)) {
                        $str_version = $arr_svn_matches[1];
                        // add version number into the file url (in the form asset.name.VERSION.ext)
                        $str_versioned_url = preg_replace('/(.*)(\.[a-zA-Z0-9]+)$/', '$1.' . $str_version . '$2', $str_url_asset);
                        $str_file_content = str_replace($str_url_asset, $str_versioned_url, $str_file_content);
                        // flag the file as modified so the changes get written back
                        $boo_modified_file = true;
                        echo 'Versioned: [' . $str_url_asset . '] referenced in file: [' . $str_file . ']' . "\n";
                    }
                    else {
                        echo 'Ignored: [' . $str_url_asset . '] referenced in file: [' . $str_file . '] (not versioned)' . "\n";
                    }
                }
                else {
                    echo 'Ignored: [' . $str_url_asset . '] referenced in file: [' . $str_file . '] (not on filesystem)' . "\n";
                }
            }
            if($boo_modified_file) {
                echo '-> WRITING: ' . $str_file . "\n";
                // write changes to this file back to the file system
                file_put_contents($str_file, $str_file_content);
            }
        }
    }
}

/**
 * Utility method to recursively retrieve all files under a given directory. If
 * an optional array of extensions is passed, only these filetypes will be returned.
 * Ignores any svn directories.
 *
 * @param str $str_path_start  Path to begin searching
 * @param mix $mix_extensions  Array of extensions to match or null to match any
 * @return array
 */
function get_files_recursive($str_path_start, $mix_extensions = null) {
    $arr_files = array();
    if($obj_handle = opendir($str_path_start)) {
        while($str_file = readdir($obj_handle)) {
            // ignore meta files and svn directories
            if(!in_array($str_file, array('.', '..', '.svn'))) {
                // construct full path
                $str_path = $str_path_start . '/' . $str_file;
                // if this is a directory, recursively retrieve its children
                if(is_dir($str_path)) {
                    $arr_files = array_merge($arr_files, get_files_recursive($str_path, $mix_extensions));
                }
                // otherwise add to the list
                else {
                    // only add if it's included in the extension list (if applicable)
                    if($mix_extensions == null || preg_match('/.*\.(' . implode('|', $mix_extensions) . ')$/Ui', $str_file)) {
                        $arr_files[] = str_replace('//', '/', $str_path);
                    }
                }
            }
        }
        closedir($obj_handle);
    }
    return $arr_files;
}

This is then executed like so:

php version_assets.php "/path/to/project/checkout"

The Apache config

# Rewrite versioned asset urls
RewriteEngine on
RewriteRule ^(.+)(\.[0-9]+)\.(js|css|jpg|jpeg|gif|png)$ $1.$3 [L]
# Set near indefinite expiry for certain assets
<FilesMatch "\.(css|js|jpg|jpeg|png|gif|htc)$">
    ExpiresActive On
    ExpiresDefault "access plus 5 years"
</FilesMatch>

Note: You’ll need the rewrite and expires modules enabled in Apache. This is for Apache 2. The syntax above may be somewhat different for Apache 1.3. To enable the modules in Apache 2 you can simply use:

a2enmod rewrite
a2enmod expires

Done! Now, whenever the site is deployed, only changed assets will be downloaded. Fast, efficient and headache free. Well, unless…


The above script is purely to illustrate the process. Your specific needs may well require a slightly different approach. For example, there may be other places it needs to look for URLs. If you do a lot of dynamic URL construction or funky script includes with JavaScript, you may need a secondary deployment script or procedure to accommodate them. Whatever the approach, you must be careful to add the unique version to every file type given an indefinite expiry, otherwise you’re telling the browser to cache a file forever with no URL change when new versions are deployed.

Another area to watch out for would be if you serve assets from different domains. Again, this technique will work in principle but will need some modification. It’s an exercise left to you, dear reader.

So, there you have it. A reasonably hassle free, efficient and optimised caching policy for your web applications. I hope you find this helpful – good luck.

Extending DOM elements in Mootools ala jQuery

There are a lot of jQuery plugins that add simple enhancements to DOM elements (form validation, tooltips on links, and so on). In contrast, the MooTools plugin scene is relatively barren, which is a shame. The architecture of MooTools is quite different to jQuery’s and perhaps doesn’t encourage this kind of plugin to the same extent but, for the common case, plugins can be written in a fairly similar fashion.

As an example, here’s how you’d write a method you can call on any DOM Element (or DOM Elements collection) to set the colour to red. Useless, of course, but it illustrates the method simply.

Element.implement({
    makered: function() {
        this.setStyle('color', 'red');
        return this;
    }
});

And it’s as easy as that. To use this world-changing functionality (make my links red!), just use the standard MooTools selector syntax. The following sets the colour of all anchor elements on the page to red:

$$('a').makered();

Note that our makered() method must (or rather should) return this in order to maintain MooTools’ method chaining functionality.

The following example illustrates the chaining: it calls our custom Element method (making all links red) and then fades them all out:

$$('a').makered().fade('out');

Extend away!

JSFractal – JavaScript Fractal Explorer

JSFractal Screenshot

Have a play

You can try out JSFractal here.

What is it?

JSFractal is a web-based tool (written entirely client side in JavaScript) to allow you to explore fractals. Currently, only the Mandelbrot set is implemented but I hope to add support for switching to Julia sets and other types in the future.

You can drag-select on the fractal to choose an area to zoom in to. As you progress, you can see a timeline of previous points of the fractal, allowing you to switch to any prior state and continue in a different direction.

As well as allowing you to see previous states of the fractal, the timeline also supports a playback feature so you can watch the entire transition you’ve created zoom from start to finish.

At any point you can change the quality settings and size of the rendered fractal. Please note however, the higher the quality and the larger the fractal size – the longer you’ll wait! Currently there is only one colour scheme to choose from.

Finally, you can bookmark the page at any point. Returning to the URL will render the fractal that was showing at the point you bookmarked the page.

Have a go!


With the current JavaScript performance arms race going on between V8, Webkit, Mozilla and Opera, I wanted something fun to write that would really push the browsers to the limits.

After picking up James Gleick’s Chaos again, my interest in fractals was renewed – hence this project. Obviously a browser isn’t the best medium for something as computationally heavy as a fractal exploration tool, but what the hell. It’s a good experiment!

On a reasonably spec’d machine and one of the pre-release ultra-mega-crazy-fast JavaScript engines it’s actually pretty usable. There are a few tricks used along the way to boost performance, which I’ll go into later.

How does it work?


JSFractal is written entirely in JavaScript with no server side components. It uses MooTools 1.2 and the canvas element. Thus, Internet Explorer need not apply. Sorry! I attempted to patch in support with Google’s ExplorerCanvas but it was so ridiculously slow that I had to drop it.


It has been tested and is thought to work in the following browsers:

  • Mozilla Firefox 1.5+
  • Safari 3+
  • Opera 9
  • Google Chrome

For reference, best performance is currently with Firefox 3.1 beta (with JIT JavaScript enabled) and Google Chrome.

Rendering techniques

Every time a fractal is generated, each pixel’s value needs to be calculated. On top of this, each pixel needs to be drawn individually. This means the key to fast performance is:

  • Efficient calculations for each pixel
  • Fast drawing to the canvas

I refined the fractal calculations as much as possible and had it running pretty fast. However, the actual rendering to the canvas proved to be a little trickier to speed up.
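
For reference, the per-pixel work is the classic escape-time iteration. A sketch (the function name and structure here are mine, not JSFractal’s actual source):

```javascript
// Escape-time iteration for one point c = (cx, cy) of the Mandelbrot set:
// iterate z -> z^2 + c and count how many steps |z| stays within 2.
// maxIter effectively plays the role of the "quality" setting.
function mandelIterations(cx, cy, maxIter) {
    var x = 0, y = 0, i = 0;
    while (x * x + y * y <= 4 && i < maxIter) {
        var xt = x * x - y * y + cx; // real part of z^2 + c
        y = 2 * x * y + cy;          // imaginary part of z^2 + c
        x = xt;
        i++;
    }
    return i; // i === maxIter means the point is (probably) inside the set
}
```

The returned count is what gets mapped to a colour for that pixel.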

Initially I implemented rendering using the fillRect method of the canvas context to draw each pixel as a 1×1 rectangle. This … is … unsurprisingly … slow. There is a lot of overhead in setting up and executing each call (setting fillStyle etc.). This meant, particularly on the faster JS engines, the bottleneck was primarily the rendering.

I love createImageData(). This is a canvas method implemented in Firefox 3+ (and recent Webkit nightlies, I believe) which returns an updateable pixel buffer object. This means we can call it once at the beginning of a render, write RGBA values directly into the buffer (it’s effectively a single-dimensional array, so access is fast) and then simply draw it back to the canvas with putImageData() at the end. Fast!
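
In outline, the approach looks like this (a sketch: shade is a stand-in colouring function, and the flat array mirrors the RGBA layout that the canvas pixel buffer exposes):

```javascript
// Fill a flat RGBA byte array, four bytes per pixel, in the same layout as a
// canvas pixel buffer (ImageData.data). 'shade' is a hypothetical function
// mapping a pixel to a greyscale value.
function fillBuffer(data, width, height, shade) {
    for (var py = 0; py < height; py++) {
        for (var px = 0; px < width; px++) {
            var off = (py * width + px) * 4;
            var v = shade(px, py);
            data[off]     = v;   // red
            data[off + 1] = v;   // green
            data[off + 2] = v;   // blue
            data[off + 3] = 255; // alpha (fully opaque)
        }
    }
    return data;
}

// In the browser, the surrounding calls would be along these lines:
//   var img = ctx.createImageData(width, height);
//   fillBuffer(img.data, width, height, shade);
//   ctx.putImageData(img, 0, 0);
```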

The above is great, but not all of the current canvas-supporting browsers have this method. The spec for it is still somewhat up in the air: there are debates as to quite what it should return, since the number of pixels does not always equal the canvas size (at least, in theory). In effect, this means that Chrome and Opera are both missing this method, leaving fillRect() as the only option.

It seemed a great shame that Chrome, despite having a very fast JS engine, was lagging behind Firefox merely because of the awful fillRect() method of rendering.

After posting a comment to an unrelated article on a Spectrum Emulator on Ajaxian, cromwellian suggested the possibility of constructing DataURLs manually as an alternative.


Using DataURLs is actually crazy enough to work. And, even better, it’s pretty fast. It effectively builds up a bitmap image file on the fly, draws it to an Image object and finally draws the loaded Image to the canvas. The great thing about this technique is that it’s shifting the performance burden from the canvas element to the JavaScript engine. And since the current crop of browsers are getting pretty rapid in this area, it isn’t too much of a performance hit.
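
Stripped to its essence, the trick is just string building (a sketch: the bitmap byte construction itself is elided, and Node’s Buffer stands in for the browser’s btoa() so the snippet can run outside a browser):

```javascript
// Build a data: URL from raw image-file bytes. In the browser you'd base64
// encode with btoa(); Buffer is used here purely for illustration.
function toDataUrl(mime, bytes) {
    return 'data:' + mime + ';base64,' + Buffer.from(bytes).toString('base64');
}

// Browser usage would then be along these lines:
//   var img = new Image();
//   img.onload = function () { ctx.drawImage(img, 0, 0); };
//   img.src = toDataUrl('image/bmp', bmpBytes);
```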

Measurements in Firefox 3.1b showed only around a 40% hit for the DataURL method of rendering compared to writing directly to the pixel buffer. That sounds like a lot, but compared to fillRect() it’s a great improvement.

All three rendering methods are included in JSFractal (the most appropriate is chosen depending on your browser’s features). Note: the fillRect() implementation is never actually used, since the only browser that I know of that would require it is Internet Explorer.


There are a few issues I’d like to address which I’ll go into here.


Colour scheme

Currently there is only one colour scheme available. What’s more, the algorithm behind it isn’t optimal for producing the prettiest fractals. I’d like to improve this; the problem is most evident the deeper into the fractal you go, where colour differentiation diminishes, leading to large blocks of the same colour. Best results come from whacking the quality level up high and setting the size to the maximum.

Changing settings and the timeline

Changing the settings (size, quality, colours) halfway through a fractal exploration will only update the current fractal. The timeline playback will still function, but previous fractal states may be stretched to fit, meaning the animation’s quality may change during playback.

There’s not a lot I can do about this, however, since I’d need to re-render all previous fractals, which would be an unacceptable performance hit. Allowing you to change settings mid-way through and doing its best to compensate is, I think, the best compromise for now.


Over and out

Any feedback, bug reports or ideas for improvement very much appreciated.

Tap Trap – The year of procrastination

Since launching Tap Trap, there has been a steady stream of players stumbling across the game and, in some cases, getting hooked!

I wrote the game a couple of years ago now (mostly as an experiment with JavaScript) and didn’t really plan to promote it particularly but it’s great to see around 15 new players a day having a go!

It recently passed the 500,000th game played – around half of those games were actually completed (a score was submitted). From a quick scribble on the back of a Topman receipt (taking a rough average of 2 minutes per game) that’s about one man-year of procrastination play so far.

Well done to Joe Bloggs 6140 for the current high score of 5,510. The average score is 1981.98 across all the games completed.

The booby prize goes to Joe Bloggs 2681 for achieving the lowest score of just 152. It’s actually quite a feat to achieve such an abysmal score – seriously, try it. It’s not as easy as it looks!

Tap Trap – JavaScript puzzle game

Tap Trap Screenshot

“Tap Trap” is a puzzle game written in JavaScript. It was born after a lengthy battle with an addiction to the puzzle game Same GNOME. As a standard client application, Same GNOME (distributed with the GNOME desktop environment) is great, but I wanted a version that could be played online – complete with public high-score tables and taking advantage of the global accessibility the internet provides.

This is a playable version 1.0.

More information on the Tap Trap project page.

Copyright © 2018 2tap.com
