Smart "Back" Button for WordPress posts/pages

While coding a WordPress site, I wanted to add a back button for posts and pages.
Hierarchical pages should obviously point back to their parents, but what about non-hierarchical pages?

If a user navigated from the "Gallery" page to the "About" page, I'd like the site's "back" button to return them to the "Gallery" page. This could easily be implemented with JavaScript, but why use client-side scripting when you can generate the code on the server side?

The obvious solution is to use the referring URL as the target for our "back" button, while handling two special cases:

  1. The page is a sub-page and the back button should point to its parent.
  2. The previous URL was an off-site URL, and the back button should not appear at all.
This code snippet solves both problems: if the post is non-hierarchical, the referring URL is checked, and if no URL is provided, or if the URL is off-site, no "back" button is displayed.
$ref_url = wp_get_referer();
$ref_parse = parse_url($ref_url);
$my_parse = parse_url(get_permalink());
$show_back_to_parent = false; // did we fall back to the parent page?
if (($ref_url == get_permalink() || empty($ref_url)) && $post->post_parent) {
 $ref_url = get_permalink($post->post_parent);
 $show_back_to_parent = true;
}
if (!empty($ref_url) && (@$ref_parse['host'] == $my_parse['host'] || $show_back_to_parent)) {
 echo $ref_url;
}

When transmission-daemon settings won't update

The transmission-daemon wiki instructs users to edit settings.json at
But you may find that editing that file has no effect on your daemon. In that case, try the following:
1. Stop the daemon (sudo may be required)
/etc/init.d/transmission-daemon stop
2. Edit the settings.json file found at:
3. Restart the daemon
/etc/init.d/transmission-daemon start

If that doesn't work, you may want to look for the settings file here as well:

This solution by elico of the Ubuntu Forums ended hours of frustration caused by useless edits to the settings.json file in my home directory. I hope you find it helpful as well.

How Dropbox saved me from an SSD failure

I've long held that laptops are an inherently unsafe place for your data. Laptops can be stolen or lost, HDDs are prone to damage from frequent transportation, and of course, laptops tend to be used in dangerous places and are consequently prone to drops and liquid spills.

About two months ago I figured I'd remove one risk factor from the equation, and got myself a spankin' new SanDisk SSD (the 60GB 'Ultra' model). Unfortunately, that disk just died without warning this morning.

Since that SSD contained several academic assignments due in mere days, as well as the images from my latest commercial photo shoot, you might expect to find me up a certain creek without a paddle.

Fortunately, I was well prepared for such an occurrence:
All my critical data, including my image catalog, resides within my Dropbox folder. This reduces my risk of data loss, as I can only lose un-synced files.
Since I rarely venture too far from a high-speed internet connection, it's unlikely that I'd lose more than a day's work. In this case, I lost nothing (except possibly some saved StarCraft II replays).

Backup is actually a secondary function of my Dropbox setup; the main perk is immediate synchronization with my desktop. By the time I get home, my lecture notes are already on my desktop. I can spend a weekend away from home, work on my projects, and return to my desktop without ever worrying about merging file revisions.

A convenient smartphone app even allows me to send out links to my Dropbox files on the go. This is very handy for sending out a resume at a moment's notice or submitting a paper at the last minute.

Dropbox only comes with 2GB of free space, which would usually cover any schoolwork you might have, but hardly a photography catalog. You can get more space by referring users to the service (you'll notice my deviously placed referral code in the previous links) or by paying for a 50-100GB plan.

I manage just fine with a free Dropbox account and some referral-awarded space. If you need more, Google Drive, Apple iCloud and *shudder* Microsoft SkyDrive offer similar services. Depending on your needs, you may find one suits you better than the others. Engadget has a nice comparison article up here. Note that Dropbox is the only provider that allows you to earn extra free space.

If you are unfortunate enough to be using a laptop as your main computing platform, you should consider CrashPlan as a backup solution. They offer unlimited backup plans for about $50/yr., which is much cheaper than the above-mentioned services and well worth the price if you can do without folder syncing.

I use CrashPlan to back up my desktop(s), and since my Dropbox folder syncs to my main desktop, it is automatically backed up with CrashPlan as well!

As for SanDisk - I'll be replacing the drive under warranty. I hope the next one lasts longer...

Slim down your bloated StarCraft II installation

Having recently replaced my laptop's spacious 250GB HDD with a measly 60GB SSD, it became hard to justify wasting 10GB of space just so I could play an occasional game.

Opening up my SC2 directory, I found approximately 5GB of files under the "Campaigns" folder, and an extra GB of files under the "Versions" folder.

Having already finished the campaign, I began to wonder if I really needed these extra 6GB of game files...

WARNING! Removing files from your SC2 directory may mess up your installation, erase saved games and replays, and possibly cause severe bodily harm (though the latter is VERY unlikely). Also, note that the Blizzard support staff explicitly stated that you should not delete any game files in this forum post - attempt at your own risk!

That said, I've found that you can delete the two folders under the "Campaigns" directory, as well as all but the latest (highest number) "Base#####" and "Shared####" folders under the "Versions" folder.
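If you'd rather not eyeball version numbers yourself, here's a short Python sketch that lists which "Base"/"Shared" folders are candidates for deletion (the folder-name pattern and the example install path are assumptions - verify against your own install, and review the list before deleting anything):

```python
import os
import re

def stale_version_folders(versions_dir):
    """Return every Base*/Shared* folder except the highest-numbered of each."""
    groups = {}
    for name in os.listdir(versions_dir):
        match = re.fullmatch(r"(Base|Shared)(\d+)", name)
        if match:
            groups.setdefault(match.group(1), []).append((int(match.group(2)), name))
    stale = []
    for entries in groups.values():
        entries.sort()  # ascending by version number
        stale.extend(name for _, name in entries[:-1])  # keep only the newest
    return stale

# Example (assumed default install path - adjust to yours):
# print(stale_version_folders(r"C:\Program Files (x86)\StarCraft II\Versions"))
```

The script only reports folder names; it never deletes anything itself.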

I'm told that deleting the old "Base" and "Shared" folders will prevent you from viewing old replays, so you might want to keep them around, since they don't take up that much space.

With these files removed, my SC2 installation takes up less than 4GB of space, and I can still play multiplayer games without issues.

Disable Dropbox While Running on Battery Power

This AutoHotkey script allows you to run custom commands based on your system's AC power status.
By default, the script will disable Dropbox when the battery is discharging, and restart it once AC power is restored.

The script can easily be modified to perform any task on either "AC Power On" or "AC Power Off" events by modifying the corresponding function. Please note that if your Dropbox daemon is not installed in the default directory, you may have to modify the script appropriately.

I was inspired to write this after reinstalling my laptop, when I suddenly discovered a dramatic improvement in battery life. With Dropbox as a prime suspect, I was looking for a simple way to turn it on and off depending on the laptop's power state. I hope you find this script useful for keeping Dropbox, and any other power-hungry apps you may have, in check.

If you've found any other original uses for the script, I'd love to hear about them in the comments!

Download the AHK Script

Source Code:

;; Close Programs Based On Laptop Power State
;; By Boaz Arad

;;Define Custom commands to be run while switching from battery-
;;power to AC power or vice versa

ACPowerOn() ;Run when plugged in
{
 ;Default Dropbox install dir (assumed path) - modify if necessary
 DropBoxExe = %A_AppData%\Dropbox\bin\Dropbox.exe
 Run %DropBoxExe%
 ;MsgBox "AC Power On"
}

ACPowerOff() ;Run when unplugged
{
 Process, close, Dropbox.exe
 ;MsgBox "AC Power Off"
}

;; Since "On AC Power" and "On AC Power, Charging" are two separate events,
;; "ACPowerOn" may be run twice when the laptop is plugged in and its battery
;; is already charged to over 95% of capacity. This issue can be solved by
;; setting "B_Threshold" (below) to the maximum charge level of your laptop
;; battery. This is necessary since if your battery is at full capacity,
;; plugging in your laptop will not set the battery status to "Charging".
;;
;; Made possible by:
;; - Wrapper to catch Power Management events by TheGood
;; - Shimanov's ReadInteger function
;; - AC/Battery status by antonyb

B_Threshold = 95 ;Maximum charge level reported by your battery

ReadInteger( p_address, p_offset, p_size, p_hex=true )
{
  value = 0
  old_FormatInteger := a_FormatInteger
  if ( p_hex )
    SetFormat, integer, hex
  else
    SetFormat, integer, dec
  loop, %p_size%
    value := value+( *( ( p_address+p_offset )+( a_Index-1 ) ) << ( 8* ( a_Index-1 ) ) )
  SetFormat, integer, %old_FormatInteger%
  return, value
}
OnMessage(536, "OnPBMsg")     ;WM_POWERBROADCAST

OnPBMsg(wParam, lParam, msg, hwnd) {
 If (wParam = 10) {   ;Power state change detected
  VarSetCapacity(powerstatus, 1+1+1+1+4+4)
  success := DllCall("kernel32.dll\GetSystemPowerStatus", "uint", &powerstatus)

  ;Unpack the SYSTEM_POWER_STATUS structure
  acLineStatus := ReadInteger(&powerstatus, 0, 1, false)
  batteryFlag := ReadInteger(&powerstatus, 1, 1, false)
  batteryLifePercent := ReadInteger(&powerstatus, 2, 1, false)
  batteryLifeTime := ReadInteger(&powerstatus, 4, 4, false)
  batteryFullLifeTime := ReadInteger(&powerstatus, 8, 4, false)

  output=AC Status: %acLineStatus%`nBattery Flag: %batteryFlag%`nBattery Life (percent): %batteryLifePercent%`nBattery Life (time): %batteryLifeTime%`nBattery Life (full time): %batteryFullLifeTime%
  ;MsgBox %output% ;Debug - show full powerstatus info

  If (acLineStatus=0)
   ACPowerOff()
  Else ;If (batteryFlag>=8 | batteryLifePercent>%B_Threshold%) ;Plugged in and charging
   ACPowerOn()
 }
 ;Must return True after message is processed
 Return True
}

Download the AHK Script

Retrieving external URLs with PHP without overloading your server

Retrieving external URLs via PHP has already gotten me kicked off a shared server in the past. A handful of visitors may be enough to incur the wrath of your admin. While redesigning my personal homepage, I wanted to scrape my recent running mileage from an Endomondo widget (login required) and embed it into the site text. The simplest way to do this would be to retrieve and process the widget code from the Endomondo site on each page load.
This approach has two drawbacks: first, increased page-load times due to the remote retrieval, and second, increased server load.

To solve this problem, I decided that updating my mileage daily would be sufficient, but I still wanted the update process to be automatic and to take place on the server.

To accomplish this, I wrote the following code. The concept is simple: scraped data is held in a file on the server; on each page load, the modification time of the file is checked, and if it is over a day old, new data is scraped and written to the file.

This way, remote fetching and its associated slow-down and server load happen at most once a day.

The code is pretty much self-explanatory, but I'll discuss some of its finer points below.

function howfar() {
 $rep = "many";
 if (!($mfile = @fopen('mileage.txt','r'))) {
  $latest_mod = 0;
 } else {
  $latest_mod = filemtime('mileage.txt');
  fclose($mfile);
 }

 $today = date('U');

 if ($today - $latest_mod > 86400) {
  //echo "OLD!";
  $mfile = fopen('mileage.txt','w+');
  $endo_url = "";
  $endomondo = file_get_contents($endo_url);
  $loc = strpos($endomondo,'Distance:');
  $loc2 = strpos($endomondo,'</span>',$loc);
  // extract the value between the 'Distance:' label and the closing tag
  $distance = trim(substr($endomondo, $loc + 9, $loc2 - ($loc + 9)));
  fwrite($mfile, $distance);
  fclose($mfile);
 } else {
  $mfile = fopen('mileage.txt','r');
  $distance = trim(fgets($mfile));
  fclose($mfile);
 }
 if (is_numeric($distance)) {
  $rep = $distance;
 }
 return $rep;
}

First, we attempt to open the 'mileage.txt' file; the @ suppresses any error messages we don't want the user to see. If the file doesn't exist, $latest_mod will be set to zero, thus forcing an update.
If the file is readable, $latest_mod is set to its last modification time. The times are then compared, and if the difference is greater than 86400 (one day in seconds), new data is fetched.
Otherwise, the file is opened and the old data is read.
You could easily modify the code to fetch nearly any form of data and store it locally; the minimum "freshness" of the data is also easily tweakable.
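For reuse outside PHP, the same cache-or-refresh pattern can be sketched in Python (the file path and the fetch function here are placeholders, not part of the original code):

```python
import os
import time

def cached_fetch(path, fetch, max_age=86400):
    """Return cached contents of `path`, refreshing via `fetch()` if stale."""
    try:
        age = time.time() - os.path.getmtime(path)
    except OSError:          # missing file forces a refresh, like $latest_mod = 0
        age = max_age + 1
    if age > max_age:
        data = fetch()       # remote retrieval happens at most once per max_age
        with open(path, "w") as f:
            f.write(data)
        return data
    with open(path) as f:
        return f.read()
```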

I hope you find this snippet useful, and I'd love to hear about it in the comments if you did.

Leaving Flickr - How to download all your photos

CC-BY: Dazzie D
Your "Pro" membership might be expiring, you may be nearing the 200-photo limit for free accounts, or recurring rumors of a Yahoo!-Microsoft merger might be prompting you to consider alternative services. Either way, odds are that Flickr has all your precious photos, and you want them back! This is how I did it:

First, I'll warn you that this guide may get a bit technical, as it includes a little command-line work; if you're comfortable with that, read on.

The backstory
I've been using Flickr for almost four years, and shortly after joining, I decided to upgrade to Pro status so my account could double as an image backup solution. Since then, I got my first SLR, and by 2009 I was shooting only RAW. Flickr's poor RAW support, plus the fact that restoring anything larger than one photo set from Flickr would not be a simple matter, prompted me to find an alternative backup solution.
Today, I back up my whole photo library to both a local mirror drive and CrashPlan+ - a remote online backup service.
Once I no longer needed a Flickr Pro account for backup purposes, $25 a year became rather pricey for the questionable privilege of "Access to your original files" and "Unlimited sets and collections" - seriously, Flickr, it's my data! Let me have it back!

All that remained to be done before I could let my Pro account expire was to download my photostream, and make sure that I indeed had local copies of all my Flickr photos.

Downloading the photostream
CC-BY-SA: rbrwr
There are quite a few programs out there that can help you download full-resolution images from your Flickr account; to name a few: FlickrEdit, Downloadr, the for-pay Bulkr, and the now-banned Flickrdown.
If you only have about 1,000 photos to back up, FlickrEdit is your best choice; it has a user-friendly graphic interface and works rather well.

If, on the other hand, you have thousands of photos to download (over 33,000 in my case), your best bet is FlickrTouchr. This Python script will log on to your Flickr account and download all your images into folders based on your Flickr sets!
The original script was set to download "Large" size images rather than originals, and seemed a bit slow on my connection, so I modified it to download originals and... (drumroll) ... use concurrent threaded downloads!

You can download my modified version - FlickrTouchrThreaded. To use it, you'll need Python installed. If you're using Linux, you probably know how to do this yourself; Mac users should already have Python installed by default, and Windows users can find installation instructions here.
Once you have Python installed, backing up your entire photostream is as easy as typing:
python FlickrTouchrThreaded.py FlickrBackupFolder 3
where "FlickrBackupFolder" is the name of the folder you want to back up to, followed by the number of download threads you want to run (in this case, three). This will begin downloading your whole photostream to your selected backup folder, with subfolders based on your Flickr sets. Images not in sets will be downloaded to a "No Set" folder.

In my case, I found that using more than 3 threads didn't improve my overall download speed. You can play around with the number of threads the script uses, but I wouldn't increase it over 10, so Flickr doesn't get too mad at you :)

The first time you run it, it will send you to the Flickr website and request authorization for your Flickr account; subsequent runs will start downloading immediately. You'll also note that if the script was interrupted, you can just re-run it and it will pick up where it left off - just be sure to point it to the same folder.
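The resume behaviour boils down to a skip-if-present check before each download. A minimal sketch of the idea (the function and file-naming scheme here are illustrative assumptions, not FlickrTouchr's actual code):

```python
import os

def download_if_missing(photo_id, dest_dir, fetch):
    """Download a photo only if it isn't already in dest_dir (enables resume)."""
    dest = os.path.join(dest_dir, "%s.jpg" % photo_id)
    if os.path.exists(dest):
        return False                 # already downloaded on a previous run
    data = fetch(photo_id)           # e.g. an HTTP GET of the original image
    with open(dest, "wb") as f:
        f.write(data)
    return True
```

Because completed files are skipped, an interrupted run costs you nothing but the partially written last file.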

A few hours later (or in my case, one week, 33,640 photos and about 80GB later) you'll have your entire Flickr photostream backed up.

Finding Corrupt Images
Photo: Boaz Arad
Once you've got your whole photostream backed up, you'll want to be sure that all your images downloaded correctly. FlickrTouchr does a rather good job and will rarely corrupt any images in transit, but even one corrupt photo out of thousands could really ruin your day - especially if it happens to be your best wedding picture, or the only group photo of your speedskating group (see left).

To scan for corrupt JPG images, I used cPicture.
cPicture is a commercial product ($19.99 per license), but the freely available trial will allow you to scan for corrupt images without limitation. Just pick a folder and click "Check Pictures". After churning through your photos, it will output a text file with a list of corrupt images.
If you used FlickrTouchr to download your photostream, the filenames of the images should be their Flickr photo-id's - so you can easily find them online by browsing to
If anyone can recommend a similar utility for Mac and Linux users, I'd love to hear from you in the comments.
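In the meantime, a rough cross-platform stand-in is a script that checks each file for the JPEG start/end markers - this only catches truncated files, not every form of corruption, so treat it as a first pass rather than a cPicture replacement:

```python
import os

def looks_truncated(path):
    """Heuristic: a JPEG should start with FFD8 and end with FFD9."""
    with open(path, "rb") as f:
        start = f.read(2)
        f.seek(-2, os.SEEK_END)
        end = f.read(2)
    return not (start == b"\xff\xd8" and end == b"\xff\xd9")

def scan_folder(folder):
    """Return paths of .jpg/.jpeg files that fail the marker check."""
    bad = []
    for root, _, files in os.walk(folder):
        for name in files:
            if name.lower().endswith((".jpg", ".jpeg")):
                path = os.path.join(root, name)
                if looks_truncated(path):
                    bad.append(path)
    return bad
```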

Finding duplicate photos
Photo: Boaz Arad
Some of the downloaded photos from your photostream will no doubt be duplicates of photos you already have on your hard drive. To prevent wasted space, you should scan the downloaded photos for duplicates. I recommend MindGem's Fast Duplicate File Finder (FDFF) - again, a commercial product, but the trial version likely contains all the features you'll need.
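If you'd rather not install anything, the core of such a tool is just hashing file contents and grouping identical digests. A minimal sketch (folder layout is up to you; this compares exact byte-for-byte duplicates only, not visually similar images):

```python
import hashlib
import os

def find_duplicates(folder):
    """Group files in `folder` (recursively) by the SHA-1 of their contents."""
    by_hash = {}
    for root, _, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            by_hash.setdefault(digest, []).append(path)
    # only digests shared by two or more files are duplicates
    return [paths for paths in by_hash.values() if len(paths) > 1]
```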

Once you've weeded out corrupt and duplicate images, it's time to merge the backup back into your local photo library. Both Google's (free) Picasa and Adobe's (pricy!) Lightroom do a great job of keeping your pictures in order, and I recommend using one or both regularly.
It's also good practice to always keep local copies of all your images, and never rely on online services (Flickr, Facebook, etc.) as your only photo-storage solution, as these can be notoriously unreliable.