The PC is Dead. Feds Question Data-centre.

20 05 2011

There’s a lot going on in the world of IT at the moment. An explosion of internet-enabled devices has been pushing the limits of our networks, driving the need for more address space (IPv6), substantial bandwidth upgrades, and changes to the very architecture of the internet.

The address space issue has been evident, and its impact clear, for quite some time now, but what’s really interesting about what’s happening at the moment is the shift we’re seeing from the traditional, inherently open internet architecture and x86-based PCs, to the more rationalised and corporate-owned nature of ‘cloud’ computing and mobile (mostly ARM-based) devices. Now I’m not going to get into which side of the net neutrality debate I sit on; rather, I want to examine some of the causes and possible outcomes of this shift.

Image borrowed from serverrack2.com

First of all, I think it’s important to understand what’s actually happening with the content available on the internet. If you consider content not just to be media, but also any software available as a service (after all, the data that drives software is by definition content), it is evident that a growing proportion of the content we consume is being delivered by an ever smaller number of market leaders in their respective areas. This is being enabled by these companies building out massive data centres around the globe, and by the use of Content Delivery Networks. This is, in effect, creating ‘channels’ through which these companies can get content to users much more quickly. Those same channels, however, have the potential to start carving up the internet based upon content type.

So what’s driving this? Many views are available for discussion in this arena, and heated debates are frequent. From my point of view as both a consumer and an IT professional, I can see a few things going on. If you think about the history of computing, up until quite recently, there were two things that dominated all aspects of it – x86 hardware and Microsoft software. For a long time, Microsoft owned virtually the whole ecosystem, and squashed any potential competition attempting to enter the race. This left only one (now fairly obvious) option. Build your own ecosystem.

The traditional ecosystem at this point was based around the desktop PC, with the majority of content being generated locally and fairly slow connectivity to the internet. Apple were the first to see the potential to create a brand-new ecosystem of their own when they launched iTunes and the iPod. This move leveraged people’s desire for fresh content and their love of slick-looking gadgets in one single master stroke, and set the ball rolling for everyone else.

On the open and neutral internet, where natural human ideology favours the little guy and frowns upon the corporation, a ball rolling is the wrong analogy. Flood gates would be better. And what spawned was not just new ecosystems based around music or video, or other traditional media, but also new ways of delivering rich software experiences without ever needing to install software on the local PC. Now that these new disruptive models are beginning to settle in (under the very ambiguous term of ‘cloud’), we’re also seeing a convergence of faster networking, both mobile and fixed line, and a plethora of consumer devices that don’t use x86 hardware or Microsoft software.

So where is this all heading, you ask? I don’t claim to know the future, but there are some distinct elements of all this that I can have a pretty good guess at. One thing I take issue with is the constant talk of Microsoft being in trouble. Challenging times ahead? No doubt. But trouble? That implies they’re on a fast track to bankruptcy, and I’m sorry, but that just isn’t going to happen. Microsoft has its fingers in so many pies (Xbox alone made it ~$600m profit in a single quarter), and historically it has always been about acquiring and improving other people’s ideas and strategies (look at the recent Skype acquisition). They’ve just released Windows Phone 7 (which is remarkably good), there’s talk of Windows 8 supporting ARM-based devices, and they are working hard towards making all of their software work well in ‘the cloud’. They may never be the unstoppable behemoth they once were, but they’ll be here for a long time yet.

Another element is the importance of mobile. It’s very important. The usage statistics available suggest that mobile traffic now accounts for 5% of all internet usage (up from 4% at the end of last year), and websites like Twitter report that 50% of their total traffic is mobile. We’re not just talking about mobile phones here though; we’re talking, essentially, about non-x86 devices. Devices that aren’t running traditional Windows/OSX/Linux. We’re talking about iOS/Android/WP7, and any other new upstart OS that’s slick enough to grab market share in this new space.

But let’s take a step back for a moment. What are we really talking about? I’ve heard a lot of talk about the death of the desktop as a result of all this. But what do we mean by desktop? To me, the desktop is a form factor, not a CPU/OS combination. The use cases for desktop and mobile are very different. I don’t want to play a game or manipulate a massive spreadsheet on a phone or a tablet. The screen is too small and the input method is inefficient. The desktop is by no means dead, but we may see the number of people using it shrink, as they realise that their particular use case does not require the big screen/mouse/keyboard combination.

So if the desktop itself is not dead, what about the architecture delivering that distinctly desktop experience? That’s a much harder thing to predict, and it depends greatly upon how successful ‘the cloud’ ultimately becomes. If services such as OnLive and Google Docs can really prove to be as good running in a data centre as having the software installed locally on your machine, then the relevance of Moore’s Law and software compatibility will move away from the desktop, and we will be able to run whatever CPU/OS combination we want and still all consume the same content and services.

That is, of course, the technological vision driving cloud computing: platform agnosticism and a unified experience across systems. We’re some way from that goal yet, and I suspect there will be a lot of painful lessons to be learned along the way, but it’s a goal worth heading towards nonetheless.

One last thought springs to mind. Let’s not forget the other thing driving change right now. Control. The old ways of computing typically put control in the hands of the end user. Everything I’ve seen in the last few years in cloud computing and mobile is about trying to move that control back towards the vendor. To my mind, this isn’t necessarily a good or a bad thing; indeed, many people never wanted control in the first place. It is cause for caution though. If we hand too much power over to any particular vendor we may find ourselves back in Windows95land, and who wants that? Not I.

** UPDATE – I came across this article at CNET today that really illustrates my point about the general failure of many commentators to differentiate between system architectures and form factors. The comments section is also quite interesting. **





Windows Beautification

18 05 2011

I think it’s widely agreed that Windows 7 is the prettiest version of Windows to date. Seriously, I don’t think there was much that Microsoft could have done better. However, it’s now been over 18 months since it was released to retail outlets (and somewhat longer for those of us that have been using it since beta/RC), and just like any environment you spend several hours a day looking at, it will invariably start to seem more bland than it really is.

Windows has support for themes, but by default your choices are limited to changing the tint of the transparency, changing the wallpaper, or reverting back to old-skool Windows styles. YAWN. Time for a bit of hackery-pokery I think. Luckily, this particular bit of hackery-pokery is easy as π.

The first thing you need to do is go to DeviantArt and search for the Elune theme by *minhtrimatrix. Download the .RAR file (8.8 MB), and extract the contents to a folder somewhere convenient.

At this point you will have four sub-folders. The first folder you need to go into is the ‘tools’ folder. It’s up to you exactly which of these tools you use, depending on your level of knowledge and whether or not you want to change the Start Orb, but I actually only used the UniversalThemePatcher tool. The following steps worked for me:

1. Run the Theme Patcher for your OS type (mine is x64), and you will see some buttons to allow you to patch the relevant files. Simply click the ‘patch’ button next to each file that is shown as requiring the patch.

2. Now navigate to the ‘themes’ folder that comes with Elune, and copy the contents of that folder (including all the sub-folders) to ‘C:\Windows\Resources\Themes’.

3. Browse to ‘C:\Windows\System32’. If you’re familiar with NTFS permissions, just make sure you have rights to modify ‘explorerframe.dll’ and then rename it to a backup file. If you’re not familiar with NTFS permissions, use the ‘take ownership’ registry mod to add a simple ‘Take Ownership’ option to your right-click menu.

4. You could at this point use the ‘Orb Changer’ tool to change the appearance of the start orb, but I actually prefer it the way it is, so I left that bit.

5. Restart Windows, and then go into the desktop personalisation control applet. You should now see a bunch of Elune themes which you can just select to apply!

So there you have it. A lovely new lease of life for your Windows 7 desktop to tide you over until Windows 8. Just make sure that if you’re going to the trouble of using this theme you have a read of the pimptastic advice I gave a while back, and you should have no complaints ;)





Microsoft’s Audio Antagonism

30 01 2011

Got a PC? Got an HDTV? Why not link the two together, and start enjoying the flexibility of having all your media, internet streams, games, etc. available on your HDTV? There are loads of ways to achieve this, but I opted to use a single HDMI lead for audio and video, which to my mind is the most sensible option, if you have it available to you.

This short photo guide gives you a good idea of what’s involved to get the hardware set up right. Once you’ve got all your components hooked up, you just need to go into your GPU control applet and set up HDTV settings for resolution, position, desktop scaling, etc.

Now… Audio. That shouldn’t be too hard right? Well, here’s how that went for me.

Hmmm… I’ve got sound coming out of my PC speakers, but there’s no sound going to the HDTV. Let’s look in the GPU control applet again. Aha! I have Nvidia, so there’s an option to select ‘HDMI – HDTV (audio enabled)’. Hmmm… Still no sound from the HDTV. Time to look in the Windows audio properties. Sure enough, there’s another audio device in there for SPDIF output. With that selected, I now get lovely sound out of my HDTV! Hurrah! Oh… Wait a minute… Now there’s nothing coming out of my PC speakers, and there doesn’t seem to be any way to have both analogue and digital output simultaneously!

That’s not really very convenient. I want to be able to stream audio out of both sets of speakers at once, so that moving from where my PC is to where my HDTV is doesn’t require me to fiddle about in the audio settings every time. My first Googling of the problem turned up Microsoft’s position that the issue lies with limitations of soundcard architectures. This didn’t seem feasible to me, so I rebooted my system into Linux (surprised much? :)). Oh listen… Linux was outputting audio to both the HDTV and my PC speakers at once.

I decided out of curiosity to reboot into Windows XP. Lo and behold, XP outputs to both devices simultaneously. I rebooted back into Windows 7 and got down to some serious Googling. It seems that there are actually a LOT of people out there who are really rather miffed about this.

So what’s going on? The short answer (well, short-ish) is that starting with Vista, Microsoft re-architected their audio subsystem, implementing the amusingly acronym-ed WASAPI layer, which, to be fair, included a huge raft of improvements (including the nifty ability to control the volume of individual applications). When implementing these APIs, Microsoft also introduced a process whereby hardware vendors are encouraged to certify their drivers. Part of the certification process is meant to ensure that the drivers do not violate any of the interface rules. Fair enough, I’m all for anything that improves compatibility.

However, in this particular case something is amiss. If you set some music playing, go into Windows’ audio settings, and set your analogue device as default, but right-click on the digital device and click ‘test’, you will hear music coming from the analogue speakers, and a test sound coming from the HDTV. This proves that there is no hardware or driver limitation to streaming audio to multiple devices simultaneously. This functionality is actively being disallowed by Windows for reasons that I am still unable to fathom.

I won’t go on about how long it took me to find, but if you’re lucky enough to have a Realtek soundcard (as many onboard ones are), there is a workaround available, courtesy of this forum thread! Download the file ‘rtkhdaud.zip’, and extract the contained ‘rtkhdaud.dat’ file to your ‘C:\Windows\System32\Drivers’ folder, then simply reboot. I have now had this workaround in place for a couple of days and have had no problems at all with it. If I do experience any stability or compatibility issues, I will update this post with the details.

In the meantime, I would also like to draw your attention to this site, where it would be lovely if you could support the case to have this issue addressed by Microsoft.

**UPDATE** : I recently updated my Linux partition to Ubuntu 11.04 and was aghast to find that Canonical have implemented the same insanity as Microsoft. If you go into the Sound Preferences, you are only able to select one output device at a time. Luckily, this being Linux, there’s a quick fix that isn’t even a hack. Simply open a terminal and install “paprefs” (PulseAudio Preferences):

sudo apt-get install paprefs

Next run paprefs:

paprefs

Click on the Simultaneous Output tab, and enable “Add virtual output device for simultaneous output on all local sound cards”, then click Close. Finally, return to the Sound Preferences, and in the Output tab, select the newly available “Simultaneous output to…….” device. That’s it. Job done! Now if only Microsoft could produce a fix that worked so simply.
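As an aside, the same result can be had from the command line without paprefs, because paprefs essentially just loads PulseAudio’s combine module for you. A minimal sketch, assuming the PulseAudio that ships with Ubuntu 11.04 (where the module is still called module-combine; later releases rename it module-combine-sink), and bear in mind it only lasts until the next reboot unless you add the load line to /etc/pulse/default.pa:

# Create a virtual sink (named "combined" by default) that mirrors audio to every local sound card.
pactl load-module module-combine

# Optionally make the combined sink the default output device.
pacmd set-default-sink combined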

Enjoy :)

**UPDATE 2** : I recently updated my graphics card to a GTX460, which has its own sound hardware integrated onto the card. This means that the Realtek hack this post is about no longer works, as the audio device is now an Nvidia device. I am unable to find another fix for the problem in Windows at this time (the Linux fix in the UPDATE above still works), so in the meantime I am using a quick switcher tool that someone has written, with a shortcut pinned to my taskbar. The tool in question can be found here:

http://blog.contriving.net/2009/05/04/a-hotkey-to-switch-between-headphones-and-speakers-soundswitch/





New methods vs. legacy stacks

17 11 2010

Software has two sides to it, the front and the back (sometimes there are other facets, but that’s another story). The front is what it looks like and the functionality it provides; the back is the stuff it’s built from and the tools that IT staff use to keep it going. When a large company purchases software they expect it to last a long time… a really long time… like ten years. Because of this, it’s quite normal to find software out there that’s been tweaked, patched, skinned and wrapped to appear on the front end as recent. That is, it looks and behaves like something that was developed within the last two years. Normally, though, the guts of the software and the back-end tools don’t get brought up to date in the same way, because the cost/benefit case for doing so is deemed marginal. The view from the business is generally that the IT staff can cope without it, and although it might take them longer to perform maintenance and upgrades, spending £100k on changes that the end user will never see just doesn’t stack up (whether this is actually true is up for debate, but it’s generally the way things are viewed in the real world).

I recently came up against a prime example of how much of an issue this kind of situation can be. Where I’m working, we have a business-critical system that’s based on a flavour of DOS but still receives regular updates, which have recently started failing to apply in seemingly random situations. The issue with the software is that it is deployed in 300 distinct locations, and should be set up in one of four specific configurations. In reality, over the years the configuration between sites has diverged due to the manual nature of supporting and updating it. With modern software you get tools for ensuring consistent software deployment and DR from a central location. With a system that’s built on DOS, such tools do not exist, DRs require an engineer to visit with a build CD, and the scripting language available is BATCH, which is… limited.

I have a problem here that requires some lateral thinking. We’re not about to go replacing the system wholesale because of this, and sending out an engineer to rebuild the store every time we have some software that does not apply correctly is going to get very costly. Enter Linux, and more specifically, the brilliant P.I.N.G (Partimage Is Not Ghost) project. In essence PING is a bootable CD with a BASH environment, Partimage disk imaging, and networking capabilities built into it. The beauty of this from my perspective is that it allows me to do a couple of key things that I simply cannot do natively with the system in question:

- Connect to the FAT16 hard drive of the DOS based systems, and manipulate the files using powerful Linux commands.
- Pull disk images and configuration scripts down from a central server.

To do this, some customisation of the boot image is required. I’m not going to go into masses of detail about this, as getting the process working took me about two weeks, but I will impart some of the key things I learnt.

Firstly though, let me explain the build process at a high level:

1. Place the PING CD into system to be rebuilt and reboot.
2. PING script is passed a string of commands to process:

CMD_1 – prior to configuring the network settings:
a) checks the local HDD, and if it finds the file containing the IP settings, will assume them, and write those values out to a file on the transient Linux file system to be used later.
b) if no IP settings are found, prompts the user to input them, and writes those values out to a file on the transient Linux file system to be used later.
c) specifies the network share to connect to and the disk image to retrieve and apply to the local HDD.
CMD_2 – after configuring the network but before pulling down the disk image:
a) simply copies the custom configuration script from the network share to the transient Linux file system.
CMD_3 – after the image has been applied to the local HDD:
a) runs the custom configuration script copied over from CMD_2.

3. The commands above are read into the PING environment variables and processed at the relevant times, allowing PING to connect to the server, and retrieve and apply the image.
4. The PING script’s last action is to call the custom configuration script, which primarily connects to the freshly built local HDD and re-inserts the IP settings. As time goes by, I can use this script to automate most, if not all, of the configuration required, based on minimal user input such as the location’s shortcode. This kind of automation is what BASH scripting was made for.
5. Finally the custom script prompts the user to remove the CD and press enter to reboot.
6. System reboots, and is now back on the network with the original IP address.
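To make that flow a little more concrete, here is roughly what the CMD_1 stage boils down to as shell commands. This is a simplified sketch rather than the literal contents of my isolinux.cfg (the device name is illustrative, and I’ve left out the PING-specific settings that name the network share and image):

# Mount the FAT16 system disk (the device name will vary).
mkdir -p /mnt/hdd
mount -t msdos /dev/sda1 /mnt/hdd

# If the old IP settings exist on disk, capture them; otherwise ask the user.
[ -f /mnt/hdd/dir1/ip.bat ] && OLDIP="$(grep lan0 /mnt/hdd/dir1/ip.bat | awk '{print $3}')"
case "$OLDIP" in
  '') echo "Please enter the IP address you want to use and press enter"; read OLDIP ;;
esac

# Stash the value on the transient Linux filesystem for the post-image script.
echo "IP $OLDIP" > /IP.TXT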

This all sounds fairly straightforward, right? Well, yes, but only once you’ve worked out how the whole process is going to work, and put together all the commands to make it happen. Let me share with you some of the key files and commands required to make it work.

First off you will need to download PING. The version I used was 3.00, which can be found here. You will need to extract the contents of the ISO and then customise the following files:

- boot.msg – This is just a message that is displayed at initial boot time. I used it purely for revision control.
- isolinux.cfg – This is the file that you specify all your custom commands in CMD_1,2,3, as well as the PING specific settings. The documentation and forum support for this is pretty good on the PING website.
- logo.16 –  This is the logo image displayed at initial boot time. I changed this to our company logo.

Now for the commands. Well, there are a lot, so I’m not going to bore you with the dull ones, only the ones that caused me headaches (descriptions appear below each command segment):

sudo mkisofs -D -r -cache-inodes -J -l -b isolinux.bin -no-emul-boot -boot-load-size 4 -boot-info-table -o ../Custom_PING.iso .

This command isn’t actually associated with the imaging process itself, but when you rebuild the ISO you’ve tinkered with, you need to use it; otherwise, I found, the ISO no longer boots correctly.
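For completeness, the ‘extract the contents of the ISO’ step mentioned earlier can be done by loop-mounting the original image. This is just one way of doing it, and the filenames here are illustrative:

# Loop-mount the downloaded ISO read-only and copy its contents somewhere writable.
mkdir -p /mnt/iso ping-custom
sudo mount -o loop -r PING-3.00.iso /mnt/iso
cp -a /mnt/iso/. ping-custom/
sudo umount /mnt/iso

The trailing ‘.’ on the mkisofs command above means ‘the current directory’, so run it from inside the folder you copied the files into.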

lss16toppm <logo.16 >logo16.ppm
ppmtolss16 <logo16.ppm >logo.16

These two commands are simply for converting the logo.16 file into a format that allows you to edit it using the GIMP, and then back again once you’ve finished editing. Bear in mind that the file needs to be created with only 16 colours.

mount -t msdos /dev/?da1 /mnt/hdd

This is a really simple mount command, but if you don’t specify the filesystem type (‘-t msdos’ in my case), then when you manipulate the files on the disk, the DOS-based software can no longer read them when you reboot. This is because, by default, Linux will mount the disk as VFAT instead of FAT16.
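If you want to double-check which filesystem the kernel actually used, a quick look at the mount table will tell you:

mount | grep /mnt/hdd    # should report "type msdos", not "type vfat"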

[ -f /mnt/hdd/dir1/ip.bat ] && OLDIP="$( (grep lan0 ip.bat | awk '{print $3}') 2>&1 )"

This command checks for the existence of the file ip.bat and, if it’s found, greps the line containing ‘lan0’ and pipes it to awk, which prints the required detail from the line (field number 3 in this case); the result is then captured in the variable $OLDIP.
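The exact ip.bat format isn’t important here, but purely as an illustration of the field extraction (the line below is made up; the only assumption is that the third field is the address):

# Hypothetical ip.bat line; awk prints the third whitespace-separated field.
echo "lan0 ip 192.168.1.10" | awk '{print $3}'    # prints 192.168.1.10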

{ case "$OLDIP" in '') echo "Please enter the IP address you want to use and press enter"; read OLDIP ;; esac; }

The ‘case’ command checks if the environment variable is set, and if not, prompts the user for the value to set it to, which ‘read’ then applies to the variable name.

Next, here is the part of the process which gave me the most trouble. In the custom configuration script, which is stored and maintained on a server, I need to retrieve the earlier captured IP address, and replace the one in the image with it:

export OLDIP="$(grep IP /IP.TXT | awk '{print $2}')"
export IMAGEIP="$(grep lan0 ip.bat | awk '{print $3}')"

These commands are similar to the one used to obtain the initial IP address before the imaging process, but because they’re being used to set variables within this script, which will replace other variables also set in this script, I needed to export the results of each command, otherwise the sed command used for string replacement would get horribly confused.

Finally, the sed command is used for string replacement, and as you can see, I actually use it to replace multiple IMAGE variables with multiple OLD variables that have all been set by using the commands previously shown.

sed -i "s=$IMAGEIP=$OLDIP=;s=$IMAGEROUTE=$OLDROUTE=;s=$IMAGENETMASK=$OLDNETMASK=" ip.bat

What I now have is the foundation for a new DR process that I’m hoping will solve a lot of the consistency problems we’re having out in the field, and save a notable sum of money by not having to send out engineers to perform simple system rebuilds. The way it works also means that any updates to the disk image or configuration script can occur centrally, and the build CDs need not be revision controlled. So there you have it: a way of overcoming the limitations of the DOS-based system by circumventing them with Linux. If I can do it, you can too ;)

 

**Addendum – I’ve been asked to provide my method for pausing and asking the user to ‘press any key to reboot’. I simply added the following 3 lines to the bottom of my script:

echo "Please remove the disc from the tray then press any key to reboot."
read -p \$
shutdown -r now

Enjoy! :)





Final running update

14 07 2010

Despite what the title of this post might lead you to believe, this is not some sort of declaration of impending doom on my part, nor is it confirmation that I have given up running for the joys of cake. It is merely my realisation that running has now just become part of my weekly routine, and to keep updating you about it would be as exciting as keeping you informed of my shoe polishing habits.

Having already dismissed this post as being almost entirely superfluous, I’d better try and keep it short. The good news is that the advice I followed (as I mentioned in the previous running update) has worked brilliantly. I have in the last week completed two 3-mile runs, and I am no longer suffering from shin splints!

Some more good news is that we have also enlisted another friend into the exercise regime, which apart from increasing the banter, has the bonus side effect of upping the competitiveness slightly, which in turn boosts everyone’s performance :) It’s obviously impossible to comment on the longevity of such an arrangement, but as long as I continue to include running in my weekly routine, these short term increases in effort should result in long term benefits.

So there you have it. I have now reached the point where my physical conditioning is no longer woeful, and I am optimistic that I can continue to improve over the coming months and years. It has been a little over a month since I went out for the first run, and yet due to enduring a fair amount of pain, and having to research how the body works and regenerates to improve my performance, I feel it has been quite a journey. If I had to impart only one piece of advice from my experience it would be this – just persevere, you can run further than you think.

Next stop, 10k charity run. But that’s another story ;)





Backup a little

6 07 2010

There’s a saying in the IT industry – ‘data is king’. It’s a completely stupid saying, but the message is reasonable: you can have the most elegantly designed solution in the universe, but without any meaningful data inside it, it’s all but useless. This principle remains the same for all computer systems, whether they be large-scale enterprise solutions or home desktop PCs, although in reality there are other factors, such as sentimentality, that make home users’ data so valuable.

It goes without saying then that backups are important, and it’s the reason there are such robust backup solutions and high availability architectures available in the enterprise market. But what about home users? Our data is important too, and if you happen to be the sort of person that creates a lot of it, it can be a problem. It’s not like a home user can set up a data mirror between geographically distinct locations, or have a robotic tape backup solution installed in their garage (although it’d be incredibly cool if you could).

So what should you do? There are a myriad of solutions available for the home user, but in my experience, there is no one solution that solves all requirements. For a start, my wife is a photographer, which means she can generate an awful lot of data in a single afternoon! Even the best software on the market isn’t going to address that issue, so let’s back up a little. First of all, you need to look at your data and ask yourself some questions about it:

- Does all this data need backing up? Perhaps some of it is replaceable. I used to have a tendency to keep a copy of installers for useful software, but over the years, too many useful programs and a huge increase in internet bandwidth have meant that this habit has become rather pointless, and I now just download everything at the time I need it.

- Which of this data can be archived? You may have data that you cannot bring yourself to delete, but really don’t need access to except in very rare instances. To include it in your standard backup routine is just going to make for very long waits when migrating data to a new or reinstalled system, and possibly cause you to run low on space.

- Which of this data is absolutely irreplaceable? The sort of data you’d literally cry over if you lost it. We have two kids, one two and a half, and the other only a few months old, and already we have thousands of photos and videos totalling gigabytes of data. This level of data accumulation isn’t really sustainable, and we’re going to have to promote a fraction of those files from the standard backup routine into a more resilient one.

Once you know the answers to those questions, you can structure your data into folders in a way that represents the decisions you’ve made. That sounds like an easy task, but what if you have multiple computers in your home? We have two laptops and a desktop, so we also needed to make some decisions about how to keep the data in sync.

For my wife’s laptop, that was relatively easy, as she tends to just use it as temporary storage for when she can’t use the desktop, moving her data across when she does have access to it. My needs were a little more complicated, though, because I need access to various files and tools while I’m out and about, but I prefer to work on my desktop when I’m at home. For this reason, I decided to buy some software called GoodSync, which keeps all the data between the desktop and my laptop perfectly in sync. I could have elected to exclude my wife’s data from this sync by placing it in a different location, or by setting up some exclusion rules, but for us, the simplest solution was to just sync it all.
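As an aside, GoodSync is a paid-for Windows tool; anyone wanting to do something roughly similar from a Linux shell for free could sketch a one-way mirror with rsync. This isn’t what I actually run, and the paths and hostname below are made up:

# Mirror the laptop's data folder onto the desktop over SSH.
# --delete removes destination files that no longer exist on the source, so check with --dry-run first.
rsync -avh --delete --dry-run /home/me/Data/ desktop:/home/me/Data/

Re-run it without --dry-run once you’re happy with what it says it will do.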

With that issue solved, as a happy side effect I’ve actually also created a backup. Whenever I sync my laptop to my desktop, I now have two identical copies of all of our data. However, this doesn’t solve the issue of data accumulation over the years. This is where the data archiving comes in. Every so often, we select which files we want to archive (most of it being photos and videos), and burn them off to DVDs. In an ideal world you would create two copies of the DVD, keeping one set at home and another set at a friend’s house, so that if your house burnt down, you’d still have a copy of the archived data. In reality, however, I’ll leave you to work out how practical that is, and how well we follow that ideal.
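Incidentally, if the burning is done from a Linux box, that archive step can be scripted as well. A minimal sketch, assuming growisofs (from the dvd+rw-tools package) is installed, the burner is /dev/dvd, and using a made-up folder name:

# Burn a Rock Ridge/Joliet data DVD straight from a directory tree.
growisofs -Z /dev/dvd -r -J /home/me/Archive/2010-photos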

Finally we come to the data that we really can’t afford to lose, even if there’s a fire and all of our hardware and media is destroyed. The only real practical solution for this is the oh-so horribly termed ‘cloud storage’, which simply means ‘someone else’s server‘. There are a couple of constraints with such a solution though. Generally home broadband has very good download speeds, but not such good upload speeds, which means you probably wouldn’t want to be uploading more than a few gigabytes of data, and you certainly wouldn’t want to try and do it all in one go; it would simply take too long. The good news is that there’s another piece of bad news. The other constraint around cloud storage is that the free accounts are generally limited to 2 GB of storage anyway.

So, given that we’re archiving a lot of our semi-precious data away onto DVDs, and for the rest we have a standard backup routine in the form of synced data between two different computers, what remains is to cherry-pick the very most precious of your data and get it up to your cloud storage. When you do this, it is very important that you still keep a local copy of your data. If you select a cloud storage company that suddenly goes pop, or decides to delete your data, you’re going to be inconsolable if you haven’t kept your own copy. The service I ended up choosing for this part of my backup strategy was Dropbox, for no other reason than the simplicity of the solution. You simply place your files into a specified folder, and the Dropbox service takes care of the rest for you. It’s also quite useful that it works in Windows and Linux, but that was more of a side benefit than a reason for choosing the service.

Have I made rather too big of a job out of this? Perhaps. But then again, if you had as much data flying around your home as I have, you’d probably get quite twitchy about it too. Let’s just hope that cloud storage services one day become a viable one stop shop for backing up all your data, but until then, it doesn’t hurt to put a little thought into what you’re doing with it. After all, it is your King you know.





Running update!

24 06 2010

I’m happy to report I’m still at it, and still managing two runs a week (all the while bearing a striking resemblance to the image above from the film ‘Run Fat Boy Run’). I’m also happy to report that my shin splints seem to be fading, thanks to a combination of icing, stretching, and thorough massaging of the painful parts of my legs.

Unfortunately I’m not happy to report that my performance has plateaued. This week I’ve been out twice, and I’ve only been able to manage about 2 miles on both occasions. I am aware the prevailing wisdom states that a long-time couch potato like myself can expect to take up to a year to get to a respectable level of fitness, but all the same, it is hard not to be somewhat disheartened.

I have come across this article, which apart from highlighting my shameful physical condition, seems to have some very useful advice. My plan now is to take a break from running for a week, and concentrate on my other exercises. Then when I return to the running, the theory goes that I should be able to push myself a little harder than I have done before, and break out of this rut. I will as ever keep you informed.

On a side note, I think I’m done listening to Prodigy and Pendulum, so if anyone has any good suggestions for ‘music to run to’, please feel free to leave some comments.







