Archive for the ‘Free Software’ Category

How to easily forward Firefox (PC & Android) traffic through an SSH tunnel

March 29th, 2015 No comments

Say you are travelling, or are at a neighbourhood coffee shop, using whatever unsecured WiFi network they make available. You could either:

  1. trust that no one is sniffing your web traffic, capturing passwords, e-mails, IMs, etc.
  2. trust that no one is using more sophisticated methods to trick you into thinking that you are secure (e.g. a man-in-the-middle attack)
  3. route your Internet traffic through a secure tunnel to your home PC before going out onto the web, protecting you from everyone at your current location

which would you choose?

VPNs and SSH tunnels are actually a relatively easy means for you to be more secure while browsing the Internet from potentially dangerous locations.

Making use of an SSH tunnel on your PC

There are many, many different ways for you to do this but I find using a Linux PC that is running on your home network to be the easiest.

Step 1: Install SSH Server

Configure your home Linux PC. Install ssh (and sshd if it is separate). If you are using Ubuntu this is as easy as running the following command: sudo apt-get install ssh
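Once that finishes you can quickly confirm that the server is actually up before moving on – a minimal check, noting that the service name may be ssh or sshd depending on your distribution:

# is the ssh server running?
sudo service ssh status
# or just try connecting to yourself
ssh localhost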

Step 2: Make it easy to connect

Sign up for a free dynamic DNS service like DynDNS or No-IP so that you know of a web address that always points to your home Internet connection. To do this follow the instructions at the service you choose.

Step 3: Connect to tunnel

On your laptop (that you have taken with you to the hotel or coffee shop) connect to your home PC’s ssh server. If you are on Windows you will need a program like PuTTY; see its documentation on how to forward ports. On Linux you can simply use the ssh command. The goal is to forward a dynamic port to the remote ssh server. For instance if you are using a Linux laptop and ssh then the command would look something like: ssh -D [dynamic port] [user]@[home server] -p [external port number – if not 22].
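To make that concrete, here is what it might look like with a dynamic DNS name from step 2 – the user name and hostname below are made up for illustration:

# forward local dynamic port 4096 through a home server whose
# ssh daemon listens on external port 4000
ssh -D 4096 -p 4000 user@myhome.example.org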

Step 4: Configure browser to use SSH tunnel proxy

In your browser open the networking options window. This will allow you to tell the browser to forward all of its traffic to a proxy, which in this case will be the dynamic port that we set up in step 3.
If you don’t feel awesome enough doing the above graphically you can also browse to “about:config” (without quotes) and set the following values (a user.js sketch with the same values follows the list):

  • network.proxy.proxy_over_tls
    • Change to true
  • network.proxy.socks
    • Change to localhost
  • network.proxy.socks_port
    • Change to the SSH Tunnel Local Port set above (4096)
  • network.proxy.socks_remote_dns
    • Change to true
    • Note: you cannot actually set this setting graphically but it is highly recommended to configure this as well!
  • network.proxy.socks_version
    • Change to 5
  • network.proxy.type
    • Change to 1
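If you would rather not click through about:config every time, the same values can go into a user.js file in your Firefox profile directory, which Firefox re-applies at every startup. A minimal sketch, assuming the localhost/4096 tunnel from step 3:

// user.js – place in your Firefox profile directory
user_pref("network.proxy.type", 1);                // 1 = manual proxy settings
user_pref("network.proxy.socks", "localhost");
user_pref("network.proxy.socks_port", 4096);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.socks_remote_dns", true); // resolve DNS through the tunnel too
user_pref("network.proxy.proxy_over_tls", true);   // true is the default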

Step 5: Test and use

Browse normally – you are now browsing the Internet by routing all of your traffic (in Firefox) securely through your home PC. Note that this doesn’t actually make web browsing any more secure beyond protecting you from people in your immediate vicinity (i.e. connected to the same insecure WiFi network).

What about Android?

Just like on the PC, you can also do this on Android, even without root access. Please note that while I’m sure there are a few ways to accomplish this, the following is just one way that has worked for me. I’m also assuming that you already have an SSH server to tunnel your traffic through.

Step 1: Install SSH Tunnel

The first thing you’ll want to do is install an application that will actually create the SSH tunnel for you. One such application is the aptly named SSH Tunnel, which can be found on the Google Play Store.

Step 2: Configure SSH Tunnel

Next you’ll want to launch the application and configure it.

  • Set the Host address (either a real domain name, dynamic DNS redirector or IP address of your SSH server) and port to connect on.
  • You’ll also want to configure the User and Password / Passphrase.
  • Check the box that says Use socks proxy.
  • Configure the Local Port that you’ll connect to your tunnel on (perhaps 1984 for the paranoid?)
  • I would recommend checking Auto Reconnect as well, especially if you are on a really poor WiFi connection like at a hotel or something.
  • Finally check Enable DNS Proxy.

Step 3: Connect SSH Tunnel

To start the SSH tunnel simply check the box that says Tunnel Switch.

Step 4: Install Firefox

While you may have a preference for Google Chrome, Firefox is the browser I’m going to recommend setting up the tunnel with. Additionally, this way, if you do normally use Chrome you can simply leave Firefox configured to always use the SSH tunnel and only switch to it when you want the additional privacy. Firefox can be found on the Google Play Store.

Step 5: Configure Firefox to use SSH Tunnel

In order to make Firefox connect via the SSH tunnel you’ll need to modify some settings. Once you are finished the browser will only work if the SSH tunnel is connected.

  • In the Firefox address bar browse to “about:config” with no quotes.
  • In the page that loads, search for and modify the following values:
    • network.proxy.proxy_over_tls
      • Change to true
    • network.proxy.socks
      • Change to localhost
    • network.proxy.socks_port
      • Change to the SSH Tunnel Local Port set above (1984?)
    • network.proxy.socks_remote_dns
      • Change to true
    • network.proxy.socks_version
      • Change to 5
    • network.proxy.type
      • Change to 1

Step 6: Test and browse normally

Now that you have configured the above you should be able to browse via the tunnel. How can you check if it is working? Simply turn off the SSH Tunnel and try browsing – you should get an error message. Or if you are on a different WiFi you could try using a service to find your IP address and make sure it is different from where you actually are. For example if you configured Firefox to work via the SSH tunnel but left Chrome as is, then visiting a site that reports your IP address should show different information in each browser.
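On the PC side of this setup there is an even quicker check if you have a command line handy: ask an IP-echoing service (ifconfig.me is one such service) for your address both directly and through the tunnel, and make sure the answers differ:

curl ifconfig.me                                   # the WiFi network's public address
curl --socks5-hostname localhost:4096 ifconfig.me  # should show your home address instead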

This post is a compilation of two posts which originally appeared on my personal website.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).


CoreGTK

January 28th, 2014 2 comments

A while back I made it my goal to put together an open source project as my way of contributing back to the community. Well fast forward a couple of months and my hobby project is finally ready to see the light of day. I give you… CoreGTK

CoreGTK is an Objective-C binding for the GTK+ library which wraps all objects descending from GtkWidget (plus a few others here and there). Like other “core” Objective-C libraries it is designed to be a very thin wrapper, so that anyone familiar with the C version of GTK+ should be able to pick it up easily.

However the real goal of CoreGTK is not to replace the C implementation for everyday use but instead to allow developers to more easily code GTK+ interfaces using Objective-C. This could be especially useful if a developer already has a program, say one they are developing for the Mac, and they want to port it to Linux or Windows. With a little bit of MVC separation a savvy developer would only need to re-write the GUI portion of their application in CoreGTK.

So what does a CoreGTK application look like? Pretty much like a normal Objective-C program:

/* Objective-C imports */
#import <Foundation/Foundation.h>
#import "CGTK.h"
#import "CGTKButton.h"
#import "CGTKSignalConnector.h"
#import "CGTKWindow.h"

/* C imports */
#import <gtk/gtk.h>

@interface HelloWorld : NSObject
/* This is a callback function. The data arguments are ignored
 * in this example. More callbacks below. */
+(void)hello;

/* Another callback */
+(void)destroy;
@end

@implementation HelloWorld
int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /* We could also use CGTKWidget here instead */
    CGTKWindow *window;
    CGTKButton *button;

    /* This is called in all GTK applications. Arguments are parsed
     * from the command line and are returned to the application. */
    [CGTK autoInitWithArgc:argc andArgv:argv];

    /* Create a new window */
    window = [[CGTKWindow alloc] initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

    /* Here we connect the "destroy" event to a signal handler in
     * the HelloWorld class */
    [CGTKSignalConnector connectGpointer:[window WIDGET]
        withSignal:@"destroy" toTarget:[HelloWorld class]
        withSelector:@selector(destroy) andData:NULL];

    /* Sets the border width of the window */
    [window setBorderWidth: [NSNumber numberWithInt:10]];

    /* Creates a new button with the label "Hello World" */
    button = [[CGTKButton alloc] initWithLabel:@"Hello World"];

    /* When the button receives the "clicked" signal, it will call the
     * function hello() in the HelloWorld class (below) */
    [CGTKSignalConnector connectGpointer:[button WIDGET]
        withSignal:@"clicked" toTarget:[HelloWorld class]
        withSelector:@selector(hello) andData:NULL];

    /* This packs the button into the window (a gtk container) */
    [window add:button];

    /* The final step is to display this newly created widget */
    [button show];

    /* and the window */
    [window show];

    /* All GTK applications must have a [CGTK main] call. Control ends here
     * and waits for an event to occur (like a key press or
     * mouse event). */
    [CGTK main];

    [pool release];

    return 0;
}

+(void)hello
{
    NSLog(@"Hello World");
}

+(void)destroy
{
    [CGTK mainQuit];
}
@end
Hello World in action

And because Objective-C is completely compatible with regular old C code there is nothing stopping you from simply extracting the GTK+ objects and using them like normal.

// Use it as an Objective-C CoreGTK object!
CGTKWindow *cWindow = [[CGTKWindow alloc]
    initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

// Or as a C GTK+ window!
GtkWindow *gWindow = [cWindow WINDOW];

// Or even as a C GtkWidget!
GtkWidget *gWidget = [cWindow WIDGET];

// This...
[cWindow show];

// ...is the same as this:
gtk_widget_show([cWindow WIDGET]);

You can even use a UI builder like GLADE, import the XML and wire up the signals to Objective-C instance and class methods.

CGTKBuilder *builder = [[CGTKBuilder alloc] init];
if(![builder addFromFile:@""])
{
    NSLog(@"Error loading GUI file");
    return 1;
}

[CGTKBuilder setDebug:YES];

NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
                 [CGTKCallbackData withObject:[CGTK class]
                     andSEL:@selector(mainQuit)], @"endMainLoop",
                 [CGTKCallbackData withObject:[HelloWorld class]
                     andSEL:@selector(hello)], @"on_button2_clicked",
                 [CGTKCallbackData withObject:[HelloWorld class]
                     andSEL:@selector(hello)], @"on_button1_activate",
                 nil];

[builder connectSignalsToObjects:dic];

CGTKWidget *w = [builder getWidgetWithName:@"window1"];
if(w != nil)
{
    [w showAll];
}

[builder release];

So there you have it – that’s CoreGTK in a nutshell.

There are a variety of ways to help me out with this project if you are so inclined to do so. The first task is probably just to get familiar with it. Download CoreGTK from the GitHub project page and play around with it. If you find a bug (very likely) please create an issue for it.

Another easy way to get familiar with CoreGTK is to help write/fix documentation – a lot of which is written in the source code itself. Sadly most of the current documentation simply states which underlying GTK+ function is called and so it could be cleaned up quite a bit.

At the moment there really isn’t anything more formal than that in place but of course code contributions would also be welcome!

Update: added some pictures of the same program running on all three operating systems.

Hello World on Windows

Hello World on Mac

Hello World on Linux

This post originally appeared on my personal website.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

An Ambitious Goal

August 1st, 2013 3 comments

Ever since we announced the start of the third Linux Experiment I’ve been trying to think of a way in which I could contribute that would be different from the excellent ideas the others have come up with so far. After batting around some ideas over the past week I think I’ve finally come up with how I want to contribute back to the community. But first a little back story.

A large project now, GNOME was started because there wasn’t a good open source alternative at the time

During the day I develop commercial software. An unfortunate result of this is that my personal hobby projects often get put on the back burner because in all honesty when I get home I’d rather be doing something else. As a result I’ve developed, pun intended, quite a catalogue of projects which are currently on hold until I can find the time/motivation to actually make something of them. These projects run the gamut from little helper scripts, written to make my life more convenient, all the way up to desktop applications designed to take on bigger tasks. The sad thing is that while a lot of these projects have potential I simply haven’t been able to finish them, and I know that if I could just finish them they would be of use to others as well. So for this Experiment I’ve decided to finally do something with them.

Thanks to OpenOffice, LibreOffice and others there are actual viable open source alternatives to Microsoft Office

Open source software is made up of many different components. It is simultaneously one part idea (perhaps a different way to accomplish X would be better), one part ideal (the belief that sometimes it is best to give code away for free), one part execution (often a developer just “scratching an itch” or trying a new technology), and one part delivery (someone enthusiastically giving it away and building a community around it). In fact that’s the wonderful thing about all of the projects we know and love; they all started because someone somewhere thought they had something to share with the world. And that’s what I plan to do. For this Linux Experiment I plan on giving back by setting one of my hobby projects free.

Before this open source web browser we were all stuck with Internet Explorer 6

Now obviously this is not only ambitious but perhaps quite naive as well, especially given the framework of The Linux Experiment – I fully recognize that I have quite a bit of work ahead of me before any of my hobby code is ready to be viewed, let alone used, by anyone else. I also understand that, given my own personal commitments and available time, it may be quite a while before anything actually comes of this plan. All of this isn’t exactly well suited for something like The Linux Experiment, which thrives on fresh content; there’s no point in me taking part in the Experiment if I won’t be ready to make a new post until months from now. That is why for my Experiment contributions I won’t be relying only on the open sourcing of my code, but rather I will be posting about the thought process and the research that I am doing in order to start an open source project.

Topics that I intend to cover are things relevant to people wishing to free their own creations and will include things such as:

  • weighing the pros and cons as well as discussing the differences between the various open source licenses
  • the best place to host code
  • how to structure the project in order to (hopefully) get good community involvement
  • etc.

An interesting side effect of this approach will be a somewhat new look into the process of open sourcing a project, written piece by piece as it happens, rather than in retrospect.

The first billion dollar company built on open source software

Coincidentally, as I write this post, an excellent website has put together a group of links discussing the pros of starting open source projects. I’ll be sure to read up on those after I first commit to this 😉

Linux: a hobby project initially created and open sourced by one 21 year old developer

I hope that by the end of this Experiment I’ll have at least provided enough information for others to take their own back burner projects to the point where they too can share their ideas and creations with the world… even if I never actually get to that point myself.

P.S. If anyone out there has experience in starting an open source project from scratch or has any helpful insights or suggestions please post in the comments below, I would really love to hear them.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Installing Netflix on Kubuntu

July 27th, 2013 4 comments

The machine I am running Kubuntu on is primarily used for streaming media like Netflix and Youtube, watching files off of a shared server and downloading media.

I decided to try to install Netflix first since it is something I use quite often. I am engrossed in watching the first season of Orange is the New Black and the last season of The West Wing.

Again, I resorted to Googling exactly what I am looking for and came across this fantastic post.

I opened a Terminal instance in Kubuntu and literally copied and pasted the text from the link above.
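I won’t re-paste someone else’s instructions here, but for context the guides of that era generally boiled down to adding a PPA for the Wine-based netflix-desktop package – something along these lines, with the PPA name from memory, so double-check it against the linked post:

sudo apt-add-repository ppa:ehoover/compholio
sudo apt-get update
sudo apt-get install netflix-desktop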

After going through these motions, I had a functioning instance of Netflix! Woo hoo.

So I decided to throw on an episode of Orange is the New Black; it loaded perfectly… without sound.

Well shit! I never even thought to see if my audio driver had been picked up… so I guess I should probably go ahead and fix that.

This isn’t going well.

July 26th, 2013 No comments

Today I started out by going into work, only to discover that it is NEXT Friday that I need to cover.

So I came home and decided to get a jump start on installing Kubuntu.

I am now at a screeching halt because the hardware I am using has Win8 installed on it and when I boot into the Start Up settings, I lose the ability to use my keyboard. This is going swimmingly.

So, it is NOW about 3 hours later.

In this time, I have cursed, yelled, felt exasperated and been downright pissed.

This is mainly because Windows 8 does not make it easy to get to the boot loader. In fact, the handy Windows-made video that is supposed to walk you through how EASY and user-friendly the process of changing system settings is fails to mention what to do if the “Use a Device” option is nowhere to be found (as it was in my case).

So I relied on Google, which is usually pretty good about answering questions about stupid computer issues. I FINALLY came across one post which stated that, due to how quickly Windows 8 boots, there is no time to press F2 or F8. However, I tried anyway. F8 is the key to selecting what device you want to boot from, as you will see later in this post.

What you will want to do if installing any version of Linux is to first format a USB stick to hold your Linux distro. I used Universal USB Loader. The nice thing about this loader is that you don’t have to already have the .iso for the distro you want to use downloaded. You have the option of downloading it right in the program.

After you have selected your distro, downloaded the .iso and loaded it onto your USB stick, now comes the fun part. Plug your USB stick into the computer you wish to load Linux onto.

Considering how easy this was once I figured it all out, I do feel rather silly. If I were to have to do it again, I would feel much more knowledgeable.

If you are using balls-ass Windows 8, like I was, the EASIEST way to select an alternate device to boot from is to restart the computer and press F8 a billion times until a menu pops up, letting you choose from multiple devices. Choose the device with the name of the USB stick, for me it was PENDRIVE.

You will need to press enter from a keyboard that is attached directly to the computer you are using via USB cable, because apparently Win8 loses the ability to use wireless USB devices before the OS has fully booted… at least that was my experience.

So now, I am being prompted to install Kubuntu (good news, I already know it supports my projector, because I can see this happening).

Now, I have had to plug in a USB wired keyboard and mouse for this process so far. This makes life a little bit difficult because the computer I am using sits in a closet, too far away from my projector screen. That makes it almost impossible for me to see what is going on, on the screen. So installing the drivers for my wireless USB devices is a bit of a pain.

However, the hard part is over. The OS is installed successfully. My next post will detail how the hell to install wireless USB devices. I will probably also make a fancy signature, so you all know what I am running.

Come on, really?!

July 25th, 2013 3 comments

So it is 9:40 PM and I started my “Find a Linux distro to install” process. Like many people, I decided to type exactly what I wanted to search into Google. Literally, I typed “Linux Distro Chooser” into Google. Complex and requiring great technical skill, I know.

My next mission was to pick the site that had a description with the least amount of “sketch”. Meaning, I picked the first site in the Google results. I then used my well honed multiple choice skills (ignore the question, pick B) to find my perfect Linux distro match.

After several pages of clicking through, I was presented with a list of Linux distributions that fit my needs and hardware.

See, a nice list, with percents and everything.

This picture has everything… percents, mints, Man Drivers…

So naturally, I do what everyone does with lists… look at my options and pick the one with the prettiest picture.

For me that distro was Kubuntu. It has a cool sounding name that starts with the same letter as my name.

So I follow the link through to the website to pull the .iso and this pops up.

God damn Drupal!

I have dealt with Drupal before, as it was the platform the website I did data entry for was built on. Needless to say, I hate it. Hey Web Dev with Trev, if you are out there, I hope you burn your toast the next time you make some.

So, to be productive while waiting for Drupal to fix its shit, I decided to start a post and rant. In the time this took, the website for Kubuntu had recovered (for now).

So, I downloaded my .iso and am ready to move it onto a USB stick.

I’m debating whether I want to install it now or later, as I would really like to watch some West Wing tonight. I know that if I start this process and fuck it up, I am going to be forced to move upstairs where there is another TV, but it is small :(

Well, here I go, we’ll see how long it takes me to install it. If you are reading this, go ahead and time me… it may be a while.

An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article about Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, in which he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task, it is however one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to be the default file format in most open source applications (such as LibreOffice, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they differ enough that 1) they are not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates in saying:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said, with the forthcoming 2013 version, Microsoft is set to drastically improve its compatibility with OpenDocument so the overall situation should get better with time.

Today, however, I think both standards are, technologically speaking, on more or less equal footing. Initially both standards had issues and were lacking some features; however both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two is simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which lets you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is because of these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience however things are almost never quite as simple as that. From what I’ve seen any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
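If you do go the bulk conversion route, LibreOffice’s headless mode can chew through a whole directory at once – a minimal sketch, assuming LibreOffice is installed and the binary is named soffice (on some systems it is libreoffice):

# convert every .doc in the current directory to OpenDocument Text
soffice --headless --convert-to odt --outdir converted *.doc

Every .doc ends up as an .odt in ./converted, so you can compare results side by side before deleting anything.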

If however you are starting fresh or just converting simple documents with little formatting there is no reason why you couldn’t make the jump to OpenDocument. For me personally I’m going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

The apps of KDE 4.10 Part I: Rekonq

April 6th, 2013 No comments

It’s been a while since I’ve used KDE. However, with the recent rapid (and not always welcome) changes going on in the other two main desktop environments (GNOME 3 and Unity) and the, in my opinion, feature stagnation of environments like Xfce and LXDE, I decided to give KDE another shot.

My goal this time is to write up a series of quick reviews of KDE as presented as an overall user experience. That means I will try and stick to the default applications for getting my work done. Obviously depending on the distribution you choose you may have a different set of default KDE applications, and that’s fine. So before you ask, no I won’t be doing another write-up for KDE distribution X just because you think it’s ‘way better for including A instead of B’. I’m also going to try and not cover what I consider more trivial things (i.e. the installer/installation process) and instead focus on what counts when it comes to using an operating system day-to-day.


Rekonq

The default web browser in the distribution I chose is not Konqueror but rather its WebKit cousin Rekonq. Where Konqueror uses KHTML by default and WebKit as an option, Rekonq sticks to the more conventional rendering engine used by Safari and Chrome.


This is not Rekonq, it is Konqueror

Rekonq is a very minimalistic looking browser to the point where I often thought I accidentally started up Chrome instead.

This is Rekonq

From my time using it, Rekonq seems to be a capable browser although it is certainly not the speediest, nor does it sport any features that I couldn’t find elsewhere. One thing it does do very nicely is integrate with the rest of the KDE desktop. This means that the first time you visit YouTube or some other Flash website you get a nice little prompt in the system tray alerting you of the option to install new plugins. If you choose to install the plugin then a little window appears telling you what it is downloading and installing for you, completely automatically. No need to visit a vendor’s website or go plugin hunting online.

Like most other KDE applications Rekonq also allows for quite a bit of customization, although I found its menus to be very straightforward and not nearly as intimidating as some other applications.

The settings menu

I did notice a couple of strange things while working with Rekonq that I should probably mention. First off while typing into a WordPress edit window none of the shortcut keys (i.e. Ctrl+B = bold) seemed to work. I also found that I couldn’t perform a Shift+Arrow Key selection of the text, instead having to use Ctrl+Shift+Arrow Key which highlights an entire word at a time. At this time I’m not sure what other websites may suffer from similar irregularities so while Rekonq is a fine browser in its own right, you may want to keep another one around just in case.

Browsing the best website on the net

While I haven’t found any real show-stoppers with Rekonq, I still can’t shake the feeling that I’m missing something. I don’t know how to describe it other than I think I would feel safer using a more mainstream web browser like Firefox, Chrome or even Opera. But like any software, your experience may vary and I would certainly never recommend against trying Rekonq (or even Konqueror). Who knows, you may find out that it is your new favorite web browser.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Using Linux to keep an old work PC alive

November 19th, 2012 1 comment

A couple of years ago I helped a small business convert their old virus-infected Windows XP computer to a Linux Mint 11 (Katya) Xfce setup. This was done after a long time of trying to help them keep that machine running at a half-decent speed – the virus being the last straw that finally had them make the switch to Linux. Amazingly, well maybe not to the Linux faithful but to most people, this transition not only went smoothly but was actually extremely well received. Outside of a question or two every couple of months I have heard of no issues whatsoever. Sadly Linux Mint 11 has recently reached its end of life stage and so the time has come to find a replacement.

The Situation

When I said this was an old Windows XP machine I wasn’t kidding. It sports a speedy (sarcasm) 2.4GHz Intel Pentium 4 processor, 512 whole megabytes of RAM and an Intel integrated graphics card. With specs like those it is pretty obvious that the only two real considerations (from a technical standpoint) are low resource requirements and speed. I’d be tempted to jump to a more specialized distribution like Puppy Linux but the people using the machine are A) used to Linux Mint already and B) expecting a familiar, fully featured operating system experience.

Where is Linux Mint today?

Linux Mint 13 has recently been released (including an Xfce version) based on the latest Ubuntu 12.04 stable release. This makes it an ideal candidate for an upgrade because it is something already familiar to the users and will be supported until April 2017.

The following are the steps I took, in no real order, to setup and configure Linux Mint 13 Xfce for their use:

Pre/During Install Configuration

  1. Encrypt the home directory
    Because this is a work computer and will be storing sensitive financial information I configured it to encrypt everything in the home directory. Better safe than sorry.

Post Install Configuration

  1. Install Google Chrome
    I removed Mozilla Firefox and installed Google Chrome for two reasons. First Chrome tends to be, or at least feel, a little bit snappier than even the latest version of Firefox and as I mentioned above speed is king. Secondly, unless something changes, Google’s Chrome (not even Chromium) will be the only Linux browser that will continue to get Adobe Flash updates in a straightforward and easy way for the user. UPDATE: ironically the only issue I found with this whole install related to Google’s embedded Adobe Flash. For some reason the audio on the particular version ran at double speed. This is apparently a known issue.
  2. Install Rhythmbox
    I also removed Banshee and installed Rhythmbox instead. This was done not because I consider one better than the other (or even that these two represent the only options), but simply because the users were already familiar with Rhythmbox. They use Rhythmbox to listen to streaming Internet radio.
  3. Remove unnecessary software (Pidgin, XChat, GNOME Mplayer and Totem)
    Not because they are bad applications, they just simply weren’t needed. I kept VLC because it can play pretty much any audio or video format.
  4. Add Trash can to desktop and remove Filesystem icon
  5. Remove all but one workspace
  6. Install preload to speed up commonly used packages on startup
  7. Configure LibreOffice
    The goal of this step is to set up LibreOffice in such a way as to make it use less memory while still keeping most of the functionality. In order to accomplish this I changed the number of undo steps from 100 to 30 and disabled the Java components.
  8. Change screensaver to blank screen
    This looks more professional and uses less memory.
  9. Spin down hard drive when possible
    While I was at it I also went into power management and had the system spin down the hard drives when possible. This configuration had nothing to do with performance, in fact spinning down the drives can slow access to files, but was done because they often just leave the PC running 24-7 and it is not in use at all during the night. I’m sure this will save them a couple of cents per year or something.
  10. Disabled unused startup services like Bluetooth
    The machine doesn’t even have a Bluetooth radio.
  11. Set it so that inserting a removable drive causes the system to open a window for browsing the contents
  12. Change the system tray clock time format from 24 hour time to 12 hour time.
    This was a user preference.
  13. Set updates to be downloaded from best available server
  14. Install Microsoft fonts (i.e. ttf-mscorefonts-installer)
  15. Install 7zip, rar and unrar
    You never know what kind of random archive formats they might need to open so it is better to support them all.
  16. Change login screen theme
    The default login screen is nice but it isn’t the most user friendly. I opted to install the Mint Pro (MDM) theme instead.
  17. Install all updates
  18. Run Grub boot profiler to speed up the boot process
    If you’re not aware of this it is a great trick. Essentially once you have everything installed (driver wise at least) you do the following (summarized as commands right after this list):
    -Modify /etc/default/grub and change the line GRUB_CMDLINE_LINUX_DEFAULT=”quiet splash” to GRUB_CMDLINE_LINUX_DEFAULT=”quiet splash profile”.
    -Then run sudo update-grub2 and reboot.
    -The next reboot might be slower but once the machine comes back up simply edit that file again and remove the “profile” text. Your computer will now intelligently load drivers in the order the hard drive head travels across their locations, instead of in some other arbitrary order, which can actually shave a couple of seconds off of your total boot time.
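For reference, the whole dance condensed into commands – the sed lines simply automate the hand edits described above, so adjust them if your default GRUB_CMDLINE_LINUX_DEFAULT differs:

# add "profile" to the kernel command line and profile the next boot
sudo sed -i 's/"quiet splash"/"quiet splash profile"/' /etc/default/grub
sudo update-grub2
sudo reboot
# after the (slower) profiling boot, take it back out again
sudo sed -i 's/"quiet splash profile"/"quiet splash"/' /etc/default/grub
sudo update-grub2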

How did it turn out?

Surprisingly well. The machine isn’t a speed demon by any stretch of the imagination but it does perform its simple tasks well enough. It remains to be seen if the computer will make it to the next long term release of Linux Mint Xfce, or even if it will be able to run it at that time, but for now the users are happy and that is what matters.


I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Staying in shape with open source software

November 21st, 2011 No comments

On a good week, I consider myself an avid runner. Right now I’m training to run a 5k in the spring. Ideally, I’ll be able to get it under 20 minutes. Now, two of the keys to exercise are to set goals and to track your progress. Clearly I’ve got the first half under control, but the second half? Well, it turns out that’s where a lot of people falter, lose motivation, and ultimately fail. I’m no exception – I’ve tried running without really tracking my progress and I found that eventually I just gave up. Manually drawing routes, estimating distances, and keeping time take effort, and frankly I didn’t have the wherewithal to do it. Thankfully, modern technology has come to save the day. I use a Google Nexus S, which comes with a GPS and dozens of apps on the Android Market for tracking exercise.

Google My Tracks

Google happens to make an open source app that tracks runs (My Tracks). It supports waypoints (so you can get data on each mile or kilometre of your run), and it records your speed and altitude. All in all, it’s a very handy app and I use it regularly for my runs. The software integrates with Google accounts and lets you upload your runs to Google Maps and track statistics via spreadsheets in Google Docs. And if you’re the sharing type, it also exports your runs as .gpx or .kml files and supports sharing through Twitter.

Main My Tracks spreadsheet

My Tracks summary statistics


Pytrainer

I discovered Pytrainer through an entry at another blog. If you’re more inclined to keep your data offline, it might be a better solution for you. In order to use Pytrainer, you’ll have to import your .gpx files from your phone and specify the types of activities you were tracking (running, cycling, etc). In order to get the mapping to work properly, I had to install the gpsbabel package. Once that was set up, I had the option to use either Google Maps or the Open Map Project. The program allows you to enter information about heart rate, calories, and equipment as well, but I didn’t have any of that information available. Gathered statistics are aggregated and can be examined for specified time periods, activities, and athletes.

Uploading a new run into Pytrainer

Mapping my run

Summary statistics in Pytrainer


Endomondo

This doesn’t technically fall into the category of open source, but I feel compelled to add it because it’s actually my preferred tracking solution. Endomondo is a website (with an associated Android app) that allows you to track routes with the added benefits of calorie estimation, social integration (such as competitions and commenting/“pep-talks”), and a general smoothness in functionality that the other solutions don’t really reach. It also has a “coach” available and workout playlists, but I don’t make much use of those. Not that I have anything against the functions, but for personal safety reasons, I prefer not to run with headphones.

Endomondo workout imported from My Tracks

My choices

After testing out the programs and apps mentioned here, I’ve decided to go with My Tracks and Endomondo. I chose My Tracks because it integrates seamlessly with Google Maps and Docs (I like screwing around with spreadsheets) and because despite looking stripped down and simple, it’s actually excellent at what it does. As for Endomondo: its functions overlap considerably with My Tracks, but the social environment and the excellent website make it very appealing and easy-to-use. The main reason it won out over Pytrainer is because the app takes away any uploading – the second I’m done my workout, it’s available online.


LFS: Installing VLC

November 6th, 2011 1 comment

Since the install of Linux From Scratch, one of the main issues I’ve been having is the playback of audio and video files. VLC does both quite well, so I decided to install it.

Like most of my installs in Linux From Scratch, there are millions of dependencies, and you have to install each one manually. I found that the CBLFS VLC page was a great help in determining which packages were required.

One thing I noticed is that even though it lists some packages as “Optional,” VLC will not compile without a few of them. The easiest way to deal with this is to just install the optional packages as required.

I only ran into one issue while compiling:

D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace

The quick fix for this is to just run:

dbus-uuidgen > /var/lib/dbus/machine-id

Now that VLC is compiled, you can run it anytime by using vlc from the command-line. Make sure you don’t pull a Jake and run it as root. It will yell at you.

I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Dropbox Meets Gentoo

November 6th, 2011 No comments

So I’m a big Dropbox user. I primarily use it to keep my personal info synchronized between my machines (don’t worry, I encrypt my stuff before dumping it into Dropbox, I’m not dumb), but it’s also handy for quickly sharing files with others.

Unfortunately, Dropbox doesn’t exist in the Gentoo portage tree.

To get started, head over to the Dropbox website and download the source tar.bzip file for your platform. Unzip it to your desktop, open a root terminal and cd into the resulting directory. Before you can actually install Dropbox, you’ll need to satisfy a few dependencies.

First, make sure that you’ve got python by typing emerge python into the aforementioned root terminal. Next, install docutils by typing emerge docutils in that same terminal. Now you should be able to install the dropbox stub by typing ./configure && make && make install.
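Condensed into one run it looks roughly like this – the tarball name below is a stand-in for whatever the Dropbox site actually gives you:

emerge python docutils              # the two dependencies noted above
tar -xjf dropbox-source.tar.bz2     # hypothetical file name
cd dropbox-source
./configure && make && make install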

At this point, Dropbox will have installed a stub of an application on your machine. You should be able to find it under Applications > Internet > Dropbox. When you launch this application, Dropbox will attempt to automatically download and install the binary portion of the application.

Optional: Verifying Binary Signatures

When Dropbox downloads binary files, it verifies their legitimacy by calculating a digital signature and comparing it to a known value. In order for it to perform this task, you’ll need to have the pygpgme library installed on your system. Note that this is not the same as the python-gpgme library. They are different, and Dropbox requires the former. Like most Python libraries, pygpgme is a wrapper around a C-based library, in this case, GPGME. As such, the installation takes two steps. First, run emerge gpgme in your root terminal.

Second, you’ll need to install the pygpgme wrapper. It can be found on the project’s homepage at Launchpad. Unpack the tar.bzip, cd into the resulting directory, and run python setup.py build && python setup.py install from a root terminal. If the installation fails with an error message like

fatal error: gpgme.h: No such file or directory

then check the location of your gpgme.h file. It should have been included with the emerge gpgme command, but pygpgme expects it to live in /usr/include/. On my system, it was living in /usr/include/gpgme/. I solved this problem by running cp /usr/include/gpgme/gpgme.h /usr/include/. The only catch is that if you upgrade GPGME, you’ll need to remember that you copied the header file in order to make the python wrapper work. Once the file is copied, you should be able to run the setup script above.

Finally, run Dropbox and check to ensure that the warning message about binary signatures has gone away. You should now be good to go!


Edit: After I had figured all of this crap out, I realized that Dropbox actually is available in the Gentoo tree, but it’s called gnome-extra/nautilus-dropbox. You should be able to skip all of these steps and install Dropbox with the command emerge nautilus-dropbox, although I haven’t tried it myself.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Linux From Scratch: We Have Lift-off…

November 4th, 2011 No comments

Hi Everyone,

Now that I have a relatively stable environment, I just wanted to write an update of how things went, and some issues that I ran into while installing my desktop environment.

No Sound

Not that I was expecting anything different from LFS, but I had no sound upon booting into KDE. I found this quite strange, as alsamixer was showing my sound card fine. One thing I can tell you is that alsaconf is a filthy liar. My sound is now working, and it still says it can’t find my card. I’m not sure how I got it working, but here are a few tips.

  • Make sure your sound is un-muted in alsamixer.
  • Check your kernel to make sure that either support is compiled in for your card, or module support is selected.
  • If you selected module support, make sure the modules are loaded. For me, this was snd-hda-intel (see the quick checks below).
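A few quick commands that cover those bullets – the module name comes from my card above, so substitute your own:

lsmod | grep snd_hda_intel    # is the driver module actually loaded?
modprobe snd-hda-intel        # if not, try loading it by hand (as root)
alsamixer                     # then un-mute and raise the channels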

Firefox and Adobe Flash

I’m not going to go into too many details about Firefox, as Jake covered this in his post, but I’d like to note that installing Flash into Firefox was quite easy. All I had to do was download the .tar.gz from Adobe, and do the following:

tar -xvf flash.tar.gz (or whatever the .tar.gz is called)
cd flash
cp libflashplayer.so ~/.mozilla/plugins (make sure the plugins directory is created if it does not exist.)

KDE Crash On Logout

The first time I tried to logout of KDE, I noticed that it crashed. After doing some investigations, I found a solution. You want to edit your $KDE4_PREFIX/share/config/kdm/kdmrc to reflect the following:
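A sketch of the relevant kdmrc section – TerminateServer is the setting usually cited for this particular logout crash, so treat the exact lines here as an assumption rather than gospel:

[X-:*-Core]
# restart the X server on logout instead of reusing it
TerminateServer=true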



What’s Next?

I’m actually not sure what I’m going to do next. I suppose I should get VLC running on the system, but that shouldn’t be too difficult. I now have a working web browser, flash, and sound, which should be fine until I can get other things working.

I am currently running ArchLinux (x86_64).
Check out my profile for more information.

KDE4, LFS: Make GTK Applications Look Like QT4 Applications

November 3rd, 2011 2 comments

Do your GTK applications (e.g. Firefox) look like something designed in the ’90s when run in KDE? I think I can help you.

I installed the latest Firefox (not the one in the screenshot – I stole this image) and was very disappointed to see something like the following:

Tyler pointed me to the Gentoo guide, which helped me find out which packages I needed.

If you install Chakra-Gtk-Config, and either oxygen-gtk or qtcurve (make sure to download the gtk2 theme), you will have better looking GTK applications in no time. Note that there are probably tons of other GTK themes for KDE4, these are just some suggestions to get you started.

That is much better.

I am currently running ArchLinux (x86_64).
Check out my profile for more information.

LFS, pre-KDE: Errors Compiling qca-2.0.3

November 2nd, 2011 No comments

If you’re going through the Beyond Linux From Scratch guide, and run into this error while compiling qca-2.0.3 (and I assume many other versions of qca), I think I can help.

You don’t seem to have ‘make’ or ‘gmake’ in your PATH.
Cannot proceed.

The fix is relatively easy. Just make sure to have which installed on the machine. Jake found this out the hard way by looking through the configure script. Doing this experiment on Linux From Scratch has really given me an appreciation for distributions that come with basic utilities such as which.

Since which is very difficult to find on Google, you may have better luck searching for “GNU which”.
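If you end up building it from source, it is a standard autotools affair – a minimal sketch, with a made-up version number:

tar -xvf which-2.20.tar.gz
cd which-2.20
./configure --prefix=/usr
make && make install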

I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Bye Bye Bodhi

November 1st, 2011 10 comments

Ah Linux

One website lists ten reasons to use Linux, my favourites of which are “Linux is easier to use than Windows” and “Linux is fun.” It is day three of the experiment and so far I haven’t installed Linux but I have taken a Dell Vostro 3350 apart about five times. I borrowed this laptop from a fellow comrade in this experiment, Jake B, as I will be sending my own netbook home this coming December.

Starting off I aimed to install both VectorLinux and Bodhi to compare them. I consider myself a relatively light computer user outside of the office and so comparing two different distributions would give me something to talk about. Alas this choice has come back to bite me in the…

I used unetbootin to begin with, on a USB key that was confirmed to be working. I then put Vector on the USB key and it brought up half a blue screen with the top of the Vector logo just appearing above the black lower half of the display. After a couple of tries I figured it was corrupt files or a bad ISO so I reformatted the USB key in order to try Bodhi instead. Unfortunately I didn’t even get a logo this time. Next I burned a CD of Vector and got as far as the ‘find installation media’ screen but no matter how many refreshes or reloads I did it apparently couldn’t find the CD-ROM or configuration files.

Having previously seen installers fail to find hard drives and USB keys because of the hard drive mode setting in the BIOS, I changed it from AHCI to ATA and, lo and behold, finally some success. I managed to get the Vector installer to write partitions to the disk (using the CD at this point) after choosing the add-on applications I wanted to install. Again this failed, so I tried once more with the USB key. This failed the same way, except it said that it could not find live media. I even tried using the USB key and the CD together at the same time with no luck.

After switching between Bodhi and Vector trying to get a complete install, and many, many CDs later, I temporarily gave up. I downloaded a new distribution called Sabayon, a Gentoo-based distro with the Enlightenment desktop environment, but alas I kept getting the same errors. I even tried Ubuntu 10.04 and Linux Mint, and neither of them could write to the disk.

Figuring it was a hard drive issue I took out the hard drive from the laptop and mounted it in an enclosure. After a quick reformat, which removed a random 500MB LVM partition that I believed to be corrupt, I put it back in the machine. Still no luck.

The errors I kept getting included disk, I/O, live media, cannot find CD-ROM, no usable media, no config file, and a couple of others. Each time I tried installing, it would fail at a different section of the install, and the error would be different with each medium used. Among all of the errors I’ve seen, the main one seems to be “(initramfs) unable to find a medium containing a live filesystem”

On a whim I decided to test for any other hardware errors by running diagnostics from the BIOS. No errors found. I even dug out my ancient XP Professional disc, and after a couple of BIOS changes and a couple of Blue Screens – that were my fault because I had changed the hard drive out so much – I got XP to successfully load, install, and commit changes to the hard drive.

Turning to Google, and with the help of a more advanced Linux Experiment comrade, I retried installing Linux by adding some commands to the installer boot options. Still no luck.

After more Googling I have found that there are a few possible reasons this could be happening. I have read that it could be caused by the USB3 ports interfering with the bootable media, or that it could be related to a CD-ROM master/slave setting. Either way, I still haven’t figured it out, and I’m not willing to break someone else’s computer just to see if I can overcome this frustrating first experience with Linux. My next task is to try some ACPI hacks and, after finding this useful link, to install the latest version of Ubuntu, which seems to be compatible with the hardware of this machine. But for now it’s …
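For the record, this is the sort of thing I mean by boot options and ACPI hacks; a sampler only, since which of these (if any) helps depends entirely on the hardware:

# appended to the installer's kernel line at the boot prompt
acpi=off noapic nolapic nomodeset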

Windows 1 Linux 0

Men using Linux 1 Women using Linux 0

I am currently running Mandriva 2011.
Check out my profile for more information.

Linux from Scratch: A Cautionary Tale, Part 2

November 1st, 2011 3 comments

What Next? Chroot

Once you get into the chroot environment, you will get the incredibly annoying PC speaker beep every time you foul up a command.
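As an aside, a couple of common ways to silence the bell; treat this as a sketch, since what works depends on what you've built at this point:

# shorten the console bell to nothing for the current terminal
setterm -blength 0
# or, on the host system, unload the PC speaker module entirely
rmmod pcspkr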

When compiling glibc in section 6.9, first ensure that there’s no “lib64” directory in your root; for some reason I had a symlink of lib64 pointing to itself. Make sure you’ve run the sed script correctly or the “make install” portion will fail. Specifically, use -Wl (the letter l) in the command, not -W1 (the number 1). After you fix the idiotic transposition of 1 and L, remove both the glibc-build and glibc-2.14.1 directories under /sources and restart section 6.9 from the beginning. If you don’t restart from the beginning, you’ll still get “glibc cannot find dynamic linker” even though the file exists in /lib64.
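For reference, the sed in question looks something like this; I'm reproducing it from memory, so check it against your copy of the book. The crucial detail is -Wl with a lowercase L, not -W1:

sed -i 's|libs -o|libs -L/usr/lib -Wl,-dynamic-linker=/lib64/ld-linux-x86-64.so.2 -o|' \
    scripts/test-installation.pl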

Keep Watching What You Type

In section 6.10, when running the grep command to ensure the correct startfiles are used, make sure you use [1in] with a one and not [lin] with an L in the command:

grep -o '/usr/lib.*/crt[1in].*succeeded' dummy.log

In sections 6.11 and 6.12, I had to run ldconfig before the new libraries were picked up. It seems like the same problem encountered on this mailing list, but I’d confirmed that my PATH was set correctly. The same applied for section 6.22; run ldconfig before attempting the configure/make/make install process for E2fsprogs.

For procps-3.2.8, when applying the sed command in section 6.27.1, make sure you’ve copied and pasted it (or at least check your typing). I missed a forward slash in the regex about four times, causing an error during make:

...undefined reference to `get_pid_digits'
collect2: ld returned 1 exit status

But hey, at least I have things sort of working.

My next few posts will deal with specific problems with reasonable solutions.

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Linux from Scratch: A Cautionary Tale, Part 1

October 30th, 2011 1 comment

And I’ve started with Linux from Scratch! Here are some helpful pointers for anyone considering running LFS on their own. Caution: this is highly nerdy and keyworded to hell to hopefully allow your favourite search engine to grab solutions from this post.

Getting Started, AKA: Use a Distribution You Know

LFS needs an existing Linux environment. Don’t try to use unetbootin on the LFS liveCD (I used lfslivecd-x86_64-6.3-r2145-min.iso to get started, but there is a newer revision 2160 available on one of the mirrors). unetbootin in this configuration is just a bag of hurt and you’ll spend an inordinate amount of time trying to get your root volume to work, so just burn a CD.

If I were building LFS again I’d start from a stable Debian base or another Linux distribution where I’m comfortable and have network access. There are a number of reasons below why I suggest this, but you really want your host system’s kernel to be 2.6.25 or higher.

Make sure all the patches from the LFS patches page are downloaded and in a location you can access from your host distribution. USB sticks are OK for this if you don’t have network access (mount the stick, and then copy the patches and packages to the sources directory). Use DownThemAll or a similar mass-downloading application/extension on the patches page to save time and grief.

Watch What You Mount

Augh, out of space! It’s quite possible to mount two partitions at /mnt/lfs at the same time by missing a directory, like this:

$ mount /dev/sdb3 /mnt/lfs
$ mount /dev/sdb1 /mnt/lfs

Oops – I missed /boot at the end of the second mount command. To catch this before copying any files, run “mount” and check that only one partition is mounted at /mnt/lfs itself. Since my /dev/sdb1 partition was only 200MB, I got to the GCC extraction step and was promptly disappointed. I ended up unmounting everything, recreating the filesystem (mke2fs -v /dev/sdb1) and then remounting (mkdir -pv /mnt/lfs/boot; mount -t ext2 /dev/sdb1 /mnt/lfs/boot).
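Spelled out with the same device names, the corrected sequence is:

$ mount /dev/sdb3 /mnt/lfs
$ mkdir -pv /mnt/lfs/boot
$ mount -t ext2 /dev/sdb1 /mnt/lfs/boot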

For more tales of installation havoc, keep reading…


I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Richard M. Stallman: Troll

October 10th, 2011 15 comments

If you’ve been living under a rock for the past week, you may not be aware that Steve Jobs, co-founder and legendary CEO of Apple Inc., recently died after a protracted battle with pancreatic cancer. After the announcement of his death, many news outlets (tech-oriented and otherwise) ran lengthy tributes to a man who has forever (and often disruptively) altered more industries than any other figure in recent memory.

The day after Jobs’ death, Free Software visionary and GNU Project founder Richard M. Stallman had this to say about the man:

Steve Jobs, the pioneer of the computer as a jail made cool, designed to sever fools from their freedom, has died.

As Chicago Mayor Harold Washington said of the corrupt former Mayor Daley, “I’m not glad he’s dead, but I’m glad he’s gone.” Nobody deserves to have to die – not Jobs, not Mr. Bill, not even people guilty of bigger evils than theirs. But we all deserve the end of Jobs’ malign influence on people’s computing.

Unfortunately, that influence continues despite his absence. We can only hope his successors, as they attempt to carry on his legacy, will be less effective.

Upon finding this post via Twitter, my immediate reaction was a deep loss of respect for Stallman, a man whose contributions to the Free Software movement cannot be overstated. The way that I see it, Stallman and Jobs are one and the same. Both are (or were, in the case of the latter) visionaries, both contributed immeasurably to an industry that employs, informs, and entertains me on a daily basis, and both are/were zealots when it came to their personal opinions about software.

Now I’m not an Apple guy. Far from it, in fact. I don’t own a single Apple product, I use Linux whenever and wherever possible, and I only break from the four essential freedoms when obtaining and enjoying media that cannot be accessed otherwise. But regardless of your thoughts on Steve Jobs, the man deserves your respect.

While Stallman qualified his statement by noting that nobody deserves to die, he also aimed his personal fanaticism about the perceived threat of non-free software squarely at the shoulders of one man among many.

There’s something about Freedom that Stallman doesn’t seem to understand (or doesn’t want to, as all accounts paint him as a pretty smart dude). It’s a simple point, and one that needs to be reiterated often: Freedom is the right to choose. In politics, in products, and in computing, freedom is the right to choose what is best for you.

Steve Jobs put his ideas and his products into the free market, and paying customers often chose them above those of Stallman. Perhaps those customers got shafted, but when faced with a choice between the freedom to edit configuration files and the beautiful design of an Apple product, they unsurprisingly chose the latter.

That’s freedom, whether you like it or not. Fuck Richard Stallman.

Further Reading:

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Create a GStreamer powered Java media player

March 14th, 2011 1 comment

For something to do I decided to see if I could create a very simple Java media player. After doing some research, and finding out that the Java Media Framework was no longer in development, I decided to settle on GStreamer to power my media player.

GStreamer, for the uninitiated, is a very powerful multimedia framework that offers both low-level pipeline building and high-level playback abstraction. What’s nice about GStreamer, besides being completely open source, is that it presents a unified API no matter what type of file it is playing. For instance, if the user only has the free, high-quality GStreamer codecs installed, referred to as the good plugins, then the API will only play those files. If however the user installs the other plugin sets as well, be it the bad or ugly sets, the API remains the same and thus you don’t need to update your code. Unfortunately, because GStreamer is a C library, this approach does have some drawbacks, notably the need to include the JNA jar as well as the system-specific libraries. In that respect it is similar to how SWT works.


Assuming that you already have a Java development environment, the first thing you’ll need is to install GStreamer. On Linux odds are you already have it, unless you are running a rather stripped-down distro or don’t have many media players installed (both Rhythmbox and Banshee use GStreamer). If you don’t, it should be pretty straightforward to install along with your choice of plugins. On Windows you’ll need to head over to ossbuild where they have downloadable installers.
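If you do need to install it, the packages on a Debian/Ubuntu-style distro look roughly like this; the names are assumed from the gstreamer0.10 era, so your distribution may differ:

sudo apt-get install gstreamer0.10-plugins-base gstreamer0.10-plugins-good
# and, if you want the less-free codec sets as well:
sudo apt-get install gstreamer0.10-plugins-bad gstreamer0.10-plugins-ugly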

The second thing you’ll need is gstreamer-java, which you can grab over at their website here. You’ll need to download both gstreamer-java-1.4.jar and jna-3.2.4.jar. Both might contain some extra files that you probably don’t need and can prune out later if you’d like. Set up your development environment so that both of these jar files are in your build path.
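If you aren’t using an IDE, the classpath is the only real trick. A sketch, assuming both jars sit in a lib/ directory beside your source and that your class is called MyMediaPlayer to match the Gst.init call below (use ; instead of : on Windows):

javac -cp lib/gstreamer-java-1.4.jar:lib/jna-3.2.4.jar MyMediaPlayer.java
java -cp .:lib/gstreamer-java-1.4.jar:lib/jna-3.2.4.jar MyMediaPlayer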

Simple playback

GStreamer offers highly abstracted playback engines called PlayBins. This is what we will use to actually play our files. Here is a very simple code example that demonstrates how to actually make use of a PlayBin:

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File("song.mp3")); // whatever file you want to play

     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);
}

So what does it all mean?

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

The above line takes the incoming command line arguments and passes them to the Gst.init function, which returns a new set of arguments. If you have ever done any GTK+ programming before, this should be instantly recognizable to you. Essentially what GStreamer is doing is grabbing, and removing, any GStreamer-specific arguments before your program actually processes them.

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File("song.mp3"));

The first line of code creates a standard PlayBin; the string is just a name for this particular instance. The PlayBin element is built right into GStreamer and automatically sets up a default pipeline for you. Essentially this lets us avoid all of the low-level craziness that we would normally have to deal with if we were starting from scratch.

The next line sets the PlayBin’s VideoSink (think of sinks as output locations) to a “fakesink”, or null sink. The reason we do this is that PlayBins can play both audio and video. For the purposes of this player we only want audio playback, so we redirect all video output to the “fakesink”.

The last line is pretty straightforward and just tells GStreamer what file to play.


     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);

Finally, with the above lines of code we tell the PlayBin to actually start playing and then enter the GStreamer main loop, which runs for the duration of playback. The last line resets the PlayBin’s state and does some cleanup.

Bundle it with a quick GUI

To make it a little more friendly I wrote a very quick GUI to wrap all of the functionality. The download links for that (binary only package), as well as the source (all package), are below. And there you have it: a very simple cross-platform media player that will play back pretty much anything you throw at it.

Please note that I have provided this software purely as a quick example. If you are really interested in developing a GStreamer powered Java application you would do yourself a favor by reading the official documentation.

Binary Only Package: Version March 13, 2011; file size 1.5MB; Download Here.
All Package: Version March 13, 2011; file size 1.51MB; Download Here.

Originally posted on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).