Archive

Archive for the ‘Free Software’ Category

CoreGTK

January 28th, 2014 2 comments

A while back I made it my goal to put together an open source project as my way of contributing back to the community. Well, fast-forward a couple of months and my hobby project is finally ready to see the light of day. I give you… CoreGTK

CoreGTK is an Objective-C binding for the GTK+ library which wraps all objects descending from GtkWidget (plus a few others here and there). Like other “core” Objective-C libraries it is designed to be a very thin wrapper, so that anyone familiar with the C version of GTK+ should be able to pick it up easily.

However, the real goal of CoreGTK is not to replace the C implementation for everyday use but instead to allow developers to more easily code GTK+ interfaces using Objective-C. This could be especially useful if a developer already has a program, say one they are developing for the Mac, and they want to port it to Linux or Windows. With a little bit of MVC separation, a savvy developer would only need to rewrite the GUI portion of their application in CoreGTK.

So what does a CoreGTK application look like? Pretty much like a normal Objective-C program:

/*
 * Objective-C imports
 */
#import <Foundation/Foundation.h>
#import "CGTK.h"
#import "CGTKButton.h"
#import "CGTKSignalConnector.h"
#import "CGTKWindow.h"

/*
 * C imports
 */
#import <gtk/gtk.h>

@interface HelloWorld : NSObject
/* This is a callback function. The data arguments are ignored
 * in this example. More callbacks below. */
+(void)hello;

/* Another callback */
+(void)destroy;
@end

@implementation HelloWorld
int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /* We could also use CGTKWidget here instead */
    CGTKWindow *window;
    CGTKButton *button;

    /* This is called in all GTK applications. Arguments are parsed
     * from the command line and are returned to the application. */
    [CGTK autoInitWithArgc:argc andArgv:argv];

    /* Create a new window */
    window = [[CGTKWindow alloc] initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

    /* Here we connect the "destroy" event to a signal handler in 
     * the HelloWorld class */
    [CGTKSignalConnector connectGpointer:[window WIDGET] 
        withSignal:@"destroy" toTarget:[HelloWorld class] 
        withSelector:@selector(destroy) andData:NULL];

    /* Sets the border width of the window */
    [window setBorderWidth: [NSNumber numberWithInt:10]];

    /* Creates a new button with the label "Hello World" */
    button = [[CGTKButton alloc] initWithLabel:@"Hello World"];

    /* When the button receives the "clicked" signal, it will call the
     * function hello() in the HelloWorld class (below) */
    [CGTKSignalConnector connectGpointer:[button WIDGET] 
        withSignal:@"clicked" toTarget:[HelloWorld class] 
        withSelector:@selector(hello) andData:NULL];

    /* This packs the button into the window (a gtk container) */
    [window add:button];

    /* The final step is to display this newly created widget */
    [button show];

    /* and the window */
    [window show];

    /* All GTK applications must have a [CGTK main] call. Control ends here
     * and waits for an event to occur (like a key press or
     * mouse event). */
    [CGTK main];

    [pool release];

    return 0;
}

+(void)hello
{
    NSLog(@"Hello World");
}

+(void)destroy
{
    [CGTK mainQuit];
}
@end
Hello World in action
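As an aside, if you want to build a stand-alone example like this by hand rather than through the project's own build files, the compile needs both the Objective-C runtime flags and the GTK+ flags. Here is a rough sketch on Linux using GNUstep's tooling – the file name, the GTK+ 2 version and especially the coregtk library name are all assumptions, so treat the project's makefiles as the real authority:

# Rough build sketch -- file and library names are assumptions
gcc helloworld.m -o helloworld \
    `gnustep-config --objc-flags` \
    `pkg-config --cflags --libs gtk+-2.0` \
    `gnustep-config --base-libs` \
    -lcoregtk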

And because Objective-C is completely compatible with regular old C code, there is nothing stopping you from simply extracting the GTK+ objects and using them like normal.

// Use it as an Objective-C CoreGTK object!
CGTKWindow *cWindow = [[CGTKWindow alloc] 
    initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

// Or as a C GTK+ window!
GtkWindow *gWindow = [cWindow WINDOW];

// Or even as a C GtkWidget!
GtkWidget *gWidget = [cWindow WIDGET];

// This...
[cWindow show];

// ...is the same as this:
gtk_widget_show([cWindow WIDGET]);

You can even use a UI builder like Glade, import the XML and wire the signals up to Objective-C instance and class methods.

CGTKBuilder *builder = [[CGTKBuilder alloc] init];
if(![builder addFromFile:@"test.glade"])
{
    NSLog(@"Error loading GUI file");
    return 1;
}

[CGTKBuilder setDebug:YES];

NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
                 [CGTKCallbackData withObject:[CGTK class] 
                     andSEL:@selector(mainQuit)], @"endMainLoop",
                 [CGTKCallbackData withObject:[HelloWorld class] 
                     andSEL:@selector(hello)], @"on_button2_clicked",
                 [CGTKCallbackData withObject:[HelloWorld class] 
                     andSEL:@selector(hello)], @"on_button1_activate",
                 nil];

[builder connectSignalsToObjects:dic];

CGTKWidget *w = [builder getWidgetWithName:@"window1"];
if(w != nil)
{
    [w showAll];
}

[builder release];

So there you have it, that’s CoreGTK in a nutshell.

There are a variety of ways to help me out with this project if you are so inclined. The first task is probably just to get familiar with it. Download CoreGTK from the GitHub project page and play around with it. If you find a bug (very likely) please create an issue for it.

Another easy way to get familiar with CoreGTK is to help write or fix documentation – a lot of which lives in the source code itself. Sadly, most of the current documentation simply states which underlying GTK+ function is called, so it could be cleaned up quite a bit.

At the moment there really isn’t anything more formal than that in place, but of course code contributions would also be welcome!

Update: added some pictures of the same program running on all three operating systems.

Hello World on Windows

Hello World on Mac

Hello World on Linux

This post originally appeared on my personal website here.




I am currently running a variety of distributions, primarily Ubuntu 14.04.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Check out my profile for more information.

An Ambitious Goal

August 1st, 2013 3 comments

Ever since we announced the start of the third Linux Experiment I’ve been trying to think of a way in which I could contribute that would be different from the excellent ideas the others have come up with so far. After batting around some ideas over the past week I think I’ve finally figured out how I want to contribute back to the community. But first, a little back story.

A large project now, GNOME was started because there wasn’t a good open source alternative at the time

During the day I develop commercial software. An unfortunate result of this is that my personal hobby projects often get put on the back burner because, in all honesty, when I get home I’d rather be doing something else. As a result I’ve developed, pun intended, quite a catalogue of projects which are currently on hold until I can find the time and motivation to actually make something of them. These projects run the gamut from little helper scripts, written to make my life more convenient, all the way up to desktop applications designed to take on bigger tasks. The sad thing is that while a lot of these projects have potential I simply haven’t been able to finish them, and I know that if I could they would be of use to others as well. So for this Experiment I’ve decided to finally do something with them.

Thanks to OpenOffice.org, LibreOffice and others there are actual viable open source alternatives to Microsoft Office

Open source software is made up of many different components. It is simultaneously one part idea (perhaps a different way to accomplish X would be better), one part ideal (the belief that sometimes it is best to give code away for free), one part execution (often a developer just “scratching an itch” or trying out a new technology) and one part delivery (someone enthusiastically giving it away and building a community around it). In fact that’s the wonderful thing about all of the projects we know and love; they all started because someone somewhere thought they had something to share with the world. And that’s what I plan to do. For this Linux Experiment I plan on giving back by setting one of my hobby projects free.

Before this open source web browser we were all stuck with Internet Explorer 6

Now obviously this is not only ambitious but perhaps quite naive as well, especially given the framework of The Linux Experiment – I fully recognize that I have quite a bit of work ahead of me before any of my hobby code is ready to be viewed, let alone used, by anyone else. I also understand that, given my own personal commitments and available time, it may be quite a while before anything actually comes of this plan. None of this is exactly well suited to something like The Linux Experiment, which thrives on fresh content; there’s no point in me taking part if I won’t be ready to make a new post until months from now. That is why my Experiment contributions won’t rely only on the open sourcing of my code; rather, I will be posting about the thought process and research that goes into starting an open source project.

Topics that I intend to cover will be relevant to people wishing to free their own creations, and will include things such as:

  • weighing the pros and cons as well as discussing the differences between the various open source licenses
  • the best place to host code
  • how to structure the project in order to (hopefully) get good community involvement
  • etc.

An interesting side effect of this approach will be a somewhat new look into the process of open sourcing a project as it is written up piece by piece, step by step, rather than in retrospect.

The first billion dollar company built on open source software

Coincidentally, as I write this post, the excellent website tuxmachines.org has put together a group of links discussing the pros of starting open source projects. I’ll be sure to read up on those after I first commit to this ;)

Linux: a hobby project initially created and open sourced by one 21 year old developer

I hope that by the end of this Experiment I’ll have at least provided enough information for others to take their own back burner projects to the point where they too can share their ideas and creations with the world… even if I never actually get to that point myself.

P.S. If anyone out there has experience in starting an open source project from scratch, or has any helpful insights or suggestions, please post in the comments below; I would really love to hear them.





Installing Netflix on Kubuntu

July 27th, 2013 4 comments

The machine I am running Kubuntu on is primarily used for streaming media like Netflix and YouTube, watching files off of a shared server and downloading media.

I decided to try to install Netflix first since it is something I use quite often. I am engrossed in watching the first season of Orange is the New Black and the last season of The West Wing.

Again, I resorted to Googling exactly what I was looking for and came across this fantastic post.

I opened a Terminal instance in Kubuntu and literally copied and pasted the text from the link above.

After going through these motions, I had a functioning instance of Netflix! Woo hoo.

So I decided to throw on an episode of Orange is the New Black; it loaded perfectly… without sound.

Well shit! I never even thought to see if my audio driver had been picked up… so I guess I should probably go ahead and fix that.

This isn’t going well.

July 26th, 2013 No comments

Today I started out by going into work, only to discover that it is NEXT Friday that I need to cover.

So I came home and decided to get a jump start on installing Kubuntu.

I am now at a screeching halt because the hardware I am using has Win8 installed on it, and when I boot into the Startup Settings I lose the ability to use my keyboard. This is going swimmingly.

So, it is NOW about 3 hours later.

In this time, I have cursed, yelled, felt exasperated and been downright pissed.

This is mainly because Windows 8 does not make it easy to get to the boot loader. In fact, the handy Windows-made video that is supposed to walk you through how EASY and user-friendly the process of changing system settings is fails to mention what to do if the “Use a Device” option is nowhere to be found (as it was in my case).

So I relied on Google, which is usually pretty good about answering questions about stupid computer issues. I FINALLY came across one post that stated that, due to how quickly Windows 8 boots, there is no time to press F2 or F8. However, I tried anyway. F8 is the key to selecting what device you want to boot from, as you will see later in this post.

What you will want to do when installing any version of Linux is first format a USB stick to hold your Linux distro. I used Universal USB Loader. The nice thing about this loader is that you don’t have to have already downloaded the .iso for the distro you want to use. You have the option of downloading it right in the program.

After you have selected your distro, downloaded the .iso and loaded it onto your USB stick, now comes the fun part. Plug your USB stick into the computer you wish to load Linux onto.
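If you happen to have another machine already running Linux, you can also skip the GUI tool entirely and write the .iso straight to the stick from a terminal. A minimal sketch, assuming the stick shows up as /dev/sdb and using a hypothetical .iso name (double-check with lsblk first, because dd will happily overwrite the wrong drive):

# Identify the USB stick first
lsblk

# Write the image to the stick (device and file names are assumptions)
sudo dd if=kubuntu-13.04-desktop-amd64.iso of=/dev/sdb bs=4M
sync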

Considering how easy this was once I figured it all out, I do feel rather silly. If I were to have to do it again, I would feel much more knowledgeable.

If you are using balls-ass Windows 8, like I was, the EASIEST way to select an alternate device to boot from is to restart the computer and press F8 a billion times until a menu pops up, letting you choose from multiple devices. Choose the device with the name of the USB stick, for me it was PENDRIVE.

Once you press enter (from a keyboard that is attached directly to the computer you are using via USB cable, because apparently Win8 loses the ability to use wireless USB devices before the OS has fully booted… at least that was my experience), the machine boots from the stick.

So now, I am being prompted to install Kubuntu (good news, I already know it supports my projector, because I can see this happening).

Now, I have had to plug in a wired USB keyboard and mouse for this process so far. This makes life a little bit difficult because the computer I am using sits in a closet, too far away from my projector screen. This makes it almost impossible for me to see what is going on on the screen. So installing the drivers for my wireless USB devices is a bit of a pain.

However, the hard part is over. The OS installed successfully. My next post will detail how the hell to install wireless USB devices. I will probably also make a fancy signature, so you all know what I am running.

Come on, really?!

July 25th, 2013 3 comments

So it is 9:40 PM and I have started my “Find a Linux distro to install” process. Like many people, I decided to type exactly what I wanted to search into Google. Literally, I typed “Linux Distro Chooser” into Google. Complex and requiring great technical skill, I know.

My next mission was to pick the site that had a description with the least amount of “sketch”. Meaning, I picked the first site in the Google results. I then used my well-honed multiple choice skills (ignore the question, pick B) to find my perfect Linux distro match.

After several pages of clicking through, I was presented with a list of Linux distributions that fit my needs and hardware.

See, a nice list, with percents and everything.

This picture has everything… percents, mints, Man Drivers…

So naturally, I do what everyone does with lists… look at my options and pick the one with the prettiest picture.

For me that distro was Kubuntu. It has a cool-sounding name that starts with the same letter as my name.

So I follow the link through to the website to pull the .iso and this pops up.

God damn Drupal!

I have dealt with Drupal before, as it was the platform the website I did data entry for was built on. Needless to say, I hate it. Hey Web Dev with Trev, if you are out there, I hope you burn your toast the next time you make some.

So, to be productive while waiting for Drupal to fix its shit, I decided to start a post and rant. In the time this took, the website for Kubuntu recovered (for now).

So, I downloaded my .iso and am ready to move it onto a USB stick.

I’m debating whether I want to install it now or later, as I would really like to watch some West Wing tonight. I know that if I start this process and fuck it up, I am going to be forced to move upstairs where there is another TV, but it is small :(

Well, here I go, we’ll see how long it takes me to install it. If you are reading this, go ahead and time me… it may be a while.

An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article about Vint Cerf, best known as one of the men behind the TCP/IP protocol that underpins modern Internet communication, in which he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. Beyond this minor inconvenience, it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past from mediums that allow a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value, relatively speaking at least, and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically, probably nothing. I hate to sound so down about it, but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large-scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of the ability to interpret our current formats decreases. This solution isn’t just a nice pie-in-the-sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups, and individuals are all beginning to recognize this same issue, and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which would be the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, the Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However, they differ enough that they are 1) not interoperable and 2) software written to import/export one format cannot be easily made to support the other. While OOXML too is an open standard, there have been some concerns about just how open it actually is. For instance, take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its own standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are, technologically speaking, on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above, there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However, the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have overtaken computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which lets you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that, for me, the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML, and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem-free. For most things the conversion process should be as simple as opening the current document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience, however, things are almost never quite as simple as that. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
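For what it’s worth, LibreOffice can also do this kind of conversion in bulk from the command line, which beats opening and re-saving documents one at a time. A quick sketch, assuming a reasonably recent LibreOffice and no other running instance of it (the directory layout here is hypothetical):

# Batch-convert every .doc in the current directory to OpenDocument .odt
soffice --headless --convert-to odt --outdir converted *.doc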

If, however, you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn’t make the jump to OpenDocument. Personally, I’m going to attempt to convert my existing .doc documents to OpenDocument (where possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.

I’ll write a follow-up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly, I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.





The apps of KDE 4.10 Part I: Rekonq

April 6th, 2013 No comments

It’s been a while since I’ve used KDE; however, with the recent rapid (and not always welcome) changes going on in the other two main desktop environments (GNOME 3 and Unity), and the, in my opinion, feature stagnation of environments like Xfce and LXDE, I decided to give KDE another shot.

My goal this time is to write up a series of quick reviews of KDE as presented as an overall user experience. That means I will try to stick to the default applications for getting my work done. Obviously, depending on the distribution you choose you may have a different set of default KDE applications, and that’s fine. So before you ask: no, I won’t be doing another write-up for KDE distribution X just because you think it’s ‘way better for including A instead of B’. I’m also going to try not to cover what I consider more trivial things (i.e. the installer/installation process) and instead focus on what counts when it comes to using an operating system day-to-day.

Rekonq

The default web browser in the distribution I chose is not Konqueror but rather its WebKit cousin Rekonq. Where Konqueror uses KHTML by default and WebKit as an option, Rekonq sticks to the more conventional rendering engine used by Safari and Chrome.

This is not Rekonq, it is Konqueror

Rekonq is a very minimalistic-looking browser, to the point where I often thought I had accidentally started up Chrome instead.

This is Rekonq

From my time using it, Rekonq seems to be a capable browser, although it is certainly not the speediest, nor does it sport any features that I couldn’t find elsewhere. One thing it does do very nicely is integrate with the rest of the KDE desktop. This means that the first time you visit YouTube or some other Flash website, you get a nice little prompt in the system tray alerting you of the option to install new plugins. If you choose to install the plugin, a little window appears telling you what it is downloading and installing for you, completely automatically. No need to visit a vendor’s website or go plugin hunting online.

Like most other KDE applications Rekonq also allows for quite a bit of customization, although I found its menus to be very straightforward and not nearly as intimidating as some other applications.

The settings menu

I did notice a couple of strange things while working with Rekonq that I should probably mention. First off, while typing into a WordPress edit window none of the shortcut keys (i.e. Ctrl+B = bold) seemed to work. I also found that I couldn’t perform a Shift+Arrow Key selection of text, instead having to use Ctrl+Shift+Arrow Key, which highlights an entire word at a time. I’m not sure what other websites may suffer from similar irregularities, so while Rekonq is a fine browser in its own right, you may want to keep another one around just in case.

Browsing the best website on the net

While I haven’t found any real show-stoppers with Rekonq, I still can’t shake the feeling that I’m missing something. I don’t know how to describe it other than I think I would feel safer using a more mainstream web browser like Firefox, Chrome or even Opera. But like any software, your experience may vary and I would certainly never recommend against trying Rekonq (or even Konqueror). Who knows, you may find out that it is your new favorite web browser.





Using Linux to keep an old work PC alive

November 19th, 2012 1 comment

A couple of years ago I helped a small business convert their old virus-infected Windows XP computer to a Linux Mint 11 (Katya) Xfce machine. This was done after a long time spent trying to help them keep that machine running at a half-decent speed – the virus being the last straw that finally had them make the switch to Linux. Amazingly (well, maybe not to the Linux faithful, but to most people), this transition not only went smoothly but was actually extremely well received. Outside of a question or two every couple of months, I have heard of no issues whatsoever. Sadly, Linux Mint 11 has recently reached its end-of-life stage and so the time has come to find a replacement.

The Situation

When I said this was an old Windows XP machine I wasn’t kidding. It sports a speedy (sarcasm) 2.4GHz Intel Pentium 4 processor, 512 whole megabytes of RAM and an Intel integrated graphics card. With specs like those, it is pretty obvious that the only two real considerations (from a technical standpoint) are low resource requirements and speed. I’d be tempted to jump to a more specialized distribution like Puppy Linux, but the people using the machine are A) already used to Linux Mint and B) expecting a familiar, fully featured operating system experience.

Where is Linux Mint today?

Linux Mint 13 has recently been released (including an Xfce version) based on the latest Ubuntu 12.04 stable release. This makes it an ideal candidate for an upgrade because it is something already familiar to the users and will be supported until April 2017.

The following are the steps I took, in no real order, to setup and configure Linux Mint 13 Xfce for their use:

Pre/During Install Configuration

  1. Encrypt the home directory
    Because this is a work computer and will be storing sensitive financial information I configured it to encrypt everything in the home directory. Better safe than sorry.

Post Install Configuration

  1. Install Google Chrome
    I removed Mozilla Firefox and installed Google Chrome for two reasons. First, Chrome tends to be, or at least feel, a little bit snappier than even the latest version of Firefox, and as I mentioned above speed is king. Secondly, unless something changes, Google’s Chrome (not even Chromium) will be the only Linux browser that will continue to get Adobe Flash updates in a straightforward and easy way for the user. (Rough terminal versions of this and the other package-install steps are sketched just after this list.) UPDATE: ironically the only issue I found with this whole install related to Google’s embedded Adobe Flash. For some reason the audio on this particular version ran at double speed. This is apparently a known issue.
  2. Install Rhythmbox
    I also removed Banshee and installed Rhythmbox instead. This was done not because I consider one better than the other (or even that these two represent the only options), but simply because the users were already familiar with Rhythmbox. They use Rhythmbox to listen to streaming Internet radio.
  3. Remove unnecessary software (Pidgin, XChat, GNOME Mplayer and Totem)
    Not because they are bad applications, they just simply weren’t needed. I kept VLC because it can play pretty much any audio or video format.
  4. Add Trash can to desktop and remove Filesystem icon
  5. Remove all but one workspace
  6. Install preload to speed up commonly used packages on startup
  7. Configure LibreOffice
    The goal of this step is to set up LibreOffice in such a way as to make it use less memory while still keeping most of the functionality. In order to accomplish this I changed the number of undo steps from 100 to 30 and disabled the Java components.
  8. Change screensaver to blank screen
    This looks more professional and uses less memory.
  9. Spin down hard drive when possible
    While I was at it I also went into power management and had the system spin down the hard drives when possible. This configuration had nothing to do with performance, in fact spinning down the drives can slow access to files, but was done because they often just leave the PC running 24-7 and it is not in use at all during the night. I’m sure this will save them a couple of cents per year or something.
  10. Disabled unused startup services like Bluetooth
    The machine doesn’t even have a Bluetooth radio.
  11. Set it so that inserting a removable drive causes the system to open a window for browsing the contents
  12. Change the system tray clock time format from 24-hour time to 12-hour time.
    This was a user preference.
  13. Set updates to be downloaded from the best available server
  14. Install Microsoft fonts (i.e. ttf-mscorefonts-installer)
  15. Install 7zip, rar and unrar
    You never know what kind of random archive formats they might need to open, so it is better to support them all.
  16. Change login screen theme
    The default login screen is nice but it isn’t the most user friendly. I opted to install the Mint Pro (MDM) theme from GNOME-Look.org.
  17. Install all updates
  18. Run the Grub boot profiler to speed up the boot process
    If you’re not aware of this one, it is a great trick. Essentially, once you have everything installed (driver wise at least) you do the following:
    -Modify /etc/default/grub and change the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash profile".
    -Then run sudo update-grub2 and reboot.
    -The next reboot might be slower, but once the machine comes back up simply edit that file again and remove the "profile" text. Your computer will now load drivers in the order the hard drive head passes over them, instead of in some other arbitrary order, which can shave a couple of seconds off of your total boot time.
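For reference, here is roughly what the package-related steps above (1, 6, 14 and 15) look like from a terminal. This is a sketch only: the Chrome download URL and the exact package names are assumptions based on the Ubuntu 12.04 repositories that Linux Mint 13 pulls from.

# Step 1: swap Firefox for Google Chrome (32-bit .deb for this Pentium 4)
wget https://dl.google.com/linux/direct/google-chrome-stable_current_i386.deb
sudo apt-get remove firefox
sudo dpkg -i google-chrome-stable_current_i386.deb
sudo apt-get install -f    # pull in any dependencies dpkg could not resolve

# Step 6: install preload
sudo apt-get install preload

# Step 14: install the Microsoft core fonts
sudo apt-get install ttf-mscorefonts-installer

# Step 15: install 7zip, rar and unrar support
sudo apt-get install p7zip-full p7zip-rar rar unrar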

How did it turn out?

Surprisingly well. The machine isn’t a speed demon by any stretch of the imagination, but it performs its simple tasks well enough. It remains to be seen whether the computer will make it to the next long-term release of Linux Mint Xfce, or even whether it will be able to run it at that time, but for now the users are happy and that is what matters.

Staying in shape with open source software

November 21st, 2011 No comments

On a good week, I consider myself an avid runner. Right now I’m training to run a 5k in the spring; ideally, I’ll be able to get it under 20 minutes. Now, two of the keys to exercise are setting goals and tracking your progress. Clearly I’ve got the first half under control, but the second half? Well, it turns out that’s where a lot of people falter, lose motivation, and ultimately fail. I’m no exception – I’ve tried running without really tracking my progress and I found that eventually I just gave up. Manually drawing routes, estimating distances, and keeping time all take effort, and frankly I didn’t have the wherewithal to do it. Thankfully, modern technology has come to save the day. I use a Google Nexus S, which comes with a GPS, and the Android Market has dozens of apps for tracking exercise.

Google My Tracks

Google happens to make an open source app that tracks runs (My Tracks). It supports waypoints (so you can get data on each mile or kilometre of your run), and it records your speed and altitude. All in all, it’s a very handy app and I use it regularly for my runs. The software integrates with Google accounts and lets you upload your runs to Google Maps and track statistics via spreadsheets in Google Docs. And if you’re the sharing type, it also exports your runs as .gpx and .kml files and supports sharing through Twitter.

Main My Tracks spreadsheet

My Tracks summary statistics

Pytrainer

I discovered Pytrainer through an entry at another blog. If you’re more inclined to keep your data offline, it might be a better solution for you. In order to use Pytrainer, you’ll have to import the .gpx files from your phone and specify the types of activities you were tracking (running, cycling, etc.). To get the mapping to work properly, I had to install the gpsbabel package. Once that was set up, I had the option to use either Google Maps or the Open Map Project. The program allows you to enter information about heart rate, calories, and equipment as well, but I didn’t have any of that information available. Gathered statistics are aggregated and can be examined for specified time periods, activities, and athletes.
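Incidentally, gpsbabel is handy on its own too, if you ever want to convert tracks outside of Pytrainer – for example, turning a My Tracks .gpx export into a .kml file for Google Earth. A small sketch (the file names here are hypothetical):

# Convert a GPX track into KML (gpsbabel handles dozens of formats)
gpsbabel -i gpx -f morning_run.gpx -o kml -F morning_run.kml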

Uploading a new run into Pytrainer

Mapping my run

Summary statistics in Pytrainer

Endomondo

This doesn’t technically fall into the category of open source, but I feel compelled to add it because it’s actually my preferred tracking solution. Endomondo is a website (with an associated Android app) that allows you to track routes, with the added benefits of calorie estimation, social integration (such as competitions and commenting/”pep-talks”), and a general smoothness in functionality that the other solutions don’t really reach. It also has a “coach” feature and workout playlists available, but I don’t make much use of those. Not that I have anything against those functions, but for personal safety reasons I prefer not to run with headphones.

Endomondo workout imported from My Tracks

My choices

After testing out the programs and apps mentioned here, I’ve decided to go with My Tracks and Endomondo. I chose My Tracks because it integrates seamlessly with Google Maps and Docs (I like screwing around with spreadsheets) and because, despite looking stripped down and simple, it’s actually excellent at what it does. As for Endomondo: its functions overlap considerably with My Tracks, but the social environment and the excellent website make it very appealing and easy to use. The main reason it won out over Pytrainer is that the app takes away any uploading – the second I’m done my workout, it’s available online.


LFS: Installing VLC

November 6th, 2011 1 comment

Since installing Linux From Scratch, one of the main issues I’ve been having is the playback of audio and video files. VLC does both quite well, so I decided to install it.

Like most of my installs in Linux From Scratch, there are millions of dependencies, and you have to install each one manually. I found that the CBLFS VLC page was a great help in determining which packages were required.

One thing I noticed is that even though it lists some packages as “Optional,” VLC will not compile without a few of them. The easiest way to deal with this is to just install the optional packages as required.

I only ran into one issue while compiling:

D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/var/lib/dbus/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
D-Bus not built with -rdynamic so unable to print a backtrace
Aborted

The quick fix is to just run the following (as root, since it writes under /var/lib/dbus):

dbus-uuidgen > /var/lib/dbus/machine-id

Now that VLC is compiled, you can run it anytime by running vlc from the command line. Make sure you don’t pull a Jake and run it as root. It will yell at you.


I am currently running ArchLinux (x86_64).
Check out my profile for more information.