Archive for the ‘Open Source Software’ Category

Setting up Syncthing to share files on Linux

February 21st, 2016

Syncthing is a file sharing application that lets you easily, and securely, share files between computers without having to store them on a third party server. It is most analogous to BitTorrent Sync (BTS) but whereas BTS is somewhat undocumented and closed source, Syncthing is open source and uses an open protocol that can be independently verified.

This is going to be a basic guide to configuring Syncthing to sync a folder between multiple computers. I’m also going to configure it to start automatically when the system boots and run in the background so it doesn’t get in your way if you don’t want to see it.

Download and Install

While it may be possible to get Syncthing from your distribution’s repositories I prefer to grab it straight from the project. For example you can download the appropriate build for your Linux computer (such as the 64 bit syncthing-linux-amd64-v0.12.19.tar.gz) right from their website.

Extract the contents to a new folder in your home directory (or wherever else you want it to live). One important thing to note is that whatever user will be running the program, for example your user account, needs write access to that folder so that Syncthing can auto-update itself. For example you could extract the files to ~/syncthing/ to make things easy.
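
As a quick sketch, assuming the 64 bit v0.12.19 tarball mentioned above, the extraction might look like this:

# Create the target folder and extract the release into it
mkdir -p ~/syncthing
tar -xzf syncthing-linux-amd64-v0.12.19.tar.gz -C ~/syncthing --strip-components=1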

To start Syncthing all you need to do is execute the syncthing binary in that directory. If you want Syncthing to start without also opening the browser you can simply run it with the -no-browser flag or change this behaviour in the settings.
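
For example, with the files extracted to ~/syncthing/ as above, starting it without the browser might look like:

# Start Syncthing without automatically opening the web browser
~/syncthing/syncthing -no-browser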

If you are on Debian, Ubuntu or derivatives (such as Linux Mint) there is also an official repository you can add. The steps can be found here but I’ve re-listed them below for completeness’ sake:

# Add the release PGP keys:
curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

# Add the "release" channel to your APT sources:
echo "deb http://apt.syncthing.net/ syncthing release" | sudo tee /etc/apt/sources.list.d/syncthing.list

# Update and install syncthing:
sudo apt-get update
sudo apt-get install syncthing

This will install syncthing to /usr/bin/syncthing. In order to specify a configuration location you can pass the -home flag which would look something like this:

/usr/bin/syncthing -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

So to set up syncthing to start automatically without the browser using the specified configuration you would simply add this to your list of startup applications:

/usr/bin/syncthing -no-browser -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

There are plenty of ways to configure Syncthing to start automatically but the one described above is a pretty universal method. If you would rather integrate it with your system using runit/systemd/upstart just take a look at the etc folder in the tar.gz; a rough systemd sketch also follows below.
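
For instance, on a systemd based system a minimal user unit might look something like the sketch below (this is just a sketch on my part; the official example units in the tarball's etc folder are the better starting point, and the path assumes the /usr/bin install from the repository):

# ~/.config/systemd/user/syncthing.service (rough sketch)
[Unit]
Description=Syncthing file synchronization

[Service]
ExecStart=/usr/bin/syncthing -no-browser -home=%h/.config/syncthing
Restart=on-failure

[Install]
WantedBy=default.target

You would then enable and start it with systemctl --user enable syncthing.service followed by systemctl --user start syncthing.service.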

Here is an example of my Linux Mint configuration in the Startup Applications control panel using the command listed above:

It’s easy enough to get Syncthing started

Configure Syncthing

Once Syncthing is running you should be able to browse to its interface by going to http://localhost:8384 (the default on current releases; older configurations may still use http://localhost:8080). From this point forward I’m going to assume you want to sync between two computers which I will refer to as Computer 1 and Computer 2.

First let’s start by letting Computer 1 know about Computer 2 and vice versa.

  1. On Computer 1 click Actions > Show ID. Copy the long device identification text (it will look like a series of XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-…).
  2. On Computer 2 click Add Device and enter the copied Device ID and give it a Device Name.
  3. Back on Computer 1 you may notice a New Device notification which will allow you to easily add Computer 2 there as well. If you do not see this notification simply follow the steps above but in reverse, copying Computer 2’s device ID to Computer 1.

Once both computers know about each other they can begin syncing!

In order to share a folder you need to start by adding it to Syncthing on one of the two computers. To make it simple I will do this on Computer 1. Click Add Folder and you will see a popup asking for a bunch of information. The important ones are:

  • Folder ID: This is the name or label of the shared folder. It must be the same on all computers taking part in the share.
  • Folder Path: This is where you want Syncthing to store the files on the local computer. For example on Computer 1 I might want this to be ~/Sync/MyShare but on Computer 2 it could be /syncthing/shares/stuff.
  • Share With Devices: These are the computers you want to share this folder with.

So for example let’s say I want to share a folder called “CoolThings” and I want it to live in ~/Sync/CoolThings on Computer 1. Filling in this information would look like this:

Syncthing folder setup

Finally to share it with Computer 2 I would check Computer 2 under the Share With Devices section.

Once done you should see a new notification on Computer 2 asking if you want to add the newly shared folder there as well.

Syncthing alerts you to newly shared folders

Once done the folder should be shared and anything you put into the folder on either computer will be automatically synchronized on the other.
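
For what it’s worth, the resulting entry in Syncthing’s config.xml (stored under the -home directory) ends up looking roughly like the sketch below; the exact attributes vary by version and the device IDs here are placeholders:

<folder id="CoolThings" path="/home/user/Sync/CoolThings" rescanIntervalS="60">
    <device id="COMPUTER1-DEVICE-ID"></device>
    <device id="COMPUTER2-DEVICE-ID"></device>
</folder>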

If you would like to add a third or fourth computer just follow the steps above again. Pretty easy, no?

This post originally appeared on my website here.





CoreGTK 3.10.2 Released!

February 19th, 2016

The next version of CoreGTK, version 3.10.2, has been tagged for release today.

Highlights for this release:

  • This is a bug fix release.
  • Corrected issue with compiling CoreGTK on OS X.

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





Turn your computer into your own Chromecast

February 7th, 2016

Google Chromecasts are neat little devices that let you ‘cast’ (send) media from your phone or tablet to play on your TV. If, however, you already have a computer hooked up to your TV you don’t need to go out and buy a new device simply to get the same functionality. Instead you can install the excellent Leapcast program and accomplish the same thing.

Leapcast works on all major operating systems – Windows, Mac and Linux – but for the purposes of this post I’m going to be focusing on how to set it up on a Debian based Linux distribution.

Step 1) Install Google Chrome browser

The Google Chrome browser is required for Leapcast to work correctly so the first thing you’ll need to do is head over to the download page and install it.
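
If you would rather do this from the terminal, downloading and installing the .deb package looks something like this (using Google's standard direct download link):

# Download and install the Google Chrome .deb package
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb
# Pull in any missing dependencies
sudo apt-get install -f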

Step 2) Install miscellaneous required applications and libraries

Leapcast also requires a few extra tools and libraries to be installed.

sudo apt-get install virtualenvwrapper python-pip python-twisted-web python2.7-dev

Step 3) Download Leapcast

Head over to the GitHub page and download the zip of the latest Leapcast code. Alternatively you can also install git and use it to grab the latest code that way:

git clone https://github.com/dz0ny/leapcast.git

Step 4) Install Leapcast

In the leapcast directory run the following command. Note you may need to be root in order to do this without error.

sudo python setup.py develop

Step 5) Run Leapcast

Now that Leapcast is installed you should be able to run it. Simply open a terminal and type:

leapcast

There are some other neat options you can pass it as well. For example if you want your computer to show up as, say, TheLinuxExperiment when someone goes to cast to it simply pass the --name parameter.

leapcast --name TheLinuxExperiment

Happy casting!





Open formats are… the best formats?

January 17th, 2016

Over the past few years there has been a big push to replace proprietary formats with open formats. For example Open Document Format and Office Open XML have largely replaced the legacy binary formats, we’re now seeing HTML5 + JavaScript supplant Silverlight and Java applets, and even the once venerable Flash is on its deathbed.

This of course all makes sense. We’re now in an era where the computing platforms, be it Microsoft Windows, Apple OS X, Android, iOS, Linux, etc., simply don’t command the individual market shares (or at least mind shares) that they once did. Things are… more diversified now. And while they may not matter to the user, the underlying differences in technologies certainly matter to the developer. This is one of the many reasons you see lots of movement toward open formats, where the same format can be implemented, relatively easily, on all of the aforementioned platforms.

So then the question must be asked: does this trend mean that open formats are the best formats? That is a simple question about a deep (and perhaps subjective) subject, so perhaps it’s better to look at it from a user adoption perspective. Does being an open format, given all of its advantages, translate to market adoption? There the answer is not as clear.

Open by example

Let’s take a look at a few instances where a clear format winner exists and see if it is an open format or a closed/proprietary format.

Documents

When it comes to documents the Open Document Format and Office Open XML have largely taken over. This has been driven in large part by Microsoft making Office Open XML the default file format in all versions of Microsoft Office since 2007. Additionally many governments and organizations around the world have standardized on the use of Open Document Format. That said the older Microsoft Office binary formats (i.e. .doc, .xls, etc.) are still widely in use.

Verdict: open formats have largely won out.

Audio

For the purposes of the “audio” category let’s consider simply the audio codec that most people use to consume their music. In that regard MP3 is still the absolute dominant format. While it is somewhat encumbered by patents you will hardly find a single device out there that doesn’t support it. This is true even when there are better lossy compression formats (including the proprietary AAC or open Ogg Vorbis) as well as lossless formats like FLAC.

Verdict: the closed/proprietary MP3 format is the de facto standard.

Video

Similarly for the “video” category I’ll only be focusing on the codecs. While there are plenty of open video formats (Theora, WebM, etc.) they are not nearly as well supported as proprietary formats like MPEG-2, H.264, etc. Additionally the open formats (in general) don’t have quite as good quality-to-size ratios as the proprietary ones, which is often why you’ll see websites using the proprietary formats in order to save on bandwidth.

Verdict: closed/proprietary formats have largely won out.

File Compression

Compression is something that most people think of more as an algorithm than a format, which is why I’ll be focusing on the compressed file container formats for this category. In that regard the ZIP file format is by far the most common. It has native support in every modern operating system and offers decent compression. Other open formats, such as 7-Zip, offer better compression, and even some proprietary formats, like RAR, have seen widespread use, but for the most part ZIP is the go-to format. What muddies the waters here a bit is that the base ZIP format is open but some of the features added later on were not. However the majority of uses are based on the open standard.

Verdict: the open ZIP format is the most widely used standard.

Native Applications vs Web Apps

While applications may not, strictly speaking, be a format it does seem to be the case that every year there are stories about how web apps will soon replace native applications. So far however the results are a little mixed, with e-mail being a perfect example of this paradox. For personal desktop e-mail, web apps (mostly Gmail and the like) have largely replaced native applications like Microsoft Outlook and Thunderbird. On mobile however the majority of users still access their e-mail via native “apps”. And even then, in enterprises the majority of e-mail usage is still done via native applications. I’m honestly not sure which will eventually win out, if either, but for now let’s call it a tie.

Verdict: tie.

The answer to the question is…

Well just on the five quick examples above we’ve got wins for two open formats, wins for two closed/proprietary formats and one tie. So clearly, based on market adoption, we’re at a standstill.

Personally I’d prefer if open formats would take over because then I wouldn’t have to worry about my device supporting the format in question or not. Who knows, maybe by next year we’ll see one of the two pull ahead.

This post originally appeared on my website here.





Let’s write a very simple text editor in CoreGTK

December 5th, 2015

In this post I’ll quickly show how you can write a very simple text editor using CoreGTK. This example is purposely very basic, has no real error handling and few features but it does show how to use CoreGTK in various ways.

To start with I quickly threw something together in GLADE (which, if you aren’t aware, is an excellent drag and drop GUI editor for GTK+).

Very basic shell in GLADE

Next I created a SimpleTextEditor class that will house the majority of my logic and stubbed out my callbacks and methods.

@interface SimpleTextEditor : NSObject
{
    CGTKTextView *txtView;
    CGTKWidget *window;
}

-(void)show;

// Callbacks
-(void)winMain_Destroy;
-(void)btnNew_Clicked;
-(void)btnOpen_Clicked;
-(void)btnSave_Clicked;

// Helper methods to deal with the text view
-(NSString *)getText;
-(void)setText:(NSString *)text;

@end

Now the fun part begins: filling in the implementation of the methods. First create the init and dealloc methods:

-(id)init
{
    self = [super init];
    
    if(self)
    {
        CGTKBuilder *builder = [[CGTKBuilder alloc] init];
        if(![builder addFromFileWithFilename:@"gui.glade" andErr:NULL])
        {
            NSLog(@"Error loading GUI file");
            return nil;
        }
        
        NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
            [CGTKCallbackData withObject:self andSEL:@selector(winMain_Destroy)], @"winMain_Destroy",
            [CGTKCallbackData withObject:self andSEL:@selector(btnNew_Clicked)], @"btnNew_Clicked",
            [CGTKCallbackData withObject:self andSEL:@selector(btnOpen_Clicked)], @"btnOpen_Clicked",
            [CGTKCallbackData withObject:self andSEL:@selector(btnSave_Clicked)], @"btnSave_Clicked",
            nil];
        
        [CGTKBaseBuilder connectSignalsToObjectsWithBuilder:builder andSignalDictionary:dic];
        
        // Get a reference to the window
        window = [CGTKBaseBuilder getWidgetFromBuilder:builder withName:@"winMain"];
        
        // Get a reference to the text view
        txtView = [[CGTKTextView alloc] initWithGObject:[[CGTKBaseBuilder getWidgetFromBuilder:builder withName:@"txtView"] WIDGET]];
        
        [builder release];
    }
    
    return self;
}
-(void)dealloc
{
    [txtView release];
    [window release];
    [super dealloc];
}

OK let’s break down what we’ve done so far.

CGTKBuilder *builder = [[CGTKBuilder alloc] init];
if(![builder addFromFileWithFilename:@"gui.glade" andErr:NULL])
{
    NSLog(@"Error loading GUI file");
    return nil;
}

First thing is to parse the GLADE file which is what this code does. Next we need to connect the signals we defined for the different events in GLADE to the callback methods we defined in our code:

NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
    [CGTKCallbackData withObject:self andSEL:@selector(winMain_Destroy)], @"winMain_Destroy",
    [CGTKCallbackData withObject:self andSEL:@selector(btnNew_Clicked)], @"btnNew_Clicked",
    [CGTKCallbackData withObject:self andSEL:@selector(btnOpen_Clicked)], @"btnOpen_Clicked",
    [CGTKCallbackData withObject:self andSEL:@selector(btnSave_Clicked)], @"btnSave_Clicked",
    nil];

[CGTKBaseBuilder connectSignalsToObjectsWithBuilder:builder andSignalDictionary:dic];

Finally extract and store references to the window and the text view for later:

// Get a reference to the window
window = [CGTKBaseBuilder getWidgetFromBuilder:builder withName:@"winMain"];

// Get a reference to the text view
txtView = [[CGTKTextView alloc] initWithGObject:[[CGTKBaseBuilder getWidgetFromBuilder:builder withName:@"txtView"] WIDGET]];

Before we can test anything out we need to fill in a few more basic methods to show the window on command and to exit the GTK+ loop when we close the window:

-(void)show
{
    [window showAll];
}

-(void)winMain_Destroy
{
    [CGTK mainQuit];
}

Now we can actually use our SimpleTextEditor so let’s write a main method to create it:

int main(int argc, char *argv[])
{    
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /* 
     * This is called in all GTK applications. Arguments are parsed
     * from the command line and are returned to the application. 
     */
    [CGTK autoInitWithArgc:argc andArgv:argv];
    
    // Create and display editor
    SimpleTextEditor *editor = [[SimpleTextEditor alloc] init];
    
    // Check for error
    if(editor == nil)
    {
        return 1;
    }
    
    // Show the window    
    [editor show];
    
    // Start GTK+ loop
    [CGTK main];

    // Release allocated memory
    [editor release];
    [pool release];

    // Return success
    return 0;
}
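
How exactly you compile will depend on how CoreGTK and GNUstep are set up on your system, but as a rough sketch (the source file names and the CoreGTK library/include flags here are assumptions on my part) the command might look something like:

# Rough sketch; adjust file names and CoreGTK library/include flags to your setup
clang `gnustep-config --objc-flags` `pkg-config --cflags gtk+-3.0` \
    main.m SimpleTextEditor.m MultiDialog.m -o simpletexteditor \
    `gnustep-config --base-libs` `pkg-config --libs gtk+-3.0` -lcoregtk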

Compile and run this and you’ll be presented with a cool Simple Text Editor window!

Our very… err… simple text editor

So far so good. Let’s keep filling in our stubbed in methods starting with our helper methods that will allow us to manipulate the underlying text buffer:

-(NSString *)getText
{
    gchar *gText = NULL;
    GtkTextBuffer *buf = NULL;
    GtkTextIter start, end;
    NSString *nsText = nil;
    
    // Grab reference to text buffer
    buf = [txtView getBuffer];
    
    // Determine the bounds of the buffer
    gtk_text_buffer_get_bounds (buf, &start, &end);
    
    // Get the gchar text from the buffer
    gText = gtk_text_buffer_get_text(buf, &start, &end, FALSE);
    
    // Convert it to an NSString
    nsText = [NSString stringWithUTF8String:gText];
    
    // Free the allocated gchar string
    g_free(gText);

    // Return the text
    return nsText;
}

-(void)setText:(NSString *)text
{
    // Get reference to text buffer
    GtkTextBuffer *buf = [txtView getBuffer];
    
    // Set contents of text buffer
    gtk_text_buffer_set_text(buf, [text UTF8String], -1);
}

At this point we have everything we need to implement our New button click callback method:

-(void)btnNew_Clicked
{
    [self setText:@""];
}

Like I said this is a pretty basic example so in a real world application I would hope you would prompt the user before blowing away all of their text!

All that’s left to do at this point is to implement the Open and Save callback methods. For these I’m going to create a new class, MultiDialog, to show how you can still really dig into the GTK+ C code when you need to.

@interface MultiDialog : NSObject
{
}

+(NSString *)presentOpenDialog;
+(NSString *)presentSaveDialog;

@end

And here is the implementation:

@implementation MultiDialog

+(NSString *)presentOpenDialog
{
    // Variables
    CGTKFileChooserDialog *dialog = nil;
    gchar *gText = NULL;
    gint result;
    NSString *filename = nil;

    // Create the dialog itself
    dialog = [[CGTKFileChooserDialog alloc] initWithTitle:@"Open File" andParent:nil andAction:GTK_FILE_CHOOSER_ACTION_OPEN];
    
    // Add cancel and open buttons
    gtk_dialog_add_button ([dialog DIALOG],
                   "_Cancel",
                   GTK_RESPONSE_CANCEL);
    gtk_dialog_add_button ([dialog DIALOG],
                   "_Open",
                   GTK_RESPONSE_ACCEPT);
    
    // Run the dialog
    result = gtk_dialog_run (GTK_DIALOG ([dialog WIDGET]));

    // If the user clicked Open
    if(result == GTK_RESPONSE_ACCEPT)
    {
        // Extract the filename and convert it to an NSString
        gText = gtk_file_chooser_get_filename ([dialog FILECHOOSERDIALOG]);
        filename = [NSString stringWithUTF8String:gText];
    }

    // Cleanup
    g_free(gText);
    gtk_widget_destroy ([dialog WIDGET]);
    [dialog release];
    
    return filename;
}

+(NSString *)presentSaveDialog
{
    // Variables
    CGTKFileChooserDialog *dialog = nil;
    gchar *gText = NULL;
    gint result;
    NSString *filename = nil;

    // Create the dialog itself
    dialog = [[CGTKFileChooserDialog alloc] initWithTitle:@"Save File" andParent:nil andAction:GTK_FILE_CHOOSER_ACTION_SAVE];
    
    // Add cancel and save buttons
    gtk_dialog_add_button ([dialog DIALOG],
                   "_Cancel",
                   GTK_RESPONSE_CANCEL);
    gtk_dialog_add_button ([dialog DIALOG],
                   "_Save",
                   GTK_RESPONSE_ACCEPT);

    // Set settings
    gtk_file_chooser_set_do_overwrite_confirmation ([dialog FILECHOOSERDIALOG], TRUE);
    gtk_file_chooser_set_current_name([dialog FILECHOOSERDIALOG], "Untitled document");
    
    // Run the dialog
    result = gtk_dialog_run (GTK_DIALOG ([dialog WIDGET]));

    // If the user clicked Save
    if(result == GTK_RESPONSE_ACCEPT)
    {
        // Extract the filename and convert it to an NSString
        gText = gtk_file_chooser_get_filename ([dialog FILECHOOSERDIALOG]);
        filename = [NSString stringWithUTF8String:gText];
    }

    // Cleanup
    g_free(gText);
    gtk_widget_destroy ([dialog WIDGET]);
    [dialog release];
    
    return filename;
}

@end

There is quite a bit of code there but hopefully the comments make it pretty easy to follow. Now that we have our MultiDialog class we can use it in our SimpleTextEditor methods:

-(void)btnOpen_Clicked
{
    NSString *filename = [MultiDialog presentOpenDialog];
    // Do nothing if the user cancelled the dialog
    if(filename == nil)
    {
        return;
    }
    [self setText:[NSString stringWithContentsOfFile:filename encoding:NSUTF8StringEncoding error:NULL]];
}

-(void)btnSave_Clicked
{
    NSString *filename = [MultiDialog presentSaveDialog];
    // Do nothing if the user cancelled the dialog
    if(filename == nil)
    {
        return;
    }

    NSString *text = [self getText];

    NSError *error = nil;
    BOOL succeed = [text writeToFile:filename atomically:YES encoding:NSUTF8StringEncoding error:&error];

    if(!succeed)
    {
        NSLog(@"%@:%@ Error saving: %@", [self class], NSStringFromSelector(_cmd), [error localizedDescription]);
    }
}

And there you have it: a very simple text editor that lets you open text files and save them. You can find the full source for this application under the examples directory of the CoreGTK github repository.

 

This post originally appeared on my website here.





Linux alternatives: Mp3tag → puddletag

November 28th, 2015

Way back when I first made my full-time switch to Linux I made a post about an alternative to the excellent Mp3tag software on Windows. At the time I suggested a program called EasyTAG and while that is still a good program I’ve recently come across one that I think I may actually like more: puddletag.

A screenshot of puddletag from their website

While it is very similar to EasyTAG I find puddletag’s layout a bit easier to navigate and use.

This post originally appeared on my personal website here.





Big distributions, little RAM 9

November 28th, 2015

It’s been a while but once again here is the latest instalment in the series of posts where I install the major, full desktop distributions on a limited hardware machine and report on how they perform. Once again, and like before, I’ve decided to re-run my previous tests, this time using the following distributions:

  • Debian 8.2 (Cinnamon)
  • Debian 8.2 (GNOME)
  • Debian 8.2 (KDE)
  • Debian 8.2 (MATE)
  • Debian 8.2 (Xfce)
  • Elementary OS 0.3.1 (Freya)
  • Kubuntu 15.10 (KDE)
  • Linux Mint 17.2 (Cinnamon)
  • Linux Mint 17.2 (MATE)
  • Linux Mint 17.2 (Xfce)
  • Mageia 5 (GNOME)
  • Mageia 5 (KDE)
  • Ubuntu 15.10 (Unity)
  • Xubuntu 15.10 (Xfce)

I also attempted to install Fedora 23, Linux Mint 17.2 (KDE) and OpenSUSE 42.1 but none of them were able to complete installation.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 10GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 5, and I did not install the VirtualBox Guest Additions (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run prior to December 2015 so your results may not be identical.
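
If you want to reproduce the setup yourself, creating an equivalent virtual machine from the command line might look roughly like this (the VM and disk names are just examples):

# Rough sketch of an equivalent VirtualBox VM
VBoxManage createvm --name "distro-test" --register
VBoxManage modifyvm "distro-test" --memory 512 --pae on --accelerate3d on
VBoxManage createhd --filename distro-test.vdi --size 10240
VBoxManage storagectl "distro-test" --name "SATA" --add sata
VBoxManage storageattach "distro-test" --storagectl "SATA" --port 0 --device 0 --type hdd --medium distro-test.vdi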

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against one another. Measurements were taken using the free -m command for memory and the df -h command for disk usage.

Like before I have provided the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above it’s probably because it wasn’t reasonably possible to install (e.g. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). As always feel free to run your own tests and link them in the comments for everyone to see.

Quick Info

  • Out of the Cinnamon desktops tested Debian 8.2 had the lowest memory footprint
  • Out of the GNOME desktops tested Mageia 5 had the lowest memory footprint
  • Out of the KDE desktops tested Mageia 5 had the lowest memory footprint
  • Out of the Xfce desktops tested Debian 8.2 had the lowest memory footprint
  • Out of the MATE desktops tested Debian 8.2 had the lowest memory footprint
  • Elementary OS 0.3.1 had the highest memory footprint of those tested
  • Debian 8.2 Xfce and MATE tied for the lowest memory footprint of those tested
  • Debian 8.2 Xfce had the lowest install size of those tested
  • Kubuntu 15.10 had the largest install size of those tested
  • Elementary OS 0.3.1 had the lowest change after updates (+2MiB)
  • Mageia 5 KDE had the largest change after updates (-265MiB)

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Conclusion

Once again I will leave the conclusions to you. Source data provided below.

Source Data





Archive your IMAP e-mail offline in Thunderbird

September 20th, 2015

Thunderbird is an excellent e-mail client and has built in e-mail archiving, however one thing that it doesn’t do intuitively is offline archiving. Here’s the situation: you have an IMAP account in Thunderbird and you want to archive some old e-mail offline (take it off of the IMAP server completely). Simply using Thunderbird’s archive feature will create an Archives folder in your IMAP account and move everything there, which isn’t exactly what you want. Instead what you need to do is actually move these e-mails to a new location under your Local Folders. Once the move is complete you can verify that they are indeed now stored locally and (optionally) delete them from the IMAP account.

Hopefully this helps out anyone else looking for a solution to an offline IMAP archive!





A distro hopping experiment

September 12th, 2015

Over the last little while I’ve become quite comfortable using a single distribution, Linux Mint, for my day-to-day needs. While this has obviously allowed the operating system to, in a sense, disappear into the background and let me do “real” work it has had the side effect that I haven’t been as exposed to the interesting changes happening elsewhere on the Linux landscape.

That’s why I’ve decided to run my own mini experiment of sorts where I leave the comfort of Linux Mint and start off on a journey of hopping between different distributions again. I don’t exactly know how long I’ll be staying on each distribution but the goal is to stay for around two weeks or so in order to get a good feel for that distribution. Heck I may even throw in the occasional BSD or other alternative operating system here and there as well just to mix things up. I also plan on trying to stick with the majority of the defaults (settings, programs, etc.) that ship with the distribution so that I get the intended experience.

So join me as I jump around and if you have any suggestions for distributions to try let me know!





CoreGTK 3.10.1 Released!

September 8th, 2015

The next version of CoreGTK, version 3.10.1, has been tagged for release today.

Highlights for this release:

  • Added some missing (varargs) GTK+ functions. This makes it easier to create widgets like the FileChooserDialog.

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





CoreGTK 3.10.0 Released!

August 20th, 2015

The next version of CoreGTK, version 3.10.0, has been tagged for release today.

Highlights for this release:

  • Move from GTK+ 2 to GTK+ 3
  • Prefer the use of glib data types over boxed OpenStep/Cocoa objects (i.e. gint vs NSNumber)
  • Base code generation on GObject Introspection instead of a mix of automated source parsing and manual correction
  • Support for GTK+ 3.10

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





Taking a look at some Linux e-mail clients

July 12th, 2015

Many people now use a browser based solution, like Gmail, for all of their e-mail needs however there are still plenty of reasons why someone might want to use a local e-mail client as well. In this post I’m going to take a look at some of the graphical e-mail client options available on Linux.

Balsa

I have to admit that I hadn’t even heard of Balsa before looking up e-mail clients to include in this list. In my limited time using it Balsa seems to be a relatively simple e-mail client that still offers quite a few options (it supports POP3 and IMAP as well as PGP/GPG and even includes a spell checker) while maintaining a very low memory footprint (less than 7MiB of RAM for an empty inbox). However one thing I couldn’t get working was actually sending e-mail – it’s not that it was difficult to set up, I just simply couldn’t get it to connect to my SMTP server to send the mail. It kept timing out without giving a reason, which was annoying.


Balsa Project Website

Claws Mail

Similar to Balsa, Claws is also a very lightweight e-mail client that offers quite a few standard features but can also be expanded upon via plugins. Interestingly I couldn’t figure out a way to compose a non-plaintext (i.e. HTML) e-mail so perhaps the developers are of the opinion that e-mail should only be sent as text?


Claws Mail Project Website

Evolution

Evolution is/was (depending on who you ask) the gold standard for what an e-mail client on Linux should be. You can think of it as a complete Outlook replacement as it does so much more than just e-mail (contacts, calendar, memos, etc.) all without the need for additional plugins. This does come at a bit of a price as Evolution certainly feels heavier and uses more memory than some other e-mail only clients.


Evolution Project Website

Geary

Geary is a relative newcomer and has been getting quite a bit of attention as it is included as the default e-mail client in elementary OS. This application is beautiful but very, very streamlined. You won’t find things like plugins, PGP/GPG, or loads of configuration options here; instead Geary focuses on being the best out-of-the-box user experience it can be.


Geary Project Website

GNUMail.app

GNUMail.app is quite a bit different from the other e-mail clients on this list. It is associated with the GNUstep project and runs on both Linux and Mac OS X. Unfortunately while trying to use it on Linux I found myself at a loss… I simply couldn’t figure out how to use the thing! I managed to configure my account settings but could never get it to actually download any e-mail. So without actually being able to use the application I don’t have much else to say about it.


GNUMail.app Wikipedia Page

KMail

KMail provides the e-mail duties for the Kontact Personal Information Manager collection of software. It is a fully featured e-mail client and, because of the other Kontact applications, offers a compelling pseudo-integrated alternative to something heavy like Evolution. This is especially true if you are using the KDE desktop environment where things feel even more integrated.


KMail Project Website

Sylpheed

Sylpheed and Claws Mail are very similar, which makes sense because they used to be the same project (one was simply a place to try new features before putting them into the “real” project). Even though they share a lineage, Sylpheed and Claws Mail now have different code bases and development teams. That said there aren’t very many obvious differences between the two at this point.


Sylpheed Project Website

Thunderbird

Thunderbird is one of the most popular free/open source e-mail clients around and for good reason. It offers a good amount of features and can make use of plugins to add even more functionality. While it may not quite match Evolution in terms of advanced functionality, for most people, myself included, it works very well.


Thunderbird Project Website





Big distributions, little RAM 8

July 11th, 2015

It’s been a while but once again here is the latest instalment in the series of posts where I install the major, full desktop distributions on a limited hardware machine and report on how they perform. Once again, and like before, I’ve decided to re-run my previous tests, this time using the following distributions:

  • Debian 8 (Cinnamon)
  • Debian 8 (GNOME)
  • Debian 8 (KDE)
  • Debian 8 (MATE)
  • Debian 8 (Xfce)
  • Elementary OS 0.3 (Freya)
  • Kubuntu 15.04 (KDE)
  • Linux Mint 17.1 (Cinnamon)
  • Linux Mint 17.1 (KDE)
  • Linux Mint 17.1 (MATE)
  • Linux Mint 17.1 (Xfce)
  • Mageia 4.1 (GNOME)
  • Mageia 4.1 (KDE)
  • OpenSUSE 13.2 (GNOME)
  • OpenSUSE 13.2 (KDE)
  • Ubuntu 15.04 (Unity)
  • Ubuntu Mate (MATE)
  • Xubuntu 15.04 (Xfce)

I also attempted to install Fedora 21 and Linux Mint 17.2 (KDE) but they just wouldn’t go.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.3.30, and I did not install the VirtualBox Guest Additions (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run prior to June 2015 so your results may not be identical.

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against one another. Measurements were taken using the free -m command for memory and the df -h command for disk usage.

Like before I have provided the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above it’s probably because it wasn’t reasonably possible to install (e.g. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). As always feel free to run your own tests and link them in the comments for everyone to see.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Conclusion

Once again I will leave the conclusions to you. Source data provided below.

Source Data





CoreGTK now supports GTK+ 3 and is built from GObject Introspection

July 1st, 2015

It has been quite a while since the first release of CoreGTK back in August 2014 and in that time I’ve received a lot of very good feedback about the project, what people liked and didn’t like, as well as their wishlists for new features. While life has been very busy since then I’ve managed to find a little bit of time here and there to implement many of the changes that people were hoping for. As mentioned in my previous post here are the highlighted changes for this new version of CoreGTK:

Move from GTK+ 2 to GTK+ 3

GTK+ 3 is now the current supported widget toolkit and has been since February 2011. Now that GTK+ 3 is supported on all platforms (Windows, Mac and Linux) it makes sense to move over and take advantage of the updated features.

Additionally this allows for a natural break in compatibility with the previous release of CoreGTK. What that means for the end user is that I currently don’t have any plans on going back and applying any of these new ideas/changes to the old GTK+ 2 version of the code base, instead focusing my time and effort on GTK+ 3.

Prefer the use of glib data types over boxed OpenStep/Cocoa objects (i.e. gint vs NSNumber)

When originally designing CoreGTK I decided to put a stake in the ground and simply always favour OpenStep/Cocoa objects where possible. The hope was that this would allow for easier integration with existing Objective-C code bases. Unfortunately good intentions don’t always work out in the best way. One of the major pieces of feedback I got was to take a less strict approach on this and drop the use of some classes where it makes sense. Specifically keep using NSString instead of C strings but stop using NSNumber in place of primitives like gint (which itself is really just a C int). The net result of this change is far less boilerplate code and faster performance.

So instead of writing this:

/* Sets the default size of the window */
[window setDefaultSizeWithWidth: [NSNumber numberWithInt:400] andHeight: [NSNumber numberWithInt:300]];

you can now simply write this:

/* Sets the default size of the window */
[window setDefaultSizeWithWidth: 400 andHeight: 300];

Base code generation on GObject Introspection instead of a mix of automated source parsing and manual correction

The previous version of CoreGTK was, shall we say, hand crafted. I had written some code to parse header files and generate a basic structure for the Objective-C output but there was still quite a bit of manual work (days/weeks/months) involved to clean up this output and make it what it was. Other than the significant investment in time required to make this happen it was also prone to errors and would require starting back at square one with every new release of GTK+.

This time around the output is generated using GObject Introspection, specifically by parsing the generated GIR file for that library with the new utility CoreGTKGen. The process of generating new CoreGTK bindings using CoreGTKGen now takes just a couple of seconds and produces very clean and simple source code files. This is also really just the start as I’m sure there are plenty of improvements that can be made to CoreGTKGen to make it even better! Perhaps equally exciting is that once this process is perfected it should be relatively easy to adapt it to support other GObject Introspection supported libraries like Pango, Gdk, GStreamer, etc.

Let’s have an example shall we?

While there are a couple of good examples over at the Getting Started page of the Wiki and even within the CoreGTK repo itself I figured I would show something different here. It has always been my goal with this project to make it as easy as possible for existing Objective-C users to port their applications to GTK+. Perhaps you were previously using a widget toolkit like Cocoa on the Mac and now you want to release your application on more platforms. What better way than to keep your existing business logic and swap out the GUI (you do practice good MVC right? :P).

So going with this idea here is a tutorial of porting the “Start Developing Mac Apps Today” example from Apple’s developer website here. This application is incredibly simplistic but basically lets you set a “volume” value either by typing in a number in the text box at the top, moving the slider up and down, or pressing the Mute button. Regardless of which action you take the rest of the GUI is updated to match.

Step 1) Setup the GUI

For this I will be using GLADE as a replacement for the Xcode Interface Builder but you could always program your GUI by hand as well.

From the Apple website we are trying to re-create something that looks like this:

The Apple example application

Thankfully in GLADE this is relatively easy and I was able to do a quick and dirty mock up resulting in this:

The GLADE mock-up

Step 2) Configure GUI signals (i.e. events)

GLADE also makes this easy: simply click on the widget, flip over to the Signals tab and type in your handler name.


Here are the ones I created:

  • window (GtkWindow)
    • Signal: destroy
    • Handler: endGtkLoop
  • entry (GtkEntry)
    • Signal: changed
    • Handler: takeValueForVolume
  • scale (GtkScale)
    • Signal: value-changed
    • Handler: sliderValueChanged
  • mute_button (GtkButton)
    • Signal: clicked
    • Handler: muteButtonClicked

Step 3) Create classes

Even though Cocoa and GTK+ don’t map exactly the same I decided to follow Apple’s conventions where it made sense just for consistency.

AppDelegate.h

#import "CoreGTK/CGTKEntry.h"
#import "CoreGTK/CGTKScale.h"

#import "Track.h"

@interface AppDelegate : NSObject
{
    CGTKEntry *textField;
    CGTKScale *slider;
    Track *track;
    BOOL updateInProgress;
}

@property (nonatomic, retain) CGTKEntry *textField;
@property (nonatomic, retain) CGTKScale *slider;
@property (nonatomic, retain) Track *track;

/* Callbacks */
-(void)mute;
-(void)sliderChanged;
-(void)takeValueForVolume;

/* Methods */
-(void)updateUserInterface;

-(void)dealloc;

@end

AppDelegate.m

#import "AppDelegate.h"

@implementation AppDelegate

@synthesize textField;
@synthesize slider;
@synthesize track;

/* Callbacks */
-(void)mute
{
    if(!updateInProgress)
    {
        updateInProgress = YES;
        
        [self.track setVolume:0.0];
    
        [self updateUserInterface];
        
        updateInProgress = NO;
    }
}

-(void)sliderChanged
{
    if(!updateInProgress)
    {
        updateInProgress = YES;
        
        [self.track setVolume:[self.slider getValue]];
    
        [self updateUserInterface];
        
        updateInProgress = NO;
    }
}

-(void)takeValueForVolume
{
    NSString *text = [self.textField getText];
    if([text length] == 0)
    {
        return;
    }
    
    if(!updateInProgress)
    {
        updateInProgress = YES;
        
        double newValue = [[self.textField getText] doubleValue];
    
        [self.track setVolume:newValue];
    
        [self updateUserInterface];
        
        updateInProgress = NO;
    }
}

/* Methods */
-(void)updateUserInterface
{
    double volume = [self.track volume];
    
    [self.textField setText:[NSString stringWithFormat:@"%1.0f", volume]];
    
    [self.slider setValue:volume];
}

-(void)dealloc
{
    [textField release];
    [slider release];
    [track release];
    [super dealloc];
}

@end

Track.h

/*
 * Objective-C imports
 */
#import <Foundation/Foundation.h>

@interface Track : NSObject
{
    double volume;
}

@property (assign) double volume;

@end

Track.m

#import "Track.h"

@implementation Track

@synthesize volume;

@end

Step 4) Wire everything up

In order to make everything work, load the GUI from the .glade file, connect the signals to the AppDelegate class, etc. we need some glue code. I’ve placed this all in the main.m file.

main.m

int main(int argc, char *argv[])
{    
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    
    /* This is called in all GTK applications. Arguments are parsed
    * from the command line and are returned to the application. */
    [CGTK autoInitWithArgc:argc andArgv:argv];
        
    /* Create a builder to load GLADE file */
    CGTKBuilder *builder = [[CGTKBuilder alloc] init];
    
    if([builder addFromFileWithFilename:@"mac_app.glade" andErr:NULL] == 0)
    {
        NSLog(@"Error loading GUI file");
        return 1;
    }
    
    /* Create an AppDelegate to link to the GUI */
    AppDelegate *appDelegate = [[AppDelegate alloc] init];
    
    /* Get text field, wrapping returned Widget in new CGTKEntry */
    appDelegate.textField = [[[CGTKEntry alloc] initWithGObject:(GObject*)[[CGTKBaseBuilder 
        getWidgetFromBuilder:builder withName:@"entry"] WIDGET]] autorelease];
    
    /* Get slider, wrapping returned Widget in new CGTKScale */
    appDelegate.slider = [[[CGTKScale alloc] initWithGObject:(GObject*)[[CGTKBaseBuilder 
        getWidgetFromBuilder:builder withName:@"scale"] WIDGET]] autorelease];
    
    /* Create track class for AppDelegate */
    Track *track = [[Track alloc] init];
    appDelegate.track = [track autorelease];
    
    /* Pre-synchronize the GUI */
    [appDelegate updateUserInterface];
    
    /* Use signal dictionary to connect GLADE signals to Objective-C code */
    NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
                     [CGTKCallbackData withObject:[CGTK class] 
                         andSEL:@selector(mainQuit)], @"endGtkLoop",
                         
                     [CGTKCallbackData withObject:appDelegate 
                         andSEL:@selector(mute)], @"muteButtonClicked",
                         
                     [CGTKCallbackData withObject:appDelegate 
                         andSEL:@selector(sliderChanged)], @"sliderValueChanged",
                         
                     [CGTKCallbackData withObject:appDelegate 
                         andSEL:@selector(takeValueForVolume)], @"takeValueForVolume",
                     nil];

    /* CGTKBaseBuilder is a helper class that maps GLADE signals to Objective-C code */
    [CGTKBaseBuilder connectSignalsToObjectsWithBuilder:builder andSignalDictionary:dic];
    
    /* Show the GUI */
    [[CGTKBaseBuilder getWidgetFromBuilder:builder withName:@"window"] showAll];
    
    /*
     * Release allocated memory
     */
    [builder release];
            
    /* All GTK applications must have a [CGTK main] call. Control ends here
     * and waits for an event to occur (like a key press or
     * mouse event). */
    [CGTK main];
    
    /*
     * Release allocated memory
     */    
    [appDelegate release];
    [pool release];
    
    // Return success
    return 0;
}

 

Step 5) Compile and run

The resulting application running with CoreGTK

So while this is a very basic, quick and dirty example it does prove the point. As for CoreGTK this release is still under development as I try and flush out any remaining bugs but please give it a shot, submit issues or pitch in to help if you’re interested! You can find the CoreGTK project at http://coregtk.org.

Example Source Code
File name: mac_port_example.zip
File hashes: Download Here
License: (LGPL) View Here
File size: 5.3KB
File download: Download Here

This post originally appeared on my personal website here.





Adding GTK+ 3 support and building CoreGTK using GObject Introspection

May 3rd, 2015

It has been a while since I made any mention of my side project CoreGTK. I’m sure many people can relate that with life generally being very busy it is often hard to find time to work on hobby projects like this. Thankfully while that certainly has slowed the pace of development it hasn’t stopped it outright and now I am just about ready to show off the next update for CoreGTK.

First off thank you to everyone who took a look at the previous release. I received quite a few nice comments as well as some excellent feedback and hope to address quite a bit of that here. The feedback plus my own ideas of where I wanted to take the project defined the goal for the next release that I am currently working toward.

Goals for this release:

  • Move from GTK+ 2 to GTK+ 3
  • Prefer the use of glib data types over boxed OpenStep/Cocoa objects (i.e. gint vs NSNumber)
  • Base code generation on GObject Introspection instead of a mix of automated source parsing and manual correction

In order to explain the rationale behind these goals I figured I would address each point in more detail.

Move from GTK+ 2 to GTK+ 3

This one was pretty much a no-brainer. GTK+ 3 is now the current supported widget toolkit and has been since February 2011. Previously my choice to use GTK+ 2 was simply due to the fact that I wanted to make it as cross-platform as possible and at the time of release GTK+ 3 was not supported on Windows. Now that this has changed it only makes sense to continue forward using the current standard.

Additionally this allows for a natural break in compatibility with the previous release of CoreGTK. What that means for the end user is that I currently don’t have any plans on going back and applying any of these new ideas/changes to the old GTK+ 2 version of the code base, instead focusing my time and effort on GTK+ 3.

Prefer the use of glib data types over boxed OpenStep/Cocoa objects (i.e. gint vs NSNumber)

When originally designing CoreGTK I decided to put a stake in the ground and simply always favour OpenStep/Cocoa objects where possible. The hope was that this would allow for easier integration with existing Objective-C code bases. Unfortunately good intentions don’t always work out in the best way. One of the major pieces of feedback I got was to take a less strict approach on this and drop the use of some classes where it makes sense. Specifically keep using NSString instead of C strings but stop using NSNumber in place of primitives like gint (which itself is really just a C int). The net result of this change is far less boilerplate code and faster performance.

So instead of writing this:

/* Sets the border width of the window */
[window setBorderWidth: [NSNumber numberWithInt:10]];

you can now simply write this:

/* Sets the border width of the window */
[window setBorderWidth: 10];

Base code generation on GObject Introspection instead of a mix of automated source parsing and manual correction

The previous version of CoreGTK was, shall we say, hand crafted. I had written some code to parse header files and generate a basic structure for the Objective-C output but there was still quite a bit of manual work involved to clean up this output and make it what it was. Other than the significant investment in time required to make this happen it was also prone to errors and would require starting back at square one with every new release of GTK+. This time around the output is generated using GObject Introspection, specifically by parsing the generated GIR file for that library. Currently, and I must stress that there is still quite a bit of room for improvement, this allows me to generate CoreGTK bindings from scratch within an hour or so. With some of the final touches I have in mind the time required for this should hopefully be down to minutes (the auto-generation itself only takes seconds but it isn’t 100% yet). Better still once this process is perfected it should be relatively easy to adapt it to support other GObject Introspection supported libraries like Pango, Gdk, GStreamer, etc.

So where is this new release?

I am getting closer to showing off this new code but first I have to do a bit of cleanup on it. This hopefully won’t take too much longer and to show you how close I am here is a screenshot of CoreGTK running using GTK+ 3.

coregtk-3

This post originally appeared on my personal website here.





How to easily forward Firefox (PC & Android) traffic through an SSH tunnel

March 29th, 2015 No comments

Say you are travelling, or are at a neighbourhood coffee shop, using whatever unsecured WiFi network they make available. You could either:

  1. trust that no one is sniffing your web traffic, capturing passwords, e-mails, IMs, etc.
  2. trust that no one is using more sophisticated methods to trick you into thinking that you are secure (e.g. a man-in-the-middle attack)
  3. route your Internet traffic through a secure tunnel to your home PC before going out onto the web, protecting you from everyone at your current location

which would you choose?

VPNs and SSH tunnels are actually a relatively easy means for you to be more secure while browsing the Internet from potentially dangerous locations.

Making use of an SSH tunnel on your PC

There are many, many different ways for you to do this but I find using a Linux PC that is running on your home network to be the easiest.

Step 1: Install SSH Server

Configure your home Linux PC. Install ssh (and sshd if it is separate). If you are using Ubuntu this is as easy as running the following command:

sudo apt-get install ssh
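
Once installed it doesn’t hurt to confirm the SSH daemon is actually running before you leave the house. A quick sanity check (on Ubuntu the service is simply named "ssh"; other distributions may call it "sshd"):

# Confirm the SSH daemon is up and listening
sudo service ssh status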

Step 2: Make it easy to connect

Sign up for a free dynamic DNS service like DynDNS or No-IP so that you know of a web address that always points to your home Internet connection. To do this follow the instructions at the service you choose.

Step 3: Connect to tunnel

On your laptop (that you have taken with you to the hotel or coffee shop) connect to your home PC’s SSH server. If you are on Windows you will need a program like PuTTY; see its documentation on how to forward ports. On Linux you can simply use the ssh command. The goal is to forward a dynamic (SOCKS) port to the remote SSH server. For instance, if you are using a Linux laptop the command would look something like:

ssh -D [dynamic port] [user]@[home server] -p [external port number – if not 22]

A concrete example:

ssh -D 4096 user@example.com -p 4000
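
If you only want the tunnel and don’t need an interactive shell on your home PC, the standard OpenSSH -N (don’t run a remote command) and -C (compress traffic) flags can be added as well, for example:

ssh -N -C -D 4096 user@example.com -p 4000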

Step 4: Configure browser to use SSH tunnel proxy

In your browser open the networking options window. This will allow you to tell the browser to forward all of its traffic to a proxy, which in this case, will be our dynamic port that we set up in step 3. Here is an example of my configuration for the example above.
If you don’t feel awesome enough doing the above graphically you can also browse to “about:config” (without quotes) and set the following values:

  • network.proxy.proxy_over_tls
    • Change to true
  • network.proxy.socks
    • Change to "127.0.0.1" with no quotes
  • network.proxy.socks_port
    • Change to the dynamic port set up in step 3 (4096)
  • network.proxy.socks_remote_dns
    • Change to true
    • Note: you cannot actually set this setting graphically but it is highly recommended to configure it as well!
  • network.proxy.socks_version
    • Change to 5
  • network.proxy.type
    • Change to 1
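
Before trusting the setup it’s also worth verifying from the command line that the SOCKS proxy really routes traffic through your home connection. Here is a minimal sketch assuming the tunnel from step 3 is listening on port 4096; ifconfig.me is just one example of a "what is my IP" style service:

# Should print your home IP address, not the coffee shop's
curl --socks5-hostname 127.0.0.1:4096 https://ifconfig.me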

Step 5: Test and use

Browse normally – you are now browsing the Internet by routing all of your traffic (in Firefox) securely through your home PC. Note that this doesn’t actually make web browsing any more secure beyond protecting you from people in your immediate vicinity (i.e. connected to the same insecure WiFi network).


What about Android?

Just like the PC you can also do it on Android even without root access. Please note that while I’m sure there are a few ways to accomplish this, the following is just one way that has worked for me. I’m also assuming that you already have an SSH server to tunnel your traffic through.

Step 1: Install SSH Tunnel

The first thing you’ll want to do is install an application that will actually create the SSH tunnel for you. One such application is the aptly named SSH Tunnel which can be found on the Google Play Store here.

Step 2: Configure SSH Tunnel

Next you’ll want to launch the application and configure it.

  • Set the Host address (either a real domain name, dynamic DNS redirector or IP address of your SSH server) and port to connect on.
  • You’ll also want to configure the User and Password / Passphrase.
  • Check the box that says Use socks proxy.
  • Configure the Local Port that you’ll connect to your tunnel on (perhaps 1984 for the paranoid?)
  • I would recommend checking Auto Reconnect as well, especially if you are on a really poor WiFi connection like at a hotel or something.
  • Finally check Enable DNS Proxy.

Step 3: Connect SSH Tunnel

To start the SSH tunnel simply check the box that says Tunnel Switch.

Step 4: Install Firefox

While you may have a preference for Google Chrome, Firefox is the browser I’m going to recommend setting up the tunnel with. Additionally this way if you do normally use Chrome you can simply leave Firefox configured to always use the SSH tunnel and only switch to it when you want the additional privacy. Firefox can be found on the Google Play store here.

Step 5: Configure Firefox to use SSH Tunnel

In order to make Firefox connect via the SSH tunnel you’ll need to modify some settings. Once you are finished the browser will only work if the SSH tunnel is connected.

  • In the Firefox address bar browse to “about:config” with no quotes.
  • In the page that loads search and modify the following values:
    • network.proxy.proxy_over_tls
      • Change to true
    • network.proxy.socks
      • Change to "127.0.0.1" with no quotes
    • network.proxy.socks_port
      • Change to the SSH Tunnel Local Port set above (1984?)
    • network.proxy.socks_remote_dns
      • Change to true
    • network.proxy.socks_version
      • Change to 5
    • network.proxy.type
      • Change to 1

Step 6: Test and browse normally

Now that you have configured the above you should be able to browse via the tunnel. How can you check if it is working? Simply turn off the SSH Tunnel and try browsing – you should get an error message. Or if you are on a different WiFi you could try using a service to find your IP address and make sure it is different from where you are. For example if you configured Firefox to work via the SSH tunnel but left Chrome as is then visiting a site like http://www.whatismyip.com/ should show different information in each browser.

This post is a compilation of two posts which originally appeared on my personal website here.





Applying updates to Docker and the Plex container

December 7th, 2014 No comments

In my last post, I discussed several Docker containers that I’m using for my home media streaming solution. Since then, Plex Media Server has updated to 0.9.11.4 for non-Plex Pass users, and there’s another update if you happen to pay for a subscription. As the Docker container I used (timhaak/plex) was version 0.9.11.1 at the time, I figured I’d take the opportunity to describe how to

  • update Docker itself to the latest version
  • run a shell inside the container as another process, to review configuration and run commands directly
  • update Plex to the latest version, and describe how not to do this
  • perform leet hax: commit the container to your local system, manually update the package, and re-commit and run Plex

Updating Docker

I alluded to the latest version of Docker having features that make it easier to troubleshoot inside containers. Switching to the latest version was pretty simple: following the instructions to add the Docker repository to my system, then running

sudo apt-get update
sudo apt-get install lxc-docker

upgraded Docker to version 1.3.1 without any trouble or need to manually uninstall the previous Ubuntu package.

Run a shell using docker exec

Let’s take a look inside the plex container. Using the following command will start a bash process so that we can review the filesystem on the container:

docker exec -t -i plex /bin/bash

You will be dropped into a root prompt inside the plex container. Check out the filesystem: there will be a /config and a /data directory pointing to “real” filesystem locations. You can also use ps aux to review the running processes, or even netstat -anp to see active connections and their associated programs. To exit the shell, type exit or press Ctrl+D – the container itself will still be running when you check docker ps -a from the host system.

Updating Plex in-place: My failed attempt

Different Docker containers will have different methods of performing software updates. In this case, looking at the Dockerfile for timhaak/plex, we see that a separate repository was added for the Plex package – so we should be able to confirm that the latest version is available. This also means that if you destroy your existing container, pull the latest image, then launch a new copy, the latest version of Plex will be installed (generally good practice.)

But wait – the upstream repository at http://shell.ninthgate.se/packages/debian/pool/main/p/plexmediaserver/ does contain the latest .deb packages for Plex, so can’t we just run an apt-get update && apt-get upgrade?

Well, not exactly. If you do this, the initial process used to run Plex Media Server inside the Docker container (start.sh) gets terminated, and Docker takes down the entire plex container when the initial process terminates. Worse, if you then decide to re-launch things with docker start plex, the new version is incompletely installed (dpkg partial configuration).

So the moral of the story: if you’re trying this at home, the easiest way to upgrade is to recreate your Plex container with the following commands:

docker stop plex

docker rm plex

# The 'pull' process may take a while - it depends on the original repository and any dependencies in the Dockerfile. In this case it has to pull the new version of Plex.
docker pull timhaak/plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex

Once the container is up and running, access http://yourserver:32400/web/ to confirm that Plex Media Server is running. You can check the version number by clicking the gear icon next to your server in the left navigation panel, then selecting Settings.
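
If you’d rather not click through the web UI, the version can also be confirmed from the command line. This is just a sketch assuming Plex’s unauthenticated /identity endpoint is reachable from the machine you run it on (substitute yourserver as before):

# Returns a small XML document that includes the server version
curl -s http://yourserver:32400/identity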

Hacking the container: commit it and manually update Plex from upstream

If you’re more interested in hacking the current setup, there’s a way to commit your existing Plex image, manually perform the upgrade, and restart the container.

First, make sure the plex container is running (docker start plex) and then commit the container to your local filesystem (replacing username with your preferred username):

docker commit plex username/plex:latest

Then we can stop the container, and start a new instance where bash is the first process:

docker stop plex

docker rm plex

# Replace username with the username you selected above.
docker run -t -i --name="plex" -h plex username/plex:latest /bin/bash

Once inside the new plex container, let’s grab the latest Plex Media Server package and force installation:

curl -O https://downloads.plex.tv/plex-media-server/0.9.11.4.739-a4e710f/plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

dpkg -i plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

# When prompted, select Y to install the package maintainer's versions of files. In my instance, this updated the init script as well as the upstream repository.

Now, we can re-commit the image with the new Plex package. Hit Ctrl+D to exit the bash process, then run:

docker commit plex username/plex:latest

docker rm plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 username/plex /start.sh

# Commit the image again so it will run start.sh if ever relaunched:
docker commit plex username/plex:latest

You’ll also need to adjust your /etc/init/plex.conf upstart script to point to username/plex.

The downside of this method is that you’ve now forked the original Plex image locally and will have to repeat the process for future updates. But hey, wasn’t playing around with Docker interesting?





Running a containerized media server with Ubuntu 14.04, Docker, and Plex

November 23rd, 2014 No comments

I recently took it upon myself to rebuild a general-purpose home server – installing a new Intel 530 240GB solid-state drive to replace a “spinning rust” drive, and installing a fresh copy of Ubuntu 14.04 now that 14.04.1 has been released and there is much less complaining about it online.

The “new hotness” that I’d like to discuss has been the use of Docker to containerize various processes. Docker gets a lot of press these days, but the way I see it, it is a way to ensure that your special snowflake applications and services don’t get the opportunity to conflict with one another. In my setup, I have four containers running:

  • Plex Media Server (timhaak/plex)
  • SABnzbd+ (timhaak/sabnzbd)
  • Sonarr (tuxeh/sonarr)
  • CouchPotato (needo/couchpotato)

I like the following things about Docker:

  • Since it’s new, there are a lot of repositories and configuration instructions online for reference.
  • I can make sure that applications like Sonarr/NZBDrone get the right version of Mono that won’t conflict with my base system.
  • As a network administrator, I can ensure that only the necessary ports for a service get forwarded outside the container.
  • If an application state gets messed up, it won’t impact the rest of the system as much – I can destroy and recreate the individual container by itself.

There are some drawbacks though:

  • Because a lot of the images and Dockerfiles out there are community-based, there are some that don’t follow best practices or fall out of an update cycle.
  • Software updates can become trickier if the application is unable to upgrade itself in-place; you may have to pull a new Dockerfile and hope that your existing configuration works with a new image.
  • From a security standpoint, it’s best to verify exactly what an image or Dockerfile does before running it – for example, that it pulls content from official repositories (the docker-plex configuration is guilty of using a third-party repo, for example.)

To get started, on Ubuntu 14.04 you can install a stable version of Docker following these instructions, although the latest version has some additional features like docker exec that make “getting inside” containers to troubleshoot much easier. I was able to get all these containers running properly with the current stable version (1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1). Once Docker is installed, you can grab each of the containers above with a combination of docker search and docker pull, then list the downloaded containers with docker images.
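
For example, fetching the Plex image used throughout this post might look like the following (the image name is whatever you settle on after searching; timhaak/plex is the one referenced here):

# Find an image, download it, then confirm it is available locally
docker search plex
docker pull timhaak/plex
docker images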

There are some quirks to remember. On the first run, you’ll need to docker run most of these containers and provide a hostname, box name, ports to forward and shared directories (known as volumes). On all subsequent runs, you can just use docker start $container_name – but I’ll describe a cheap and easy way of turning that command into an upstart service later. I generally save the start commands as shell scripts in /usr/local/bin/docker-start/*.sh so that I can reference them or adjust them later. The start commands I’ve used look like:

Plex
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex
SABnzbd+
docker run -d -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd
Sonarr
docker run -d -h sonarr --name="sonarr" -v /etc/docker/sonarr:/config -v /mnt/nas:/data -p 8989:8989 tuxeh/sonarr
CouchPotato
docker run -d -h couchpotato --name="couchpotato" -e EDGE=1 -v /etc/docker/couchpotato:/config -v /mnt/nas:/data -v /etc/localtime:/etc/localtime:ro -p 5050:5050 needo/couchpotato
These applications have a “/config” and a “/data” shared volume defined. /data points to “/mnt/nas”, which is a CIFS share to a network attached storage appliance mounted on the host. /config points to a directory structure I created for each application on the host in /etc/docker/$container_name. I generally apply “chmod 777” permissions to each configuration directory until I find out what user ID the container is writing as, then lock it down from there.
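
As a rough sketch of that lock-down process for the Plex container (the 1000:1000 owner is purely an assumption – substitute whatever user and group the container actually writes files as):

# Start wide open while figuring out which UID the container uses
mkdir -p /etc/docker/plex
chmod 777 /etc/docker/plex

# Once the owner is known, tighten things back up
chown -R 1000:1000 /etc/docker/plex
chmod -R 755 /etc/docker/plex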

For each initial start command, I choose to run the service as a daemon with -d. I also set a hostname with the “-h” parameter, as well as a friendly container name with “–name”; otherwise Docker likes to reference containers with wild adjectives combined with scientists, like “drunk_heisenberg”.

Each of these containers generally has a set of instructions to get up and running, whether it be on Github, the developer’s own site or the Docker Hub. Some, like SABnzbd+, just require that you go to http://yourserverip:8080/ and complete the setup wizard. Plex required an additional set of configuration steps described at the original repository:

  • Once Plex starts up on port 32400, access http://yourserverip:32400/web/ and confirm that the interface loads.
  • Switch back to your host machine, and find the place where the /config directory was mounted (in the example above, it’s /etc/docker/plex). Enter the Library/Application Support/Plex Media Server directory and edit the Preferences.xml file. In the <Preferences> tag, add the following attribute: allowedNetworks="192.168.1.0/255.255.255.0" where the IP address range matches that of your home network. In my case, the entire file looked like:

    <?xml version="1.0" encoding="utf-8"?>
    <Preferences MachineIdentifier="(guid)" ProcessedMachineIdentifier="(another_guid)" allowedNetworks="192.168.1.0/255.255.255.0" />

  • Run docker stop plex && docker start plex to restart the container, then load http://yourserverip:32400/web/ again. You should be prompted to accept the EULA and can now add library locations to the server.

Sonarr needed to be updated (from the NZBDrone branding) as well. From the GitHub README, you can enable in-container upgrades:

[C]onfigure Sonarr to use the update script in /etc/service/sonarr/update.sh. This is configured under Settings > (show advanced) > General > Updates > change Mechanism to Script.

To automatically ensure these containers start on reboot, you can either use restart policies (Docker 1.2+) or write an upstart script to start and stop the appropriate container. I’ve modified the example from the Docker website slightly to stop the container as well:

description "SABnzbd Docker container"
author "Jake"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a sabnzbd
end script
pre-stop exec /usr/bin/docker stop sabnzbd

Copy this script to /etc/init/sabnzbd.conf; you can then copy it to plex.conf, couchpotato.conf, and sonarr.conf, changing the container name and description in each. You can then test it by rebooting your system and running “docker ps -a” to ensure that all containers come up cleanly, or running “docker stop $container; service $container start”. If you run into trouble, the upstart logs are in /var/log/upstart/$container_name.log.

Hopefully this introduction to a media server built on Docker containers was thought-provoking; I plan to post further updates down the line covering other applications, best practices, and how this setup holds up over time.





Cloud software for a Synology NAS and setting up OwnCloud

November 8th, 2014 No comments

Recently the Kitchener Waterloo Linux Users Group held a couple of presentations on setting up your own personally hosted cloud. With their permission we are pleased to also present them below:

Read more…





Big distributions, little RAM 7

October 13th, 2014 4 comments

It’s been a while, but here is the latest instalment in the series of posts where I install the major full-desktop distributions onto a machine with limited hardware and report on how they perform. As before, I’ve decided to re-run my previous tests, this time using the following distributions:

  • Debian 7.6 (GNOME)
  • Elementary OS 0.2 (Luna)
  • Fedora 20 (GNOME)
  • Kubuntu 14.04 (KDE)
  • Linux Mint 17 (Cinnamon)
  • Linux Mint 17 (MATE)
  • Mageia 4.1 (GNOME)
  • Mageia 4.1 (KDE)
  • OpenSUSE 13.1 (GNOME)
  • OpenSUSE 13.1 (KDE)
  • Ubuntu 14.04 (Unity)
  • Xubuntu 14.04 (Xfce)

I also attempted to install Fedora 20 (KDE) but it just wouldn’t go.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.3.12, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between October 6th, 2014 and October 13th, 2014 so your results may not be identical.
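
If you would like to reproduce a similar virtual machine from the command line rather than the VirtualBox GUI, a rough equivalent of the specifications above looks something like this (VBoxManage syntax as of the 4.3 series; the VM name and OS type are just examples, and attaching the disk and install ISO via storagectl/storageattach is left out for brevity):

# Create a VM roughly matching the test specifications above
VBoxManage createvm --name "distro-test" --ostype Ubuntu --register
VBoxManage modifyvm "distro-test" --memory 512 --pae on --accelerate3d on
VBoxManage createhd --filename distro-test.vdi --size 8192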

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against the others. Measurements were taken using the free -m command for memory and the df -h command for disk usage.
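
For reference, the raw numbers are easy to collect inside each freshly booted VM; the file names below are just examples for keeping the snapshots around to compare later:

# Memory, buffers/cache and swap, in megabytes
free -m | tee ~/first-boot-free.txt

# Disk space used by the installation
df -h / | tee ~/first-boot-df.txt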

Like before I have provided the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above it’s probably either because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or because I didn’t feel it was mainstream enough (pretty much anything with LXDE). As always, feel free to run your own tests and link them in the comments for everyone to see.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

(Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap)

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

(Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap)

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

(Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap)

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

(Graph: Install Size)

Conclusion

Once again I will leave the conclusions to you. Source data provided below.

Source Data



