Posts Tagged ‘Linux’

Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

t=`date --date "17:00" +%s`
sudo /bin/true
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
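The schedule itself can live in cron. A sketch of what that might look like is below; the script path in the crontab line is purely hypothetical, and the crontab entry is shown as a comment so nothing here actually suspends your machine:

```shell
# Hypothetical crontab entry (add it with `crontab -e`); the script path
# is an assumption -- point it at wherever you saved the script above:
#
#   0 0 * * * /home/you/bin/nightly-suspend.sh
#
# You can sanity-check the timestamp the script hands to rtcwake without
# actually suspending anything:
t=`date --date "17:00" +%s`
date --date "@$t"   # prints today's date with a time of 17:00:00
```

Since rtcwake wants seconds since the epoch, converting the timestamp back with `date --date "@$t"` is a handy way to confirm the alarm is set for the time you intended.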

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Sabayon Linux – Stable if not without polish

April 28th, 2012

I have been running Sabayon Linux (Xfce) for the past couple of months and figured I would throw a post up on here describing my experience with it.

Reasons for Running

The reason I tried Sabayon in the first place is because I was curious what it would be like to run a rolling release distribution (that is, a distribution that you install once and that updates forever, with no need to re-install). After doing some research I discovered a number of possible candidates but quickly narrowed it down based on the following reasons:

  • Linux Mint Debian Edition – this is an excellent distribution for many people but for whatever reason every time I update it on my hardware it breaks. Sadly this was not an option.
  • Gentoo – I had previously been running Gentoo and while it is technically a rolling release I never bothered to update it because it just took too long to re-compile everything.
  • Arch Linux – Sort of like Gentoo but with binary packages, I turned this one down because it still required a lot of configuration to get up and running.
  • Sabayon Linux – based on Gentoo but with everything pre-compiled for you. Also takes the ‘just works’ approach by including all of the proprietary and closed source codecs, drivers and programs you could possibly want.

Experience running Sabayon

Sabayon seems to take a change-little approach to packaging applications and the desktop environment. What do I mean by this? Simply that if you install the GNOME, KDE or Xfce versions you will get them how the developers intended – there are very few after-market modifications done by the Sabayon team. That’s not necessarily a bad thing however, because as updates are made upstream you will receive them very quickly thereafter.

This distribution does live up to its promise with the codecs and drivers. My normally troublesome hardware has given me absolutely zero issues running Sabayon which has been a very nice change compared to some other, more popular distributions (*cough* Linux Mint *cough*). My only problem with Sabayon stems from Entropy (their application installer) being very slow compared to some other such implementations (apt, yum, etc.). This is especially apparent during the weekly system-wide updates which can result in many, many package updates.

Final Thoughts

For anyone looking for a down to basics, Ubuntu-like (in terms of ease of install and use), rolling release distribution I would highly recommend Sabayon. For someone looking for something a bit more polished or extremely user friendly, perhaps you should look elsewhere. That’s not to say that Sabayon is hard to use, just that other distributions might specialize in user friendliness.


Oh Gentoo

December 22nd, 2011

Well it’s been a couple of months now since the start of Experiment 2.0 and I’ve had plenty of time to learn about Gentoo, see its strengths and… sit waiting through its weaknesses. I don’t think Gentoo is as bad as everyone makes it out to be, in fact, compared to some other distributions out there, Gentoo doesn’t look bad at all.

Now that the experiment is approaching its end I figured it would be a good time to do a quick post about my experiences running Gentoo as a day-to-day desktop machine.


Gentoo is exactly what you want it to be, nothing more. Sure there are special meta-packages that make it easy to install things such as the KDE desktop, but the real key is that you don’t need to install anything that you don’t want to. As a result Gentoo is fast. My startup time is about 10-20 seconds and, if I had the inclination to do so, it could be trimmed down even further through optimization.

Packages are also compiled with your own set of custom options and flags so you get exactly what you need, optimized for your exact hardware. Being a more advanced (read: expert) oriented distribution, it will also teach you quite a bit about Linux and software configuration as a whole.


Sadly Gentoo is not without its faults. As mentioned above Gentoo can be whatever you want it to be. The major problem with this strength in practice is that the average desktop user just wants a desktop that works. When it takes days of configuration and compilation just to get the most basic of programs installed it can be a major deterrent to the vast majority of users.

Speaking of compiling programs, I find this aspect of Gentoo interesting from a theoretical perspective but I honestly have a hard time believing that it makes enough of a difference to make it worth sitting through the hours, even days, of compiling it takes just to get some things installed. It’s so bad that I actually haven’t bothered to re-sync and update my whole system in over 50 days for fear that it would take forever to re-compile and re-install all of the updated programs and libraries.

Worse yet, even when I do have programs installed they don’t always play nicely with one another. Gentoo offers a package manager, portage, but it still fails at some dependency resolution – often making you choose between uninstalling previously installed programs just to install the new one, or not installing the new one at all. Another example of things being more complicated than they should be is my system sound. Even though I have pulseaudio installed and configured, my system refuses to play audio from more than one program at a time. These are just a few examples of problems I wouldn’t have to deal with on another distribution.


Well, it’s been interesting but I will not be sticking with Gentoo once this experiment is over. There are just too many little things that make this more of an educational experience than a real day-to-day desktop. While I certainly have learned a lot during this version of the experiment, at the end of the day I’d rather things just work right the first time.


How to play Red Alert 2 on Linux

December 4th, 2011

The other day I finally managed to get the classic RTS game Command & Conquer Red Alert 2 running on Linux, and running well in fact. I started by following the instructions here with a few tweaks that I found on other forums that I can’t seem to find links to anymore. Essentially the process is as follows:

  • Install Red Alert 2 on Windows. Yes, you read that right. Apparently the Red Alert 2 installer does not work under wine, so you need to install the game files while running Windows.
  • Update the game and apply the CD-Crack via the instructions in the link above. Note that this step may have some legal issues associated with it. If in doubt seek professional legal advice.
  • Copy the game’s install directory (under Program Files) from Windows over to Linux.
  • Apply speed fix in the how-to section here.
  • Run game using wine and enjoy.

It is a convoluted process that is, at times, ridiculous but it’s worth it for such a classic game. Even better there is a bit of a ‘hack’ that will allow you to play RA2’s multiplayer IPX network mode but over the more modern TCP/IP protocol. The steps for this hack can also be found at the WineHQ link above.

Happy gaming!


Linux From Scratch : The Beginning…

October 31st, 2011

Hi Everyone,

If you don’t remember me, I’m Dave. Last time for the experiment I used SuSE, which I regretted. This time I decided to use Linux From Scratch like Jake, as I couldn’t think of another distribution that I haven’t used in some way or another before. Let me tell you… It’s been quite the experience so far.

The Initial Setup

Unlike Jake, I opted not to use the LFS Live CD, as I figured it would be much easier to start with a Debian Live CD. By the sounds of it, I made a good decision. I had networking right out of the gate, which made it easy to copy and paste awful sed commands.

The initial part of the install was relatively painless for me. Well, except that one of the LFS mirrors had a version from 2007 listed as their latest stable build, setting me back about an hour. I followed the book, waited quite a while for some stuff to compile, and I was in my brand new … command-line. Ok, it’s not very exciting at first, but I was jumping for joy when I ran the following command and got the result I did:

root [ ~ ]# ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=56 time=32.967 ms
64 bytes from icmp_seq=1 ttl=56 time=33.127 ms
64 bytes from icmp_seq=2 ttl=56 time=40.045 ms


Series of Tubes

The internet was working! Keep reading if you want to hear what awful thing happened next…


I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Experiment 2.0

October 30th, 2011

As Jake pointed out in the previous post we have once again decided to run The Linux Experiment. This iteration will once again follow the rule that you are not allowed to use a distribution that you have used in the past. We also have a number of new individuals taking part in the experiment: Aíne B, Matt C, Travis G and Warren G. Be sure to check back often as we post about our experiences running our chosen distributions.


Here are the new rules we are playing by for this version of the experiment:

  1. You must have absolutely no prior experience with the distribution you choose
  2. You must use the distribution on your primary computer and it must be your primary day-to-day computing environment
  3. The experiment runs from November 1st, 2011 until January 31st, 2012
  4. You must document your experience
  5. After committing to a distribution you may not later change to a different one


For fun we’ve decided to create a series of challenges to try throughout the experiment. This list can be found here and may be updated as we add more throughout the course of the experiment.


Big distributions, little RAM 3

August 14th, 2011

Once again I’ve decided to re-run my previous tests this time using the following distributions:

  • Debian 6.0.2 (GNOME)
  • Fedora 15 (GNOME 3 Fallback Mode)
  • Fedora 15 (KDE)
  • Kubuntu 11.04 (KDE)
  • Linux Mint 11 (GNOME)
  • Linux Mint 10 (KDE)
  • Linux Mint 10 (LXDE)
  • Linux Mint 11 (Xfce)
  • Lubuntu 11.04 (LXDE)
  • Mandriva One (GNOME)
  • Mandriva One (KDE)
  • OpenSUSE 11.4 (GNOME)
  • OpenSUSE 11.4 (KDE)
  • Ubuntu 11.04 (GNOME Unity Fallback Mode)
  • Xubuntu 11.04 (Xfce)

I will be testing all of this within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86

The tests were all done using VirtualBox 4.0.6 on Linux Mint 11, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default 800×600 and accepted the installation defaults. All tests were run on August 14th, 2011 so your results may not be identical.


Following in the tradition of my previous posts I have once again gone through the effort to bring you nothing but the most state of the art in picture graphs for your enjoyment.

Things to know before looking at the graphs

First off, none of the Fedora 15 versions would install in 512MB of RAM; they both required a minimum of 640MB and are therefore disqualified from this little experiment. I did however run them in VirtualBox with 640MB of RAM just for comparison purposes. Secondly, the Linux Mint 10 KDE distro would not install with either 512MB or 640MB of RAM; the installer just kept crashing and I was unable to get it to work, so it was not included in these tests. Finally, when I tested Debian I was unable to measure before / after applying updates because it seemed to have applied the updates during the install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.
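The post doesn’t say which tool was used to capture these numbers, but the standard `free` utility is one reasonable way to read them on a fresh boot (an assumption, not the author’s stated method):

```shell
# Report memory usage in megabytes
free -m

# Pull out just the "used" figure from the Mem: row
free -m | awk '/^Mem:/ {print $3}'
```

Running this right after login, before opening any applications, gives a number comparable to the first-boot figures graphed here.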

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.


As before I’m going to leave you to drawing your own conclusions.


Create a GStreamer powered Java media player

March 14th, 2011

For something to do I decided to see if I could create a very simple Java media player. After doing some research, and finding out that the Java Media Framework was no longer in development, I decided to settle on GStreamer to power my media player.

GStreamer, for the uninitiated, is a very powerful multimedia framework that offers both low-level pipeline building as well as high-level playback abstraction. What’s nice about GStreamer, besides being completely open source, is that it presents a unified API no matter what type of file it is playing. For instance, if the user only has the free, high quality GStreamer codecs installed, referred to as the good plugins, then the API will only play those files. If however the user installs the other plugins as well, be it the bad or ugly sets, the API remains the same and thus you don’t need to update your code. Unfortunately, being a C library, this approach does have some drawbacks, notably the need to include the JNA jar as well as the system-specific libraries. This approach can be considered similar to how SWT works.


Assuming that you already have a Java development environment, the first thing you’ll need is to install GStreamer. On Linux odds are you already have it, unless you are running a rather stripped down distro or don’t have many media players installed (both Rhythmbox and Banshee use GStreamer). If you don’t, it should be pretty straightforward to install along with your choice of plugins. On Windows you’ll need to head over to ossbuild where they have downloadable installers.

The second thing you’ll need is gstreamer-java which you can grab over at their website here. You’ll need to download both gstreamer-java-1.4.jar and jna-3.2.4.jar. Both might contain some extra files that you probably don’t need and can prune out later if you’d like. Setup your development environment so that both of these jar files are in your build path.

Simple playback

GStreamer offers highly abstracted playback engines called PlayBins. This is what we will use to actually play our files. Here is a very simple code example that demonstrates how to actually make use of a PlayBin:

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File(args[0]));

     playbin.play();
     Gst.main();
     playbin.setState(State.NULL);
}

So what does it all mean?

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

The above line takes the incoming command line arguments and passes them to the Gst.init function and returns a new set of arguments. If you have ever done any GTK+ programming before this should be instantly recognizable to you. Essentially what GStreamer is doing is grabbing, and removing, any GStreamer specific arguments before your program will actually process them.

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File(args[0]));

The first line of code requests a standard “AudioPlayer” PlayBin. This PlayBin is built right into GStreamer and automatically sets up a default pipeline for you. Essentially this lets us avoid all of the low-level craziness that we would have to normally deal with if we were starting from scratch.

The next line sets the PlayBin’s VideoSink, think of sinks as output locations, to a “fakesink” or null sink. The reason we do this is because PlayBins can play both audio and video. For the purposes of this player we only want audio playback so we automatically redirect all video output to the “fakesink”.

The last line is pretty straight forward and just tells GStreamer what file to play.


     playbin.play();
     Gst.main();
     playbin.setState(State.NULL);

Finally, with the above lines of code we tell the PlayBin to actually start playing and then enter the GStreamer main loop. This loop continues for the duration of playback. The last line is used to reset the PlayBin state and do some cleanup.

Bundle it with a quick GUI

To make it a little more friendly I wrote a very quick GUI to wrap all of the functionality. The download links for that (the binary only package), as well as for the source (the all package), are below. And there you have it: a very simple cross-platform media player that will playback pretty much anything you throw at it.

Please note that I have provided this software purely as a quick example. If you are really interested in developing a GStreamer powered Java application you would do yourself a favor by reading the official documentation.

Binary Only Package: version March 13, 2011, 1.5MB
All Package (source included): version March 13, 2011, 1.51MB
(Download links are in the original post.)
Originally posted on my personal website here.


Create a GTK+ application on Linux with Objective-C

December 8th, 2010

As sort of follow-up-in-spirit to my older post I decided to share a really straight forward way to use Objective-C to build GTK+ applications.


Objective-C is an improvement to the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple OS X and iOS based platforms. It can be used with or without a large standard library (sometimes referred to as the Foundation Kit library) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machines (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately, while there used to exist a couple of different Objective-C bindings for GTK+, all of them are now quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.

Setting it all up

In order to make this work we will be creating two Objective-C classes, one that will house our GTK+ window and another that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.


In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
 * This class is responsible for initializing the GTK render loop
 * as well as setting up the GUI for the user. It also handles all GTK
 * callbacks for the winMain GtkWindow.
 */
@interface MainWindow : NSObject
{
    //The main GtkWindow
    GtkWidget *winMain;
    GtkWidget *button;
}

/*
 * Constructs the object and initializes GTK and the GUI for the
 * application.
 * *********************************************************************
 * Input
 * *********************************************************************
 * argc (int *): A pointer to the arg count variable that was passed
 *               in at the application start. It will be returned
 *               with the count of the modified argv array.
 * argv (char *[]): A pointer to the argument array that was passed in
 *                  at the application start. It will be returned with
 *                  the GTK arguments removed.
 * *********************************************************************
 * Returns
 * *********************************************************************
 * MainWindow (id): The constructed object or nil
 * argc (int *): The modified input int as described above
 * argv (char *[]): The modified input array modified as described above
 */
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
 * Frees the Gtk widgets that we have control over
 */
-(void)destroyWidget;

/*
 * Starts and hands off execution to the GTK main loop
 */
-(void)startGtkMainLoop;

/*
 * Example Objective-C function that prints some output
 */
-(void)printSomething;

@end

/*
 * C callback functions
 */

/*
 * Called when the user closes the window
 */
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
 * Called when the user presses the button
 */
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);



For the class’ actual code file, fill it in as shown below. This class will create a GTK+ window with a single button and will react to both the user pressing the button, and closing the window.

#import "MainWindow.h"

/*
 * For documentation see MainWindow.h
 */

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
    //call parent class' init
    if (self = [super init]) {

        //setup the window
        winMain = gtk_window_new (GTK_WINDOW_TOPLEVEL);

        gtk_window_set_title (GTK_WINDOW (winMain), "Hello World");
        gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

        //setup the button
        button = gtk_button_new_with_label ("Push me!");

        gtk_container_add (GTK_CONTAINER (winMain), button);

        //connect the signals
        g_signal_connect (winMain, "destroy", G_CALLBACK (on_MainWindow_destroy), NULL);
        g_signal_connect (button, "clicked", G_CALLBACK (on_btnPushMe_clicked), NULL);

        //force show all
        gtk_widget_show_all(winMain);

        //assign C-compatible pointer
        myMainWindow = self;
    }

    //return pointer to this object
    return self;
}

-(void)startGtkMainLoop
{
    //start gtk loop
    gtk_main();
}

-(void)printSomething
{
    NSLog(@"Printed from Objective-C's NSLog function.");
    printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget
{
    myMainWindow = NULL;

    if(GTK_IS_WIDGET (button)) {
        //clean up the button
        gtk_widget_destroy(button);
    }

    if(GTK_IS_WIDGET (winMain)) {
        //clean up the main window
        gtk_widget_destroy(winMain);
    }
}

-(void)dealloc
{
    [self destroyWidget];

    [super dealloc];
}

@end

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
    //exit the main loop
    gtk_main_quit();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
    printf("Button was clicked\n");

    //call Objective-C function from C function using global object pointer
    [myMainWindow printSomething];
}



To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

//create an AutoreleasePool
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

//init gtk engine
gtk_init(&argc, &argv);

//set up GUI
MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

//begin the GTK loop
[mainWindow startGtkMainLoop];

//free the GUI
[mainWindow release];

//drain the pool
[pool release];

//exit application
return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to the porting of both GTK+ and GNUstep to Windows, this same code can also produce a cross-platform application that works on both Mac OS X and Windows.

Source Code Downloads

Source Only Package: 2.4KB
(File hashes and download link are in the original post.)

Originally posted on my personal website here.


Setting up an Ubuntu-based ASP.NET Server with Mono

November 21st, 2010


In my day job, I work as an infrastructure developer for a small company. While I wouldn’t call us a Microsoft shop by any stretch (we actually make web design tools), we do maintain a large code base in C#, which includes our website and a number of web-based administrative tools. In planning for a future project, I recently spent some time figuring out how to host our existing ASP.NET-based web site on a Linux server. After a great deal of research, and just a bit of trial and error, I came up with the following steps:

VirtualBox Setup:

The server is going to run in a virtual machine, primarily because I don’t have any available hardware to throw at the problem right now. This has the added benefit of being easily expandable, and our web hosting company will actually accept *.vdi files, which allows us to easily pick up the finished machine and put it live with no added hassle. In our case, the host machine was a Windows Server 2008 machine, but these steps would work just as well on a Linux host.

I started off with VirtualBox 3.2.10 r66523, although like I said, grabbing the OSE edition from your repositories will work just as well. The host machine that we’re using is a bit underpowered, so I only gave the virtual machine 512MB of RAM and 10GB of dynamically expanding storage. One important thing – because I’ll want this server to live on our LAN and interact with our other machines, I was careful to change the network card settings to Bridged Adapter and to make sure that the Ethernet adapter of the host machine is selected in the hardware drop down. This is important because we want the virtual machine to ask our office router for an IP address instead of using the host machine as a private subnet.

Installing the Operating System:

For the initial install, I went with the Ubuntu 10.10 Maverick Meerkat 32-bit Desktop Edition. Any server admins reading this will probably pull out their hair over the fact, but in our office, we have administrators who are very used to using Windows’ Remote Desktop utility to log into remote machines, and I don’t feel like training everybody on the intricacies of PuTTy and SSH. If you want to, you can install the Server version instead, and forgo all of the additional overhead of a windowing system on your server. Since all of my installation was done from the terminal, these instructions will work just as well with or without a GUI.

From VirtualBox, you’ll want to mount the Ubuntu ISO in the IDE CD-ROM drive, and start the machine. When prompted, click your way through Ubuntu’s slick new installer, and tell it to erase and use entire disk, since we don’t need any fancy partitioning for this setup. When I went through these steps, I opted to encrypt the home folder of the vm, mostly out of habit, but that’s up to you. Once you make it to a desktop, install VirtualBox Guest Additions.

From Terminal, type sudo apt-get update && sudo apt-get upgrade to refresh the package lists and apply any patches that might be available.

Setting up a Static IP Address:

From a terminal, type ifconfig and find the HWaddr entry for your ethernet card, usually eth0. It will probably look something like 08:00:27:1c:17:6c. Next, you’ll need to log in to your router and set it up so that any device with this hardware address (also called a MAC address) is always given the same IP address. In my case, I chose to assign the virtual server an IP address of because it was easy to remember. There are other ways that you can go about setting up a static IP, but I find this to be the easiest.
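One of those other ways is a static configuration on the guest itself. A sketch of /etc/network/interfaces for Ubuntu 10.10 follows; every address here is hypothetical, so pick ones that match your LAN and fall outside the router’s DHCP range:

```text
# /etc/network/interfaces (hypothetical addresses)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
```

After editing the file, restarting networking (or rebooting the VM) applies the change. The DHCP reservation approach described above has the advantage of keeping all address management on the router.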

Getting Remote Desktop support up and running:

As I mentioned above, the guys in our office are used to administering remote machines by logging in via Windows’ remote desktop client. In order to provide this functionality, I chose to set up the xrdp project on my little server. Installing this is as easy as typing sudo apt-get install xrdp in your terminal. The installation process will also require the vnc4server and xbase-clients packages.

When the installation has completed, the xrdp service will run on startup and will provide an encrypted remote desktop server on port 3389. From Windows, you can now connect to the server’s IP address with the standard RDP client. When prompted for login, make sure that sesman-Xvnc is selected as the protocol, and you should be able to log in with the username and password combination that you chose during the install.

Installing a Graphical Firewall Utility:

Ubuntu ships with a firewall baked into the kernel that can be accessed from the terminal with the ufw tool. Because some of our administrators are afraid of the command line, I also chose to install a graphical firewall manager. In the terminal, type sudo apt-get install gufw to install an easy-to-use GUI for the firewall. Once complete, it will show up in the standard GNOME menu system under System > Administration > Firewall Configuration.
Let’s do a bit of setup. Open up the Firewall Configuration utility, and check off the box to enable the firewall. Below that box, make sure that all incoming traffic is automatically denied while all outgoing is allowed. These rules can be tightened up later, but are a good starting point for now. To allow incoming remote desktop connections, you’ll need to create a new rule to allow all TCP connections on port 3389. If this server is to be used on the live Internet, you may also consider limiting the IP addresses that these connections can come from so that not just anybody can log in to your server. Remember, defense in depth is your best friend.
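
For anyone who prefers the terminal, the same policy can be applied with ufw directly. This is my shorthand for the GUI steps above; the commented-out rule shows how to restrict RDP to a trusted subnet, and the address range in it is only an example:

```shell
# Same policy as the gufw steps: deny all incoming, allow all outgoing,
# then open TCP 3389 for remote desktop connections.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 3389/tcp
# Tighter alternative: only accept RDP from a trusted subnet (example range):
# sudo ufw allow from 192.168.1.0/24 to any port 3389 proto tcp
sudo ufw enable
```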

Adding SSH Support:

Unlike my coworkers, I prefer to manage my server machines via the command line. As such, an SSH server is necessary. Later, the SSH connection can be used for SFTP or as a secure tunnel over which we can communicate with our source control and database servers. In terminal, type sudo apt-get install openssh-server to start the OpenSSH installation process. Once it’s done, you’ll want to back up its default configuration file with the command sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_old. Next, open up the config file in your text editor of choice (mine is nano) and change a couple of the default options:

  • Change the Port to 5000, or some other easy to remember port. Running an SSH server on port 22 can lead to high discoverability, and is regarded by some as a security no-no.
  • Change PermitRootLogin to no. This will ensure that only normal user accounts can log in.
  • At the end of the file, add the line AllowUsers <your-username> to limit the user accounts that can log in to the machine. It is good practice to create a user account with limited privileges and only allow it to log in via SSH. This way, if an attacker does get in, they are limited in the amount of damage that they can do.
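
Taken together, the changed portion of /etc/ssh/sshd_config ends up looking something like the excerpt below (the username is a placeholder):

```text
# /etc/ssh/sshd_config (excerpt) — non-standard port, no root logins,
# and an explicit whitelist of allowed accounts
Port 5000
PermitRootLogin no
AllowUsers your-username
```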

Back in your terminal, type sudo /etc/init.d/ssh restart to load the new settings. Using the instructions above, open up your firewall utility and create a new rule to allow all TCP connections on port 5000. Once again, if this server is to be used on the live Internet, it’s a good idea to limit the IP addresses that this traffic can originate from.

With this done, you can log in to the server from any other Linux-based machine using the ssh command in your terminal. From Windows, you’ll need a third-party utility like PuTTY.

Installing Apache and ModMono:

For simplicity’s sake, we’ll install both Apache (the web server) and mod_mono (a module responsible for processing ASP.NET requests) from Ubuntu’s repositories. The downside is that the code base is a bit older, but the upside is that everything should just work, and the code is stable. These instructions are a modified version of the ones found on the HBY Consultancy blog. Credit where credit is due, after all. From your terminal, enter the following:

$ sudo apt-get install monodevelop mono-devel monodevelop-database mono-debugger mono-xsp2 libapache2-mod-mono mono-apache-server2 apache2

$ sudo a2dismod mod_mono

$ sudo a2enmod mod_mono_auto

With this done, Apache and mod_mono are installed. We’ll need to do a bit of configuration before they’re ready to go. Open up mod_mono’s configuration file in your text editor of choice with something like sudo nano /etc/apache2/mods-available/mod_mono_auto.conf. Scroll down to the bottom and append the following text to the file:

MonoPath default "/usr/lib/mono/3.5"

MonoServerPath default /usr/bin/mod-mono-server2

AddMonoApplications default "/:/var/www"

Finally, restart the Apache web server so that the changes take effect with the command sudo /etc/init.d/apache2 restart. This configuration will allow us to run aspx files out of our /var/www/ directory, just like html or php files that you may have seen hosted in the past.
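
To confirm the whole stack is wired up, I find it handy to drop a trivial page into the web root and request it. The test page below is my own minimal example (not from the HBY instructions), and it assumes the /var/www path configured above:

```shell
# Write a one-line ASP.NET page into the web root, then request it;
# seeing the current date in the response means mod_mono is handing
# .aspx requests off to mod-mono-server correctly.
sudo tee /var/www/test.aspx > /dev/null <<'EOF'
<%@ Page Language="C#" %>
<html><body><p><% Response.Write("mod_mono is alive: " + System.DateTime.Now); %></p></body></html>
EOF
curl -s http://localhost/test.aspx
```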

Having a Beer:

That was a fair bit of work, but I think that it was worth it. If everything went well, you’ve now got a fully functional Apache web server that’s reasonably secure, and can run any ASP.NET code that you throw at it.

The one hiccup that I encountered with this setup was that Mono doesn’t yet have support for .NET’s Entity Framework, which is the object-relational mapping framework that we use as a part of our database stack on the application that we wanted to host. This means that if I want to host the existing code on Linux, I’ll have to modify it so that it uses a different database back end. It’s kind of a pain, but not the end of the world, and certainly a situation that can be avoided if you’re coding up a website from scratch. You can read more about the status of Mono’s ASP.NET implementation on their website.

Hopefully this helped somebody. Let me know in the comments if there’s anything that isn’t quite clear or if you encounter any snags with the process.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Compile Windows programs on Linux

September 26th, 2010 No comments

Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out “gcc” in your normal compile commands with either “i586-mingw32msvc-gcc” (for 32-bit) or “amd64-mingw32msvc-gcc” (for 64-bit). So for example if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}
we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs as well. Take the following code as an example

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test to make sure it worked properly by running the program through wine like

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Categories: Tyler B, Ubuntu

Trying out the Chakra Project

August 24th, 2010 1 comment

After a little bit of pressure from the people responding to my previous post (My search for the best KDE Linux distribution), I have finally given in and tried out Chakra. The Chakra Project starts with Arch Linux as a base but, instead of forcing you to build your own distro piece by piece, Chakra comes more or less pre-packaged.


The installation was one of the best I’ve ever seen. For alpha software this distribution’s first point of interaction is already very polished – even warning me that it is not stable software and might therefore eat my hamster.

The install process even let me decide to install some very useful packages, like Microsoft Core TTF Fonts and Adobe Flash, right away. Even the Language & Time step was incredible, offering a rotating globe that I could drag around and manipulate.

The only issue I had was trying to create a disk partition to install the OS to. This was because I was trying this out inside of VirtualBox, and the virtual hard disk did not have any partitions on it whatsoever. There is a bug and (thankfully) work-around for this known issue with their Tribe installer, and after reading a quick walk-through I was once again ready to install.

The Desktop

The desktop is standard KDE version 4.4.2 after install. Opening up Pacman (or is it Shaman?) showed me a list of brand new software that I could install, including the newest KDE 4.5. One of Project Chakra’s great strengths will be in this rolling release of new software updates. The concept of installing once and always having the most up-to-date applications is very intriguing.

Unfortunately, as with most alpha software, Shaman is still pretty buggy and often crashed whenever I tried to apply the updates. Also unfortunate is that Shaman started a trend of applications simply crashing for no reason. I don’t want to give this distribution a bad reputation, because it is still pre-release software, but I think it goes without saying that the developers have some bug squashing to do before a stable release will be ready. Something I found rather strange is that the current default software selection that Chakra ships with includes two different browsers, Konqueror and rekonq, but no office software whatsoever.

Google Chrome much?

Final Thoughts (for now!)

The Chakra Project looks very promising, albeit very unpolished at the moment. If they can manage to fix up the rest of the distribution, getting it just as polished feeling as the installer, this will definitely be one to look out for. I look forward to trying it out again once it hits a stable release.

Categories: KDE, Linux, Tyler B

A Matter of Opinion

July 19th, 2010 No comments

Tonight I installed VirtualBox, an incredibly handy virtualization program that lets me run instances of Windows and other Linux distributions from the comfort of my Linux Mint 9 Isadora desktop. Upon installing the latest version in my repositories, I launched the program, only to be confronted by a dialog box offering a link to a newer version of the program available on its website. So I clicked the link, and downloaded the *.deb of the new version. My package manager started up, tried to install the new package, and complained that it conflicted with the existing VirtualBox install. So I opened synaptic, uninstalled the version of VirtualBox that I got from my repositories, and finally installed the most recent version from the website.

So here’s my question, and please feel free to leave your opinion in the comments below: Should Linux applications warn the user about updates that are not available from their repositories?

On one hand, I like having up to date software, but on the other, package maintainers work hard to ensure that everything that ships with a stable distribution plays well together, and probably don’t appreciate these apps leading users outside of their carefully curated repositories. From a security-oriented point of view, this is also bad practice, as much of the security that is inherent in Linux comes from the fact that the vast majority of the software that you install has been vetted by the package maintainers who work to ensure that your distribution is safe and stable. And surely the guys who program VirtualBox, being the insanely awesome ninja-powered pirate wizards that they are, could have come up with a way to update my install without my having to uninstall and re-install an entirely new version. Just sayin’

Chime in with your opinion in the comments below.


PulseAudio: Monitoring your Line-In Interface

July 11th, 2010 22 comments

At home, my setup consists of three machines – a laptop, a PC, and an XBOX 360. The latter two share a set of speakers, but I hate having to climb under the desk to switch the cables around, and wanted a better way to switch them back and forth. My good friend Tyler B suggested that I run the line out from the XBOX into the line-in on my sound card, and just let my computer handle the audio in the same way that it handles music and movies. In theory, this works great. In practice, I had one hell of a time figuring out how to force the GNOME sound manager applet into doing my bidding.

After quite a bit of googling, I found the answer on the Ubuntu forums. It turns out that the secret lies in a pulse audio module that isn’t enabled by default. Open up a terminal and use the following commands to permanently enable this behaviour. As always, make sure that you understand what’s up before running random commands that you find on the internet as root:

pactl load-module module-loopback
sudo sh -c ' echo "load-module module-loopback" >>  /etc/pulse/ '

The first line instructs PulseAudio (one of the many ways that your system talks with the underlying sound hardware) to load a module called loopback, which unsurprisingly, loops incoming audio back through your outputs. This means that you can hear everything that comes into your line-in port in real time. Note that this behaviour does not extend to your microphone input by design. The second line simply tells PulseAudio to load this module whenever the system starts.

Now if you’ll excuse me, I have jerks to run over in GTA…


Big distributions, little RAM 2

July 5th, 2010 5 comments

As a follow up to my previous post I have decided to re-run the tests, this time with the updated distributions (where available of course). Again I will be testing all of this within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86

The tests were all done using VirtualBox 3.2.6 on Windows, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default 800×600 and accepted the installation defaults. All tests were run on July 3rd, 2010 so your results may not be identical.
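
The post doesn’t say exactly how the numbers were gathered; as a point of reference, here is how I would take the same measurements by hand inside each guest (this is my assumption, not necessarily the method used for the graphs below):

```shell
# Memory in use, in MB: columns 2 and 3 of free's "Mem:" line are
# total and used, and keep those positions across procps versions.
free -m | awk '/^Mem:/ {printf "total=%dMB used=%dMB\n", $2, $3}'

# Install size: megabytes used on the root filesystem.
df -m / | awk 'NR==2 {printf "disk_used=%dMB\n", $3}'
```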


As before, I have provided state-of-the-art graphs for your enjoyment.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.


As before, I’m going to leave you to draw your own conclusions. I will point out, though, that almost all of the distributions have done a good job of lowering memory usage with system updates, which is very commendable. It’s also important to note that even though RAM and disk space increase with updates, performance might improve as well, so it all comes down to which metric you hold as most important.

Categories: Linux, Tyler B

Accessing Windows 7 Shares from Ubuntu is a Pain

June 28th, 2010 16 comments

This blog post is about my experiences. If you hit this page from a search engine looking to fix this issue click here to skip to the solution.

Recently, I’ve been reorganizing my computers based on their usage. My old HTPC has resumed its duties as my primary desktop/server, my Mac Mini has been attached to my desktop through Synergy, my server was given to my brother for personal use, and his old computer – a nettop – is now being used as our new HTPC.

After a painful decision making process – a topic for another time, and another post – I decided that this nettop, named Apollo after the Greek god of many things including “music, poetry, and the arts” [as close as I could get to entertainment], should run Ubuntu 10.04 with XBMC as the media center app. After testing its media playback capabilities from a local file, I was rather impressed. I set out to add a SMB share from within XBMC, and was prompted to add a username and password.

I wasn’t really expecting this, because Leviathan – my desktop/server running Windows 7 – has public sharing turned on, as well as a guest account. I entered my credentials, and was asked yet again for a username and password. After trying multiple times, I decided to quit XBMC and see if I could get Ubuntu to connect to the share. Here too, I was prompted for a username and password, again and again.

Next I headed to the terminal to run smbclient. This didn’t work either, as I was shown a message saying smbclient failed with “SUCCESS – 0”. I guess success shouldn’t be zero, so my next move was to attempt mounting the network share using CIFS. Again, I was met with repeated defeat.

Begrudgingly I took to the internet with my problem, only to find that there were many people unable to connect to their Windows 7 from Ubuntu. The suggestions ranged from registry hacks to group policy administration, none of which worked. One repeated suggestion however, was to un-install the Windows Live Sign-in Assistant. However, as a user of the Windows Live Essentials (Wave 4) Beta that was recently released – I had no such program. I did however have a similar application called the Windows Live Messenger Companion, which I chose to uninstall – again, to no avail.

However, I soon reasoned that perhaps whatever was blocking people using the Windows Live Sign-in Assistant was now being used within the actual Windows Live Messenger client or the other Windows Live Essentials apps that I’d recently installed. I started by uninstalling everything but Windows Live Messenger – because I really, really like the beta version. Alas, this did not help. Next I uninstalled the actual Windows Live Messenger client and voila – I was able to connect with no prompting for passwords at all. Because that makes -any- sense.

As a matter of interest, I installed the regular WLM non-beta client and made sure that the Windows Live Sign-in Assistant was installed, and tried to connect again. Not surprisingly, I was no longer able to connect to my Windows 7 shares. After un-installing the Windows Live Sign-in Assistant my shares were back up and I was mostly happy. Except that I couldn’t use the new Windows Live Messenger beta.

I can’t be sure if the other tinkering I did also helped clear up my problems, but as a recap here are the steps I recommend to access your Windows 7 shares from Ubuntu:

1) If you have the Windows Live Essentials (Wave 4) beta installed, you’ll have to uninstall all of the applications that come with this. For now, you can install the current version of Windows Live Messenger and the other Windows Live Essentials.

2) If you have Windows Live Messenger installed, or ANY of the Windows Live Essentials programs installed check to see if you have the Windows Live Sign-in Assistant installed. If so, uninstall it.

3) Hopefully, now you can enjoy your Windows 7 shares in Ubuntu

Important Note:

Beta software has this nasty habit of leaving beta status sooner or later. If this issue is not resolved when the newest version of Windows Live Messenger is officially released, you may not be able to use the Window Live Messenger client if you need your Windows 7 shares from Ubuntu. I would suggest using an application like Pidgin as your instant messenger, as it can also connect to the Windows Live Messenger service. Other options include Digsby, Miranda, and Trillian.

Originally posted on my personal website here.

Fix ATI vsync & video tearing issue once and for all!

May 6th, 2010 23 comments

NOTE: ATI’s most recent drivers now include a no tearing option in the driver control panel. Enabling it there is now the preferred method.

Two of the Linux machines that I use both have ATI graphics cards from the 4xxx series in them. They work well enough for what I do – very casual gaming and lots of video watching – but one thing has always bothered me to no end: video tearing. I assumed that this was due to vsync being off by default (probably for performance’s sake), but even after installing the proprietary drivers in the new Ubuntu 10.04 and trying to force it on, I still could not get the issue to resolve itself. After some long googling I found what seems to be a solution, at least in my case. I’ll walk you through what I did.

Before you continue, read this: in order to fix this issue on my computers I had to trash xorg.conf and start over. If you are worried about breaking your setup, or if you have a custom configuration already, please be very careful and read everything before doing what I suggest – or don’t continue at all. Be sure to make a backup!

1 ) Install the ATI proprietary drivers and restart so that they can take effect.

2 ) Make a backup of your xorg.conf file. Do this by opening a terminal and copying it to a backup location. For example I ran the following code:

sudo cp /etc/X11/xorg.conf /etc/X11/backup.xorg.conf

3 ) Remove your existing (original) xorg.conf file:

sudo rm /etc/X11/xorg.conf

4 ) Generate a new default xorg.conf file using aticonfig (that’s two dashes below):

sudo aticonfig --initial

5 ) Enable video syncing (again two dashes before each command):

sudo aticonfig --sync-video=on --vs=on

6 ) If possible also enable full anti-aliasing:

sudo aticonfig --fsaa=on --fsaa-samples=4

7 ) Restart now so that your computer will load the new xorg.conf file.

8 ) Open up Catalyst Control Center and under 3D -> More Settings make sure the slider under Wait for vertical refresh is set to Always On.

That should be it. Please note that this trick may not work with all media players either (I noticed Totem seemed to still have some issues). One other thing I tried in VLC was to change the video output to be OpenGL which seemed to help a lot.

Good luck!


Linux Media Players Suck – Part 1: Rhythmbox

May 5th, 2010 50 comments

The state of media players on Linux is a sad one indeed. If you’re a platform enthusiast, you may want to cover your ears and scream “la-la-la-la” while reading this article, because it will likely offend your sensibilities. In fact, the very idea behind this series is to shake up the freetards’ world view, and to make them realize that a decent Winamp or iTunes clone need not be the end of the story for media management and playback on Linux.

This article will concentrate on lambasting Rhythmbox, the default jukebox software of the GNOME desktop environment. Subsequent posts will give the same treatment to other players in this sphere, including Banshee, Amarok, and Songbird (if I can find a copy that will still build on Linux). If you’re a user of media players on Linux, keep your own annoyances firmly in mind, and if I don’t mention them, please share in the comments. If you’re a developer for one of these fine projects, try to keep an open mind and get inspired to do better. A media player is not a hard thing to build, and I do believe that together, we can do better.

For the remainder of this article, please keep in mind that I am currently running Rhythmbox under Kubuntu 9.10, so you’ll see it rendered with Qt widgets in all of my screen shots. This doesn’t affect the overall performance of the app, but leads nicely into my first complaint:

  1. Poor Cross-Platform Support: There are basically two desktop environments that matter in the Linux world, GNOME and KDE. Under GNOME, Rhythmbox has a reasonably nice icon set that is comparable to other media players. Under KDE, the Qt re-skinning replaces those icons with a horrible set of mismatched images that really make the program look second-rate:
    Isn't this shit awful?

    As you can see, these icons look terrible. Note that there isn't even an icon for 'Burn' and the icon for 'Browse' is a fucking question mark.

    This extends to the CD burning and help features too. They rely on programs like gnome-help and brasero to work, but don’t install them with the media player, so when I try to access these features under KDE, I just get error messages. Nice.

    Honestly, who packaged this thing?

    This is just plain stupid. Every package manager has the concept of dependencies, so why doesn't Rhythmbox use them?

  2. The Player Starts in the Tray: Under what circumstances would it be considered useful for a media player to automatically minimize itself to the system tray on startup? It doesn’t begin to play automatically. The first thing that I always do is click on the tray icon to maximize it so that I can select some music to start playing. Way to start the user experience off on the wrong foot.
  3. Missing Files View: This one is just plain stupid. Whenever I delete a file from my hard drive, it shows up under the ‘Missing Files’ view, even though my intent was clearly to remove the file from my library. Further, I use Rhythmbox to put music on my BlackBerry. Whenever I fill it with music, I first delete the files on it. Those files that I deleted from my mobile device? Yeah, they show up under ‘Missing Files’ too, as if they were a legitimate part of my library! So this view ends up being like a global garbage bin that I have to waste my precious time emptying on occasion, and serves no useful purpose in the mean time. Yeah, I deleted those files. What are you going to do about it?

    Seriously, why the hell are these files in here?

    As you can see, I've highlighted the fact that Rhythmbox is telling me that these files are missing from my mobile device. No shit.

  4. Shared Libraries that I can’t Play: So we’ve known for a while now that Apple broke the ability to connect to iTunes via the DAAP protocol, and that it’s not possible to connect to a shared iTunes library from Linux. If that’s the case, why does Rhythmbox still show these libraries as available? And how come it shows my library under this node? Why would I listen to my own shared library? Finally, I’ve found that even if I’m running Rhythmbox on another machine, I still can’t connect to my shared library. This feature seems to be downright broken – so why is it still in the build?
  5. The GUI and Backend are on One Thread: I keep about half of my music collection as lossless FLAC files. When I want to rip these files to my portable media device, they need to be converted to the Mp3 format. Turns out that Rhythmbox thinks it appropriate to transcode these files on the same thread that it uses to update its GUI, so that while this process is taking place, the app becomes laggy, and at times, downright unusable. Further, the application doesn’t seem to give me any control over the bitrate that my songs are transcoded to. Fuck!
  6. Lack of Playlist Options: Smart playlists in Rhythmbox are missing a rather key feature: Randomness. When filling the aforementioned mobile device with music, I would like to select a random 4GB of music from my top rated playlist. But I can’t. I can select 4GB of music by most every criteria except randomness, which means that I get the same 600 or so songs on my device every time I fill it. This is strange, because I can shuffle the contents of a static playlist; But I cannot randomly fill a smart playlist. Great.

    If you have a device that has a small amount of memory, this feature is essential

    It's funny; I really want to like Rhythmbox, but it's shit like this that ruins the experience for me

  7. Columns: What the fuck. Who wrote this part of the application? When I choose the columns that are visible in the main window, I can’t re-order them. That’s right. So the only order that I can put my columns in is Track, Title, Genre, Artist, Album, Year, Time, Quality, Rating. Can’t reorder them at all, and I have to go into the preferences menu to choose which ones are displayed, instead of being able to right-click on the column headers to select them like I can in every other program written in the last 10 years. This is just ridiculous. I know that the GTK+ toolkit allows you to create re-order-able columns, because I’ve seen it done.

    This is just so incredibly backward. I mean, columns are a standard part of the GTK+ toolkit, and I've seen plenty of other apps that do this properly.

    Why, for the love of God, can't these be re-ordered?

  8. The Equalizer is Balls: No presets, and no preamp. So I can set the EQ, and my settings are magically saved, but I can only have one setting, because there doesn’t appear to be a way to create multiple profiles. And louder music sounds like balls, because I can’t turn down the preamp, so I get digital distortion throughout my signal. It would be better to just not have an equalizer at all.

    I mean, it works. But...

  9. Context Menus Don’t Make Sense: Let’s just take a look at this context menu for a moment. There are three ways to remove a song from a playlist. You can Remove the song, which just removes it from the playlist, but not from your library or your hard drive. Alternatively, you can select Move to Trash, which does what you might expect – it removes the song from the playlist, the library, and your computer. I’ve got a problem with the naming conventions here. The purpose of Remove isn’t well explained, and confused the hell out of me at first. In addition, when browsing a mobile device that you’ve filled with music, the GUI breaks down even further. In this case, you can still hit Remove, which seems to remove the song from Rhythmbox’s listing, but leaves the file on the device. So now I have a file on my device that I can’t access. Great. The right-click menu also has the ability to copy and cut the song, even though there is no immediately obvious way to paste it. For that functionality, you’ll have to head up to the Edit menu.

    The right-click context menu

    I'm starting to run out of anger. The 10,000 papercuts that come along with this app are making me numb to it.

  10. No Command Line Tools: Now, normally, this wouldn’t bother me too much. A music library is something that’s meant to have a GUI, and doesn’t generally lend itself to working from the command line. In this case however, command line access to Rhythmbox would be really handy, because I’d like to set up a hot key on my keyboard that will skip songs or pause playback. Unfortunately, there’s no way to do that within the software, and it doesn’t have any command line arguments that I can call instead. Balls.

There you have it, 10 things that really ruin the Rhythmbox experience. While using this piece of software, I felt like the developers worked really hard to build something that was sort of comparable to Apple’s iTunes, and then stopped trying. That isn’t good enough! If we want to attract users to our platform of choice, and keep them here, we need to give them reasons to check it out, and even more to stick around. If I say to you that I want to have the best Linux media player, you tend to put the emphasis on the word Linux. Why not just make the best media player? GNOME is on at least half of all Linux desktops, if not more. Why hinder it with software that gives people a poor first impression of what Linux is capable of? Seriously guys, let’s step it up.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Empathy: What a Piece of Garbage

March 20th, 2010 17 comments

The Empathy instant messaging client for Gnome is not yet ready to be the default client on your favourite Gnome-based distribution. In fact, I can’t even make it work! Tyler B originally posted about this problem way back in October, but it doesn’t seem to have been fixed during the interim.

To demonstrate my point, allow me to walk you through the process of adding an MSN account, one of the officially supported protocols, to a clean install of Empathy:

  1. After launching Empathy, select Accounts from the Edit menu:

    The accounts manager for Empathy

    Hey guys, nice UI. Way to give that listbox a default width. And why the hell is this dialogue box so big, anyway?

  2. Select the MSN protocol from the dropdown menu, and hit the create button:

    The list of protocols that Empathy "supports"

    Wow, way to get icons for every protocol, guys. Either have icons, or don't, ok?

  3. Enter your MSN email address and account password, and hit the Connect button:

    Adding my account details to the new MSN account in Empathy

    Hey, see that Add button under the listbox? If I click that, I can add a new account, before even finishing with this one! Wow, recursion in a GUI! Sweet!

  4. With the new account created, hit the Close button, and watch as the authentication of your newly added MSN account fails:

    Authentication of my newly added account failed

    Wouldn't you know it, my freshly minted account failed to authenticate. I wonder what the problem is...

  5. Hit the Edit Account button, and open up the Advanced area of the Account Manager window that pops up:

    The Advanced area of the Account Manager window in Empathy

    Have you ever seen anything communicate over port 0? I haven't.
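In case it isn't obvious why that default is broken: port 0 is reserved, so a connection to it can never succeed. A quick sanity check, sketched with bash's `/dev/tcp` redirection (a bash-only feature, not plain `sh`):

```shell
#!/bin/bash
# Port 0 is reserved by IANA, so connect() to it can never succeed.
# This uses bash's /dev/tcp pseudo-device; in a shell without that
# feature the open fails too, just for a different reason.
if (exec 3<>/dev/tcp/127.0.0.1/0) 2>/dev/null; then
    echo "connected on port 0 (should never happen)"
else
    echo "connection to port 0 failed, as expected"
fi
```

Whatever machine you run this on, the failure branch fires, which is exactly why shipping 0 as the default port guarantees a broken account.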

  6. Open up your working copy of the trusty Pidgin instant messaging client, put the correct port number into the Port textbox in Empathy, and try to figure out how to save your changes:

    Empathy notifies me that I have unsaved changes

    Since I couldn't click apply, I hit Close. Empathy warned me that I hadn't saved my changes, and only then enabled the Apply button in the Account Manager window... Fuck me

  7. Watch as, even with the correct Server and Port information, Empathy continues to fail miserably at connecting to an MSN account:

    The contact list again

    Hey, it's still failing to connect. Imagine that.

The bottom line? This application is buggy, untested, incompatible, falsely-advertised garbage. I want my Pidgin back. It may have some rough edges, but at least it connects. How these glaring errors and this horrible GUI design ever got past the community is beyond me. I do hope that the Empathy developers have something even somewhat mediocre up their sleeves for the 2.29 release, but until then, I’m headed back to Pidgin.


Big distributions, little RAM

March 13th, 2010 19 comments

As you know I am currently running OpenSUSE 11.2 on my laptop. While I have enjoyed my time using it, I have noticed that this particular distribution tends to be on the heavy side of memory usage. This got me thinking. If OpenSUSE uses this much memory on my machine, how could it possibly run on a machine with 512MB of RAM (the lowest recommended amount)? If Ubuntu is the most popular distribution, but it is also, what I would call, a fully-fledged desktop distribution, then how does it manage given tighter memory constraints? And so the mini-experiment begins.

Points to make before I begin

  • This is not a very scientific study, but rather something I did in my spare time because I was curious.
  • I have picked the majority of the most popular desktop distributions. These distributions were chosen not because they were designed for minimal system specs but rather because they are popular and provide a full desktop experience out of the gate.
  • What do I mean by full desktop experience? The distribution should be easy enough for a novice Windows user to install, should come with all of the standard software for desktop activities, and should not require any fine tuning.
  • What you won’t find here: DSL, Arch Linux, Slackware (only because it failed to install in VirtualBox), Gentoo, or other ‘expert’ distributions. You also will not find netbook remixes or low-resource specific distributions. This experiment is designed to see how these big distributions run on little RAM, nothing more.
  • Please do not post things like ‘you forgot to test XYZ’ or other useless comments that don’t actually help the discussion. Yes I am sorry I missed your favourite distribution, but grab a tissue, clear the tears from your eyes and let’s all move on with our lives.

How I tested them

The process was identical for all tested distributions. I set up a new virtual machine inside of VirtualBox with the following specs:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86

The tests were all done using VirtualBox 3.1.4 on Windows, and I did not install VirtualBox tools (although some distributions may have shipped with them) nor did I change the screen resolution from the default 800×600.
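For anyone wanting to reproduce the setup, an equivalent VM can be created from the command line. This is only a sketch: the original experiment presumably used the VirtualBox GUI, and `VBoxManage` flags have shifted between versions, so the helper below just prints the commands for review rather than running them.

```shell
#!/bin/sh
# Sketch of an equivalent VM setup via the VBoxManage CLI.
# Assumptions: flag names match reasonably modern VirtualBox releases;
# the original VirtualBox 3.1 GUI workflow is not documented in the post.
# The function only PRINTS the commands so they can be checked first.
vm_setup_cmds() {
    vm="$1"
    cat <<EOF
VBoxManage createvm --name "$vm" --register
VBoxManage modifyvm "$vm" --memory 512 --ostype Linux
VBoxManage createhd --filename "$vm.vdi" --size 8192
VBoxManage storagectl "$vm" --name IDE --add ide
EOF
}

vm_setup_cmds test-distro
```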


I have broken the results down into a variety of categories and included fancy graphs just for you!

First boot memory (RAM) usage

For this test I installed the distribution and then on its first (post-installation) boot measured the amount of memory it used. This was to gauge the amount of resources that the stock distribution required before any updates.
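The post doesn't say exactly how usage was measured; one plausible approach (a sketch, not the author's confirmed method) is to read `/proc/meminfo` and subtract buffers and page cache, which is roughly what `free` reported as "used" at the time:

```shell
#!/bin/sh
# Sketch: report memory in use, excluding buffers and page cache,
# from /proc/meminfo. This is an assumed measurement method; the post
# does not state which tool was actually used.
mem_used_mb() {
    awk '/^MemTotal:/ {t=$2}
         /^MemFree:/  {f=$2}
         /^Buffers:/  {b=$2}
         /^Cached:/   {c=$2}
         END {printf "%d\n", (t - f - b - c) / 1024}' /proc/meminfo
}

echo "Used memory: $(mem_used_mb) MB"
```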

Average first boot memory (RAM) usage by packaging type

This shows the average memory usage broken down by the packaging type used.

Memory (RAM) usage after updates

This was a test to see whether system updates caused the memory usage to increase or decrease. I updated the system with all current updates and then rebooted and measured the resource usage again.

Memory (RAM) usage change after updates

This graph shows the usage difference between installation and post-updating. The formula I used was [after updates – initial installation].

Average memory (RAM) usage after updates by packaging type

Similar to above. Again this is broken down by packaging type.

Filesystem layout

This is a simple graph showing the partitions that each default setup created as well as the relative size of them.

Filesystems used in partitions

This graph shows the different filesystems used for the various partitions. For example, if a distribution has a value of 2 under ext4, that means it used ext4 in two different partitions.
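As a concrete illustration of how that tally works (over a made-up partition layout, not data from the experiment), counting the filesystem column of a device/mount-point/filesystem listing gives exactly the per-distribution values the graph plots:

```shell
#!/bin/sh
# Illustration only: tally filesystem types across a hypothetical
# default partition layout (device, mount point, filesystem).
# A count of 2 next to ext4 means two ext4 partitions.
sample='/dev/sda1 /boot ext3
/dev/sda2 / ext4
/dev/sda3 /home ext4
/dev/sda4 swap swap'

fs_counts() {
    printf '%s\n' "$sample" | awk '{print $3}' | sort | uniq -c
}

fs_counts
```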

Occurrence of filesystem by packaging type

This graph shows the number of distributions that used a certain type of filesystem. It is broken down by packaging type.

Install size after updates

This is the total OS install size after downloading and installing all of the updates. This should represent a fully updated version of the distribution.

Average install size after updates by packaging type

This shows the average install size of the distributions broken down by packaging type.


Make your own! Well, it is pretty obvious that some of these distributions would perform better than others given these low system specs. There are, however, other things to consider, for example which packaging type you prefer or, for that matter, which package manager.
