Archive for the ‘Linux’ Category

Python Development on Linux and Why You Should Too

September 1st, 2012 25 comments

If you’re a programmer and you use Linux but you haven’t yet entered the amazing world that is Python development, you’re really missing out on something special. For years, I dismissed Python as just another script kiddie language, eschewing it for more “serious” languages like Java and C#. What I was missing out on were the heady days of rapid development that I so enjoyed while hacking away on Visual Basic .NET in my early years of university.

There was a time when nary a day would go by without me slinging together some code into a crappy Windows Forms application that I wrote to play with a new idea or to automate an annoying task. By and large, these small projects ceased when I moved over to Linux, partially out of laziness, and partially because I missed the rapid prototyping environment that Visual Basic .NET provided. Let’s face it – Java and C# are great languages, but getting a basic forms app set up in either of them takes a significant amount of time and effort.

Enter Python, git/github, pip, and virtualenv. This basic tool chain has got me writing code in my spare time again, and the feeling is great. So without further ado, let me present (yet another) quick tutorial on how to set up a bad-ass Python development environment of your own:

Step 1: Python

If there’s one thing that I really love about Python, it’s the wide availability of libraries for most any task that one can imagine. An important part of the rapid prototyping frame of mind is to not get bogged down writing low-level libraries. If you have to spend an hour or two writing a custom database interface layer, you’re going to lose the drive that got you started on the project in the first place. Unless, of course, the purpose of the project was to re-invent the database interface layer. In that case, all power to you. In my experience, this is never a problem with Python, as its magical import statement unlocks a world of literally thousands of libraries covering just about anything you can dream up.

Install this bad boy with the simple command sudo apt-get install python and then find yourself a good Python tutorial with which to learn the basics. Alternatively, you can just start hacking away and use StackOverflow to fill in any gaps in your knowledge.

Step 2: Git/Github

In my professional life, I live and die by source control. It’s an excellent way to keep track of the status of your project, try out new features or ideas without jeopardizing the bits of your application that already work, and perhaps most importantly, it’s a life saver when you can’t figure out why in the hell you decided to do something that seemed like a good idea at the time but now seems like a truly retarded move. If you work with other developers, it’s also a great way to find out who to blame when the build is broken.

So why Git? Well, if time is on your side, go watch this 1 hour presentation by Linus Torvalds; I guarantee that if you know the first thing about source control, he will convince you to switch. If you don’t have that kind of time on your hands (and really, who does?) suffice it to say that Git plays really well with Github, and Github is like programming + social media + crack. Basically, it’s a website that stores your public (or private) repositories, showing off your code for all the world to see and fork and hack on top of. It also allows you to find and follow other interesting projects and libraries, and to receive updates when they make a change that you might be interested in.

Need a library to do fuzzy string matching? Search Git and find fuzzywuzzy. Install it into your working environment, and start playing with it. If it doesn’t do quite what you need, fork it, check out the source, and start hacking on it until it does! Github is an amazing way to expand your ability to rapid prototype and explore ideas that would take way too long to implement from scratch.
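Incidentally, if you want a taste of fuzzy matching before pulling anything in from Github, Python’s standard library can fake it: difflib’s SequenceMatcher computes a similarity ratio much like fuzzywuzzy’s ratio() does. A rough sketch (the strings are just made-up examples, and on 2012-era Ubuntu the binary is simply python rather than python3):

```shell
# Compare two strings with nothing but the Python standard library.
# SequenceMatcher returns a ratio between 0.0 and 1.0; scale it to 0-100.
python3 - <<'EOF'
from difflib import SequenceMatcher

def fuzzy_ratio(a, b):
    return round(SequenceMatcher(None, a, b).ratio() * 100)

print(fuzzy_ratio("fuzzy wuzzy was a bear", "fuzzy wuzzy was a bear"))  # identical: 100
print(fuzzy_ratio("fuzzy wuzzy was a bear", "wuzzy fuzzy had no hair"))
EOF
```

It won’t match fuzzywuzzy’s smarter scorers, but it’s plenty for a quick experiment.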

Get started by installing git with the command sudo apt-get install git-core. You should probably also skim through the git tutorial, as it will help you start off on the right foot.

Next, mosey on over to Github and sign up for an account. Seriously, it’s awesome, stop procrastinating and do it.
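Once git is installed and your Github account exists, a quick sanity check never hurts. Here’s a sketch of creating a throwaway local repository and making a first commit (the directory name, user name and email are placeholders — substitute your own):

```shell
mkdir myproject && cd myproject          # "myproject" is just a placeholder name
git init                                 # turn the directory into a git repository
git config user.name "Your Name"         # skip these two if you've set them globally
git config user.email "you@example.com"
echo "# My Project" > README.md          # something to commit
git add README.md
git commit -m "Initial commit"
git log --oneline                        # should list exactly one commit
```

If that all works, you’re ready to start pointing git at Github.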

Step 3: Pip

I’ve already raved about the third-party libraries for Python, but what I haven’t told you yet is that there’s an insanely easy way to get those libraries into your working environment. Pip is like apt, but just for Python libraries: it fetches packages from a central repository (PyPI) and installs them in one step. If you’re already familiar with a Linux package manager, then you know what I’m talking about. Remember the earlier example of needing a fuzzy string matching library in your project? Well with pip, getting one is as easy as typing pip install fuzzywuzzy. This will install the fuzzywuzzy library on your system, and make it available to your application in one easy step.

But I’m getting ahead of myself here: You need to install pip before you can start using it. For that, you’ll need to run sudo apt-get install python-setuptools python-dev build-essential && sudo easy_install -U pip

The other cool thing about pip? When you’re ready to share your project with others (or just want to set up a development environment on another machine that has all of the necessary prerequisites to run it) you can run the command pip freeze > requirements.txt to create a file that describes all of the libraries that are necessary for your app to run correctly. In order to use that list, just run pip install -r requirements.txt on the target machine, and pip will automatically fetch all of your project’s prerequisites. I swear, it’s fucking magical.
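In case you’re wondering, the requirements.txt that pip freeze produces is nothing fancy — just one pinned package==version pair per line. Something like this (the version numbers here are invented for illustration):

```
fuzzywuzzy==0.1
requests==0.13.1
simplejson==2.6.1
```

Pinning exact versions like this is what lets pip recreate the very same environment on another machine.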

Step 4: Virtualenv

As I’ve already mentioned, one of my favourite things about Python is the availability of third-party libraries that enable your code to do just about anything with a simple import statement. One of the problems with Python, though, is that trying to keep all of the dependencies for all of your projects straight can be a real pain in the ass. Enter virtualenv.

This is an application that allows you to create virtual working environments, complete with their own Python versions and libraries. You can start a new project, use pip to install a whole bunch of libraries, then switch over to another project and work with a whole bunch of other libraries, all without different versions of the same library ever interfering with one another. This technique also keeps the pip requirements files that I mentioned above nice and clean so that each of your projects can state the exact dependency set that it needs to run without introducing cruft into your development environment.

Another tool that I’d like to introduce you to at this time is virtualenvwrapper. Just like the name says, it’s a wrapper for virtualenv that allows you to easily manage the many virtual environments that you will soon have floating around your machine.

Install both with the command pip install virtualenv virtualenvwrapper

Once the installation has completed, you may need to modify your .bashrc profile to initialize virtualenvwrapper whenever you log into your user account. To do so, open up the .bashrc file in your home directory using your favourite text editor (no sudo required, since the file lives in your own home directory), or execute the following command: nano ~/.bashrc

Now paste the following chunk of code into the bottom of that file, save it, and exit:

# initialize virtualenvwrapper if present
if [ -f /usr/local/bin/virtualenvwrapper.sh ] ; then
. /usr/local/bin/virtualenvwrapper.sh
fi

Please note that this step didn’t seem to be necessary on Ubuntu 12.04, so it may only be essential for those running older versions of the operating system. I would suggest trying to use virtualenvwrapper with the instructions below before bothering to modify the .bashrc file.

Now you can make a new virtual environment with the command mkvirtualenv <project name>, and activate it with the command workon <project name>. When you create a new virtual environment, it’s like wiping your Python slate clean. Use pip to add some libraries to your virtual environment, write some code, and when you’re done, use the deactivate command to go back to your main system. Don’t forget to use pip freeze inside of your virtual environment to obtain a list of all of the packages that your application depends on.

Step 5: Starting a New Project

Ok, so how do we actually use all of the tools that I’ve raved about here? Follow the steps below to start your very own Python project:

  1. Decide on a name for your project. This is likely the hardest part. It probably shouldn’t have spaces in it, because Linux really doesn’t like spaces.
  2. Create a virtual environment for your project with the command mkvirtualenv <project name>
  3. Activate the virtual environment for your project with the command workon <project name>
  4. Sign into Github and click on the New Repository button in the lower right hand corner of the home page
  5. Give your new repository the same name as your project. If you were a creative and individual snowflake, the name won’t already be taken. If not, consider starting back at step 1, or just tacking your birth year onto the end of the bastard like we used to do with hotmail addresses back in the day.
  6. On the new repository page, make sure that you check the box that says Initialize this repository with a README and that you select Python from the Add .gitignore drop down box. The latter step will make sure that git ignores file types that need not be checked into your repository when you commit your code.
  7. Click the Create Repository button
  8. Back on your local machine, clone your repository with the command git clone git@github.com:<github user name>/<project name>.git. This will create a directory for your project that you can do all of your work in.
  9. Write some amazing fucking code that blows everybody’s minds. If you need some libraries (and really, who doesn’t?) make sure to use the pip install <library name> command.
  10. Commit early, commit often with the git commit -am “your commit message goes here” command
  11. When you’re ready to make your work public, post it to github with the command git push origin master
  12. Don’t forget to script out your project dependencies with the pip freeze > requirements.txt command
  13. Finally, when you’re finished working for the day, use the deactivate command to return to your normal working environment.

In Conclusion:

This post is way longer than I had originally intended. Suck it. I hope your eyes are sore. I also hope that by now, you’ve been convinced of how awesome a Python development environment can be. So get out there and write some amazing code. Oh, and don’t forget to check out my projects on github.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.
Categories: Jon F, Programming

Ubuntu 12.10 Alpha 3 (Report #1)

August 27th, 2012 No comments

Well it’s been a little while since I made the mistake (joking) of installing Ubuntu 12.10 Alpha 3. Here is what I’ve learned so far.

  1. My laptop really does not like the open source ATI graphics driver – and there are no proprietary drivers for this release yet. It’s not that the driver doesn’t perform well enough graphically, it’s just that it causes my card to give off more heat than the proprietary driver does. This in turn causes my laptop’s fan to run non-stop and drains my battery at a considerable rate.
  2. Ubuntu has changed the way they do updates in this release. Instead of the old Update Manager there is a new application (maybe just a re-skinning of the old one) that is much more refined and really quite simple. Interestingly enough, the old hardware drivers application is also gone; it has been merged into the update manager instead. Overall I’m neutral on both changes.

    Updates are quite frequent when running an alpha release

  3. There is a new Online Accounts application (part of the system settings) included in this release. This application seems to work like an extension of the GNOME keyring – saving passwords for your various online accounts (go figure). I haven’t really had a chance to play around with it too much yet but it seems to work well enough.

That’s it for now. I’m off to file a bug over this open source driver that is currently melting my computer. I’ll keep you posted on how that goes.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Categories: Tyler B, Ubuntu

Test driving the new Ubuntu (12.10)

August 26th, 2012 No comments

Call it crazy but I’ve decided to actually install an Ubuntu Alpha release, specifically Ubuntu 12.10 Alpha 3. Why would anyone in their right mind install an operating system that is bound to be full of bugs and likely destroy all of my data? My reasons are twofold:

  1. I regularly use Ubuntu or Ubuntu derivatives and would like to help in the process of making them better
  2. There are still a few quirks with my particular laptop that I would like to help iron out once and for all, hopefully correcting them in a more universal sense for Linux as a whole

So join me over the next few posts as I relate my most recent experiences running… shall we say, less than production code.


Categories: Tyler B, Ubuntu

Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012 No comments

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

t=`date --date "17:00" +%s`
sudo /bin/true
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). The sudo /bin/true line is simply there to cache your sudo credentials up front so the later commands don’t stall waiting for a password. It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
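The post doesn’t spell out the scheduling mechanism, but cron is the obvious fit. A crontab entry like the following would fire the script at midnight every night (the script path is made up — point it at wherever you saved yours, and since the script uses sudo you’d want this in root’s crontab, or passwordless sudo configured for those commands):

```
# m h dom mon dow  command
0 0 * * * /home/user/bin/sleep-until-5pm.sh
```

Edit your crontab with crontab -e (or sudo crontab -e for root’s) and paste in a line like the one above.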

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.


Mount entire drive dd image

May 21st, 2012 No comments

It is a pretty common practice to use the command dd to make backup images of drives and partitions. It’s as simple as the command:

dd if=[input] of=[output]

A while back I did just that and made a dd backup of not just a partition but of an entire hard drive. This was very simple (I just used if=/dev/sda instead of something like if=/dev/sda2). The problem came when I tried to mount this image. With a partition image you can just use the mount command like normal, i.e. something like this:

sudo mount -o loop -t [filesystem] [path to image file] [path to mount point]

Unfortunately this doesn’t make any sense when mounting an image of an entire hard drive. What if the drive had multiple partitions? What exactly would it be mounting to the mount point? After some searching I found a series of forum posts that dealt with just this scenario. Here are the steps required to mount your whole drive image:

1) Use the fdisk command to list the drive image’s partition table:

fdisk -ul [path to image file]

This should print out a lot of useful information. For example you’ll get something like this:

foo@bar:~$ fdisk -ul imagefile.img
You must set cylinders.
You can do this from the extra functions menu.

Disk imagefile.img: 0 MB, 0 bytes
32 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x07443446

        Device Boot      Start         End      Blocks   Id  System
imagefile.img1   *          63      499967      249952+  83  Linux
imagefile.img2          499968      997919      248976   83  Linux

2) Take a look in what that command prints out for the sector size (512 bytes in the above example) and the start # for the partition you want to mount (let’s say 63 in the above example).

3) Use a slightly modified version of the mount command (with an offset) to mount your partition.

mount -o loop,offset=[offset value] [path to image file] [path to mount point]

Using the example above I would set my offset value to be sector size * start sector, so 512*63 = 32256. The command would look something like this:

mount -o loop,offset=32256 imagefile.img /mnt/point

That’s it. You should now have that partition from the dd backup image mounted to the mount point.
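If you’d rather not do the multiplication by hand, shell arithmetic will happily do it for you. A small sketch using the numbers from the fdisk example above (the image file and mount point paths are the same illustrative ones):

```shell
sector_size=512      # from the "Units = sectors of 1 * 512" line of the fdisk output
start_sector=63      # the Start column for the partition you want to mount
offset=$((sector_size * start_sector))
echo "$offset"       # prints 32256
sudo mount -o loop,offset="$offset" imagefile.img /mnt/point
```

The same three lines work for any partition in the image — just swap in its Start value.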

Categories: Linux, Tyler B

How to test hard drive for errors in Linux

May 21st, 2012 No comments

I recently re-built an older PC from a laundry list of Frankenstein parts. However before installing anything to the hard drive I found I wanted to check it for physical errors and problems as I couldn’t remember why I wasn’t using this particular drive in any of my other systems.

From an Ubuntu 12.04 live CD I used GParted to delete the old partition on the drive. This let me start from a clean slate. After the drive had absolutely nothing on it, I went searching for an easy way to test it for errors. I stumbled across this excellent article and began using badblocks to scan the drive. Basically what this program does is write to every spot on the drive and then read it back to ensure that it still holds the data that was just written.

Here is the command I used. NOTE: This command is destructive and will damage the data on the hard drive. DO NOT use this if you want to keep the data that is already on the drive. Please see the above linked article for more information.

badblocks -b 4096 -p 4 -c 16384 -w -s /dev/sda

What does it all mean?

  • -b sets the block size to use. Most drives these days use 4096 byte blocks.
  • -p sets the number of passes to use on the drive. When I used the option -p 4 above it means that it will write/read from each block on the drive 4 times looking for errors. If it makes it through 4 passes without finding new errors then it will consider the process done.
  • -c sets the number of blocks to test at a time. This can help to speed up the process but will also use more RAM.
  • -w turns on write mode. This tells badblocks to do a write test as well.
  • -s turns on progress showing. This lets you know how far the program has gotten testing the drive.
  • /dev/sda is just the path to the drive I’m scanning. Your path may be different.


Sabayon Linux – Stable if not without polish

April 28th, 2012 3 comments

I have been running Sabayon Linux (Xfce) for the past couple of months and figured I would throw a post up on here describing my experience with it.

Reasons for Running

The reason I tried Sabayon in the first place is because I was curious what it would be like to run a rolling release distribution (that is, a distribution that you install once and that just updates forever with no need to re-install). After doing some research I discovered a number of possible candidates but quickly narrowed it down based on the following reasons:

  • Linux Mint Debian Edition – this is an excellent distribution for many people but for whatever reason every time I update it on my hardware it breaks. Sadly this was not an option.
  • Gentoo – I had previously been running Gentoo and while it is technically a rolling release I never bothered to update it because it just took too long to re-compile everything.
  • Arch Linux – Sort of like Gentoo but with binary packages, I turned this one down because it still required a lot of configuration to get up and running.
  • Sabayon Linux – based on Gentoo but with everything pre-compiled for you. Also takes the ‘just works’ approach by including all of the proprietary and closed source codecs, drivers and programs you could possibly want.

Experience running Sabayon

Sabayon seems to take a change-little approach to packaging applications and the desktop environment. What do I mean by this? Simply that if you install the GNOME, KDE or Xfce versions you will get them how the developers intended – there are very few after-market modifications done by the Sabayon team. That’s not necessarily a bad thing however, because as updates are made upstream you will receive them very quickly thereafter.

This distribution does live up to its promise with the codecs and drivers. My normally troublesome hardware has given me absolutely zero issues running Sabayon which has been a very nice change compared to some other, more popular distributions (*cough* Linux Mint *cough*). My only problem with Sabayon stems from Entropy (their application installer) being very slow compared to some other such implementations (apt, yum, etc). This is especially apparent during the weekly system wide updates which can result in many, many package updates.

Final Thoughts

For anyone looking for a down to basics, Ubuntu-like (in terms of ease of install and use), rolling release distribution I would highly recommend Sabayon. For someone looking for something a bit more polished or extremely user friendly, perhaps you should look elsewhere. That’s not to say that Sabayon is hard to use, just that other distributions might specialize in user friendliness.


Big distributions, little RAM 4

April 9th, 2012 No comments

It’s that time again. Like before I’ve decided to re-run my previous tests this time using the following distributions:

  • Debian 6.0 (GNOME)
  • Kubuntu 11.10 (KDE)
  • Linux Mint 12 (GNOME)
  • Linux Mint 201109 LXDE (GNOME)
  • Mandriva 2011 (KDE)
  • OpenSUSE 12.1 (GNOME)
  • OpenSUSE 12.1 (KDE)
  • Sabayon 8 (GNOME)
  • Sabayon 8 (KDE)
  • Sabayon 8 (Xfce)
  • Ubuntu 11.10 (Unity)
  • Ubuntu 12.04 Beta 2 (Unity)
  • Xubuntu 11.10 (Xfce)

I will be testing all of this within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.1.0 on Windows 7, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between April 2nd, 2012 and April 9th, 2012 so your results may not be identical.


Following in the tradition of my previous posts I have once again gone through the effort to bring you nothing but the most state of the art in picture graphs for your enjoyment.

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it either wasn’t reasonably possible to install it in 512MB of RAM (e.g. Fedora 16, which requires 768MB) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, there may be some distributions that don’t appear on all of the graphs, for example Mandriva. In the case of Mandriva the distribution would not allow me to successfully install the updates, and so I only have its first boot RAM usage available. Finally, when I tested Debian I was unable to test before / after applying updates because it seemed to have applied the updates during install. As always, feel free to run your own tests.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.


As before I’m going to leave you to drawing your own conclusions.


Oh Gentoo

December 22nd, 2011 6 comments

Well it’s been a couple of months now since the start of Experiment 2.0 and I’ve had plenty of time to learn about Gentoo, see its strengths and… sit waiting through its weaknesses. I don’t think Gentoo is as bad as everyone makes it out to be, in fact, compared to some other distributions out there, Gentoo doesn’t look bad at all.

Now that the experiment is approaching its end I figured it would be a good time to do a quick post about my experiences running Gentoo as a day-to-day desktop machine.


Gentoo is exactly what you want it to be, nothing more. Sure there are special meta-packages that make it easy to install things such as the KDE desktop, but the real key is that you don’t need to install anything that you don’t want to. As a result Gentoo is fast. My startup time is about 10-20 seconds and, if I had the inclination to do so, I could trim it down even further through optimization.

Packages are also compiled with your own set of custom options and flags so you get exactly what you need, optimized for your exact hardware. Being a more advanced (read: expert) oriented distribution, it will also teach you quite a bit about Linux and software configuration as a whole.


Sadly Gentoo is not without its faults. As mentioned above Gentoo can be whatever you want it to be. The major problem with this strength in practice is that the average desktop user just wants a desktop that works. When it takes days of configuration and compilation just to get the most basic of programs installed it can be a major deterrent to the vast majority of users.

Speaking of compiling programs, I find this aspect of Gentoo interesting from a theoretical perspective, but I honestly have a hard time believing that it makes enough of a difference to be worth sitting through the hours (or days) of compiling it takes just to get some things installed. It’s so bad that I actually haven’t bothered to re-sync and update my whole system in over 50 days for fear that it would take forever to re-compile and re-install all of the updated programs and libraries.

Worse yet, even when I do have programs installed they don’t always play nicely with one another. Gentoo offers a package manager, portage, but it still fails at some dependency resolution – oftentimes making you choose between uninstalling existing programs just to install a new one, or not installing the new one at all. Another example of things being more complicated than they should be is my system sound. Even though I have pulseaudio installed and configured, my system refuses to play audio from more than one program at a time. These are just a few examples of problems I wouldn’t have to deal with on another distribution.


Well, it’s been interesting but I will not be sticking with Gentoo once this experiment is over. There are just too many little things that make this more of an educational experience than a real day-to-day desktop. While I certainly have learned a lot during this version of the experiment, at the end of the day I’d rather things just work right the first time.


It should not be this hard to change my volume

December 22nd, 2011 1 comment

Normally my laptop is on my desk at home plugged into a sound system, so I never have to change the volume. However I’m currently on holiday, so that means I’m carrying my laptop around. Last night, I had the audacity to lower the volume on my machine. After all, nobody wants to wake up their family at 2am with “The history of the USSR set to Tetris.flv”. Using the media keys on my laptop did nothing. Lowering the sound in KMix did nothing. Muting in KMix did nothing. I figured that something had gone wrong with KMix and maybe I should re-open it. Well, it turns out that was a big goddamn mistake, because that resulted in me having no sound.

It took about 30 minutes to figure out, but the solution ended up being unmuting my headphone channel in alsamixer. It looks like for whatever reason, alsamixer and KMix were set to different master channels (headphone/speaker and HDMI, respectively), thus giving KMix (and my media keys) no actual control over volume.

Categories: Hardware, Kubuntu, Sasha D

The Linux Experiment Podcast #5.1: Experiment 2.0

December 5th, 2011 No comments

Hosts: Aine B, Dave L, Jake B, Jon F, Matt C, Phil D, Tyler B, & Warren G

Missing in action: Dana H, Sasha D

Show length: 0:31:16


The fifth podcast from the guys at The Linux Experiment. In this reunion episode we kick off the second round of The Linux Experiment.

In this episode:

  • New recruits.
  • Plans for this experiment.
  • And lots more!



How to play Red Alert 2 on Linux

December 4th, 2011 No comments

The other day I finally managed to get the classic RTS game Command & Conquer Red Alert 2 running on Linux, and running well in fact. I started by following the instructions here with a few tweaks that I found on other forums that I can’t seem to find links to anymore. Essentially the process is as follows:

  • Install Red Alert 2 on Windows. Yes you just read that right. Apparently the Red Alert 2 installer does not work under wine so you need to install the game files while running Windows.
  • Update the game and apply the CD-Crack via the instructions in the link above. Note that this step may have some legal issues associated with it. If in doubt seek professional legal advice.
  • Copy the game’s install directory (under Program Files) over to Linux.
  • Apply speed fix in the how-to section here.
  • Run game using wine and enjoy.
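The last couple of steps boil down to something like the following sketch. Every path and the executable name here are assumptions — adjust them to wherever your Windows install and copied files actually live:

```shell
# Sketch of the copy-and-run steps; paths and executable name are examples.
RA2_DIR="$HOME/games/ra2"
mkdir -p "$RA2_DIR"
# Copy the installed game over, e.g. from a mounted Windows partition:
# cp -r "/mnt/windows/Program Files/EA Games/Command & Conquer Red Alert II/." "$RA2_DIR/"
# Then launch it under wine (executable name varies by version/patch):
# cd "$RA2_DIR" && wine game.exe
echo "game directory: $RA2_DIR"
```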

It is a convoluted process that is, at times, ridiculous, but it’s worth it for such a classic game. Even better, there is a bit of a ‘hack’ that will allow you to play RA2’s multiplayer IPX network mode over the more modern TCP/IP protocol. The steps for this hack can also be found at the WineHQ link above.

Happy gaming!

Categories: Linux, Tyler B Tags: , , ,

Linux from Scratch: I’ve had it up to here!

November 27th, 2011 9 comments

As you may be able to tell from my recent, snooze-worthy technical posts about compilers and makefiles and other assorted garbage, my experience with Linux from Scratch has been equally educational and enraging. Like Dave, I’ve had the pleasure of trying to compile various desktop environments and software packages from scratch, into some god-awful contraption that will let me check my damn email and look at the Twitters.

To be clear, when anyone says I have nobody to blame but myself, that’s complete hokum. From the beginning, this entire process was flawed. The last official LFS LiveCD has a kernel that’s enough revisions behind to cause grief during the setup process. But I really can’t blame the guys behind LFS for all my woes; their documentation is really well-written and explains why you have to pass fifty --do-not-compile-this-obscure-component-or-your-cat-will-crap-on-the-rug arguments.

Patch Your Cares Away

CC attribution licensed from benchilada


I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Building glibc for LFS from Ubuntu by replacing awk

November 23rd, 2011 No comments

If you run into the following error trying to build LFS from an Ubuntu installation:

make[1]: *** No rule to make target `/mnt/lfs/sources/glibc-build/Versions.all', needed by `/mnt/lfs/sources/glibc-build/abi-versions.h'. Stop.

The mawk utility, installed with Ubuntu and symlinked to /usr/bin/awk by default, does not properly handle the regular expressions in this package. Perform the following commands:

# apt-get install gawk
# rm -f /usr/bin/{m,}awk
# ln -snf /usr/bin/gawk /usr/bin/awk

Then you’re just a make clean; ./configure --obnoxious-dash-commands; make; make install away from success.
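To confirm the swap took, a quick sanity check of which awk is now active (a sketch — output will differ per system):

```shell
# Sanity check: which awk is active, and where does the symlink point?
awk_path=$(command -v awk)
ls -l "$awk_path"                                 # on Ubuntu, usually a symlink
ver=$(awk --version 2>/dev/null | head -n1)       # gawk prints "GNU Awk ..."
[ -n "$ver" ] || ver=$(awk -W version 2>&1 | head -n1)   # mawk uses -W version
echo "awk reports: $ver"
```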


Reinstalling LFS soon: it’s not my fault, I swear!

November 17th, 2011 No comments

I went to play around with my Linux from Scratch installation after getting a working version of KDE 4.7.3 up and running. For a few days my system had been standing up to light web browsing and SSH shenanigans, and hadn’t even dropped a remote connection.

This was until this evening, when I decided to reboot to try and fix a number of init scripts that were throwing some terrible error about problems in lsb_base under /lib/ somewhere. The system came back up properly, but when I startx’d, I was missing borders for most of my windows. Appearance Preferences under KDE wouldn’t even launch, claiming a segmentation fault.

There were no logs available to easily peruse, but after a few false starts I decided to check the filesystem with fsck from a bootable Ubuntu 11.04 USB stick. The results were not pretty:

root@ubuntu:~# fsck -a /dev/sdb3
fsck from util-linux-ng 2.17.2
/dev/sdb3 contains a file system with errors, check forced.
/dev/sdb3: Inode 1466546 has illegal block(s).
/dev/sdb3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
	(i.e., without -a or -p options)

Running fsck without the -a option forced me into a nasty scenario, where like a certain Homer Simpson working from his home office, I repeatedly had to press “Y”:

At the end of it, I’d run through the terminal’s entire scroll buffer and continued to get errors like:

Inode 7060472 (/src/kde-workspace-4.7.3/kdm/kcm/main.cpp) has invalid mode (06400).
Clear? yes

i_file_acl for inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) is 33554432, should be zero.
Clear? yes

Inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) has invalid mode (00).
Clear? yes

i_file_acl for inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) is 3835562035, should be zero.
Clear? yes

Inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) has invalid mode (0167010).
Clear? yes

I actually gave up after seeing several thousand of these inodes experiencing problems. (Later I learned that fsck -y will automatically answer yes, which means I’ve improved my productivity several thousand times!)
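If you want to see fsck -y in action without risking a real disk, you can point it at a throwaway file-backed image — a sketch that requires e2fsprogs; on the real device you’d run fsck -y /dev/sdb3 against the unmounted filesystem instead:

```shell
# Run fsck -y against a scratch ext2 image so nothing real is at stake.
PATH="/sbin:/usr/sbin:$PATH"
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
if command -v mke2fs >/dev/null 2>&1 && command -v e2fsck >/dev/null 2>&1; then
    mke2fs -q -F "$img"          # make a filesystem inside the file
    e2fsck -f -y "$img"          # -f: force the check, -y: answer yes to all
    fsck_ran="yes"
else
    fsck_ran="no (e2fsprogs not installed)"
fi
rm -f "$img"
echo "fsck ran: $fsck_ran"
```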

I was pretty quick to assess the problem: the OCZ Vertex solid state drive where I’d installed Linux has been silently corrupting data as I’ve written to it. Most of the problem sectors are in my source directories, but a few happened to be in my KDE installation on disk. This caused oddities such as power management not loading and the absence of window borders.

So what goes on from here? I plan to replace the OCZ drive under warranty and rebuild LFS on my spinning disk drive, but this time I’ll take my own advice and start building from this LiveUSB Ubuntu install, with an up-to-date kernel and where .tar.xz files are recognized. Onward goes the adventure!

Categories: Hardware, Jake B, Linux from Scratch Tags:

Notifications with Irssi in Screen

November 13th, 2011 2 comments

One of the biggest problems with running irssi in a terminal inside screen is that, by default, there are no notifications when you are mentioned or when there is activity in a channel. Running these commands will enable those notifications, and they can be tailored to the ones you actually want.

/set beep_when_window_active ON 
/set beep_when_away ON 
/set bell_beeps ON
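One related gotcha: screen itself may be swallowing those beeps as a “visual bell.” If so, a couple of lines in ~/.screenrc (a common companion tweak — adjust to taste) will turn the audible bell back on:

```
# ~/.screenrc
vbell off                      # pass the bell through instead of flashing
bell_msg "Bell in window %n"   # note which window beeped
```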

I am currently running ArchLinux (x86_64).
Check out my profile for more information.
Categories: Dave L, Linux Tags: , , ,

Wireless Networking: Using a Cisco/Linksys WUSB54GC on Gentoo

November 13th, 2011 1 comment

We live in an old house, which has the unfortunate side-effect of lacking a wired network of any kind. All of our machines connect to a wireless network, and my desktop is no exception. I’ve got an old WUSB54GC wireless stick that was manufactured some time in 2007. In computer years, this is way old hardware. But with a bit of work, I managed to get it working with my Gentoo install.

This bitch is old... but it works

I started out by installing the NetworkManager applet with a tutorial on the Gentoo Wiki. This was a straightforward process, and after a restart, the applet icon appeared in the top right corner of my screen. Left-clicking the icon drops down a menu that lists your wireless interfaces; under the Wireless Networks heading, mine said that it was missing the firmware necessary to talk to my hardware.

The next step was to look around the net and figure out the firmware/kernel module combination that supports this stick. I found my answer over at the SerialMonkey project, which is run by a group that took on maintenance of older Ralink firmware after the company of the same name dropped support. According to the SerialMonkey hardware guide, my stick (or at least a very similar stick called the WUSB54GR) works with the rt73usb kernel module and related firmware.

This known, there are two methods of proceeding. Those running older kernels may need to manually compile the necessary packages using instructions similar to these, from the Arch Linux project. For more modern kernels, the Gentoo project provides a Wiki entry detailing the necessary steps.

After following the steps in the Gentoo Wiki entry, I restarted my system, and now have full wireless support. Genius!
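If you’re debugging a similar stick, a few quick checks can confirm the module and firmware are in place. This is a guarded sketch — these tools may be absent or restricted on some systems:

```shell
# Quick checks for the rt73usb module/firmware combo; each step is optional
# and may be unavailable depending on your kernel and permissions.
command -v modinfo >/dev/null 2>&1 && modinfo rt73usb 2>/dev/null | head -n3
command -v lsusb   >/dev/null 2>&1 && lsusb 2>/dev/null | grep -i ralink
command -v dmesg   >/dev/null 2>&1 && dmesg 2>/dev/null | grep -i rt73 | tail -n5
checked="yes"
echo "checks attempted: $checked"
```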

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.
Categories: Gentoo, Jon F Tags: , , , ,

Can you install Gnome3 on Gentoo?

November 13th, 2011 1 comment

So my base Gentoo installation came with Gnome 2.32, which while solid, lacks a lot of the prettiness of Gnome’s latest 3.2 release. I thought that I might like to enjoy some of that beauty, so I attempted to upgrade. Because Gnome 3.2 isn’t in the main portage tree yet, I found a tutorial that purported to walk me through the upgrade process using an overlay, which is kind of like a testing branch that you can merge into the main portage tree in order to get unsupported software.

Since the tutorial that I linked above is pretty self-explanatory, I won’t repeat the steps here. There’s also the little fact that the tutorial didn’t work worth a damn…

Problem 1: Masked Packages

#required by dev-libs/folks-9999, 
required by gnome-base/gnome-shell-3.2.1-r1, 
required by gnome-base/gdm-[gnome-shell], 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=dev-libs/libgee- introspection
#required by gnome-extra/sushi-0.2.1, 
required by gnome-base/nautilus-3.2.1[previewer], 
required by app-cdr/brasero-3.2.0-r1[nautilus], 
required by media-sound/sound-juicer-2.99.0_pre20111001, 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=media-libs/clutter-gtk-1.0.4 introspection

This one is pretty simple to fix: since these are USE-flag requirements rather than keywords, you can add the lines >=dev-libs/libgee- introspection and >=media-libs/clutter-gtk-1.0.4 introspection to the file /etc/portage/package.use, or you can run emerge -avuDN world --autounmask-write to have portage write the changes for you.

Problem 2: Permissions

--------------------------- ACCESS VIOLATION SUMMARY ---------------------------
LOG FILE "/var/log/sandbox/sandbox-3222.log"

FORMAT: F - Function called
FORMAT: S - Access Status
FORMAT: P - Path as passed to function
FORMAT: A - Absolute Path (not canonical)
FORMAT: R - Canonical Path
FORMAT: C - Command Line

F: mkdir
S: deny
P: /root/.local/share/webkit
A: /root/.local/share/webkit
R: /root/.local/share/webkit
C: ./epiphany --introspect-dump=

This one totally confused me. If I’m reading it correctly, the install script lacks the permissions necessary to write to the path /root/.local/share/webkit/. The odd part of this is that the script is running as the root user, so this simply shouldn’t happen. I was able to give it the permissions that it needed by running chmod 777 /root/.local/share/webkit/, but I had to start the install process all over again, and it just failed again with a similar error the next time that it attempted to write a file to that directory. What the fuck?

At 10pm, I couldn’t be bothered to find a fix for this… I used the tutorial’s instructions to roll back the changes, and I’ll try again later if I’m feeling motivated. In the meantime, if you know how to fix this process, I’d love to hear about it.

Categories: Gentoo, God Damnit Linux, Jon F Tags: , ,

Fixing build issues with phonon-backend-gstreamer-4.5.1

November 9th, 2011 No comments

I’ve decided to try and upgrade my LFS system to the latest version of KDE (4.7.3 as of the time of this writing) and correspondingly needed to upgrade phonon-backend-gstreamer. Unfortunately, following the previous version’s compilation instructions provided this nasty message:

[ 4%] Building CXX object gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o
In file included from /sources/phonon-backend-gstreamer-4.5.1/gstreamer/audiooutput.cpp:22:0:
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:38: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:38: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:69: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:42: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:263:45: error: ‘Phonon::MediaController::NavigationMenu’ has not been declared
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:42: error: template argument 1 is invalid
make[2]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o] Error 1
make[1]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/all] Error 2
make: *** [all] Error 2

To fix this issue, make sure you have the latest GStreamer and phonon-backend-xine installed; beyond that, I followed some of the advice from this KDE forum topic.

If, like me, you installed Qt into /opt/qt, create a symbolic link inside the Qt directory pointing to your system’s latest version of phonon. For later success with kde-runtime, also replace the libphonon libraries in /opt/qt-4.7.1/lib with links to your recently compiled /usr/lib64 versions (adjust paths to /usr/lib on 32-bit systems):

# mv /opt/qt-4.7.1/include/phonon /tmp
# ln -snf /usr/include/phonon /opt/qt-4.7.1/include/phonon
# cd /opt/qt-4.7.1/lib
# rm -rf libphonon*
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/

Then rerun the compilation process for phonon-backend-gstreamer and voila, no more errors. (You’ll probably still have more issues to work out, but this gets past the phonon-backend-gstreamer blockade.)


Why do so many open source programs throw C/C++ warnings?

November 8th, 2011 4 comments

Seriously, I’d like to know, because this is a bit ridiculous.

For all the heavily encouraged coding styles out there, nearly all the open source software packages I’ve had to compile for Linux from Scratch have either

  1. Insanely chatty defaults for compilation; that is, GCC provides ‘notices’ about seemingly minor points, or
  2. A large number of warnings when compiling – unused variables, overloaded virtual functions, and deprecated features soon to disappear.

In the worst case, some of these warnings appear to be potential problems with the source. Leaving potentially uninitialized variables around seems to be a great way to run into runtime crashes if someone decides to use them. Overloading virtual functions with a different method signature has the same possible impact. And comparing signed and unsigned numbers is just a recipe for a crash or unpredictable behaviour down the line.

I just don’t get it. In my former development experiences, any compiler notifications were something to pay attention to. Usually when first developing an application, they were indicative of a typo, a forgotten variable or just general stupidity with the language. If a warning was absolutely unavoidable, it was specifically ignored in the build scripts with a clear explanation as to why.

So what’s the case with your programs? Have you noticed any stupid or insightful compiler messages scrolling past?
