Archive for the ‘God Damnit Linux’ Category

Ubuntu 14.04 VNC woes? Try this!

April 28th, 2014 No comments

If, like me, you’ve recently upgraded to Ubuntu 14.04 only to find out that, for whatever reason, you can no longer VNC to that machine (either from Windows or even from an existing Linux install), have no fear: I’ve got the fix for you!

Simply open up a terminal and run the following line:

gsettings set org.gnome.Vino require-encryption false

Obviously, if you rely on VNC encryption you may not want to do this, but if you’re like me and just use VNC on the local network, it should be safe enough to disable.
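To double-check that the change took, you can read the key back; it should now report false:

gsettings get org.gnome.Vino require-encryption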

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Change the default sort order in Nautilus

February 9th, 2014 1 comment

The default sort order in Nautilus has been changed to sorting alphabetically by name, and the option to change this seems to be broken. For example, I prefer my files to be sorted by type, so I ran

dconf-editor

and browsed to org/gnome/nautilus/preferences. From there you should be able to change the value by using the drop down:


Seems easy enough

Unfortunately the only option available is modification time. Once you change it to that you can’t even go back to name. This also appears to be a problem when trying to set the value using the command line interface like this:

dconf write /org/gnome/nautilus/preferences/default-sort-order type

I received an “error: 0-4:unknown keyword” message when I tried to run that. (In hindsight, dconf write expects a serialized GVariant, so the string value probably needed an extra layer of quotes – something like "'type'".)

Thanks to the folks over on the Ask Ubuntu forum I was finally able to get it to change by issuing this command instead:

gsettings set org.gnome.nautilus.preferences default-sort-order type

where type could be swapped out for whatever you prefer it to be ordered by.
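If you’re not sure what values are accepted, gsettings can list the key’s allowed range:

gsettings range org.gnome.nautilus.preferences default-sort-order

On my system this enumerates options like name, size, type and mtime, though the exact list may vary by Nautilus version.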

Great Success!

A tale of a gillion installs

January 21st, 2014 1 comment

Install number one: LMDE 201303.  I was hoping for the best of both worlds, but I got driver issues instead.  LMDE has known ATI proprietary driver install issues.  I followed the Mint instructions and got it working, then got a blank screen after too much tinkering.  I was surprised that LMDE had this problem since Debian doesn’t, and LMDE should be a more polished version of Debian.  This wasn’t a big deal, but I decided to give Debian a chance.

Install number two: debian stable (7.3).  The debian website has a convoluted maze of installation links, but it’s still fairly easy to find an ISO for the stable version you need.  I installed from the live ISO using a USB key.  The installation and ATI driver update went smoothly, and I thought all was well at first.  I soon realized that about 50% of reboots failed; the audio driver was the culprit.  I installed the latest driver from Realtek/ALSA and it sort of worked, but I was still getting some crap from dmesg and the audio would crackle with some files.

Back to LMDE.  I live booted it to see if the same issue existed there, and it did.

Time for Mint 16.  As expected everything worked.  Man I really wish Ubuntu hadn’t chosen the dark side – their OS is really good.  All of these distros use ALSA audio drivers, so why is Ubuntu the only one that works?   Kernel versions:

debian stable (7.3):
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.24.
Mint 16:
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version k3.11.0-12-generic.

One more thing to check: what kernel version is the real debian testing “jessie” using?

LMDE 201303 = 3.2
debian stable 7.3 = 3.2
Mint 16 = 3.11
debian testing “jessie - Jan 2014” = 3.12!

I was determined to try debian testing before settling for Mint.  I tried a netinstall from USB key, which killed my PC’s GRUB bootloader.  The debian stable live ISO USB key decided to stop working as well.   I finally got a real DVD debian stable install to work, changed the repositories to point to “jessie” and upgraded.  I was very surprised to see this worked!   I’m having some problems with bash, but all of my day-to-day software is up and running.  Nice.
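For reference, the stable-to-testing jump is roughly this (as root, and assuming a stock wheezy sources.list):

sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade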

TL;DR: LMDE was using an old kernel so I needed the real debian testing (jessie) to solve my driver problems.

Screen brightness work around (part 2)

January 19th, 2014 No comments

As mentioned before, I am having some issues with my laptop’s hardware and controlling the screen brightness. Previously my work around was to set acpi_backlight=vendor in the GRUB command line options. While this resulted in having full screen brightness, it also removed my ability to use my keyboard function keys to adjust the screen brightness on the fly (not so good when you’re on battery). Removing this option allowed me to manually adjust my screen brightness again, but once again the laptop always started at zero brightness. What to do?

While far from a perfect solution, my current work around is to use xdotool to simulate key presses on login, which raises the screen brightness for me automatically. Here is the script that I run on startup:

for i in {1..20}; do
     xdotool key XF86MonBrightnessUp
done

While this works great, it still isn’t perfect. Because xdotool requires an X session, I cannot run it before one is created. If you were unaware, the login screen (in my case MDM) runs before your user’s X session exists – your session is only started once you successfully log in. So while this will automatically brighten my screen, it won’t do so until I type in my username and password, leaving me to either type into a fully dark screen or manually adjust the brightness up enough to see what I’m doing. Hopefully I’ll have a better solution sooner rather than later…

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Fix annoying high-pitched sound

November 28th, 2013 No comments

If you’re like me, you’ve been suffering through a crazy high-pitched sound emanating from your laptop speakers. Apparently this is a common issue with certain types of audio devices. Thankfully, via the power of the Internet, I’ve finally been able to find a solution!

It turns out that the issue actually stems from some power saving features (of all things) in the Intel HDA driver. So I simply turned it off and guess what? It worked.

1) Open up (using root) /usr/lib/pm-utils/power.d/intel-audio-powersave

2) Replace or comment out the line (in the stock script it should look something like this):

INTEL_AUDIO_POWERSAVE=${INTEL_AUDIO_POWERSAVE:-true}

3) In its place put the line:

INTEL_AUDIO_POWERSAVE=false

4) Reboot

Hopefully this also works for you but if not check out the site I found the solution at for some additional tips/things to try.
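If editing that script doesn’t stick (say, after a pm-utils update), another commonly reported approach is to turn power saving off in the driver itself. The snd_hda_intel module and its power_save option are standard for the Intel HDA driver, though the modprobe.d file name below is just my suggestion:

echo "options snd_hda_intel power_save=0" | sudo tee /etc/modprobe.d/snd-hda-intel-powersave.conf

A reboot (or reloading the module) makes it take effect.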

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Fix no screen brightness on boot problem

October 14th, 2013 No comments

I recently upgraded my laptop to a brand new Lenovo Y410P and promptly replaced Windows 8 with a Linux install. Unfortunately I immediately ran into a very strange driver(?) issue where, on boot, the computer would default to the absolute lowest screen brightness level. This meant that I would need to manually adjust the screen brightness up just to see the login screen. Thankfully after some help from the excellent people over on the Ubuntu Forums I managed to find a very easy work around.

1) As root open up /etc/default/grub

I did this by simply issuing the following command:

sudo nano /etc/default/grub

2) Find the line that says GRUB_CMDLINE_LINUX= and add “acpi_backlight=vendor” to the list of options.
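Assuming no other options were already set, the line should end up looking like this:

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"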

3) From a terminal run this command to update GRUB

sudo update-grub

4) Reboot!

That’s pretty much it. My computer now boots with the correct screen brightness as one would expect.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

And I thought this would be easy…

September 22nd, 2013 1 comment

Some of you may remember my earlier post about contemplating an upgrade from Windows Home Server (Version 1) to a Linux alternative. Since then, I have decided the following:

Amahi isn’t worth my time


This conclusion was reached after a fruitless install of the latest Amahi 7 release on the 500 GB ‘system’ drive included with the EX470. After backing up the Windows Home Server to a single external 2 TB drive (talk about nerve-wracking!), I popped the drive into a spare PC and installed Amahi with the default options.


No, I’m not 13. Yes, this image accurately reflects my frustrations.

Moving the drive back into the EX470 yielded precisely zero results, no matter what I tried – the machine would not respond to a ‘ping’ command, and since I’ve opted to try and do this without a debug board, I don’t even have VGA to tell me what the hell is going on. So, that’s it for Amahi.

When all else fails, Ubuntu


After deciding that I really didn’t feel like a repeat of my earlier Fedora experiment, I decided to try out the Linux ‘Old Faithful’ as it were – Ubuntu 12.04 LTS. I opted for the LTS version due to – well, you know – the ‘long-term support’ deal.

Oh, and I upgraded my storage (new 1 TB system drive not shown, and I apologize for the potato-quality image):


The only kind of ‘TB’ I like. Not tuberculosis.


Following from the earlier Amahi instructions, I popped the primary 1 TB drive into a spare machine and allowed the Ubuntu installer to do its thing. Easy enough! From there, I installed the following two additional items (having to add an additional repository for the latter):

  • openssh-server

This allows me to easily control the machine through SSH, and – as I understand it – is pretty much a must for someone wanting to control a headless box. Setup was easy-breezy, in that it required nothing at all.

  • Greyhole

For those unfamiliar, Greyhole is – in their own words – an ‘Easily expandable and redundant storage pool for home servers’. One of my favourite things about WHS v1 was its ‘disk pooling’ capability – essentially a JBOD with software-managed share duplication, ensuring that each selected share was copied over to one other disk in the array.
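Configuration lives in /etc/greyhole.conf; as a rough sketch (directive syntax has changed between Greyhole versions, so treat this as illustrative and check the sample config it ships with), pooled drives are declared something like:

storage_pool_drive = /mnt/hdd1/gh, min_free: 10gb
storage_pool_drive = /mnt/hdd2/gh, min_free: 10gb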

With those done, I popped the drive into the EX470, and – lo and behold! – I was able to SSH in.


This? This is what relatively minor success looks like.

So at this point, I’m feeling relatively confident. I shut down the server (don’t forget -h!) over SSH, popped in the first of the three 3 TB drives, and…

…nothing. Nada. Zip. Zilch. The server happily blinks away like a small puppy wagging its tail, excited to see its owner but clearly bereft of purpose when left to its own devices. I can’t ping it, I can’t… well, that’s really it. I can’t ping it, so there’s nothing I can do. Wondering if GRUB was stuck at the menu, I stuck in a USB keyboard and hit ‘Enter’ to no effect. Yes, my troubleshooting skills are that good.

My next step was to pop both the 1 TB and 3 TB drives into the ‘spare’ machine; this ran fine. Running lshw -short -c disk shows a 1 TB and 3 TB drive without issue. I also ran these parted commands:

mklabel gpt

mkpart primary -1 1


(I think that last command is right.) So, all set, right? Cool. Pop the drive back into the EX470, and…

STILL NOTHING. At this point, I’m ready to go pick up a new four-bay NAS, but I feel like that may be overkill. If anyone has any recommendations on how to get the stupid thing to boot with a 3 TB drive, I’m open to suggestions.


WTF Ubuntu

September 7th, 2013 2 comments

I’m not even sure what to say about this one… it looks like I might have an angry video card.

I sat down at my machine after it had been sitting for three or four days to find this… wtf?

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Listen up, Kubuntu: the enraging tale of sound over HDMI

August 4th, 2013 2 comments

Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:

The traditional spiderweb of complexity involved in Linux audio.

To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:

Approximate diagram of display and audio output involved from Kubuntu machine

As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.

In the KDE environment, sound is managed through a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends so it can talk to either ALSA or PulseAudio. It essentially communicates with the highest-level sound output system installed that it has support for. When you make a change in Phonon on a default Kubuntu install, it appears to be talking to PulseAudio, which in turn changes the necessary ALSA settings. Sort of complicated, but I guess it handles the idea that multiple applications can play audio at the same time without tying up the sound card – which has not always been the case with Linux.

In my traditional experience with the GNOME and Unity interfaces, it always seems like KDE took its own path with audio that wasn’t exactly standard. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily corresponding to multiple channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.

Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0

and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.
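To save some typing during that trial and error, the candidate device numbers from the aplay -l output above can be looped over in one shot:

for dev in 3 7 8 9; do
    echo "Trying plughw:1,$dev"
    aplay -D plughw:1,$dev /usr/share/sounds/alsa/Front_Center.wav
done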

At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output in Phonon was actually card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools provided that showed the only possible “sink” was card 1,3.

The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, presenting me with the following list of possible devices read directly from ALSA. I bumped the “hw:1,7” card to the top and also quit the system tray version of Amarok.

A list of all the raw ALSA devices detected by KDE/Phonon after removing PulseAudio.

Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.

At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Dual Booting Ubuntu 13.04 and Windows 8 on a Lenovo Y400 IdeaPad

July 27th, 2013 1 comment

With the third edition of The Linux Experiment already underway, I decided to get my new laptop set up with an Ubuntu partition to work with over the next few months. A little while back, I purchased this laptop with intent to use it as a gaming rig. It shipped with Windows 8, which was a serious pain in the ass to get used to. Now that I’ve dealt with that and have Steam and Origin set up on the Windows partition, it’s time to make this my primary machine and start taking advantage of the power under its hood by dual-booting an Ubuntu partition for development and experiment work.

I started my adventure by downloading an ISO of the latest release of Ubuntu – at the time of this writing, that’s 13.04. Because my new laptop has UEFI instead of BIOS, I made sure to grab the x64 version of the distribution.

Aside: If you’re using NoScript while browsing Ubuntu’s website, you’ll want to keep an eye on the address bar while navigating through the download steps. In my case, the screen that asks you to donate to the project redirected me to a different version of the ISO until I enabled JavaScript.

After using Ubuntu’s Startup Disk Creator to create a bootable USB stick, I started my first adventure – figuring out how to get the IdeaPad to boot from USB. A bit of quick googling told me that the trick was to alternately tap F10 and F12 during the boot sequence. This brought up a boot menu that allowed me to select the USB stick.

Once Ubuntu had booted off of the USB stick, I opened up GParted and went about making some space for my new operating system. The process was straightforward – I selected the largest existing partition (it also helped that it was labelled WINDOWS_OS), and split it in half. My only mistake in this process was to choose to put the new partition in front of the existing partition on the drive. Because of this, GParted had to copy all of the data on the Windows partition to a new physical location on the hard drive, a process that took about three hours.

The final partitioning scheme with my new Linux partition highlighted

With my hard drive appropriately partitioned, it was time to install the operating system. The modern Ubuntu installer pretty much takes care of everything, even going so far as selecting an appropriate space to use on the hard drive. I simply told it to install alongside the existing Windows partition, and let it take care of the details.

The installer finished its business in short order, and I restarted the machine. Ubuntu booted with no issues, but my Windows 8 partition refused to cooperate. It would seem as though something that the installer did wasn’t getting along well with UEFI/SecureBoot. Upon attempting to boot Windows, I got the following message:

error: Secure Boot forbids loading module from (hd0,gpt8)/boot/grub/x86_64-efi/ntfs.mod.
error: failure reading sector 0x0 from ‘cd0’
error: no such device: 0030DA4030DA3C7A
error: can’t find command ‘drivemap’
error: invalid EFI file path

Press any key to continue…

Uh oh.

Like I said, I could boot Ubuntu, so I headed on over to their website and read their page on UEFI. At first glance, it seemed as though I had done everything correctly. The only place that I deviated from these instructions was in manually resizing my Windows partition to create space for my new Ubuntu partition.
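If you end up in a similar spot, one low-risk diagnostic before reaching for repair tools is to dump the firmware’s boot entries. efibootmgr ships with Ubuntu’s EFI installs and shows exactly what the firmware thinks it can boot:

sudo efibootmgr -v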

Thinking that I might be experiencing trouble with my boot partition, I took a shot at running Ubuntu’s Boot-Repair utility. It seemed to do something, but upon restarting the machine, I found that I had even more problems – now a Master Boot Record wasn’t found at all:

It would appear as though I may have made things worse…

After dismissing the boot device error, I was prompted to choose which device to boot from. I chose to boot Windows’ UEFI repair partition, and was (luckily) able to get to a desktop. Unfortunately, none of the other partitions on the device seem to work, so I’m back where I started, except that now, in addition to having to put up with Windows 8, I also have a broken master boot record.

Lenovo: 1 / Jon: 0.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Experience Booting Linux Using the Windows 7 Bootloader

July 26th, 2013 2 comments

Greetings everyone! It has been quite some time since my last post. As you’ll be able to read from my profile (and signature), I have decided to run ArchLinux for the upcoming experiment. As of yet, I’m not sure what my contributions to the community will be – more on that later.

One of the interesting things I wanted to try this time around was to get Linux to boot from the Windows 7 bootloader. The basic principle here is to take the first 512 bytes of your /boot partition (with GRUB installed) and place the copy on your C:\ drive as linux.bin. From there, you use BCDEdit in Windows to add it to the boot menu. When you boot Windows, you will be prompted to either start Windows 7 or Linux. If you choose Linux, GRUB will be launched.
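The Linux half of that is a one-liner with dd – a sketch, assuming /boot lives on /dev/sda1 and the Windows partition is mounted at /mnt/windows:

dd if=/dev/sda1 of=/mnt/windows/linux.bin bs=512 count=1

BCDEdit then points a new boot entry at C:\linux.bin.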

Before I go into my experience, I just wanted to let you know that I was not able to get it working. It’s not that it isn’t possible, but for the sake of being able to boot into ArchLinux at some point during the experiment, I decided to install GRUB to the MBR and chainload the Windows bootloader.

I started off with this article from the ArchLinux wiki, which basically explains the process above in more detail. What I failed to realize was that this article was meant to be used when both OSes are on the same disk. In my case, I have Windows running on one disk and Linux on another.

According to this article on Eric Hameleers’ blog, the Windows 7 Bootloader does not play well with loading operating systems that reside on a different disk. Eric goes into a workaround for this in the article. The proposed solution is to have your /boot partition reside on the same disk as Windows. This way, the second stage of GRUB will be properly loaded, and GRUB will handle the rest properly.

Although I could attempt the above, I don’t really want to be re-sizing my Windows partition at this point, and it will be much easier for me to install GRUB to the MBR on my Linux disk and have that disk boot first. That way, if I decide to get rid of Linux later, I can change the boot order and the Windows bootloader will have remained untouched.

Besides, while I was investigating this approach, I received a lot of ridicule from #archlinux for trying to use the Windows bootloader.

09:49 < AngryArchLinuxUser555> uhm, first 512bytes of /boot is pretty useless
09:49 < AngryArchLinuxUser555> unless you are doing retarded things like not having grub in mbr
(username changed for privacy)

For the record, I was not attempting this because I think it’s a good idea. I do much prefer using GRUB, however, this was FOR SCIENCE!

If I ever do manage to boot into ArchLinux, I will be sure to write another post.

I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Changing ATI power profile to low

April 6th, 2013 No comments

My laptop’s graphics card has never had the best support on Linux, and it has now reached the point in its life where even ATI has stopped supporting it with new driver releases. On one hand I’m thankful that the open source driver performs well enough that I can continue to use this hardware; on the other, it does result in some downright awful power management. With the default settings my graphics card runs extremely hot and requires the fan to be on constantly. Luckily there is a quick way to fix this and tell the open source driver to run my card in a low power state at all times.

  1. Start a root terminal (or use sudo for everything)
  2. Set the card to use the profile-based power method (assuming your computer uses card0)

    echo profile > /sys/class/drm/card0/device/power_method

  3. Set the power profile to the “low” setting

    echo low > /sys/class/drm/card0/device/power_profile

You can check what the current setting is by running the following command:

cat /sys/class/drm/card0/device/power_profile

I would also highly recommend rebooting and then checking the setting again. I found that on my laptop the setting was being reset every time the computer turned on. If this happens to you, try my work around – simply edit /etc/rc.local and add the line from step 3 before the exit 0. My file looks like:

#!/bin/sh -e

echo low > /sys/class/drm/card0/device/power_profile

exit 0

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Airing of grievances: in which upgrading Ubuntu wreaks havoc

February 24th, 2013 4 comments

I’ve had a few nasty experiences this week with Linux and figured I’d vent here. Unlike my previous efforts with Linux From Scratch and Gentoo, my complaints this time around are related to upgrading Ubuntu.

Ubuntu 10.04 to 12.04: Save yourself the trouble

At this point the current Ubuntu LTS release (12.04) is my preferred distribution to work with: it has become widespread enough that troubleshooting and previous solutions online are easy to locate. In a professional capacity, I also maintain systems that are still on 8.04 LTS (supported until April 2013, so we have to be pretty aggressive about replacing them) or 10.04 LTS (good until April 2015).

I attempted two upgrades from the 10.04 release to 12.04 this week – one 10.04 LTS “desktop” installation, and one 10.04 LTS headless server installation. Both were virtual machines running under VMware ESXi, but neither had given me any trouble during normal use.

Canonical’s updater process (the wrapper around dist-upgrade) appears to be pretty slick; it gives you appropriate warnings, attempts to start a SSH daemon as a fallback mechanism and starts on its merry way to download the necessary packages to bring your system completely up to date. On my 10.04 desktop VM, the installer fell apart completely during the package replacement/removal/installation sequence. I was left with two nasty message boxes: one advising that my system was now in a broken state, and another that completely contained rectangular, unprintable characters.

To put it bluntly, I was not amused, but it wasn’t a critical system and I was content to replace it with a fresh 12.04 installation rather than waste additional time troubleshooting with apt or dpkg. Strike one for the upgrader.

At least the server came back up!

Next on the upgrade schedule was the 10.04 server VM. Install, package replacement and reboot went fine, but I had several custom PPAs installed to support development of XenonMKV (Github page) – specifically ppa:fkrull/deadsnakes to add Python 2.7 to Ubuntu 10.04.

Python 2.7 still worked when the server came back up, and all my usual tools of choice like SABnzbd+, SickBeard and CouchPotato were still functional.

For some reason, though, I’d gotten it into my head this evening to check out Mezzanine as a potential WordPress replacement. Mezzanine uses Django, a Python Web framework, and the list of supported features is pretty encompassing.

Sidebar: Django and mod_wsgi – complicated enough?

One of the most irritating things from a system administration point of view is getting Web applications to run in a standard server environment – typically a Linux base system and Apache or nginx to serve content. I suppose I’ve been spoiled with how easy it is to get PHP-based sites up and running these days in that configuration by adding an Apache module through apt. A lot of new Web app frameworks come with their own small webservers for development and testing, but generally their creators recommend that when you’re ready to put your site live, that the product run under a well-known Web or application server.

The Django folks recommend using mod_wsgi in their documentation, which in and of itself really just says “RTFM for mod_wsgi and then you’ll have a much better idea of how to do this.” I had to go poking around on Google for the installation article since there are some broken links, but okay, it’s an Apache module with a small bit of configuration (even though a simple walkthrough in the Django documentation would go a long way to making deployment easier.) This is where I ran into my dependency/PPA problem on Ubuntu 10.04.
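For flavour, the walkthrough everyone wants really boils down to a few lines of Apache config – a minimal Apache 2.2-era mod_wsgi stanza for a Django project, with entirely hypothetical paths:

WSGIScriptAlias / /srv/mysite/mysite/wsgi.py
WSGIPythonPath /srv/mysite

<Directory /srv/mysite/mysite>
    <Files wsgi.py>
        Order deny,allow
        Allow from all
    </Files>
</Directory>

Drop that into a virtual host, reload Apache, and Django serves from the site root.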

I suppose I’ve screwed the pooch…

Running the suggested command, I tried sudo apt-get install libapache2-mod-wsgi and got the following:

The following packages have unmet dependencies:
libapache2-mod-wsgi : Depends: libpython2.7 (>= 2.7) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Backtracking, I then found out why the library wasn’t going to get installed:

The following packages have unmet dependencies:
libpython2.7 : Depends: python2.7 (= 2.7.3-0ubuntu3.1) but 2.7.3-2+lucid1 is to be installed

Aha! The Python installation from the PPA for Lucid – 10.04 – was installed and acting as the 2.7 package. Since the newly-upgraded Ubuntu 12.04 uses Python 2.7 as a dependency for a good portion of the default applications, I couldn’t just purge or uninstall it, and my attempts to force a reinstallation all ended in:

Reinstallation of python2.7 is not possible, since it cannot be downloaded.


At this point it looks like I’ll have to rebuild the server VM as well, but if any readers have any bright ideas on fixing this dependency hell – please comment with your suggestions!
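One avenue I haven’t exhausted yet: ppa-purge exists precisely to downgrade a PPA’s packages back to the official archive versions, though whether it can untangle a system that was dist-upgraded with the PPA still active is an open question:

sudo apt-get install ppa-purge
sudo ppa-purge ppa:fkrull/deadsnakes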

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.
Categories: God Damnit Linux, Jake B, Ubuntu Tags:

Querying the State of a Hardware WiFi Switch with RF-Kill

October 8th, 2012 No comments

The laptop that I’m writing this post from has a really annoying strip of touch-response buttons above the keyboard that control things like volume and whether or not the wifi card is on. By touch-response, I mean that the buttons don’t require a finger press, but rather just a touch of the finger. As such, they provide no haptic feedback, so it’s hard to tell whether or not they work except by surveying the results of your efforts in the operating system.

The WiFi button in particular has got to be the worst of these buttons. On Windows, it glows a lovely blue colour when activated, and an angry red colour when disabled. This directly maps to whether or not my physical wireless network interface is enabled or disabled, and is a helpful indicator. Under Linux Mint 12 however, the “button” is always red, which makes it a less than helpful way to diagnose the occasional network drop.

Lately, I’ve been having trouble getting the wifi to reconnect after one of these drops. To troubleshoot, I would open up the Network Settings panel in Mint, which looks something like this:

Mint 12's Wireless Network Configuration Panel

The only problem with this window is that the ON/OFF slider that controls the state of the network interface refused to work. If I dragged it to the ON position, it would just bounce back to OFF without changing the actual state of the card.

In the past, this behaviour has really frustrated me, driving me so far as to reboot the machine in Windows, re-activate the physical interface, and then switch back to Mint to continue doing whatever it was that I was doing in the first place. Tonight, I decided to investigate.

I started out with my old friend iwconfig:

jonf@jonf-mint ~ $ sudo iwconfig
lo        no wireless extensions.

eth0      no wireless extensions.

wlan0     IEEE 802.11abgn  ESSID:off/any
Mode:Managed  Access Point: Not-Associated   Tx-Power=off
Retry  long limit:7   RTS thr:off   Fragment thr:off
Encryption key:off
Power Management:off

As you can see, the wireless interface is listed, but it appears to be powered off. I was able to confirm this by issuing the iwlist command, which is supposed to spit out a list of nearby wireless networks:

jonf@jonf-mint ~ $ sudo iwlist wlan0 scanning
wlan0     Interface doesn’t support scanning : Network is down

Again, you can see that the interface is not reacting as one might expect it to. Next, I attempted to enable the interface using the ifconfig command:

jonf@jonf-mint ~ $ sudo ifconfig wlan0 up
SIOCSIFFLAGS: Operation not possible due to RF-kill

Ah-ha! A clue! Apparently, something called rfkill was preventing the interface from coming online. It turns out that rfkill is a handy little tool that allows you to query the state of the hardware buttons (and other physical interfaces) on your machine. You can see a list of all of these interfaces by issuing the command rfkill list:

jonf@jonf-mint ~ $ rfkill list
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
1: hp-wifi: Wireless LAN
Soft blocked: no
Hard blocked: yes

Interestingly enough, it looks like my wireless interface had been turned off by a hardware switch, which is what I had suspected all along. The next thing that I tried was the rfkill event command, which tails the list of hardware interface events. Using this tool, you can see the effect of pressing the physical switches and buttons on the chassis of your machine:

jonf@jonf-mint ~ $ rfkill event
1349740501.558614: idx 0 type 1 op 2 soft 0 hard 0
1349740505.153269: idx 0 type 1 op 2 soft 0 hard 1
1349740505.354608: idx 1 type 1 op 2 soft 0 hard 1
1349740511.030642: idx 1 type 1 op 2 soft 0 hard 0
1349740515.558615: idx 0 type 1 op 2 soft 0 hard 0

Each of the lines that the tool spits out shows a single event. In my case, it shows the button that controls the wireless interface switching the hard block setting (physical on/off) from 0 to 1 and back.

After watching this output while pressing the button a few times, I realized that the button does actually work, but that when the interface is turned on, it can take upwards of 5 seconds for the machine to notice it, connect to my home wireless, and get an ip address via DHCP. In the intervening time, I had typically become frustrated and pressed the button a few more times, trying to get it to do something. Instead, I now know that I have to press the button exactly once and then wait for it to take effect.
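Worth noting: rfkill can clear a soft block from software, but a hard block like the one above only responds to the physical switch:

sudo rfkill unblock all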

I stand by the fact that this is a piss-poor design, but hey, what do I know? I’m not a UX engineer for HP. At least it’s working again, and I am reconnected to my sweet sweet internet.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Linux from Scratch: I’ve had it up to here!

November 27th, 2011 9 comments

As you may be able to tell from my recent, snooze-worthy technical posts about compilers and makefiles and other assorted garbage, my experience with Linux from Scratch has been equally educational and enraging. Like Dave, I’ve had the pleasure of trying to compile various desktop environments and software packages from scratch, into some god-awful contraption that will let me check my damn email and look at the Twitters.

To be clear, when anyone says I have nobody to blame but myself, that’s complete hokum. From the beginning, this entire process was flawed. The last official LFS LiveCD has a kernel that’s enough revisions behind to cause grief during the setup process. But I really can’t blame the guys behind LFS for all my woes; their documentation is really well-written and explains why you have to pass fifty --do-not-compile-this-obscure-component-or-your-cat-will-crap-on-the-rug arguments.

Patch Your Cares Away

CC attribution licensed from benchilada


I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Building glibc for LFS from Ubuntu by replacing awk

November 23rd, 2011 No comments

If you run into the following error trying to build LFS from an Ubuntu installation:

make[1]: *** No rule to make target `/mnt/lfs/sources/glibc-build/Versions.all', needed by `/mnt/lfs/sources/glibc-build/abi-versions.h'. Stop.

The mawk utility, installed with Ubuntu and symlinked to /usr/bin/awk by default, does not properly handle the regular expressions in this package. Perform the following commands:

# apt-get install gawk
# rm -f /usr/bin/mawk /usr/bin/awk
# ln -snf /usr/bin/gawk /usr/bin/awk
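To confirm the switch took, awk should now identify itself as GNU Awk:

# awk --version | head -n 1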

Then you’re just a make clean; ./configure --obnoxious-dash-commands; make; make install away from success.

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Can you install Gnome3 on Gentoo?

November 13th, 2011 1 comment

So my base Gentoo installation came with Gnome 2.32, which, while solid, lacks a lot of the prettiness of Gnome’s latest 3.2 release. I thought that I might like to enjoy some of that beauty, so I attempted to upgrade. Because Gnome 3.2 isn’t in the main portage tree yet, I found a tutorial that purported to walk me through the upgrade process using an overlay, which is kind of like a testing branch that you can merge into the main portage tree in order to get unsupported software.

Since the tutorial that I linked above is pretty self-explanatory, I won’t repeat the steps here. There’s also the little fact that the tutorial didn’t work worth a damn…

Problem 1: Masked Packages

#required by dev-libs/folks-9999, 
required by gnome-base/gnome-shell-3.2.1-r1, 
required by gnome-base/gdm-[gnome-shell], 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=dev-libs/libgee- introspection
#required by gnome-extra/sushi-0.2.1, 
required by gnome-base/nautilus-3.2.1[previewer], 
required by app-cdr/brasero-3.2.0-r1[nautilus], 
required by media-sound/sound-juicer-2.99.0_pre20111001, 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=media-libs/clutter-gtk-1.0.4 introspection

This one is pretty simple to fix: you can add the lines >=dev-libs/libgee- introspection and >=media-libs/clutter-gtk-1.0.4 introspection to the file /etc/portage/package.use (these are USE flag changes, not keyword changes), or you can run emerge -avuDN world --autounmask-write to get around these autounmask behaviour issues

Problem 2: Permissions

--------------------------- ACCESS VIOLATION SUMMARY ---------------------------
LOG FILE "/var/log/sandbox/sandbox-3222.log"

FORMAT: F - Function called
FORMAT: S - Access Status
FORMAT: P - Path as passed to function
FORMAT: A - Absolute Path (not canonical)
FORMAT: R - Canonical Path
FORMAT: C - Command Line

F: mkdir
S: deny
P: /root/.local/share/webkit
A: /root/.local/share/webkit
R: /root/.local/share/webkit
C: ./epiphany --introspect-dump=

This one totally confused me. If I’m reading it correctly, the install script lacks the permissions necessary to write to the path /root/.local/share/webkit/. The odd part of this is that the script is running as the root user, so this simply shouldn’t happen. (In hindsight, the “ACCESS VIOLATION” header suggests this is Portage’s build sandbox doing the denying – it blocks writes outside the build directories regardless of which user runs emerge.) I was able to give it the permissions that it needed by running chmod 777 /root/.local/share/webkit/, but I had to start the install process all over again, and it just failed with a similar error the first time that it attempted to write a file to that directory. What the fuck?

At 10pm, I couldn’t be bothered to find a fix for this… I used the tutorial’s instructions to roll back the changes, and I’ll try again later if I’m feeling motivated. In the meantime, if you know how to fix this process, I’d love to hear about it.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Fixing build issues with phonon-backend-gstreamer-4.5.1

November 9th, 2011 No comments

I’ve decided to try and upgrade my LFS system to the latest version of KDE (4.7.3 as of the time of this writing) and correspondingly needed to upgrade phonon-backend-gstreamer. Unfortunately, following the previous version’s compilation instructions provided this nasty message:

[ 4%] Building CXX object gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o
In file included from /sources/phonon-backend-gstreamer-4.5.1/gstreamer/audiooutput.cpp:22:0:
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:38: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:69: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:42: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:263:45: error: ‘Phonon::MediaController::NavigationMenu’ has not been declared
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:42: error: template argument 1 is invalid
make[2]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o] Error 1
make[1]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/all] Error 2
make: *** [all] Error 2

To fix this issue, make sure you have the latest GStreamer and phonon-backend-xine installed. Then I followed some of the advice from this KDE forum topic.

If, like me, you installed Qt into /opt/qt, create a symbolic link into the qt directory pointing to your system’s latest version of phonon. For later success with kde-runtime, create links to the libphonon libraries in /opt/qt-4.7.1/lib to your recently compiled /usr/lib64 versions (adjust paths to /usr/lib on 32-bit systems):

# mv /opt/qt-4.7.1/include/phonon /tmp
# ln -snf /usr/include/phonon /opt/qt-4.7.1/include/phonon
# cd /opt/qt-4.7.1/lib
# rm -rf libphonon*
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/
# ln -snf /usr/lib64/

Then rerun the compilation process for phonon-backend-gstreamer and voila, no more errors. (You’ll probably still have more issues to work out, but this gets past the phonon-backend-gstreamer blockade.)
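For reference, a typical rebuild is the standard out-of-tree CMake dance (install prefix assumed to be /usr):

# from the phonon-backend-gstreamer-4.5.1 source directory
mkdir -p build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ..
make
make install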

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

LFS so far – why you should build i686 and x86_64 binaries

November 7th, 2011 No comments

I’ve been actively using my (Beyond) Linux from Scratch installation for about a week now, and it’s actually pretty neat to have something working that I built with just a general outline. Granted, the LFS guide is very well put together, but going beyond the basic console of a system requires a bit of time and effort.

In really any other distro, the package manager should be your best friend (except when it breaks). Even in a source-based Linux like Gentoo, Portage gives you a pretty decent idea of what’s installed and is able to keep track of dependencies. With LFS, there are some times when I really don’t want to have to locate and download seventeen .tar.bz2 files, then ./configure --prefix=/usr; make; make install each one in sequence. What’s worse is when you run into three dependencies for a particular piece of software, and the first two install properly, but the third one depends on ten additional packages.

This is what building software in LFS looks like.

There are also some libraries that, despite being built on an x86_64 system, will come out as 32-bit, and require special compiler or configure flags in order to build a pure 64-bit version. LFS x86_64 does not really have patience for anything 32-bit. This is generally fine because you’re building most of the applications yourself, but you can’t “just run” any typical application unless it’s taken the architecture into account.
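A quick way to check what you actually built – file reports the ELF class (the library path here is just an example):

file /usr/lib/libexample.so    # look for "ELF 32-bit" vs "ELF 64-bit" in the output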

In summary, while it’s awesome to go to SourceForge and have the very latest version of a package, sometimes I just don’t feel like going through all those hoops and satisfying twenty conditions for a compile to take place. Maybe it’s even fine if your application uses a bundled library rather than relying on whatever happens to be installed in /usr/lib.

The takeaway from this is that besides providing the source, considerate developers should try to build both i686 and x86_64 binaries from that same source. If your build system has issues or you find it painful to produce binary releases, remember that anyone attempting to follow the INSTALL file will run into the same pain points. Firefox, for example, has both i686 and x86_64 release tarballs. The 64-bit version works quite well on my LFS installation, and it’s how I’m writing this post.

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

Getting Firefox 3.6.23 to compile under LFS

November 4th, 2011 No comments

Using the instructions from the BLFS book with the latest available 3.6 build of Firefox, I was able to achieve success. I figured I’d try out 3.6 before moving on to something with a terribly inflated version number, and as per usual, ran into some problems:

  • Rebuild libpng-1.5.5 with APNG support. This is actually optional, as I ended up commenting out the --with-system-png option in mozconfig.
  • In the suggested mozconfig, comment out the last two lines:

    #ac_add_options --with-system-libxul
    #ac_add_options --with-libxul-sdk=/usr/lib/xulrunner-devel-

    to create a standalone build.

  • Apply the GCC patch from this Bugzilla report (direct download).
  • Apply a partial patch from the Chromium project of all places. I’ve customized it here:

    # TLE Patch for Firefox/LFS

    diff -u a/gfx/ots/src/ b/gfx/ots/src/
    --- a/gfx/ots/src/ 2011-11-02 07:10:17.000000000 -0400
    +++ b/gfx/ots/src/ 2011-11-02 07:10:30.000000000 -0400
    @@ -5,6 +5,7 @@
     #include "os2.h"

     #include "head.h"
    +#include <cstddef>

     // OS/2 - OS/2 and Windows Metrics

  • Apply a GCC4.6-specific patch to fix various .cpp files. Some parts of the patch will fail; that’s expected.
  • Manually edit layout/style/nsCSSRuleProcessor.cpp and go to line 1199. Change the source code as follows:

    const nsCaseInsensitiveStringComparator ciComparator;
    should become

    const nsCaseInsensitiveStringComparator ciComparator = nsCaseInsensitiveStringComparator();
  • For the toolkit/components/places/src/SQLFunctions.cpp file, change line 126 to:
    const nsCaseInsensitiveStringComparator caseInsensitiveCompare = nsCaseInsensitiveStringComparator();
  • In toolkit/crashreporter/google-breakpad/src/common/linux/, make sure line 51 is changed to:
    const CPPLanguage CPPLanguageSingleton = CPPLanguage();
  • In toolkit/xre/nsAppRunner.cpp, line 990:

    static const nsXULAppInfo kAppInfo = nsXULAppInfo();
  • While this is resolved in newer Firefox versions, copy security/coreconf/ to security/coreconf/ to add support for the 3.1 kernel.

Your reward will be a working Firefox installation.

I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.