Archive

Archive for the ‘God Damnit Linux’ Category

Dual Booting Ubuntu 13.04 and Windows 8 on a Lenovo Y400 IdeaPad

July 27th, 2013 1 comment

With the third edition of The Linux Experiment already underway, I decided to get my new laptop set up with an Ubuntu partition to work with over the next few months. A little while back, I purchased this laptop with intent to use it as a gaming rig. It shipped with Windows 8, which was a serious pain in the ass to get used to. Now that I’ve dealt with that and have Steam and Origin set up on the Windows partition, it’s time to make this my primary machine and start taking advantage of the power under its hood by dual-booting an Ubuntu partition for development and experiment work.

I started my adventure by downloading an ISO of the latest release of Ubuntu – at the time of this writing, that’s 13.04. Because my new laptop has UEFI instead of BIOS, I made sure to grab the x64 version of the distribution.

Aside: If you’re using NoScript while browsing Ubuntu’s website, you’ll want to keep an eye on the address bar while navigating through the download steps. In my case, the screen that asks you to donate to the project redirected me to a different version of the ISO until I enabled JavaScript.

After using Ubuntu’s Startup Disk Creator to create a bootable USB stick, I started my first adventure – figuring out how to get the IdeaPad to boot from USB. A bit of quick googling told me that the trick was to alternately tap F10 and F12 during the boot sequence. This brought up a boot menu that allowed me to select the USB stick.

Once Ubuntu had booted off of the USB stick, I opened up GParted and went about making some space for my new operating system. The process was straightforward – I selected the largest existing partition (it also helped that it was labelled WINDOWS_OS), and split it in half. My only mistake in this process was to choose to put the new partition in front of the existing partition on the drive. Because of this, GParted had to copy all of the data on the Windows partition to a new physical location on the hard drive, a process that took about three hours.

The final partitioning scheme with my new Linux partition highlighted

With my hard drive appropriately partitioned, it was time to install the operating system. The modern Ubuntu installer pretty much takes care of everything, even going so far as selecting an appropriate space to use on the hard drive. I simply told it to install alongside the existing Windows partition, and let it take care of the details.

The installer finished its business in short order, and I restarted the machine. Ubuntu booted with no issues, but my Windows 8 partition refused to cooperate. It would seem as though something that the installer did wasn’t getting along well with UEFI/SecureBoot. Upon attempting to boot Windows, I got the following message:

error: Secure Boot forbids loading module from (hd0,gpt8)/boot/grub/x86_64-efi/ntfs.mod.
error: failure reading sector 0x0 from 'cd0'
error: no such device: 0030DA4030DA3C7A
error: can't find command 'drivemap'
error: invalid EFI file path

Press any key to continue…

Uh oh.

Like I said, I could boot Ubuntu, so I headed on over to their website and read their page on UEFI. At first glance, it seemed as though I had done everything correctly. The only place that I deviated from these instructions was in manually resizing my Windows partition to create space for my new Ubuntu partition.

Thinking that I might be experiencing troubles with my boot partition, I took a shot at running Ubuntu’s Boot-Repair utility. It seemed to do something, but upon restarting the machine, I found that I had even more problems – now a Master Boot Record wasn’t found at all:

It would appear as though I may have made things worse…

After dismissing the boot device error, I was prompted to choose which device to boot from. I chose to boot Windows’ UEFI Repair partition, and was (luckily) able to get to a desktop. Unfortunately, none of the other partitions on the device seemed to work, so I’m back where I started, except that now, in addition to having to put up with Windows 8, I also have a broken master boot record.

Lenovo: 1 / Jon: 0.

Experience Booting Linux Using the Windows 7 Bootloader

July 26th, 2013 2 comments

Greetings everyone! It has been quite some time since my last post. As you’ll be able to read from my profile (and signature), I have decided to run ArchLinux for the upcoming experiment. As of yet, I’m not sure what my contributions to the community will be; however, there will be more on that later.

One of the interesting things I wanted to try this time around was to get Linux to boot from the Windows 7 bootloader. The basic principle here is to take the first 512 bytes of your /boot partition (with GRUB installed), and place them on your C:\ as linux.bin. From there, you use BCDEdit in Windows to add an entry for it to the bootloader. When you boot Windows, you will be prompted to either start Windows 7 or Linux. If you choose Linux, GRUB will be launched.
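In practice that boils down to something like the following sketch. Here /dev/sda2 stands in for your /boot partition, and {GUID} is whatever identifier the first bcdedit command prints; both are placeholders, not values from my system. On the Linux side:

dd if=/dev/sda2 of=linux.bin bs=512 count=1

Copy linux.bin to the root of C:\, then from an elevated command prompt on Windows:

bcdedit /create /d "Linux" /application bootsector
bcdedit /set {GUID} device partition=C:
bcdedit /set {GUID} path \linux.bin
bcdedit /displayorder {GUID} /addlast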

Before I go into my experience, I just wanted to let you know that I was not able to get it working. It’s not that it isn’t possible, but for the sake of being able to boot into ArchLinux at some point during the experiment, I decided to install GRUB to the MBR and chainload the Windows bootloader.

I started off with this article from the ArchLinux wiki, which basically explains the process above in more detail. What I failed to realize was that the article is meant to be used when both OSes are on the same disk. In my case, I have Windows running on one disk, and Linux on another.

According to this article on Eric Hameleers’ blog, the Windows 7 Bootloader does not play well with loading operating systems that reside on a different disk. Eric goes into a workaround for this in the article. The proposed solution is to have your /boot partition reside on the same disk as Windows. This way, the second stage of GRUB will be properly loaded, and GRUB will handle the rest properly.

Although I could attempt the above, I don’t really want to be re-sizing my Windows partition at this point, and it will be much easier for me to install GRUB to the MBR on my Linux disk, and have that disk boot first. That way, if I decide to get rid of Linux later, I can change the boot order, and the Windows bootloader will have remained untouched.
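Going this way is also only a few lines of GRUB configuration. A menuentry along these lines chainloads Windows from GRUB (a sketch assuming GRUB 2 with Windows on the first partition of the second disk; adjust hd1,msdos1 to match your layout):

menuentry "Windows 7" {
    insmod part_msdos
    insmod ntfs
    set root=(hd1,msdos1)
    chainloader +1
}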

Besides, while I was investigating this approach, I received a lot of ridicule from #archlinux for trying to use the Windows bootloader.

09:49 < AngryArchLinuxUser555> uhm, first 512bytes of /boot is pretty useless
09:49 < AngryArchLinuxUser555> unless you are doing retarded things like not having grub in mbr
(username changed for privacy)

For the record, I was not attempting this because I think it’s a good idea. I do much prefer using GRUB, however, this was FOR SCIENCE!

If I ever do manage to boot into ArchLinux, I will be sure to write another post.

Changing ATI power profile to low

April 6th, 2013 No comments

My laptop’s graphics card has never had the best support on Linux, and it has now approached the point in its life where even ATI has stopped supporting it with new driver releases. On one hand, I’m thankful that the open source driver performs well enough that I can continue to use this hardware; on the other, it does result in some downright awful power management. With the default settings, my graphics card runs extremely hot and requires the fan to be on constantly. Luckily there is a quick way to fix this and tell the open source driver to run my card in a low power state at all times.

  1. Start a root terminal (or use sudo for everything)
  2. Set the card to use the “profile” power management method (assuming your computer uses card0)

    echo profile > /sys/class/drm/card0/device/power_method

  3. Set the power profile to the “low” setting

    echo low > /sys/class/drm/card0/device/power_profile

You can check what the current setting is by running the following command:

cat /sys/class/drm/card0/device/power_profile

I would also highly recommend rebooting and then checking the setting again. I found that on my laptop the setting was being reset every time the computer turned on. If this happens to you, try my workaround – simply edit /etc/rc.local and add the line from step 3 before the exit 0. My file looks like:

#!/bin/sh -e

echo low > /sys/class/drm/card0/device/power_profile

exit 0
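One last gotcha worth checking if the profile still resets after a reboot: /etc/rc.local is only run at boot if it is marked executable, so make sure it is:

chmod +x /etc/rc.local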

Airing of grievances: in which upgrading Ubuntu wreaks havoc

February 24th, 2013 4 comments

I’ve had a few nasty experiences this week with Linux and figured I’d vent here. Unlike my previous efforts with Linux From Scratch and Gentoo, my complaints this time around are related to upgrading Ubuntu.

Ubuntu 10.04 to 12.04: Save yourself the trouble

At this point the current Ubuntu LTS release (12.04) is my preferred distribution to work with: it has become widespread enough that troubleshooting and previous solutions online are easy to locate. In a professional capacity, I also maintain systems that are still on 8.04 LTS (supported until April 2013, so we have to be pretty aggressive about replacing them) or 10.04 LTS (good until April 2015).

This week I attempted two upgrades from the 10.04 release to 12.04 – one 10.04 LTS “desktop” installation, and one 10.04 LTS headless server installation. Both were virtual machines running under VMWare ESXi, but neither had given me any trouble during normal use.

Canonical’s updater process (the wrapper around dist-upgrade) appears to be pretty slick; it gives you appropriate warnings, attempts to start an SSH daemon as a fallback mechanism and starts on its merry way to download the necessary packages to bring your system completely up to date. On my 10.04 desktop VM, the installer fell apart completely during the package replacement/removal/installation sequence. I was left with two nasty message boxes: one advising that my system was now in a broken state, and another consisting entirely of rectangular, unprintable characters.

To put it bluntly, I was not amused, but it wasn’t a critical system and I was content to replace it with a fresh 12.04 installation rather than waste additional time troubleshooting with apt or dpkg. Strike one for the upgrader.

At least the server came back up!

Next on the upgrade schedule was the 10.04 server VM. Install, package replacement and reboot went fine, but I had several custom PPAs installed to support development of XenonMKV (Github page) – specifically ppa:krull/deadsnakes to add Python 2.7 to Ubuntu 10.04.

Python 2.7 still worked when the server came back up, and all my usual tools of choice like SABnzbd+, SickBeard and CouchPotato were still functional.

For some reason, though, I’d gotten it into my head this evening to check out Mezzanine as a potential WordPress replacement. Mezzanine uses Django, a Python Web framework, and the list of supported features is pretty encompassing.

Sidebar: Django and mod_wsgi – complicated enough?

One of the most irritating things from a system administration point of view is getting Web applications to run in a standard server environment – typically a Linux base system and Apache or nginx to serve content. I suppose I’ve been spoiled with how easy it is to get PHP-based sites up and running these days in that configuration by adding an Apache module through apt. A lot of new Web app frameworks come with their own small webservers for development and testing, but generally their creators recommend that when you’re ready to put your site live, the product run under a well-known Web or application server.

The Django folks recommend using mod_wsgi in their documentation, which in and of itself really just says “RTFM for mod_wsgi and then you’ll have a much better idea of how to do this.” I had to go poking around on Google for the installation article since there are some broken links, but okay, it’s an Apache module with a small bit of configuration (even though a simple walkthrough in the Django documentation would go a long way to making deployment easier). This is where I ran into my dependency/PPA problem left over from Ubuntu 10.04.
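For reference, once libapache2-mod-wsgi does install cleanly, pointing Apache at a Django project only takes a couple of directives in a file under /etc/apache2/conf.d/ (the /srv/mysite path and project name here are made up for the example):

WSGIScriptAlias / /srv/mysite/mysite/wsgi.py
WSGIPythonPath /srv/mysite

Then restart Apache with sudo service apache2 restart and the Django app is served at the site root.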

I suppose I’ve screwed the pooch…

I ran the suggested command, sudo apt-get install libapache2-mod-wsgi, and got the following:

The following packages have unmet dependencies:
libapache2-mod-wsgi : Depends: libpython2.7 (>= 2.7) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Backtracking, I then found out why the library wasn’t going to get installed:


The following packages have unmet dependencies:
libpython2.7 : Depends: python2.7 (= 2.7.3-0ubuntu3.1) but 2.7.3-2+lucid1 is to be installed

Aha! The Python installation from the PPA for Lucid – 10.04 – was installed and acting as the 2.7 package. Since the newly-upgraded Ubuntu 12.04 uses Python 2.7 as a dependency for a good portion of the default applications, I couldn’t just purge or uninstall it, and my attempts to force a reinstallation all ended in:


Reinstallation of python2.7 is not possible, since it cannot be downloaded.
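If you are untangling something similar, apt-cache policy will at least show which repository each candidate version of the offending packages comes from:

apt-cache policy python2.7 libpython2.7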

Rebuild?

At this point it looks like I’ll have to rebuild the server VM as well, but if any readers have any bright ideas on fixing this dependency hell – please comment with your suggestions!

Categories: God Damnit Linux, Jake B, Ubuntu Tags:

Querying the State of a Hardware WiFi Switch with RF-Kill

October 8th, 2012 No comments

The laptop that I’m writing this post from has a really annoying strip of touch-response buttons above the keyboard that control things like volume and whether or not the wifi card is on. By touch-response, I mean that the buttons don’t require a finger press, but rather just a touch of the finger. As such, they provide no haptic feedback, so it’s hard to tell whether or not they work except by surveying the results of your efforts in the operating system.

The WiFi button in particular has got to be the worst of these buttons. On Windows, it glows a lovely blue colour when activated, and an angry red colour when disabled. This directly maps to whether or not my physical wireless network interface is enabled or disabled, and is a helpful indicator. Under Linux Mint 12, however, the “button” is always red, which makes it a less than helpful way to diagnose the occasional network drop.

Lately, I’ve been having trouble getting the wifi to reconnect after one of these drops. To troubleshoot, I would open up the Network Settings panel in Mint, which looks something like this:

Mint 12's Wireless Network Configuration Panel

The only problem with this window is that the ON/OFF slider that controls the state of the network interface would refuse to work. If I dragged it to the ON position, it would just bounce back to OFF without changing the actual state of the card.

In the past, this behaviour has really frustrated me, driving me so far as to reboot the machine in Windows, re-activate the physical interface, and then switch back to Mint to continue doing whatever it was that I was doing in the first place. Tonight, I decided to investigate.

I started out with my old friend iwconfig:

jonf@jonf-mint ~ $ sudo iwconfig
lo        no wireless extensions.

eth0      no wireless extensions.

wlan0     IEEE 802.11abgn  ESSID:off/any
Mode:Managed  Access Point: Not-Associated   Tx-Power=off
Retry  long limit:7   RTS thr:off   Fragment thr:off
Encryption key:off
Power Management:off

As you can see, the wireless interface is listed, but it appears to be powered off. I was able to confirm this by issuing the iwlist command, which is supposed to spit out a list of nearby wireless networks:

jonf@jonf-mint ~ $ sudo iwlist wlan0 scanning
wlan0     Interface doesn’t support scanning : Network is down

Again, you can see that the interface is not reacting as one might expect it to. Next, I attempted to enable the interface using the ifconfig command:

jonf@jonf-mint ~ $ sudo ifconfig wlan0 up
SIOCSIFFLAGS: Operation not possible due to RF-kill

Ah-ha! A clue! Apparently, something called rfkill was preventing the interface from coming online. It turns out that rfkill is a handy little tool that allows you to query the state of the hardware buttons (and other physical interfaces) on your machine. You can see a list of all of these interfaces by issuing the command rfkill list:

jonf@jonf-mint ~ $ rfkill list
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
1: hp-wifi: Wireless LAN
Soft blocked: no
Hard blocked: yes
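The soft/hard distinction matters here: rfkill can clear a soft block on its own, but a hard block can only be toggled by the physical switch or button. If your output shows a soft block instead, this should bring the radio back:

sudo rfkill unblock all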

Interestingly enough, it looks like my wireless interface has been turned off by a hardware switch, which is what I had suspected all along. The next thing that I tried was the rfkill event command, which tails the list of hardware interface events. Using this tool, you can see the effect of pressing the physical switches and buttons on the chassis of your machine:

jonf@jonf-mint ~ $ rfkill event
1349740501.558614: idx 0 type 1 op 2 soft 0 hard 0
1349740505.153269: idx 0 type 1 op 2 soft 0 hard 1
1349740505.354608: idx 1 type 1 op 2 soft 0 hard 1
1349740511.030642: idx 1 type 1 op 2 soft 0 hard 0
1349740515.558615: idx 0 type 1 op 2 soft 0 hard 0

Each of the lines that the tool spits out shows a single event. In my case, it shows the button that controls the wireless interface switching the hard block setting (physical on/off) from 0 to 1 and back.

After watching this output while pressing the button a few times, I realized that the button does actually work, but that when the interface is turned on, it can take upwards of 5 seconds for the machine to notice it, connect to my home wireless, and get an ip address via DHCP. In the intervening time, I had typically become frustrated and pressed the button a few more times, trying to get it to do something. Instead, I now know that I have to press the button exactly once and then wait for it to take effect.

I stand by the fact that this is a piss-poor design, but hey, what do I know? I’m not a UX engineer for HP. At least it’s working again, and I am reconnected to my sweet sweet internet.

Linux from Scratch: I’ve had it up to here!

November 27th, 2011 9 comments

As you may be able to tell from my recent, snooze-worthy technical posts about compilers and makefiles and other assorted garbage, my experience with Linux from Scratch has been equally educational and enraging. Like Dave, I’ve had the pleasure of trying to compile various desktop environments and software packages from scratch, into some god-awful contraption that will let me check my damn email and look at the Twitters.

To be clear, when anyone says I have nobody to blame but myself, that’s complete hokum. From the beginning, this entire process was flawed. The last official LFS LiveCD has a kernel that’s enough revisions behind to cause grief during the setup process. But I really can’t blame the guys behind LFS for all my woes; their documentation is really well-written and explains why you have to pass fifty --do-not-compile-this-obscure-component-or-your-cat-will-crap-on-the-rug arguments.

Patch Your Cares Away

CC attribution licensed from benchilada

Read more…

Building glibc for LFS from Ubuntu by replacing awk

November 23rd, 2011 No comments

If you run into the following error trying to build LFS from an Ubuntu installation:


make[1]: *** No rule to make target `/mnt/lfs/sources/glibc-build/Versions.all', needed by `/mnt/lfs/sources/glibc-build/abi-versions.h'. Stop.

The mawk utility, installed with Ubuntu and symlinked to /usr/bin/awk by default, does not properly handle the regular expressions in this package. Perform the following commands:


# apt-get install gawk
# rm -f /usr/bin/{,m}awk
# ln -snf /usr/bin/gawk /usr/bin/awk
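Alternatively, since Ubuntu manages the awk symlink through the alternatives system, you should be able to accomplish the same thing without deleting anything (I have not tested this route myself):

# update-alternatives --set awk /usr/bin/gawk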

Then you’re just a make clean; ./configure --obnoxious-dash-commands; make; make install away from success.

Can you install Gnome3 on Gentoo?

November 13th, 2011 1 comment

So my base Gentoo installation came with Gnome 2.32, which, while solid, lacks a lot of the prettiness of Gnome’s latest 3.2 release. I thought that I might like to enjoy some of that beauty, so I attempted to upgrade. Because Gnome 3.2 isn’t in the main portage tree yet, I found a tutorial that purported to walk me through the upgrade process using an overlay, which is kind of like a testing repository that you can layer on top of the main portage tree in order to get unsupported software.

Since the tutorial that I linked above is pretty self-explanatory, I won’t repeat the steps here. There’s also the little fact that the tutorial didn’t work worth a damn…

Problem 1: Masked Packages

#required by dev-libs/folks-9999, 
required by gnome-base/gnome-shell-3.2.1-r1, 
required by gnome-base/gdm-3.2.1.1-r1[gnome-shell], 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=dev-libs/libgee-0.6.2.1:0 introspection
#required by gnome-extra/sushi-0.2.1, 
required by gnome-base/nautilus-3.2.1[previewer], 
required by app-cdr/brasero-3.2.0-r1[nautilus], 
required by media-sound/sound-juicer-2.99.0_pre20111001, 
required by gnome-base/gnome-2.32.1-r1, 
required by @selected, 
required by @world (argument)
>=media-libs/clutter-gtk-1.0.4 introspection

This one is pretty simple to fix: you can add the lines >=dev-libs/libgee-0.6.2.1:0 introspection and >=media-libs/clutter-gtk-1.0.4 introspection to the file /etc/portage/package.use (they are USE-flag requirements rather than keywords), or you can run emerge -avuDN world --autounmask-write to have portage write the required changes for you.
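Concretely, the relevant part of the file ends up looking like this:

# /etc/portage/package.use
>=dev-libs/libgee-0.6.2.1:0 introspection
>=media-libs/clutter-gtk-1.0.4 introspection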

Problem 2: Permissions

--------------------------- ACCESS VIOLATION SUMMARY ---------------------------
LOG FILE "/var/log/sandbox/sandbox-3222.log"

VERSION 1.0
FORMAT: F - Function called
FORMAT: S - Access Status
FORMAT: P - Path as passed to function
FORMAT: A - Absolute Path (not canonical)
FORMAT: R - Canonical Path
FORMAT: C - Command Line

F: mkdir
S: deny
P: /root/.local/share/webkit
A: /root/.local/share/webkit
R: /root/.local/share/webkit
C: ./epiphany --introspect-dump=
/var/tmp/portage/www-client/epiphany-3.0.4/temp/tmp-introspectSfeqBO/functions.txt,
/var/tmp/portage/www-client/epiphany-3.0.4/temp/tmp-introspectSfeqBO/dump.xml
--------------------------------------------------------------------------------

This one totally confused me. If I’m reading it correctly, the install script lacks the permissions necessary to write to the path /root/.local/share/webkit/. The odd part of this is that the script is running as the root user, so this simply shouldn’t happen. I was able to give it the permissions that it needed by running chmod 777 /root/.local/share/webkit/, but I had to start the install process all over again, and it just failed with a similar error the first time that it attempted to write a file to that directory. What the fuck?

At 10pm, I couldn’t be bothered to find a fix for this… I used the tutorial’s instructions to roll back the changes, and I’ll try again later if I’m feeling motivated. In the meantime, if you know how to fix this process, I’d love to hear about it.

Categories: Gentoo, God Damnit Linux, Jon F Tags: , ,

Fixing build issues with phonon-backend-gstreamer-4.5.1

November 9th, 2011 No comments

I’ve decided to try and upgrade my LFS system to the latest version of KDE (4.7.3 as of the time of this writing) and correspondingly needed to upgrade phonon-backend-gstreamer. Unfortunately, following the previous version’s compilation instructions provided this nasty message:

[ 4%] Building CXX object gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o
In file included from /sources/phonon-backend-gstreamer-4.5.1/gstreamer/audiooutput.cpp:22:0:
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:38: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:38: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:200:69: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:262:42: error: template argument 1 is invalid
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:263:45: error: ‘Phonon::MediaController::NavigationMenu’ has not been declared
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:11: error: ‘NavigationMenu’ is not a member of ‘Phonon::MediaController’
/sources/phonon-backend-gstreamer-4.5.1/gstreamer/mediaobject.h:317:42: error: template argument 1 is invalid
make[2]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/audiooutput.cpp.o] Error 1
make[1]: *** [gstreamer/CMakeFiles/phonon_gstreamer.dir/all] Error 2
make: *** [all] Error 2

To fix this issue, make sure you have the latest GStreamer and phonon-backend-xine installed. Beyond that, I followed some of the advice from this KDE forum topic.

If, like me, you installed Qt into /opt/qt, create a symbolic link inside the Qt directory pointing to your system’s latest version of phonon. For later success with kde-runtime, also link the libphonon libraries in /opt/qt-4.7.1/lib to your recently compiled versions in /usr/lib64 (adjust paths to /usr/lib on 32-bit systems):

# mv /opt/qt-4.7.1/include/phonon /tmp
# ln -snf /usr/include/phonon /opt/qt-4.7.1/include/phonon
# cd /opt/qt-4.7.1/lib
# rm -rf libphonon*
# ln -snf /usr/lib64/libphonon.so libphonon.so
# ln -snf /usr/lib64/libphonon.so.4 libphonon.so.4
# ln -snf /usr/lib64/libphonon.so.4.5.1 libphonon.so.4.5.1
# ln -snf /usr/lib64/libphononexperimental.so libphononexperimental.so
# ln -snf /usr/lib64/libphononexperimental.so.4 libphononexperimental.so.4
# ln -snf /usr/lib64/libphononexperimental.so.4.5.1 libphononexperimental.so.4.5.1

Then rerun the compilation process for phonon-backend-gstreamer and voila, no more errors. (You’ll probably still have more issues to work out, but this gets past the phonon-backend-gstreamer blockade.)

LFS so far – why you should build i686 and x86_64 binaries

November 7th, 2011 No comments

I’ve been actively using my (Beyond) Linux from Scratch installation for about a week now, and it’s actually pretty neat to have something working that I built with just a general outline. Granted, the LFS guide is very well put together, but going beyond the basic console of a system requires a bit of time and effort.

In just about any other distro, the package manager should be your best friend (except when it breaks). Even in a source-based Linux like Gentoo, Portage gives you a pretty decent idea of what’s installed and is able to keep track of dependencies. With LFS, there really are times where I don’t want to have to locate and download seventeen .tar.bz2 files, and ./configure --prefix=/usr; make; make install each one in sequence. What’s worse is when you run into three dependencies for a particular piece of software, and the first two install properly, but the third one depends on ten additional packages.

This is what building software in LFS looks like.

There are also some libraries that, despite being built on an x86_64 system, will come out as 32-bit, and require special compiler or configure flags in order to build a pure 64-bit version. LFS x86_64 does not really have patience for anything 32-bit. This is generally fine because you’re building most of the applications yourself, but you can’t “just run” any typical application unless it’s taken the architecture into account.

In summary, while it’s awesome to go to SourceForge and have the very latest version of a package, sometimes I just don’t feel like going through all those hoops and satisfying twenty conditions for a compile to take place. Perhaps I’m even OK with your application using a bundled library rather than relying on whatever happens to be installed in /usr/lib.

The takeaway from this is that besides providing the source, considerate developers should try to build i686 and x86_64 binaries from that same source. If your build system has issues or you find it painful to produce binary releases, remember that anyone attempting to follow the INSTALL file will run into the same pain points. Firefox, for example, has both i686 and x86_64 release tarballs. The 64-bit version works quite well on my LFS installation and it’s how I’m writing this post.

Getting Firefox 3.6.23 to compile under LFS

November 4th, 2011 No comments

Using the instructions from the BLFS book with the latest available 3.6 build of Firefox, I was able to achieve success. I figured I’d try out 3.6 before going on to something with a terribly inflated version number, and as per usual, ran into some problems:

  • Rebuild libpng-1.5.5 with APNG support. This is actually optional as I ended up commenting out the --with-system-png option in mozconfig.
  • In the suggested mozconfig, comment out the last two lines:

    #ac_add_options --with-system-libxul
    #ac_add_options --with-libxul-sdk=/usr/lib/xulrunner-devel-1.9.2.13

    to create a standalone build.

  • Apply the GCC patch from this Bugzilla report (direct download).
  • Apply a partial patch from the Chromium project of all places. I’ve customized it here:


    # TLE Patch for Firefox/LFS

    diff -u a/gfx/ots/src/os2.cc b/gfx/ots/src/os2.cc
    --- a/gfx/ots/src/os2.cc 2011-11-02 07:10:17.000000000 -0400
    +++ b/gfx/ots/src/os2.cc 2011-11-02 07:10:30.000000000 -0400
    @@ -5,6 +5,7 @@
     #include "os2.h"

     #include "head.h"
    +#include <cstddef>

     // OS/2 - OS/2 and Windows Metrics
     // http://www.microsoft.com/opentype/otspec/os2.htm

  • Apply a GCC4.6-specific patch to fix various .cpp files. Some parts of the patch will fail; that’s expected.
  • Manually edit layout/style/nsCSSRuleProcessor.cpp and go to line 1199. Change the source code as follows:

    const nsCaseInsensitiveStringComparator ciComparator;
    should become

    const nsCaseInsensitiveStringComparator ciComparator = nsCaseInsensitiveStringComparator();
  • For the toolkit/components/places/src/SQLFunctions.cpp file, change line 126 to:
    const nsCaseInsensitiveStringComparator caseInsensitiveCompare = nsCaseInsensitiveStringComparator();
  • In toolkit/crashreporter/google-breakpad/src/common/linux/language.cc, make sure line 51 is changed to:
    const CPPLanguage CPPLanguageSingleton = CPPLanguage();
  • In toolkit/xre/nsAppRunner.cpp, line 990:

    static const nsXULAppInfo kAppInfo = nsXULAppInfo();
  • While this is resolved in newer Firefox versions, copy security/coreconf/Linux2.6.mk to security/coreconf/Linux3.1.mk to add support for the 3.1 kernel.

Your reward will be a working Firefox installation:

Installing glib-1.2.10 in LFS to get XMMS working

November 3rd, 2011 1 comment

So I wanted to install XMMS in Linux From Scratch, as it’s one of the more reliable MP3 players and one of the first multimedia Linux apps I’ve used. It’s very reminiscent of Winamp 2:

If you would also like to get it installed, you’ll need the source and glib-1.2.10. Then, check out a common problem when installing glib, and a patch to fix the ./configure step.

LFS, pre-KDE: kdebindings and kdebase-runtime

November 2nd, 2011 No comments

kdebindings

Are you running into the following problem when compiling kdebindings? Well, you’re probably not, because you picked a saner distribution than LFS, but here goes anyway!

ASSERT failure in QList::at: “index out of range”, file /qt/trunk/include/QtCore/qlist.h, line 456
/bin/sh: line 1: 7841 Aborted (core dumped)

From http://old.nabble.com/Smokegen-core-dump-td30797484.html, you can fix this with a patch to indexedstring.cpp:

--- generator/parser/indexedstring.cpp.orig 2011-02-23 22:12:38.695255708 +0100
+++ generator/parser/indexedstring.cpp 2011-02-24 02:36:09.035361151 +0100
@@ -195,12 +195,15 @@
}

QByteArray IndexedString::byteArray() const {
+  qDebug() << "strings()->size():" << strings()->size() << ", m_index:" << m_index;
if(!m_index)
return QByteArray();
else if((m_index & 0xffff0000) == 0xffff0000)
return QString(QChar((char)m_index & 0xff)).toUtf8();
-  else
+  else if (m_index < strings()->size())
return strings()->at(m_index).toUtf8(); /*arrayFromItem(globalIndexedStringRepository->itemFromIndex(m_index));*/
+  else
+    return QByteArray();
}

unsigned int IndexedString::hashString(const char* str, unsigned short length) {

I ended up removing the first qDebug() line before the if statement as I don’t need my compiler to be that chatty – I just need this package to compile properly. Reconfigure and attempt to make with:

cmake -DCMAKE_INSTALL_PREFIX=$KDE4_PREFIX \
    -DKDE_DEFAULT_HOME=.kde4 \
    -DSYSCONF_INSTALL_DIR=/etc/kde4 \
    .. &&
make

kdebase-runtime

You can patch away your problems if you run into the following message:

[ 39%] Building CXX object kioslave/nfs/CMakeFiles/kio_nfs.dir/kio_nfs.o
In file included from /sources/kdebase-runtime-4.6.0/kioslave/nfs/kio_nfs.cpp:21:0:
/sources/kdebase-runtime-4.6.0/kioslave/nfs/kio_nfs.h:33:21: fatal error: rpc/rpc.h: No such file or directory
compilation terminated.

First, get libtirpc installed to make this work, but then again, you could have just guessed that you needed it, right? 😉

Used under Creative Commons NC license from zhenech

There are some LFS-specific instructions to follow before libtirpc will compile:

  • Unpack glibc-2.14.1
  • In its directory, execute:
    mkdir -p /usr/include/rpc{,svc}
    cp sunrpc/rpc/*.h /usr/include/rpc
    cp nis/rpcsvc/*.h /usr/include/rpcsvc
  • Compile libtirpc with ./configure --prefix=/usr && make && make install

Then from Sourcemage, linking to an old Bugzilla installation:

diff --git a/kioslave/nfs/CMakeLists.txt b/kioslave/nfs/CMakeLists.txt
index b973a73..6556769 100644
--- a/kioslave/nfs/CMakeLists.txt
+++ b/kioslave/nfs/CMakeLists.txt
@@ -3,8 +3,8 @@ set(kio_nfs_PART_SRCS kio_nfs.cpp mount_xdr.c nfs_prot_xdr.c )

 kde4_add_plugin(kio_nfs ${kio_nfs_PART_SRCS})

-
-target_link_libraries(kio_nfs   ${KDE4_KIO_LIBS})
+include_directories(/usr/include/tirpc)
+target_link_libraries(kio_nfs   ${KDE4_KIO_LIBS} tirpc)

 install(TARGETS kio_nfs  DESTINATION ${PLUGIN_INSTALL_DIR} )

Once this is complete you should be able to get kdebase-runtime compiled.

Categories: God Damnit Linux, Jake B, KDE Tags:

LFS, pre-KDE: Fixing libmng with -fPIC and xine with a header

November 2nd, 2011 No comments

Fixing libmng with -fPIC

In preparation for getting KDE4 (and Qt4, and all the other dependencies) working with my Linux from Scratch install, I noticed an issue when compiling libmng:

/usr/bin/ld: libmng_chunk_io.o: relocation R_X86_64_32 against `.rodata' can not be used when making a shared object; recompile with -fPIC
libmng_chunk_io.o: could not read symbols: Bad value
collect2: ld returned 1 exit status
make: *** [libmng.so.1.1.0.9] Error 1

To fix this, you’ll have to edit the makefile in /sources/libmng-1.0.10/makefiles/makefile.linux as per this osdir mailing list thread. Line 47 currently reads:

FLAGS=-I$(ZLIBINC) -I$(JPEGINC) -I$(LCMSINC) -Wall -O3 -funroll-loops \

Add the -fPIC flag instead:

FLAGS=-I$(ZLIBINC) -I$(JPEGINC) -I$(LCMSINC) -Wall -O3 -fPIC -funroll-loops \

Then change back to /sources/libmng-1.0.10 and run make clean; cp makefiles/makefile.linux Makefile && make to successfully compile the library.
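If you would rather not open an editor, an equivalent sed one-liner run from /sources/libmng-1.0.10 should make the same change (assuming line 47 still matches the original shown above):

sed -i 's/-Wall -O3 -funroll-loops/-Wall -O3 -fPIC -funroll-loops/' makefiles/makefile.linux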

And Xine

Xine appears to be missing a header, causing an xxmc compilation error. Check out the original solution and add the line with the + where indicated:

Index: src/video_out/xxmc.h
--- src/video_out/xxmc.h 2011-01-23 17:55:01.333928003 +0100
+++ src/video_out/xxmc.h 2011-01-23 17:54:48.509926463 +0100
@@ -79,6 +79,7 @@
#include <X11/extensions/Xvlib.h>
#ifdef HAVE_VLDXVMC
#include <X11/extensions/vldXvMC.h>
+ #include <X11/extensions/XvMClib.h>
#else
#include <X11/extensions/XvMClib.h>
#include <X11/extensions/XvMC.h>

LFS, pre-KDE: Errors Compiling qca-2.0.3

November 2nd, 2011 No comments

If you’re going through the Beyond Linux From Scratch guide, and run into this error while compiling qca-2.0.3 (and I assume many other versions of qca), I think I can help.

You don’t seem to have ‘make’ or ‘gmake’ in your PATH.
Cannot proceed.

The fix is relatively easy. Just make sure to have which installed on the machine. Jake found this out the hard way by looking through the configure script. Doing this experiment on Linux From Scratch has really given me an appreciation for distributions that come with basic utilities such as which.

Since which is very difficult to find on Google, here is a link: http://www.linuxfromscratch.org/blfs/view/svn/general/which.html
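The build itself is the usual three-step dance (the 2.20 tarball name here is just an example; use whatever version the BLFS page lists):

tar xzf which-2.20.tar.gz && cd which-2.20
./configure --prefix=/usr && make && make install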

Bye Bye Bodhi

November 1st, 2011 10 comments

Ah Linux

One website lists ten reasons to use Linux, my favourites of which are “Linux is easier to use than Windows” and “Linux is fun.” It is day three of the experiment and so far I haven’t installed Linux, but I have taken a Dell Vostro 3350 apart about five times. I borrowed this laptop from a fellow comrade in this experiment, Jake B, as I will be sending my own netbook home this coming December.

Starting off I aimed to install both VectorLinux and Bodhi to compare them. I consider myself a relatively light computer user outside of the office and so comparing two different distributions would give me something to talk about. Alas this choice has come back to bite me in the…

I used unetbootin to begin with, on a USB key that was confirmed to be working. I then put Vector on the USB key, and it brought up half a blue screen with the top of the Vector logo just appearing above the black lower half of the display. After a couple of tries I figured it was corrupt files or a bad ISO, so I reformatted the USB key in order to try Bodhi instead. Unfortunately I didn’t even get a logo this time. Next I burned a CD of Vector and got as far as the ‘find installation media’ screen, but no matter how many refreshes or reloads I did, it apparently couldn’t find the CD-ROM or configuration files.

Having previously seen installers fail to find hard drives and USB keys because of the hard drive mode setting in the BIOS, I changed it from AHCI to ATA and, lo and behold, finally some success. I managed to get the Vector installer to write partitions to the disk (using the CD at this point) after choosing the add-on applications I wanted to install. Again this failed, so I tried once more with the USB key. This failed the same way, except it said that it could not find live media. I even tried using the USB key and the CD together at the same time, with no luck.

After switching between Bodhi and Vector trying to get a complete install, and many, many CDs later, I temporarily gave up. I downloaded a new distribution called Sabayon, a Gentoo-based distro with the Enlightenment desktop environment, but alas I kept getting the same errors. I even tried Ubuntu 10.04 and Linux Mint, and neither of them could write to the disk.

Figuring it was a hard drive issue I took out the hard drive from the laptop and mounted it in an enclosure. After a quick reformat, which removed a random 500MB LVM partition that I believed to be corrupt, I put it back in the machine. Still no luck.

The errors I kept getting included disk, I/O, live media, cannot find CD-ROM, no usable media, no config file and a couple of others. Each time I tried installing, it would fail at a different section of the install, and the error would be different with each medium used. Among all of the errors I’ve seen, the main one seems to be “(initramfs) unable to find a medium containing a live filesystem”

On a whim I decided to test for any other hardware errors by running diagnostics from the BIOS. No errors found. I even dug out my ancient XP Professional disc, and after a couple of BIOS changes and a couple of Blue Screens – that were my fault because I had changed the hard drive out so much – I got XP to successfully load, install, and commit changes to the hard drive.

Turning to Google, and with the help of a more advanced Linux Experiment comrade, I retried installing Linux by adding some commands to the installer boot options. Still no luck.

After more Googling I have found that there are a few possible reasons that this could be happening. I have read that it could be caused by the USB3 ports interfering with the bootable media, or that it could be related to a CD-ROM master/slave setting. Either way, I still haven’t figured it out, and I’m not willing to break someone else’s computer just to see if I can overcome this frustrating first experience with Linux. My next task is to try some ACPI hacks and, after finding this useful link, to install the latest version of Ubuntu, which seems to be compatible with this machine’s hardware. But for now it’s …

Windows 1 Linux 0

Men using Linux 1 Women using linux 0

Kernel Panic!

November 1st, 2011 2 comments

So like Tyler, I’ve decided to run Gentoo. Hey, it seemed like a good idea at the time.

My experience thus far can be summed up with a single word: frustrating. I spent my first day working through the (excellent) Gentoo Handbook. Like Jake, I found it handy to have run lshw on my system prior to installing Gentoo. This provided me with a list of my hardware that I could refer back to during the installation process, and saved me a few headaches.

At first, my live-cd environment lacked a network connection. My machine has two network interfaces in it. One uses the sky2 kernel module, while the other uses skge. I ran:

modprobe skge
net-setup eth1
[follow on-screen instructions]
ping google.com

and was successful.

On that first day of dicking about, I managed to get all the way to Chapter 10: Configuring the Bootloader. It was at this point, in subchapter 10.d, that I was instructed to reboot the system, as though it would be a relaxing, daisy-scented walk in the park. Not so.

Apparently, the kernel that I’ve managed to compile does not recognize the SATA interface on my motherboard. When I attempt to boot, GRUB hands control off to the kernel, which goes looking for my root partition on /dev/sda3. It then dies with a message like

Kernel panic – not syncing: VFS: Unable to mount root fs on unknown-block(8,3)

This error message is the bane of my existence.
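For the record, the usual cause of this particular panic is that support for the SATA controller or for the root filesystem was built as a module (or not at all) rather than directly into the kernel. In .config terms, options along these lines need to be =y, not =m (ahci and ext4 are only examples; the right drivers depend on your chipset and filesystem):

CONFIG_ATA=y
CONFIG_SATA_AHCI=y
CONFIG_BLK_DEV_SD=y
CONFIG_EXT4_FS=y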

After a great deal of head-vs-desk action, approximately 37 kernel compilations, and a lot of googling, I managed to find a Gentoo wiki entry that instructs users of my chipset on how to compile their very own working kernel. Tonight, I intend to follow it, in hopes that I can get the system to boot some time soon.

At this rate, I’ll be lucky to have a working desktop by the end of the experiment.

Categories: Gentoo, God Damnit Linux, Jon F, Linux Tags:

Linux from Scratch: A Cautionary Tale, Part 2

November 1st, 2011 3 comments

What Next? Chroot

Once you get into the chroot environment, you will get the incredibly annoying PC speaker beep every time you foul up a command.

When compiling glibc in section 6.9, first ensure that there’s no “lib64” directory in your root; for some reason I had a symlink of lib64 pointing to itself. Make sure you’ve run the sed script correctly or the “make install” portion will fail. Specifically, use -Wl (the letter l) in the command, not -W1 (the number 1). After you fix the idiotic transposition of 1 and L, remove both the glibc-build and glibc-2.14.1 directories under /sources and restart section 6.9 from the beginning. If you don’t restart from the beginning, you’ll still get “glibc cannot find dynamic linker” even though the file exists in /lib64.

Keep Watching What You Type

In section 6.10, when running the grep command to ensure the correct startfiles are used, make sure you use [1in] with a one and not [lin] with an L in the command:

grep -o '/usr/lib.*/crt[1in].*succeeded' dummy.log

In sections 6.11 and 6.12, I had to run ldconfig before the new libraries were picked up. It seems like the same problem encountered on this mailing list, but I’d confirmed that my PATH was set correctly. The same applied for section 6.22; run ldconfig before attempting the configure/make/make install process for E2fsprogs.

For procps-3.2.8, when applying the sed command in chapter 6.27.1, make sure you’ve copied and pasted it (or at least check your typing). I missed a forward slash in the regex about four times, causing an error during make:

...undefined reference to `get_pid_digits'
collect2: ld returned 1 exit status

But hey, at least I have things sort of working:

My next few posts will deal with specific problems with reasonable solutions.

Linux From Scratch : The Beginning…

October 31st, 2011 1 comment

Hi Everyone,

If you don’t remember me, I’m Dave. Last time for the experiment I used SuSE, which I regretted. This time I decided to use Linux From Scratch like Jake, as I couldn’t think of another distribution that I haven’t used in some way or another before. Let me tell you… It’s been quite the experience so far.

The Initial Setup

Unlike Jake, I opted not to use the LFS Live CD, as I figured it would be much easier to start with a Debian Live CD. By the sounds of it, I made a good decision. I had network right out of the gate, which made it easy to copy and paste awful sed commands.

The initial part of the install was relatively painless for me. Well, except that one of the LFS mirrors had a version from 2007 listed as their latest stable build, setting me back about an hour. I followed the book, waited quite a while for some stuff to compile, and I was in my brand new … command line. Ok, it’s not very exciting at first, but I was jumping for joy when I ran the following command and got the result I did:

root [ ~ ]# ping google.ca
PING google.ca (74.125.226.82): 56 data bytes
64 bytes from 74.125.226.82: icmp_seq=0 ttl=56 time=32.967 ms
64 bytes from 74.125.226.82: icmp_seq=1 ttl=56 time=33.127 ms
64 bytes from 74.125.226.82: icmp_seq=2 ttl=56 time=40.045 ms

 

Series of Tubes

The internet was working! Keep reading if you want to hear what awful thing happened next…

Read more…

Linux from Scratch: A Cautionary Tale, Part 1

October 30th, 2011 1 comment

And so I’ve started with Linux from Scratch! Here are some helpful pointers for anyone considering running LFS on their own. Caution: this is highly nerdy and keyworded to hell to hopefully allow your favourite search engine to grab solutions from this post.

Getting Started, AKA: Use a Distribution You Know

LFS needs an existing Linux environment. Don’t try to use unetbootin on the LFS liveCD (I used lfslivecd-x86_64-6.3-r2145-min.iso to get started, but there is a newer revision 2160 available on one of the mirrors). unetbootin in this configuration is just a bag of hurt and you’ll spend an inordinate amount of time trying to get your root volume to work, so just burn a CD.

If I were building LFS again, I’d have started from a stable Debian base or another Linux distribution where I’m comfortable and have network access – there are a number of reasons below why I suggest this, but you really want your host system kernel to be 2.6.25 or higher.

Make sure all the patches from linuxfromscratch.org/lfs/view/stable/chapter03/patches.html are downloaded and in a location you can access from your host distribution. USB sticks are OK for this if you don’t have network access (mount the stick, and then copy the patches and packages to the sources directory). Use DownThemAll or a similar mass downloading application/extension on the patches page to save time and grief.
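If the host does have network access, wget can also grab the whole patches directory in one shot. Something like this (the URL here is a best guess from memory, so double-check it against the version of the book you are building):

wget -r -l1 -nd -np -A '*.patch' http://www.linuxfromscratch.org/patches/lfs/6.3/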

Watch What You Mount

Augh, out of space! It’s quite possible to mount two partitions at /mnt/lfs at the same time by missing a directory, like this:

$ mount /dev/sdb3 /mnt/lfs
$ mount /dev/sdb1 /mnt/lfs

Oops – I missed /boot at the end of the second mount command. To confirm this before copying any files, “mount” should show only one partition active at /mnt/lfs. Since my /dev/sdb1 partition was only 200MB I got to the GCC extraction step and was promptly disappointed. I ended up unmounting everything, recreating the filesystem (mke2fs -v /dev/sdb1) and then remounting (mkdir -pv /mnt/lfs/boot; mount -t ext2 /dev/sdb1 /mnt/lfs/boot).

For more tales of installation havoc, keep reading…

Read more…