Archive

Archive for the ‘Hardware’ Category

Querying the State of a Hardware WiFi Switch with RF-Kill

October 8th, 2012 No comments

The laptop that I’m writing this post from has a really annoying strip of touch-response buttons above the keyboard that control things like volume and whether or not the wifi card is on. By touch-response, I mean that the buttons don’t require a finger press, but rather just a touch of the finger. As such, they provide no haptic feedback, so it’s hard to tell whether or not they work except by surveying the results of your efforts in the operating system.

The WiFi button in particular has got to be the worst of these buttons. On Windows, it glows a lovely blue colour when activated, and an angry red colour when disabled. This directly maps to whether my physical wireless network interface is enabled or disabled, and is a helpful indicator. Under Linux Mint 12 however, the “button” is always red, which makes it a less than helpful way to diagnose the occasional network drop.

Lately, I’ve been having trouble getting the wifi to reconnect after one of these drops. To troubleshoot, I would open up the Network Settings panel in Mint, which looks something like this:

Mint 12's Wireless Network Configuration Panel

The only problem with this window is that the ON/OFF slider that controls the state of the network interface would refuse to work. If I dragged it to the ON position, it would just bounce back to OFF without changing the actual state of the card.

In the past, this behaviour has really frustrated me, driving me so far as to reboot the machine in Windows, re-activate the physical interface, and then switch back to Mint to continue doing whatever it was that I was doing in the first place. Tonight, I decided to investigate.

I started out with my old friend iwconfig:

jonf@jonf-mint ~ $ sudo iwconfig
lo        no wireless extensions.

eth0      no wireless extensions.

wlan0     IEEE 802.11abgn  ESSID:off/any
Mode:Managed  Access Point: Not-Associated   Tx-Power=off
Retry  long limit:7   RTS thr:off   Fragment thr:off
Encryption key:off
Power Management:off

As you can see, the wireless interface is listed, but it appears to be powered off. I was able to confirm this by issuing the iwlist command, which is supposed to spit out a list of nearby wireless networks:

jonf@jonf-mint ~ $ sudo iwlist wlan0 scanning
wlan0     Interface doesn't support scanning : Network is down

Again, you can see that the interface is not reacting as one might expect it to. Next, I attempted to enable the interface using the ifconfig command:

jonf@jonf-mint ~ $ sudo ifconfig wlan0 up
SIOCSIFFLAGS: Operation not possible due to RF-kill

Ah-ha! A clue! Apparently, something called rfkill was preventing the interface from coming online. It turns out that rfkill is a handy little tool that allows you to query the state of the hardware buttons (and other physical interfaces) on your machine. You can see a list of all of these interfaces by issuing the command rfkill list:

jonf@jonf-mint ~ $ rfkill list
0: phy0: Wireless LAN
Soft blocked: no
Hard blocked: yes
1: hp-wifi: Wireless LAN
Soft blocked: no
Hard blocked: yes

Interestingly enough, it looks like my wireless interface has been turned off by a hardware switch, which is what I had suspected all along. The next thing that I tried was the rfkill event command, which tails the list of hardware interface events. Using this tool, you can see the effect of pressing the physical switches and buttons on the chassis of your machine:

jonf@jonf-mint ~ $ rfkill event
1349740501.558614: idx 0 type 1 op 2 soft 0 hard 0
1349740505.153269: idx 0 type 1 op 2 soft 0 hard 1
1349740505.354608: idx 1 type 1 op 2 soft 0 hard 1
1349740511.030642: idx 1 type 1 op 2 soft 0 hard 0
1349740515.558615: idx 0 type 1 op 2 soft 0 hard 0

Each of the lines that the tool spits out shows a single event. In my case, it shows the button that controls the wireless interface switching the hard block setting (physical on/off) from 0 to 1 and back.

After watching this output while pressing the button a few times, I realized that the button does actually work, but that when the interface is turned on, it can take upwards of 5 seconds for the machine to notice it, connect to my home wireless, and get an IP address via DHCP. In the intervening time, I had typically become frustrated and pressed the button a few more times, trying to get it to do something. Now I know that I have to press the button exactly once and then wait for it to take effect.
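If you hit the same symptom, a little loop like this takes the guesswork out of the wait. This is only a sketch: the rfkill index (0) and the interface name (wlan0) are from my machine and will likely differ on yours.

#!/bin/sh
# Wait for the hard block on the wireless device to clear...
while rfkill list 0 | grep -q "Hard blocked: yes"; do
sleep 1
done
# ...then bring the interface up
sudo ifconfig wlan0 up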

I stand by the fact that this is a piss-poor design, but hey, what do I know? I’m not a UX engineer for HP. At least it’s working again, and I am reconnected to my sweet sweet internet.

Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012 No comments

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
# Compute the epoch timestamp of the next 17:00 (5pm)
t=`date --date "17:00" +%s`
# Ask for the sudo password up front so the later commands don't stall
sudo /bin/true
# Set the RTC wake alarm for that time
sudo rtcwake -u -t $t -m on &
# Give rtcwake a moment to set the alarm, then suspend
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
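For reference, the scheduling piece is a single crontab entry. Here’s a sketch for the midnight example above; the script path is a placeholder, and because the script calls sudo, it’s simplest to put the entry in root’s crontab (sudo crontab -e):

# Suspend at midnight every day; the script handles the 5pm wake-up
# (the path is a placeholder - point it at wherever you saved the script)
0 0 * * * /home/user/bin/suspend-schedule.sh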

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.

How to test hard drive for errors in Linux

May 21st, 2012 No comments

I recently re-built an older PC from a laundry list of Frankenstein parts. However, before installing anything to the hard drive I wanted to check it for physical errors and problems, as I couldn’t remember why I wasn’t using this particular drive in any of my other systems.

From an Ubuntu 12.04 live CD I used GParted to delete the old partition on the drive. This let me start from a clean slate. After the drive had absolutely nothing on it I went searching for an easy way to test the drive for errors. I stumbled across this excellent article and began using badblocks to scan the drive. Basically what this program does is write to every spot on the drive and then read it back to ensure that it still holds the data that was just written.

Here is the command I used. NOTE: This command is destructive and will destroy the data on the hard drive. DO NOT use this if you want to keep the data that is already on the drive. Please see the above linked article for more information.

badblocks -b 4096 -p 4 -c 16384 -w -s /dev/sda

What does it all mean?

  • -b sets the block size to use. Most drives these days use 4096 byte blocks.
  • -p sets the number of passes to use on the drive. With -p 4 as above, badblocks keeps scanning until it completes 4 consecutive passes without discovering any new bad blocks, at which point it considers the process done.
  • -c sets the number of blocks to test at a time. This can help to speed up the process but will also use more RAM.
  • -w turns on write mode. This tells badblocks to do a write test as well.
  • -s turns on progress showing. This lets you know how far the program has gotten testing the drive.
  • /dev/sda is just the path to the drive I’m scanning. Your path may be different.
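If the scan turns up a handful of bad blocks but you still want to use the drive, one option (a rough sketch, not something from the original article; /dev/sda1 is an example partition) is to save the bad block list to a file during the scan and hand it to mkfs so the filesystem avoids those sectors:

# Scan the partition (not the whole disk) so block numbers line up,
# and record any failing blocks in a file
badblocks -b 4096 -w -s -o bad-blocks.txt /dev/sda1
# Create the filesystem, telling it to skip the recorded blocks
mkfs.ext4 -b 4096 -l bad-blocks.txt /dev/sda1

Alternatively, mkfs.ext4 -cc runs the same destructive read-write test itself as part of formatting.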

It should not be this hard to change my volume

December 22nd, 2011 1 comment

Normally my laptop is on my desk at home plugged into a sound system, so I never have to change the volume. However I’m currently on holiday, so that means I’m carrying my laptop around. Last night, I had the audacity to lower the volume on my machine. After all, nobody wants to wake up their family at 2am with “The history of the USSR set to Tetris.flv”. Using the media keys on my laptop did nothing. Lowering the sound in KMix did nothing. Muting in KMix did nothing. I figured that something had gone wrong with KMix and maybe I should re-open it. Well, it turns out that was a big goddamn mistake, because that resulted in me having no sound.

It took about 30 minutes to figure out, but the solution ended up being unmuting my headphone channel in alsamixer. It looks like for whatever reason, alsamixer and KMix were set to different master channels (headphone/speaker and HDMI, respectively), thus giving KMix (and my media keys) no actual control over volume.
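For what it’s worth, the same fix can be applied from a terminal with amixer. A sketch, assuming the channel is named “Headphone” as it was on my card – check alsamixer for the actual name on yours:

# Unmute the headphone channel and give it a sane level
amixer set Headphone unmute
amixer set Headphone 80%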

Categories: Hardware, Kubuntu, Sasha D Tags:

Reinstalling LFS soon: it’s not my fault, I swear!

November 17th, 2011 No comments

I went to play around with my Linux from Scratch installation after getting a working version of KDE 4.7.3 up and running. For a few days, my system had stood up to light web browsing and SSH shenanigans, and hadn’t even dropped a remote connection.

This was until this evening, when I decided to reboot to try and fix a number of init scripts that were throwing some terrible error about problems in lsb_base under /lib/ somewhere. The system came back up properly, but when I startx'd, I was missing borders for most of my windows. Appearance Preferences under KDE wouldn’t even launch, claiming a segmentation fault.

There were no logs available to easily peruse, but after a few false starts I decided to check the filesystem with fsck from a bootable Ubuntu 11.04 USB stick. The results were not pretty:


root@ubuntu:~# fsck -a /dev/sdb3
fsck from util-linux-ng 2.17.2
/dev/sdb3 contains a file system with errors, check forced.
/dev/sdb3: Inode 1466546 has illegal block(s).

/dev/sdb3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)

Running fsck without the -a option forced me into a nasty scenario where, like a certain Homer Simpson working from his home office, I repeatedly had to press “Y”.

At the end of it, I’d run through the terminal’s entire scroll buffer and continued to get errors like:


Inode 7060472 (/src/kde-workspace-4.7.3/kdm/kcm/main.cpp) has invalid mode (06400).
Clear? yes

i_file_acl for inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) is 33554432, should be zero.
Clear? yes

Inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) has invalid mode (00).
Clear? yes

i_file_acl for inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) is 3835562035, should be zero.
Clear? yes

Inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) has invalid mode (0167010).
Clear? yes

I actually gave up after seeing several thousand of these inodes experiencing problems. (Later I learned that fsck -y will automatically answer yes, which means I’ve improved my productivity several thousand times!)
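In other words, the whole button-mashing marathon reduces to one command (using the same partition as above):

# Automatically answer yes to every repair prompt
sudo fsck -y /dev/sdb3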

I was pretty quick to assess the problem: the OCZ Vertex solid state drive where I’d installed Linux had been silently corrupting data as I wrote to it. Most of the problem sectors were in my source directories, but a few happened to be in my KDE installation on disk. This caused oddities such as power management not loading and the absence of window borders.

So what goes on from here? I plan to replace the OCZ drive under warranty and rebuild LFS on my spinning disk drive, but this time I’ll take my own advice and start building from this LiveUSB Ubuntu install, with an up-to-date kernel and where .tar.xz files are recognized. Onward goes the adventure!

Categories: Hardware, Jake B, Linux from Scratch Tags:

Great Success!

November 1st, 2011 No comments

Just a quick note tonight – I finally managed to get a bootable Gentoo system installed!

After my last post, things were looking pretty grim. Instead of continuing to perpetuate the recompile/reboot cycle, I decided to start fresh, in hopes that I had simply missed a step the first time around. With this in mind, I started back at page one of the Gentoo Handbook and worked my way through the entire thing.

When it came time to compile my kernel, I opted for a slightly less error-prone method, and started off by installing Genkernel, a tool that automates some of the kernel creation steps. When running it however, I was sure to pass the --menuconfig parameter, which gave me full control over what modules were included in the final product.
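For reference, the invocation looks something like this (a sketch from memory, so double-check against the Handbook):

# Build and install the kernel, stopping at menuconfig for module selection
genkernel --menuconfig all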

Next, I followed the kernel tutorials in the Gentoo Handbook and on the Gentoo Wiki Asus P5Q-E page. This ensured that I included every component that was necessary for my system.

Once I rebooted the machine, a login prompt came up the first time. Great success indeed!

One little gotcha is important to note at this step: on my first login, I didn’t have any network access. Two things that might help:

  1. Open up /etc/conf.d/net in nano and add a line like config_eth0="dhcp" for each network interface in your machine, where eth0 is the name of the interface. This tells the machine to use DHCP when initializing the device. On most home networks, this will get you an IP address.
  2. Make sure that any required modules are loaded. I have two network interfaces. One uses the sky2 module, and the other uses skge. You can check to ensure that these are loaded with the command lsmod | grep sky2 where sky2 is the name of the module that you’re looking for. If it isn’t loaded, run modprobe sky2 to get it up and running. Note that you may need to recompile your kernel with support for the module in question if you missed it first time ’round. A combined sketch follows below.
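Putting both steps together, here’s roughly what the fix looks like, assuming an interface named eth0 that uses the sky2 module (a sketch; run as root and adjust names to your hardware):

# Load the NIC driver
modprobe sky2
# Tell the interface to use DHCP
echo 'config_eth0="dhcp"' >> /etc/conf.d/net
# Create an init script for the interface, start it now and at every boot
ln -s net.lo /etc/init.d/net.eth0
rc-update add net.eth0 default
/etc/init.d/net.eth0 start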

Tomorrow, I’ll compile an X11 server, and hopefully get started on the GNOME desktop environment. Christ there’s still a lot to do…

Categories: Gentoo, Jon F, kernel, Networking Tags:

Bye Bye Bodhi

November 1st, 2011 10 comments

Ah Linux

One website lists ten reasons to use Linux, my favourites of which are “Linux is easier to use than Windows” and “Linux is fun.” It is day three of the experiment and so far I haven’t installed Linux, but I have taken a Dell Vostro 3350 apart about five times. I borrowed this laptop from a fellow comrade in this experiment, Jake B, as I will be sending my own netbook home this coming December.

Starting off I aimed to install both VectorLinux and Bodhi to compare them. I consider myself a relatively light computer user outside of the office and so comparing two different distributions would give me something to talk about. Alas this choice has come back to bite me in the…

I used unetbootin to begin with, on a USB key that was confirmed to be working. I then put Vector on the USB key, and it brought up half a blue screen with the top of the Vector logo just appearing above the black lower half of the display. After a couple of tries I figured it was corrupt files or a bad ISO, so I reformatted the USB key in order to try Bodhi instead. Unfortunately I didn’t even get a logo this time. Next I burned a CD of Vector and got as far as the ‘find installation media’ screen, but no matter how many refreshes or reloads I did, it apparently couldn’t find the CD-ROM or configuration files.

Having previously seen installers fail to find hard drives and USB keys because of the hard drive setting in the BIOS, I changed it from AHCI to ATA and, lo and behold, finally some success. I managed to get the Vector installer to write partitions to the disk (using the CD at this point) after choosing the add-on applications I wanted to install. Again this failed, so I tried once more with the USB key. This failed the same way, except it said that it could not find live media. I even tried using the USB key and the CD together at the same time with no luck.

Switching between Bodhi and Vector in order to try and get a complete install, and many, many CDs later, I temporarily gave up. I downloaded a new distribution called Sabayon, a Gentoo based distro with the Enlightenment desktop environment, but alas I kept getting the same errors. I even tried Ubuntu 10.04 and Linux Mint, and neither of them could write to the disk.

Figuring it was a hard drive issue I took out the hard drive from the laptop and mounted it in an enclosure. After a quick reformat, which removed a random 500MB LVM partition that I believed to be corrupt, I put it back in the machine. Still no luck.

The errors I kept getting included disk, I/O, live media, cannot find CD-ROM, no usable media, no config file and a couple of others. Each time I tried installing it would fail at different sections of the install and the error would be different with each media used. Among all of the errors I’ve seen, the main one seems to be “(initramfs) unable to find a medium containing a live filesystem”

On a whim I decided to test for any other hardware errors by running diagnostics from the BIOS. No errors found. I even dug out my ancient XP Professional disc, and after a couple of BIOS changes and a couple of Blue Screens – that were my fault because I had changed the hard drive out so much – I got XP to successfully load, install, and commit changes to the hard drive.

Turning to Google, and with the help of a more advanced Linux Experiment comrade, I retried installing Linux by adding some commands to the installer boot options. Still no luck.

After more Googling I have found that there are a few possible reasons that this could be happening. I have read that it could be caused by the USB3 ports interfering with the bootable media, or that it could be related to a CD-ROM master/slave setting. Either way, I still haven’t figured it out and I’m not willing to break someone else’s computer just to see if I can overcome this frustrating first experience with Linux. My next task is to try some ACPI hacks and, after finding this useful link, try to install the latest version of Ubuntu which seems to be compatible with the hardware of this machine. But for now it’s …

Windows 1 Linux 0

Men using Linux 1 Women using linux 0

Fix PulseAudio loopback delay

July 1st, 2011 14 comments

Sort of a follow up (in spirit) to two of Jon’s previous posts regarding PulseAudio loopback; I noticed that there was quite a bit of delay (~500 ms to 1 second) in the default configuration and began searching for a way to fix it. After some research I found an alternative way to achieve the loopback, but with much less delay.

1. Install paman

First install the PulseAudio Manager application so that you can correctly identify the input device (i.e. your mic or line-in) and your output device (i.e. the sound card you are using).

sudo apt-get install paman

You can find the input sources under the Sources section and the output devices under the Sinks section of the Devices tab. Make note of the names of the two devices.

2. Unload any previous loopback modules

If you followed Jon’s previous posts, you will need to unload the modules (and potentially change your PulseAudio configuration so they don’t get loaded again on the next restart). This is to stop it from doubling all loopback sound.
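The unload itself is quick. A sketch (on older PulseAudio releases, pactl unload-module wants the numeric index from the first command rather than the module name):

# See which loopback instances are loaded (first column is the index)
pactl list short modules | grep module-loopback
# Unload by name (or by index on older releases)
pactl unload-module module-loopback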

3. Create an executable script

Create a script and copy the following command into it:

pacat -r --latency-msec=1 -d [input] | pacat -p --latency-msec=1 -d [output]

where [input] is the name of your input device found in step 1 and [output] is the name of the output device. In my case it would look like:

pacat -r --latency-msec=1 -d alsa_input.pci-0000_05_02.0.analog-stereo | pacat -p --latency-msec=1 -d alsa_output.pci-0000_05_02.0.analog-surround-51

4. Run script

By simply running the script now, you should get correct loopback with much less delay than using the default loopback module. Even better, if you set this script to run at startup you won’t have to worry about it ever again.

Unwanted Effects on my Line-in Interface

August 26th, 2010 No comments

Shortly after purchasing an xbox360, I wrote a short piece that gave instructions for forwarding your line-in audio through your pc speakers. By using this method and sharing my network connection, I’ve managed to run my xbox as a peripheral to my main computer setup, saving me space and money.

Lately however, the line-in loopback has not been working as expected. At times, it sounds like effects have been applied to the line. In particular, it sounds like somebody has applied a phaser or a delay effect to the input signal.

For the last week or so, I’ve been scratching my head about this issue, trying to figure out what part of my system may have applied effects to my loopback, but not to other audio on the system. Tonight, I was reviewing my original instructions for setting the thing up, and noticed that the module was being loaded on startup after being added to a system config file:

sudo sh -c ' echo "load-module module-loopback" >>  /etc/pulse/default.pa '

On a hunch, I took a look at the end of the file, and found the following lines:

### Make some devices default
#set-default-sink output
#set-default-source input
load-module module-loopback
load-module module-loopback

It looked like the instruction to load the loopback module had ended up in the config file twice! Because of this, the module was being loaded twice on startup.

So what does this have to do with the effects on the line? Well, if you play two copies of the same sound with a half-second gap between them, your ears will be tricked into thinking that you’re hearing one copy of the sound, but that it’s all echoey, as if a delay effect had been applied. If you repeat the experiment but this time decrease the gap between the two sounds even further, say to a few milliseconds, your ears will hear one copy of the sound with a phaser effect applied.

Essentially, when the module loaded twice, it was capturing the mix from the line-in port twice and playing back two separate copies of the audio. Depending on how close together these instances were, the result sounded normal, phased, or delayed. I fixed the issue by removing one of the lines and then restarting the machine. This time, it started only one copy of the module, and everything sounded fine.

The moral of the story: If you’re loading modules at startup, make sure that you only start one copy of them.
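One way to guard against this sort of thing (a sketch of the idea, not part of my original instructions) is to make the append idempotent, so that running it twice can’t duplicate the line:

# Append the load-module line only if it isn't already present
sudo sh -c 'grep -qxF "load-module module-loopback" /etc/pulse/default.pa || echo "load-module module-loopback" >> /etc/pulse/default.pa'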

PulseAudio: Monitoring your Line-In Interface

July 11th, 2010 22 comments

At home, my setup consists of three machines –  a laptop, a PC, and an XBOX 360. The latter two share a set of speakers, but I hate having to climb under the desk to switch the cables around, and wanted a better way to switch them back and forth. My good friend Tyler B suggested that I run the line out from the XBOX into the line-in on my sound card, and just let my computer handle the audio in the same way that it handles music and movies. In theory, this works great. In practice, I had one hell of a time figuring out how to force the GNOME sound manager applet into doing my bidding.

After quite a bit of googling, I found the answer on the Ubuntu forums. It turns out that the secret lies in a pulse audio module that isn’t enabled by default. Open up a terminal and use the following commands to permanently enable this behaviour. As always, make sure that you understand what’s up before running random commands that you find on the internet as root:

pactl load-module module-loopback
sudo sh -c ' echo "load-module module-loopback" >>  /etc/pulse/default.pa '

The first line instructs PulseAudio (one of the many ways that your system talks with the underlying sound hardware) to load a module called loopback, which, unsurprisingly, loops incoming audio back through your outputs. This means that you can hear everything that comes into your line-in port in real time. Note that this behaviour does not extend to your microphone input by design. The second line simply tells PulseAudio to load this module whenever the system starts.
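If the loopback ever grabs the wrong input, you can point the module at a specific source instead of the default one. A sketch (the source name here is just an example; pull yours from the first command):

# List capture devices to find your line-in source name
pactl list short sources
# Load the loopback against that specific source
pactl load-module module-loopback source=alsa_input.pci-0000_05_02.0.analog-stereo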

Now if you’ll excuse me, I have jerks to run over in GTA…

Setting up a RocketRaid 2320 controller on Linux Mint 9

July 4th, 2010 8 comments

After the most recently recorded podcast, I decided to take a stab at running Linux on my primary media server. The machine sports a Highpoint RocketRaid 2320 storage controller, which has support for running up to eight SATA drives. Over the course of last evening, I found out that the solution wasn’t quite as plug-and-play as running the same card under Windows. Here’s what I found out and how you can avoid the same mistakes.

Remove the RocketRaid card when installing Mint.

Make sure you have decent physical access to the machine, as the Mint installer apparently does not play nicely with this card. I replicated a complete system freeze (no keyboard or mouse input) after progressing past the keyboard layout section during the installer. Temporarily removing the 2320 from its PCI-Express slot avoided this problem; I was then able to re-insert the card after installation was complete.

Compile the Open Source driver for best results.

Highpoint has a download page for their 2300-series cards, which points to Debian and Ubuntu (x86 and x64) compatible versions of the rr232x driver. Unfortunately, the Ubuntu 64-bit version did not seem to successfully initialize – the device just wasn’t present.

A post on the Ubuntu forums (for version 9.04) was quite helpful in pointing out the required steps, but had a broken link that wasn’t easy to find. To obtain the Open Source driver, click through to the “Archive Driver Downloads for Linux and FreeBSD” page, then scroll to the bottom and grab the 32/64-bit .tar.gz file with a penguin icon. I’ve mirrored version 1.10 here in case the URLs on the HighPoint site change again: rr232x-linux-src-v1.10-090716-0928.tar.gz

The process for building the driver is as in the original post:

  • Extract the .tar.gz file to a reasonably permanent location. I say this because you will likely need to rebuild the module for any kernel upgrades. I’m going to assume you’ve created something under /opt, such as /opt/rr232x.
  • Change to the extraction directory and run:
    cd product/rr232x/linux
    sudo make
    sudo make install
  • Reboot your system after the installation process and the kernel will load the rr232x driver as a module.

Install gnome-disk-utility to verify and mount your filesystem.

I’m not sure why this utility disappeared as a default between Mint 8 and 9, but gnome-disk-utility will display all connected devices and allow you to directly mount partitions. It will also let you know if it “sees” the RR2320 controller. In my case, after installing the driver and rebooting, I was able to click on the 3.5TB NTFS-formatted storage and assign it a mount point of /media/Raid5 in two clicks.
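If you’d rather have the array come up automatically at boot, the equivalent /etc/fstab entry looks something like this (a sketch; the device node is an assumption and will vary by system):

# Mount the NTFS array at /media/Raid5 on every boot
/dev/sdc1  /media/Raid5  ntfs-3g  defaults  0  0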

What’s next?

Most of the remaining complaints online revolve around booting to the RR2320 itself, which seems like more of a pain than it’s worth (even under Windows this would seem to be the case.) I personally run a separate system drive; the actual Ubuntu installation manual from Highpoint may have additional details on actually booting to your RAID volume.

I’ve yet to install the Web or CLI management interface for Linux, which should happen in the next few days. One of the really neat items about this controller is that it can email you if a disk falls out of the array, but I’ll need to get the Web interface running in order to change some outgoing mail servers.

I also haven’t done any performance testing or benchmarking with the controller versus Windows, or if there would be an improvement migrating the filesystem to ext4 as opposed to NTFS. I do plan to stick with NTFS as I’d like portability across all major platforms with this array. From initial observations, I can play back HD content from the array without stuttering while large files are being decompressed and checksummed, which is my main goal.

Fix ATI vsync & video tearing issue once and for all!

May 6th, 2010 23 comments

NOTE: ATI’s most recent drivers now include a no tearing option in the driver control panel. Enabling it there is now the preferred method.

Two of the Linux machines that I use both have ATI graphics cards from the 4xxx series in them. They work well enough for what I do (very casual gaming and lots of video watching), but one thing has always bothered me to no end: video tearing. I assumed that this was due to vsync being off by default (probably for performance sake) but even after installing the proprietary drivers in the new Ubuntu 10.04 and trying to force it on, I still could not get the issue to resolve itself. After some long googling I found what seems to be a solution, at least in my case. I’ll walk you through what I did.

Before you continue read this: In order to fix this issue on my computers I had to trash xorg.conf and start over. If you are afraid you are going to ruin yourself, or if you have a custom setup already, please be very careful and read before doing what I suggest or don’t continue at all. Be sure to make a backup!

1 ) Install the ATI proprietary drivers and restart so that they can take effect.

2 ) Make a backup of your xorg.conf file. Do this by opening a terminal and copying it to a backup location. For example I ran the following code:

sudo cp /etc/X11/xorg.conf /etc/X11/backup.xorg.conf

3 ) Remove your existing (original) xorg.conf file:

sudo rm /etc/X11/xorg.conf

4 ) Generate a new default xorg.conf file using aticonfig (that’s two dashes below):

sudo aticonfig --initial

5 ) Enable video syncing (again two dashes before each command):

sudo aticonfig --sync-video=on --vs=on

6 ) If possible also enable full anti-aliasing:

sudo aticonfig --fsaa=on --fsaa-samples=4

7 ) Restart now so that your computer will load the new xorg.conf file.

8 ) Open up Catalyst Control Center and under 3D -> More Settings make sure the slider under Wait for vertical refresh is set to Always On.

That should be it. Please note that this trick may not work with all media players either (I noticed Totem seemed to still have some issues). One other thing I tried in VLC was to change the video output to be OpenGL which seemed to help a lot.

Good luck!

Pulse Audio Nonsense

January 4th, 2010 3 comments

Just a heads up: This isn’t the kind of post that contains answers to your problems. It is, unfortunately, the kind of post that contains a lot of the steps that I took to fix a problem, without much information about the order in which I performed them, why I performed them, or what they did. All that I can tell you is that after doing some or all of these things in an arbitrary order, stuff seemed to work better than it did before.

It’s funny how these posts often seem to come about when trying to get hardware related things working. I distinctly remember writing one of these about getting hardware compositing working on Debian. This one is about getting reliable audio on Kubuntu 9.10.

You see, I have recently been experiencing some odd behaviour from my audio stack in Kubuntu. My machine almost always plays the startup/shutdown noises, Banshee usually provides audio by way of GStreamer, videos playing in VLC are sometimes accompanied by audio, and Flash videos almost never have working sound. Generally speaking, restarting the machine will change one or all of these items, and sometimes none. The system is usable, but frustrating (although I might be forgiven for saying that having no audio in Flash prevents me from wasting so much time watching youtube videos when I ought to be working).

Tonight, after some time on the #kubuntu IRC channel and the #pulseaudio channel on freenode, I managed to fix all of that, and my system now supports full 5.1 surround audio, at all times, and from all applications. Cool, no? Basically, the fix was to install some PulseAudio apps:

sudo apt-get install pulseaudio pavucontrol padevchooser

Next, go to System Settings > Multimedia, and set PulseAudio as the preferred audio device in each of the categories on the left. Finally, restart the machine a couple of times. If you’re lucky, once you restart and run pavucontrol from the terminal, you’ll see a dialog box called Volume Control. Head over to the Configuration tab, and start choosing different profiles until you can hear some audio from your system. Also, I found that most of these profiles were muted by default – you can change that on the Output Devices tab. If one of the profiles works for you, congratulations! If not, well, I guess you’re no worse off than you were before. I warned you that this was that kind of post.

Also, while attempting to fix my audio problems, I found some neat sites:

  • Colin Guthrie – I spoke to this guy on IRC, and he was really helpful. He also seems to write a lot of stuff for the PulseAudio/Phonon stack in KDE. His site is a wealth of information about the stack that I really don’t understand, but makes for good reading.
  • Musings on Maintaining Ubuntu – Some guy named Dan who seems to be a lead audio developer for the Ubuntu project. Also a very interesting read, and full of interesting information about audio support in Karmic.
  • A Script that Profiles your Audio Setup – This bash script compiles a readout of what your machine thinks is going on with your audio hardware, and automatically hosts it on the web so that you can share it with people trying to help you out.
  • A Handy Diagram of the Linux Audio Stack – This really explains a lot about what the hell is going on when an application tries to play audio in the Linux.
  • What the Linux Audio Stack Seems Like – This diagram reflects my level of understanding of Linux audio. It also reminds me of XKCD.
  • Ardour – The Digital Audio Workstation – In the classic tradition of running before walking, I just have to try this app out.

Kubuntu 9.10 (Part II)

January 4th, 2010 No comments

Well I managed to fix my compositing problem but I honestly don’t know why it worked. Basically I went into the System Settings > Desktop > Desktop Effects menu and manually turned off all desktop effects. Next I used jockey-text to disable the ATI driver. After a quick restart I re-enabled the ATI driver and restarted again. Once I logged in I went back into the System Settings > Desktop > Desktop Effects menu and enabled desktop effects. This magically worked… but only until I restarted. In order to actually get it to start enabled I had to go back into System Settings > Desktop > Desktop Effects, click on the Advanced tab, and then disable functionality checks. I am sure this is dangerous or something, but it’s the only way I can get my computer to restart with the effects enabled by default.

I’m really starting to hate this graphics card…

Going Linux, Once and for All

December 23rd, 2009 7 comments

With the linux experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all linux all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support that I had always enjoyed under Windows was in fact a ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives, and then link them together with a software RAID. More on that process in a later post, once I figure out how to actually do it.
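From what I’ve read so far, the software RAID build will look roughly like this. Very much a sketch until I’ve actually done it – the device names are placeholders, and the create step destroys whatever is on the drives:

# Build a RAID1 array from the two data drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Format the new array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid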

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use Gnome’s Banshee player for the time being instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library for the time being. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the ubuntu forums for information about a few of the necessary GStreamer plugins.

For now, my main tasks include setting up my RAID array, getting my ergonomic bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s Virtual Box so that I can continue to use Microsoft Visual Studio, the only Windows application that I’ve yet to find a Linux replacement for.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.

This… looks… awesome!

December 8th, 2009 No comments

Looks being the key word there, because I haven’t actually been able to successfully run either of these seemingly awesome pieces of software.

Amahi is the name of an open source software collection, for lack of a better term, that resembles what Windows Home Server has to offer. I first came across this while listening to an episode of Going Linux (I think it was episode #85 but I can’t remember anymore!) and instantly looked it up. Here is a quick rundown of what Amahi offers for you:

  • Currently built on top of Fedora 10, but they are hoping to move it to the most recent version of Ubuntu
  • Audio streaming to various apps like iTunes and Rhythmbox over your home network
  • Media streaming to other networked appliances including the Xbox 360
  • Acts as a NAS and can even act as a professional grade DHCP server (taking over the job from your router) making things even easier
  • Built in VPN so that you can securely connect to your home network from remote locations
  • SMB and NFS file sharing for your whole network
  • Provides smart feedback of your drives and system, including things like disk space and temperature
  • Built-in Wiki so that you can easily organize yourself with your fellow co-workers, roommates or family members
  • Allows you to use the server as a place to automate backups to
  • Windows, Mac & Linux calendar integration, letting you share a single calendar with everyone on the network
  • Implements the OpenSearch protocol so that you can add the server as a search location in your favorite browser. This lets you search your server files from right within your web browser!
  • Includes an always-on BitTorrent client that lets you drop torrent files onto the server and have it download them for you
  • Supports all Linux file systems and can also read/write to FAT32 and read from NTFS.
  • Sports a plugin architecture that lets developers expand the platform in new and exciting ways
  • Inherits all of the features from Fedora 10
  • Finally Amahi offers a free DNS service so you only have to remember a web address, not your changing home IP address

FreeNAS is a similar product, although I use that term semi-loosely seeing as it is also open source, except instead of being based on Linux, FreeNAS is currently based on FreeBSD 7.2. Plans are currently in the works to fork the project and build a parallel Linux based version. Unlike Amahi, FreeNAS sticks closer to the true definition of a NAS and only includes a few additional features in the base install, letting the user truly customize it to their needs. Installed it can take up less than 64MB of disk space. It can (through extensions) include the following features:

  • SMB and NFS as well as TFTP, FTP, SSH, rsync, AFP, and UPnP
  • Media streaming support for iTunes and Xbox 360
  • BitTorrent support allowing you to centralize your torrenting
  • Built-in support for Dynamic DNS through major players like DynDNS, etc.
  • Includes full support for ZFS, UFS, ext2, ext3. Can also fully use FAT32 (just not install to), and can read from NTFS formatted drives.
  • Small enough footprint to boot from a USB drive
  • Many supported hardware and software RAID levels
  • Full disk encryption via geli

Both of these can be fully operated via a web browser interface and seem very powerful. Unfortunately I was unable to get either up and running inside of a VirtualBox environment. While I recognize that I could just install a regular Linux machine and then add most of these features myself, it is nice to see projects like these package them in for ease of use.

This is definitely something that I will be looking more closely at in the future; you know once these pesky exams are finished. In the mean time if anyone has any experience with either of these I would love to hear about it.

[UPDATE]

While publishing this, the folks over at Amahi sent out an e-mail detailing many new improvements. Turns out they released a new version now based on Fedora 12. Here are their notable improvements:

  • Amahi in the cloud! This release has support for VPS servers (Virtual Private Servers).
  • Major performance and memory improvements, providing a much faster web interface and a 30% smaller memory footprint.
  • Based on Fedora 12, with optimizations for Atom processors built-in, preliminary support in SAMBA for PDC (Primary Domain Controller) with Windows 7 clients and much more.
  • Completely revamped web-based installer.
  • Users are more easily and securely set up now, with password-protected pages and admin users.
  • Brand new architecture, with future growth in mind, supporting more types of apps and, more importantly, bringing us closer to supporting Ubuntu and other platforms. Over 100 apps are working in this release right out of the gate!

It all sounds great. I will be looking into this new version as soon as I have a moment to do so.

Today, the search engines…

November 23rd, 2009 No comments

I would just like to point out that you, the readers – who I’d like to reinforce are fantastic and have been a huge help to us (as well as making us feel good that people are making use of the site!) – have catapulted us to previously unknown heights in the world of Canadian search engine fame!

The big three search engines with Canadian domains – Google, Bing, and Yahoo – have all launched us up to top-shelf status on their search pages with a search string of ‘The Linux Experiment’:

Bing.ca – first search result (yay!)

Yahoo.ca – first search result (double yay!)

Google.ca – second search result (darn you, PC World)

Let’s collectively step it up and get us to the top of the Google search charts.  With a scant 38 days left in the Experiment, time is quickly running out!

Today, the search engines… tomorrow, the (PC) world!

Twelve to twelve

November 5th, 2009 3 comments

Well, it’s official – twelve more days remain until the November 17 release of Fedora 12 (Constantine).  I, for one, can hardly wait – Fedora 11 has been rock-solid for me so far (under Gnome, anyways – but I’ll leave that subject alone) and I can only imagine that Fedora 12 is going to bring more of the same my way.

Among some of the more notable changes being made that caught my interest:

  • Gnome 2.28 – the current version bundled into my Fedora 11 distribution, 2.26.3, has been nothing but amazing.  Unflinchingly stable, fast, and reliable – it’s everything I want in a desktop environment.
  • Better webcam support – not sure how this can get any better from my perspective since my LG P300’s built-in webcam worked straight out of the box on Fedora 11, but I’m interested to see exactly what they bring to the table here
  • Better IPv6 support – since our router does actively support this protocol, it’s nice to see Fedora taking charge and always improving the standard
  • Better power management – for me, this is a major headache under Gnome (I know, I know…) since it really doesn’t let me customize anything as much as I would like to.   Among other things, it’s supposed to offer better support for wake-from-disk and wake-from-RAM.  We’ll see.

I’m sure that Tyler and I will keep you posted as the due date gets closer, and especially once we’ve done the upgrade itself!

Interesting Linux article

October 26th, 2009 4 comments

I stumbled across a very interesting post linked off of Digg, which I browse on a fairly regular basis.  In it, the author attempts to put to rest some of the more common (and, for the most part, completely inaccurate) stories that revolve around various Linux distributions.

Though I think Jake B might have something to say about the first point on the list, it made for interesting reading at the very least – and for the most part, I agree with the author wholeheartedly.  Link after the jump!

Debunking Some Linux Myths

Categories: Dana H, Free Software, Hardware, Linux Tags:

Well shit, that was easy

October 12th, 2009 1 comment

One of my big griefs with Mint was that the sound was far too quiet. I assumed this was some sort of hardware compatibility issue. Apparently it’s not, and it’s really easy to fix. Essentially, the default “front speaker” volume is not at the max level. While fixing this has given me a great max volume, my latest problem is getting Mint to increment/decrement the volume properly – the master volume is essentially muted at 70%. That being said, I’m glad I can finally watch online videos from my laptop without needing headphones or a soundproof room.

Categories: Hardware, Linux Mint, Sasha D Tags: