Archive for the ‘Hardware’ Category

Distro hopping: Import music stored on NAS into Music

September 19th, 2015

So you’re running elementary OS and want to access the music files you have stored on a network-attached storage (NAS) device from within the Music program. Unfortunately, while you can easily browse the network and find these files in the file manager, you can’t do so within Music. Luckily there is a solution to this problem! Borrowing heavily from a previous post, this will walk you through how to set up a persistent media folder on your computer that ‘points’ to the music directory on your NAS.

Step 1) Open up a terminal

Now wasn’t that easy?

Step 2) Install the required software

For the purpose of this post I’m going to assume the NAS is presenting a Windows file share so we’ll need the software to be able to make use of it. Simply run the following command to install the needed software:

sudo apt-get install cifs-utils
Installing some software!

Step 3) Create a location for where you want the media to appear

If this is just going to be used by your user account you can simply create a new folder in your home directory – for example a new folder under the Music folder called “NAS”. However, if you want multiple users to be able to access it, you’ll want to put it somewhere else (for example /media/NAS).

For my example I’m just going to put it under a new NAS folder inside of my Music folder
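
From the terminal you already have open, either location can be created with a quick mkdir (these paths just match the examples above):

mkdir -p ~/Music/NAS
sudo mkdir -p /media/NAS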

Step 4) Edit the fstab file and add the share(s) so that they auto connect on startup

So basically there is a file on your computer called fstab that contains information about all of the hard drives and mounts that the computer should create on boot. To make it so our new NAS folder points to the actual NAS directory we’re going to add a new line to this file telling our computer to do just that. Start by using your terminal and opening that file in an editor. You can use a terminal editor like nano or even a graphical one like Scratch.

To use the terminal editor nano run the following command:

sudo nano /etc/fstab
fstab open in nano

To use the graphical editor Scratch run the following command:

sudo scratch-text-editor /etc/fstab
fstab open in Scratch

On a new line add the following (modifying it according to your system). Note that this should be a single line even though it may appear broken up over multiple lines here:

//<path to server>/<share name>  <path to local directory>  cifs  guest,uid=<user id to mount files as>,iocharset=utf8  0  0

Breaking it down a little bit:

  • <path to server>: This is the network name or IP address of the computer hosting the share (in my case the NAS). For example it could be an IP address or a name like “MyNas”.
  • <share name>: This is the name of the share on that computer. For example I set up my NAS to share different directories, one of which was called “Files”.
  • <path to local directory>: This is where you want the remote files to appear locally. For example if you want them to appear in a folder under your Music directory you could do something like “/home/tyler/Music/NAS”. Just make sure that the directory exists (that’s why we created it above :)).
  • <user id to mount files as>: This defines the permissions to give the files. On elementary OS (as well as other Ubuntu-based distributions) the first user you create is usually given uid 1000, so you could put “1000” here. To find out the uid of any user, run the command “id <user>” in the terminal (without quotes).

As an example, the line I added for my configuration was:

//  /home/tyler/Music/NAS  cifs  guest,uid=1000,iocharset=utf8  0  0

Now save the file.

Step 5) Test that it worked

Finally, in the terminal we’re going to run a command to actually test it:

sudo mount -a

This will do essentially the same thing that happens when your computer first boots, so if it works now it should work the next time you restart as well. If you don’t get any errors then congratulations, it should have all worked! You can verify by opening up your NAS folder and confirming that it shows the contents of your actual NAS directory.

We have music!

Step 6) Import the music into Music

Now that we have the NAS music showing up in a local folder the Music application will be able to add it no problem. Simply open up Music and use the import option to import the music from your folder (in my case ~/Music/NAS).




How to set a static IP address on Ubuntu 14.04 server (and others)

September 16th, 2014

This assumes you want to set a static IP address on the network device eth0.

Open up the interfaces file

sudo nano /etc/network/interfaces

and remove or comment out the line that says

iface eth0 inet dhcp

then add the following lines in its place:

iface eth0 inet static
address [static IP address, i.e.]
netmask [i.e.]
network [i.e.]
broadcast [i.e.]
gateway [i.e.]
dns-nameservers [i.e.]

Save the file and reboot the server. On some systems you may also need to update /etc/resolv.conf and /etc/hosts.
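
As a purely hypothetical example, on a common 192.168.1.x home network (substitute your own addresses) the finished section might look like this:

iface eth0 inet static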


Extend the life of your SSD on linux

February 9th, 2014

This past year I purchased a laptop that came with two drives, a small 24GB SSD and a larger 1TB HDD. My configuration has placed the root filesystem (i.e. /) on the SSD and my home directory (i.e. /home) on the HDD so that I benefit from very fast system booting and application loading but still have loads of space for my personal files. The only downside to this configuration is that linux is sometimes not the best at ensuring your SSD lives a long life.

Unlike HDDs, SSDs have a finite number of write operations before they are guaranteed to fail (although you could argue HDDs aren’t all that great either…). Quite a few linux distributions have not yet been updated to detect and configure SSDs in such a way as to extend their life. Luckily for us it isn’t all that difficult to make the changes ourselves.

Change #1 – noatime

The first change that I make is to configure the system so that it no longer updates each file’s access time on the SSD partition. By default Linux records when files were created and last modified, as well as when each file was last accessed. There is a cost associated with recording the last access time; mounting with the noatime option not only significantly reduces the number of writes to the drive but can also give you a slight performance improvement. Note that if you care about access times (for example if you like to perform filesystem audits or something like that) then obviously disabling this may not be an option for you.

Open /etc/fstab as root. For example I used nano so I ran:

sudo nano /etc/fstab

Find the SSD partition(s) (remember mine is just the root, /, partition) and add noatime to the mounting options:

UUID=<some hex string> /               ext4    noatime,errors=remount-ro

Change #2 – discard

UPDATE: Starting with 14.04 you no longer need to add discard to your fstab file. It is now handled automatically for you through a different system mechanism.

TRIM is a technology that allows a filesystem to immediately notify the SSD when a file is deleted so that it can more efficiently manage the underlying storage and improve the lifespan of the drive. Not all filesystems support TRIM but if you are like most people and use ext4 then you can safely enable this feature. Note that some people have actually had drastic write performance decreases when enabling this option but personally I’d rather have that than a dead drive.

To enable TRIM support start by again opening /etc/fstab as root and find the SSD partition(s). This time add discard to the mounting options:

UUID=<some hex string> /               ext4    noatime,errors=remount-ro,discard

Change #3 – tmpfs

If you have enough RAM you can also dedicate some of it to mounting specific partitions via tmpfs. Tmpfs essentially makes a fake hard drive, known as a RAM disk, that exists only in your computer’s RAM while it is running. You could use this to store commonly written-to temporary filesystems like /tmp or log file locations such as /var/log.

This has a number of consequences. For one, anything that gets written to tmpfs will be gone the second you restart or turn the computer off – it never gets written back to a real hard drive. This means that while you can save your SSD all of those log file writes, you also won’t be able to debug a problem using those log files after a crash or the like. Also, being a RAM disk means that it will slowly(?) eat up your RAM, growing larger and larger the more you write to it between restarts. There are options for putting limits on how large a tmpfs partition can grow; there’s an example with a size cap below.

To set this up open /etc/fstab as root. This time add new tmpfs lines using the following format:

tmpfs   /tmp    tmpfs   defaults  0       0

You can lock it down even more by adding some additional options like noexec (disallows execution of binaries on the filesystem) and nosuid (blocks the operation of suid and sgid bits). Some other locations you may consider adding are /var/log, /var/cache/apt, etc. Please read up on each of these before applying them as YMMV.
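
As a sketch of how that might look, here are the lines extended with those hardening options plus an arbitrary size cap (note that some services expect subdirectories under /var/log to exist, so test before relying on this):

tmpfs   /tmp       tmpfs   defaults,noexec,nosuid,size=512M   0   0
tmpfs   /var/log   tmpfs   defaults,noatime,size=256M         0   0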


A tale of a gillion installs

January 21st, 2014

Install number one: LMDE 201303.  I was hoping for the best of both worlds, but I got driver issues instead.  LMDE has known ATI proprietary driver install issues.  I followed the Mint instructions and got it working, then got a blank screen after too much tinkering.  I was surprised that LMDE had this problem since Debian doesn’t, and LMDE should be a more polished version of Debian.  This wasn’t a big deal, but I decided to give Debian a chance.

Install number two: debian stable (7.3).  The debian website has a convoluted maze of installation links, but it’s still fairly easy to find an ISO for the stable version you need.  I installed from the live ISO using a USB key.  The installation and ATI driver update went smoothly, and I thought all was well at first.  I soon realized that about 50% of reboots failed; the audio driver was the culprit.  I installed the latest driver from Realtek/ALSA and it sort of worked, but I was still getting some crap from # dmesg and the audio would crackle with some files.

LMDE.  I live booted LMDE to see if the same issue existed there and it did.

Time for Mint 16.  As expected everything worked.  Man I really wish Ubuntu hadn’t chosen the dark side – their OS is really good.  All of these distros use ALSA audio drivers, so why is Ubuntu the only one that works?   Kernel versions:

debian stable (7.3):
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.24.
Mint 16:
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version k3.11.0-12-generic.

One more thing to check.  What kernel version is the real debian testing “jessie” using:

LMDE 201303 = 3.2
debian stable 7.3 = 3.2
Mint 16 = 3.11
debian testing “jessie - Jan 2014” = 3.12!

I was determined to try debian testing before settling for Mint.  I tried a netinstall from USB key which killed my PC and grub bootloader.  The debian stable live ISO USB key decided to stop working as well.  I finally got a real DVD debian stable install to work, changed the repositories to point to “jessie” and upgraded.  I was very surprised to see this worked!  I’m having some problems with bash, but all of my day-to-day software is up and running.  Nice.

TL;DR: LMDE was using an old kernel so I needed the real debian testing (jessie) to solve my driver problems.

Screen brightness work around (part 2)

January 19th, 2014

As mentioned before I am having some issues with my laptop’s hardware and controlling the screen brightness. Previously my work around was to set acpi_backlight=vendor in the grub command line options. While this resulted in having full screen brightness it also removed my ability to use my keyboard function keys to adjust the screen brightness on the fly (not so good when you’re on battery). Removing this option allowed me to manually adjust my screen brightness again but the laptop once again always started at zero brightness. What to do?

While far from a perfect solution my current work around is to use xdotool to simulate key presses on login which raise the screen brightness for me automatically. Here is the script that I run on startup:

for i in {1..20}; do
     xdotool key XF86MonBrightnessUp
done

While this works great it still isn’t perfect. Because xdotool requires an X session it means I cannot run it before one is created. If you were unaware the login screen, in my case MDM, does not run inside of X (it actually starts X when you successfully login). So while this will automatically brighten my screen it won’t do so until I type in my username and password, leaving me to type into a fully dark screen or manually adjust the brightness up enough to see what I’m doing. Hopefully I’ll have a better solution sooner rather than later…

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Fix no screen brightness on boot problem

October 14th, 2013

I recently upgraded my laptop to a brand new Lenovo Y410P and promptly replaced Windows 8 with a Linux install. Unfortunately I immediately ran into a very strange driver(?) issue where, on boot, the computer would default to the absolute lowest screen brightness level. This meant that I would need to manually adjust the screen brightness up just to see the login screen. Thankfully after some help from the excellent people over on the Ubuntu Forums I managed to find a very easy work around.

1) As root open up /etc/default/grub

I did this by simply issuing the following command:

sudo nano /etc/default/grub

2) Find the line that says GRUB_CMDLINE_LINUX= and add “acpi_backlight=vendor” to the list of options.
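
For example, if the line was previously empty it would end up looking like this:

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"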

3) From a terminal run this command to update GRUB

sudo update-grub

4) Reboot!

That’s pretty much it. My computer now boots with the correct screen brightness as one would expect.


And I thought this would be easy…

September 22nd, 2013

Some of you may remember my earlier post about contemplating an upgrade from Windows Home Server (Version 1) to a Linux alternative. Since then, I have decided the following:

Amahi isn’t worth my time


This conclusion was reached after a fruitless install of the latest Amahi 7 release on the 500 GB ‘system’ drive included with the EX470. After backing up the Windows Home Server to a single external 2 TB drive (talk about nerve-wracking!), I popped the drive into a spare PC and installed Amahi with the default options.


No, I’m not 13. Yes, this image accurately reflects my frustrations.

Moving the drive back into the EX470 yielded precisely zero results, no matter what I tried – the machine would not respond to a ‘ping’ command, and since I’ve opted to try and do this without a debug board, I don’t even have VGA to tell me what the hell is going on. So, that’s it for Amahi.

When all else fails, Ubuntu


After deciding that I really didn’t feel like a repeat of my earlier Fedora experiment, I decided to try out the Linux ‘Old Faithful’ as it were – Ubuntu 12.04 LTS. I opted for the LTS version due to – well, you know – the ‘long-term support’ deal.

Oh, and I upgraded my storage (new 1 TB system drive not shown, and I apologize for the potato-quality image):


The only kind of ‘TB’ I like. Not tuberculosis.


Following from the earlier Amahi instructions, I popped the primary 1 TB drive into a spare machine and allowed the Ubuntu installer to do its thing. Easy enough! From there, I installed the following two additional items (having to add an additional repository for the latter):

  • Openssh-Server

This allows me to easily control the machine through SSH, and – as I understand it – is pretty much a must for someone wanting to control a headless box. Setup was easy-breezy, in that it required nothing at all.

  • Greyhole

For those unfamiliar, Greyhole is – in their own words – an ‘Easily expandable and redundant storage pool for home servers’. One of my favourite things about WHS v1 was its ‘disk pooling’ capability – essentially a JBOD with software-managed share duplication, ensuring that each selected share was copied over to one other disk in the array.
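
For reference, the whole install boils down to something like this sketch (the Greyhole repository details change over time, so I’ve left that part as a comment – their site has the current instructions):

sudo apt-get install openssh-server
# add the Greyhole repository to /etc/apt/sources.list as per greyhole.net, then:
sudo apt-get update
sudo apt-get install greyhole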

After those were done with, I popped the drive into the EX470, and – lo and behold! – I was able to SSH in.


This? This is what relatively minor success looks like.

So at this point, I’m feeling relatively confident. I shut down the server (don’t forget -h!) over SSH, popped in the first of the three 3 TB drives, and…

…nothing. Nada. Zip. Zilch. The server happily blinks away like a small puppy wags its tail, excited to see its owner but clearly bereft of purpose when left to its own devices. I can’t ping it, I can’t… well, that’s really it. I can’t ping it, so there’s nothing I can do. Looking to see if GRUB was stuck at the menu, I stuck in a USB keyboard and hit ‘Enter’ to no effect. Yes, my troubleshooting skills are that good.

My next step was to pop both the 1 TB and 3 TB drives into the ‘spare’ machine; this ran fine. Running lshw -short -c disk shows a 1 TB and 3 TB drive without issue. I also ran these parted commands:

mklabel gpt
mkpart primary -1 1

(I think that last command is right.) So, all set, right? Cool. Pop the drive back in to the EX470, and…

STILL NOTHING. At this point, I’m ready to go pick up a new four-bay NAS, but I feel like that may be overkill. If anyone has any recommendations on how to get the stupid thing to boot with a 3 TB drive, I’m open to suggestions.


Finding a replacement for Windows Home Server

July 29th, 2013

Hello, everyone! It’s great to be back in the hot seat for this, our third installment of The Linux Experiment. I know that last time I caused a bit of a stir with my KDE-bashing post, so I will try to keep it relatively PG this time around.

Not many people know about it or have used it, but – through an employee purchase program about five years ago – I was able to get my hands on the HP EX470 MediaSmart Home Server. What manner of witchcraft is this particular device, you may ask? Here’s a photo preview:


It really is about as simple as it looks. The EX470 (stock) came equipped with a 500 GB drive, pre-loaded with Windows Home Server – which in turn was built on Windows Server 2003. 512 MB of RAM and an AMD Sempron 3400+ rounded it off; the device is completely headless, meaning that no monitor hookup is possible without a debug cable. The server also comes with four(?) USB ports, eSATA, and gigabit ethernet.

My current configuration is 3 x 1 TB drives, plus the original 500 GB, and an upgraded 2 GB DIMM. One of the things I’ve always loved about Windows Home Server is its ‘folder duplication’. Not merely content to RAID the drives together, Microsoft cooked up an idea to have each folder able to duplicate itself over to another drive in case of failure. It’s sort of like RAID 1, but without entirely-mirrored disks. Still, pretty solid redundancy.

Unfortunately for me, this feature was removed in the latest update to Windows Home Server 2011 – and support for that is even waning now, leading me to believe that patches for this OS may stop coming entirely within the next year or two. So, where does that leave me? I’m not keen to run a non-supported OS on this thing (it is internet-connected), so I’m definitely looking into alternatives.

Over the next few days, I plan to write about my upcoming ‘adventures’ in finding a suitable Linux-based alternative to Windows Home Server. Will I find one that sticks, or will I end up going with a Windows 8 Pro install? Only time will tell. Stay tuned!


Installing Bluetooth devices on Kubuntu

July 27th, 2013

This is actually a much easier process than I imagined it would be.

First: Ensure your devices (mouse, headphones, keyboard, etc…) are charged and turned on.

Next click on the “Start” menu icon in the bottom left of the desktop screen.

Then click on the “Computer” icon along the bottom, followed by System Settings.

Computer Tab

This will take you into the System Settings folder where you can change many things. Here we will select Bluetooth, since that is the type of device you want to install.

Bluetooth Menu

I took these pictures after I successfully installed my wireless USB keyboard and mouse. So you know I am not bullshitting about this process actually working.

Like most Bluetooth devices, mine have a red “Connect” button on the bottom. Ignore the sweet, sweet compulsion to press that button. I’m convinced it is nearly useless. Instead, use the “Add devices” method, as seen here.

Add Device

More awesome Photoshop.

Now, if you followed my first instruction (charge and turn on your Bluetooth Device) you should see them appear in this menu. Select the item you would like to add and click next. This will prompt you to enter a PIN on the device you wish to install (if installing a keyboard), or it will just add your device. If you have done this process successfully, your device will show up in the device menu. If it does not, you fucked up.


Dual Booting Ubuntu 13.04 and Windows 8 on a Lenovo Y400 IdeaPad

July 27th, 2013

With the third edition of The Linux Experiment already underway, I decided to get my new laptop set up with an Ubuntu partition to work with over the next few months. A little while back, I purchased this laptop with intent to use it as a gaming rig. It shipped with Windows 8, which was a serious pain in the ass to get used to. Now that I’ve dealt with that and have Steam and Origin set up on the Windows partition, it’s time to make this my primary machine and start taking advantage of the power under its hood by dual-booting an Ubuntu partition for development and experiment work.

I started my adventure by downloading an ISO of the latest release of Ubuntu – at the time of this writing, that’s 13.04. Because my new laptop has UEFI instead of BIOS, I made sure to grab the x64 version of the distribution.

Aside: If you’re using NoScript while browsing Ubuntu’s website, you’ll want to keep an eye on the address bar while navigating through the download steps. In my case, the screen that asks you to donate to the project redirected me to a different version of the ISO until I enabled JavaScript.

After using Ubuntu’s Startup Disk Creator to create a bootable USB stick, I started my first adventure – figuring out how to get the IdeaPad to boot from USB. A bit of quick googling told me that the trick was to alternately tap F10 and F12 during the boot sequence. This brought up a boot menu that allowed me to select the USB stick.

Once Ubuntu had booted off of the USB stick, I opened up GParted and went about making some space for my new operating system. The process was straightforward – I selected the largest existing partition (it also helped that it was labelled WINDOWS_OS), and split it in half. My only mistake in this process was to choose to put the new partition in front of the existing partition on the drive. Because of this, GParted had to copy all of the data on the Windows partition to a new physical location on the hard drive, a process that took about three hours.

The final partitioning scheme with my new Linux partition highlighted

With my hard drive appropriately partitioned, it was time to install the operating system. The modern Ubuntu installer pretty much takes care of everything, even going so far as selecting an appropriate space to use on the hard drive. I simply told it to install alongside the existing Windows partition, and let it take care of the details.

The installer finished its business in short order, and I restarted the machine. Ubuntu booted with no issues, but my Windows 8 partition refused to cooperate. It would seem as though something that the installer did wasn’t getting along well with UEFI/SecureBoot. Upon attempting to boot Windows, I got the following message:

error: Secure Boot forbids loading module from (hd0,gpt8)/boot/grub/x86_64-efi/ntfs.mod.
error: failure reading sector 0x0 from 'cd0'
error: no such device: 0030DA4030DA3C7A
error: can't find command 'drivemap'
error: invalid EFI file path

Press any key to continue…

Uh oh.

Like I said, I could boot Ubuntu, so I headed on over to their website and read their page on UEFI. At first glance, it seemed as though I had done everything correctly. The only place that I deviated from these instructions was in manually resizing my Windows partition to create space for my new Ubuntu partition.

Thinking that I might be experiencing troubles with my boot partition, I took a shot at running Ubuntu’s Boot-Repair utility. It seemed to do something, but upon restarting the machine, I found that I had even more problems – now a Master Boot Record wasn’t found at all:

It would appear as though I may have made things worse…

After dismissing the boot device error, I was prompted to choose which device to boot from. I chose to boot Windows’ UEFI Repair partition, and was (luckily) able to get to a desktop. Unfortunately, none of the other partitions on the device seem to work, so I’m back where I started, except that now, in addition to having to put up with Windows 8, I also have a broken master boot record.

Lenovo: 1 / Jon: 0.


What is this, text for ants? Part I

July 26th, 2013

Unlike many people who may be installing a version of Linux, I am doing so on a machine that has a projector with a 92″ screen as its main display.

So, upon initial installation of Kubuntu, I couldn’t see ANY of the text on the desktop – it was itty bitty.

Font for Ants

I can’t even read this standing inches away.

In order to fix this, I had to hook up an additional display.

Thankfully, living in a house with a computer guru, I had many to choose from.

In order to get my secondary display to appear, I had to first plug it into the display port on the machine I am using. I then had to turn off the current display (projector) and reboot the machine so that it would initialize the use of my new monitor.

Sounds easy enough, and it was, albeit with some gentle guidance from Jake B.

From here, I am able to properly configure my display.

The thing I am enjoying most about Kubuntu so far is that it is very user friendly. Each setting can be found almost intuitively in the menus.

So these are the steps I followed to change my display configuration.

I went into Menu > Computer > System Settings

Computer Tab

Check out my sweet Photoshop Skills. I may have taken this picture with a potato.

Once you get into the System Settings folder, you have the option to change a lot of things. For example, your display resolution.

System Settings

Looks a lot like the OSX System Preferences layout.

Now that you are in this menu, you will want to select Display and Monitor from the options. Here you can set your resolution, monitor priority, mirroring, and multiple displays. Since I will only be using this display on the Projector, I ensured that the resolution was set so that I could read the text properly on the Projector Screen. Before disabling my secondary monitor, I also set up my Bluetooth keyboard and mouse, which I will talk about in another post.

This process only took a few moments. I will still have to tweak the font scaling, as I have shit-tastic eyesight.

Make printing easy with the Samsung Unified Linux Driver Repository

July 13th, 2013

I recently picked up a cheap Samsung laser printer and decided to give the Samsung Unified Linux Driver Repository a shot while installing it. Basically the SULDR is a repository you add to your /etc/apt/sources.list file, which allows you to install one of their driver management applications. Once that is installed, any time you go to hook up a new printer the management application automatically searches the repository, full of the official Samsung printer drivers, finds the correct one for you and installs it. Needless to say I didn’t have any problems getting this printer to work on linux!
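
For the curious, the setup amounts to appending their repository line to /etc/apt/sources.list – this is the line as I remember it, so double-check the SULDR site for the current one:

deb http://www.bchemnet.com/suldr/ debian extra

followed by the usual sudo apt-get update before hooking up the printer.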


Changing ATI power profile to low

April 6th, 2013

My laptop’s graphics card has never had the best support on linux and has now approached the point in its life where even ATI has stopped supporting it with new driver releases. On one hand I’m thankful that the open source driver performs well enough that I can continue to use this hardware, on the other though it does result in some downright awful power management. With the default settings my graphics card runs extremely hot and requires the fan to be on constantly. Luckily there is a quick way to fix this and tell the open source driver to run my card in a low power state at all times.

  1. Start a root terminal (or use sudo for everything)
  2. Set the card to use the power profile (assuming your computer uses card0)

    echo profile > /sys/class/drm/card0/device/power_method

  3. Set the power profile to “low” setting

    echo low > /sys/class/drm/card0/device/power_profile

You can check what the current setting is by running the following command:

cat /sys/class/drm/card0/device/power_profile

I would also highly recommend rebooting and then checking the setting again. I found that on my laptop the setting was being reset every time the computer turned on. If this happens to you try my work around – simply edit /etc/rc.local and add the line in step 3 before the exit 0. My file looks like:

#!/bin/sh -e

echo low > /sys/class/drm/card0/device/power_profile

exit 0


Querying the State of a Hardware WiFi Switch with RF-Kill

October 8th, 2012

The laptop that I’m writing this post from has a really annoying strip of touch-response buttons above the keyboard that control things like volume and whether or not the wifi card is on. By touch-response, I mean that the buttons don’t require a finger press, but rather just a touch of the finger. As such, they provide no haptic feedback, so it’s hard to tell whether or not they work except by surveying the results of your efforts in the operating system.

The WiFi button in particular has got to be the worst of these buttons. On Windows, it glows a lovely blue colour when activated, and an angry red colour when disabled. This directly maps to whether or not my physical wireless network interface is enabled or disabled, and is a helpful indicator. Under Linux Mint 12 however, the “button” is always red, which makes it a less than helpful way to diagnose the occasional network drop.

Lately, I’ve been having trouble getting the wifi to reconnect after one of these drops. To troubleshoot, I would open up the Network Settings panel in Mint, which looks something like this:

Mint 12's Wireless Network Configuration Panel

The only problem with this window is that the ON/OFF slider that controls the state of the network interface refused to work. If I dragged it to the ON position, it would just bounce back to OFF without changing the actual state of the card.

In the past, this behaviour has really frustrated me, driving me so far as to reboot the machine in Windows, re-activate the physical interface, and then switch back to Mint to continue doing whatever it was that I was doing in the first place. Tonight, I decided to investigate.

I started out with my old friend iwconfig:

jonf@jonf-mint ~ $ sudo iwconfig
lo        no wireless extensions.

eth0      no wireless extensions.

wlan0     IEEE 802.11abgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated   Tx-Power=off
          Retry  long limit:7   RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off

As you can see, the wireless interface is listed, but it appears to be powered off. I was able to confirm this by issuing the iwlist command, which is supposed to spit out a list of nearby wireless networks:

jonf@jonf-mint ~ $ sudo iwlist wlan0 scanning
wlan0     Interface doesn't support scanning : Network is down

Again, you can see that the interface is not reacting as one might expect it to. Next, I attempted to enable the interface using the ifconfig command:

jonf@jonf-mint ~ $ sudo ifconfig wlan0 up
SIOCSIFFLAGS: Operation not possible due to RF-kill

Ah-ha! A clue! Apparently, something called rfkill was preventing the interface from coming online. It turns out that rfkill is a handy little tool that allows you to query the state of the hardware buttons (and other physical interfaces) on your machine. You can see a list of all of these interfaces by issuing the command rfkill list:

jonf@jonf-mint ~ $ rfkill list
0: phy0: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
1: hp-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
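
(As an aside, rfkill can also clear soft blocks itself – sudo rfkill unblock all – but a hard block like the one shown above can only be toggled by the physical switch.)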

Interestingly enough, it looks like my wireless interface has been turned off by a hardware switch, which is what I had suspected all along. The next thing that I tried was the rfkill event command, which tails the list of hardware interface events. Using this tool, you can see the effect of pressing the physical switches and buttons on the chassis of your machine:

jonf@jonf-mint ~ $ rfkill event
1349740501.558614: idx 0 type 1 op 2 soft 0 hard 0
1349740505.153269: idx 0 type 1 op 2 soft 0 hard 1
1349740505.354608: idx 1 type 1 op 2 soft 0 hard 1
1349740511.030642: idx 1 type 1 op 2 soft 0 hard 0
1349740515.558615: idx 0 type 1 op 2 soft 0 hard 0

Each of the lines that the tool spits out shows a single event. In my case, it shows the button that controls the wireless interface switching the hard block setting (physical on/off) from 0 to 1 and back.

After watching this output while pressing the button a few times, I realized that the button does actually work, but that when the interface is turned on, it can take upwards of 5 seconds for the machine to notice it, connect to my home wireless, and get an ip address via DHCP. In the intervening time, I had typically become frustrated and pressed the button a few more times, trying to get it to do something. Instead, I now know that I have to press the button exactly once and then wait for it to take effect.

I stand by the fact that this is a piss-poor design, but hey, what do I know? I’m not a UX engineer for HP. At least it’s working again, and I am reconnected to my sweet sweet internet.


Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

t=`date --date "17:00" +%s`
sudo /bin/true
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
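
As a hypothetical example of that scheduling, a line like the following in root’s crontab (sudo crontab -e) would fire the script at midnight every day – the path is made up, so use wherever you saved yours:

0 0 * * * /root/scripts/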

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.


How to test hard drive for errors in Linux

May 21st, 2012

I recently re-built an older PC from a laundry list of Frankenstein parts. However before installing anything to the hard drive I found I wanted to check it for physical errors and problems as I couldn’t remember why I wasn’t using this particular drive in any of my other systems.

From an Ubuntu 12.04 live CD I used GParted to delete the old partition on the drive. This let me start from a clean slate. After the drive had absolutely nothing on it I went searching for an easy way to test the drive for errors. I stumbled across this excellent article and began using badblocks to scan the drive. Basically what this program does is write to every spot on the drive and then read it back to ensure that it still holds the data that was just written.

Here is the command I used. NOTE: This command is destructive and will damage the data on the hard drive. DO NOT use this if you want to keep the data that is already on the drive. Please see the above linked article for more information.

badblocks -b 4096 -p 4 -c 16384 -w -s /dev/sda

What does it all mean?

  • -b sets the block size to use. Most drives these days use 4096 byte blocks.
  • -p sets the number of passes to use on the drive. When I used the option -p 4 above it means that it will write/read from each block on the drive 4 times looking for errors. If it makes it through 4 passes without finding new errors then it will consider the process done.
  • -c sets the number of blocks to test at a time. This can help to speed up the process but will also use more RAM.
  • -w turns on write mode. This tells badblocks to do a write test as well.
  • -s turns on progress showing. This lets you know how far the program has gotten testing the drive.
  • /dev/sda is just the path to the drive I’m scanning. Your path may be different.
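
(If the drive holds data you care about, badblocks also offers a non-destructive read-write mode – swap -w for -n – though it runs considerably slower.)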


It should not be this hard to change my volume

December 22nd, 2011

Normally my laptop is on my desk at home plugged into a sound system, so I never have to change the volume. However I’m currently on holiday, so that means I’m carrying my laptop around. Last night, I had the audacity to lower the volume on my machine. After all, nobody wants to wake up their family at 2am with “The history of the USSR set to Tetris.flv”. Using the media keys on my laptop did nothing. Lowering the sound in KMix did nothing. Muting in KMix did nothing. I figured that something had gone wrong with KMix and maybe I should re-open it. Well, it turns out that was a big goddamn mistake, because that resulted in me having no sound.

It took about 30 minutes to figure out, but the solution ended up being unmuting my headphone channel in alsamixer. It looks like for whatever reason, alsamixer and KMix were set to different master channels (headphone/speaker and HDMI, respectively), thus giving KMix (and my media keys) no actual control over volume.
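
(For anyone hitting the same thing: run alsamixer in a terminal, press F6 to pick the correct sound card, arrow over to the muted channel – MM under the volume bar means muted – and press M to unmute it.)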


Reinstalling LFS soon: it’s not my fault, I swear!

November 17th, 2011

I went to play around with my Linux from Scratch installation after getting a working version of KDE 4.7.3 up and running. For a few days now my system has been up, standing up to light web browsing and SSH shenanigans, and hasn’t even dropped a remote connection.

This was until this evening, when I decided to reboot to try and fix a number of init scripts that were throwing some terrible error about problems in lsb_base under /lib/ somewhere. The system came back up properly, but when I startx'd, I was missing borders for most of my windows. Appearance Preferences under KDE wouldn’t even launch, claiming a segmentation fault.

There were no logs available to easily peruse, but after a few false starts I decided to check the filesystem with fsck from a bootable Ubuntu 11.04 USB stick. The results were not pretty:

root@ubuntu:~# fsck -a /dev/sdb3
fsck from util-linux-ng 2.17.2
/dev/sdb3 contains a file system with errors, check forced.
/dev/sdb3: Inode 1466546 has illegal block(s).

/dev/sdb3: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)

Running fsck without the -a option forced me into a nasty scenario, where like a certain Homer Simpson working from his home office, I repeatedly had to press “Y”:

At the end of it, I’d run through the terminal’s entire scroll buffer and continued to get errors like:

Inode 7060472 (/src/kde-workspace-4.7.3/kdm/kcm/main.cpp) has invalid mode (06400).
Clear? yes

i_file_acl for inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) is 33554432, should be zero.
Clear? yes

Inode 7060473 (/src/kde-workspace-4.7.3/kdm/kcm/kdm-dlg.cpp) has invalid mode (00).
Clear? yes

i_file_acl for inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) is 3835562035, should be zero.
Clear? yes

Inode 7060474 (/src/kde-workspace-4.7.3/kdm/kcm/CMakeLists.txt) has invalid mode (0167010).
Clear? yes

I actually gave up after seeing several thousand of these inodes experiencing problems (later I learned that fsck -y will automatically answer yes, which means I’ve improved my productivity several thousand times!)

I was pretty quick to assess the problem: the OCZ Vertex solid state drive where I’d installed Linux has been silently corrupting data as I’ve written to it. Most of the problem sectors are in my source directories, but a few happened to be in my KDE installation on disk. This caused oddities such as power management not loading and the absence of window borders.

So what goes on from here? I plan to replace the OCZ drive under warranty and rebuild LFS on my spinning disk drive, but this time I’ll take my own advice and start building from this LiveUSB Ubuntu install, with an up-to-date kernel and where .tar.xz files are recognized. Onward goes the adventure!


Great Success!

November 1st, 2011

Just a quick note tonight – I finally managed to get a bootable Gentoo system installed!

After my last post, things were looking pretty grim. Instead of continuing to perpetuate the recompile/reboot cycle, I decided to start fresh, in hopes that I had simply missed a step the first time around. With this in mind, I started back at page one of the Gentoo Handbook and worked my way through the entire thing.

When it came time to compile my kernel, I opted for a slightly less error-prone method, and started off by installing Genkernel, a tool that automates some of the kernel creation steps. When running it however, I was sure to pass the --menuconfig parameter, which gave me full control over what modules were included in the final product.

Next, I followed the kernel tutorials in the Gentoo Handbook and on the Gentoo Wiki Asus P5Q-E page. This ensured that I included every component that was necessary for my system.

Once I rebooted the machine, a login prompt came up the first time. Great success indeed!

One little gotcha that’s important to note at this step. On my first login, I didn’t have any network access. Two things that might help:

  1. Open up /etc/conf.d/net in nano and add a line like config_eth0="dhcp" for each network interface in your machine, where eth0 is the name of the interface. This tells the machine to use DHCP when initializing the device. On most home networks, this will get you an IP address.
  2. Make sure that any required modules are loaded. I have two network interfaces. One uses the sky2 module, and the other uses skge. You can check to ensure that these are loaded with the command lsmod | grep sky2 where sky2 is the name of the module that you’re looking for. If it isn’t loaded, run modprobe sky2 to get it up and running. Note that you may need to recompile your kernel with support for the module in question if you missed it first time ’round.

Tomorrow, I’ll compile an X11 server, and hopefully get started on the GNOME desktop environment. Christ there’s still a lot to do…


Bye Bye Bodhi

November 1st, 2011

Ah Linux

One website lists ten reasons to use linux, my favourites of which are “Linux is easier to use than Windows” and “Linux is fun.” It is day three of the experiment and so far I haven’t installed Linux but I have taken a Dell Vostro 3350 apart about five times. I borrowed this laptop off a fellow comrade in this experiment, Jake B, as I will be sending my own netbook home this coming December.

Starting off I aimed to install both VectorLinux and Bodhi to compare them. I consider myself a relatively light computer user outside of the office and so comparing two different distributions would give me something to talk about. Alas this choice has come back to bite me in the…

I used unetbootin to begin with, on a USB key that was confirmed to be working. I then put Vector on the USB key and it brought up half a blue screen with the top of the vector logo just appearing above the black lower half of the display. After a couple of tries I figured it was corrupt files or a bad ISO so I reformatted the USB in order to try Bodhi instead. Unfortunately I didn’t even get a logo this time. Next I burned a CD of Vector and got as far as the ‘find installation media’ screen but no matter how many refreshes or reloads I did it apparently couldn’t find the CD-ROM or configuration files.

From previously experiencing installers fail to find hard drives and USB keys because of the type of hard drive setting in the BIOS, I changed it from AHCI to ATA and lo and behold finally some success. I managed to get the Vector installer to write partitions to the disk (using the CD at this point) after choosing the add-on applications I wanted to install. Again this failed so I tried once more with the USB key. This failed the same way except it said that it could not find live media. I even tried using the USB key and the CD together at the same time with no luck.

Switching between Bodhi and Vector in order to try and get a complete install and many, many CDs later I temporarily gave up. I downloaded a new distribution called Sabayon, a Gentoo-based distro with the Enlightenment desktop environment, but alas I kept getting the same errors. I even tried Ubuntu 10.04 and Linux Mint and neither of them could write to the disk.

Figuring it was a hard drive issue I took out the hard drive from the laptop and mounted it in an enclosure. After a quick reformat, which removed a random 500MB LVM partition that I believed to be corrupt, I put it back in the machine. Still no luck.

The errors I kept getting included disk, I/O, live media, cannot find CD-ROM, no usable media, no config file and a couple of others. Each time I tried installing it would fail at different sections of the install and the error would be different with each media used. Among all of the errors I’ve seen the main one seems to be “(initramfs) unable to find a medium containing a live filesystem”.

On a whim I decided to test for any other hardware errors by running diagnostics from the BIOS. No errors found. I even dug out my ancient XP Professional disc, and after a couple of BIOS changes and a couple of Blue Screens – that were my fault because I had changed the hard drive out so much – I got XP to successfully load, install, and commit changes to the hard drive.

Turning to Google, and with the help of a more advanced Linux Experiment comrade, I retried installing Linux by adding some commands to the installer boot options. Still no luck.

After more Googling I have found that there are a few possible reasons that this could be happening. I have read that it could be caused by the USB3 ports interfering with the bootable media or that it could be related to a CD-ROM master/slave setting. Either way, I still haven’t figured it out and I’m not willing to break someone else’s computer just to see if I can overcome this frustrating first experience with Linux. My next task is to try some ACPI hacks and, after finding this useful link, try to install the latest version of Ubuntu which seems to be compatible with the hardware of this machine. But for now it’s…

Windows 1 Linux 0

Men using Linux 1 Women using linux 0
