
Archive for the ‘God Damnit Linux’ Category

Blackberry Sync Attempt #3: Compiling from Source

October 5th, 2009 7 comments

After my first two attempts at getting my Blackberry to sync with Mozilla Thunderbird, I got pissed off and went right to the source of my problems. I emailed the developer of the opensync-plugin-mozilla package that (allegedly) allows Thunderbird to play nicely with OpenSync, and gave him the what for, (politely) asking what I should do. He suggested that I follow the updated installation instructions for checking out and compiling the latest version of his plugin from scratch instead of using the older, precompiled versions that are no longer supported.

I set to it, first removing all of the packages that I had installed during my last two attempts, excluding Barry, as I had already built and installed the latest version of its libraries. Everything else, including OpenSync and all of its plugins, went, and I started from scratch. Luckily, the instructions were easy to follow, although they recommended that I get the latest versions of some libraries by adding Debian’s sid repositories to my sources list. This resulted in me shitting my pants later in the day, when I saw 642 available updates for my system in Synaptic. I figured out what was going on pretty quickly and disabled updates from sid, without ruining my system. If there’s one thing that Windows has taught me over the years, it is to never set a machine to auto-install updates.
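For reference, temporarily enabling sid is a one-line affair in /etc/apt/sources.list (the mirror URL below may differ from yours); the trick is remembering to comment it back out before Synaptic gets any ideas:

# /etc/apt/sources.list - temporarily enable sid to grab newer libraries
deb http://ftp.debian.org/debian sid main

# when you have what you need, comment the line back out and run:
# apt-get update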

Once I had the source code and dependency libraries, the install was a snap. The plugin source came with a utils directory full of easy-to-use scripts that automated most of the process. With everything going swimmingly, I was jarred out of my good mood by a nasty error that occurred when I ran the build-install-opensync.sh script:

CMake Error at cmake/modules/FindPkgConfig.cmake:357 (message):
None of the required 'libopensync1;>=0.39' found
Call Stack (most recent call first):
cmake/modules/FindOpenSync.cmake:27 (PKG_SEARCH_MODULE)
CMakeLists.txt:15 (FIND_PACKAGE)

CMake Error at cmake/modules/FindOpenSync.cmake:46 (MESSAGE):
OpenSync cmake modules not found.  Have you installed opensync core or did
you set your PKG_CONFIG_PATH if installing in a non system directory ?
Call Stack (most recent call first):
CMakeLists.txt:15 (FIND_PACKAGE)

It turns out that the plugin requires OpenSync v0.39 or greater to be installed to work. Of course, the latest version of same in either the Debian main or lenny-backports repositories is v0.22-2. This well-aged philosophy of the Debian Stable build has irked me a couple of times now, and I fully intend to update my system to the testing repositories before the end of the month. In any case, I quickly made my way over to the OpenSync homepage to obtain a newer build of their libraries. There I found out not only that version 0.39 had just been released on September 21st, but also that it isn’t all that stable:

Releases 0.22 (and 0.2x svn branch) and before are considered stable and suitable for production. 0.3x releases introduce major architecture and API changes and are targeted for developers and testers only and may not even compile or are likely to contain severe bugs.

0.3x releases are not recommended for end users or distribution packaging.

Throwing caution to the wind, I grabbed a tarball of compilation scripts from the website, and went about my merry way gentooing it up. After a couple of minor tweaks to the setEnvOpensync.sh script, I got the cmpOpensync script to run, which checked out the latest trunk from the svn, and automatically compiled and installed it for me. By running the command msynctool --version, I found out that I now had OpenSync v0.40-snapshot installed. Relieved, I headed back to my BlueZync installation. This time around, I managed to get right up to the build-install-bluezync.sh script before encountering another horrible dependency error:

-- checking for one of the modules 'glib-2.0'
--   found glib-2.0, version 2.16.6
-- Found GLib2: glib-2.0 /usr/include/glib-2.0;/usr/lib/glib-2.0/include
-- Looking for include files HAVE_GLIB_GREGEX_H
-- Looking for include files HAVE_GLIB_GREGEX_H - found
-- checking for one of the modules 'libxml-2.0'
--   found libxml-2.0, version 2.6.32
-- checking for one of the modules 'libopensync1'
--   found libopensync1, version 0.40-snapshot
-- checking for one of the modules 'thunderbird-xpcom;icedove-xpcom'
--   found icedove-xpcom, version 2.0.0.22
--     THUNDERBIRD_XPCOM_VERSION 2.0.0.22
--     THUNDERBIRD_VERSION_MAIN 2
--     THUNDERBIRD_XPCOM_MAIN_INCLUDE_DIR /usr/include/icedove
--     NSPR_MAIN_INCLUDE_DIR /usr/include/nspr
--     THUNDERBIRD_XPCOM_LIBRARY_DIRS /usr/lib/icedove
--     THUNDERBIRD_XPCOM_LIBRARIES xpcom;plds4;plc4;nspr4;pthread;dl
-- checking for one of the modules 'sunbird-xpcom;iceowl-xpcom'
--   found iceowl-xpcom, version 0.8
SUNBIRD_INCLUDE_DIRS /usr/include/iceowl;/usr/include/iceowl/xpcom;/usr/include/iceowl/string;/usr/include/nspr
SEVERAL
--      SUNBIRD_MAIN_INCLUDE_DIR /usr/include/iceowl
--      SUNBIRD_VERSION 0.8
-- Found xpcom (thunderbird and sunbird):
--   THUNDERBIRD_XPCOM_VERSION=[2.0.0.22]
--   SUNBIRD_VERSION=[0.8]
--   THUNDERBIRD_VERSION_MAIN=[2]
--   SUNBIRD_VERSION_MAIN=[0]
--   XPCOM_INCLUDE_DIRS /usr/include/nspr;/usr/include/icedove;/usr/include/icedove/addrbook;/usr/include/icedove/extensions;/usr/include/icedove/rdf;/usr/include/icedove/string;/usr/include/icedove/xpcom_obsolete;/usr/include/icedove/xpcom;/usr/include/icedove/xulapp;/usr/include/iceowl
--   XPCOM_LIBRARY_DIRS /usr/lib/icedove
--   XPCOM_LIBRARIES xpcom;plds4;plc4;nspr4;pthread;dl
--   SUNBIRD_VERSION 0.8
CALENDAR_VERSION=[8]
LIBTBXPCOM_INCLUDE_DIR
XPCOM_LIBRARIES  xpcom;plds4;plc4;nspr4;pthread;dl
ENABLE_TESTING [yes]
TESTING ENABLED
-- checking for one of the modules 'check'
CMake Error at cmake/modules/FindPkgConfig.cmake:357 (message):
None of the required 'check' found
Call Stack (most recent call first):
cmake/modules/FindCheck.cmake:27 (PKG_SEARCH_MODULE)
CMakeLists.txt:73 (FIND_PACKAGE)

CMAKING mozilla-sync 0.1.7
-- Configuring done

From what I can gather from this output, the configuration file was checking for dependencies, and got hung up on one called “check.” Unfortunately, this gave me zero information that I could use to solve the problem. I can verify that the install failed by running msynctool --listplugins, which returns:

Available plugins:
msynctool: symbol lookup error: msynctool: undefined symbol: osync_plugin_env_num_plugins
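If I had to guess, the ‘check’ that cmake is after is the C unit-testing framework of the same name, which Debian packages simply as check. If that’s right, something like this might get past the configure error:

sudo apt-get install check    # the C unit testing framework

That said, the undefined symbol above smells like a deeper problem (a Debian-packaged msynctool being run against my freshly compiled 0.40-snapshot libraries), so the missing check dependency may not be the whole story.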

Ah, shit. Looks like I’m stuck again. Maybe one day I’ll figure it out. Until then, if any of our readers has ever seen something like this, I could use a couple of pointers.

More XKCD

October 4th, 2009 No comments

I swear that I’ve encountered this before…

That is all.

Categories: Flash, God Damnit Linux, Hardware, Jon F, Linux Tags:

WTF #17(qq)

October 2nd, 2009 No comments

It’s no secret that Linux, as with any other operating system (and yes, I realize that I just grouped all Linux distributions into a collective) has its idiosyncrasies.  The little things that just sort of make me cock my head to the side and wonder why I’m doing this to myself, or make me want to snap my entire laptop in half.

One of these things is something Tyler previously complained about – a kernel update on Fedora 11 that just happened to tank his graphics capabilities.  Now, I might just be lucky, but why in the hell would Fedora release a kernel update before compatible drivers from the two major graphics card manufacturers were ready?

Fortunately for Tyler, a kmod-catalyst driver was released for his ATI graphics card yesterday (today?) and he’s now rocking the latest kernel with the latest video drivers.  Unfortunately for me, some slacker has yet to update my kmod-nvidia drivers to operate properly with the latest kernel.

While this is more of a rant than anything else, it’s still a valid point.  I’ve never had trouble on a Windows-based machine where a major update caused a driver to stop functioning (short of an actual version increment – so of course, I would expect Windows XP drivers not to function in Vista, and Vista drivers not to function in Windows 7; similarly, I would not expect Fedora 11 drivers to function in Fedora 12).

<end rant>

Top 10 things I have learned since the start of this experiment

October 2nd, 2009 4 comments

In a nod to Dave’s classic top ten segment, I will now share with you the top 10 things I have learned since starting this experiment one month ago.

10. IRC is not dead

Who knew? I’m joking, of course, but I had no idea that so many people still actively participated in IRC chats. As for the characters who hang out in these channels… well, some are very helpful and some… answer questions like this:

Tyler: Hey everyone. I’m looking for some help with Gnome’s Empathy IM client. I can’t seem to get it to connect to MSN.

Some asshat: Tyler, if I wanted a pidgin clone, I would just use pidgin

It’s this kind of ‘you’re doing it wrong because that’s not how I would do it’ attitude that can be very damaging to new Linux users. There is nothing more frustrating than trying to get help and someone throwing BS like that back in your face.

9. Jokes about Linux for nerds can actually be funny

Stolen from Sasha’s post.

Admit it, you laughed too

8. Buy hardware for your Linux install, not the other way around

Believe me, if you know that your hardware is going to be 100% compatible ahead of time you will have a much more enjoyable experience. At the start of this experiment Jon pointed out this useful website. Many similar sites also exist and you should really take advantage of them if you want the optimal Linux experience.

7. When it works, it’s unparalleled

Linux seems faster, more fully featured, and less resource-hungry than a comparable operating system from either Redmond or Cupertino. That is, assuming it’s working correctly…

6. Linux seems to fail for random or trivial reasons

If you need proof of this, just take a look back at the last couple of posts on here. There are times when I really think Linux could be used by everyone… and then there are moments when I don’t see how anyone outside of the most hardcore computer users could ever even attempt it. A brand new user should not have to know about xorg.conf or how to edit their DNS resolver.

Mixer - buttons unchecked

5. Linux might actually have a better game selection than the Mac!

Obviously there was some jest in there, but Linux really does have some gems for games out there. Best of all, most of them are completely free! Then again, some are free for a reason.

Armagetron

4. A Linux distribution defines a lot of your user experience

This can be especially frustrating when the exact same hardware performs so differently. I know there are a number of technical reasons why this is the case but things seem so utterly inconsistent that a new Linux user paired with the wrong distribution might be easily turned off.

3. Just because it’s open source doesn’t mean it will support everything

Even though it should, damn it! The best example I have for this happens to be MSN clients. Pidgin is by far my favourite, as it seems to work well and even supports a plethora of useful plugins! However, unlike many other clients, it doesn’t support a lot of MSN features such as voice/video chat, reliable file transfers, and those god-awful winks and nudges that have appeared in the most recent version of the official client. Is there really that good of a reason holding the Pidgin developers back from just making use of the other open source libraries that already support these features?

2. I love the terminal

I can’t believe I actually just said that, but it’s true. On a Windows machine, I would never touch the command line because it is awful. On Linux, however, I feel empowered by using the terminal. It lets me quickly perform tasks that might otherwise take a lot of mouse clicks through a cumbersome UI.

And the #1 thing I have learned since the start of this experiment? Drum roll please…

1. Linux might actually be ready to replace Windows for me

But I guess in order to find out if that statement ends up being true you’ll have to keep following along 😉

Resolving the DNS Issue Once and For All

October 2nd, 2009 3 comments

A little while ago, I wrote about problems that I was having with my laptop not resolving DNS requests. After I restarted today (because X11 crashed, but that’s a whole other can of worms), it started happening again, even though I had fixed the problem once before. Turns out that the big warning banner at the top of the resolv.conf file was relevant, and that my changes were eventually lost, just not on the first reboot.

So I moved back to my Windows machine for a few minutes to hit up the #debian IRC channel, where I explained my issue and what I had done to solve it last time. Luckily, somebody there presented me with a new solution to the issue that should persist across restarts. Instead of making edits directly to resolv.conf, I was instructed to add a prepend line to the /etc/dhcp3/dhclient.conf file:

#add a prepend line to fix DNS issues
prepend domain-name-servers 64.71.255.202;

Where the IP address is the IP of your DNS server (OpenDNS, in my case). After saving the file, I ran

/etc/init.d/resolvconf restart

to apply the changes and restart the DNS lookup service thinger. I know that doesn’t sound very technical, but I honestly don’t know anything about the part of the network stack in Debian that is responsible for DNS lookups, aside from the fact that it may or may not be called resolvconf, so you’ll have to live with it.
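For the curious, you can confirm that the prepend actually took effect after the next reboot by looking at the resolver file and doing a test lookup (dig comes from Debian’s dnsutils package):

cat /etc/resolv.conf    # the prepended nameserver should now be the first entry
dig example.com         # any lookup will do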

In any case, this seems to have worked quite well, so check into it if you’re having problems resolving DNS addresses on your machine.

Barry: Round Two with the Blogosphere riding Shotgun

September 30th, 2009 2 comments

Given the problems that I’ve been having lately with getting my Blackberry calendar and contacts to synchronize with anything in Linux, I was quite surprised when I almost got it working tonight. Forgetting everything that I’ve learned about the process, I started over, following these helpful tutorials and working through the entire install from the beginning. Unfortunately, aside from some excellent documentation of the install process (finally), the only new idea that those blogs provided me with was to try syncing the phone with different pieces of software. Specifically, Chip recommended KDEPIM, although I opted to jump through a few more hoops before giving in and dropping the Thunderbird/Lightning combination entirely.

After a bit more mucking about, I decided to give up Lightning and installed Iceowl, Debian’s rebranding of Mozilla Sunbird, instead. Iceowl is the standalone calendar application that Lightning is based on, and is a very lightweight solution that is supposed to cooperate with the opensync-plugin-iceowl package. In theory, this allows calendar data to be shared between my device and the Iceowl calendar after configuring the plugin to read my Iceowl calendar from the /home/username/.mozilla/iceowl/crazyfoldername/storage.sdb file. In practice, the sync process gets locked up every time:

Screenshot-PIM Synchronization - KitchenSync-1

Why must you tease me?

Well, I’ve tried everything that I can think of to get my phone to synchronize with any Mozilla product. I’m very close to giving up, which is a shame, because they really are superior products. The ridiculousness of the entire thing is that I can easily dump my PIM data to a folder, and Thunderbird stores its data in an SQLite database. If this were Windows, I’d have written a VB app to fix my problems hours ago… Anybody know any Python?
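In the meantime, the sqlite3 command-line shell is probably the quickest way to poke at that database and see what’s actually in it (assuming it really is SQLite; crazyfoldername stands in for the real profile directory):

sqlite3 ~/.mozilla/iceowl/crazyfoldername/storage.sdb ".tables"
sqlite3 ~/.mozilla/iceowl/crazyfoldername/storage.sdb ".dump" | less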

Update: I’ve also managed to successfully synchronize my phone with the Evolution mail client. Unfortunately, Evolution looks rather pale next to Thunderbird. In fact, the entire reason that I switched to Thunderbird about a week ago is that Evolution mysteriously stopped receiving my IMAP email with no explanation. No new email comes in, and the Send/Receive button is grayed out. Until now, I was happy with my decision, as Thunderbird is a superior application.

Update

September 30th, 2009 1 comment

Hi Everyone,

Sorry about the lack of updates. I’ve been pretty busy lately. After a lot of fighting and arguing, Linux and I are finally getting along.

I was unsuccessful in installing Linux as I had mentioned earlier, by running it from my portable hard drive off of my Mac. As a result, I decided to wipe the Ubuntu partition on my Asus eeePC and install openSUSE on there. It was fairly simple to do, and it installed without much hassle. This guide came in handy for a smooth transition.

Although Gentoo is definitely the best flavour of Linux I’ve encountered, openSUSE hasn’t been too bad.

With that being said, I have a few tasks for the coming days, and I will be sure to post about all of them. First, I want to install a softphone to connect to my Asterisk server. Jake has said that, after some fighting, he managed to get this to work. If I run into issues, I can always ask him. Additionally, I have to get Eclipse set up with some various environments I’m going to have to use in the coming weeks. I’ve successfully set it up to work with OpenGL thus far.

That’s it for now. I’ll be posting more in the next few days as I accomplish these tasks.

The Linux and its ability to brick itself.

September 29th, 2009 1 comment

Over the weekend, I started a stats assignment that required me to use R. R runs in the terminal, but when you create plots, it brings up graphics. Normally in Windows, you can just copy the window and paste the new plot into whatever word processor you’re using. Linux Mint wasn’t letting me copy the plot – in fact, it wasn’t even letting me use alt-printscreen. Finally, I gave up and tried to install ksnapshot (I figured I could just screenshot a selected area). This is where my troubles began. Ksnapshot refused to install. Actually, everything refused to install. I restarted the computer and found this ridiculous scene on my desktop:

So many screenshots

Seeing as I apparently had an abundance of screenshots, I gave up on ksnapshot and moved on with my life.

Today I tried to update my system through mintUpdate. Unfortunately, none of the updates went through. I called Tyler and Jake in and we tried installing something – anything – else. Nothing worked, and I kept getting this message in the console:

“dpkg failed in buffer read”

It turns out that Festival (the text-to-speech program) was completely ruining everything. We tried removing it through the terminal, but to no avail. We tried simply accessing it, but the system was having none of it. In the end, we had to go into recovery mode and do some weird file system stuff (I’ll have to ask Jake and Tyler about the details of what exactly it was I did). So far the system seems to be functioning again, but if Tyler and Jake weren’t around, I’m sure I’d still be struggling to figure out what the hell was going on.
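For future reference, the textbook way to evict a package that has wedged dpkg looks something like this (a sketch; I have no idea whether it would have worked against whatever Festival had done to the system):

sudo dpkg --remove --force-remove-reinstreq festival    # force the broken package out
sudo apt-get -f install                                 # let apt repair any broken dependencies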

Categories: God Damnit Linux, Linux Mint, Sasha D Tags:

Another kernel update, another rebuild of my kernel

September 29th, 2009 No comments

Seriously, this is getting annoying

And just when I thought it couldn’t get any more annoying… it seems as though there isn’t a kmod-catalyst for the newest version of the kernel that I just got updated to. Which means either I get the new kernel or I get to keep my graphics. I think for now I will be sticking with the latter and only move up to the new kernel when there is a kmod-catalyst ready for me.
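Until then, one way to keep yum from dragging in new kernels before kmod-catalyst catches up is a standard exclude line in /etc/yum.conf; just remember to remove it once the matching driver package lands:

# /etc/yum.conf
[main]
exclude=kernel*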

The Magic of Lenny Backports

September 28th, 2009 No comments

This afternoon saw me in a really annoying situation. I was in a coffee shop, wearing a beret, and writing poetry, and couldn’t get a ‘net connection. The coffee shop runs an open network access point, but some asshat in a nearby complex was running a secured access point with the same SSID.

For some reason, my version of the network-manager-gnome package (the older one that shipped with Lenny) could not tell the difference, and I could not get a connection. When I attempted to force a connection, it crashed. Repeatedly.

This being my first experience with anything on Linux crashing, I immediately (and rashly) determined that the problem must lie with my (relatively) old network manager. After all, I was running v0.6.6-4 of an application that had since matured to v0.7.7-1! And my companions, who were running the latest version, were connecting no problem! Of course, this also wasn’t the first set of problems that I had encountered with my network manager.

So upon returning to my domicile (I’ve always wanted to use that word in a sentence), I hit the #debian IRC channel and asked about upgrading to the testing repository, where all of the latest and greatest code is awaiting release as Squeeze, the next version of Debian. Having heard that the code was frozen in July, and that the release was slated for early spring, I figured that by this point, the code there would be fairly mature, and easy enough to use. To the contrary, the members of the channel weren’t comfortable giving me advice on how to upgrade, since in their words, I shouldn’t be considering upgrading to testing unless I understood how to do as much.

With this warning, I was then given instructions on how to update (which didn’t make me feel any better – the last step in the instructions was “be ready for problems”), along with the suggestion that I check out backports.org first.

Essentially, this site is an alternate repository dedicated to backporting the latest and greatest code from testing to the last stable version of Debian. This means that, with a simple modification to my /etc/apt/sources.list file, I could selectively upgrade the packages on my machine to newer versions.
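If memory serves, the whole dance amounts to one new line and one flag (the repository line is the one from backports.org’s own instructions, so double-check it there):

# /etc/apt/sources.list
deb http://www.backports.org/debian lenny-backports main contrib non-free

# then pull a specific package from backports:
apt-get update
apt-get -t lenny-backports install network-manager-gnome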

In fact, I had actually already added this repository to my sources.list file, back when I was working on getting Flash 10 installed. At the time, I just didn’t know enough to understand what it was, or what its implications were.

So now, running the newest version of network-manager-gnome, a somewhat more recent version of gnome-do, and clinging to the promise that I can upgrade anything else that seems to have gotten better since the time of the dinosaurs when Lenny was released, my urge to upgrade has subsided, and my commitment to wait out the proper release has been restored.

A minor setback

September 28th, 2009 2 comments

Since this crazy job of mine doesn’t quite feed my mad electronics fetish as much as I might like to, I do a lot of computer troubleshooting on the side… it helps pay the bills, and is a nice way to stay on my toes as far as keeping on top of possible threats out there (since our company’s firewall keeps them out for the most part).  I’ll usually head to a person’s house, get some stuff done, and if it’s still in rough shape (requires a full backup and format) I’ll bring the machine home.

Yesterday, I headed over to my former AVP (Assistant Vice-President, for those of you not in the know)’s house to get her wireless network running and troubleshoot problems with her one desktop, as well as get file and printer sharing working between two machines.  Her wireless router is a little bit old – a D-Link DI-524 – but it’s something I’ve dealt with before.

After a firmware upgrade, the option to use WPA-PSK encryption was made available (as opposed to standard WEP before).  Great, I thought!  I go to put in a key, hit Apply, and…

Nothing.  Hitting the Apply button does absolutely nothing.  Two computer and router restarts (including a full reset) later, and the same thing was happening.  Some quick research indicated that, hooray hooray, there was an incompatibility with that router’s administration page, Java, and Firefox.  Solution?  Use Internet Explorer.

Here’s where I really ran into a pickle.  This is the first time I’ve ever felt the disadvantage of using a non-Windows operating system.  If I had Windows, I would have been able to fire up IE and just get everything going for them.  Instead, I had to try and install IE6 for Linux, which failed (Wine threw some kind of error).  I ended up using one of my client’s laptops, which they thankfully had sitting around.  Frustrating, but it was easy enough to work around.

Has anyone else had experiences like this?  Things that are *just* out of reach for you because of your choice to use Linux over Windows?

How I solved my audio problems

September 27th, 2009 No comments

Short answer: IRC and #fedora

Long answer:

As you may recall I have been without sound for quite some time now. Finally getting sick and tired of it I ventured into the official Fedora IRC channel to try and get some help. Thankfully the people over there are very helpful. After about an hour of trying this, that and the other thing I finally found success by doing the following:

yum install pavucontrol padevchooser

This installed some very easy to use tools for PulseAudio, the component that I long thought was the cause of my problems.

PulseAudio made easy!

After pulling this up, I noticed that it was sending the master audio stream to my ATi HDMI port for some reason. A quick switch of this to “Internal Audio” and everything seemed to work fine! Not sure what caused my default audio stream to be switched to the HDMI port that I’m not even using, but I’m just glad that after all of this time I have finally solved the problem!
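For anyone who would rather skip the GUI tools, I’m told the same switch can be made from a terminal with pacmd (the sink name is whatever list-sinks reports for your internal card):

pacmd list-sinks                    # note the name of the internal audio sink
pacmd set-default-sink <sink-name>  # make it the default output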

DNS Not Satisfactory

September 25th, 2009 No comments

While trying to connect to a remote webserver via SSH last night, I found that my machine refused to resolve the hostname to an IP address. I couldn’t ping the server either, but could view a webpage hosted on it. Now this was a new one on me – I figured that my machine was caching a bad DNS record for the webserver, and couldn’t connect because the server’s IP had since changed. That didn’t really explain why I was able to access the server from a webbrowser, but I ran with it. So how do you refresh your DNS cache in Linux? It’s easy to do in Windows, but the Goog and the Bing let me down spectacularly on this issue.

This morning, I tried to connect via SSH from my school network, and couldn’t get a connection there either. This reinforced the idea that a local DNS cache might have an outdated record in it, because at school, I was using a different nameserver than at home, and a whole 12 hours had elapsed. Out of theories, and lacking a method to refresh my local DNS cache, I hit the #debian channel on IRC for some guidance. Unlike my last two trips to this channel, I got help from a number of people within minutes (must be a timezone thing), and found out that unless I manually installed one, Debian does not maintain a DNS cache. Well, there goes that idea.

So where was I getting my DNS lookup service? A quick look at my /etc/resolv.conf file showed that the only entry in it was 192.168.1.1, which is the IP of my home router. The file also has a huge warning banner that claims that any changes will be overwritten by the operating system. Makes sense, as when I connect to a new network, I presumably get DNS resolution from their router, which may have a different IP address than mine. The guys on IRC instructed me to try to connect to the server with its IP address instead of its hostname, thereby taking the DNS resolution at the router out of the picture. This worked just fine.

They then instructed me to add a line to the file with the IP address of the nameserver that the router is using. In the case of our home network, we use OpenDNS, whose nameservers have static IP addresses. I did so, and could immediately resolve the IP of my remote server, and obtain an SSH connection to it.

Well fine, my problem is solved by bypassing DNS resolution at the router, but it still doesn’t explain what’s going on here. Why, if DNS resolution was failing at the router level (presumably because the router maintains some kind of DNS cache), did it work for my web browser, but not for the ssh, scp, or ping commands? Don’t they all resolve hostnames in the same way? Further, if it was the router cache that had a bad record in it, why did the problem also manifest itself at school, where the router is taken entirely out of the picture?

Further, will the file actually be overwritten by the OS the next time I connect to a different wireless network? If so, will my manual entry be erased, and will the problem return? Time will tell. Something smells fishy here, and it all points to the fact that my machine is in fact retaining a local DNS cache. How else can I explain away the problem manifesting itself on the school network? Further, even if I do have a local cache that is corrupted or contains a bad record, why did Iceweasel bypass it and resolve the address of the webserver at the router level (thereby allowing it to connect, even though the ssh, scp, and ping commands could not)?
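Next time this happens, I’m going to try cutting each suspect out of the loop with dig (from the dnsutils package) and see who actually hands back the stale answer. The last address below is one of OpenDNS’s public resolvers:

dig example.com                   # use whatever resolv.conf currently says
dig @192.168.1.1 example.com      # ask the home router directly
dig @208.67.222.222 example.com   # ask OpenDNS directly, bypassing the router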

LINUX!!11

My audio doesn’t work anymore

September 21st, 2009 1 comment

Yup. Not sure why. It just happened. I have tried messing around in my audio settings and still nothing. In fact, the only audio device I can get to play is not PulseAudio, or anything standard like that, but rather the Intel audio card that it found for my system. While this is all fine and promising, it still doesn’t work right. When I tried to set it as my primary device and restarted my machine, KDE threw a bunch of error messages my way, saying that it couldn’t use the Intel device (really? because that was the only one that worked for me…) and instead fell back to PulseAudio (really? because that one doesn’t work for me…).

Why is it that Linux works great for a short while and then suddenly breaks itself?

These lockups are getting pretty annoying

September 20th, 2009 4 comments

This morning I was using Firefox with about a dozen tabs open when my computer locked up. It froze and was completely unresponsive – basically DOA. I decided to reboot the system, and everything was working fine until I reopened Firefox. It loaded my previous session, and the computer locked up again. After a second reboot, I opened Firefox and decided to start a new session and so far everything has been running smoothly.

I’m not sure why my system does this, but it’s getting pretty damn annoying. More importantly, the fact that these crashes are forcing me to reboot is really getting on my nerves. While I didn’t have anything important open, in the next few days I’m going to be using R through the terminal, and there’s a chance that a crash like this could lose me a significant amount of work, particularly since R doesn’t have a restore capability like my other programs.

I’m also hesitant to blame Firefox for what’s going on since this has also happened in Thunderbird and in Pidgin. Hopefully I can figure out what’s going on soon – Linux Mint has been pretty fantastic, and this has really put a damper on my experience. Is there any sort of error log I can look at? Ideally I want to be able to replicate the conditions before the crash to see if I can isolate any causes.
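If anyone knows better, do tell, but these seem like the obvious places to start looking after the next freeze (standard log locations on an Ubuntu-based system like Mint):

dmesg | tail -n 50              # kernel messages from the current boot
tail -n 100 /var/log/syslog     # general system log
tail -n 100 /var/log/Xorg.0.log # X server log, in case X is the culprit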

Categories: God Damnit Linux, Linux Mint, Sasha D Tags:

Mounting an NTFS-formatted External Drive

September 20th, 2009 7 comments

I have a Western Digital 250GB NTFS-formatted external hard drive that I use primarily to store backups of my Windows machine. Since I’m away from my house for a couple of days, I used the drive to bring along some entertainment, but encountered some troubles getting Debian Lenny to play nice with it:

mount-error

After searching around for a bit, I found a helpful thread on the Ubuntu forums that explained that this problem could be caused by a few different things. First, with the drive plugged in, I ran

sudo fdisk -l

from the terminal, which brought up a summary of all disks currently recognized by the machine:

jon@debtop:/$ sudo fdisk -l

Disk /dev/sda: 40.0 GB, 40007761920 bytes
255 heads, 63 sectors/track, 4864 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xcccdcccd

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          31      248976   83  Linux
/dev/sda2              32        4864    38821072+  83  Linux

Disk /dev/dm-0: 39.7 GB, 39751725568 bytes
255 heads, 63 sectors/track, 4832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 38.0 GB, 38067503104 bytes
255 heads, 63 sectors/track, 4628 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 1681 MB, 1681915904 bytes
255 heads, 63 sectors/track, 204 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x5b6ac646

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30401   244196001    7  HPFS/NTFS

Judging by the size of the drives, I figured out that the OS saw my drive at the location /dev/sdb, and the partition that I wanted to mount (the only partition on the drive) at the location /dev/sdb1.

Now, to determine why Linux wasn’t mounting the drive, I checked the fstab file at /etc/fstab to see if there was some other entry for sdb that was preventing it from mounting correctly:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/debtop-root /               ext3    errors=remount-ro 0       1
/dev/sda1       /boot           ext2    defaults        0       2
/dev/mapper/debtop-swap_1 none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

Since there was no entry there that should have overridden sdb, I gave up on that line of inquiry, and decided to try manually mounting the drive. I know that Debian can read ntfs drives using the -t ntfs argument for the mount command, so I navigated over to the /media/ directory and created a folder to mount the drive in:

jon@debtop:/$ cd /media/
jon@debtop:/media$ sudo mkdir WesternDigital
jon@debtop:/media$ ls
cdrom  cdrom0  floppy  floppy0  WesternDigital
jon@debtop:/media$ sudo mount -t ntfs /dev/sdb1 /media/WesternDigital/
jon@debtop:/media$ sudo -s
root@debtop:/media# cd WesternDigital
root@debtop:/media/WesternDigital# ls
KeePass.kdbx  nws  $RECYCLE.BIN  System Volume Information  workspace.tc

As you can see, the contents of my external drive were now accessible in the location where they ought to have been if Debian had correctly mounted the drive when it was plugged in. The only caveat to the process is that the mount function is available only to root users, meaning that the mountpoint was created by root, and my user account lacks the necessary permissions to read or write to the external drive:

no-permissions

I figured that this issue could be solved by using chmod to grant all users read and write permissions to the mountpoint:

root@debtop:/media# chmod +rw WesternDigital
chmod: changing permissions of `WesternDigital': Read-only file system

Well what the hell does that mean? According to this post (again on the Ubuntu forums), the ntfs support in Linux is experimental, and as such, all ntfs drives are mounted as read-only. Specifically, this drive is owned by the root user, and has only read and execute permissions, but lacks write permissions.

According to this thread on the slax.org forums, there is another ntfs driver for Linux called ntfs-3g that will allow me full access to my ntfs-formatted drive. After successfully adding the ntfs-3g drivers to my system, I dismounted the drive, and attempted to re-mount it with the following command:

mount -t ntfs-3g /dev/sdb1 /media/WesternDigital

This time, the mount command appeared to almost work, but I got an error message along the way, indicating that the drive had not been properly dismounted the last time it was used on Windows, and giving me the option to force the mount:

Mount is denied because NTFS is marked to be in use. Choose one action:

Choice 1: If you have Windows then disconnect the external devices by
 clicking on the 'Safely Remove Hardware' icon in the Windows
 taskbar then shutdown Windows cleanly.

Choice 2: If you don't have Windows then you can use the 'force' option for
 your own responsibility. For example type on the command line:

 mount -t ntfs-3g /dev/sdb1 /media/WesternDigital -o force

Well, since I didn’t have a Windows box lying about that I could use to dismount the drive properly, I took a shot at using the force option. After warning me again that it was resetting the log file and forcing the mount, the machine finally mounted my drive with full permissions for the owner, group, and other users!

drwxrwxrwx 1 root root  4096 2009-09-18 15:40 WesternDigital

After a couple of manual tests, I confirmed that both my user account and the root user had full read/write/execute access to this drive, and that I could use it like any other drive that the system has access to. Further, thanks to the painful XBMC install process, I already had the codecs required to play all of the TV shows that I brought along.
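As a postscript: if I wanted the drive mounted automatically with ntfs-3g every time it’s plugged in, I believe an fstab entry along these lines would do it, with the caveat that /dev/sdb1 isn’t guaranteed to keep that name between reboots:

# /etc/fstab
/dev/sdb1   /media/WesternDigital   ntfs-3g   defaults   0   0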

Wireless Network Manager Woes

September 16th, 2009 No comments

Debian Lenny ships with the Network Manager package, version 0.6.6-4, which for all intents and purposes is a well written and very useful network management application. But of course, I wanted something more. At home, I have my music library (hosted on a Windows Vista machine) shared to the local network, and wanted to mount that drive using Samba so that I could share my music library between my two machines while on my home network.

On a Windows machine, one can just point an application to files on a networked drive, while Windows handles all of the dirty details related to allowing that application to use those files as if they were on the local machine. On Linux, the application in question seems to have to be aware of how to handle a Windows share (usually via the Samba package), and handle that drive sharing on its own, unless the network drive has been mounted first. Further, when mounting a network share in Linux, one can choose any folder on their hard drive to put its contents into, ensuring that it always appears in the same location, and is easy to find.

Unfortunately, as far as I can divine, a networked drive can only be mounted by the root user, which seriously reduces the number of applications that can perform that mounting action. In my quest to get my home music share working, I looked into plenty of different methods for automatically mounting network drives, including startup scripts, modifying the fstab file, and manually connecting from a root terminal. None worked very well.

Eventually, I stumbled across a web post advertising the pros of the WICD network manager, which, as I understand it, will be used as an alternative to the network manager package by Debian Squeeze, and can currently be pulled into Lenny by adding the Debian-Lenny Backports repository to your sources list. I installed it, replacing the default network-manager-gnome package.

My first impression of WICD was extremely positive. Not only did it connect to my home network immediately, it also allowed me to define default networks to connect to (something that is conspicuously absent from the NetworkManager interface), and to set scripts that are run when my client connects to or disconnects from any of the networks in the list. This allowed me to write a simple one-line script that mounted my network share on connection to my home wireless network. It worked every time, and mysteriously did so without asking me for my sudo password, even though it used the sudo command internally to get rights to perform the mount.
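The script itself was nothing fancy; it amounted to something like the following (the share and mountpoint names are invented for the example, and mount.cifs comes from Lenny’s smbfs package):

#!/bin/sh
# run by WICD after connecting to the home wireless network
mount -t cifs //vista-box/Music /home/jon/music -o guest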

Odd security peculiarities aside, I was happy with what I had accomplished – now I could tell my laptop to automatically connect to my home wireless network, and to mount my music share as soon as it did so! Then I went to school. Shit.

The wireless network at my University uses EAP-TTLS with PAP inner-authentication as a security protocol, something that WICD apparently had no idea how to handle. This protocol is extremely secure, as the host identifies itself to the client with a certificate that the client uses to tunnel into the host, allowing connection to take place without any user information being passed in the clear. At least that’s how it’s supposed to work, except that our school doesn’t have a certificate or certificate authority, so… Whatever.

In any case, WICD does not include a template for this type of network (which is fair, I suppose, since Windows requires an add-on to access it as well), but for the life of me, I couldn’t figure out what to do to fix the problem. I trawled the internet from a wired machine and tried editing the WICD encryption templates, while Tyler (on Fedora) and Phil (on OpenSuse) connected on the first try.

Eventually, after an hour or so of fruitless trial and error, I gave up, came home, and reinstalled the NetworkManager application, because that’s what Tyler and Phil were using on their systems, and it seemed to work fine. Sure enough, the next day I connected after just a minor tweaking of the network properties in the NetworkManager dialog.

Unfortunately, while I can now connect to my home and school networks, I once again have lost the ability to automatically connect to networks, and to execute scripts on connection, meaning that I’m back to square one with the mounted networked music share – for now, I just do the mounting manually from a root terminal. Balls.

New monitor woes

September 15th, 2009 No comments

So I’ve gone out and purchased myself a gorgeous LG Flatron W2243T. Unfortunately, getting it to work correctly has proven difficult so far. It’s connected to my computer through a DVI-to-HDMI cable. Now, adding a monitor to my Windows XP machine was fairly simple – all I had to do was plug it in, add it through display properties, and then I could futz around with it to my heart’s content. The task has proven more arduous on Mint.

Mint’s display manager really brought my system to its knees – as soon as I opened it, the computer slowed to a crawl and was basically unusable. Some of the information in the display manager was correct: there were two monitors (the laptop monitor and the new LG external monitor), and one was wider than the other; unfortunately, every other piece of information was “unknown”, and trying to change anything killed my system. After I rebooted, the monitor worked right from startup, which was a pleasant surprise, but that’s where the fun ended. I tried to get into my display manager again, but all it did was slow my system down and present me with a blank screen. I’ve tried going in through the terminal and finding anything I could online, but I’m not sure what to do. Hopefully Jake can help me out when he gets home – otherwise I’m stuck with a mirrored dual-monitor setup at a non-optimal resolution. Thankfully, my monitor and laptop share the same display ratio, so at least everything is in proportion.

Oh God How Did This Happen
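Update: if the GUI keeps dying, I’m told the xrandr command line can sometimes do the job directly. The output names below are invented (xrandr -q lists the real ones), and 1920x1080 is just the W2243T’s native mode:

xrandr -q                                             # list outputs and supported modes
xrandr --output VGA --mode 1920x1080 --right-of LVDS  # extend instead of mirror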

Eclipse Fails It

September 14th, 2009 No comments

Man, Eclipse works great on Debian! It gives me this cool message on startup:

JVM terminated. Exit code=127
/usr/lib/jvm/java-gcj/bin/java
-Djava.library.path=/usr/lib/jni
-Dgnu.gcj.precompiled.db.path=/var/lib/gcj-4.2/classmap.db
-Dgnu.gcj.runtime.VMClassLoader.library_control=never
-Dosgi.locking=none
-jar /usr/lib/eclipse/startup.jar
-os linux
-ws gtk
-arch x86
-launcher /usr/lib/eclipse/eclipse
-name Eclipse
-showsplash 600
-exitdata 3a0015
-install /usr/lib/eclipse
-vm /usr/lib/jvm/java-gcj/bin/java
-vmargs
-Djava.library.path=/usr/lib/jni
-Dgnu.gcj.precompiled.db.path=/var/lib/gcj-4.2/classmap.db
-Dgnu.gcj.runtime.VMClassLoader.library_control=never
-Dosgi.locking=none
-jar /usr/lib/eclipse/startup.jar

After uninstalling, reinstalling, changing which JVM I was using, uninstalling, reinstalling, googling, yahooing, and binging, I finally found this post over at Debian Help that instructed me to first install XULRunner. With the addition of this simple step, everything suddenly worked great.
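For anyone else googling this error: exit code 127 traditionally means “something couldn’t be found”, which squares with a missing XULRunner. On Lenny I believe the package is xulrunner-1.9, though apt-cache search xulrunner will settle it:

sudo apt-get install xulrunner-1.9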

The strange part about the whole thing is that Eclipse doesn’t install XULRunner as a dependency, and the Wikipedia article about XULRunner doesn’t mention Eclipse anywhere. I don’t really understand their relationship, aside from the fact that Eclipse supports plugins that may or may not be written on top of XULRunner.

Regardless of their strange and undocumented relationship, the Eclipse/XULRunner combo seems to work perfectly, allowing me to create Java, C/C++, and Plugin projects out of the box. Next steps include adding plugins for Subversion, Python, and PHP.

Challenger Approaching: Phil tries to install openSUSE

September 13th, 2009 No comments

I’m the newest guinea pig in this experiment, and yes, I’m a few days late joining up. Since I’ve already become comfortable with Ubuntu, I decided to choose openSUSE for my distribution. However, because I do a lot of Windows development for both of my jobs, I’ll be the only participant of this experiment who’ll be dual booting.

Before you go and cry foul, I checked the rules very carefully. The rules state: “[you] must use the distribution on your primary computer and it must be your primary day-to-day computing environment”. That means that as long as I use it 50.1% of the time, I’ll be within the bounds of the experiment. Of course I plan to use it considerably more than 50.1% of the time.

While everyone else in the experiment has been starting to finally get their computers to a productive state, I only just started installing openSUSE last Tuesday. I might have had some time to start getting my shit in order; however, my first attempt to burn the openSUSE DVD was met with a burn error.

Wasted DVD Count: 1

Not wanting to risk installing from a faulty disc, I burnt it again. Same error. Out of boredom, I figured “what’s the worst that can happen?” and tried to install anyways. Needless to say, the installation failed about 3/4 of the way through, but Windows booted anyways, so I figured I’d be okay.

Wasted DVD Count: 2

My next step was to re-download the ISO, then try to burn the disc again from another computer. Shockingly, I encountered the same burn error. Since the last failed burn attempt didn’t completely ruin my system, I figured I’d try it again. Again I was met by disastrous failure, but this time, Windows would not boot.

Wasted DVD Count: 3

After using my Windows 7 RC disc to “repair Windows”, I finally got the system to boot. However, it took over 30 minutes from power on to functional desktop. Immediately I ran a disk defrag and scheduled a checkdisk, and went to bed.

The analysis alone for the defrag took around 4 hours [I know because I happened to wake up in the middle of the night and decided to go check it, and it was about 90% done]. In case you’ve never run a disk defrag, that’s WAY above normal. In the morning I ran the actual defrag, and it took about 2 hours. Once it finished, I rebooted to start the checkdisk – which hadn’t finished before I left for work 2 hours later. When I got home, 5.5 hours after I started the checkdisk, it was just finishing. In total, it took 6 hours. Windows now ran smoothly, but was lacking sound, and nothing I could do would make it work. So I re-installed Windows 7, and everything was back to the state it was in before I started trying to install openSUSE.

I decided to burn another copy of the openSUSE install disc, and ran the media check that’s included on the disc. Around 3/4 of the way through, the check failed. Running it on another machine yielded the same result.

Wasted DVD Count: 4

I decided to get an MD5 program to verify the integrity of the ISOs I downloaded. They both perfectly matched the MD5 provided on the openSUSE download page, so with few options left, I asked Tyler to download a copy of the ISO and burn it. Although there was a burn error in that process as well, I decided to run the media check on that DVD anyway. Surprisingly, it succeeded, and I proceeded to attempt to install openSUSE.
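For reference, checking an ISO on the Linux side is a one-liner; the filename below is made up, and the output simply has to match the checksum posted on the download page:

md5sum openSUSE-11.1-DVD-i586.iso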

One of the nice things about openSUSE is that it proposes either a partition-based or an LVM-based method for installing the OS. Usually, this involves shrinking the Windows partition and using the available space for Boot, Swap, Home, and Root partitions. Because of all the screwing around with hard drive partitions and disk fragmentation, openSUSE was unable to shrink my Windows partition to roughly 40 GB. Instead, I had to boot back into Windows 7, shrink the partition there, and then manually assign partitions from within the openSUSE installer. I ended up choosing to set aside 4GB for my Swap partition [2 * the amount of RAM I have], and to group Home, Root, and Boot into one partition with the remaining 26 GB.

So on Friday night [or Saturday morning], openSUSE finally booted, taking up 5 DVDs in the process. More to come on making openSUSE do my bidding.

Categories: God Damnit Linux, openSUSE, Phil D Tags: