I’m not even sure what to say about this one… it looks like I might have an angry video card.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.
Elementary OS is the latest darling of the Linux community at large, and with some good reason. It isn’t that Elementary OS is The. Best. Distro. Ever. In fact, being only at version 0.2, I doubt its own authors would try to make that claim. It does however bring something sorely needed to the Linux desktop – application focus.
Most distributions are put together in such a way as to make sure it works well enough for everyone that will end up using it. This is an admirable goal but one that often ends up falling short of greatness. Elementary OS seems to take a different approach, one that focuses on selecting applications that do the basics extremely well even if they don’t support all of those extra features. Take the aptly named (Maya) Calendar application. You know what it does? That’s right, calendar things.
Or the Geary e-mail client, another example of a beautiful application that just does the basics. So what if it doesn’t have all of the plugins that an application like Thunderbird does? It still lets you read and send e-mail in style.
Probably the best example of how far this refinement goes is in the music application Noise. Noise looks a lot like your standard iTunes-ish media player but that familiarity betrays the simplicity that Noise brings. As you may have guessed by now, it simply plays music and plays it well.
OK, I understand that this approach to application development isn’t for everyone. In fact it is something that larger players, such as Apple, get called out over all the time. Personally though, I think there is a fine balance to strike between streamlined simplicity and refinement. The Linux desktop has come a long way in the past few years, but one thing that is still missing from a large portion of it is the refined user experience that you do get with something like an Apple product, or with the applications selected for inclusion in Elementary OS. Too often open source projects happily jump ahead with new feature development long before the existing feature set is refined. To be clear, I don’t blame them – programming exciting new features is always more fun than fixing the old broken or cumbersome ones – although this is definitely one area where improvements could be made.
Perhaps other projects can (or will) take the approach that Elementary has and dedicate one release, every so often, to making these refinements reality. I’m thinking something like Ubuntu’s One Hundred Paper Cuts but on a smaller scale. In the meantime I will continue to enjoy the simplicity that Elementary OS is currently bringing my desktop Linux computing life.
Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:
To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:
As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.
In the KDE environment, sound is managed by a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends for ALSA and PulseAudio. It essentially communicates with the highest-level sound output system installed that it has support for. When you make a change in Phonon on a default Kubuntu install, it appears to talk to PulseAudio, which in turn changes the necessary ALSA settings. Sort of complicated, but I guess it handles the idea that multiple applications can play audio at the same time without tying up the sound card – which has not always been the case with Linux.
Coming from my experience with the GNOME and Unity interfaces, it has always seemed like KDE took its own path with audio that wasn’t exactly standard. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily the one corresponding to multi-channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.
Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
Subdevice #0: subdevice #0
and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.
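If you ever need to repeat this trial-and-error dance, the test commands can be generated rather than typed by hand. Here’s a rough sketch (not from the original troubleshooting session) that turns the HDMI lines of the `aplay -l` listing above into one `aplay` test command per output; the listing is embedded as sample input here, while on a real system you would pipe `aplay -l` in instead:

```shell
# Generate one speaker-test command per HDMI output found in the
# `aplay -l` listing, so each candidate device can be tried in turn.
cmds=$(awk '/device [0-9]+: HDMI/ {
    card = $2; sub(/:$/, "", card)
    for (i = 1; i <= NF; i++)
        if ($i == "device") { dev = $(i + 1); sub(/:$/, "", dev) }
    printf "aplay -D plughw:%s,%s /usr/share/sounds/alsa/Front_Center.wav\n", card, dev
}' <<'EOF'
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
EOF
)
echo "$cmds"
```

Running each printed command and listening for the test tone is exactly the trial-and-error process described above, just with less squinting at device numbers.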
At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output presented in Phonon was actually presented as card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools provided that showed the only possible “sink” was card 1,3.
The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, presenting me with the following list of possible devices read directly from ALSA. I bumped the “hw:1,7” card to the top and also quit the system tray version of Amarok.
Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.
At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.
Hello, everyone! It’s great to be back in the hot seat for this, our third installment of The Linux Experiment. I know that last time I caused a bit of a stir with my KDE-bashing post, so I will try to keep it relatively PG this time around.
Not many people know about it or have used it, but – through an employee purchase program about five years ago – I was able to get my hands on the HP EX470 MediaSmart Home Server. What manner of witchcraft is this particular device, you may ask? Here’s a photo preview:
It really is about as simple as it looks. The EX470 (stock) came equipped with a 500 GB drive, pre-loaded with Windows Home Server – which in turn was built on Windows Server 2003. 512 MB of RAM and an AMD Sempron 3400+ rounded it off; the device is completely headless, meaning that no monitor hookup is possible without a debug cable. The server also comes with four(?) USB ports, eSATA, and gigabit ethernet.
My current configuration is 3 x 1 TB drives, plus the original 500 GB, and an upgraded 2 GB DIMM. One of the things I’ve always loved about Windows Home Server is its ‘folder duplication’. Not merely content to RAID the drives together, Microsoft cooked up an idea to have each folder able to duplicate itself over to another drive in case of failure. It’s sort of like RAID 1, but without entirely-mirrored disks. Still, pretty solid redundancy.
Unfortunately for me, this feature was removed in the latest update to Windows Home Server 2011 – and support for that is even waning now, leading me to believe that patches for this OS may stop coming entirely within the next year or two. So, where does that leave me? I’m not keen to run a non-supported OS on this thing (it is internet-connected), so I’m definitely looking into alternatives.
Over the next few days, I plan to write about my upcoming ‘adventures’ in finding a suitable Linux-based alternative to Windows Home Server. Will I find one that sticks, or will I end up going with a Windows 8 Pro install? Only time will tell. Stay tuned!
Hello again everyone! By this point, I have successfully installed ArchLinux, as well as KDE, and various other everyday applications necessary for my desktop.
Aside from the issues I experienced with the bootloader, the installation was relatively straightforward. Since I have never used ArchLinux before, I decided to follow the Beginner’s Guide in order to make sure I wasn’t screwing anything up. The really nice thing about this guide is that it only gives you the information you need to get up and running. From there, you can add any packages you want and do any necessary customization.
Overall, the install was fairly uneventful. I also managed to install KDE, Firefox, Flash, and Netflix (more below) without any issues.
Some time ago, there was a package created for Ubuntu that allows you to watch Netflix on Linux. Since then, someone has created a package for ArchLinux called netflix-desktop. What this does is create an instance of Firefox in WINE that runs Silverlight so that Netflix videos can be loaded. The only issue I’m running into with this package is that when I full-screen a Netflix video, my taskbar in KDE still appears. For the time being, I’ve just set the taskbar to allow windows to go over top. If anyone has suggestions on how to resolve this, please let me know.
Back to a little more about ArchLinux specifically. I’ve really been enjoying their package management system. From my understanding so far, there are two main ways to obtain packages. The official repositories are backed by “pacman”, the main package manager; if you wanted to install KDE, you would do “pacman -S kde”. This is similar to package managers on other distributions, such as apt-get. The Arch User Repository (AUR) is a collection of build scripts created by ArchLinux users that let you compile and configure packages not contained in the official repositories. The really neat thing is that the build process can also download and install any dependencies from the official repositories using pacman automatically.
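To make the two routes concrete, here’s a rough sketch of each. The pacman line is the one mentioned above; the AUR steps use netflix-desktop as the example package, and the download URL follows the AUR’s tarball layout at the time of writing, so double-check it against the package’s actual AUR page before relying on it:

```shell
# Official repositories: pacman fetches the package and resolves
# dependencies automatically
sudo pacman -S kde

# AUR: download the build script, then build and install the package.
# `makepkg -s` pulls in any official-repo dependencies via pacman;
# `-i` installs the built package once it compiles successfully.
curl -O https://aur.archlinux.org/packages/ne/netflix-desktop/netflix-desktop.tar.gz
tar xzf netflix-desktop.tar.gz
cd netflix-desktop
makepkg -si
```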
As I go forward, I am also thinking of ways I can contribute to the ArchLinux community, but for now, I will continue to explore and experiment.
The machine I am running Kubuntu on is primarily used for streaming media like Netflix and Youtube, watching files off of a shared server and downloading media.
Again, I resorted to Googling exactly what I am looking for and came across this fantastic post.
I opened a Terminal instance in Kubuntu and literally copied and pasted the text from the link above.
After going through these motions, I had a functioning instance of Netflix! Woo hoo.
So I decided to throw on an episode of Orange Is the New Black. It loaded perfectly… without sound.
Well shit! I never even thought to see if my audio driver had been picked up… so I guess I should probably go ahead and fix that.
Back to my shit-tastic eyesight for a moment.
Now that we have our Bluetooth devices installed, I can sit in front of my projector, instead of in the closet, to fiddle with the font scaling.
We will want to go through the process of pulling up the System Settings again. Why don’t we refer to this image… again.
The next step is to select Application Appearance, which looks like this.
This will bring you into this menu where you will select Fonts from the toolbar on the left hand side.
In the next screen you can change the font settings. There is a nice option in here that you can select to change all the fonts at once… spoiler, it is called “Adjust all fonts”. This is what I used to change the fonts to a size that my blind ass could see from the couch without squinting too much.
You can also force font DPI and select anti-aliasing, as you can see below. For the most part, this has made it possible for me to see what the hell is going on on my screen.
For my next adventure, I will be trying to get Netflix to work. Which I have heard is actually pretty simple.
This is actually a much easier process than I imagined it would be.
First: Ensure your devices (mouse, headphones, keyboard, etc…) are charged and turned on.
Next click on the “Start” menu icon in the bottom left of the desktop screen.
Then click on the “Computer” icon along the bottom, followed by System Settings.
This will take you into the System Settings folder where you can change many things. Here we will select Bluetooth, since that is the type of device you want to install.
Like most Bluetooth devices, mine have a red “Connect” button on the bottom. Ignore the sweet, sweet compulsion to press that button. I’m convinced it is nearly useless. Instead, use the “Add devices” method, as seen here.
Now, if you followed my first instruction (charge and turn on your Bluetooth devices) you should see them appear in this menu. Select the item you would like to add and click next. This will prompt you to enter a PIN on the device you wish to install (if installing a keyboard), or it will just add your device. If you have done this process successfully, your device will show up in the device menu. If it does not, you fucked up.
With the third edition of The Linux Experiment already underway, I decided to get my new laptop set up with an Ubuntu partition to work with over the next few months. A little while back, I purchased this laptop with intent to use it as a gaming rig. It shipped with Windows 8, which was a serious pain in the ass to get used to. Now that I’ve dealt with that and have Steam and Origin set up on the Windows partition, it’s time to make this my primary machine and start taking advantage of the power under its hood by dual-booting an Ubuntu partition for development and experiment work.
I started my adventure by downloading an ISO of the latest release of Ubuntu – at the time of this writing, that’s 13.04. Because my new laptop has UEFI instead of BIOS, I made sure to grab the x64 version of the distribution.
After using Ubuntu’s Startup Disk Creator to create a bootable USB stick, I started my first adventure – figuring out how to get the IdeaPad to boot from USB. A bit of quick googling told me that the trick was to alternately tap F10 and F12 during the boot sequence. This brought up a boot menu that allowed me to select the USB stick.
Once Ubuntu had booted off of the USB stick, I opened up GParted and went about making some space for my new operating system. The process was straightforward – I selected the largest existing partition (it also helped that it was labelled WINDOWS_OS), and split it in half. My only mistake in this process was to choose to put the new partition in front of the existing partition on the drive. Because of this, GParted had to copy all of the data on the Windows partition to a new physical location on the hard drive, a process that took about three hours.
With my hard drive appropriately partitioned, it was time to install the operating system. The modern Ubuntu installer pretty much takes care of everything, even going so far as selecting an appropriate space to use on the hard drive. I simply told it to install alongside the existing Windows partition, and let it take care of the details.
The installer finished its business in short order, and I restarted the machine. Ubuntu booted with no issues, but my Windows 8 partition refused to cooperate. It would seem as though something that the installer did wasn’t getting along well with UEFI/SecureBoot. Upon attempting to boot Windows, I got the following message:
error: Secure Boot forbids loading module from (hd0,gpt8)/boot/grub/x86_64-efi/ntfs.mod.
error: failure reading sector 0x0 from ‘cd0’
error: no such device: 0030DA4030DA3C7A
error: can’t find command ‘drivemap’
error: invalid EFI file path
Press any key to continue…
Like I said, I could boot Ubuntu, so I headed on over to their website and read their page on UEFI. At first glance, it seemed as though I had done everything correctly. The only place that I deviated from these instructions was in manually resizing my Windows partition to create space for my new Ubuntu partition.
Thinking that I might be experiencing troubles with my boot partition, I took a shot at running Ubuntu’s Boot-Repair utility. It seemed to do something, but upon restarting the machine, I found that I had even more problems – now a Master Boot Record wasn’t found at all:
After dismissing the boot device error, I was prompted to choose which device to boot from. I chose to boot Windows’ UEFI repair partition, and was (luckily) able to get to a desktop. Unfortunately, none of the other partitions on the device seem to work, so I’m back where I started – except that now, in addition to having to put up with Windows 8, I also have a broken master boot record.
Lenovo: 1 / Jon: 0.
Unlike many people who may be installing a version of Linux, I am doing so on a machine that has a projector with a 92″ screen as its main display.
So, upon initial installation of Kubuntu, I couldn’t see ANY of the text on the desktop – it was itty bitty.
In order to fix this, I had to hook up an additional display.
Thankfully, living in a house with a computer guru, I had many to choose from.
In order to get my secondary display to appear, I had to first plug it into the display port on the machine I am using. I then had to turn off the current display (projector) and reboot the machine so that it would initialize the use of my new monitor.
Sounds easy enough, and it was, albeit with some gentle guidance from Jake B.
From here, I am able to properly configure my display.
The thing I am enjoying most about Kubuntu so far is that it is very user-friendly – it is almost intuitive where each setting can be found in the menus.
So these are the steps I followed to change my display configuration.
I went into Menu > Computer > System Settings
Once you get into the System Settings folder, you have the option to change a lot of things. For example, your display resolution.
Now that you are in this menu, you will want to select Display and Monitor from the options. Here you can set your resolution, monitor priority, mirroring, and multiple displays. Since I will only be using this display on the Projector, I ensured that the resolution was set so that I could read the text properly on the Projector Screen. Before disabling my secondary monitor, I also set up my Bluetooth keyboard and mouse, which I will talk about in another post.
This process only took a few moments. I will still have to tweak the font scaling, as I have shit-tastic eyesight.
Greetings everyone! It has been quite some time since my last post. As you’ll be able to read from my profile (and signature), I have decided to run ArchLinux for the upcoming experiment. As of yet, I’m not sure what my contributions to the community will be; however, there will be more on that later.
One of the interesting things I wanted to try this time around was to get Linux to boot from the Windows 7 bootloader. The basic principle here is to take the first 512 bytes of your /boot partition (with GRUB installed) and place them on your C:\ drive as linux.bin. From there, you use BCDEdit in Windows to add that file to the bootloader. When you boot Windows, you will be prompted to start either Windows 7 or Linux; if you choose Linux, GRUB will be launched.
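For reference, the recipe boils down to something like the following sketch (it never worked in my setup, remember – and the partition name here is an assumption you’d replace with your own /boot partition, while {GUID} is whatever identifier bcdedit prints when you create the entry):

```shell
# On the Linux side: grab the first 512 bytes of the /boot partition
# (assumed here to be /dev/sda3 -- check yours with `lsblk` or
# `fdisk -l`), then copy linux.bin over to C:\ however you like
sudo dd if=/dev/sda3 of=linux.bin bs=512 count=1

# On the Windows side, from an elevated Command Prompt:
#   bcdedit /create /d "Linux" /application BOOTSECTOR
#   bcdedit /set {GUID} device partition=C:
#   bcdedit /set {GUID} path \linux.bin
#   bcdedit /displayorder {GUID} /addlast
```

Note that every time GRUB is updated the first 512 bytes of the partition can change, so the dd-and-copy step would have to be repeated – one more reason plain GRUB-in-the-MBR is the saner option.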
Before I go into my experience, I just wanted to let you know that I was not able to get it working. It’s not that it isn’t possible, but for the sake of being able to boot into ArchLinux at some point during the experiment, I decided to install GRUB to the MBR and chainload the Windows bootloader.
I started off with this article from the ArchLinux wiki, which basically explains the process above in more detail. What I failed to realize was that the article was meant for when both OSes are on the same disk. In my case, I have Windows running on one disk, and Linux on another.
According to this article on Eric Hameleers’ blog, the Windows 7 Bootloader does not play well with loading operating systems that reside on a different disk. Eric goes into a workaround for this in the article. The proposed solution is to have your /boot partition reside on the same disk as Windows. This way, the second stage of GRUB will be properly loaded, and GRUB will handle the rest properly.
Although I could attempt the above, I don’t really want to be resizing my Windows partition at this point, and it will be much easier for me to install GRUB to the MBR on my Linux disk and have that disk boot first. That way, if I decide to get rid of Linux later, I can change the boot order, and the Windows bootloader will have remained untouched.
Besides, while I was investigating this approach, I received a lot of ridicule from #archlinux for trying to use the Windows bootloader.
09:49 < AngryArchLinuxUser555> uhm, first 512bytes of /boot is pretty useless
09:49 < AngryArchLinuxUser555> unless you are doing retarded things like not having grub in mbr
(username changed for privacy)
For the record, I was not attempting this because I think it’s a good idea. I do much prefer using GRUB, however, this was FOR SCIENCE!
If I ever do manage to boot into ArchLinux, I will be sure to write another post.
Today I started out by going into work, only to discover that it is NEXT Friday that I need to cover.
So I came home and decided to get a jump start on installing Kubuntu.
I am now at a screeching halt because the hardware I am using has Win8 installed on it and when I boot into the Start Up settings, I lose the ability to use my keyboard. This is going swimmingly.
So, it is NOW about 3 hours later.
In this time, I have cursed, yelled, felt exasperated and been downright pissed.
This is mainly because Windows 8 does not make it easy to get to the boot loader. In fact, the handy Windows-made video that is supposed to walk you through how EASY and user-friendly the process of changing system settings is fails to mention what to do if the “Use a Device” option is nowhere to be found (as it was in my case).
So I relied on Google, which is usually pretty good about answering questions about stupid computer issues. I FINALLY came across one post stating that, due to how quickly Windows 8 boots, there is no time to press F2 or F8. However, I tried anyway. F8 is the key for selecting what device you want to boot from, as you will see later in this post.
If you are installing any version of Linux, what you will want to do first is format a USB stick to hold your Linux distro. I used Universal USB Loader. The nice thing about this loader is that you don’t have to have the .iso for the distro you want already downloaded – you have the option of downloading it right in the program.
After you have selected your distro, downloaded the .iso and loaded it onto your USB stick, now comes the fun part. Plug your USB stick into the computer you wish to load Linux onto.
Considering how easy this was once I figured it all out, I do feel rather silly. If I were to have to do it again, I would feel much more knowledgeable.
If you are using balls-ass Windows 8, like I was, the EASIEST way to select an alternate device to boot from is to restart the computer and press F8 a billion times until a menu pops up, letting you choose from multiple devices. Choose the device with the name of the USB stick, for me it was PENDRIVE.
You will need to press Enter from a keyboard that is attached directly to the computer via USB cable, because apparently Win8 loses the ability to use wireless USB devices before the OS has fully booted… at least that was my experience.
So now, I am being prompted to install Kubuntu (good news, I already know it supports my projector, because I can see this happening).
Now, I have had to plug in a USB wired keyboard and mouse for this process so far. This makes life a little bit difficult because the computer I am using sits in a closet, too far away from my projector screen. This makes it almost impossible for me to see what is going on on the screen, so installing the drivers for my wireless USB devices is a bit of a pain.
However, the hard part is over. The OS is installed successfully. My next post will detail how the hell to install wireless USB devices. I will probably also make a fancy signature, so you all know what I am running.
So it is 9:40 PM and I started my “Find a Linux distro to install” process. Like many people, I decided to type exactly what I wanted to search into Google. Literally, I typed “Linux Distro Chooser” into Google. Complex and requiring great technical skill, I know.
My next mission was to pick the site that had a description with the least amount of “sketch”. Meaning, I picked the first site in the Google results. I then used my well honed multiple choice skills (ignore the question, pick B) to find my perfect Linux distro match.
After several pages of clicking through, I was presented with a list of Linux distributions that fit my needs and hardware.
See, a nice list, with percents and everything.
So naturally, I do what everyone does with lists… look at my options and pick the one with the prettiest picture.
For me that distro was Kubuntu. It has a cool sounding name that starts with the same letter as my name.
So I follow the link through to the website to pull the .iso and this pops up.
I have dealt with Drupal before, as it was the platform the website I did data entry for was built on. Needless to say, I hate it. Hey Web Dev with Trev, if you are out there, I hope you burn your toast the next time you make some.
So, to be productive while waiting for Drupal to fix its shit, I decided to start a post and rant. In the time this took, the website for Kubuntu recovered (for now).
So, I downloaded my .iso and am ready to move it onto a USB stick.
I’m debating whether I want to install it now or later, as I would really like to watch some West Wing tonight. I know that if I start this process and fuck it up, I am going to be forced to move upstairs where there is another TV, but it is small.
Well, here I go, we’ll see how long it takes me to install it. If you are reading this, go ahead and time me… it may be a while.
I recently picked up a cheap Samsung laser printer and decided to give the Samsung Unified Linux Driver Repository a shot while installing it. Basically the SULDR is a repository you add to your /etc/apt/sources.list file, which allows you to install one of their driver management applications. Once that is installed, any time you go to hook up a new printer the management application automatically searches the repository – full of the official Samsung printer drivers – finds the correct one for you, and installs it. Needless to say I didn’t have any problems getting this printer to work on Linux!
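For the curious, the setup amounts to something like this sketch. The repository line and package names are as I remember the SULDR documenting them, so treat both as assumptions and verify them against the SULDR site’s own instructions before running anything:

```shell
# Add the SULDR repository to apt's sources and refresh the package lists
echo "deb http://www.bchemnet.com/suldr/ debian extra" | sudo tee -a /etc/apt/sources.list
sudo apt-get update

# Install the repository keyring and a driver management package;
# plugging in the printer afterwards triggers the automatic driver
# lookup described above
sudo apt-get install suldr-keyring suld-driver
```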
It’s that time again: I install the major, full-desktop distributions onto a limited-hardware machine and report on how they perform. Like before, I’ve decided to re-run my previous tests, this time using the following distributions:
I even happened to have a Windows 7 (64-bit) VM lying around and, while I think you would be a fool to run a 64-bit OS on the limited test hardware, I’ve included it as a sort of benchmark.
All of the tests were done within VirtualBox on ‘machines’ with the following specifications:
The tests were all done using VirtualBox 4.2.16, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between July 1st, 2013 and July 5th, 2013 so your results may not be identical.
Just as before, I have compiled a series of bar graphs to show you how each installation stacks up against the others. This time around, however, I’ve changed how things are measured slightly in order to be more accurate. Measurements (on Linux) were taken using the free -m command for memory and the df -h command for disk usage. On Windows I used Task Manager and Windows Explorer.
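In case you want to replicate the memory measurement, here’s a small sketch of how I read the free -m numbers: the figure of interest is the ‘used’ column of the ‘-/+ buffers/cache’ row, which excludes disk cache. The sample output below is illustrative (procps format as shipped circa 2013, not from an actual test machine); on a live system you would pipe `free -m` in directly:

```shell
# Extract application-level RAM usage (in MB) from `free -m` output:
# the 'used' column of the '-/+ buffers/cache' row
used_mb=$(awk '/^-\/\+ buffers\/cache:/ { print $3 }' <<'EOF'
             total       used       free     shared    buffers     cached
Mem:           996        842        154          0         31        410
-/+ buffers/cache:        400        596
Swap:         1023          0       1023
EOF
)
echo "Used RAM (MB): $used_mb"
```

The ‘Mem:’ row’s used figure (842 here) would overstate things, since Linux happily fills otherwise-idle RAM with buffers and cache.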
In addition, this will be the first time I provide the results file as a download, so you can see exactly what the numbers were or create your own custom comparisons (see below for link).
Things to know before looking at the graphs
First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or because I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, some distributions don’t appear on all of the graphs; for example, because I was using an existing Windows 7 VM, I didn’t have a ‘first boot’ result for it. As always, feel free to run your own tests. Thirdly, you may be asking yourself ‘why do Fedora 18 and 19 both make the list?’ Basically because I had already run the tests for 18 when 19 happened to be released. Finally, Fedora 19 (GNOME), while included, does not have any data because I simply could not get it to install.
First boot memory (RAM) usage
This test was measured on the first startup after finishing a fresh install.
Memory (RAM) usage after updates
This test was performed after all updates were installed and a reboot was performed.
Memory (RAM) usage change after updates
The net growth or decline in RAM usage after applying all of the updates.
Install size after updates
The hard drive space used by the distribution after applying all of the updates.
Once again I will leave the conclusions to you. This time however, as promised above, I will provide my source data for you to download.
I have to start by first admitting that I’ve actually run Amarok once or twice in the past, but sadly could never really figure it out. This always bothered me because people who can figure it out seem to love it. So I made it my mission this time around to really dig into the application to see what all the noise was about (poor pun intended).
Starting with the navigation pane on the left hand side of the screen I drilled down into my Local Music collection. For the purposes of testing I just threw two albums in my Music folder.
Double clicking Local Music opens up a view into your Music folder that lets you play songs or search through your artists and albums.
When you play a song the main portion in the center of the application changes to give you a ton of information about that track.
This is actually a pretty neat feature, but it has the downside that it’s not always correct. For instance, when I started playing the above song by the 90s band Fuel, I ended up being shown the Wikipedia page about fuel (i.e. an energy source) and not the correct page about the band.
Placing a CD in the computer causes it to appear under Local Media (although in a different section). Importing tracks was very straightforward: simply right-click on the CD and choose Copy to Collection -> Local Collection. You then get to pick your encoding options (which you can customize deeply to fit your needs).
For Internet media, Amarok comes loaded with a number of sources, including streaming radio stations, Jamendo, Last.fm, Librivox.org, Magnatune.com, Amazon’s MP3 store and a podcast directory. As with most other media, Amarok also tries to display relevant information about whatever you’re listening to.
There are loads of other features in Amarok, from its excellent playlist support to loads of expandable plugins, but writing about all of them would take all day. Instead I will wrap up here with a few final thoughts.
Is Amarok the best media manager ever made? To some, maybe, but I still find its interface a bit too clunky for my liking. I also noticed that it tends to take up quite a bit of RAM (~220MB currently), which puts it on the beefier side of the media manager resource usage spectrum. The amount of information it presents about what you’re currently listening to is impressive, but often when I’m listening to music it’s as a background activity. I don’t foresee a situation where I would actively watch Amarok in order to benefit from its full potential as a way to ‘rediscover my music’. Still, if only for its deep integration with the KDE desktop, I say give it a try and see if it works for you.
KTorrent represents KDE’s take on what a BitTorrent client should be. It presents a relatively standard interface that reminds me a lot of other fully featured BitTorrent clients such as uTorrent and Deluge.
Being a KDE application it is also one of the most customizable BitTorrent clients out there, although not to the scale of some of the advanced menus seen in Vuze. It allows you to customize various options, including things like encryption, queuing and bandwidth usage. It also benefits from using a bunch of shared KDE libraries; when I checked its memory usage it was sitting at a respectable 16MB, which makes it not the leanest client, but certainly not the heaviest either.
Similar to Deluge, KTorrent supports a wide array of plugins that let you really tailor the program to your needs. In my testing I didn’t notice a way to browse for new plugins from within the application, but I’m sure there are ways to add them elsewhere.
I have to admit that I went into this article expecting to have a lot more to say about this application, but the bottom line is this: it does exactly what you expect. If you need to download torrent files then KTorrent might be for you, and not just if you’re running KDE either. Perhaps it’s because KTorrent covers the bases so well, but I actually can’t think of anything that I dislike about it. It’s a solid application that serves a single purpose, and what’s not to love about that?
Continuing where I left off last time, I decided my next order of business would be to set up my e-mail accounts and calendar. KDE provides a number of different, more-or-less single-purpose applications to handle all of your personal information management. For example, e-mail is handled by KMail, RSS feeds are pulled in via Akregator, calendars are maintained through KOrganizer, etc. Each of these applications could easily be reviewed on its own; however, there is yet another application provided in KDE, Kontact, that unifies all of these distinct programs into one. For the purposes of this article I will be treating all of these as part of Kontact as a whole, but will still try to focus on each individual component where needed.
The first time you start Kontact it automatically starts an “Account Assistant” wizard that walks you through setting up your e-mail accounts. This brings me to the first embedded application: KMail.
The first item below Summary on the left-hand side of Kontact is Mail, which makes it, in my opinion, the showcase application of Kontact.
Mail is actually powered by the KMail application, which at this point is very mature and fully featured. Setting up an e-mail account is relatively straightforward, although I do take issue with some of the default settings. While some are personal preference (for example, I prefer to start my e-mail reply above the quote instead of below it), others are just plain strange. For instance, by default KMail won’t display HTML e-mails, only plain text, supposedly in the name of security. Insecure or not, I think the consensus says HTML is the way forward.
Following standard KDE tradition, KMail is crammed full of customization and configuration possibilities. Remember that reply above/below the quote setting I mentioned above? In most other e-mail clients this is a simple combobox or switch; in KMail, however, you can configure everything from the location of the quote to the position of the cursor.
KMail also takes spam filtering and anti-virus to a whole new level. You have your choice of any compatible spam or anti-virus applications installed on your system (i.e. SpamAssassin, ClamAV, etc.), which gives you some flexibility if you find that one works better for you than another.
Next up is Contacts, this time powered by KAddressBook.
This is a pretty straightforward application and so I don’t have much to say about it other than it allows you to store a lot of information about a given person (from regular details like e-mail and websites to location and OpenPGP keys). It even generates a fancy little QR code for your contacts.
For Calendar/To-do List/Journal functionality Kontact makes use of the KOrganizer application. Like KAddressBook this program functions exactly as expected which is not a bad thing. You can create events, send e-mail invitations and get alerts. It supports multiple calendars and is very functional.
The journal feature is kind of neat but I’m not sure who would actually make use of it on a regular basis. Perhaps I’m not the target market for it.
If RSS feeds are your thing, look no further than Akregator. I personally don’t use RSS feeds all that much, but I know those who do are very attached to them. Add to that the recent shutdown of Google Reader, and this might just be your cup of tea.
As RSS readers go this one is also full of options. You can even configure a sharing service, such as Twitter or Identi.ca, if you happen to stumble across an article that you wish to spread.
Last on the list is Popup Notes powered by KNotes. This is basically a sticky note application that lets you jot down little random thoughts or reminders. There isn’t a whole lot to this one.
So how does Kontact stand up at the end of the day? I like it. It does an effective job at unifying all of the different features you may need without making you feel like you need to pay attention to any one of them. In my use case I mainly stick to e-mail and calendar but in my limited time playing around with Kontact I have very few complaints.
Is it better than the alternatives like Thunderbird or Evolution? In some ways absolutely, in others there is still some work to be done. Outside of mail, calendar and RSS feeds the remaining functionality feels a bit lackluster or, at worst, simply there to round off some feature list bullet point. Thankfully this is something that could be easily remedied with a bit more attention and polish.
Give Kontact a try and let me know what you think in the comments.
It’s been a while since I’ve used KDE, however with the recent rapid (and not always welcome) changes going on in the other two main desktop environments (GNOME 3 and Unity) and the, in my opinion, feature stagnation of environments like Xfce and LXDE I decided to give KDE another shot.
My goal this time is to write up a series of quick reviews of KDE as presented as an overall user experience. That means I will try to stick to the default applications for getting my work done. Obviously, depending on the distribution you choose you may have a different set of default KDE applications, and that’s fine. So before you ask: no, I won’t be doing another write-up for KDE distribution X just because you think it’s ‘way better for including A instead of B’. I’m also going to try not to cover what I consider more trivial things (i.e. the installer/installation process) and instead focus on what counts when it comes to using an operating system day-to-day.
The default web browser in the distribution I chose is not Konqueror but rather its WebKit cousin Rekonq. Where Konqueror uses KHTML by default and WebKit as an option, Rekonq sticks to the more conventional rendering engine used by Safari and Chrome.
Rekonq is a very minimalistic looking browser to the point where I often thought I accidentally started up Chrome instead.
From my time using it, Rekonq seems to be a capable browser, although it is certainly not the speediest, nor does it sport any features that I couldn’t find elsewhere. One thing it does do very nicely is integrate into the rest of the KDE desktop. This means that the first time you visit YouTube or some other Flash website, you get a nice little prompt in the system tray alerting you of the option to install new plugins. If you choose to install the plugin, a little window appears telling you what it is downloading and installing for you, completely automatically. No need to visit a vendor’s website or go plugin hunting online.
Like most other KDE applications Rekonq also allows for quite a bit of customization, although I found its menus to be very straightforward and not nearly as intimidating as some other applications.
I did notice a couple of strange things while working with Rekonq that I should probably mention. First, while typing into a WordPress edit window, none of the shortcut keys (i.e. Ctrl+B = bold) seemed to work. I also found that I couldn’t perform a Shift+Arrow Key selection of text, instead having to use Ctrl+Shift+Arrow Key, which highlights an entire word at a time. I’m not sure what other websites may suffer from similar irregularities, so while Rekonq is a fine browser in its own right, you may want to keep another one around just in case.
While I haven’t found any real show-stoppers with Rekonq, I still can’t shake the feeling that I’m missing something. I don’t know how to describe it other than I think I would feel safer using a more mainstream web browser like Firefox, Chrome or even Opera. But like any software, your experience may vary and I would certainly never recommend against trying Rekonq (or even Konqueror). Who knows, you may find out that it is your new favorite web browser.
I’ve had a few nasty experiences this week with Linux and figured I’d vent here. Unlike my previous efforts with Linux From Scratch and Gentoo, my complaints this time around are related to upgrading Ubuntu.
At this point the current Ubuntu LTS release (12.04) is my preferred distribution to work with: it has become widespread enough that troubleshooting and previous solutions online are easy to locate. In a professional capacity, I also maintain systems that are still on 8.04 LTS (supported until April 2013, so we have to be pretty aggressive about replacing them) or 10.04 LTS (good until April 2015).
I attempted two upgrades from 10.04 to 12.04 this week: one 10.04 LTS “desktop” installation, and one 10.04 LTS headless server installation. Both were virtual machines running under VMware ESXi, but neither had given me any trouble during normal use.
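For anyone following along at home, the LTS-to-LTS jump is controlled by a one-line setting in the release upgrader’s config file; with it in place, running sudo do-release-upgrade on a 10.04 box offers 12.04 directly. A minimal sketch (this is the stock location of the file on Ubuntu):

```ini
# /etc/update-manager/release-upgrades
# Prompt=lts makes do-release-upgrade offer only LTS releases,
# i.e. the 10.04 -> 12.04 jump described here.
[DEFAULT]
Prompt=lts
```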
Canonical’s updater process (the wrapper around dist-upgrade) appears to be pretty slick; it gives you appropriate warnings, attempts to start an SSH daemon as a fallback mechanism and starts on its merry way downloading the necessary packages to bring your system completely up to date. On my 10.04 desktop VM, however, the installer fell apart completely during the package replacement/removal/installation sequence. I was left with two nasty message boxes: one advising that my system was now in a broken state, and another consisting entirely of rectangular, unprintable characters.
To put it bluntly, I was not amused, but it wasn’t a critical system and I was content to replace it with a fresh 12.04 installation rather than waste additional time troubleshooting with apt or dpkg. Strike one for the upgrader.
Next on the upgrade schedule was the 10.04 server VM. Install, package replacement and reboot went fine, but I had several custom PPAs installed to support development of XenonMKV (Github page) – specifically ppa:krull/deadsnakes, which adds Python 2.7 to Ubuntu 10.04.
For some reason, though, I’d gotten it into my head this evening to check out Mezzanine as a potential WordPress replacement. Mezzanine uses Django, a Python Web framework, and the list of supported features is pretty encompassing.
One of the most irritating things from a system administration point of view is getting Web applications to run in a standard server environment: typically a Linux base system with Apache or nginx serving content. I suppose I’ve been spoiled by how easy it is to get PHP-based sites up and running in that configuration these days by adding an Apache module through apt. A lot of new Web app frameworks ship with their own small webservers for development and testing, but their creators generally recommend that, when you’re ready to put your site live, you run the product under a well-known Web or application server.
The Django folks recommend using mod_wsgi in their documentation, which in and of itself really just says “RTFM for mod_wsgi and then you’ll have a much better idea of how to do this.” I had to go poking around on Google for the installation article since there are some broken links, but okay: it’s an Apache module with a small bit of configuration (even though a simple walkthrough in the Django documentation would go a long way toward making deployment easier). This is where I ran into my dependency problem with the old Ubuntu 10.04 PPA.
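For the record, once the module is installed, the Apache side really is only a few directives. Here is a hypothetical sketch of a vhost fragment; the /srv/mezzanine paths and the “mezzanine” daemon process name are placeholders, not from my actual setup:

```apache
# Hypothetical mod_wsgi vhost fragment; the paths and the
# daemon process name are placeholder values.
WSGIDaemonProcess mezzanine python-path=/srv/mezzanine
WSGIProcessGroup mezzanine
WSGIScriptAlias / /srv/mezzanine/project/wsgi.py
```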
Running the suggested command, sudo apt-get install libapache2-mod-wsgi, I got the following:
The following packages have unmet dependencies:
libapache2-mod-wsgi : Depends: libpython2.7 (>= 2.7) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Backtracking, I then found out why the library wasn’t going to get installed:
The following packages have unmet dependencies:
libpython2.7 : Depends: python2.7 (= 2.7.3-0ubuntu3.1) but 2.7.3-2+lucid1 is to be installed
Aha! The Python installation from the PPA for Lucid – 10.04 – was installed and acting as the 2.7 package. Since the newly-upgraded Ubuntu 12.04 uses Python 2.7 as a dependency for a good portion of the default applications, I couldn’t just purge or uninstall it, and my attempts to force a reinstallation all ended in:
Reinstallation of python2.7 is not possible, since it cannot be downloaded.
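The underlying issue is Debian version ordering: the PPA’s revision string sorts higher than the official precise build, so apt treats the fix as a forbidden downgrade. You can confirm the ordering with dpkg itself (assuming any Debian-based system):

```shell
# Debian revision "2+lucid1" sorts above "0ubuntu3.1", so the PPA build
# wins the comparison and apt refuses to replace it with the exact
# version that libpython2.7's dependency demands.
if dpkg --compare-versions "2.7.3-2+lucid1" gt "2.7.3-0ubuntu3.1"; then
    echo "PPA build sorts higher"
fi
```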
At this point it looks like I’ll have to rebuild the server VM as well, but if any readers have any bright ideas on fixing this dependency hell – please comment with your suggestions!
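One idea I haven’t fully tried myself yet: apt pinning can force a downgrade back to the archive packages. In theory, a preferences file like the one below (the file name is hypothetical, and a Pin-Priority above 1000 is what permits downgrades), followed by sudo apt-get install python2.7 libpython2.7, should walk both packages back to the official precise builds:

```
# /etc/apt/preferences.d/prefer-archive-python (hypothetical file name)
# A Pin-Priority over 1000 allows apt to downgrade to the pinned version.
Package: python2.7
Pin: release o=Ubuntu
Pin-Priority: 1001

Package: libpython2.7
Pin: release o=Ubuntu
Pin-Priority: 1001
```

If that works it would save rebuilding the VM entirely; if you’ve tried it, let me know how it went.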