I’m not even sure what to say about this one… it looks like I might have an angry video card.
I’ll have to admit that during the previous week or so, I haven’t been able to exclusively use FreeBSD at home or Linux as a workstation in the office. Kayla and I have still been using Kubuntu (now with improved PulseAudio support) on the basement machine, and that’s been working quite well for both Netflix and media files stored over NFS. Even Dragon Player, the default KDE association for .avi and .mkv files, is quite reasonable for lightweight playback and has given us no issues.
It’s a combination of things that have contributed to my slide back to Windows/OS X. First has been time at the office. When there are several urgent projects, it’s significantly easier to use the tools and infrastructure that are already set up on a Windows partition by virtue of Group Policy or already-existing tools. Wasting several hours because you can’t access a DFS share that would take one click from Outlook on Windows is unproductive.
What’s more, my choice of Arch Linux meant that there were several “rough around the edges” spots where I was missing packages or things just weren’t as polished as something like Fedora or Ubuntu. Font smoothing, for example, wasn’t quite what I was expecting and replacing/editing complicated XML files was going to be very frustrating. Arch seems very powerful and customizable, but that’s not something I can justify when there is a corporate-provided Ubuntu image available for install at the office.
FreeBSD has been fairly standard, to say the least. It supports the usual assortment of desktop applications, but the missing 20% of things that I do under Windows start to really show after a few weeks. My large Steam library sitting on another drive becomes almost worthless, and tasks such as scanning a document are also painful – Brother, for example, does a great job of shipping OS X and Linux drivers (as .deb and .rpm packages), but when it comes down to just needing a PDF, it’s way easier to grab the nearest Windows laptop and get things done.
What am I going to try next? After some review, I will be installing PC-BSD 9.1 (based on FreeBSD) and seeing if there’s a more polished experience available out of the box. I’m also going to be reviewing and polishing some of my GitHub-hosted scripts for BSD compatibility.
Lately I’ve been taking a look at the various open source software licenses in an attempt to better understand the differences between them. Here is my five minute summary of the most popular licenses:
GPL
Requires that any project using a GPL-licensed component must also be made available under the GPL. Basically once you go GPL you can’t go back.
LGPL
Basically the same as the GPL, except that software which merely uses an LGPL-licensed component doesn’t itself need to be licensed the same way. So if you write a program that uses an LGPL library, say a program with a GTK+ user interface, it doesn’t need to be licensed LGPL. This is useful for commercial applications that rely on open source technology.
v2 vs v3
There are a number of differences between version 2 and version 3 of the GPL and LGPL licenses. Version 3 attempts to clarify a number of issues in version 2 including how patents, DRM, etc. are handled but a number of developers don’t seem to like the differences so version 2 is still quite popular.
MIT
This license allows for almost anything as long as a copy of the license and copyright are included in any distribution of the code. It can be used in commercial software without issue.
BSD
Similar to the MIT license, this one basically only requires that a copy of the license and copyright be included in any distribution of the code. The major difference between this and the MIT is that the 3-clause BSD (BSD3) prohibits the use of the copyright holder’s name in any promotion of derivative works.
Apache
Apache is similar to the BSD license in that you have to provide a copy of the license in any derivative works. In addition there are a number of extra safeguards, such as patent grants, that set it apart from BSD.
Elementary OS is the latest darling of the Linux community at large, and with some good reason. It isn’t that Elementary OS is The. Best. Distro. Ever. In fact, being only at version 0.2, I doubt its own authors would try to make that claim. It does, however, bring something sorely needed to the Linux desktop – application focus.
Most distributions are put together in such a way as to make sure it works well enough for everyone that will end up using it. This is an admirable goal but one that often ends up falling short of greatness. Elementary OS seems to take a different approach, one that focuses on selecting applications that do the basics extremely well even if they don’t support all of those extra features. Take the aptly named (Maya) Calendar application. You know what it does? That’s right, calendar things.
Or the Geary e-mail client, another example of a beautiful application that just does the basics. So what if it doesn’t have all of the plugins that an application like Thunderbird does? It still lets you read and send e-mail in style.
Probably the best example of how far this refinement goes is the music application Noise. Noise looks a lot like your standard iTunes-ish media player, but that familiarity belies the simplicity that Noise brings. As you may have guessed by now, it simply plays music and plays it well.
OK, I understand that this approach to application development isn’t for everyone. In fact it is something that larger players, such as Apple, get called out over all the time. Personally though, I think there is a fine balance between streamlined simplicity and refinement. The Linux desktop has come a long way in the past few years, but one thing that is still missing from a large portion of it is the refined user experience that you do get with something like an Apple product, or the applications selected for inclusion in Elementary OS. Too often open source projects happily jump ahead with new feature development long before the existing feature set is refined. To be clear, I don’t blame them – programming exciting new features is always more fun than fixing the old broken or cumbersome ones – but this is definitely one area where improvements could be made.
Perhaps other projects can (or will) take the approach that Elementary has and dedicate one release, every so often, to making these refinements reality. I’m thinking something like Ubuntu’s One Hundred Paper Cuts but on a smaller scale. In the meantime I will continue to enjoy the simplicity that Elementary OS is currently bringing my desktop Linux computing life.
Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:
To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:
As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.
In the KDE environment, sound is managed by a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends for ALSA and PulseAudio. It essentially communicates with the highest-level sound output system installed that it has support for. On a default Kubuntu install, making a change in Phonon appears to talk to PulseAudio, which in turn changes the necessary ALSA settings. Sort of complicated, but I guess it handles the case where multiple applications want to play audio at the same time without tying up the sound card – which has not always been possible on Linux.
Coming from my experience with the GNOME and Unity interfaces, it has always seemed like KDE took its own, not exactly standard, path with audio. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily the one corresponding to multi-channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.
Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:
**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
  Subdevice #0: subdevice #0
and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.
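For anyone repeating the trial-and-error step, it can be scripted; here is a rough sketch, assuming the same card and device numbers as the aplay -l output above:

# Play a test tone through each HDMI output exposed by the nVidia card
# (devices 3, 7, 8 and 9 above) and note which one is actually audible.
for dev in 3 7 8 9; do
    echo "Testing plughw:1,$dev"
    aplay -D plughw:1,$dev /usr/share/sounds/alsa/Front_Center.wav
done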
At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output shown in Phonon was actually card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools that showed the only possible “sink” was card 1,3.
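I don’t remember the exact tool I used, but the stock PulseAudio command-line utilities (pulseaudio-utils on Ubuntu) can show the same information; something along these lines:

# Short list of every output ("sink") PulseAudio is willing to use.
pactl list short sinks
# Full detail, including the underlying ALSA card and device properties.
pacmd list-sinks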
The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, which presented me with the following list of possible devices read directly from ALSA. I bumped the “hw:1,7” card to the top of the priority list and also quit the system tray version of Amarok.
Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.
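I made the change through Phonon’s device priority list, but if you’d rather pin things at the ALSA level (or aren’t running KDE), a minimal ~/.asoundrc along these lines should accomplish the same thing – assuming your HDMI output really is card 1, device 7 like mine:

cat > ~/.asoundrc << 'EOF'
# Make the nVidia HDMI output (card 1, device 7) the default ALSA device.
pcm.!default {
    type plug
    slave.pcm "hw:1,7"
}
# Point mixer/control tools at the nVidia card as well.
ctl.!default {
    type hw
    card 1
}
EOF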
At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.
Ever since we announced the start of the third Linux Experiment I’ve been trying to think of a way in which I could contribute that would be different from the excellent ideas the others have come up with so far. After batting around some ideas over the past week I think I’ve finally come up with how I want to contribute back to the community. But first a little back story.
During the day I develop commercial software. An unfortunate result of this is that my personal hobby projects often get put on the back burner because, in all honesty, when I get home I’d rather be doing something else. As a result I’ve developed, pun intended, quite a catalogue of projects which are currently on hold until I can find the time/motivation to actually make something of them. These projects run the gamut from little helper scripts, written to make my life more convenient, all the way up to desktop applications designed to take on bigger tasks. The sad thing is that while a lot of these projects have potential, I simply haven’t been able to finish them, and I know that if I could they would be of use to others as well. So for this Experiment I’ve decided to finally do something with them.
Open source software is made up of many different components. It is simultaneously one part idea (perhaps a different way to accomplish X would be better), one part ideal (the belief that sometimes it is best to give code away for free), one part execution (often a developer just “scratching an itch” or trying a new technology), and one part delivery (someone enthusiastically giving it away and building a community around it). In fact that’s the wonderful thing about all of the projects we all know and love; they all started because someone somewhere thought they had something to share with the world. And that’s what I plan to do. For this Linux Experiment I plan on giving back by setting one of my hobby projects free.
Now obviously this is not only ambitious but perhaps quite naive as well, especially given the framework of The Linux Experiment – I fully recognize that I have quite a bit of work ahead of me before any of my hobby code is ready to be viewed, let alone used, by anyone else. I also understand that, given my own personal commitments and available time, it may be quite a while before anything actually comes of this plan. None of this is exactly well suited to something like The Linux Experiment, which thrives on fresh content; there’s no point in taking part if I won’t be ready to make a new post until months from now. That is why I won’t be relying solely on the open sourcing of my code for my Experiment contributions; rather, I will also be posting about the thought process and research that goes into starting an open source project.
Topics that I intend to cover are things relevant to people wishing to free their own creations.
An interesting side effect of this approach will be a somewhat new look into the process of open sourcing a project as it happens, piece by piece and step by step, rather than in retrospect.
Coincidentally, as I write this post, the excellent website tuxmachines.org has put together a group of links discussing the pros of starting open source projects. I’ll be sure to read up on those after I first commit to this.
I hope that by the end of this Experiment I’ll have at least provided enough information for others to take their own back burner projects to the point where they too can share their ideas and creations with the world… even if I never actually get to that point myself.
P.S. If anyone out there has experience in starting an open source project from scratch, or has any helpful insights or suggestions, please post in the comments below; I would really love to hear them.
Hello, everyone! It’s great to be back in the hot seat for this, our third installment of The Linux Experiment. I know that last time I caused a bit of a stir with my KDE-bashing post, so I will try to keep it relatively PG this time around.
Not many people know about it or have used it, but – through an employee purchase program about five years ago – I was able to get my hands on the HP EX470 MediaSmart Home Server. What manner of witchcraft is this particular device, you may ask? Here’s a photo preview:
It really is about as simple as it looks. The EX470 (stock) came equipped with a 500 GB drive, pre-loaded with Windows Home Server – which in turn was built on Windows Server 2003. 512 MB of RAM and an AMD Sempron 3400+ rounded it off; the device is completely headless, meaning that no monitor hookup is possible without a debug cable. The server also comes with four(?) USB ports, eSATA, and gigabit ethernet.
My current configuration is 3 x 1 TB drives, plus the original 500 GB, and an upgraded 2 GB DIMM. One of the things I’ve always loved about Windows Home Server is its ‘folder duplication’. Not merely content to RAID the drives together, Microsoft cooked up an idea to have each folder able to duplicate itself over to another drive in case of failure. It’s sort of like RAID 1, but without entirely-mirrored disks. Still, pretty solid redundancy.
Unfortunately for me, this feature was removed in the latest update to Windows Home Server 2011 – and support for that is even waning now, leading me to believe that patches for this OS may stop coming entirely within the next year or two. So, where does that leave me? I’m not keen to run a non-supported OS on this thing (it is internet-connected), so I’m definitely looking into alternatives.
Over the next few days, I plan to write about my upcoming ‘adventures’ in finding a suitable Linux-based alternative to Windows Home Server. Will I find one that sticks, or will I end up going with a Windows 8 Pro install? Only time will tell. Stay tuned!
Hello again everyone! By this point, I have successfully installed ArchLinux, as well as KDE, and various other everyday applications necessary for my desktop.
Aside from the issues with the bootloader I experienced, the installation was relatively straightforward. Since I have never used ArchLinux before, I decided to follow the Beginner’s Guide in order to make sure I wasn’t screwing anything up. The really nice thing about this guide is that it only gives you the information that you need to get up and running. From here, you can add any packages you want, and do any necessary customization.
Overall, the install was fairly uneventful. I also managed to install KDE, Firefox, Flash, and Netflix (more below) without any issues.
Some time ago, there was a package created for Ubuntu that allows you to watch Netflix on Linux. Since then, someone has created a package for ArchLinux called netflix-desktop. What this does is create an instance of Firefox in WINE that runs Silverlight so that the Netflix video can be loaded. The only issue that I’m running into with this package is that when I full-screen the Netflix video, my taskbar in KDE still appears. For the time being, I’ve just set the taskbar to allow windows to go over top of it. If anyone has any suggestions on how to resolve this, please let me know.
Back to a little more about ArchLinux specifically. I’ve really been enjoying their package management system. From my understanding so far, there are two main ways to obtain packages. The official repositories are backed by “pacman”, which is the main package manager. Therefore, if you wanted to install kde, you would do “pacman -S kde”. This is similar to the package managers on other distributions, such as apt-get. The Arch User Repository (AUR) is a repository of build scripts created by ArchLinux users that allow you to compile and configure other packages not contained within the official repositories. The really neat thing about this is that it can also download and install any dependencies contained in the official repositories using pacman automatically.
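As a rough sketch of how the two fit together – using kde from the official repositories and netflix-desktop from the AUR as examples, and assuming you have already downloaded the build-script tarball from the netflix-desktop AUR page:

# Official repositories: pacman resolves dependencies and installs binary packages.
sudo pacman -S kde

# AUR: build the package locally from its PKGBUILD, then install it.
sudo pacman -S --needed base-devel   # provides makepkg and the usual build tools
tar xzf netflix-desktop.tar.gz       # the snapshot downloaded from the AUR page
cd netflix-desktop
makepkg -si                          # -s pulls repo dependencies via pacman, -i installs the result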
As I go forward, I am also thinking of ways I can contribute to the ArchLinux community, but for now, I will continue to explore and experiment.
While I haven’t quite figured out what I’m going to be doing for this round of The Linux Experiment, I have decided that now is a good time to try something I’ve been meaning to do for a while: get Linux to boot off of an external hard drive. This was actually such a straightforward process (simply install like normal, but choose the external drive as the location for all files) that I won’t bother you with the details. The only special thing I did was install GRUB on the external drive, making the whole install an essentially isolated thing – that way, if I turn off the external drive the computer boots up off of the internal drive like normal, and if I boot with the external drive on then GRUB asks me what to do.
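For reference, the only step that differs from a normal install is where the boot loader goes; most installers let you pick this from a drop-down, but done by hand it would look roughly like this (assuming the external drive shows up as /dev/sdb – yours may differ):

# Install GRUB to the external drive's boot sector, leaving the internal disk untouched.
sudo grub-install /dev/sdb
# Regenerate the GRUB configuration so it picks up the new install (Debian/Ubuntu helper).
sudo update-grub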
The only downside to a setup like this is that I am using USB 2.0 as my connection to the hard drive, which means the disk I/O and throughput will be theoretically lower than normal. Arguably I could get around this by using something like USB 3.0 or eSATA, but so far in my experience this hasn’t really been an issue. Besides, once the OS boots up, almost everything is running and/or cached within RAM anyway.
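If you want to sanity-check whether the USB 2.0 link is actually the bottleneck, a quick and rough read benchmark with hdparm will tell you; again assuming the external disk is /dev/sdb, and keeping in mind that USB 2.0 tends to top out around 30–35 MB/s in practice:

# Buffered sequential read test straight from the drive, bypassing the filesystem cache.
sudo hdparm -t /dev/sdb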
In fact the only problems I have run into with running Linux on this desktop were, ironically, driver issues. First up were the wireless drivers. Yes, it is 2013 and I am still having Linux WiFi driver issues… This issue was unlike any I had seen before – the wireless card was automatically detected, the Broadcom proprietary driver was automatically selected and enabled, and it even appeared to work, but no matter what I tried it simply would not make a lasting connection to the wireless network. On a whim I decided to just turn off the proprietary driver and, even though the dialog window told me that I would no longer be using the device, things suddenly started working like magic. I have to assume that buried deep within the Linux kernel there is already an open source driver for my wireless card, and that is what is actually doing the work right now. Whatever the actual cause, the device is now working flawlessly.
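I haven’t actually dug into which module ended up doing the work, but if you hit something similar, this is roughly how you could check and, if needed, keep the proprietary module out of the way. The module names here (wl, b43, brcmsmac) are the usual Broadcom suspects and an assumption on my part:

# Show the wireless adapter along with the kernel module currently bound to it.
lspci -k | grep -iA3 network

# If the proprietary 'wl' module keeps claiming the card, blacklist it so the
# in-kernel driver (b43 or brcmsmac, depending on the chipset) is used instead.
echo "blacklist wl" | sudo tee /etc/modprobe.d/blacklist-wl.conf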
The other driver issue I had was again related to a proprietary driver, this time for my graphics card. By default the install used the open source driver and this worked fine. However, I have had a long battle getting AMD/ATI cards to work properly on Linux without the proprietary driver, so I decided to enable it in order to avoid any future problems.
One reboot later and not only were my colour and resolution completely screwed up, but I also got this “awesome” overlay on my desktop that said “Hardware not supported”. I tried to take a screenshot of it, but apparently it is drawn onto the screen post-display or something (the screenshot did not show the overlay). So for now I am back to using strictly open source drivers for everything, and amazingly it all just works. That’s probably the first time I’ve ever been able to say that about Linux.
Some of you may have noticed some previously working links going to 404 (page not found) pages. This is due to a change we’ve made in order to make permalinks more consistent among different authors and topics. Sorry for any inconvenience this may cause. On the plus side, the website has a search bar that you can use to find what you were looking for.