Archive

Archive for the ‘Open Source Software’ Category

Running a containerized media server with Ubuntu 14.04, Docker, and Plex

November 23rd, 2014 No comments

I recently took it upon myself to rebuild a general-purpose home server – installing a new Intel 530 240GB solid-state drive to replace a “spinning rust” drive, and installing a fresh copy of Ubuntu 14.04 now that 14.04.1 has been released and there is much less complaining online.

The “new hotness” that I’d like to discuss is the use of Docker to containerize various processes. Docker gets a lot of press these days, but the way I see it, it’s a way to ensure that your special snowflake applications and services don’t get the opportunity to conflict with one another. In my setup, I have four containers running: Plex, SABnzbd+, Sonarr, and CouchPotato.

I like the following things about Docker:

  • Since it’s new, there are a lot of repositories and configuration instructions online for reference.
  • I can make sure that applications like Sonarr/NZBDrone get the right version of Mono that won’t conflict with my base system.
  • As a network administrator, I can ensure that only the necessary ports for a service get forwarded outside the container.
  • If an application state gets messed up, it won’t impact the rest of the system as much – I can destroy and recreate the individual container by itself.

There are some drawbacks though:

  • Because a lot of the images and Dockerfiles out there are community-based, there are some that don’t follow best practices or fall out of an update cycle.
  • Software updates can become trickier if the application is unable to upgrade itself in-place; you may have to pull a new Dockerfile and hope that your existing configuration works with a new image.
  • From a security standpoint, it’s best to verify exactly what an image or Dockerfile does before running it – for example, that it pulls content from official repositories (the docker-plex configuration is guilty of using a third-party repo).

To get started, on Ubuntu 14.04 you can install a stable version of Docker following these instructions, although the latest version has some additional features like docker exec that make “getting inside” containers for troubleshooting much easier. I was able to get all these containers running properly with the current stable version (1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1). Once Docker is installed, you can grab each of the containers above with a combination of docker search and docker pull, then list the downloaded containers with docker images.
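For reference, on a stock Ubuntu 14.04 install the packaged route looks roughly like this (the image name is just the one used later in this post):

# Install the distribution-packaged Docker; the package is named docker.io on 14.04
sudo apt-get update
sudo apt-get install docker.io

# Search for an image, pull it down, then list what is available locally
sudo docker search plex
sudo docker pull timhaak/plex
sudo docker images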

There are some quirks to remember. On the first run, you’ll need to docker run most of these containers and provide a hostname, container name, ports to forward and shared directories (known as volumes). On all subsequent runs, you can just use docker start $container_name – but I’ll describe a cheap and easy way of turning that command into an upstart service later. I generally save the start commands as shell scripts in /usr/local/bin/docker-start/*.sh so that I can reference them or adjust them later. The start commands I’ve used look like:

Plex
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex
SABnzbd+
docker run -d -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd
Sonarr
docker run -d -h sonarr --name="sonarr" -v /etc/docker/sonarr:/config -v /mnt/nas:/data -p 8989:8989 tuxeh/sonarr
CouchPotato
docker run -d -h couchpotato --name="couchpotato" -e EDGE=1 -v /etc/docker/couchpotato:/config -v /mnt/nas:/data -v /etc/localtime:/etc/localtime:ro -p 5050:5050 needo/couchpotato
These applications have a “/config” and a “/data” shared volume defined. /data points to “/mnt/nas”, which is a CIFS share to a network attached storage appliance mounted on the host. /config points to a directory structure I created for each application on the host in /etc/docker/$container_name. I generally apply “chmod 777” permissions to each configuration directory until I find out what user ID the container is writing as, then lock it down from there.
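As a rough sketch of that lockdown step (the UID/GID shown are only examples; check what your containers actually write as):

# See which numeric UID/GID the container has been writing files as
ls -ln /etc/docker/plex

# Then take ownership and tighten the permissions accordingly
sudo chown -R 1000:1000 /etc/docker/plex
sudo chmod -R 755 /etc/docker/plex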

For each initial start command, I choose to run the service as a daemon with -d. I also set a hostname with the “-h” parameter, as well as a friendly container name with “--name”; otherwise Docker likes to reference containers with wild adjectives combined with scientists, like “drunk_heisenberg”.

Each of these containers generally has a set of instructions to get up and running, whether it be on Github, the developer’s own site or the Docker Hub. Some, like SABnzbd+, just require that you go to http://yourserverip:8080/ and complete the setup wizard. Plex required an additional set of configuration steps described at the original repository:

  • Once Plex starts up on port 32400, access http://yourserverip:32400/web/ and confirm that the interface loads.
  • Switch back to your host machine, and find the place where the /config directory was mounted (in the example above, it’s /etc/docker/plex). Enter the Library/Application Support/Plex Media Server directory and edit the Preferences.xml file. In the <Preferences> tag, add the following attribute: allowedNetworks="192.168.1.0/255.255.255.0" where the IP address range matches that of your home network. In my case, the entire file looked like:

    <?xml version="1.0" encoding="utf-8"?>
    <Preferences MachineIdentifier="(guid)" ProcessedMachineIdentifier="(another_guid)" allowedNetworks="192.168.1.0/255.255.255.0" />

  • Run docker stop plex && docker start plex to restart the container, then load http://yourserverip:32400/web/ again. You should be prompted to accept the EULA and can now add library locations to the server.

Sonarr needed to be updated (from the NZBDrone branding) as well. From the GitHub README, you can enable in-container upgrades:

[C]onfigure Sonarr to use the update script in /etc/service/sonarr/update.sh. This is configured under Settings > (show advanced) > General > Updates > change Mechanism to Script.

To automatically ensure these containers start on reboot, you can either use restart policies (Docker 1.2+) or write an upstart script to start and stop the appropriate container. I’ve modified the example from the Docker website slightly to stop the container as well:

description "SABnzbd Docker container"
author "Jake"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a sabnzbd
end script
pre-stop exec /usr/bin/docker stop sabnzbd

Copy this script to /etc/init/sabnzbd.conf; you can then copy it to plex.conf, couchpotato.conf, and sonarr.conf and change the container name and title in each. You can then test it by rebooting your system and running “docker ps -a” to ensure that all containers come up cleanly, or by running “docker stop $container; service $container start”. If you run into trouble, the upstart logs are in /var/log/upstart/$container_name.log.
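One quick way to clone the job for the other containers (a sketch – a plain text editor works just as well):

sudo cp /etc/init/sabnzbd.conf /etc/init/plex.conf
sudo sed -i 's/sabnzbd/plex/g; s/SABnzbd/Plex/g' /etc/init/plex.conf

# Verify the new job starts and the container comes up
sudo docker stop plex
sudo service plex start
sudo docker ps -a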

Hopefully this introduction to a media server with Docker containers was thought-provoking; I hope to have further updates down the line for other applications, best practices and how this setup continues to operate in its lifetime.




I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.
Categories: Docker, Jake B, Plex, Ubuntu

Cloud software for a Synology NAS and setting up OwnCloud

November 8th, 2014 No comments

Recently the Kitchener Waterloo Linux Users Group held a couple of presentations on setting up your own personally hosted cloud. With their permission we are pleased to also present them below:

Read more…




I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Big distributions, little RAM 7

October 13th, 2014 4 comments

It’s been a while, but here is the latest instalment in the series of posts where I install the major full-desktop distributions on a limited-hardware machine and report on how they perform. Once again I’ve decided to re-run my previous tests, this time using the following distributions:

  • Debian 7.6 (GNOME)
  • Elementary OS 0.2 (Luna)
  • Fedora 20 (GNOME)
  • Kubuntu 14.04 (KDE)
  • Linux Mint 17 (Cinnamon)
  • Linux Mint 17 (MATE)
  • Mageia 4.1 (GNOME)
  • Mageia 4.1 (KDE)
  • OpenSUSE 13.1 (GNOME)
  • OpenSUSE 13.1 (KDE)
  • Ubuntu 14.04 (Unity)
  • Xubuntu 14.04 (Xfce)

I also attempted to install Fedora 20 (KDE), but it just wouldn’t go.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.3.12, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between October 6th, 2014 and October 13th, 2014 so your results may not be identical.

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against one another. Measurements were taken using the free -m command for memory and the df -h command for disk usage.

Like before I have provided the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). As always, feel free to run your own tests and link them in the comments for everyone to see.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap Usage · RAM – Buffers/Cache + Swap]

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap Usage · RAM – Buffers/Cache + Swap]

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap Usage · RAM – Buffers/Cache + Swap]

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

[Graph: Install Size]

Conclusion

Once again I will leave the conclusions to you. Source data provided below.

Source Data





CoreGTK 2.24.0 Released!

August 4th, 2014 No comments

The initial version of CoreGTK, version 2.24.0, has been tagged for release today.

Features include:

  • Targets GTK+ 2.24
  • Support for GtkBuilder
  • Can be used on Linux, Mac and Windows

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





Linux alternatives: Mp3tag → EasyTAG

August 4th, 2014 1 comment

A big part of my move from Windows to Linux has been finding replacements for the applications that I had previously used day-to-day that are not available on Linux. For the major applications like my web browser (Firefox), e-mail client (Thunderbird) and password manager (KeePass2) this hasn’t been a problem, because they are all available on Linux as well. Heck, you can even install Microsoft Office with the latest version of Wine if you want to.

Unfortunately there still remain some programs that simply will not run under Linux. Thankfully this isn’t a huge deal because Linux has plenty of alternative applications that fill in all of the gaps – the trick is just finding the one that is right for you.

Mp3tag is an excellent Windows application that lets you edit the metadata (artist, album, track, etc.) inside an MP3, OGG or similar file.

[Screenshot: Mp3tag on Windows]

As a Linux alternative to this excellent program I’ve found a very similar application called EasyTAG that offers at least all of the features that I used to use in Mp3tag (and possibly even more).

[Screenshot: EasyTAG on Linux]

For anyone looking for a good metadata editor I would highly recommend trying this one out.





Force Thunderbird/Enigmail to use a specific signing (hash) algorithm

June 8th, 2014 No comments

If you’ve had issues trying to get Thunderbird to send your PGP-signed e-mail using anything other than SHA-1, there is a quick and easy fix that will let you pick whichever hash you prefer.

1) Open up Thunderbird’s preferences

2) On the Advanced Tab, under General click Config Editor

3) In the about:config window search for “extensions.enigmail.mimeHashAlgorithm” without quotes. Double click on this and enter a value. The value will determine which hash algorithm is used for signing.

The values are as follows:

0: Automatic selection, let GnuPG choose (note that while this may be the default it may also be the one that doesn’t work depending on your configuration).
1: SHA-1
2: RIPEMD-160
3: SHA-256
4: SHA-384
5: SHA-512
6: SHA-224
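If you want to confirm that the change took effect, one option (assuming GnuPG is doing the signing; the file name here is hypothetical) is to save a copy of a signed message and verify it with verbose output, which reports the digest algorithm that was used:

# signed-message.asc is a saved copy of an inline/clearsigned message
gpg --verbose --verify signed-message.asc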

This post originally appeared on my personal website here.





Change the default sort order in Nautilus

February 9th, 2014 1 comment

The default sort order in Nautilus has been changed to sort alphabetically by name, and the option to change this seems to be broken. I prefer my files to be sorted by type, so I ran

dconf-editor

and browsed to org/gnome/nautilus/preferences. From there you should be able to change the value by using the drop down:

 

[Screenshot: Seems easy enough]

Unfortunately the only option available is modification time. Once you change it to that you can’t even go back to name. This also appears to be a problem when trying to set the value using the command line interface like this:

dconf write /org/gnome/nautilus/preferences/default-sort-order type

I received an “error: 0-4:unknown keyword” message when I tried to run that.

Thanks to the folks over on the Ask Ubuntu forum I was finally able to get it to change by issuing this command instead:

gsettings set org.gnome.nautilus.preferences default-sort-order type

where type could be swapped out for whatever you prefer it to be ordered by.
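If you are unsure which values the key accepts, gsettings can list them for you:

gsettings range org.gnome.nautilus.preferences default-sort-order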

[Screenshot: Great Success!]

CoreGTK

January 28th, 2014 2 comments

A while back I made it my goal to put together an open source project as my way of contributing back to the community. Well, fast forward a couple of months and my hobby project is finally ready to see the light of day. I give you… CoreGTK

CoreGTK is an Objective-C binding for the GTK+ library which wraps all objects descending from GtkWidget (plus a few others here and there). Like other “core” Objective-C libraries it is designed to be a very thin wrapper, so that anyone familiar with the C version of GTK+ should be able to pick it up easily.

However the real goal of CoreGTK is not to replace the C implementation for everyday use, but instead to allow developers to more easily code GTK+ interfaces using Objective-C. This could be especially useful if a developer already has a program, say one they are developing for the Mac, and they want to port it to Linux or Windows. With a little bit of MVC a savvy developer would only need to re-write the GUI portion of their application in CoreGTK.

So what does a CoreGTK application look like? Pretty much like a normal Objective-C program:

/*
 * Objective-C imports
 */
#import <Foundation/Foundation.h>
#import "CGTK.h"
#import "CGTKButton.h"
#import "CGTKSignalConnector.h"
#import "CGTKWindow.h"

/*
 * C imports
 */
#import <gtk/gtk.h>

@interface HelloWorld : NSObject
/* This is a callback function. The data arguments are ignored
 * in this example. More callbacks below. */
+(void)hello;

/* Another callback */
+(void)destroy;
@end

@implementation HelloWorld
int main(int argc, char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /* We could use also CGTKWidget here instead */
    CGTKWindow *window;
    CGTKButton *button;

    /* This is called in all GTK applications. Arguments are parsed
    * from the command line and are returned to the application. */
    [CGTK autoInitWithArgc:argc andArgv:argv];

    /* Create a new window */
    window = [[CGTKWindow alloc] initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

    /* Here we connect the "destroy" event to a signal handler in 
     * the HelloWorld class */
    [CGTKSignalConnector connectGpointer:[window WIDGET] 
        withSignal:@"destroy" toTarget:[HelloWorld class] 
        withSelector:@selector(destroy) andData:NULL];

    /* Sets the border width of the window */
    [window setBorderWidth: [NSNumber numberWithInt:10]];

    /* Creates a new button with the label "Hello World" */
    button = [[CGTKButton alloc] initWithLabel:@"Hello World"];

    /* When the button receives the "clicked" signal, it will call the
     * function hello() in the HelloWorld class (below) */
    [CGTKSignalConnector connectGpointer:[button WIDGET] 
        withSignal:@"clicked" toTarget:[HelloWorld class] 
        withSelector:@selector(hello) andData:NULL];

    /* This packs the button into the window (a gtk container) */
    [window add:button];

    /* The final step is to display this newly created widget */
    [button show];

    /* and the window */
    [window show];

    /* All GTK applications must have a [CGTK main] call. Control ends here
     * and waits for an event to occur (like a key press or
     * mouse event). */
    [CGTK main];

    [pool release];

    return 0;
}

+(void)hello
{
    NSLog(@"Hello World");
}

+(void)destroy
{
    [CGTK mainQuit];
}
@end
[Screenshot: Hello World in action]

And because Objective-C is completely compatible with regular old C code there is nothing stopping you from simply extracting the GTK+ objects and using them like normal.

// Use it as an Objective-C CoreGTK object!
CGTKWindow *cWindow = [[CGTKWindow alloc] 
    initWithGtkWindowType:GTK_WINDOW_TOPLEVEL];

// Or as a C GTK+ window!
GtkWindow *gWindow = [cWindow WINDOW];

// Or even as a C GtkWidget!
GtkWidget *gWidget = [cWindow WIDGET];

// This...
[cWindow show];

// ...is the same as this:
gtk_widget_show([cWindow WIDGET]);

You can even use a UI builder like Glade, import the XML and wire up the signals to Objective-C instance and class methods.

CGTKBuilder *builder = [[CGTKBuilder alloc] init];
if(![builder addFromFile:@"test.glade"])
{
    NSLog(@"Error loading GUI file");
    return 1;
}

[CGTKBuilder setDebug:YES];

NSDictionary *dic = [[NSDictionary alloc] initWithObjectsAndKeys:
                 [CGTKCallbackData withObject:[CGTK class] 
                     andSEL:@selector(mainQuit)], @"endMainLoop",
                 [CGTKCallbackData withObject:[HelloWorld class] 
                     andSEL:@selector(hello)], @"on_button2_clicked",
                 [CGTKCallbackData withObject:[HelloWorld class] 
                     andSEL:@selector(hello)], @"on_button1_activate",
                 nil];

[builder connectSignalsToObjects:dic];

CGTKWidget *w = [builder getWidgetWithName:@"window1"];
if(w != nil)
{
    [w showAll];
}

[builder release];

So there you have it: that’s CoreGTK in a nutshell.

There are a variety of ways to help me out with this project if you are so inclined. The first task is probably just to get familiar with it. Download CoreGTK from the GitHub project page and play around with it. If you find a bug (very likely) please create an issue for it.

Another easy way to get familiar with CoreGTK is to help write/fix documentation – a lot of which is written in the source code itself. Sadly most of the current documentation simply states which underlying GTK+ function is called and so it could be cleaned up quite a bit.

At the moment there really isn’t anything more formal than that in place but of course code contributions would also be welcome!

Update: added some pictures of the same program running on all three operating systems.

[Screenshots: Hello World on Windows, Mac and Linux]

This post originally appeared on my personal website here.





Open source project hosting options

September 8th, 2013 2 comments

So you want to host an open source project using one of the many free services available but can’t decide which one to use? If only someone would put together a quick summary of each of the major offerings…

Hosting providers covered in this post:

  • Bitbucket
  • CodePlex
  • GitHub
  • Gitorious
  • Google Code
  • Launchpad
  • SourceForge

Bitbucket

Bitbucket is a hosting site for the distributed version control systems (DVCS) Git and Mercurial. The service offering includes an issue tracker and wiki, as well as integration with a number of popular services such as Basecamp, Flowdock, and Twitter.

Features:

  • Supports both Git and Mercurial
  • Allows private repositories for free, up to 5 users
  • Unlimited repositories
  • Has JIRA integration for issue tracking
  • Has its own REST API

Downsides:

  • Only allows up to 5 users for free (a user defined as someone with read or write access)

CodePlex

CodePlex is Microsoft’s free open source project hosting site. You can create projects to share with the world, collaborate with others on their projects, and download open source software.

Features:

  • Supports both Git & Mercurial
  • Integrated wiki that lets you add rich documentation and nice-looking pages
  • Bug Tracker and Discussion Forums included

Downsides:

  • Often feels more like a code publishing platform than a collaboration site
  • Primarily geared toward .NET projects

GitHub

Build software better, together. Powerful collaboration, code review, and code management for open source and private projects.

Features:

  • Supports Git
  • Powerful and easy to use graphical tools
  • Easy team management
  • Integrated wiki, issue tracker and code review

Downsides:

  • Only supports Git
  • Quite a few ‘dead’ projects on the site

Gitorious

The Git hosting software that you can install yourself. Gitorious.org provides free hosting for open source projects that use Git.

Features:

  • Supports Git
  • Free project hosting
  • Integrated wiki
  • Can download the software and install it on your own server

Downsides:

  • Only supports Git

Google Code

Project Hosting on Google Code provides a free collaborative development environment for open source projects.

Features:

  • Supports Subversion, Mercurial and Git
  • Integrated wiki

Downsides:

  • Not very pretty

Launchpad

Launchpad is a software collaboration platform.

Features:

  • Supports Bazaar
  • Integrated bug tracking and code reviews
  • Ubuntu package building and hosting
  • Mailing lists

Downsides:

  • Only supports Bazaar
  • Geared toward Ubuntu (which can be a downside depending on your project)

SourceForge

Find, Create, and Publish Open Source software for free.

Features:

  • Supports Git, Mercurial, Subversion
  • Integrated issue tracking, wiki, discussion forums
  • Stat tracking

Downsides:

  • Ads
  • A lot of ‘dead’ projects

 

Now obviously I’ve missed some things and glossed over others but my goal here was to provide a quick ‘at a glance’ summary of each. Check the individual websites for more. Thanks to the people over at Stack Exchange for doing a lot of the legwork.





One license to rule them all? Noooooooooope!

August 20th, 2013 No comments

Lately I’ve been taking a look at the various open source software licenses in an attempt to better understand the differences between them. Here is my five minute summary of the most popular licenses:

GNU General Public License (GPL)

Requires that any project using a GPL-licensed component must also be made available under the GPL. Basically once you go GPL you can’t go back.

GNU Lesser General Public License (LGPL)

Basically the same as the GPL, except that software which merely uses an LGPL-licensed component doesn’t need to be licensed the same way. So if you write a program that uses an LGPL library, say a program with a GTK+ user interface, it doesn’t need to be licensed LGPL. This is useful for commercial applications that rely on open source technology.

v2 vs v3

There are a number of differences between version 2 and version 3 of the GPL and LGPL licenses. Version 3 attempts to clarify a number of issues in version 2 including how patents, DRM, etc. are handled but a number of developers don’t seem to like the differences so version 2 is still quite popular.

MIT

This license allows for almost anything as long as a copy of the license and copyright are included in any distribution of the code. It can be used in commercial software without issue.

BSD3

Similar to the MIT license, this one basically only requires that a copy of the license and copyright be included in any distribution of the code. The major difference between this and the MIT license is that BSD3 prohibits the use of the copyright holder’s name in any promotion of derivative works.

Apache

Apache is similar to the BSD license in that you have to provide a copy of the license in any derivative works. In addition there are a number of extra safeguards, such as patent grants, that set it apart from BSD.





Listen up, Kubuntu: the enraging tale of sound over HDMI

August 4th, 2013 2 comments

Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:

[Diagram: The traditional spiderweb of complexity involved in Linux audio.]

To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:

[Diagram: Approximate display and audio output path from the Kubuntu machine]

As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.

In the KDE environment, sound is managed by a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends for ALSA and PulseAudio. It will essentially communicate with the highest-level sound output system installed that it has support for. When you make a change in Phonon on a default Kubuntu install, it appears to be talking to PulseAudio, which in turn changes the necessary ALSA settings. Sort of complicated, but I guess it handles the idea that multiple applications can play audio without tying up the sound card at the same time – which has not always been the case with Linux.

Compared to my experience with the GNOME and Unity interfaces, it has always seemed like KDE took its own, not exactly standard, path with audio. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily one corresponding to multiple channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.

Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
Subdevices: 1/1
Subdevice #0: subdevice #0

and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.
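A loop like the following reproduces that trial and error (device numbers taken from the aplay -l output above):

for N in 3 7 8 9; do
    echo "Testing nVidia HDMI device $N"
    aplay -D plughw:1,$N /usr/share/sounds/alsa/Front_Center.wav
done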

At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output presented in Phonon was actually card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools provided that showed the only possible “sink” was card 1,3.
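For anyone retracing this, one generic way to see which sinks PulseAudio exposes (not necessarily the exact tool used at the time) is:

pactl list short sinks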

The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, presenting me with the following list of possible devices read directly from ALSA. I bumped the “hw:1,7” card to the top and also quit the system tray version of Amarok.

[Screenshot: A list of all the raw ALSA devices detected by KDE/Phonon after removing PulseAudio.]

Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.

At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.





An Ambitious Goal

August 1st, 2013 3 comments

Ever since we announced the start of the third Linux Experiment I’ve been trying to think of a way in which I could contribute that would be different from the excellent ideas the others have come up with so far. After batting around some ideas over the past week I think I’ve finally come up with how I want to contribute back to the community. But first, a little backstory.

[Image: A large project now, GNOME was started because there wasn’t a good open source alternative at the time]

During the day I develop commercial software. An unfortunate result of this is that my personal hobby projects often get put on the back burner because, in all honesty, when I get home I’d rather be doing something else. As a result I’ve developed, pun intended, quite a catalogue of projects which are currently on hold until I can find the time/motivation to actually make something of them. These projects run the gamut from little helper scripts, written to make my life more convenient, all the way up to desktop applications designed to take on bigger tasks. The sad thing is that while a lot of these projects have potential I simply haven’t been able to finish them, and I know that if I just could they would be of use to others as well. So for this Experiment I’ve decided to finally do something with them.

[Image: Thanks to OpenOffice.org, LibreOffice and others there are actual viable open source alternatives to Microsoft Office]

Open source software is made up of many different components. It is simultaneously one part idea, perhaps a different way to accomplish X would be better, one part ideal, belief that sometimes it is best to give code away for free, one part execution, often times a developer just “scratching an itch” or trying a new technology, and one part delivery, someone enthusiastically giving it away and building a community around it. In fact that’s the wonderful thing about all of the projects we all know and love; they all started because someone somewhere thought they had something to share with the world. And that’s what I plan to do. For this Linux Experiment I plan on giving back by setting one of my hobby projects free.

[Image: Before this open source web browser we were all stuck with Internet Explorer 6]

Now obviously this is not only ambitious but perhaps quite naive as well especially given the framework of The Linux Experiment – I fully recognize that I have quite a bit of work ahead of me before any of my hobby code is ready to be viewed, let alone be used, by anyone else. I also understand that, given my own personal commitments and available time, it may be quite a while before anything actually comes of this plan. All of this isn’t exactly well suited for something like The Linux Experiment, which thrives on fresh content; there’s no point in me taking part in the Experiment if I won’t be ready to make a new post until months from now. That is why for my Experiment contributions I won’t be only relying on the open sourcing of my code, but rather I will be posting about the thought process and research that I am doing in order to start an open source project.

Topics that I intend to cover are things relevant to people wishing to free their own creations and will include things such as:

  • weighing the pros and cons as well as discussing the differences between the various open source licenses
  • the best place to host code
  • how to structure the project in order to (hopefully) get good community involvement
  • etc.

An interesting side effect of this approach will be somewhat of a new look into the process of open sourcing a project as it is written piece by piece, step by step, rather than in retrospect.

[Image: The first billion dollar company built on open source software]

Coincidentally as I write this post the excellent website tuxmachines.org has put together a group of links discussing the pros of starting open source projects. I’ll be sure to read up on those after I first commit to this 😉

[Image: Linux: a hobby project initially created and open sourced by one 21 year old developer]

I hope that by the end of this Experiment I’ll have at least provided enough information for others to take their own back burner projects to the point where they too can share their ideas and creations with the world… even if I never actually get to that point myself.

P.S. If anyone out there has experience in starting an open source project from scratch or has any helpful insights or suggestions please post in the comments below, I would really love to hear them.





My Initial Thoughts/Experiences with ArchLinux

July 29th, 2013 2 comments

Hello again everyone! By this point, I have successfully installed ArchLinux, as well as KDE, and various other everyday applications necessary for my desktop.

Aside from the issues with the bootloader I experienced, the installation was relatively straightforward. Since I have never used ArchLinux before, I decided to follow the Beginner’s Guide in order to make sure I wasn’t screwing anything up. The really nice thing about this guide is that it only gives you the information that you need to get up and running. From there, you can add any packages you want and do any necessary customization.

Overall, the install was fairly uneventful. I also managed to install KDE, Firefox, Flash, and Netflix (more below) without any issues.

Some time ago, there was a package created for Ubuntu that allows you to watch Netflix on Linux. Since then, someone has created a package for ArchLinux called netflix-desktop. What this does is create an instance of Firefox in Wine that runs Silverlight so that the Netflix video can be loaded. The only issue that I’m running into with this package is that when I full-screen the Netflix video, my taskbar in KDE still appears. For the time being, I’ve just set the taskbar to allow windows to go over top of it. If anyone has any suggestions on how to resolve this, please let me know.

[Screenshot: netflix-desktop] This isn’t my screenshot. I found it on the interweb. I just wanted to give you a good idea of how netflix-desktop looked. I’d like to thank Richard in advance for the screenshot.

Back to a little more about ArchLinux specifically. I’ve really been enjoying their package management system. From my understanding so far, there are two main ways to obtain packages. The official repositories are backed by “pacman”, which is the main package manager. Therefore, if you wanted to install KDE, you would do “pacman -S kde”. This is similar to package managers on other distributions, such as apt-get. The Arch User Repository is a repository of build scripts created by ArchLinux users that allow you to compile and configure other packages not contained within the official repositories. The really neat thing about this is that it can also download and install any dependencies contained in the official repositories using pacman automatically.
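As a rough sketch of the two paths (package and file names here are only examples):

# Official repositories, via pacman
sudo pacman -S kde

# AUR: grab the build script, then build and install it;
# makepkg -s pulls any official-repository dependencies through pacman
tar xf netflix-desktop.tar.gz
cd netflix-desktop
makepkg -si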

As I go forward, I am also thinking of ways I can contribute to the ArchLinux community, but for now, I will continue to explore and experiment.


I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Getting FreeBSD up and running with X.org and nVidia drivers

July 27th, 2013 No comments

The Experiment has officially begun, and with that I’ve gone through the FreeBSD installation process. The actual install was fairly uneventful: apart from the fact that FreeBSD defaults to a different base filesystem and uses different partitioning identifiers, sysinstall did the trick without the same bootloader issues that Dave experienced.

The first major difference, coming from something like Ubuntu or Debian, is that FreeBSD uses a combination of both source packages and already-prepared binary packages. Ostensibly the binary packages are for the most popular software and source packages are provided for convenience when there is no dedicated package build/maintainer team. In practice, depending on what you need to install, there are several possible locations and methods:

  • As a package, which is the binary compiled version. Available with the pkg_add -r option that acts like apt-get install on Ubuntu. The next version of this is pkgng, but I haven’t had much luck with it so far.
  • As a port, the source version of the program with FreeBSD hints to make the software compile. There are stubs in /usr/ports for a wide variety of software, and the “make install clean” process performs what appears to be dependency resolution as well (example commands for the first two methods follow this list).
  • From source directly, where you download and compile the package directly from its creator’s website; I’m avoiding this unless absolutely necessary.
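Example commands for the first two methods (the package/port names are just illustrations):

# Binary package, roughly equivalent to apt-get install
pkg_add -r xfce4

# Port: compile from the ports tree
cd /usr/ports/x11-wm/xfce4
make install clean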

As a result, I just end up using Google to find the package and then installing using the suggested command line. Hilariously enough, when looking for “take screenshot FreeBSD”, the suggested package was called scrot. Here’s that result:

[Screenshot: My FreeBSD/xfce4 desktop taken with ‘scrot’.]

In order to get the desktop working, I had to fight a bit with X.org. Reading the documentation was incredibly helpful in getting my mouse and keyboard to work – I needed to add hald and dbus to the /etc/rc.conf file:

hald_enable="YES"
dbus_enable="YES"

Once that was set up, I then embarked on the process of getting my monitor to display at native 2560×1600 resolution. First, I was stymied by the Xorg -configure process, which produced a “number of created screens does not match number of detected devices” error but still generated a configuration file. Copying that file into /etc/X11/xorg.conf and running startx subsequently gave a “no screens detected” message.

A number of suggestions online related to adding a preferred resolution as a “Modes” line to the Screen section in this file, but there was no change. What eventually worked was changing the Driver line from nv to vesa – clearly my GeForce 660 isn’t supported by the default open-source nVidia driver.

As a result, it was necessary to look at installing the closed-source binary nVidia driver. The first stumbling block in this process was during the make install clean command, where I was first told I’d have to install the FreeBSD kernel source. Using this forum article and adjusting the URL to reference 9.1-RELEASE, I successfully obtained and decompressed the code to /usr/src.
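A fetch along these lines gets the source tree onto the system (the URL is illustrative – adjust the mirror, architecture and release as needed):

fetch ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/9.1-RELEASE/src.txz
tar -C / -xf src.txz    # unpacks into /usr/src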

The next problem was with my choice of setup options. Initially during the make install process, I selected the default options, and was now blocked at:

===> Installing for nvidia-driver-304.60
===> nvidia-driver-304.60 depends on file: /compat/linux/etc/fedora-release - not found
===> Verifying install for /compat/linux/etc/fedora-release in /usr/ports/emulators/linux_base-f10
===> linux_base-f10-10_5 linuxulator is not (kld)loaded.
*** [install] Error code 1

Stop in /usr/ports/emulators/linux_base-f10.
*** [run-depends] Error code 1

Stop in /usr/ports/x11/nvidia-driver.
*** [install] Error code 1

Stop in /usr/ports/x11/nvidia-driver.

There didn’t seem to be a good way to get back to the options screen to deselect the Linux compatibility mode and make clean didn’t help the situation. Poking around, I was able to reselect the correct options (remove Linux, and also ensure not to select the FreeBSD AGP option) by running make config. A make install clean command after that, and I could continue to follow the rest of the instructions – creating /boot/loader.conf and adding nvidia_load="YES", editing xorg.conf to set the Driver to nvidia, and then it was time for a reboot.
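Those last two edits amount to something like this (BSD sed syntax; editing the files by hand works just as well):

# Load the nVidia kernel module at boot
echo 'nvidia_load="YES"' >> /boot/loader.conf

# Switch X from the vesa driver to the binary nvidia driver
sed -i '' 's/"vesa"/"nvidia"/' /etc/X11/xorg.conf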

As a side note, unlike on many Linux distributions, the idea of installing proprietary drivers wasn’t portrayed as shameful and against Free Software ideals. The attitude and design of FreeBSD seems to be that you should be able to do what you want with it.

So after this work, what was the result when I ran startx again? Nearly flawless detection of multiple monitors, a readable desktop and non-balls graphics performance. A quick trip to sudo /usr/local/bin/nvidia-settings fixed the monitor alignment and was quite easy to use. Now to work on the rest of the desktop components to make this a more usable system, and I’ll be well on the way to future moments of rage.




Categories: FreeBSD, Jake B, XFCE, Xorg/X11

Big distributions, little RAM 6

July 9th, 2013 3 comments

It’s that time again: I install the major full-desktop distributions on a limited-hardware machine and report on how they perform. Once again I’ve decided to re-run my previous tests, this time using the following distributions:

  • Fedora 18 (GNOME)
  • Fedora 18 (KDE)
  • Fedora 19 (GNOME)
  • Fedora 19 (KDE)
  • Kubuntu 13.04 (KDE)
  • Linux Mint 15 (Cinnamon)
  • Linux Mint 15 (MATE)
  • Mageia 3 (GNOME)
  • Mageia 3 (KDE)
  • OpenSUSE 12.3 (GNOME)
  • OpenSUSE 12.3 (KDE)
  • Ubuntu 13.04 (Unity)
  • Xubuntu 13.04 (Xfce)

I even happened to have a Windows 7 (64-bit) VM lying around and, while I think you would be a fool to run a 64-bit OS on the limited test hardware, I’ve included it as a sort of benchmark.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.2.16, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between July 1st, 2013 and July 5th, 2013 so your results may not be identical.

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against one another. This time around, however, I’ve changed how things are measured slightly in order to be more accurate. Measurements (on Linux) were taken using the free -m command for memory and the df -h command for disk usage. On Windows I used Task Manager and Windows Explorer.

In addition this will be the first time where I provide the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, there may be some distributions that don’t appear on all of the graphs; for example, because I was using an existing Windows 7 VM I didn’t have a ‘first boot’ result. As always, feel free to run your own tests. Thirdly, you may be asking yourself ‘why do Fedora 18 and 19 both make the list?’ Basically because I had already run the tests for 18 and then 19 happened to be released. Finally, Fedora 19 (GNOME), while included, does not have any data because I simply could not get it to install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap Usage · RAM – Buffers/Cache + Swap]

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap · RAM – Buffers/Cache + Swap]

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

[Graphs: All Data Points · RAM · Buffers/Cache · RAM – Buffers/Cache · Swap · RAM – Buffers/Cache + Swap]

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

[Graph: Install Size]

Conclusion

Once again I will leave the conclusions to you. This time, however, as promised above, I will provide my source data for you to enjoy.

Source Data





An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task, it is however one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to be the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they do differ enough that they are 1) not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard, there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are technologically on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what's needed in a document format.
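
If you're curious what these "collections of XML files inside a ZIP container" actually look like on disk, you can crack one open from a terminal. Here is a rough sketch using unzip (the file name example.odt is just a placeholder; a .docx can be inspected the same way, with word/document.xml taking the place of content.xml):

# List the individual XML parts that make up an OpenDocument file
unzip -l example.odt

# Print the beginning of the part that holds the actual document text
# (content.xml in an .odt; word/document.xml in an OOXML .docx)
unzip -p example.odt content.xml | head -c 500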

What to do?

As discussed above there are two different, some would argue competing, open standards aiming to replace the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However, the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple's Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can't just be the market leader on PCs and expect everyone else to follow your lead anymore. I don't want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that, for me, the conclusion in an ideal world is OpenDocument. For others the choice may very well be Office Open XML and that's fine too – both attempt to solve the same problem, and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let's say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument's .odt is far from problem-free. For most things the conversion process should be as simple as opening the document in its current format and re-saving it as OpenDocument – there are even wizards that will automate this process across a large number of documents. In my experience, however, things are almost never quite that simple. From what I've seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I've come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it's still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.

If, however, you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn't make the jump to OpenDocument. Personally, I'm going to attempt to convert my existing .doc documents to OpenDocument (where possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.
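
For bulk conversions like this you don't even need to open the documents one at a time; recent versions of LibreOffice can run the conversion from the command line. Here is a rough sketch (assuming the libreoffice binary is on your PATH and the old files live in ~/Documents):

# Convert every .doc in ~/Documents to OpenDocument Text (.odt),
# writing the results to a separate folder so the originals are left untouched
mkdir -p ~/Documents/converted
libreoffice --headless --convert-to odt --outdir ~/Documents/converted ~/Documents/*.doc

As noted above, anything with complex formatting (bulleted lists especially) is still worth opening and spot-checking afterwards.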

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.





The apps of KDE 4.10 Part VII: Dragon Player

May 27th, 2013 2 comments

Rounding out this little series I took a look at KDE’s video player of choice: Dragon Player.

Dragon Player

For those of you familiar with similar applications such as VLC, Totem or even Windows Media Player, Dragon Player offers a simple interface on top of quite powerful video playback.

Everyone loves Big Buck Bunny!

Dragon Player’s power comes from Phonon, KDE’s integrated media backend. What this means for the user is that it is compatible with all of the codecs installed on your system. Speaking of codecs, Dragon Player prompts you whenever it doesn’t recognize a new piece of media and offers to automatically search for and install the required codecs. This works very well and allows you to keep your system relatively free of nonsense codecs you’ll never actually use, instead installing what you need as you need it.

For a KDE application, Dragon Player is surprisingly streamlined and doesn’t offer very many configuration options. In fact, almost any other video player has more configuration options than Dragon Player. The only real settings I could find were for changing how the video playback looks:

Video Settings

And that’s it. No, seriously, there isn’t anything else to mention about this application and, believe it or not, that’s a good thing! This program is designed for exactly one thing and it does it well. If you’re looking for a single-purpose video player, and you’re not already a VLC fan, I would highly suggest this as an alternative.

More in this series





The apps of KDE 4.10 Part VI: Calligra Suite

May 24th, 2013 1 comment

LibreOffice? Pfft. OpenOffice? Blah. KOffice? Dead for a while now. Calligra Suite? Now we’re talking!

Calligra Suite

You may be a bit confused as to what Calligra Suite is; in fact, you may not have even heard of it before now. Essentially, Calligra Suite is a fork of the KOffice project from back in 2010 and has since become the de facto group of KDE publishing/office applications, as KOffice isn’t really being developed any more. It consists of the following applications:

For the purposes of this post I’m going to go over the first three (Words, Sheets and Stage), which I think are the most commonly used day-to-day applications.

Calligra Words

You’ve seen one word processor, you’ve seen them all, right? Well, maybe not in this case. Calligra Words has quite a different interface from its contemporaries (even counting the new-ish Microsoft Office ribbon interface in that category).

Take that ribbon!

The first thing you’ll notice is that the majority of the buttons and options are located on the right-hand side of the interface. Initially this seems quite strange, but I suppose it makes perfect sense if you’re working on a large widescreen monitor (as we all should be, right?). As you click on the little tabs they expand to reveal additional categorized options. It is sort of like putting the ribbon interface from Microsoft Office on its side.

Side bar in action

While there is nothing inherently wrong with Calligra Words, there were times when I found it confusing. For instance, there seem to be some places where the application ignores the conventional paradigm for doing something specific, instead opting for its own way with mixed success. A good example of this is formatting the lines of an inserted table. Normally you would simply select the table, go into some format properties window and change it there. Instead, Calligra Words has you select the format you want from the sidebar and then paint it onto the existing table one line at a time. Again, not a big deal if you first learned to edit documents using Calligra Words, but I could easily see people having a difficult time transitioning from Microsoft Office or LibreOffice.

Other things are just strange. For example, the application supports spellcheck and will happily underline words you’ve misspelled, but I couldn’t find an option to run a spellcheck over the whole document. Instead it seems as though you need to hunt through the document manually in order to avoid missing anything. I also had the application crash on me when I attempted to insert a bibliography.

Overall I just get the feeling that Calligra Words is still very much under development and not quite mature enough to be used in everyday life. Perhaps in a few releases this could become a legitimate replacement for some of the other mainstream word processors, but for now I can’t say that I would recommend it beyond those who are curious to see its unique interface.

Calligra Sheets

Like Words, Sheets shares the sidebar interface for manipulating data.

Example balance sheet template

Most of the standard functionality makes an appearance (cell formulas, formatted text, etc.), although once again I’m going to have to focus on the negatives here. Like Words, I found some of the features very confusing. For instance, I tried to make a simple bar chart with two columns’ worth of data (x and y). Instead I ended up with a bar chart plotting both data sets against some arbitrary x axis. Try as I might, I couldn’t force it to do what I wanted. The program also seemed very unstable for me and crashed often. Unfortunately I became so frustrated with it that I just couldn’t dive too deeply into its features.

Calligra Stage

Stage is Calligra Suite’s answer to Microsoft Office’s PowerPoint or LibreOffice’s Impress.

Showing one of the included templates

This is the first application of the three that I think really benefits from having the sidebar; it makes finding what you’re after surprisingly easy and straightforward. The only odd thing I ran into was when adding animation to part of a slide. Again, you need to select the animation and then sort of paint it on, much like you had to do with tables in Words.

Like the rest, I think Stage could use some more development and maturity, but unlike the other two it feels much further along (it didn’t even crash on me once!).

Conclusion

If you can’t read between the lines above, allow me to summarize my feelings this way: Calligra Suite is a solid set of applications, but one that feels very young and very much still under development. That is not exactly the feeling you want when you are working on a business or time-critical document. However, I do like some of the things they’ve started here and look forward to seeing where they take it in the future.

More in this series





The apps of KDE 4.10 Part V: Kopete

May 15th, 2013 2 comments

What does KDE offer for instant communication with your co-workers and friends? Kopete steps up to be your all-in-one IM solution.

Kopete

Kopete provides a KDE-integrated instant messaging experience that aims to reduce the number of separate instant messaging clients you need to run simultaneously in order to stay in touch with your friends. Rather than running one client for Yahoo Messenger, another for Facebook chat and a third for Windows Live Messenger, you can fire up Kopete, add all of your accounts and take advantage of a single unified interface for all of them. This drastically reduces the on-screen clutter.

Kopete supports a lot of different networks!

The process of actually configuring all of these accounts is also very straightforward. In fact, the first time you start Kopete (and every time thereafter that you wish to add a new account) you get a nice little interface that walks you through the process.

Adding a new account

Once through that easy process you are taken to the main Kopete screen, where you can view your online friends and, of course, chat with them.

Main contacts screen

Not that it should come as any surprise to anyone familiar with KDE, but Kopete also supports quite a bit of customization. You can adjust all of the standard settings you would expect (auto-away timeout, ‘now playing…’ song statuses, etc.) as well as the general look and feel of your conversations.

With this much customization you’re sure to find something that works for you

While I don’t have much bad to say about Kopete, I should point out a couple of its more obvious deficiencies. For one, Kopete has no Skype support. Skype is fast becoming one of the most popular instant messaging platforms, and its absence is a bit disappointing.

Secondly, Kopete varies from being just an acceptable, somewhat decent instant messaging client to being a great one, all dependent on which IM network you are using. What I mean by this is that Kopete is designed to be a very generic IM client in order to support as many networks as possible, and that’s fine. However, because of this design choice it rarely excels on networks which handle more than just simple text messages. There are many times when the official client for a given IM network will support many more features than Kopete.

Neither of these should deter you from using Kopete (or at least giving it a try). Like all of the other applications I’ve written about in this series, Kopete brings a KDE feel and integration to your day-to-day applications, and for some people that could be far more worthwhile than having 100% of the features.

Update: as pointed out in the comments this application is actually now known by the name KDE Telepathy. Sorry for the confusion.

More in this series





The apps of KDE 4.10 Part IV: Amarok

April 25th, 2013 No comments

Ready to rock out with KDE’s premier music management application? Let’s rediscover our music with Amarok.

Amarok

I have to start by admitting that I’ve actually run Amarok once or twice in the past, but sadly could never really figure it out. This always bothered me because the people who can figure it out seem to love it. So I made it my mission this time around to really dig into the application and see what all the noise was about (poor pun intended).

 

Rediscover Your Music

Starting with the navigation pane on the left-hand side of the screen, I drilled down into my Local Music collection. For the purposes of testing I just threw two albums into my Music folder.

The navigation panel

Double-clicking Local Music opens up a view into your Music folder that lets you play songs or search through your artists and albums.

Local media list

When you play a song the main portion in the center of the application changes to give you a ton of information about that track.

Automatically pulls lyrics and other information from the web

This is actually a pretty neat feature, but it has the downside of not always being correct. For instance, when I started playing the above song by the ’90s band Fuel, I ended up being shown the Wikipedia page about fuel (i.e. the energy source) and not the correct page about the band.

I don’t think that’s right…

Placing a CD in the computer caused it to appear under Local Media (although under a different section). Importing tracks was very straightforward: simply right-click on the CD and choose Copy to Collection -> Local Collection. You then get to pick your encoding options (which you can customize in depth to fit your needs).

Pick your encoding format and go

For Internet media, Amarok comes loaded with a number of sources, including streaming radio stations, Jamendo, Last.fm, Librivox.org, Magnatune.com, Amazon’s MP3 store and a podcast directory. As with most other media, Amarok also tries to display relevant information about what you’re listening to.

Internet Radio on Amarok

There are loads of other features in Amarok, from its excellent playlist support to its many expandable plugins, but writing about all of them would take all day. Instead I will wrap up here with a few final thoughts.

Is Amarok the best media manager ever made? To some, maybe, but I still find its interface a bit too clunky for my liking. I also noticed that it tended to take up quite a bit of RAM (~220MB at the moment), which puts it on the beefier side of the media manager resource usage spectrum. The amount of information it presents about what you’re currently listening to is impressive, but often when I’m listening to music I’m doing so as a background activity. I don’t foresee a situation where I would be actively watching Amarok in order to benefit from its full potential as a way to ‘rediscover my music’. Still, if only for its deep integration with the KDE desktop, I say give it a try and see if it works for you.

More in this series



