Archive for the ‘Jake B’ Category

Applying updates to Docker and the Plex container

December 7th, 2014

In my last post, I discussed several Docker containers that I’m using for my home media streaming solution. Since then, Plex Media Server has updated to 0.9.11.4 for non-Plex Pass users, and there’s another update if you happen to pay for a subscription. As the Docker container I used (timhaak/plex) was version 0.9.11.1 at the time, I figured I’d take the opportunity to describe how to

  • update Docker itself to the latest version
  • run a shell inside the container as another process, to review configuration and run commands directly
  • update Plex to the latest version, and describe how not to do this
  • perform leet hax: commit the container to your local system, manually update the package, and re-commit and run Plex

Updating Docker

I alluded to the latest version of Docker having features that make it easier to troubleshoot inside containers. Switching to the latest version was pretty simple: following the instructions to add the Docker repository to my system, then running

sudo apt-get update
sudo apt-get install lxc-docker

upgraded Docker to version 1.3.1 without any trouble or need to manually uninstall the previous Ubuntu package.

Run a shell using docker exec

Let’s take a look inside the plex container. The following command starts a bash process so that we can review the filesystem inside the container:

docker exec -t -i plex /bin/bash

You will be dropped into a root prompt inside the plex container. Check out the filesystem: there will be a /config and a /data directory pointing to “real” filesystem locations. You can also use ps aux to review the running processes, or even netstat -anp to see active connections and their associated programs. To exit the shell, type exit or press Ctrl+D – the container will still be running, as docker ps -a from the host system will confirm.
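For example, a quick look around from that root prompt might go like this (assuming the image ships the usual ps and netstat utilities; the paths match the volume mappings used elsewhere in this post):

# Inside the container:
ls /config /data           # the volumes mapped in from the host
ps aux                     # the Plex Media Server processes
netstat -anp | grep 32400  # the listening port forwarded at run time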

Updating Plex in-place: My failed attempt

Different Docker containers will have different methods of performing software updates. In this case, looking at the Dockerfile for timhaak/plex, we see that a separate repository was added for the Plex package – so we should be able to confirm that the latest version is available. This also means that if you destroy your existing container, pull the latest image, then launch a new copy, the latest version of Plex will be installed (generally good practice.)
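To confirm what the repository offers before recreating anything, you can check from a shell inside the container (see the docker exec section above) – a quick sketch; the package name matches the .deb files referenced later in this post:

apt-get update
apt-cache policy plexmediaserver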

But wait – the upstream repository at http://shell.ninthgate.se/packages/debian/pool/main/p/plexmediaserver/ does contain the latest .deb packages for Plex, so can’t we just run an apt-get update && apt-get upgrade?

Well, not exactly. If you do this, the initial process used to run Plex Media Server inside the Docker container (start.sh) gets terminated, and Docker takes down the entire plex container when the initial process terminates. Worse, if you then decide to re-launch things with docker start plex, the new version is incompletely installed (dpkg partial configuration).

So the moral of the story: if you’re trying this at home, the easiest way to upgrade is to recreate your Plex container with the following commands:

docker stop plex

docker rm plex

# The 'pull' process may take a while - it depends on the original repository and any dependencies in the Dockerfile. In this case it has to pull the new version of Plex.
docker pull timhaak/plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex

Once the container is up and running, access http://yourserver:32400/web/ to confirm that Plex Media Server is running. You can check the version number by clicking the gear icon next to your server in the left navigation panel, then selecting Settings.

Hacking the container: commit it and manually update Plex from upstream

If you’re more interested in hacking the current setup, there’s a way to commit your existing Plex image, manually perform the upgrade, and restart the container.

First, make sure the plex container is running (docker start plex) and then commit the container to your local filesystem (replacing username with your preferred username):

docker commit plex username/plex:latest

Then we can stop the container, and start a new instance where bash is the first process:

docker stop plex

docker rm plex

# Replace username with the username you selected above.
docker run -t -i --name="plex" -h plex username/plex:latest /bin/bash

Once inside the new plex container, let’s grab the latest Plex Media Server package and force installation:

curl -O https://downloads.plex.tv/plex-media-server/0.9.11.4.739-a4e710f/plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

dpkg -i plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

# When prompted, select Y to install the package maintainer's versions of files. In my instance, this updated the init script as well as the upstream repository.

Now, we can re-commit the image with the new Plex package. Hit Ctrl+D to exit the bash process, then run:

docker commit plex username/plex:latest

docker rm plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 username/plex /start.sh

# Commit the image again so it will run start.sh if ever relaunched:
docker commit plex username/plex:latest

You’ll also need to adjust your /etc/init/plex.conf upstart script to point to username/plex.
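Wherever that docker run command actually lives – directly in the upstart script, or in a helper script like the ones I keep in /usr/local/bin/docker-start/ (see my last post) – the only change needed is the image name. A sketch:

# /usr/local/bin/docker-start/plex.sh – now referencing the forked image
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 username/plex /start.sh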

The downside of this method is that you’ve now forked the original Plex image locally and will have to repeat this process for future updates. But hey, wasn’t playing around with Docker interesting?




Categories: Docker, Jake B, Plex, Ubuntu

Running a containerized media server with Ubuntu 14.04, Docker, and Plex

November 23rd, 2014

I recently took it upon myself to rebuild a general-purpose home server – installing a new Intel 530 240GB solid-state drive to replace a “spinning rust” drive, and installing a fresh copy of Ubuntu 14.04 now that 14.04.1 has been released and there is much less complaining online.

The “new hotness” that I’d like to discuss has been the use of Docker to containerize various processes. Docker gets a lot of press these days, but the way I see it, it’s a way to ensure that your special snowflake applications and services don’t get the opportunity to conflict with one another. In my setup, I have four containers running: Plex, SABnzbd+, Sonarr and CouchPotato (the start commands for each are listed below).

I like the following things about Docker:

  • Since it’s new, there are a lot of repositories and configuration instructions online for reference.
  • I can make sure that applications like Sonarr/NZBDrone get the right version of Mono that won’t conflict with my base system.
  • As a network administrator, I can ensure that only the necessary ports for a service get forwarded outside the container.
  • If an application state gets messed up, it won’t impact the rest of the system as much – I can destroy and recreate the individual container by itself.

There are some drawbacks though:

  • Because a lot of the images and Dockerfiles out there are community-based, there are some that don’t follow best practices or fall out of an update cycle.
  • Software updates can become trickier if the application is unable to upgrade itself in-place; you may have to pull a new Dockerfile and hope that your existing configuration works with a new image.
  • From a security standpoint, it’s best to verify exactly what an image or Dockerfile does before running it – for example, that it pulls content from official repositories (the docker-plex configuration is guilty of using a third-party repo, for example.)

To get started, on Ubuntu 14.04 you can install a stable version of Docker following these instructions, although the latest version has some additional features like docker exec that make “getting inside” containers to troubleshoot much easier. I was able to get all these containers running properly with the current stable version (1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1). Once Docker is installed, you can grab each of the containers above with a combination of docker search and docker pull, then list the downloaded containers with docker images.
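The search/pull/list flow looks like this, using the Plex image as an example:

docker search plex        # find published images and their star ratings
docker pull timhaak/plex  # download the image layers
docker images             # confirm the image is now available locally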

There are some quirks to remember. On the first run, you’ll need to docker run most of these containers and provide a hostname, container name, ports to forward and shared directories (known as volumes). On all subsequent runs, you can just use docker start $container_name – but I’ll describe a cheap and easy way of turning that command into an upstart service later. I generally save the start commands as shell scripts in /usr/local/bin/docker-start/*.sh so that I can reference them or adjust them later. The start commands I’ve used look like:

Plex
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex
SABnzbd+
docker run -d -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd
Sonarr
docker run -d -h sonarr --name="sonarr" -v /etc/docker/sonarr:/config -v /mnt/nas:/data -p 8989:8989 tuxeh/sonarr
CouchPotato
docker run -d -h couchpotato --name="couchpotato" -e EDGE=1 -v /etc/docker/couchpotato:/config -v /mnt/nas:/data -v /etc/localtime:/etc/localtime:ro -p 5050:5050 needo/couchpotato
These applications have a “/config” and a “/data” shared volume defined. /data points to “/mnt/nas”, which is a CIFS share to a network attached storage appliance mounted on the host. /config points to a directory structure I created for each application on the host in /etc/docker/$container_name. I generally apply chmod 777 permissions to each configuration directory until I find out what user ID the container is writing as, then lock it down from there.
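The lockdown step usually goes something like this – the 1000:1000 values here are placeholders until you check what the container actually writes as:

# See which numeric UID/GID owns the files the container created:
ls -ln /etc/docker/plex
# Then restrict the directory to that owner (example values only):
chown -R 1000:1000 /etc/docker/plex
chmod -R 755 /etc/docker/plex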

For each initial start command, I choose to run the service as a daemon with -d. I also set a hostname with the "-h" parameter, as well as a friendly container name with "--name"; otherwise Docker likes to reference containers with wild adjectives combined with scientists, like “drunk_heisenberg”.

Each of these containers generally has a set of instructions to get up and running, whether it be on Github, the developer’s own site or the Docker Hub. Some, like SABnzbd+, just require that you go to http://yourserverip:8080/ and complete the setup wizard. Plex required an additional set of configuration steps described at the original repository:

  • Once Plex starts up on port 32400, access http://yourserverip:32400/web/ and confirm that the interface loads.
  • Switch back to your host machine, and find the place where the /config directory was mounted (in the example above, it’s /etc/docker/plex). Enter the Library/Application Support/Plex Media Server directory and edit the Preferences.xml file. In the <Preferences> tag, add the following attribute: allowedNetworks="192.168.1.0/255.255.255.0", where the IP address range matches that of your home network. In my case, the entire file looked like:

    <?xml version="1.0" encoding="utf-8"?>
    <Preferences MachineIdentifier="(guid)" ProcessedMachineIdentifier="(another_guid)" allowedNetworks="192.168.1.0/255.255.255.0" />

  • Run docker stop plex && docker start plex to restart the container, then load http://yourserverip:32400/web/ again. You should be prompted to accept the EULA and can now add library locations to the server.

Sonarr needed to be updated (from the NZBDrone branding) as well. From the GitHub README, you can enable in-container upgrades:

[C]onfigure Sonarr to use the update script in /etc/service/sonarr/update.sh. This is configured under Settings > (show advanced) > General > Updates > change Mechanism to Script.

To automatically ensure these containers start on reboot, you can either use restart policies (Docker 1.2+) or write an upstart script to start and stop the appropriate container. I’ve modified the example from the Docker website slightly to stop the container as well:

description "SABnzbd Docker container"
author "Jake"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a sabnzbd
end script
pre-stop exec /usr/bin/docker stop sabnzbd

Copy this script to /etc/init/sabnzbd.conf; you can then copy it to plex.conf, couchpotato.conf and sonarr.conf, changing the container name and description in each. You can then test it by rebooting your system and running “docker ps -a” to ensure that all containers come up cleanly, or running “docker stop $container; service $container start”. If you run into trouble, the upstart logs are in /var/log/upstart/$container_name.log.
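Alternatively, the restart policies mentioned above replace the upstart job entirely – just add a flag to the initial run command (Docker 1.2 or newer), sketched here with the SABnzbd+ container:

# --restart=always brings the container back up after a daemon restart or reboot:
docker run -d --restart=always -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd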

Hopefully this introduction to a media server with Docker containers was thought-provoking; I hope to have further updates down the line for other applications, best practices and how this setup continues to operate in its lifetime.




Categories: Docker, Jake B, Plex, Ubuntu

Initial thoughts about PC-BSD

January 16th, 2014

[Please note: this is a historical post – I’m no longer running *BSD in 2014, and this is a collection of thoughts on its setup in case I decide to return to the operating system. Further posts from me will focus on other Linux experiences.]

So after not too much effort, I’ve gotten PC-BSD to replace my FreeBSD installation and am back up and running. Some minor tips, interesting facts and tweaks:

  • Default filesystem and mountpoints all seem to be ZFS, which would make PC-BSD probably the quickest and easiest way to get a functional desktop environment running with this neat filesystem.
  • To enable Flash playback in Chromium (and I assume Firefox), run
    flashpluginctl on

    from the terminal (under your own user account, not root) and restart the browser. Thanks to the PC-BSD forums for this answer.

  • Enabling SSH server: add sshd_enable="YES" to /etc/rc.conf, then run /etc/rc.d/sshd start. You’ll also need to allow TCP port 22 inbound through the firewall in the PC-BSD Control Panel/Networking/Firewall Manager application.
  • Sound worked out of the box without any driver finagling, and is a much simpler setup:

 

From the PC-BSD Control Panel, a very simple way to select the default sound device.

I’m assuming the situation would have been better than the Kubuntu trials and tribulations with PulseAudio – all the possible nVidia HDMI output ports are listed in this dropdown list, as well as my onboard sound and USB/stereo audio adapter. In Phonon, the list is much simpler:

No greyed-out cards or “missing sink”s – Phonon just shows the default sound card from PC-BSD.

 

So far this has been a pretty great introductory experience – the desktop is polished, KDE integration appears to work well, and manual configuration has been limited to what I’d consider more advanced functionality like the SSH daemon.




Categories: Flash, Jake B, PC-BSD

Falling off the FreeBSD bandwagon

August 25th, 2013

I’ll have to admit that during the previous week or so, I haven’t been able to exclusively use FreeBSD at home or Linux as a workstation in the office. Kayla and I have still been using Kubuntu (now with improved PulseAudio support) on the basement machine, and that’s been working quite well for both Netflix and media files stored over NFS. Even Dragon Player, the default KDE association for .avi and .mkv files, is quite reasonable for lightweight playback and has given us no issues.

Dragon Player: here be dragons.

It’s a combination of things that have contributed to my slide back to Windows/OS X. First has been time at the office. When there are several urgent projects, it’s significantly easier to use the tools and infrastructure that are already set up on a Windows partition by virtue of Group Policy or already-existing tools. Wasting several hours because you can’t access a DFS share that would take one click from Outlook on Windows is unproductive.

What’s more, my choice of Arch Linux meant that there were several “rough around the edges” spots where I was missing packages or things just weren’t as polished as something like Fedora or Ubuntu. Font smoothing, for example, wasn’t quite what I was expecting and replacing/editing complicated XML files was going to be very frustrating. Arch seems very powerful and customizable, but that’s not something I can justify when there is a corporate-provided Ubuntu image available for install at the office.

FreeBSD has been fairly standard, to say the least. It supports the usual assortment of desktop applications, but the missing 20% of things that I do under Windows start to really show after a few weeks. My large Steam library sitting on another drive becomes almost worthless, and tasks such as scanning a document are also painful – Brother, for example, does a great job of shipping OS X and Linux (.deb and .rpm-enclosed) drivers but when it comes down to just needing a PDF, it’s way easier to grab the nearest Windows laptop and get things done.

What am I going to try next? After some review, I will be installing PC-BSD 9.1 (based on FreeBSD) and seeing if there’s a more polished experience available out of the box. I’m also going to be reviewing and polishing some of my GitHub-hosted scripts for BSD compatibility.




Categories: FreeBSD, Jake B

Listen up, Kubuntu: the enraging tale of sound over HDMI

August 4th, 2013

Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:

The traditional spiderweb of complexity involved in Linux audio.

To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:

Approximate diagram of display and audio output involved from Kubuntu machine

As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.

In the KDE environment, sound is managed by a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends for ALSA and PulseAudio. It essentially communicates with the highest-level sound output system installed that it supports. When you make a change in Phonon on a default Kubuntu install, it appears to talk to PulseAudio, which in turn changes the necessary ALSA settings. Sort of complicated, but I guess it handles the idea that multiple applications can play audio without tying up the sound card – which has not always been the case with Linux.

In my traditional experience with the GNOME and Unity interfaces, it always seems like KDE took its own path with audio that wasn’t exactly standard. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily corresponding to multiple channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.

Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.
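The trial-and-error loop boils down to something like this, with the device numbers taken from the aplay -l output above:

# Try each nVidia HDMI device until one actually produces sound:
for N in 3 7 8 9; do
    echo "Testing plughw:1,$N"
    aplay -D plughw:1,$N /usr/share/sounds/alsa/Front_Center.wav
done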

At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output presented in Phonon was actually card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools provided that showed the only possible “sink” was card 1,3.
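Those tools were likely the standard PulseAudio command-line utilities – either of these will list the sinks PulseAudio knows about:

pactl list short sinks
# or, more verbosely:
pacmd list-sinks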

The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, presenting me with the following list of possible devices read directly from ALSA. I bumped the "hw:1,7" card to the top and also quit the system tray version of Amarok.

A list of all the raw ALSA devices detected by KDE/Phonon after removing PulseAudio.

Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.

At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.





Getting FreeBSD up and running with X.org and nVidia drivers

July 27th, 2013

The Experiment has officially begun, and with that I’ve gone through the FreeBSD installation process. The actual install was fairly uneventful: apart from the fact that FreeBSD defaults to a different base filesystem and uses its own partitioning identifiers, sysinstall did the trick without the same bootloader issues that Dave experienced.

The first major difference, coming from something like Ubuntu or Debian, is that FreeBSD uses a combination of both source packages and already-prepared binary packages. Ostensibly the binary packages are for the most popular software and source packages are provided for convenience when there is no dedicated package build/maintainer team. In practice, depending on what you need to install, there are several possible locations and methods:

  • As a package, which is the binary compiled version. Available with the pkg_add -r option that acts like apt-get install on Ubuntu. The next version of this is pkgng, but I haven’t had much luck with it so far.
  • As a port, the source version of the program with FreeBSD hints to make the software compile. There are stubs in /usr/ports for a wide variety of software, and the “make install clean” process performs what seems to be a level of dependency resolution as well.
  • From source directly, where you download and compile the package directly from its creator’s website; I’m avoiding this unless absolutely necessary.
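As a rough sketch, the first two methods look like this (using the xorg meta-package as an example):

# Binary package, similar to apt-get install:
pkg_add -r xorg

# Port, compiled locally from the ports tree:
cd /usr/ports/x11/xorg
make install clean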

As a result, I just end up using Google to find the package and then installing using the suggested command line. Hilariously enough, when looking for “take screenshot FreeBSD”, the suggested package was called scrot. Here’s that result:

My FreeBSD/xfce4 desktop taken with 'scrot'.

In order to get the desktop working, I had to fight a bit with X.org. Reading the documentation was incredibly helpful in getting my mouse and keyboard to work – I needed to add hald and dbus to the /etc/rc.conf file:

hald_enable="YES"
dbus_enable="YES"

Once that was set up, I then embarked on the process of getting my monitor to display at native 2560×1600 resolution. First, I was stymied by the Xorg -configure process, which produced a “number of created screens does not match number of detected devices” error but still generated a configuration file. Copying that file into /etc/X11/xorg.conf and running startx subsequently gave a “no screens detected” message.

A number of suggestions online related to adding a preferred resolution as a “Modes” line to the Screen section in this file, but there was no change. What eventually worked was changing the Driver line from nv to vesa – clearly my GeForce 660 isn’t supported by the default open-source nVidia driver.
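The change is in the Device section of /etc/X11/xorg.conf – something along these lines, with whatever Identifier the generated file already contains:

Section "Device"
    Identifier "Card0"
    Driver     "vesa"   # the generated file said "nv"
EndSection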

As a result, it was necessary to look at installing the closed-source binary nVidia driver. The first stumbling block in this process was during the make install clean command, where I was first told I’d have to install the FreeBSD kernel source. Using this forum article and adjusting the URL to reference 9.1-RELEASE, I successfully obtained and decompressed the code to /usr/src.

The next problem was with my choice of setup options. Initially during the make install process, I selected the default options, and was now blocked at:

===> Installing for nvidia-driver-304.60
===> nvidia-driver-304.60 depends on file: /compat/linux/etc/fedora-release - not found
===> Verifying install for /compat/linux/etc/fedora-release in /usr/ports/emulators/linux_base-f10
===> linux_base-f10-10_5 linuxulator is not (kld)loaded.
*** [install] Error code 1

Stop in /usr/ports/emulators/linux_base-f10.
*** [run-depends] Error code 1

Stop in /usr/ports/x11/nvidia-driver.
*** [install] Error code 1

Stop in /usr/ports/x11/nvidia-driver.

There didn’t seem to be a good way to get back to the options screen to deselect the Linux compatibility mode, and make clean didn’t help the situation. Poking around, I was able to reselect the correct options (remove Linux, and also ensure not to select the FreeBSD AGP option) by running make config. A make install clean command after that, and I could continue to follow the rest of the instructions – creating /boot/loader.conf and adding nvidia_load="YES", editing xorg.conf to set the Driver to nvidia, and then it was time for a reboot.
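Collected in one place, those configuration changes look like this (the Identifier is whatever your generated xorg.conf already uses):

# /boot/loader.conf – load the proprietary module at boot:
nvidia_load="YES"

# /etc/X11/xorg.conf – the Device section now reads:
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
EndSection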

As a side note, unlike other Linux distributions, the idea of installing proprietary drivers wasn’t portrayed as shameful and against Free Software ideals. The attitude and design of FreeBSD seems to be that you should be able to do what you want with it.

So after this work, what was the result when I ran startx again? Nearly flawless detection of multiple monitors, a readable desktop and non-balls graphics performance. A quick trip to sudo /usr/local/bin/nvidia-settings fixed the monitor alignment and was quite easy to use. Now to work on the rest of the desktop components to make this a more usable system, and I’ll be well on the way to future moments of rage.




Categories: FreeBSD, Jake B, XFCE, Xorg/X11

Linux: does it work for workers who work in the workplace?

July 27th, 2013

In the ramp-up to the 2013 Linux Experiment, I got ambitious and decided to try not only FreeBSD as my official entry, but to install one or more versions of Linux at the office (so take that, anyone who says “Well FreeBSD isn’t Linux!” I’m aware.)

There are a number of reasons I wanted to check out Linux in an office environment, and was able to consider this secondary experiment:

  • Most of my work is Linux-based already. We have moved away from Windows-based systems fairly drastically since 2011, and there is minimal Windows administration effort. The much more common presence of professionally managed Windows virtual machines means that I can use tools like rdesktop if a Windows UI is absolutely required. Having a built-in SSH client is one of the reasons I picked a MacBook Pro for a corporate laptop, and Linux distributions offer the same ssh packages.
  • I have the good fortune to have multiple corporate-issued systems available on short notice. If the experiment goes poorly, I’m only down for ten minutes to reconnect a Windows or OSX-based system. I can then resume my remote tasks through the diligent use of screen and multiple SSH tunnels.
  • Another point in favour is that most IT support is now self-directed for software issues; there is a large (and growing) Linux user community internally and corporate documentation now tends to indicate proper server names and connection information rather than “just use Outlook”.
  • Finally, there’s an easy way to back out if something goes wrong – it’s possible to reimage a laptop and rejoin it to the corporate domain without engaging technical support. I don’t keep files locally and my key configuration files are all backed up on a remote Git server, so getting back to Windows 7 wouldn’t be too hard at all.

Hopefully with this adventure I’ll be better able to contribute internally to the Linux user community and, appropriately redacted, share the trials and tribulations of running Linux (mostly) full time in the workplace. Wish me luck!





Airing of grievances: in which upgrading Ubuntu wreaks havoc

February 24th, 2013

I’ve had a few nasty experiences this week with Linux and figured I’d vent here. Unlike my previous efforts with Linux From Scratch and Gentoo, my complaints this time around are related to upgrading Ubuntu.

Ubuntu 10.04 to 12.04: Save yourself the trouble

At this point the current Ubuntu LTS release (12.04) is my preferred distribution to work with: it has become widespread enough that troubleshooting and previous solutions online are easy to locate. In a professional capacity, I also maintain systems that are still on 8.04 LTS (supported until April 2013, so we have to be pretty aggressive about replacing them) or 10.04 LTS (good until April 2015).

I attempted to complete two upgrades from the 10.04 release this week to 12.04 – one 10.04 LTS “desktop” installation, and one 10.04 LTS headless server installation. Both were virtual machines running under VMWare ESXi, but neither had given me any trouble during normal use.

Canonical’s updater process (the wrapper around dist-upgrade) appears to be pretty slick; it gives you appropriate warnings, attempts to start an SSH daemon as a fallback mechanism and starts on its merry way to download the necessary packages to bring your system completely up to date. On my 10.04 desktop VM, the installer fell apart completely during the package replacement/removal/installation sequence. I was left with two nasty message boxes: one advising that my system was now in a broken state, and another filled entirely with rectangular, unprintable characters.

To put it bluntly, I was not amused, but it wasn’t a critical system and I was content to replace it with a fresh 12.04 installation rather than waste additional time troubleshooting with apt or dpkg. Strike one for the upgrader.

At least the server came back up!

Next on the upgrade schedule was the 10.04 server VM. Install, package replacement and reboot went fine, but I had several custom PPAs installed to support development of XenonMKV (Github page) – specifically ppa:krull/deadsnakes to add Python 2.7 to Ubuntu 10.04.

Python 2.7 still worked when the server came back up, and all my usual tools of choice like SABnzbd+, SickBeard and CouchPotato were still functional.

For some reason, though, I’d gotten it into my head this evening to check out Mezzanine as a potential WordPress replacement. Mezzanine uses Django, a Python Web framework, and the list of supported features is pretty encompassing.

Sidebar: Django and mod_wsgi – complicated enough?

One of the most irritating things from a system administration point of view is getting Web applications to run in a standard server environment – typically a Linux base system and Apache or nginx to serve content. I suppose I’ve been spoiled with how easy it is to get PHP-based sites up and running these days in that configuration by adding an Apache module through apt. A lot of new Web app frameworks come with their own small webservers for development and testing, but generally their creators recommend that when you’re ready to put your site live, that the product run under a well-known Web or application server.

The Django folks recommend using mod_wsgi in their documentation, which in and of itself really just says “RTFM for mod_wsgi and then you’ll have a much better idea of how to do this.” I had to go poking around on Google for the installation article since there are some broken links, but okay, it’s an Apache module with a small bit of configuration (even though a simple walkthrough in the Django documentation would go a long way to making deployment easier.) This is where I ran into my dependency/PPA problem on Ubuntu 10.04.
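For context, had the install succeeded, the deployment that the Django/mod_wsgi documentation describes amounts to a few lines of Apache configuration – a sketch with hypothetical paths for a Mezzanine project:

# /etc/apache2/sites-available/mezzanine (paths are examples only)
WSGIScriptAlias / /srv/mezzanine/project/wsgi.py
WSGIPythonPath /srv/mezzanine
<Directory /srv/mezzanine/project>
    <Files wsgi.py>
        Order deny,allow
        Allow from all
    </Files>
</Directory>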

I suppose I’ve screwed the pooch…

Running the suggested command, sudo apt-get install libapache2-mod-wsgi, I got the following:

The following packages have unmet dependencies:
libapache2-mod-wsgi : Depends: libpython2.7 (>= 2.7) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Backtracking, I then found out why the library wasn’t going to get installed:


The following packages have unmet dependencies:
libpython2.7 : Depends: python2.7 (= 2.7.3-0ubuntu3.1) but 2.7.3-2+lucid1 is to be installed

Aha! The Python installation from the PPA for Lucid – 10.04 – was installed and acting as the 2.7 package. Since the newly-upgraded Ubuntu 12.04 uses Python 2.7 as a dependency for a good portion of the default applications, I couldn’t just purge or uninstall it, and my attempts to force a reinstallation all ended in:


Reinstallation of python2.7 is not possible, since it cannot be downloaded.
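For anyone diagnosing something similar, apt-cache policy shows at a glance which version is installed and where it came from:

# Shows the installed version (here the PPA's 2.7.3-2+lucid1) versus
# the candidate from the Ubuntu repositories:
apt-cache policy python2.7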

Rebuild?

At this point it looks like I’ll have to rebuild the server VM as well, but if any readers have any bright ideas on fixing this dependency hell – please comment with your suggestions!




Categories: God Damnit Linux, Jake B, Ubuntu

Linux from Scratch: I’ve had it up to here!

November 27th, 2011

As you may be able to tell from my recent, snooze-worthy technical posts about compilers and makefiles and other assorted garbage, my experience with Linux from Scratch has been equally educational and enraging. Like Dave, I’ve had the pleasure of trying to compile various desktop environments and software packages from scratch, into some god-awful contraption that will let me check my damn email and look at the Twitters.

To be clear, when anyone says I have nobody to blame but myself, that’s complete hokum. From the beginning, this entire process was flawed. The last official LFS LiveCD has a kernel that’s enough revisions behind to cause grief during the setup process. But I really can’t blame the guys behind LFS for all my woes; their documentation is really well-written and explains why you have to pass fifty --do-not-compile-this-obscure-component-or-your-cat-will-crap-on-the-rug arguments.

Patch Your Cares Away

CC attribution licensed from benchilada






Building glibc for LFS from Ubuntu by replacing awk

November 23rd, 2011

If you run into the following error trying to build LFS from a Ubuntu installation:


make[1]: *** No rule to make target `/mnt/lfs/sources/glibc-build/Versions.all', needed by `/mnt/lfs/sources/glibc-build/abi-versions.h'. Stop.

The mawk utility, installed with Ubuntu and symlinked to /usr/bin/awk by default, does not properly handle the regular expressions in this package. Perform the following commands:


# apt-get install gawk
# rm -f /usr/bin/awk
# ln -snf /usr/bin/gawk /usr/bin/awk
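
Afterwards, a quick sanity check – the first line of output should now report GNU Awk rather than mawk:

# awk --version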

Then you’re just a make clean; ./configure --obnoxious-dash-commands; make; make install away from success.



