Archive for the ‘Ubuntu’ Category

Applying updates to Docker and the Plex container

December 7th, 2014

In my last post, I discussed several Docker containers that I’m using for my home media streaming solution. Since then, Plex Media Server has updated to 0.9.11.4 for non-Plex Pass users, and there’s another update if you happen to pay for a subscription. As the Docker container I used (timhaak/plex) was version 0.9.11.1 at the time, I figured I’d take the opportunity to describe how to

  • update Docker itself to the latest version
  • run a shell inside the container as another process, to review configuration and run commands directly
  • update Plex to the latest version – and how not to do it
  • perform leet hax: commit the container to your local system, manually update the package, and re-commit and run Plex

Updating Docker

I alluded to the latest version of Docker having features that make it easier to troubleshoot inside containers. Switching to the latest version was pretty simple: following the instructions to add the Docker repository to my system, then running

sudo apt-get update
sudo apt-get install lxc-docker

upgraded Docker to version 1.3.1 without any trouble or need to manually uninstall the previous Ubuntu package.
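For the record, adding the repository at the time looked roughly like the following – the key ID and repo line are from Docker's then-current docs, so treat this as a sketch and defer to the official instructions:

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
echo "deb https://get.docker.com/ubuntu docker main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update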

Run a shell using docker exec

Let’s take a look inside the plex container. Using the following command will start a bash process so that we can review the filesystem on the container:

docker exec -t -i plex /bin/bash

You will be dropped into a root prompt inside the plex container. Check out the filesystem: there will be a /config and a /data directory pointing to "real" filesystem locations. You can also use ps aux to review the running processes, or even netstat -anp to see active connections and their associated programs. To exit the shell, type exit or hit Ctrl+D – the container itself will still be running, as docker ps -a from the host system will confirm.
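A quick inspection session might look something like this (paths as mounted by the timhaak/plex image):

ls /config /data            # the host directories mapped into the container
ps aux                      # the running Plex processes, plus our bash shell
netstat -anp | grep 32400   # confirm Plex is listening on its usual port
exit                        # leave the shell; the container keeps running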

Updating Plex in-place: My failed attempt

Different Docker containers will have different methods of performing software updates. In this case, looking at the Dockerfile for timhaak/plex, we see that a separate repository was added for the Plex package – so we should be able to confirm that the latest version is available. This also means that if you destroy your existing container, pull the latest image, then launch a new copy, the latest version of Plex will be installed (generally good practice).

But wait – the upstream repository at http://shell.ninthgate.se/packages/debian/pool/main/p/plexmediaserver/ does contain the latest .deb packages for Plex, so can’t we just run an apt-get update && apt-get upgrade?

Well, not exactly. If you do this, the initial process used to run Plex Media Server inside the Docker container (start.sh) gets terminated, and Docker takes down the entire plex container when the initial process terminates. Worse, if you then decide to re-launch things with docker start plex, the new version is incompletely installed (dpkg partial configuration).

So the moral of the story: if you’re trying this at home, the easiest way to upgrade is to recreate your Plex container with the following commands:

docker stop plex

docker rm plex

# The 'pull' process may take a while - it depends on the original repository and any dependencies in the Dockerfile. In this case it has to pull the new version of Plex.
docker pull timhaak/plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex

Once the container is up and running, access http://yourserver:32400/web/ to confirm that Plex Media Server is running. You can check the version number by clicking the gear icon next to your server in the left navigation panel, then selecting Settings.

Hacking the container: commit it and manually update Plex from upstream

If you’re more interested in hacking the current setup, there’s a way to commit your existing Plex image, manually perform the upgrade, and restart the container.

First, make sure the plex container is running (docker start plex) and then commit the container to your local filesystem (replacing username with your preferred username):

docker commit plex username/plex:latest

Then we can stop the container, and start a new instance where bash is the first process:

docker stop plex

docker rm plex

# Replace username with the username you selected above.
docker run -t -i --name="plex" -h plex username/plex:latest /bin/bash

Once inside the new plex container, let’s grab the latest Plex Media Server package and force installation:

curl -O https://downloads.plex.tv/plex-media-server/0.9.11.4.739-a4e710f/plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

dpkg -i plexmediaserver_0.9.11.4.739-a4e710f_amd64.deb

# When prompted, select Y to install the package maintainer's versions of files. In my instance, this updated the init script as well as the upstream repository.

Now, we can re-commit the image with the new Plex package. Hit Ctrl+D to exit the bash process, then run:

docker commit plex username/plex:latest

docker rm plex

# Customize this command with your config and data directories.
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 username/plex /start.sh

# Commit the image again so it will run start.sh if ever relaunched:
docker commit plex username/plex:latest

You’ll also need to adjust your /etc/init/plex.conf upstart script to point to username/plex.

The downside of this method is that you've now forked the original Plex image locally, and you'll have to repeat this process for future updates. But hey, wasn't playing around with Docker interesting?





Running a containerized media server with Ubuntu 14.04, Docker, and Plex

November 23rd, 2014

I recently took it upon myself to rebuild a general-purpose home server – installing a new Intel 530 240GB solid-state drive to replace a "spinning rust" drive, and installing a fresh copy of Ubuntu 14.04 now that 14.04.1 has been released and there is much less complaining online.

The "new hotness" that I'd like to discuss is the use of Docker to containerize various processes. Docker gets a lot of press these days, but the way I see it, it's a way to ensure that your special snowflake applications and services don't get the opportunity to conflict with one another. In my setup, I have four containers running:

  • Plex Media Server (timhaak/plex)
  • SABnzbd+ (timhaak/sabnzbd)
  • Sonarr (tuxeh/sonarr)
  • CouchPotato (needo/couchpotato)

I like the following things about Docker:

  • Since it's so popular right now, there are a lot of repositories and configuration instructions online for reference.
  • I can make sure that applications like Sonarr/NZBDrone get the right version of Mono that won’t conflict with my base system.
  • As a network administrator, I can ensure that only the necessary ports for a service get forwarded outside the container.
  • If an application state gets messed up, it won’t impact the rest of the system as much – I can destroy and recreate the individual container by itself.

There are some drawbacks though:

  • Because a lot of the images and Dockerfiles out there are community-based, there are some that don’t follow best practices or fall out of an update cycle.
  • Software updates can become trickier if the application is unable to upgrade itself in-place; you may have to pull a new Dockerfile and hope that your existing configuration works with a new image.
  • From a security standpoint, it's best to verify exactly what an image or Dockerfile does before running it – checking, for instance, that it pulls content from official repositories (the docker-plex configuration is guilty of using a third-party repo).

To get started, on Ubuntu 14.04 you can install a stable version of Docker following these instructions, although the latest version has some additional features like docker exec that make “getting inside” containers to troubleshoot much easier. I was able to get all these containers running properly with the current stable version (1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1). Once Docker is installed, you can grab each of the containers above with a combination of docker search and docker pull, then list the downloaded containers with docker images.
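For example, fetching the Plex image goes something like:

docker search plex       # find candidate images and their ratings
docker pull timhaak/plex # download the image referenced below
docker images            # list everything downloaded so far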

There are some quirks to remember. On the first run, you’ll need to docker run most of these containers and provide a hostname, box name, ports to forward and shared directories (known as volumes). On all subsequent runs, you can just use docker start $container_name – but I’ll describe a cheap and easy way of turning that command into an upstart service later. I generally save the start commands as shell scripts in /usr/local/bin/docker-start/*.sh so that I can reference them or adjust them later. The start commands I’ve used look like:

Plex:
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex

SABnzbd+:
docker run -d -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd

Sonarr:
docker run -d -h sonarr --name="sonarr" -v /etc/docker/sonarr:/config -v /mnt/nas:/data -p 8989:8989 tuxeh/sonarr

CouchPotato:
docker run -d -h couchpotato --name="couchpotato" -e EDGE=1 -v /etc/docker/couchpotato:/config -v /mnt/nas:/data -v /etc/localtime:/etc/localtime:ro -p 5050:5050 needo/couchpotato

These applications each have a /config and a /data shared volume defined. /data points to /mnt/nas, which is a CIFS share to a network-attached storage appliance mounted on the host. /config points to a directory structure I created for each application on the host in /etc/docker/$container_name. I generally apply chmod 777 permissions to each configuration directory until I find out what user ID the container is writing as, then lock it down from there.
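A quick way to find that user ID is to check the numeric owner of the files the container creates, then lock things down accordingly (a sketch – the UID/GID here is only an example):

ls -ln /etc/docker/plex                  # -n shows numeric UID/GID instead of names
sudo chown -R 797:797 /etc/docker/plex   # substitute the UID/GID you actually observe
sudo chmod -R 755 /etc/docker/plex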

For each initial start command, I choose to run the service as a daemon with -d. I also set a hostname with the -h parameter, as well as a friendly container name with --name; otherwise Docker likes to reference containers with wild adjectives combined with scientists, like "drunk_heisenberg".

Each of these containers generally has a set of instructions to get up and running, whether it be on Github, the developer’s own site or the Docker Hub. Some, like SABnzbd+, just require that you go to http://yourserverip:8080/ and complete the setup wizard. Plex required an additional set of configuration steps described at the original repository:

  • Once Plex starts up on port 32400, access http://yourserverip:32400/web/ and confirm that the interface loads.
  • Switch back to your host machine, and find the place where the /config directory was mounted (in the example above, it's /etc/docker/plex). Enter the Library/Application Support/Plex Media Server directory and edit the Preferences.xml file. In the <Preferences> tag, add the following attribute: allowedNetworks="192.168.1.0/255.255.255.0", where the IP address range matches that of your home network. In my case, the entire file looked like:

    <?xml version="1.0" encoding="utf-8"?>
    <Preferences MachineIdentifier="(guid)" ProcessedMachineIdentifier="(another_guid)" allowedNetworks="192.168.1.0/255.255.255.0" />

  • Run docker stop plex && docker start plex to restart the container, then load http://yourserverip:32400/web/ again. You should be prompted to accept the EULA and can now add library locations to the server.

Sonarr needed to be updated (from the NZBDrone branding) as well. From the GitHub README, you can enable in-container upgrades:

[C]onfigure Sonarr to use the update script in /etc/service/sonarr/update.sh. This is configured under Settings > (show advanced) > General > Updates > change Mechanism to Script.

To automatically ensure these containers start on reboot, you can either use restart policies (Docker 1.2+) or write an upstart script to start and stop the appropriate container. I’ve modified the example from the Docker website slightly to stop the container as well:

description "SABnzbd Docker container"
author "Jake"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
  /usr/bin/docker start -a sabnzbd
end script
pre-stop exec /usr/bin/docker stop sabnzbd

Copy this script to /etc/init/sabnzbd.conf; you can then copy it to plex.conf, couchpotato.conf and sonarr.conf, changing the container name and title in each (see the sketch below). You can then test it by rebooting your system and running "docker ps -a" to ensure that all containers come up cleanly, or by running "docker stop $container; service $container start". If you run into trouble, the upstart logs are in /var/log/upstart/$container_name.log.
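Cloning the job for the other containers can be as simple as the following (a sketch – double-check the description line in each result):

sudo cp /etc/init/sabnzbd.conf /etc/init/plex.conf
sudo sed -i 's/sabnzbd/plex/g; s/SABnzbd/Plex/g' /etc/init/plex.conf
# repeat for couchpotato.conf and sonarr.conf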

Hopefully this introduction to a media server with Docker containers was thought-provoking; I hope to have further updates down the line for other applications, best practices and how this setup continues to operate in its lifetime.





How to set a static IP address on Ubuntu 14.04 server (and others)

September 16th, 2014

This assumes you want to set a static IP address on the network device eth0.

Open up the interfaces file

sudo nano /etc/network/interfaces

and remove or comment out the line that says

iface eth0 inet dhcp

then add the following lines in its place:

iface eth0 inet static
address [static IP address, e.g. 192.168.1.123]
netmask [e.g. 255.255.255.0]
network [e.g. 192.168.1.0]
broadcast [e.g. 192.168.1.255]
gateway [e.g. 192.168.1.1]
dns-nameservers [e.g. 8.8.8.8]
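Filled in with the sample values above, the finished stanza looks like this:

iface eth0 inet static
    address 192.168.1.123
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8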

Save the file and reboot the server. On some systems you may also need to update /etc/resolv.conf and /etc/hosts.





Ubuntu 14.04 VNC woes? Try this!

April 28th, 2014

If, like me, you've recently upgraded to Ubuntu 14.04 only to find that, for whatever reason, you can no longer VNC to that machine (either from Windows or even an existing Linux install), have no fear: I've got the fix for you!

Simply open up a terminal and run the following line:

gsettings set org.gnome.Vino require-encryption false

Obviously, if you use VNC encryption you may not want to do this, but if you're like me and only use VNC on the local network, it should be safe enough to disable.
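To check the current value first, or to turn encryption back on later:

gsettings get org.gnome.Vino require-encryption
gsettings set org.gnome.Vino require-encryption true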





A tale of a gillion installs

January 21st, 2014

Install number one: LMDE 201303.  I was hoping for the best of both worlds, but I got driver issues instead.  LMDE has known ATI proprietary driver install issues.  I followed the Mint instructions and got it working, then got a blank screen after too much tinkering.  I was surprised that LMDE had this problem since Debian doesn't, and LMDE should be a more polished version of Debian.  This wasn't a big deal, but I decided to give Debian a chance.

Install number two: debian stable (7.3).  The debian website has a convoluted maze of installation links, but it's still fairly easy to find an ISO for the stable version you need.  I installed from the live ISO using a USB key.  The installation and ATI driver update went smoothly, and I thought all was well at first.  I soon realized that about 50% of reboots failed; the audio driver was the culprit.  I installed the latest driver from Realtek/ALSA and it sort of worked, but I was still getting some crap from dmesg and the audio would crackle with some files.

LMDE.  I live booted LMDE to see if the same issue existed there and it did.

Time for Mint 16.  As expected, everything worked.  Man, I really wish Ubuntu hadn't chosen the dark side – their OS is really good.  All of these distros use ALSA audio drivers, so why is Ubuntu the only one that works?  Kernel versions:

debian stable (7.3):
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.24.

Mint 16:
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version k3.11.0-12-generic.
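(The ALSA version string on Mint is effectively the kernel build; you can also read the running kernel version directly:)

uname -r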

One more thing to check.  What kernel version is the real debian testing “jessie” using:

http://packages.debian.org/testing/kernel/linux-image-3.12-1-amd64

LMDE 201303 = 3.2
debian stable 7.3 = 3.2
Mint 16 = 3.11
debian testing “jessie - Jan 2014” = 3.12!

I was determined to try debian testing before settling for Mint.  I tried a netinstall from USB key, which killed my PC's grub bootloader.  The debian stable live ISO USB key decided to stop working as well.  I finally got a real DVD debian stable install to work, changed the repositories to point to "jessie" and upgraded.  I was very surprised to see this worked!  I'm having some problems with bash, but all of my day-to-day software is up and running.  Nice.
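The repository switch itself boiled down to something like this (a sketch, assuming the default wheezy entries in sources.list):

sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list
sudo apt-get update
sudo apt-get dist-upgrade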

TL;DR: LMDE was using an old kernel so I needed the real debian testing (jessie) to solve my driver problems.

So many flavours – with bonus privacy rant!

January 21st, 2014

It's interesting reading the old Linux Experiment first posts, when people were contemplating which distro to install.  It's been 4.5 years since then and the linux world has evolved.  Most noticeably, no one was talking about Mint back then!

I was considering three distros for my home PC dual boot:

  1. Debian
  2. LMDE
  3. Mint

I wanted something in the debian family since it seems to be receiving, by far, the most attention.  I expect this also means it gets the most activity and updates.  Ubuntu would probably work the best out of the box, but as you probably already know:

https://en.wikipedia.org/wiki/Unity_%28user_interface%29#Privacy_controversy

Ubuntu's privacy issues are a deal breaker of course, but they also made me question Mint.  I don't want to support Ubuntu, and I think using Mint would indirectly do that.  Also, Mint does have some minor default search engine sketchiness going on.  I realize that these developers need funding, but I don't think selling their users' stats or usage is the way to do it.  I think donations are the way to go, and they seem to be working for Wikimedia.  Developing non-essential, non-related commercial software in parallel with the OS might be another alternative… hmm, sounds like a slippery slope.

The plan was: Try LMDE first, Debian stable if more stability is needed, and Mint if I got to the point that I just wanted things to work.  Results to follow!

TL;DR:  I planned to install LMDE or Debian, since Ubuntu wants to track me.

And I thought this would be easy…

September 22nd, 2013

Some of you may remember my earlier post about contemplating an upgrade from Windows Home Server (Version 1) to a Linux alternative. Since then, I have decided the following:

Amahi isn't worth my time

This conclusion was reached after a fruitless install of the latest Amahi 7 release on the 500 GB 'system' drive included with the EX470. After backing up the Windows Home Server to a single external 2 TB drive (talk about nerve-wracking!), I popped the drive into a spare PC and installed Amahi with the default options.

[Image: "ffuu" rage face. No, I'm not 13. Yes, this image accurately reflects my frustrations.]

Moving the drive back into the EX470 yielded precisely zero results, no matter what I tried – the machine would not respond to a ‘ping’ command, and since I’ve opted to try and do this without a debug board, I don’t even have VGA to tell me what the hell is going on. So, that’s it for Amahi.

When all else fails, Ubuntu

After deciding that I really didn’t feel like a repeat of my earlier Fedora experiment, I decided to try out the Linux ‘Old Faithful’ as it were – Ubuntu 12.04 LTS. I opted for the LTS version due to – well, you know – the ‘long-term support’ deal.

Oh, and I upgraded my storage (new 1 TB system drive not shown, and I apologize for the potato-quality image):

[Image: the new 3 TB drives. The only kind of 'TB' I like. Not tuberculosis.]

Following from the earlier Amahi instructions, I popped the primary 1 TB drive into a spare machine and allowed the Ubuntu installer to do its thing. Easy enough! From there, I installed the following two additional items (having to add an additional repository for the latter):

  • openssh-server

This allows me to easily control the machine through SSH, and – as I understand it – is pretty much a must for someone wanting to control a headless box. Setup was easy-breezy, in that it required nothing at all.

  • Greyhole

For those unfamiliar, Greyhole is – in their own words – an ‘Easily expandable and redundant storage pool for home servers’. One of my favourite things about WHS v1 was its ‘disk pooling’ capability – essentially a JBOD with software-managed share duplication, ensuring that each selected share was copied over to one other disk in the array.

After those were done with, I popped the drive into the EX470, and – lo and behold! – I was able to SSH in.

[Image: a successful SSH session. This? This is what relatively minor success looks like.]

So at this point, I’m feeling relatively confident. I shut down the server (don’t forget -h!) over SSH, popped in the first of the three 3 TB drives, and…

…nothing. Nada. Zip. Zilch. The server happily blinks away like a small puppy wagging its tail, excited to see its owner but clearly bereft of purpose when left to its own devices. I can't ping it, I can't… well, that's really it. I can't ping it, so there's nothing I can do. Looking to see if GRUB was stuck at the menu, I stuck in a USB keyboard and hit 'Enter' to no effect. Yes, my troubleshooting skills are that good.

My next step was to pop both the 1 TB and 3 TB drives into the ‘spare’ machine; this ran fine. Running lshw -short -c disk shows a 1 TB and 3 TB drive without issue. I also ran these parted commands:

mklabel gpt
mkpart primary 1 -1

(I think that last command is right – in parted, the start position comes first and -1 means "end of disk".)
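For context, the full parted session would look something like this (the device name here is an assumption – verify it against the lshw output above before touching anything):

sudo parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 1 -1
(parted) quit

So, all set, right? Cool. Pop the drive back in to the EX470, and…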

STILL NOTHING. At this point, I'm ready to go pick up a new four-bay NAS, but I feel like that may be overkill. If anyone has any recommendations on how to get the stupid thing to boot with a 3 TB drive, I'm open to suggestions.

WTF Ubuntu

September 7th, 2013

I’m not even sure what to say about this one… it looks like I might have an angry video card.

[Image: a badly garbled desktop. I sat down at my machine after it had been sitting for three or four days to find this… wtf?]





Dual Booting Ubuntu 13.04 and Windows 8 on a Lenovo Y400 IdeaPad

July 27th, 2013

With the third edition of The Linux Experiment already underway, I decided to get my new laptop set up with an Ubuntu partition to work with over the next few months. A little while back, I purchased this laptop with intent to use it as a gaming rig. It shipped with Windows 8, which was a serious pain in the ass to get used to. Now that I’ve dealt with that and have Steam and Origin set up on the Windows partition, it’s time to make this my primary machine and start taking advantage of the power under its hood by dual-booting an Ubuntu partition for development and experiment work.

I started my adventure by downloading an ISO of the latest release of Ubuntu – at the time of this writing, that’s 13.04. Because my new laptop has UEFI instead of BIOS, I made sure to grab the x64 version of the distribution.

Aside: If you’re using NoScript while browsing Ubuntu’s website, you’ll want to keep an eye on the address bar while navigating through the download steps. In my case, the screen that asks you to donate to the project redirected me to a different version of the ISO until I enabled JavaScript.

After using Ubuntu’s Startup Disk Creator to create a bootable USB stick, I started my first adventure – figuring out how to get the IdeaPad to boot from USB. A bit of quick googling told me that the trick was to alternately tap F10 and F12 during the boot sequence. This brought up a boot menu that allowed me to select the USB stick.

Once Ubuntu had booted off of the USB stick, I opened up GParted and went about making some space for my new operating system. The process was straightforward – I selected the largest existing partition (it also helped that it was labelled WINDOWS_OS), and split it in half. My only mistake in this process was to choose to put the new partition in front of the existing partition on the drive. Because of this, GParted had to copy all of the data on the Windows partition to a new physical location on the hard drive, a process that took about three hours.

[Image: the final partitioning scheme with my new Linux partition highlighted]

With my hard drive appropriately partitioned, it was time to install the operating system. The modern Ubuntu installer pretty much takes care of everything, even going so far as selecting an appropriate space to use on the hard drive. I simply told it to install alongside the existing Windows partition, and let it take care of the details.

The installer finished its business in short order, and I restarted the machine. Ubuntu booted with no issues, but my Windows 8 partition refused to cooperate. It would seem as though something that the installer did wasn’t getting along well with UEFI/SecureBoot. Upon attempting to boot Windows, I got the following message:

error: Secure Boot forbids loading module from (hd0,gpt8)/boot/grub/x86_64-efi/ntfs.mod.
error: failure reading sector 0x0 from 'cd0'
error: no such device: 0030DA4030DA3C7A
error: can't find command 'drivemap'
error: invalid EFI file path

Press any key to continue…

Uh oh.

Like I said, I could boot Ubuntu, so I headed on over to their website and read their page on UEFI. At first glance, it seemed as though I had done everything correctly. The only place that I deviated from these instructions was in manually resizing my Windows partition to create space for my new Ubuntu partition.

Thinking that I might be experiencing troubles with my boot partition, I took a shot at running Ubuntu's Boot-Repair utility. It seemed to do something, but upon restarting the machine, I found that I had even more problems – now a Master Boot Record wasn't found at all:

[Image: a missing-boot-device error. It would appear as though I may have made things worse…]

After dismissing the boot device error, I was prompted to choose which device to boot from. I chose to boot Windows' UEFI Repair partition, and was (luckily) able to get to a desktop. Unfortunately, none of the other partitions on the device seem to work, so I'm back where I started, except that now, in addition to having to put up with Windows 8, I also have a broken master boot record.

Lenovo: 1 / Jon: 0.





Airing of grievances: in which upgrading Ubuntu wreaks havoc

February 24th, 2013

I’ve had a few nasty experiences this week with Linux and figured I’d vent here. Unlike my previous efforts with Linux From Scratch and Gentoo, my complaints this time around are related to upgrading Ubuntu.

Ubuntu 10.04 to 12.04: Save yourself the trouble

At this point the current Ubuntu LTS release (12.04) is my preferred distribution to work with: it has become widespread enough that troubleshooting and previous solutions online are easy to locate. In a professional capacity, I also maintain systems that are still on 8.04 LTS (supported until April 2013, so we have to be pretty aggressive about replacing them) or 10.04 LTS (good until April 2015).

I attempted to complete two upgrades from the 10.04 release to 12.04 this week – one 10.04 LTS "desktop" installation, and one 10.04 LTS headless server installation. Both were virtual machines running under VMware ESXi, but neither had given me any trouble during normal use.

Canonical's updater process (the wrapper around dist-upgrade) appears to be pretty slick; it gives you appropriate warnings, attempts to start an SSH daemon as a fallback mechanism and starts on its merry way to download the necessary packages to bring your system completely up to date. On my 10.04 desktop VM, the installer fell apart completely during the package replacement/removal/installation sequence. I was left with two nasty message boxes: one advising that my system was now in a broken state, and another that consisted entirely of rectangular, unprintable characters.

To put it bluntly, I was not amused, but it wasn’t a critical system and I was content to replace it with a fresh 12.04 installation rather than waste additional time troubleshooting with apt or dpkg. Strike one for the upgrader.
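For reference, kicking off the same upgrade on a headless box is just the following (update-manager-core provides the tool):

sudo apt-get install update-manager-core
sudo do-release-upgrade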

At least the server came back up!

Next on the upgrade schedule was the 10.04 server VM. Install, package replacement and reboot went fine, but I had several custom PPAs installed to support development of XenonMKV (Github page) – specifically ppa:krull/deadsnakes to add Python 2.7 to Ubuntu 10.04.

Python 2.7 still worked when the server came back up, and all my usual tools of choice like SABnzbd+, SickBeard and CouchPotato were still functional.

For some reason, though, I’d gotten it into my head this evening to check out Mezzanine as a potential WordPress replacement. Mezzanine uses Django, a Python Web framework, and the list of supported features is pretty encompassing.

Sidebar: Django and mod_wsgi – complicated enough?

One of the most irritating things from a system administration point of view is getting Web applications to run in a standard server environment – typically a Linux base system and Apache or nginx to serve content. I suppose I’ve been spoiled with how easy it is to get PHP-based sites up and running these days in that configuration by adding an Apache module through apt. A lot of new Web app frameworks come with their own small webservers for development and testing, but generally their creators recommend that when you’re ready to put your site live, that the product run under a well-known Web or application server.

The Django folks recommend using mod_wsgi in their documentation, which in and of itself really just says “RTFM for mod_wsgi and then you’ll have a much better idea of how to do this.” I had to go poking around on Google for the installation article since there are some broken links, but okay, it’s an Apache module with a small bit of configuration (even though a simple walkthrough in the Django documentation would go a long way to making deployment easier.) This is where I ran into my dependency/PPA problem on Ubuntu 10.04.

I suppose I've screwed the pooch…

Running the suggested command, I tried sudo apt-get install libapache2-mod-wsgi and got the following:

The following packages have unmet dependencies:
libapache2-mod-wsgi : Depends: libpython2.7 (>= 2.7) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Backtracking, I then found out why the library wasn’t going to get installed:


The following packages have unmet dependencies:
libpython2.7 : Depends: python2.7 (= 2.7.3-0ubuntu3.1) but 2.7.3-2+lucid1 is to be installed

Aha! The Python installation from the PPA for Lucid – 10.04 – was installed and acting as the 2.7 package. Since the newly-upgraded Ubuntu 12.04 uses Python 2.7 as a dependency for a good portion of the default applications, I couldn’t just purge or uninstall it, and my attempts to force a reinstallation all ended in:


Reinstallation of python2.7 is not possible, since it cannot be downloaded.
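For anyone else stuck here, apt-cache policy makes the mismatch obvious (version strings per the errors above):

apt-cache policy python2.7
# Installed: 2.7.3-2+lucid1     <- the PPA build left over from 10.04
# Candidate: 2.7.3-0ubuntu3.1   <- the official 12.04 package apt wants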

Rebuild?

At this point it looks like I’ll have to rebuild the server VM as well, but if any readers have any bright ideas on fixing this dependency hell – please comment with your suggestions!



