Archive

Archive for the ‘Linux’ Category

Running a containerized media server with Ubuntu 14.04, Docker, and Plex

November 23rd, 2014 1 comment

I recently took it upon myself to rebuild a general-purpose home server – installing a new Intel 530 240GB solid-state drive to replace a “spinning rust” drive, and installing a fresh copy of Ubuntu 14.04 now that 14.04.1 has been released and the online complaining has died down.

The “new hotness” that I’d like to discuss is the use of Docker to containerize various processes. Docker gets a lot of press these days, but the way I see it, it’s a way to ensure that your special snowflake applications and services don’t get the opportunity to conflict with one another. In my setup, I have four containers running: Plex, SABnzbd+, Sonarr and CouchPotato.

I like the following things about Docker:

  • Since it’s new, there are a lot of repositories and configuration instructions online for reference.
  • I can make sure that applications like Sonarr/NZBDrone get the right version of Mono that won’t conflict with my base system.
  • As a network administrator, I can ensure that only the necessary ports for a service get forwarded outside the container.
  • If an application state gets messed up, it won’t impact the rest of the system as much – I can destroy and recreate the individual container by itself.

There are some drawbacks though:

  • Because a lot of the images and Dockerfiles out there are community-based, there are some that don’t follow best practices or fall out of an update cycle.
  • Software updates can become trickier if the application is unable to upgrade itself in-place; you may have to pull a new Dockerfile and hope that your existing configuration works with a new image.
  • From a security standpoint, it’s best to verify exactly what an image or Dockerfile does before running it – for example, that it pulls content from official repositories (the docker-plex configuration is guilty of using a third-party repo).

To get started, on Ubuntu 14.04 you can install a stable version of Docker following these instructions, although the latest version has some additional features like docker exec that make “getting inside” containers to troubleshoot much easier. I was able to get all these containers running properly with the current stable version (1.0.1~dfsg1-0ubuntu1~ubuntu0.14.04.1). Once Docker is installed, you can grab each of the images above with a combination of docker search and docker pull, then list the downloaded images with docker images.
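Roughly, that process looks like the following (using the Plex image referenced below as the example):

docker search plex
docker pull timhaak/plex
docker images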

There are some quirks to remember. On the first run, you’ll need to docker run most of these containers and provide a hostname, container name, ports to forward and shared directories (known as volumes). On all subsequent runs, you can just use docker start $container_name – but I’ll describe a cheap and easy way of turning that command into an upstart service later. I generally save the start commands as shell scripts in /usr/local/bin/docker-start/*.sh so that I can reference them or adjust them later. The start commands I’ve used look like:

Plex
docker run -d -h plex --name="plex" -v /etc/docker/plex:/config -v /mnt/nas:/data -p 32400:32400 timhaak/plex
SABnzbd+
docker run -d -h sabnzbd --name="sabnzbd" -v /etc/docker/sabnzbd:/config -v /mnt/nas:/data -p 8080:8080 -p 9090:9090 timhaak/sabnzbd
Sonarr
docker run -d -h sonarr --name="sonarr" -v /etc/docker/sonarr:/config -v /mnt/nas:/data -p 8989:8989 tuxeh/sonarr
CouchPotato
docker run -d -h couchpotato --name="couchpotato" -e EDGE=1 -v /etc/docker/couchpotato:/config -v /mnt/nas:/data -v /etc/localtime:/etc/localtime:ro -p 5050:5050 needo/couchpotato
These applications have a “/config” and a “/data” shared volume defined. /data points to “/mnt/nas”, which is a CIFS share to a network attached storage appliance mounted on the host. /config points to a directory structure I created for each application on the host in /etc/docker/$container_name. I generally apply “chmod 777” permissions to each configuration directory until I find out what user ID the container is writing as, then lock it down from there.
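A rough sketch of that host-side setup (the uid/gid you eventually lock things down to will depend on what the container actually writes as – 797:797 below is purely an example):

# create the per-container config directories
mkdir -p /etc/docker/{plex,sabnzbd,sonarr,couchpotato}
# wide open until the container's user ID is known
chmod 777 /etc/docker/plex
# ...later, once the writing uid/gid is known, tighten it back up
chown -R 797:797 /etc/docker/plex
chmod 755 /etc/docker/plex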

For each initial start command, I choose to run the service as a daemon with -d. I also set a hostname with the “-h” parameter, as well as a friendly container name with “--name”; otherwise Docker likes to reference containers with wild adjectives combined with scientists, like “drunk_heisenberg”.

Each of these containers generally has a set of instructions to get up and running, whether it be on GitHub, the developer’s own site or the Docker Hub. Some, like SABnzbd+, just require that you go to http://yourserverip:8080/ and complete the setup wizard. Plex required an additional set of configuration steps described at the original repository:

  • Once Plex starts up on port 32400, access http://yourserverip:32400/web/ and confirm that the interface loads.
  • Switch back to your host machine, and find the place where the /config directory was mounted (in the example above, it’s /etc/docker/plex). Enter the Library/Application Support/Plex Media Server directory and edit the Preferences.xml file. In the <Preferences> tag, add the following attribute: allowedNetworks="192.168.1.0/255.255.255.0" where the IP address range matches that of your home network. In my case, the entire file looked like:

    <?xml version="1.0" encoding="utf-8"?>
    <Preferences MachineIdentifier="(guid)" ProcessedMachineIdentifier="(another_guid)" allowedNetworks="192.168.1.0/255.255.255.0" />

  • Run docker stop plex && docker start plex to restart the container, then load http://yourserverip:32400/web/ again. You should be prompted to accept the EULA and can now add library locations to the server.

Sonarr needed to be updated (from the NZBDrone branding) as well. From the GitHub README, you can enable in-container upgrades:

[C]onfigure Sonarr to use the update script in /etc/service/sonarr/update.sh. This is configured under Settings > (show advanced) > General > Updates > change Mechanism to Script.

To automatically ensure these containers start on reboot, you can either use restart policies (Docker 1.2+) or write an upstart script to start and stop the appropriate container. I’ve modified the example from the Docker website slightly to stop the container as well:

description "SABnzbd Docker container"
author "Jake"
start on filesystem and started docker
stop on runlevel [!2345]
respawn
script
/usr/bin/docker start -a sabnzbd
end script
pre-stop exec /usr/bin/docker stop sabnzbd

Copy this script to /etc/init/sabnzbd.conf; you can then copy it to plex.conf, couchpotato.conf and sonarr.conf and change the container name and description in each. You can then test it by rebooting your system and running “docker ps -a” to ensure that all containers come up cleanly, or by running “docker stop $container; service $container start”. If you run into trouble, the upstart logs are in /var/log/upstart/$container_name.log.
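One way to handle that copy-and-rename dance, as a sketch (adjust the names to taste):

sudo cp /etc/init/sabnzbd.conf /etc/init/plex.conf
# swap the container name and description in the copy
sudo sed -i 's/sabnzbd/plex/g; s/SABnzbd/Plex/g' /etc/init/plex.conf
# repeat for couchpotato and sonarr, then test one of them
sudo docker stop plex
sudo service plex start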

Hopefully this introduction to a media server with Docker containers was thought-provoking; I hope to have further updates down the line for other applications, best practices and how this setup continues to operate in its lifetime.





Big distributions, little RAM 7

October 13th, 2014 4 comments

It’s been a while, but here is the latest instalment in the series of posts where I install the major, full desktop distributions onto a limited-hardware machine and report on how they perform. Like before, I’ve decided to re-run my previous tests, this time using the following distributions:

  • Debian 7.6 (GNOME)
  • Elementary OS 0.2 (Luna)
  • Fedora 20 (GNOME)
  • Kubuntu 14.04 (KDE)
  • Linux Mint 17 (Cinnamon)
  • Linux Mint 17 (MATE)
  • Mageia 4.1 (GNOME)
  • Mageia 4.1 (KDE)
  • OpenSUSE 13.1 (GNOME)
  • OpenSUSE 13.1 (KDE)
  • Ubuntu 14.04 (Unity)
  • Xubuntu 14.04 (Xfce)

I also attempted to install Fedora 20 (KDE) but it just wouldn’t go.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.3.12, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between October 6th, 2014 and October 13th, 2014 so your results may not be identical.

Results

Just as before I have compiled a series of bar graphs to show you how each installation stacks up against one another. Measurements were taken using the free -m command for memory and the df -h command for disk usage.
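If you want to take the same measurements yourself, the commands are simply:

# memory, in MiB – the used, buffers/cache and swap figures feed the graphs below
free -m
# disk usage for each mounted filesystem
df -h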

Like before I have provided the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). As always, feel free to run your own tests and link them in the comments for everyone to see.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Graph: Install Size

Conclusion

Once again I will leave the conclusions to you. Source data provided below.

Source Data





How to set a static IP address on Ubuntu 14.04 server (and others)

September 16th, 2014 No comments

This assumes you want to set a static IP address on the network device eth0.

Open up the interfaces file

sudo nano /etc/network/interfaces

and remove or comment out the line that says

iface eth0 inet dhcp

then add the following lines in its place:

iface eth0 inet static
address [static IP address, i.e. 192.168.1.123]
netmask [i.e. 255.255.255.0]
network [i.e. 192.168.1.0]
broadcast [i.e. 192.168.1.255]
gateway [i.e. 192.168.1.1]
dns-nameservers [i.e. 8.8.8.8]
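Filled in with the example values above, the whole stanza would end up looking something like this:

iface eth0 inet static
address 192.168.1.123
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 8.8.8.8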

Save the file and reboot the server. On some systems you may also need to update /etc/resolv.conf and /etc/hosts





CoreGTK 2.24.0 Released!

August 4th, 2014 No comments

The initial version of CoreGTK, version 2.24.0, has been tagged for release today.

Features include:

  • Targets GTK+ 2.24
  • Support for GtkBuilder
  • Can be used on Linux, Mac and Windows

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





Linux alternatives: Mp3tag → EasyTAG

August 4th, 2014 1 comment

A big part of my move from Windows to Linux has been finding replacements for the applications I had previously used day-to-day that are not available on Linux. For the major applications like my web browser (Firefox), e-mail client (Thunderbird) and password manager (KeePass2) this hasn’t been a problem, because they are all available on Linux as well. Heck, you can even install Microsoft Office with the latest version of Wine if you wanted to.

Unfortunately there still remain some programs that will simply not run under Linux. Thankfully this isn’t a huge deal because Linux has plenty of alternative applications that fill in all of the gaps – the trick is just finding the one that is right for you.

Mp3tag is an excellent Windows application that lets you edit the metadata (i.e. artist, album, track, etc.) inside of an MP3, OGG or similar file.

Mp3tag on Windows

As a Linux alternative to this excellent program I’ve found a very similar application called EasyTAG that offers at least all of the features that I used to use in Mp3tag (and possibly even more).

EasyTAG on Linux

For anyone looking for a good metadata editor I would highly recommend trying this one out.
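EasyTAG should be packaged in most distributions’ repositories; on a Debian- or Ubuntu-based system, for example, something like this should be all it takes:

sudo apt-get install easytag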





How to migrate from TrueCrypt to LUKS file containers

June 15th, 2014 2 comments

With the recent questions surrounding the security of TrueCrypt there has been a big push to move away from that program and switch to alternatives. One such alternative, on Linux anyway, is the Linux Unified Key Setup (or LUKS) which allows you to encrypt disk volumes. This guide will show you how to create encrypted file volumes, just like you could using TrueCrypt.

The Differences

There are a number of major differences between TrueCrypt and LUKS that you may want to be aware of:

  • TrueCrypt supported the concept of hidden volumes; LUKS does not.
  • TrueCrypt allowed you to encrypt a volume in-place, without losing data; LUKS does not.
  • TrueCrypt supports cipher cascades, where the data is encrypted using multiple different algorithms just in case one of them is broken at some point in the future. As I understand it this is being talked about for the LUKS 2.0 spec but it is currently not a feature.

If you are familiar with the terminology in TrueCrypt you can think of LUKS as offering both full disk encryption and standard file containers.

How to create an encrypted LUKS file container

The following steps borrow heavily from a previous post so you should go read that if you want more details on some of the commands below. Also note that while LUKS offers a lot of options in terms of cipher/digest/key size/etc, this guide will try to keep it simple and just use the defaults.

Step 1: Create a file to house your encrypted volume

The easiest way is to run the following commands which will create the file and then fill it with random noise:

# fallocate -l <size> <file to create>
# dd if=/dev/urandom of=<file to create> bs=1M count=<size>

For example, let’s say you wanted a 500MiB file container called MySecrets.img; just run these commands:

# fallocate -l 500M MySecrets.img
# dd if=/dev/urandom of=MySecrets.img bs=1M count=500

Here is a handy script that you can use to slightly automate this process:

#!/bin/bash
NUM_ARGS=$#

if [ $NUM_ARGS -ne 2 ] ; then
    echo "Wrong number of arguments."
    echo "Usage: $0 [size in MiB] [file to create]"

else

    SIZE=$1
    FILE=$2

    echo "Creating $FILE with a size of ${SIZE}MiB"

    # create the file at the requested size
    fallocate -l ${SIZE}M "$FILE"

    # fill the file contents with random data
    dd if=/dev/urandom of="$FILE" bs=1M count="$SIZE"

fi

Just save the above script to a file, say “create-randomized-file-volume.sh”, mark it as executable and run it like this:

# chmod +x create-randomized-file-volume.sh
# ./create-randomized-file-volume.sh 500 MySecrets.img

Step 2: Format the file using LUKS + ext4

There are ways to do this in the terminal but for the purpose of this guide I’ll be showing how to do it all within gnome-disk-utility. From the menu in Disks, select Disks -> Attach Disk Image and browse to your newly created file (i.e. MySecrets.img).
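For the terminal-inclined, a minimal sketch of the same thing using cryptsetup (not the route this guide takes – it sticks with the defaults and borrows the loop-device dance from the tcplay section further down) would look roughly like:

# sudo losetup -f
(note the free loop device it prints, e.g. /dev/loop0)
# sudo losetup /dev/loop0 MySecrets.img
# sudo cryptsetup luksFormat /dev/loop0
# sudo cryptsetup luksOpen /dev/loop0 MySecrets
# sudo mkfs.ext4 /dev/mapper/MySecrets
# sudo mkdir -p /mnt/MySecrets
# sudo mount /dev/mapper/MySecrets /mnt/MySecrets

and to lock it back up when you’re done:

# sudo umount /mnt/MySecrets
# sudo cryptsetup luksClose MySecrets
# sudo losetup -d /dev/loop0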

Don’t forget to uncheck the box!

Be sure to uncheck “Set up read-only loop device”. If you leave this checked you won’t be able to format or write anything to the volume. Select the file and click Attach.

This will attach the file, as if it were a real hard drive, to your computer:

Next we need to format the volume. Press the little button with two gears right below the attached volume and click Format. Make sure you do this for the correct ‘drive’ so that you don’t accidentally format your real hard drive!

Please use a better password

From this popup you can select the filesystem type and even name the drive. In the image above the settings will format the drive to LUKS and then create an ext4 filesystem within the encrypted LUKS one. Click Format, confirm the action and you’re done. Disks will format the file and even auto-mount it for you. You can now copy files to your mounted virtual drive. When you’re done simply eject the drive like normal or (with the LUKS partition highlighted) press the lock button in Disks. To use that same volume again in the future just re-attach the disk image using the steps above, enter your password to unlock the encrypted partition and you’re all set.

But I don’t even trust TrueCrypt enough to unlock my already encrypted files!

If you’re just using TrueCrypt to open an existing file container so that you can copy your files out of there and into your newly created LUKS container I think you’ll be OK. That said there is a way for you to still use your existing TrueCrypt file containers without actually using the TrueCrypt application.

First install an application called tc-play. This program works with the TrueCrypt format but doesn’t share any of its code. To install it simply run:

# sudo apt-get install tcplay

Next we need to mount your existing TrueCrypt file container. For the sake of this example we’ll assume your file container is called TOPSECRET.tc.

We need to use a loop device but before doing that we need to first find a free one. Running the following command

# sudo losetup -f

should return the first free loop device. For example it may print out

/dev/loop0

Next you want to associate the loop device with your TrueCrypt file container. You can do this by running the following command (sub in your loop device if it differs from mine):

# sudo losetup /dev/loop0 TOPSECRET.tc

Now that our loop device is associated we need to actually unlock the TrueCrypt container:

# sudo tcplay -m TOPSECRET.tc -d /dev/loop0

Finally we need to mount the unlocked TrueCrypt container to a directory so we can actually use it. Let’s say you wanted to mount the TrueCrypt container to a folder in the current directory called SecretStuff:

# sudo mount -o nosuid,uid=1000,gid=100 /dev/mapper/TOPSECRET.tc SecretStuff/

Note that you should swap in your own uid and gid in the above command if they aren’t 1000 and 100 respectively. You should now be able to view your TrueCrypt files in your SecretStuff directory. For completeness’ sake, here is how you unmount and re-lock the same TrueCrypt file container when you are done:

# sudo umount SecretStuff/
# sudo dmsetup remove TOPSECRET.tc
# sudo losetup -d /dev/loop0

This post originally appeared on my personal website here.





Create a virtual hard drive volume within a file in Linux

June 15th, 2014 5 comments

If you are not familiar with the concept of virtual hard drive volumes, sometimes called file containers, they are basically regular looking files that can be used by your computer as if they were real hard drives. So for example you could have a file called MyDrive.img on your computer and with a few quick actions it would appear as though you had just plugged in an external USB stick or hard drive into your computer. It acts just like a normal, physical, drive but whenever you copy anything to that location the copied files are actually being written to the MyDrive.img file behind the scenes. This is not unlike the dmg files you would find on a Mac or even something akin to TrueCrypt file containers.

Why would I want this?

There are a number of reasons why you may be interested in creating virtual volumes, from adding additional swap space to your computer (i.e. something similar to a page file on Windows, without needing to create a new hard drive partition) to creating portable virtual disk drives to back up files to, or even just doing it because this is Linux and it’s kind of a neat thing to do.
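As a quick aside, if the swap use case is what interests you, a minimal sketch looks something like the following (using dd to create the file, which is the safer bet for swap; the 1024MiB size and the /swapfile path are just examples):

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# to make it permanent, add a line like this to /etc/fstab:
# /swapfile none swap sw 0 0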

What are the steps to creating a file container?

The process seems a bit strange but it’s actually really straightforward.

  1. Create a new file to hold the virtual drive volume
      (Optional) Initialize it by filling it with data
  2. Format the volume
  3. Mount the volume and use it

Create a new file to hold the virtual drive volume

There are probably a million different ways to do this but I think the most simple way is to run the following command from a terminal:

fallocate -l <size> <file to create>

So let’s say you wanted to create a virtual volume in a file called MyDrive.img in the current directory with a size of 500MiB. You would simply run the following command:

fallocate -l 500M MyDrive.img

You may notice that this command finishes almost instantly. That’s because while the system created a 500MiB file it didn’t actually write 500MiB worth of data to the file.

This is where the optional step of ‘initializing’ the file comes into play. To be clear you do not need to do this step at all but it can be good practice if you want to clean out the contents of the allocated space. For instance if you wanted to prevent someone from easily noticing when you write data to that file you may pre-fill the space with random data to make it more difficult to see or you may simply want to zero out that part of the hard drive first.

Anyway if you choose to pre-fill the file with data the easiest method is to use the dd command. PLEASE BE CAREFUL – dd is often nicknamed disk destroyer because it will happily overwrite any data you tell it to, including the stuff you wanted to keep if you make a mistake typing the command!

To fill the file with all zeros simply run this command:

dd if=/dev/zero of=<your file> bs=1M count=<your file size in MiB>

So for the above file you would run:

dd if=/dev/zero of=MyDrive.img bs=1M count=500

If you want to fill it with random data instead just swap /dev/zero for /dev/urandom or /dev/random in the command:

dd if=/dev/urandom of=MyDrive.img bs=1M count=500

Format and mount the virtual volume

Next up we need to give the volume a filesystem. You can either do this via the command line or using a graphical tool. I’ll show you an example of both.

From the terminal you would run the appropriate mkfs command on the file. As an example this will format the file above using the ext3 filesystem:

mkfs -t ext3 MyDrive.img

You may get a warning that looks like this

MyDrive.img is not a block special device.
Proceed anyway? (y,n)

Simply type the letter ‘y’ and press Enter. With any luck you’ll see a bunch of text telling you exactly what happened and you now have a file that is formatted with ext3!

If you would rather do things the graphical way you could use a tool like Disks (gnome-disk-utility) to format the file.

From the menu in Disks, select Disks -> Attach Disk Image and browse to your newly created file (i.e. MyDrive.img).

Don’t forget to uncheck the box!

Be sure to uncheck “Set up read-only loop device”. If you leave this checked you won’t be able to format or write anything to the volume. Select the file and click Attach.

This will attach the file, as if it were a real hard drive, to your computer:


Next we need to format the volume. Press the little button with two gears right below the attached volume and click Format. Make sure you do this for the correct ‘drive’ so that you don’t accidentally format your real hard drive!

Make sure you’re formatting the correct drive!

From this popup you can select the filesystem type and even name the drive. You may also use the “Erase” option to write zeros to the file if you wanted to do it here instead of via the terminal as shown previously. In the image above the settings will format the drive using the ext4 filesystem. Click Format, confirm the action and you’re done. Disks will format the file and even auto-mount it for you. You can now copy files to your mounted virtual drive. When you’re done simply eject the drive like normal or press the square Stop button in Disks. To use that same volume again in the future just re-attach the disk image using the steps above.

To mount the formatted file from the terminal you will need to first create a folder to mount it to. Let’s say we wanted to mount it to the folder /media/MyDrive. First create the folder there:

sudo mkdir /media/MyDrive

Next mount the file to the folder:

sudo mount -t auto -o loop MyDrive.img /media/MyDrive/

Now you can copy files to the drive just like before. When you’re finished unmount the volume by running this command:

sudo umount /media/MyDrive/

And there you have it. Now you know how to create virtual volume files that you can use for just about anything and easily move from computer to computer.

This post originally appeared on my personal website here.





Set up KeePass Auto-Type on Linux

June 8th, 2014 3 comments

If you’ve used KeePass on Windows you may be very attached to its auto-type feature, where with a single key-combo press the application will magically type your username and password into the website or application you’re trying to use. This is super handy and something that is sadly missing by default on Linux. Thankfully it’s also very easy to make work on Linux.

1. Start by installing the xdotool package

On Debian/Ubuntu/etc simply run:

sudo apt-get install xdotool

2. Next find out where the keepass2 executable is installed on your system

The easiest way to do this is to run:

which keepass2

On my system this returns /usr/bin/keepass2. This file is actually not the program itself but a script that bootstraps the program. So to find out where the real executable is, run:

cat /usr/bin/keepass2

On my system this returns

#!/bin/sh
exec /usr/bin/cli /usr/lib/keepass2/KeePass.exe "$@"

So the program itself is actually located at /usr/lib/keepass2/KeePass.exe.

3. Create a custom keyboard shortcut


The process for this will differ depending on which distribution you’re running but it’s usually under the Keyboard settings. For the command enter the following:

mono /usr/lib/keepass2/KeePass.exe --auto-type

Now whenever you key in your keyboard shortcut combo it will tell KeePass to auto-type your configured username/password/whatever you set up in KeePass. The only catch is that you must first open KeePass and unlock your database.





How to mount a Windows share on startup

April 28th, 2014 2 comments

I recently invested in a NAS device to add a little bit of redundancy to my personal files. With this particular NAS the most convenient way to use the files it stores is via the Windows share protocol (also known as SMB or CIFS). Linux has supported these protocols for a while now, so that’s great, but I wanted it to automatically map the shared directory on the NAS to a directory on my Linux computer on startup. Thankfully there is a very easy way to do just that.

1) First install cifs-utils

sudo apt-get install cifs-utils

2) Next edit the fstab file and add the share(s)

To do this you’ll need to add a new line to the end of the file. You can easily open the file using nano in the terminal by running the command:

sudo nano /etc/fstab

Then use the arrow keys to scroll all the way to the bottom and add the share in the following format:

//<path to server>/<share name>     <path to local directory>     cifs     guest,uid=<user id to mount files as>,iocharset=utf8     0     0

Breaking it down a little bit:

  • <path to server>: This is the network name or IP address of the computer hosting the share (in my case the NAS). For example it could be something like “192.168.1.1” or something like “MyNas”
  • <share name>: This is the name of the share on that computer. For example I set up my NAS to share different directories one of which was called “Files”
  • <path to local directory>: This is where you want the remote files to appear locally. For example if you want them to appear in a folder under /media you could do something like “/media/NAS”. Just make sure that the directory exists (create it if you need to).
  • <user id to mount files as>: This defines the permissions to give the files. On Ubuntu the first user you create is usually given uid 1000, so you could put “1000” here. To find out the uid of any user run the command “id <user>” without quotes.

So for example my added line in fstab was

//192.168.3.25/Files     /media/NAS     cifs     guest,uid=1000,iocharset=utf8     0     0
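Also remember (as noted above) that the local directory has to exist before the mount will work, so create it first if necessary:

sudo mkdir -p /media/NAS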

Then save the file with Ctrl+O followed by Enter in nano.

3) Mount the remote share

Run this command to test the share:

sudo mount -a

If that works you should see the files appear in your local directory path. When you restart the computer it will also attempt to connect to the share and place the files in that location as well. Keep in mind that anything you do to the files there also changes them on the share!





Ubuntu 14.04 VNC woes? Try this!

April 28th, 2014 No comments

If, like me, you’ve recently upgraded to Ubuntu 14.04 only to find that for whatever reason you can no longer VNC to that machine (either from Windows or even an existing Linux install), have no fear because I’ve got the fix for you!

Simply open up a terminal and run the following line:

gsettings set org.gnome.Vino require-encryption false

Obviously if you use VNC encryption you may not want to do this but if you’re like me and just use VNC on the local network it should be safe enough to disable.





Cloud Saves for Minecraft

February 21st, 2014 No comments

I’ve recently become addicted to Minecraft. I realize that I’m late to this game, having only recently discovered it despite its popularity over the past couple of years. As readers know, I typically switch between a few different machines throughout my day, and indeed between a few different operating systems. Luckily, Minecraft is portable and can be played on any platform – but how to go about transferring saved games?

By default, Minecraft puts your user data and game saves in a hidden folder within your home folder. In particular, save game data is stored at ~/.minecraft/saves/. My solution to the cloud save problem was to create a minecraft folder in my DropBox, and then symlink the default save folder to this location.

Start by creating a folder in your DropBox (or other cloud share platform) folder:

jonf@UBUNTU:~$ mkdir ~/Dropbox/minecraft
jonf@UBUNTU:~$ mkdir ~/Dropbox/minecraft/saves

Next, back up your existing save games folder. We’ll restore these once the symlink has been created.

jonf@UBUNTU:~$ mv ~/.minecraft/saves/ ~/.minecraft/saves.old

Now create the symlink between the new DropBox folder and the save game location:

jonf@UBUNTU:~$ ln -s ~/Dropbox/minecraft/saves/ ~/.minecraft/saves
jonf@UBUNTU:~$ ls -la ~/.minecraft
total 24
drwxrwxr-x  3 jonf jonf  4096 Feb 21 08:58 .
drwx------ 43 jonf jonf 12288 Feb 21 08:55 ..
lrwxrwxrwx  1 jonf jonf    38 Feb 21 08:58 saves -> /home/jonf/Dropbox/minecraft/saves/
drwxrwxr-x  2 jonf jonf  4096 Feb 21 08:55 saves.old

As you can see, the saves folder under the .minecraft folder now points to the saves folder that we created inside of our DropBox folder. This means that if we put anything inside of that folder, it will be automatically written to the DropBox folder, which will be synced to all of my other computers.

Finally, let’s restore the existing saved games folder into the new shared folder:

jonf@UBUNTU:~$ mv ~/.minecraft/saves.old/* ~/.minecraft/saves/
jonf@UBUNTU:~$ rmdir ~/.minecraft/saves.old

If I take the same steps on my other machines, then I can play Minecraft from any of my machines with my saved games always available, no matter where I am. Keep in mind that the ln syntax for Mac OSX is slightly different than the example above. The steps remain the same, but you’ll want to check the docs if you’re trying to adopt these steps for a different platform.





A tale of a gillion installs

January 21st, 2014 1 comment

Install number one: LMDE 201303. I was hoping for the best of both worlds, but I got driver issues instead. LMDE has known ATI proprietary driver install issues. I followed the Mint instructions and got it working, then got a blank screen after too much tinkering. I was surprised that LMDE had this problem since Debian doesn’t, and LMDE should be a more polished version of Debian. This wasn’t a big deal, but I decided to give Debian a chance.

Install number two: debian stable (7.3). The debian website has a convoluted maze of installation links, but it’s still fairly easy to find an ISO for the stable version you need. I installed from the live ISO using a USB key. The installation and ATI driver update went smoothly, and I thought all was well at first. I soon realized that about 50% of reboots failed; the audio driver was the culprit. I installed the latest driver from Realtek/ALSA and it sort of worked, but I was still getting some crap from # dmesg and the audio would crackle with some files.

LMDE.  I live booted LMDE to see if the same issue existed there and it did.

Time for Mint 16.  As expected everything worked.  Man I really wish Ubuntu hadn’t chosen the dark side – their OS is really good.  All of these distros use ALSA audio drivers, so why is Ubuntu the only one that works?   Kernel versions:

debian stable (7.3):
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version 1.0.24.
Mint 16:
cat /proc/asound/version
Advanced Linux Sound Architecture Driver Version k3.11.0-12-generic.

One more thing to check.  What kernel version is the real debian testing “jessie” using:

http://packages.debian.org/testing/kernel/linux-image-3.12-1-amd64

LMDE 201303 = 3.2
debian stable 7.3 = 3.2
Mint 16 = 3.11
debian testing “jessie - Jan 2014” = 3.12!

I was determined to try debian testing before settling for Mint. I tried a netinstall from USB key, which killed my PC and grub bootloader. The debian stable live ISO USB key decided to stop working as well. I finally got a real DVD debian stable install to work, changed the repositories to point to “jessie” and upgraded. I was very surprised to see this worked! I’m having some problems with bash, but all of my day-to-day software is up and running. Nice.

TL;DR: LMDE was using an old kernel so I needed the real debian testing (jessie) to solve my driver problems.

So many flavours – with bonus privacy rant!

January 21st, 2014 1 comment

It’s interesting reading the old Linux Experiment first posts when people were contemplating which distro to install. It’s been 4.5 years since then and the Linux world has evolved. Most noticeably, no one was talking about Mint back then!

I was considering three distros for my home PC dual boot:

  1. Debian
  2. LMDE
  3. Mint

I wanted something in the debian family since it seems to be receiving, by far, the most attention.  I expect this also means it gets the most activity and updates.  Ubuntu would probably work the best out of the box, but as you probably already know:

https://en.wikipedia.org/wiki/Unity_%28user_interface%29#Privacy_controversy

Ubuntu’s privacy issues are a deal breaker of course, but they also made me question Mint. I don’t want to support Ubuntu and I think using Mint would indirectly do that. Also, Mint does have some minor default search engine sketchiness going on. I realize that these developers need funding, but I don’t think selling their users’ stats or usage data is the way to do it. I think donations are the way to go, and they seem to be working for Wikimedia. Developing non-essential, non-related commercial software in parallel with the OS might be another alternative… hmm, sounds like a slippery slope.

The plan was: Try LMDE first, Debian stable if more stability is needed, and Mint if I got to the point that I just wanted things to work.  Results to follow!

TL;DR:  I planned to install LMDE or Debian, since Ubuntu wants to track me.

Screen brightness work around (part 2)

January 19th, 2014 No comments

As mentioned before I am having some issues with my laptop’s hardware and controlling the screen brightness. Previously my work around was to set acpi_backlight=vendor in the grub command line options. While this resulted in having full screen brightness, it also removed my ability to use my keyboard function keys to adjust the screen brightness on the fly (not so good when you’re on battery). Removing this option allowed me to manually adjust my screen brightness again but once again the laptop always started at zero brightness. What to do?

While far from a perfect solution my current work around is to use xdotool to simulate key presses on login which raise the screen brightness for me automatically. Here is the script that I run on startup:

#!/bin/bash
for i in {1..20}
do
     xdotool key XF86MonBrightnessUp
done
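In case it helps anyone, one way to have a script like this run automatically at login (assuming a desktop environment that honours XDG autostart entries, and assuming the script has been saved as /home/yourname/bin/brightness-up.sh – both purely examples) is a small autostart entry:

mkdir -p ~/.config/autostart
cat > ~/.config/autostart/brightness-up.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Brightness up
Exec=/home/yourname/bin/brightness-up.sh
EOF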

While this works great it still isn’t perfect. Because xdotool requires an X session it means I cannot run it before one is created. If you were unaware the login screen, in my case MDM, does not run inside of X (it actually starts X when you successfully login). So while this will automatically brighten my screen it won’t do so until I type in my username and password, leaving me to type into a fully dark screen or manually adjust the brightness up enough to see what I’m doing. Hopefully I’ll have a better solution sooner rather than later…





What up? (First Post)

January 16th, 2014 No comments

My first post here – starting a linux experiment:

First of all, I would like to thank Tyler B for helping me get started by patiently answering my level 0 linux questions.

I’ve installed linux several times in the last ten years, sometimes for fun, but usually when required for school.  I’ve even developed a linux app complete with a GUI and DB integration.  But even with all this exposure to linux I’ve managed to learn very little about it.  How is that possible?  Well, if you stick to pre-configured dev environments with working tools, avoiding learning about the OS is easy.

My new project has a different motivation.  Rather than using Linux to complete a project, using Linux is the project.  I want to understand how linux works and I think the best way to start is to “learn by doing”.  My plan is to use linux on my main home computer for everything except Windows gaming, which is rare for me anyways.  I would then like to move on to LFS.

TL;DR: I’m going to install and learn about linux.


Fix no screen brightness on boot problem

October 14th, 2013 No comments

I recently upgraded my laptop to a brand new Lenovo Y410P and promptly replaced Windows 8 with a Linux install. Unfortunately I immediately ran into a very strange driver(?) issue where, on boot, the computer would default to the absolute lowest screen brightness level. This meant that I would need to manually adjust the screen brightness up just to see the login screen. Thankfully after some help from the excellent people over on the Ubuntu Forums I managed to find a very easy work around.

1) As root open up /etc/default/grub

I did this by simply issuing the following command:

sudo nano /etc/default/grub

2) Find the line that says GRUB_CMDLINE_LINUX= and add “acpi_backlight=vendor” to the list of options.
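For example, depending on what else is already on that line, it might end up looking something like:

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"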

3) From a terminal run this command to update GRUB

sudo update-grub

4) Reboot!

That’s pretty much it. My computer now boots with the correct screen brightness as one would expect.





And I thought this would be easy…

September 22nd, 2013 1 comment

Some of you may remember my earlier post about contemplating an upgrade from Windows Home Server (Version 1) to a Linux alternative. Since then, I have decided the following:

Amahi isn’t worth my time

 

This conclusion was reached after a fruitless install of Amahi 7 on the 500 GB ‘system’ drive included with the EX470. After backing up the Windows Home Server to a single external 2 TB drive (talk about nerve-wracking!), I popped the drive into a spare PC and installed Amahi with the default options.


No, I’m not 13. Yes, this image accurately reflects my frustrations.

Moving the drive back into the EX470 yielded precisely zero results, no matter what I tried – the machine would not respond to a ‘ping’ command, and since I’ve opted to try and do this without a debug board, I don’t even have VGA to tell me what the hell is going on. So, that’s it for Amahi.

When all else fails, Ubuntu

 

After deciding that I really didn’t feel like a repeat of my earlier Fedora experiment, I decided to try out the Linux ‘Old Faithful’ as it were – Ubuntu 12.04 LTS. I opted for the LTS version due to – well, you know – the ‘long-term support’ deal.

Oh, and I upgraded my storage (new 1 TB system drive not shown, and I apologize for the potato-quality image):


The only kind of ‘TB’ I like. Not tuberculosis.

 

Following from the earlier Amahi instructions, I popped the primary 1 TB drive into a spare machine and allowed the Ubuntu installer to do its thing. Easy enough! From there, I installed the following two additional items (having to add an additional repository for the latter):

  • Openssh-Server

This allows me to easily control the machine through SSH, and – as I understand it – is pretty much a must for someone wanting to control a headless box. Setup was easy-breezy, in that it required nothing at all.

  • Greyhole

For those unfamiliar, Greyhole is – in their own words – an ‘Easily expandable and redundant storage pool for home servers’. One of my favourite things about WHS v1 was its ‘disk pooling’ capability – essentially a JBOD with software-managed share duplication, ensuring that each selected share was copied over to one other disk in the array.

After those were done with, I popped the drive into the EX470, and – lo and behold! – I was able to SSH in.


This? This is what relatively minor success looks like.

So at this point, I’m feeling relatively confident. I shut down the server (don’t forget -h!) over SSH, popped in the first of the three 3 TB drives, and…

…nothing. Nada. Zip. Zilch. The server happily blinks away like a small puppy wagging its tail, excited to see its owner but clearly bereft of purpose when left on its own. I can’t ping it, I can’t… well, that’s really it. I can’t ping it, so there’s nothing I can do. Looking to see if GRUB was stuck at the menu, I stuck in a USB keyboard and hit ‘Enter’ to no effect. Yes, my troubleshooting skills are that good.

My next step was to pop both the 1 TB and 3 TB drives into the ‘spare’ machine; this ran fine. Running lshw -short -c disk shows a 1 TB and 3 TB drive without issue. I also ran these parted commands:

mklabel gpt

mkpart primary -1 1

 

(I think that last command is right.) So, all set, right? Cool. Pop the drive back in to the EX470, and…

STILL NOTHING. At this point, I’m ready to go pick up a new four-bay NAS, but I feel like that may be overkill. If anyone has any recommendations on how to get the stupid thing to boot with a 3 TB drive, I’m open to suggestions.

 

WTF Ubuntu

September 7th, 2013 2 comments

I’m not even sure what to say about this one… it looks like I might have an angry video card.

I sat down at my machine after it had been sitting for three or four days to find this… wtf?





The real lesson to take from Elementary OS

August 18th, 2013 No comments

Elementary OS is the latest darling of the Linux community at large, and with some good reason. It isn’t that Elementary OS is The. Best. Distro. Ever. In fact, being only version 0.2, I doubt its own authors would try to make that claim. It does however bring something sorely needed to the Linux desktop – application focus.

Focus?

Most distributions are put together in such a way as to make sure they work well enough for everyone that will end up using them. This is an admirable goal but one that often ends up falling short of greatness. Elementary OS seems to take a different approach, one that focuses on selecting applications that do the basics extremely well even if they don’t support all of those extra features. Take the aptly named (Maya) Calendar application. You know what it does? That’s right, calendar things.

Yeah, a calendar. What else were you expecting?

Or the Geary e-mail client, another example of a beautiful application that just does the basics. So what if it doesn’t have all of the plugins that an application like Thunderbird does? It still lets you read and send e-mail in style.

It does e-mail

Probably the best example of how far this refinement goes is in the music application Noise. Noise looks a lot like your standard iTunes-ish media player but that familiarity betrays the simplicity that Noise brings. As you may have guessed by now, it simply plays music and plays it well.

The best thing about Noise is that it plays music well

But what about feature X?

OK, I understand that this approach to application development isn’t for everyone. In fact it is something that larger players, such as Apple, get called out over all the time. Personally though I think there is a fine balance between streamlined simplicity and refinement. The Linux desktop has come a long way in the past few years but one thing that is still missing from a large portion of it is the refined user experience that you get with something like an Apple product, or the applications selected for inclusion in Elementary OS. Too often open source projects happily jump ahead with new feature development long before the existing feature set is refined. To be clear I don’t blame them – programming new exciting features is always more fun than fixing the old broken or cumbersome ones – although this is definitely one area where improvements could be made.

Perhaps other projects can (or will) take the approach that Elementary has and dedicate one release, every so often, to making these refinements reality. I’m thinking something like Ubuntu’s One Hundred Paper Cuts but on a smaller scale. In the meantime I will continue to enjoy the simplicity that Elementary OS is currently bringing my desktop Linux computing life.





Listen up, Kubuntu: the enraging tale of sound over HDMI

August 4th, 2013 2 comments

Full disclosure: I live with Kayla, and had to jump in to help resolve an enraging problem we ran into on the Kubuntu installation with KDE, PulseAudio and the undesirable experience of not having sound in applications. It involved a fair bit of terminal work and investigation, plus a minimal understanding of how sound works on Linux. TuxRadar has a good article that tries to explain things. When there are problems, though, the diagram looks much more like the (admittedly outdated) 2007 version:

The traditional spiderweb of complexity involved in Linux audio.

To give you some background, the sound solution for the projection system is more complicated than “audio out from PC, into amplifier”. I’ve had a large amount of success in the past with optical out (S/PDIF) from Linux, with only a single trip to alsamixer required to unmute the relevant output. No, of course the audio path from this environment has to be more complicated, and looks something like:

Approximate diagram of display and audio output involved from Kubuntu machine

As a result, the video card actually acts as the sound output device, and the amplifier takes care of both passing the video signal to the projector and decoding/outputting the audio signal to the speakers and subwoofer. Under Windows, this works very well: in Control Panel > Sound, you right-click on the nVidia HDMI audio output and set it as the default device, then restart whatever application plays audio.

In the KDE environment, sound is managed by a utility called Phonon in the System Settings > Multimedia panel, which has multiple backends for ALSA and PulseAudio. It will essentially communicate with the highest-level sound output system installed that it has support for. When you make a change in a default Kubuntu install in Phonon it appears to be talking to PulseAudio, which in turn changes necessary ALSA settings. Sort of complicated, but I guess it handles the idea that multiple applications can play audio and not tie up the sound card at the same time – which has not always been the case with Linux.

In my traditional experience with the GNOME and Unity interfaces, it always seems like KDE took its own path with audio that wasn’t exactly standard. Here’s the problem I ran into: KDE listed the two audio devices (Intel HDA and nVidia HDA), with the nVidia interface containing four possible outputs – two stereo and two listed as 5.1. In the Phonon control panel, only one of these four was selectable at a time, and not necessarily corresponding to multiple channel output. Testing the output did not play audio, and it was apparent that none of it was making it to the amplifier to be decoded or output to the speakers.

Using some documentation from the ArchLinux wiki on ALSA, I was able to use the aplay -l command to find out the list of detected devices – there were four provided by the video card:

**** List of PLAYBACK Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC892 Analog [ALC892 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 1: ALC892 Digital [ALC892 Digital]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

and then use aplay -D plughw:1,N /usr/share/sounds/alsa/Front_Center.wav repeatedly where N is the number of one of the nVidia detected devices. Trial and error let me discover that card 1, device 7 was the desired output – but there was still no sound from the speakers in any KDE applications or the Netflix Desktop client. Using the ALSA output directly in VLC, I was able to get an MP3 file to play properly when selecting the second nVidia HDMI output in the list. This corresponds to the position in the aplay output, but VLC is opaque about the exact card/device that is selected.

At this point my patience was wearing pretty thin. Examining the audio listing further – and I don’t exactly remember how I got to this point – the “active” HDMI output presented in Phonon was actually presented as card 1, device 3. PulseAudio essentially grabbed the first available output and wouldn’t let me select any others. There were some additional PulseAudio tools provided that showed the only possible “sink” was card 1,3.

The brute-force, ham-handed solution was to remove PulseAudio from a terminal (sudo apt-get remove pulseaudio) and restart KDE, presenting me with the following list of possible devices read directly from ALSA. I bumped the “hw:1,7” card to the top and also quit the system tray version of Amarok.

A list of all the raw ALSA devices detected by KDE/Phonon after removing PulseAudio.

Result: Bliss! By forcing KDE to output to the correct device through ALSA, all applications started playing sounds and harmony was restored to the household.

At some point after the experiment I will see if I can get PulseAudio to work properly with this configuration, but both Kayla and I are OK with the limitations of this setup. And hey – audio works wonderfully now.
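For anyone who wants to keep PulseAudio off but still point non-KDE applications at the right output, another avenue (a minimal sketch only, assuming hw:1,7 is the working device found above) is to pin ALSA’s default device with a ~/.asoundrc along these lines:

pcm.!default {
    type plug
    slave.pcm "hw:1,7"
}
ctl.!default {
    type hw
    card 1
}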



