Archive

Archive for the ‘Guinea Pigs’ Category

KWLUG: Personal Information Manager Synchronization (2016-07)

July 9th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Personal Information Manager Synchronization published on July 5th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.






RetroPie – turning your Raspberry Pi into a retro-gaming console!

June 12th, 2016 No comments

Recently I decided to pick up a new Raspberry Pi 3 B from BuyaPi.ca. I wasn’t exactly sure what I was going to do with it but I figured with all of the neat little projects going on for the device I would find something. After doing some searching I stumbled upon a few candidate projects before finally settling on RetroPie as my first shot at playing around with the Raspberry Pi.

RetroPie works great on other Raspberry Pi models as well, but performance is much better on the 3

RetroPie, as their site says, “allows you to turn your Raspberry Pi into a retro-gaming machine.” It does this by linking together multiple Raspberry Pi projects, including Raspbian, EmulationStation, RetroArch and more, into a really nice interface that essentially just works out of the box.

Setup

The setup couldn’t be easier. Simply follow the instructions to download a ready-made RetroPie image, write it to your SD card, plug in a controller (I used a wired Xbox 360 controller), power it on and follow the setup instructions.

When it gets to the controller configuration screen be careful what you select. If you follow the on-screen button prompts literally (i.e. pushing your controller’s “A” button for “A”, its “B” button for “B”, etc.) you will end up with a mapping that matches the names of the buttons but not the placement you’re expecting. This is because RetroPie/RetroArch uses the SNES controller layout as its default.

The ‘default’ SNES controller layout

So if you simply followed the on-screen wizard and pushed the Xbox 360 controller’s “A” button instead of its “B” button (which sits where the “A” button does on a SNES controller) you’ll experience all sorts of weird behaviour in the various emulators. So be sure to actually follow the setup guide for your particular controller (see below for an example).

Notice how you actually have to push “B” when it asks for “A” and so on during the initial controller configuration

The one confusing downside to this workaround is that all of the menus in RetroPie itself still ask you to push “A” or “B”, but they really mean whatever you mapped those buttons to, so it’s kind of backwards until you actually get into a game. That said, it’s a minor thing and one that I’m sure I could fix, if I cared enough to do so, by setting a custom alternative controller layout for the menus only.

Games

RetroPie supports a crazy number of emulators. No, seriously, it’s a bit ridiculous; the list of supported systems (as of the time of writing) is far too long to reproduce here.

RetroPie automatically detects if you have games for the systems. So if you had a SNES game, for example, you would get a SNES system to choose from on the main menu.

Additionally you get computer emulators like DOSBox and one for the Apple II, and there are a number of custom ports of PC games including DOOM, Duke Nukem 3D, Minecraft Pi Edition, OpenTTD and more!

Now obviously not all of the above emulators work flawlessly. Some are still labeled experimental, and some systems even offer multiple emulators so you can pick whichever happens to offer better compatibility with the particular game you’re trying to play. That said, for the majority of the emulators I tried, especially those for the older systems, things worked great.

The RetroPie SD card contains various folders that you simply copy ROMs or other bits of game data to. Once the files are there you just restart EmulationStation and it automatically discovers the new games.
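For example, assuming the Pi is reachable on the network as retropie (and MyGame.sfc is a hypothetical file name), you could copy a SNES ROM over from another machine like this:

scp MyGame.sfc pi@retropie:RetroPie/roms/snes/

Then restart EmulationStation and the game should show up under SNES.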

Remote Storage

One thing I had to try was to see if I could play the games on the RetroPie from a remote share on my NAS instead. This would save quite a bit of space on the SD card and, as long as the transfer speeds between the Raspberry Pi and the NAS were decent enough, it should actually work.

I figured using a Windows share from the NAS was the easiest approach (this would also let you share games from basically any computer on your network). Here are the steps to set it up:

SSH into the Raspberry Pi

The default login for RetroPie is username pi and password raspberry. You can usually find it on the network by simply connecting to the device name retropie.
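For example, from another machine on the same network (fall back to the Pi’s IP address if the name doesn’t resolve):

ssh pi@retropie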

Add remote mounts to fstab

The simplest way to set up the remote mounts is to use fstab. This ensures that the system mounts the share as soon as it boots up. However, you might run into problems booting the Raspberry Pi if it can’t find the share on the network… so that is something to keep in mind.

Open up /etc/fstab (I used nano):

sudo nano /etc/fstab

Then add a line that looks like this to the end of the file:

//{the location of the share}    /home/pi/RetroPie/roms/{the location to mount it}    cifs    guest,uid=1000,iocharset=utf8    0    0

replacing the pieces in { brackets } with where you actually want things to mount. So for example let’s say the NAS is at IP address 192.168.1.50 and you wanted to mount a share on it called SNES that contains SNES ROMs for RetroPie. First I would recommend creating a new sub-directory in the standard SNES ROMs location so that you can have both local ROMs on the SD card and remote ones:

mkdir /home/pi/RetroPie/roms/snes/NASGames

Then you would add something like this to your fstab file:

//192.168.1.50/SNES    /home/pi/RetroPie/roms/snes/NASGames    cifs    guest,uid=1000,iocharset=utf8    0    0

The next time you boot up your Raspberry Pi it should successfully add that remote share and show you any SNES ROMs that are on the NAS in RetroPie!
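If you’d rather not wait for a reboot, you can also ask the system to mount everything in fstab right away and check that the share shows up:

sudo mount -a
ls /home/pi/RetroPie/roms/snes/NASGames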

After testing a few remote games this way I can say that it does indeed work well (via WiFi no less!). This is especially true for the older systems where game sizes are only a few KiB or MiB. When you start to get into larger PC or disc-based games, where the sizes are in the hundreds of MiB, it still works decently well, but the first time you access something you might notice a bit of a delay. Thankfully Linux does a decent job of caching the file data after it has been read once, so subsequent reads are much faster. That said, if you had a good wired connection I have no doubt that things would work even more smoothly.

Portable Console? Best Console? A bit of both.

The RetroPie project is really neat, not only for its feature set but also because, as a games console, it’s one of the smallest around and has the potential to have one of the largest game libraries ever!

My setup is pretty plain but some people have done awesome things with theirs, like turning it into a full arcade cabinet!

If you like to play classic games then I would seriously recommend giving RetroPie a try.

This post originally appeared on my website here.





KWLUG: Raspberry Pi Projects (2016-06)

June 11th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Raspberry Pi Projects published on June 7th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.






KWLUG: Sound in Linux, Part 2 (2016-05)

June 11th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Sound in Linux, Part 2 published on May 3rd 2016. You can find the original Kitchener Waterloo Linux Users Group post here.






Surviving systemd – a quick look at a few alternatives…

June 5th, 2016 1 comment

Regardless of why (and there are a number of valid reasons), you might like to avoid using such a large project that has neither a specification nor a standard behind it. Fortunately there are still a number of options out there if you don’t want a systemdOS clone. I’ll present three options, ranging from “could do better” to “plausible” and, finally, the best in class.

Devuan

Sadly I have to say Devuan is a real disappointment. It’s taken a very long time to get to beta, let alone release, and while it provides you with a familiar Debian-like environment (before Debian morphed into yet another systemdOS clone), I have very serious reservations about the security of Devuan. This is not down to any particular defaults, but solely to the lack of regular package updates. It appears as if they have taken Debian packages and rooted them firmly in cement. Given that they opted for vdev, a fork of udev, instead of the more actively pursued eudev (from Gentoo), I have to wonder how much day-to-day work is being done on vdev; although there does seem to be a package for eudev, it isn’t installed by default.

All in all I’m really not sure about the viability of Devuan. They seem to have taken a long time to provide a lot of old packages with very sparse updates, the backports repo looks empty, and I’m unsure what their policy is regarding timely security updates to packages (recently published zero-days, etc.). You might think I’m being harsh, but when more than a week can go by without an update it doesn’t inspire confidence.

Gentoo

The only real criticisms you can level at Gentoo are the constant compiling and its quite technical nature; you’re not going to leave this installed on some non-technical relative’s computer unless you visit them regularly (and probably only if they also cook for you). You’re looking at extended building of packages as often as every few days, and while you can lash something up to compile in the wee small hours, not everyone leaves their computer on 24/7, and it certainly wouldn’t be a hands-off affair… That said, hardware is faster today than it ever was, and AMD have some 32+ core chips on the horizon that look promising, so… who knows…

Of course the real place that Gentoo shines is in its flexibility. You can configure most packages to work with (or without the need for) many different dependencies, and this level of flexibility is unprecedented, maybe only approached by an adventurous off-piste riffle through LFS.

If you are confident in your technical ability and don’t mind your CPU grinding away while you are doing other things, Gentoo should definitely not be discarded out of hand.

Void Linux

For a while this OS did struggle with my favourite waste of time and money (Steam), but they have by now got a firm grasp on avoiding the less-than-ideal implementation of SSL that many others seem to lean towards. This isn’t the only indication that they aren’t scared of doing something different for the sake of improving things (not just to be new!). While I’m not convinced of any desperate need to improve sysv, runit plays its role just fine; for a little bit of learning it’s a low-overhead, low-pain replacement. There really isn’t any need to add a whole extra layer to the userland just to “solve” a non-problem that isn’t intrinsically that complex.

This rolling release is maintained brilliantly, with updates usually landing on a daily basis. The package manager (xbps), while it takes a little learning, is fast and has yet to choke on me in any of the spectacular ways I’ve seen RPM do in the past. I’ve left a number of non-technical people with Void on their machines, and while the xbps GUI (OctoXbps) needs some explaining (it could be a little more intuitive), I’ve basically had a hands-off experience with their machines. Xbps will even allow some actions without root access; for example, you can synchronise the repository index in memory (the sync is volatile), which lets you check for updates without root credentials. Coupled with zenity it’s trivial to whip up a GUI script that notifies you of updates without needing to type a password after login! There are a lot of options and it’s a powerful suite of tools. Another nice touch is the vkpurge tool, which lets you easily get rid of old kernels properly, something often not so well implemented on other systems.
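As a quick sketch of that root-less check, a regular user can do a volatile sync and a dry-run update in one go; nothing on disk changes, but you get a line per pending update:

xbps-install -Snu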


So there really is life after systemd, and despite people wanting to dictate exactly how your machine should be set up, you can still have a system that feels distinct, flexible and easy to use… Maybe Linux will survive the corporate onslaught…

Introducing Chris C, our occasional guest writer.
This article was originally published at his personal website here.

Categories: Chris C, Linux Tags:

Fix: trying to overwrite ‘/usr/share/accounts/services/google-im.service’ installing kubuntu-desktop

June 5th, 2016 No comments

I have an Ubuntu 16.04 desktop installation with Unity and wanted to try KDE, so I ran sudo apt-get install kubuntu-desktop. apt failed with the following message:

trying to overwrite '/usr/share/accounts/services/google-im.service', which is also in package account-plugin-google [...]

The original issue at Ask Ubuntu has several suggestions but none of them worked; any apt command returned the same requirement to run apt-get -f install, which in turn gave the original “trying to overwrite” error message. synaptic also wasn’t installed so I couldn’t use it (or install it, as all other apt installation commands failed).

I was able to get the dpkg database out of its bad state and continue to install kubuntu-desktop by running the following:

dpkg -P account-plugin-google unity-scope-gdrive
apt-get -f install

(Link to original Kubuntu bug for posterity: https://bugs.launchpad.net/kubuntu-ppa/+bug/1451728)

This post was cross-posted to my personal website.

Categories: God Damnit Linux, Jake B, KDE, Kubuntu, Ubuntu Tags:

Extract album art from MP3 files

May 7th, 2016 No comments

Recently I needed to extract the album art from an MP3 file and came across a really easy to use command line utility called eyeD3 that does just that (among other things). Here is how you can extract all of the album art from a file MyFile.mp3 into a directory called Output.

1) Install eyeD3

sudo apt-get install eyed3

2) Extract all embedded album art from the file

eyeD3 --write-images=Output/ MyFile.mp3
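If you want to confirm that the file actually contains embedded art before extracting, running eyeD3 with no options prints the file’s tag information:

eyeD3 MyFile.mp3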

Pretty simple!





KWLUG: Docker Tutorial (2016-04)

April 23rd, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Docker published on April 5th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.






KWLUG: Mastering Photo DVDs, KDEnlive (2016-03)

March 8th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Mastering Photo DVDs, KDEnlive published on March 8th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.






Murdering misbehaving applications with xkill

March 5th, 2016 No comments

Have you ever had a window in Linux freeze on you and no matter how many times you tried to close it, it just wouldn’t go away? Then when you try and find the process in System Monitor (or the like) you can’t seem to identify it for whatever reason?

Thankfully there is a really easy to use command that lets you simply click on the offending window and POOF!… it goes away instantly. So how does it work? Let’s say you have a window that is frozen like this:

As long as you can see it you can xkill it!

First open up a new terminal window and type the command

xkill

and hit Enter. This will then tell you to simply click on the window you want to kill:

Select the window whose client you wish to kill with button 1….

Next it is as simple as actually clicking on the frozen window and you can say goodbye to your problem. Happy xkill-ing 🙂

This post originally appeared on my website here.





Limit bandwidth used by a command in Linux

February 28th, 2016 No comments

If you’ve ever wanted to run a bandwidth-intensive command (for example downloading system updates) but limit how much of the available bandwidth it can actually use, then trickle may be what you’re after.

Simply install it using

sudo apt-get install trickle

and then you can use it with the following syntax

trickle -d X -u Y command

where X is the download limit in KB/s, Y is the upload limit in KB/s and command is the process you want to start under these bandwidth constraints. For example, if I wanted to use wget to download the latest (as of this writing) AMD64 VirtualBox for Ubuntu but limit it to only 50KB/s down and 20KB/s up, I would run

trickle -d 50 -u 20 wget http://download.virtualbox.org/virtualbox/5.0.14/virtualbox-5.0_5.0.14-105127~Ubuntu~trusty_amd64.deb

I should point out that trickle does its best to limit the bandwidth to what you select but often won’t be exact about it. Either way it is another cool little tool for your Linux toolbox.

This post originally appeared on my website here.




Categories: Linux, Tyler B Tags:

Playing the Simpsons Theme on an Arduino

February 28th, 2016 No comments

A little while back, I picked up an Arduino Starter Kit. It contains an Arduino Uno, a bunch of common components, and an instruction book that walks beginners through building circuits and using the Arduino to interact with hardware.

One of the projects in the book is a basic synthesizer that lets you play tones on a piezo speaker. The schematic for this is dead simple:

The tone circuit schematic, as drawn in Fritzing

After getting the Arduino to play a few notes, I decided to extend the project a bit and play a melody. After a bit of screwing around, I managed to get the thing to play the Simpsons theme.

The code for this sketch can be found on GitHub.

This article was originally posted at jonathanfritz.ca




Categories: Arduino, Jon F Tags:

A nifty utility to limit CPU usage on Linux

February 27th, 2016 No comments

If you want to run a command that you know is going to use quite a bit of CPU, but you don’t want it to completely take over your system, there is a really neat utility that can help you out. It’s called cpulimit and it does exactly what you think it would.

The basic usage is this:

cpulimit -l XX command

where XX is the CPU % you want to limit the process to and command is the process you want to run. So, for example, if you wanted Firefox to only ever use 30% of your CPU you would simply start it from a terminal like this:

cpulimit -l 30 firefox

You can even limit it based on CPU cores instead of overall CPU usage by using the -c flag instead of the -l flag.

If you want more advanced features you could use something like cgroups but for simple stuff cpulimit seems to work very well.

This post originally appeared on my website here.




Categories: Tyler B Tags:

Setting up Syncthing to share files on Linux

February 21st, 2016 No comments

Syncthing is a file sharing application that lets you easily, and securely, share files between computers without having to store them on a third party server. It is most analogous to BitTorrent Sync (BTS) but whereas BTS is somewhat undocumented and closed source, Syncthing is open source and uses an open protocol that can be independently verified.

This is going to be a basic guide to configuring Syncthing to sync a folder between multiple computers. I’m also going to configure it to start automatically when the system boots and to run in the background so it doesn’t get in your way if you don’t want to see it.

Download and Install

While it may be possible to get Syncthing from your distribution’s repositories, I prefer to grab it right from the source. For example you can download the appropriate version for your Linux computer (such as the 64-bit syncthing-linux-amd64-v0.12.19.tar.gz) right from their website.

Extract the contents to a new folder in your home directory (or wherever you want it to live). One important thing to note is that whatever user will be running the program, for example your user account, needs write access to that folder so that Syncthing can auto-update itself. For example you could extract the files to ~/syncthing/ to make things easy.

To start Syncthing all you need to do is execute the syncthing binary in that directory. If you want it to start without also opening the browser you can simply run it using the -no-browser flag or change this behaviour in the settings.

If you are on Debian, Ubuntu or a derivative (such as Linux Mint) there is also an official repository you can add. The steps can be found here but I’ve re-listed them below for completeness’ sake:

# Add the release PGP keys:
curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

# Add the "release" channel to your APT sources:
echo "deb http://apt.syncthing.net/ syncthing release" | sudo tee /etc/apt/sources.list.d/syncthing.list

# Update and install syncthing:
sudo apt-get update
sudo apt-get install syncthing

This will install syncthing to /usr/bin/syncthing. In order to specify a configuration location you can pass the -home flag which would look something like this:

/usr/bin/syncthing -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

So to set up syncthing to start automatically without the browser using the specified configuration you would simply add this to your list of startup applications:

/usr/bin/syncthing -no-browser -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

There are plenty of ways to configure Syncthing to startup automatically but the one described above is a pretty universal method. If you would rather integrate it with your system using runit/systemd/upstart just take a look at the etc folder in the tar.gz.
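For instance, on a systemd-based distribution a minimal user unit will do the job. This is just a sketch (it assumes the repository install at /usr/bin/syncthing; adjust ExecStart if you run the binary from ~/syncthing instead), saved as ~/.config/systemd/user/syncthing.service:

[Unit]
Description=Syncthing file synchronization

[Service]
ExecStart=/usr/bin/syncthing -no-browser -home=%h/.config/syncthing
Restart=on-failure

[Install]
WantedBy=default.target

Enable and start it with systemctl --user enable syncthing.service followed by systemctl --user start syncthing.service.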

Here is an example of my Linux Mint configuration in the Startup Applications control panel using the command listed above:

It’s easy enough to get Syncthing started

Configure Syncthing

Once Syncthing is running you should be able to browse to its interface by going to http://localhost:8080. From this point forward I’m going to assume you want to sync between two computers, which I will refer to as Computer 1 and Computer 2.

First let’s start by letting Computer 1 know about Computer 2 and vice versa.

  1. On Computer 1 click Actions > Show ID. Copy the long device identification text (it will look like a series of XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-…).
  2. On Computer 2 click Add Device and enter the copied Device ID and give it a Device Name.
  3. Back on Computer 1 you may notice a New Device notification which will allow you to easily add Computer 2 there as well. If you do not see this notification simply follow the steps above but in reverse, copying Computer 2’s device ID to Computer 1.
Once both computers know about each other they can begin syncing!

In order to share a folder you need to start by adding it to Syncthing on one of the two computers. To make it simple I will do this on Computer 1. Click Add Folder and you will see a popup asking for a bunch of information. The important fields are:

  • Folder ID: This is the name or label of the shared folder. It must be the same on all computers taking part in the share.
  • Folder Path: This is where you want the files stored on the local computer. For example on Computer 1 I might want this to be ~/Sync/MyShare but on Computer 2 it could be /syncthing/shares/stuff.
  • Share With Devices: These are the computers you want to share this folder with.

So for example let’s say I want to share a folder called “CoolThings” and I wanted it to live in ~/Sync/CoolThings on Computer 1. Filling in this information would look like this:

The Add Folder popup, filled in for the CoolThings example

Finally to share it with Computer 2 I would check Computer 2 under the Share With Devices section.

Once done you should see a new notification on Computer 2 asking if you want to add the newly shared folder there as well.

Syncthing alerts you to newly shared folders

Once done the folder should be shared and anything you put into the folder on either computer will be automatically synchronized on the other.

If you would like to add a third or fourth computer just follow the steps above again. Pretty easy, no?

This post originally appeared on my website here.





Installing ROS on a Raspberry Pi

February 21st, 2016 No comments

As a lover of technology, I tend to accumulate bits and pieces of interesting devices. Usually, these are purchased for use on unrelated projects, and on occasion, I have the opportunity to bring them together into a single project in a previously unanticipated way. Such is the case with my Arduino and Raspberry Pi. Both are interesting microcomputers with their own strengths and weaknesses, so when I learned that they could be made to work together with the help of Robot Operating System (ROS), I had to give it a shot.

My Raspberry Pi

ROS is an open-source project dedicated to providing a framework of libraries for performing common tasks under the general heading of robotics. It also includes drivers that allow you to easily interface with common hardware. The core of ROS is a reactor model of observables and observers that send messages to one another, typically over a serial connection, allowing any number of controllers to interface with one another and form a unified whole.

The rosserial_arduino library is a project that allows ROS on a Raspberry Pi (or other *nix device) to interface with an Arduino over a USB serial connection, thereby combining the computing power and versatility of a Linux-based microcomputer with the IO capabilities of an Arduino.

What You’ll Need to Get Started

Installing Raspbian on the Pi

If your Pi already has an operating system on it, you can probably skip this step. If, however, it’s straight out of the box, you’ll need to install the Raspbian distribution.

As of this writing, the latest version of Raspbian is Jessie, released in September of 2015. I wasn’t able to get ROS working with this version, and backed down to the Wheezy release from May of 2015 instead. To install the operating system, I did the following:

  1. Download the Raspbian Wheezy image via a bittorrent client.
  2. When the download is complete, follow these instructions to copy the image file to your MicroSD card (see the dd sketch after this list).
  3. Unmount the card, insert it into your Pi, and hook up the power. Your device should boot into a command prompt. From here, you can run raspi-config to customize the installation, or get right to installing ROS.
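On a Linux machine, the copy in step 2 usually boils down to dd. A sketch, assuming the downloaded image is named raspbian-wheezy.img and the card appears as /dev/sdX; double-check the device with lsblk first, since writing to the wrong one will destroy its contents:

lsblk                                             # identify the MicroSD card device
sudo dd if=raspbian-wheezy.img of=/dev/sdX bs=4M  # write the image
sync                                              # flush writes before removing the card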

Once the installation is complete, be sure to check for updates:

pi@raspberrypi ~ $ sudo apt-get update
pi@raspberrypi ~ $ sudo apt-get upgrade

An up to date system is a safe system.

SSH

Once your Pi has an operating system, you can switch to interacting with it via SSH. My TV is the only “monitor” in my house that has an HDMI input on it, so SSH works much better for me.

Make sure that sshd is running on your Pi:

pi@raspberrypi ~ $ sudo service sshd status
● ssh.service - OpenBSD Secure Shell server
 Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
 Active: active (running) since Thu 2015-10-08 12:17:06 UTC; 4 days ago
 Main PID: 506 (sshd)
 CGroup: /system.slice/ssh.service
 └─506 /usr/sbin/sshd -D

If everything is working, you should see the text active (running) in the result. Once we know that an SSH server is running, we can check our IP address with the ifconfig command. The output should look something like this:

pi@raspberrypi ~ $ ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:b9:49:cd  
          inet addr:192.168.0.109  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5150 errors:0 dropped:51 overruns:0 frame:0
          TX packets:565 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:552488 (539.5 KiB)  TX bytes:60766 (59.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1104 (1.0 KiB)  TX bytes:1104 (1.0 KiB)

If your Pi is connected to a LAN cable, you’ll want to look at the eth0 section. If it’s connected to WiFi, look for a wlan0 section. Both sections should have an inet addr field whose value starts with a 192.168.x.x address. In my case, it’s 192.168.0.109. From a terminal on my computer, I can connect with:

jfritz@IDEAPAD-UBUNTU:~$ ssh pi@192.168.0.109

When prompted to accept the Pi’s RSA key, I do, and when prompted for a password, I enter the default password raspberry. If you intend to leave the Pi connected to your network for long periods of time, you should change this password or add key-based authentication to the system.
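Both changes are quick to make. Change the password on the Pi itself with passwd, and (assuming you already have an SSH key pair on your computer) install your public key with ssh-copy-id:

pi@raspberrypi ~ $ passwd
jfritz@IDEAPAD-UBUNTU:~$ ssh-copy-id pi@192.168.0.109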

If you have problems getting connected, check out the official instructions on the Raspberry Pi website.

Installing ROS on the Pi

As of this writing, the most recent version of ROS is Indigo, released in July of 2014. To get it running on the Pi, you’ll want to follow the official ROSberryPi installation instructions on the ROS website.

While following these instructions, I had a few false starts. It’s important to read the instructions carefully, as they’re fairly generic, and can be used to install different configurations of ROS on different versions of Raspbian. I found that the instructions for the ros_comm configuration worked best on Raspbian’s Wheezy release.

The trickiest part of the instructions is section 2.2, Resolve Dependencies. It took me a couple of reads to realize that if you’re installing ROS Indigo’s ros_comm configuration on Raspbian Wheezy, you only need to compile two packages from source: libconsole-bridge-dev and liblz4-dev. Installing any other packages at this step just costs you time, and may introduce problems down the road.

I also found that the install process went much smoother when the Pi was connected to a LAN rather than WiFi. The WiFi signal in my house is relatively weak, and the Realtek #814B is really cheap, so downloading a lot of files while maintaining an SSH connection is a big ask.

Once the installation is complete, open up your ~/.bashrc file, and add two lines to the end:

# export ROS environment variables
source /opt/ros/indigo/setup.bash

This will make sure that the appropriate environment variables are set to interact with ROS on every startup. You can check that it worked by rebooting your Pi and running

pi@raspberrypi ~ $ printenv | grep ROS
ROS_ROOT=/opt/ros/indigo/share/ros
ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks
ROS_MASTER_URI=http://localhost:11311
ROS_DISTRO=indigo
ROS_ETC_DIR=/opt/ros/indigo/etc/ros

If you see all of the ROS_* environment variables print out, then everything is set up and ready to go. Now it’s time to start on some tutorials.

Eventually, I want to get the Raspberry Pi communicating with the Arduino, and use the latter as a sensor platform and motor controller for some kind of a robot. For now, I need to find my way around ROS.

This article originally appeared at jonathanfritz.ca





CoreGTK 3.10.2 Released!

February 19th, 2016 No comments

The next version of CoreGTK, version 3.10.2, has been tagged for release today.

Highlights for this release:

  • This is a bug fix release.
  • Corrected issue with compiling CoreGTK on OS X.

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.





Automating Let’s Encrypt certificates on nginx

February 19th, 2016 1 comment

Let’s Encrypt is a new Certificate Authority that provides free SSL certificates. It is intended to be automated, so that certificates are renewed automatically. We’re using Let’s Encrypt certificates for our set of free Calculus practice problems. Our front end is currently served by an Ubuntu server running nginx, and here’s how we have it scripted on that machine. In a future post, I’ll describe how it’s automated on our Docker setup with HAProxy.

First of all, we’re using acme-tiny instead of the official Let’s Encrypt client, since it’s much smaller and, IMHO, easier to use. It takes a bit more work to set up, but runs well once it is.

We installed acme-tiny in /opt/acme-tiny, and created a new letsencrypt user. The letsencrypt user is only used to run the acme-tiny client with reduced privilege. In theory, you could run the entire renewal process as a reduced-privilege user, but the rest of the process is just basic shell commands, and my paranoia level is not that high.

We created an /opt/acme-tiny/challenge directory, owned by the letsencrypt user, and we created /etc/acme-tiny with the following contents:

  • account.key: the account key created in step 1 from the acme-tiny README. This file should be readable only by the letsencrypt user.
  • certs: a directory containing a subdirectory for each certificate that we want. Each subdirectory should have a domain.csr file, which is the certificate signing request created in step 2 from the acme-tiny README. The certs directory should be publicly readable, and the subdirectories should be writable by the user that the cron job will run as (which does not have to be the letsencrypt user).
  • private: a directory containing a subdirectory for each certificate that we want, like we had with the certs directory. Each subdirectory has a file named privkey.pem, which will be the private key associated with the certificate. To coincide with the common setup on Debian systems, the private directory should be readable only by the ssl-cert group.

Instead of creating the CSR files as described in the acme-tiny README, I created a script called gen_csr.sh:

#!/bin/bash
openssl req -new -sha256 -key /etc/acme-tiny/private/"$1"/privkey.pem -subj "/" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:") <(cat /etc/acme-tiny/certs/"$1"/domains | sed "s/\\s*,\\s*/,DNS:/g")) > /etc/acme-tiny/certs/"$1"/domain.csr

The script is invoked as gen_csr.sh <name>. It reads a file named /etc/acme-tiny/certs/<name>/domains, which is a text file containing a comma-separated list of domains, and it writes the /etc/acme-tiny/certs/<name>/domain.csr file.
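As a hypothetical example, for a certificate named example covering a bare domain and its www alias, /etc/acme-tiny/certs/example/domains would contain the single line

example.com, www.example.com

and running gen_csr.sh example would then write the matching domain.csr.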

Now we need to configure nginx to serve the challenge files. We created a /etc/nginx/snippets/acme-tiny.conf file with the following contents:

location /.well-known/acme-challenge/ {
    auth_basic off;
    alias /opt/acme-tiny/challenge/;
}

(The “auth_basic off;” line is needed because some of our virtual hosts on that server use basic HTTP authentication.) We then modify the sites in /etc/nginx/sites-enabled that we want to use Let’s Encrypt certificates to include the line “include snippets/acme-tiny.conf;“.

After this is set up, we created a /usr/local/sbin/letsencrypt-renew script that will be used to request a new certificate:

#!/bin/sh
set +e

# only renew if certificate will expire within 20 days (=1728000 seconds)
openssl x509 -checkend 1728000 -in /etc/acme-tiny/certs/"$1"/cert.pem && exit 255

set -e
DATE=`date +%FT%R`
su letsencrypt -s /bin/sh -c "python /opt/acme-tiny/acme_tiny.py --account-key /etc/acme-tiny/account.key --csr /etc/acme-tiny/certs/\"$1\"/domain.csr --acme-dir /opt/acme-tiny/challenge/" > /etc/acme-tiny/certs/"$1"/cert-"$DATE".pem
ln -sf cert-"$DATE".pem /etc/acme-tiny/certs/"$1"/cert.pem
wget https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem -O /etc/acme-tiny/lets-encrypt-x1-cross-signed.pem
cat /etc/acme-tiny/certs/"$1"/cert-"$DATE".pem /etc/acme-tiny/lets-encrypt-x1-cross-signed.pem > /etc/acme-tiny/certs/"$1"/fullchain-"$DATE".pem
ln -sf fullchain-"$DATE".pem /etc/acme-tiny/certs/"$1"/fullchain.pem

The script will only request a new certificate if the current certificate will expire within 20 days. The certificates are stored in /etc/acme-tiny/certs/<name>/cert-<date>.pem (symlinked to /etc/acme-tiny/certs/<name>/cert.pem). The full chain (including the intermediate CA certificate) is stored in /etc/acme-tiny/certs/<name>/fullchain-<date>.pem (symlinked to /etc/acme-tiny/certs/<name>/fullchain.pem).

As-is, the script must be run as root, since it does a su to the letsencrypt user. It should be trivial to modify it to use sudo instead, so that it can be run by any user that has the appropriate permissions on /etc/acme-tiny.

The letsencrypt-renew script is run by another script that will restart the necessary servers if needed. For us, the script looks like this:

#!/bin/sh

letsencrypt-renew sbscalculus.com

RV=$?

set -e

if [ $RV -eq 255 ] ; then
  # renewal not needed
  exit 0
elif [ $RV -eq 0 ] ; then
  # restart servers
  service nginx reload;
else
  exit $RV;
fi

This is then called by a cron script of the form chronic /usr/local/sbin/letsencrypt-renew-and-restart. Chronic is a script from the moreutils package that runs a command and only passes through its output if it fails. Since the renewal script checks whether the certificate will expire, we run the cron task daily.
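Concretely, that can be a single root crontab entry; this is a sketch, with the wrapper saved as /usr/local/sbin/letsencrypt-renew-and-restart as above:

# run the renewal check daily at 04:30; chronic keeps cron quiet unless it fails
30 4 * * * chronic /usr/local/sbin/letsencrypt-renew-and-restart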

Of course, once you have the certificate, you want to tell nginx to use it. We have another file in /etc/nginx/snippets that, aside from setting various SSL parameters, includes

ssl_certificate /etc/acme-tiny/certs/sbscalculus.com/fullchain.pem;
ssl_certificate_key /etc/acme-tiny/private/sbscalculus.com/privkey.pem;

This is the setup we use for one of our servers. I tried to make it fairly general, and it should be fairly easy to modify for other setups.


This article was originally published at Hubert’s personal website here.

Categories: Hubert C, Linux, Ubuntu Tags: ,

Turn your computer into your own Chromecast

February 7th, 2016 No comments

Google Chromecasts are neat little devices that let you ‘cast’ (send) media from your phone or tablet to play on your TV. If, however, you already have a computer hooked up to your TV, you don’t need to go out and buy a new device simply to get the same functionality. Instead you can install the excellent Leapcast program and accomplish the same thing.

Leapcast works on all major operating systems – Windows, Mac and Linux – but for the purposes of this post I’m going to be focusing on how to set it up on a Debian based Linux distribution.

Step 1) Install Google Chrome browser

The Google Chrome browser is required for Leapcast to work correctly so the first thing you’ll need to do is head over to the download page and install it.

Step 2) Install miscellaneous required applications and libraries

Leapcast also requires a few extra tools and libraries to be installed.

sudo apt-get install virtualenvwrapper python-pip python-twisted-web python2.7-dev

Step 3) Download Leapcast

Head over to the GitHub page and download the zip of the latest Leapcast code. Alternatively you can also install git and use it to grab the latest code that way:

git clone https://github.com/dz0ny/leapcast.git

Step 4) Install Leapcast

In the leapcast directory run the following command. Note you may need to be root in order to do this without error.

sudo python setup.py develop

Step 5) Run Leapcast

Now that Leapcast is installed you should be able to run it. Simply open a terminal and type

leapcast

There are some other neat options you can pass it as well. For example, if you want your computer to show up as, say, TheLinuxExperiment when someone goes to cast to it, simply pass the --name parameter.

leapcast --name TheLinuxExperiment

Happy casting!





Listener Feedback podcast episode 52: Silence – L’autre Endroit

January 24th, 2016 No comments

Listener Feedback podcast is a royalty free and Creative Commons music podcast. This episode, titled “Episode 52: Full Album – Silence – L’autre Endroit” was released on Sunday January 24th, 2016. To suggest artists and albums that should be featured you can send an e-mail to contact@listenerfeedback.net or message @LFpodcast on Twitter.

To subscribe to the podcast add the feed here or listen to this episode by clicking here.





Enter the Void! First impressions of Void Linux

January 23rd, 2016 1 comment
Void Linux

While Gentoo is a great way to spin your own flavour of Linux, after a year I’ve found that recompiling packages every time you do an update becomes a bit of a drag. With that in mind I decided to look around for an alternative distribution, and while nothing is 100% perfect I have to say I really am very happy with Void Linux. There are a number of “live” ISO images which will happily boot from a USB stick. I only looked at two of the images, Cinnamon and Xfce; while Cinnamon was all very pretty and all that, I couldn’t get the audio volume widget to show itself and besides I didn’t see any real advantage. I’ve long been a fan of Xfce, basically because of what it doesn’t try to do: you don’t get the kitchen sink (thankfully) but what you do get works solidly.

Now it’s entirely possible that I missed something obvious with the void_installer script, but it has two distinct behaviours depending on which installation source you choose. If you choose to install from the internet, what you get is a bare minimum of packages (command line only) and you’ll be left with a fair bit of configuration to do for yourself. This isn’t always a bad thing if you have some specific use in mind, an embedded kiosk for example. For more usual desktop use it’s better to choose the installation media itself as the source; this basically copies and configures the “live” image onto your machine. I did find that after an update I had to manually delete the old kernel, but once I did that, and a few more of the usual chores one normally expects when installing a new system (eventually after correctly using the installer!), I found myself in possession of a really nice system.

Void is a sleek system, obviously well engineered, and the best thing has to be the package manager: xbps comprises a small suite of interrelated console applications which each do just one narrow job. For example, given the choice of xbps-install, xbps-query, xbps-remove etc. you can make a good guess at what each one does, and as you’d expect, each xbps app run without parameters gives you a quick run-down of its commands and parameters. One thing I did notice with xbps is that it’s fast, and no mistake! There is a graphical package manager too (OctoXbps) which is both familiar and refreshingly slightly different (and not just for the sake of it, either). I didn’t manage to find an update reminder, but the xbps tools are easy to work with and I was able to make a simple script to do a dry-run update (so nothing’s changed); from this I can infer the number of updates that need to be done.
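To give a flavour of those tools (a sketch; firefox is just an example package name):

sudo xbps-install -Su         # synchronise repositories and apply all pending updates
xbps-query -Rs firefox        # search the remote repositories
sudo xbps-remove -R firefox   # remove a package and any now-unneeded dependencies

And here is the dry-run update script I mentioned: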

#!/bin/bash
n=`xbps-install -Snu | wc -l`
if [ $n -gt 0 ]
then
    if [ $n -eq 1 ]
    then
        m="there is a system update to do"
    else
        m="there are $n system updates to do"
    fi
    zenity --info --text="$m"
fi

I auto-run this when the desktop environment starts, keeping package information synced and warning me of updates; this will be most useful when installed on machines that I maintain for others who, let us say, just need a gentle reminder about updates…
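One way to do that auto-running (a sketch, assuming the script above is saved as ~/bin/check-updates.sh and your desktop honours XDG autostart, as Xfce does) is a desktop entry in ~/.config/autostart:

# ~/.config/autostart/check-updates.desktop (yourname is a placeholder)
[Desktop Entry]
Type=Application
Name=Check for updates
Exec=/home/yourname/bin/check-updates.sh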

Of course everything was going far too well (spoiling my fun, just working like that!). The only big issue I had was with Steam, but the Void forum soon came to my aid with a workaround: a particular version of the Xorg driver for Intel GPUs was causing some kind of networking issue (of all things). I won’t bore you with the details, but suffice to say it’s working with a “little” persuasion!

I did notice PulseAudio is installed by default, which seemed a little odd as I’m not sure it provides a great deal for the extra overhead. Uninstalling PulseAudio doesn’t break anything, and getting ALSA going is just a case of enabling the daemon in the alsa-utils package and adding the usual ALSA configuration (~/.asoundrc):

pcm.!default {
    type plug
    slave {
        pcm "hw:1,0"
    }
}

ctl.!default {
    type hw
    card 1
}

I’ve since found a better way to configure ALSA; to quote their website:

Neither .asoundrc or /etc/asound.conf is normally required. You should be able to play and record sound without either (assuming your mic and speakers are hooked up properly). If your system won’t work without one, and you are running the most current version of ALSA, you probably should file a bug report.

That said, it did assume the HDMI output was the default, so an even simpler config for the kernel module itself allowed me to disable HDMI sound, which I have no use for…

/etc/modprobe.d/alsa-base.conf:

options snd_hda_intel enable=1 index=0
options snd_hda_intel enable=0 index=1

Your system might differ but it’s not really a problem; it’s just nice that there are no enforced and/or needless package dependencies like there are with some other distributions.

Further testing centred on Java development. I knew that if I was going to hit any major show-stoppers, Jgles2 (with its multiple build systems) would be likely to show up issues. Not unusually for Void, Ant’s package was bang up to date (wish I could say the same of Gentoo, where at the time of writing Ant is languishing at version 1.9.2, which isn’t enough to compile Lwjgl…). Installing g++, make and pkg-config gave me the complete build system for Jgles2 working as I’d expect.

While it’s still early days, and it’s not impossible I could find some wrinkles, all in all Void seems a positive experience so far and is well worth considering.

Introducing Chris C, our occasional guest writer.
This article was originally published at his personal website here.

Categories: Chris C, Linux Tags: