Archive

Archive for the ‘Linux’ Category

KWLUG: Emulating Tor (2016-10)

October 4th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Emulating Tor published on October 4th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Categories: Linux, Podcast, Tyler B

KWLUG: Watcamp calendar, Indieweb, Key Retention using Guile (2016-09)

October 4th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Watcamp calendar, Indieweb, Key Retention using Guile published on September 13th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Categories: Linux, Podcast, Tyler B

Ubuntu 16.04 VNC woes? Try this!

October 2nd, 2016 No comments

You may recall a few years back I made a very similar post about Ubuntu 14.04’s ‘VNC woes’. Unfortunately it seems things have changed slightly between 14.04 and 16.04, and the setting that once fixed everything no longer persists; it is only good for the current session. Thankfully it is pretty easy to adapt the existing workaround into a script that gets run on startup in order to ‘fix it’ forever. Note that these steps should also work on any Ubuntu derivatives such as Linux Mint 18, etc.

Credit goes to the excellent post over at ThinkingMedia for confirming that the fix is basically the same as the one I had for 14.04. What follows are their instructions for creating a startup script:

1. Create a text file called vino-fix.sh and place the following in it:

#!/bin/bash
export DISPLAY=:0
gsettings set org.gnome.Vino require-encryption false 

2. Modify the file’s permissions so that it becomes executable. You can do this via the terminal with the following command:

chmod +x vino-fix.sh

3. Create a new startup application and point it at your script. Now every time you reboot it will run that script for you and ‘fix’ the issue.
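If you prefer the terminal over the Startup Applications dialog, the same result can be had with a small autostart entry. This is a minimal sketch; the script location (/home/YOUR_USER/vino-fix.sh) is just an assumption, so adjust it to wherever you saved the file:

# create an autostart entry that runs the fix script at every login
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/vino-fix.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Vino encryption fix
Exec=/home/YOUR_USER/vino-fix.sh
X-GNOME-Autostart-enabled=true
EOF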

One last thing I should point out: this workaround disables the built-in VNC encryption. Generally I would absolutely not recommend disabling any sort of security like this; however, VNC at its core is not really a secure protocol to begin with. You are far better off setting up VNC to only listen for local connections and then using SSH+VNC for your secure remote desktop needs. Just my two cents.
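If you want to go that route, it only takes a couple of commands. Here is a rough sketch, assuming an SSH server is running on the remote machine and using 'user' and 'my-desktop' as placeholders:

# on the remote machine: tell Vino to listen only on the loopback interface
gsettings set org.gnome.Vino network-interface 'lo'

# on the machine you are connecting from: forward local port 5901 to the remote VNC port 5900
ssh -L 5901:localhost:5900 user@my-desktop

# then point your VNC client at localhost:5901 and the traffic travels inside the SSH tunnel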

How To Set Up An OpenVPN Client On Linux

September 28th, 2016 No comments

Getting a VPN set up right on your Linux machine has a number of advantages, especially today when online privacy is a must and files are being shared remotely more extensively than ever. First off, securing your connection with a virtual private network will keep your online traffic encrypted and safe from hackers and other people with malicious intent. But originally, VPNs weren’t used for that reason at all; rather, they were exactly what the name suggests: virtual private networks. By connecting to a VPN, your computer and, for example, your colleague’s remote computer (one that’s not physically connected to it via a LAN cable) can “see” each other as if they were part of a local area network and share files via the Internet. VPNs can also be utilized for remotely accessing a computer to offer assistance, or for whatever other reason you’d need to.

OpenVPN is regarded as one of the most secure and most efficient tunneling protocols for VPNs, and fortunately enough it’s quite simple to set up an OpenVPN client on a Linux computer if you know your way around the terminal.

Installing and Configuring The Client

First of all, you have to install the OpenVPN package, which you can easily do via the terminal command sudo apt-get install openvpn. Enter your sudo password (the password of your account) and press Enter. A few dependencies ask for permission to be installed, so just accept all of them for the installation to finish.

Then you’ll have to grab a few certificates off the server that the client side needs in order for OpenVPN to work. Locate the following files on your server PC and put them on a flash drive, so that you can copy them to your client PC:

  • /etc/openvpn/easy-rsa/keys/hostname.crt

  • /etc/openvpn/easy-rsa/keys/hostname.key

  • /etc/openvpn/ca.crt

  • /etc/openvpn/ta.key

Copy all of the files to the /etc/openvpn directory of your client PC (note that in place of “hostname” the first two files will actually be named after your client’s hostname). To further configure the client you have to use the command sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn, which copies a sample configuration file to the right directory.
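If both machines can reach each other over SSH you can skip the flash drive entirely and copy the files with scp instead. A quick sketch (vpnserver is a placeholder for your server’s address, and it assumes root logins over SSH are allowed; otherwise copy the files to a temporary location on the server first):

# run these on the client PC; the files are root-owned on the server
sudo scp root@vpnserver:/etc/openvpn/easy-rsa/keys/hostname.crt /etc/openvpn/
sudo scp root@vpnserver:/etc/openvpn/easy-rsa/keys/hostname.key /etc/openvpn/
sudo scp root@vpnserver:/etc/openvpn/ca.crt /etc/openvpn/
sudo scp root@vpnserver:/etc/openvpn/ta.key /etc/openvpn/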

Editing The Configuration File

Use a text editor such as gedit to open the client.conf file (now in /etc/openvpn) and locate the following text:

dev tap
remote vpn.example.com 1181
cert hostname.crt
key hostname.key
tls-auth ta.key 1

You need to make a few changes here. Instead of “vpn.example.com”, put your server’s address. “1181” should be the port of your OpenVPN server, and “hostname” should, once again, be the actual name of the certificates that you copied to /etc/openvpn a moment ago.

Now that you’ve set all of this up, you need to restart OpenVPN with the following command: sudo /etc/init.d/openvpn restart. Your remote local area network should be accessible now, which you can check by pinging the server’s VPN IP address.
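For example, if your server hands out addresses from OpenVPN’s common default 10.8.0.0/24 range (this depends entirely on how the server was configured), the server itself usually takes the first address in that range:

# four pings to the server's VPN address; replies mean the tunnel is up
ping -c 4 10.8.0.1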

Setting Up A Graphic UI Tool for OpenVPN

Unless you feel like using the terminal to navigate to every file and folder on your virtual network, it’s a good idea to set up some kind of a GUI. The Gadmin OpenVPN client does a fantastic job at this, and it’s really simple to set up, either via the Ubuntu Software Center, Synaptic or PackageKit. No matter what you choose, once it’s installed simply run the command sudo gadmin-openvpn-client and a neat graphical user interface will appear on the screen.

Now all you have to do is input some information about the server, and you’re set. Fill in the Connection name (what you’d like the connection to your VPN to be called), the Server address (the IP address of your OpenVPN server), the Server port, and the location of the certificates (the ca.crt and ta.key files mentioned earlier). Once you’re done with that, click the Add button, select the connection that you’ve just created and click Activate. Your VPN network will now be accessible.

That’s it, you’re done! You now have your own OpenVPN client that you can use to share data. Note that there are plenty of other GUI tools for VPNs to be found in the Software store, so if you don’t like Gadmin, you can always use something else and still have access to OpenVPN, just through a different interface.

Summary

As you can see, it’s pretty simple to set up an OpenVPN client and connect to an existing VPN server. Setting up an OpenVPN server on Linux is a bit more of a challenge, though it’s perfectly possible. For a better and smoother experience, though, you might want to think about subscribing to a dedicated VPN provider, such as ExpressVPN. It’s not free, but it’ll give you greater security and stability, and save you the hassle of maintaining an OpenVPN server by yourself. If you’re interested, you should check out some ExpressVPN reviews before you make your choice.

Thomas Milva is an IT Security Analyst, Web entrepreneur and Tech enthusiast. He is the co-editor of http://wefollowtech.com

KWLUG: Summer Smorgasboard (2016-08)

August 21st, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Summer Smorgasboard published on August 11th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Categories: Linux, Podcast, Tyler B

KWLUG: Personal Information Manager Synchronization (2016-07)

July 9th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Personal Information Manager Synchronization published on July 5th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

RetroPie – turning your Raspberry Pi into a retro-gaming console!

June 12th, 2016 No comments

Recently I decided to pick up a new Raspberry Pi 3 B from BuyaPi.ca. I wasn’t exactly sure what I was going to do with it but I figured with all of the neat little projects going on for the device I would find something. After doing some searching I stumbled upon a few candidate projects before finally settling on RetroPie as my first shot at playing around with the Raspberry Pi.

RetroPie works great on other Raspberry Pi models as well but performance is much better on the 3

RetroPie, as their site says, “allows you to turn your Raspberry Pi into a retro-gaming machine.” It does this by linking together multiple Raspberry Pi projects, including Raspbian, EmulationStation, RetroArch and more, into a really nice interface that essentially just works out of the box.

Setup

The setup couldn’t be easier. Simply follow the instructions to download a ready made image for your SD Card, put the RetroPie image on your SD Card, plug in a controller (I used a wired Xbox 360 controller), power it on and follow the setup instructions.

When it gets to the controller configuration settings screen be careful what you select. If you follow the on-screen button pushes by default (i.e. button “A” for “A” and button “B” for “B”, etc.) you will end up with something that matches the name of the button but not the placement you’re expecting. This is because RetroPie/RetroArch uses the SNES Controller layout as its default.

The ‘default’ SNES controller layout

So if you simply followed the on-screen wizard and pushed the Xbox 360 controller’s “A” button instead of its “B” button (which is the location of the “A” button on the SNES) you’ll experience all sorts of weird behaviour in the various emulators. So be sure to actually follow the setup guide for your particular controller (see below for example).

Notice how you actually have to push “B” when it asks for “A” and so on during the initial controller configuration

The one confusing downside to this workaround is that all of the menus in RetroPie itself still ask you to push “A” or “B” but they really mean whatever you mapped those buttons to, so it’s kind of backwards until you actually get into a game. That said, it’s a minor thing and one that I’m sure I could fix, if I cared enough to do so, by setting a custom alternative controller layout for the menu only.

Games

RetroPie supports a crazy number of emulators. No, seriously, it’s a bit ridiculous; just take a look at the list of supported systems on the RetroPie website (accurate as of the time of writing).

RetroPie automatically detects if you have games for the systems. So if you had a SNES game for example you would get a SNES system to choose from on the main menu.

Additionally you get computer emulators like DOSBox and the Apple II, and there are a number of custom ports of PC games including DOOM, Duke Nukem 3D, Minecraft Pi Edition, OpenTTD and more!

Now obviously not all of the above emulators work flawlessly. Some are still labeled experimental and some systems even offer multiple emulators so you can customize it to the game you are trying to play – just in case one emulator happens to offer better compatibility than another. That said for the majority of the emulators I tried, especially for the older systems, things work great.

The RetroPie SD Card contains various folders that you simply copy the ROM or various bits of game data to. Once the files are there you just restart EmulationStation and it automatically discovers the new games.
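Since SSH is available on the RetroPie image (more on that in the next section), you can also copy games over the network instead of pulling the SD Card. A quick sketch with a made-up file name:

# copy a game from your desktop into the Pi's SNES ROM folder (default login is pi / raspberry)
scp MyGame.sfc pi@retropie:/home/pi/RetroPie/roms/snes/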

Remote Storage

One thing I had to try was to see if I could use a remote share to play the games on the RetroPie off of my NAS instead. This would save quite a bit of space on the SD Card, and as long as the transfer speeds between the Raspberry Pi and the NAS were decent enough it should actually work.

I figured using a Windows share from the NAS was the easiest (this would also let you share games from basically any computer on your network). Here are the steps to set it up:

SSH into the Raspberry Pi

The default login for RetroPie is username pi and password raspberry. You can usually find it on the network by simply connecting to the device name retropie.

Add remote mounts to fstab

The simplest way to set up the remote mounts is to use fstab. This will ensure that the system gets the share as soon as it boots up. However you might run into problems booting the Raspberry Pi if it can’t find the share on the network… so that is something to keep in mind.

Open up /etc/fstab (I used nano):

sudo nano /etc/fstab

Then add a line that looks like this to the end of the file

//{the location of the share}    /home/pi/RetroPie/roms/{the location to mount it}    cifs    guest,uid=1000,iocharset=utf8    0    0

replacing the pieces in { brackets } with where you actually want things to mount. So for example let’s say the NAS is at IP address 192.168.1.50 and you wanted to mount a share on the NAS called SNES that contains SNES ROMs for RetroPie. First I would recommend creating a new sub-directory in the standard SNES ROMs location so that you can have both ROMs on the SD Card and remote ones:

mkdir /home/pi/RetroPie/roms/snes/NASGames

Then you would add something like this to your fstab file:

//192.168.1.50/SNES    /home/pi/RetroPie/roms/snes/NASGames    cifs    guest,uid=1000,iocharset=utf8    0    0

The next time you boot up your Raspberry Pi it should successfully add that remote share and show you any SNES ROMs that are on the NAS in RetroPie!
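If you don’t feel like rebooting, you can also mount the new entry right away and confirm that it worked:

# mount everything listed in /etc/fstab that isn't already mounted, then check the share is attached
sudo mount -a
df -h /home/pi/RetroPie/roms/snes/NASGames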

After testing a few remote games this way I can say that it does indeed work well (via WiFi no less!). This is especially true for the older systems where game sizes are only a few KiB or MiB. When you start to get into larger PC or disc-based games where the sizes are in the hundreds of MiB it still works decently well, but the first time you access something you might notice a bit of a delay. Thankfully Linux does a decent job of caching the file data after it has been read once, so subsequent reads are much faster. That said, if you had a good wired connection I have no doubt that things would work even more smoothly.

Portable Console? Best Console? A bit of both.

The RetroPie project is really neat, not only for its feature set but also because as a games console it’s one of the smallest around and has the potential to have one of the largest game libraries ever!

My setup is pretty plain but some people have done awesome things with theirs like turning it into a full arcade cabinet!

If you like to play classic games then I would seriously recommend giving RetroPie a try.

This post originally appeared on my website here.

KWLUG: Raspberry Pi Projects (2016-06)

June 11th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Raspberry Pi Projects published on June 7th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

KWLUG: Sound in Linux, Part 2 (2016-05)

June 11th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Sound in Linux, Part 2 published on May 3rd 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Surviving systemd – a quick look at a few alternatives…

June 5th, 2016 2 comments

Regardless of why (and there are a number of valid reasons), you might like to avoid using such a large project without so much as a specification or standard behind it. Fortunately there are still a number of options out there if you don’t want a systemdOS clone. I’ll present three options, ranging from “could do better” to plausible, and then finally the best in class.

Devuan

Sadly I have to say Devuan is a real disappointment. It’s taken a very long time to get to beta, let alone release, and while it provides you with a familiar Debian-like environment (from before Debian morphed into yet another systemdOS clone) I have very serious reservations about the security of Devuan, and this is not down to any particular defaults but solely to the lack of regular package updates. It appears as if they have taken Debian packages and rooted them firmly in cement. Opting for a fork of udev (vdev) instead of the more actively pursued eudev (from Gentoo), I have to wonder how much day-to-day work is being done on vdev; although there does seem to be a package for eudev, it isn’t installed by default.

All in all I’m really not sure about the viability of Devuan. They seem to have taken a long time to provide a lot of old packages with very sparse updates, the backports repo looks empty, and I’m unsure what their policy is regarding timely security updates to packages (recently published zero days, etc.). You might think I’m being harsh, but when more than a week can go by without an update it doesn’t inspire confidence.

Gentoo

The only real criticisms you can level at Gentoo are the constant compiling and its quite technical nature; you’re not going to leave this installed on some non-technical relative’s computer unless you visit them regularly (and probably only if they also cook for you). You’re looking at extended building of packages as often as every few days, and while you can lash something up to compile in the wee small hours, not everyone leaves their computer on 24/7 and it certainly wouldn’t be a hands-off affair… That said, hardware is faster today than it ever was and AMD have some 32+ core chips on the horizon that look promising, so… who knows…

Of course the real place that Gentoo shines is in its flexibility: you can configure most packages to work with (or without the need for) many different dependencies, and this level of flexibility is unprecedented, maybe only approached by an adventurous off-piste riffle through LFS (Linux From Scratch).

If you are confident in your technical ability and don’t mind your CPU grinding away while you are doing other things, Gentoo should definitely not be discarded out of hand.

Void Linux

For a while this OS did struggle with my favourite waste of time and money (Steam), but they have by now got a firm grasp on avoiding the less-than-ideal implementation of SSL that many others seem to lean towards. This isn’t the only indication that they aren’t scared of doing something different for the sake of improving things (not just to be new!). While I’m not convinced of any desperate need to improve sysv, runit plays its role just fine; for a little bit of learning it’s a low-overhead, low-pain replacement. There really isn’t any need to add a whole extra layer to the userland just to “solve” a non-problem that’s not intrinsically that complex.

This rolling release is maintained brilliantly and there are updates usually on a daily basis. The package manager (xbps), while it takes a little learning, is fast and has yet to choke on me in some of the spectacular ways I’ve seen RPM do in my past history. I’ve left a number of non-technical people with Void on their machines, and while the xbps GUI (OctoXbps) needs some explaining (it could be a little more intuitive) I’ve basically had a hands-off experience with their machines. Xbps will even allow some actions without root access; for example you can synchronise the repo in memory (the sync is volatile), which allows you to check for an update without root credentials. Coupled with zenity it’s trivial to whip up a GUI script to notify you of updates without the need to type a password after log in! There are a lot of options and it’s a powerful suite of tools. Another nice touch is the vkpurge tool which lets you easily get rid of old kernels properly, something often not so well implemented on other systems.
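As a rough illustration of that zenity idea, here is a minimal sketch of such a notifier (not a polished script; the flag behaviour is worth double-checking against the xbps-install man page):

#!/bin/bash
# Check for available updates as a regular user: -M syncs repository data in
# memory only, -u asks for updates and -n makes it a dry run (nothing is installed).
updates="$(xbps-install -Mun 2>/dev/null)"
if [ -n "$updates" ]; then
    zenity --info --title="Void Linux updates" --text="$updates"
fi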

 

So there really is life after systemd and despite people wanting to dictate exactly how your machine should be set up, you still can have a system that feels distinct, flexible and easy to use… Maybe Linux will survive the corporate onslaught….

Introducing Chris C, our occasional guest writer.
This article was originally published at his personal website here.

Categories: Chris C, Linux

Fix: trying to overwrite ‘/usr/share/accounts/services/google-im.service’ installing kubuntu-desktop

June 5th, 2016 No comments

I have an Ubuntu 16.04 desktop installation with Unity and wanted to try KDE, so I ran sudo apt-get install kubuntu-desktop. apt failed with the following message:

trying to overwrite '/usr/share/accounts/services/google-im.service', which is also in package account-plugin-google [...]

The original issue at Ask Ubuntu has several suggestions but none of them worked – any apt commands returned the same requirement to run apt-get -f install, which in turn gave the original “trying to overwrite” error message. synaptic also wasn’t installed so I couldn’t use it (or install it, as all other apt installation commands failed.)

I was able to get the dpkg database out of its bad state and continue to install kubuntu-desktop by running the following:

dpkg -P account-plugin-google unity-scope-gdrive
apt-get -f install

(Link to original Kubuntu bug for posterity: https://bugs.launchpad.net/kubuntu-ppa/+bug/1451728)

This post was cross-posted to my personal website.

Categories: God Damnit Linux, Jake B, KDE, Kubuntu, Ubuntu

Extract album art from MP3 files

May 7th, 2016 No comments

Recently I needed to extract the album art from an MP3 file and came across a really easy to use command line utility called eyeD3 to do just that (among other things). Here is how you can extract all of the album art from a file MyFile.mp3 into a directory called Output.

1) Install eyeD3

sudo apt-get install eyeD3

2) Extract all embedded album art from the file

eyeD3 --write-images=Output/ MyFile.mp3

Pretty simple!
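If you have a whole directory of files to go through, a quick shell loop using the same command does the trick (a minimal sketch):

mkdir -p Output
# extract the embedded album art from every MP3 in the current directory
for f in *.mp3; do
    eyeD3 --write-images=Output/ "$f"
done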

KWLUG: Docker Tutorial (2016-04)

April 23rd, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Docker published on April 5th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

KWLUG: Mastering Photo DVDs, KDEnlive (2016-03)

March 8th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Mastering Photo DVDs, KDEnlive published on March 8th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Murdering misbehaving applications with xkill

March 5th, 2016 No comments

Have you ever had a window in Linux freeze on you and no matter how many times you tried to close it, it just wouldn’t go away? Then when you try and find the process in System Monitor (or the like) you can’t seem to identify it for whatever reason?

Thankfully there is a really easy to use command that lets you simply click on the offending window and POOF!… it goes away instantly. So how does it work? Let’s say you have a window that is frozen like this

As long as you can see it you can xkill it!

First open up a new terminal window and type the command

xkill

and hit Enter. This will then tell you to simply click on the window you want to kill:

Select the window whose client you wish to kill with button 1….

Next it is as simple as actually clicking on the frozen window and you can say goodbye to your problem. Happy xkill-ing 🙂

This post originally appeared on my website here.

Limit bandwidth used by a command in Linux

February 28th, 2016 No comments

If you’ve ever wanted to run a bandwidth intensive command (for example downloading system updates) but limit how much of the available bandwidth it can actually use then trickle may be what you’re after.

Simply install it using

sudo apt-get install trickle

and then you can use it with the following syntax

trickle -d X -u Y command

where X is the download limit in KB/s, Y is the upload limit in KB/s and command is the process you want to start limited to these bandwidth constraints. For example, if I wanted to start a download of the latest (as of this writing) AMD64 VirtualBox for Ubuntu using wget but limit it to only using 50KB/s down and 20KB/s up, then I would run

trickle -d 50 -u 20 wget http://download.virtualbox.org/virtualbox/5.0.14/virtualbox-5.0_5.0.14-105127~Ubuntu~trusty_amd64.deb

I should point out that trickle does its best to limit the bandwidth to what you select but often won’t be exact in how it does this. Either way it is another cool little tool for your Linux toolbox.

This post originally appeared on my website here.

Categories: Linux, Tyler B

Setting up Syncthing to share files on Linux

February 21st, 2016 No comments

Syncthing is a file sharing application that lets you easily, and securely, share files between computers without having to store them on a third party server. It is most analogous to BitTorrent Sync (BTS) but whereas BTS is somewhat undocumented and closed source, Syncthing is open source and uses an open protocol that can be independently verified.

This is going to be a basic guide to configure Syncthing to sync a folder between multiple computers. I’m also going to configure these to start automatically when the system starts up and run Syncthing in the background so it doesn’t get in your way if you don’t want to see it.

Download and Install

While it may be possible to get Syncthing from your distribution’s repositories I prefer to grab it right from the source. You can grab the appropriate version for your Linux computer (for example the 64-bit syncthing-linux-amd64-v0.12.19.tar.gz download) right from their website.

Extract the contents to a new folder in your home directory (or a directory wherever you want it to live). One important thing to note is that you want whatever user will be running the program, for example your user account, to have write access to that folder so that Syncthing can auto-update itself. For example you could extract the files to ~/syncthing/ to make things easy.

To start Syncthing all you need to do is execute the syncthing binary in that directory. If you want to configure syncthing to start without also starting up the browser you can simply run it using the -no-browser flag or by changing this behaviour in the settings.

If you are on Debian, Ubuntu or derivatives (such as Linux Mint) there is also an official repository you can add. The steps can be found here but I’ve re-listed them below for completeness’ sake:

# Add the release PGP keys:
curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

# Add the "release" channel to your APT sources:
echo "deb http://apt.syncthing.net/ syncthing release" | sudo tee /etc/apt/sources.list.d/syncthing.list

# Update and install syncthing:
sudo apt-get update
sudo apt-get install syncthing

This will install syncthing to /usr/bin/syncthing. In order to specify a configuration location you can pass the -home flag which would look something like this:

/usr/bin/syncthing -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

So to set up syncthing to start automatically without the browser using the specified configuration you would simply add this to your list of startup applications:

/usr/bin/syncthing -no-browser -home="/home/{YOUR USER ACCOUNT}/.config/syncthing"

There are plenty of ways to configure Syncthing to start up automatically but the one described above is a pretty universal method. If you would rather integrate it with your system using runit/systemd/upstart just take a look at the etc folder in the tar.gz.
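For example, the Debian/Ubuntu package above ships systemd units (the tarball’s etc folder contains equivalents), so on a systemd-based distribution something like the following should work; treat the unit name as an assumption and check what your package actually installed:

# run Syncthing at boot for a given user account (replace YOUR_USER with your username)
sudo systemctl enable syncthing@YOUR_USER.service
sudo systemctl start syncthing@YOUR_USER.service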

Here is an example of my Linux Mint configuration in the Startup Applications control panel using the command listed above:

It’s easy enough to get Syncthing started

Configure Syncthing

Once Syncthing is running you should be able to browse to its interface by going to http://localhost:8080. From this point forward I’m going to assume you want to sync between two computers which I will refer to as Computer 1 and Computer 2.

First let’s start by letting Computer 1 know about Computer 2 and vice versa.

  1. On Computer 1 click Actions > Show ID. Copy the long device identification text (it will look like a series of XXXXXXX-XXXXXXX-XXXXXXX-XXXXXXX-…).
  2. On Computer 2 click Add Device and enter the copied Device ID and give it a Device Name.
  3. Back on Computer 1 you may notice a New Device notification which will allow you to easily add Computer 2 there as well. If you do not see this notification simply follow the steps above but in reverse, copying Computer 2’s device ID to Computer 1.
Once both computers know about each other they can begin syncing!

In order to share a folder you need to start by adding it to Syncthing on one of the two computers. To make it simple I will do this on Computer 1. Click Add Folder and you will see a popup asking for a bunch of information. The important ones are:

  • Folder ID: This is the name or label of the shared folder. It must be the same on all computers taking part in the share.
  • Folder Path: This is where you want it to store the files on the local computer. For example on Computer 1 I might want this to be ~/Sync/MyShare but on Computer 2 it could be /syncthing/shares/stuff.
  • Share With Devices: These are the computers you want to share this folder with.

So for example let’s say I want to share a folder called “CoolThings” and I wanted it to live in ~/Sync/CoolThings on Computer 1. Filling in this information would look like this:

Finally to share it with Computer 2 I would check Computer 2 under the Share With Devices section.

Once done you should see a new notification on Computer 2 asking if you want to add the newly shared folder there as well.

Syncthing alerts you to newly shared folders

Once done the folder should be shared and anything you put into the folder on either computer will be automatically synchronized on the other.

If you would like to add a third or fourth computer just follow the steps above again. Pretty easy no?

This post originally appeared on my website here.

Installing ROS on a Raspberry Pi

February 21st, 2016 1 comment

As a lover of technology, I tend to accumulate bits and pieces of interesting devices. Usually, these are purchased for use on unrelated projects, and on occasion, I have the opportunity to bring them together into a single project in a previously unanticipated way. Such is the case with my Arduino and Raspberry Pi. Both are interesting little boards with their own strengths and weaknesses, so when I learned that they could be made to work together with the help of the Robot Operating System (ROS), I had to give it a shot.

My Raspberry Pi

ROS is an open-source project dedicated to providing a framework of libraries for performing common tasks under the general heading of robotics. It also includes drivers that allow you to easily interface with common hardware. The core of ROS is a reactor model of observables and observers that send messages to one another, typically over a serial connection, allowing any number of controllers to interface with one another and form a unified whole.

The rosserial_arduino library is a project that allows ROS on a Raspberry Pi (or other *nix device) to interface with an Arduino over a USB serial connection, thereby combining the computing power and versatility of a Linux-based microcomputer with the IO capabilities of an Arduino.

What You’ll Need to Get Started

Installing Raspbian on the Pi

If your Pi already has an operating system on it, you can probably skip this step. If, however, it’s straight out of the box, you’ll need to install the Raspbian distribution.

As of this writing, the latest version of Raspbian is Jessie, released in September of 2015. I wasn’t able to get ROS working with this version, and backed down to the Wheezy release from May of 2015 instead. To install the operating system, I did the following:

  1. Download the Raspbian Wheezy image via a bittorrent client.
  2. When the download is complete, follow these instructions to copy the image file to your MicroSD card.
  3. Unmount the card, insert it into your Pi, and hook up the power. Your device should boot into a command prompt. From here, you can run raspi-config to customize the installation, or get right to installing ROS.

Once the installation is complete, be sure to check for updates:

pi@raspberrypi ~ $ sudo apt-get update
pi@raspberrypi ~ $ sudo apt-get upgrade

An up to date system is a safe system.

SSH

Once your Pi has an operating system, you can switch to interacting with it via SSH. My TV is the only “monitor” in my house that has an HDMI input on it, so SSH works much better for me.

Make sure that sshd is running on your Pi:

pi@raspberrypi ~ $ sudo service sshd status
● ssh.service - OpenBSD Secure Shell server
 Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
 Active: active (running) since Thu 2015-10-08 12:17:06 UTC; 4 days ago
 Main PID: 506 (sshd)
 CGroup: /system.slice/ssh.service
 └─506 /usr/sbin/sshd -D

If everything is working, you should see the text active (running) in the result. Once we know that an ssh server is running, we can check our ip address with the ifconfig command. The output should look something like this:

pi@raspberrypi ~ $ ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:b9:49:cd  
          inet addr:192.168.0.109  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5150 errors:0 dropped:51 overruns:0 frame:0
          TX packets:565 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:552488 (539.5 KiB)  TX bytes:60766 (59.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1104 (1.0 KiB)  TX bytes:1104 (1.0 KiB)

If your Pi is connected to a LAN cable, you’ll want to look at the eth0 section. If it’s connected to WiFi, look for a wlan0 section. Both sections should have an inet addr field whose value will typically be a 192.168.x.x address. In my case, it’s 192.168.0.109. From a terminal on my computer, I can connect with:

jfritz@IDEAPAD-UBUNTU:~$ ssh pi@192.168.0.109

When prompted to accept the Pi’s RSA key, I do, and when prompted for a password, I enter the default password raspberry. If you intend to leave the Pi connected to your network for long periods of time, you should change this password or add key-based authentication to the system.
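For example, changing the password and adding a key only takes a couple of commands (ssh-copy-id ships with the standard OpenSSH client tools):

# on the Pi: change the default password for the pi user
passwd

# on your own computer: copy your public key to the Pi so you can log in without a password
# (generate one first with ssh-keygen if you don't already have one)
ssh-copy-id pi@192.168.0.109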

If you have problems getting connected, check out the official instructions on the Raspberry Pi website.

Installing ROS on the Pi

As of this writing, the most recent version of ROS is Indigo, released in July of 2014. To get it running on the Pi, you’ll want to follow the official ROSberryPi installation instructions on the ROS website.

While following these instructions, I had a few false starts. It’s important to read the instructions carefully, as they’re fairly generic, and can be used to install different configurations of ROS on different versions of Raspbian. I found that the instructions for the ros_comm configuration worked best on Raspbian’s Wheezy release.

The trickiest part of the instructions is section 2.2 Resolve Dependencies. It took me a couple of reads to realize that if you’re installing ROS Indigo’s ros_comm configuration on Raspbian Wheezy, you only need to compile two packages from source: libconsole-bridge-dev and liblz4-dev. Installing any other packages at this step just costs you time, and may introduce problems down the road.

I also found that the install process went much smoother when the Pi was connected to a LAN rather than WiFi. The WiFi signal in my house is relatively weak, and the Realtek #814B is really cheap, so downloading a lot of files while maintaining an SSH connection is a big ask.

Once the installation is complete, open up your ~/.bashrc file, and add two lines to the end:

# export ROS environment variables
source /opt/ros/indigo/setup.bash

This will make sure that the appropriate environment variables are set to interact with ROS on every startup. You can check that it worked by rebooting your Pi and running

pi@raspberrypi ~ $ printenv | grep ROS
ROS_ROOT=/opt/ros/indigo/share/ros
ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks
ROS_MASTER_URI=http://localhost:11311
ROS_DISTRO=indigo
ROS_ETC_DIR=/opt/ros/indigo/etc/ros

If you see all of the ROS_* environment variables print out, then everything is set up and ready to go. Now it’s time to start on some tutorials.

Eventually, I want to get the Raspberry Pi communicating with the Arduino, and use the latter as a sensor platform and motor controller for some kind of a robot. For now, I need to find my way around ROS.

This article originally appeared at jonathanfritz.ca

CoreGTK 3.10.2 Released!

February 19th, 2016 No comments

The next version of CoreGTK, version 3.10.2, has been tagged for release today.

Highlights for this release:

  • This is a bug fix release.
  • Corrected issue with compiling CoreGTK on OS X.

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.

Automating Let’s Encrypt certificates on nginx

February 19th, 2016 1 comment

Let’s Encrypt is a new Certificate Authority that provides free SSL certificates. It is intended to be automated, so that certificates are renewed automatically. We’re using Let’s Encrypt certificates for our set of free Calculus practice problems. Our front end is currently served by an Ubuntu server running nginx, and here’s how we have it scripted on that machine. In a future post, I’ll describe how it’s automated on our Docker setup with HAProxy.

First of all, we’re using acme-tiny instead of the official Let’s Encrypt client, since it’s much smaller and, IMHO, easier to use. It takes a bit more to set up, but works well once it’s set up.

We installed acme-tiny in /opt/acme-tiny, and created a new letsencrypt user. The letsencrypt user is only used to run the acme-tiny client with reduced privilege. In theory, you could run the entire renewal process with a reduced-privilege user, but the rest of the process is just basic shell commands, and my paranoia level is not that high.

We created an /opt/acme-tiny/challenge directory, owned by the letsencrypt user, and we created /etc/acme-tiny with the following contents:

  • account.key: the account key created in step 1 from the acme-tiny README. This file should be readable only by the letsencrypt user.
  • certs: a directory containing a subdirectory for each certificate that we want. Each subdirectory should have a domain.csr file, which is the certificate signing request created in step 2 from the acme-tiny README. The certs directory should be publicly readable, and the subdirectories should be writable by the user that the cron job will run as (which does not have to be the letsencrypt user).
  • private: a directory containing a subdirectory for each certificate that we want, like we had with the certs directory. Each subdirectory has a file named privkey.pem, which will be the private key associated with the certificate. To coincide with the common setup on Debian systems, the private directory should be readable only by the ssl-cert group.
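For reference, the layout above can be created with a handful of shell commands. This is just a sketch, assuming the letsencrypt user and the ssl-cert group already exist, that account.key has already been generated, and using a hypothetical certificate name of "example":

# challenge directory, owned by the letsencrypt user
sudo mkdir -p /opt/acme-tiny/challenge
sudo chown letsencrypt: /opt/acme-tiny/challenge

# per-certificate directories
sudo mkdir -p /etc/acme-tiny/certs/example /etc/acme-tiny/private/example

# account key readable only by the letsencrypt user
sudo chown letsencrypt: /etc/acme-tiny/account.key
sudo chmod 400 /etc/acme-tiny/account.key

# private keys readable only by the ssl-cert group
sudo chgrp -R ssl-cert /etc/acme-tiny/private
sudo chmod -R o-rwx /etc/acme-tiny/private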

Instead of creating the CSR files as described in the acme-tiny README, I created a script called gen_csr.sh:

#!/bin/bash
openssl req -new -sha256 -key /etc/acme-tiny/private/"$1"/privkey.pem -subj "/" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:") <(cat /etc/acme-tiny/certs/"$1"/domains | sed "s/\\s*,\\s*/,DNS:/g")) > /etc/acme-tiny/certs/"$1"/domain.csr

The script is invoked as gen_csr.sh <name>. It reads a file named /etc/acme-tiny/certs/<name>/domains, which is a text file containing a comma-separated list of domains, and it writes the /etc/acme-tiny/certs/<name>/domain.csr file.
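As a usage example, suppose we want a certificate named "example" (a hypothetical name) covering two hostnames. With the private key already in place at /etc/acme-tiny/private/example/privkey.pem, it would look like this:

# /etc/acme-tiny/certs/example/domains contains the single line:
#   example.com, www.example.com
gen_csr.sh example
# result: /etc/acme-tiny/certs/example/domain.csr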

Now we need to configure nginx to serve the challenge files. We created a /etc/nginx/snippets/acme-tiny.conf file with the following contents:

location /.well-known/acme-challenge/ {
    auth_basic off;
    alias /opt/acme-tiny/challenge/;
}

(The “auth_basic off;” line is needed because some of our virtual hosts on that server use basic HTTP authentication.) We then modify the sites in /etc/nginx/sites-enabled that we want to use Let’s Encrypt certificates to include the line “include snippets/acme-tiny.conf;“.

After this is set up, we created a /usr/local/sbin/letsencrypt-renew script that will be used to request a new certificate:

#!/bin/sh
set +e

# only renew if certificate will expire within 20 days (=1728000 seconds)
openssl x509 -checkend 1728000 -in /etc/acme-tiny/certs/"$1"/cert.pem && exit 255

set -e
DATE=`date +%FT%R`
su letsencrypt -s /bin/sh -c "python /opt/acme-tiny/acme_tiny.py --account-key /etc/acme-tiny/account.key --csr /etc/acme-tiny/certs/\"$1\"/domain.csr --acme-dir /opt/acme-tiny/challenge/" > /etc/acme-tiny/certs/"$1"/cert-"$DATE".pem
ln -sf cert-"$DATE".pem /etc/acme-tiny/certs/"$1"/cert.pem
wget https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem -O /etc/acme-tiny/lets-encrypt-x1-cross-signed.pem
cat /etc/acme-tiny/certs/"$1"/cert-"$DATE".pem /etc/acme-tiny/lets-encrypt-x1-cross-signed.pem > /etc/acme-tiny/certs/"$1"/fullchain-"$DATE".pem
ln -sf fullchain-"$DATE".pem /etc/acme-tiny/certs/"$1"/fullchain.pem

The script will only request a new certificate if the current certificate will expire within 20 days. The certificates are stored in /etc/acme-tiny/certs/<name>/cert-<date>.pem (symlinked to /etc/acme-tiny/certs/<name>/cert.pem). The full chain (including the intermediate CA certificate) is stored in /etc/acme-tiny/certs/<name>/fullchain-<date>.pem (symlinked to /etc/acme-tiny/certs/<name>/fullchain.pem).

As-is, the script must be run as root, since it does a su to the letsencrypt user. It should be trivial to modify it to use sudo instead, so that it can be run by any user that has the appropriate permissions on /etc/acme-tiny.

The letsencrypt-renew script is run by another script that will restart the necessary servers if needed. For us, the script looks like this:

#!/bin/sh

letsencrypt-renew sbscalculus.com

RV=$?

set -e

if [ $RV -eq 255 ] ; then
  # renewal not needed
  exit 0
elif [ $RV -eq 0 ] ; then
  # restart servers
  service nginx reload;
else
  exit $RV;
fi

This is then called by a cron script of the form chronic /usr/local/sbin/letsencrypt-renew-and-restart. Chronic is a script from the moreutils package that runs a command and only passes through its output if it fails. Since the renewal script checks whether the certificate will expire, we run the cron task daily.
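For instance, the cron entry could be as simple as this (the path, schedule and file name are just one way to do it):

# /etc/cron.d/letsencrypt-renew: run daily at 04:00 as root, stay quiet unless something fails
0 4 * * * root chronic /usr/local/sbin/letsencrypt-renew-and-restart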

Of course, once you have the certificate, you want to tell nginx to use it. We have another file in /etc/nginx/snippets that, aside from setting various SSL parameters, includes

ssl_certificate /etc/acme-tiny/certs/sbscalculus.com/fullchain.pem;
ssl_certificate_key /etc/acme-tiny/private/sbscalculus.com/privkey.pem;

This is the setup we use for one of our servers. I tried to make it fairly general, and it should be fairly easy to modify for other setups.

 

This article was originally published at Hubert’s personal website here.

Categories: Hubert C, Linux, Ubuntu