Archive

Author Archive

Set up your own VPN with OpenVPN

February 6th, 2017 No comments

Using the excellent Digital Ocean tutorial as my base, I decided to set up an OpenVPN server on a Linux Mint 18 computer running on my home network so that I can have an extra layer of protection when connecting to those less than reputable WiFi hotspots at airports and hotels.

While this post is not meant to be an in-depth guide (you should use the original for that), it is meant to allow me to look back at this at some point in the future and easily re-create my setup.

1. Install everything you need

sudo apt-get update
sudo apt-get install openvpn easy-rsa

2. Set up the Certificate Authority (CA)

make-cadir ~/openvpn-ca
cd ~/openvpn-ca
nano vars

3. Update CA vars

Set these to something that makes sense:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Set the KEY_NAME to something that makes sense:

export KEY_NAME="server"

4. Build the CA

source vars
./clean-all
./build-ca

5. Build server certificate and key

./build-key-server server
./build-dh
openvpn --genkey --secret keys/ta.key

6. Generate client certificate

source vars
./build-key-pass clientname

7. Configure OpenVPN

cd ~/openvpn-ca/keys
sudo cp ca.crt ca.key server.crt server.key ta.key dh2048.pem /etc/openvpn
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | sudo tee /etc/openvpn/server.conf

Edit config file:

sudo nano /etc/openvpn/server.conf

Uncomment the following:

tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push “route 192.168.10.0 255.255.255.0”
push “route 192.168.20.0 255.255.255.0”

Add the following:

key-direction 0
auth SHA256

Edit config file:

sudo nano /etc/sysctl.conf

Uncomment the following:

net.ipv4.ip_forward=1

Run:

sudo sysctl -p

8. Set up UFW rules

Run:

ip route | grep default

to find the name of your network adaptor. For example:

default via 192.168.x.x dev enp3s0  src 192.168.x.x  metric 202

Edit config file:

sudo nano /etc/ufw/before.rules

Add the following above the bit that says # Don't delete these required lines, replacing enp3s0 with your own network adaptor name:

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0
-A POSTROUTING -s 10.8.0.0/8 -o enp3s0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Edit config file:

sudo nano /etc/default/ufw

Change DEFAULT_FORWARD_POLICY to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"

Allow the OpenVPN port and OpenSSH through ufw, then disable and re-enable ufw to load the new rules:

sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw disable
sudo ufw enable
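
If you want to double-check that the firewall rules took effect, a quick status check doesn’t hurt:

sudo ufw status verbose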

9. Start OpenVPN Service and set it to enable at boot

sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server
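
To confirm the service actually came up cleanly you can always peek at its status:

sudo systemctl status openvpn@server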

10. Set up the client configuration

mkdir -p ~/client-configs/files
chmod 700 ~/client-configs/files
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

Edit config file:

nano ~/client-configs/base.conf

Replace the remote server_IP_address port line with the external IP address and port you are planning on using. The IP address can also be a hostname, such as a redirector.
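
As a sketch, assuming a hypothetical hostname of vpn.example.com and the default port used above, the relevant lines in base.conf would end up looking something like this:

proto udp
remote vpn.example.com 1194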

Add the following:

cipher AES-128-CBC
auth SHA256
key-direction 1

Uncomment the following:

user nobody
group nogroup

Comment out the following:

#ca ca.crt
#cert client.crt
#key client.key

11. Make a client configuration generation script

Create the file:

nano ~/client-configs/make_config.sh

Add the following to it:

#!/bin/bash

# First argument: Client identifier

KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

And mark it executable:

chmod 700 ~/client-configs/make_config.sh

12. Generate the client config file

cd ~/client-configs
./make_config.sh clientname

13. Transfer client configuration to device

You can now transfer the client configuration file found in ~/client-configs/files to your device.
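
If the destination is another Linux machine, scp does the trick. Here’s a sketch assuming the clientname used earlier and a hypothetical user@laptop target:

scp ~/client-configs/files/clientname.ovpn user@laptop:~/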


This post originally appeared on my personal website here.

Categories: Open Source Software, Tyler B Tags: ,

Blast from the Past: The Search Begins

January 26th, 2017 No comments

This post was originally published on July 29, 2009. The original can be found here.


100% fat free

Picking a flavour of Linux is like picking what you want to eat for dinner; sure, some items may taste better than others but in the end you’re still full. At least I hope so; the satisfied part still remains to be seen.

Where to begin?

A quick search of Wikipedia reveals that the sheer number of Linux distributions, and thus choices, can be very overwhelming. Thankfully because of my past experience with Ubuntu I can at least remove it and its immediate variants, Kubuntu and Xubuntu, from this list of potential candidates. That should only leave me with… well, that hardly narrowed it down at all!

Seriously… the number of possible choices is a bit ridiculous

Learning from others’ experience

My next thought was to use the Internet for what it was designed to do: letting other people do your work for you! To start Wikipedia has a list of popular distributions. I figured if these distributions have somehow managed to make a name for themselves, among all of the possibilities, there must be a reason for that. Removing the direct Ubuntu variants, the site lists these as Arch Linux, CentOS, Debian, Fedora, Gentoo, gOS, Knoppix, Linux Mint, Mandriva, MontaVista Linux, OpenGEU, openSUSE, Oracle Enterprise Linux, Pardus, PCLinuxOS, Red Hat Enterprise Linux, Sabayon Linux, Slackware and, finally, Slax.

Doing both a Google and a Bing search for “linux distributions” I found a number of additional websites that might prove to be very useful. All of these websites aim to provide information about the various distributions or help point you in the direction of the one that’s right for you.

Only the start

Things are just getting started. There is plenty more research to do as I compare and narrow down the distributions until I finally arrive at the one that I will install come September 1st. Hopefully I can wrap my head around things by then.

KWLUG: Vagrant (2017-01)

January 11th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Vagrant, published on January 9th 2017. You can find the original Kitchener Waterloo Linux Users Group post here.

Read more…

Categories: Linux, Podcast, Tyler B Tags: ,

Shove ads in your pi-hole!

January 8th, 2017 No comments

There are loads of neat little projects out there for your Raspberry Pi from random little hacks all the way up to full scale home automation and more. In the past I’ve written about RetroPie (which is an awesome project that you should definitely check out!) but this time I’m going to take a moment to mention another really cool project: pi-hole.

Pi-hole, as their website says, is “a black hole for Internet advertisements.” Essentially it’s software that you install on your Raspberry Pi (or other Linux computer) that then acts as a local DNS proxy. Once it is set up and running you can point your devices to it individually or just tell your router to use it instead (which then applies to everything on the network).

Then as you’re browsing the internet and come across a webpage that is trying to serve you ads, pi-hole will simply block the DNS request for the ad from resolving and instead return a blank image or web page, meaning that the site simply can’t download the ad to show you. Voila! A universal ad blocker for your entire network and all of your devices! Even better – because you’re blocking the ads from being downloaded in the first place, your browsing speeds can sometimes be improved as well.
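
If you’re curious whether a particular domain is actually being blocked you can query the Pi-hole directly. A quick sketch, assuming your Pi-hole lives at the hypothetical address 192.168.1.2 and that some-ad-server.example sits on one of its blocklists:

dig some-ad-server.example @192.168.1.2
dig google.com @192.168.1.2

The first lookup should come back pointing at the Pi-hole itself (or a null address, depending on how it’s configured) while the second resolves normally.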

Pi-hole dashboard

You can monitor or control which domains are blocked all from a really nice dashboard interface and see the queries come into pi-hole almost in real time.

After running pi-hole for a week now I’m quite surprised with how effective it has been at removing ads. It’s legitimately pleasant being able to browse the web without seeing ads everywhere or having ad blockers break certain websites. If that sounds like something you might be interested in then pi-hole is worth taking a look at.


This post originally appeared on my personal website here.

Blast from the Past: Of filesystems and partitions

January 5th, 2017 No comments

This post was originally published on August 25, 2009. The original can be found here.


Following from my last post about finalizing all of those small little choices I will now continue along that line but discuss the merits of the various filesystems that Linux allows me to choose from, as well as discuss how I am going to partition my drive.

Filesystem?

For a Windows or Mac user the filesystem is something they will probably never think about in their daily computing adventures. That is mostly because there really isn’t a choice in the matter. As a Windows user the only time I actually have to worry about the filesystem is when I’m formatting a USB drive. For my hard drives the choices are NTFS, NTFS, and… oh yeah, NTFS. My earliest recollection of what a filesystem is happened when my Windows 98 machine crashed and I had to wait while the machine forced a filesystem check on the next start up. More recently FAT32 has gotten in my way with its 4GB file size limitation.

You mean we get a choice?

Linux seems to be all about choice, so why would it be surprising that you get to pick your own filesystem? The main contenders for this choice are ext2, ext3, ext4, ReiserFS, JFS, XFS, Btrfs and, in special places, Fat16, Fat32, NTFS, and swap.

Ext2

According to the great internet bible, ext2 stands for the second extended filesystem. It was designed as a practical replacement for the original, but very old, Linux filesystem. If I may make an analogy for Windows users, ext2 seems to be the Linux equivalent to Fat32, only much better. This filesystem is now considered mostly outdated and only really still used in places where journaling is not always appropriate; for example on USB drives. Ext2 can be used on the /boot partition and is supported by GRUB.

Ext2 Features

  • Introduced: January 1993
  • File allocation: bitmap (free space), table (metadata)
  • Max file size: 16 GiB – 64 TiB
  • Max number of files: 10^18
  • Max filename length: 255 characters
  • Max volume size: 2 TiB – 32 TiB
  • Date range: December 14, 1901 – January 18, 2038

Ext3

Ext3 is the successor to ext2; it removed quite a few of the limitations and also added a number of new features, the most important of which was journaling. As you might have guessed its full name is the third extended filesystem. While ext3 is generally considered to be much better than ext2 there are a couple of problems with it. While ext3 does not have to scan itself after a crash, something that ext2 did have to do, it also does not have an online defragmenter. Also, because ext3 was primarily designed to shore up some of ext2’s faults, it is not the cleanest implementation and can actually have worse performance than ext2 in some situations. Ext3 is still the most popular Linux filesystem and is only now slowly being replaced by its own successor, ext4. Ext3 can be used on the /boot partition and is fully supported by GRUB.

Ext3 Features

  • Introduced: November 2001
  • Directory contents: Table, hashed B-tree with dir_index enabled
  • File allocation: bitmap (free space), table (metadata)
  • Max file size: 16 GiB – 2 TiB
  • Max number of files: Variable, allocated at creation time
  • Max filename length: 255 characters
  • Max volume size: 2 TiB – 16 TiB
  • Date range: December 14, 1901 – January 18, 2038

Ext4

Ext4 is the next in the extended filesystem line and the successor to ext3. This addition initially proved to be quite controversial due to its implementation of delayed allocation, which can mean a long wait before data is actually written to disk. However ext4 achieves very good overall performance thanks in part to this delayed allocation and compares very well to ext3. Ext4 is slowly taking over as the de facto filesystem and is actually already the default in many distributions (Fedora included). Ext4 cannot be used on the /boot partition because of GRUB, meaning a separate /boot partition with a different filesystem must be made.

Ext4 Features

  • Introduced: October 21, 2008
  • Directory contents: Linked list, hashed B-tree
  • File allocation: Extents/Bitmap
  • Max file size: 16 TiB
  • Max number of files: 4 billion
  • Max filename length: 255 characters
  • Max volume size: 1 EiB
  • Date range: December 14, 1901 – April 25, 2514

ReiserFS

Created by Hans ‘I didn’t murder my wife’ Reiser in 2001, this filesystem was very promising for its performance but has since been mostly abandoned by the Linux community. Its initial claim to fame was as the first journaling filesystem to be included within the Linux kernel. Carefully configured, ReiserFS can achieve 10 to 15x the performance of ext2 and ext3. ReiserFS can be used on the /boot partition and is supported by GRUB.

ReiserFS Features

  • Introduced: 2001
  • Directory contents: B+ tree
  • File allocation: Bitmap
  • Max file size: 8 TiB
  • Max number of files: ~4 billion
  • Max filename length: 4032 characters theoretically, 255 in practice
  • Max volume size: 16 TiB
  • Date range: December 14, 1901 – January 18, 2038

Journaled File System (JFS)

Developed by IBM, JFS sports many features and is very advanced for its time of release. Among these features are extents and compression. Though not as widely used as other filesystems, JFS is very stable, reliable and fast with low CPU overhead. JFS can be used on the /boot partition and is supported by GRUB.

JFS Features

  • Introduced: 1990 and 1999
  • Directory contents: B+ tree
  • File allocation: Bitmap/extents
  • Max file size: 4 PiB
  • Max number of files: no limit
  • Max filename length: 255 characters
  • Max volume size: 32 PiB

XFS

Like JFS, XFS is one of the oldest and most refined journaling filesystems available on Linux. Unlike JFS, XFS supports many additional advanced features such as striped allocation to optimize RAID setups, delayed allocation to optimize disk data placement, sparse files, extended attributes, advanced I/O features, volume snapshots, online defragmentation, online resizing, native backup/restore and disk quotas. The only real downsides XFS suffers from are its inability to shrink partitions, a difficult to implement un-delete, and quite a bit of overhead when new directories are created and directories are deleted. XFS is supported by GRUB, and thus can be used as the /boot partition, but there are reports that it is not very stable.

XFS Features

  • Introduced: 1994
  • Directory contents: B+ tree
  • File allocation: B+ tree
  • Max file size: 8 EiB
  • Max filename length: 255 characters
  • Max volume size: 16 EiB

Btrfs

Btrfs, or “B-tree FS” or “Butter FS”, is a next generation filesystem with all of the bells and whistles. It is meant to fill the gap of lacking enterprise filesystems on Linux and is being spearheaded by Oracle. Wikipedia lists its new promised features as online balancing, subvolumes (separately-mountable filesystem roots), object-level (RAID-like) functionality, and user-defined transactions among other things. Its stable version is currently being incorporated into mainstream Linux kernels.

Btrfs Features

  • Introduced: 20xx
  • Directory contents: B+ tree
  • File allocation: extents
  • Max file size: 16 EiB
  • Max number of files: 2^64
  • Max filename length: 255 characters
  • Max volume size: 16 EiB

So what’s it all mean?

Well there you have it, a quick and concise rundown of the filesystem options for your mainstream Linux install. But what exactly does all of this mean? Well, as they say, a picture is worth a thousand words. Many people have done performance tests against the mainstream filesystems and many conclusions have been drawn as to which is the best in many different circumstances. As I assume most people would choose either XFS, ext3, ext4 or maybe even Btrfs if they were a glutton for punishment, I just happen to have found some interesting pictures to show off the comparison!

Rather than tell you which filesystem to pick I will simply point out a couple of links and tell you that while I think XFS is a very underrated filesystem I, like most people, will be going with ext4 simply because it is currently the best supported.

Links (some have pictures!):

EXT4, Btrfs, NILFS2 Performance Benchmarks

Filesystems (ext3, reiser, xfs, jfs) comparison on Debian Etch

Linux Filesystem Performance Comparison for OLTP with Ext2, Ext3, Raw, and OCFS on Direct-Attached Disks using Oracle 9i Release 2

Hey! You forgot about partitions!

No, I didn’t.

Yes you did!

OK, fine… So as Jon had pointed out in a previous post the Linux filesystem is broken down into a series of more or less standard mount points. The only requirements for Fedora, my distribution of choice, and many others are that at least these three partitions exist: /boot for holding the bootable kernels, / (root) for everything else, and a swap partition to move things in and out of RAM. I was thinking about creating a fourth /home partition but I gave up when I realized I didn’t know enough about Linux to determine a good partition size for that.

OK, so break it down

/boot

Fedora recommends that this partition is a minimum of 100MB in size. Even though kernels are each roughly 6MB in size it is better to be safe than sorry! Also because ext4 is not supported by GRUB I will be making this partition ext3.

LVM

I know what you’re thinking, what the hell is LVM? LVM stands for Logical Volume Manager and allows a single physical partition to hold many virtual partitions. I will be using LVM to store the remainder of my partitions wrapped inside of a physical encrypted partition. At least that’s the plan.

swap

Fedora recommends using the following formula to calculate how much swap space you need.

If M < 2
S = M * 2
Else
S = M + 2

Where M is the amount of memory you have and S is the swap partition size in GiB. So for example the machine I am using for this experiment has 4 GiB of RAM. That translates to a swap partition of 6 GiB. If your machine only has 1 GiB of RAM then the formula would translate to 2 GiB worth of swap space. 6 GiB seems a bit overkill for a swap partition but what do I know?
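
If you’d rather let the shell do the math, the same formula is easy to script – just swap in your own RAM size for M:

M=4   # GiB of RAM in the machine
if [ "$M" -lt 2 ]; then S=$((M * 2)); else S=$((M + 2)); fi
echo "Swap size: ${S} GiB"

With M=4 that prints 6 GiB, matching the example above.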

/ (root)

And last but not least the most important part, the root partition. This partition will hold everything else and as such will be taking up the rest of my drive. On the advice of Fedora I am going to leave 10 GiB of LVM disk space unallocated for future use should the need arise. This translates to a root partition of about 300 GiB, plenty of space. Again I will be formatting this partition using ext4.

Well there you go

Are you still with me? You certainly are a trooper! If you have any suggestions as to different disk configurations please let me know. I understand a lot of this in theory but if you have actual experience with this stuff I’d love to hear from you!

Download YouTube videos with youtube-dl

December 31st, 2016 No comments

Have you ever wanted to save a YouTube video for offline use? Well now it’s super simple to do with the easy to use command line utility youtube-dl.

Start by installing the utility either through your package manager (always the recommended approach) or from the project GitHub page here.

sudo apt-get install youtube-dl

This utility comes packed with loads of neat options that I encourage you to read about on the project page but if you just want to quickly grab a video all you need to do is run the command:

youtube-dl https://www.youtube.com/YOUR_REAL_VIDEO_HERE

So for example if you wanted to grab this year’s YouTube Rewind you would just run:

youtube-dl https://www.youtube.com/watch?v=_GuOjXYl5ew

and the video would end up in whatever directory your terminal is currently in. Can’t get much easier than that!
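
A couple of options I end up using a lot, in case they help: -F lists every format available for a video and -f downloads a specific one by its ID (the 22 below is just an example – use whatever ID the first command reports):

youtube-dl -F https://www.youtube.com/watch?v=_GuOjXYl5ew
youtube-dl -f 22 https://www.youtube.com/watch?v=_GuOjXYl5ew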

Categories: Linux, Tyler B Tags: ,

Happy New Year!

December 31st, 2016 No comments

Yeah I know technically this is going up a day early but so what? 😛

From all of us at The Linux Experiment we want to wish you all the best in 2017!

 

Categories: Tyler B Tags:

Fixing Areca Backup on Ubuntu 16.04 (and related distributions)

December 30th, 2016 No comments

Seems like I’m at it again, this time fixing Areca Backup on Ubuntu 16.04 (actually Linux Mint 18.1 in my case). For some reason when I download the current version (Areca 7.5 for Linux/GTK) and try to run the areca.sh script I get the following error:

tyler@computer $ ./areca.sh
ls: cannot access ‘/usr/java’: No such file or directory
No valid JRE found in /usr/java.

This is especially odd because I quite clearly do have Java installed:

tyler@computer $ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)

Now granted this may be an issue exclusive to OpenJDK, or just this version of OpenJDK, but I’m hardly going to install Sun Java just to make this program work.

After some digging I narrowed it down to the look_for_java() function inside of the areca_run.sh script located in the Areca /bin/ directory. Now I’m quite sure there is a far more elegant solution than this but I simply commented out the vast majority of this function and hard coded the directory of my system’s Java binary. Here is how you can do the same.

First locate where your Java is installed by running the which command:

tyler@computer $ which java
/usr/bin/java

As you can see from the output above my java executable exists in my /usr/bin/ directory.

Next open up areca_run.sh inside of the Areca /bin/ directory and modify the look_for_java() function. In here you’ll want to set the variable JAVA_PROGRAM_DIR to your directory above (i.e. in my case it would be /usr/bin/) and then return 0 indicating no error. You can either simply delete the rest or just comment out the remaining function script by placing a # character at the start of each line.

#Method to locate matching JREs
look_for_java() {
JAVA_PROGRAM_DIR="/usr/bin/"
return 0
# IFS=$'\n'
# potential_java_dirs=(`ls -1 "$JAVADIR" | sort | tac`)
# IFS=
# for D in "${potential_java_dirs[@]}"; do
#    if [[ -d "$JAVADIR/$D" && -x "$JAVADIR/$D/bin/java" ]]; then
#       JAVA_PROGRAM_DIR="$JAVADIR/$D/bin/"
#       echo "JRE found in ${JAVA_PROGRAM_DIR} directory."
#       if check_version ; then
#          return 0
#       else
#          return 1
#       fi
#    fi
# done
# echo "No valid JRE found in ${JAVADIR}."
# return 1
}

Once you’ve saved the file you should be able to run the normal areca.sh script now without encountering any errors!


This post originally appeared on my personal website here.

vi(m) or emacs? Neither, just use nano!

December 29th, 2016 No comments

There is quite a funny, almost religious, debate within the Linux community between the two venerable command line text editors, vi/Vim and Emacs. Sure they have loads of features and plugins, but do you really need that from a command line editor? I think for the most part, or perhaps for most people at least, the answer is no. So I’m going to do something really stupid and publicly state, on a Linux related website, that you should ignore both of those text editors and use what many consider an inferior, more simplistic one instead: nano.

Sure it doesn’t have all of the fancy bells and whistles but honestly half the time I’m using a command line editor it’s just to change one line of text. I don’t generally need the extra fluff and when I do I can always use a graphical editor instead. Besides, just look at how easy it is to use:

Create/edit a file:

  1. Open the file in nano
  2. Make your changes
  3. Save the changes

Open the file:

nano doc.txt

Make your changes:

Admit it, it’s true!

 

Save your changes:

No complicated key combinations to remember – it’s all listed at the bottom of the screen. Want to save, “Write Out,” your changes? Easy as Ctrl+O and then Enter to confirm.
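
For reference, the handful of shortcuts I actually end up using (all shown in that bar at the bottom):

  • Ctrl+O – Write Out (save) the file
  • Ctrl+X – Exit (nano will ask whether to save any unsaved changes)
  • Ctrl+W – Where Is (search)
  • Ctrl+K – Cut the current line
  • Ctrl+U – UnCut (paste) the most recently cut text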

Sooooooooo easy to use

So let’s all come together, stop the fanboy wars, and embrace nano as the best command line text editor out there! 😛

 

Categories: Linux, Open Source Software, Tyler B Tags: , , ,

Going Linux Podcast (Still Going)

December 28th, 2016 No comments

Way back in the early days of The Linux Experiment I came across an excellent podcast called Going Linux which offered beginner advice for people trying out Linux for the first time or just wanting to know more about Linux in general. I’m so happy to see that the podcast is still going strong (now at over 300 episodes!) and wanted to mention them again here because they were very helpful in our original experiment’s goal of Going Linux.

Their mascot might actually be better than ours!

Please do check them out at http://goinglinux.com/ or subscribe to their podcast by following the steps here.

 

Categories: Podcast, Tyler B Tags: ,

Help out a project with OpenHatch

December 27th, 2016 No comments

OpenHatch is a site that aggregates all of the help postings from a variety of open source projects. It maintains a whole community dedicated to matching people who want to contribute with people who need their help. You don’t even need to be technical, like a programmer. If you want to lend your artistic talents to creating icons and logos for a project, or your writing skills to help out with documentation – both areas a lot of open source projects aren’t the best at – I’m sure they would be greatly appreciative.

So what are you waiting for? Get connected and give back to the community that helps create the applications you use on a daily basis!

Blast from the Past: Automatically put computer to sleep and wake it up on a schedule

December 26th, 2016 No comments

This post was originally published on June 24, 2012. The original can be found here.


Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
t=`date --date "17:00" +%s`
sudo /bin/true
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend which actually puts the computer to sleep. When run the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
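
The scheduling itself is just a cron job. A sketch of the crontab entry, assuming the script above lives at the hypothetical path /home/user/suspend-until-5pm.sh and should fire at midnight (because the script uses sudo it’s simplest to put this in root’s crontab, or whitelist the relevant commands in sudoers):

0 0 * * * /home/user/suspend-until-5pm.sh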

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.

 

5 apt tips and tricks

December 22nd, 2016 No comments

Everyone loves apt. It’s a simple command line tool to install new programs and update your system, but beyond the standard commands like update, install and upgrade did you know there are a load of other useful apt-based commands you can run?

1) Search for a package name with apt-cache search

Can’t remember the exact package name but you know some of it? Make it easy on yourself and search using apt-cache. For example:

apt-cache search Firefox

It lists all results for your search. Nice and easy!

2) Search for package information with apt-cache show

Want details of a package before you install it? Simple, just look it up with apt-cache show.

apt-cache show firefox

More details than you probably even wanted!

3) Upgrade only a specific package

So you already know that you can upgrade your whole system by running

apt-get upgrade

but did you know you can upgrade a specific package instead of the whole system? It’s easy, just specify the package name in the upgrade command. For example to upgrade just firefox run:

apt-get upgrade firefox
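
One caveat: depending on your version of apt, the upgrade command may refuse to take a package name. If that happens, this accomplishes the same thing:

sudo apt-get install --only-upgrade firefox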

4) Install specific package version

Normally when you apt-get install something you get the latest version available but what if that’s not what you wanted? What if you wanted a specific version of the package instead? Again, simple, just specify it when you run the install command. For example run:

apt-get install firefox=version

Where version is the version number you wish to install.
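
Not sure which version numbers are even available? apt-cache can tell you that too:

apt-cache policy firefox
apt-cache madison firefox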

5) Free up disk space with clean

When you download and install packages apt will automatically cache them on your hard drive. This can be useful for a number of reasons; for example some distributions use delta packages so that only what has changed between versions is re-downloaded, and in order to do this apt needs to have a base cached file already on your hard drive. However these files can take up a lot of space and oftentimes aren’t needed again anyway. Thankfully there are two quick commands that free up this disk space.

apt-get clean

apt-get autoclean

Both of these essentially do the same thing, but the difference is that autoclean only gets rid of cached packages that can no longer be downloaded, generally older versions that have since been superseded. These older packages won’t be used anymore and so they are an easy way to free up some space.
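
If you’re curious how much space the cache is actually taking up before and after, just check the archives directory:

du -sh /var/cache/apt/archives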

There you have it, you are now officially 5 apt commands smarter. Happy computing!

 

Discover and listen to your next favourite band, all free!

December 20th, 2016 No comments

Jamendo is a wonderful website where artists can share their music and fans can listen, all for free. The music featured is all released under various Creative Commons licenses and so it is free to use and listen to, burn onto a CD (if people still do that?), put on your music device and take on the go, or cut up and use in your own works like a podcast. In cases where there may otherwise be a license conflict, for example if you are advertising in your podcast or not releasing it under the same license as the music, Jamendo even facilitates buying a commercial license for your needs. Sure these features make it attractive for content creators looking for some free background music, but none of this is where Jamendo really shines. It shines in just how good of an experience it is to find and listen to new music.

A simple interface makes finding new music a joy

From fostering different music communities to putting together nice playlists and featuring new music, Jamendo always has something new to discover.

A few quick stats about Jamendo taken straight from their website:

  • A wide catalog of more than 500,000 tracks
  • More than 40,000 artists
  • Over 150 countries

You can find Jamendo’s website at https://www.jamendo.com.

 

Categories: Tyler B Tags: , , ,

CoreGTK 3.18.0 Released!

December 18th, 2016 No comments

The next version of CoreGTK, version 3.18.0, has been tagged for release! This is the first version of CoreGTK to support GTK+ 3.18.

Highlights for this release:

  • Rebased on GTK+ 3.18
  • New supported GtkWidgets in this release:
    • GtkActionBar
    • GtkFlowBox
    • GtkFlowBoxChild
    • GtkGLArea
    • GtkModelButton
    • GtkPopover
    • GtkPopoverMenu
    • GtkStackSidebar
  • Reverted to using GCC as the default compiler (but clang can still be used)

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

Read more about this release here.

This post originally appeared on my personal website here.

Blast from the Past: An Experiment in Transitioning to Open Document Formats

December 16th, 2016 No comments

This post was originally published on June 15, 2013. The original can be found here.


Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which would be the OpenDocument standard. This standard has gone on to be the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia the OpenDocument format is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they differ enough that they are 1) not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates in saying:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues, however its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side of which is while Microsoft Office can open and save to OpenDocument format it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument so the overall situation should get better with time.

These days, however, I think both standards are technologically on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above there are two different, some would argue, competing open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two is simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have superseded computers and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google’s Google Docs which let you view and edit documents right within a web browser making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents, I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is because of these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem-free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience however things are almost never quite as simple as that. From what I’ve seen any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.

If however you are starting fresh or just converting simple documents with little formatting there is no reason why you couldn’t make the jump to OpenDocument. For me personally I’m going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats which is a good thing.
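
If you want to script the bulk of that conversion rather than opening every file by hand, LibreOffice can be driven from the command line. A sketch, assuming LibreOffice is installed and the .doc files sit in the current directory (the formatting caveats above still apply, so the output is worth spot-checking):

mkdir -p converted
libreoffice --headless --convert-to odt --outdir converted *.doc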

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.

 

Using screen to keep your terminal sessions alive

December 15th, 2016 No comments

Have you ever connected to a remote Linux computer, using a… let’s say less than ideal WiFi connection, and started running a command only to have your ssh connection drop and your command killed off in a half finished state? In the best of cases this is simply annoying but if it happens during something of consequence, like a system upgrade, it can leave you in a dire state. Thankfully there is a really simple way to avoid this from happening.

Enter: screen

screen is a simple terminal application that basically allows you to create additional persistent terminals. So instead of ssh-ing into your computer and running the command in that session, you start a screen session and then run your commands within that. If your connection drops you lose your ‘screen’ but the screen session continues uninterrupted on the computer. Then you can simply re-connect and resume the screen.

Explain with an example!

OK fine. Let’s say I want to write a document over ssh. First you connect to the computer then you start your favourite text editor and begin typing:

ssh user@computer
user@computer's password:

user@computer: nano doc.txt

What a wonderful document!

Now if I lost my connection at this point all of my hard work would also be lost because I haven’t saved it yet. Instead let’s say I used screen:

ssh user@computer
user@computer's password:

user@computer: screen

Welcome to screen!

Now with screen running I can just use my terminal like normal and write my story. But oh no I lost my connection! Now what will I do? Well simply re-connect and re-run screen telling it to resume the previous session.

ssh user@computer
user@computer's password:

user@computer: screen -r

Voila! There you have it – a simple way to somewhat protect your long-running terminal applications from random network disconnects.
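
A few other screen commands come in handy once you start using it regularly:

  • screen -ls – list the screen sessions running on the machine
  • screen -S somename – start a new session with a name (somename is just an example)
  • screen -r somename – re-attach to that specific session
  • Ctrl+A then d – detach from the current session on purpose without killing it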

 

The Linux Action Show

December 13th, 2016 No comments

This isn’t a normal post for The Linux Experiment but I wanted to give a shout out to the guys over at the Linux Action Show. The Linux Action Show is a long running weekly Jupiter Broadcasting podcast that aims to bring the listeners up to speed on all things Linux news as well as cover one major topic in-depth per episode. While it’s true that sometimes the hosts can be a bit… ‘Linux-preachy’ or gloss over the (sometimes major) hurdles in choosing to run open source software they always put together an entertaining and informative show.

It’s a Linux podcast… what did you expect? 😛

If a Linux podcast sounds like something you might be interested in I would highly recommend checking them out!

Linux Action Show @ Jupiter Broadcasting

 

Blast from the Past: Top 10 things I have learned since the start of this experiment

December 9th, 2016 No comments

This post was originally published on October 2, 2009. The original can be found here.


In a nod to Dave’s classic top ten segment I will now share with you the top 10 things I have learned since starting this experiment one month ago.

10: IRC is not dead

Who knew? I’m joking of course but I had no idea that so many people still actively participated in IRC chats. As for the characters who hang out in these channels… well some are very helpful and some… answer questions like this:

Tyler: Hey everyone. I’m looking for some help with Gnome’s Empathy IM client. I can’t seem to get it to connect to MSN.

Some asshat: Tyler, if I wanted a pidgin clone, I would just use pidgin

It’s this kind of ‘you’re doing it wrong because that’s not how I would do it’ attitude that can be very damaging to new Linux users. There is nothing more frustrating than trying to get help and having someone throw BS like that back in your face.

9: Jokes about Linux for nerds can actually be funny

Stolen from Sasha’s post.

Admit it, you laughed too

8. Buy hardware for your Linux install, not the other way around

Believe me, if you know that your hardware is going to be 100% compatible ahead of time you will have a much more enjoyable experience. At the start of this experiment Jon pointed out this useful website. Many similar sites also exist and you should really take advantage of them if you want the optimal Linux experience.

7. When it works, it’s unparalleled

Linux seems faster, more fully featured and less resource hungry than a comparable operating system from either Redmond or Cupertino. That is assuming it’s working correctly…

6. Linux seems to fail for random or trivial reasons

If you need proof of this just take a look back at the last couple of posts on here. There are times when I really think Linux could be used by everyone… and then there are moments when I don’t see how anyone outside of the most hardcore computer users could ever even attempt it. A brand new user should not have to know about xorg.conf or how to edit their DNS resolver.

Mixer - buttons unchecked

5. Linux might actually have a better game selection than the Mac!

Obviously there was some jest in there but Linux really does have some gems for games out there. Best of all, most of them are completely free! Then again some are free for a reason…

Armagetron

4. A Linux distribution defines a lot of your user experience

This can be especially frustrating when the exact same hardware performs so differently. I know there are a number of technical reasons why this is the case but things seem so utterly inconsistent that a new Linux user paired with the wrong distribution might be easily turned off.

3. Just because its open source doesn’t mean it will support everything

Even though it should damn it! The best example I have for this happens to be MSN clients. Pidgin is by far my favourite as it seems to work well and even supports a plethora of useful plugins! However, unlike many other clients, it doesn’t support a lot of MSN features such as voice/video chat, reliable file transfers, and those god awful winks and nudges that have appeared in the most recent version of the official client. Is there really that good of a reason holding the Pidgin developers back from just making use of the other open source libraries that already support these features?

2. I love the terminal

I can’t believe I actually just said that but it’s true. On a Windows machine I would never touch the command line because it is awful. However on Linux I feel empowered by using the terminal. It lets me quickly perform tasks that might take a lot of mouse clicks through a cumbersome UI to otherwise perform.

And the #1 thing I have learned since the start of this experiment? Drum roll please…

1. Linux might actually be ready to replace Windows for me

But I guess in order to find out if that statement ends up being true you’ll have to keep following along 😉

 

KWLUG: C Language, WebOS (2016-12)

December 8th, 2016 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of C Language, WebOS published on December 6th 2016. You can find the original Kitchener Waterloo Linux Users Group post here.

Read more…

Categories: Linux, Podcast, Tyler B Tags: ,