Archive

Author Archive

Using SoundConverter to convert between file formats

May 15th, 2017 No comments

SoundConverter is an excellent little utility that makes it super simple to convert from one audio format to another. As an example case I had some WAV files that were taking up far too much space and so I wanted to convert them to FLAC files instead.

After firing up SoundConverter I clicked Preferences and selected my options:

Lots of options but nothing too overwhelming

Then I clicked Add File and selected the WAV file I wanted to convert:

You can also drag & drop into the window if that’s more your style

Finally I clicked Convert and watched the program make the change. It was as simple as that!

Do you have a neat little utility like this that you can’t live without? Let us know about it in the comments below!
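If you prefer scripting the same conversion from a terminal, one alternative is ffmpeg (a separate tool, not part of SoundConverter; this loop and the install command are my own sketch, not from the original post):

```shell
# Convert every WAV file in the current directory to FLAC,
# keeping the original files. Requires ffmpeg to be installed
# (e.g. sudo apt-get install ffmpeg).
for f in *.wav; do
    ffmpeg -i "$f" "${f%.wav}.flac"
done
```

The `${f%.wav}.flac` expansion just swaps the file extension, so `song.wav` becomes `song.flac`.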

KWLUG: Functional Programming and Haskell (2017-05)

May 7th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Functional Programming and Haskell published on May 1st 2017. You can find the original Kitchener Waterloo Linux Users Group post here.

To subscribe to the podcast add the feed here or listen to this episode by clicking here or choose from one of the following direct links:


Categories: Linux, Podcast, Tyler B Tags: ,

KWLUG: OSSIM, A Brief History of Linux and Open Source (2017-04)

May 7th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of OSSIM, A Brief History of Linux and Open Source published on April 3rd 2017. You can find the original Kitchener Waterloo Linux Users Group post here.

To subscribe to the podcast add the feed here or listen to this episode by clicking here or choose from one of the following direct links:


Categories: Linux, Podcast, Tyler B Tags: ,

Alternative software: QupZilla Browser

March 19th, 2017 No comments

I’m back at it, looking for the latest open source alternative software gem. This time around I’m trying out the QupZilla web browser. Never heard of QupZilla before? Well it’s another alternative web browser but this one is built on top of Qt WebEngine, comes with a built in ad blocker and strives to have a native look and feel for whichever distribution/desktop environment you’re running.

QupZilla on first launch

QupZilla supports a number of extensions that expand the browser’s capabilities. These are presented in an easy-to-use settings dialog.

Extensions for you! And you! And You!

I figured that it would be a good test to quickly see memory usage when loading up a number of tabs:

Number of tabs Memory Usage
1 184.4MB
2 238.8MB
3 275.3MB

While it’s hard to say whether that counts as a lot of memory, it does seem like quite a bit of RAM to use up for just a single tab. The other thing I noticed was that the browser uses quite a bit of CPU when initially rendering a web page. I don’t know if that’s just related to my particular configuration, but I regularly saw spikes up to ~70% CPU, which seems excessive.

Overall though QupZilla seemed to work fine for the websites I tried in it. I suppose if you’re looking to run a browser that no one else is then this one is for you, but just like Midori and Konqueror I don’t have a compelling reason to recommend it for everyday use.

Blast from the Past: Create a GTK+ application on Linux with Objective-C

March 15th, 2017 No comments

This post was originally published on December 8, 2010. The original can be found here.

Note that while you can still write GTK+ applications on Linux using Objective-C as outlined in this post you may want to check out my little project CoreGTK which makes the whole experience a lot nicer 🙂


As sort of follow-up-in-spirit to my older post I decided to share a really straight forward way to use Objective-C to build GTK+ applications.

Objective-what?

Objective-C is an improvement to the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple OSX and iOS based platforms. It can be used with or without a large standard library (sometimes referred to as the Foundation Kit library) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machine (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately, while there used to be a couple of different Objective-C bindings for GTK+, all of them are now quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.

Setting it all up

In order to make this work we will be creating two Objective-C classes, one that will house our GTK+ window and another that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.

MainWindow.h

In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
* This class is responsible for initializing the GTK render loop
* as well as setting up the GUI for the user. It also handles all GTK
* callbacks for the winMain GtkWindow.
*/
@interface MainWindow : NSObject
{
//The main GtkWindow
GtkWidget *winMain;
GtkWidget *button;
}

/*
* Constructs the object and initializes GTK and the GUI for the
* application.
*
* *********************************************************************
* Input
* *********************************************************************
* argc (int *): A pointer to the arg count variable that was passed
* in at the application start. It will be returned
* with the count of the modified argv array.
* argv (char *[]): A pointer to the argument array that was passed in
* at the application start. It will be returned with
* the GTK arguments removed.
*
* *********************************************************************
* Returns
* *********************************************************************
* MainWindow (id): The constructed object or nil
* argc (int *): The modified input int as described above
* argv (char *[]): The modified input array as described above
*/
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
* Frees the Gtk widgets that we have control over
*/
-(void)destroyWidget;

/*
* Starts and hands off execution to the GTK main loop
*/
-(void)startGtkMainLoop;

/*
* Example Objective-C function that prints some output
*/
-(void)printSomething;

/*
********************************************************
* C callback functions
********************************************************
*/

/*
* Called when the user closes the window
*/
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
* Called when the user presses the button
*/
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);

@end

MainWindow.m

For the class’ actual code file fill it in as show below. This class will create a GTK+ window with a single button and will react to both the user pressing the button, and closing the window.

#import "MainWindow.h"

/*
* For documentation see MainWindow.h
*/

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
//call parent class' init
if (self = [super init]) {

//setup the window
winMain = gtk_window_new (GTK_WINDOW_TOPLEVEL);

gtk_window_set_title (GTK_WINDOW (winMain), "Hello World");
gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

//setup the button
button = gtk_button_new_with_label ("Push me!");

gtk_container_add (GTK_CONTAINER (winMain), button);

//connect the signals
g_signal_connect (winMain, "destroy", G_CALLBACK (on_MainWindow_destroy), NULL);
g_signal_connect (button, "clicked", G_CALLBACK (on_btnPushMe_clicked), NULL);

//force show all
gtk_widget_show_all(winMain);
}

//assign C-compatible pointer
myMainWindow = self;

//return pointer to this object
return self;
}

-(void)startGtkMainLoop
{
//start gtk loop
gtk_main();
}

-(void)printSomething{
NSLog(@"Printed from Objective-C's NSLog function.");
printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget{

myMainWindow = NULL;

if(GTK_IS_WIDGET (button))
{
//clean up the button
gtk_widget_destroy(button);
}

if(GTK_IS_WIDGET (winMain))
{
//clean up the main window
gtk_widget_destroy(winMain);
}
}

-(void)dealloc{
[self destroyWidget];

[super dealloc];
}

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
//exit the main loop
gtk_main_quit();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
printf("Button was clicked\n");

//call Objective-C function from C function using global object pointer
[myMainWindow printSomething];
}

@end

main.m

To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

//create an AutoreleasePool
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

//init gtk engine
gtk_init(&argc, &argv);

//set up GUI
MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

//begin the GTK loop
[mainWindow startGtkMainLoop];

//free the GUI
[mainWindow release];

//drain the pool
[pool release];

//exit application
return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.
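To avoid retyping that long command every time, you could also drop it into a small Makefile. This is my own sketch, not from the original post; it assumes the main.m, MainWindow.m and MainWindow.h files described above, and the target name myprogram matches the command’s output:

```makefile
CC = gcc
GTK_FLAGS = `pkg-config --cflags --libs gtk+-2.0`
GNUSTEP_FLAGS = -lgnustep-base -fconstant-string-class=NSConstantString \
	-I/usr/include/GNUstep/ -L/usr/lib/GNUstep/

# Build the example application from its two source files
myprogram: main.m MainWindow.m MainWindow.h
	$(CC) $(GTK_FLAGS) $(GNUSTEP_FLAGS) -std=c99 -O3 -o myprogram main.m MainWindow.m

# Remove the built binary
clean:
	rm -f myprogram
```

With this in place a plain make builds the program and make clean removes it.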

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to ports of both GTK+ and GNUstep, this same code can also be compiled into a cross-platform application that works on both Mac OSX and Windows.

Source Code Downloads

Source Only Package
File name: objective_c_gtk_source.zip
File hashes: Download Here
File size: 2.4KB
File download: Download Here

Originally posted on my personal website here.

Blast from the Past: How to migrate from TrueCrypt to LUKS file containers

March 13th, 2017 No comments

This post was originally published on June 15, 2014. The original can be found here. Note that there are other direct, compatible, alternatives and proper successors to TrueCrypt these days, including VeraCrypt. This post is simply about a different option available to Linux users.


With the recent questions surrounding the security of TrueCrypt there has been a big push to move away from that program and switch to alternatives. One such alternative, on Linux anyway, is the Linux Unified Key Setup (or LUKS) which allows you to encrypt disk volumes. This guide will show you how to create encrypted file volumes, just like you could using TrueCrypt.

The Differences

There are a number of major differences between TrueCrypt and LUKS that you may want to be aware of:

  • TrueCrypt supported the concept of hidden volumes; LUKS does not.
  • TrueCrypt allowed you to encrypt a volume in-place, without losing data; LUKS does not.
  • TrueCrypt supports cipher cascades, where the data is encrypted using multiple different algorithms in case one of them is broken at some point in the future. As I understand it this is being discussed for the LUKS 2.0 spec but is currently not a feature.

If you are familiar with the terminology in TrueCrypt you can think of LUKS as offering both full disk encryption and standard file containers.

How to create an encrypted LUKS file container

The following steps borrow heavily from a previous post so you should go read that if you want more details on some of the commands below. Also note that while LUKS offers a lot of options in terms of cipher/digest/key size/etc, this guide will try to keep it simple and just use the defaults.

Step 1: Create a file to house your encrypted volume

The easiest way is to run the following commands which will create the file and then fill it with random noise:

# fallocate -l <size> <file to create>
# dd if=/dev/urandom of=<file to create> bs=1M count=<size>

For example let’s say you wanted a 500MiB file container called MySecrets.img, just run this command:

# fallocate -l 500M MySecrets.img
# dd if=/dev/urandom of=MySecrets.img bs=1M count=500

Here is a handy script that you can use to slightly automate this process:

#!/bin/bash
NUM_ARGS=$#

if [ $NUM_ARGS -ne 2 ] ; then
    echo "Wrong number of arguments."
    echo "Usage: $0 [size in MiB] [file to create]"

else

    SIZE=$1
    FILE=$2

    echo "Creating $FILE with a size of ${SIZE}MiB"

    # create file
    fallocate -l "${SIZE}M" "$FILE"

    # randomize file contents
    dd if=/dev/urandom of="$FILE" bs=1M count="$SIZE"

fi

Just save the above script to a file, say “create-randomized-file-volume.sh”, mark it as executable and run it like this:

# ./create-randomized-file-volume.sh 500 MySecrets.img

Step 2: Format the file using LUKS + ext4

There are ways to do this in the terminal but for the purpose of this guide I’ll be showing how to do it all within gnome-disk-utility. From the menu in Disks, select Disks -> Attach Disk Image and browse to your newly created file (i.e. MySecrets.img).

Don’t forget to uncheck the box!

Be sure to uncheck “Set up read-only loop device”. If you leave this checked you won’t be able to format or write anything to the volume. Select the file and click Attach.

This will attach the file, as if it were a real hard drive, to your computer:

Next we need to format the volume. Press the little button with two gears right below the attached volume and click Format. Make sure you do this for the correct ‘drive’ so that you don’t accidentally format your real hard drive!

Please use a better password

From this popup you can select the filesystem type and even name the drive. In the image above the settings will format the drive to LUKS and then create an ext4 filesystem within the encrypted LUKS one. Click Format, confirm the action and you’re done. Disks will format the file and even auto-mount it for you. You can now copy files to your mounted virtual drive. When you’re done simply eject the drive like normal or (with the LUKS partition highlighted) press the lock button in Disks. To use that same volume again in the future just re-attach the disk image using the steps above, enter your password to unlock the encrypted partition and you’re all set.
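If you would rather skip the GUI entirely, the equivalent workflow can be done in the terminal with cryptsetup. This is a sketch of my own, not from the original post; it assumes the cryptsetup package is installed, uses the MySecrets.img file from Step 1, and the mapping name mysecrets is arbitrary:

```shell
# Format the container as LUKS -- this destroys anything already in the file
sudo cryptsetup luksFormat MySecrets.img

# Unlock it; the decrypted volume appears at /dev/mapper/mysecrets
sudo cryptsetup open MySecrets.img mysecrets

# Create an ext4 filesystem inside the encrypted container (first time only)
sudo mkfs.ext4 /dev/mapper/mysecrets

# Mount it, copy files in or out, then unmount and re-lock
sudo mount /dev/mapper/mysecrets /mnt
sudo umount /mnt
sudo cryptsetup close mysecrets
```

On later uses you only need the open, mount, umount and close steps.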

But I don’t even trust TrueCrypt enough to unlock my already encrypted files!

If you’re just using TrueCrypt to open an existing file container so that you can copy your files out of it and into your newly created LUKS container I think you’ll be OK. That said, there is a way for you to still use your existing TrueCrypt file containers without actually using the TrueCrypt application.

First install an application called tc-play. This program works with the TrueCrypt format but doesn’t share any of its code. To install it simply run:

# sudo apt-get install tcplay

Next we need to mount your existing TrueCrypt file container. For the sake of this example we’ll assume your file container is called TOPSECRET.tc.

We need to use a loop device but before doing that we need to first find a free one. Running the following command

# sudo losetup -f

should return the first free loop device. For example it may print out

/dev/loop0

Next you want to associate the loop device with your TrueCrypt file container. You can do this by running the following command (sub in your loop device if it differs from mine):

# sudo losetup /dev/loop0 TOPSECRET.tc

Now that our loop device is associated we need to actually unlock the TrueCrypt container:

# sudo tcplay -m TOPSECRET.tc -d /dev/loop0

Finally we need to mount the unlocked TrueCrypt container to a directory so we can actually use it. Let’s say you wanted to mount the TrueCrypt container to a folder in the current directory called SecretStuff:

# sudo mount -o nosuid,uid=1000,gid=100 /dev/mapper/TOPSECRET.tc SecretStuff/

Note that you should swap in your own uid and gid in the above command if they aren’t 1000 and 100 respectively. You should now be able to view your TrueCrypt files in your SecretStuff directory. For completeness’ sake, here is how you unmount and re-lock the same TrueCrypt file container when you are done:

# sudo umount SecretStuff/
# sudo dmsetup remove TOPSECRET.tc
# sudo losetup -d /dev/loop0

This post originally appeared on my personal website here.

Using a swap file instead of a swap partition in Linux

March 11th, 2017 2 comments

Historically Linux distributions have created a dedicated partition to be used as swap space and while this does come with some advantages there are issues with it as well. Because a swap partition is a fixed size, it is relatively difficult to later change your mind about how much swap your system actually needs.

Thankfully there is another way: using a swap file instead of a swap partition.

To do this all you need to do is create a file that will hold your swap, set the right permissions on it, format it and use it. The ArchWiki has an excellent guide on the whole process here but I’ve re-created my steps below.

I like to use fallocate but you could also use dd or whatever. For example the following will create a 2GB swap file in the root directory:

sudo fallocate -l 2G /swapfile

Next up set the permissions:

sudo chmod 600 /swapfile

Format it:

sudo mkswap /swapfile

Use it:

sudo swapon /swapfile
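To double check that the new swap space is actually active you can ask the kernel directly (these read-only commands are my addition, not part of the original steps):

```shell
# List every active swap device or file (no root required)
cat /proc/swaps

# Show the total amount of swap the kernel sees
grep SwapTotal /proc/meminfo
```

If /swapfile shows up in the first listing, the swapon step worked.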

That’s pretty much it. At this point you have the ability to add as much swap (via swap files) as you’d like. If you ever want to stop using swap, for example to get rid of the file when you’re done, simply issue the following command:

sudo swapoff /swapfile

One other thing to note is that every time you reboot your system you will have to re-run the swapon command above. To avoid this you can add an entry like the following to your fstab file so that the swap file is enabled on startup:

/swapfile none swap defaults 0 0

The nice thing about using swap files is that they are really flexible, allowing you to allocate as much or as little as you need at any given time.

Categories: Linux, Tyler B Tags: ,

KWLUG: Roundtable (2017-03)

March 10th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Roundtable published on March 6th 2017. You can find the original Kitchener Waterloo Linux Users Group post here.

To subscribe to the podcast add the feed here or listen to this episode by clicking here or choose from one of the following direct links:


Categories: Linux, Podcast, Tyler B Tags: ,

Blast from the Past: Do something nice for a change

March 5th, 2017 No comments

This post was originally published on October 6, 2010. The original can be found here.


Open source software (OSS) is great. It’s powerful, community-focused and, let’s face it, free. There is not a single day that goes by that I don’t use OSS. Between Firefox, Linux Mint, Thunderbird, Pidgin, Pinta, Deluge, FileZilla and many, many more there is hardly ever an occasion where I find myself in a situation where there isn’t an OSS tool for the job. Unfortunately for all of the benefits that OSS brings me in my daily life I find, in reflection, that I hardly ever do anything to contribute back. What’s worse is that I know I am not alone in this. Many OSS users out there just use the software because it happens to be the best for them. And while there is absolutely nothing wrong with that, many of these individuals could be contributing back. Now obviously I don’t expect everyone, or even half for that matter, to contribute back but I honestly do think that the proportion of people who do could be much higher.

Why should I?

This is perhaps the easiest to answer. While you don’t have to contribute back, you should if you want to personally make the OSS you love even better.

How do I contribute?

Contributing to a project is incredibly easy. In fact in the vast majority of cases you don’t need to write code, debug software or even do much more than simply use the software in question. What do I mean by this? Well the fact that we here on The Linux Experiment write blog posts praising (or supplying constructive criticism to) various OSS projects is one form of contributing. Did I lose you? Every time you mention an OSS project you bring attention to it. This attention in turn draws more users/developers to the project and grows it larger. Tell your family, write a blog post, digg stories about OSS or just tell your friends about “this cool new program I found”.

There are many other very easy ways to help out as well. For instance if you notice the program is doing something funky then file a bug. It’s a short process that is usually very painless and quickly brings real world results. I have found that it is also a very therapeutic way to get back at that application that just crashed and lost all of your data. Sometimes you don’t even have to be the one to file it, simply bringing it up in a discussion, such as a forum post, can be enough for others to look into it for you.

Speaking of forum posts, answering new users’ questions about OSS projects can be an excellent way to both spread use of the project and identify problems that new users are facing. The latter could in turn be corrected through subsequent bug or feature requests. Along the same lines, documentation is something that some OSS projects are sorely missing. While it is not the most glamorous job, documentation is key to providing an excellent experience to a first time user. If you know more than one language I can’t think of a single OSS project that couldn’t use your help making translations so that people all over the world can begin to use the software.

For the artists among us there are many OSS projects that could benefit from a complete artwork makeover. As a programmer myself I know all too well the horrors of developer artwork. Creating some awesome graphics, icons, etc. for a project can make a world of difference. Or if you are more interested in user experience and interface design there are many projects that could also benefit from your unique skills. Tools like Glade can even allow individuals to create whole user interfaces without writing a single line of code.

Are you a web developer? Do you like making pretty websites with fancy AJAX fluff? Offer to help the project by designing an attractive website that lures visitors to try the software. You could be the difference between this and this (no offense to the former).

If you’ve been using a particular piece of software for a while, and feel comfortable trying to help others, hop on over to the project’s IRC channel. Help new users troubleshoot their problems and offer suggestions of solutions that have worked for you. Just remember: nothing turns off a new user like an angry IRC asshat.

Finally if you are a developer take a look at the software you use on a daily basis. Can you see anything in it that you could help change? Peruse their bug tracker and start picking off the low priority or trivial bugs. These are often issues that get overlooked while the ‘full time’ developers tackle the larger problems. Squashing these small bugs can help to alleviate the 100 paper cuts syndrome many users experience while using some OSS.

Where to start

Depending on how you would like to contribute your starting point could be pretty much anywhere. I would suggest however that you check out your favourite OSS project’s website. Alternatively jump over to an intermediary like OpenHatch that aggregates all of the help postings from a variety of projects. OpenHatch actually has a whole community dedicated to matching people who want to contribute with people who need their help.

I don’t expect anyone, and certainly not myself, to contribute back on a daily basis. I will however personally start by setting a recurring event in my calendar that reminds me to contribute, in some way or another, every week or month. If we all did something similar imagine the rapid improvements we could see in a short time.

Blast from the Past: Compile Windows programs on Linux

February 26th, 2017 No comments

This post was originally published on September 26, 2010. The original can be found here.


Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out “gcc” in your normal compile commands with either “i586-mingw32msvc-gcc” (for 32-bit) or “amd64-mingw32msvc-gcc” (for 64-bit). So for example if we take the following hello world program in C

#include <stdio.h>
int main(int argc, char** argv)
{
printf("Hello world!\n");
return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs as well. Take the following code as an example

#include <windows.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
char *msg = "The message box's message!";
MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test to make sure it worked properly by running the program through wine like

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.

Blast from the Past: Very short plug for PowerTOP

February 25th, 2017 No comments

This post was originally published on July 4, 2010. The original can be found here.


Recently I decided to try out PowerTOP, a Linux power saving application built by Intel. I am extremely impressed by how easy it was to use and the power savings I am now basking in.

PowerTOP is a terminal application that first scans your computer for a number of things during a set interval. It then reports back which processes are taking up the most power and offers you some options to improve your battery life. All of these options can literally be enabled at the press of a button. It’s sort of like an experience I once had with Clippy in Microsoft Word; “it seems you are trying to save power, let me help you…” After applying a few of the suggestions the estimated battery life on my laptop went from about 3 and a half hours to almost 5 hours. In short, I would highly recommend everyone at least try out PowerTOP. I’m not promising miracles but at the very least it should help you out some.

Blast from the Past: Vorbis is not Theora

February 24th, 2017 No comments

This post was originally published on April 21, 2010. The original can be found here.


Recently I have started to mess around with the Vorbis audio codec, commonly found within the Ogg media container. Unlike Theora, which I had also experimented with but won’t post the results for fear of a backlash, I must say I am rather impressed with Vorbis. I had no idea that the open source community had such a high quality audio codec available to them. Previously I always sort of passed off Vorbis’ reason for being regarded as ‘so great’ within the community as simply a lack of options. However after some comparative tests between Vorbis and MP3 I must say I am a changed man. I would now easily recommend Vorbis as a quality choice if it fits your situation of use.

What is Vorbis?

As I mentioned above, Vorbis is the name of a very high quality free and open source audio codec. It is analogous to MP3 in that you can use it to shrink the size of your music collection while still retaining very good sound. Vorbis is unique in that it only offers a VBR mode, which allows it to squeeze the best sound out of the fewest number of bits by lowering the bitrate during sections of silence or unimportant audio. Additionally, unlike other audio codecs, Vorbis audio is generally encoded at a supplied ‘quality’ level rather than a fixed bitrate. Currently the bitrate for quality level 4 works out to roughly 128kbit/s, but as the encoders mature they may be able to achieve the same quality level at a lower bitrate, saving you storage space/bandwidth/etc.

So Vorbis is better than MP3?

Obviously when it comes to comparing the relative quality of competing audio codecs it must always be up to the listener to decide. That being said I firmly believe that Vorbis is far better than MP3 at low bitrates and is, at the very least, very comparable to MP3 as you increase the bitrate.

The Tests

I began by grabbing a FLAC copy of the Creative Commons album The Slip by Nine Inch Nails here. I chose FLAC because it provided me with the highest quality source possible (lossless CD quality) from which to encode the samples. Then, looking around at some Internet radio websites, I decided that I should test the following bitrates: 45kbit/s, 64kbit/s, 96kbit/s, and finally 128kbit/s (for good measure). I encoded them using only the default encoder settings and the following terminal commands:

For MP3 I used LAME and the following command. I chose average bitrate (ABR) mode, which is really just VBR with a target bitrate, similar to how Vorbis works:

flac -cd {input file goes here}.flac | lame --abr {target bitrate} - {output file goes here}.mp3

For Vorbis I used OggEnc and the following command:

oggenc -b {target bitrate} {input file goes here}.flac -o {output file goes here}.ogg
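To run all four bitrates over a file without retyping the two commands above, they can be wrapped in a small helper. This is only a sketch: the function prints the commands (a dry run) so you can inspect them first, and the filename is a made-up example.

```shell
#!/bin/bash
# Print the LAME and oggenc invocations for one FLAC file at the four
# test bitrates. This is a dry run; pipe the output to `sh` to actually
# perform the encodes (assuming flac, lame and oggenc are installed).
encode_cmds() {
  local f=$1
  local base=${f%.flac}
  for rate in 45 64 96 128; do
    echo "flac -cd $f | lame --abr $rate - ${base}-${rate}.mp3"
    echo "oggenc -b $rate $f -o ${base}-${rate}.ogg"
  done
}

encode_cmds 04-discipline.flac   # prints eight commands, two per bitrate
```

Running `encode_cmds 04-discipline.flac | sh` would then carry out all eight encodes in one go.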

Results

I think I would be a hypocrite if I didn’t tell you to just listen for yourself… The song in question is track #4, Discipline.

Note: if you are using Mozilla Firefox, Google Chrome, or anything else that supports HTML5/Vorbis, you should be able to play the Vorbis file right in your browser.

45kbit/s MP3(1.4MB) Vorbis(1.3MB)

64kbit/s MP3(2.0MB) Vorbis(1.9MB)

96kbit/s MP3(2.9MB) Vorbis(2.8MB)

128kbit/s MP3(3.8MB) Vorbis(3.6MB)
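As a sanity check on those numbers, a constant-bitrate file’s size is just bitrate × duration ÷ 8. Assuming Discipline runs about 4:19 (259 seconds; the duration is my estimate, not taken from the album), a quick shell calculation lands close to the sizes above, with the VBR encodes coming in at or a little under the estimate at the higher bitrates:

```shell
#!/bin/bash
# Rough size estimate in kB for a 259-second track: kbit/s * seconds / 8.
# The 259 s duration is an assumption about the track length.
est_kb() {
  echo $(( $1 * 259 / 8 ))
}

for rate in 45 64 96 128; do
  echo "${rate}kbit/s -> ~$(est_kb $rate) kB"
done
# -> ~1456 kB, ~2072 kB, ~3108 kB and ~4144 kB respectively
```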

We want you! (to write for The Linux Experiment)

February 20th, 2017 No comments

Are you a Linux user? Thinking about trying your own Linux experiment? Have you ever come across something broken or annoying and figured out a solution? Or maybe you just came up with a really neat way of doing something to make your life easier? Well if you have ever done any of those and can write a decent sentence or two we’d be glad to showcase your content here.

Get the full details at our page here: Write for the Linux Experiment.

Categories: Tyler B Tags:

KWLUG: Let’s Encrypt, Cryptocurrencies (2017-02)

February 11th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Let’s Encrypt, Cryptocurrencies published on February 7th 2017. You can find the original Kitchener Waterloo Linux Users Group post here.


Categories: Linux, Podcast, Tyler B Tags: ,

Setup your own VPN with OpenVPN

February 6th, 2017 No comments

Using the excellent Digital Ocean tutorial as my base, I decided to set up an OpenVPN server on a Linux Mint 18 computer running on my home network so that I can have an extra layer of protection when connecting to those less-than-reputable WiFi hotspots at airports and hotels.

While this post is not meant to be an in-depth guide (you should use the original for that), it should let me look back at this at some point in the future and easily re-create my setup.

1. Install everything you need

sudo apt-get update
sudo apt-get install openvpn easy-rsa

2. Setup Certificate Authority (CA)

make-cadir ~/openvpn-ca
cd ~/openvpn-ca
nano vars

3. Update CA vars

Set these to something that makes sense:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Set the KEY_NAME to something that makes sense:

export KEY_NAME="server"

4. Build the CA

source vars
./clean-all
./build-ca

5. Build server certificate and key

./build-key-server server
./build-dh
openvpn --genkey --secret keys/ta.key

6. Generate client certificate

source vars
./build-key-pass clientname

7. Configure OpenVPN

cd ~/openvpn-ca/keys
sudo cp ca.crt ca.key server.crt server.key ta.key dh2048.pem /etc/openvpn
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | sudo tee /etc/openvpn/server.conf

Edit config file:

sudo nano /etc/openvpn/server.conf

Uncomment the following:

tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "route 192.168.10.0 255.255.255.0"
push "route 192.168.20.0 255.255.255.0"

Add the following:

key-direction 0
auth SHA256

Edit config file:

sudo nano /etc/sysctl.conf

Uncomment the following:

net.ipv4.ip_forward=1

Run:

sudo sysctl -p

8. Setup UFW rules

Run:

ip route | grep default

This shows the default route, including the name of the network adaptor. For example:

default via 192.168.x.x dev enp3s0  src 192.168.x.x  metric 202

Edit config file:

sudo nano /etc/ufw/before.rules

Add the following above the section that says # Don’t delete these required lines, replacing enp3s0 with your own network adaptor name:

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0
-A POSTROUTING -s 10.8.0.0/8 -o enp3s0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Edit config file:

sudo nano /etc/default/ufw

Change DEFAULT_FORWARD_POLICY to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"

Allow the OpenVPN port and OpenSSH through ufw, then disable and re-enable ufw to apply the changes:

sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw disable
sudo ufw enable

9. Start OpenVPN Service and set it to enable at boot

sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server

10. Setup client configuration

mkdir -p ~/client-configs/files
chmod 700 ~/client-configs/files
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

Edit config file:

nano ~/client-configs/base.conf

Replace the remote server_IP_address port line with the external IP address and port you are planning on using. The IP address can also be a hostname, such as a redirector.

Add the following:

cipher AES-128-CBC
auth SHA256
key-direction 1

Uncomment the following:

user nobody
group nogroup

Comment out the following:

#ca ca.crt
#cert client.crt
#key client.key

11. Make a client configuration generation script

Create the file:

nano ~/client-configs/make_config.sh

Add the following to it:

#!/bin/bash

# First argument: Client identifier

KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

And mark it executable:

chmod 700 ~/client-configs/make_config.sh

12. Generate the client config file

cd ~/client-configs
./make_config.sh clientname

13. Transfer client configuration to device

You can now transfer the client configuration file found in ~/client-configs/files to your device.


This post originally appeared on my personal website here.

Categories: Open Source Software, Tyler B Tags: ,

Blast from the Past: The Search Begins

January 26th, 2017 No comments

This post was originally published on July 29, 2009. The original can be found here.


100% fat free

Picking a flavour of Linux is like picking what you want to eat for dinner; sure some items may taste better than others but in the end you’re still full. At least I hope, the satisfied part still remains to be seen.

Where to begin?

A quick search of Wikipedia reveals that the sheer number of Linux distributions, and thus choices, can be very overwhelming. Thankfully, because of my past experience with Ubuntu, I can at least remove it and its immediate variants, Kubuntu and Xubuntu, from the list of potential candidates. That should only leave me with… well, that hardly narrowed it down at all!

Seriously... the number of possible choices is a bit ridiculous


Learning from others’ experience

My next thought was to use the Internet for what it was designed to do: letting other people do your work for you! To start Wikipedia has a list of popular distributions. I figured if these distributions have somehow managed to make a name for themselves, among all of the possibilities, there must be a reason for that. Removing the direct Ubuntu variants, the site lists these as Arch Linux, CentOS, Debian, Fedora, Gentoo, gOS, Knoppix, Linux Mint, Mandriva, MontaVista Linux, OpenGEU, openSUSE, Oracle Enterprise Linux, Pardus, PCLinuxOS, Red Hat Enterprise Linux, Sabayon Linux, Slackware and, finally, Slax.

Doing both a Google and a Bing search for “linux distributions”, I found a number of additional websites that seem as though they might prove very useful. All of these websites aim to provide information about the various distributions or to help point you in the direction of the one that’s right for you.

Only the start

Things are just getting started. There is plenty more research to do as I compare and narrow down the distributions until I finally arrive at the one that I will install come September 1st. Hopefully I can wrap my head around things by then.

KWLUG: Vagrant (2017-01)

January 11th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Vagrant published on January 9th 2017. You can find the original Kitchener Waterloo Linux Users Group post here.


Categories: Linux, Podcast, Tyler B Tags: ,

Shove ads in your pi-hole!

January 8th, 2017 No comments

There are loads of neat little projects out there for your Raspberry Pi from random little hacks all the way up to full scale home automation and more. In the past I’ve written about RetroPie (which is an awesome project that you should definitely check out!) but this time I’m going to take a moment to mention another really cool project: pi-hole.

Pi-hole, as their website says, is “a black hole for Internet advertisements.” Essentially it’s software that you install on your Raspberry Pi (or other Linux computer) that then acts as a local DNS proxy. Once it is set up and running you can point your devices at it individually or just tell your router to use it instead (which then applies to everything on the network).
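For a single device the change is just a DNS setting. For example, on a Linux machine you could point the resolver at the Pi directly (the 192.168.1.2 address here is an assumption; substitute your Pi-hole’s actual IP):

```
# /etc/resolv.conf (or your network manager's DNS field)
nameserver 192.168.1.2
```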

Then, as you’re browsing the internet and come across a webpage that tries to serve you ads, pi-hole simply blocks the ad’s DNS request from resolving and instead returns a blank image or web page, meaning the site can’t download the ad to show you. Voila! A universal ad blocker for your entire network and all of your devices! Even better: because the ads are blocked from being downloaded in the first place, your browsing speeds can sometimes improve as well.

Pi-hole dashboard

You can monitor and control which domains are blocked from a really nice dashboard interface, and watch the queries come into pi-hole almost in real time.

After running pi-hole for a week now I’m quite surprised at how effective it has been at removing ads. It’s legitimately pleasant being able to browse the web without seeing ads everywhere or having ad blockers break certain websites. If that sounds like something you might be interested in, pi-hole is well worth a look.


This post originally appeared on my personal website here.

Blast from the Past: Of filesystems and partitions

January 5th, 2017 No comments

This post was originally published on August 25, 2009. The original can be found here.


Following from my last post about finalizing all of those small little choices I will now continue along that line but discuss the merits of the various filesystems that Linux allows me to choose from, as well as discuss how I am going to partition my drive.

Filesystem?

For a Windows or Mac user the filesystem is something they will probably never think about in their daily computing adventures, mostly because there really isn’t a choice in the matter. As a Windows user the only time I actually have to worry about the filesystem is when I’m formatting a USB drive. For my hard drives the choices are NTFS, NTFS, and… oh yeah, NTFS. My earliest recollection of what a filesystem is came when my Windows 98 machine crashed and I had to wait while it forced a filesystem check on the next start up. More recently FAT32 has gotten in my way with its 4GB file size limitation.

You mean we get a choice?

Linux seems to be all about choice, so why would it be surprising that you also get to pick your own filesystem? The main contenders for this choice are ext2, ext3, ext4, ReiserFS, JFS, XFS and Btrfs, with Fat16, Fat32, NTFS, and swap showing up in special places.

Ext2

According to the great internet bible, ext2 stands for the second extended filesystem. It was designed as a practical replacement for the original, but very old, Linux filesystem. If I may make an analogy for Windows users, ext2 seems to be the Linux equivalent to Fat32, only much better. This filesystem is now considered mostly outdated and only really still used in places where journaling is not always appropriate; for example on USB drives. Ext2 can be used on the /boot partition and is supported by GRUB.

Ext2 Features

  • Introduced: January 1993
  • File allocation: bitmap (free space), table (metadata)
  • Max file size: 16 GiB – 64 TiB
  • Max number of files: 10^18
  • Max filename length: 255 characters
  • Max volume size: 2 TiB – 32 TiB
  • Date range: December 14, 1901 – January 18, 2038

Ext 3

Ext3 is the successor to ext2; it removed quite a few of the limitations and added a number of new features, the most important of which was journaling. As you might have guessed, its full name is the third extended filesystem. While ext3 is generally considered to be much better than ext2, there are a couple of problems with it. Although ext3 does not have to scan itself after a crash, something that ext2 did have to do, it does not have an online defragmenter either. Also, because ext3 was primarily designed to shore up some of ext2’s faults, it is not the cleanest implementation and can actually perform worse than ext2 in some situations. Ext3 is still the most popular Linux filesystem and is only now slowly being replaced by its own successor, ext4. Ext3 can be used on the /boot partition and is fully supported by GRUB.

Ext3 Features

  • Introduced: November 2001
  • Directory contents: Table, hashed B-tree with dir_index enabled
  • File allocation: bitmap (free space), table (metadata)
  • Max file size: 16 GiB – 2 TiB
  • Max number of files: Variable, allocated at creation time
  • Max filename length: 255 characters
  • Max volume size: 2 TiB – 16 TiB
  • Date range: December 14, 1901 – January 18, 2038

Ext4

Ext4 is the next in the extended filesystem line and the successor to ext3. It initially proved quite controversial due to its implementation of delayed allocation, which can hold data in memory for a comparatively long time before writing it out to disk. However, ext4 achieves very fast read times thanks to this delayed allocation, and overall it performs very well compared to ext3. Ext4 is slowly taking over as the de facto filesystem and is already the default in many distributions (Fedora included). Ext4 cannot be used on the /boot partition because GRUB does not support it, meaning a separate /boot partition with a different filesystem must be made.

Ext4 Features

  • Introduced: October 21, 2008
  • Directory contents: Linked list, hashed B-tree
  • File allocation: Extents/Bitmap
  • Max file size: 16 TiB
  • Max number of files: 4 billion
  • Max filename length: 255 characters
  • Max volume size: 1 EiB
  • Date range: December 14, 1901 – April 25, 2514

ReiserFS

Created by Hans Reiser, this filesystem debuted in 2001 and was very promising for its performance, but it has since been mostly abandoned by the Linux community. Its initial claim to fame was as the first journaling filesystem to be included in the Linux kernel. Carefully configured, ReiserFS can achieve 10 to 15x the performance of ext2 and ext3. ReiserFS can be used on the /boot partition and is supported by GRUB.

ReiserFS Features

  • Introduced: 2001
  • Directory contents: B+ tree
  • File allocation: Bitmap
  • Max file size: 8 TiB
  • Max number of files: ~4 billion
  • Max filename length: 4032 characters theoretically, 255 in practice
  • Max volume size: 16 TiB
  • Date range: December 14, 1901 – January 18, 2038

Journaled File System (JFS)

Developed by IBM, JFS sports many features and was very advanced for its time of release. Among these features are extents and compression. Though not as widely used as other filesystems, JFS is very stable, reliable and fast, with low CPU overhead. JFS can be used on the /boot partition and is supported by GRUB.

JFS Features

  • Introduced: 1990 and 1999
  • Directory contents: B+ tree
  • File allocation: Bitmap/extents
  • Max file size: 4 PiB
  • Max number of files: no limit
  • Max filename length: 255 characters
  • Max volume size: 32 PiB

XFS

Like JFS, XFS is one of the oldest and most refined journaling filesystems available on Linux. Unlike JFS, XFS supports many additional advanced features such as striped allocation to optimize RAID setups, delayed allocation to optimize disk data placement, sparse files, extended attributes, advanced I/O features, volume snapshots, online defragmentation, online resizing, native backup/restore and disk quotas. The only real downsides XFS suffers from are its inability to shrink partitions, a difficult-to-implement un-delete, and quite a bit of overhead when directories are created and deleted. XFS is supported by GRUB, and thus can be used for the /boot partition, but there are reports that this is not very stable.

XFS Features

  • Introduced: 1994
  • Directory contents: B+ tree
  • File allocation: B+ tree
  • Max file size: 8 EiB
  • Max filename length: 255 characters
  • Max volume size: 16 EiB

Btrfs

Btrfs, or “B-tree FS” or “Butter FS”, is a next-generation filesystem with all of the bells and whistles. It is meant to fill the gap left by the lack of enterprise filesystems on Linux and is being spearheaded by Oracle. Wikipedia lists its promised features as online balancing, subvolumes (separately-mountable filesystem roots), object-level (RAID-like) functionality, and user-defined transactions, among other things. A stable version is currently being incorporated into mainline Linux kernels.

Btrfs Features

  • Introduced: 20xx
  • Directory contents: B+ tree
  • File allocation: extents
  • Max file size: 16 EiB
  • Max number of files: 2^64
  • Max filename length: 255 characters
  • Max volume size: 16 EiB

So what’s it all mean?

Well there you have it, a quick and concise rundown of the filesystem options for your mainstream Linux install. But what exactly does all of this mean? Well, as they say, a picture is worth a thousand words. Many people have run performance tests against the mainstream filesystems, and many conclusions have been drawn as to which is best in which circumstances. As I assume most people would choose XFS, ext3, ext4, or maybe even Btrfs if they were a glutton for punishment, I just happen to have found some interesting pictures to show off the comparison!

Rather than tell you which filesystem to pick I will simply point out a couple of links and tell you that, while I think XFS is a very underrated filesystem, I, like most people, will be going with ext4 simply because it is currently the best supported.

Links (some have pictures!):

EXT4, Btrfs, NILFS2 Performance Benchmarks

Filesystems (ext3, reiser, xfs, jfs) comparison on Debian Etch

Linux Filesystem Performance Comparison for OLTP with Ext2, Ext3, Raw, and OCFS on Direct-Attached Disks using Oracle 9i Release 2

Hey! You forgot about partitions!

No, I didn’t.

Yes you did!

OK, fine… So as Jon had pointed out in a previous post the Linux filesystem is broken down into a series of more or less standard mount points. The only requirements for Fedora, my distribution of choice, and many others are that at least these three partitions exist: /boot for holding the bootable kernels, / (root) for everything else, and a swap partition to move things in and out of RAM. I was thinking about creating a fourth /home partition but I gave up when I realized I didn’t know enough about Linux to determine a good partition size for that.

OK, so break it down

/boot

Fedora recommends that this partition be a minimum of 100MB in size. Even though kernels are each only roughly 6MB in size, it is better to be safe than sorry! Also, because ext4 is not supported by GRUB, I will be making this partition ext3.

LVM

I know what you’re thinking: what the hell is LVM? LVM stands for Logical Volume Manager and allows a single physical partition to hold many virtual partitions. I will be using LVM to store the remainder of my partitions, wrapped inside an encrypted physical partition. At least that’s the plan.

swap

Fedora recommends using the following formula to calculate how much swap space you need.

If M < 2
    S = M * 2
Else
    S = M + 2

Where M is the amount of RAM you have and S is the swap partition size, both in GiB. For example, the machine I am using for this experiment has 4 GiB of RAM, which translates to a swap partition of 6 GiB. If your machine only has 1 GiB of RAM, the formula instead gives 2 GiB of swap space. 6 GiB seems a bit overkill for a swap partition, but what do I know?
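That formula is simple enough to script. A minimal sketch in shell, with the RAM size in GiB passed as the argument:

```shell
#!/bin/bash
# Recommended swap size in GiB per the Fedora formula:
# S = M * 2 when M < 2, otherwise S = M + 2.
swap_gib() {
  local m=$1
  if [ "$m" -lt 2 ]; then
    echo $(( m * 2 ))
  else
    echo $(( m + 2 ))
  fi
}

swap_gib 4   # -> 6, matching the 6 GiB figure above
```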

/ (root)

And last but not least the most important part, the root partition. This partition will hold everything else and as such will be taking up the rest of my drive. On the advice of Fedora I am going to leave 10 GiB of LVM disk space unallocated for future use should the need arise. This translates to a root partition of about ~300 GiB, plenty of space. Again I will be formatting this partition using ext4.
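Putting the pieces together, the planned layout looks roughly like this (the device names are assumptions on my part; the sizes are the ones discussed above):

```
/dev/sda1   /boot   ext3   ~100 MiB
/dev/sda2   encrypted physical partition containing LVM:
    swap            6 GiB
    /       ext4    ~300 GiB
    free            10 GiB left unallocated for future use
```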

Well there you go

Are you still with me? You certainly are a trooper! If you have any suggestions as to different disk configurations please let me know. I understand a lot of this in theory but if you have actual experience with this stuff I’d love to hear from you!

Download YouTube videos with youtube-dl

December 31st, 2016 No comments

Have you ever wanted to save a YouTube video for offline use? Well now it’s super simple to do with the easy to use command line utility youtube-dl.

Start by installing the utility either through your package manager (always the recommended approach) or from the project GitHub page here.

sudo apt-get install youtube-dl

This utility comes packed with loads of neat options that I encourage you to read about on the project page but if you just want to quickly grab a video all you need to do is run the command:

youtube-dl https://www.youtube.com/YOUR_REAL_VIDEO_HERE

So for example if you wanted to grab this year’s YouTube Rewind you would just run:

youtube-dl https://www.youtube.com/watch?v=_GuOjXYl5ew

and the video would end up in whatever directory your terminal is currently in. Can’t get much easier than that!

Categories: Linux, Tyler B Tags: ,