Archive

Posts Tagged ‘Linux’

What Is the Linux File System? Easy Guide

May 7th, 2017 2 comments

What Is the Linux File System?

We've talked about Linux in previous posts: we've already chosen the best Linux distro and learned how to install it. Now it's time to dig in and understand Linux from the inside, starting with something we mentioned in the last post: what is the Linux file system?

To talk about the Linux file system properly, we need to look at it from both the low level and the top level in order to fully understand it.

You can check this link for a general overview: File System.

Low-Level Explanation

The Linux file system, like any other file system, is the layer beneath the operating system that handles where your data sits on storage; without it, the system cannot know where each file starts and ends.

Linux supports many file systems, including those used by Windows and macOS, and you can even install software that adds support for a new file system when one appears.

Linux File System Types

So what are Linux file system types?

When you install Linux, you will see that it has to go onto a partition or disk formatted with one of the following file systems:

Ext, Ext2, Ext3, Ext4, JFS, XFS, btrfs and swap

So what are these file systems that Linux uses?

Ext: the original; no longer used due to its limitations.

Ext2: the first Linux file system to allow up to 2 terabytes of data.

Ext3: derived from Ext2, but with upgrades. It keeps backward compatibility, and you can upgrade an Ext2 file system to Ext3 without problems.

Its main drawback is that it is a poor fit for servers, because it does not support file recovery or disk snapshots.

Ext4: faster, and handles large files with significantly better speed.

It is a very good option for SSDs, and you will notice that it is the default file system suggested when you install almost any Linux distro.

JFS: an older file system made by IBM. It performs well with both large and small files and has low CPU usage, but reports say files can become corrupted after long-term use.

XFS: an older file system that performs poorly with small files and cannot match the rich feature set of the Ext family.

Btrfs: made by Oracle. It is not as stable as Ext on some distros, but it performs well and you could treat it as a replacement if you had to.

You may notice from the comparison above that Ext4 is the best Linux file system for most uses.
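By the way, if you ever want to know which file system an existing machine is using, two standard commands will tell you; the output varies by system, but both are available on essentially every distro:

df -T

lsblk -f

The -T flag makes df print a file system type column for every mounted partition, while lsblk -f lists all block devices along with the file system each one carries.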

Top-Level Explanation

Now you know what file systems Linux offers. But what is inside that system, and what is its structure?

You may come from a Windows background, where disks have partitions like C:\ and D:\ containing folders, and where you install Windows on one of those partitions, usually C:\.

What about the Linux file system hierarchy? It has a different structure than Windows. Navigate to the root partition, which is /, and you can see the structure of the Linux file system. Most distros share the same layout:

Linux File System Directories

/bin: Where Linux core commands reside, like ls and mv.

/boot: Where boot loader and boot files are located.

/dev: Where the device files for your hardware, such as USB drives and DVDs, live.

/etc: Contains configurations for the installed packages.

/home: Where every user has a personal folder, named after them, like /home/likegeeks.

/lib: Where the libraries of installed packages are located. Libraries are shared among all packages here, unlike Windows, where you may find duplicates in different folders.

/media: Where external devices like DVDs and USB sticks are mounted, so you can access their files from here.

/mnt: Where you manually mount other things, like network locations; on some distros you may also find your mounted USB or DVD here.

/opt: Where some optional packages are located and managed by the package manager.

/proc: Because everything on Linux is a file, this folder holds files for the processes running on the system, and you can read them to see plenty of information about current processes.

/root: The home folder for the user root.

/sbin: Like /bin, but these binaries are for the root user only.

/tmp: Where temporary files are located.

/usr: Where utilities and files shared between users are located.

/var: Where variable data is located, like system logs.
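You can see this hierarchy for yourself by listing the root directory. The exact contents vary slightly from distro to distro, but it will look something like this:

ls /

bin  boot  dev  etc  home  lib  media  mnt  opt  proc  root  sbin  tmp  usr  var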

Now you should understand what the Linux file system is.

Different file systems lead to different performance, so it is important to know at least a little about the file system you are using.

This post was originally published on Like Geeks site here.

Categories: LikeGeeks, Linux

How To Install Linux Step-By-Step

March 18th, 2017 No comments

How To Install Linux? By Video

Before we dig into how to install Linux, let's review what we've talked about regarding choosing the best Linux distro for your needs in this post:

Best Linux Distro

We hope you chose the one that is suitable for you.

If you want to install Linux, there are two ways to do it.

The first way is to download the Linux distribution you want, burn it to a DVD or USB stick, boot your machine from it, and complete the installation process.

The second way is to install it on a virtual machine like VirtualBox or VMware and continue working on your Windows or Mac system without rebooting; your Linux system will be contained in a window that you can minimize while you keep working.

Personally, I prefer VirtualBox: it's free, it runs faster on my PC than VMware, and it natively supports installing all versions of Windows, Linux, and macOS.

Let's choose a Linux distro and install it both ways.

OK, I'm going to choose Linux Mint; they call it the macOS of Linux.

It is a good distro for laptops and desktops for personal use.

The version we are going to install is 18.1 "Serena", the latest at the time of writing.

Go to this link and download it:

https://www.linuxmint.com/download.php

I prefer the Cinnamon desktop edition; it is polished and elegant.

Once you download the ISO file you will have to burn it to a DVD or, the easier way, copy it to a USB stick using a program called Universal USB Installer from this link:

https://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/

After downloading the program, open it, choose your distro from the list (in our case Linux Mint), make sure your USB stick is plugged into the computer, click Next, and wait until the copying process is finished.


Finally, we click Create to create the bootable USB.


Now you can boot from this memory stick.
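As an aside, if you already have a Linux machine handy, you can skip Universal USB Installer and write the ISO straight to the stick with dd. A word of warning about this sketch: /dev/sdX below is a placeholder for your USB stick's device name, so check it with lsblk first, because dd will happily overwrite whatever disk you point it at.

sudo dd if=linuxmint-18.1-cinnamon-64bit.iso of=/dev/sdX bs=4M status=progress

sudo sync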

Restart your PC, go into the BIOS settings, select the boot options, and make sure the USB is first in the boot order. The installation screen will then appear; press Enter and it will load the live CD content from the USB or DVD.


Then the desktop will appear.


Click Install Linux Mint.

Then choose the language used for the installation.


You will also see an option to install third-party software; pick whichever you prefer.

Then choose the installation type. TAKE CARE: if you are installing Linux on a disk that contains another system, you MUST choose the option called "Something else".


But if you are installing it on a new disk, choose the "Erase disk" option.

Linux requires two partitions to work: a root partition and a swap partition.

Now we will create them. Click the plus button and choose about 12 GB (or more as you need, but not less than that) for the root partition, set the mount point to /, which stands for root, and of course the format will be Ext4.


Now we create the swap partition: select the remaining free space, click the plus button, and choose "swap area".

Then click Install Now and agree to writing the changes to disk.


Now choose your time zone and click Continue, then choose your keyboard layout.


Now enter your username and password and click Continue.


Finally, the installation starts.


After the installation finishes, it will prompt you to reboot the machine and remove the installation media (DVD or USB).


And that is how to install Linux on a physical machine.


The second way to install Linux is on VirtualBox. First, download VirtualBox from here:

https://www.virtualbox.org/wiki/Downloads

Then install it; the steps are straightforward. There are two ways to use Linux on VirtualBox.

The first way is easy: it is like the normal installation process.

Open VirtualBox, click New, and choose Linux and Ubuntu (64-bit).


Then choose the amount of RAM (not less than 1 GB), choose the disk file type (or leave it as VDI), select dynamically allocated, set the size (not less than 12 GB), and hit OK.

Then choose the file location for the disk to be used.


Now the virtual machine is created; we just need to make it boot from the ISO we downloaded.

Go to Settings > Storage, choose the ISO image, and click OK.

Then click Start.


After the desktop loads, click Install Linux Mint; the rest of the steps are exactly as described above. And that is how to install Linux on a virtual machine.
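For what it's worth, everything we just clicked through can also be scripted with VirtualBox's VBoxManage command-line tool. Here is a rough sketch; the VM name, sizes, and ISO path are my own example choices:

VBoxManage createvm --name "LinuxMint" --ostype Ubuntu_64 --register
VBoxManage modifyvm "LinuxMint" --memory 2048
VBoxManage createmedium disk --filename LinuxMint.vdi --size 12288
VBoxManage storagectl "LinuxMint" --name "SATA" --add sata
VBoxManage storageattach "LinuxMint" --storagectl "SATA" --port 0 --device 0 --type hdd --medium LinuxMint.vdi
VBoxManage storageattach "LinuxMint" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ~/Downloads/linuxmint-18.1-cinnamon-64bit.iso
VBoxManage startvm "LinuxMint"

The createmedium line makes the same 12 GB dynamically allocated VDI we created in the GUI, and the two storageattach lines plug in the virtual hard disk and the installer ISO.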

This post was originally published on Like Geeks site here.

Categories: LikeGeeks, Linux

Blast from the Past: Create a GTK+ application on Linux with Objective-C

March 15th, 2017 No comments

This post was originally published on December 8, 2010. The original can be found here.

Note that while you can still write GTK+ applications on Linux using Objective-C as outlined in this post, you may want to check out my little project CoreGTK, which makes the whole experience a lot nicer 🙂


As a sort of follow-up-in-spirit to my older post, I decided to share a really straightforward way to use Objective-C to build GTK+ applications.

Objective-what?

Objective-C is an improvement to the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple's OS X and iOS platforms. It ships with or without a large standard library (sometimes referred to as the Foundation Kit library) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machines (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately while there used to exist a couple of different Objective-C bindings for GTK+, all of them are quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.
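On Ubuntu, grabbing both is a single command:

sudo apt-get install gnustep-core-devel libgtk2.0-dev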

Setting it all up

In order to make this work we will be creating two Objective-C classes, one that will house our GTK+ window and another that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.

MainWindow.h

In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
* This class is responsible for initializing the GTK render loop
* as well as setting up the GUI for the user. It also handles all GTK
* callbacks for the winMain GtkWindow.
*/
@interface MainWindow : NSObject
{
    //The main GtkWindow
    GtkWidget *winMain;
    GtkWidget *button;
}

/*
* Constructs the object and initializes GTK and the GUI for the
* application.
*
* *********************************************************************
* Input
* *********************************************************************
* argc (int *): A pointer to the arg count variable that was passed
* in at the application start. It will be returned
* with the count of the modified argv array.
* argv (char *[]): A pointer to the argument array that was passed in
* at the application start. It will be returned with
* the GTK arguments removed.
*
* *********************************************************************
* Returns
* *********************************************************************
* MainWindow (id): The constructed object or nil
* argc (int *): The modified input int as described above
* argv (char *[]): The modified input array modified as described above
*/
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
* Frees the Gtk widgets that we have control over
*/
-(void)destroyWidget;

/*
* Starts and hands off execution to the GTK main loop
*/
-(void)startGtkMainLoop;

/*
* Example Objective-C function that prints some output
*/
-(void)printSomething;

/*
********************************************************
* C callback functions
********************************************************
*/

/*
* Called when the user closes the window
*/
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
* Called when the user presses the button
*/
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);

@end

MainWindow.m

For the class's actual code file, fill it in as shown below. This class will create a GTK+ window with a single button and will react both to the user pressing the button and to closing the window.

#import "MainWindow.h"

/*
 * For documentation see MainWindow.h
 */

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
    //call parent class' init
    if (self = [super init]) {

        //setup the window
        winMain = gtk_window_new(GTK_WINDOW_TOPLEVEL);

        gtk_window_set_title(GTK_WINDOW(winMain), "Hello World");
        gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

        //setup the button
        button = gtk_button_new_with_label("Push me!");

        gtk_container_add(GTK_CONTAINER(winMain), button);

        //connect the signals
        g_signal_connect(winMain, "destroy", G_CALLBACK(on_MainWindow_destroy), NULL);
        g_signal_connect(button, "clicked", G_CALLBACK(on_btnPushMe_clicked), NULL);

        //force show all
        gtk_widget_show_all(winMain);
    }

    //assign C-compatible pointer
    myMainWindow = self;

    //return pointer to this object
    return self;
}

-(void)startGtkMainLoop
{
    //start gtk loop
    gtk_main();
}

-(void)printSomething
{
    NSLog(@"Printed from Objective-C's NSLog function.");
    printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget
{
    myMainWindow = NULL;

    if(GTK_IS_WIDGET(button))
    {
        //clean up the button
        gtk_widget_destroy(button);
    }

    if(GTK_IS_WIDGET(winMain))
    {
        //clean up the main window
        gtk_widget_destroy(winMain);
    }
}

-(void)dealloc
{
    [self destroyWidget];

    [super dealloc];
}

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
    //exit the main loop
    gtk_main_quit();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
    printf("Button was clicked\n");

    //call Objective-C function from C function using global object pointer
    [myMainWindow printSomething];
}

@end

main.m

To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

    //create an AutoreleasePool
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    //init gtk engine
    gtk_init(&argc, &argv);

    //set up GUI
    MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

    //begin the GTK loop
    [mainWindow startGtkMainLoop];

    //free the GUI
    [mainWindow release];

    //drain the pool
    [pool release];

    //exit application
    return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to the porting of both GTK+ and GNUstep to Windows, this same code will also produce a cross-platform application that works on both Mac OSX and Windows.

Source Code Downloads

Source Only Package
File name: objective_c_gtk_source.zip
File hashes: Download Here
File size: 2.4KB
File download: Download Here

Originally posted on my personal website here.

Blast from the Past: How to migrate from TrueCrypt to LUKS file containers

March 13th, 2017 No comments

This post was originally published on June 15, 2014. The original can be found here. Note that there are other direct, compatible, alternatives and proper successors to TrueCrypt these days, including VeraCrypt. This post is simply about a different option available to Linux users.


With the recent questions surrounding the security of TrueCrypt there has been a big push to move away from that program and switch to alternatives. One such alternative, on Linux anyway, is the Linux Unified Key Setup (or LUKS) which allows you to encrypt disk volumes. This guide will show you how to create encrypted file volumes, just like you could using TrueCrypt.

The Differences

There are a number of major differences between TrueCrypt and LUKS that you may want to be aware of:

  • TrueCrypt supported the concept of hidden volumes, LUKS does not.
  • TrueCrypt allowed you to encrypt a volume in-place, without losing data, LUKS does not.
  • TrueCrypt supports cipher cascades where the data is encrypted using multiple different algorithms just in case one of them is broken at some point in the future. As I understand it this is being talked about for the LUKS 2.0 spec but is currently not a feature.

If you are familiar with the terminology in TrueCrypt you can think of LUKS as offering both full disk encryption and standard file containers.

How to create an encrypted LUKS file container

The following steps borrow heavily from a previous post so you should go read that if you want more details on some of the commands below. Also note that while LUKS offers a lot of options in terms of cipher/digest/key size/etc, this guide will try to keep it simple and just use the defaults.

Step 1: Create a file to house your encrypted volume

The easiest way is to run the following commands which will create the file and then fill it with random noise:

# fallocate -l <size> <file to create>
# dd if=/dev/urandom of=<file to create> bs=1M count=<size>

For example, let's say you wanted a 500MiB file container called MySecrets.img; just run these commands:

# fallocate -l 500M MySecrets.img
# dd if=/dev/urandom of=MySecrets.img bs=1M count=500

Here is a handy script that you can use to slightly automate this process:

#!/bin/bash
NUM_ARGS=$#

if [ $NUM_ARGS -ne 2 ] ; then
    echo "Wrong number of arguments."
    echo "Usage: [size in MiB] [file to create]"

else

    SIZE=$1
    FILE=$2

    echo "Creating $FILE with a size of ${SIZE}MiB"

    # create file
    fallocate -l ${SIZE}M $FILE

    # randomize file contents
    dd if=/dev/urandom of=$FILE bs=1M count=$SIZE

fi

Just save the above script to a file, say “create-randomized-file-volume.sh”, mark it as executable and run it like this:

# ./create-randomized-file-volume.sh 500 MySecrets.img

Step 2: Format the file using LUKS + ext4

There are ways to do this in the terminal but for the purpose of this guide I’ll be showing how to do it all within gnome-disk-utility. From the menu in Disks, select Disks -> Attach Disk Image and browse to your newly created file (i.e. MySecrets.img).

Don't forget to uncheck the box!

Be sure to uncheck “Set up read-only loop device”. If you leave this checked you won’t be able to format or write anything to the volume. Select the file and click Attach.

This will attach the file, as if it were a real hard drive, to your computer:

Next we need to format the volume. Press the little button with two gears right below the attached volume and click Format. Make sure you do this for the correct 'drive' so that you don't accidentally format your real hard drive!

Please use a better password

From this popup you can select the filesystem type and even name the drive. In the image above the settings will format the drive to LUKS and then create an ext4 filesystem within the encrypted LUKS one. Click Format, confirm the action and you’re done. Disks will format the file and even auto-mount it for you. You can now copy files to your mounted virtual drive. When you’re done simply eject the drive like normal or (with the LUKS partition highlighted) press the lock button in Disks. To use that same volume again in the future just re-attach the disk image using the steps above, enter your password to unlock the encrypted partition and you’re all set.
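For the curious, the terminal route alluded to above looks roughly like this. Treat it as a sketch that sticks to cryptsetup's defaults; the mapper name MySecrets is just my choice, and your first free loop device may not be /dev/loop0:

# losetup -f --show MySecrets.img
/dev/loop0
# cryptsetup luksFormat /dev/loop0
# cryptsetup luksOpen /dev/loop0 MySecrets
# mkfs.ext4 /dev/mapper/MySecrets
# cryptsetup luksClose MySecrets
# losetup -d /dev/loop0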

But I don’t even trust TrueCrypt enough to unlock my already encrypted files!

If you're just using TrueCrypt to open an existing file container so that you can copy your files out of it and into your newly created LUKS container, I think you'll be OK. That said, there is a way to keep using your existing TrueCrypt file containers without actually using the TrueCrypt application.

First install an application called tc-play. This program works with the TrueCrypt format but doesn’t share any of its code. To install it simply run:

# sudo apt-get install tcplay

Next we need to mount your existing TrueCrypt file container. For the sake of this example we’ll assume your file container is called TOPSECRET.tc.

We need to use a loop device but before doing that we need to first find a free one. Running the following command

# sudo losetup -f

should return the first free loop device. For example it may print out

/dev/loop0

Next you want to associate the loop device with your TrueCrypt file container. You can do this by running the following command (sub in your loop device if it differs from mine):

# sudo losetup /dev/loop0 TOPSECRET.tc

Now that our loop device is associated we need to actually unlock the TrueCrypt container:

# sudo tcplay -m TOPSECRET.tc -d /dev/loop0

Finally we need to mount the unlocked TrueCrypt container to a directory so we can actually use it. Let’s say you wanted to mount the TrueCrypt container to a folder in the current directory called SecretStuff:

# sudo mount -o nosuid,uid=1000,gid=100 /dev/mapper/TOPSECRET.tc SecretStuff/

Note that you should swap in your own uid and gid in the above command if they aren't 1000 and 100 respectively. You should now be able to view your TrueCrypt files in your SecretStuff directory. For completeness' sake, here is how you unmount and re-lock the same TrueCrypt file container when you are done:

# sudo umount SecretStuff/
# sudo dmsetup remove TOPSECRET.tc
# sudo losetup -d /dev/loop0

This post originally appeared on my personal website here.

Blast from the Past: Compile Windows programs on Linux

February 26th, 2017 No comments

This post was originally published on September 26, 2010. The original can be found here.


Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out “gcc” in your normal compile commands with either “i586-mingw32msvc-gcc” (for 32-bit) or “amd64-mingw32msvc-gcc” (for 64-bit). So for example if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs as well. Take the following code as an example

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test that it worked properly by running the program through Wine, like so:

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.

Blast from the Past: A Practical Reference of Linux Commands

February 23rd, 2017 No comments

This post was originally published on February 19, 2010. The original can be found here.


Just wanted to share a link to a great table that I found – the practical reference of linux commands is a handy little table of terminal commands organized by task. I’ll add it to our sidebar under the ‘Useful Sites’ heading for future reference.

Happy Linuxing!

Sudo apt-get install basic-linux-pt3 –Install & Setup

February 18th, 2017 No comments

It's been a busy little while, so I haven't had time to get this written up. Let's see what I can still remember.

Installing Ubuntu Server was as easy as you'd expect. Booting into it wasn't. It turns out that the BIOS on this box is set up not to boot from the ODD SATA port: it'll boot from any of the four drives, or from USB, but not from that extra SATA port. My friend, who already has the box running, solved it by setting up a RAID where the ODD SATA is RAID number 0, which then allows it to be booted from. I went for a much simpler solution after noticing that the install process lets you select the location of the GRUB loader. This server has a USB port and a MicroSD card reader inside the case, both bootable. I have plenty of spare MicroSD cards lying around (seriously, since when is 1GB or 2GB big enough for anybody?), so I just inserted one and reinstalled, specifying the MicroSD card as the location for GRUB.

It felt good to have my new box booting up and actually running. I got its static IP and OpenSSH set up, checked I could access it through PuTTY, and finally got it up on the shelf and off my desk. Everything from this point on I've done in PuTTY, with no monitor attached to the server. Let's face it: it's not like there's any difference between one white-on-black text interface and another.

Next, I turned my attention to mounting my drives. The mount command is simple enough, but obviously I want my drives to be available right away after boot, so it was time to learn about fstab and UUIDs. Luckily this is a fairly straightforward process, especially since my drives only have a single partition each; the worst part was having to write down the long UUIDs to copy from the terminal output into the fstab file, since I haven't been able to get copy & paste working in PuTTY. One thing I started to realise at this point is that while Ubuntu boots nice and quickly, the server itself doesn't. So each time I want to see if my fiddling has worked, I pretty much have time to make a cup of tea. From looking through various guides, I simply used:

UUID=<UUID> /mnt/<mountpoint> ext4 defaults 0 2

for each of my additional drives. After a reboot, I had access to all of my files and media as it was on my old NAS. I spent a little time clearing out the directory structures it had left behind (program files and the like) to leave nice, clean access to all of my files.
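Two commands that would have saved me some time, for anyone following along: blkid prints each partition's UUID so you can grab it straight from the terminal, and mount -a applies fstab immediately, no reboot (or cup of tea) required. The UUID below is made up:

sudo blkid /dev/sdb1
/dev/sdb1: UUID="3f9c2a10-8d4e-4b6a-9c01-2f7d8e5a1b23" TYPE="ext4"

sudo mount -a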

NFS and Samba were just as easy to get set up as they had been on the virtual machines, although with so many different things I wanted to share, I had to add a lot of entries to each config file. Thankfully there's no need to reboot after each edit; the services can simply be restarted to pick up the new settings. Samba is simple enough to test, since I'm managing the server over SSH from PuTTY on Windows; NFS required me to test in one of the VMs once again, but after some work, both seemed to be working. I'm not 100% happy with some of my setup, since I'm just allowing open access to anyone on some of these shares. Chances are I'll be fine, but I'll want to come back at some point to tighten up my user management.
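For anyone wondering what those entries actually look like, here's the general shape; the paths and subnet are examples rather than my exact setup. One line per share in /etc/exports for NFS:

/mnt/media 192.168.0.0/24(rw,sync,no_subtree_check)

One stanza per share in /etc/samba/smb.conf for Samba:

[media]
   path = /mnt/media
   read only = no
   guest ok = yes

And to pick up changes without a reboot:

sudo service nfs-kernel-server restart
sudo service smbd restart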

Emby server has a very good set of installation instructions. The main new part for me was adding its repository, but this means it'll be kept up to date when I perform other apt-get upgrades. Everything else related to Emby is managed through its web GUI, so it's straightforward stuff.

In fact, I was surprised at how simple it was to get the majority of things working. FTP just kind of worked; I just needed to make symlinks from my home directory to the other places I need to access. Even Transmission wasn't too bad to get going and allow my remote GUI to connect. One thing that started to get harder from this point was keeping track of the different ports and services I was using, so I took some time to make a list of computers and services to plan my external port mapping, and got things like FTP, SSH and Transmission forwarded. Internally I've just used the default ports for simplicity; externally I've made sure they're set to something completely different.
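(The FTP symlinks, for reference, are one-liners along these lines, with the paths being examples:)

ln -s /mnt/media ~/media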

Next Up:

Bash-ing things around

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

Sudo apt-get install basic-linux-pt2 –Testing-&-VMs

February 17th, 2017 No comments

With the hardware sorted (bar some jiggery-pokery to get the ODD-to-SSD bay converter to fit properly), I set about deciding what I want this box to do.

The list I came up with looks like this:

  • Media serving to my Kodi devices (2 Raspberry Pi systems, my android tablet, and a new Ubuntu PC I’m putting together for retro gaming with my kids)
  • FTP – I like to use my NAS as my own personal cloud. My tablet can mount an FTP share in its file browser just like any other folder. No SFTP support, though, unfortunately (and I don't like any of the file browsers I tried that do support it).
  • Transmission (or Deluge) – the main reason for swapping the 4GB of RAM out for 16GB
  • SSH (obviously!)
  • Dropbox and Google Drive – for when various apps and things integrate well with these mobile apps.
  • Backup – the WD My Cloud EX4’s backup options are very poor.
  • General file sharing with Windows and Ubuntu – Samba & NFS, naturally.
  • Hosting & tinkering with other bits I might want to try & learn about – a website (for practice, not for public viewing), a Git server… who knows.

Being basically unfamiliar with most of this stuff, I was undecided between Ubuntu Desktop and Server for quite a while. Desktop obviously has so much of this stuff ready to go: it mounts things automatically, I can use the GUI as a fallback if something isn't right, and it's just more like what I'm accustomed to. On the other hand, having the GUI running all the time will just use up unnecessary RAM – granted, I probably don't have a shortage of that, but still…

In the end I installed both onto VMs on my Windows machine, made copies (so I had a clean version always ready to go without having to reinstall again), and started playing.

First up I wanted to sort out how I was going to deal with my media backend. On my current setup I use the Kodi client on one of my PCs to manage a central SQL database. While it works, it's a bit slow and rather inefficient, so I went looking for either a headless Kodi backend or just a way to run it without the GUI. I found all sorts of ideas, builds and code, none of which I understand or feel like I could implement. After a discussion with a Linux guru (one of my uni lecturers) it was clear that my plan was probably not going to work; he pointed out that he just runs his on DLNA, and that Plex seems to be quite good too. More research, and a question in /r/Kodi later, I had been pointed in the direction of Emby, a backend for Kodi without many of the limitations of Plex and DLNA. Installation was simple enough, but accessing the web UI wasn't. When I had set up the VMs I had just left their network settings as NAT; this, it turns out, makes accessing the network from the VM possible, but not accessing the VM from elsewhere on the network (including other VMs on the same system). I did try to just change the settings in the VM to add a bridged adapter, but it didn't work. Not knowing enough about networking on Linux to fix this, I just went ahead and reinstalled, this time setting up the VM with two network adapters – one NAT and another bridged. This worked a treat, and after adding a few media files and installing Kodi on the Desktop VM, I was able to play videos no problem.

Next, for no particular reason, was getting NFS working. I found guides, forums, blogs, etc. (my Google-fu is pretty strong) and set about trying. I was sure it should be working: I'd installed nfs-kernel-server, added the entry to /etc/exports, and set up the permissions, but I just couldn't mount it in the Desktop VM – even though I could watch the files through Kodi. I ended up having to ask Reddit's linux4noobs sub. The answer was simple: sudo /etc/init.d/nfs-kernel-server start – and instantly it mounted, no problem. It turns out Kodi had actually been watching a transcoded stream from Emby until I got NFS working. Thankfully Samba took less time and hassle to get working (surprisingly), and pretty soon I could access files across both Linux and Windows. And there was much rejoicing.
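For reference, the eventual test boiled down to two commands; the server address and paths here are examples, since the VM's bridged address will vary. On the server:

sudo /etc/init.d/nfs-kernel-server start

And on the Desktop VM:

sudo mount -t nfs 192.168.0.10:/mnt/media /mnt/test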

At this point I was getting impatient (plus this microserver is taking up a chunk of space on my desk where I really ought to be doing uni work), so I quickly checked that I knew how to set up a static IP, and turned my attention to the real thing.

Next Up:

Booting up The Box
Installing, reinstalling and shenanigans

This post was originally published on Nathanael’s site here.

Sudo apt-get install basic-linux-pt1 –The-Task

February 16th, 2017 No comments

I've had it in mind to start learning Linux for a while, and now I've found the perfect project to help me do just that: building my own NAS.

Until November I was perfectly happy with my business-grade Draytek router. I bought it about six years ago, particularly because it could host its own VPN instead of just passing through to a Windows PC, and it has served me well. I was also reasonably satisfied with my Western Digital EX4 NAS unit in most respects. Then Gigabit FTTH was installed in my area, and I got a free trial – all hell broke loose (very much in a First World Problems sense).

The first thing which became apparent is that my 'nice' router was outdated. Its best WAN-to-LAN throughput topped out at 93Mb/s, while its LAN-to-WAN maxed out at 73Mb/s (incidentally, this also means I already wasn't getting the full benefit of my old 250Mb connection). My new connection is a gigabit, symmetric line which realistically gives me up to about 700Mb/s in either direction. Not wanting to go back to relying on the equipment provided by ISPs, I went out (well, I went online) and found myself something that can cope with over 900Mb/s in either direction (a Netgear Nighthawk R7000, if you're interested). "Excellent", I thought, "now I'm all set"… but no. The knock-on effect of getting such fast internet soon became apparent in my NAS. After adding just a few torrents to Transmission, the download speed steadily rose until it was approaching 10MB/s, at which point all interfaces to the device practically stopped responding. I surmised that the cause was likely the limited 512MB of RAM inside the poor thing, and so went in search of a replacement.

As I looked around, it quickly became clear that most of the available devices on the market were out of my price bracket. While I would have liked to get myself something like a Synology, it just wasn’t going to happen. A friend of mine pointed me in the direction of an HP Gen 8 Proliant Microserver (http://compadvance.co.uk/en/item/323618/HP-Proliant-MicroServer-Gen8-G1610T-4GB) which he has, and after some extra checking around, I decided it looked good. I also decided to up the 4GB of RAM to 16GB, and add an SSD for the OS in the ODD bay.

Now the fun really started. Naturally I didn't want to put Windows on this thing (although that is what my friend has done); I obviously wanted to run Linux. And as it's intended to sit quietly, (apparently) minding its own business for the most part, I wanted to keep as much RAM as possible free by not running a GUI. Ubuntu Server seemed like the perfect choice – except that I have practically no experience setting up servers, working on the command line (basic ls, cp and rm don't really count), or using Ubuntu or Linux for anything at all (I've tinkered, but that's about it).

Well, no time like the present to learn.

Next Up:

Learning and testing on VMs

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

Blast from the Past: Getting KDE on openSUSE is like playing Jenga

February 2nd, 2017 No comments

This post was originally published on October 16, 2009. The original can be found here.


As part of our experiment, everyone is required to try a different desktop manager for two weeks. I chose KDE, since I’ve been using GNOME since I installed openSUSE. However, I’ve found that while trying to get a desktop manager set up one wrong move can cause everything to fall apart.

Switching from GNOME:

This was fairly simple. I started up YaST Software Management, changed my filter from "Search" to "Patterns", and found the Graphical Environments section. There I right-clicked "KDE Base System" and selected install. Clicking Accept installed the kdebase and kdm packages, along with a slew of other KDE default programs. Once this was done, I logged out of my GNOME session and selected KDE4 as my new login session. My system was slightly confused and booted into GNOME again, so I restarted. This time, I was met with KDE 4.1.

My Thoughts on KDE 4.1:

As much as I had hated the Qt look [which I erroneously call the 'QuickTime' look, due to its uncanny similarity to the QuickTime app], the desktop was beautiful. The default panel was a very slick, glossy black, which looked quite nice. The "lines" in each window title made the windowing system very ugly, though, so I set out to turn them off. It's a fairly easy process:

KDE Application Launcher > Configure Desktop > Appearance > Windows > Uncheck the “Show stripes next to the title” box.

Once completed, my windows were simple and effective, and slightly less chunky than the default GNOME theme, so I was content.

Getting rid of the openSUSE Branding:

openSUSE usually draws much ire from me – so it's not hard to imagine that I'd prefer not to have openSUSE branding on every god damn application I run, least of all my desktop manager. From YaST Software Management I searched for openSUSE and uninstalled every package that had the words "openSUSE" and "branding". YaST automatically replaces these packages with alternate "upstream" packages, which seem to be the non-openSUSE themes/appearances. Once these were gone, things looked a lot less gray-and-green, and I was happy.
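If you'd rather do this from a terminal than from YaST, something like the following should find and purge them. Package names vary between releases, so treat this as a sketch:

rpm -qa | grep -i branding
sudo zypper remove $(rpm -qa | grep branding-openSUSE)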

Oh god what happened to my login screen:

A side effect of removing all those openSUSE packages was that my login screen took a trip back in time, to the Windows 3.1 era. It was a white window on a blue background with a Times New Roman-esque font. After a bit of researching on the GOOG, I found out that this was KDE3 stepping up to take over for my openSUSE branding. Uninstalling the package kde3base (or whatever the shit it's called) forced KDE4 to take over, and everything was peachy again.

Installing my Broadcom Wireless Driver

In order to install my driver, I followed this guide TO THE LETTER. Not following this guide actually gave YaST a heart attack and created code conflicts.

KMix Being Weird

KMix magically made the media buttons on my laptop work; however, it occasionally decided to change which "audio device" the default slider was controlling. Still, having the media buttons working was a HUGE plus.

Getting Compositing to Work

I did not have a good experience with this. In fact, by fucking around with settings, I ended up bricking my openSUSE install entirely, so alas, I ended up completely re-installing openSUSE. Regardless: to install the ATI drivers, following the guide here using the one-click install method worked perfectly. After finally getting my drivers, turning on compositing was simple:

KDE Application Launcher > Configure Desktop > Appearance > Desktop > Check the “Enable Desktop Effects” box.

From KDE4.1 to KDE4.3

While KDE was really working for me, the notification system was seriously annoying. Every time my system had an update, or I received a message in Kopete, an ugly, plain, slightly off-center gray box would appear at the top of my screen to inform me. Tyler informed me that this was because I wasn't running the most recent version of KDE4. A quick check showed me that openSUSE isn't going to use KDE 4.3 until openSUSE 11.2 launches; however, you can manually add the KDE 4.3 repositories to YaST, as shown on the openSUSE KDE Repository page.

After adding these repositories, I learned a painful lesson about upgrading your display manager. Do not, under any circumstances, attempt a display manager upgrade/switch until you have an hour to spare and enough battery life to last the whole time. I did not, and even though I cancelled the install about 60 seconds in, I found that YaST had already uninstalled my display manager. Upon restart, I was met with a terminal.

From the terminal, I used the command line version of YaST to completely remove kdebase4 and kdm from my system. After that, re-installing the KDE 4.3 version of kdm from YaST in the terminal installed all the other required applications. However, there are a shitload of dependency issues you've got to sort through, and unfortunately the required action is not the same for each application.

KDE4.3

KDE 4.3 is absolutely gorgeous; I've had no complaints with it. KMix seems to have reassigned itself again, but this time it assigned itself correctly. Removing the openSUSE branding was the same, but by default the desktop theme used is Air. I prefer the darker look of Oxygen, so I headed over to my desktop to fix it by following these steps:

Desktop > Right Click > Plain Desktop Settings > Change the Desktop Theme from Air to Oxygen.

Concluding Thoughts

Now that all these things are sorted out, I’m surprisingly impressed with KDE, and I might even keep it at the end of this test period for our podcast.

Let me know in the comments if you've ever had to change desktop managers, and share your woes!


Blast from the Past: The Search Begins

January 26th, 2017 No comments

This post was originally published on July 29, 2009. The original can be found here.


100% fat free

Picking a flavour of Linux is like picking what you want to eat for dinner; sure, some items may taste better than others, but in the end you're still full. At least I hope so – the satisfied part still remains to be seen.

Where to begin?

A quick search of Wikipedia reveals that the sheer number of Linux distributions, and thus choices, can be very overwhelming. Thankfully, because of my past experience with Ubuntu, I can at least remove it and its immediate variants, Kubuntu and Xubuntu, from the list of potential candidates. That should only leave me with… well, that hardly narrowed it down at all!

Seriously… the number of possible choices is a bit ridiculous

Learning from others’ experience

My next thought was to use the Internet for what it was designed to do: letting other people do your work for you! To start, Wikipedia has a list of popular distributions. I figured that if these distributions have somehow managed to make a name for themselves among all of the possibilities, there must be a reason for it. Removing the direct Ubuntu variants, the site lists these as Arch Linux, CentOS, Debian, Fedora, Gentoo, gOS, Knoppix, Linux Mint, Mandriva, MontaVista Linux, OpenGEU, openSUSE, Oracle Enterprise Linux, Pardus, PCLinuxOS, Red Hat Enterprise Linux, Sabayon Linux, Slackware and, finally, Slax.

Doing both a Google and a Bing search for "linux distributions", I found a number of additional websites that seem as though they might prove very useful. All of these websites aim to provide information about the various distributions or to help point you toward the one that's right for you.

Only the start

Things are just getting started. There is plenty more research to do as I compare and narrow down the distributions until I finally arrive at the one that I will install come September 1st. Hopefully I can wrap my head around things by then.

Blast from the Past: The Distributions of Debian

January 12th, 2017 No comments

This post was originally published on August 21, 2009. The original can be found here.


Like many of the other varieties of Linux, Debian gives the end user a number of different installation choices. In addition to the choice of installer that Tyler B has already mentioned, the Debian community maintains three different distributions, which means that even though I’ve picked a distribution, I still haven’t picked a distribution! In the case of Debian, these distributions are as follows:

  1. Stable: Last updated on July 27th, 2009, this was the last major Debian release, codenamed “Lenny.” This is the currently supported version of Debian, and receives security patches from the community as they are developed, but no new features. The upside of this feature freeze is that the code is stable and almost bug free, with the downside that the software it contains is somewhat dated.
  2. Testing: Codenamed “Squeeze,” this distribution contains code that is destined for the next major release of Debian. Code is kept in the Testing distribution as long as it doesn’t contain any major bugs that might prevent a proper release (This system is explained here). The upside of running this distribution is that your system always has all of the newest (and mostly) bug free code available to users. The downside is that if a major bug is found, the fix for that bug may be obliged to spend a good deal of time in the Unstable distribution before it is considered stable enough to move over to Testing. As a result, your computer could be left with broken code for weeks on end. Further, this distribution doesn’t get security patches as fast as Stable, which poses a potential danger to the inexperienced user.
  3. Unstable: Nicknamed Sid after the psychotic next door neighbour in Toy Story who destroys toys as a hobby, this is where all of Debian’s newest and potentially buggy code lives. According to what I’ve read, Sid is like a developer’s build – new users who don’t know their way around the system don’t generally use this distribution because the build could break at any time, and there is absolutely no security support.

I'm currently leaning towards running the Testing distribution, mostly because I like new shiny toys and (I think) want the challenge of becoming a part of the Debian community. Since we've been getting a lot of support from the various development communities lately, perhaps some of our readers could fill in any information that I might have missed, and set me straight on which distribution I should run.
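If I do end up on Testing, my reading so far suggests the switch is mostly a matter of pointing apt at the new codename and upgrading – something like the following, with the mirror URL being just an example:

deb http://ftp.debian.org/debian/ squeeze main

in /etc/apt/sources.list, followed by:

sudo apt-get update && sudo apt-get dist-upgrade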

Blast from the Past: Linux Media Players Suck – Part 1: Rhythmbox

December 30th, 2016 No comments

This post was originally published on May 5, 2010. The original can be found here.


The state of media players on Linux is a sad one indeed. If you’re a platform enthusiast, you may want to cover your ears and scream “la-la-la-la” while reading this article, because it will likely offend your sensibilities. In fact, the very idea behind this series is to shake up the freetards’ world view, and to make them realize that a decent Winamp or iTunes clone need not be the end of the story for media management and playback on Linux.

This article will concentrate on lambasting Rhythmbox, the default jukebox software of the GNOME desktop environment. Subsequent posts will give the same treatment to other players in this sphere, including Banshee, Amarok, and Songbird (if I can find a copy that will still build on Linux). If you’re a user of media players on Linux, keep your own annoyances firmly in mind, and if I don’t mention them, please share in the comments. If you’re a developer for one of these fine projects, try to keep an open mind and get inspired to do better. A media player is not a hard thing to build, and I do believe that together, we can do better.

For the remainder of this article, please keep in mind that I am currently running Rhythmbox under Kubuntu 9.10, so you’ll see it rendered with qt widgets in all of my screen shots. This doesn’t affect the overall performance of the app, but leads nicely into my first complaint:

  1. Poor Cross-Platform Support: There are basically two desktop environments that matter in the Linux world, GNOME and KDE. Under GNOME, Rhythmbox has a reasonably nice icon set that is comparable to other media players. Under KDE, the qt re-skinning replaces those icons with a horrible set of mismatched images that really make the program look second-rate:
    Isn't this shit awful?

    As you can see, these icons look terrible. Note that there isn’t even an icon for ‘Burn’ and the icon for ‘Browse’ is a fucking question mark.

    This extends to the CD burning and help features too. They rely on programs like gnome-help and brasero to work, but don’t install them with the media player, so when I try to access these features under KDE, I just get error messages. Nice.

    Honestly, who packaged this thing?

    This is just plain stupid. Every package manager has the concept of dependencies, so why doesn’t Rhythmbox use them?

  2. The Player Starts in the Tray: Under what circumstances would it be considered useful for a media player to automatically minimize itself to the system tray on startup? It doesn’t begin to play automatically. The first thing that I always do is click on the tray icon to maximize it so that I can select some music to start playing. Way to start the user experience off on the wrong foot.
  3. Missing Files View: This one is just plain stupid. Whenever I delete a file from my hard drive, it shows up under the ‘Missing Files’ view, even though my intent was clearly to remove the file from my library. Further, I use Rhythmbox to put music on my BlackBerry. Whenever I fill it with music, I first delete the files on it. Those files that I deleted from my mobile device? Yeah, they show up under ‘Missing Files’ too, as if they were a legitimate part of my library! So this view ends up being like a global garbage bin that I have to waste my precious time emptying on occasion, and serves no useful purpose in the mean time. Yeah, I deleted those files. What are you going to do about it?

    Seriously, why the hell are these files in here?

    As you can see, I’ve highlighted the fact that Rhythmbox is telling me that these files are missing from my mobile device. No shit.

  4. Shared Libraries that I can’t Play: So we’ve known for a while now that Apple broke the ability to connect to iTunes via the DAAP protocol, and that it’s not possible to connect to a shared iTunes library from Linux. If that’s the case, why does Rhythmbox still show these libraries as available? And how come it shows my library under this node? Why would I listen to my own shared library? Finally, I’ve found that even if I’m running Rhythmbox on another machine, I still can’t connect to my shared library. This feature seems to be downright broken – so why is it still in the build?
  5. The GUI and Backend are on One Thread: I keep about half of my music collection as lossless FLAC files. When I want to rip these files to my portable media device, they need to be converted to the Mp3 format. Turns out that Rhythmbox thinks it appropriate to transcode these files on the same thread that it uses to update its GUI, so that while this process is taking place, the app becomes laggy, and at times, downright unusable. Further, the application doesn’t seem to give me any control over the bitrate that my songs are transcoded to. Fuck!
  6. Lack of Playlist Options: Smart playlists in Rhythmbox are missing a rather key feature: Randomness. When filling the aforementioned mobile device with music, I would like to select a random 4GB of music from my top rated playlist. But I can’t. I can select 4GB of music by most every criteria except randomness, which means that I get the same 600 or so songs on my device every time I fill it. This is strange, because I can shuffle the contents of a static playlist; But I cannot randomly fill a smart playlist. Great.

    If you have a device that has a small amount of memory, this feature is essential

    It’s funny; I really want to like Rhythmbox, but it’s shit like this that ruins the experience for me

  7. Columns: What the fuck. Who wrote this part of the application? When I choose the columns that are visible in the main window, I can’t re-order them. That’s right. So the only order that I can put my columns in is Track, Title, Genre, Artist, Album, Year, Time, Quality, Rating. Can’t reorder them at all, and I have to go into the preferences menu to choose which ones are displayed, instead of being able to right-click on the column headers to select them like I can in every other program written in the last 10 years. This is just ridiculous. I know that the GTK+ toolkit allows you to create re-order-able columns, because I’ve seen it done.

    This is just so incredibly backward. I mean, columns are a standard part of the GTK+ toolkit, and I've seen plenty of other apps that do this properly.

    Why, for the love of God, can’t these be re-ordered?

  8. The Equalizer is Balls: No presets, and no preamp. So I can set the EQ, and my settings are magically saved, but I can only have one setting, because there doesn’t appear to be a way to create multiple profiles. And louder music sounds like balls, because I can’t turn down the preamp, so I get digital distortion throughout my signal. It would be better to just not have an equalizer at all.

    I mean, it works. But…

  9. Context Menus Don’t Make Sense: Let’s just take a look at this context menu for a moment. There are two ways to remove a song from a playlist. You can Remove the song, which just removes it from the playlist, but not from your library or your hard drive. Alternatively, you can select Move to Trash, which does what you might expect – it removes the song from the playlist, the library, and your computer. I’ve got a problem with the naming conventions here: the purpose of Remove isn’t well explained, and it confused the hell out of me at first. In addition, when browsing a mobile device that you’ve filled with music, the GUI breaks down even further. In this case, you can still hit Remove, which seems to remove the song from Rhythmbox’s listing, but leaves the file on the device. So now I have a file on my device that I can’t access. Great. The right-click menu also has the ability to copy and cut the song, even though there is no immediately obvious way to paste it. For that functionality, you’ll have to head up to the Edit menu.

    The right-click context menu

    I’m starting to run out of anger. The 10,000 papercuts that come along with this app are making me numb to it.

  10. No Command Line Tools: Now, normally, this wouldn’t bother me too much. A music library is something that’s meant to have a GUI, and doesn’t generally lend itself to working from the command line. In this case however, command line access to Rhythmbox would be really handy, because I’d like to set up a hot key on my keyboard that will skip songs or pause playback. Unfortunately, there’s no way to do that within the software, and it doesn’t have any command line arguments that I can call instead. Balls.
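
    If your build happens to ship the rhythmbox-client helper binary (mine apparently doesn’t, so treat its presence as an assumption on my part), the hot-key problem reduces to binding keys to one-liners like these:

    # Assumes the rhythmbox-client helper is installed alongside Rhythmbox.
    rhythmbox-client --play-pause   # toggle playback
    rhythmbox-client --next         # skip to the next song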

There you have it: 10 things that really ruin the Rhythmbox experience. While using this piece of software, I felt like the developers worked really hard to build something that was sort of comparable to Apple’s iTunes, and then stopped trying. That isn’t good enough! If we want to attract users to our platform of choice, and keep them here, we need to give them reasons to check it out, and even more reasons to stick around. When I say that I want the best Linux media player, the emphasis tends to land on the word Linux. Why not just make the best media player, period? GNOME is on at least half of all Linux desktops, if not more. Why hinder it with software that gives people a poor first impression of what Linux is capable of? Seriously guys, let’s step it up.

 

Blast from the Past: Going Linux, Once and for All

December 28th, 2016 No comments

This post was originally published on December 23, 2009. The original can be found here.


With the Linux experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all Linux, all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support I had always enjoyed under Windows was in fact ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives and then link them together with a software RAID. More on that process in a later post, once I figure out how to actually do it.
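
From what I’ve read so far, the process boils down to something like the following – a sketch only, and the device names /dev/sdb and /dev/sdc are placeholders for the two data drives (don’t run this blindly; it will wipe them both):

# Create a software RAID1 array from the two (empty!) data drives.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Put a filesystem on the new array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid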

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use GNOME’s Banshee player instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library for the time being. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the Ubuntu forums for information about a few of the necessary GStreamer plugins.
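
In most cases the fix boils down to installing the ‘ugly’ GStreamer plugin set – on Ubuntu and its derivatives that’s something along these lines, though package names vary by release:

# MP3 support lives in the 'ugly' plugin set (patent-encumbered codecs).
sudo apt-get install gstreamer0.10-plugins-ugly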

For now, my main tasks include setting up my RAID array, getting my ergonomic Bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s VirtualBox so that I can continue to use Microsoft Visual Studio, the only Windows application for which I’ve yet to find a Linux replacement.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.

 

Blast from the Past: Automatically put computer to sleep and wake it up on a schedule

December 26th, 2016 No comments

This post was originally published on June 24, 2012. The original can be found here.


Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
# Work out the next wake-up time (17:00 here) as a UNIX timestamp.
t=`date --date "17:00" +%s`
# Prime sudo's credential cache so the later commands don't stall on a password prompt.
sudo /bin/true
# Arm the RTC wake-up alarm for $t. The -u flag says the hardware clock is in UTC,
# and -m on means rtcwake itself won't suspend the machine.
sudo rtcwake -u -t $t -m on &
# Give rtcwake a moment to finish before suspending.
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time, and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example, I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
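
If you haven’t scheduled a script with cron before, it’s painless. Run crontab -e and add a line like the one below, which fires at midnight – the script path is a placeholder, so point it at wherever you saved yours:

# minute hour day-of-month month day-of-week command
0 0 * * * /home/you/suspend-until-5pm.sh

Since the script calls sudo, you’ll either want to put it in root’s crontab (sudo crontab -e) or configure passwordless sudo for the commands it runs.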

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.

 

Blast from the Past: Linux Saves the Day

December 19th, 2016 No comments

This post was originally published on December 23, 2009. The original can be found here.


Earlier this week I had an experience where using Linux got me out of trouble in a relatively quick and easy manner. The catch? It was kind of Linux’s fault that I was in trouble in the first place.

Around halfway through November my Linux install on my laptop crapped out, and really fucked things up hard. However, my Windows install wasn’t affected, so I started using Windows on my laptop primarily, while switching to an openSUSE VM on my desktop for my home computing needs.

About a week back I decided it was time to reinstall Linux on my laptop, since exams and my 600 hojillion final projects were out of the way. I booted into Win7, nuked the partitions being used by Linux and… went and got some pizza and forgot to finish my install. Turns out I hadn’t restarted my PC anywhere between that day and when shit hit the fan. When I did restart, I was informed to the merry tune of a PC Speaker screech that my computer had no bootable media.

… Well shit.

My first reaction was to try again; my second was to check to make sure the hard drive was plugged in firmly. After doing this a few times, I was so enraged about my lost data that I was about ready to repave the whole drive when I had the good sense to throw in a BartPE live CD and check to see if there was any data left on the drive. To my elation, all of my data was still intact! It was at this precise moment I thought to myself “Oh drat, I bet I uninstalled that darned GRUB bootloader. Fiddlesticks!”

However, all was not lost. I know that Linux is great at finding other OS installs during its own installation and setting them up in GRUB without me having to hunt down a Windows boot point and do it myself. 20 minutes and an openSUSE install later, everything was back to normal on my laptop, Win7 and openSUSE 11.1 included!
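
For the record, I’ve since learned that a full reinstall is overkill for this situation. From any live CD, something along these lines should bring GRUB back – assuming, hypothetically, that the Linux root partition is /dev/sda2 and the boot drive is /dev/sda:

# From a live CD: mount the old root partition, then reinstall GRUB to the MBR.
sudo mount /dev/sda2 /mnt
sudo grub-install --root-directory=/mnt /dev/sda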

As we speak I’m attempting an in-place upgrade to openSUSE 11.2 so hopefully I get lucky and everything goes smoothly!

 

CoreGTK 3.18.0 Released!

December 18th, 2016 No comments

The next version of CoreGTK, version 3.18.0, has been tagged for release! This is the first version of CoreGTK to support GTK+ 3.18.

Highlights for this release:

  • Rebased on GTK+ 3.18
  • New supported GtkWidgets in this release:
    • GtkActionBar
    • GtkFlowBox
    • GtkFlowBoxChild
    • GtkGLArea
    • GtkModelButton
    • GtkPopover
    • GtkPopoverMenu
    • GtkStackSidebar
  • Reverted to using GCC as the default compiler (but clang can still be used)

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

Read more about this release here.

This post originally appeared on my personal website here.

Blast from the Past: An Experiment in Transitioning to Open Document Formats

December 16th, 2016 No comments

This post was originally published on June 15, 2013. The original can be found here.


Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, in which he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open, because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value, relatively speaking, and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically, probably nothing. I hate to sound so down about it, but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well-documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large-scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie-in-the-sky pipe dream for hippies, either. Many large multinational organizations, governments, scientific and statistical groups, and individuals are all beginning to recognize this same issue, and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005, the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, the Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However, they differ enough that 1) they are not interoperable, and 2) software written to import/export one format cannot easily be made to support the other. While OOXML is also an open standard, there have been some concerns about just how open it actually is. For instance, take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates in saying:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its own standard is in comparison. That said, with the forthcoming 2013 version, Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are technologically on more or less equal footing. Initially both had issues and lacked some features, but both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above, there are two different – some would argue competing – open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However, the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that, for me, the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML, and that’s fine too – both attempt to solve the same problem, and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats – like the old .doc, or even the newer Office Open XML .docx, and OpenDocument’s .odt – is far from problem-free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience, however, things are almost never quite that simple. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
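
For the simple cases, the conversion can even be scripted rather than clicked through. LibreOffice can run headless as a batch converter – a minimal sketch, assuming your old documents sit in the current directory:

# Convert every .doc in the current directory to OpenDocument (.odt),
# dropping the results into ./converted.
soffice --headless --convert-to odt --outdir ./converted *.doc

Just don’t expect the tricky cases – those bulleted lists again – to fare any better in batch than they do by hand.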

If, however, you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn’t make the jump to OpenDocument. For me personally, I’m going to attempt to convert my existing .doc documents to OpenDocument (where possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.

 

Blast from the Past: Coming to Grips with Reality

December 12th, 2016 No comments

This post was originally published on December 8, 2009. The original can be found here.


The following is a cautionary tale about putting more trust in the software installed on your system than in your own knowledge.

Recently, while preparing for a big presentation that relied on me running a Java applet in Iceweasel, I discovered that I needed to install an additional package to make it work. This being nothing out of the ordinary, I opened up a terminal, and used apt-cache search to locate the package in question. Upon doing so, my system notified me that I had well over 50 ‘unnecessary’ packages installed. It recommended that I take care of the issue with the apt-get autoremove command.

Bad idea.

On restart, I found that my system was virtually destroyed. It seemed to start X11, but refused to give me either a terminal or a gdm login prompt. After booting into Debian’s rescue mode and messing about in the terminal for some time trying to fix a few circular dependencies and get my system back, I decided that it wasn’t worth my time, backed up my files with an Ubuntu live disk, and reinstalled from a netinst nightly build disk of the testing repositories. (Whew, that was a long sentence)

Unfortunately, just as soon as I rebooted from the install, I found that my system lacked a graphical display manager, and that I could only log in to my terminal, even though I had explicitly told the installer to add GNOME to my system. I headed over to #debian for some help, and found out that the testing repositories were broken, and that my system lacked gdm for some unknown reason. After following their instructions to work around the problem, I got my desktop back, and once more have a fully functioning system.

The moral of the story is a hard one for me to swallow. You see, I have come to the revelation that I don’t know what I’m doing. Over the course of the last 3 months, I have learned an awful lot about running and maintaining a Linux system, but I still lack the ability to fix even the simplest of problems without running for help. Sure, I can install and configure a Debian box like nobody’s business, having done it about 5 times since this experiment started; but I still lack the ability to diagnose a catastrophic failure and recover from it without a good dose of help. I have also realized something that, as a software developer, I already knew and should have been paying attention to when I ran that fatal autoremove command: when something seems wrong, trust your instincts over your software, because they’re usually correct.
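
One habit I’ve picked up since: make apt show its hand before you let it act. A dry run would have listed exactly which ‘unnecessary’ packages were about to disappear:

# Dry run: list what autoremove would uninstall, without actually doing it.
apt-get --simulate autoremove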

This entire experiment has been a huge learning experience for me. I installed an operating system that I had never used before, and eschewed the user-friendly Ubuntu for Debian, a distribution that adheres strictly to free software ideals and isn’t nearly as easy for beginners to use. That done, after a month of experience, I switched over from the stable version of Debian to the testing repositories, figuring that it would net me some newer software that occasionally worked better (especially in the case of Open Office and Gnome Network Manager), and some experience with running a somewhat less stable system. I certainly got what I wished for.

Overall, I don’t regret a thing, and I intend to keep the testing repositories installed on my laptop. I don’t usually use it for anything but note taking in class, so as long as I back it up regularly, I don’t mind if it breaks on occasion; I enjoy learning new things, and Debian keeps me on my toes. In addition, I think that I’ll install Kubuntu on my desktop machine when this whole thing is over. I like Debian a lot, but I’ve heard good things about Ubuntu and its variants, and feel that I should give them a try now that I’ve had my taste of what a distribution that isn’t written with beginners in mind is like. I have been very impressed by Linux, and have no doubts that it will become a major part of my computing experience, if not replacing Windows entirely – but I recognize that I still have a long way to go before I’ve really accomplished my goals.

As an afterthought: If anybody is familiar with some good tutorials for somebody who has basic knowledge but needs to learn more about what’s going on below the surface of a Linux install, please recommend them to me.

 

Blast from the Past: How is it doing that?

December 7th, 2016 No comments

This post was originally published on December 15, 2009. The original can be found here.


Just about everything that I’ve ever read about media playback on Linux has been negative. As I understand the situation, the general consensus of the internet is that Linux should not be relied on to play media of any kind. Further, I know that the other guys have had troubles with video playback in the past.

All of which added up to me being extremely confused when I accidentally discovered that my system takes video playback like a champ. Now, from the outset, you should know that my system is extremely underpowered where high definition video playback is concerned. I’m running Debian testing on a laptop with a 1.73 GHz single-core processor, 758MB of RAM (shared with video), and a 128MB Intel GMA 900 integrated graphics card.

Incredibly enough, it turns out that this humble setup is capable of playing almost every video file that I can find, even with Compiz effects fully enabled and just a base install of the VLC media player.

Most impressively, the machine can flawlessly stream a 1280x528px 1536kb/s *.mkv file over my wireless network.

As a comparison, I have a Windows Vista machine with a 2.3GHz processor, 4GB of RAM, and a 512MB video card upstairs that can’t play the same file without special codecs and the help of a program called CoreAVC. Even with these, it plays the file imperfectly.

I can’t explain how this is possible, but needless to say, I am astounded at the ability of Linux.