Posts Tagged ‘windows’

Distro hopping: Import music stored on NAS into Music

September 19th, 2015 No comments

So you’re running elementary OS and want to access the music files you have stored on a network-attached storage (NAS) device from within the Music program. Unfortunately, while you can easily browse the network and find these files in the file manager, you can’t do so from within Music. Luckily there is a solution to this problem! Borrowing heavily from a previous post, this will walk you through how to set up a persistent media folder on your computer that ‘points’ to the music directory on your NAS.

Step 1) Open up a terminal

Now wasn’t that easy?

Step 2) Install the required software

For the purpose of this post I’m going to assume the NAS is presenting a Windows (SMB/CIFS) file share, so we’ll need the software to be able to make use of it. Simply run the following command to install the needed software:

sudo apt-get install cifs-utils
Installing some software!

Step 3) Create a location for where you want the media to appear

If this is just going to be used for your user account you can simply create a new folder in your home folder; for example, a new folder under the Music folder called “NAS”. However, if you want multiple users to be able to access it, then you’ll want to put it somewhere else (for example /media/NAS).

For my example I’m just going to put it under a new NAS folder inside of my Music folder

Step 4) Edit the fstab file and add the share(s) so that they auto connect on startup

So basically there is a file on your computer called fstab that contains information about all of the hard drives and mounts that the computer should create on boot. To make it so our new NAS folder points to the actual NAS directory we’re going to add a new line to this file telling our computer to do just that. Start by using your terminal and opening that file in an editor. You can use a terminal editor like nano or even a graphical one like Scratch.

To use the terminal editor nano run the following command:

sudo nano /etc/fstab
fstab open in nano

To use the graphical editor Scratch run the following command:

sudo scratch-text-editor /etc/fstab
fstab open in Scratch

On a new line add the following (modifying it according to your system). Note that this should be a single line even though it may appear broken up over multiple lines here:

//<path to server>/<share name>  <path to local directory>  cifs  
guest,uid=<user id to mount files as>,iocharset=utf8  0  0

Breaking it down a little bit:

  • <path to server>: This is the network name or IP address of the computer hosting the share (in my case the NAS). For example it could be an IP address, or a hostname like “MyNas”
  • <share name>: This is the name of the share on that computer. For example I set up my NAS to share different directories one of which was called “Files”
  • <path to local directory>: This is where you want the remote files to appear locally. For example if you want them to appear in a folder under your Music directory you could do something like “/home/tyler/Music/NAS”. Just make sure that the directory exists (that’s why we created it above :)).
  • <user id to mount files as>: This defines the permissions to give the files. On elementary OS (as well as other Ubuntu distributions) the first user you create is usually given uid 1000, so you could put “1000” here. To find out the uid of any user, run the command “id <user>” in the terminal without quotes (see the example right after this list).
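For example, checking the uid of the user “tyler” from this walkthrough would look something like this (the exact groups listed will vary from system to system):

id tyler
uid=1000(tyler) gid=1000(tyler) groups=1000(tyler)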

As an example, the line I added for my configuration was:

//  /home/tyler/Music/NAS  cifs  
guest,uid=1000,iocharset=utf8  0  0

Now save the file.

Step 5) Test that it worked

Finally, in the terminal, we’re going to run a command to actually test it:

sudo mount -a

This will do essentially the same thing that happens when your computer first boots so if this works it should work the next time you restart as well. If you don’t get any errors then congratulations it should have all worked! You can verify by now opening up your NAS folder and confirming that it shows the contents of your actual NAS directory.

We have music!

Step 6) Import the music into Music

Now that we have the NAS music showing up in a local folder the Music application will be able to add it no problem. Simply open up Music and use the import option to import the music from your folder (in my case ~/Music/NAS).



This post is part of a series:

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

CoreGTK 3.10.1 Released!

September 8th, 2015 No comments

The next version of CoreGTK, version 3.10.1, has been tagged for release today.

Highlights for this release:

  • Added some missing (varargs) GTK+ functions. This makes it easier to create widgets like the FileChooserDialog.

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

CoreGTK 3.10.0 Released!

August 20th, 2015 No comments

The next version of CoreGTK, version 3.10.0, has been tagged for release today.

Highlights for this release:

  • Move from GTK+ 2 to GTK+ 3
  • Prefer the use of glib data types over boxed OpenStep/Cocoa objects (i.e. gint vs NSNumber)
  • Base code generation on GObject Introspection instead of a mix of automated source parsing and manual correction
  • Support for GTK+ 3.10

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

CoreGTK 2.24.0 Released!

August 4th, 2014 No comments

The initial version of CoreGTK, version 2.24.0, has been tagged for release today.

Features include:

  • Targets GTK+ 2.24
  • Support for GtkBuilder
  • Can be used on Linux, Mac and Windows

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

You can find more information about the project here and the release itself here.

This post originally appeared on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Big distributions, little RAM 6

July 9th, 2013 3 comments

It’s that time again where I install the major, full desktop distributions onto a limited hardware machine and report on how they perform. Once again I’ve decided to re-run my previous tests, this time using the following distributions:

  • Fedora 18 (GNOME)
  • Fedora 18 (KDE)
  • Fedora 19 (GNOME)
  • Fedora 19 (KDE)
  • Kubuntu 13.04 (KDE)
  • Linux Mint 15 (Cinnamon)
  • Linux Mint 15 (MATE)
  • Mageia 3 (GNOME)
  • Mageia 3 (KDE)
  • OpenSUSE 12.3 (GNOME)
  • OpenSUSE 12.3 (KDE)
  • Ubuntu 13.04 (Unity)
  • Xubuntu 13.04 (Xfce)

I even happened to have a Windows 7 (64-bit) VM lying around and, while I think you would be a fool to run a 64-bit OS on the limited test hardware, I’ve included it as a sort of benchmark.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.2.16, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between July 1st, 2013 and July 5th, 2013 so your results may not be identical.


Just as before I have compiled a series of bar graphs to show you how each installation stacks up against the others. This time around, however, I’ve changed how things are measured slightly in order to be more accurate. Measurements (on Linux) were taken using the free -m command for memory and the df -h command for disk usage. On Windows I used Task Manager and Windows Explorer.
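For anyone looking to replicate the numbers, the readings come straight from those standard tools (a sketch; note that the ‘-/+ buffers/cache’ row printed by 2013-era versions of free has since been replaced by an ‘available’ column in newer releases):

free -m
df -h /

The buffers/cache figures in the graphs below presumably correspond to that ‘-/+ buffers/cache’ row of free’s output.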

In addition, this is the first time I’m providing the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for the link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or because I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, some distributions don’t appear on all of the graphs; for example, because I was using an existing Windows 7 VM I didn’t have a ‘first boot’ result for it. As always, feel free to run your own tests. Thirdly, you may be asking yourself ‘why do both Fedora 18 and 19 make the list?’ Basically because I had already run the tests for 18 when 19 happened to be released. Finally, Fedora 19 (GNOME), while included, does not have any data because I simply could not get it to install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.


All Data Points



Buffers/Cache Only


RAM – Buffers/Cache

Swap Usage

RAM – Buffers/Cache + Swap

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.


All Data Points





RAM – Buffers/Cache



RAM – Buffers/Cache + Swap

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

All Data Points





RAM – Buffers/Cache

Swap Usage


RAM – Buffers/Cache + Swap

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Install Size


Once again I will leave the conclusions to you. This time however, as promised above, I will provide my source data for you to plunder… er, enjoy.

Source Data

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article about Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, in which he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task, it is however one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and to avoid potentially losing its position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they differ enough that 1) they are not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are technologically on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what’s needed in a document format.
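You can even see the ZIP-container structure mentioned above for yourself. Pointing unzip at a hypothetical document.odt (the same trick works on a .docx) lists the XML files inside:

unzip -l document.odt

Typical entries include mimetype, content.xml, styles.xml, meta.xml and META-INF/manifest.xml – all plain, documented XML, which is exactly what makes these formats attractive for long-term storage.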

What to do?

As discussed above there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML, and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience, however, things are almost never quite as simple as that. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.

If however you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn’t make the jump to OpenDocument. For me personally, I’m going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.
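As a side note, recent versions of LibreOffice can also do this kind of conversion in bulk from the command line, which beats opening and re-saving files one at a time – a sketch (the binary may be named soffice or libreoffice depending on the install):

soffice --headless --convert-to odt --outdir converted/ *.doc

As with the wizards mentioned above, the results still deserve a manual once-over for the formatting issues described earlier.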

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Fix for mount error(12): Cannot allocate memory

October 2nd, 2011 16 comments

Do you have the following situation:

  • You’ve got a share on Windows (XP, Vista, 7) that you’re trying to access from a Linux system, in this case Ubuntu.
  • Mounted through /etc/fstab or directly through the command line.
  • Initially, it works great, but then loses the mountpoint – you’ll go to, say, /mnt/server/mountpoint but there are no directory contents. “mount” shows the path as still mounted.
  • umount’ing the directory and then trying to remount it provides this gem of a message:
    mount error(12): Cannot allocate memory
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Of course, since you’re probably a reasonable system administrator, you go and check the memory allotment. top looks fine and nothing else on the system is complaining.

The solution, kindly provided by Alan LaMielle’s blog, gives a registry fix on the Windows side of things. In case that link ever breaks, here is the summary of what needs to happen on the Windows system:

  • In HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, set the LargeSystemCache key to 1 (hex).
  • In HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters, set the Size key to 3 (hex).
  • Restart the “Server” service and its dependencies (on my Windows 7 box, these were “Computer Browser” and “Homegroup Listener”, and I had to restart the service twice for the dependencies to also come back up). Alternatively you can just restart the Windows system, as you’re probably due for a large set of updates anyway. (A command-line version of these steps follows this list.)
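For the registry-averse, the same changes can be applied from an elevated Command Prompt – a sketch of the equivalent commands using the standard reg and net tools (as always with registry edits, proceed carefully):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f
net stop LanmanServer /y
net start LanmanServer

Note that net start LanmanServer won’t automatically restart the dependent services that the /y flag stopped, so those may need to be started again by hand (or just reboot, as suggested above).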

Then re-run the mount command (for entries defined in /etc/fstab, use sudo mount -a) and your shares should be restored to their former glory.

Categories: Jake B

Create a GStreamer powered Java media player

March 14th, 2011 1 comment

For something to do I decided to see if I could create a very simple Java media player. After doing some research, and finding out that the Java Media Framework was no longer in development, I decided to settle on GStreamer to power my media player.

GStreamer for the uninitiated is a very powerful multimedia framework that offers both low-level pipeline building as well as high-level playback abstraction. What’s nice about GStreamer, besides being completely open source, is that it presents a unified API no matter what type of file it is playing. For instance if the user only has the free, high quality GStreamer codecs installed, referred to as the good plugins, then the API will only play those files. If however the user installs the other plugins as well, be it the bad or ugly sets, the API remains the same and thus you don’t need to update your code. Unfortunately being a C library this approach does have some drawbacks, notably the need to include the JNA jar as well as the system specific libraries. This approach can be considered similar to how SWT works.


Assuming that you already have a Java development environment, the first thing you’ll need to do is install GStreamer. On Linux odds are you already have it, unless you are running a rather stripped down distro or don’t have many media players installed (both Rhythmbox and Banshee use GStreamer). If you don’t, it should be pretty straightforward to install along with your choice of plugins. On Windows you’ll need to head over to ossbuild where they have downloadable installers.

The second thing you’ll need is gstreamer-java, which you can grab over at their website here. You’ll need to download both gstreamer-java-1.4.jar and jna-3.2.4.jar. Both might contain some extra files that you probably don’t need and can prune out later if you’d like. Set up your development environment so that both of these jar files are in your build path.

Simple playback

GStreamer offers highly abstracted playback engines called PlayBins. This is what we will use to actually play our files. Here is a very simple code example that demonstrates how to actually make use of a PlayBin:

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File(args[0]));
     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);
}


So what does it all mean?

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

The above line takes the incoming command line arguments and passes them to the Gst.init function, and returns a new set of arguments. If you have ever done any GTK+ programming before this should be instantly recognizable to you. Essentially what GStreamer is doing is grabbing, and removing, any GStreamer specific arguments before your program actually processes them.

     Playbin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));

The first line of code requests a standard “AudioPlayer” PlayBin. This PlayBin is built right into GStreamer and automatically sets up a default pipeline for you. Essentially this lets us avoid all of the low-level craziness that we would have to normally deal with if we were starting from scratch.

The next line sets the PlayBin’s VideoSink – think of sinks as output locations – to a “fakesink”, or null sink. The reason we do this is because PlayBins can play both audio and video. For the purposes of this player we only want audio playback, so we automatically redirect all video output to the “fakesink”.

     playbin.setInputFile(new File(args[0]));

This line is pretty straightforward and just tells GStreamer what file to play.


     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);

Finally, with the above lines of code we tell the PlayBin to actually start playing and then enter the GStreamer main loop, which continues for the duration of playback. The last line is used to reset the PlayBin state and do some cleanup.
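If you drop the complete example into a file called MyMediaPlayer.java (a hypothetical name matching the Gst.init call above), compiling and running it against the two jars looks something like this – a sketch; on Windows the classpath separator is ‘;’ rather than ‘:’:

javac -cp gstreamer-java-1.4.jar:jna-3.2.4.jar MyMediaPlayer.java
java -cp gstreamer-java-1.4.jar:jna-3.2.4.jar:. MyMediaPlayer song.mp3

Here song.mp3 stands in for whatever file you want to play.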

Bundle it with a quick GUI

To make it a little more friendly I wrote a very quick GUI to wrap all of the functionality with. The download links for that (binary only package), as well as the source (all package), are below. And there you have it: a very simple cross-platform media player that will play back pretty much anything you throw at it.

Please note that I have provided this software purely as a quick example. If you are really interested in developing a GStreamer powered Java application you would do yourself a favor by reading the official documentation.

                 Binary Only Package    All Package
File name:
Version:         March 13, 2011
File size:       1.5MB                  1.51MB
File download:   Download Here          Download Here

Originally posted on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Create a GTK+ application on Linux with Objective-C

December 8th, 2010 8 comments

As a sort of follow-up-in-spirit to my older post, I decided to share a really straightforward way to use Objective-C to build GTK+ applications.


Objective-C is an improvement to the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple OS X and iOS based platforms. It can be used with or without a large standard library (sometimes referred to as the Foundation Kit library) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machines (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately while there used to exist a couple of different Objective-C bindings for GTK+, all of them are quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.

Setting it all up

In order to make this work we will be creating two Objective-C classes, one that will house our GTK+ window and another that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.


In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
 * This class is responsible for initializing the GTK render loop
 * as well as setting up the GUI for the user. It also handles all GTK
 * callbacks for the winMain GtkWindow.
 */
@interface MainWindow : NSObject
{
    //The main GtkWindow
    GtkWidget *winMain;
    GtkWidget *button;
}

/*
 * Constructs the object and initializes GTK and the GUI for the
 * application.
 * *********************************************************************
 * Input
 * *********************************************************************
 * argc (int *): A pointer to the arg count variable that was passed
 *               in at the application start. It will be returned
 *               with the count of the modified argv array.
 * argv (char *[]): A pointer to the argument array that was passed in
 *                  at the application start. It will be returned with
 *                  the GTK arguments removed.
 * *********************************************************************
 * Returns
 * *********************************************************************
 * MainWindow (id): The constructed object or nil
 * argc (int *): The modified input int as described above
 * argv (char *[]): The modified input array as described above
 */
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
 * Frees the Gtk widgets that we have control over
 */
-(void)destroyWidget;

/*
 * Starts and hands off execution to the GTK main loop
 */
-(void)startGtkMainLoop;

/*
 * Example Objective-C function that prints some output
 */
-(void)printSomething;

@end

/*
 * C callback functions
 */

/*
 * Called when the user closes the window
 */
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
 * Called when the user presses the button
 */
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);



For the class’ actual code file, fill it in as shown below. This class will create a GTK+ window with a single button and will react to both the user pressing the button and closing the window.

#import "MainWindow.h"

/*
 * For documentation see MainWindow.h
 */

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
    //call parent class' init
    if (self = [super init]) {
        //setup the window
        winMain = gtk_window_new (GTK_WINDOW_TOPLEVEL);

        gtk_window_set_title (GTK_WINDOW (winMain), "Hello World");
        gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

        //setup the button
        button = gtk_button_new_with_label ("Push me!");

        gtk_container_add (GTK_CONTAINER (winMain), button);

        //connect the signals
        g_signal_connect (winMain, "destroy", G_CALLBACK (on_MainWindow_destroy), NULL);
        g_signal_connect (button, "clicked", G_CALLBACK (on_btnPushMe_clicked), NULL);

        //force show all
        gtk_widget_show_all (winMain);

        //assign C-compatible pointer
        myMainWindow = self;
    }

    //return pointer to this object
    return self;
}

-(void)startGtkMainLoop
{
    //start gtk loop
    gtk_main ();
}

-(void)printSomething
{
    NSLog(@"Printed from Objective-C's NSLog function.");
    printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget
{
    myMainWindow = NULL;

    if(GTK_IS_WIDGET (button))
    {
        //clean up the button
        gtk_widget_destroy (button);
    }

    if(GTK_IS_WIDGET (winMain))
    {
        //clean up the main window
        gtk_widget_destroy (winMain);
    }
}

-(void)dealloc
{
    //free the widgets before the object goes away
    [self destroyWidget];

    [super dealloc];
}

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
    //exit the main loop
    gtk_main_quit ();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
    printf("Button was clicked\n");

    //call Objective-C function from C function using global object pointer
    [myMainWindow printSomething];
}

@end


To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

//create an AutoreleasePool
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

//init gtk engine
gtk_init(&argc, &argv);

//set up GUI
MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

//begin the GTK loop
[mainWindow startGtkMainLoop];

//free the GUI
[mainWindow release];

//drain the pool
[pool release];

//exit application
return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to the porting of both GTK+ and GNUstep to Windows, this same code will also produce a cross-platform application that works on both Mac OSX and Windows.

Source Code Downloads

Source Only Package
File name:
File hashes: Download Here
File size: 2.4KB
File download: Download Here

Originally posted on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).

Compile Windows programs on Linux

September 26th, 2010 No comments

Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out “gcc” in your normal compile commands with either “i586-mingw32msvc-gcc” (for 32-bit) or “amd64-mingw32msvc-gcc” (for 64-bit). So for example if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs as well. Take the following code as an example

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test to make sure it worked properly by running the program through wine like

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.
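As one more sanity check, the file utility will identify what kind of binary the cross-compiler produced (a sketch – the exact wording of the output varies between versions):

file Program.exe
Program.exe: PE32+ executable (GUI) x86-64, for MS Windows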

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Categories: Tyler B, Ubuntu

Using KDE on Windows

February 11th, 2010 2 comments

Since the end of The Linux Experiment I have started dual booting my laptop, switching between Kubuntu 9.10 and Windows 7 as needed. While this solves all of my compatibility issues, it does pose some more annoying issues. For example after setting up one operating system just the way I like it I now need to do the same for the other. Furthermore after becoming used to using particular applications under Linux I now have to find alternatives for Windows. Well no more! The KDE guys and gals have ported the libraries to Windows!


To install KDE on Windows all you need to do is head over to the KDE on Windows initiative site and grab a copy of the installer exe. This will more or less walk you through the initial setup and then present you with a list of packages you can choose to install. Most applications are there, including things like KTorrent, Konqueror, Konversation and more! Simply select them and watch as they are easily installed.

Image Walkthrough

The first screen you'll see when installing

The package list

kdebase-apps includes things like Konqueror

The installer downloads the source and compiles it locally

After installing the applications show up right in your start menu

The final result. Konqueror and KWrite running on Windows

I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Categories: Free Software, KDE, Tyler B

Going Linux, Once and for All

December 23rd, 2009 7 comments

With the Linux Experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all Linux all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support that I had always enjoyed under windows was in fact a ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives, and then link them together with a software RAID. More on that process in a later post, once I figure out how to actually do it.

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use Gnome’s Banshee player for the time being instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library for the time being. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the ubuntu forums for information about a few of the necessary GStreamer plugins.

For now, my main tasks include setting up my RAID array, getting my ergonomic bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s Virtual Box so that I can continue to use Microsoft Visual Studio, the only Windows application that I’ve yet to find a Linux replacement for.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.

On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.

Setting up some Synergy

October 1st, 2009 3 comments

Last night I was able to set up a neat little program that I think you should all know about! Synergy allows you to set up two or more computers so that they all share one keyboard and one mouse. Even better it works cross platform (i.e. Windows and Linux can both share the same mouse and keyboard).


You need to install synergy on all machines involved. I will only go over the Fedora instructions here. The first thing I did was do a quick yum search for synergy.

yum search synergy

This spit back the following results:

== Matched: synergy ==
quicksynergy.x86_64 : Share keyboard and mouse between computers
synergy.x86_64 : Mouse and keyboard sharing utility
synergy-plus.x86_64 : Mouse and keyboard sharing utility

As you can see in the list above it appears as though the package synergy.x86_64 is the only one I really need so I went and installed it.

sudo yum install synergy

This quickly finished but left me scratching my head. There was no application entry for synergy and not even a man page in the terminal. Looking back at the original search results I figured synergy-plus must be additional features for the base synergy application and that maybe quicksynergy was some sort of automated or easier to use version of synergy. So I installed that.

sudo yum install quicksynergy

I then set up my synergy server – the computer that would be sharing its mouse and keyboard with the others – and defined where the monitors would go.

As you can see I have set up my Fedora computer (XPS) to extend the monitor to the left of my Windows machine
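As an aside, had the server been the Linux machine instead, the same layout could be described in a synergy.conf file and served with the synergys command – a minimal sketch, using a hypothetical WINPC as the Windows machine’s screen name:

# XPS sits to the left of WINPC
section: screens
	XPS:
	WINPC:
end

section: links
	XPS:
		right = WINPC
	WINPC:
		left = XPS
end

Starting the server is then just synergys -f --config synergy.conf (-f keeps it in the foreground so you can watch the log).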

Next I jumped back over to my Fedora laptop and launched QuickSynergy. After a bit of tinkering I found out that the Share tab is used if this computer is going to be the server, while the Use tab is for a client. I tried entering the hostname in the text field but that wouldn’t work for whatever reason. It wasn’t until I entered the IP address of the server that things started working.

QuickSynergy on Fedora

And now for the pièce de résistance. Here is my desktop computing experience!

3 monitors, 2 machines, 1 keyboard & mouse. Sorry for the poor picture quality.


It’s not cheating to use a Windows machine. I needed it to do work. As far as I can tell Linux doesn’t have Visual Studio 2008 with VB.NET support… yet 😉

A minor setback

September 28th, 2009 2 comments

Since this crazy job of mine doesn’t quite feed my mad electronics fetish as much as I might like to, I do a lot of computer troubleshooting on the side… it helps pay the bills, and is a nice way to stay on my toes as far as keeping on top of possible threats out there (since our company’s firewall keeps them out for the most part).  I’ll usually head to a person’s house, get some stuff done, and if it’s still in rough shape (requires a full backup and format) I’ll bring the machine home.

Yesterday, I headed over to my former AVP (Assistant Vice-President, for those of you not in the know)’s house to get her wireless network running and troubleshoot problems with her one desktop, as well as get file and printer sharing working between two machines.  Her wireless router is a little bit old – a D-Link DI-524 – but it’s something I’ve dealt with before.

After a firmware upgrade, the option to use WPA-PSK encryption was made available (as opposed to standard WEP before).  Great, I thought!  I go to put in a key, hit Apply, and…

Nothing.  Hitting the Apply button does absolutely nothing.  Two computer and router restarts (including a full reset) later, and the same thing was happening.  Some quick research indicated that, hooray hooray, there was an incompatibility with that router’s administration page, Java, and Firefox.  Solution?  Use Internet Explorer.

Here’s where I really ran into a pickle.  This is the first time I’ve ever felt the disadvantage of using a non-Windows operating system.  If I had Windows, I would have been able to fire up IE and just get everything going for them.  Instead, I had to try and install IE6 for Linux, which failed (Wine threw some kind of error).  I ended up using one of my client’s laptops, which they thankfully had sitting around.  Frustrating, but it was easy enough to work around.

Has anyone else had experiences like this?  Things that are *just* out of reach for you because of your choice to use Linux over Windows?

Gaaaaaaaaaaaay(mes) for Linux

September 26th, 2009 5 comments

Ever the Windows enthusiast, I’ve always been deeply involved in the world of PC gaming.  It’s something I’ve always loved to do, and I’ve been through it all – from the early days of Minesweeper and Solitaire, to the casual gaming market of Elastomania and Peggle, to the full-on phase of Bioshock, Halo, Civilization (all of them), and – sadly, yes – World of Warcraft.

Needless to say, I love gaming on computers.  Always have, always will.  I’ve never been a hardcore console man, but I’ve been known to dabble in Nintendo’s awesome selection (SUPER MARIO GALAXY WHAT) every once in a while.  So to say that gaming on Linux would be important to me is just about the understatement of the century.

I had heard a while back that Unreal Tournament III (UT3) was going to be ported to Linux, after being released to the rest of the world about two years ago.  This game has always interested me, mostly because I get to fire ludicrous weapons and blow up aliens again and again and again.  No such luck in Linux, it would seem – the ‘port’ is still under development.

A quick search of ‘gaming in linux’ on Google spits back a modest fifty million results, so you KNOW I’m not the only person interested in doing something like this.  Several of my former WoW buddies (I kicked the habit) played in Linux with impressive results, and it’s been something I’ve wanted to emulate ever since we all started this experiment.  While I have yet to sit down and attempt the installation of a legitimate Windows-only game into Fedora, I have made a selection of a few free (and some open-source!) games I’ve been keeping occupied with in the meantime.  Hope you enjoy!

  • Nexuiz – a free, open-source first-person cross-platform shooter (runs on Windows, Linux and OS/X)
  • Scorched3D – a 3D update of one of my favourite games of all time, Scorched Earth
  • Armacycles-AD – already covered by Tyler, this game is addictive as hell

Any other suggestions you might have would be fantastic!  Next up is trying to get some Steam games running…

Mounting an NTFS-formatted External Drive

September 20th, 2009 7 comments

I have a Western Digital 250GB NTFS-formatted external hard drive that I use primarily to store backups of my Windows machine. Since I’m away from my house for a couple of days, I used the drive to bring along some entertainment, but encountered some troubles getting Debian Lenny to play nice with it:

After searching around for a bit, I found a helpful thread on the Ubuntu forums that explained that this problem could be caused by a few different things. First, with the drive plugged in, I ran

sudo fdisk -l

from the terminal, which brought up a summary of all disks currently recognized by the machine:

jon@debtop:/$ sudo fdisk -l
Disk /dev/sda: 40.0 GB, 40007761920 bytes
255 heads, 63 sectors/track, 4864 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xcccdcccd
 Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          31      248976   83  Linux
/dev/sda2              32        4864    38821072+  83  Linux

Disk /dev/dm-0: 39.7 GB, 39751725568 bytes
255 heads, 63 sectors/track, 4832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 38.0 GB, 38067503104 bytes
255 heads, 63 sectors/track, 4628 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/dm-2: 1681 MB, 1681915904 bytes
255 heads, 63 sectors/track, 204 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x5b6ac646

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       30401   244196001    7  HPFS/NTFS

Judging by the size of the drives, I figured out that the OS saw my drive at the location /dev/sdb, and the partition that I wanted to mount (the only partition on the drive) at the location /dev/sdb1.

Now, to determine why Linux wasn’t mounting the drive, I checked the fstab file at /etc/fstab to see if there was some other entry for sdb that was preventing it from mounting correctly:

# /etc/fstab: static file system information.
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/mapper/debtop-root /               ext3    errors=remount-ro 0       1
/dev/sda1       /boot           ext2    defaults        0       2
/dev/mapper/debtop-swap_1 none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

Since there was no entry there that should have overwritten sdb, I gave up on that line of inquiry, and decided to try manually mounting the drive. I know that Debian can read ntfs drives using the -t ntfs argument for the mount command, so I navigated over to the /media/ directory and created a folder to mount the drive in:

jon@debtop:/$ cd /media/
jon@debtop:/media$ sudo mkdir WesternDigital
jon@debtop:/media$ ls
cdrom  cdrom0  floppy  floppy0  WesternDigital
jon@debtop:/media$ sudo mount -t ntfs /dev/sdb1 /media/WesternDigital/
jon@debtop:/media$ sudo -s
root@debtop:/media# cd WesternDigital
root@debtop:/media/WesternDigital# ls
KeePass.kdbx  nws  $RECYCLE.BIN  System Volume Information

As you can see, the contents of my external drive were now accessible in the location where they ought to have been if Debian had correctly mounted the drive when it was plugged in. The only caveat to the process is that the mount function is available only to root users, meaning that the mountpoint was created by root, and my user account lacks the necessary permissions to read or write to the external drive:


I figured that this issue could be solved by using chmod to grant all users read and write permissions to the mountpoint:

root@debtop:/media# chmod +rw WesternDigital
chmod: changing permissions of `WesternDigital': Read-only file system

Well what the hell does that mean? According to this post (again on the Ubuntu forums), the ntfs support in Linux is experimental, and as such, all ntfs drives are mounted as read only. Specifically, this drive is owned by the root user, and has only read and execute permissions, but lacks write permissions.

According to this thread on the forums, there is another ntfs driver for Linux called ntfs-3g that will allow me full access to my ntfs-formatted drive. After successfully adding the ntfs-3g drivers to my system, I dismounted the drive, and attempted to re-mount it with the following command:

mount -t ntfs-3g /dev/sdb1 /media/WesternDigital

This time, the mount command appeared to almost work, but I got an error message along the way, indicating that the drive had not been properly dismounted the last time it was used on Windows, and giving me the option to force the mount:

Mount is denied because NTFS is marked to be in use. Choose one action:

Choice 1: If you have Windows then disconnect the external devices by
 clicking on the 'Safely Remove Hardware' icon in the Windows
 taskbar then shutdown Windows cleanly.

Choice 2: If you don't have Windows then you can use the 'force' option for
 your own responsibility. For example type on the command line:

 mount -t ntfs-3g /dev/sdb1 /media/WesternDigital -o force

Well, since I didn’t have a Windows box lying about that I could use to dismount the drive properly, I took a shot at using the force option. After warning me again that it was resetting the log file and forcing the mount, the machine finally mounted my drive with full permissions for the owner, group, and other users!

drwxrwxrwx 1 root root  4096 2009-09-18 15:40 WesternDigital

After a couple of manual tests, I confirmed that both my user account and the root user had full read/write/execute access to this drive, and that I could use it like any other drive that the system has access to. Further, thanks to the painful XBMC install process, I already had the codecs required to play all of the TV shows that I brought along.
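One last note: to avoid repeating the force-mount dance after every reboot, the mount can be made permanent with an /etc/fstab line along these lines (a sketch – since device names like /dev/sdb1 can shift between boots, looking up the partition’s UUID with blkid and using UUID=... instead is more reliable):

/dev/sdb1  /media/WesternDigital  ntfs-3g  defaults  0  0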

vpnc and me

September 17th, 2009 4 comments

After a brief hiatus from making posts (I document my daily trials all day at work, so it’s not usually the first thing I want to do when I get home) I’ve decided to make a beneficial post about how I can now do WORK (from home) on my Fedora 11-based laptop.  Hooray!

At the corporation where I work, our network and firewall infrastructure is – of course – Cisco-based.  Naturally, in order to connect to our corporate network from home, we use Cisco’s own VPN Client.  For distribution to various users across the company, my workplace has provided discs with pre-configured installations of this client, all set and ready to go to connect to our corporate network.  This prevents the dissemination of unnecessary information (VPN IP addresses, etc.) across the ranks, and makes it much easier for the non-savvy user to get connected.

I’ve already had a bit of experience using this client on my Windows Vista and Windows 7-based computers.  Unfortunately for me, the Cisco VPN Client we use at work only operates in a 32-bit Windows environment… meaning that on Windows Vista, I had to run a full-fledged copy of Virtual PC with a Windows XP installation.  In Windows 7, I was fortunate enough to be able to use its own built-in Windows XP Mode.

Trial and Error

My first thought for getting this software working under Fedora 11 was probably the simplest – run it in Wine!  I’ve had limited experience with Wine in the past, but figured that it was probably my best bet for getting the Windows-only Cisco client functioning.  Unfortunately for me, attempting to install the program in Wine only resulted in a TCP/IP stack error, so that was out of the question.

My next thought – slightly better than the first – came when it was announced that I could nab a copy of the Linux version of the Cisco VPN Client from work.  As luck would have it, it’s a bitch of a program to compile and install, and I had to stop myself short of throwing my laptop into the middle of our busy street before I finally gave up.

Better Ideas

At this point, I was just about ready to try anything that could possibly get VPN connectivity working on my laptop.  Luckily, a quick Google search for ‘Cisco VPN Linux’ shot back the wondrous program that is vpnc.  After seeing various people’s success with vpnc – a fully Linux-compatible Cisco VPN client equivalent – I did a bit of reading up on the documentation and quickly installed it (as root) using yum:

$ yum install vpnc.x86_64

There, easy enough.  Further reading on vpnc indicated that, if desired, I could store my VPN settings for work in a file known as default.conf, located in the /etc/vpnc directory.  Opening up the config file included with the Windows version of the client, I pretty much copied everything over verbatim:

$ cd /etc/vpnc
$ nano default.conf

IPSec gateway [corporate VPN address]
Xauth username [domain ID]
Xauth password [domain password]
Domain [corporate domain]

From there, I wrote the changes out to default.conf (Ctrl+O in nano) and saved my information.  The only complaint I have about this step is that everything in this file is stored as plain text, with no encryption whatsoever.  Since we are using a WPA2-encrypted wireless network and the VPN tunnel itself is secured, I wasn’t too concerned – but still.
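If the plain-text password bothers you as much as it bothers me, one small mitigation (a sketch; vpnc itself runs as root, so this shouldn’t break anything) is to make the file readable only by root:

$ su -c 'chmod 600 /etc/vpnc/default.conf'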

At this point, I was ready to test vpnc connectivity.  Typing the following at the terminal

$ vpnc default.conf

I was rewarded with a triumphant ‘vpnc started in background’.  Hooray!  But what to do from here – how to connect to my work computer?  On Windows, I just use Remote Desktop… so logic following through as it does, I typed:

$ rdesktop [computername].[domain]

Instantly, I was showered in the beauty that was a full-screen representation of my Windows XP Professional-based work computer.

A shot of vpnc running in terminal, and my desktop running in rdesktop.

It certainly was not as easy a process as I’m making it out to be here – indeed, I had to figure out that I needed to append .[domain] to my computer name, and to let vpnc’s traffic through the Fedora firewall by finding its ports with a netstat command in the terminal and then opening them accordingly – but I am now connected to work flawlessly, using open-source software.
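For anyone stuck on that same firewall step, the commands look roughly like this (a sketch to be run as root; UDP 500 for ISAKMP and UDP 4500 for NAT traversal are the usual IPSec suspects, but confirm with netstat on your own machine before opening anything):

netstat -unp | grep vpnc
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT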

I am currently running Gnome 2.26 on top of Fedora 11 (Leonidas). Check out my profile for more information.
Categories: Dana H, Fedora, Linux

Alien, OpenPGP & Wine

September 6th, 2009 No comments

Now that the horrors of installation and setup are a part of the past, I have been spending my time delving deep into the desktop and its applications. I would like to briefly touch upon three of these.

Alien
One of the first things you figure out after you install your distribution of choice is which package manager it uses. Now I’m not talking about Synaptic, mintInstall, or KPackageKit, but rather the packaging format, commonly RPM or DEB. While both of these are excellent, they do create problems when you want to install software that only comes in the format your distribution doesn’t use. This is where alien comes in. Alien is a small command-line program that converts packages from one format to the other, so I can download a .deb file and use alien to convert it into Fedora’s native .rpm format. It’s simple and works great.
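The actual conversion is a one-liner (a sketch; run it as root or under fakeroot so file ownership in the new package comes out right, and note that the .deb name here is just a stand-in):

alien --to-rpm some-package.deb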

OpenPGP
As I am a bit of a privacy nut I have been using Pretty Good Privacy for a while now to secure my e-mail and attachments. My mail client of choice makes this very easy through the use of the Enigmail add-on. What’s even better is that Fedora, like most if not all Linux distributions, already ships with the program gpg. GnuPG is a command-line application that implements OpenPGP, the open source, fully compatible version of PGP. This means that no matter which program you are using on your system they can all access the same PGP keys seamlessly! I have taken the extra step of generating a GPG key for my e-mail account here, tyler at, which you can find under my page (under Guinea Pigs at the top). I highly recommend anyone who is the least bit computer savvy set themselves up a key and upload it to a key server. It takes about 1 minute and is very easy to do!
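If you want to follow suit, the whole thing really is about a minute’s work (a sketch; the keyserver below is just one common public option, and the key ID is a placeholder for your own):

gpg --gen-key
gpg --keyserver pgp.mit.edu --send-keys [your key ID]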

Wine
Wine, or Wine Is Not an Emulator, is a Linux program that can run a lot of Windows programs by tricking them into thinking they are running on a Windows machine. While I wouldn’t recommend it for everything, Wine is quite powerful and can get you out of a pinch. You can run Windows programs simply by opening a terminal and typing

wine [path to exe]
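If you don’t have an .exe handy, Wine actually ships with its own built-in copy of Notepad, so a quick smoke test is as simple as:

wine notepad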


Notepad running thanks to Wine

The Linux File System Explained

August 3rd, 2009 1 comment

From one Linux newbie to another: read up on the basic file system organization of a Linux machine here. It’s a very basic overview of where the system puts certain types of files, but it’s a good starting point for anybody who (like me) is trying to wrap their Windows-centric head around a new operating system.

Categories: Jon F, Linux

So Many Fruity Flavours…

July 30th, 2009 No comments

I think that I’m the only member of the group with absolutely zero experience with Linux. Sure, I’ve used TightVNC to check the status of an Ubuntu-based file server, and I may even have dropped a live CD into my machine once or twice before in vain attempts to save my files from a bricked Windows install, but I have roughly zero actual experience with any of the distributions. Due to my lack of knowledge and the antique laptop that I’ll likely be using during the experiment, I’ve decided to stick to one of the more popular distributions to ensure ease of use and a wide base of drivers to draw from. So far, the Top Ten Distributions page over at DistroWatch has been very helpful, and I’ve managed to narrow my choice down to just a few of the hundreds of available flavours of Linux (ordered by my current preference):

  • Debian: Over 1000 developers, 20 000 packages, and no corporate backing – the definition of open source community development
  • Fedora: Strictly adheres to the free software philosophy; used by Linus Torvalds himself (If that ain’t street cred…)
  • openSUSE: A pretty-looking desktop, with corporate backing from Novell

While doing my research, I have purposely avoided Ubuntu Linux and its variants, as they seem to be “the” distribution of choice these days. To really get a taste of what it’s like to make the switch from Windows with zero previous experience, I’ve decided to stay away from Ubuntu. It’s just too common, and I’m as non-conformist as can be.