Archive

Posts Tagged ‘windows’

Big distributions, little RAM 6

July 9th, 2013

It’s that time again: the time when I install the major, full desktop distributions on a machine with limited hardware and report on how they perform. Once again I’ve re-run my previous tests, this time using the following distributions:

  • Fedora 18 (GNOME)
  • Fedora 18 (KDE)
  • Fedora 19 (GNOME)
  • Fedora 19 (KDE)
  • Kubuntu 13.04 (KDE)
  • Linux Mint 15 (Cinnamon)
  • Linux Mint 15 (MATE)
  • Mageia 3 (GNOME)
  • Mageia 3 (KDE)
  • OpenSUSE 12.3 (GNOME)
  • OpenSUSE 12.3 (KDE)
  • Ubuntu 13.04 (Unity)
  • Xubuntu 13.04 (Xfce)

I even happened to have a Windows 7 (64-bit) VM lying around and, while I think you would be a fool to run a 64-bit OS on the limited test hardware, I’ve included it as a sort of benchmark.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.2.16, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between July 1st, 2013 and July 5th, 2013 so your results may not be identical.

Results

Just as before, I have compiled a series of bar graphs to show you how the installations stack up against one another. This time around, however, I’ve changed how things are measured slightly in order to be more accurate. Measurements (on Linux) were taken using the free -m command for memory and the df -h command for disk usage. On Windows I used Task Manager and Windows Explorer.
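For anyone wanting to reproduce the numbers, here is roughly how the graph categories map onto free -m output (the values below are illustrative only, not taken from the actual test runs):

$ free -m
             total       used       free     shared    buffers     cached
Mem:           496        478         18          0         22        180
-/+ buffers/cache:        276        220
Swap:          511          4        507

‘RAM’ corresponds to the used column of the Mem: row, ‘Buffers/Cache’ to the buffers and cached columns combined, ‘RAM – Buffers/Cache’ to the used value of the -/+ buffers/cache: row, and ‘Swap Usage’ to the used column of the Swap: row.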

In addition, this is the first time I’m providing the results file as a download, so you can see exactly what the numbers were or create your own custom comparisons (see below for the link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s either because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or because I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, some distributions don’t appear on all of the graphs; for example, because I was using an existing Windows 7 VM I didn’t have a ‘first boot’ result for it. As always, feel free to run your own tests. Thirdly, you may be asking yourself ‘why do both Fedora 18 and 19 make the list?’ Basically because I had already run the tests for 18 when 19 happened to be released. Finally, Fedora 19 (GNOME), while included, does not have any data because I simply could not get it to install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

[Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap Usage; RAM – Buffers/Cache + Swap]

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

[Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap; RAM – Buffers/Cache + Swap]

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

[Graphs: All Data Points; RAM; Buffers/Cache; RAM – Buffers/Cache; Swap; RAM – Buffers/Cache + Swap]

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

[Graph: Install Size]

Conclusion

Once again I will leave the conclusions to you. This time, however, as promised above, I am providing my source data for you to enjoy.

Source Data





An Experiment in Transitioning to Open Document Formats

June 15th, 2013

Recently I read an interesting article about Vint Cerf, best known as one of the men behind the TCP/IP protocol that underpins modern Internet communication, in which he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. Beyond this minor inconvenience, it also raises the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past from media that allow a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value, relatively speaking, and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically, probably nothing. I hate to sound so down about it, but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large-scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of the ability to interpret our current formats decreases. This solution isn’t just a pie-in-the-sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups, and individuals are beginning to recognize this same issue, and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, the Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format, or at least a supported format, in more than 25 different countries and many more regions and cities.

Not to be outdone, and to avoid potentially losing its position as the creator of the dominant office document format, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However, they differ enough that 1) they are not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard, there have been some concerns about just how open it actually is. For instance, take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much easier time handling OpenDocument. The flip side is that while Microsoft Office can open and save the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its own standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think the two standards are technologically on more or less equal footing. Initially both had issues and lacked some features, but both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above, there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However, the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders, like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that, for me, the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML, and that’s fine too – both attempt to solve the same problem, and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats – like the old .doc, or even the newer Office Open XML .docx, and OpenDocument’s .odt – is far from problem-free. For most things the conversion process should be as simple as opening the current document and re-saving it in OpenDocument format – there are even wizards that will automate this process for you across a large number of documents. In my experience, however, things are almost never quite that simple. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
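As an aside, if you have a lot of files to convert, LibreOffice can also do the conversion in bulk from the command line. Something like the following should work (a sketch only – the paths are placeholders, and you’ll still want to inspect the results by hand):

soffice --headless --convert-to odt --outdir ~/converted ~/Documents/*.doc

Just be aware that this runs the same converter as the GUI, so any bulleted-list problems will carry over.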

If, however, you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn’t make the jump to OpenDocument. Personally, I’m going to attempt to convert my existing .doc documents to OpenDocument (where possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.





Fix for mount error(12): Cannot allocate memory

October 2nd, 2011

Do you have the following situation:

  • You’ve got a share on Windows (XP, Vista, 7) that you’re trying to access from a Linux system, in this case Ubuntu.
  • Mounted through /etc/fstab or directly through the command line.
  • Initially, it works great, but then loses the mountpoint – you’ll go to, say, /mnt/server/mountpoint but there are no directory contents. “mount” shows the path as still mounted.
  • umount’ing the directory and then trying to remount it provides this gem of a message:
    mount error(12): Cannot allocate memory
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Of course, since you’re probably a reasonable system administrator, you go and check the memory allotment. top looks fine and nothing else on the system is complaining.

The solution, kindly provided by Alan LaMielle’s blog, gives a registry fix on the Windows side of things. In case that link ever breaks, here is the summary of what needs to happen on the Windows system:

  • In HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, set the LargeSystemCache key to 1 (hex).
  • In HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters, set the Size key to 3 (hex).
  • Restart the “Server” service and its dependencies (on my Windows 7 box, these were “Computer Browser” and “Homegroup Listener”, and I had to restart the service twice for the dependencies to also come back up.) Alternatively you can just restart the Windows system as you’re probably due for a large set of updates anyway.
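If you would rather not edit the registry by hand, the same two changes can be applied by merging a .reg file like the one below (my own transcription of the settings above, so double-check the paths before using it). Save it as something like lanman-fix.reg, double-click it, then restart the services as described above.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"LargeSystemCache"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"Size"=dword:00000003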

Then re-run the mount command (for entries defined in /etc/fstab, use sudo mount -a) and your shares should be restored to their former glory.
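For reference, a typical cifs entry in /etc/fstab looks something like this (the server, share, mountpoint and credentials file are all placeholders – adjust to match your setup):

//server/share /mnt/server/mountpoint cifs credentials=/etc/cifs-credentials,iocharset=utf8 0 0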


Create a GStreamer powered Java media player

March 14th, 2011

For something to do I decided to see if I could create a very simple Java media player. After doing some research, and finding out that the Java Media Framework was no longer in development, I decided to settle on GStreamer to power my media player.

GStreamer, for the uninitiated, is a very powerful multimedia framework that offers both low-level pipeline building and high-level playback abstraction. What’s nice about GStreamer, besides being completely open source, is that it presents a unified API no matter what type of file it is playing. For instance, if the user only has the free, high-quality GStreamer codecs installed – referred to as the good plugins – then the API will only play those files. If the user installs the other plugin sets as well, be it the bad or ugly sets, the API remains the same and you don’t need to update your code. Unfortunately, GStreamer being a C library, this approach does have some drawbacks, notably the need to include the JNA jar as well as the system-specific libraries. This approach can be considered similar to how SWT works.

Setup

Assuming that you already have a Java development environment, the first thing you’ll need is to install GStreamer. On Linux, odds are you already have it, unless you are running a rather stripped-down distro or don’t have many media players installed (both Rhythmbox and Banshee use GStreamer). If you don’t, it should be pretty straightforward to install along with your choice of plugins. On Windows you’ll need to head over to ossbuild where they have downloadable installers.

The second thing you’ll need is gstreamer-java, which you can grab over at their website here. You’ll need to download both gstreamer-java-1.4.jar and jna-3.2.4.jar. Both might contain some extra files that you probably don’t need and can prune out later if you’d like. Set up your development environment so that both of these jar files are in your build path.
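If you’re not using an IDE, getting the two jars onto your classpath at compile and run time looks roughly like this on Linux (swap the : separator for ; on Windows, and adjust the paths to wherever you saved the jars):

javac -cp .:gstreamer-java-1.4.jar:jna-3.2.4.jar MyMediaPlayer.java
java -cp .:gstreamer-java-1.4.jar:jna-3.2.4.jar MyMediaPlayer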

Simple playback

GStreamer offers highly abstracted playback engines called PlayBins. This is what we will use to actually play our files. Here is a very simple code example that demonstrates how to actually make use of a PlayBin:

// imports per gstreamer-java 1.4 - the package layout may differ in other releases
import java.io.File;

import org.gstreamer.ElementFactory;
import org.gstreamer.Gst;
import org.gstreamer.State;
import org.gstreamer.elements.PlayBin;

public class MyMediaPlayer {
    public static void main(String[] args) {
        // let GStreamer strip out its own command line arguments
        args = Gst.init("MyMediaPlayer", args);

        PlayBin playbin = new PlayBin("AudioPlayer");
        // discard any video output; we only want audio
        playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
        playbin.setInputFile(new File("song.mp3"));

        playbin.setState(State.PLAYING);
        Gst.main(); // blocks until the main loop quits
        playbin.setState(State.NULL);
    }
}

So what does it all mean?

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

The above line takes the incoming command line arguments, passes them to the Gst.init function and returns a new set of arguments. If you have ever done any GTK+ programming before this should be instantly recognizable to you. Essentially what GStreamer is doing is grabbing, and removing, any GStreamer-specific arguments before your program actually processes them.

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile(new File("song.mp3"));

The first line of code requests a standard “AudioPlayer” PlayBin. This PlayBin is built right into GStreamer and automatically sets up a default pipeline for you. Essentially this lets us avoid all of the low-level craziness that we would normally have to deal with if we were starting from scratch.

The next line sets the PlayBin’s VideoSink (think of sinks as output locations) to a “fakesink”, or null sink. The reason we do this is because PlayBins can play both audio and video. For the purposes of this player we only want audio playback, so we redirect all video output to the “fakesink”.

The last line is pretty straightforward and just tells GStreamer what file to play. Note that gstreamer-java’s setInputFile takes a java.io.File rather than a plain string.

     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);

Finally, with the above lines of code we tell the PlayBin to actually start playing and then enter the GStreamer main loop. This loop continues for the duration of playback. The last line resets the PlayBin’s state and does some cleanup.

Bundle it with a quick GUI

To make it a little more friendly I wrote a very quick GUI to wrap all of the functionality. The download links for that (the binary-only package), as well as the source (the all package), are below. And there you have it: a very simple cross-platform media player that will play back pretty much anything you throw at it.

Please note that I have provided this software purely as a quick example. If you are really interested in developing a GStreamer powered Java application you would do yourself a favor by reading the official documentation.

Binary Only Package
File name: my_media_player_binary.zip
Version: March 13, 2011
File size: 1.5MB
File download: Download Here

All Package
File name: my_media_player_all.zip
Version: March 13, 2011
File size: 1.51MB
File download: Download Here

Originally posted on my personal website here.





Create a GTK+ application on Linux with Objective-C

December 8th, 2010

As a sort of follow-up-in-spirit to my older post, I decided to share a really straightforward way to use Objective-C to build GTK+ applications.

Objective-what?

Objective-C is a superset of the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple’s OS X and iOS platforms. It can be used with or without a large standard library (sometimes referred to as the Foundation Kit) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machine (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately, while there used to be a couple of different Objective-C bindings for GTK+, all of them are quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.

Setting it all up

In order to make this work we will be creating one Objective-C class, which will house our GTK+ window, plus a main file that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.

MainWindow.h

In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
* This class is responsible for initializing the GTK render loop
* as well as setting up the GUI for the user. It also handles all GTK
* callbacks for the winMain GtkWindow.
*/
@interface MainWindow : NSObject
{
    //The main GtkWindow
    GtkWidget *winMain;
    GtkWidget *button;
}

/*
* Constructs the object and initializes GTK and the GUI for the
* application.
*
* *********************************************************************
* Input
* *********************************************************************
* argc (int *): A pointer to the arg count variable that was passed
* in at the application start. It will be returned
* with the count of the modified argv array.
* argv (char *[]): A pointer to the argument array that was passed in
* at the application start. It will be returned with
* the GTK arguments removed.
*
* *********************************************************************
* Returns
* *********************************************************************
* MainWindow (id): The constructed object or nil
* argc (int *): The modified input count as described above
* argv (char *[]): The modified input array modified as described above
*/
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
* Frees the Gtk widgets that we have control over
*/
-(void)destroyWidget;

/*
* Starts and hands off execution to the GTK main loop
*/
-(void)startGtkMainLoop;

/*
* Example Objective-C function that prints some output
*/
-(void)printSomething;

/*
********************************************************
* C callback functions
********************************************************
*/

/*
* Called when the user closes the window
*/
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
* Called when the user presses the button
*/
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);

@end

MainWindow.m

For the class’s actual code file, fill it in as shown below. This class will create a GTK+ window with a single button and will react both to the user pressing the button and to the window being closed.

#import "MainWindow.h"

/*
* For documentation see MainWindow.h
*/

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
    //call parent class' init
    if (self = [super init]) {

        //setup the window
        winMain = gtk_window_new (GTK_WINDOW_TOPLEVEL);

        gtk_window_set_title (GTK_WINDOW (winMain), "Hello World");
        gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

        //setup the button
        button = gtk_button_new_with_label ("Push me!");

        gtk_container_add (GTK_CONTAINER (winMain), button);

        //connect the signals
        g_signal_connect (winMain, "destroy", G_CALLBACK (on_MainWindow_destroy), NULL);
        g_signal_connect (button, "clicked", G_CALLBACK (on_btnPushMe_clicked), NULL);

        //force show all
        gtk_widget_show_all(winMain);
    }

    //assign C-compatible pointer
    myMainWindow = self;

    //return pointer to this object
    return self;
}

-(void)startGtkMainLoop
{
    //start gtk loop
    gtk_main();
}

-(void)printSomething{
    NSLog(@"Printed from Objective-C's NSLog function.");
    printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget{

    myMainWindow = NULL;

    if(GTK_IS_WIDGET (button))
    {
        //clean up the button
        gtk_widget_destroy(button);
    }

    if(GTK_IS_WIDGET (winMain))
    {
        //clean up the main window
        gtk_widget_destroy(winMain);
    }
}

-(void)dealloc{
    [self destroyWidget];

    [super dealloc];
}

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
    //exit the main loop
    gtk_main_quit();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
    printf("Button was clicked\n");

    //call Objective-C function from C function using global object pointer
    [myMainWindow printSomething];
}

@end

main.m

To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

    //create an AutoreleasePool
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

    //init gtk engine
    gtk_init(&argc, &argv);

    //set up GUI
    MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

    //begin the GTK loop
    [mainWindow startGtkMainLoop];

    //free the GUI
    [mainWindow release];

    //drain the pool
    [pool release];

    //exit application
    return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to the porting of both GTK+ and GNUstep to Windows, this same code will also produce a cross-platform application that works on both Mac OSX and Windows.

Source Code Downloads

Source Only Package
File name: objective_c_gtk_source.zip
File hashes: Download Here
File size: 2.4KB
File download: Download Here

Originally posted on my personal website here.





Compile Windows programs on Linux

September 26th, 2010

Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out "gcc" in your normal compile commands with either "i586-mingw32msvc-gcc" (for 32-bit) or "amd64-mingw32msvc-gcc" (for 64-bit). So for example, if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs. Take the following code as an example

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test that it worked properly by running the program through Wine:

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.





Using KDE on Windows

February 11th, 2010

Since the end of The Linux Experiment I have started dual booting my laptop, switching between Kubuntu 9.10 and Windows 7 as needed. While this solves all of my compatibility issues, it does pose some annoying issues of its own. For example, after setting up one operating system just the way I like it, I then need to do the same for the other. Furthermore, after becoming used to particular applications under Linux, I have to find alternatives for Windows. Well, no more! The KDE guys and gals have ported the KDE libraries to Windows!

Installing

To install KDE on Windows all you need to do is head over to http://windows.kde.org/download.php and grab a copy of the installer exe. This will more or less walk you through the initial setup and then present you with a list of packages you can choose to install. Most applications are there, including KTorrent, Konqueror, Konversation and more! Simply select them and watch as they are easily installed.

Image Walkthrough

  • The first screen you'll see when installing
  • The package list
  • kdebase-apps includes things like Konqueror
  • The installer downloads the source and compiles it locally
  • After installing, the applications show up right in your start menu
  • The final result: Konqueror and KWrite running on Windows





Going Linux, Once and for All

December 23rd, 2009

With The Linux Experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all Linux, all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support that I had always enjoyed under Windows was in fact ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives and then link them together with software RAID. More on that process in a later post, once I figure out how to actually do it.
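From what I’ve read so far, the software RAID setup will probably look something like the following mdadm commands – consider this a sketch only, since the device names are guesses and I haven’t actually run it yet:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf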

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use GNOME’s Banshee player for the time being instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the Ubuntu forums for information about a few of the necessary GStreamer plugins.

For now, my main tasks include setting up my RAID array, getting my ergonomic Bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s VirtualBox so that I can continue to use Microsoft Visual Studio, the only Windows application for which I’ve yet to find a Linux replacement.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.





Setting up some Synergy

October 1st, 2009

Last night I was able to set up a neat little program that I think you should all know about! Synergy allows you to set up two or more computers so that they all share one keyboard and one mouse. Even better, it works cross-platform (i.e. Windows and Linux can both share the same mouse and keyboard).

Setup

You need to install synergy on all machines involved. I will only go over the Fedora instructions here. The first thing I did was do a quick yum search for synergy.

yum search synergy

This spit back the following results:

== Matched: synergy ==
quicksynergy.x86_64 : Share keyboard and mouse between computers
synergy.x86_64 : Mouse and keyboard sharing utility
synergy-plus.x86_64 : Mouse and keyboard sharing utility

As you can see in the list above it appears as though the package synergy.x86_64 is the only one I really need so I went and installed it.

sudo yum install synergy

This quickly finished but left me scratching my head. There was no application entry for synergy and not even a man page in the terminal. Looking back at the original search results, I figured synergy-plus must be additional features for the base synergy application, and that maybe quicksynergy was some sort of automated or easier-to-use version of synergy. So I installed that.

sudo yum install quicksynergy

I then set up my synergy server – the computer that would be sharing its mouse and keyboard with the others – and defined where the monitors would go.

As you can see I have set up my Fedora computer (XPS) to extend the monitor to the left of my Windows machine

Next I jumped back over to my Fedora laptop and launched QuickSynergy. After a bit of tinkering I found out that the Share tab is for when this computer is going to be the server, and the Use tab is for when it is a client. I tried entering the hostname in the text field but that wouldn’t work for whatever reason. It wasn’t until I entered the IP address of the server that things started working.

QuickSynergy on Fedora
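Incidentally, if you would rather skip the GUI entirely, plain synergy can describe the same layout with a synergy.conf on the server, started with synergys -c synergy.conf. Something like this should match my setup (a sketch only – ‘WINBOX’ is a stand-in for whatever your Windows machine’s screen name actually is):

section: screens
    XPS:
    WINBOX:
end

section: links
    XPS:
        right = WINBOX
    WINBOX:
        left = XPS
end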

And now for the pièce de résistance. Here is my desktop computing experience!

3 monitors, 2 machines, 1 keyboard & mouse. Sorry for the poor picture quality.

P.S.

It’s not cheating to use a Windows machine. I needed it to do work. As far as I can tell, Linux doesn’t have Visual Studio 2008 with VB.NET support… yet ;)

A minor setback

September 28th, 2009

Since this crazy job of mine doesn’t quite feed my mad electronics fetish as much as I might like, I do a lot of computer troubleshooting on the side… it helps pay the bills, and is a nice way to stay on my toes as far as keeping on top of possible threats out there (since our company’s firewall keeps them out for the most part). I’ll usually head to a person’s house, get some stuff done, and if it’s still in rough shape (requires a full backup and format) I’ll bring the machine home.

Yesterday, I headed over to the house of my former AVP (Assistant Vice-President, for those of you not in the know) to get her wireless network running and troubleshoot problems with one of her desktops, as well as get file and printer sharing working between two machines. Her wireless router is a little bit old – a D-Link DI-524 – but it’s something I’ve dealt with before.

After a firmware upgrade, the option to use WPA-PSK encryption was made available (as opposed to standard WEP before).  Great, I thought!  I go to put in a key, hit Apply, and…

Nothing.  Hitting the Apply button does absolutely nothing.  Two computer and router restarts (including a full reset) later, and the same thing was happening.  Some quick research indicated that, hooray hooray, there was an incompatibility with that router’s administration page, Java, and Firefox.  Solution?  Use Internet Explorer.

Here’s where I really ran into a pickle. This is the first time I’ve ever felt the disadvantage of using a non-Windows operating system. If I had Windows, I would have been able to fire up IE and just get everything going for them. Instead, I had to try to install IE6 for Linux, which failed (Wine threw some kind of error). I ended up using one of my client’s laptops, which they thankfully had sitting around. Frustrating, but it was easy enough to work around.

Has anyone else had experiences like this?  Things that are *just* out of reach for you because of your choice to use Linux over Windows?