Archive

Posts Tagged ‘Linux’

Installing Netflix on Kubuntu

July 27th, 2013 4 comments

The machine I am running Kubuntu on is primarily used for streaming media like Netflix and YouTube, watching files from a shared server and downloading media.

I decided to try to install Netflix first since it is something I use quite often. I am engrossed in watching the first season of Orange is the New Black and the last season of The West Wing.

Again, I resorted to Googling exactly what I was looking for and came across this fantastic post.

I opened a Terminal instance in Kubuntu and literally copied and pasted the text from the link above.
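
For the record, the commonly circulated recipe at the time was the netflix-desktop package from the ehoover/compholio PPA. I can't promise that is exactly what the linked post has you paste, but it goes roughly like this:

sudo add-apt-repository ppa:ehoover/compholio
sudo apt-get update
sudo apt-get install netflix-desktop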

After going through these motions, I had a functioning instance of Netflix! Woo hoo.

So I decided to throw on an episode of Orange is the New Black. It loaded perfectly… without sound.

Well shit! I never even thought to see if my audio driver had been picked up… so I guess I should probably go ahead and fix that.

What is this, text for ants? Part II

July 27th, 2013 No comments

Back to my shit-tastic eyesight for a moment.

Now that we have our Bluetooth devices installed, I can sit in front of my projector, instead of in the closet, to fiddle with the font scaling.

We will want to go through the process of pulling up the System Settings again. Why don’t we refer to this image… again.

Computer Tab

The next step is to select Application Appearance, which looks like this.

System Settings Fonts

This will bring you into this menu where you will select Fonts from the toolbar on the left hand side.

Fonts

In the next screen you can change the font settings. There is a nice option in here that you can select to change all the fonts at once… spoiler, it is called “Adjust all fonts”. This is what I used to change the fonts to a size that my blind ass could see from the couch without squinting too much.

You can also force font DPI and select anti-aliasing, as you can see below. For the most part, this has made it possible for me to see what the hell is going on on my screen.

For my next adventure, I will be trying to get Netflix to work, which I have heard is actually pretty simple.

Fonts

Installing Bluetooth devices on Kubuntu

July 27th, 2013 No comments

This is actually a much easier process than I imagined it would be.

First: Ensure your devices (mouse, headphones, keyboard, etc…) are charged and turned on.

Next click on the “Start” menu icon in the bottom left of the desktop screen.

Then click on the “Computer” icon along the bottom, followed by System Settings.

Computer Tab

This will take you into the System Settings folder where you can change many things. Here we will select Bluetooth, since that is the type of device we want to install.

Bluetooth Menu

I took these pictures after I successfully installed my wireless USB keyboard and mouse. So you know I am not bullshitting about this process actually working.

Like most Bluetooth devices, mine have a red “Connect” button on the bottom. Ignore the sweet, sweet compulsion to press that button. I’m convinced it is nearly useless. Instead, use the “Add devices” method, as seen here.

Add Device

More awesome Photoshop.

Now, if you followed my first instruction (charge and turn on your Bluetooth devices) you should see them appear in this menu. Select the item you would like to add and click next. This will prompt you to enter a PIN on the device you wish to install (if installing a keyboard), or it will just add your device. If you have done this process successfully, your device will show up in the device menu. If it does not, you fucked up.

 

Linux: does it work for workers who work in the workplace?

July 27th, 2013 No comments

In the ramp-up to the 2013 Linux Experiment, I got ambitious and decided to try not only FreeBSD as my official entry, but also to install one or more versions of Linux at the office (so take that, anyone who says "Well, FreeBSD isn't Linux!" I'm aware.)

There are a number of reasons I wanted to check out Linux in an office environment, and was able to consider this secondary experiment:

  • Most of my work is Linux-based already. We have moved away from Windows-based systems fairly drastically since 2011, and there is minimal Windows administration effort. The much more common presence of professionally managed Windows virtual machines means that I can use tools like rdesktop if a Windows UI is absolutely required. Having a built-in SSH client is one of the reasons I picked a MacBook Pro for a corporate laptop, and Linux distributions offer the same ssh packages.
  • I have the good fortune to have multiple corporate-issued systems available on short notice. If the experiment goes poorly, I'm only down for ten minutes to reconnect a Windows or OS X-based system. I can then resume my remote tasks through the diligent use of screen and multiple SSH tunnels (see the sketch after this list).
  • Another point in favour is that most IT support is now self-directed for software issues; there is a large (and growing) Linux user community internally and corporate documentation now tends to indicate proper server names and connection information rather than “just use Outlook”.
  • Finally, there’s an easy way to back out if something goes wrong – it’s possible to reimage a laptop and rejoin it to the corporate domain without engaging technical support. I don’t keep files locally and my key configuration files are all backed up on a remote Git server, so getting back to Windows 7 wouldn’t be too hard at all.
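
For the curious, that screen-plus-tunnels recovery boils down to something like the following, with the hostnames and ports made up purely for illustration:

ssh -t -L 8443:internal-webapp:443 me@jumphost 'screen -dRR work'

This reattaches to (or creates) a screen session named "work" on the jump host while also forwarding local port 8443 to an internal web UI.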

Hopefully with this adventure I'll be better able to contribute internally to the Linux user community and, appropriately redacted, share the trials and tribulations of running Linux (mostly) full time in the workplace. Wish me luck!




I am currently running Ubuntu 14.04 LTS for a home server, with a mix of Windows, OS X and Linux clients for both work and personal use.
I prefer Ubuntu LTS releases without Unity - XFCE is much more my style of desktop interface.
Check out my profile for more information.

What is this, text for ants? Part I

July 26th, 2013 No comments

Unlike many people who may be installing a version of Linux, I am doing so on a machine that has a projector with a 92″ screen as its main display.

So, upon initial installation of Kubuntu, I couldn't see ANY of the text on the desktop; it was itty bitty.

Font for Ants

I can’t even read this standing inches away.

In order to fix this, I had to hook up an additional display.

Thankfully, living in a house with a computer guru, I had many to choose from.

In order to get my secondary display to appear, I had to first plug it into the display port on the machine I am using. I then had to turn off the current display (projector) and reboot the machine so that it would initialize the use of my new monitor.

Sounds easy enough, and it was, albeit with some gentle guidance from Jake B.

From here, I am able to properly configure my display.

The thing I am enjoying most about Kubuntu so far is that it is very user friendly. It is almost intuitive where each setting can be found in the menus.

So these are the steps I followed to change my display configuration.

I went into Menu > Computer > System Settings

Computer Tab

Check out my sweet Photoshop Skills. I may have taken this picture with a potato.

Once you get into the System Settings folder, you have the option to change a lot of things. For example, your display resolution.

System Settings

Looks a lot like the OSX System Preferences layout.

Now that you are in this menu, you will want to select Display and Monitor from the options. Here you can set your resolution, monitor priority, mirroring, and multiple displays. Since I will only be using this display on the Projector, I ensured that the resolution was set so that I could read the text properly on the Projector Screen. Before disabling my secondary monitor, I also set up my Bluetooth keyboard and mouse, which I will talk about in another post.
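
As an aside, if the screen ever becomes unreadable again, the same changes can be made from a terminal with xrandr. A rough sketch (the output names below are made up; run plain xrandr first to see what yours are called):

xrandr
xrandr --output HDMI-0 --mode 1280x720 --primary
xrandr --output VGA-0 --off

The first command lists the connected outputs and their supported modes, the second sets the projector's resolution, and the third turns the temporary monitor back off.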

This process only took a few moments. I will still have to tweak the font scaling, as I have shit-tastic eyesight.

Experience Booting Linux Using the Windows 7 Bootloader

July 26th, 2013 2 comments

Greetings everyone! It has been quite some time since my last post. As you'll be able to read from my profile (and signature), I have decided to run ArchLinux for the upcoming experiment. As of yet, I'm not sure what my contributions to the community will be; however, there will be more on that later.

One of the interesting things I wanted to try this time around was to get Linux to boot from the Windows 7 bootloader. The basic principle here is to take the first 512 bytes of your /boot partition (with GRUB installed) and place it on your C:\ as linux.bin. From there, you use BCDEdit in Windows to add it to your bootloader. When you boot Windows, you will be prompted to either start Windows 7 or Linux. If you choose Linux, GRUB will be launched.
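
To make that a little more concrete, here is a sketch of those steps with the partition and the BCD entry ID left as placeholders (this shows the general technique, not necessarily the exact commands from the wiki). On the Linux side, assuming /boot lives on /dev/sda1:

sudo dd if=/dev/sda1 of=linux.bin bs=512 count=1

Then copy linux.bin to C:\ and, from an elevated command prompt on Windows, run:

bcdedit /create /d "Linux" /application bootsector
bcdedit /set {ID} device partition=C:
bcdedit /set {ID} path \linux.bin
bcdedit /displayorder {ID} /addlast

where {ID} is the GUID printed by the first bcdedit command.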

Before I go into my experience, I just wanted to let you know that I was not able to get it working. It’s not that it isn’t possible, but for the sake of being able to boot into ArchLinux at some point during the experiment, I decided to install GRUB to the MBR and chainload the Windows bootloader.

I started off with this article from the ArchLinux wiki, which basically explains the process above in more detail. What I failed to realize was that this article was meant to be used when both OSes are on the same disk. In my case, I have Windows running on one disk, and Linux on another.

According to this article on Eric Hameleers’ blog, the Windows 7 Bootloader does not play well with loading operating systems that reside on a different disk. Eric goes into a workaround for this in the article. The proposed solution is to have your /boot partition reside on the same disk as Windows. This way, the second stage of GRUB will be properly loaded, and GRUB will handle the rest properly.

Although I could attempt the above, I don't really want to be resizing my Windows partition at this point, and it will be much easier for me to install GRUB to the MBR on my Linux disk, and have that disk boot first. That way, if I decide to get rid of Linux later, I can change the boot order, and the Windows bootloader will have remained untouched.
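
For reference, the GRUB side of that plan is just a chainloader entry pointing at the Windows disk. Assuming GRUB legacy and that Windows sits on the first partition of the second BIOS disk (adjust to your layout), the menu.lst entry looks something like this, including the drive remapping Windows expects when it is not on the first disk:

title Windows 7
map (hd0) (hd1)
map (hd1) (hd0)
rootnoverify (hd1,0)
makeactive
chainloader +1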

Besides, while I was investigating this approach, I received a lot of ridicule from #archlinux for trying to use the Windows bootloader.

09:49 < AngryArchLinuxUser555> uhm, first 512bytes of /boot is pretty useless
09:49 < AngryArchLinuxUser555> unless you are doing retarded things like not having grub in mbr
(username changed for privacy)

For the record, I was not attempting this because I think it’s a good idea. I do much prefer using GRUB, however, this was FOR SCIENCE!

If I ever do manage to boot into ArchLinux, I will be sure to write another post.


I am currently running ArchLinux (x86_64).
Check out my profile for more information.

This isn’t going well.

July 26th, 2013 No comments

Today I started out by going into work, only to discover that it is NEXT Friday that I need to cover.

So I came home and decided to get a jump start on installing Kubuntu.

I am now at a screeching halt because the hardware I am using has Win8 installed on it and when I boot into the Start Up settings, I lose the ability to use my keyboard. This is going swimmingly.

So, it is NOW about 3 hours later.

In this time, I have cursed, yelled, felt exasperated and been downright pissed.

This is mainly because Windows 8 does not make it easy to get to the boot loader. In fact, the handy Windows-made video that is supposed to walk you through how EASY and user-friendly the process of changing system settings is fails to mention what to do if the "Use a Device" option is nowhere to be found (as it was in my case).

So I relied on Google, which is usually pretty good about answering questions about stupid computer issues. I FINALLY came across one post that stated that, due to how quickly Windows 8 boots, there is no time to press F2 or F8. However, I tried anyway. F8 is the key to selecting what device you want to boot from, as you will see later in this post.

What you will want to do if installing any version of Linux is to first format a USB stick to hold your Linux distro. I used Universal USB Installer. The nice thing about this loader is that you don't have to already have the .iso for the distro you want to use downloaded. You have the option of downloading right in the program.

After you have selected your distro, downloaded the .iso and loaded it onto your USB stick, now comes the fun part. Plug your USB stick into the computer you wish to load Linux onto.

Considering how easy this was once I figured it all out, I do feel rather silly. If I were to have to do it again, I would feel much more knowledgeable.

If you are using balls-ass Windows 8, like I was, the EASIEST way to select an alternate device to boot from is to restart the computer and press F8 a billion times until a menu pops up, letting you choose from multiple devices. Choose the device with the name of the USB stick; for me it was PENDRIVE.

Then press enter, using a keyboard that is attached directly to the computer via USB cable, because apparently Win8 loses the ability to use wireless USB devices before the OS has fully booted… at least that was my experience.

So now, I am being prompted to install Kubuntu (good news, I already know it supports my projector, because I can see this happening).

Now, I have had to plug in a USB wired keyboard and mouse for this process so far. This makes life a little bit difficult because the computer I am using sits in a closet, too far away from my projector screen. This makes it almost impossible for me to see what is going on on the screen. So installing the drivers for my wireless USB devices is a bit of a pain.

However, the hard part is over. The OS is installed successfully. My next post will detail how the hell to install wireless USB devices. I will probably also make a fancy signature, so you all know what I am running.

Come on, really?!

July 25th, 2013 3 comments

So it is 9:40 PM and I started my “Find a Linux distro to install” process. Like many people, I decided to type exactly what I wanted to search into Google. Literally, I typed “Linux Distro Chooser” into Google. Complex and requiring great technical skill, I know.

My next mission was to pick the site that had a description with the least amount of “sketch”. Meaning, I picked the first site in the Google results. I then used my well honed multiple choice skills (ignore the question, pick B) to find my perfect Linux distro match.

After several pages of clicking through, I was presented with a list of Linux distributions that fit my needs and hardware.

See, a nice list, with percents and everything.


This picture has everything… percents, mints, Man Drivers…

So naturally, I do what everyone does with lists… look at my options and pick the one with the prettiest picture.

For me that distro was Kubuntu. It has a cool sounding name that starts with the same letter as my name.

So I follow the link through to the website to pull the .iso and this pops up.

Fuck Drupal

God damn Drupal!

I have dealt with Drupal before, as it was the platform the website I did data entry for was built on. Needless to say, I hate it. Hey Web Dev with Trev, if you are out there, I hope you burn your toast the next time you make some.

So, to be productive while waiting for Drupal to fix its shit, I decided to start a post and rant. In the time this took, the website for Kubuntu had recovered (for now).

So, I downloaded my .iso and am ready to move it onto a USB stick.

I’m debating whether I want to install it now or later, as I would really like to watch some West Wing tonight. I know that if I start this process and fuck it up, I am going to be forced to move upstairs where there is another TV, but it is small 🙁

Well, here I go, we’ll see how long it takes me to install it. If you are reading this, go ahead and time me… it may be a while.

An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you'll get sent a file that you simply can't open because the modern application you now run has 'lost' the ability to read the format created by the (now) 'ancient' application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We've benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn't to say it is an impossible task; it is, however, one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which would be the OpenDocument standard. This standard has gone on to be the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they differ enough that 1) they are not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much 'better' its standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are technologically on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what's needed in a document format.

What to do?

As discussed above there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple's Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which lets you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can't just be the market leader on PCs and expect everyone else to follow your lead anymore. I don't want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is because of these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML, and that's fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let's say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument's .odt is far from problem free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience however things are almost never quite as simple as that. From what I've seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I've come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it's still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.

If however you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn't make the jump to OpenDocument. For me personally I'm going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Limit Bandwidth Used by apt-get

October 22nd, 2012 No comments

It's easy. Simply throw "-o Acquire::http::Dl-Limit=X" in your apt-get command, where X is the download limit in kilobytes per second. So for example let's say that you want to limit an apt-get upgrade command to roughly 50KB/s of bandwidth. Simply issue the following command:

sudo apt-get -o Acquire::http::Dl-Limit=50 upgrade

Simple right?
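
If you would rather not type the option every time, the same setting can live in an apt configuration snippet so that it applies to every apt-get run. A sketch (the file name is arbitrary):

# /etc/apt/apt.conf.d/75download-limit
Acquire::http::Dl-Limit "50";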




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012 No comments

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
# time to wake back up, as seconds since the epoch (here 17:00 today)
t=`date --date "17:00" +%s`
# prompt for the sudo password now so the rest of the script can run unattended
sudo /bin/true
# set the RTC alarm for $t and leave rtcwake watching in the background (-u: hardware clock is UTC)
sudo rtcwake -u -t $t -m on &
# give rtcwake a moment to finish before suspending
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend which actually puts the computer to sleep. When run the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I've scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work the PC will already be up and running, waiting for you.
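
To illustrate the scheduling piece, assuming the script above is saved as /usr/local/bin/nightly-suspend.sh, a line like this in root's crontab (sudo crontab -e) fires it at midnight every night; running it from root's crontab also avoids any sudo password prompt:

0 0 * * * /usr/local/bin/nightly-suspend.sh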

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Sabayon Linux – Stable if not without polish

April 28th, 2012 3 comments

I have been running Sabayon Linux (Xfce) for the past couple of months and figured I would throw a post up on here describing my experience with it.

Reasons for Running

The reason I tried Sabayon in the first place is that I was curious what it would be like to run a rolling release distribution (that is, a distribution that you install once and that just updates forever with no need to re-install). After doing some research I discovered a number of possible candidates but quickly narrowed it down based on the following reasons:

  • Linux Mint Debian Edition – this is an excellent distribution for many people but for whatever reason every time I update it on my hardware it breaks. Sadly this was not an option.
  • Gentoo – I had previously been running Gentoo and while it is technically a rolling release I never bothered to update it because it just took too long to re-compile everything.
  • Arch Linux – Sort of like Gentoo but with binary packages, I turned this one down because it still required a lot of configuration to get up and running.
  • Sabayon Linux – based on Gentoo but with everything pre-compiled for you. Also takes the ‘just works’ approach by including all of the proprietary and closed source codecs, drivers and programs you could possibly want.

Experience running Sabayon

Sabayon seems to take a change-little approach to packaging applications and the desktop environment. What do I mean by this? Simply that if you install the GNOME, KDE or Xfce versions you will get them how the developers intended – there are very few after-market modifications done by the Sabayon team. That’s not necessarily a bad thing however, because as updates are made upstream you will receive them very quickly thereafter.

This distribution does live up to its promise with the codecs and drivers. My normally troublesome hardware has given me absolutely zero issues running Sabayon, which has been a very nice change compared to some other, more popular distributions (*cough* Linux Mint *cough*). My only problem with Sabayon stems from Entropy (their application installer) being very slow compared to some other such implementations (apt, yum, etc.). This is especially apparent during the weekly system-wide updates, which can result in many, many package updates.

Final Thoughts

For anyone looking for a down to basics, Ubuntu-like (in terms of ease of install and use), rolling release distribution I would highly recommend Sabayon. For someone looking for something a bit more polished or extremely user friendly, perhaps you should look elsewhere. That’s not to say that Sabayon is hard to use, just that other distributions might specialize in user friendliness.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Oh Gentoo

December 22nd, 2011 6 comments

Well it's been a couple of months now since the start of Experiment 2.0 and I've had plenty of time to learn about Gentoo, see its strengths and… sit waiting through its weaknesses. I don't think Gentoo is as bad as everyone makes it out to be; in fact, compared to some other distributions out there, Gentoo doesn't look bad at all.

Now that the experiment is approaching its end I figured it would be a good time to do a quick post about my experiences running Gentoo as a day-to-day desktop machine.

Strengths

Gentoo is exactly what you want it to be, nothing more. Sure there are special meta-packages that make it easy to install things such as the KDE desktop, but the real key is that you don’t need to install anything that you don’t want to. As a result Gentoo is fast. My startup time is about 10-20 seconds and, if I had the inclination to do so, could be trimmed down even further through optimization.

Packages are also compiled with your own set of custom options and flags so you get exactly what you need, optimized for your exact hardware. Being a more advanced (read: expert) oriented distribution, it will also teach you quite a bit about Linux and software configuration as a whole.

Weaknesses

Sadly Gentoo is not without its faults. As mentioned above Gentoo can be whatever you want it to be. The major problem with this strength in practice is that the average desktop user just wants a desktop that works. When it takes days of configuration and compilation just to get the most basic of programs installed it can be a major deterrent to the vast majority of users.

Speaking of compiling programs, I find this aspect of Gentoo interesting from a theoretical perspective but I honestly have a hard time believing that it makes enough of a difference to make it worth sitting through the hours, even days, of compiling it takes just to get some things installed. It's so bad that I actually haven't bothered to re-sync and update my whole system in over 50 days for fear that it would take forever to re-compile and re-install all of the updated programs and libraries.

Worse yet, even when I do have programs installed they don't always play nicely with one another. Gentoo offers a package manager, portage, but it still fails at some dependency resolution – oftentimes making you choose between uninstalling previous programs just to install the new one or not installing the new one at all. Another example of things being more complicated than they should be is my system sound. Even though I have pulseaudio installed and configured, my system refuses to play audio from more than one program at a time. These are just a few examples of problems I wouldn't have to deal with on another distribution.

-Sigh-

Well, it’s been interesting but I will not be sticking with Gentoo once this experiment is over. There are just too many little things that make this more of an educational experience than a real day-to-day desktop. While I certainly have learned a lot during this version of the experiment, at the end of the day I’d rather things just work right the first time.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

How to play Red Alert 2 on Linux

December 4th, 2011 1 comment

The other day I finally managed to get the classic RTS game Command & Conquer Red Alert 2 running on Linux, and running well in fact. I started by following the instructions here with a few tweaks that I found on other forums that I can’t seem to find links to anymore. Essentially the process is as follows:

  • Install Red Alert 2 on Windows. Yes you just read that right. Apparently the Red Alert 2 installer does not work under wine so you need to install the game files while running Windows.
  • Update the game and apply the CD-Crack via the instructions in the link above. Note that this step may have some legal issues associated with it. If in doubt seek professional legal advice.
  • Copy the install directory from Program Files over to Linux.
  • Apply speed fix in the how-to section here.
  • Run game using wine and enjoy.

It is a convoluted process that is, at times, ridiculous but it’s worth it for such a classic game. Even better there is a bit of a ‘hack’ that will allow you to play RA2’s multiplayer IPX network mode but over the more modern TCP/IP protocol. The steps for this hack can also be found at the WineHQ link above.
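
For what it's worth, the "run game using wine" step is nothing more exotic than launching the copied executable from its directory, for example (the path and executable name will depend on where you copied the files and which version you have):

cd ~/.wine/drive_c/Westwood/RA2
wine RA2.exe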

Happy gaming!




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.
Categories: Linux, Tyler B Tags: , , ,

Linux From Scratch : The Beginning…

October 31st, 2011 1 comment

Hi Everyone,

If you don’t remember me, I’m Dave. Last time for the experiment I used SuSE, which I regretted. This time I decided to use Linux From Scratch like Jake, as I couldn’t think of another distribution that I haven’t used in some way or another before. Let me tell you… It’s been quite the experience so far.

The Initial Setup

Unlike Jake, I opted not to use the LFS Live CD, as I figured it would be much easier to start with a Debian Live CD. By the sounds of it, I made a good decision. I had networking right out of the gate, which made it easy to copy and paste awful sed commands.

The initial part of the install was relatively painless for me. Well, except that one of the LFS mirrors had a version from 2007 listed as their latest stable build, setting me back about an hour. I followed the book, waited quite a while for some stuff to compile, and I was in my brand new… command line. OK, it's not very exciting at first, but I was jumping for joy when I ran the following command and got the result I did:

root [ ~ ]# ping google.ca
PING google.ca (74.125.226.82): 56 data bytes
64 bytes from 74.125.226.82: icmp_seq=0 ttl=56 time=32.967 ms
64 bytes from 74.125.226.82: icmp_seq=1 ttl=56 time=33.127 ms
64 bytes from 74.125.226.82: icmp_seq=2 ttl=56 time=40.045 ms

 

Series of Tubes

The internet was working! Keep reading if you want to hear what awful thing happened next…

Read more…


I am currently running ArchLinux (x86_64).
Check out my profile for more information.

Experiment 2.0

October 30th, 2011 No comments

As Jake pointed out in the previous post we have once again decided to run The Linux Experiment. This iteration will once again follow the rule that you are not allowed to use a distribution that you have used in the past. We also have a number of new individuals taking part in the experiment: Aíne B, Matt C, Travis G and Warren G. Be sure to check back often as we post about our experiences running our chosen distributions.

Rules

Here are the new rules we are playing by for this version of the experiment:

  1. You must have absolutely no prior experience with the distribution you choose
  2. You must use the distribution on your primary computer and it must be your primary day-to-day computing environment
  3. The experiment runs from November 1st, 2011 until January 31st, 2012
  4. You must document your experience
  5. After committing to a distribution you may not later change to a different one

Achievements

For fun we’ve decided to create a series of challenges to try throughout the experiment. This list can be found here and may be updated as we add more throughout the course of the experiment.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.
Categories: Linux, Tyler B Tags: ,

Big distributions, little RAM 3

August 14th, 2011 2 comments

Once again I’ve decided to re-run my previous tests this time using the following distributions:

  • Debian 6.0.2 (GNOME)
  • Fedora 15 (GNOME 3 Fallback Mode)
  • Fedora 15 (KDE)
  • Kubuntu 11.04 (KDE)
  • Linux Mint 11 (GNOME)
  • Linux Mint 10 (KDE)
  • Linux Mint 10 (LXDE)
  • Linux Mint 11 (Xfce)
  • Lubuntu 11.04 (LXDE)
  • Mandriva One (GNOME)
  • Mandriva One (KDE)
  • OpenSUSE 11.4 (GNOME)
  • OpenSUSE 11.4 (KDE)
  • Ubuntu 11.04 (GNOME Unity Fallback Mode)
  • Xubuntu 11.04 (Xfce)

I will be testing all of this within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86

The tests were all done using VirtualBox 4.0.6 on Linux Mint 11, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default 800×600 and accepted the installation defaults. All tests were run on August 14th, 2011 so your results may not be identical.
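
If you want to reproduce the same virtual machine specs from a terminal rather than clicking through the VirtualBox GUI, the equivalent VBoxManage calls look roughly like this (the VM name is arbitrary, and attaching the disk and install ISO is left out for brevity):

VBoxManage createvm --name "ram-test" --ostype Linux --register
VBoxManage modifyvm "ram-test" --memory 512
VBoxManage createhd --filename ram-test.vdi --size 8192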

Results

Following in the tradition of my previous posts I have once again gone through the effort to bring you nothing but the most state of the art in picture graphs for your enjoyment.

Things to know before looking at the graphs

First off, none of the Fedora 15 versions would install in 512MB of RAM. They both required a minimum of 640MB and are therefore disqualified from this little experiment. I did, however, run them in VirtualBox with 640MB of RAM just for comparison purposes. Secondly, the Linux Mint 10 KDE distro would not install in either 512MB or 640MB of RAM; the installer just kept crashing. I was unable to get it to work, so it was not included in these tests. Finally, when I tested Debian I was unable to test before/after applying updates because it seemed to have applied the updates during install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Conclusion

As before I’m going to leave you to drawing your own conclusions.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Create a GStreamer powered Java media player

March 14th, 2011 1 comment

For something to do I decided to see if I could create a very simple Java media player. After doing some research, and finding out that the Java Media Framework was no longer in development, I decided to settle on GStreamer to power my media player.

GStreamer, for the uninitiated, is a very powerful multimedia framework that offers both low-level pipeline building as well as high-level playback abstraction. What's nice about GStreamer, besides being completely open source, is that it presents a unified API no matter what type of file it is playing. For instance if the user only has the free, high quality GStreamer codecs installed, referred to as the good plugins, then the API will only play those files. If however the user installs the other plugins as well, be it the bad or ugly sets, the API remains the same and thus you don't need to update your code. Unfortunately, being a C library, this approach does have some drawbacks, notably the need to include the JNA jar as well as the system specific libraries. This approach can be considered similar to how SWT works.

Setup

Assuming that you already have a Java development environment, the first thing you'll need is to install GStreamer. On Linux, odds are you already have it, unless you are running a rather stripped down distro or don't have many media players installed (both Rhythmbox and Banshee use GStreamer). If you don't, it should be pretty straightforward to install along with your choice of plugins. On Windows you'll need to head over to ossbuild where they have downloadable installers.

The second thing you’ll need is gstreamer-java which you can grab over at their website here. You’ll need to download both gstreamer-java-1.4.jar and jna-3.2.4.jar. Both might contain some extra files that you probably don’t need and can prune out later if you’d like. Setup your development environment so that both of these jar files are in your build path.
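
"In your build path" just means both jars end up on the classpath. If you are compiling from a terminal instead of an IDE, that looks something like this (the class name here is made up):

javac -cp gstreamer-java-1.4.jar:jna-3.2.4.jar MyMediaPlayer.java
java -cp .:gstreamer-java-1.4.jar:jna-3.2.4.jar MyMediaPlayer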

Simple playback

GStreamer offers highly abstracted playback engines called PlayBins. This is what we will use to actually play our files. Here is a very simple code example that demonstrates how to actually make use of a PlayBin:

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile("song.mp3");

     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);
}

So what does it all mean?

public static void main(String[] args) {
     args = Gst.init("MyMediaPlayer", args);

The above line takes the incoming command line arguments, passes them to the Gst.init function and returns a new set of arguments. If you have ever done any GTK+ programming before, this should be instantly recognizable to you. Essentially what GStreamer is doing is grabbing, and removing, any GStreamer-specific arguments before your program actually processes them.

     PlayBin playbin = new PlayBin("AudioPlayer");
     playbin.setVideoSink(ElementFactory.make("fakesink", "videosink"));
     playbin.setInputFile("song.mp3");

The first line of code requests a standard “AudioPlayer” PlayBin. This PlayBin is built right into GStreamer and automatically sets up a default pipeline for you. Essentially this lets us avoid all of the low-level craziness that we would have to normally deal with if we were starting from scratch.

The next line sets the PlayBin's VideoSink (think of sinks as output locations) to a "fakesink", or null sink. The reason we do this is that PlayBins can play both audio and video. For the purposes of this player we only want audio playback, so we redirect all video output to the "fakesink".

The last line is pretty straight forward and just tells GStreamer what file to play.

     playbin.setState(State.PLAYING);
     Gst.main();
     playbin.setState(State.NULL);

Finally, with the above lines of code we tell the PlayBin to actually start playing and then enter the GStreamer main loop. This loop continues for the duration of playback. The last line resets the PlayBin state and does some cleanup.

Bundle it with a quick GUI

To make it a little more friendly I wrote a very quick GUI to wrap all of the functionality with. The download links for that (binary only package), as well as the source (all package), are below. And there you have it: a very simple cross-platform media player that will play back pretty much anything you throw at it.

Please note that I have provided this software purely as a quick example. If you are really interested in developing a GStreamer powered Java application you would do yourself a favor by reading the official documentation.

Binary Only Package
File name: my_media_player_binary.zip
Version: March 13, 2011
File size: 1.5MB
File download: Download Here

All Package
File name: my_media_player_all.zip
Version: March 13, 2011
File size: 1.51MB
File download: Download Here

Originally posted on my personal website here.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Create a GTK+ application on Linux with Objective-C

December 8th, 2010 8 comments

As a sort of follow-up in spirit to my older post, I decided to share a really straightforward way to use Objective-C to build GTK+ applications.

Objective-what?

Objective-C is an improvement to the iconic C programming language that remains backwards compatible while adding many new and interesting features. Chief among these additions is syntax for real objects (and thus object-oriented programming). Popularized by NeXT and eventually Apple, Objective-C is most commonly seen in development for Apple OSX and iOS based platforms. It ships with or without a large standard library (sometimes referred to as the Foundation Kit library) that makes it very easy for developers to quickly create fast and efficient programs. The result is a language that compiles down to binary, requires no virtual machines (just a runtime library), and achieves performance comparable to C and C++.

Marrying Objective-C with GTK+

Normally when writing a GTK+ application the language (or a library) will supply you with bindings that let you create GUIs in a way native to that language. So for instance in C++ you would create GTK+ objects, whereas in C you would create structures or ask functions for pointers back to the objects. Unfortunately while there used to exist a couple of different Objective-C bindings for GTK+, all of them are quite out of date. So instead we are going to rely on the fact that Objective-C is backwards compatible with C to get our program to work.

What you need to start

I’m going to assume that Ubuntu will be our operating system for development. To ensure that we have what we need to compile the programs, just install the following packages:

  1. gnustep-core-devel
  2. libgtk2.0-dev

As you can see from the list above we will be using GNUstep as our Objective-C library of choice.

Setting it all up

In order to make this work we will be creating two Objective-C classes, one that will house our GTK+ window and another that will actually start our program. I’m going to call my GTK+ object MainWindow and create the two necessary files: MainWindow.h and MainWindow.m. Finally I will create a main.m that will start the program and clean it up after it is done.

Let me apologize here for the poor code formatting; apparently WordPress likes to destroy whatever I try and do to make it better. If you want properly indented code please see the download link below.

MainWindow.h

In the MainWindow.h file put the following code:

#import <gtk/gtk.h>
#import <Foundation/NSObject.h>
#import <Foundation/NSString.h>

//A pointer to this object (set on init) so C functions can call
//Objective-C functions
id myMainWindow;

/*
* This class is responsible for initializing the GTK render loop
* as well as setting up the GUI for the user. It also handles all GTK
* callbacks for the winMain GtkWindow.
*/
@interface MainWindow : NSObject
{
//The main GtkWindow
GtkWidget *winMain;
GtkWidget *button;
}

/*
* Constructs the object and initializes GTK and the GUI for the
* application.
*
* *********************************************************************
* Input
* *********************************************************************
* argc (int *): A pointer to the arg count variable that was passed
* in at the application start. It will be returned
* with the count of the modified argv array.
* argv (char *[]): A pointer to the argument array that was passed in
* at the application start. It will be returned with
* the GTK arguments removed.
*
* *********************************************************************
* Returns
* *********************************************************************
* MainWindow (id): The constructed object or nil
* arc (int *): The modified input int as described above
* argv (char *[]): The modified input array modified as described above
*/
-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv;

/*
* Frees the Gtk widgets that we have control over
*/
-(void)destroyWidget;

/*
* Starts and hands off execution to the GTK main loop
*/
-(void)startGtkMainLoop;

/*
* Example Objective-C function that prints some output
*/
-(void)printSomething;

/*
********************************************************
* C callback functions
********************************************************
*/

/*
* Called when the user closes the window
*/
void on_MainWindow_destroy(GtkObject *object, gpointer user_data);

/*
* Called when the user presses the button
*/
void on_btnPushMe_clicked(GtkObject *object, gpointer user_data);

@end

MainWindow.m

For the class' actual code file, fill it in as shown below. This class will create a GTK+ window with a single button and will react to both the user pressing the button, and closing the window.

#import "MainWindow.h"

/*
* For documentation see MainWindow.h
*/

@implementation MainWindow

-(id)initWithArgCount:(int *)argc andArgVals:(char *[])argv
{
//call parent class’ init
if (self = [super init]) {

//setup the window
winMain = gtk_window_new (GTK_WINDOW_TOPLEVEL);

gtk_window_set_title (GTK_WINDOW (winMain), "Hello World");
gtk_window_set_default_size(GTK_WINDOW(winMain), 230, 150);

//setup the button
button = gtk_button_new_with_label ("Push me!");

gtk_container_add (GTK_CONTAINER (winMain), button);

//connect the signals
g_signal_connect (winMain, "destroy", G_CALLBACK (on_MainWindow_destroy), NULL);
g_signal_connect (button, "clicked", G_CALLBACK (on_btnPushMe_clicked), NULL);

//force show all
gtk_widget_show_all(winMain);
}

//assign C-compatible pointer
myMainWindow = self;

//return pointer to this object
return self;
}

-(void)startGtkMainLoop
{
//start gtk loop
gtk_main();
}

-(void)printSomething{
NSLog(@"Printed from Objective-C's NSLog function.");
printf("Also printed from standard printf function.\n");
}

-(void)destroyWidget{

myMainWindow = NULL;

if(GTK_IS_WIDGET (button))
{
//clean up the button
gtk_widget_destroy(button);
}

if(GTK_IS_WIDGET (winMain))
{
//clean up the main window
gtk_widget_destroy(winMain);
}
}

-(void)dealloc{
[self destroyWidget];

[super dealloc];
}

void on_MainWindow_destroy(GtkObject *object, gpointer user_data)
{
//exit the main loop
gtk_main_quit();
}

void on_btnPushMe_clicked(GtkObject *object, gpointer user_data)
{
printf(“Button was clicked\n”);

//call Objective-C function from C function using global object pointer
[myMainWindow printSomething];
}

@end

main.m

To finish I will write a main file and function that creates the MainWindow object and eventually cleans it up. Objective-C (1.0) does not support automatic garbage collection so it is important that we don’t forget to clean up after ourselves.

#import "MainWindow.h"
#import <Foundation/NSAutoreleasePool.h>

int main(int argc, char *argv[]) {

//create an AutoreleasePool
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

//init gtk engine
gtk_init(&argc, &argv);

//set up GUI
MainWindow *mainWindow = [[MainWindow alloc] initWithArgCount:&argc andArgVals:argv];

//begin the GTK loop
[mainWindow startGtkMainLoop];

//free the GUI
[mainWindow release];

//drain the pool
[pool release];

//exit application
return 0;
}

Compiling it all together

Use the following command to compile the program. This will automatically include all .m files in the current directory so be careful when and where you run this.

gcc `pkg-config --cflags --libs gtk+-2.0` -lgnustep-base -fconstant-string-class=NSConstantString -o "./myprogram" $(find . -name '*.m') -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -std=c99 -O3

Once complete you will notice a new executable in the directory called myprogram. Start this program and you will see our GTK+ window in action.

If you run it from the command line you can see the output that we coded when the button is pushed.

Wrapping it up

There you have it. We now have a program that is written in Objective-C, using C’s native GTK+ ‘bindings’ for the GUI, that can call both regular C and Objective-C functions and code. In addition, thanks to the porting of both GTK+ and GNUstep to Windows, this same code will also produce a cross-platform application that works on both Mac OSX and Windows.

Source Code Downloads

Source Only Package
File name: objective_c_gtk_source.zip
File hashes: Download Here
File size: 2.4KB
File download: Download Here

Originally posted on my personal website here.




I am currently running a variety of distributions, primarily Linux Mint 18.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Feel free to visit me at my personal website here.

Setting up an Ubuntu-based ASP.NET Server with Mono

November 21st, 2010 5 comments

Introduction:

In my day job, I work as an infrastructure developer for a small company. While I wouldn’t call us a Microsoft shop by any stretch (we actually make web design tools), we do maintain a large code base in C#, which includes our website and a number of web-based administrative tools. In planning for a future project, I recently spent some time figuring out how to host our existing ASP.NET-based web site on a Linux server. After a great deal of research, and just a bit of trial and error, I came up with the following steps:

VirtualBox Setup:

The server is going to run in a virtual machine, primarily because I don’t have any available hardware to throw at the problem right now. This has the added benefit of being easily expandable, and our web hosting company will actually accept *.vdi files, which allows us to easily pick up the finished machine and put it live with no added hassle. In our case, the host machine was a Windows Server 2008 machine, but these steps would work just as well on a Linux host.

I started off with VirtualBox 3.2.10 r66523, although like I said, grabbing the OSE edition from your repositories will work just as well. The host machine that we’re using is a bit underpowered, so I only gave the virtual machine 512MB of RAM and 10GB of dynamically expanding storage. One important thing – because I’ll want this server to live on our LAN and interact with our other machines, I was careful to change the network card settings to Bridged Adapter and to make sure that the Ethernet adapter of the host machine is selected in the hardware drop down. This is important because we want the virtual machine to ask our office router for an IP address instead of using the host machine as a private subnet.
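
If you prefer to script the VM creation instead of clicking through the VirtualBox GUI, something along these lines should produce an equivalent machine. The name asp-server is just an example, and the --bridgeadapter1 value must match the name of your host machine's network adapter:

$ VBoxManage createvm --name "asp-server" --register

$ VBoxManage createhd --filename asp-server.vdi --size 10240 --variant Standard

$ VBoxManage modifyvm "asp-server" --memory 512 --nic1 bridged --bridgeadapter1 eth0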

Installing the Operating System:

For the initial install, I went with the Ubuntu 10.10 Maverick Meerkat 32-bit Desktop Edition. Any server admins reading this will probably pull out their hair over that choice, but the administrators in our office are very used to using Windows' Remote Desktop utility to log into remote machines, and I don't feel like training everybody on the intricacies of PuTTY and SSH. If you want to, you can install the Server edition instead and forgo the additional overhead of a windowing system on your server. Since all of my installation was done from the terminal, these instructions will work just as well with or without a GUI.

From VirtualBox, you'll want to mount the Ubuntu ISO in the IDE CD-ROM drive and start the machine. When prompted, click your way through Ubuntu's slick new installer, and tell it to erase and use the entire disk, since we don't need any fancy partitioning for this setup. When I went through these steps, I opted to encrypt the home folder of the VM, mostly out of habit, but that's up to you. Once you make it to a desktop, install the VirtualBox Guest Additions.

From a terminal, type sudo apt-get update followed by sudo apt-get upgrade to refresh the package lists and apply any patches that might be available.

Setting up a Static IP Address:

From a terminal, type ifconfig and find the HWaddr entry for your ethernet card, usually eth0. It will probably look something like 08:00:27:1c:17:6c. Next, you’ll need to log in to your router and set it up so that any device with this hardware address (also called a MAC address) is always given the same IP address. In my case, I chose to assign the virtual server an IP address of 192.168.1.10 because it was easy to remember. There are other ways that you can go about setting up a static IP, but I find this to be the easiest.
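
If your router doesn't support address reservations, one alternative is to configure the address statically on the guest itself by editing /etc/network/interfaces. This is only a sketch, and it assumes a typical 192.168.1.x network with the router sitting at 192.168.1.1:

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

After saving the file, run sudo /etc/init.d/networking restart to apply the change.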

Getting Remote Desktop support up and running:

As I mentioned above, the guys in our office are used to administering remote machines by logging in via Windows’ remote desktop client. In order to provide this functionality, I chose to set up the xrdp project on my little server. Installing this is as easy as typing sudo apt-get install xrdp in your terminal. The installation process will also require the vnc4server and xbase-clients packages.

When the installation has completed, the xrdp service will run on startup and will provide an encrypted remote desktop server that runs on port 3389. From Windows, you can now connect to 192.168.1.10 with the standard rdp client. When prompted for login, make sure that sesman-Xvnc is selected as the protocol, and you should be able to log in with the username and password combination that you chose above.

Installing a Graphical Firewall Utility:

Ubuntu ships with a firewall baked into the kernel that can be accessed from the terminal with the ufw tool. Because some of our administrators are afraid of the command line, I also chose to install a graphical firewall manager. In the terminal, type sudo apt-get install gufw to install an easy to use gui for the firewall. Once complete, it will show up in the standard Gnome menu system under System > Administration > Firewall Configuration.

Let’s do a bit of setup. Open up the Firewall Configuration utility, and check off the box to enable the firewall. Below that box, make sure that all incoming traffic is automatically denied while all outgoing is allowed. These rules can be tightened up later, but are a good starting point for now. To allow incoming remote desktop connections, you’ll need to create a new rule to allow all TCP connections on port 3389. If this server is to be used on the live Internet, you may also consider limiting the IP addresses that these connections can come from so that not just anybody can log in to your server. Remember, defense in depth is your best friend.
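
For anyone comfortable in the terminal, the same starting rules can be set up with ufw directly; something like the following mirrors the configuration described above:

$ sudo ufw default deny incoming

$ sudo ufw default allow outgoing

$ sudo ufw allow 3389/tcp

$ sudo ufw enable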

Adding SSH Support:

Unlike my coworkers, I prefer to manage my server machines via the command line. As such, an SSH server is necessary. Later, the SSH connection can be used for SFTP or a secure tunnel over which we can communicate with our source control and database servers. In terminal, type sudo apt-get install openssh-server to start the OpenSSH installation process. Once it's done, you'll want to back up its default configuration file with the command cp /etc/ssh/sshd_config /etc/ssh/sshd_config_old. Next, open up the config file in your text editor of choice (mine is nano) and change a couple of the default options; the resulting lines are sketched after the list below:

  • Change the Port to 5000, or some other easy to remember port. Running an SSH server on port 22 can lead to high discoverability, and is regarded by some as a security no-no.
  • Change PermitRootLogin to no. This will ensure that only normal user accounts can log in.
  • At the end of the file, add the line AllowUsers <your-username> to limit the user accounts that can log in to the machine. It is good practice to create a user account with limited privileges and only allow it to log in via SSH. This way, if an attacker does get in, they are limited in the amount of damage that they can do.
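
After those edits, the relevant portion of /etc/ssh/sshd_config should look roughly like this (webadmin is a hypothetical limited account; substitute your own username):

Port 5000
PermitRootLogin no
AllowUsers webadmin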

Back in your terminal, type sudo /etc/init.d/ssh restart to load the new settings. Using the instructions above, open up your firewall utility and create a new rule to allow all TCP connections on port 5000. Once again, if this server is to be used on the live Internet, it’s a good idea to limit the IP addresses that this traffic can originate from.
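
The terminal equivalent of that firewall rule is simply:

$ sudo ufw allow 5000/tcp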

With this done, you can log in to the server from any other Linux-based machine using the ssh command in your terminal. From Windows, you'll need a third-party utility like PuTTY.
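
For example, with the port change above and the static address we assigned earlier, the connection command looks something like this (webadmin again being a stand-in for your own username):

$ ssh -p 5000 webadmin@192.168.1.10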

Installing Apache and ModMono:

For simplicity’s sake, we’ll install both Apache (the web server) and mod_mono (a module responsible for processing ASP.NET requests) from Ubuntu’s repositories. The downside is that the code base is a bit older, but the upside is that everything should just work, and the code is stable. These instructions are a modified version of the ones found on the HBY Consultancy blog. Credit where credit is due, after all. From your terminal, enter the following:

$ sudo apt-get install monodevelop mono-devel monodevelop-database mono-debugger mono-xsp2 libapache2-mod-mono mono-apache-server2 apache2

$ sudo a2dismod mod_mono

$ sudo a2enmod mod_mono_auto

With this done, Apache and mod_mono are installed. We'll need to do a bit of configuration before they're ready to go. Open up mod_mono's configuration file in your text editor of choice with something like sudo nano /etc/apache2/mods-available/mod_mono_auto.conf. Scroll down to the bottom and append the following text to the file:

MonoPath default "/usr/lib/mono/3.5"

MonoServerPath default /usr/bin/mod-mono-server2

AddMonoApplications default "/:/var/www"

Finally, restart the Apache web server so that the changes take effect with the command sudo /etc/init.d/apache2 restart. This configuration will allow us to run aspx files out of our /var/www/ directory, just like html or php files that you may have seen hosted in the past.
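
To confirm that mod_mono is actually handling requests, you can drop a minimal test page into /var/www; the file name index.aspx here is just an example:

<%@ Page Language="C#" %>
<html>
  <body>
    <!-- Render the server time so we can tell the page was processed by Mono rather than served statically -->
    <h1>Served by Mono at <%= DateTime.Now %></h1>
  </body>
</html>

Browsing to http://192.168.1.10/index.aspx should then show the current server time if everything is wired up correctly.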

Having a Beer:

That was a fair bit of work, but I think that it was worth it. If everything went well, you’ve now got a fully functional Apache web server that’s reasonably secure, and can run any ASP.NET code that you throw at it.

The one hiccup that I encountered with this setup was that Mono doesn't yet have support for .NET's Entity Framework, which is the object-relational mapping framework that we use as part of our database stack in the application that we wanted to host. This means that if I want to host the existing code on Linux, I'll have to modify it so that it uses a different database back end. It's kind of a pain, but not the end of the world, and certainly a situation that can be avoided if you're coding up a website from scratch. You can read more about the status of Mono's ASP.NET implementation on their website.

Hopefully this helped somebody. Let me know in the comments if there’s anything that isn’t quite clear or if you encounter any snags with the process.




On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04
Check out my profile for more information.