Archive

Archive for the ‘Linux’ Category

Ubuntu 12.10 Beta 1 (Report #3)

September 22nd, 2012 No comments

Just a quick update on my experience running the pre-release version of Ubuntu (this time upgraded to Ubuntu 12.10 Beta 1!). Not a whole lot new to report – Beta 1 is basically the same as Alpha 3 but with the addition of an option to connect to a Remote Server directly from the login screen. Unfortunately the bugs I have filed so far have yet to be resolved, but I’m still hopeful someone will have a chance to correct them prior to release.

It is already almost the end of September which means there are only a couple more weeks before the official 12.10 launch. From what I’ve seen so far this upgrade will be a pretty small, evolutionary update to the already good 12.04 release.

Previous posts in this series:




I am currently running a variety of distributions, primarily Linux Mint 17.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Check out my profile for more information.
Categories: Tyler B, Ubuntu

Big distributions, little RAM 5

September 14th, 2012 2 comments

Once again I’ve compiled some charts to show what the major, full desktop distributions look like while running on limited hardware. Just like before, I’ve re-run my previous tests, this time using the following distributions:

  • Fedora 17 (GNOME)
  • Fedora 17 (KDE)
  • Kubuntu 12.04 (KDE)
  • Linux Mint 13 (Cinnamon)
  • Linux Mint 13 (KDE)
  • Linux Mint 13 (Mate)
  • Linux Mint 13 (Xfce)
  • Mageia (GNOME)
  • Mageia (KDE)
  • OpenSUSE 12.2 (GNOME)
  • OpenSUSE 12.2 (KDE)
  • Ubuntu 12.04 (Unity)
  • Xubuntu 12.04 (Xfce)

I will be testing all of this within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.1.22, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between September 3rd, 2012 and September 14th, 2012 so your results may not be identical.

Results

Following in the tradition of my previous posts I have once again gone to the effort of bringing you nothing but the most state-of-the-art picture graphs for your enjoyment.

Things to know before looking at the graphs

First off, if your distribution of choice doesn’t appear in the list above, it was either not reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, some distributions don’t appear on all of the graphs – for example Mandriva (now replaced by Mageia). Finally, I did not include Debian this time around because it is still at the same version as last time. As always feel free to run your own tests.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

Conclusion

As before I’m going to leave you to drawing your own conclusions.





Ubuntu 12.10 Alpha 3 (Report #2)

September 1st, 2012 No comments

Running an alpha version of an operating system, Linux or otherwise, is quite a different experience. It means, for instance, that you are not allowed to complain when minor things have bugs or simply don’t work – it is all par for the course, after all this is alpha software. That doesn’t mean, however, that it doesn’t still suck when you do run into problems.

I ran into one of these problems earlier today while trying to connect via SSH to a remote computer within Nautilus. It seems that this release of the software is currently broken resulting in the following error message every time I try and browse my remote server’s directories:

The second really annoying issue I ran into was GIMP no longer showing menu items in Ubuntu’s global appmenu. This was especially infuriating because, prior to installing some updates today, it had worked perfectly fine. I even had to hunt down a sub-par paint application (GNU Paint) just to crop the above screenshot.

Hopefully my annoying experiences, and subsequent bug filings, will prevent other users from experiencing the same pains when 12.10 is finally released to all. Here’s hoping anyway…

Update: It turns out that it wasn’t just the GIMP that wasn’t displaying menu items – no applications were. Off to file another bug…

Previous posts in this series:




Categories: Tyler B, Ubuntu

Python Development on Linux and Why You Should Too

September 1st, 2012 25 comments

If you’re a programmer and you use Linux but you haven’t yet entered the amazing world that is Python development, you’re really missing out on something special. For years, I dismissed Python as just another script kiddie language, eschewing it for more “serious” languages like Java and C#. What I was missing out on were the heady days of rapid development that I so enjoyed while hacking away on Visual Basic .NET in my early years of university.

There was a time when nary a day would go by without me slinging together some code into a crappy Windows Forms application that I wrote to play with a new idea or to automate an annoying task. By and large, these small projects ceased when I moved over to Linux, partially out of laziness, and partially because I missed the rapid prototyping environment that Visual Basic .NET provided. Let’s face it – Java and C# are great languages, but getting a basic forms app set up in either of them takes a significant amount of time and effort.

Enter Python, git/github, pip, and virtualenv. This basic tool chain has got me writing code in my spare time again, and the feeling is great. So without further ado, let me present (yet another) quick tutorial on how to set up a bad-ass Python development environment of your own:

Step 1: Python

If there’s one thing that I really love about Python, it’s the wide availability of libraries for most any task that one can imagine. An important part of the rapid prototyping frame of mind is to not get bogged down on writing low-level libraries. If you have to spend an hour or two writing a custom database interface layer, you’re going to lose the drive that got you started in on the project in the first place. Unless of course, the purpose of the project was to re-invent the database interface layer. In that case, all power to you. In my experience, this is never a problem with Python, as its magical import statement will unlock a world of possibilities that is occupied by literally thousands of libraries to do most anything that you can imagine.

Install this bad boy with the simple command sudo apt-get install python and then find yourself a good Python tutorial with which to learn the basics. Alternatively, you can just start hacking away and use StackOverflow to fill in any of the gaps in your knowledge.
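To give you a taste of that rapid-prototyping feel, here is a minimal sketch, using nothing but the standard library, that batch-renames image extensions. The filenames are hypothetical examples, and the actual rename call is commented out so the script dry-runs by default:

```python
# Batch-rename *.jpeg files to *.jpg - the kind of five-minute script that
# Python makes painless. The filenames below are hypothetical examples;
# uncomment the os.rename line to perform the renames for real.
import os

for name in ["IMG_001.jpeg", "IMG_002.jpeg", "notes.txt"]:
    root, ext = os.path.splitext(name)
    if ext == ".jpeg":
        print(name + " -> " + root + ".jpg")
        # os.rename(name, root + ".jpg")
```

Swap the hard-coded list for os.listdir(".") and it works against a real directory.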

Step 2: Git/Github

In my professional life, I live and die by source control. It’s an excellent way to keep track of the status of your project, try out new features or ideas without jeopardizing the bits of your application that already work, and perhaps most importantly, it’s a life saver when you can’t figure out why in the hell you decided to do something that seemed like a good idea at the time but now seems like a truly boneheaded move. If you work with other developers, it’s also a great way to find out who to blame when the build is broken.

So why Git? Well, if time is on your side, go watch this 1 hour presentation by Linus Torvalds; I guarantee that if you know the first thing about source control, he will convince you to switch. If you don’t have that kind of time on your hands (and really, who does?) suffice it to say that Git plays really well with Github, and Github is like programming + social media + crack. Basically, it’s a website that stores your public (or private) repositories, showing off your code for all the world to see and fork and hack on top of. It also allows you to find and follow other interesting projects and libraries, and to receive updates when they make a change that you might be interested in.

Need a library to do fuzzy string matching? Search Git and find fuzzywuzzy. Install it into your working environment, and start playing with it. If it doesn’t do quite what you need, fork it, check out the source, and start hacking on it until it does! Github is an amazing way to expand your ability to rapid prototype and explore ideas that would take way too long to implement from scratch.
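If you want to try the idea before pulling in a third-party library, rough fuzzy matching is a few lines away with the standard library’s difflib. Note this is a stand-in for what a library like fuzzywuzzy does, not its actual API:

```python
# Rough fuzzy string matching using only the standard library. A dedicated
# library such as fuzzywuzzy offers more (token sorting, partial matches),
# and its API differs from this sketch.
from difflib import SequenceMatcher

def similarity(a, b):
    # 0.0 means nothing in common, 1.0 means identical (case-insensitive)
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(similarity("New York Mets", "new york mets"))   # prints 1.0
print(similarity("New York Mets", "New York Meats"))  # close, but below 1.0
```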

Get started by installing git with the command sudo apt-get install git-core. You should probably also skim through the git tutorial, as it will help you start off on the right foot.

Next, mosey on over to Github and sign up for an account. Seriously, it’s awesome, stop procrastinating and do it.

Step 3: Pip

I’ve already raved about the third-party libraries for Python, but what I haven’t told you yet is that there’s an insanely easy way to get those libraries into your working environment. Pip is a package manager just for Python libraries, pulling them from a central repository much like apt does for your system packages. If you’re already familiar with Linux, then you know what I’m talking about. Remember the earlier example of needing a fuzzy string matching library in your project? Well with pip, getting one is as easy as typing pip install fuzzywuzzy. This will install the fuzzywuzzy library on your system, and make it available to your application in one easy step.

But I’m getting ahead of myself here: You need to install pip before you can start using it. For that, you’ll need to run sudo apt-get install python-setuptools python-dev build-essential && sudo easy_install -U pip

The other cool thing about pip? When you’re ready to share your project with others (or just want to set up a development environment on another machine that has all of the necessary prerequisites to run it) you can run the command pip freeze > requirements.txt to create a file that describes all of the libraries that are necessary for your app to run correctly. In order to use that list, just run pip install -r requirements.txt on the target machine, and pip will automatically fetch all of your project’s prerequisites. I swear, it’s fucking magical.

Step 4: Virtualenv

As I’ve already mentioned, one of my favourite things about Python is the availability of third-party libraries that enable your code to do just about anything with a simple import statement. One of the problems with Python is that trying to keep all of the dependencies for all of your projects straight can be a real pain in the ass. Enter virtualenv.

This is an application that allows you to create virtual working environments, complete with their own Python versions and libraries. You can start a new project, use pip to install a whole bunch of libraries, then switch over to another project and work with a whole bunch of other libraries, all without different versions of the same library ever interfering with one another. This technique also keeps the pip requirements files that I mentioned above nice and clean so that each of your projects can state the exact dependency set that it needs to run without introducing cruft into your development environment.

Another tool that I’d like to introduce you to at this time is virtualenvwrapper. Just like the name says, it’s a wrapper for virtualenv that allows you to easily manage the many virtual environments that you will soon have floating around your machine.

Install both with the command pip install virtualenv virtualenvwrapper

Once the installation has completed, you may need to modify your .bashrc profile to initialize virtualenvwrapper whenever you log into your user account. To do so, open up the .bashrc file in your home directory using your favourite text editor, or execute the following command: nano ~/.bashrc

Now paste the following chunk of code into the bottom of that file, save it, and exit:

# initialize virtualenvwrapper if present
if [ -f /usr/local/bin/virtualenvwrapper.sh ] ; then
    . /usr/local/bin/virtualenvwrapper.sh
fi

Please note that this step didn’t seem to be necessary on Ubuntu 12.04, so it may only be essential for those running older versions of the operating system. I would suggest trying to use virtualenvwrapper with the instructions below before bothering to modify the .bashrc file.

Now you can make a new virtual environment with the command mkvirtualenv <project name>, and activate it with the command workon <project name>. When you create a new virtual environment, it’s like wiping your Python slate clean. Use pip to add some libraries to your virtual environment, write some code, and when you’re done, use the deactivate command to go back to your main system. Don’t forget to use pip freeze inside of your virtual environment to obtain a list of all of the packages that your application depends on.
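As a footnote for anyone reading this later: the isolation idea behind virtualenv eventually landed in Python itself as the venv module, which virtualenv and virtualenvwrapper wrap and extend. A minimal sketch of the same workflow using only stock Python 3 (--without-pip just keeps the sketch self-contained):

```shell
# Create and enter an isolated environment using the stdlib venv module
python3 -m venv --without-pip /tmp/scratch-env
. /tmp/scratch-env/bin/activate
python -c 'import sys; print(sys.prefix)'   # shows the environment's own prefix
deactivate
```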

Step 5: Starting a New Project

Ok, so how do we actually use all of the tools that I’ve raved about here? Follow the steps below to start your very own Python project:

  1. Decide on a name for your project. This is likely the hardest part. It probably shouldn’t have spaces in it, because Linux really doesn’t like spaces.
  2. Create a virtual environment for your project with the command mkvirtualenv <project name>
  3. Activate the virtual environment for your project with the command workon <project name>
  4. Sign into Github and click on the New Repository button in the lower right hand corner of the home page
  5. Give your new repository the same name as your project. If you were a creative and individual snowflake, the name won’t already be taken. If not, consider starting back at step 1, or just tacking your birth year onto the end of the bastard like we used to do with hotmail addresses back in the day.
  6. On the new repository page, make sure that you check the box that says Initialize this repository with a README and that you select Python from the Add .gitignore drop down box. The latter step will make sure that git ignores files types that need not be checked into your repository when you commit your code.
  7. Click the Create Repository button
  8. Back on your local machine, clone your repository with the command git clone https://github.com/<github user name>/<project name>. This will create a directory for your project in which you can do all of your work.
  9. Write some amazing fucking code that blows everybody’s minds. If you need some libraries (and really, who doesn’t?) make sure to use the pip install <library name> command.
  10. Commit early, commit often with the git commit -am "your commit message goes here" command
  11. When you’re ready to make your work public, post it to github with the command git push origin master
  12. Don’t forget to script out your project dependencies with the pip freeze > requirements.txt command
  13. Finally, when you’re finished working for the day, use the deactivate command to return to your normal working environment.

In Conclusion:

This post is way longer than I had originally intended. Suck it. I hope your eyes are sore. I also hope that by now, you’ve been convinced of how awesome a Python development environment can be. So get out there and write some amazing code. Oh, and don’t forget to check out my projects on github.




On my Laptop, I am running Linux Mint 12.
On my home media server, I am running Ubuntu 12.04.
Check out my profile for more information.
Categories: Jon F, Programming

Ubuntu 12.10 Alpha 3 (Report #1)

August 27th, 2012 No comments

Well it’s been a little while since I made the mistake (joking) of installing Ubuntu 12.10 Alpha 3. Here is what I’ve learned so far.

  1. My laptop really does not like the open source ATI graphics driver – and there are no proprietary drivers for this release yet. It’s not that the driver doesn’t perform well enough graphically, it’s just that it causes my card to give off more heat than the proprietary driver. This in turn causes my laptop’s fan to run non-stop and drains my battery at a considerable rate.
  2. Ubuntu has changed the way they do updates in this release. Instead of the old Update Manager there is a new application (maybe just a re-skinning of the old one) that is much more refined and really quite simple. Interestingly enough, the old hardware drivers application is also now gone; instead, it has been merged into the update manager. Overall I’m neutral on both changes.

    Updates are quite frequent when running an alpha release

  3. There is a new Online Accounts application (part of the system settings) included in this release. This application seems to work like an extension of the GNOME keyring – saving passwords for your various online accounts (go figure). I haven’t really had a chance to play around with it too much yet but it seems to work well enough.

That’s it for now. I’m off to file a bug over this open source driver that is currently melting my computer. I’ll keep you posted on how that goes.




Categories: Tyler B, Ubuntu

Test driving the new Ubuntu (12.10)

August 26th, 2012 No comments

Call it crazy but I’ve decided to actually install an Ubuntu Alpha release, specifically Ubuntu 12.10 Alpha 3. Why would anyone in their right mind install an operating system that is bound to be full of bugs and likely destroy all of my data? My reasons are twofold:

  1. I regularly use Ubuntu or Ubuntu derivatives and would like to help in the process of making them better
  2. There are still a few quirks with my particular laptop that I would like to help iron out once and for all, hopefully correcting them in a more universal sense for Linux as a whole

So join me over the next few posts as I relate my most recent experiences running… shall we say, less than production code.

 




Categories: Tyler B, Ubuntu

Automatically put computer to sleep and wake it up on a schedule

June 24th, 2012 No comments

Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
# Wake-up time for today (here 17:00, i.e. 5pm)
t=$(date --date "17:00" +%s)
# Prompt for the sudo password now so the rest can run unattended
sudo /bin/true
# Set the RTC wake-up alarm, then suspend the machine
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend, which actually puts the computer to sleep. When run, the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will already be up and running and waiting for you.
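For the scheduling itself, cron is the usual tool. Assuming the script above lives at /home/tyler/bin/suspend-until-5pm.sh (a hypothetical path), an entry added with crontab -e could look like this:

```shell
# m h dom mon dow  command
0 0 * * * /home/tyler/bin/suspend-until-5pm.sh   # every night at midnight
```

Because the script calls sudo, you would either run it from root’s crontab (sudo crontab -e) or grant passwordless sudo for rtcwake and pm-suspend.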

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.





Mount entire drive dd image

May 21st, 2012 No comments

It is a pretty common practice to use the command dd to make backup images of drives and partitions. It’s as simple as the command:

dd if=[input] of=[output]

A while back I did just that and made a dd backup of not just a partition but of an entire hard drive. This was very simple (I just used if=/dev/sda instead of something like if=/dev/sda2). The problem came when I tried to mount this image. With a partition image you can just use the mount command like normal, i.e. something like this:

sudo mount -o loop -t [filesystem] [path to image file] [path to mount point]

Unfortunately this doesn’t make any sense when mounting an image of an entire hard drive. What if the drive had multiple partitions? What exactly would it be mounting to the mount point? After some searching I found a series of forum posts that dealt with just this scenario. Here are the steps required to mount your whole drive image:

1) Use the fdisk command to list the drive image’s partition table:

fdisk -ul [path to image file]

This should print out a lot of useful information. For example you’ll get something like this:

foo@bar:~$ fdisk -ul imagefile.img
You must set cylinders.
You can do this from the extra functions menu.

Disk imagefile.img: 0 MB, 0 bytes
32 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x07443446

        Device Boot      Start         End      Blocks   Id  System
imagefile.img1   *          63      499967      249952+  83  Linux
imagefile.img2          499968      997919      248976   83  Linux

2) Look at what that command prints out for the sector size (512 bytes in the above example) and the Start number for the partition you want to mount (let’s say 63 in the above example).

3) Use a slightly modified version of the mount command (with an offset) to mount your partition.

sudo mount -o loop,offset=[offset value] [path to image file] [path to mount point]

Using the example above I would set my offset value to be sector size * start sector, so 512*63 = 32256. The command would look something like this:

sudo mount -o loop,offset=32256 imagefile.img /mnt/point

That’s it. You should now have that partition from the dd backup image mounted to the mount point.
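The offset arithmetic from steps 2 and 3 is easy to script. This sketch just multiplies the sector size by each partition’s start sector, using the numbers from the example fdisk output above:

```shell
# Byte offset for mount's offset= option is sector size * start sector
sector_size=512    # from "Units = sectors of 1 * 512 = 512 bytes"
start1=63          # Start column for imagefile.img1
start2=499968      # Start column for imagefile.img2
echo $((sector_size * start1))   # prints 32256
echo $((sector_size * start2))   # prints 255983616
```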




Categories: Linux, Tyler B

How to test hard drive for errors in Linux

May 21st, 2012 No comments

I recently re-built an older PC from a laundry list of Frankenstein parts. However before installing anything to the hard drive I found I wanted to check it for physical errors and problems as I couldn’t remember why I wasn’t using this particular drive in any of my other systems.

From an Ubuntu 12.04 live CD I used GParted to delete the old partition on the drive. This let me start from a clean slate. After the drive had absolutely nothing on it I went searching for an easy way to test the drive for errors. I stumbled across this excellent article and began using badblocks to scan the drive. Basically what this program does is write to every spot on the drive and then read it back to ensure that it still holds the data that was just written.

Here is the command I used. NOTE: This command is destructive and will damage the data on the hard drive. DO NOT use this if you want to keep the data that is already on the drive. Please see the above linked article for more information.

badblocks -b 4096 -p 4 -c 16384 -w -s /dev/sda

What does it all mean?

  • -b sets the block size to use. Most drives these days use 4096 byte blocks.
  • -p sets the number of passes to use on the drive. With -p 4 as above, badblocks keeps scanning until it completes 4 consecutive passes without finding any new bad blocks, at which point it considers the process done.
  • -c sets the number of blocks to test at a time. This can help to speed up the process but will also use more RAM.
  • -w turns on write mode. This tells badblocks to do a write test as well.
  • -s turns on progress showing. This lets you know how far the program has gotten testing the drive.
  • /dev/sda is just the path to the drive I’m scanning. Your path may be different.




Sabayon Linux – Stable if not without polish

April 28th, 2012 3 comments

I have been running Sabayon Linux (Xfce) for the past couple of months and figured I would throw a post up on here describing my experience with it.

Reasons for Running

The reason I tried Sabayon in the first place is because I was curious what it would be like to run a rolling release distribution (that is, a distribution that you install once and that just updates forever with no need to re-install). After doing some research I discovered a number of possible candidates but quickly narrowed the list down for the following reasons:

  • Linux Mint Debian Edition – this is an excellent distribution for many people but for whatever reason every time I update it on my hardware it breaks. Sadly this was not an option.
  • Gentoo – I had previously been running Gentoo and while it is technically a rolling release I never bothered to update it because it just took too long to re-compile everything.
  • Arch Linux – Sort of like Gentoo but with binary packages, I turned this one down because it still required a lot of configuration to get up and running.
  • Sabayon Linux – based on Gentoo but with everything pre-compiled for you. Also takes the ‘just works’ approach by including all of the proprietary and closed source codecs, drivers and programs you could possibly want.

Experience running Sabayon

Sabayon seems to take a change-little approach to packaging applications and the desktop environment. What do I mean by this? Simply that if you install the GNOME, KDE or Xfce versions you will get them how the developers intended – there are very few after-market modifications done by the Sabayon team. That’s not necessarily a bad thing however, because as updates are made upstream you will receive them very quickly thereafter.

This distribution does live up to its promise with the codecs and drivers. My normally troublesome hardware has given me absolutely zero issues running Sabayon which has been a very nice change compared to some other, more popular distributions (*cough* Linux Mint *cough*). My only problem with Sabayon stems from Entropy (their application installer) being very slow compared to some other such implementations (apt, yum, etc). This is especially apparent during the weekly system wide updates which can result in many, many package updates.

Final Thoughts

For anyone looking for a down to basics, Ubuntu-like (in terms of ease of install and use), rolling release distribution I would highly recommend Sabayon. For someone looking for something a bit more polished or extremely user friendly, perhaps you should look elsewhere. That’s not to say that Sabayon is hard to use, just that other distributions might specialize in user friendliness.



