Archive

Archive for December, 2016

Download YouTube videos with youtube-dl

December 31st, 2016

Have you ever wanted to save a YouTube video for offline use? Well now it’s super simple to do with the easy-to-use command line utility youtube-dl.

Start by installing the utility either through your package manager (always the recommended approach) or from the project GitHub page here.

sudo apt-get install youtube-dl

This utility comes packed with loads of neat options that I encourage you to read about on the project page, but if you just want to quickly grab a video all you need to do is run the command:

youtube-dl https://www.youtube.com/YOUR_REAL_VIDEO_HERE

So for example if you wanted to grab this year’s YouTube Rewind you would just run:

youtube-dl https://www.youtube.com/watch?v=_GuOjXYl5ew

and the video would end up in whatever directory your terminal is currently in. Can’t get much easier than that!
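If you do want a bit more control, a couple of commonly used options are worth knowing (both are covered in the project documentation; the URL below just reuses the Rewind video as an example):

# List all available formats/qualities for a video
youtube-dl -F https://www.youtube.com/watch?v=_GuOjXYl5ew

# Download and extract just the audio (requires ffmpeg or avconv)
youtube-dl -x --audio-format mp3 https://www.youtube.com/watch?v=_GuOjXYl5ew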

Categories: Linux, Tyler B

Happy New Year!

December 31st, 2016

Yeah I know technically this is going up a day early but so what? 😛

From all of us at The Linux Experiment we want to wish you all the best in 2017!

 

Categories: Tyler B

Fixing Areca Backup on Ubuntu 16.04 (and related distributions)

December 30th, 2016

Seems like I’m at it again, this time fixing Areca Backup on Ubuntu 16.04 (actually Linux Mint 18.1 in my case). For some reason when I download the current version (Areca 7.5 for Linux/GTK) and try to run the areca.sh script I get the following error:

tyler@computer $ ./areca.sh
ls: cannot access '/usr/java': No such file or directory
No valid JRE found in /usr/java.

This is especially odd because I quite clearly do have Java installed:

tyler@computer $ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)

Now granted this may be an issue exclusive to OpenJDK, or just this version of OpenJDK, but I’m hardly going to install Sun Java just to make this program work.

After some digging I narrowed it down to the look_for_java() function inside of the areca_run.sh script located in the Areca /bin/ directory. Now I’m quite sure there is a far more elegant solution than this but I simply commented out the vast majority of this function and hard coded the directory of my system’s Java binary. Here is how you can do the same.

First locate where your Java is installed by running the which command:

tyler@computer $ which java
/usr/bin/java

As you can see from the output above my java executable exists in my /usr/bin/ directory.

Next open up areca_run.sh inside of the Areca /bin/ directory and modify the look_for_java() function. In here you’ll want to set the variable JAVA_PROGRAM_DIR to your directory above (i.e. in my case it would be /usr/bin/) and then return 0 indicating no error. You can either simply delete the rest or just comment out the remaining function script by placing a # character at the start of each line.

#Method to locate matching JREs
look_for_java() {
JAVA_PROGRAM_DIR="/usr/bin/"
return 0
# IFS=$'\n'
# potential_java_dirs=(`ls -1 "$JAVADIR" | sort | tac`)
# IFS=
# for D in "${potential_java_dirs[@]}"; do
#    if [[ -d "$JAVADIR/$D" && -x "$JAVADIR/$D/bin/java" ]]; then
#       JAVA_PROGRAM_DIR="$JAVADIR/$D/bin/"
#       echo "JRE found in ${JAVA_PROGRAM_DIR} directory."
#       if check_version ; then
#          return 0
#       else
#          return 1
#       fi
#    fi
# done
# echo "No valid JRE found in ${JAVADIR}."
# return 1
}
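If you’d rather not hard code the path, a slightly more portable sketch – assuming your system has readlink -f and that java is on your PATH, as on most Linux distributions – resolves the binary’s location at runtime instead:

#Method to locate the JRE via the java binary on the PATH
look_for_java() {
# Follow symlinks, e.g. /usr/bin/java -> /etc/alternatives/java -> ...
JAVA_PROGRAM_DIR="$(dirname "$(readlink -f "$(command -v java)")")/"
return 0
}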

Once you’ve saved the file you should be able to run the normal areca.sh script now without encountering any errors!


This post originally appeared on my personal website here.

Resizing and Correctly Orienting Images with Mogrify

December 30th, 2016

Lately, I’ve been publishing woodworking tutorials over on my own blog, and I’ve found myself needing to resize and correctly orient images taken with my iPhone prior to posting.

If you want to do anything with images on Linux, the best place to start is with ImageMagick, an all-in-one utility that does just about everything under the sun with images, and has fantastic documentation to boot.

After a bit of research, I came up with this command:

mogrify -auto-orient -resize 584x438 -strip -quality 85% *.jpg

Here’s a rundown of exactly what this does:

  • mogrify modifies the images in the source directory. This command will overwrite your image files, so use it on a copy of them if you care to keep the original-sized images (a non-destructive alternative is sketched just after this list)
  • auto-orient fixes image orientation. When you take a picture with your iPhone, it writes the image directly to storage in the orientation that the image sensor was in at the time the image was taken, and then writes that orientation information to the image’s metadata. This makes taking pictures faster, because the phone doesn’t have to rotate the image prior to writing the file, but can cause images to show up in all sorts of creative orientations, depending on the software tool chain that you use to get the image onto your website
  • resize resizes the image to the specified size (expressed in pixels). If the image can’t be resized to the exact dimensions specified, it will be taken to the nearest possible dimensions, while maintaining aspect ratio, so the end image won’t be all stretched out
  • strip removes all metadata from the final images. This is handy for privacy, since you probably don’t want the GPS coordinates of your home being published online, but also removes the pesky orientation metadata, which is good, because the auto-orient command already acted on it, and you wouldn’t want an image to be double-rotated at display time
  • quality specifies the quality of the final jpg. I’ve found that 85% works well, but if you’re quality-conscious, you could set this to 100% and it won’t take much extra time to convert the images
  • *.jpg executes the command against all .jpg images in the current directory
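As promised above, if you’d rather leave the originals untouched altogether, here is a non-destructive variant – a sketch using ImageMagick’s convert, with resized/ as a hypothetical output directory name:

mkdir -p resized
for f in *.jpg; do
# Same options as mogrify, but convert writes each result to a new file
convert "$f" -auto-orient -resize 584x438 -strip -quality 85% "resized/$f"
done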

So there you have it! A simple one-line command that rotates and resizes images prior to posting them. Oh, and if you’re the type that tends to forget complicated command-line syntax, don’t forget about the history command:

jon@IDEAPAD-UBUNTU:~$ history | grep mogrify
1691 mogrify -auto-orient -resize 584x438 -strip -quality 85% *.jpg
1696 history | grep mogrify

It’s an easy way to find that pesky command that you know you’ve used in the recent past, without resorting to looking up all of the command line switches again.

Cheers, and happy image uploading!

This post was adapted from the original post on my blog at jonathanfritz.ca

Categories: Jon F, Linux, Open Source Software Tags:

Blast from the Past: Linux Media Players Suck – Part 1: Rhythmbox

December 30th, 2016

This post was originally published on May 5, 2010. The original can be found here.


The state of media players on Linux is a sad one indeed. If you’re a platform enthusiast, you may want to cover your ears and scream “la-la-la-la” while reading this article, because it will likely offend your sensibilities. In fact, the very idea behind this series is to shake up the freetards’ world view, and to make them realize that a decent Winamp or iTunes clone need not be the end of the story for media management and playback on Linux.

This article will concentrate on lambasting Rhythmbox, the default jukebox software of the GNOME desktop environment. Subsequent posts will give the same treatment to other players in this sphere, including Banshee, Amarok, and Songbird (if I can find a copy that will still build on Linux). If you’re a user of media players on Linux, keep your own annoyances firmly in mind, and if I don’t mention them, please share in the comments. If you’re a developer for one of these fine projects, try to keep an open mind and get inspired to do better. A media player is not a hard thing to build, and I do believe that together, we can do better.

For the remainder of this article, please keep in mind that I am currently running Rhythmbox under Kubuntu 9.10, so you’ll see it rendered with Qt widgets in all of my screenshots. This doesn’t affect the overall performance of the app, but leads nicely into my first complaint:

  1. Poor Cross-Platform Support: There are basically two desktop environments that matter in the Linux world, GNOME and KDE. Under GNOME, Rhythmbox has a reasonably nice icon set that is comparable to other media players. Under KDE, the Qt re-skinning replaces those icons with a horrible set of mismatched images that really make the program look second-rate:
    Isn't this shit awful?

    As you can see, these icons look terrible. Note that there isn’t even an icon for ‘Burn’ and the icon for ‘Browse’ is a fucking question mark.

    This extends to the CD burning and help features too. They rely on programs like gnome-help and brasero to work, but don’t install them with the media player, so when I try to access these features under KDE, I just get error messages. Nice.

    Honestly, who packaged this thing?

    This is just plain stupid. Every package manager has the concept of dependencies, so why doesn’t Rhythmbox use them?

  2. The Player Starts in the Tray: Under what circumstances would it be considered useful for a media player to automatically minimize itself to the system tray on startup? It doesn’t begin to play automatically. The first thing that I always do is click on the tray icon to maximize it so that I can select some music to start playing. Way to start the user experience off on the wrong foot.
  3. Missing Files View: This one is just plain stupid. Whenever I delete a file from my hard drive, it shows up under the ‘Missing Files’ view, even though my intent was clearly to remove the file from my library. Further, I use Rhythmbox to put music on my BlackBerry. Whenever I fill it with music, I first delete the files on it. Those files that I deleted from my mobile device? Yeah, they show up under ‘Missing Files’ too, as if they were a legitimate part of my library! So this view ends up being like a global garbage bin that I have to waste my precious time emptying on occasion, and serves no useful purpose in the mean time. Yeah, I deleted those files. What are you going to do about it?

    Seriously, why the hell are these files in here?

    As you can see, I’ve highlighted the fact that Rhythmbox is telling me that these files are missing from my mobile device. No shit.

  4. Shared Libraries that I can’t Play: So we’ve known for awhile now that Apple broke the ability to connect to iTunes via the DAAP protocol, and that it’s not possible to connect to a shared iTunes library from Linux. If that’s the case, why does Rhythmbox still show these libraries as available? And how come it shows my library under this node? Why would I listen to my own shared library? Finally, I’ve found that even if I’m running Rhythmbox on another machine, I still can’t connect to my shared library. This feature seems to be downright broken – so why is it still in the build?
  5. The GUI and Backend are on One Thread: I keep about half of my music collection as lossless FLAC files. When I want to rip these files to my portable media device, they need to be converted to the MP3 format. Turns out that Rhythmbox thinks it appropriate to transcode these files on the same thread that it uses to update its GUI, so that while this process is taking place, the app becomes laggy, and at times, downright unusable. Further, the application doesn’t seem to give me any control over the bitrate that my songs are transcoded to. Fuck!
  6. Lack of Playlist Options: Smart playlists in Rhythmbox are missing a rather key feature: Randomness. When filling the aforementioned mobile device with music, I would like to select a random 4GB of music from my top rated playlist. But I can’t. I can select 4GB of music by most every criteria except randomness, which means that I get the same 600 or so songs on my device every time I fill it. This is strange, because I can shuffle the contents of a static playlist; But I cannot randomly fill a smart playlist. Great.

    If you have a device that has a small amount of memory, this feature is essential

    It’s funny; I really want to like Rhythmbox, but it’s shit like this that ruins the experience for me

  7. Columns: What the fuck. Who wrote this part of the application? When I choose the columns that are visible in the main window, I can’t re-order them. That’s right. So the only order that I can put my columns in is Track, Title, Genre, Artist, Album, Year, Time, Quality, Rating. Can’t reorder them at all, and I have to go into the preferences menu to choose which ones are displayed, instead of being able to right-click on the column headers to select them like I can in every other program written in the last 10 years. This is just ridiculous. I know that the GTK+ toolkit allows you to create re-order-able columns, because I’ve seen it done.

    This is just so incredibly backward. I mean, columns are a standard part of the GTK+ toolkit, and I've seen plenty of other apps that do this properly.

    Why, for the love of God, can’t these be re-ordered?

  8. The Equalizer is Balls: No presets, and no preamp. So I can set the EQ, and my settings are magically saved, but I can only have one setting, because there doesn’t appear to be a way to create multiple profiles. And louder music sounds like balls, because I can’t turn down the preamp, so I get digital distortion throughout my signal. It would be better to just not have an equalizer at all.

    I mean, it works. But…

  9. Context Menus Don’t Make Sense: Let’s just take a look at this context menu for a moment. There are three ways to remove a song from a playlist. You can Remove the song, which just removes it from the playlist, but not from your library or your hard drive. Alternatively, you can select Move to Trash, which does what you might expect – it removes the song from the playlist, the library, and your computer. I’ve got a problem with the naming conventions here. The purpose of Remove isn’t well explained, and confused the hell out of me at first. In addition, when browsing a mobile device that you’ve filled with music, the GUI breaks down even further. In this case, you can still hit Remove, which seems to remove the song from Rhythmbox’s listing, but leaves the file on the device. So now I have a file on my device that I can’t access. Great. The right-click menu also has the ability to copy and cut the song, even though there is no immediately obvious way to paste it. For that functionality, you’ll have to head up to the Edit menu.

    The right-click context menu

    I’m starting to run out of anger. The 10,000 papercuts that come along with this app are making me numb to it.

  10. No Command Line Tools: Now, normally, this wouldn’t bother me too much. A music library is something that’s meant to have a GUI, and doesn’t generally lend itself to working from the command line. In this case however, command line access to Rhythmbox would be really handy, because I’d like to set up a hot key on my keyboard that will skip songs or pause playback. Unfortunately, there’s no way to do that within the software, and it doesn’t have any command line arguments that I can call instead. Balls.

There you have it, 10 things that really ruin the Rhythmbox experience. While using this piece of software, I felt like the developers worked really hard to build something that was sort of comparable to Apple’s iTunes, and then stopped trying. That isn’t good enough! If we want to attract users to our platform of choice, and keep them here, we need to give them reasons to check it out, and even more to stick around. If I say to you that I want to have the best Linux media player, you tend to put the emphasis on the word Linux. Why not just make the best media player? GNOME is on at least half of all Linux desktops, if not more. Why hinder it with software that gives people a poor first impression of what Linux is capable of? Seriously guys, let’s step it up.

 

10 Things You Must Know About Linux Security

December 29th, 2016

Millions of users have opted for the Linux operating system over the past two decades, largely on the grounds that it is much safer than most others on the market. While it’s true that Linux is less susceptible to security breaches, it is not impenetrable (no system on the planet is), which is why users should get acquainted with some security precautions that can protect their devices even more. The main topic of this article is 10 things you must know about Linux security, and we’ll try to bring this topic closer to home and closer to everyday use of your OS.

1. It All Starts with Updates

Even if you were using the most secure operating system on Earth, it still wouldn’t do you much good unless you kept it up to date. Linux distributions are usually very easy to manage when it comes to the matter at hand, and we wholeheartedly suggest setting up automated updates so that you can rest assured that everything is under control. Also, remember to keep all your apps updated as well, because cybercriminals use them as a back entrance for installing malicious software.
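On Debian- and Ubuntu-based systems, for example, one common way to turn on automatic updates (a sketch; other distributions ship their own equivalents) is the unattended-upgrades package:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades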

2. Separate Disk Partitions

This is computer security 101 and Linux is not an exception to the rule. The fact that Linux offers more safety doesn’t mean that you can’t downgrade it by being negligent when it comes to protecting your security. As soon as you’ve set up Linux, be sure to separate disk partitions, so that you have a few different ones for different purposes. This is a form of insurance in case anything goes wrong with a program or a virus starts running rampant. Chances are better that the threat will stay contained on one partition, so you don’t have to wipe all the data from your device, just what’s on the affected partition.

3. Security Enhanced Linux

SELinux is one of the main reasons why this operating system is considered to be so bulletproof, but it can also prove to be a bit overbearing. It is a security mechanism built into the kernel that works hard to keep you from stumbling onto anything malicious, and sometimes it is too careful. However, shutting it down completely can result in a serious security downgrade, and you don’t want to do that. It would be wise to at least keep SELinux in permissive mode, where it won’t enforce its security policy but will actively inform you if there’s something you should be worried about.
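On distributions that ship SELinux (Fedora, CentOS and friends – Ubuntu uses AppArmor instead), checking and changing the mode looks like this:

# Show the current mode (Enforcing, Permissive or Disabled)
getenforce

# Switch to permissive mode until the next reboot
sudo setenforce 0

To make permissive mode stick across reboots, set SELINUX=permissive in /etc/selinux/config.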

4. Make Use of the Firewall

Maybe you’re not familiar with the fact that Linux has a very efficient firewall, but now that you know, you should use it all the time. The component is called iptables and it grants you a significant amount of control when it comes to keeping your network traffic in check. The firewall is usually disabled by default, but you can turn it on easily enough, depending on which distro of Linux you’ve got.
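As a taste of the control iptables gives you, here is a minimal sketch of an inbound policy – adjust it to your own setup before using anything like it:

# Always allow local loopback traffic
sudo iptables -A INPUT -i lo -j ACCEPT

# Keep established connections working
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Allow incoming SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Drop everything else coming in
sudo iptables -P INPUT DROP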

5. Old Passwords

Using old passwords is a recipe for potential disaster, because it makes it much easier for hackers to get into your device and wreak havoc. Linux has a solution for this problem: it can be configured to prevent an account from reusing any of its past five passwords. If you do try to reuse one of your old passwords, it will simply show an error and request a new one.
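This history check is handled by PAM rather than being enabled everywhere by default. One common way to turn it on – a sketch for Debian/Ubuntu, where the file is /etc/pam.d/common-password; the path varies by distribution – is the pam_pwhistory module:

# Remember the last 5 passwords and refuse their reuse
password required pam_pwhistory.so remember=5 use_authtok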

6. Security Software

Many people think that it’s overkill to have security software on top of an already very secure Linux, but it can do no harm. Having an antivirus program can hardly be a bad thing, and if all other system defenses fail, it will be there to save the day. Furthermore, if you’re concerned about your privacy when browsing the internet, consider getting a VPN service to encrypt your activity on the web and prevent surveillance.

7. Manual Account Lock

If there are users of the device who don’t inspire trust or simply won’t be using their account for a while, you can lock down their account in the OS. If the user of the locked account tries to log in, he/she will only get a message saying that the account isn’t available. Bear in mind that locking and unlocking accounts requires root privileges.
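Locking and unlocking is a one-liner with passwd (username below being whichever account you want to lock):

# Lock the account (disables password login by invalidating the stored hash)
sudo passwd -l username

# Unlock it again later
sudo passwd -u username

Note that this only blocks password authentication; a user with a valid SSH key, for example, could still log in unless you disable that separately.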

8. Think about Browser Security

Browsers are always potential security weak links unless you tend to them. No matter what browser you use, hackers can find a way to slither between the cracks, which is why you should take full advantage of security plugins that abound for any browser there is.

9. Encrypting Your Hard Disk

This is great protection against the unfortunate event of your laptop getting stolen or lost. Choosing to encrypt all the essential data on your device prevents anyone from misusing it, and you can rest assured that no unauthorized person can reach your confidential information, because they’ll need the full-disk encryption (FDE) password that only you know. The best part is that on modern hardware this encryption will barely slow down your computer’s performance.

10. You Need a Strong Password

This is another security 101 tip, which many Linux users forget about because they believe that the OS’s security can’t be breached. If you use simple, weak passwords, then a basic brute force attack can bring your security crumbling down. Don’t gamble with this aspect of your safety; use a strong password for your Linux OS.

If your computer’s security is one of your primary concerns, then using Linux will definitely give you some peace of mind. Just remember that you also have to put some effort into securing your device even more so that your OS becomes a fortress against cybercriminals.


Thomas Milva is 28 and has been an Information Security Analyst for over four years. He loves his job, but he also loves spending his time in nature, because he works from home, which sometimes means not getting enough fresh air. He also regularly writes for wefollowtech.com, where he often comments on the latest web trends in his articles. Thomas currently lives in Baton Rouge with his dog, two fish and his girlfriend.

vi(m) or emacs? Neither, just use nano!

December 29th, 2016

There is quite a funny, almost religious, debate within the Linux community between the two venerable command line text editors, vi (or Vim) and Emacs. Sure they have loads of features and plugins, but do you really need that from a command line editor? I think for the most part, or perhaps for most people at least, the answer is no. So I’m going to do something really stupid and publicly state, on a Linux related website, that you should ignore both of those text editors and use what many consider an inferior, more simplistic one instead: nano.

Sure it doesn’t have all of the fancy bells and whistles, but honestly half the time I’m using a command line editor it’s just to change one line of text. I don’t generally need the extra fluff, and when I do I can always use a graphical editor instead. Besides, it’s so easy to use… just look:

Create/edit a file:

  1. Open the file in nano
  2. Make your changes
  3. Save the changes

Open the file:

nano doc.txt

Make your changes:

Admit it, it’s true!

 

Save your changes:

No complicated key combinations to remember – it’s all listed at the bottom of the screen. Want to save, “Write Out,” your changes? Easy as Ctrl+O and then Enter to confirm.

Sooooooooo easy to use
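For what it’s worth, nano also takes a few handy command line options; two long-standing ones:

# Open a file with automatic line wrapping turned off (nice for config files)
nano -w /etc/fstab

# Jump straight to a given line number
nano +42 doc.txt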

So let’s all come together, stop the fanboy wars, and all embrace nano as the best command line text editor out there! 😛

 

Categories: Linux, Open Source Software, Tyler B

Going Linux Podcast (Still Going)

December 28th, 2016

Way back in the early days of The Linux Experiment I came across an excellent podcast called Going Linux which offered advice for beginners trying out Linux for the first time or just wanting to know more about Linux in general. I’m so happy to see that the podcast is still going strong (now at over 300 episodes!) and wanted to mention them again here because they were very helpful in our original experiment’s goal of going Linux.

Their mascot might actually be better than ours!

Please do check them out at http://goinglinux.com/ or subscribe to their podcast by following the steps here.

 

Categories: Podcast, Tyler B

Blast from the Past: Going Linux, Once and for All

December 28th, 2016

This post was originally published on December 23, 2009. The original can be found here.


With The Linux Experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all Linux, all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support that I had always enjoyed under windows was in fact a ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives, and then link them together with a software RAID. More on that process in a later post, once I figure out how to actually do it.
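For anyone trying the same thing today: the usual tool for software RAID on Linux is mdadm. A minimal sketch – assuming the two drives show up as /dev/sdb and /dev/sdc, and that you are fine wiping both – looks something like this:

# Build a RAID1 array out of the two drives
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the new array and mount it
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/raid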

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use Gnome’s Banshee player for the time being instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the ubuntu forums for information about a few of the necessary GStreamer plugins.

For now, my main tasks include setting up my RAID array, getting my ergonomic bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s VirtualBox so that I can continue to use Microsoft Visual Studio, the only Windows application that I’ve yet to find a Linux replacement for.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.

 

Help out a project with OpenHatch

December 27th, 2016

OpenHatch is a site that aggregates help postings from a variety of open source projects. It maintains a whole community dedicated to matching people who want to contribute with people who need their help. You don’t even need to be technical, like a programmer. If you want to lend your artistic talents to creating icons and logos for a project, or your writing skills to help out with documentation – both areas a lot of open source projects aren’t the best with – I’m sure they would be greatly appreciative.

So what are you waiting for? Get connected and give back to the community that helps create the applications you use on a daily basis!

Blast from the Past: Automatically put computer to sleep and wake it up on a schedule

December 26th, 2016

This post was originally published on June 24, 2012. The original can be found here.


Ever wanted your computer to be on when you need it but automatically put itself to sleep (suspended) when you don’t? Or maybe you just wanted to create a really elaborate alarm clock?

I stumbled across this very useful command a while back but only recently created a script that I now run to control when my computer is suspended and when it is awake.

#!/bin/sh
t=`date --date "17:00" +%s`
sudo /bin/true
sudo rtcwake -u -t $t -m on &
sleep 2
sudo pm-suspend

This creates a variable, t above, with an assigned time and then runs the command rtcwake to tell the computer to automatically wake itself up at that time. In the above example I’m telling the computer that it should wake itself up automatically at 17:00 (5pm). It then sleeps for 2 seconds (just to let the rtcwake command finish what it is doing) and runs pm-suspend which actually puts the computer to sleep. When run the computer will put itself right to sleep and then wake up at whatever time you specify.

For the final piece of the puzzle, I’ve scheduled this script to run daily (when I want the PC to actually go to sleep) and the rest is taken care of for me. As an example, say you use your PC from 5pm to midnight but the rest of the time you are sleeping or at work. Simply schedule the above script to run at midnight and when you get home from work it will be already up and running and waiting for you.
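If cron is doing that scheduling, the crontab entry is a one-liner. A sketch, assuming the script lives at /home/user/sleep-wake.sh, is executable, and is added to root’s crontab (sudo crontab -e) so the sudo calls inside it don’t prompt for a password:

# m h dom mon dow command
0 0 * * * /home/user/sleep-wake.sh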

I should note that your computer must have compatible hardware to make advanced power management features like suspend and wake work so, as with everything, your mileage may vary.

This post originally appeared on my personal website here.

 

Blast from the Past: Linux from Scratch: I’ve had it up to here!

December 23rd, 2016

This post was originally published on November 27, 2011. The original can be found here.


As you may be able to tell from my recent, snooze-worthy technical posts about compilers and makefiles and other assorted garbage, my experience with Linux from Scratch has been equally educational and enraging. Like Dave, I’ve had the pleasure of trying to compile various desktop environments and software packages from scratch, into some god-awful contraption that will let me check my damn email and look at the Twitters.

To be clear, when anyone says I have nobody to blame but myself, that’s complete hokum. From the beginning, this entire process was flawed. The last official LFS LiveCD has a kernel that’s enough revisions behind to cause grief during the setup process. But I really can’t blame the guys behind LFS for all my woes; their documentation is really well-written and explains why you have to pass fifty --do-not-compile-this-obscure-component-or-your-cat-will-crap-on-the-rug arguments.

Patch Your Cares Away

CC attribution licensed from benchilada

Read more…

5 apt tips and tricks

December 22nd, 2016

Everyone loves apt. It’s a simple command line tool to install new programs and update your system, but beyond the standard commands like update, install and upgrade did you know there are a load of other useful apt-based commands you can run?

1) Search for a package name with apt-cache search

Can’t remember the exact package name but you know some of it? Make it easy on yourself and search using apt-cache. For example:

apt-cache search Firefox

It lists all results for your search. Nice and easy!

2) Search for package information with apt-cache show

Want details of a package before you install it? Simple, just look it up with apt-cache show.

apt-cache show firefox

More details than you probably even wanted!

3) Upgrade only a specific package

So you already know that you can upgrade your whole system by running

apt-get upgrade

but did you know you can also upgrade a single package instead of the whole system? The upgrade command itself doesn’t accept package names, but install will happily upgrade a package that is already installed, and the --only-upgrade flag keeps it from pulling in anything new. For example to upgrade just firefox run:

apt-get install --only-upgrade firefox

4) Install specific package version

Normally when you apt-get install something you get the latest version available but what if that’s not what you wanted? What if you wanted a specific version of the package instead? Again, simple, just specify it when you run the install command. For example run:

apt-get install firefox=version

Where version is the version number you wish to install.
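Not sure what version strings are actually available? apt-cache can tell you (firefox below is just the example package again):

apt-cache policy firefox

The version you specify has to exactly match one of the versions listed in that output.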

5) Free up disk space with clean

When you download and install packages apt will automatically cache them on your hard drive. This can be useful for a number of reasons; for example some distributions use delta packages so that only what has changed between versions is re-downloaded. In order to do this it needs to have a base cached file already on your hard drive. However these files can take up a lot of space and oftentimes don’t get much use anyway. Thankfully there are two quick commands that free up this disk space.

apt-get clean

apt-get autoclean

Both of these clear out the cache, but the difference is that autoclean only gets rid of cached packages that can no longer be downloaded from your sources – typically older versions that have since been superseded. Those packages won’t ever be used again, so removing them is an easy, safe way to free up some space.
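If you’re curious how much space the cache is actually taking up before you clean it, check the standard apt cache directory:

du -sh /var/cache/apt/archives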

There you have it, you are now officially 5 apt commands smarter. Happy computing!

 

Blast from the Past: 10 reasons why Mint might not fail in India

December 21st, 2016

This post was originally published on July 7, 2010. The original can be found here.


Last evening while reading the SA forums, I encountered a thread about Linux and what was required to bring it to the general public. One of the goons mentioned a post that indicated ten reasons why Ubuntu wasn’t ready for the desktop in India. I kid you not – the most ridiculous reason was because users couldn’t perform the important ritual of right click/Refreshing on the desktop five or more times before getting down to work.

Here are Bharat’s reasons why Ubuntu fails, followed by why I think Mint might succeed in its place (while still employing his dubious logic). When I refer to Indian users, of course, I’m taking his word for it – he’s obviously the authority here.

GRUB Boot Loader does not have an Aesthetic Appeal.

Bharat complains about the visual appearance of Grub – how it does not create a good first impression. This is, of course, in spite of Windows’ horrible boot menu when there’s more than one operating system or boot option to select. Apparently Indian users all have full-color splash screens with aesthetic appeal for BIOS, video card and PCI add-in initialization as well; this is just the icing on the cake that makes them go “eurrrgh” and completely discount Ubuntu.

To improve relations with India and eliminate this eyesore, Mint has added a background image during this phase of boot. My good friend Tyler also informs me that there’s a simple option in the Mint Control Center called “Start-Up Manager” that allows easy configuration of grub to match a system’s native resolution and color depth.

Login Screen-Users are required to type in their username.

Again, another seemingly impenetrable barrier. Has nobody in India worked in an environment where typing in usernames AND passwords is required – like, for example, posting a blog entry on WordPress or signing into Gmail? In any event, Mint’s GNOME installation definitely gives a clickable list for this awfully onerous task.

Desktop-The Refresh option is missing!

I’m just going to directly lift this description as to the burning need for right click / Refresh:

What does an average Indian user do when the desktop loads in Windows?He rights clicks on the desktop and refreshes the desktop about 5-6 times or until he is satisfied.This is a ritual performed by most Indian Users after switching on the computer and just before shutting down the computer.
When this average user tries to perform his ‘Refresh’ ritual in Ubuntu,he is in for a rude shock.The Ubuntu Desktop does not have a Refresh Option or any other simliar option like Reload in the Right Click Menu.
So I advice Ubuntu Developers to include to a Refresh or a Reload option in the right click menu on the Desktop and in the Nautilus File Manager.The option should be equivalent of pressing Ctrl+R.As of now ,pressing Ctrl+R refreshes the Desktop in Ubuntu.

Mint’s developers have unfortunately not come around to this clearly superior way of thinking by default yet.

Read more…

Discover and listen to your next favourite band, all free!

December 20th, 2016

Jamendo is a wonderful website where artists can share their music and fans can listen, all for free. The music featured is all released under various Creative Commons licenses, so it is free to listen to and use: burn it onto a CD (if people still do that?), put it on your music device and take it on the go, or cut it up and use it in your own works like a podcast. In cases where there may otherwise be a license conflict, for example if you are advertising in your podcast or not releasing it under the same license as the music, Jamendo even facilitates buying a commercial license for your needs. Sure these features make it attractive for content creators looking for some free background music, but none of this is where Jamendo really shines. It shines in just how good of an experience it is to find and listen to new music.

A simple interface makes finding new music a joy

From fostering different music communities to putting together nice playlists and featuring new music, Jamendo always has something new to discover.

A few quick stats about Jamendo taken straight from their website:

  • A wide catalog of more than 500,000 tracks
  • More than 40,000 artists
  • Over 150 countries

You can find Jamendo’s website at https://www.jamendo.com.

 

Categories: Tyler B Tags: , , ,

Blast from the Past: Linux Saves the Day

December 19th, 2016

This post was originally published on December 23, 2009. The original can be found here.


Earlier this week I had an experience where using Linux got me out of trouble in a relatively quick and easy manner. The catch? It was kind of Linux’s fault that I was in trouble in the first place.

Around halfway through November my Linux install on my laptop crapped out, and really fucked things up hard. However, my Windows install wasn’t affected, so I started using Windows on my laptop primarily, while switching to an openSUSE VM on my desktop for my home computing needs.

About a week back I decided it was time to reinstall Linux on my laptop, since exams and my 600 hojillion final projects were out of the way. I booted into Win7, nuked the partitions being used by Linux and… went and got some pizza and forgot to finish my install. Turns out I hadn’t restarted my PC anywhere between that day and when shit hit the fan. When I did restart, I was informed to the merry tune of a PC Speaker screech that my computer had no bootable media.

… Well shit.

My first reaction was to try again, my second was to check to make sure the hard drive was plugged in firmly. After doing this a few times, I was so enraged about my lost data that I was about ready to repave the whole drive when I had the good sense to throw in a BartPE live CD and check to see if there was any data left on the drive. To my elation, all of my data was still intact! It was at this precise moment I thought to myself “Oh drat, I bet I uninstalled that darned GRUB bootloader. Fiddlesticks!”

However, all was not lost. I know that Linux is great and is capable of finding other OS installs during its install and setting them up in GRUB without me having to look around for a windows boot point and do it myself. 20 minutes and an openSUSE install later, everything was back to normal on my laptop, Win7 and openSUSE 11.1 included!

As we speak I’m attempting an in-place upgrade to openSUSE 11.2 so hopefully I get lucky and everything goes smoothly!

 

CoreGTK 3.18.0 Released!

December 18th, 2016

The next version of CoreGTK, version 3.18.0, has been tagged for release! This is the first version of CoreGTK to support GTK+ 3.18.

Highlights for this release:

  • Rebased on GTK+ 3.18
  • New supported GtkWidgets in this release:
    • GtkActionBar
    • GtkFlowBox
    • GtkFlowBoxChild
    • GtkGLArea
    • GtkModelButton
    • GtkPopover
    • GtkPopoverMenu
    • GtkStackSidebar
  • Reverted to using GCC as the default compiler (but clang can still be used)

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

Read more about this release here.

This post originally appeared on my personal website here.

Blast from the Past: An Experiment in Transitioning to Open Document Formats

December 16th, 2016

This post was originally published on June 15, 2013. The original can be found here.


Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which would be the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and at risk of losing their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they do differ enough that they are 1) not interoperable and 2) software written to import/export one format cannot be easily made to support the other. While OOXML too is an open standard there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates in saying:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much better time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think that, technologically, both standards are on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what’s needed in a document format.

What to do?

As discussed above there are two different, some would argue, competing open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two is simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have superseded computers and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google’s Google Docs which let you view and edit documents right within a web browser making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents, I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is because of these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem-free. For most things the conversion process should be as simple as opening the current format document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience, however, things are almost never quite as simple as that. From what I’ve seen any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.

If however you are starting fresh or just converting simple documents with little formatting there is no reason why you couldn’t make the jump to OpenDocument. For me personally I’m going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.

 

Using screen to keep your terminal sessions alive

December 15th, 2016

Have you ever connected to a remote Linux computer, using a… let’s say less than ideal WiFi connection, and started running a command only to have your ssh connection drop and your command killed off in a half-finished state? In the best of cases this is simply annoying, but if it happens during something of consequence, like a system upgrade, it can leave you in a dire state. Thankfully there is a really simple way to prevent this from happening.

Enter: screen

screen is a simple terminal application that basically allows you to create additional persistent terminals. So instead of ssh-ing into your computer and running the command in that session, you can start a screen session and then run your commands within that. If your connection drops you lose your view of the ‘screen’, but the session continues uninterrupted on the computer. Then you can simply re-connect and resume it.

Explain with an example!

OK fine. Let’s say I want to write a document over ssh. First you connect to the computer then you start your favourite text editor and begin typing:

ssh user@computer
user@computer's password:

user@computer: nano doc.txt

What a wonderful document!

Now if I lost my connection at this point all of my hard work would also be lost because I haven’t saved it yet. Instead let’s say I used screen:

ssh user@computer
user@computer's password:

user@computer: screen

Welcome to screen!

Now with screen running I can just use my terminal like normal and write my story. But oh no, I lost my connection! Now what will I do? Simply re-connect and re-run screen, telling it to resume the previous session.

ssh user@computer
user@computer's password:

user@computer: screen -r
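A couple of extras worth knowing: you can detach from a running session yourself with Ctrl+A followed by d, and if you have more than one session on the go you can list them and resume a specific one by its id (12345 below is just a made-up example):

# List running screen sessions
screen -ls

# Re-attach to a particular session
screen -r 12345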

Voila! There you have it – a simple way to somewhat protect your long-running terminal applications from random network disconnects.

 

Blast from the Past: Phoenix Rising

December 14th, 2016

This post was originally published on December 12, 2009. The original can be found here.


As we prepare to bring The Linux Experiment to a close over the coming weeks, I find that this has been a time of (mostly solemn) reflection for myself and others. At the very least, it’s been an interesting experience with various flavours of Linux and what it has to offer. At its peak, it’s been a roller-coaster of controversial posts (my bad), positive experiences, and the urge to shatter our screens into pieces.

Let me share with you some of the things I’ve personally taken away from this experiment over the last three-and-a-half months.

Fedora 12 is on the bleeding edge of Linux development

This has been a point of discussion on both of our podcasts at this point, and a particular sore spot with both myself and Tyler. It’s come to a place wherein I’m sort of… afraid to perform updates to my system out of fear of just bricking it entirely. While this is admittedly something that could happen under any operating system and any platform, it’s never been as bad for me as it has been under Fedora 12.

As an example, the last *six* kernel updates for me to both Fedora 11 and 12 combined have completely broken graphics capability with my adapter (a GeForce 8600 M GS). Yes, I know that the Fedora development team is not responsible for ensuring that my graphics card works with their operating system – but this is not something the average user should have to worry about. Tyler has also had this issue, and I think would tend to agree with me.

Linux is fun, too

Though there have been so many frustrating moments over the last four months that I have been tempted to just format everything and go back to my native Windows 7 (previously: release candidate, now RTM). Through all of this though, Fedora – and Linux in general – has never stopped interesting me.

This could just be due to the fact that I’ve been learning so much – I can definitely do a lot more now than I ever could before under a Linux environment, and am reasonably pleased with this – but I’ve never sat down at my laptop and been bored while playing around with getting stuff to work. In addition, with some software (such as Wine or CrossOver) I’ve been able to get a number of Windows games working as well. Linux can play, too!

Customizing my UI has also been a very nice experience. It looks roughly like Sasha’s now – no bottom panel, GnomeDo with Docky, and Compiz effects… it’s quite pretty now.

There’s always another way

If there’s one thing I’ve chosen to take away from this experiment it’s that there is ALWAYS some kind of alternative to any of my problems, or anything I can do under another platform or operating system. Cisco VPN client won’t install under Wine, nor will the Linux client version? BAM, say hello to vpnc.

Need a comprehensive messaging platform with support for multiple services? Welcome Pidgin into the ring.

No, I still can’t do everything I could do in Windows… but I’m sure, given enough time, I could make Fedora 12 an extremely viable alternative to Windows 7 for me.

The long and short of it

There’s a reason I’ve chosen my clever and rather cliche title for this post. According to lore, a phoenix is a bird that would rise up from its own ashes in a rebirth cycle after igniting its nest at the end of a life cycle. So is the case for Fedora 12 and my experience with Linux.

At this point, I could not see myself continuing my tenure with the Fedora operating system. For a Linux user with my relatively low level of experience, it is too advanced and too likely to brick itself with a round of updates to be viable for me. Perhaps after quite a bit more experience with Linux on the whole, I could revisit it – but not for a good long while. This is not to say it’s unstable – it’s been rock solid, never crashing once – but it’s just not for me.

To that end, Fedora 12 rests after a long and interest-filled tenure with me. Rising from the ashes is a new user in the world of Linux – me. I can say with confidence that I will be experimenting with Linux distributions in the future – maybe dipping my feet in the somewhat familiar waters of Ubuntu once more before wading into the deep-end.

Watch out, Linux community… here I come.