
Archive for the ‘Open Source Software’ Category

Alternative software: QupZilla Browser

March 19th, 2017

I’m back at it, looking for the latest open source alternative software gem. This time around I’m trying out the QupZilla web browser. Never heard of QupZilla before? Well, it’s another alternative web browser, but this one is built on top of Qt WebEngine, comes with a built-in ad blocker and strives to have a native look and feel for whichever distribution/desktop environment you’re running.

QupZilla on first launch

QupZilla supports a number of extensions that expand the browser’s capabilities. These are presented in an easy to use settings dialog.

Extensions for you! And you! And You!

I figured that it would be a good test to quickly see memory usage when loading up a number of tabs:

Number of tabs    Memory Usage
1                 184.4MB
2                 238.8MB
3                 275.3MB
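
If you want to roughly reproduce a measurement like this yourself, one way (assuming the browser process is actually named qupzilla – the process name is an assumption on my part) is to total up the resident set size that ps reports for it:

ps -C qupzilla -o rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'

Keep in mind that RSS double-counts memory shared between processes, so treat numbers like these as rough comparisons rather than exact figures.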

While it’s hard to say if that’s a lot of memory usage or not, it does seem like quite a bit of RAM to use up for just a single tab. The other thing I noticed was that the browser seems to use quite a bit of CPU when initially rendering a web page. I don’t know if that’s just related to my particular configuration, but I regularly saw spikes up to ~70% CPU, which seems excessive.

Overall though QupZilla seemed to work fine for the websites I tried in it. I suppose if you’re looking to run a browser that no one else is then this one is for you, but just like Midori and Konqueror I don’t have a compelling reason to recommend this for everyday use.

Blast from the Past: Do something nice for a change

March 5th, 2017

This post was originally published on October 6, 2010. The original can be found here.


Open source software (OSS) is great. It’s powerful, community focused and, let’s face it, free. There is not a single day that goes by that I don’t use OSS. Between Firefox, Linux Mint, Thunderbird, Pidgin, Pinta, Deluge, FileZilla and many, many more, there is hardly ever a situation where there isn’t an OSS tool for the job. Unfortunately, for all of the benefits that OSS brings me in my daily life I find, on reflection, that I hardly ever do anything to contribute back. What’s worse is that I know I am not alone in this. Many OSS users out there just use the software because it happens to be the best for them. And while there is absolutely nothing wrong with that, many of these individuals could be contributing back. Now obviously I don’t expect everyone, or even half for that matter, to contribute back, but I honestly do think that the proportion of people who do could be much higher.

Why should I?

This is perhaps the easiest to answer. While you don’t have to contribute back, you should if you want to personally make the OSS you love even better.

How do I contribute?

Contributing to a project is incredibly easy. In fact, in the vast majority of cases you don’t need to write code, debug software or even do much more than simply use the software in question. What do I mean by this? Well, the fact that we here on The Linux Experiment write blog posts praising (or tearing to shreds with constructive criticism) various OSS projects is one form of contributing. Did I lose you? Every time you mention an OSS project you bring attention to it. This attention in turn draws more users and developers to the project and grows it larger. Tell your family, write a blog post, Digg stories about OSS or just tell your friends about “this cool new program I found”.

There are many other very easy ways to help out as well. For instance, if you notice the program is doing something funky then file a bug. It’s a short process that is usually very painless and quickly brings real-world results. I have found that it is also a very therapeutic way to get back at that application that just crashed and lost all of your data. Sometimes you don’t even have to be the one to file it; simply bringing it up in a discussion, such as a forum post, can be enough for others to look into it for you.

Speaking of forum posts, answering new users’ questions about OSS projects can be an excellent way to both spread use of the project and identify problems that new users are facing. The latter could in turn be corrected through subsequent bug reports or feature requests. Along the same lines, documentation is something that some OSS projects are sorely missing. While it is not the most glamorous job, documentation is key to providing an excellent experience to a first time user. If you know more than one language, I can’t think of a single OSS project that couldn’t use your help making translations so that people all over the world can begin to use the software.

For the artists among us there are many OSS projects that could benefit from a complete artwork makeover. As a programmer myself I know all too well the horrors of developer artwork. Creating some awesome graphics, icons, etc. for a project can make a world of difference. Or if you are more interested in user experience and interface design there are many projects that could also benefit from your unique skills. Tools like Glade can even allow individuals to create whole user interfaces without writing a single line of code.

Are you a web developer? Do you like making pretty websites with fancy AJAX fluff? Offer to help the project by designing an attractive website that lures visitors to try the software. You could be the difference between this and this (no offense to the former).

If you’ve been using a particular piece of software for a while, and feel comfortable trying to help others, hop on over to the project’s IRC channel. Help new users troubleshoot their problems and offer suggestions of solutions that have worked for you. Just remember: nothing turns off a new user like an angry IRC asshat.

Finally if you are a developer take a look at the software you use on a daily basis. Can you see anything in it that you could help change? Peruse their bug tracker and start picking off the low priority or trivial bugs. These are often issues that get overlooked while the ‘full time’ developers tackle the larger problems. Squashing these small bugs can help to alleviate the 100 paper cuts syndrome many users experience while using some OSS.

Where to start

Depending on how you would like to contribute your starting point could be pretty much anywhere. I would suggest however that you check out your favourite OSS project’s website. Alternatively jump over to an intermediary like OpenHatch that aggregates all of the help postings from a variety of projects. OpenHatch actually has a whole community dedicated to matching people who want to contribute with people who need their help.

I don’t expect anyone, and certainly not myself, to contribute back on a daily basis. I will however personally start by setting a recurring event in my calendar that reminds me to contribute, in some way or another, every week or month. If we all did something similar imagine the rapid improvements we could see in a short time.

Blast from the Past: VOIP with Linode, Ubuntu, Asterisk and FreePBX

March 1st, 2017

This post was originally published on October 29, 2010. The original can be found here.


Overview and Introduction

I’ve been dabbling with managing a VOIP server for the past year or so, using CentOS, Asterisk and FreePBX on a co-located server. Recently Dave and I needed to move to our own machine, and decided to use TEH CLOUD to reduce management and get a fresh start. There are hundreds of hosts out there offering virtual private servers (VPS’s). We’ve standardized on Linode for our small business for a few reasons. While I don’t want to sound like a complete advertisement, I’ve been incredibly impressed with them:

  • Performance. The host systems at Linode run at least 4-way 2GHz Xeon dual-core CPUs (I’ve seen higher as well) and you’re guaranteed the RAM you pay for. Pricing is generally based on how much memory you need.
  • Pricing. For a 512MB Linode, you pay $19.95 US per month. Slicehost (a part of Rackspace, and a Linode competitor) charges the same amount for a 256MB slice. Generally you want at least 512MB RAM for a Linux machine that’s not a test/development box.
  • Features. If you have multiple VMs in the same datacenter, you can assign them private IPs and internal traffic doesn’t count toward your bandwidth allowance. Likewise, bandwidth is pooled among all your VMs; so buying two VMs with 200GB bandwidth each gives 400GB for all your systems.

With full root access and the Linux distribution of your choice, it’s very easy to set up and tear down VMs.

Why VOIP?
When people hear VOIP, they generally assume either a flaky enterprise system with echoing calls or something like Skype. Properly configured, a VOIP system offers a number of really interesting features:

  • Low-cost long distance and international calling. The provider we use, voip.ms, offers outgoing calls for $0.0052 per minute to Canada and $0.0105/minute to the US on their value route.
  • Cheap phone numbers – direct inward dialing – are available for $0.99 per month in your region. These phone numbers are virtual and can be configured to do nearly anything you want. Incoming calls are $0.01/minute, and calls between voip.ms numbers are free.
  • Want to take advantage of cheap long distance from your cell phone? Set up a Direct Inward System Access path, which gives you a dial tone for making outgoing calls when you call a local number. Put your DID number on your My5 list, and you’re set to reduce bill overages.
  • Voicemail becomes much more useful when the VOIP server sends you an email with a WAV attachment and caller ID information.
  • Want to set up an interactive voice response menu, time conditions, blacklist telemarketers, manage group conferences or have witty hold music? All available with FreePBX and Asterisk.


Configuration
I opted to use Ubuntu 10.04 LTS on the server, since there are handy directions for configuring Asterisk and FreePBX. The guide is a good starting point, but some adjustments had to be made to the article for the latest versions of Ubuntu and Asterisk:

  • When installing Asterisk, make sure to install the sox package for additional sound and recording support.
  • Back up your /etc/asterisk/modules.conf file before installing FreePBX, and then restore it after the installation is complete. The FreePBX installation seems to clobber this file.
  • Replace all instances of the asteriskcdr database with asteriskcdrdb for proper call reporting functionality. Likewise, you’ll have to recompile and install the asterisk-addons package as per Launchpad bug 560656:
    cd /usr/src
    apt-get build-dep asterisk-mysql
    apt-get -b source asterisk-mysql
    dpkg -i asterisk-mysql*.deb
  • Don’t use the amportal script for managing Asterisk/FreePBX; use
    /etc/init.d/asterisk [start|stop|restart]

    instead.

Let It Ring
There’s plenty of great FreePBX documentation available online – you shouldn’t have a problem getting up and running once the installation is finished. As always, you should follow best server security practices:

  • Enforce username/password authentication with .htaccess and htpasswd and HTTPS for your management console, or only expose FreePBX administration over localhost/127.0.0.1 and tunnel in. There are plenty of articles on configuring OpenSSL and Apache2.
  • Consider running SSH on an alternate port (not 22), and denying direct logins from the root user account. Enforce strong passwords and use tools such as fail2ban and DenyHosts to limit SSH attacks.
  • Use a firewall. Ubuntu’s ufw is very simple to manage (example commands are sketched just after this list). For an Asterisk server, you’ll want to allow UDP port 5060 and ports 10000-20000 (for voice traffic), or a range defined in
    /etc/asterisk/rtp.conf
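
As a rough sketch, the ufw commands for the default ports mentioned above would look something like this (adjust the RTP range to match whatever you actually set in rtp.conf):

sudo ufw allow 5060/udp
sudo ufw allow 10000:20000/udp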

Feel free to post comments here on server setup or general VOIP questions, and I’ll do my best to help out!

Blast from the Past: Very short plug for PowerTOP

February 25th, 2017

This post was originally published on July 4, 2010. The original can be found here.


Recently I decided to try out PowerTOP, a Linux power saving application built by Intel. I am extremely impressed by how easy it was to use and the power savings I am now basking in.

PowerTOP is a terminal application that first scans your computer for a number of things during a set interval. It then reports back which processes are taking up the most power and offers you some options to improve your battery life. All of these options can literally be enabled at a press of a button. It’s sort of like an experience I once had with Clippy in Microsoft Word; “it seems you are trying to save power, let me help you…” After applying a few of the suggestions the estimated battery life on my laptop went from about 3 and a half hours to almost 5 hours. In short, I would highly recommend everyone at least try out PowerTOP. I’m not promising miracles but at the very least it should help you out some.
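
If you want to give it a try yourself, it is usually just a package install away and needs to be run with root privileges (the package name below assumes a Debian/Ubuntu-based distribution):

sudo apt-get install powertop
sudo powertop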

Blast from the Past: Vorbis is not Theora

February 24th, 2017

This post was originally published on April 21, 2010. The original can be found here.


Recently I have started to mess around with the Vorbis audio codec, commonly found within the Ogg media container. Unlike Theora, which I had also experimented with but won’t post the results for fear of a backlash, I must say I am rather impressed with Vorbis. I had no idea that the open source community had such a high quality audio codec available to it. Previously I had always sort of assumed that Vorbis was only regarded as ‘so great’ within the community for lack of options. However, after some comparative tests between Vorbis and MP3 I must say I am a changed man. I would now easily recommend Vorbis as a quality choice if it fits your use case.

What is Vorbis?

As I mentioned above, Vorbis is the name of a very high quality free and open source audio codec. It is analogous to MP3 in that you can use it to shrink the size of your music collection but still retain very good sound. Vorbis is unique in that it only offers a VBR mode, which allows it to squeeze the best sound out of the fewest number of bits by lowering the bitrate during sections of silence or unimportant audio. Additionally, unlike other audio codecs, Vorbis audio is generally encoded at a supplied ‘quality’ level rather than a fixed bitrate. Currently quality level 4 works out to roughly 128kbit/s, but as the encoders mature they may be able to achieve the same quality level at a lower bitrate, saving you storage space, bandwidth, etc.

So Vorbis is better than MP3?

Obviously when it comes to comparing the relative quality of competing audio codecs it must always be up to the listener to decide. That being said I firmly believe that Vorbis is far better than MP3 at low bitrates and is, at the very least, very comparable to MP3 as you increase the bitrate.

The Tests

I began by grabbing a FLAC copy of the Creative Commons album The Slip by Nine Inch Nails here. I chose FLAC because it provided me with the highest quality possible (lossless CD quality) from which to encode the samples. Then, looking around at some Internet radio websites, I decided that I should test the following bitrates: 45kbit/s, 64kbit/s, 96kbit/s, and finally 128kbit/s (for good measure). I encoded them using only the default encoder settings and the following terminal commands:

For MP3 I used LAME and the following command. I chose average bitrate (ABR) which is really just VBR with a target, similar to Vorbis:

flac -cd {input file goes here}.flac | lame --abr {target bitrate} - {output file goes here}.mp3

For Vorbis I used OggEnc and the following command:

oggenc -b {target bitrate} {input file goes here}.flac -o {output file goes here}.ogg
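
As a concrete example, producing the 96kbit/s samples below would look something like this (the filename is just an assumption for illustration; substitute the actual FLAC file for the track):

flac -cd 04-discipline.flac | lame --abr 96 - discipline-96.mp3
oggenc -b 96 04-discipline.flac -o discipline-96.ogg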

Results

I think I would be a hypocrite if I didn’t tell you to just listen for yourself… The song in question is track #4, Discipline.

Note: if you are using Mozilla Firefox, Google Chrome, or anything else that supports HTML5/Vorbis, you should be able to play the Vorbis file right in your browser.

45kbit/s MP3(1.4MB) Vorbis(1.3MB)

64kbit/s MP3(2.0MB) Vorbis(1.9MB)

96kbit/s MP3(2.9MB) Vorbis(2.8MB)

128kbit/s MP3(3.8MB) Vorbis(3.6MB)

Setup your own VPN with OpenVPN

February 6th, 2017

Using the excellent Digital Ocean tutorial as my base I decided to setup an OpenVPN server on a Linux Mint 18 computer running on my home network so that I can have an extra layer of protection when connecting to those less than reputable WiFi hotspots at airports and hotels.

While this post is not meant to be an in-depth guide (you should use the original for that), it is meant to allow me to look back at this at some point in the future and easily re-create my setup.

1. Install everything you need

sudo apt-get update
sudo apt-get install openvpn easy-rsa

2. Setup Certificate Authority (CA)

make-cadir ~/openvpn-ca
cd ~/openvpn-ca
nano vars

3. Update CA vars

Set these to something that makes sense:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Set the KEY_NAME to something that makes sense:

export KEY_NAME="server"

4. Build the CA

source vars
./clean-all
./build-ca

5. Build server certificate and key

./build-key-server server
./build-dh
openvpn --genkey --secret keys/ta.key

6. Generate client certificate

source vars
./build-key-pass clientname

7. Configure OpenVPN

cd ~/openvpn-ca/keys
sudo cp ca.crt ca.key server.crt server.key ta.key dh2048.pem /etc/openvpn
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | sudo tee /etc/openvpn/server.conf

Edit config file:

sudo nano /etc/openvpn/server.conf

Uncomment the following:

tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "route 192.168.10.0 255.255.255.0"
push "route 192.168.20.0 255.255.255.0"

Add the following:

key-direction 0
auth SHA256

Edit config file:

sudo nano /etc/sysctl.conf

Uncomment the following:

net.ipv4.ip_forward=1

Run:

sudo sysctl -p

8. Setup UFW rules

Run:

ip route | grep default

To find the name of the network adaptor. For example:

default via 192.168.x.x dev enp3s0  src 192.168.x.x  metric 202

Edit config file:

sudo nano /etc/ufw/before.rules

Add the following, substituting your network adaptor name for enp3s0, above the bit that says # Don’t delete these required lines…

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN clients out the physical interface (enp3s0 here)
-A POSTROUTING -s 10.8.0.0/8 -o enp3s0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Edit config file:

sudo nano /etc/default/ufw

Change DEFAULT_FORWARD_POLICY to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"

Allow the OpenVPN port and OpenSSH through ufw, then restart ufw to apply the changes:

sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw disable
sudo ufw enable

9. Start OpenVPN Service and set it to enable at boot

sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server
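
To sanity check that everything came up, the service status and the new tunnel interface can be inspected (tun0 is the usual interface name, but yours may differ):

sudo systemctl status openvpn@server
ip addr show tun0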

10. Setup client configuration

mkdir -p ~/client-configs/files
chmod 700 ~/client-configs/files
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

Edit config file:

nano ~/client-configs/base.conf

Replace remote server_IP_address port with the external IP address and port you are planning on using. The IP address can also be a hostname, such as a re-director.
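
For example, assuming a placeholder hostname and the default OpenVPN port used earlier, the line would end up looking like this:

remote vpn.example.com 1194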

Add the following:

cipher AES-128-CBC
auth SHA256
key-direction 1

Uncomment the following:

user nobody
group nogroup

Comment out the following:

#ca ca.crt
#cert client.crt
#key client.key

11. Make a client configuration generation script

Create the file:

nano ~/client-configs/make_config.sh

Add the following to it:

#!/bin/bash

# First argument: Client identifier

KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

And mark it executable:

chmod 700 ~/client-configs/make_config.sh

12. Generate the client config file

cd ~/client-configs
./make_config.sh clientname

13. Transfer client configuration to device

You can now transfer the client configuration file found in ~/client-configs/files to your device.
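
For example, from the server you could push it to a laptop over SSH (the username and hostname here are placeholders):

scp ~/client-configs/files/clientname.ovpn user@laptop:~/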


This post originally appeared on my personal website here.


Blast from the Past: Getting KDE on openSUSE is like playing Jenga

February 2nd, 2017

This post was originally published on October 16, 2009. The original can be found here.


As part of our experiment, everyone is required to try a different desktop manager for two weeks. I chose KDE, since I’ve been using GNOME since I installed openSUSE. However, I’ve found that while trying to get a desktop manager set up one wrong move can cause everything to fall apart.

Switching from GNOME:

This was fairly simple. I started up YaST Software Management, changed my filter from “Search” to “Patterns”, and found the Graphical Environments section. Here I right clicked “KDE Base System”, and selected install. Clicking accept installed the kdebase and kdm packages, with a slew of other KDE default programs. Once this was done, I logged out of my GNOME session, and selected KDE4 as my new login session. My system was slightly confused and booted into GNOME again, so I restarted. This time, I was met with KDE 4.1.

My Thoughts on KDE 4.1:

As much as I had hated the Qt look [which I erroneously call the ‘QuickTime’ look, due to its uncanny similarity to the QuickTime app], the desktop was beautiful. The default panel was a very slick, glossy black, which looked quite nice. The “lines” in each window title made the windowing system very ugly, though, so I set out to turn them off. It’s a fairly easy process:

KDE Application Launcher > Configure Desktop > Appearance > Windows > Uncheck the “Show stripes next to the title” box.

Once completed, my windows were simple and effective, and slightly less chunky than the default GNOME theme, so I was content.

Getting rid of the openSUSE Branding:

openSUSE usually draws much ire from me – so it’s not hard to imagine that I’d prefer not to have openSUSE branding on every god damn application I run, least of all my desktop manager. From YaST Software Management I searched for openSUSE and uninstalled every package that had the words “openSUSE” and “branding”. YaST automatically replaces these packages with alternate “upstream” packages, which seem to be the non-openSUSE themes/appearances. Once these were gone, things looked a lot less gray-and-green, and I was happy.

Oh god what happened to my login screen:

A side effect of removing all those openSUSE packages was that my login screen took a trip back in time, to the Windows 3.1 era. It was a white window on a blue background with a Times New Roman-esque font. After a bit of researching on the GOOG, I found out that this was KDE3 stepping up to take over for my openSUSE branding. Uninstalling the package kde3base or whatever the shit it’s called forced KDE4 to take over, and everything was peachy again.

Installing my Broadcom Wireless Driver

In order to install my driver, I followed this guide TO THE LETTER. Not following this guide actually gave YaST a heart attack and created code conflicts.

KMix Being Weird

KMix magically made my media buttons on my laptop work, however it occasionally decided to change what “audio device” the default slider was controlling. Still, having the media buttons working was a HUGE plus.

Getting Compositing to Work

I did not have a good experience with this. In fact, by fucking around with settings, I ended up bricking my openSUSE install entirely. So alas, I ended up completely re-installing openSUSE. Regardless, to install the ATI drivers, following the guide here using the one-click install method worked perfectly. After finally getting my drivers, turning on compositing was simple:

KDE Application Launcher > Configure Desktop > Appearance > Desktop > Check the “Enable Desktop Effects” box.

From KDE4.1 to KDE4.3

While KDE was really working for me, the notifications system was seriously annoying. Every time my system had an update, or I received a message in Kopete, an ugly, plain, slightly off-center gray box would appear at the top of my screen to inform me. Tyler informed me that this was caused by the fact that I wasn’t running the most recent version of KDE4. A quick check showed me that openSUSE isn’t going to use KDE4.3 until openSUSE 11.2 launches, however you can manually add the KDE 4.3 repositories to YaST, as shown on the openSUSE KDE Repository page.

After adding these repositories, I learned a painful lesson in upgrading your display manager. Do not, under any circumstances, attempt a display manager upgrade/switch until you have an hour to spare and enough battery life to last the whole time. I did not, and even though I cancelled the install about 60 seconds in, I found that YaST had already uninstalled my display manager. Upon restart, I was met with a terminal.

From the terminal, I used the command line version of YaST to completely remove kdebase4 and kdm from my system. After that, re-installing the KDE4.3 version of kdm from YaST in the terminal installed all the other required applications. However, there are a shitload of dependency issues you’ve got to sort through, and unfortunately the required action is not the same for each application.

KDE4.3

KDE4.3 is absolutely gorgeous, and I’ve had no complaints with it. KMix seems to have reassigned itself again, but this time it assigned itself correctly. Removing the openSUSE branding was the same, but by default the desktop theme used is Air. I prefer the darker look of Oxygen, so I headed over to my desktop to fix it by following these steps:

Desktop > Right Click > Plain Desktop Settings > Change the Desktop Theme from Air to Oxygen.

Concluding Thoughts

Now that all these things are sorted out, I’m surprisingly impressed with KDE, and I might even keep it at the end of this test period for our podcast.

Let me know if you’ve ever had to change desktop managers and your woes in the comments!


Blast from the Past: Installing Gnome Do with Docky on openSUSE

January 19th, 2017

This post was originally published on September 28, 2009. The original can be found here.


Before I switched to Windows 7 for my laptop, I used a dock application called RocketDock to manage my windows and commonly used desktop shortcuts. I liked being able to see my whole desktop ever since I found a good wallpaper site. Back when I rolled Ubuntu, I installed an application called Gnome Do. It’s a Quicksilver-like program that just works. However, the newest feature of Gnome Do that I loved was its Docky theme. It puts a dock similar to RocketDock on the bottom of your screen, and integrates its OS searching features right into the dock.

I decided to install the application from YaST, the default system administration tool. It indexes a fairly large number of repositories, and it did have Gnome Do. A few minutes later I had the app running, but unfortunately the version was way out of date. Gnome Do is on roughly version 0.8.x, and YaST gave me 0.4.x.

So off I went trying to find a .rpm for Gnome Do that would install. I was met with a lot of failure, with a ton of dependencies unable to be resolved and so on. Next I tried the openSUSE file from Gnome Do’s homepage, but for some reason the servers were down and I was unable to install that way either.

Frustrated and not knowing what to do next, I decided to hop on IRC and see if anyone in #SUSE on irc.freenode.net could help me out. They told me about this service called Webpin. There I found a .ymp [which is an openSUSE specific installer file like a .deb or .rpm] for Gnome Do, and a ymp for Gnome Do’s plugins. Downloading and opening the files installed the programs without any problems. The last step I had to take to enable Docky was to install compiz and enable desktop compositing. After that, a quick trip to Gnome Do’s preference dialog allowed me to use the Docky theme, and I was up and running!

Shove ads in your pi-hole!

January 8th, 2017

There are loads of neat little projects out there for your Raspberry Pi from random little hacks all the way up to full scale home automation and more. In the past I’ve written about RetroPie (which is an awesome project that you should definitely check out!) but this time I’m going to take a moment to mention another really cool project: pi-hole.

Pi-hole, as their website says, is “a black hole for Internet advertisements.” Essentially it’s software that you install on your Raspberry Pi (or other Linux computer) that then acts as a local DNS proxy. Once it is set up and running you can point your devices to it individually or just tell your router to use it instead (which then applies to everything on the network).

Then as you’re browsing the internet and come across a webpage that is trying to serve you ads, pi-hole will simply block the DNS request for the ad from actually resolving and instead return a blank image or web page, meaning that the site simply can’t download the ad to show you. Voila! A universal ad blocker for your entire network and all of your devices! Even better – because you’re blocking the ads from being downloaded in the first place, your browsing speeds can sometimes be improved as well.
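
If you want to see this in action, you can query the pi-hole directly for a known ad-serving domain with dig (the IP address below is just a placeholder for your pi-hole’s address); a blocked domain comes back pointing at the pi-hole itself, or at an unusable address, rather than at the real ad server:

dig @192.168.1.10 doubleclick.net +short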

Pi-hole dashboard

You can monitor or control which domains are blocked all from a really nice dashboard interface and see the queries come into pi-hole almost in real time.

After running pi-hole for a week now I’m quite surprised with how effective it has been at removing ads. It’s legitimately pleasant being able to browse the web without seeing ads everywhere or having ad blockers break certain websites. If that sounds like something you too might be interested in then pi-hole might be worth a look.


This post originally appeared on my personal website here.

Fixing Areca Backup on Ubuntu 16.04 (and related distributions)

December 30th, 2016

Seems like I’m at it again, this time fixing Areca Backup on Ubuntu 16.04 (actually Linux Mint 18.1 in my case). For some reason when I download the current version (Areca 7.5 for Linux/GTK) and try and run the areca.sh script I get the following error:

tyler@computer $ ./areca.sh
ls: cannot access ‘/usr/java’: No such file or directory
No valid JRE found in /usr/java.

This is especially odd because I quite clearly do have Java installed:

tyler@computer $ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)

Now granted this may be an issue exclusive to OpenJDK, or just this version of OpenJDK, but I’m hardly going to install Sun Java just to make this program work.

After some digging I narrowed it down to the look_for_java() function inside of the areca_run.sh script located in the Areca /bin/ directory. Now I’m quite sure there is a far more elegant solution than this but I simply commented out the vast majority of this function and hard coded the directory of my system’s Java binary. Here is how you can do the same.

First locate where your Java is installed by running the which command:

tyler@computer $ which java
/usr/bin/java

As you can see from the output above my java executable exists in my /usr/bin/ directory.

Next open up areca_run.sh inside of the Areca /bin/ directory and modify the look_for_java() function. In here you’ll want to set the variable JAVA_PROGRAM_DIR to your directory above (i.e. in my case it would be /usr/bin/) and then return 0 indicating no error. You can either simply delete the rest or just comment out the remaining function script by placing a # character at the start of each line.

#Method to locate matching JREs
look_for_java() {
JAVA_PROGRAM_DIR="/usr/bin/"
return 0
# IFS=$'\n'
# potential_java_dirs=(`ls -1 "$JAVADIR" | sort | tac`)
# IFS=
# for D in "${potential_java_dirs[@]}"; do
#    if [[ -d "$JAVADIR/$D" && -x "$JAVADIR/$D/bin/java" ]]; then
#       JAVA_PROGRAM_DIR="$JAVADIR/$D/bin/"
#       echo "JRE found in ${JAVA_PROGRAM_DIR} directory."
#       if check_version ; then
#          return 0
#       else
#          return 1
#       fi
#    fi
# done
# echo "No valid JRE found in ${JAVADIR}."
# return 1
}

Once you’ve saved the file you should be able to run the normal areca.sh script now without encountering any errors!


This post originally appeared on my personal website here.

Resizing and Correctly Orienting Images with Mogrify

December 30th, 2016

Lately, I’ve been publishing woodworking tutorials over on my own blog, and I’ve found myself needing to resize and correctly orient images taken with my iPhone prior to posting.

If you want to do anything with images on Linux, the best place to start is with ImageMagick, an all-in-one utility that does just about everything under the sun with images, and has fantastic documentation to boot.

After a bit of research, I came up with this command:

mogrify -auto-orient -resize 584x438 -strip -quality 85% *.jpg

Here’s a rundown of exactly what this does:

  • mogrify modifies the images in the source directory. This command will overwrite your image files, so use it on a copy of them if you care to keep the original sized images
  • auto-orient fixes image orientation. When you take a picture with your iPhone, it writes the image directly to storage in the orientation that the image sensor was in at the time the image was taken, and then writes that orientation information to the image’s metadata. This makes taking pictures faster, because the phone doesn’t have to rotate the image prior to writing the file, but can cause images to show up in all sorts of creative orientations, depending on the software tool chain that you use to get the image onto your website
  • resize resizes the image to the specified size (expressed in pixels). If the image can’t be resized to the exact dimensions specified, it will be taken to the nearest possible dimensions, while maintaining aspect ratio, so the end image won’t be all stretched out
  • strip removes all metadata from the final images. This is handy for privacy, since you probably don’t want the GPS coordinates of your home being published online, but also removes the pesky orientation metadata, which is good, because the auto-orient command already acted on it, and you wouldn’t want an image to be double-rotated at display time
  • quality specifies the quality of the final jpg. I’ve found that 85% works well, but if you’re quality-conscious, you could set this to 100% and it won’t take much extra time to convert the images
  • *.jpg executes the command against all .jpg images in the current directory

So there you have it! A simple one-line command that rotates and resizes images prior to posting them. Oh, and if you’re the type that tends to forget complicated command-line syntax, don’t forget about the history command:

jon@IDEAPAD-UBUNTU:~$ history | grep mogrify
1691 mogrify -auto-orient -resize 584x438 -strip -quality 85% *.jpg
1696 history | grep mogrify

It’s an easy way to find that pesky command that you know you’ve used in the recent past, without resorting to looking up all of the command line switches again.

Cheers, and happy image uploading!

This post was adapted from the original post on my blog at jonathanfritz.ca


Blast from the Past: Linux Media Players Suck – Part 1: Rhythmbox

December 30th, 2016

This post was originally published on May 5, 2010. The original can be found here.


The state of media players on Linux is a sad one indeed. If you’re a platform enthusiast, you may want to cover your ears and scream “la-la-la-la” while reading this article, because it will likely offend your sensibilities. In fact, the very idea behind this series is to shake up the freetards’ world view, and to make them realize that a decent Winamp or iTunes clone need not be the end of the story for media management and playback on Linux.

This article will concentrate on lambasting Rhythmbox, the default jukebox software of the GNOME desktop environment. Subsequent posts will give the same treatment to other players in this sphere, including Banshee, Amarok, and Songbird (if I can find a copy that will still build on Linux). If you’re a user of media players on Linux, keep your own annoyances firmly in mind, and if I don’t mention them, please share in the comments. If you’re a developer for one of these fine projects, try to keep an open mind and get inspired to do better. A media player is not a hard thing to build, and I do believe that together, we can do better.

For the remainder of this article, please keep in mind that I am currently running Rhythmbox under Kubuntu 9.10, so you’ll see it rendered with qt widgets in all of my screen shots. This doesn’t affect the overall performance of the app, but leads nicely into my first complaint:

  1. Poor Cross-Platform Support: There are basically two desktop environments that matter in the Linux world, GNOME and KDE. Under GNOME, Rhythmbox has a reasonably nice icon set that is comparable to other media players. Under KDE, the qt re-skinning replaces those icons with a horrible set of mismatched images that really make the program look second-rate:
    Isn't this shit awful?

    As you can see, these icons look terrible. Note that there isn’t even an icon for ‘Burn’ and the icon for ‘Browse’ is a fucking question mark.

    This extends to the CD burning and help features too. They rely on programs like gnome-help and brasero to work, but don’t install them with the media player, so when I try to access these features under KDE, I just get error messages. Nice.

    Honestly, who packaged this thing?

    This is just plain stupid. Every package manager has the concept of dependencies, so why doesn’t Rhythmbox use them?

  2. The Player Starts in the Tray: Under what circumstances would it be considered useful for a media player to automatically minimize itself to the system tray on startup? It doesn’t begin to play automatically. The first thing that I always do is click on the tray icon to maximize it so that I can select some music to start playing. Way to start the user experience off on the wrong foot.
  3. Missing Files View: This one is just plain stupid. Whenever I delete a file from my hard drive, it shows up under the ‘Missing Files’ view, even though my intent was clearly to remove the file from my library. Further, I use Rhythmbox to put music on my BlackBerry. Whenever I fill it with music, I first delete the files on it. Those files that I deleted from my mobile device? Yeah, they show up under ‘Missing Files’ too, as if they were a legitimate part of my library! So this view ends up being like a global garbage bin that I have to waste my precious time emptying on occasion, and serves no useful purpose in the mean time. Yeah, I deleted those files. What are you going to do about it?

    Seriously, why the hell are these files in here?

    As you can see, I’ve highlighted the fact that Rhythmbox is telling me that these files are missing from my mobile device. No shit.

  4. Shared Libraries that I can’t Play: So we’ve known for awhile now that Apple broke the ability to connect to iTunes via the DAAP protocol, and that it’s not possible to connect to a shared iTunes library from Linux. If that’s the case, why does Rhythmbox still show these libraries as available? And how come it shows my library under this node? Why would I listen to my own shared library? Finally, I’ve found that even if I’m running Rhythmbox on another machine, I still can’t connect to my shared library. This feature seems to be downright broken – so why is it still in the build?
  5. The GUI and Backend are on One Thread: I keep about half of my music collection as lossless FLAC files. When I want to rip these files to my portable media device, they need to be converted to the Mp3 format. Turns out that Rhythmbox thinks it appropriate to transcode these files on the same thread that it uses to update its GUI, so that while this process is taking place, the app becomes laggy, and at times, downright unusable. Further, the application doesn’t seem to give me any control over the bitrate that my songs are transcoded to. Fuck!
  6. Lack of Playlist Options: Smart playlists in Rhythmbox are missing a rather key feature: Randomness. When filling the aforementioned mobile device with music, I would like to select a random 4GB of music from my top rated playlist. But I can’t. I can select 4GB of music by most every criteria except randomness, which means that I get the same 600 or so songs on my device every time I fill it. This is strange, because I can shuffle the contents of a static playlist; But I cannot randomly fill a smart playlist. Great.

    If you have a device that has a small amount of memory, this feature is essential

    It’s funny; I really want to like Rhythmbox, but it’s shit like this that ruins the experience for me

  7. Columns: What the fuck. Who wrote this part of the application? When I choose the columns that are visible in the main window, I can’t re-order them. That’s right. So the only order that I can put my columns in is Track, Title, Genre, Artist, Album, Year, Time, Quality, Rating. Can’t reorder them at all, and I have to go into the preferences menu to choose which ones are displayed, instead of being able to right-click on the column headers to select them like I can in every other program written in the last 10 years. This is just ridiculous. I know that the GTK+ toolkit allows you to create re-order-able columns, because I’ve seen it done.

    This is just so incredibly backward. I mean, columns are a standard part of the GTK+ toolkit, and I've seen plenty of other apps that do this properly.

    Why, for the love of God, can’t these be re-ordered?

  8. The Equalizer is Balls: No presets, and no preamp. So I can set the EQ, and my settings are magically saved, but I can only have one setting, because there doesn’t appear to be a way to create multiple profiles. And louder music sounds like balls, because I can’t turn down the preamp, so I get digital distortion throughout my signal. It would be better to just not have an equalizer at all.


    I mean, it works. But…

  9. Context Menus Don’t Make Sense: Let’s just take a look at this context menu for a moment. There are three ways to remove a song from a playlist. You can Remove the song, which just removes it from the playlist, but not from your library or your hard drive. Alternatively, you can select Move to Trash, which does what you might expect – it removes the song from the playlist, the library, and your computer. I’ve got a problem with the naming conventions here. The purpose of Remove isn’t well explained, and confused the hell out of me at first. In addition, when browsing a mobile device that you’ve filled with music, the GUI breaks down even further. In this case, you can still hit Remove, which seems to remove the song from Rhythmbox’s listing, but leaves the file on the device. So now I have a file on my device that I can’t access. Great. The right-click menu also has the ability to copy and cut the song, even though there is no immediately obvious way to paste it. For that functionality, you’ll have to head up to the Edit menu.

    The right-click context menu

    I’m starting to run out of anger. The 10,000 papercuts that come along with this app are making me numb to it.

  10. No Command Line Tools: Now, normally, this wouldn’t bother me too much. A music library is something that’s meant to have a GUI, and doesn’t generally lend itself to working from the command line. In this case however, command line access to Rhythmbox would be really handy, because I’d like to set up a hot key on my keyboard that will skip songs or pause playback. Unfortunately, there’s no way to do that within the software, and it doesn’t have any command line arguments that I can call instead. Balls.

There you have it, 10 things that really ruin the Rhythmbox experience. While using this piece of software, I felt like the developers worked really hard to build something that was sort of comparable to Apple’s iTunes, and then stopped trying. That isn’t good enough! If we want to attract users to our platform of choice, and keep them here, we need to give them reasons to check it out, and even more to stick around. If I say to you that I want to have the best Linux media player, you tend to put the emphasis on the word Linux. Why not just make the best media player? GNOME is on at least half of all Linux desktops, if not more. Why hinder it with software that gives people a poor first impression of what Linux is capable of? Seriously guys, let’s step it up.

 

vi(m) or emacs? Neither, just use nano!

December 29th, 2016

There is quite a funny, almost religious, debate within the Linux community between the two venerable command line text editors, vi(m) and Emacs. Sure, they have loads of features and plugins, but do you really need all that from a command line editor? I think for the most part, or perhaps for most people at least, the answer is no. Instead I’m going to do something really stupid and publicly state, on a Linux related website, that you should ignore both of those text editors and use what many consider an inferior, more simplistic one instead: nano.

Sure, it doesn’t have all of the fancy bells and whistles, but honestly half the time I’m using a command line editor it’s just to change one line of text. I don’t generally need the extra fluff, and when I do I can always use a graphical editor instead. Besides, just look at how easy it is to use:

Create/edit a file:

  1. Open the file in nano
  2. Make your changes
  3. Save the changes

Open the file:

nano doc.txt

Make your changes:


Admit it, it’s true!

 

Save your changes:

No complicated key combinations to remember – it’s all listed at the bottom of the screen. Want to save, “Write Out,” your changes? Easy as Ctrl+O and then Enter to confirm.


Sooooooooo easy to use

So let’s all come together, stop the fanboy wars, and all embrace nano as the best command line text editor out there! 😛

 


Blast from the Past: Going Linux, Once and for All

December 28th, 2016

This post was originally published on December 23, 2009. The original can be found here.


With the linux experiment coming to an end, and my Vista PC requiring a reinstall, I decided to take the leap and go all linux all the time. To that end, I’ve installed Kubuntu on my desktop PC.

I would like to be able to report that the Kubuntu install experience was better than the Debian one, or even on par with a Windows install. Unfortunately, that just isn’t the case.

My machine contains three 500GB hard drives. One is used as the system drive, while an integrated hardware RAID controller binds the other two together as a RAID1 array. Under Windows, this setup worked perfectly. Under Kubuntu, it crashed the graphical installer, and threw the text-based installer into fits of rage.

With plenty of help from the #kubuntu IRC channel on freenode, I managed to complete the Kubuntu install by running it with the two RAID drives disconnected from the motherboard. After finishing the install, I shut down, reconnected the RAID drives, and booted back up. At this point, the RAID drives were visible from Dolphin, but appeared as two discrete drives.

It was explained to me via this article that the hardware RAID support that I had always enjoyed under Windows was in fact a ‘fake RAID,’ and is not supported on Linux. Instead, I need to reformat the two drives and then link them together with a software RAID. More on that process in a later post, once I figure out how to actually do it.
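
From what I’ve read so far, the basic shape of it with mdadm looks something like the sketch below – the device names are assumptions on my part, so double-check yours before running anything this destructive:

sudo apt-get install mdadm
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0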

At this point, I have my desktop back up and running, reasonably customized, and looking good. After trying KDE’s default Amarok media player and failing to figure out how to properly import an m3u playlist, I opted to use Gnome’s Banshee player for the time being instead. It is a predictable yet stable iTunes clone that has proved more than capable of handling my library for the time being. I will probably look into Amarok and a few other media players in the future. On that note, if you’re having trouble playing your MP3 files on Linux, check out this post on the ubuntu forums for information about a few of the necessary GStreamer plugins.

For now, my main tasks include setting up my RAID array, getting my ergonomic Bluetooth wireless mouse working, and working out folder and printer sharing on our local Windows network. In addition, I would like to set up a Windows XP image inside of Sun’s VirtualBox so that I can continue to use Microsoft Visual Studio, the only Windows application that I’ve yet to find a Linux replacement for.

This is just the beginning of the next chapter of my own personal Linux experiment; stay tuned for more excitement.

This post first appeared at Index out of Bounds.

 

Help out a project with OpenHatch

December 27th, 2016

OpenHatch is a site that aggregates all of the help postings from a variety of open source projects. It maintains a whole community dedicated to matching people who want to contribute with people who need their help. You don’t even need to be technical, like a programmer or something like that. If you want to lend your artistic talents to creating icons and logos for a project, or your writing skills to help out with documentation – both areas a lot of open source projects aren’t the best at – I’m sure they would be greatly appreciative.

So what are you waiting for? Get connected and give back to the community that helps create the applications you use on a daily basis!

5 apt tips and tricks

December 22nd, 2016

Everyone loves apt. It’s a simple command line tool to install new programs and update your system, but beyond the standard commands like update, install and upgrade did you know there are a load of other useful apt-based commands you can run?

1) Search for a package name with apt-cache search

Can’t remember the exact package name but you know some of it? Make it easy on yourself and search using apt-cache. For example:

apt-cache search Firefox


It lists all results for your search. Nice and easy!

2) Search for package information with apt-cache show

Want details of a package before you install it? Simple – just look it up with apt-cache show.

apt-cache show firefox


More details than you probably even wanted!

3) Upgrade only a specific package

So you already know that you can upgrade your whole system by running

apt-get upgrade

but did you know you can upgrade a specific package instead of the whole system? It’s easy, just specify the package name in the upgrade command. For example to upgrade just firefox run:

apt-get upgrade firefox

4) Install specific package version

Normally when you apt-get install something you get the latest version available but what if that’s not what you wanted? What if you wanted a specific version of the package instead? Again, simple, just specify it when you run the install command. For example run:

apt-get install firefox=version

Where version is the version number you wish to install.
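
To see which version numbers are actually available to put after the = sign, you can ask apt-cache first:

apt-cache policy firefox
apt-cache madison firefox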

5) Free up disk space with clean

When you download and install packages apt will automatically cache them on your hard drive. This can be useful for a number of reasons; for example, some distributions use delta packages so that only what has changed between versions is re-downloaded. In order to do this it needs to have a base cached file already on your hard drive. However these files can take up a lot of space and oftentimes don’t get a lot of updates anyway. Thankfully there are two quick commands that free up this disk space.

apt-get clean

apt-get autoclean

Both of these essentially do the same thing but the difference here is autoclean only gets rid of cached files that have a newer version also cached on your hard drive. These older packages won’t be used anymore and so they are an easy way to free up some space.

There you have it, you are now officially 5 apt commands smarter. Happy computing!

 

CoreGTK 3.18.0 Released!

December 18th, 2016

The next version of CoreGTK, version 3.18.0, has been tagged for release! This is the first version of CoreGTK to support GTK+ 3.18.

Highlights for this release:

  • Rebased on GTK+ 3.18
  • New supported GtkWidgets in this release:
    • GtkActionBar
    • GtkFlowBox
    • GtkFlowBoxChild
    • GtkGLArea
    • GtkModelButton
    • GtkPopover
    • GtkPopoverMenu
    • GtkStackSidebar
  • Reverted to using GCC as the default compiler (but clang can still be used)

CoreGTK is an Objective-C language binding for the GTK+ widget toolkit. Like other “core” Objective-C libraries, CoreGTK is designed to be a thin wrapper. CoreGTK is free software, licensed under the GNU LGPL.

Read more about this release here.

This post originally appeared on my personal website here.

Blast from the Past: An Experiment in Transitioning to Open Document Formats

December 16th, 2016

This post was originally published on June 15, 2013. The original can be found here.


Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past by having mediums that allow us a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task, it is however one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically, probably nothing. I hate to sound so down about it, but at some point technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that is likely to have a meaningful impact that far into the future is to make use of very well-documented, open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large-scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of the ability to interpret our current formats decreases. This solution isn’t just a pie-in-the-sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups, and individuals are beginning to recognize this same issue, and many have already begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, the Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and to avoid losing their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. The two formats have much in common, both being based on XML and structured as a collection of files within a ZIP container. However, they differ enough that 1) they are not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML is also an open standard, there have been some concerns about just how open it actually is. For instance, take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing this article) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much easier time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its own standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Today, however, I think both standards are technologically on more or less equal footing. Initially both had issues and were lacking some features, but both have since evolved to cover 99% of what’s needed in a document format.
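A nice side effect of both formats being ZIP containers full of XML is that you can peek inside them with ordinary command line tools and see exactly what a ‘document’ really is. A quick sketch, assuming a file named example.odt (the name is just a placeholder):

unzip -l example.odt                      # list the files packed inside the ODF container
unzip -p example.odt content.xml | head   # dump the start of the main content stream to the terminal

The same trick works on a .docx, except the main content lives in word/document.xml rather than content.xml.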

What to do?

As discussed above, there are two different, some would argue competing, open standards aiming to replace the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However, the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders, like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that my conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem free. For most things the conversion process should be as simple as opening the current document and re-saving it as OpenDocument – there are even wizards that will automate this process for you across a large number of documents. In my experience, however, things are almost never quite that simple. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
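If you do decide to convert a large batch of documents, LibreOffice can be driven from the command line instead of clicking through each file by hand. A rough sketch, where the paths are just placeholders and you will still want to spot-check the results given the formatting issues mentioned above:

soffice --headless --convert-to odt --outdir ~/converted ~/documents/*.doc   # batch convert .doc files to OpenDocument without opening the GUI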

If, however, you are starting fresh or just converting simple documents with little formatting, there is no reason why you couldn’t make the jump to OpenDocument. Personally, I’m going to attempt to convert my existing .doc documents to OpenDocument (where possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats, which is a good thing.

I’ll write a follow-up post on my successes, or any issues encountered, if I think it warrants it. In the meantime I’m curious about the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly, I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.

 

Using screen to keep your terminal sessions alive

December 15th, 2016 No comments

Have you ever connected to a remote Linux computer over a… let’s say less than ideal WiFi connection, started running a command, and then had your ssh connection drop and your command killed off in a half-finished state? In the best of cases this is simply annoying, but if it happens during something of consequence, like a system upgrade, it can leave you in a dire state. Thankfully there is a really simple way to prevent this from happening.

Enter: screen

screen is a simple terminal application that basically lets you create additional, persistent terminals. Instead of ssh-ing into your computer and running the command in that session, you start a screen session and run your commands within that. If your connection drops you lose your view of the ‘screen’, but the screen session continues uninterrupted on the remote computer. You can then simply re-connect and resume the screen.

Explain with an example!

OK, fine. Let’s say I want to write a document over ssh. First you connect to the computer, then you start your favourite text editor and begin typing:

ssh user@computer
user@computer’s password:

user@computer: nano doc.txt

What a wonderful document!

Now if I lost my connection at this point, all of my hard work would be lost because I haven’t saved it yet. Instead, let’s say I used screen:

ssh user@computer
user@computer’s password:

user@computer: screen

Welcome to screen!

Now, with screen running, I can just use my terminal like normal and write my story. But oh no, I lost my connection! Now what will I do? Well, simply re-connect and re-run screen, telling it to resume the previous session.

ssh user@computer
user@computer’s password:

user@computer: screen -r

Voila! There you have it – a simple way to somewhat protect your long-running terminal applications from random network disconnects.
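As an aside, screen also lets you name sessions and detach from them on purpose, which is handy once you have more than one thing running. A rough sketch of that workflow, where the session name is just an example:

screen -S upgrade      # start a new session named 'upgrade'
# run your long command, then press Ctrl-a d to detach from the session
screen -ls             # list the sessions currently running on this machine
screen -r upgrade      # re-attach to the named session later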

 

Blast from the Past: Coming to Grips with Reality

December 12th, 2016 No comments

This post was originally published on December 8, 2009. The original can be found here.


The following is a cautionary tale about putting more trust in the software installed on your system than in your own knowledge.

Recently, while preparing for a big presentation that relied on me running a Java applet in Iceweasel, I discovered that I needed to install an additional package to make it work. This being nothing out of the ordinary, I opened up a terminal, and used apt-cache search to locate the package in question. Upon doing so, my system notified me that I had well over 50 ‘unnecessary’ packages installed. It recommended that I take care of the issue with the apt-get autoremove command.

Bad idea.

On restart, I found that my system was virtually destroyed. It seemed to start X11, but refused to give me either a terminal or a gdm login prompt. After booting into Debian’s rescue mode and messing about in the terminal for some time trying to fix a few circular dependencies and get my system back, I decided that it wasn’t worth my time, backed up my files with an Ubuntu live disk, and reinstalled from a netinst nightly build disk of the testing repositories. (Whew, that was a long sentence)

Unfortunately, just as soon as I rebooted from the install, I found that my system lacked a graphical display manager, and that I could only log in to my terminal, even though I had explicitly told the installer to add GNOME to my system. I headed over to #debian for some help, and found out that the testing repositories were broken, and that my system lacked gdm for some unknown reason. After following their instructions to work around the problem, I got my desktop back, and once more have a fully functioning system.

The moral of the story is a hard one for me to swallow. You see, I have come to the revelation that I don’t know what I’m doing. Over the course of the last 3 months, I have learned an awful lot about running and maintaining a Linux system, but I still lack the ability to fix even the simplest of problems without running for help. Sure, I can install and configure a Debian box like nobody’s business, having done it about 5 times since this experiment started; but I still lack the ability to diagnose a catastrophic failure and recover from it without a good dose of help. I have also realized something that, as a software developer, I should have known and paid attention to when I used that fatal autoremove command – when something seems wrong, trust your instincts over your software, because they’re usually correct.
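In hindsight, a simulated run would have told me exactly what autoremove planned to delete before anything was actually touched, and would have given those instincts a chance to kick in:

apt-get -s autoremove   # simulate only; prints the packages that would be removed without removing anything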

This entire experiment has been a huge learning experience for me. I installed an operating system that I had never used before, and eschewed the user-friendly Ubuntu for Debian, a distribution that adheres strictly to free software ideals and isn’t nearly as easy for beginners to use. That done, after a month of experience, I switched over from the stable version of Debian to the testing repositories, figuring that it would net me some newer software that occasionally worked better (especially in the case of Open Office and Gnome Network Manager), and some experience with running a somewhat less stable system. I certainly got what I wished for.

Overall, I don’t regret a thing, and I intend to keep the testing repositories installed on my laptop. I don’t usually use it for anything but note taking in class, so as long as I back it up regularly, I don’t mind if it breaks on occasion; I enjoy learning new things, and Debian keeps me on my toes. In addition, I think that I’ll install Kubuntu on my desktop machine when this whole thing is over. I like Debian a lot, but I’ve heard good things about Ubuntu and its variants, and feel that I should give them a try now that I’ve had my taste of what a distribution that isn’t written with beginners in mind is like. I have been very impressed by Linux, and have no doubts that it will become a major part of my computing experience, if not replacing Windows entirely – but I recognize that I still have a long way to go before I’ve really accomplished my goals.

As an afterthought: If anybody is familiar with some good tutorials for somebody who has basic knowledge but needs to learn more about what’s going on below the surface of a Linux install, please recommend them to me.