Archive

Author Archive

404 Oh No!

July 27th, 2013 No comments

Some of you may have noticed some previously working links going to 404 (page not found) pages. This is due to a change we’ve made in order to make permalinks more consistent among different authors and topics. Sorry for any inconvenience this may cause. On the plus side, the website has a search bar that you can use to find what you were looking for :)




I am currently running a variety of distributions, primarily Linux Mint Debian Edition.
Previously I was running KDE 4.3.3 on top of Fedora 11 (for the first experiment) and KDE 4.6.5 on top of Gentoo (for the second experiment).
Check out my profile for more information.
Categories: Tyler B Tags:

Announcing The Linux Experiment (The Third!)

July 23rd, 2013 No comments

That’s right, time for round three!

If you’ve been following this website in the past you know what this means. This time around our rules are simple: give back to the community. With this idea in mind, the participants, known as guinea pigs, will attempt to use their unique passions, interests and talents to give something back to the world of Linux and open source software. The goal is purposely designed to be generic and open-ended – we want each guinea pig to interpret the rule in their own way and let their creativity determine how they will give back.

Some ideas we’ve tossed around to accomplish this goal have been:

  • Writing about a unique way to setup open source software to accomplish something
  • Trying out and writing about a new distribution that offers a different experience
  • Giving back to a distribution you use or like
  • Helping a project by submitting code patches, documentation updates, artwork, UX design or simply spreading the word
  • Starting your own project to accomplish something
  • Etc.

So join us dear reader as we chronicle giving back to the community. Oh and don’t be afraid to take part in your own way either ;)





Make printing easy with the Samsung Unified Linux Driver Repository

July 13th, 2013 No comments

I recently picked up a cheap Samsung laser printer and decided to give the Samsung Unified Linux Driver Repository (SULDR) a shot while installing it. Basically, the SULDR is a repository you add to your /etc/apt/sources.list file, which allows you to install one of their driver management applications. Once that is installed, any time you hook up a new printer the management application automatically searches the repository, full of the official Samsung printer drivers, finds the correct one for you and installs it. Needless to say I didn’t have any problems getting this printer to work on Linux!
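For reference, the setup described above looks roughly like the following. The repository line and package name here are from memory and may have changed, so treat this as a sketch and check the SULDR site for their current instructions:

```shell
# Add the SULDR repository (verify the exact line on the SULDR site first)
echo "deb http://www.bchemnet.com/suldr/ debian extra" | \
    sudo tee -a /etc/apt/sources.list

# Refresh the package lists and install the repository keyring;
# the keyring package name below is illustrative and may differ
sudo apt-get update
sudo apt-get install suldr-keyring
```

After that, plugging in the printer should let the driver manager find and install the right official driver on its own.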





Listener Feedback Podcast Update (July 2013)

July 12th, 2013 No comments

A couple of new Listener Feedback podcast episodes have been released, in case you missed them:

So grab the MP3 or Ogg version of this Creative Commons podcast and enjoy!





Big distributions, little RAM 6

July 9th, 2013 3 comments

It’s that time again: I install the major full-desktop distributions on a machine with limited hardware and report on how they perform. Once again I’ve re-run my previous tests, this time using the following distributions:

  • Fedora 18 (GNOME)
  • Fedora 18 (KDE)
  • Fedora 19 (GNOME)
  • Fedora 19 (KDE)
  • Kubuntu 13.04 (KDE)
  • Linux Mint 15 (Cinnamon)
  • Linux Mint 15 (MATE)
  • Mageia 3 (GNOME)
  • Mageia 3 (KDE)
  • OpenSUSE 12.3 (GNOME)
  • OpenSUSE 12.3 (KDE)
  • Ubuntu 13.04 (Unity)
  • Xubuntu 13.04 (Xfce)

I even happened to have a Windows 7 (64-bit) VM lying around and, while I think you would be a fool to run a 64-bit OS on the limited test hardware, I’ve included it as a sort of benchmark.

All of the tests were done within VirtualBox on ‘machines’ with the following specifications:

  • Total RAM: 512MB
  • Hard drive: 8GB
  • CPU type: x86 with PAE/NX
  • Graphics: 3D Acceleration enabled

The tests were all done using VirtualBox 4.2.16, and I did not install VirtualBox tools (although some distributions may have shipped with them). I also left the screen resolution at the default (whatever the distribution chose) and accepted the installation defaults. All tests were run between July 1st, 2013 and July 5th, 2013 so your results may not be identical.

Results

Just as before I have compiled a series of bar graphs to show you how the installations stack up against one another. This time around, however, I’ve changed how things are measured slightly in order to be more accurate. Measurements (on Linux) were taken using the free -m command for memory and the df -h command for disk usage. On Windows I used Task Manager and Windows Explorer.
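For the Linux numbers, the derived figures in the graphs can be computed straight from free -m output. Here is a minimal sketch, using a made-up sample in the 2013-era procps format (with its “-/+ buffers/cache” line), of how a value like “RAM – Buffers/Cache + Swap” falls out:

```python
# Sample `free -m` output (2013-era procps format; the numbers are made up)
sample = """\
             total       used       free     shared    buffers     cached
Mem:           498        430         68          0         25        180
-/+ buffers/cache:        225        273
Swap:          511         12        499
"""

def parse_free(text):
    """Return (used RAM, buffers+cache, used swap) in MB from `free -m` output."""
    lines = text.splitlines()
    mem = lines[1].split()   # ['Mem:', total, used, free, shared, buffers, cached]
    swap = lines[3].split()  # ['Swap:', total, used, free]
    return int(mem[2]), int(mem[5]) + int(mem[6]), int(swap[2])

used, buf_cache, swap_used = parse_free(sample)
print(used - buf_cache + swap_used)  # RAM - Buffers/Cache + Swap -> 237
```

The same three parsed values cover every memory graph variant below (RAM alone, buffers/cache alone, and the combined figures).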

In addition this will be the first time where I provide the results file as a download so you can see exactly what the numbers were or create your own custom comparisons (see below for link).

Things to know before looking at the graphs

First off, if your distribution of choice didn’t appear in the list above, it’s probably because it wasn’t reasonably possible to install (i.e. I don’t have hours to compile Gentoo) or I didn’t feel it was mainstream enough (pretty much anything with LXDE). Secondly, some distributions don’t appear on all of the graphs; for example, because I was using an existing Windows 7 VM I didn’t have a ‘first boot’ result for it. As always, feel free to run your own tests. Thirdly, you may be asking yourself ‘why do Fedora 18 and 19 both make the list?’ Basically because I had already run the tests for 18 when 19 happened to be released. Finally, Fedora 19 (GNOME), while included, does not have any data because I simply could not get it to install.

First boot memory (RAM) usage

This test was measured on the first startup after finishing a fresh install.

 

[Graphs: All Data Points, RAM, Buffers/Cache, RAM – Buffers/Cache, Swap Usage, RAM – Buffers/Cache + Swap]

Memory (RAM) usage after updates

This test was performed after all updates were installed and a reboot was performed.

[Graphs: All Data Points, RAM, Buffers/Cache, RAM – Buffers/Cache, Swap, RAM – Buffers/Cache + Swap]

Memory (RAM) usage change after updates

The net growth or decline in RAM usage after applying all of the updates.

[Graphs: All Data Points, RAM, Buffers/Cache, RAM – Buffers/Cache, Swap, RAM – Buffers/Cache + Swap]

Install size after updates

The hard drive space used by the distribution after applying all of the updates.

[Graph: Install Size]

Conclusion

Once again I will leave the conclusions to you. This time, however, as promised above, I will provide my source data for you to enjoy.

Source Data





How to backup and restore a Trac project

July 4th, 2013 No comments

If you use Trac as your bug and progress tracking tool then you too may one day need to take a backup of it or move it to a new server, like I had to the other day. Thankfully, as I discovered, it is a relatively straightforward process. Here are the steps to back up and restore a Trac project.

Take a hot backup of your existing install. This is essentially a backup from a fixed point in time that you can take while still using Trac (great for having no downtime).

trac-admin [/path/to/projenv] hotcopy [/path/to/backupdir]

For example:

trac-admin /var/www/trac/projectx hotcopy /home/awesomeadmin/trac_backup/projectx

In order to restore it on another server you just need to create the project from scratch (i.e. using initenv) like this

trac-admin [targetdir] initenv

and then simply replace the install directory contents with the backed-up contents. Strictly speaking I’m not even sure if you need to initenv, but that’s how I did it and it worked.

Hopefully this works for you as well. Happy… err… Trac-ing?





How to backup and restore an SVN repository with full commit history

July 2nd, 2013 No comments

Sometimes you need to move an SVN repository from one server to another but maintain the full commit history (i.e. comments and changes). Here is a very simple way to do so.

1. Dump (and compress) the source SVN in one line:

svnadmin dump [path to source SVN repository] | gzip -9 > [path to destination gzipped dump file]

For example:

svnadmin dump /var/svn/projectx | gzip -9 > /home/awesomeadmin/svn_backup/projectx.dump.gz

2. Transfer gzipped dump to new server

3. Decompress dump

gunzip projectx.dump.gz

4. Restore dump to new SVN repository

svnadmin load [path to new SVN repository] < [path to dump file]

For example:

svnadmin load /var/svn/projecty < /home/awesomeadmin/svn_backup/projectx.dump

That’s it. Pretty simple, no?
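If the two servers can talk to each other over SSH, the four steps above can optionally be collapsed into a single pipeline. A sketch, with placeholder hostnames and paths, assuming the empty target repository has already been created on the new server:

```shell
# Dump locally, compress over the wire, decompress and load remotely
svnadmin dump /var/svn/projectx | gzip -9 | \
    ssh user@newserver "gunzip | svnadmin load /var/svn/projecty"
```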





An Experiment in Transitioning to Open Document Formats

June 15th, 2013 2 comments

Recently I read an interesting article by Vint Cerf, mostly known as the man behind the TCP/IP protocol that underpins modern Internet communication, where he brought up a very scary problem with everything going digital. I’ll quote from the article (Cerf sees a problem: Today’s digital data could be gone tomorrow – posted June 4, 2013) to explain:

One of the computer scientists who turned on the Internet in 1983, Vinton Cerf, is concerned that much of the data created since then, and for years still to come, will be lost to time.

Cerf warned that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — won’t be readable in the years and centuries ahead.

Cerf illustrated the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. “It doesn’t know what it is,” he said.

“I’m not blaming Microsoft,” said Cerf, who is Google’s vice president and chief Internet evangelist. “What I’m saying is that backward compatibility is very hard to preserve over very long periods of time.”

The data objects are only meaningful if the application software is available to interpret them, Cerf said. “We won’t lose the disk, but we may lose the ability to understand the disk.”

This is a well-known problem for anyone who has used a computer for quite some time. Occasionally you’ll get sent a file that you simply can’t open because the modern application you now run has ‘lost’ the ability to read the format created by the (now) ‘ancient’ application. But beyond this minor inconvenience it also brings up the question of how future generations, specifically historians, will be able to look back on our time and make any sense of it. We’ve benefited greatly in the past from having media that allow a more or less easy interpretation of written text and art. Newspaper clippings, personal diaries, heck even cave drawings are all relatively easy to translate and interpret when compared to unknown, seemingly random, digital content. That isn’t to say it is an impossible task; it is, however, one that has (perceivably) little market value (relatively speaking at least) and thus would likely be de-emphasized or underfunded.

A Solution?

So what can we do to avoid these long-term problems? Realistically probably nothing. I hate to sound so down about it but at some point all technology will yet again make its next leap forward and likely render our current formats completely obsolete (again) in the process. The only thing we can do today that will likely have a meaningful impact that far into the future is to make use of very well documented and open standards. That means transitioning away from so-called binary formats, like .doc and .xls, and embracing the newer open standards meant to replace them. By doing so we can ensure large scale compliance (today) and work toward a sort of saturation effect wherein the likelihood of a complete ‘loss’ of ability to interpret our current formats decreases. This solution isn’t just a nice pie in the sky pipe dream for hippies either. Many large multinational organizations, governments, scientific and statistical groups and individuals are also all beginning to recognize this same issue and many have begun to take action to counteract it.

Enter OpenDocument/Office Open XML

Back in 2005 the Organization for the Advancement of Structured Information Standards (OASIS) created a technical committee to help develop a completely transparent and open standardized document format, the end result of which was the OpenDocument standard. This standard has gone on to become the default file format in most open source applications (such as LibreOffice, OpenOffice.org, Calligra Suite, etc.) and has seen widespread adoption by many groups and applications (like Microsoft Office). According to Wikipedia, OpenDocument is supported and promoted by over 600 companies and organizations (including Apple, Adobe, Google, IBM, Intel, Microsoft, Novell, Red Hat, Oracle, the Wikimedia Foundation, etc.) and is currently the mandatory standard for all NATO members. It is also the default format (or at least a supported format) in more than 25 different countries and many more regions and cities.

Not to be outdone, and potentially lose their position as the dominant office document format creator, Microsoft introduced a somewhat competing format called Office Open XML in 2006. There is much in common between these two formats, both being based on XML and structured as a collection of files within a ZIP container. However they differ enough that they are 1) not interoperable and 2) software written to import/export one format cannot easily be made to support the other. While OOXML too is an open standard, there have been some concerns about just how open it actually is. For instance take these (completely biased) comparisons done by the OpenDocument Fellowship: Part I / Part II. Wikipedia (Office Open XML – from June 9, 2013) elaborates:

Starting with Microsoft Office 2007, the Office Open XML file formats have become the default file format of Microsoft Office. However, due to the changes introduced in the Office Open XML standard, Office 2007 is not entirely in compliance with ISO/IEC 29500:2008. Microsoft Office 2010 includes support for the ISO/IEC 29500:2008 compliant version of Office Open XML, but it can only save documents conforming to the transitional schemas of the specification, not the strict schemas.

It is important to note that OpenDocument is not without its own set of issues; however, its (continuing) standardization process is far more transparent. In practice I will say that (at least as of the time of writing) only Microsoft Office 2007 and 2010 can consistently edit and display OOXML documents without issue, whereas most other applications (like LibreOffice and OpenOffice) have a much easier time handling OpenDocument. The flip side is that while Microsoft Office can open and save to the OpenDocument format, it constantly lags behind the official standard in feature compliance. Without sounding too conspiratorial, this is likely due to Microsoft wishing to show how much ‘better’ its standard is in comparison. That said, with the forthcoming 2013 version Microsoft is set to drastically improve its compatibility with OpenDocument, so the overall situation should get better with time.

Current day however I think, technologically, both standards are now on more or less equal footing. Initially both standards had issues and were lacking some features however both have since evolved to cover 99% of what’s needed in a document format.
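As an aside, the “collection of XML files within a ZIP container” structure both standards share is easy to see for yourself. The sketch below builds a made-up, minimal OpenDocument-style container in memory using Python’s standard zipfile module; a real .odt has more members (styles.xml, META-INF/manifest.xml, and so on), so this is illustrative only:

```python
import io
import zipfile

# Build a minimal OpenDocument-like container in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    z.writestr("content.xml", "<office:document-content/>")

# Any ZIP tool can list the container and read the plain-XML payload
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    mimetype = z.read("mimetype").decode()

print(names)     # ['mimetype', 'content.xml']
print(mimetype)  # application/vnd.oasis.opendocument.text
```

This transparency is a big part of why open formats age well: even with no office suite at hand, the document body is ordinary XML text that future tools can inspect.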

What to do?

As discussed above there are two different, some would argue competing, open standards for the replacement of the old closed formats. Ten years ago I would have said that the choice between the two was simple: Office Open XML all the way. However the landscape of computing has changed drastically in the last decade and will likely continue to diversify in the coming one. Cell phone sales have surpassed computer sales, and while Microsoft Windows is still the market leader on PCs, alternative operating systems like Apple’s Mac OS X and Linux have been gaining ground. Then you have the new cloud computing contenders like Google Docs, which let you view and edit documents right within a web browser, making the operating system irrelevant. All of this heterogeneity has thrown a curve ball into how standards are established, and being completely interoperable is now key – you can’t just be the market leader on PCs and expect everyone else to follow your lead anymore. I don’t want to be limited in where I can use my documents; I want them to work on my PC (running Windows 7), my laptop (running Ubuntu 12.04), my cellphone (running iOS 5) and my tablet (running Android 4.2). It is for these reasons that for me the conclusion, in an ideal world, is OpenDocument. For others the choice may very well be Office Open XML and that’s fine too – both attempt to solve the same problem and a little market competition may end up being beneficial in the short term.

Is it possible to transition to OpenDocument?

This is the tricky part of the conversation. Let’s say you want to jump 100% over to OpenDocument… how do you do so? Converting between the different formats, like the old .doc or even the newer Office Open XML .docx, and OpenDocument’s .odt is far from problem-free. For most things the conversion process should be as simple as opening the current document and re-saving it as OpenDocument – there are even wizards that will automate this process for you on a large number of documents. In my experience, however, things are almost never quite that simple. From what I’ve seen, any document that has a bulleted list ends up being converted with far from perfect accuracy. I’ve come close to re-creating the original formatting manually, making heavy use of custom styles in the process, but it’s still not a fun or straightforward task – perhaps in these situations continuing to use Microsoft formatting, via Office Open XML, is the best solution.
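The conversion wizards mentioned above can also be approximated from the command line. LibreOffice ships a headless conversion mode; the sketch below converts every .doc in a directory to OpenDocument. Flags may vary between versions, and no other LibreOffice instance should be running at the time:

```shell
# Batch-convert .doc files to .odt, writing results to ./converted
soffice --headless --convert-to odt --outdir converted/ *.doc
```

As with the wizard approach, it is worth spot-checking the output afterwards, since heavily formatted documents may still need manual touch-ups.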

If however you are starting fresh or just converting simple documents with little formatting there is no reason why you couldn’t make the jump to OpenDocument. For me personally I’m going to attempt to convert my existing .doc documents to OpenDocument (if possible) or Office Open XML (where there are formatting issues). By the end I should be using exclusively open formats which is a good thing.

I’ll write a follow up post on my successes or any issues encountered if I think it warrants it. In the meantime I’m curious as to the success others have had with a process like this. If you have any comments or insight into how to make a transition like this go more smoothly I’d love to hear it. Leave a comment below.

This post originally appeared on my personal website here.





The apps of KDE 4.10 Part VII: Dragon Player

May 27th, 2013 2 comments

Rounding out this little series I took a look at KDE’s video player of choice: Dragon Player.

Dragon Player

For those of you familiar with similar applications such as VLC, Totem or even Windows Media Player, Dragon Player is a simple, minimalist interface on top of quite powerful video playback.

Everyone loves Big Buck Bunny!

Dragon Player’s power comes from the integrated KDE media backend Phonon. What this means for the user is that it is completely compatible with all installed system codecs. Speaking of codecs, Dragon Player prompts you whenever it doesn’t recognize a new piece of media and offers the ability to automatically search and install the required codecs. This works very well and allows you to keep your system relatively free of nonsense codecs you’ll never actually use, instead installing what you need as you need it.

For a KDE application Dragon Player is surprisingly streamlined and doesn’t offer very many configuration options. In fact almost any other video player has more configuration options than Dragon Player. The only real settings I could find were for changing how the video playback looks:

Video Settings

And that’s it. No seriously, there isn’t anything else to mention about this application and believe it or not that’s a good thing! This program is designed for exactly one thing and it does it well. If you’re looking for a single use video player application, and you’re not already a VLC fan, I would highly suggest this as an alternative.

More in this series





The apps of KDE 4.10 Part VI: Calligra Suite

May 24th, 2013 1 comment

LibreOffice? Pfft. OpenOffice? Blah. KOffice? Dead for a while now. Calligra Suite? Now we’re talking!

Calligra Suite

You may be a bit confused as to what Calligra Suite is; in fact you may never have even heard of it before now. Essentially, Calligra Suite is a fork of the KOffice project from back in 2010 and has now become the de facto group of KDE publishing/office applications, as KOffice isn’t really being developed any more. It consists of a number of applications; for the purposes of this post I’m going to go over the three I think are the most commonly used day-to-day: Calligra Words, Calligra Sheets and Calligra Stage.

Calligra Words

You’ve seen one word processor, you’ve seen them all right? Well maybe not in this case. Calligra Words has quite a different interface than its contemporaries (even counting the new-ish Microsoft Office ribbon interface in that category).

Take that ribbon!

The first thing you’ll notice is that the majority of the buttons and options are located on the right-hand side of the interface. Initially this seems quite strange, but I suppose if you were working on a large widescreen monitor (as we all should be, right?) this makes perfect sense. As you click the little tabs they expand to reveal additional categorized options. It is sort of like putting the ribbon interface from Microsoft Office on its side.

Side bar in action

While there is nothing inherently wrong with Calligra Words, there were times when I found it confusing. For instance there seem to be some places where the application ignores the conventional paradigm for doing something specific, instead opting for its own way with mixed success. A good example of this is formatting the lines on an inserted table. Normally you would simply select the table, go into some format properties window and change it there. Instead, Calligra Words has you select the format you want from the side bar and then paint it onto the existing table one line at a time. Again, not a big deal if you were first learning to edit documents using Calligra Words, but I could easily see people having a difficult time transitioning from Microsoft Office or LibreOffice.

Other things are just strange. For example the application supports spellcheck and will happily underline words you’ve misspelled but I couldn’t find the option to run through a spellcheck on the whole document. Instead it seems as though you need to hunt through the document manually in order to avoid missing anything. I also had the application crash on me when I attempted to insert a bibliography.

Overall I just get the feeling that Calligra Words is still very much under development and not quite mature enough to be used in everyday life. Perhaps in a few releases this could become a legitimate replacement for some of the other mainstream word processors, but for now I can’t say I would recommend it beyond those who are curious to see its unique interface.

Calligra Sheets

Like Words, Sheets shares the sidebar interface for manipulating data.

Example balance sheet template

Most of the standard functionality makes an appearance (i.e. cell formulas, formatted text, etc.) although once again I’m going to have to focus on the negatives here. Like Words, I found some of the features very confusing. For instance I tried to make a simple bar chart with two columns worth of data (x and y). Instead I ended up with a bar chart showing both data sets against some arbitrary x axis. Try as I might I couldn’t force it to do what I wanted. The program also seemed very unstable for me and crashed often. Unfortunately I became so frustrated with it that I just couldn’t dive too deeply into its features.

Calligra Stage

Stage is Calligra Suite’s version of Microsoft Office’s PowerPoint or LibreOffice’s Presentation.

Showing one of the included templates

This is the first application of the three that I think really benefits from having the side bar; it makes finding what you’re after surprisingly easy and straightforward. The only weird thing I ran into was when adding animation to part of a slide. Again, you select the animation and then sort of paint it on, much like applying table formatting in Words.

Like the rest, I think Stage could use some more development and maturity but unlike the other two I think Stage feels much further along (it didn’t even crash on me once!).

Conclusion

If you can’t read between the lines above, allow me to summarize my feelings this way: Calligra Suite is a solid set of applications, but one that feels very young and very much still under development. This is not exactly the sort of feeling you want when working on a business or time-critical document. However I do like some of the things they’ve started here and look forward to seeing where they take it in the future.

More in this series



