Archive

Archive for February, 2017

Blast from the Past: Compile Windows programs on Linux

February 26th, 2017 No comments

This post was originally published on September 26, 2010. The original can be found here.


Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that only requires you to install the following (Ubuntu) packages (a sample install command follows the lists below):

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32
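For example, a minimal pair of install commands might look like the following (package names as listed above; exact availability varies by Ubuntu release):

sudo apt-get install mingw32 mingw32-binutils mingw32-runtime
sudo apt-get install mingw-w64 gcc-mingw32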

Once you have those packages you just need to swap out "gcc" in your normal compile commands for either "i586-mingw32msvc-gcc" (for 32-bit) or "amd64-mingw32msvc-gcc" (for 64-bit). So, for example, if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c):

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can compile Win32 GUI programs as well. Take the following code as an example:

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

This time I'll compile it into a 64-bit Windows application using:

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test that it worked properly by running the program through Wine like so:

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications, but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a workaround you can get the packages from this PPA instead.

Originally posted on my personal website here.

Blast from the Past: Very short plug for PowerTOP

February 25th, 2017 No comments

This post was originally published on July 4, 2010. The original can be found here.


Recently I decided to try out PowerTOP, a Linux power saving application built by Intel. I am extremely impressed by how easy it was to use and the power savings I am now basking in.

PowerTOP is a terminal application that first scans your computer for a number of things during a set interval. It then reports back which processes are taking up the most power and offers you some options to improve your battery life. All of these options can literally be enabled at the press of a button. It's sort of like an experience I once had with Clippy in Microsoft Word: "it seems you are trying to save power, let me help you…" After applying a few of the suggestions the estimated battery life on my laptop went from about 3 and a half hours to almost 5 hours. In short, I would highly recommend everyone at least try out PowerTOP. I'm not promising miracles but at the very least it should help you out some.
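If you want to give it a go yourself, on Ubuntu and friends it's typically just a matter of installing the powertop package and running it as root, along the lines of:

sudo apt-get install powertop
sudo powertop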

Blast from the Past: Vorbis is not Theora

February 24th, 2017 No comments

This post was originally published on April 21, 2010. The original can be found here.


Recently I have started to mess around with the Vorbis audio codec, commonly found within the Ogg media container. Unlike Theora, which I had also experimented with but won't post the results for fear of a backlash, I must say I am rather impressed with Vorbis. I had no idea that the open source community had such a high quality audio codec available to them. Previously I had always sort of chalked up Vorbis being regarded as 'so great' within the community to a simple lack of options. However, after some comparative tests between Vorbis and MP3, I must say I am a changed man. I would now easily recommend Vorbis as a quality choice if it fits your use case.

What is Vorbis?

Like I mentioned above, Vorbis is the name of a very high quality free and open source audio codec. It is analogous to MP3 in that you can use it to shrink the size of your music collection while still retaining very good sound. Vorbis is unique in that it only offers a VBR mode, which allows it to squeeze the best sound out of the fewest number of bits. This is done by lowering the bitrate during sections of silence or unimportant audio. Additionally, unlike other audio codecs, Vorbis audio is generally encoded at a supplied 'quality' level. Currently quality level 4 corresponds to a bitrate of around 128kbit/s, but as the encoders mature they may be able to achieve the same quality level at a lower bitrate, saving you storage space/bandwidth/etc.
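For instance, encoding by quality level rather than by target bitrate is as simple as the following (filenames are illustrative):

oggenc -q 4 input.flac -o output.ogg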

So Vorbis is better than MP3?

Obviously when it comes to comparing the relative quality of competing audio codecs it must always be up to the listener to decide. That being said I firmly believe that Vorbis is far better than MP3 at low bitrates and is, at the very least, very comparable to MP3 as you increase the bitrate.

The Tests

I began by grabbing a FLAC copy of the Creative Commons album The Slip by Nine Inch Nails here. I chose FLAC because it provided me with the highest quality possible (lossless CD quality) from which to encode the samples. Then, looking around at some Internet radio websites, I decided that I should test the following bitrates: 45kbit/s, 64kbit/s, 96kbit/s, and finally 128kbit/s (for good measure). I encoded them using only the default encoder settings and the following terminal commands:

For MP3 I used LAME and the following command. I chose average bitrate (ABR), which is really just VBR with a target, similar to Vorbis:

flac -cd {input file goes here}.flac | lame --abr {target bitrate} - {output file goes here}.mp3

For Vorbis I used OggEnc and the following command:

oggenc -b {target bitrate} {input file goes here}.flac -o {output file goes here}.ogg
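So, for instance, the 96kbit/s samples would have been produced with something like the following (filenames are illustrative):

flac -cd discipline.flac | lame --abr 96 - discipline.mp3
oggenc -b 96 discipline.flac -o discipline.ogg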

Results

I think I would be a hypocrite if I didn’t tell you to just listen for yourself… The song in question is track #4, Discipline.

Note: if you are using Mozilla Firefox, Google Chrome, or anything else that supports HTML5/Vorbis, you should be able to play the Vorbis file right in your browser.

45kbit/s MP3(1.4MB) Vorbis(1.3MB)

64kbit/s MP3(2.0MB) Vorbis(1.9MB)

96kbit/s MP3(2.9MB) Vorbis(2.8MB)

128kbit/s MP3(3.8MB) Vorbis(3.6MB)

Blast from the Past: A Practical Reference of Linux Commands

February 23rd, 2017 No comments

This post was originally published on February 19, 2010. The original can be found here.


Just wanted to share a link to a great table that I found – the Practical Reference of Linux Commands is a handy little table of terminal commands organized by task. I'll add it to our sidebar under the 'Useful Sites' heading for future reference.

Happy Linuxing!

We want you! (to write for The Linux Experiment)

February 20th, 2017 No comments

Are you a Linux user? Thinking about trying your own Linux experiment? Have you ever come across something broken or annoying and figured out a solution? Or maybe you just came up with a really neat way of doing something to make your life easier? Well if you have ever done any of those and can write a decent sentence or two we’d be glad to showcase your content here.

Get the full details at our page here: Write for the Linux Experiment.

Categories: Tyler B

Sudo apt-get install basic-linux-pt3 –Install & Setup

February 18th, 2017 No comments

It's been a busy little while, so I haven't had time to get this written up. So let's see what I can still remember.

Installing Ubuntu Server was as easy as you'd expect. Booting into it wasn't. Turns out that the BIOS on this box is set up not to boot from the ODD SATA port. It'll boot from any of the four drives, or from USB, but not from that extra SATA port. My friend, who already has the box running, solved this by setting up a RAID where the ODD SATA drive is RAID number 0, which then allows it to be booted from. I went for a much simpler solution after noticing that the install process lets you select the location of the GRUB loader. This server has a USB port and a MicroSD card reader inside the case, both bootable. I have plenty of spare MicroSD cards lying around (seriously, since when is 1GB or 2GB big enough for anybody?), so I just inserted one and reinstalled, specifying the MicroSD card as the location for GRUB.

It felt good to have my new box booting up and actually running. I got its static IP and OpenSSH set up, checked I could access it through PuTTY, and finally got it up on the shelf and off my desk. Everything from this point on I've done in PuTTY, with no monitor attached to the server. Let's face it, it's not like there's any difference between one white-on-black text interface and another.

Next, I turned my attention to mounting my drives. The mount command is simple enough, but obviously I want my drives to be available right away after boot, so it was time to learn about 'fstab' and UUIDs. Luckily this is a fairly straightforward process, especially since my drives only have a single partition each; the worst part was having to write down the long UUIDs to copy from the terminal output into the fstab file, since I haven't been able to get copy & paste working in PuTTY. One thing I started to realise at this point is that while Ubuntu boots nice and quickly, the server itself doesn't. So each time I want to see if my fiddling has worked, I pretty much have time to make a cup of tea. From looking through various guides etc. I simply used:

UUID=<UUID> /mnt/<mountpoint> ext4 defaults 0 2

for each of my additional drives. After a reboot, I had access to all of my files and media as it was on my old NAS. I spent a little time clearing out the directory structures it had left behind, program files etc. to leave nice, clean access to all of my files.
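As an aside, the UUIDs themselves can be listed by running blkid on the server, which prints one line per partition:

sudo blkid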

NFS and Samba were just as easy to get set up as they had been on the virtual machines, although with so many different things I wanted to share, I had to add a lot of different entries to each config file. Thankfully there's no need to reboot after each edit; the services can simply be restarted to pick up the new settings. Samba is simple enough to test, since I'm managing the server over SSH in PuTTY on Windows. NFS required me to test in one of the VMs once again, but after some work both seemed to be working. I'm not 100% happy with some of my setup, since I'm just allowing open access to anyone on some of these shares. Chances are I'll be fine, but I'll want to come back at some point to try and tighten up my user management.
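For reference, the wide-open kind of Samba share described above looks something like this in /etc/samba/smb.conf (share name and path are examples, and the open guest access is exactly the part I want to tighten up later):

[media]
    path = /mnt/media
    browseable = yes
    read only = no
    guest ok = yes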

Emby server has a very good set of installation instructions. The main new part for me was adding a new repository, but this means it'll be kept up to date when I perform other apt-get upgrades. Everything else related to Emby is managed through its web GUI, so it's straightforward stuff.

In fact, I was surprised at how simple it was to get the majority of things working. FTP just kinda worked; I just needed to make symlinks from my home directory to the other places I need to access. Even Transmission wasn't too bad to get going and allow my remote GUI to connect. One thing that started to get harder from this point was keeping track of the different ports and services I was using. I took some time to make a list of computers and services to plan my external port mapping, and got things like FTP, SSH and Transmission forwarded. Internally I've just used the defaults for simplicity; externally I've made sure they're set to something completely different.

Next Up:

Bash-ing things around

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

Sudo apt-get install basic-linux-pt2 –Testing-&-VMs

February 17th, 2017 No comments

With the hardware sorted (bar some jiggery-pokery to get the ODD to SSD bay converter to fit properly), I set about deciding what I want this box to do.

The list I came up with looks like this:

  • Media serving to my Kodi devices (2 Raspberry Pi systems, my Android tablet, and a new Ubuntu PC I’m putting together for retro gaming with my kids)
  • FTP – I like to use my NAS like my own personal cloud. My tablet can mount an FTP share in its file browser just like any other folder. No sFTP support, though, unfortunately (and I don’t like any of the file browsers I tried which do support it).
  • Transmission (or Deluge) – the main reason for swapping the 4GB of RAM out for 16GB
  • SSH (obviously!)
  • Dropbox and Google Drive – for when various apps and things integrate well with these mobile apps.
  • Backup – the WD My Cloud EX4’s backup options are very poor.
  • General file sharing with Windows and Ubuntu – Samba & NFS, naturally.
  • Hosting & tinkering with other bits I might want to try & learn about – a website (for practice, not for public viewing), a Git repo… who knows.

Being basically completely unfamiliar with most of this stuff, I was undecided between Ubuntu Desktop and Server for quite a while. Desktop obviously has so much of this stuff already ready to go: it mounts things automatically, I can use the GUI as a fallback if something isn't right, and it's just more like what I'm accustomed to. On the other hand, having the GUI running all the time will just use up unnecessary RAM – granted, I probably don't have a shortage of that, but still…

In the end I installed both onto VMs on my Windows machine, made copies (so I had a clean version always ready to go without having to reinstall again), and started playing.

First up I wanted to sort out how I was going to deal with my media backend. On my current setup I use the Kodi client on one of my PCs to manage a central SQL database. While it works, it's a bit slow and rather inefficient, so I went looking for either a headless Kodi backend, or just a way to run it without the GUI. I found all sorts of ideas, builds and code, none of which I understand or feel like I could implement. After a discussion with a Linux guru (one of my Uni lecturers) it was clear that my plan was probably not going to work; he pointed out that he just runs his on DLNA, and that Plex seems to be quite good too. More research, and a question in /r/Kodi later, I had been pointed in the direction of Emby, a backend for Kodi without many of the limitations of Plex and DLNA. Installation was simple enough, but accessing the Web UI wasn't. When I had set up the VMs I had just left their network settings as NAT; this, it turns out, makes accessing the network from the VM possible, but not accessing the VM from elsewhere on the network (including other VMs on the same system). I did try to just change the settings in the VM to add a bridged adapter, but it didn't work. Not knowing enough about networking on Linux to fix this, I just went ahead and reinstalled, this time setting up the VM with two network adapters – one NAT and another bridged. This worked a treat, and after adding a few media files and installing Kodi on the Desktop VM, I was able to play videos no problem.

Next, for no particular reason, was getting NFS working. I found guides, forums, blogs etc. (my Google-fu is pretty strong) and set about trying. I was sure it should be working: I'd installed nfs-kernel-server, added the entry into /etc/exports, and set up the permissions, but I just couldn't mount it in the Desktop VM – even though I could watch the files through Kodi. I ended up having to ask Reddit's linux4noobs sub. Simple answer… sudo /etc/init.d/nfs-kernel-server start … and instantly it mounted no problem. Turns out that Kodi had actually been watching a transcoded stream from Emby until I had NFS working. Thankfully Samba took less time and hassle to get working (surprisingly), and pretty soon I could access files across both Linux and Windows. And there was much rejoicing.
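For what it's worth, an /etc/exports entry of the kind described here looks something like the following (path and subnet are examples):

/mnt/media 192.168.1.0/24(rw,sync,no_subtree_check)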

At this point I was getting impatient (plus this microserver is taking up a chunk of space on my desk where I really ought to be doing uni work), so I quickly checked I knew how to set up a static IP, and turned my attention to the real thing.

Next Up:

Booting up The Box
Installing, reinstalling and shenanigans

This post was originally published on Nathanael’s site here.

Sudo apt-get install basic-linux-pt1 –The-Task

February 16th, 2017 No comments

I've had it in mind to start learning Linux for a while, and now I've found the perfect project to help me do just that: building my own NAS.

Until November I was perfectly happy with my business-grade Draytek router. I bought it about 6 years ago, particularly because it could host its own VPN instead of just passing through to a Windows PC. It has served me well. I was also reasonably satisfied with my Western Digital EX4 NAS unit in most respects. Then Gigabit FTTH was installed in my area, and I got a free trial – all hell broke loose (very much in a First World Problems sense).

The first thing which became apparent is that my 'nice' router was outdated. Its best WAN to LAN throughput topped out at 93Mb, while its LAN to WAN maxed out at 73Mb (incidentally this also means that I already wasn't getting the benefit of my old 250Mb connection). My new connection is a symmetric Gigabit line which realistically gives me up to about 700Mb in either direction. Not wanting to go back to relying on the equipment provided by ISPs, I went out (well, I went online) and found myself something which can cope with over 900Mb in either direction (a Netgear Nighthawk R7000, if you're interested). "Excellent", I thought, "now I'm all set"… but no. The knock-on effect of getting such fast internet soon became apparent on my NAS. After adding just a few torrents to Transmission, the speed steadily rose until it was approaching 10MB/s, at which point all interfaces to the device practically stopped responding. I surmised that the cause was likely the limited 512MB of RAM inside the poor thing, and so went in search of a replacement.

As I looked around, it quickly became clear that most of the available devices on the market were out of my price bracket. While I would have liked to get myself something like a Synology, it just wasn’t going to happen. A friend of mine pointed me in the direction of an HP Gen 8 Proliant Microserver (http://compadvance.co.uk/en/item/323618/HP-Proliant-MicroServer-Gen8-G1610T-4GB) which he has, and after some extra checking around, I decided it looked good. I also decided to up the 4GB of RAM to 16GB, and add an SSD for the OS in the ODD bay.

Now the fun really started. Naturally I didn't want to put Windows on this thing (although this is what my friend has done); I obviously wanted to run Linux. And as it's intended to sit quietly, (apparently) minding its own business for the most part, I wanted to keep as much RAM as possible free by not running a GUI. Ubuntu Server seemed like the perfect choice; except I have practically no experience setting up servers, working on the command line (basic ls, cp and rm don't really count), or using Ubuntu or Linux for anything at all (I've tinkered, but that's about it).

Well, no time like the present to learn.

Next Up:

Learning and testing on VMs

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

KWLUG: Let’s Encrypt, Cryptocurrencies (2017-02)

February 11th, 2017 No comments

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topics of Let's Encrypt and cryptocurrencies, published on February 7th, 2017. You can find the original Kitchener Waterloo Linux Users Group post here.


Categories: Linux, Podcast, Tyler B

Setup your own VPN with OpenVPN

February 6th, 2017 No comments

Using the excellent Digital Ocean tutorial as my base, I decided to set up an OpenVPN server on a Linux Mint 18 computer running on my home network so that I can have an extra layer of protection when connecting to those less than reputable WiFi hotspots at airports and hotels.

While this post is not meant to be an in-depth guide (you should use the original for that), it is meant to allow me to look back at this at some point in the future and easily re-create my setup.

1. Install everything you need

sudo apt-get update
sudo apt-get install openvpn easy-rsa

2. Setup Certificate Authority (CA)

make-cadir ~/openvpn-ca
cd ~/openvpn-ca
nano vars

3. Update CA vars

Set these to something that makes sense:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Set the KEY_NAME to something that makes sense:

export KEY_NAME="server"

4. Build the CA

source vars
./clean-all
./build-ca

5. Build server certificate and key

./build-key-server server
./build-dh
openvpn --genkey --secret keys/ta.key

6. Generate client certificate

source vars
./build-key-pass clientname

7. Configure OpenVPN

cd ~/openvpn-ca/keys
sudo cp ca.crt ca.key server.crt server.key ta.key dh2048.pem /etc/openvpn
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | sudo tee /etc/openvpn/server.conf

Edit config file:

sudo nano /etc/openvpn/server.conf

Uncomment the following:

tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "route 192.168.10.0 255.255.255.0"
push "route 192.168.20.0 255.255.255.0"

Add the following:

key-direction 0
auth SHA256

Edit config file:

sudo nano /etc/sysctl.conf

Uncomment the following:

net.ipv4.ip_forward=1

Run:

sudo sysctl -p

8. Setup UFW rules

Run:

ip route | grep default

to find the name of the network adaptor. For example:

default via 192.168.x.x dev enp3s0  src 192.168.x.x  metric 202

Edit config file:

sudo nano /etc/ufw/before.rules

Add the following, substituting your network adaptor's name, above the section that begins # Don't delete these required lines…

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0
-A POSTROUTING -s 10.8.0.0/8 -o enp3s0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Edit config file:

sudo nano /etc/default/ufw

Change DEFAULT_FORWARD_POLICY to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"

Allow the OpenVPN port and OpenSSH through ufw, then disable and re-enable ufw to apply the changes:

sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw disable
sudo ufw enable

9. Start OpenVPN Service and set it to enable at boot

sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server

10. Setup client configuration

mkdir -p ~/client-configs/files
chmod 700 ~/client-configs/files
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

Edit config file:

nano ~/client-configs/base.conf

Replace the remote server_IP_address port line with the external IP address and port you are planning on using. The IP address can also be a hostname, such as a redirector.
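For example, the finished line might look something like this (hostname and port here are placeholders):

remote vpn.example.com 1194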

Add the following:

cipher AES-128-CBC
auth SHA256
key-direction 1

Uncomment the following:

user nobody
group nogroup

Comment out the following:

#ca ca.crt
#cert client.crt
#key client.key

11. Make a client configuration generation script

Create the file:

nano ~/client-configs/make_config.sh

Add the following to it:

#!/bin/bash

# First argument: Client identifier

KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn

And mark it executable:

chmod 700 ~/client-configs/make_config.sh

12. Generate the client config file

cd ~/client-configs
./make_config.sh clientname

13. Transfer client configuration to device

You can now transfer the client configuration file found in ~/client-configs/files to your device.
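For example, one option is pulling it down with scp from the device (username and hostname are placeholders):

scp user@your-server:~/client-configs/files/clientname.ovpn .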


This post originally appeared on my personal website here.

Categories: Open Source Software, Tyler B

Blast from the Past: Getting KDE on openSUSE is like playing Jenga

February 2nd, 2017 No comments

This post was originally published on October 16, 2009. The original can be found here.


As part of our experiment, everyone is required to try a different desktop manager for two weeks. I chose KDE, since I’ve been using GNOME since I installed openSUSE. However, I’ve found that while trying to get a desktop manager set up one wrong move can cause everything to fall apart.

Switching from GNOME:

This was fairly simple. I started up YaST Software Management, changed my filter from “Search” to “Patterns”, and found the Graphical Environments section. Here I right-clicked “KDE Base System” and selected install. Clicking accept installed the kdebase and kdm packages, with a slew of other KDE default programs. Once this was done, I logged out of my GNOME session and selected KDE4 as my new login session. My system was slightly confused and booted into GNOME again, so I restarted. This time, I was met with KDE 4.1.

My Thoughts on KDE 4.1:

As much as I had hated the Qt look [which I erroneously call the 'QuickTime' look, due to its uncanny similarity to the QuickTime app], the desktop was beautiful. The default panel was a very slick, glossy black, which looked quite nice. The "lines" in each window title made the windowing system very ugly, though, so I set out to turn them off. It's a fairly easy process:

KDE Application Launcher > Configure Desktop > Appearance > Windows > Uncheck the “Show stripes next to the title” box.

Once completed, my windows were simple and effective, and slightly less chunky than the default GNOME theme, so I was content.

Getting rid of the openSUSE Branding:

openSUSE usually draws much ire from me – so it's not hard to imagine that I'd prefer not to have openSUSE branding on every god damn application I run, least of all my Desktop Manager. From YaST Software Management I searched for openSUSE and uninstalled every package that had the words "openSUSE" and "branding". YaST automatically replaces these packages with alternate "upstream" packages, which seem to be the non-openSUSE themes/appearances. Once these were gone, things looked a lot less gray-and-green, and I was happy.

Oh god what happened to my login screen:

A side effect of removing all those openSUSE packages was that my login screen took a trip back in time, to the Windows 3.1 era. It was a white window on a blue background with a Times New Roman-esque font. After a bit of researching on the GOOG, I found out that this was KDE3 stepping up to take over for my openSUSE branding. Uninstalling the package kde3base or whatever the shit it's called forced KDE4 to take over, and everything was peachy again.

Installing my Broadcom Wireless Driver

In order to install my driver, I followed this guide TO THE LETTER. Not following this guide actually gave YaST a heart attack and created package conflicts.

KMix Being Weird

KMix magically made the media buttons on my laptop work; however, it occasionally decided to change which "audio device" the default slider was controlling. Still, having the media buttons working was a HUGE plus.

Getting Compositing to Work

I did not have a good experience with this. In fact, by fucking around with settings, I ended up bricking my openSUSE install entirely. So alas, I ended up completely re-installing openSUSE. Regardless, to install the ATI drivers, following the guide here using the one-click install method worked perfectly. After finally getting my drivers, turning on compositing was simple:

KDE Application Launcher > Configure Desktop > Appearance > Desktop > Check the “Enable Desktop Effects” box.

From KDE4.1 to KDE4.3

While KDE was really working for me, the notification system was seriously annoying. Every time my system had an update, or I received a message in Kopete, an ugly, plain, slightly off-center gray box would appear at the top of my screen to inform me. Tyler informed me that this was caused by the fact that I wasn't running the most recent version of KDE4. A quick check showed me that openSUSE isn't going to use KDE4.3 until openSUSE 11.2 launches; however, you can manually add the KDE4.3 repositories to YaST, as shown on the openSUSE KDE Repository page.

After adding these repositories, I learned a painful lesson in upgrading your display manager. Do not, under any circumstances, attempt a display manager upgrade/switch until you have an hour to spare and enough battery life to last the whole time. I did not, and even though I cancelled the install about 60 seconds in, I found that YaST had already uninstalled my display manager. Upon restart, I was met with a terminal.

From the terminal, I used the command line version of YaST to completely remove kdebase4 and kdm from my system. After that, re-installing the KDE4.3 version of kdm from YaST in the terminal installed all the other required applications. However, there are a shitload of dependency issues you gotta sort through, and unfortunately the required action is not the same for each application.
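For what it's worth, a rough non-interactive equivalent of that removal and re-install could be done with zypper (a sketch only – the post itself went through YaST's text-mode interface, and package names are as the post gives them):

sudo zypper remove kdebase4 kdm
sudo zypper install kdm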

KDE4.3

KDE4.3 is absolutely gorgeous; I've had no complaints with it. KMix seems to have reassigned itself again, but this time it assigned itself correctly. Removing the openSUSE branding was the same, but by default the desktop theme used is Air. I prefer the darker look of Oxygen, so I headed over to my desktop to fix it by following these steps:

Desktop > Right Click > Plain Desktop Settings > Change the Desktop Theme from Air to Oxygen.

Concluding Thoughts

Now that all these things are sorted out, I'm surprisingly impressed with KDE, and I might even keep it past the end of this test period for our podcast.

Let me know if you’ve ever had to change desktop managers and your woes in the comments!
