
Blast from the Past: My Search for the Best Audio Editing Software

March 8th, 2017

This post was originally published on October 6, 2010. The original can be found here.


Lately, I’ve been doing some audio recording. In addition to a couple of podcasts that I work on, I occasionally like to record my own musical compositions. While there seems to be no shortage of high-end audio editing applications on Windows and Mac, the selection on Linux is sparser. Faced with some frustration, I went out and downloaded a number of Linux-based audio editors. I used Wikipedia to find the software in the tests below, and what follows are my totally subjective and highly biased reviews of each.

Each piece of software was used to edit some raw recordings from a podcast that I have been involved with lately. This source material is almost 100% spoken word, with some music and sound effects sprinkled throughout. It’s important to note these details, as your needs may vary drastically depending on the type of audio project that you’re working on.

Audacity:

The Audacity Project is kind of the Linux standard for non-professional audio editing. It was the first application that I tried to use, mainly because I was familiar with earlier versions of the program that I had once used back in my Windows days. Audacity includes a great number of features that make it ideal for post-processing of any audio project, including a wide array of effects, some great noise generators, and a few analysis tools that make it perfect for cleaning up your finished file before publication.

Audacity audio editor with a demo project loaded

Unfortunately, I found that it lacked a usable GUI for editing podcast material. In particular, it seems to be missing the ability to edit a single track in a multi-track project without unduly affecting the other tracks. By default, if you use the selection tool to grab a portion of audio that ought to be deleted from one track in the project, it seems to delete that portion of audio from all tracks in the project.

I found this out the hard way when I played back the master track that I had assembled my finished podcast on, only to find out that significant portions of the audio had mysteriously gone missing at some point during the editing process.

To make matters worse, I closed the application, lost the undo record for the project, and had to start the editing process from the beginning.

This lack of GUI polish also exhibits itself in the way that you can interact with the audio tracks themselves. Unlike in most DAW solutions, a portion of audio that has been clipped out of a larger track seemingly cannot be moved around the project by clicking and dragging it across the stage with the mouse. At least I couldn’t figure out how to do it, and I ended up relying heavily on the cut, copy, and paste functions to edit my project. This is a poor way to work on a project of any complexity, and it makes projects that rely on audio loops a pain to assemble.

Ardour:

Where Audacity is suited more towards hobbyist recording setups, Ardour aims to be a professional audio solution that is capable of competing with mainstream software like ProTools. It is a fully featured audio suite that can allegedly do most everything that you may require, but as such, can also confuse the hell out of first-time users with its complicated GUI and lengthy manual.

Granted, this is hardly a slight to the project, because it really isn’t suited to my needs. It is a pro-level audio environment that can be used as the centrepiece of a full recording studio or stage show. If you just want to edit a podcast, it may not be the tool for you. As such, if the GUI seems challenging and you find the documentation long-winded, you may just be using the wrong tool for the job.

Ardour wants sole control of my audio interface

The biggest issue that I had with this piece of software was getting it to run at all on my machine. It uses JACK to attach itself to your audio interfaces in the name of providing a perfect sampling environment that doesn’t get slowed down by having to share the interface with other pieces of software.

Unfortunately, this means that in order to use it, I had to quit all other processes that are capable of generating sound, including this web browser. This is a pain if you are trying to run Ardour in a multi-application environment, or need to reference the internet for anything while working.

After reading the introductory documentation and adjusting the settings in the startup dialog for about 15 minutes, I simply gave up on Ardour without ever managing to get into a workspace. It seems to be far too complicated for my needs, and doesn’t seem worth my time. Your mileage may vary.

Jokosher:

From the moment that I started reading about this project, I liked the sound of it. Jokosher is a multi-track recording and editing environment built on top of Python and GStreamer that was created by a podcaster who was unsatisfied with the audio editing tools that were available on Linux. The application focuses on being easy enough to use that non-technical people like musicians can pick it up and get their ideas down with minimal hassle. Think of it as GarageBand for Linux.

Jokosher may look cartoony, but it may be exactly what you need for small projects

Indeed, just as the website promised, I was able to get a working environment set up in a matter of minutes. The editing tools allow for splitting the audio, grabbing it and moving it around, and non-destructively editing multiple tracks at the same time (I’m looking at you, Audacity). The GUI also has a beautiful polish to it that, although a tad cartoony, really makes the program look and feel simple. For editing something like a podcast, I’m not sure that this application can be beat.

The only issue that I encountered in my short time using Jokosher was with its support of LADSPA plugins. These are free audio plugins that can be used to apply effects to the different tracks of your audio project. When I tried to use them from within the application, it instructed me to download some from my repositories. Upon checking Synaptic, I saw that I already had a number of them downloaded. Even after installing more, the program did not seem to pick them up.

All in all, this project lived up to its hype, and I will most certainly take some time to break it in, and may write a more in-depth review once I get used to it. If you’re doing podcasting, you owe it to yourself to check this app out.

In Conclusion:

Each of the three applications that I tried to work with while writing this piece deserve your respect. The underlying audio framework of most Linux systems is a veritable rats’ nest of subsystems, platforms, daemons, plugins and helper applications. I would wager a significant amount of money on this situation as the reason that we don’t have ProTools and its ilk on our platform of choice. I’ve done a little bit of work with GStreamer, and even it, as perhaps the prettiest and best supported of all audio libraries on the platform, left me scratching my head at times.

When choosing audio software, it’s important to keep in mind that you need a tool that’s uniquely suited to your project. Since I’m editing podcasts and fooling around with drum loops and samples of my guitars, Jokosher does just about everything that I need and more. I may use Audacity for post-production, or to record my source audio (simply because I haven’t tried recording in Jokosher yet – I know that Audacity works), because it falls somewhere in between a simple editing tool and an advanced platform. Ardour, meanwhile, is probably suited towards the more hard-core audio engineer slash system administrator types who are so fanatic about recording quality that they are willing to sacrifice an entire box for running their DAW software. It’s simply more power than the majority of hobbyist enthusiasts really needs.

Blast from the Past: Do something nice for a change

March 5th, 2017

This post was originally published on October 6, 2010. The original can be found here.


Open source software (OSS) is great. It’s powerful, community focused and, let’s face it, free. Not a single day goes by that I don’t use OSS. Between Firefox, Linux Mint, Thunderbird, Pidgin, Pinta, Deluge, FileZilla and many, many more, there is hardly ever a situation where there isn’t an OSS tool for the job. Unfortunately, for all of the benefits that OSS brings me in my daily life, I find, on reflection, that I hardly ever do anything to contribute back. What’s worse is that I know I am not alone in this. Many OSS users out there just use the software because it happens to be the best fit for them. While there is absolutely nothing wrong with that, many of these individuals could be contributing back. Obviously I don’t expect everyone, or even half for that matter, to contribute back, but I honestly do think that the proportion of people who do could be much higher.

Why should I?

This is perhaps the easiest to answer. While you don’t have to contribute back, you should if you want to personally make the OSS you love even better.

How do I contribute?

Contributing to a project is incredibly easy. In fact, in the vast majority of cases you don’t need to write code, debug software or even do much more than simply use the software in question. What do I mean by this? Well, the fact that we here on The Linux Experiment write blog posts praising (or tearing to shreds with constructive criticism) various OSS projects is one form of contributing. Did I lose you? Every time you mention an OSS project you bring attention to it. This attention in turn draws more users and developers to the project and grows it larger. Tell your family, write a blog post, digg stories about OSS or just tell your friends about “this cool new program I found”.

There are many other very easy ways to help out as well. For instance if you notice the program is doing something funky then file a bug. It’s a short process that is usually very painless and quickly brings real world results. I have found that it is also a very therapeutic way to get back at that application that just crashed and lost all of your data. Sometimes you don’t even have to be the one to file it, simply bringing it up in a discussion, such as a forum post, can be enough for others to look into it for you.

Speaking of forum posts, answering new users’ questions about OSS projects can be an excellent way to both spread use of the project and identify problems that new users are facing. The latter could in turn be corrected through subsequent bug or feature requests. Along the same lines, documentation is something that some OSS projects are sorely missing. While it is not the most glamorous job, documentation is key to providing an excellent experience to a first time user. If you know more than one language I can’t think of a single OSS project that couldn’t use your help making translations so that people all over the world can begin to use the software.

For the artists among us there are many OSS projects that could benefit from a complete artwork makeover. As a programmer myself I know all too well the horrors of developer artwork. Creating some awesome graphics, icons, etc. for a project can make a world of difference. Or if you are more interested in user experience and interface design there are many projects that could also benefit from your unique skills. Tools like Glade can even allow individuals to create whole user interfaces without writing a single line of code.

Are you a web developer? Do you like making pretty websites with fancy AJAX fluff? Offer to help the project by designing an attractive website that lures visitors to try the software. You could be the difference between this and this (no offense to the former).

If you’ve been using a particular piece of software for a while, and feel comfortable trying to help others, hop on over to the project’s IRC channel. Help new users troubleshoot their problems and offer suggestions of solutions that have worked for you. Just remember: nothing turns off a new user like an angry IRC asshat.

Finally if you are a developer take a look at the software you use on a daily basis. Can you see anything in it that you could help change? Peruse their bug tracker and start picking off the low priority or trivial bugs. These are often issues that get overlooked while the ‘full time’ developers tackle the larger problems. Squashing these small bugs can help to alleviate the 100 paper cuts syndrome many users experience while using some OSS.

Where to start

Depending on how you would like to contribute your starting point could be pretty much anywhere. I would suggest however that you check out your favourite OSS project’s website. Alternatively jump over to an intermediary like OpenHatch that aggregates all of the help postings from a variety of projects. OpenHatch actually has a whole community dedicated to matching people who want to contribute with people who need their help.

I don’t expect anyone, and certainly not myself, to contribute back on a daily basis. I will however personally start by setting a recurring event in my calendar that reminds me to contribute, in some way or another, every week or month. If we all did something similar imagine the rapid improvements we could see in a short time.

Blast from the Past: VOIP with Linode, Ubuntu, Asterisk and FreePBX

March 1st, 2017

This post was originally published on October 29, 2010. The original can be found here.


Overview and Introduction

I’ve been dabbling with managing a VOIP server for the past year or so, using CentOS, Asterisk and FreePBX on a co-located server. Recently Dave and I needed to move to our own machine, and decided to use TEH CLOUD to reduce management and get a fresh start. There are hundreds of hosts out there offering virtual private servers (VPS’s). We’ve standardized on Linode for our small business for a few reasons. While I don’t want to sound like a complete advertisement, I’ve been incredibly impressed with them:

  • Performance. The host systems at Linode run at least 4-way 2GHz Xeon dual-core CPUs (I’ve seen higher as well) and you’re guaranteed the RAM you pay for. Pricing is generally based on how much memory you need.
  • Pricing. For a 512MB Linode, you pay $19.95 US per month. Slicehost (a part of Rackspace, and a Linode competitor) charges the same amount for a 256MB slice. Generally you want at least 512MB RAM for a Linux machine that’s not a test/development box.
  • Features. If you have multiple VMs in the same datacenter, you can assign them private IPs and internal traffic doesn’t count toward your bandwidth allowance. Likewise, bandwidth is pooled among all your VMs; so buying two VMs with 200GB bandwidth each gives 400GB for all your systems.

With full root access and the Linux distribution of your choice, it’s very easy to set up and tear down VMs.

Why VOIP?
When people hear VOIP, they generally assume either a flaky enterprise system with echoing calls or something like Skype. Properly configured, a VOIP system offers a number of really interesting features:

  • Low-cost long distance and international calling. The provider we use, voip.ms, offers outgoing calls for $0.0052 per minute to Canada and $0.0105/minute to the US on their value route.
  • Cheap phone numbers – direct inward dialing – are available for $0.99 per month in your region. These phone numbers are virtual and can be configured to do nearly anything you want. Incoming calls are $0.01/minute, and calls between voip.ms numbers are free.
  • Want to take advantage of cheap long distance from your cell phone? Set up a Direct Inward System Access path, which gives you a dial tone for making outgoing calls when you call a local number. Put your DID number on your My5 list, and you’re set to reduce bill overages.
  • Voicemail becomes much more useful when the VOIP server sends you an email with a WAV attachment and caller ID information.
  • Want to set up an interactive voice response menu, time conditions, blacklist telemarketers, manage group conferences or have witty hold music? All available with FreePBX and Asterisk.

Continue reading for server setup details and security best practices…

Configuration
I opted to use Ubuntu 10.04 LTS on the server, since there are handy directions for configuring Asterisk and FreePBX. The guide is pretty handy, but some adjustments had to be made to the article for the latest versions of Ubuntu and Asterisk:

  • When installing Asterisk, make sure to install the sox package for additional sound and recording support.
  • Back up your /etc/asterisk/modules.conf file before installing FreePBX, and then restore it after the installation is complete. The FreePBX installation seems to clobber this file.
  • Replace all instances of the asteriskcdr database with asteriskcdrdb for proper call reporting functionality. Likewise, you’ll have to recompile and install the asterisk-addons package as per Launchpad bug 560656:
    cd /usr/src
    apt-get build-dep asterisk-mysql
    apt-get -b source asterisk-mysql
    dpkg -i asterisk-mysql*.deb
  • Don’t use the amportal script for managing Asterisk/FreePBX; use
    /etc/init.d/asterisk [start|stop|restart]

    instead.

Let It Ring
There’s plenty of great FreePBX documentation available online – you shouldn’t have a problem getting up and running once the installation is finished. As always, you should follow best server security practices:

  • Enforce username/password authentication with .htaccess and htpasswd and HTTPS for your management console, or only expose FreePBX administration over localhost/127.0.0.1 and tunnel in. There are plenty of articles on configuring OpenSSL and Apache2.
  • Consider running SSH on an alternate port (not 22), and denying direct logins from the root user account. Enforce strong passwords and use tools such as fail2ban and DenyHosts to limit SSH attacks.
  • Use a firewall. Ubuntu’s ufw is very simple to manage. For an Asterisk server, you’ll want to allow UDP port 5060 and UDP ports 10000-20000 (for voice traffic), or whatever range is defined in /etc/asterisk/rtp.conf (see the sketch below).
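Putting that together, here’s a minimal ufw sketch for a box like this; the SSH port and RTP range are assumptions, so match them to your own sshd_config and /etc/asterisk/rtp.conf:

sudo ufw allow 22/tcp            # SSH (use your alternate port here if you moved it)
sudo ufw allow 5060/udp          # SIP signalling
sudo ufw allow 10000:20000/udp   # RTP voice traffic; match the range in /etc/asterisk/rtp.conf
sudo ufw enable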

Feel free to post comments here on server setup or general VOIP questions, and I’ll do my best to help out!

Blast from the Past: Compile Windows programs on Linux

February 26th, 2017

This post was originally published on September 26, 2010. The original can be found here.


Windows?? *GASP!*

Sometimes you just have to compile Windows programs from the comfort of your Linux install. This is a relatively simple process that basically requires you to only install the following (Ubuntu) packages:

To compile 32-bit programs

  • mingw32 (swap out for gcc-mingw32 if you need 64-bit support)
  • mingw32-binutils
  • mingw32-runtime

Additionally for 64-bit programs (*PLEASE SEE NOTE)

  • mingw-w64
  • gcc-mingw32

Once you have those packages you just need to swap out “gcc” in your normal compile commands with either “i586-mingw32msvc-gcc” (for 32-bit) or “amd64-mingw32msvc-gcc” (for 64-bit). So for example if we take the following hello world program in C

#include <stdio.h>

int main(int argc, char** argv)
{
    printf("Hello world!\n");
    return 0;
}

we can compile it to a 32-bit Windows program by using something similar to the following command (assuming the code is contained within a file called main.c)

i586-mingw32msvc-gcc -Wall "main.c" -o "Program.exe"

You can even compile Win32 GUI programs as well. Take the following code as an example

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    char *msg = "The message box's message!";
    MessageBox(NULL, msg, "MsgBox Title", MB_OK | MB_ICONINFORMATION);

    return 0;
}

this time I’ll compile it into a 64-bit Windows application using

amd64-mingw32msvc-gcc -Wall -mwindows "main.c" -o "Program.exe"

You can even test to make sure it worked properly by running the program through wine like

wine Program.exe

You might need to install some extra packages to get Wine to run 64-bit applications but in general this will work.

That’s pretty much it. You might have a couple of other issues (like linking against Windows libraries instead of the Linux ones) but overall this is a very simple drop-in replacement for your regular gcc command.

*NOTE: There is currently a problem with the Lucid packages for the 64-bit compilers. As a work around you can get the packages from this PPA instead.

Originally posted on my personal website here.

Blast from the Past: Very short plug for PowerTOP

February 25th, 2017

This post was originally published on July 4, 2010. The original can be found here.


Recently I decided to try out PowerTOP, a Linux power saving application built by Intel. I am extremely impressed by how easy it was to use and the power savings I am now basking in.

PowerTOP is a terminal application that first scans your computer for a number of things during a set interval. It then reports back which processes are taking up the most power and offers you some options to improve your battery life. All of these options can literally be enabled at a press of a button. It’s sort of like an experience I once had with Clippy in Microsoft Word; “it seems you are trying to save power, let me help you…” After applying a few of the suggestions the estimated battery life on my laptop went from about 3 and a half hours to almost 5 hours. In short, I would highly recommend everyone at least try out PowerTOP. I’m not promising miracles but at the very least it should help you out some.
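If you want to try it yourself, getting PowerTOP going on Ubuntu is about as simple as it gets; this is just a sketch, and the exact interface for applying suggestions varies a little between PowerTOP versions:

sudo apt-get install powertop
sudo powertop    # interactive report; follow the on-screen prompts to apply its suggestions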

Blast from the Past: Vorbis is not Theora

February 24th, 2017

This post was originally published on April 21, 2010. The original can be found here.


Recently I have started to mess around with the Vorbis audio codec, commonly found within the Ogg media container. Unlike Theora, which I had also experimented with but won’t post the results for fear of a backlash, I must say I am rather impressed with Vorbis. I had no idea that the open source community had such a high quality audio codec available to them. Previously I always sort of passed off Vorbis’ reputation for being ‘so great’ within the community as simply a lack of options. However, after some comparative tests between Vorbis and MP3, I must say I am a changed man. I would now easily recommend Vorbis as a quality choice if it fits your use case.

What is Vorbis?

Like I mentioned above, Vorbis is the name of a very high quality free and open source audio codec. It is analogous to MP3 in that you can use it to shrink the size of your music collection while still retaining very good sound. Vorbis is unique in that it only offers a VBR mode, which allows it to squeeze the best sound out of the fewest number of bits. This is done by lowering the bitrate during sections of silence or unimportant audio. Additionally, unlike other audio codecs, Vorbis audio is generally encoded at a supplied ‘quality’ level rather than a fixed bitrate. Currently quality level 4 works out to roughly 128kbit/s, but as the encoders mature they may be able to squeeze the same quality out of a lower bitrate, saving you storage space, bandwidth, etc.
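As a concrete (if hypothetical) example, encoding to a quality level rather than a bitrate looks like this with oggenc:

oggenc -q 4 input.flac -o output.ogg    # quality level 4, currently around 128kbit/s on average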

So Vorbis is better than MP3?

Obviously when it comes to comparing the relative quality of competing audio codecs it must always be up to the listener to decide. That being said I firmly believe that Vorbis is far better than MP3 at low bitrates and is, at the very least, very comparable to MP3 as you increase the bitrate.

The Tests

I began by grabbing a FLAC copy of the Creative Commons album The Slip by Nine Inch Nails here. I chose FLAC because it provided me with the highest quality possible (lossless CD quality) from which to encode the samples. Then, looking around at some Internet radio websites, I decided that I should test the following bitrates: 45kbit/s, 64kbit/s, 96kbit/s, and finally 128kbit/s (for good measure). I encoded them using only the default encoder settings and the following terminal commands:

For MP3 I used LAME and the following command. I chose average bitrate (ABR) which is really just VBR with a target, similar to Vorbis:

flac -cd {input file goes here}.flac | lame --abr {target bitrate} - {output file goes here}.mp3

For Vorbis I used OggEnc and the following command:

oggenc -b {target bitrate} {input file goes here}.flac -o {output file goes here}.ogg

Results

I think I would be a hypocrite if I didn’t tell you to just listen for yourself… The song in question is track #4, Discipline.

Note: if you are using Mozilla Firefox, Google Chrome, or anything else that supports HTML5/Vorbis, you should be able to play the Vorbis file right in your browser.

45kbit/s MP3(1.4MB) Vorbis(1.3MB)

64kbit/s MP3(2.0MB) Vorbis(1.9MB)

96kbit/s MP3(2.9MB) Vorbis(2.8MB)

128kbit/s MP3(3.8MB) Vorbis(3.6MB)

Blast from the Past: A Practical Reference of Linux Commands

February 23rd, 2017

This post was originally published on February 19, 2010. The original can be found here.


Just wanted to share a link to a great table that I found – the practical reference of linux commands is a handy little table of terminal commands organized by task. I’ll add it to our sidebar under the ‘Useful Sites’ heading for future reference.

Happy Linuxing!

We want you! (to write for The Linux Experiment)

February 20th, 2017

Are you a Linux user? Thinking about trying your own Linux experiment? Have you ever come across something broken or annoying and figured out a solution? Or maybe you just came up with a really neat way of doing something to make your life easier? Well if you have ever done any of those and can write a decent sentence or two we’d be glad to showcase your content here.

Get the full details at our page here: Write for the Linux Experiment.

Categories: Tyler B

Sudo apt-get install basic-linux-pt3 –Install & Setup

February 18th, 2017

It’s been a busy little while, so I haven’t had time to get this written up. Let’s see what I can still remember.

Installing Ubuntu Server was as easy as you’d expect. Booting into it wasn’t. It turns out that the BIOS on this box is set up not to boot from the ODD SATA port. It’ll boot from any of the four drives, or from USB, but not from that extra SATA port. My friend, who already has the same box running, solved this by setting up a RAID where the ODD SATA drive is RAID number 0, which then allows it to be booted from. I went for a much simpler solution after noticing that the install process lets you select the location of the GRUB loader. This server has a USB port and a MicroSD card reader inside the case, both bootable. I have plenty of spare MicroSD cards lying around (seriously, since when is 1GB or 2GB big enough for anybody?), so I just inserted one and reinstalled, specifying the MicroSD card as the location for GRUB.

It felt good to have my new box booting up and actually running. I got its static IP and OpenSSH set up, checked that I could access it through PuTTY, and finally got it up on the shelf and off my desk. Everything from this point on I’ve done in PuTTY, with no monitor attached to the server. Let’s face it, it’s not like there’s any difference between one white-on-black text interface and another.

Next, I turned my attention to mounting my drives. The mount command is simple enough, but obviously I want my drives to be available right away after boot, so it was time to learn about fstab and UUIDs. Luckily this is a fairly straightforward process, especially since my drives only have a single partition each; the worst of it was writing down the long UUIDs to copy from the terminal output into the fstab file, since I haven’t been able to get copy & paste working in PuTTY. One thing I started to realise at this point is that while Ubuntu boots nice and quickly, the server itself doesn’t, so each time I want to see if my fiddling has worked, I pretty much have time to make a cup of tea. From looking through various guides I simply used:

UUID=<UUID> /mnt/<mountpoint> ext4 defaults 0 2

for each of my additional drives. After a reboot, I had access to all of my files and media as it was on my old NAS. I spent a little time clearing out the directory structures it had left behind, program files etc., to leave nice, clean access to all of my files.
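As an aside, two commands make this less painful; the device name here is just an example:

sudo blkid /dev/sdb1    # print a partition's UUID so it can be copied into /etc/fstab
sudo mount -a           # mount everything listed in /etc/fstab now, without waiting for a reboot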

NFS and Samba were just as easy to get set up as they had been on the virtual machines, although with so many different things I wanted to share, I had to add a lot of entries to each file. Thankfully there’s no need to reboot after each edit; the services can simply be restarted to pick up the new settings. Samba is simple enough to test, since I’m managing the server over SSH with PuTTY on Windows. NFS required me to test in one of the VMs once again, but after some work, both seemed to be working. I’m not 100% happy with some of my setup, since I’m just allowing open access to anyone on some of these shares. Chances are I’ll be fine, but I’ll want to come back at some point to try and tighten up my user management.
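For reference, the entries I’m talking about look roughly like this; the paths, subnet and share name are made up for illustration:

# /etc/exports (NFS)
/mnt/media    192.168.1.0/24(ro,sync,no_subtree_check)

# /etc/samba/smb.conf (Samba)
[media]
   path = /mnt/media
   read only = yes
   guest ok = yes

# pick up changes without a reboot
sudo exportfs -ra
sudo service smbd restart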

Emby server has a very good set of installation instructions. The main new part for me was adding its repository, but this means it’ll be kept up to date when I perform other apt-get upgrades. Everything else related to Emby is managed through its web GUI, so it’s straightforward stuff.

In fact, I was surprised at how simple it was to get the majority of things working. FTP just kind of worked; I just needed to make symlinks from my home directory to the other places I need to access. Even Transmission wasn’t too bad to get going and allow my remote GUI to connect. One thing that started to get harder from this point was keeping track of the different ports and services I was using. I took some time to make a list of computers and services to plan my external port mapping, and got things like FTP, SSH and Transmission forwarded. Internally I’ve just used the defaults for simplicity; externally I’ve made sure they’re set to something completely different.
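The symlink trick is as simple as it sounds; the paths here are only examples:

ln -s /mnt/media ~/media    # expose the media mount inside the FTP-accessible home directory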

Next Up:

Bash-ing things around

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

Sudo apt-get install basic-linux-pt2 –Testing-&-VMs

February 17th, 2017

With the hardware sorted (bar some jiggery-pokery to get the ODD to SSD bay converter to fit properly), I set about deciding what I want this box to do.

The list I came up with looks like this:

  • Media serving to my Kodi devices (2 Raspberry Pi systems, my android tablet, and a new Ubuntu PC I’m putting together for retro gaming with my kids)
  • FTP – I like to use my NAS like my own personal cloud. My tablet can mount an FTP in its file browser just like any other folder. No sFTP support, though, unfortunately (and I don’t like any of the file browsers I tried which do).
  • Transmission (or Deluge) – the main reason for swapping the 4GB of RAM out for 16GB
  • SSH (obviously!)
  • Dropbox and Google Drive – for when various apps and things integrate well with these mobile apps.
  • Backup – the WD My Cloud EX4’s backup options are very poor.
  • General file sharing with Windows and Ubuntu – Samba & NFS, naturally.
  • Hosting & tinkering with other bits I might want to try & learn about – a website (for practise, not for public viewing), a git… who knows.

Being basically completely unfamiliar with most of this stuff, I was undecided between Ubuntu Desktop and Server for quite a while. Desktop obviously has so much of this stuff ready to go: it mounts things automatically, I can use the GUI as a fallback if something isn’t right, and it’s just more like what I’m accustomed to. On the other hand, having the GUI running all the time will just use up unnecessary RAM – granted, I probably don’t have a shortage of that, but still…

In the end I installed both onto VMs on my Windows machine, made copies (so I had a clean version always ready to go without having to reinstall again), and started playing.

First up I wanted to sort out how I was going to deal with my media backend. On my current setup I use the Kodi client on one of my PCs to manage a central SQL database. While it works, it’s a bit slow and rather inefficient, so I went looking for either a headless Kodi backend, or just a way to run it without the GUI. I found all sorts of ideas, builds and code, none of which I understand or feel like I could implement. After a discussion with a Linux guru (one of my uni lecturers) it was clear that my plan was probably not going to work; he pointed out that he just runs his on DLNA, and that Plex seems to be quite good too. More research, and a question in /r/Kodi later, I had been pointed in the direction of Emby, a backend for Kodi without many of the limitations of Plex and DLNA. Installation was simple enough, but accessing the web UI wasn’t. When I had set up the VMs I had just left their network settings as NAT; this, it turns out, makes accessing the network from the VM possible, but not accessing the VM from elsewhere on the network (including other VMs on the same system). I did try to just change the settings in the VM to add a bridged adapter, but it didn’t work. Not knowing enough about networking on Linux to fix this, I just went ahead and reinstalled, this time setting up the VM with two network adapters – one NAT and another bridged. This worked a treat, and after adding a few media files and installing Kodi on the Desktop VM, I was able to play videos no problem.

Next, for no particular reason, was getting NFS working. I found guides, forums, blogs etc. (my Google-fu is pretty strong) and set about trying. I was sure it should be working: I’d installed nfs-kernel-server, added the entry to /etc/exports and set up the permissions, but I just couldn’t mount it in the Desktop VM – even though I could watch the files through Kodi. I ended up having to ask Reddit’s linux4noobs sub. Simple answer… sudo /etc/init.d/nfs-kernel-server start … and instantly it mounted no problem. It turns out that Kodi had actually been watching a transcoded stream from Emby until I got NFS working. Thankfully Samba took less time and hassle to get working (surprisingly), and pretty soon I could access files across both Linux and Windows. And there was much rejoicing.
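For what it’s worth, the client-side test looks something like this once the service is running; the server IP and paths are placeholders:

showmount -e 192.168.1.10                               # list what the server is actually exporting
sudo mount -t nfs 192.168.1.10:/mnt/media /mnt/media    # mount the export on the Desktop VM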

At this point I was getting impatient (plus this microserver is taking up a chunk of space on my desk where I really ought to be doing uni work), so I quickly checked that I knew how to set up a static IP, and turned my attention to the real thing.
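For the record, a static IP on an Ubuntu Server release of this vintage is set in /etc/network/interfaces; the interface name and addresses below are only examples, and newer releases use netplan instead:

auto enp3s0
iface enp3s0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1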

Next Up:

Booting up The Box
Installing, reinstalling and shenanigans

This post was originally published on Nathanael’s site here.

Sudo apt-get install basic-linux-pt1 –The-Task

February 16th, 2017

I’ve had it in mind to start learning linux for a while, and now I’ve found the perfect project to help me do just that: building my own NAS.

Until November I was perfectly happy with my business grade Draytek router. I bought it about 6 years ago, particularly because it could host its own VPN, instead of just passing through to a Windows PC. It has served me well. I was also reasonably satisfied with my Western Digital EX4 NAS unit in most respects. Then Gigabit FTTH was installed in my area, and I got a free trial – all hell broke loose (very much in a First World Problems sense).

The first thing which became apparent is that my ‘nice’ router was outdated. Its best WAN-to-LAN throughput topped out at 93Mb/s, while its LAN-to-WAN maxed out at 73Mb/s (incidentally, this also means that I already wasn’t getting the benefit of my old 250Mb connection). My new connection is a gigabit, symmetric line which realistically gives me up to about 700Mb/s in either direction. Not wanting to go back to relying on the equipment provided by ISPs, I went out (well, I went online) and found myself something which can cope with over 900Mb/s in either direction (a Netgear Nighthawk R7000, if you’re interested). “Excellent”, I thought, “now I’m all set”… but no. The knock-on effect of getting such fast internet soon became apparent on my NAS. After adding just a few torrents to Transmission, the speed steadily rose until it was approaching 10MB/s, at which point all interfaces to the device practically stopped responding. I surmised that the cause was likely the limited 512MB of RAM inside the poor thing, and so went in search of a replacement.

As I looked around, it quickly became clear that most of the available devices on the market were out of my price bracket. While I would have liked to get myself something like a Synology, it just wasn’t going to happen. A friend of mine pointed me in the direction of an HP Gen 8 Proliant Microserver (http://compadvance.co.uk/en/item/323618/HP-Proliant-MicroServer-Gen8-G1610T-4GB) which he has, and after some extra checking around, I decided it looked good. I also decided to up the 4GB of RAM to 16GB, and add an SSD for the OS in the ODD bay.

Now the fun really started. Naturally I didn’t want to put Windows on this thing (although this is what my friend has done); I obviously wanted to run Linux. And as it’s intended to sit quietly, (apparently) minding its own business for the most part, I wanted to keep as much RAM as possible free by not running a GUI. Ubuntu Server seemed like the perfect choice; except I have practically no experience setting up servers, working on the command line (basic ls, cp and rm don’t really count), or using Ubuntu or Linux for anything at all (I’ve tinkered, but that’s about it).

Well, no time like the present to learn.

Next Up:

Learning and testing on VMs

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

KWLUG: Let’s Encrypt, Cryptocurrencies (2017-02)

February 11th, 2017

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topics of Let’s Encrypt and Cryptocurrencies, published on February 7th, 2017. You can find the original Kitchener Waterloo Linux Users Group post here.


Categories: Linux, Podcast, Tyler B

Setup your own VPN with OpenVPN

February 6th, 2017

Using the excellent DigitalOcean tutorial as my base, I decided to set up an OpenVPN server on a Linux Mint 18 computer running on my home network so that I can have an extra layer of protection when connecting to those less than reputable WiFi hotspots at airports and hotels.

While this post is not meant to be an in-depth guide (you should use the original for that), it is meant to allow me to look back at this at some point in the future and easily re-create my setup.

1. Install everything you need

sudo apt-get update
sudo apt-get install openvpn easy-rsa

2. Setup Certificate Authority (CA)

make-cadir ~/openvpn-ca
cd ~/openvpn-ca
nano vars

3. Update CA vars

Set these to something that makes sense:

export KEY_COUNTRY="US"
export KEY_PROVINCE="CA"
export KEY_CITY="SanFrancisco"
export KEY_ORG="Fort-Funston"
export KEY_EMAIL="me@myhost.mydomain"
export KEY_OU="MyOrganizationalUnit"

Set the KEY_NAME to something that makes sense:

export KEY_NAME="server"

4. Build the CA

source vars
./clean-all
./build-ca

5. Build server certificate and key

./build-key-server server
./build-dh
openvpn --genkey --secret keys/ta.key

6. Generate client certificate

source vars
./build-key-pass clientname

7. Configure OpenVPN

cd ~/openvpn-ca/keys
sudo cp ca.crt ca.key server.crt server.key ta.key dh2048.pem /etc/openvpn
gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz | sudo tee /etc/openvpn/server.conf

Edit config file:

sudo nano /etc/openvpn/server.conf

Uncomment the following:

tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
push "redirect-gateway def1 bypass-dhcp"
push "route 192.168.10.0 255.255.255.0"
push "route 192.168.20.0 255.255.255.0"

Add the following:

key-direction 0
auth SHA256

Edit config file:

sudo nano /etc/sysctl.conf

Uncomment the following:

net.ipv4.ip_forward=1

Run:

sudo sysctl -p

8. Setup UFW rules

Run:

ip route | grep default

To find the name of the network adaptor. For example:

default via 192.168.x.x dev enp3s0  src 192.168.x.x  metric 202

Edit config file:

sudo nano /etc/ufw/before.rules

Add the following, replacing your network adaptor name, above the bit that says # Don’t delete these required lines…

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN clients out through the LAN interface (enp3s0 here)
-A POSTROUTING -s 10.8.0.0/8 -o enp3s0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Edit config file:

sudo nano /etc/default/ufw

Change DEFAULT_FORWARD_POLICY to ACCEPT.

DEFAULT_FORWARD_POLICY="ACCEPT"

Add port and OpenVPN to ufw, allow it and restart ufw to enable:

sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw disable
sudo ufw enable

9. Start OpenVPN Service and set it to enable at boot

sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server

10. Setup client configuration

mkdir -p ~/client-configs/files
chmod 700 ~/client-configs/files
cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf

Edit config file:

nano ~/client-configs/base.conf

Replace remote server_IP_address port with the external IP address and port you are planning on using. The IP address can also be a hostname, such as a re-director.

Add the following:

cipher AES-128-CBC
auth SHA256
key-direction 1

Uncomment the following:

user nobody
group nogroup

Comment out the following:

#ca ca.crt
#cert client.crt
#key client.key

11. Make a client configuration generation script

Create the file:

nano ~/client-configs/make_config.sh

Add the following to it:

#!/bin/bash

# First argument: Client identifier

KEY_DIR=~/openvpn-ca/keys
OUTPUT_DIR=~/client-configs/files
BASE_CONFIG=~/client-configs/base.conf

cat ${BASE_CONFIG} \
    <(echo -e '<ca>') \
    ${KEY_DIR}/ca.crt \
    <(echo -e '</ca>\n<cert>') \
    ${KEY_DIR}/${1}.crt \
    <(echo -e '</cert>\n<key>') \
    ${KEY_DIR}/${1}.key \
    <(echo -e '</key>\n<tls-auth>') \
    ${KEY_DIR}/ta.key \
    <(echo -e '</tls-auth>') \
    > ${OUTPUT_DIR}/${1}.ovpn

And mark it executable:

chmod 700 ~/client-configs/make_config.sh

12. Generate the client config file

cd ~/client-configs
./make_config.sh clientname

13. Transfer client configuration to device

You can now transfer the client configuration file found in ~/client-configs/files to your device.
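On a Linux client, a quick way to test the profile (assuming the OpenVPN client package is installed, and using the hypothetical client name from above) is:

sudo openvpn --config clientname.ovpn

On Android or iOS the same .ovpn file can be imported into the official OpenVPN Connect app.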


This post originally appeared on my personal website here.

Categories: Open Source Software, Tyler B

Blast from the Past: Getting KDE on openSUSE is like playing Jenga

February 2nd, 2017

This post was originally published on October 16, 2009. The original can be found here.


As part of our experiment, everyone is required to try a different desktop manager for two weeks. I chose KDE, since I’ve been using GNOME since I installed openSUSE. However, I’ve found that while trying to get a desktop manager set up one wrong move can cause everything to fall apart.

Switching from GNOME:

This was fairly simple. I started up YaST Software Management, changed my filter from “Search” to “Patterns”, and found the Graphical Environments section. Here I right-clicked “KDE Base System” and selected Install. Clicking Accept installed the kdebase and kdm packages, along with a slew of other default KDE programs. Once this was done, I logged out of my GNOME session, and selected KDE4 as my new login session. My system was slightly confused and booted into GNOME again, so I restarted. This time, I was met with KDE 4.1.

My Thoughts on KDE 4.1:

As much as I had hated the Qt look [which I erroneously call the ‘QuickTime’ look, due to its uncanny similarity to the QuickTime app], the desktop was beautiful. The default panel was a very slick, glossy black, which looked quite nice. The “lines” in each window title made the windowing system very ugly, so I set out to turn them off. It’s a fairly easy process:

KDE Application Launcher > Configure Desktop > Appearance > Windows > Uncheck the “Show stripes next to the title” box.

Once completed, my windows were simple and effective, and slightly less chunky than the default GNOME theme, so I was content.

Getting rid of the openSUSE Branding:

openSUSE usually draws much ire from me – so it’s not hard to imagine that I’d prefer not to have openSUSE branding on every god damn application I run, least of all my desktop manager. From YaST Software Management I searched for openSUSE and uninstalled every package that had the words “openSUSE” and “branding”. YaST automatically replaces these packages with alternate “upstream” packages, which seem to be the non-openSUSE themes/appearances. Once these were gone, things looked a lot less gray-and-green, and I was happy.

Oh god what happened to my login screen:

A side effect of removing all those openSUSE packages was that my login screen took a trip back in time, to the Windows 3.1 era. It was a white window on a blue background with a Times New Roman-esque font. After a bit of researching on the GOOG, I found out that this was KDE3 stepping up to take over for my openSUSE branding. Uninstalling the kde3base package (or whatever the shit it’s called) forced KDE4 to take over, and everything was peachy again.

Installing my Broadcom Wireless Driver

In order to install my driver, I followed this guide TO THE LETTER. Not following this guide actually gave YaST a heart attack and created code conflicts.

KMix Being Weird

KMix magically made my media buttons on my laptop work, however it occasionally decided to change what “audio device” the default slider was controlling. Still, having the media buttons working was a HUGE plus.

Getting Compositing to Work

I did not have a good experience with this. In fact, by fucking around with settings, I ended up bricking my openSUSE install entirely. So alas, I ended up completely re-installing openSUSE. Regardless, to install ATI drivers, following the guide here using the one-click install method worked perfectly. After finally getting my drivers, turning on compositing was simple:

KDE Application Launcher > Configure Desktop > Appearance > Desktop > Check the “Enable Desktop Effects” box.

From KDE4.1 to KDE4.3

While KDE was really working for me, the notifications system was seriously annoying. Every time my system had an update, or I received a message in Kopete, an ugly, plain, slightly off-centre gray box would appear at the top of my screen to inform me. Tyler informed me that this was caused by the fact that I wasn’t running the most recent version of KDE4. A quick check showed me that openSUSE isn’t going to use KDE 4.3 until openSUSE 11.2 launches; however, you can manually add the KDE 4.3 repositories to YaST, as shown on the openSUSE KDE Repository page.

After adding these repositories, I learned a painful lesson in upgrading your display manager. Do not, under any circumstances, attempt a display manager upgrade/switch until you have an hour to spare and enough battery life to last the whole time. I did not, and even though I cancelled the install about 60 seconds in, I found that YaST had already uninstalled my display manager. Upon restart, I was met with a terminal.

From the terminal, I used the command line version of YaST to completely remove kdebase4 and kdm from my system. After that, re-installing the KDE 4.3 version of kdm from YaST in the terminal installed all the other required applications. However, there are a shitload of dependency issues you’ve got to sort through, and unfortunately the required action is not the same for each application.

KDE4.3

KDE 4.3 is absolutely gorgeous; I’ve had no complaints with it. KMix seems to have reassigned itself again, but this time it assigned itself correctly. Removing the openSUSE branding was the same as before, but by default the desktop theme used is Air. I prefer the darker look of Oxygen, so I headed over to my desktop to fix it by following these steps:

Desktop > Right Click > Plain Desktop Settings > Change the Desktop Theme from Air to Oxygen.

Concluding Thoughts

Now that all these things are sorted out, I’m surprisingly impressed with KDE, and I might even keep it at the end of this test period for our podcast.

Let me know if you’ve ever had to change desktop managers and your woes in the comments!


Blast from the Past: The Search Begins

January 26th, 2017

This post was originally published on July 29, 2009. The original can be found here.


100% fat free

Picking a flavour of Linux is like picking what you want to eat for dinner; sure, some items may taste better than others, but in the end you’re still full. At least I hope so; the satisfied part still remains to be seen.

Where to begin?

A quick search of Wikipedia reveals that the sheer number of Linux distributions, and thus choices, can be very overwhelming. Thankfully, because of my past experience with Ubuntu, I can at least remove it and its immediate variants, Kubuntu and Xubuntu, from the list of potential candidates. That should only leave me with… well, that hardly narrowed it down at all!

Seriously… the number of possible choices is a bit ridiculous

Learning from others’ experience

My next thought was to use the Internet for what it was designed to do: letting other people do your work for you! To start Wikipedia has a list of popular distributions. I figured if these distributions have somehow managed to make a name for themselves, among all of the possibilities, there must be a reason for that. Removing the direct Ubuntu variants, the site lists these as Arch Linux, CentOS, Debian, Fedora, Gentoo, gOS, Knoppix, Linux Mint, Mandriva, MontaVista Linux, OpenGEU, openSUSE, Oracle Enterprise Linux, Pardus, PCLinuxOS, Red Hat Enterprise Linux, Sabayon Linux, Slackware and, finally, Slax.

Doing both a Google and a Bing search for “linux distributions”, I found a number of additional websites that seem as though they might prove to be very useful. All of these websites aim to provide information about the various distributions or help point you in the direction of the one that’s right for you.

Only the start

Things are just getting started. There is plenty more research to do as I compare and narrow down the distributions until I finally arrive at the one that I will install come September 1st. Hopefully I can wrap my head around things by then.

Ripping DVDs on Ubuntu 14.04

January 24th, 2017

Remember DVDs? For those of you too young to have had to deal with the hassle of physical media, DVDs were how us old folks got all of our movies and TV seasons before Netflix existed. These days, I’ve got boxes of the things gathering dust in closets. I hadn’t thought about them since the last time I moved until last night, when my wife asked if I could make her Yoga DVDs available on our home Plex server.

I mean… yes? Sure, why not? Can’t be too hard right? Now all I need is a computer with a DVD drive…

After realizing that one of our laptops still has the appropriate hole in the side of it, I slid one of her disks into the slot, and listened while the machine made all sorts of noises and did… nothing at all.

At first, I thought maybe the drive was broken. So I dug through a drawer to find an old CD (another ancient fossil of a format, kids), and confirmed that the drive did, in fact, work. Physical capability confirmed, I figured that I might be running up against some kind of format issue, and did some Googling.

My cursory research turned up a helpful  page in the Ubuntu Documentation that provides instructions for installing the libdvdread4 package, which includes a set of libraries that allow Ubuntu machines to read DVDs. For Ubuntu 14.04, the instructions look something like this:

~$ sudo apt-get install libdvdread4
~$ sudo /usr/share/doc/libdvdread4/install-css.sh

After this, I had to restart my computer. I’m not sure why, but assumed that it had something to do with the fact that hardware is involved. Once it came back from its brief nap, it happily mounted the DVD that I had left in the drive.

The next step was to install Handbrake from the Ubuntu Software Centre. This is a handy little utility with a tropical-themed logo that can convert damned near any video format to nearly any other. I’ve used it in the past to shrinkify video for playback on my iPhone with great success.

If you open up Handbrake and use the Source button to choose your DVD, it will scan the disc, find the titles available, and show ’em in a dropdown box. Simply select the one that you want to rip, give it a reasonable file name, choose where to put the file on your machine, select High Profile from the presets box on the right hand side of the window, and press the big Start button up at the top.
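If you’d rather script the process than click through the GUI, HandBrake also ships a command-line version; the package name, device path and title number below are assumptions, so adjust them for your system:

sudo apt-get install handbrake-cli
HandBrakeCLI -i /dev/sr0 -t 1 --preset="High Profile" -o yoga-disc1.mp4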

If yoga were this easy, I’d be exercising instead of writing this article

Pleased with my progress, I returned to my wife, told her that I had made her yoga DVDs available, and asked when she was going to start sporting abs as tight as Jillian Michaels’. She was not impressed. You can’t win them all.

Categories: Jon F, Ubuntu

Blast from the Past: Installing Gnome Do with Docky on openSUSE

January 19th, 2017

This post was originally published on September 28, 2009. The original can be found here.


Before I switched to Windows 7 for my laptop, I used a dock application called RocketDock to manage my windows and commonly used desktop shortcuts. I liked being able to see my whole desktop ever since I found a good wallpaper site. Back when I rolled Ubuntu, I installed an application called Gnome Do. It’s a Quicksilver-like launcher that just works. However, the newest feature of Gnome Do that I loved was its Docky theme. It puts a dock similar to RocketDock at the bottom of your screen, and integrates Gnome Do’s OS searching features right into the dock.

I decided to install the application from YaST, the default system administration tool. It indexes a fairly large number of repositories, and it did have Gnome Do. A few minutes later I had the app running, but unfortunately the version was way out of date. Gnome Do is on roughly version 0.8.x, and YaST gave me 0.4.x.

So off I went trying to find a .rpm for Gnome Do that would install. I was met with a lot of failure, with a ton of dependencies unable to be resolved and so on. Next I tried the openSUSE file from Gnome Do’s homepage, but for some reason the servers were down and I was unable to install that way either.

Frustrated and not knowing what to do next, I decided to hop on IRC and see if anyone in #SUSE on irc.freenode.net could help me out. They told me about a service called Webpin. There I found a .ymp [which is an openSUSE-specific installer file like a .deb or .rpm] for Gnome Do, and a .ymp for Gnome Do’s plugins. Downloading and opening the files installed the programs without any problems. The last step I had to take to enable Docky was to install compiz and enable desktop compositing. After that, a quick trip to Gnome Do’s preferences dialog allowed me to use the Docky theme, and I was up and running!

Blast from the Past: The Distributions of Debian

January 12th, 2017

This post was originally published on August 21, 2009. The original can be found here.


Like many of the other varieties of Linux, Debian gives the end user a number of different installation choices. In addition to the choice of installer that Tyler B has already mentioned, the Debian community maintains three different distributions, which means that even though I’ve picked a distribution, I still haven’t picked a distribution! In the case of Debian, these distributions are as follows:

  1. Stable: Last updated on July 27th, 2009, this was the last major Debian release, codenamed “Lenny.” This is the currently supported version of Debian, and receives security patches from the community as they are developed, but no new features. The upside of this feature freeze is that the code is stable and almost bug free, with the downside that the software it contains is somewhat dated.
  2. Testing: Codenamed “Squeeze,” this distribution contains code that is destined for the next major release of Debian. Code is kept in the Testing distribution as long as it doesn’t contain any major bugs that might prevent a proper release (this system is explained here). The upside of running this distribution is that your system always has the newest (and mostly bug-free) code available. The downside is that if a major bug is found, the fix for that bug may be obliged to spend a good deal of time in the Unstable distribution before it is considered stable enough to move over to Testing. As a result, your computer could be left with broken code for weeks on end. Further, this distribution doesn’t get security patches as fast as Stable, which poses a potential danger to the inexperienced user.
  3. Unstable: Nicknamed Sid after the psychotic next door neighbour in Toy Story who destroys toys as a hobby, this is where all of Debian’s newest and potentially buggy code lives. According to what I’ve read, Sid is like a developer’s build – new users who don’t know their way around the system don’t generally use this distribution because the build could break at any time, and there is absolutely no security support.

I’m currently leaning towards running the Testing distribution, mostly because I like new shiny toys, and (I think) want the challenge of becoming a part of the Debian community. Since we’ve been getting a lot of support from the various development communities lately, perhaps some of our readers could fill me in on any information that I might have missed, and set me straight on which distribution I should run.

KWLUG: Vagrant (2017-01)

January 11th, 2017

This is a podcast presentation from the Kitchener Waterloo Linux Users Group on the topic of Vagrant, published on January 9th, 2017. You can find the original Kitchener Waterloo Linux Users Group post here.


Categories: Linux, Podcast, Tyler B

Shove ads in your pi-hole!

January 8th, 2017

There are loads of neat little projects out there for your Raspberry Pi from random little hacks all the way up to full scale home automation and more. In the past I’ve written about RetroPie (which is an awesome project that you should definitely check out!) but this time I’m going to take a moment to mention another really cool project: pi-hole.

Pi-hole, as their website says, is “a black hole for Internet advertisements.” Essentially it’s software that you install on your Raspberry Pi (or other Linux computer) that then acts as a local DNS proxy. Once it is set up and running you can point your devices to it individually or just tell your router to use it instead (which then applies to everything on the network).
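Installation is about as simple as these things get; the project’s own one-line installer is below (only pipe a script into bash if you’re comfortable doing so; you can always download and read it first):

curl -sSL https://install.pi-hole.net | bash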

Then, as you’re browsing the internet and come across a webpage that is trying to serve you ads, pi-hole will simply block the ad’s DNS request from ever resolving and instead return a blank image or web page, meaning that the site simply can’t download the ad to show you. Voila! A universal ad blocker for your entire network and all of your devices! Even better, because you’re blocking the ads from being downloaded in the first place, your browsing speeds can sometimes improve as well.

Pi-hole dashboard

You can monitor or control which domains are blocked all from a really nice dashboard interface and see the queries come into pi-hole almost in real time.

After running pi-hole for a week now I’m quite surprised by how effective it has been at removing ads. It’s legitimately pleasant being able to browse the web without seeing ads everywhere or having ad blockers break certain websites. If that sounds like something you might be interested in, then pi-hole might be worth taking a look at.


This post originally appeared on my personal website here.