Posts Tagged ‘fstab’

How to mount a Windows share on startup

April 28th, 2014

I recently invested in a NAS device to add a little bit of redundancy to my personal files. With this particular NAS the most convenient way to access the files it stores is via the Windows share protocol (also known as SMB or CIFS). Linux has supported these protocols for a while now, which is great, but I wanted it to automatically map the shared directory on the NAS to a directory on my Linux computer on startup. Thankfully there is a very easy way to do just that.

1) First install cifs-utils

sudo apt-get install cifs-utils

2) Next edit the fstab file and add the share(s)

To do this you’ll need to add a new line to the end of the file. You can easily open the file using nano in the terminal by running the command:

sudo nano /etc/fstab

Then use the arrow keys to scroll all the way to the bottom and add the share in the following format:

//<path to server>/<share name>     <path to local directory>     cifs     guest,uid=<user id to mount files as>,iocharset=utf8     0     0

Breaking it down a little bit:

  • <path to server>: This is the network name or IP address of the computer hosting the share (in my case the NAS). For example it could be an address like "192.168.1.1" or a name like "MyNas"
  • <share name>: This is the name of the share on that computer. For example I set up my NAS to share several directories, one of which was called "Files"
  • <path to local directory>: This is where you want the remote files to appear locally. For example if you want them to appear in a folder under /media you could do something like "/media/NAS". Just make sure that the directory exists (create it if you need to – see the note just after this list).
  • <user id to mount files as>: This defines the permissions to give the files. On Ubuntu the first user you create is usually given uid 1000, so you could put "1000" here. To find out the uid of any user, run the command "id <user>" without quotes.
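For example, to create the local mount point before adding the fstab entry (using the /media/NAS directory from my example), you could run:

sudo mkdir -p /media/NAS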

So for example my added line in fstab was:

//192.168.3.25/Files     /media/NAS     cifs     guest,uid=1000,iocharset=utf8     0     0

Then save the file by pressing Ctrl+O followed by Enter in nano.

3) Mount the remote share

Run this command to test the share:

sudo mount -a

If that works you should see the files appear in your local directory path. When you restart the computer it will also attempt to connect to the share and place the files in that location as well. Keep in mind that anything you do to the files there also changes them on the share!
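To double-check that the share actually mounted as a CIFS filesystem (again assuming the /media/NAS mount point from my example), you can run:

mount | grep cifs
df -h /media/NAS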





Extend the life of your SSD on linux

February 9th, 2014

This past year I purchased a laptop that came with two drives: a small 24GB SSD and a larger 1TB HDD. My configuration places the root filesystem (i.e. /) on the SSD and my home directory (i.e. /home) on the HDD so that I benefit from very fast system booting and application loading but still have loads of space for my personal files. The only downside to this configuration is that Linux is sometimes not the best at ensuring your SSD lives a long life.

Unlike HDDs, SSDs have a finite number of write operations before they wear out and fail (although you could argue HDDs aren’t all that great either…). Quite a few Linux distributions have not yet been updated to detect and configure SSDs in a way that extends their life. Luckily for us it isn’t all that difficult to make the changes ourselves.

Change #1 – noatime

The first change I make is to configure the system so that it no longer updates each file’s access time on the SSD partition. By default Linux records when each file was created and last modified, as well as when it was last accessed. Recording the last access time carries a cost, and disabling it can not only significantly reduce the number of writes to the drive but also give you a slight performance improvement. Note that if you care about access times (for example if you like to perform filesystem audits or something like that) then disabling this may not be an option for you.

Open /etc/fstab as root. For example I used nano so I ran:

sudo nano /etc/fstab

Find the SSD partition(s) (remember mine is just the root, /, partition) and add noatime to the mounting options:

UUID=<some hex string> /               ext4    noatime,errors=remount-ro

Change #2 – discard

UPDATE: Starting with Ubuntu 14.04 you no longer need to add discard to your fstab file. TRIM is now handled automatically for you through a different system mechanism.

TRIM is a technology that allows a filesystem to immediately notify the SSD when a file is deleted so that it can more efficiently manage the underlying storage and improve the lifespan of the drive. Not all filesystems support TRIM, but if you are like most people and use ext4 then you can safely enable this feature. Note that some people have actually seen drastic write performance decreases after enabling this option, but personally I’d rather have that than a dead drive.

To enable TRIM support start by again opening /etc/fstab as root and find the SSD partition(s). This time add discard to the mounting options:

UUID=<some hex string> /               ext4    noatime,errors=remount-ro,discard
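Whether TRIM is handled by the discard option or by the automatic mechanism mentioned in the update above, you can verify that it actually works by triggering a manual pass (assuming your SSD holds the root partition, as mine does):

sudo fstrim -v /

The -v flag prints how many bytes were trimmed; the command will fail with an error if the filesystem or drive does not support TRIM.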

Change #3 – tmpfs

If you have enough RAM you can also dedicate some of it to mounting specific locations via tmpfs. Tmpfs essentially makes a fake hard drive, known as a RAM disk, that exists only in your computer’s RAM while it is running. You could use this for commonly written-to temporary filesystems like /tmp or log file locations such as /var/log.

This has a number of consequences. For one, anything that gets written to tmpfs will be gone the second you restart or turn the computer off – it never gets written back to a real hard drive. This means that while you can save your SSD all of those log file writes, you also won’t be able to debug a problem using those log files after a crash or the like. Also, being a RAM disk means that it will slowly(?) eat up your RAM, growing larger and larger the more you write to it between restarts. There are options for putting limits on how large a tmpfs partition can grow; one of them appears in the example below.

To set this up open /etc/fstab as root. This time add new tmpfs lines using the following format:

tmpfs   /tmp    tmpfs   defaults  0       0

You can lock it down even more by adding some additional options like noexec (disallows execution of binaries on the filesystem) and nosuid (blocks the operation of the suid and sgid bits). Some other locations you may consider adding are /var/log, /var/cache/apt, etc. Please read up on each of these before applying them as YMMV.
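As a rough sketch, a locked-down /tmp entry with a size cap might look like this (the 1G value is just an illustration – choose a limit that suits how much RAM you have):

tmpfs   /tmp    tmpfs   defaults,noexec,nosuid,size=1G  0       0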


Setting up an LVM for Storage

December 30th, 2009

Recently, I installed Kubuntu on my PC. Under Windows, I had used a RAID 1 array to create a storage volume out of the two extra 500GB hard drives in my system. Under Linux, I’ve decided to try creating a 1TB LVM out of the drives instead. This should be visible as a single drive, and allow me to store non-essential media files and home partition backups on a separate physical drive, the better to recover from catastrophic failures with. The only problem with this plan: documentation detailing the process of creating an LVM is sparse at best.

The Drive Situation
My machine contains the following drives, which are visible in the /dev directory:

  • sdc: root drive that contains three partitions: 1, 2, and 5, which are my boot, root, and swap partitions respectively
  • sda: 500GB SATA candidate drive that I’d like to add to the LVM
  • sdb: 500GB SATA candidate drive that I’d like to add to the LVM

First Try
Coming from a Windows background, I began by searching out a graphical tool for the job. I found one in my repositories called system-config-lvm 1.1.4.

[Screenshot: the graphical tool that I found to create LVMs]

I followed the buttons in this tool and created a 1TB LVM spanning sda and sdb, then formatted it with ext3. The result of these steps was an uninitialised LVM that refused to mount at boot. In response, I wrote the following script to activate, mount, and assign permissions to the drive at boot:

#!/bin/bash
# Prime sudo's credential cache so the rest of the script doesn't stall on a prompt
sudo whoami
# Activate the logical volume, mount it, then hand it over to my user
sudo lvchange -a y /dev/Storage/Storage
sudo mount /dev/Storage/Storage /home/jon/Storage
sudo chown jon /home/jon/Storage
sudo chmod 777 /home/jon/Storage

It worked about 50% of the time. Frustrated, I headed over to the #kubuntu IRC channel to find a better solution.

Second Try
On the #kubuntu channel, I got help from a fellow who walked me through the correct creation process from the command line. The steps are as follows:

  1. Create identical partitions on sda and sdb:
    1. sudo fdisk /dev/sda
    2. n to create a new partition on the disk
    3. p to make this the primary partition
    4. 1 to give the partition the number 1 as an identifier. It will then appear as sda1 under /dev
    5. Assign first and last cylinders – I simply used the default values for these options, as I want the partition to span the entire drive
    6. t to change the type of the partition
    7. 8e is the hex code for a Linux LVM
    8. w to write your changes to the disk. This will (obviously) overwrite any data on the disk
    9. Repeat steps 1 through 8 for /dev/sdb
    10. Both disks now have partition tables that span their entirety, but neither has been formatted (that step comes later).
  2. Make the partitions available to the LVM:
    1. sudo pvcreate /dev/sda1
    2. sudo pvcreate /dev/sdb1
    3. Notice that the two previous steps addressed the partitions sda1 and sdb1 that we created earlier
  3. Create the Volume Group that will contain our disks:
    1. sudo vgcreate storage /dev/sda1 /dev/sdb1 will create the volume group that spans the two partitions sda1 and sdb1
    2. sudo vgdisplay /dev/storage queries the newly created volume group. In particular, we want the VG Size property. In my case, it is 931.52 GB
  4. Create a Logical Volume from the Volume Group:
    1. sudo lvcreate -L $size(M or G) -n $name $path where $size is the value of the VG Size property from above (G for gigabytes, M for megabytes), $name is the name you’d like to give the new Logical Volume, and $path is the path to the Volume Group that we made in the previous step. My finished command looked like sudo lvcreate -L 931G -n storage /dev/storage
    2. sudo lvdisplay /dev/storage queries our new Logical Volume. Taking a look at the LV Size property shows that the ‘storage’ is a 931GB volume.
  5. Put a file system on the Logical Volume ‘storage’:
    1. sudo mkfs.ext4 -L $name -j /dev/storage/storage will put an ext4 file system onto the Logical Volume ‘storage’ with the label $name. I used the label ‘storage’ for mine, just to keep things simple, but you can use whatever you like. Note that this process takes a minute or two, as it has to write all of the inode tables for the new file system. You can use mkfs.ext2 or mkfs.ext3 instead of this command if you want to use a different file system.
  6. Add an fstab entry for ‘storage’ so that it gets mounted on boot:
    1. sudo nano /etc/fstab to open the fstab file in nano with root permissions
    2. Add the line /dev/storage/storage    /home/jon/Storage       ext4    defaults        0       0 at the end of the file, where all of the spaces are tabs. This will cause the system to mount the Logical Volume ‘storage’ to the folder /home/jon/Storage on boot. Check out the Wikipedia article on fstab for more information about specific mounting options.
    3. ctrl+x to exit nano
    4. y to write changes to disk
  7. Change the owner of ‘storage’ so that you have read/write access to the LVM
    1. sudo chown -R jon:jon /home/jon/Storage will give ownership of the disk mounted at /home/jon/Storage to the user ‘jon’
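Before rebooting, you can sanity-check each layer of the stack with the standard LVM reporting tools (all part of the lvm2 package):

sudo pvs    # physical volumes: should list /dev/sda1 and /dev/sdb1
sudo vgs    # volume groups: should show 'storage' at roughly 931GB
sudo lvs    # logical volumes: should show 'storage' inside the volume group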

Time for a Beer
Whew, that was a lot of work! If all went well, we have managed to create a Logical Volume called storage that spans both sda and sdb, and is formatted with the ext4 file system. This volume will be mounted at boot to the folder Storage in my home directory, allowing me to dump non-essential media files like my music collection and system backups to a large disk that is physically separate from my system partitions.

The final step is to reboot the system, navigate to /home/jon/Storage (or wherever you set the mount point for the LVM in step 6), right-click, and hit properties. At the bottom of the properties dialog, beside ‘Device Usage,’ I can see that the folder in question has 869GB free of a total size of 916GB, which means that the system correctly mounted the LVM on boot. Congratulations to me!
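If you prefer the terminal over the properties dialog, the same check can be done with df (using my mount point as the example):

df -h /home/jon/Storage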

Much thanks to the user ikonia on the #kubuntu IRC channel for all the help.

This piece has been mirrored at Index out of Bounds





TrueCrypt, kernel compilation and where’d /boot go?

October 5th, 2009

Since I’ve installed Gentoo, I haven’t had access to my other drives. One of them is an NTFS-formatted WD Raptor, and the other is a generic Seagate 300GB drive that contains my documents, pictures and Communist propaganda inside a TrueCrypt partition. Getting the Raptor to work was fairly simple (as far as Gentoo goes) – I added the entry to my /etc/fstab file and then manually mounted the partition:

/dev/sdb1               /mnt/raptor     ntfs-3g         noatime         0 0

The TrueCrypt drive proved to be more of an issue. After installing the software and attempting to mount the partition, I encountered an error:

device-mapper: reload ioctl failed: Invalid argument 
Command failed

A quick Bing and the Gentoo Wiki described this problem exactly, with the caveat that I had to recompile my kernel to add LRW and XTS support. Into the kernel configuration I went – the only difference I noticed is that LRW and XTS are considered “EXPERIMENTAL” but aren’t noted as such on the requirements page:

[Screenshot: kernel configuration options]

(This may come down to an x64 vs. x86 issue, but I haven’t run into any issues with these options enabled (yet!))
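For reference, the options in question live under the kernel’s Cryptographic API menu. In .config form they should end up looking something like this (symbol names as they appear in the 2.6-series kernels I was working with):

CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_XTS=y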

Of course, then came the make && make modules_install commands, which didn’t take too long to complete. The question then became: how do I install the new kernel? Looking in my /boot partition, I only had a few template files – not the kernel itself or any grub settings. Essentially, /boot had nothing in it but the system still boots properly!

I then tried mounting /dev/sda1 manually, and the kernel and grub.conf showed up properly in the mountpoint. Something is obviously wrong with the way my system remounts /boot during the startup process, but at least now I’m able to install the new kernel. After copying arch/x86_64/boot/bzImage from the kernel source tree to the newly available directory, I rebooted and the new kernel was picked up properly. TrueCrypt now lets me open, create and delete files in /media/truecrypt1, and automatically uses ntfs-3g support to accomplish this.
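For anyone following along, the sequence that worked for me looks roughly like this (assuming the kernel source lives in /usr/src/linux and /boot is on /dev/sda1 as on my system; the image name kernel-custom is just a placeholder):

sudo mount /dev/sda1 /boot
sudo cp /usr/src/linux/arch/x86_64/boot/bzImage /boot/kernel-custom
# then point the kernel line in /boot/grub/grub.conf at kernel-custom and reboot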

Overall, I’m pretty pleased at how easily I can recompile a kernel, and installation was seamless once I figured out that /boot wasn’t pointing to the right location. I expect I’ll try and manually remove the directory from /dev/sda3 and see if that makes a difference.





Wireless Network Manager Woes

September 16th, 2009

Debian Lenny ships with the Network Manager package, version 0.6.6-4, which for all intents and purposes is a well-written and very useful network management application. But of course, I wanted something more. At home, I have my music library (hosted on a Windows Vista machine) shared to the local network, and wanted to mount that share using Samba so that I could share my music library between my two machines while on my home network.

On a Windows machine, one can just point an application at files on a networked drive, and Windows handles all of the dirty details of letting that application use those files as if they were on the local machine. On Linux, unless the network drive has been mounted first, the application in question has to know how to handle a Windows share (usually via the Samba package) and manage that sharing on its own. Further, when mounting a network share on Linux, one can choose any folder on the hard drive to put its contents into, ensuring that it always appears in the same location and is easy to find.

Unfortunately, as far as I can divine, a networked drive can only be mounted by the root user, which seriously reduces the number of applications that can perform that mounting action. In my quest to get my home music share working, I looked into plenty of different methods for automatically mounting network drives, including startup scripts, modifying the fstab file, and manually connecting from a root terminal. None worked very well.

Eventually, I stumbled across a web post advertising the pros of the WICD network manager which, as I understand it, will be offered as an alternative to the Network Manager package in Debian Squeeze, and can currently be pulled into Lenny by adding the Debian-Lenny Backports repository to your sources list. I installed it, replacing the default network-manager-gnome package.

My first impression of WICD was extremely positive. Not only did it connect to my home network immediately, it also allowed me to define default networks to connect to (something that is conspicuously absent from the NetworkManager interface), and to set scripts that are run when my client connects to or disconnects from any of the networks in the list. This allowed me to write a simple one-line script (sketched below) that mounted my network share on connection to my home wireless network. It worked every time, and mysteriously did so without asking me for my sudo password, even though it used the sudo command internally to get rights to perform the mount.
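Such a connect script only needs a single mount command along these lines (the host, share name, and mount point here are placeholders rather than my actual values):

#!/bin/bash
sudo mount -t cifs //vista-machine/Music /home/jon/Music -o guest,iocharset=utf8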

Odd security peculiarities aside, I was happy with what I had accomplished – now I could tell my laptop to automatically connect to my home wireless network, and to mount my music share as soon as it did so! Then I went to school. Shit.

The wireless network at my University uses EAP-TTLS with PAP inner-authentication as its security protocol, something that WICD apparently had no idea how to handle. This protocol is extremely secure, as the host identifies itself to the client with a certificate that the client uses to tunnel into the host, allowing the connection to take place without any user information being passed in the clear. At least that’s how it’s supposed to work, except that our school doesn’t have a certificate or certificate authority, so… Whatever.

In any case, WICD does not include a template for this type of network (which is fair I suppose, since Windows requires an add-on to access it as well), but for the life of me, I couldn’t figure out what to do to fix the problem. I trawled the internet from a wired machine and tried editing the WICD encryption templates, while Tyler (on Fedora) and Phil (on OpenSuse) connected on their first try.

Eventually, after an hour or so of fruitless trial and error, I gave up, came home, and reinstalled the NetworkManager application, because that’s what Tyler and Phil were using on their systems, and it seemed to work fine. Sure enough, the next day I connected after just a minor tweaking of the network properties in the NetworkManager dialog.

Unfortunately, while I can now connect to my home and school networks, I once again have lost the ability to automatically connect to networks, and to execute scripts on connection, meaning that I’m back to square one with the mounted networked music share – for now, I just do the mounting manually from a root terminal. Balls.