Archive

Posts Tagged ‘fdisk’

Mount entire drive dd image

May 21st, 2012

It is a pretty common practice to use the command dd to make backup images of drives and partitions. It’s as simple as the command:

dd if=[input] of=[output]
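For a whole-drive backup like the one described below, the filled-in command might look something like this (the device name and output path are only examples, and the bs=4M block size is optional but usually speeds things up):

sudo dd if=/dev/sda of=/path/to/imagefile.img bs=4M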

A while back I did just that and made a dd backup of not just a partition but of an entire hard drive. This was very simple (I just used if=/dev/sda instead of something like if=/dev/sda2). The problem came when I tried to mount this image. With a partition image you can just use the mount command like normal, i.e. something like this:

sudo mount -o loop -t [filesystem] [path to image file] [path to mount point]
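For instance, with an ext4 partition image and an existing mount point (both names here are just placeholders), the filled-in command might be:

sudo mount -o loop -t ext4 partition.img /mnt/point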

Unfortunately this doesn’t make any sense when mounting an image of an entire hard drive. What if the drive had multiple partitions? What exactly would it be mounting to the mount point? After some searching I found a series of forum posts that dealt with just this scenario. Here are the steps required to mount your whole drive image:

1) Use the fdisk command to list the drive image’s partition table:

fdisk -ul [path to image file]

This should print out a lot of useful information. For example you’ll get something like this:

foo@bar:~$ fdisk -ul imagefile.img
You must set cylinders.
You can do this from the extra functions menu.

Disk imagefile.img: 0 MB, 0 bytes
32 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x07443446

        Device Boot      Start         End      Blocks   Id  System
imagefile.img1   *          63      499967      249952+  83  Linux
imagefile.img2          499968      997919      248976   83  Linux

2) From that output, note the sector size (512 bytes in the example above) and the start sector of the partition you want to mount (63 for the first partition in the example above).

3) Use a slightly modified version of the mount command (with an offset) to mount your partition.

sudo mount -o loop,offset=[offset value] [path to image file] [path to mount point]

Using the example above I would set my offset value to be sector size * start sector, so 512*63 = 32256. The command would look something like this:

sudo mount -o loop,offset=32256 imagefile.img /mnt/point
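If you would rather not do the multiplication by hand, the shell can do it for you. Here is a small sketch using the sector size and start sector from the example above; adjust both values for your own image:

SECTOR_SIZE=512
START_SECTOR=63
sudo mount -o loop,offset=$((SECTOR_SIZE * START_SECTOR)) imagefile.img /mnt/point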

That’s it. You should now have that partition from the dd backup image mounted to the mount point.
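When you are finished with the image, unmount it like any other filesystem; on recent versions of mount the loop device created by -o loop is released automatically:

sudo umount /mnt/point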




Categories: Linux, Tyler B

Setting up an LVM for Storage

December 30th, 2009

Recently, I installed Kubuntu on my PC. Under Windows, I had used a RAID1 array to create a storage volume out of two extra 500GB hard drives that I have in my system. Under Linux, I’ve decided to try creating a 1TB LVM out of the drives instead. This should be visible as a single drive, and allow me to store non-essential media files and home partition backups on a separate physical drive, making it easier to recover from catastrophic failures. The only problem with this plan: documentation detailing the process of creating an LVM is sparse at best.

The Drive Situation
My machine contains the following drives, which are visible in the /dev directory:

  • sdc: root drive containing three partitions (1, 2, and 5), which are my boot, root, and swap partitions respectively
  • sda: 500GB SATA candidate drive that I’d like to add to the LVM
  • sdb: 500GB SATA candidate drive that I’d like to add to the LVM
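
If you are not sure which device name belongs to which physical drive, listing every partition table first is a quick sanity check:

sudo fdisk -l

The output includes each disk’s size and existing partitions, which makes it easy to tell the two 500GB candidate drives apart from the root drive.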

First Try
Coming from a Windows background, I began by searching out a graphical tool for the job. I found one in my repositories called system-config-lvm 1.1.4.

The graphical tool that I found to create LVMs

I followed the buttons in this tool and created a 1TB LVM spanning sda and sdb, then formatted it with ext3. The result of these steps was an uninitialised LVM that refused to mount at boot. In response, I wrote the following script to activate, mount, and assign permissions to the drive at boot:

#!/bin/bash
# Run a harmless sudo command first so credentials are cached for the commands below
sudo whoami
# Activate the logical volume, mount it, and open up its permissions
sudo lvchange -a y /dev/Storage/Storage
sudo mount /dev/Storage/Storage /home/jon/Storage
sudo chown jon /home/jon/Storage
sudo chmod 777 /home/jon/Storage

It worked about 50% of the time. Frustrated, I headed over to the #kubuntu IRC channel to find a better solution.

Second Try
On the #kubuntu channel, I got help from a fellow who walked me through the correct creation process from the command line. The steps are as follows (a condensed script version of the whole procedure appears after the list):

  1. Create identical partitions on sda and sdb:
    1. sudo fdisk /dev/sda
    2. n to create a new partition on the disk
    3. p to make this the primary partition
    4. 1 to give the partition the number 1 as an identifier. It will then appear as sda1 under /dev
    5. Assign first and last cylinders – I simply used the default values for these options, as I want the partition to span the entire drive
    6. t to change the partition’s type
    7. 8e is the hex code for a Linux LVM
    8. w to write your changes to the disk. This will (obviously) overwrite any data on the disk
    9. Repeat steps 1 through 8 for /dev/sdb
    10. Both disks now have partition tables that span their entirety, but neither has been formatted (that step comes later).
  2. Make the partitions available to the LVM:
    1. sudo pvcreate /dev/sda1
    2. sudo pvcreate /dev/sdb1
    3. Notice that the two previous steps addressed the partitions sda1 and sdb1 that we created earlier
  3. Create the Volume Group that will contain our disks:
    1. sudo vgcreate storage /dev/sda1 /dev/sdb1 will create the volume group that spans the two partitions sda1 and sdb1
    2. sudo vgdisplay /dev/storage queries the newly created volume group. In particular, we want the VG Size property. In my case, it is 931.52 GB
  4. Create a Logical Volume from the Volume Group:
    1. sudo lvcreate -L $size -n $name $path, where $size is the VG Size value from above (with a G suffix for gigabytes or an M for megabytes), $name is the name you’d like to give the new Logical Volume, and $path is the path to the Volume Group that we made in the previous step. My finished command looked like sudo lvcreate -L 931G -n storage /dev/storage
    2. sudo lvdisplay /dev/storage queries our new Logical Volume. Taking a look at the LV Size property shows that ‘storage’ is a 931GB volume.
  5. Put a file system on the Logical Volume ‘storage’:
    1. sudo mkfs.ext4 -L $name -j /dev/storage/storage will put an ext4 file system onto the Logical Volume ‘storage’ with the label $name. I used the label ‘storage’ for mine, just to keep things simple, but you can use whatever you like. Note that this process takes a minute or two, as it has to write all of the inode tables for the new file system. You can use mkfs.ext2 or mkfs.ext3 instead of this command if you want to use a different file system.
  6. Add an fstab entry for ‘storage’ so that it gets mounted on boot:
    1. sudo nano /etc/fstab to open the fstab file in nano with root permissions
    2. Add the line /dev/storage/storage    /home/jon/Storage       ext4    defaults        0       0 at the end of the file, where all of the spaces are tabs. This will cause the system to mount the Logical Volume ‘storage’ to the folder /home/jon/Storage on boot. Check out the wikipedia article on fstab for more information about specific mounting options.
    3. ctrl+x to exit nano
    4. y to write changes to disk
  7. Change the owner of ‘storage’ so that you have read/write access to the LVM
    1. sudo chown -R jon:jon /home/jon/Storage will give ownership of the disk mounted at /home/jon/Storage to the user ‘jon’
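
For reference, here is the condensed script version of the whole procedure promised above. Treat it as a sketch rather than a drop-in script: it swaps the interactive fdisk session for sfdisk, sizes the Logical Volume with -l 100%FREE instead of a hard-coded -L value, and assumes the same device names, the name ‘storage’ for both the Volume Group and the Logical Volume, and the mount point /home/jon/Storage used above. It will destroy anything on /dev/sda and /dev/sdb, so double-check the device names before running any of it.

#!/bin/bash
# Condensed sketch of the steps above. Adjust device names, the 'storage'
# names, and the mount point first; this wipes /dev/sda and /dev/sdb.
set -e

# 1. Create one partition of type 8e (Linux LVM) spanning each drive.
#    sfdisk keeps this step non-interactive; ',,8e' means default start,
#    use all remaining space, partition type 8e.
echo ',,8e' | sudo sfdisk /dev/sda
echo ',,8e' | sudo sfdisk /dev/sdb

# 2. Register the new partitions as LVM physical volumes.
sudo pvcreate /dev/sda1 /dev/sdb1

# 3. Create the Volume Group 'storage' spanning both physical volumes.
sudo vgcreate storage /dev/sda1 /dev/sdb1

# 4. Create a Logical Volume that uses all free space in the group
#    (instead of hard-coding -L 931G as in the walkthrough).
sudo lvcreate -l 100%FREE -n storage storage

# 5. Put an ext4 file system, labelled 'storage', on the Logical Volume.
sudo mkfs.ext4 -L storage /dev/storage/storage

# 6. Add the fstab entry so it mounts at boot, then mount it now.
echo '/dev/storage/storage /home/jon/Storage ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo mkdir -p /home/jon/Storage
sudo mount /home/jon/Storage

# 7. Hand ownership of the mounted volume to the regular user.
sudo chown -R jon:jon /home/jon/Storage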

Time for a Beer
Whew, that was a lot of work! If all went well, we have managed to create a Logical Volume called storage that spans both sda and sdb, and is formatted with the ext4 file system. This volume will be mounted at boot to the folder Storage in my home directory, allowing me to dump non-essential media files like my music collection and system backups to a large disk that is physically separate from my system partitions.

The final step is to reboot the system, navigate to /home/jon/Storage (or wherever you set the mount point for the LVM in step 6), right-click, and hit properties. At the bottom of the properties dialog, beside ‘Device Usage,’ I can see that the folder in question has 869GB free of a total size of 916GB, which means that the system correctly mounted the LVM on boot. Congratulations to me!
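If you prefer the terminal to the properties dialog, df reports the same size and free-space numbers for the mounted volume:

df -h /home/jon/Storage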

Much thanks to the user ikonia on the #kubuntu IRC channel for all the help.

This piece has been mirrored at Index out of Bounds



