Archive

Posts Tagged ‘samba’

Sudo apt-get install basic-linux-pt3 –Install & Setup

February 18th, 2017 No comments

It’s been a busy little while and I haven’t had time to get this written up, so let’s see what I can still remember.

Installing Ubuntu Server was as easy as you’d expect. Booting into it wasn’t. It turns out that the BIOS on this box is set up not to boot from the ODD SATA port: it’ll boot from any of the four drive bays, or from USB, but not from that extra SATA port. My friend, who already has the same box running, solved it by setting up a RAID with the ODD SATA drive as RAID number 0, which allows it to be booted from. I went for a much simpler solution after noticing that the install process lets you select the location of the GRUB loader. This server has a USB port and a MicroSD card reader inside the case, both bootable. I have plenty of spare MicroSD cards lying around (seriously, since when is 1GB or 2GB big enough for anybody?), so I just inserted one and reinstalled, specifying the MicroSD card as the location for GRUB.

It felt good to have my new box booting up and actually running. I got its static IP and OpenSSH set up, checked I could access it through PuTTY, and finally got it up on the shelf and off my desk. Everything from this point on I’ve done over PuTTY, with no monitor attached to the server. Let’s face it, it’s not like there’s any difference between one white-on-black text interface and another.

Next, I turned my attention to mounting my drives. The mount command is simple enough, but obviously I want my drives to be available right away after boot, so it was time to learn about fstab and UUIDs. Luckily this is a fairly straightforward process, especially since my drives only have a single partition each. The only tedious part was writing the long UUIDs down to copy them from the terminal output into the fstab file, since I haven’t been able to get copy & paste working in PuTTY. One thing I started to realise at this point is that while Ubuntu boots nice and quickly, the server itself doesn’t. So each time I want to see if my fiddling has worked, I pretty much have time to make a cup of tea. From looking through various guides, I simply used:

UUID=<UUID> /mnt/<mountpoint> ext4 defaults 0 2

for each of my additional drives. After a reboot, I had access to all of my files and media just as they were on my old NAS. I spent a little time clearing out the directory structures the NAS had left behind (program files and so on) to leave nice, clean access to all of my files.
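Incidentally, the UUIDs come straight from the terminal; a minimal sketch of the whole dance (the mount point path is just an example):

sudo blkid               # lists every partition with its UUID – copy these into fstab
sudo mkdir -p /mnt/media # make sure the mount point exists first
sudo mount -a            # mounts everything in fstab, so you can test without a reboot (or a cup of tea)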

NFS and Samba were just as easy to get set up as they had been on the virtual machines, although with so many different things I wanted to share, I had to add a lot of entries to each config file. Thankfully there’s no need to reboot after each edit; the services can simply be restarted to pick up the new settings (see the sketch below). Samba was simple enough to test, since I’m managing the server over SSH in PuTTY on Windows. NFS required me to test from one of the VMs once again, but after some work both seemed to be working. I’m not 100% happy with some of my setup, since I’m just allowing open access to anyone on some of these shares. Chances are I’ll be fine, but I’ll want to come back at some point to try and tighten up my user management.
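For reference, each share follows the same pattern in its config file; a hedged sketch with example paths and address ranges (the real files are /etc/exports for NFS and /etc/samba/smb.conf for Samba):

# /etc/exports – one line per NFS share
/mnt/media 192.168.0.0/24(rw,sync,no_subtree_check)

# /etc/samba/smb.conf – one block per Samba share
[media]
   path = /mnt/media
   guest ok = yes
   read only = no

# pick up changes without rebooting
sudo exportfs -ra
sudo systemctl restart smbd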

Emby server has a very good set of installation instructions. The main new part for me was adding a new repository, but this means it’ll be kept up to date whenever I perform my other apt-get upgrades. Everything else related to Emby is managed through its web GUI, so it’s straightforward stuff.
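Adding a repository follows the usual third-party apt pattern; a sketch with placeholder URLs (the real key and repository addresses come from Emby’s own instructions):

# URLs below are placeholders – substitute the ones from Emby's install guide
wget -qO - https://example.org/emby/Release.key | sudo apt-key add -
echo "deb https://example.org/emby/ /" | sudo tee /etc/apt/sources.list.d/emby-server.list
sudo apt-get update
sudo apt-get install emby-server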

In fact, I was surprised at how simple it was to get the majority of things working. FTP just kinda worked; I just needed to make symlinks from my home directory to the other places I need to access. Even Transmission wasn’t too bad to get going and allow my remote GUI to connect. One thing that started to get harder from this point was keeping track of the different ports and services I was using. I took some time to make a list of computers and services to plan my external port mapping, and got things like FTP, SSH and Transmission forwarded. Internally I’ve just used the default ports for simplicity; externally I’ve made sure they’re mapped to something completely different.
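The symlink trick is just plain ln -s; for example (paths are hypothetical):

ln -s /mnt/media ~/media   # makes the media mount appear inside the FTP-visible home directory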

Next Up:

Bash-ing things around

This post was originally published on Nathanael’s site here.

Categories: Nathanael Y, Ubuntu

Sudo apt-get install basic-linux-pt2 –Testing-&-VMs

February 17th, 2017 No comments

With the hardware sorted (bar some jiggery-pokery to get the ODD to SSD bay converter to fit properly), I set about deciding what I want this box to do.

The list I came up with looks like this:

  • Media serving to my Kodi devices (2 Raspberry Pi systems, my android tablet, and a new Ubuntu PC I’m putting together for retro gaming with my kids)
  • FTP – I like to use my NAS as my own personal cloud. My tablet can mount an FTP share in its file browser just like any other folder. No SFTP support, though, unfortunately (and I didn’t like any of the file browsers I tried that do support it).
  • Transmission (or Deluge) – the main reason for swapping the 4GB of RAM out for 16GB
  • SSH (obviously!)
  • Dropbox and Google Drive – for the various apps and things that integrate nicely with these services.
  • Backup – the WD My Cloud EX4’s backup options are very poor.
  • General file sharing with Windows and Ubuntu – Samba & NFS, naturally.
  • Hosting & tinkering with other bits I might want to try & learn about – a website (for practise, not for public viewing), a git… who knows.

Being basically unfamiliar with most of this stuff, I was undecided between Ubuntu Desktop and Server for quite a while. Desktop obviously has so much of this stuff already ready to go: it mounts things automatically, I can use the GUI as a fallback if something isn’t right, and it’s just more like what I’m accustomed to. On the other hand, having the GUI running all the time will just use up unnecessary RAM – granted, I probably don’t have a shortage of that, but still…

In the end I installed both onto VMs on my Windows machine, made copies (so I had a clean version always ready to go without having to reinstall again), and started playing.

First up, I wanted to sort out how I was going to deal with my media backend. On my current setup I use the Kodi client on one of my PCs to manage a central SQL database. While it works, it’s a bit slow and rather inefficient, so I went looking for either a headless Kodi backend, or just a way to run Kodi without the GUI. I found all sorts of ideas, builds and code, none of which I understood or felt I could implement. After a discussion with a Linux guru (one of my uni lecturers) it was clear that my plan was probably not going to work; he pointed out that he just runs his media over DLNA, and that Plex seems to be quite good too. More research, and a question in /r/Kodi later, I had been pointed in the direction of Emby, a backend for Kodi without many of the limitations of Plex and DLNA. Installation was simple enough, but accessing the web UI wasn’t. When I had set up the VMs I had just left their network settings as NAT; this, it turns out, makes accessing the network from the VM possible, but not accessing the VM from elsewhere on the network (including other VMs on the same system). I did try to just change the settings in the VM to add a bridged adapter, but it didn’t work. Not knowing enough about networking on Linux to fix this, I just went ahead and reinstalled, this time setting up the VM with two network adapters – one NAT and one bridged. This worked a treat, and after adding a few media files and installing Kodi on the Desktop VM, I was able to play videos no problem.
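If the hypervisor happens to be VirtualBox (an assumption – the post doesn’t say which one), the two-adapter setup can also be applied to an existing VM from the host instead of reinstalling; a sketch with example VM and adapter names:

# run while the VM is powered off
VBoxManage modifyvm "ubuntu-server" --nic1 nat
VBoxManage modifyvm "ubuntu-server" --nic2 bridged --bridgeadapter2 "Ethernet"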

Next, for no particular reason, was getting NFS working. I found guides, forums, blogs etc. (my Google-fu is pretty strong) and set about trying. I was sure it should be working – I’d installed nfs-kernel-server, added the entry to /etc/exports and set up the permissions – but I just couldn’t mount the share in the Desktop VM, even though I could watch the files through Kodi. I ended up having to ask Reddit’s linux4noobs sub. The answer was simple: sudo /etc/init.d/nfs-kernel-server start. Instantly, it mounted no problem. It turns out that Kodi had actually been watching a transcoded stream from Emby until I got NFS working. Thankfully Samba took less time and hassle to get working (surprisingly), and pretty soon I could access files across both Linux and Windows. And there was much rejoicing.
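For anyone hitting the same wall: the quickest sanity check is to ask the server what it’s actually exporting before trying to mount. A sketch with an example address (showmount comes with the nfs-common package on the client):

showmount -e 192.168.0.10                             # lists the server's active NFS exports
sudo mount -t nfs 192.168.0.10:/mnt/media /mnt/media  # manual test mount from the client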

At this point I was getting impatient (plus this microserver was taking up a chunk of space on my desk where I really ought to be doing uni work), so I quickly checked that I knew how to set up a static IP, and turned my attention to the real thing.
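For the record, on Ubuntu releases of that era (pre-netplan) a static IP is a short stanza in /etc/network/interfaces; a sketch with example addresses and interface name:

# /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    dns-nameservers 192.168.0.1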

Next Up:

Booting up The Box
Installing, reinstalling and shenanigans

This post was originally published on Nathanael’s site here.

How to mount a Windows share on startup

April 28th, 2014 2 comments

I recently invested in a NAS device to add a little bit of redundancy for my personal files. With this particular NAS the most convenient way to use the files it stores is via the Windows share protocol (also known as SMB or CIFS). Linux has supported this protocol for a while now, so that’s great, but I wanted it to automatically map the shared directory on the NAS to a directory on my Linux computer on startup. Thankfully there is a very easy way to do just that.

1) First install cifs-utils

sudo apt-get install cifs-utils

2) Next edit the fstab file and add the share(s)

To do this you’ll need to add a new line to the end of the file. You can easily open the file using nano in the terminal by running the command:

sudo nano /etc/fstab

Then use the arrow keys to scroll all the way to the bottom and add the share in the following format:

//<path to server>/<share name>     <path to local directory>     cifs     guest,uid=<user id to mount files as>,iocharset=utf8     0     0

Breaking it down a little bit:

  • <path to server>: This is the network name or IP address of the computer hosting the share (in my case the NAS). For example, it could be an IP address like “192.168.1.1” or a name like “MyNas”
  • <share name>: This is the name of the share on that computer. For example, I set up my NAS to share several directories, one of which was called “Files”
  • <path to local directory>: This is where you want the remote files to appear locally. For example, if you want them to appear in a folder under /media you could do something like “/media/NAS”. Just make sure that the directory exists – create it if you need to, as shown after this list.
  • <user id to mount files as>: This defines the permissions to give the files. On Ubuntu the first user you create is usually given uid 1000, so you could put “1000” here. To find out the uid of any user, use the command “id <user>” without the quotes.
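Putting those two checks together (the directory path matches the example entry below):

sudo mkdir -p /media/NAS   # create the local mount point if it doesn't already exist
id -u                      # prints your own numeric uid – usually 1000 for the first user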

So for example my added line in fstab was

//192.168.3.25/Files     /media/NAS     cifs     guest,uid=1000,iocharset=utf8     0     0

Then save the file with Ctrl+O followed by Enter (and exit nano with Ctrl+X).

3) Mount the remote share

Run this command to test the share:

sudo mount -a

If that works you should see the files appear in your local directory path. When you restart the computer it will also attempt to connect to the share and mount the files in that location. Keep in mind that anything you do to the files there also changes them on the share!

Categories: Linux, Tyler B

Fix for mount error(12): Cannot allocate memory

October 2nd, 2011 16 comments

Do you have the following situation:

  • You’ve got a share on Windows (XP, Vista, 7) that you’re trying to access from a Linux system, in this case Ubuntu.
  • Mounted through /etc/fstab or directly through the command line.
  • Initially it works great, but then the mountpoint is lost – you go to, say, /mnt/server/mountpoint and there are no directory contents, yet “mount” still shows the path as mounted.
  • umount’ing the directory and then trying to remount it provides this gem of a message:
    mount error(12): Cannot allocate memory
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Of course, since you’re probably a reasonable system administrator, you go and check the memory allotment. top looks fine and nothing else on the system is complaining.

The solution, kindly provided by Alan LaMielle’s blog, gives a registry fix on the Windows side of things. In case that link ever breaks, here is the summary of what needs to happen on the Windows system:

  • In HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, set the LargeSystemCache key to 1 (hex).
  • In HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters, set the Size key to 3 (hex).
  • Restart the “Server” service and its dependencies (on my Windows 7 box, these were “Computer Browser” and “Homegroup Listener”, and I had to restart the service twice for the dependencies to also come back up.) Alternatively you can just restart the Windows system as you’re probably due for a large set of updates anyway.
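For convenience, the same two changes expressed as a .reg file you can import (a sketch of the steps above – review it before merging):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"LargeSystemCache"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"Size"=dword:00000003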

Then re-run the mount command (for entries defined in /etc/fstab, use sudo mount -a) and your shares should be restored to their former glory.

Categories: Jake B

Why I Hate Samba

December 12th, 2010 9 comments

This file copy is running over my local wireless network:

[Screenshot of the file copy dialog: apparently Samba uses less than 1% of available network bandwidth for file copies...]

That is all.

Categories: God Damnit Linux, Jon F

Wireless Network Manager Woes

September 16th, 2009 No comments

Debian Lenny ships with the Network Manager package, version 0.6.6-4, which for all intents and purposes is a well-written and very useful network management application. But of course, I wanted something more. At home, I have my music library (hosted on a Windows Vista machine) shared to the local network, and I wanted to mount that share using Samba so that I could use my music library from both of my machines while on my home network.

On a Windows machine, one can just point an application at files on a networked drive, and Windows handles all of the dirty details of letting that application use those files as if they were on the local machine. On Linux, the application in question has to know how to handle a Windows share (usually via the Samba package) and manage that share on its own, unless the network drive has been mounted first. Further, when mounting a network share on Linux, one can choose any folder on the local drive to put its contents in, ensuring that it always appears in the same location and is easy to find.

Unfortunately, as far as I can divine, a networked drive can only be mounted by the root user, which seriously reduces the number of applications that can perform that mounting action. In my quest to get my home music share working, I looked into plenty of different methods for automatically mounting network drives, including startup scripts, modifying the fstab file, and manually connecting from a root terminal. None worked very well.

Eventually, I stumbled across a web post advertising the pros of the WICD network manager which, as I understand it, will be used as an alternative to the Network Manager package in Debian Squeeze, and can currently be pulled into Lenny by adding the Debian Lenny backports repository to your sources list. I installed it, replacing the default network-manager-gnome package, as sketched below.
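The backports dance looked roughly like this at the time (a sketch – the repository URL is the circa-2009 one and has long since been archived):

echo "deb http://www.backports.org/debian lenny-backports main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get -t lenny-backports install wicd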

My first impression of WICD was extremely positive. Not only did it connect to my home network immediately, it also allowed me to define default networks to connect to (something that is conspicuously absent from the NetworkManager interface), and to set scripts that run when my client connects to or disconnects from any of the networks in the list. This allowed me to write a simple one-line script that mounted my network share on connection to my home wireless network. It worked every time, and mysteriously did so without asking me for my sudo password, even though it used the sudo command internally to get the rights to perform the mount.
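The script itself isn’t shown here, but a one-liner of this shape would do the job (server name, share and mount point are all hypothetical):

#!/bin/sh
# mount the Vista music share on connect
sudo mount -t cifs //vista-box/Music /home/jon/music -o guest,iocharset=utf8

Incidentally, WICD runs its connect scripts as root, which would explain why sudo never needed to prompt.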

Odd security peculiarities aside, I was happy with what I had accomplished – now I could tell my laptop to automatically connect to my home wireless network, and to mount my music share as soon as it did so! Then I went to school. Shit.

The wireless network at my University uses EAP-TTLS with PAP inner-authentication as a security protocol, something that WICD apparently had no idea how to handle. This protocol is extremely secure, as the host identifies itself to the client with a certificate that the client uses to tunnel into the host, allowing connection to take place without any user information being passed in the clear. At least that’s how it’s supposed to work, except that our school doesn’t have a certificate or certificate authority, so… Whatever.

In any case, WICD does not include a template for this type of network (which is fair, I suppose, since Windows requires an add-on to access it as well), but for the life of me I couldn’t figure out how to fix the problem. I trawled the internet from a wired machine and tried editing the WICD encryption templates, while Tyler (on Fedora) and Phil (on OpenSuse) connected on their first try.

Eventually, after an hour or so of fruitless trial and error, I gave up, came home, and reinstalled the NetworkManager application, because that’s what Tyler and Phil were using on their systems and it seemed to work fine. Sure enough, the next day I connected after just a minor tweak of the network properties in the NetworkManager dialog.

Unfortunately, while I can now connect to my home and school networks, I once again have lost the ability to automatically connect to networks, and to execute scripts on connection, meaning that I’m back to square one with the mounted networked music share – for now, I just do the mounting manually from a root terminal. Balls.