The next step in my Mordor 1.0 to Mordor 2.0 project is to regain the basic Mordor 1.0 functionality in a VM – that is, to set up an unRAID-powered NAS. In this post I document the details of my experience of setting up unRAID (version 5.0-beta12a) on my ESXi server (running version 5.1.0). I describe creating a VM on ESXi that boots from a physical USB flash drive (using Plop Boot Manager), my adventures with trying to pass the onboard SATA controller through to the VM (spoiler: it failed), and other options for configuring the data drives for the VM (Raw Device Mapping, VMDirectPath I/O passthrough). By far, the most useful resource that guided me in this process was the amazingly detailed documentation by the unRAID user Johnm in his forum post. I have used his guidance and tips extensively, and will probably repeat some of them here.
Challenges and Requirements

So, what are the challenges in running unRAID as a VM? And what are some of the requirements I have from this setup and migration process?

unRAID runs from a USB flash drive, which serves both as its boot media and as persistent storage for configuration and plugins – but ESXi doesn’t offer a “boot a VM from USB flash drive” option, nor does it support presenting a USB flash drive to a VM as boot media. The main function of unRAID is to manage storage, so it makes sense to give unRAID complete and absolute control of the NAS hard drives it will manage. ESXi, running on VT-d-enabled hardware, has support for passing physical devices through to a VM – but it is far from simple or straightforward to use this with physical drives directly.
I don’t have the luxury of separate “production” and “staging” servers, to experiment without affecting my actual system and data. Mordor 1.0 managed 3 data drives (2TB each), and these are all the large drives I have, so I must get the unRAID VM working with the existing configuration and data, while minimizing the risk that something bad happens to my data during the process.

Create an unRAID VM that boots from a USB flash drive using Plop Boot Manager

To tackle the first challenge, I decided to go for a solution that allows a VM to boot from a physical USB flash drive, using the Plop Boot Manager. Plop Boot Manager is a minimal program that allows booting various operating systems from various devices (e.g. CD, floppy, USB), with no dependency on the BIOS (it is bundled with its own drivers for IDE-CDROM and USB). The game plan is to create a Plop image that boots from USB, and have the unRAID VM use that image as a boot-CD image.
Following is a detailed description of my implementation of this plan.

Make a Plop Boot Manager image (on Windows). Download the Plop Boot Manager and the plpbt-createiso tool from the Plop download page (I used the latest version available, which was 5.0.14 at the time). The default behavior of Plop Boot Manager is to display a menu with all possible boot devices detected, and let the user choose which device to boot from, so the Plop image needs to be configured to boot automatically from USB. Load the plpbt.bin file, found in the extracted Plop directory, set the Start mode to Hidden, mark the Countdown checkbox, set the Countdown value to 1, choose USB as Default boot (see above screenshot for the configured settings dialog), and click the Configure plpbt.bin button.
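Incidentally, Plop also ships a small command-line configuration tool, plpcfgbt, that can apply the same settings without the GUI. A hedged sketch – the parameter names are taken from the Plop documentation, so double-check them against the version you download:

```shell
# Set hidden start mode, a 1-second countdown, and USB as the default
# boot device, writing the settings into plpbt.bin in place.
plpcfgbt stm=hidden cnt=on cntval=1 dbt=usb plpbt.bin
```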
Clicking Configure plpbt.bin modifies the loaded plpbt.bin file in place. Extract plpbt-createiso.zip, and copy the modified plpbt.bin file from the previous step into the extracted directory. From a command-line prompt, execute create-iso.bat – it will generate a file in the same directory – plpbtmycd.iso.

Create the unRAID VM. Launch VMware vSphere Client and log in to the ESXi host. Start the Create New Virtual Machine wizard, and select a Typical configuration.
Getting to the unRAID shell console in vSphere, and checking the MAC address

This concludes the unRAID-VM creation process, but it still has very little use as long as the data drives are not present, which is what the next section covers.

Configuring the data drives

Let’s review the available solutions for configuring the data drives for use in the unRAID VM:
1. Adding the data drives as datastore drives in the ESXi host, formatting them using VMFS, creating maximum-size virtual disks on each drive, and assigning the virtual disks to the unRAID VM. This is probably the simplest method: it is natively supported in ESXi, and doesn’t involve any “hardware magic”. It is also probably the worst possible solution, for any and all of the following reasons:
   - Performance hell: this configuration goes through all possible software and hardware layers for every read/write operation.
   - Possible capacity loss due to limitations on the size of virtual disks in VMFS, which might be smaller than the actual capacity of the physical drive.
   - It doesn’t meet my requirement for transparent migration, as it requires reformatting all the drives to VMFS, thus losing the existing data – and I don’t have 4TB of swap space lying around.
   - It doesn’t allow unRAID to manage the drives directly, including SMART parameters and various low-level sensors.
2. Using VMDirectPath I/O to pass the disks through directly to the unRAID VM, and letting unRAID manage the disks on its own. This solution sounds great, and meets all the requirements! But, alas, VMDirectPath I/O can be used to pass through supported PCI and PCIe devices, not individual HDDs. So maybe I can pass through an entire SATA controller, and thereby assign all the HDDs on that controller to the unRAID VM? This would indeed have been great, if it worked, but it didn’t – more details below. Future note: I still plan to go back to this method, but that requires getting a suitable RAID card first.
3. Using Raw Device Mapping (RDM) to assign the physical HDDs to the unRAID VM. This solution almost meets all of my requirements, with the following disadvantages:
   - It isn’t really direct physical access, so unRAID doesn’t get to manage the drives directly (no SMART parameters and various sensors).
   - It isn’t natively supported in ESXi, and requires some manual ESXi voodoo that isn’t guaranteed to work in the future.

But, despite the shortcomings, this method is much better than option #1, and I can implement it without waiting for some expansion card – which makes it the chosen solution! In the following sections I will give further details on my failure with solution #2, and success with solution #3.
During the attempts and experiments I had only one data HDD connected (out of the three), in case I do something stupid and invalidate the data. This is a reasonable safety net, since I already had protection for 1-HDD-loss based on a parity drive.
Of course it would have been better to have a full backup, or to perform the experiments with non-production drives (living on the edge :-p ).

SATA Controller DirectPath I/O Adventures

As mentioned above, the SATA controller passthrough alternative was the preferred solution for giving unRAID full control over the data HDDs, so this is what I tried first. Towards that goal, I launched vSphere Client and logged in to the ESXi host. In the host management interface, the DirectPath I/O Configuration can be found under the Configuration tab, in Advanced Settings.

ESXi DirectPath I/O – selecting devices for passthrough in vSphere

You can see that individual HDDs are not available for passthrough, which is reasonable considering HDDs are not PCI/PCIe devices. But the ASUS motherboard I have installed in this server has two on-board SATA controllers, and both appear on the list above:
- Intel Corporation Cougar Point 6 SATA AHCI Controller – part of the Intel Z68 chipset, this controller includes 2 SATA 6Gb/s ports (gray) and 4 SATA 3Gb/s ports (blue).
- Marvell Technology Group Ltd. 88SE9172 – an additional PCIe SATA controller that includes 2 SATA 6Gb/s ports (navy blue).

Now, remember that the ESXi host itself needs its own datastore drives, so it will be using some SATA ports. But I don’t see a reason for installing more than two drives for the datastore, so why not use the Marvell controller for ESXi, and pass the Intel controller through to unRAID? Wouldn’t it be great? Sadly, it turns out that the Marvell controller is not supported by ESXi at all. In every configuration I tried (connecting every HDD I have to each Marvell port), ESXi simply didn’t detect the connected HDD – as if nothing was connected. I assume this is a compatibility issue, as the controller does not appear on the official compatibility list.
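One way to see this from the ESXi shell is to compare what the PCI bus reports against what ESXi actually claims as a storage adapter. A hedged sketch of commands available on an ESXi 5.x host (output obviously depends on the hardware):

```shell
# List PCI devices - the Marvell 88SE9172 shows up here, since it is
# physically present on the PCI bus...
lspci | grep -i storage

# ...but listing the storage adapters ESXi has drivers for and has
# claimed tells a different story: an unsupported controller (like the
# Marvell, in my case) is simply absent from this list.
esxcfg-scsidevs -a
```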
I tried asking about this on the forums, but have gotten no response so far.

Another approach I tried was to work around the compatibility issue by letting the unRAID VM deal with the Marvell controller itself – configure the Marvell controller for passthrough to the VM, and let the VM use it directly. I know unRAID is able to deal with this controller, since it worked fine back in the Mordor 1.0 days. This is of course a very partial workaround, as it gives unRAID only two HDDs, which is not enough.
So I configured the Marvell controller for passthrough.

Finishing the wizard for adding the Marvell SATA controller to the unRAID VM

This appeared to be OK, but once I booted the unRAID VM with HDDs connected to the Marvell ports, unRAID could not see any connected HDDs.
I was unable to solve this issue (which is also probably part of the Marvell controller’s incompatibility), and decided to drop it in favor of the RDM solution, at least for now.

Raw Device Mapping Configuration

As mentioned, this solution isn’t natively supported in ESXi. This means that vSphere Client access to the host is not sufficient, and some operations must be performed manually on the host, using direct SSH access. This is not a tutorial about SSHing into ESXi hosts, so I’ll sum it up: enable SSH access on the ESXi host, and use some SSH client (in Windows) to connect to the host. It should be noted that I found an existing post on the subject useful, with some variations.
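For completeness, a minimal sketch of that access path. On ESXi 5.x, SSH can be enabled in vSphere Client under the Configuration tab, Security Profile, Services, or from the DCUI Troubleshooting Options; the host name below is a placeholder:

```shell
# From any machine with an SSH client (PuTTY on Windows works too);
# 'esxi-host' stands in for your host's name or IP.
ssh root@esxi-host
```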
In vSphere Client, under the host Configuration tab, in the Storage settings, in the Devices view, verify that the NAS data HDD is recognized. Obtain the full handle of the HDD in an ESXi SSH session (my source HDD is t10.ATAWDCWD20EARS2D00MVWB0WD2DWCAZA1318851, which I will later refer to as HDD-ID for brevity and generality). Decide on a destination for the RDM handle: I want to use the RDMs in the unRAID VM, and I want to be able to keep track of which RDM belongs to which physical HDD, so I create the RDMs in the unRAID VM directory (e.g. /vmfs/volumes/Patriot-64G-SSD/unRAID), and name them following the scheme rdmHDD-ID.vmdk.
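The bookkeeping is mechanical: the RDM file name is just the device handle with an rdm prefix, living in the VM's directory. A tiny sketch of the scheme, using the datastore path and handle from my setup:

```shell
# Build the RDM file path for a given device handle, per the scheme above.
HDD_ID="t10.ATAWDCWD20EARS2D00MVWB0WD2DWCAZA1318851"
VMDIR="/vmfs/volumes/Patriot-64G-SSD/unRAID"
RDM_PATH="${VMDIR}/rdm${HDD_ID}.vmdk"
echo "${RDM_PATH}"
# → /vmfs/volumes/Patriot-64G-SSD/unRAID/rdmt10.ATAWDCWD20EARS2D00MVWB0WD2DWCAZA1318851.vmdk
```

This way, even with three RDMs in the directory, a quick ls tells me exactly which file maps to which physical drive.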
Create the RDM using the vmkfstools program: vmkfstools -a lsilogic -z /vmfs/devices/disks/HDD-ID /vmfs/volumes/Patriot-64G-SSD/unRAID/rdmHDD-ID.vmdk.
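Spelled out for my single connected data drive, a command sketch for the ESXi host shell (not runnable elsewhere; the handle and datastore path are from my setup, so substitute your own):

```shell
# -z creates a physical-compatibility-mode RDM pointer file
# (-r would create a virtual-compatibility one);
# -a lsilogic sets the adapter type recorded in the mapping file.
vmkfstools -a lsilogic -z \
  /vmfs/devices/disks/t10.ATAWDCWD20EARS2D00MVWB0WD2DWCAZA1318851 \
  /vmfs/volumes/Patriot-64G-SSD/unRAID/rdmt10.ATAWDCWD20EARS2D00MVWB0WD2DWCAZA1318851.vmdk
```

Afterwards, the resulting .vmdk can be attached to the unRAID VM in vSphere Client via Edit Settings, Add..., Hard Disk, using the existing-virtual-disk option.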