Hello, Dave,
Here it is. It is actually quite simple; the trouble, of course, was finding out how to do it.
So if we enable SSH on ESXi 4 and do an ls in /dev/disks, we get:
t10.ATA_____ST31000528AS________________________________________SERIALN1 vml.01000000002020202020202020202020203956503042345436535483313030
t10.ATA_____ST31000528AS________________________________________SERIALN1:1 vml.01000000002020202020202020202020203956503042345436535483313030:1
t10.ATA_____ST31000528AS________________________________________SERIALN2 vml.0100000000202020202020202020202020395650304a373159535483313030
t10.ATA_____ST31000528AS________________________________________SERIALN2:1 vml.0100000000202020202020202020202020395650304a373159535483313030:1
t10.ATA_____ST31000528AS________________________________________SERIALN3 vml.0100000000202020202020202020202020395650304a32354a535483313030
t10.ATA_____ST31000528AS________________________________________SERIALN3:1 vml.0100000000202020202020202020202020395650304a32354a535483313030:1
t10.ATA_____ST31000528AS________________________________________SERIALN4 vml.0100000000202020202020202020202020395650304a394a5a535483313030
t10.ATA_____ST31000528AS________________________________________SERIALN4:1 vml.0100000000202020202020202020202020395650304a394a5a535483313030:1
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704 vml.0100000000202020202057442d574d414d3933633432373034574493205744
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:1 vml.0100000000202020202057442d574d414d3933633432373034574493205744:1
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:2 vml.0100000000202020202057442d574d414d3933633432373034574493205744:2
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:3 vml.0100000000202020202057442d574d414d3933633432373034574493205744:3
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:4 vml.0100000000202020202057442d574d414d3933633432373034574493205744:4
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:5 vml.0100000000202020202057442d574d414d3933633432373034574493205744:5
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:6 vml.0100000000202020202057442d574d414d3933633432373034574493205744:6
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:7 vml.0100000000202020202057442d574d414d3933633432373034574493205744:7
t10.ATA_____WDC_WD800JD2D75JNE0___________________________WD2DWMAB91342704:8 vml.0100000000202020202057442d574d414d3933633432373034574493205744:8
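A side note on those names, which is my own guess from staring at them rather than anything from VMware's docs: the long hex string after vml.0100000000 looks like it is simply the drive's ATA serial number, space-padded (0x20 is an ASCII space) and hex-encoded, followed by what seems to be part of the model string. On any box with xxd you can decode one; for example, with a made-up serial like my SERIALN1 placeholder:
echo 20202020202020202020202053455249414c4e31 | xxd -r -p
(prints the padding spaces followed by SERIALN1)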
The t10.ATA_____WDC_WD800JD2D75JNE0 is an 80 GB SATA HDD I am using to install ESXi and the Solaris VM that will control the remaining four 1 TB HDDs.
To use one of these HDDs directly in a VM, run the following commands from inside the folder of the VM that needs direct access to those disks:
vmkfstools -r /vmfs/devices/disks/vml.01000000002020202020202020202020203956503042345436535433313030 host_zpool_hdd1_SERIALN1.vmdk
vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020395650304a373159535433313030 host_zpool_hdd1_SERIALN2.vmdk
vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020395650304a32354a535433313030 host_zpool_hdd1_SERIALN3.vmdk
vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020395650304a394a5a535433313030 host_zpool_hdd1_SERIALN4.vmdk
I replaced the HDDs' serial numbers with SERIALN1, SERIALN2, etc., as I don't want my HDD serial numbers floating around the Internet...
This will create what I think are passthrough vmdk files, which then only have to be added through the vSphere client.
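A note on the flags, going by the vmkfstools man page linked below: -r creates what VMware calls a virtual compatibility mode raw disk mapping, while -z creates a physical compatibility (true passthrough) mapping. I used -r and it works for me; I have not tried -z, but for anyone who wants to experiment, the command would just be, e.g.:
vmkfstools -z /vmfs/devices/disks/vml.01000000002020202020202020202020203956503042345436535433313030 host_zpool_hdd1_SERIALN1.vmdk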
Because it may (or may not, I don't know) help me later identify a failed HDD, I try to add the HDDs' serial numbers to as many related things as I can, hence:
host_zpool_hdd1_SERIALN1.vmdk
Also, for each vmdk file a -rdm.vmdk is created. I don't know why for sure, but see my guess after the listing below.
Doing ls -la under /vmfs/volumes/datastore1/ZFS_Host_OpenSolaris should now show something like this:
/vmfs/volumes/4ac07863-b04cc021-bfbd-001b23413506/ZFS_Host_OpenSolaris # ls -la
drwxr-xr-x 1 root root 2100 Sep 28 16:37 .
drwxr-xr-t 1 root root 1120 Sep 28 08:49 ..
-rw------- 1 root root 10737418240 Sep 28 08:49 ZFS_Host_OpenSolaris-flat.vmdk
-rw------- 1 root root 459 Sep 28 08:49 ZFS_Host_OpenSolaris.vmdk
-rw------- 1 root root 0 Sep 28 08:49 ZFS_Host_OpenSolaris.vmsd
-rwxr-xr-x 1 root root 1660 Sep 28 08:49 ZFS_Host_OpenSolaris.vmx
-rw------- 1 root root 275 Sep 28 08:49 ZFS_Host_OpenSolaris.vmxf
-rw------- 1 root root 1000204886016 Sep 28 16:36 host_zpool_hdd1_SERIALN1-rdm.vmdk
-rw------- 1 root root 481 Sep 28 16:36 host_zpool_hdd1_SERIALN1.vmdk
-rw------- 1 root root 1000204886016 Sep 28 16:36 host_zpool_hdd1_SERIALN2-rdm.vmdk
-rw------- 1 root root 481 Sep 28 16:36 host_zpool_hdd1_SERIALN2.vmdk
-rw------- 1 root root 1000204886016 Sep 28 16:37 host_zpool_hdd1_SERIALN3-rdm.vmdk
-rw------- 1 root root 481 Sep 28 16:37 host_zpool_hdd1_SERIALN3.vmdk
-rw------- 1 root root 1000204886016 Sep 28 16:37 host_zpool_hdd1_SERIALN4-rdm.vmdk
-rw------- 1 root root 481 Sep 28 16:37 host_zpool_hdd1_SERIALN4.vmdk
/vmfs/volumes/4ac07863-b04cc021-bfbd-001b23413506/ZFS_Host_OpenSolaris #
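My guess about the -rdm.vmdk files: if you cat the small descriptor you should see something like this (a sketch of the format, not a literal capture from my system):
# cat host_zpool_hdd1_SERIALN1.vmdk
# Disk DescriptorFile
version=1
createType="vmfsRawDeviceMap"
# Extent description
RW 1953525168 VMFSRDM "host_zpool_hdd1_SERIALN1-rdm.vmdk"
(ddb.* lines omitted)
So the small .vmdk seems to be just a text descriptor, and the -rdm.vmdk is the mapping file that stands in for the raw disk: ls reports it with the full disk size (1953525168 sectors * 512 bytes = 1000204886016 bytes, exactly the 1 TB drive), but it should take essentially no real space on the datastore.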
Now, all that needs to be done is in the vSphere client:
Go to the VM's properties, add new hardware, select Hard Disk, choose "Use an existing virtual disk", pick one of the new vmdk files you should see there (Pic1), and repeat for the other HDDs.
Pic2 and Pic3 show the final result.
I have tested it: if, from the VM, I create zpools and ZFS filesystems on those disks and put data on them, and then shut down ESXi and boot from an OpenSolaris live CD, the zpools are detected and the data in the ZFS filesystems is accessible.
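For anyone wanting to reproduce the test, the VM side is nothing exotic, along these lines (the pool/filesystem names and the c#t#d# device names are just examples; check the real device names with format first):
zpool create tank raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0
zfs create tank/data
zpool status tank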
This is exactly what I needed, as I will be able to use all the advantages of ZFS and raidz, especially the fact that it is transactional and thus avoids the RAID-5 write hole. This means that, for instance, in a power-outage situation with no UPS, the data will be safe even if ESXi or the Solaris VM gets corrupted: those advantages are not lost to another abstraction layer (as they would be if ZFS were created inside a virtual HDD), and the data will always be reachable outside ESXi with a simple reboot.
Also, the datastore for the other VMs will be a ZFS volume exported as an iSCSI target, but I haven't got there yet.
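From what I have read so far, on the OpenSolaris side it should be roughly the following legacy approach (untested by me yet; sizes and names are placeholders, and the newer COMSTAR framework is the alternative I still need to look into):
zfs create -V 200g tank/esx_datastore
zfs set shareiscsi=on tank/esx_datastore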
Please feel free to publish this wherever you like; below I will add the links that got me on the right track after many, many hours of trying to find a solution.
I will send you this same text with the pictures by e-mail.
I am preparing a full document, as I go, that will explain everything from the ESXi install to the Solaris set-up, etc.
When I do, I will forward it to you, if you are interested in hosting it, as I don't have anywhere to host it myself. The only thing I ask, as with this, is that my name is on it.
Also, a final note: after a lot of digging, I am quite convinced that this is either not possible or very difficult under XenServer.
Let me know your thoughts on this,
Mario
References:
http://episteme.arstechnica.com/eve/for ... 1000850041
http://www.jume.nl/esx4man/man1/vmkfstools.1.html