 Intel S3420GPLC / Socket 1156 Build 

Joined: Sat Feb 05, 2011 9:27 pm
Posts: 5
Post Intel S3420GPLC / Socket 1156 Build
The following combination works perfectly.

The board was relatively cheap at US$175; memory cost depends on how much and what type you need. ECC support, VT-d, and no crazy workarounds make it well worthwhile.

(http://ark.intel.com/Product.aspx?id=46533)

    Intel S3420GPLC Server Board
    - Intel 3420 Chipset/ Socket 1156
    - 6 DDR3 1333/1066/800 DIMM Slots
      * Supports 4 x 4GB non-ECC UDIMM (unofficial, tested, works)
      * Supports 4 x 8GB ECC RDIMM (supported config)
      * Supports 6 x 4GB ECC RDIMM (supported config, tested, works)
      * WILL NOT POST WITH >4 DIMMs UNLESS ALL ARE REGISTERED
    Network: 1 x Intel 82578DM (onboard)/ 1 x Intel 82574L (onboard)
    Storage: 6 x SATA-II provided by 3420 chipset (AHCI mode)
    CPU Used: Intel Xeon 3440 CPU
    BIOS Versions: 46 and 48 (tested)

This combination supports VT-d and FT (FT capability validated only via the bootable utility, not actually tested).

Of the on-board devices, the following are available for pass-thru:
    - Intel 82578DM
    - USB2 Root Hub #1
    - USB2 Root Hub #2
    - SATA-II 6-port controller (tested passthru to OpenSolaris, works)
    - PCI Express Root Port (in my case populated with an Intel SASWT4I)
    - PCI Express Root Port (in my case populated with an Intel SASUC8I, tested passthru to OpenSolaris, works)
    - PCI Express Root Port (in my case populated with an Intel SASUC8I, tested passthru to OpenSolaris, works)
    - PCI Express Root Port - Matrox G200e on-board video
    - PCI Express Root Port - Intel 82578DM on-board NIC
    - PCI Bridge (in my case populated with a D-Link DGE-530T)

Tested with ESXi 4.1 U1: all core components (including both NICs) work out of the box. Worth highlighting, since the VMware HCL notes that the on-board -DM NIC isn't supported. The DGE-530T (Marvell 88E8001) obviously required a minor fix to use the skge module, since it isn't officially supported.

Will be glad to post more info if required; I've gotten much more from this board than I've contributed. :P

~trini


Thu Apr 28, 2011 4:26 am

Joined: Mon Jun 20, 2011 2:06 pm
Posts: 1
Post Re: Intel S3420GPLC / Socket 1156 Build
Hi trini,

I am very interested in more information about your system, especially the storage you are using.
I would like to set up a machine like this, but I'm not sure about the exact configuration.

So give me some NICE feedback :)

MakkuZ


Mon Jun 20, 2011 2:11 pm

Joined: Sat Feb 05, 2011 9:27 pm
Posts: 5
Post Re: Intel S3420GPLC / Socket 1156 Build
Apologies for the delayed reply; it's been hectic at work.

Background: I built this system to replace a standalone OpenSolaris server, which I was using as a file server and for VirtualBox. Since OpenSolaris is dead and I can't afford to pay Oracle $1000/socket/year for patches, I needed something else.

So, the box, and specifically the storage configuration:
    - configured to boot ESXi 4.1 U1 off the internal USB socket (4GB flash disk)
    - SASWT4I has RAID firmware, 2 x 40GB laptop drives configured as RAID-1
      * this serves as the 'base' datastore for the SAN VM's boot disk, which runs OpenFiler 2.99
      * it also holds temporary/non-production VMs and ISOs
    - 2 x SASUC8I with JBOD firmware, configured as passthru to the OpenFiler VM (16 ports total)
    - on-board SATA controller configured as passthru to the OpenFiler VM (6 ports total)

Actual disks are:
    - 16 x 1TB disks attached to the SASUC8I's, configured as a 15-disk RAID-6 software array + 1 hot spare under OpenFiler (md0).
    - 6 x 2TB disks attached to the onboard controller, configured as a 5-disk RAID-5 software array + 1 hot spare under OpenFiler (md1).
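For reference, the two arrays above could be created under OpenFiler (or any mdadm-based Linux) along these lines. The device names are hypothetical and depend entirely on how the controllers enumerate; treat this as a sketch of the layout, not my exact commands:

```shell
# Hypothetical device names -- adjust to your controller enumeration.
# md0: 15-disk RAID-6 + 1 hot spare on the 16 SASUC8I-attached 1TB disks
mdadm --create /dev/md0 --level=6 --raid-devices=15 --spare-devices=1 \
    /dev/sd[b-q]

# md1: 5-disk RAID-5 + 1 hot spare on the 6 onboard-attached 2TB disks
mdadm --create /dev/md1 --level=5 --raid-devices=5 --spare-devices=1 \
    /dev/sd[r-w]
```

As a sanity check on capacity: md0 yields 13TB usable (15 x 1TB minus two parity disks), and md1 yields 8TB usable (5 x 2TB minus one parity disk), which matches the 8TB backup LUN below.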

md0 on OpenFiler VM has 2 iSCSI LUNs:
    - first exported to ESXi for 'production' VM storage (1TB)
    - second exported to a WHS 2011 VM (remainder, storage volume)

md1 on OpenFiler VM has 1 iSCSI LUN:
    - exported to same WHS 2011 VM (8TB, backup volume)

Note: this is a non-optimal solution. All production VMs live on the iSCSI volume, so ESXi can't see them until the OpenFiler VM starts. Even then, I need a custom boot script that forces ESXi to re-scan the iSCSI HBA after the OpenFiler VM comes up; otherwise ESXi will not see the volume and can't process the auto-start/auto-stop settings. This could be avoided by hosting all the VMs on the RAID-1 volume, but my drives are too small.
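I haven't posted the actual boot script, but on ESXi 4.1 something along these lines would do the job using the stock vim-cmd and esxcfg-rescan tools. The VM ID, HBA name, and delay are assumptions you'd need to fill in for your own box:

```shell
# Sketch of a power-on-then-rescan boot script for ESXi 4.1.
# VMID (16) and the iSCSI HBA name (vmhba33) are examples --
# find yours with 'vim-cmd vmsvc/getallvms' and 'esxcfg-scsidevs -a'.
VMID=16
vim-cmd vmsvc/power.on $VMID

# Give the OpenFiler VM time to bring its iSCSI target up, then rescan
# so the VMFS volume (and the auto-start VMs on it) become visible.
sleep 180
esxcfg-rescan vmhba33
```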

Why WHS backed by OpenFiler? Because I actually really like how brain-dead WHS is: it makes management of users/shares/streaming etc. easy. As for OpenFiler, I'm ditching it for OpenIndiana as soon as it stabilizes. I want my ZFS back; the volume configuration would be identical, except I'd use RAIDZ, RAIDZ2, and COMSTAR for iSCSI.
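The planned OpenIndiana layout would look roughly like this; pool names, Solaris-style device names, and the zvol size are illustrative, not a tested config:

```shell
# raidz2 pool of 15 disks + 1 hot spare, mirroring the md0 layout
# (device names are illustrative Solaris cXtYdZ names):
zpool create tank raidz2 \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 \
    spare c2t15d0

# raidz pool of 5 disks + 1 hot spare, mirroring md1:
zpool create backup raidz \
    c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
    spare c3t5d0

# COMSTAR: carve a 1TB zvol and register it as an iSCSI LUN
zfs create -V 1t tank/esxi-prod
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-prod
```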

Performance-wise, I can't complain. File copies to/from the WHS 2011 shares saturate my GigE network (actual transfers of 100MB/s+). The Intel NICs are used for actually serving traffic, while the DGE-530T is vmkernel/management only (the skge driver is a bit flaky under load, so I decided to relegate it to light duty).


Tue Jun 28, 2011 2:52 am
Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group.
Designed by STSoftware for PTF.