Building a new system to run Solaris 12

Several years ago I wrote about migrating my personal webserver from work’s Ultra20 to my own Ultra20 M2. Sadly, that Ultra20 M2 (dual-core Opterons, 2GB of RAM) has just about given up being able to cope with what I need it to do.

Externally, I run the www.jmcpdotcom.com web and mail (client and server) services on it. Internally it’s our media server, sharing out our collection of music, photos, movies and TV via NFS to the rpi. I just don’t have enough RAM in the U20M2 to do the needful, to the extent that when I’ve needed to transcode media files I’ve done it from my workstation (Lenovo quad-core, 32GB of RAM) across the gigabit backbone we’re running. Ok, so that’s a physical distance of about 1.5m, but it has still meant manually scheduling intensive tasks for late at night when the family’s asleep. That includes pkg update to whatever the latest available build is, since that regularly takes about 3 hours on the U20M2 rather than the 15 minutes it takes in my kernel zone or on the laptop.

So… I had to build a new box.

I did a reasonable amount of research into what I could get, with the general proviso that I should be able to bump the cpu and ram up with minimal effort. A high priority (but not mandatory) desire was for an onboard Intel gigabit nic – they appear to me to be the most robust solution on Solaris today. I also wanted to migrate from the mpt(7d) SAS HBA to an mpt_sas(7d) card if I could find one cheap enough. What I ended up with was this:

I’ll comment on the specific features of the motherboard and case a little later.

This motherboard is UEFI, so I couldn’t just take the disks out of the U20M2 and boot up the new system straight away. I pulled down the most recent Solaris 12 text installer USB image, booted, and installed a basic bootable image. The cpio stage (the final stage) took a shade over 2 minutes to blat the image onto the disk. While it would have been really nice to then ‘zfs send | zfs recv’ a snapshot of the U20M2’s current BE across to the new box and try to boot it, I figured I had too many tweaks to make. I do so like customising my systems, y’see 😉
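Had I gone the send/receive route, the idea would have looked roughly like this. This is only a sketch: the BE dataset name rpool/ROOT/solaris, the host newbox and the received dataset name are all assumptions, the whole thing is guarded so it does nothing unless that dataset actually exists, and a received BE would still need its boot configuration fixed up before it would boot.

```shell
# Sketch only: replicate the current boot environment to the new box.
# "rpool/ROOT/solaris" and "newbox" are placeholder names.
SNAP="migrate-$(date +%Y%m%d)"

# Guard: only proceed if the BE dataset actually exists on this system.
if command -v zfs >/dev/null 2>&1 && zfs list rpool/ROOT/solaris >/dev/null 2>&1; then
    # Recursive snapshot of the boot environment...
    zfs snapshot -r "rpool/ROOT/solaris@${SNAP}"
    # ...replicated, left unmounted (-u), into the new box's pool.
    zfs send -R "rpool/ROOT/solaris@${SNAP}" | \
        ssh newbox zfs recv -u rpool/ROOT/solaris-migrated
fi
```

The -R on the send side carries the snapshots and dataset properties along; -u on the receive side stops the received datasets being mounted over the top of the running system.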

Still, it was very easy to copy across my cyrus-imap, mysql, exim, squid, privoxy and apache configurations. I also performed the rather simplistic operation of grabbing the list of installed packages from the U20M2 and then uttering pkg install $LIST on the new system. Since most of them were in my local cache, that was a fairly quick operation to complete. The service which caused me the most annoyance was my dhcp server. I run this as a convenience for the hardware we have (and which visitors might bring), and while I do manual address assignment for certain systems, it’s mostly completely automatic. After copying the old config files into place and creating /var/db/dhcpd.leases I thought I’d remembered everything. However, I’d missed one thing:

May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] No subnet declaration for ext0 (192.168.2.30).
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] ** Ignoring requests on ext0. If this is not what
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] you want, please write a subnet declaration
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] in your dhcpd.conf file for the network segment
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] to which interface ext0 is attached. **
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error]
May 30 13:15:04 orrery dhcpd: [ID 702911 local7.error] setsockopt: IP_PKTINFO: Bad file number

I’ve got two interfaces on this box, one physical, one vnic:

$ dladm show-phys; dladm show-vnic ; dladm show-ether
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
LINK              OVER              SPEED  MACADDRESS        MACADDRTYPE VID
ext0              net0              1000   2:8:20:6e:f5:f5   fixed       0
LINK              PTYPE    STATE    AUTO  SPEED-DUPLEX                    PAUSE
net0              current  up       yes   1G-f                            bi


(The U20M2 has two physical nge interfaces; I’d like to get another physical gig-E nic for this new system at some point).

The thing I’d forgotten to do was tell the dhcp server which interfaces to listen on. I don’t want it to listen on ext0, just net0.

$ sudo svccfg -s dhcp/server:ipv4
svc:/network/dhcp/server:ipv4> listprop config/listen_ifnames
config/listen_ifnames astring
svc:/network/dhcp/server:ipv4>


This, fortunately, is a very easy thing to fix:

svc:/network/dhcp/server:ipv4> setprop config/listen_ifnames = net0
svc:/network/dhcp/server:ipv4> refresh
svc:/network/dhcp/server:ipv4> quit
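To double-check that it took, svcprop (the read side of the SMF configuration tools) reads the property back; after the refresh it should report just net0:

```
$ svcprop -p config/listen_ifnames dhcp/server:ipv4
net0
```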


And off we went.
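For the record, the package-list copy I mentioned earlier boils down to something like the following. A sketch only: /tmp/pkglist is my choice of scratch file, it assumes pkg list -H prints one package per line with the name in the first column, and it’s guarded so it’s a no-op on a box without pkg.

```shell
# On the old box: capture the installed package names, one per line.
# (Assumption: "pkg list -H" suppresses the header; name is column 1.)
if command -v pkg >/dev/null 2>&1; then
    pkg list -H | awk '{ print $1 }' > /tmp/pkglist
fi

# Copy /tmp/pkglist across, then on the new box feed it back to pkg:
if [ -s /tmp/pkglist ]; then
    xargs pkg install < /tmp/pkglist
fi
```

Since most of the packages were already in my local cache, the install side was quick.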

The next phase was to power down both the old and new systems and transfer the disks. The IBM mpt_sas card has only one external SFF-8088 connector, but it does have four internal SATA connectors. I took four disks out of the 8-bay Sans Digital TowerRAID TR8X+ SAS/SATA JBOD, installed them inside the Cooler Master case, and booted up again.

All the ZFS pools imported without any hassles, and apache/squid/exim/dhcp/cyrus-imap all came up immediately. ALL GOOD!

Since one aspect of the zpool import was automatic enabling of the various nfs and smb shares, the new system was then declared to Be Sufficient™ and have WAF > 0.
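That behaviour falls out of the shares being stored as properties on the datasets themselves, so they travel with the pool. A sketch, with a hypothetical dataset name (share.nfs is the newer Solaris 11+ property syntax; older releases spell it sharenfs=on):

```shell
# Shares are dataset properties, so they travel with the pool itself.
# "tank/media" is a hypothetical dataset name.
zfs set share.nfs=on tank/media
zpool export tank        # e.g. when moving the disks to a new box
zpool import tank        # the NFS/SMB shares come back automatically
```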

Phew!

A note on the case

Having worked in, on and around Sun- (now Oracle-) designed hardware for quite a few years now, I’m delighted to see that concepts like disk sled rails have trickled down to the consumer end of the market. That said, why would you advertise a case as accepting seven 3.5″ internal disks, but only supply rails for four? I also found it a bit difficult to get the SATA power and data cables connected to the internal disks without taking off the back panel of the case as well. This is an area where a SATA disk backplane comes in handy; I’d pay another AUD30 on top of the case price for a feature like that. Just sayin’….

A note on the motherboard

While I’d really like to have two onboard Intel gigE nics, I’m fine with just one; there’s plenty of headroom for adding a PCI-E x1 (or even a PCI) card to handle that. There are the standard four DDR3 slots, currently populated with 4GB DIMMs – plenty, at least for the moment. There’s a serial port header if I want to get that set up (meh … maybe), and the only niggle I have is that the Intel HD Audio doesn’t seem to be supported by audiohd at the moment. I’ll log a bug for that, but really, who needs audio output capabilities on a server?

For El Goog: here’s the output of prtconf -v for the board: ASRock_Fatal1ty_H87-prtconf-v.txt. I don’t have lspci, but I do have scanpci output: ASRock_Fatal1ty_H87-scanpci.txt and ASRock_Fatal1ty_H87-scanpci-v.txt.

I hope you find it useful.