To my great annoyance, I have to run Microsoft Windows 7 (x64) on my laptop, for three reasons: $EMPLOYER uses Cisco IP Communicator as its VoIP solution (no Solaris version); when I travel overseas I call J and the kids every day using Skype video calling; and the Intel HD graphics support in Xorg on Solaris is somewhat flaky.
So I run Solaris inside vbox, in seamless mode, and do all my coding and sysadminning tasks using Proper Editors(tm) and other interfaces.
Here’s the first bit of laziness: installations.
Two weeks ago I took delivery of a new workstation (a Lenovo M82, which promptly received a memory, disk and graphics upgrade) to replace the aging, hot and noisy Ultra 40 M2 I’ve had since 2007. I downloaded the latest (at that point) USB live image from the internal site and happily booted + installed what I needed to… before remembering that the installer blats over any existing disk partitioning you’ve got. That was rather annoying, since I like to use raw slices for swap and dump devices (an old habit) rather than zvols. In order to keep my existing configurations, I cheated. Just a little (ok, rather a lot).
Technically, I should have set up an AI server, gone through my list of installed packages and made sure that I had them all listed, blah blah blah. What I chose to do instead was zfs snap my current BE, boot the new box using the liveusb image (with livessh mode enabled), create my desired rpool config, then zfs send | zfs recv the snapshot across… then mount the received BE, make the appropriate edits, install the bootloader and reboot.
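In outline, the cheat looked something like this (the pool, BE and host names here are illustrative, not necessarily the ones I used):

```shell
# On the old box: take a recursive snapshot of the current boot environment
zfs snapshot -r rpool/ROOT/solaris@migrate

# On the new box, booted from the liveusb image with livessh enabled:
# partition the disk to taste (raw slices for swap and dump), create the
# new rpool, then pull the snapshot across the wire and receive it
zpool create rpool c0t0d0s0
ssh oldbox zfs send -R rpool/ROOT/solaris@migrate | zfs recv -u rpool/ROOT/solaris

# then mount the received BE, make the edits you need in /etc,
# install the bootloader, and reboot
```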
It took about 25 minutes to transfer the snapshot from the Ultra 40 M2 to the Lenovo M82 (I’ve got a cheap 16-port gigE switch connecting the home systems), and about another 5 to go through the specific changes I needed in /etc (apart from those for svc:/system/identity:domain). Installing the bootloader (grub2) was easy, but I had a weird problem getting the new system’s boot menu figured out. I ended up needing to remove /etc/zfs/zpool.cache and then recreate it (by running zpool status; zpool list would have done just as well). The SMF changes were done while I had the liveusb stick booted. To do this, after importing the new rpool I fired up svccfg, then selected the services I needed and edited them.
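The svccfg session ran along these lines (the repository path is where the received BE was mounted; the service shown is the one mentioned above, and the exact edits will vary):

```shell
# point svccfg at the repository inside the newly received BE,
# then select and edit the services that need changing
svccfg
svc:> repository /a/etc/svc/repository.db
svc:> select svc:/system/identity:domain
svc:/system/identity:domain> listprop config
svc:/system/identity:domain> exit
```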
Prior to getting this Lenovo, I’d been using a non-global zone in the Ultra 40 M2 to serve out my webserver and handle mail. Part of this reconfiguration involved getting a miniSAS-connected jbod (from Other World Computing, shipped via Borderlinx) and physically moving my media and scratch pool disks into it. Then I attached the jbod to my Ultra 20 M2, and I just had to bring that up to date. So I snap’d the zone BE on the Ultra 40 M2, and send|recv’d it to the rpool on the Ultra 20 M2 – same liveusb+livessh sneakiness, same svccfg activities … and all done in under 30 minutes.
The longest part of the physical downtime was swapping the screws on the disk caddies when I removed them from the Ultra 40 M2 and inserted them into the jbod.
The second bit of laziness is not really laziness. I had a filesystem on the old box which I wanted to transfer to the new one, and tried using rsync to move it. I’m a big fan of rsync; it does its job pretty well… mostly. This time, though, it just wasn’t performing at all. I had about 8GB to transfer, and after 30 minutes not only had a mere 1.5GB gone across the wire, but trying to do any interactive work on the console of the other box was impossible. And I do mean impossible – I got no response to keystrokes unless I paused the rsync. Then, having kicked myself because what I wanted to move was a filesystem which happened to be on its own dataset, I snap’d it and kicked off a zfs send | zfs recv. All the data transferred in about 12 minutes, with interactive performance going very nicely as well.
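Since the data lived on its own dataset, the fix was the obvious one (dataset and host names here are illustrative):

```shell
# the rsync attempt that crawled and murdered interactivity on the far end:
rsync -av /tank/stuff/ newbox:/tank/stuff/

# the lazy (and much faster) way: snapshot the dataset and stream it whole
zfs snapshot tank/stuff@move
zfs send tank/stuff@move | ssh newbox zfs recv -F tank/stuff
```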
Now for the final bit of laziness:
Cut to yesterday, when I remembered that I really should get a new vbox instance cons’d up on the laptop in preparation for my trips (I’ve got WLG next week, and SCA in mid-March). I still had the build 13 liveusb stick image, and I just could not be bothered downloading the equivalent ISO image so that I could boot the vbox with it. Running strings on an older Solaris 11 update 1 ISO showed the mkisofs command line that Release Engineering used to create it, so I figured I would copy their example.
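Digging that out is a one-liner (the ISO filename here is an assumption):

```shell
# the command line used to build the image shows up among
# the ISO's printable strings, so grep for it
strings sol-11_1-text-x86.iso | grep -i mkisofs
```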
Firstly, I needed to mount the USB image in the filesystem:
# lofiadm -a /path/to/liveusb.img
/dev/lofi/1
# fstyp /dev/lofi/1
ufs
# mount -F ufs -o rw /dev/lofi/1 /mnt
Great… UFS. Ugh. Now to create a bootable ISO from it:
# mkisofs -v -R -J -o /tmp/bootable.iso -c .catalog -b boot/bios.img -no-emul-boot -boot-load-size 4 \
    -boot-info-table -eltorito-platform efi -eltorito-alt-boot -b boot/uefi.img -no-emul-boot \
    -N -l -R -U -allow-multidot -no-iso-translate -cache-inodes -d -D /mnt
This command line gave me exactly what I needed, so I moved the resulting ISO image across to the fileserver, fired up VirtualBox on the laptop, attached the image to my new vbox instance, and started installing.
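If you prefer the command line to the GUI, VBoxManage can do the attach-and-boot dance too (the VM and storage controller names below are illustrative):

```shell
# attach the freshly built ISO to the VM's IDE controller as a
# DVD drive, then boot the VM from it
VBoxManage storageattach "solaris-b13" --storagectl "IDE" \
    --port 0 --device 0 --type dvddrive --medium /tmp/bootable.iso
VBoxManage startvm "solaris-b13"
```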
Easy (and rather quick, too).