We’ve got a new pool

One of the features we looked for when we bought our house was an in-ground pool. As the kids have grown up and their confidence has increased, we've spent more and more time using the pool - last swimming season ran from September 2014 to April 2015, with almost daily swims that stretched to 2-3 hours on summer weekends.

One aspect of our pool that we really did not like was the slate tiles around the side. While I'm sure they looked beautiful when installed, by the time we moved in (2007) they were looking tired and starting to shed. By the middle of this year we'd actually lost most of them, leaving some rather ugly concrete. The underwater surface was also starting to loosen, so the Barracuda would frequently pull bits up and leave them in the skimmer box.

We had to get it fixed.

It's been surprisingly difficult to get people to come and quote on a renovation job, and of those who did quote, some came in at more than the cost of a new pool. That didn't seem quite right. Eventually, J found Mark and his team at Sunseeker Pools, who not only quoted within our range, but helped us pick tiles and stone as well. He was also able to start within the next 3 weeks, and estimated that the whole job would take about 3 weeks to complete.

We had been umming and aahing about whether to get the pavers removed and replaced with tiles, thinking that would significantly add to the cost; however, Mark's quote included that from the start, so we were very happy. During construction we talked with him about our retaining wall, and he offered to build up the edge of the pool with a reinforced besser brick layer. This added about another 10% to the cost, but was well worth it.

After a week's delay caused by the Brisbane flu, work started in earnest with emptying the pool. That took about 24 hours, and showed just how bad a pool can get when you leave it unloved for a few months:

/images/2015/11/20150831_110501_IMG_2482.jpg

The next stage was to remove the pavers (we're planning on using them for some follow-up work in the rest of our outdoor area) and jackhammer off the slate and the existing pool surface.

A day's worth of putting in formwork was quickly followed by the concrete truck (and pump), which delivered 3 cubic metres in about 45 minutes. That volume needed 3 days to cure, and then the tiles started going on. Mark and his team were very careful and precise in cutting and laying them, and then grouted them all at once. We went away to Ballandean (in the heart of Queensland's wine country) for a few days with friends, and when we came back the new pebblecrete had been laid, the waterline tiles and white river stone splashback had been installed, and we just needed the acid wash.

The day after we got back, the acid wash subcontractor arrived and got to work - it took about 30 minutes to complete his task, and then we filled the pool. We've got about 40kL, and using town water from the hosepipe it took almost 24 hours to get up to the right level. Then it was time to put in chemicals. The chemical balance of Brisbane's town water is generally pretty close to what we need for a pool, so the only major concern our pool shop had was that we add a lot of calcium. Apparently new pools can leach calcium, and it's nigh-on impossible to get back.

And with that, we have a new pool which we've been enjoying almost every day. It's nice to look at from the kitchen bench, too!

Thank you very, very much to Mark and his team from Sunseeker Pools, and to Wayne and the gang at the pool shop.

If you've got a suitable media-enabled browser, here's a video of the pool now - we really love the shimmery sparkly effect.




A recipe for running your pkg.depotd(1) server with SSL and Apache 2.4

As part of my contribution to the darktable community, I provide the Solaris packages needed to run the application via a locally-hosted pkg repo. You can

# pkg set-publisher -g https://www.jmcpdotcom.com/packages/packages JMCP

and then install the bits very easily.

What was a little non-obvious (to me at least) was how to get the pkg.depotd process to only listen on a secured port.

If you look at the SMF properties for svc:/application/pkg/server, you will observe these two likely-looking candidates:

pkg/ssl_key_file
pkg/ssl_cert_file

They are not, however, what you need. Running pkg/server outside of svc:/application/pkg/depot is actually restricted to plain http because the backing framework here is CherryPy – and the version which pkg.depotd uses apparently has some issues with https.

Hmmph.

So I asked a few colleagues who have worked on our packaging system for assistance. Liane pointed me at https://docs.oracle.com/cd/E23824_01/html/E21803/apache-config.html, which was an excellent place to start. I did, however, need some handholding from Tim and eventually wound up with the following configuration which works with Apache v2.4.

Firstly, /etc/apache2/2.4/httpd.conf:

  • Follow the Apache docs for enabling ssl

  • Once you’ve settled on the port to run your pkg.depotd on, add a RewriteRule like this:

RewriteEngine On
RewriteRule ^/packages$ https://%{SERVER_NAME}:83 [R,L]
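
For context, here's a minimal sketch of how those directives might sit in an SSL-enabled configuration. This is illustrative only - the ServerName and certificate paths are placeholders, and your distribution's ssl.conf may already provide most of it:

<VirtualHost *:443>
    ServerName your.ssl.url.here
    SSLEngine on
    SSLCertificateFile "/path/to/your/SSL cert file"
    SSLCertificateKeyFile "/path/to/your/SSL key file"
    # redirect plain /packages requests to the depot's SSL port
    RewriteEngine On
    RewriteRule ^/packages$ https://%{SERVER_NAME}:83 [R,L]
</VirtualHost>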

Secondly, svc:/application/pkg/depot:default:

# svccfg -s application/pkg/depot:default
svc:/application/pkg/depot:default> setprop config/port = 83
svc:/application/pkg/depot:default> setprop config/ssl_ca_cert_file = "/path/to/your/SSL CA cert bundle"
svc:/application/pkg/depot:default> setprop config/ssl_cert_file = "/path/to/your/SSL cert file"
svc:/application/pkg/depot:default> setprop config/ssl_key_file = "/path/to/your/SSL key file"
svc:/application/pkg/depot:default> refresh
svc:/application/pkg/depot:default> quit

Thirdly, svc:/application/pkg/server:

# svccfg -s application/pkg/server add packages
# svccfg -s application/pkg/server:packages addpg pkg application
# svccfg -s application/pkg/server:packages
svc:/application/pkg/server:packages> setprop pkg/proxy_base = astring: "https://your.ssl.url.here/packages"
svc:/application/pkg/server:packages> setprop pkg/inst_root = astring: "/path/to/your/REPO/on/disk"
svc:/application/pkg/server:packages> setprop pkg/readonly = boolean: true
svc:/application/pkg/server:packages> setprop pkg/log_access = astring: "/path/to/access/logfile"
svc:/application/pkg/server:packages> setprop pkg/log_errors = astring: "/path/to/error/logfile"
svc:/application/pkg/server:packages> setprop pkg/standalone = boolean: false
svc:/application/pkg/server:packages> refresh
svc:/application/pkg/server:packages> quit

Once you’ve got those steps completed, it’s time to enable the services and add the publisher:

# svcadm enable pkg/server:packages pkg/depot:default
# pkg set-publisher -g https://your.ssl.url.here/packages/packages yourpublishername
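
As a quick sanity check, you can also ask the repository to describe itself before (or after) adding the publisher - assuming the illustrative URL above:

# pkgrepo info -s https://your.ssl.url.here/packages/packages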

Pretty simple (now that you know how).




Darktable 1.6.8 for Solaris 11.3 (beta)

As promised, I've built the latest release of Darktable (1.6.8) for Solaris 11.3 Beta. You can get the release details at darktable 1.6.8.

I've also (finally) set up my own pkg(5) server, so you can simply utter

$ sudo pkg set-publisher -g https://www.jmcpdotcom.com:10001 JMCP
$ sudo pkg install 'pkg://JMCP/JMCPdarktable*'

If you'd just like a gzipped p5a, please use this link instead.

Following my previous post, I've ensured that the bits were built with -msse4.2.




Fixing up GPX ride data with lxml.etree

Last week I went for a ride on a rather grey day. The route was one of my usuals (40.2km in to the Goodwill Bridge), and I shaved a minute or two off the ride time. I was mightily disappointed to find that both ipBike and Strava reckoned I had ridden a bare 100m. This, despite the ipBike summary field claiming "40.270km with 347m climb in 1:43:41".

I had an incident like this happen to me earlier this year and was unable to fix it using SNAP, so I figured it was time to bite the bullet and fix the recorded file, particularly since the temperature, heart rate and cadence all appeared to be correctly recorded.

My first pass attempted to make use of Tomo Krajina's gpxpy, which was fine until I realised that that library cannot handle the TrackPointExtensions that Garmin defined.

I then tried to make headway using minidom, but got myself tied in knots trying to create new document nodes. I'm sure I missed something quite obvious there but I'm not really worried. Note in passing: Lode Nachtergaele's http://castfortwo.blogspot.com.au/2014/06/parsing-strava-gpx-file-with-python.html was really useful, and helped with my final attempt.

My final (and successful) attempt uses lxml.etree to pull out the info I need, skip a few points (the two rides had different, and somewhat dubious, elapsed times) and then create a new GPX document with the munged data points.

While I've now got a close-enough fixed-up file, I'm down about 2km on the ride total, and up about 30m on the climbing total (according to Strava). I am quite happy with the results overall, though more than willing to accept that my code (below) is rather fugly. Good thing I'm not integrating this into a project gate!

#!/usr/bin/python
#
# Copyright (c) 2015, James C. McPherson. All Rights Reserved
#

from datetime import datetime, time, date
from copy import deepcopy
from lxml import etree as ET

NSMAP = {
    'gpxx': 'http://www.garmin.com/xmlschemas/GpxExtensions/v3',
    None: 'http://www.topografix.com/GPX/1/1',
    'gpxtpx': 'http://www.garmin.com/xmlschemas/TrackPointExtension/v1',
    'xsi': 'http://www.w3.org/2001/XMLSchema-instance'
}

# xsi:schemaLocation is a whitespace-separated list of
# namespace URI / schema document pairs.
schemaLocation = "http://www.topografix.com/GPX/1/1 "
schemaLocation += "http://www.topografix.com/GPX/1/1/gpx.xsd "
schemaLocation += "http://www.garmin.com/xmlschemas/GpxExtensions/v3 "
schemaLocation += "http://www.garmin.com/xmlschemas/GpxExtensionsv3.xsd "
schemaLocation += "http://www.garmin.com/xmlschemas/TrackPointExtension/v1 "
schemaLocation += "http://www.garmin.com/xmlschemas/TrackPointExtensionv1.xsd"

gpxns = "{http://www.topografix.com/GPX/1/1}"
extns = "{http://www.garmin.com/xmlschemas/TrackPointExtension/v1}"

reftracks = []
failtracks = []


def parseTrack(trk, stime, keep=None):
    tracks = {}
    for s in trk.findall("%strkseg" % gpxns):
        for p in s.findall("%strkpt" % gpxns):
            # latitude and longitude are attributes of the trkpt node
            # but elevation is a child node in its own right
            el = {}
            el['lat'] = p.get("lat")
            el['lon'] = p.get("lon")
            el['ele'] = p.find("%sele" % gpxns).text
            if keep:
                # keep a deep copy of the whole trkpt node so that its
                # attributes can be rewritten later
                el['trkpt'] = deepcopy(p)
            rfc3339 = p.find("%stime" % gpxns).text
            try:
                t = datetime.strptime(rfc3339, '%Y-%m-%dT%H:%M:%S.%fZ')
            except ValueError:
                t = datetime.strptime(rfc3339, '%Y-%m-%dT%H:%M:%SZ')
            # key each point by its offset in seconds from the start of
            # the ride, so points from the two rides can be lined up
            sec_t = int(t.strftime("%s"))
            el['time'] = rfc3339
            tracks[sec_t - stime] = el
    return tracks

##
# Main routine starts here.
##

# "goodfile" is the reference ride (with correct GPS data), while
# "dodgyfile" is the broken recording whose sensor data we want to keep.
rf1 = open("goodfile")
ff1 = open("dodgyfile")
rf = ET.parse(rf1)
ff = ET.parse(ff1)

rstimestr = rf.getroot().find("%smetadata" % gpxns).find("%stime" % gpxns).text
rstime = int(datetime.strptime(rstimestr, "%Y-%m-%dT%H:%M:%SZ").strftime("%s"))
fstimestr = ff.getroot().find("%smetadata" % gpxns).find("%stime" % gpxns).text
fstime = int(datetime.strptime(fstimestr, "%Y-%m-%dT%H:%M:%SZ").strftime("%s"))

for track in rf.findall("%strk" % gpxns):
    reftracks.append(parseTrack(track, rstime, False))

for track in ff.findall("%strk" % gpxns):
    failtracks.append(parseTrack(track, fstime, True))

# Now we need to fix node attributes in failtracks
# We're being lazy, so assume only one key for now

rpts = len(reftracks[0].keys())
fpts = len(failtracks[0].keys())

if fpts > rpts:
    skipn = fpts % rpts
else:
    skipn = rpts % fpts

# create a "fixed" track

ntrack = ET.Element("%strk" % gpxns)
ntrkname = ET.SubElement(ntrack, "%sname" % gpxns)

ntrkname.text = ff.find("%strk" % gpxns).find("%sname" % gpxns).text

ntseg = ET.SubElement(ntrack, "%strkseg" % gpxns)

# walk the failed track's points in time order; every skipn-th point
# is dropped to account for the differing point counts
for (n, key) in enumerate(sorted(failtracks[0].keys())):
    if n % skipn == 0:
        continue
    if key not in reftracks[0]:
        # no reference point recorded at this offset
        continue
    badn = failtracks[0][key]['trkpt']
    goodn = reftracks[0][key]
    badn.set('lat', goodn['lat'])
    badn.set('lon', goodn['lon'])
    badne = badn.find("%sele" % gpxns)
    badne.text = goodn['ele']
    extn = badn.find("%sextensions" % gpxns)
    if extn is not None:
        # move the extensions node back to the end of the trkpt,
        # where the GPX schema expects it
        badn.append(extn)
    ntseg.append(badn)

# write a new file....
newf = open("fixedp.gpx", "w")
gpx = ET.Element("%sgpx" % gpxns, nsmap=NSMAP)
gpx.set("creator", "James C. McPherson")
gpx.set("version", "1.1")
gpx.set("{http://www.w3.org/2001/XMLSchema-instance}schemaLocation",
        schemaLocation)
gpx.append(ntrack)
et = ET.ElementTree(gpx)
et.write(newf)
newf.close()



Building a new system to run Solaris 12

Several years ago I wrote about migrating my personal webserver from work's Ultra20 to my own Ultra20 M2. Sadly, that Ultra20 M2 (dual-core opterons, 2Gb ram) has just about given up being able to cope with what I need it to do.

Externally, I run the www.jmcpdotcom.com web and mail (client and server) services on it. Internally it's our media server, sharing out our collection of music, photos, movies and TV via NFS to the rpi. I just don't have enough ram in the U20M2 to do the needful, to the extent that when I've needed to transcode media files I do it from my workstation (Lenovo quad-core, 32Gb ram) across the gigabit backbone we're running. Ok, so that's a physical distance of about 1.5m, but it has still meant that I manually schedule intensive tasks for later at night when the family's asleep. That includes pkg update to whatever the latest available build is, since it regularly takes about 3 hours on the U20M2 rather than 15 minutes on my kernel zone or the laptop.

So... I had to build a new box.

I did a reasonable amount of research into what I could get, with the general provision that I should be able to bump the cpu and ram up with minimal effort. A high priority (but not mandatory) desire was for an onboard Intel gigabit nic - they appear to me to be the most robust solution on Solaris today. I also wanted to migrate from the mpt(7d) SAS HBA to an mpt_sas-based one if I could find one cheap enough. What I ended up with was an ASRock Fatal1ty H87 motherboard in a Cooler Master case, plus an IBM mpt_sas HBA.

I'll comment on the specific features of the motherboard and case a little later.

This motherboard is UEFI, so I couldn't just take the disks out of the U20M2 and boot up the new system straight away. I pulled down the most recent Solaris 12 text installer usb image, booted and installed a basic bootable image. The cpio stage (the final stage) took a shade over 2 minutes to blat the image onto the disk. While it would have been really nice to then zfs send | zfs recv a snapshot of the U20M2 current BE across to the new box and try to boot it, I figured I had too many tweaks to make. I do so like customising my systems, y'see.

Still, it was very easy to copy across my cyrus-imap, mysql, exim, squid, privoxy and apache configurations. I also performed the rather simplistic operation of grabbing the list of installed packages from the U20M2 then uttering pkg install $LIST in the new system. Since most of them were in my local cache, that was a fairly quick operation to complete.
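
For the record, that package-list operation was along these lines (the scratch file name is illustrative, and you may want to prune incorporations from the list first):

old# pkg list -H | awk '{print $1}' > /tmp/pkglist
new# pkg install $(cat /tmp/pkglist)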

The service which caused me most annoyance was my dhcp server. I run this as a convenience for the hardware we have (and which visitors might bring) and while I do manual address assignment for certain systems, it's mostly completely automatic. After copying the old config files into place and creating /var/db/dhcpd.leases I thought I'd remembered everything. However, I'd missed one thing:

May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] No subnet declaration for ext0 (192.168.2.30).
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error] ** Ignoring requests on ext0.  If this is not what
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error]    you want, please write a subnet declaration
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error]    in your dhcpd.conf file for the network segment
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error]    to which interface ext0 is attached. **
May 30 13:14:57 orrery dhcpd: [ID 702911 local7.error]
May 30 13:15:04 orrery dhcpd: [ID 702911 local7.error] setsockopt: IP_PKTINFO: Bad file number

I've got two interfaces on this box, one physical, one vnic:

$ dladm show-phys; dladm show-vnic ; dladm show-ether
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0

LINK              OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS
ext0              net0              1000   2:8:20:6e:f5:f5   fixed       0

LINK              PTYPE    STATE    AUTO  SPEED-DUPLEX                    PAUSE
net0              current  up       yes   1G-f                            bi

(The U20M2 has two physical nge interfaces; I'd like to get another physical gig-E nic for this new system at some point).

The thing I'd forgotten to do was tell the dhcp server which interfaces to listen on. I don't want it to listen on ext0, just net0.

$ sudo svccfg -s dhcp/server:ipv4
Password:
svc:/network/dhcp/server:ipv4> listprop config/listen_ifnames
config/listen_ifnames astring
svc:/network/dhcp/server:ipv4>

This, fortunately, is a very easy thing to fix:

svc:/network/dhcp/server:ipv4> setprop config/listen_ifnames = net0
svc:/network/dhcp/server:ipv4> refresh
svc:/network/dhcp/server:ipv4> quit
# svcadm refresh dhcp/server:ipv4 ; svcadm restart dhcp/server:ipv4

And off we went.

The next phase was to power down both old and new systems, and transfer disks. The IBM mpt_sas card only has one external SFF-8088 interface, but has four internal SATA connectors. I took four disks out of the 8-disk Sans Digital TowerRAID TR8X+ (an 8-bay SAS/SATA JBOD) and installed them inside the Cooler Master case, then booted up again.

All the ZFS pools imported without any hassles, and apache/squid/exim/dhcp/cyrus-imap all came up immediately. ALL GOOD!

Since one aspect of the zpool import was automatic enabling of the various nfs and smb shares, the new system was then declared to Be Sufficient(tm) and have WAF > 0.

Phew!

A note on the case

Having worked on and around Sun (now Oracle)-designed hardware for quite a few years now, I'm delighted to see that concepts like disk sled rails have trickled down to the consumer end of the market. That said, why would you advertise a case as accepting 7 3.5" internal disks, but only supply rails for 4 disks? Also, I found it a bit difficult to get the SATA power and data cables connected to the internal disks without taking off the back side of the case as well. This is an area where a SATA disk backplane

http://www.nessales.com/ebay/20507/Sun%20Ultra%2020%20TF-PWA%20SAS%20SATA%20Disk%20Backplane%20373-0057-01%20Pic%202.jpg

comes in handy. I'd pay another AUD30 on top of the case price for a feature like that. Just sayin'.

A note on the motherboard

While I'd really like to have two onboard Intel gigE nics, I'm fine with just one. There's plenty of headroom for adding a PCI-E x1 (or even a PCI) card to handle that. There are the standard 4 DDR3 slots, currently populated with 4Gb dimms - that seems like plenty, at least for the moment. There's a serial port header if I want to get that set up (meh.... maybe), and the only niggle I have is that the Intel HD Audio doesn't seem to be supported by audiohd at the moment. I'll log a bug for that but really, who needs audio output capabilities on a server?

For El Goog: here's the output of prtconf -v for the board: ASRock_Fatal1ty_H87-prtconf-v.txt. I don't have lspci, but I do have scanpci output: ASRock_Fatal1ty_H87-scanpci.txt and ASRock_Fatal1ty_H87-scanpci-v.txt.

I hope you find it useful.







A collection of laziness

In preparation for my trip to Wellington next week, to present at the Multicore World 2013 conference, I’ve been building up a new vbox instance on my laptop.

To my great annoyance, I have to run Microsoft Windows 7 (x64) on my laptop, for three reasons: $EMPLOYER uses Cisco IP Communicator as a VoIP solution (no Solaris version), when I travel overseas I call J and the kids every day using skype video calling, and the Intel HD graphics support in Xorg on Solaris is somewhat flaky.

So I run Solaris inside vbox, in seamless mode, and do all my coding and sysadminning tasks using Proper Editors(tm) and other interfaces.

Here’s the first bit of laziness: installations.

Two weeks ago I took delivery of a new workstation (a Lenovo M82, which promptly received a memory, disk and graphics upgrade) to replace the aging, hot and noisy Ultra40 M2 I’ve had since 2007. I downloaded the latest (at that point) USB live image from the internal site and happily booted + installed what I needed to… before remembering that the installer blats over any existing disk partitioning you’ve got. That was rather annoying, since I like to use raw slices for swap and dump devices (an old habit) rather than zvols. In order to keep my existing configurations, I cheated. Just a little (ok, rather a lot).

Technically, I should have set up an AI server and gone through my list of installed packages and made sure that I had them listed, blah blah blah. However, what I chose to do was zfs snap my current BE, boot the new box using the liveusb image (with livessh mode enabled), create my desired rpool config, then zfs send|zfs recv the snapshot… and mount the snapshot, make appropriate edits, install the bootloader and then reboot.
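
In rough outline, the cheat looks something like this (the BE, pool and host names are illustrative, and I'm glossing over the rpool creation and the /etc edits):

old# zfs snapshot -r rpool/ROOT/mybe@migrate
old# zfs send -R rpool/ROOT/mybe@migrate | ssh newbox zfs recv -F rpool/ROOT/mybe
new# zpool set bootfs=rpool/ROOT/mybe rpool
new# bootadm install-bootloader -P rpool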

Simple!

It took about 25 minutes to transfer the snapshot from the Ultra 40 M2 to the Lenovo M82 (I’ve got a 16 port cheap-o gigE switch to connect the home systems), and about another 5 to go through the specific changes I needed in /etc (apart from those for svc:/system/identity:node and svc:/system/identity:domain). Installing the bootloader (grub2) was easy, but I had a weird problem getting the new system’s boot menu figured out. I ended up needing to remove /etc/zfs/zpool.cache and then recreate it (by running zpool status; zpool list would have done just as well). The SMF changes were done while I had the liveusb stick booted. To do this, after importing the new rpool I uttered

# svccfg
svc:>  repository /mnt/etc/svc/repository.db

and then selected the services I needed and edited them.

Prior to getting this Lenovo, I’d been using a non-global zone in the Ultra 40 M2 to serve out my webserver and handle mail. Part of this reconfiguration involved getting a miniSAS-connected jbod (from Other World Computing, shipped via Borderlinx) and physically moving my media and scratch pool disks into it. Then I attached the jbod to my Ultra 20 M2, and I just had to bring that up to date. So I snap’d the zone BE on the Ultra 40 M2, and send|recv’d it to the rpool on the Ultra 20 M2 – same liveusb+livessh sneakiness, same svccfg activities … and all done in under 30 minutes.

The longest part of the physical downtime was swapping the screws on the disk caddies when I removed them from the Ultra 40 M2 and inserted them into the jbod.

The second bit of laziness is not really laziness. I had a filesystem on the old box which I wanted to transfer to the new, and tried using rsync to move it. I’m a big fan of rsync, it does its job pretty well… mostly. However, this time it just wasn’t performing at all. I had about 8Gb to transfer, and after 30 minutes not only had I seen just 1.5Gb go across the wire, but trying to do any interactive work on the console of the other box was impossible. And I do mean, impossible – I got no response to keystrokes unless I paused the rsync. Then, having kicked myself because I really did just want the filesystem which happened to be on its own dataset, I snap’d it and kicked off a zfs send | zfs recv. All the data transferred in about 12 minutes, with interactive performance going very nicely as well.
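
For reference, the zfs approach is essentially a one-liner (dataset and host names illustrative):

# zfs snapshot tank/data@move
# zfs send tank/data@move | ssh newbox zfs recv tank/data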

Now for the final bit of laziness:

Cut to yesterday, when I remembered that I really should get a new Vbox instance cons’d up on the laptop in preparation for my trips (I’ve got WLG next week, and SCA in mid-March). I still had the build 13 liveusb stick image, and I just could not be bothered downloading the equivalent ISO image so that I could boot the vbox with it. Running strings on an older Solaris 11 update 1 ISO showed the mkisofs command line that Release Engineering used to create it. So, I figured I would copy their example.

Firstly, I needed to mount the USB image in the filesystem:

# lofiadm -a /path/to/liveusb.img
/dev/lofi/1
# fstyp /dev/lofi/1
ufs
# mount -F ufs -o rw /dev/lofi/1 /mnt

Great… UFS. Ugh. Now to create a bootable ISO from it:

# mkisofs -v -R -J -o /tmp/bootable.iso -c .catalog -b boot/bios.img -no-emul-boot -boot-load-size 4 \
    -boot-info-table -eltorito-platform efi -eltorito-alt-boot -b boot/uefi.img -no-emul-boot \
    -N -l -R -U -allow-multidot -no-iso-translate -cache-inodes -d -D  /mnt

This commandline gave me exactly what I needed, so I moved the resulting ISO image across to the fileserver, fired up VirtualBox on the laptop and attached the image to my new vbox instance, and started installing.
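
One bit of housekeeping once you're done with the image: unwind the loopback mount again:

# umount /mnt
# lofiadm -d /dev/lofi/1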

Easy (and rather quick, too).




A health turning point

I’ve been introspecting a fair bit, about health and fitness and wellbeing in general, and figured out where the turning point was for me (and for J): the point at which we decided we would take control of our wellbeing rather than just floating along.

Unsurprisingly, it occurred around the time when J was diagnosed with the meningioma in November 2011.

For me, 2011 was a very stressful year. As gatekeeper for the Solaris 11 ON consolidation (the core kernel and basic userland), I was at the very pointy end of making sure that we got the bits together for the release of Solaris 11. I’d had a horrendous trip to Beijing in July (laptop disk broke the day before a build close, I tore my medial meniscus while plugging in cables to a laptop, left my luggage in a taxi and missed the connecting flight in Hong Kong due to a recalcitrant passenger in Beijing delaying the PEK-HKG flight by 3 hours). I had surgery on my knee in September, October saw the final ON delivery – I’d been working more and more hours every single day to the point where I was usually working for 12 hours Monday to Saturday, and only 3-4 hours on Sundays. Utterly, freakin, crazy. A was born in March (which was the bright part of our year), and then in October after J’s dizziness and nausea hadn’t gone away, she went through a battery of tests and an MRI.

The post-Release holiday that we had planned was not as carefree as we had hoped. I vividly recall walking along the beach with J and the kids, talking with her about what the ENT might say about the spot we’d seen on the scan, whether it was malignant or not, whether she would need surgery….

We stayed at Sails on Horseshoe, one of the most peaceful and relaxing places we know. Horseshoe Bay is a stone’s throw (note, in metric terms that’s about 15 metres) from the front gate. It’s quiet. There’s a pub and a few cafes and restaurants at the other end of the street, and we love it. There is also a healthy food cafe where we had lunch several times. I don’t recall the name of the place, or what else was on the menu, but I do recall that they do freshly squeezed juice – and C loved it. I loved it. J loved it. We figured it was just a bit of a treat to have because we were on holidays in the tropics, but the appeal of that juice really stuck with us.

A week or so after we got back home I went for what was then a stretch ride – the 40km effort to the Goodwill Bridge and back. I was pleased that my times were getting better (from just over 2 hours to a few minutes under), and was starting to think that I should really be doing more riding – for stress relief, for health and wellbeing, and generally just because I like doing it. I recall that it was a hot morning and as I came up the rise past the Jindalee golf course I thought “I could really go some freshly squeezed juice when I get home.”

That was the turning point.

While I didn’t have any fresh juice that particular day, we did go and buy a juicer a few days later, and except for juice to accompany the kids’ meals if we’re out, we haven’t bought juice since then. We’ll make fresh juice every week or two, generally with some apples that have been on our local fresh fruit shop’s “reduced for quick sale” stack – they’re still good enough to eat, but you can get fresher for eating. We usually do apple and pear, with a knob of ginger thrown in for some zing. Lately we’ve also been adding watermelon, which makes it taste lighter. Sometimes I’ll do oranges or pineapple. Oranges are a pain, though, because you have to spend a lot of time peeling them before you can juice them and frankly, I want my juice now!

;-)

The kids love the fresh juice, they appreciate that it’s a treat, and if I miss out the ginger C will let me know pretty quickly.

We took charge of our wellbeing, deciding to be active and mindful about what we consume. It’s an ongoing process and commitment which we are going to keep doing. We’ve reinforced this commitment with the meal plan system we put in place last year. When we look back over the last 2 years, which have been very stressful and full of worry, it’s reassuring to realise that not only do we feel better, but we are healthier than we ever have been because of the changes we have made to how we think about food (and exercise).




Asserting control over my reality during a difficult time

When J was in hospital for 3 weeks last year, I was stressed. Very, very stressed. I put the kids into daycare again (Thursday and Friday), and her mum came to stay with us and take care of them the rest of the time. I need to point out that having J’s mum stay with us was not what made me stressed. What I was stressed about was worrying how long J would be in hospital, whether she’d get worse or better, and how to make sure the kids knew enough about what was going on.

One of the coping strategies I implemented in short order was to write weekly meal plans. We had a number of A5 sized notebooks around the house, which worked out to be just about the right size for what we needed (especially given my terrible handwriting).

Here’s an example. This week’s list is fairly standard, though there’s only one veg-only meal and we’re having a lot of salad right now because it’s ETOODAMNEDHOT:

The meal plan

Sunday: chicken stirfry noodles
Monday: bbq miniburgers/steaks with salad
Tuesday: chicken wraps with salad
Wednesday: J’s butter chicken
Thursday: pesto pasta and vegies
Friday: chicken and pumpkin risotto
Saturday: bbq snags and salad

The miniburgers are those you can get from our local Coles in a pack of 8; they’re just the right size for the kids. The chicken wraps are our excuse to get something vaguely Mexican flavoured into our diet, rather than merely lots of Italian and Chinese inspired dishes. The kids don’t need to have the guacamole or any salsa, so J and I can have just a bit more of it to ourselves.

Nothing particularly unhealthy in there, nothing particularly flash. What the plan does give me, though, is time. I know roughly how long each meal takes to prepare, so if I work backwards from the ideal on-table window being 5:30-6pm (we like to get the kids bathed+dressed and A in to bed by about 7pm), then I know when I really have to stop for the day. I also try to work with the assumption that 100g of meat and 50g of carbohydrate (pasta or rice or spuds) per adult is about sufficient, assuming that we have pretty much free rein with veggies and/or salad to bulk things up as required.

This meal plan idea worked sufficiently well for us that we have kept it going and don’t expect to stop. It helps us keep our portion sizes in order, helps with the grocery bill since we can then plan just what we need each week and don’t wind up with a pantry or fridge that’s full of food which could go to waste, and helps relieve time pressure too. Wins all around.