Posts for year 2017

Oracle Instant Client v12.2 for Solaris now available via IPS

A little over two years ago I wrote that the Oracle Instant Client v12.1 was now available in IPS format from http://pkg.oracle.com, so it was a very simple matter to utter

# pkg install *instantclient*

and you would get the Instant Client packages installed (with a correct RPATH, too).

The Database group released v12.2 a few months ago, and after a delay caused by the significant reorganisation of the Systems division, I am really pleased to announce that we have that version packaged in IPS format as well.

We've made a few changes to the package naming, so that you can install both 12.1 and 12.2 on your system at the same time. There's also a mediator so that /usr/bin/sqlplus will point to the preferred version. To install this new version via IPS, you do need to have access to the Solaris 11 Support Repository. (Visit https://pkg-register.oracle.com to get started).
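If you'd prefer sqlplus and friends to resolve to the 12.1 version instead, the mediator can be adjusted by hand. A quick sketch (the mediator name and version values are as per the packaging output below):

# pkg set-mediator -V 12.1 instantclient

and to return to the vendor default:

# pkg unset-mediator instantclient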

root@burn11x:~# pkg install -nv *instantclient*
           Packages to install:       9
           Mediators to change:       1
     Estimated space available: 8.50 GB
Estimated space to be consumed: 1.75 GB
       Create boot environment:      No
Create backup boot environment:      No
          Rebuild boot archive:      No

Changed mediators:
  mediator instantclient:
           version: None -> 12.2 (vendor default)

Changed packages:
solaris
  consolidation/instantclient/instantclient-incorporation
    None -> 12.2.0.1.0,5.11-4:20171026T193617Z
  database/oracle/instantclient-121
    None -> 12.1.0.2.0,5.11-4:20171027T004528Z
  database/oracle/instantclient-122
    None -> 12.2.0.1.0,5.11-4:20171026T193622Z
  database/oracle/instantclient/jdbc-supplement-121
    None -> 12.1.0.2.0,5.11-4:20171027T004524Z
  database/oracle/instantclient/jdbc-supplement-122
    None -> 12.2.0.1.0,5.11-4:20171026T193618Z
  database/oracle/instantclient/odbc-supplement-121
    None -> 12.1.0.2.0,5.11-4:20171027T004526Z
  database/oracle/instantclient/odbc-supplement-122
    None -> 12.2.0.1.0,5.11-4:20171026T193619Z
  developer/oracle/instantclient/sdk-121
    None -> 12.1.0.2.0,5.11-4:20171027T004538Z
  developer/oracle/instantclient/sdk-122
    None -> 12.2.0.1.0,5.11-4:20171026T194032Z

If you had the 12.1 packages installed already, then a pkg update would get you the renamed packages (database/oracle/instantclient-121).




NVLists for fun and profit

Over the years that I've worked on Solaris, I've come to know and love libnvpair. We use it all over the place, from the kernel up through bits of userspace. If I were reimplementing fwflash today, I'd use libnvpair rather than <sys/queue.h>.

One of the things that you might not be aware of is that we ship python bindings for this library, and while they're not perfect, they are very, very useful. Let's have a look at how you can delve into one particular feature of your Solaris system: the zpool cache.

Before we start, you need to know that /etc/zfs/zpool.cache is NOT AN INTERFACE: if you edit that file you could muck up your zpool configurations. This post is just an example of how we can extract nvlist data.
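(If all you're after is a safe, read-only look at what's in the cache, zdb will unpack and print it for you; something like

# zdb -C soundandvision

should do the trick, though the output format varies between releases.)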

With that warning proclaimed, let's have a look at this file.

On our Solaris media server I have a large pool called soundandvision to store our photos along with music and movies that I've ripped from CDs, DVDs and Blu-Rays over the years. Here's what zpool status tells me about this right now:

$ zpool status soundandvision
  pool: soundandvision
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 1.60T in 6h36m with 0 errors on Thu Jul  6 22:07:18 2017

config:

        NAME                         STATE      READ WRITE CKSUM
        soundandvision               DEGRADED      0     0     0
          mirror-0                   ONLINE        0     0     0
            c2t3d0                   ONLINE        0     0     0
            c0t5000039FF3F0D8D9d0    ONLINE        0     0     0
          mirror-1                   ONLINE        0     0     0
            c0t5000CCA248E72F12d0    ONLINE        0     0     0
            c0t5000CCA248E728B6d0    ONLINE        0     0     0
          mirror-2                   DEGRADED      0     0     0
            c0t5000039FF3F0D2F0d0    ONLINE        0     0     0
            spare-1                  DEGRADED      0     0     0
              c0t5000039FE2DF1C15d0  REMOVED       0     0     0
              c0t5000C50067485F33d0  ONLINE        0     0     0
        spares
          c0t5000C50067485F33d0      INUSE

errors: No known data errors

Yes, I do need to pop along to my local PC bits shop and replace that removed disk. What can we find out about that disk, though?

Wow, I really left this post incomplete, didn't I!
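For what it's worth, here's roughly where I was headed. This is a minimal sketch that goes via ctypes straight at libnvpair rather than the Python bindings I mentioned; nvlist_unpack(3NVPAIR) does the heavy lifting, and dump_nvlist() is the pretty-printer that zdb uses. Read-only, in keeping with the warning above:

#!/usr/bin/python3
# Unpack /etc/zfs/zpool.cache and pretty-print the nvlist tree.
import ctypes

nvpair = ctypes.CDLL("libnvpair.so.1")
nvpair.nvlist_unpack.argtypes = (ctypes.c_char_p, ctypes.c_size_t,
                                 ctypes.POINTER(ctypes.c_void_p), ctypes.c_int)
nvpair.nvlist_unpack.restype = ctypes.c_int
nvpair.dump_nvlist.argtypes = (ctypes.c_void_p, ctypes.c_int)
nvpair.nvlist_free.argtypes = (ctypes.c_void_p,)

with open("/etc/zfs/zpool.cache", "rb") as f:
    buf = f.read()

nvl = ctypes.c_void_p()
# int nvlist_unpack(char *buf, size_t buflen, nvlist_t **nvlp, int flag)
err = nvpair.nvlist_unpack(buf, len(buf), ctypes.byref(nvl), 0)
if err != 0:
    raise OSError(err, "nvlist_unpack failed")

# dump_nvlist() walks the tree and prints each nvpair to stdout
nvpair.dump_nvlist(nvl, 0)
nvpair.nvlist_free(nvl)

Each pool appears in the cache as a top-level nvpair whose value is that pool's config nvlist, so from there it's a matter of walking down the vdev tree to the REMOVED disk.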




Long Fat Networks

Living in Australia generally means that you're on the end of a Long Fat Network (LFN), internet-wise. That's a serious technical term (a network with a large bandwidth-delay product), and it matters to the networking stack when determining optimal data transfer sizes.
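To put a number on that, the bandwidth-delay product (link speed multiplied by round-trip time) tells you how much data must be in flight to keep the pipe full. A quick back-of-the-envelope in Python, assuming the 100Mbit NBN tier and a 170ms round trip to the US west coast (that RTT is my assumption, not a measurement):

bandwidth_bps = 100 * 10**6        # 100Mbit/s, in bits per second
rtt_s = 0.170                      # assumed Australia <-> Bay Area round trip
bdp_bytes = (bandwidth_bps / 8) * rtt_s
print("BDP ~ %.1f MiB" % (bdp_bytes / 2**20))   # prints "BDP ~ 2.0 MiB"

A shade over 2MiB needs to be unacknowledged on the wire at any instant, which, as we'll see, is rather more than the stack's 1MiB defaults.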

Two of my colleagues down in Melbourne are also with Aussie Broadband and using the top (100Mbit down, 40Mbit up) NBN speed tier. We also have company-issued hardware VPN units because we work from home full-time. I was delighted at the bandwidth available from Aussie for our connections to work systems in the SF Bay Area, and when I had cause to update my systems to a new build I observed that it now took about 55 minutes on our media server, rather than the 80-90 minutes it took with the SkyMesh connection.

There was a fly in the ointment, however: my colleagues and I calculated that while we should be getting 1MB/s or more as a sustained transfer rate from the internal pkg server, we'd often get around 400KB/s. Since networking is supposed to be something Solaris is good at, we started digging.

The first thing we looked at was the receive buffer size, which defaults to 1MB. Greg found https://fasterdata.es.net/host-tuning/other/, so we changed that for tcp, udp and sctp. While the fasterdata document talks about using /usr/sbin/ndd, the Proper Way(tm) to do this in Solaris 11.x is with /usr/sbin/ipadm:

 # for pp in tcp udp sctp; do ipadm show-prop -p max-buf $pp; done

PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   max-buf               rw   1048576      --           1048576      1048576-1073741824

PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
udp   max-buf               rw   2097152      --           2097152      65536-1073741824

PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
sctp  max-buf               rw   1048576      --           1048576      102400-1073741824

To effect a quick and persistent change, we uttered:

# for pp in tcp udp sctp; do ipadm set-prop -p max-buf=1073741824 $pp; done
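As it happens, ipadm set-prop changes are persistent by default. Had we wanted to trial the setting first, the -t flag applies it temporarily, lasting only until the next reboot:

# for pp in tcp udp sctp; do ipadm set-prop -t -p max-buf=1073741824 $pp; done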

While that did seem to make a positive difference, transferring a sample large file from across the Pacific still cycled up and down in the transfer rate. The cycling was really annoying. We kept digging.

The next thing we investigated was the congestion window, which is where the aforementioned LFN comes into play. That property is cwnd-max:

 # for pp in tcp sctp; do ipadm show-prop -p cwnd-max $pp; done

PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   cwnd-max              rw   1048576      --           1048576      128-1073741824

PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
sctp  cwnd-max              rw   1048576      --           1048576      128-1073741824

Figuring that if it was worth doing, it was worth overdoing, we bumped that parameter up too:

# for pp in tcp sctp; do ipadm set-prop -p cwnd-max=1073741824 $pp; done
$ curl -o moz.bz2 http://ftp.mozilla.org/pub/mozilla/VMs/CentOS5-ReferencePlatform.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
  3 3091M    3  102M    0     0  4542k      0  0:11:36  0:00:23  0:11:13 5747k^C

While that speed cycled around a lot, it mostly remained above 5MB/s.

Another large improvement. Yay!

However... we still saw the cycling. Intriguingly, the period was about 20 seconds, so there was still something else to twiddle.

In the meantime, however, I decided to update our media server.

I was blown away.

23 minutes 1 second

Not bad at all, even considering that when pkg(1) is transferring lots of small files it's difficult to keep the pipes filled.

Now that both Greg and I had several interesting data points to consider, I asked some of our network gurus for advice on what else we could look at. N suggested looking at the actual congestion algorithm in use, and pointed me to this article on High speed TCP.

High-speed TCP (HS-TCP). HS-TCP is an update of TCP that reacts better when using large congestion windows on high-bandwidth, high-latency networks.

The Solaris default is the newreno algorithm:

 # ipadm show-prop -p cong-default,cong-enabled tcp
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   cong-default          rw   newreno      --           newreno      newreno,cubic,
                                                                        dctcp,
                                                                        highspeed,
                                                                        vegas
tcp   cong-enabled          rw   newreno,     newreno,     newreno      newreno,cubic,
                                 cubic,dctcp, cubic,dctcp,              dctcp,
                                 highspeed,   highspeed,                highspeed,
                                 vegas        vegas                     vegas

Changing that was easy:

# for pp in tcp sctp ; do ipadm set-prop -p cong-default=highspeed $pp; done
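And should highspeed turn out to be a poor fit, backing out is just as easy; reset-prop returns a property to its default:

# for pp in tcp sctp; do ipadm reset-prop -p cong-default $pp; done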

Off to pull down that bz2 from mozilla.org again:

 $ curl -o blah.tar.bz2 http://ftp.mozilla.org/pub/mozilla/VMs/CentOS5-ReferencePlatform.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3091M  100 3091M    0     0  5866k      0  0:08:59  0:08:59 --:--:-- 8684k

For a more local test (within Australia) I made use of Internode's facility:

$ curl -o t.test http://mirror.internode.on.net/pub/test/1000meg.test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  953M  100  953M    0     0  10.0M      0  0:01:35  0:01:35 --:--:-- 11.0M

And finally, updating my global zone.

 # time pkg update --be-name $NEWBE core-os@$version *incorporation@$version
            Packages to update: 291
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            291/291     2025/2025  116.9/116.9  317k/s

PHASE                                          ITEMS
Removing old actions                       1544/1544
Installing new actions                     1552/1552
Updating modified actions                  2358/2358
Updating package state database                 Done
Updating package cache                       291/291
Updating image state                            Done
Creating fast lookup database                   Done
Reading search index                            Done
Building new search index                  1932/1932

A clone of $oldbe exists and has been updated and activated.
On the next boot the Boot Environment be://rpool/$newbe will be
mounted on '/'.  Reboot when ready to switch to this updated BE.


real    12m30.391s
user    4m4.173s
sys     0m21.496s

I think that's sufficient.







Ten years ago

Since about 5:30pm last night, we have had the keys to our home for ten years. After the mad scramble to clean out the rented unit in Sydney, we took our time driving up the coast before pulling in to the driveway and collecting the keys from the letterbox. The house was dark and very, very empty. We wondered what we'd gotten ourselves into ("eeeek, it's real!").

The next day, our belongings arrived - the 20ft shipping container that everything fit into had gone via train and then truck. Charlie (codercat) arrived the day after that, and was rather grumpy. I don't think she liked flying or being cooped up.

Since that day in 2007, we've done a few things: turned the second garage space into my home office, renovated the pool, kitchen and laundry, gone through IVF to bring our two amazing children into this world, gotten through J's brain tumour and treatment, and tiled the loungeroom.

We've met so many people who have changed our lives since we moved into the area, people who we might never have met if we had moved to a house even a few streets away.

We had a small celebration of the event for dinner last night: I cooked J the dish that I first cooked for her, and we had a rather nice bottle of bubbly.

Here's to the next ten years!




On encryption and backdoors

For those people who think that it's appropriate, measured and useful for the Attorney General Senator George Brandis and Prime Minister Malcolm Turnbull to be talking about forcing tech companies and ISPs to insert backdoors into their products to enable near real-time decryption of messages, my colleagues in the IT Professionals Association (formerly SAGE-Au) have something for you to consider right now:

https://www.itpa.org.au/news/federal-government-again-gets-it-wrong-when-suggesting-it-policy-direction/

We've already had the Crypto Wars, and the insanity which was the Clipper Chip. We don't need to revisit that time. We don't need to go back to the time when encryption was decreed to be a munition, and therefore subject to export controls.

Don't think this is just about messaging (whether instant or email), either. Think about your internet banking options - not feasible to trust without strong encryption. Think about the intellectual property your company has developed, or the client records it keeps behind a firewall and requires authentication to access.

Think about your personal health records. Your tax records. The security of all these things from people who could and would do you harm is compromised when governments mandate backdoors into the security software which protects them.

What we really, desperately need is for governments (of all stripes, and in all countries) to recognise that they cannot solve their terrrrrrrism problems by making everybody less safe.

Maths isn't the problem here.




An easy way to generate a contact sheet in MacOS

We're getting ready for J's 40th birthday, and asked a good friend who's handy with presentation software to put together a photo-based invitation for us to print. That worked nicely until we got to the point of wanting to put four of them on the same A4 page.

My first few attempts were to convert the pptx to pdf, then import into GIMP, scale and then put four copies into one new image. This failed miserably when it came to the text - full of jaggies.

What I needed was a contact sheet.

Most of the hits you'll see in a simple search are for creating contact sheets with Adobe products such as Illustrator, Photoshop or Bridge. Since I don't have those, I tried using psnup and psbind, but couldn't figure out the right invocation. Then I went back to GIMP, and found Indexprint, but that wasn't quite right either.

Finally, in desperation I went back into PowerPoint, copied the slide another three times, and then chose Print to PDF. I had a quick look in Preview.app, went wandering through the print dialog, and noticed the Layout menu. That menu had just what I needed - 'n' pages per page!

[Image: 4-up preview in the print dialog]

A few minutes later, lp2onfire had delivered a nice A4-sized photo contact sheet.







An SMF service and manifest for Smokeping

A few weeks after we got our NBN HFC1 service up and running, I set up Smokeping, which has been quite useful. What I forgot to do was create an SMF manifest and service script for it, so I missed a few days of monitoring when I updated to a newer build of Solaris.Next (or whatever that release ends up being called).

I've now fixed that oversight and thought I should share what I'd written. My installation is at /opt/smokeping/2.6.11, for the record.

Firstly, the manifest:

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<!--
 Copyright (c) 2017 James C. McPherson. All rights reserved.
-->

<service_bundle type='manifest' name='smokeping'>

<service
    name='network/smokeping'
    type='service'
    version='1'>

    <create_default_instance enabled='true' />

    <single_instance/>

        <dependency
                name='smokeping'
                type='service'
                grouping='require_all'
                restart_on='refresh'>
                <service_fmri value='svc:/network/http:apache24' />
        </dependency>

    <exec_method
        type='method'
        name='start'
        exec='/lib/svc/method/svc-smokeping %m'
        timeout_seconds='600' />

    <exec_method
        type='method'
        name='stop'
        exec=':kill'
        timeout_seconds='60' />

    <exec_method
        type='method'
        name='refresh'
        exec='/lib/svc/method/svc-smokeping %m'
        timeout_seconds='60' />

    <template>
        <common_name>
            <loctext xml:lang='C'>
                Smokeping latency graph
            </loctext>
        </common_name>
        <description>
            <loctext xml:lang='C'>
Provides information about upstream connection latency.
            </loctext>
        </description>
    </template>

</service>

</service_bundle>

Now for the script itself:

#!/bin/sh

#
# Copyright (c) 2017, James C. McPherson. All rights reserved.
#

. /lib/svc/share/smf_include.sh

SMOKEPING=/opt/smokeping/2.6.11/bin/smokeping
PIDFILE=/var/smokeping/var/smokeping.pid


case "$1" in
'start')
        # check to see if we're still running
        if [ -f $PIDFILE ]; then
                ps -fp `cat $PIDFILE`;
                if [ $? -eq 0 ]; then
                        # still running, exit
                        exit 0
                fi
        fi
        rm -f $PIDFILE
        $SMOKEPING
        ;;

'restart')
    $SMOKEPING --restart
        ;;
'refresh')
    $SMOKEPING --reload
    ;;
'stop')
        pkill smokeping
        rm -f $PIDFILE
        ;;
'*')
        echo "what do you want to do?"
        exit 99
        ;;
esac

You can download the manifest and method script directly if desired. To use them, place the manifest in /lib/svc/manifest/site, place the script in /lib/svc/method, and svcadm restart manifest-import.
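Spelled out, assuming you've saved the files as smokeping.xml and svc-smokeping (those names are my choice, not gospel):

# cp smokeping.xml /lib/svc/manifest/site/
# cp svc-smokeping /lib/svc/method/
# chmod 755 /lib/svc/method/svc-smokeping
# svcadm restart manifest-import
# svcs smokeping

The final svcs invocation just confirms that the service imported and came online. Enjoy!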




I need a clock

In early December last year we replaced the unspeakably disgusting carpet in the loungeroom (it had been there since the house was built in the mid-80s) with some rather nice tiles. Fallout from that process included getting rid of our Ikea Billy CD shelves, moving a bookcase from one wall to another, and retiring (EOLing) our Beyonwiz DPP2 PVR. We haven't recorded live tv in a ~very~ long time, so the DPP2 was a rather expensive way of providing an ntp-anchored clock.

J expressed a desire for a replacement clock, and I've always appreciated having an actually accurate clock. So I acquired an Adafruit PiTFT panel. After being surprised that I had to solder the 40-pin socket connector myself (not having soldered anything in more than 10 years), I managed to do it well enough that the device came up fine on boot, with a working display:

[Image: first boot with the PiTFT attached]

Now, since the Pi in the loungeroom runs OSMC (that is, it's an appliance), it doesn't have the requisite Adafruit drivers in its repo. So... time to build a fresh kernel.

I build fresh Solaris kernels several times a day, and in 2014 Tim, Mark and I delivered a major rewrite of how we actually build core Solaris. But I haven't built a Linux kernel in about 20 years - I had to go looking for instructions on where to start! I've taken my lead from *khAttAm* and now I've got the repo building on the pi. It's going to take a while, though, because (a) the pi is fairly low-powered, and (b) I've set it up so that the OSMC home directory is actually mounted from our Solaris media server so we don't run out of space with the media db.

Anyway, once that kernel and its modules are built, I hope to schlep them into place and suddenly have a /dev/fb1 on which to display this:

#!/usr/bin/python3.4

# from http://stackoverflow.com/questions/7573031/when-i-use-update-with-tkinter-my-label-writes-another-line-instead-of-rewriti
# only slightly modified


import tkinter as tk
import time

class piClocknDate(tk.Tk):
    def __init__(self, *args, **kwargs):
        tk.Tk.__init__(self, *args, **kwargs)
        self.maxsize(width=320, height=240)
        self.resizable(0, 0)
        self.title("rpi Clock")
        self.fontC = "helvetica 36 bold"
        self.fontD = "helvetica 18 bold"
        self.padc = 40
        self.padd = 50
        self.clockL = tk.Label(self, text="", font=self.fontC,
                               padx=self.padc, pady=70,
                               foreground="light blue", background="black")
        self.clockL.pack()
        self.curdate = time.strftime("%d %B %Y", time.localtime())
        self.dateL = tk.Label(self, text=self.curdate, font=self.fontD,
                              padx=self.padd, pady=70,
                              foreground="blue", background="black")
        self.dateL.pack()

        # start the clock "ticking"
        self.update_clock()
        self.update_date()

    def update_clock(self):
        curt = time.localtime()
        disptime = time.strftime("%I:%M  %p", curt)
        secs = int(time.strftime("%S", curt))
        padx = self.padc
        # nudge the clock text sideways every 15 seconds so the
        # display isn't entirely static
        if secs % 15 == 0:
            padx = self.padc - 10
        self.clockL.configure(text=disptime, padx=padx)
        # call this function again in one second
        self.after(1000, self.update_clock)

    def update_date(self):
        # only redraw when the date actually changes
        newdate = time.strftime("%d %B %Y", time.localtime())
        if newdate != self.curdate:
            self.curdate = newdate
            self.dateL.configure(text=self.curdate, padx=self.padd)
        self.after(1000, self.update_date)


if __name__== "__main__":
    app = piClocknDate()
    app.mainloop()
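Once that /dev/fb1 shows up, the plan (untested as yet, so treat this as a sketch) is to point an X server at the small panel with the clock as its only client. The Adafruit-style recipe amounts to something like:

$ FRAMEBUFFER=/dev/fb1 startx /usr/bin/python3.4 /home/osmc/piclock.py

where piclock.py is whatever I end up calling the script above - that path is illustrative, not gospel.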