Oracle Instant Client v12.2 for Solaris now available via IPS

A little over two years ago I wrote that the Oracle Instant Client v12.1 was available in IPS format, so it was a very simple matter to utter

# pkg install *instantclient*

and you would get the Instant Client packages installed (with a correct RPATH, too).

The Database group released v12.2 a few months ago, and after a delay caused by the significant reorganisation of the Systems division, I am really pleased to announce that we have that version packaged in IPS format as well.

We've made a few changes to the package naming, so that you can install both 12.1 and 12.2 on your system at the same time. There's also a mediator, so that /usr/bin/sqlplus will point to the preferred version. To install this new version via IPS, you do need access to the Solaris 11 Support Repository (visit the support repository site to get started).

root@burn11x:~# pkg install -nv *instantclient*
           Packages to install:       9
           Mediators to change:       1
     Estimated space available: 8.50 GB
Estimated space to be consumed: 1.75 GB
       Create boot environment:      No
Create backup boot environment:      No
          Rebuild boot archive:      No

Changed mediators:
  mediator instantclient:
           version: None -> 12.2 (vendor default)

Changed packages:
    None ->,5.11-4:20171026T193617Z
    None ->,5.11-4:20171027T004528Z
    None ->,5.11-4:20171026T193622Z
    None ->,5.11-4:20171027T004524Z
    None ->,5.11-4:20171026T193618Z
    None ->,5.11-4:20171027T004526Z
    None ->,5.11-4:20171026T193619Z
    None ->,5.11-4:20171027T004538Z
    None ->,5.11-4:20171026T194032Z

If you had the 12.1 packages installed already, then a pkg update would get you the renamed packages (database/oracle/instantclient-121).

NVLists for fun and profit

Over the years that I've worked on Solaris, I've come to know and love libnvpair. We use it all over the place, from the kernel up to bits of userspace. If I were reimplementing fwflash today, I'd use libnvpair rather than <sys/queue.h>.

One thing you might not be aware of is that we ship Python bindings for this library; while they're not perfect, they are very, very useful. Let's have a look at how you can delve into one particular feature of your Solaris system: the zpool cache.

Before we start, you need to know that /etc/zfs/zpool.cache is NOT AN INTERFACE. If you edit that file you could muck up your zpool configurations; this post is just an example of how we can extract nvlist data.

With that warning proclaimed, let's have a look at this file.
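As a taste of what's possible, here's a sketch of the first step: unpacking the cache file into an in-memory nvlist. Rather than the shipped Python bindings, this goes straight at the C library with ctypes; nvlist_unpack() and nvlist_free() are the documented libnvpair entry points, but the library lookup and error handling are my assumptions, and the sketch only ever reads the file.

```python
import ctypes
import ctypes.util
import os

CACHE = "/etc/zfs/zpool.cache"

libpath = ctypes.util.find_library("nvpair")
if libpath is None or not os.path.exists(CACHE):
    # Not a box with libnvpair and a zpool cache; nothing to demonstrate.
    print("libnvpair or zpool.cache not available here")
else:
    libnvpair = ctypes.CDLL(libpath)
    # int nvlist_unpack(char *buf, size_t buflen, nvlist_t **nvl, int kmflag);
    libnvpair.nvlist_unpack.argtypes = [
        ctypes.c_char_p, ctypes.c_size_t,
        ctypes.POINTER(ctypes.c_void_p), ctypes.c_int,
    ]
    libnvpair.nvlist_unpack.restype = ctypes.c_int
    libnvpair.nvlist_free.argtypes = [ctypes.c_void_p]
    libnvpair.nvlist_free.restype = None

    with open(CACHE, "rb") as f:
        buf = f.read()
    nvl = ctypes.c_void_p()
    rv = libnvpair.nvlist_unpack(buf, len(buf), ctypes.byref(nvl), 0)
    if rv == 0:
        print("unpacked", len(buf), "bytes of zpool.cache into an nvlist")
        libnvpair.nvlist_free(nvl)
    else:
        print("nvlist_unpack failed with", rv)
```

From there you'd walk the nvlist with the lookup functions (or the Python bindings proper) to pull out per-pool and per-vdev configuration.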

On our Solaris media server I have a large pool called soundandvision to store our photos along with music and movies that I've ripped from CDs, DVDs and Blu-Rays over the years. Here's what zpool status tells me about this right now:

$ zpool status soundandvision
  pool: soundandvision
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 1.60T in 6h36m with 0 errors on Thu Jul  6 22:07:18 2017
config:

        NAME                         STATE      READ WRITE CKSUM
        soundandvision               DEGRADED      0     0     0
          mirror-0                   ONLINE        0     0     0
            c2t3d0                   ONLINE        0     0     0
            c0t5000039FF3F0D8D9d0    ONLINE        0     0     0
          mirror-1                   ONLINE        0     0     0
            c0t5000CCA248E72F12d0    ONLINE        0     0     0
            c0t5000CCA248E728B6d0    ONLINE        0     0     0
          mirror-2                   DEGRADED      0     0     0
            c0t5000039FF3F0D2F0d0    ONLINE        0     0     0
            spare-1                  DEGRADED      0     0     0
              c0t5000039FE2DF1C15d0  REMOVED       0     0     0
              c0t5000C50067485F33d0  ONLINE        0     0     0
        spares
          c0t5000C50067485F33d0      INUSE

errors: No known data errors

Yes, I do need to pop along to my local PC bits shop and replace that removed disk. What can we find out about that disk, though?

Wow, I really left this post incomplete, didn't I!

Long Fat Networks

Living in Australia generally means that you're on the end of a Long Fat Network (LFN), internet-wise: a link with high bandwidth but also high latency, and therefore a large bandwidth-delay product. That's a serious technical term, and one which matters to the networking stack when determining optimal data transfer sizes.

Two of my colleagues down in Melbourne are also with Aussie Broadband, on the top NBN speed tier (100Mbit down, 40Mbit up). We also have company-issued hardware VPN units because we work from home full-time. I was delighted by the bandwidth available from Aussie for our connections to work systems in the SF Bay Area, and when I had cause to update my systems to a new build I observed that it now took about 55 minutes on our media server, rather than the 80-90 minutes it took over the SkyMesh connection.

There was a fly in the ointment, however: my colleagues and I calculated that while we should be getting 1MB/s or more as a sustained transfer rate from the internal pkg server, we'd often see around 400kB/s. Since networking is supposed to be something Solaris is good at, we started digging.
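To see why buffer sizes were the first suspect, it helps to look at the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a link busy. Here's a back-of-the-envelope sketch; the 100Mbit figure is our NBN tier, but the 180ms Australia-to-Bay-Area round-trip time is an assumed, illustrative number.

```python
# Bandwidth-delay product for an assumed Australia <-> SF Bay Area link.
link_bps = 100_000_000   # 100 Mbit/s NBN tier, in bits per second
rtt_s = 0.180            # assumed round-trip time: 180 ms

# Bytes that must be in flight to fill the pipe:
bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 2**20:.2f} MiB")               # about 2.15 MiB

# A connection can never have more unacknowledged data outstanding than
# the receive buffer allows, so a 1 MiB buffer caps throughput at
# buffer / RTT regardless of the link speed:
buf_bytes = 1_048_576
cap_mbps = buf_bytes / rtt_s * 8 / 1e6
print(f"Cap with 1 MiB buffer: {cap_mbps:.1f} Mbit/s")   # about 46.6 Mbit/s
```

The default buffer alone doesn't explain 400kB/s, but it does set a hard ceiling at well under half the link rate; congestion-control behaviour drags the average down further still.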

The first thing we looked at was the receive buffer size, which defaults to 1MB. Greg found the fasterdata tuning document, so we changed that for tcp, udp and sctp. While fasterdata talks about using /usr/sbin/ndd, the Proper Way(tm) to do this in Solaris 11.x is with /usr/sbin/ipadm:

 # for pp in tcp udp sctp; do ipadm show-prop -p max-buf $pp; done
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   max-buf               rw   1048576      --           1048576      1048576-1073741824
udp   max-buf               rw   2097152      --           2097152      65536-1073741824
sctp  max-buf               rw   1048576      --           1048576      102400-1073741824

To effect a quick and persistent change, we uttered:

# for pp in tcp udp sctp; do ipadm set-prop -p max-buf=1073741824 $pp; done

While that did seem to make a positive difference, the transfer rate for a large sample file pulled from across the Pacific still cycled up and down. The cycling was really annoying. We kept digging.

The next thing we investigated was the congestion window, which is where the aforementioned LFN comes into play. That property is cwnd-max:

 # for pp in tcp sctp; do ipadm show-prop -p cwnd-max $pp; done
PROTO PROPERTY              PERM CURRENT      PERSISTENT   DEFAULT      POSSIBLE
tcp   cwnd-max              rw   1048576      --           1048576      128-1073741824
sctp  cwnd-max              rw   1048576      --           1048576      128-1073741824

Figuring that if it was worth doing, it was worth overdoing, we bumped that parameter up too:

# for pp in tcp sctp; do ipadm set-prop -p cwnd-max=1073741824 $pp; done
$ curl -o moz.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
  3 3091M    3  102M    0     0  4542k      0  0:11:36  0:00:23  0:11:13 5747k^C

While that speed cycled around a lot, it mostly remained above 5MB/s.

Another large improvement. Yay!

However... we still saw the cycling. Intriguingly, the period was about 20 seconds, so there was still something else to twiddle.

In the meantime, however, I decided to update our media server.

I was blown away.

23 minutes 1 second

Not bad at all, even considering that when pkg(1) is transferring lots of small files it's difficult to keep the pipes filled.

Now that both Greg and I had several interesting data points to consider, I asked some of our network gurus for advice on what else we could look at. N suggested looking at the actual congestion algorithm in use, and pointed me to this article on High speed TCP.

High-speed TCP (HS-TCP). HS-TCP is an update of TCP that reacts better when using large congestion windows on high-bandwidth, high-latency networks.

The Solaris default is the newreno algorithm:

 # ipadm show-prop -p cong-default,cong-enabled tcp
tcp   cong-default          rw   newreno      --           newreno      newreno,cubic,
tcp   cong-enabled          rw   newreno,     newreno,     newreno      newreno,cubic,
                                 cubic,dctcp, cubic,dctcp,              dctcp,
                                 highspeed,   highspeed,                highspeed,
                                 vegas        vegas                     vegas
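To see why newreno struggles on a link like this, consider its recovery behaviour: after a loss it halves the congestion window and then regrows it by roughly one segment per round trip. A rough sketch, using an assumed 180ms RTT, a 1460-byte MSS and a 2.25MB target window (none of these figures are measurements from our link):

```python
# How long NewReno takes to regrow its congestion window after a single
# loss, under the classic "one MSS per RTT" additive-increase model.
mss = 1460                 # assumed maximum segment size, bytes
rtt_s = 0.180              # assumed round-trip time, seconds
target_cwnd = 2_250_000    # assumed window needed to fill the pipe, bytes

# After a loss the window is halved, then grows by one MSS each RTT:
lost_window = target_cwnd / 2
rtts_to_recover = lost_window / mss
print(f"RTTs to recover: {rtts_to_recover:.0f}")                  # about 771
print(f"Wall-clock time: {rtts_to_recover * rtt_s:.0f} seconds")  # about 139
```

Over two minutes to climb back to full speed after every single loss goes a long way towards explaining a sawtooth transfer rate; HighSpeed TCP's more aggressive growth function is designed for exactly this situation.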

Changing that was easy:

# for pp in tcp sctp ; do ipadm set-prop -p cong-default=highspeed $pp; done

Off to pull down that bz2 again:

 $ curl -o blah.tar.bz2
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3091M  100 3091M    0     0  5866k      0  0:08:59  0:08:59 --:--:-- 8684k

For a more local test (within Australia) I made use of Internode's speed test facility:

$ curl -o t.test
  % Total    % Received % Xferd  Average Speed   Time    Time     Time Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  953M  100  953M    0     0  10.0M      0  0:01:35  0:01:35 --:--:-- 11.0M

And finally, updating my global zone.

 # time pkg update --be-name $NEWBE core-os@$version *incorporation@$version
            Packages to update: 291
       Create boot environment: Yes
Create backup boot environment:  No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            291/291     2025/2025  116.9/116.9  317k/s

PHASE                                          ITEMS
Removing old actions                       1544/1544
Installing new actions                     1552/1552
Updating modified actions                  2358/2358
Updating package state database                 Done
Updating package cache                       291/291
Updating image state                            Done
Creating fast lookup database                   Done
Reading search index                            Done
Building new search index                  1932/1932

A clone of $oldbe exists and has been updated and activated.
On the next boot the Boot Environment be://rpool/$newbe will be
mounted on '/'.  Reboot when ready to switch to this updated BE.

real    12m30.391s
user    4m4.173s
sys     0m21.496s

I think that's sufficient.