Testing Oracle Solaris 11 Express

I've been testing Oracle Solaris 11 Express recently. For those who don't remember, Oracle acquired Sun Microsystems :rip: and killed OpenSolaris :rip: with no official statement; the only information about the process was a leaked internal memo (I leave it to you to decide whether the leak was real, and whether it was intentional).

Solaris 11 Express is what remains of OpenSolaris after Oracle decided how they should move forward with it.

The immediate change you'll notice if you want to download and test it is that the license has changed, and you are not allowed to download it unless you explicitly accept the license. To my knowledge, the license allows you to use it for free for personal use; otherwise you need to buy some sort of support. I didn't investigate this further because, at the moment, I am only interested in it for personal use. Why? Well, many reasons. The first: I like it! That was simple!
The second: it has cool features I'd like to learn more about, first of all ZFS, which is damn cool.
The third: I don't want to forget everything about Solaris, since I am mainly using Linux these days.

So, just for a start, I wanted to pick up where I had left OpenSolaris at the end of 2008. The checklist was as follows:

  • configure NTP
  • have DHCP update the dynamic dns with the hostname
  • configure the NFS server on the Solaris side, configure automount on the client side, and move my home directory to Solaris
  • configure the time slider

It turned out that I needed some documentation for this, that the documentation was not terribly hard to find, and that the tasks were quite easy. Let's check them one by one.

Configuring NTP
This was apparently simple: we have a pretty cool multicast NTP infrastructure here that requires just a couple of standard files to configure. But something didn't work as expected: ntpd refused to synchronize properly, and it was logging too little to help with debugging. How could I get more information?

The man page was useful, and provided the necessary information. To get the current configuration of the service I needed to query SMF, and in particular to run the command:

svccfg -s svc:/network/ntp:default listprop config

I needed to set verbose_logging to true. Not difficult at all:

svccfg -s svc:/network/ntp:default setprop config/verbose_logging = true

Restarting the service is easy, too, and again you use some SMF magic:

svcadm restart svc:/network/ntp:default

Once the service restarted, some interesting information started to flow into the file /var/ntp/ntp.log.

After some research on the log messages, it turned out that there is a bug in Solaris, and no workaround exists for the ntpd version that ships with S11E. OK, no big deal: I configured ntpd in unicast mode and everything ran smoothly.
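For the record, switching to unicast just means listing the servers explicitly in /etc/inet/ntp.conf instead of relying on multicast; something like this (the hostnames below are placeholders, of course, not our actual servers):

```
# /etc/inet/ntp.conf -- minimal unicast configuration
server ntp1.example.com iburst
server ntp2.example.com iburst
driftfile /var/ntp/ntp.drift
```

After editing the file, a restart of the service via svcadm picks up the change.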

DHCP/DynDNS
DHCP worked out of the box, no sweat. But no DNS record was created for this machine. Some more research, and it turned out that I needed to create a file /etc/hostname.interfacename containing the string inet hostname. I did just that, rebooted, and it worked flawlessly. Oh well 🙂
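In practice, assuming the interface is called e1000g0 (a guess on my part; dladm show-link tells you the real name on your system), this boils down to:

```shell
# Create /etc/hostname.<interface> so that the hostname is sent
# along with the DHCP request (interface name is hypothetical)
echo "inet isaiah" > /etc/hostname.e1000g0
```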

Time slider
For those who don't know it, the time slider is a feature of the file manager which takes advantage of ZFS snapshots. Once you activate the time slider, the system starts taking rolling snapshots of the filesystems at regular intervals. In addition, a slider cursor is added to the file manager windows: moving the slider back and forth shows you the state of the selected folder at the given point in time. So if you want to, say, restore a file you accidentally deleted, just move the cursor to the desired point in time, and the file is just a copy & paste away!

This is a nice thing indeed. Plus, activating it is not difficult at all: a few clicks in the GUI and you're done. :yes:
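Under the hood those rolling snapshots are ordinary ZFS snapshots, so you can also inspect them from a shell; a quick sketch (the zfs-auto-snap naming is what the auto-snapshot service uses, if memory serves):

```shell
# List the automatic snapshots taken for the time slider
zfs list -t snapshot | grep zfs-auto-snap
```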

NFS/automount
This task was not difficult per se, but it required some preparation before everything could actually work.

First of all, I wanted to save a copy of my current home directory on Solaris. No big deal really: with ZFS this is just a snapshot and a clone away. It takes longer to type the commands than to make the copy itself.
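A sketch of that snapshot-and-clone sequence, with dataset names that are my illustration rather than the real ones:

```shell
# Keep a browsable copy of the current home: snapshot it, then clone
# the snapshot into a new filesystem (dataset names are hypothetical)
zfs snapshot rpool/export/home/bronto@premigration
zfs clone rpool/export/home/bronto@premigration rpool/export/home/bronto-old
```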

Then, the UID and GID for my user had to be the same on both machines. The easiest thing was to use the Linux IDs on Solaris, and changing them was not hard: we all know how to edit the passwd and group files, right?
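If you prefer not to touch the files directly, usermod does the same job (and groupmod -g likewise for the group); a sketch, where 1000 is an assumed Linux-side ID:

```shell
# Align the Solaris UID with the Linux one (1000 is an assumption)
usermod -u 1000 bronto
# Existing files must then be re-owned to the new UID
chown -R bronto /export/home/bronto
```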

That done, it was time to start the NFS server and configure the filesystem. This is an area where Solaris and Linux diverge a lot. Here we meet another cool feature of Solaris: the Service Management Facility (SMF). SMF takes care of the following functions:

  • manages the start/stop order of services at boot, shutdown and runlevel change;
  • manages dependencies between different services;
  • monitors services for self-healing (e.g.: a process which dies unexpectedly is automatically restarted)

To find the exact string identifier (FMRI) for the NFS server you can use the svcs command, while you'll use svcadm enable to enable the service. So:

bronto@isaiah:~$ svcs *nfs* | awk '{ print $3 }'
FMRI
svc:/network/nfs/client:default
svc:/network/nfs/mapid:default
svc:/network/nfs/status:default
svc:/network/nfs/nlockmgr:default
svc:/network/nfs/rquota:default
svc:/network/nfs/server:default
svc:/network/nfs/cbd:default
bronto@isaiah:~$ pfexec svcadm enable svc:/network/nfs/server:default
bronto@isaiah:~$ svcs svc:/network/nfs/server:default
STATE          STIME    FMRI
online         May_20   svc:/network/nfs/server:default

Voilà! It's started, and will be started at boot time from now on.

Configuring a permanent NFS share differs from both Linux and previous Solaris versions. Now you use the sharemgr command. With sharemgr you can organize your shares in groups, where all shares in the same group will… uhm… share the same configuration (e.g.: mount permissions).

Two default groups are defined at installation time: default and zfs (you can list them using sharemgr list). If this machine were to be used as a server for different users and tasks, I would create a separate group right away, but it's only me. So I decided to skip creating a new group and to put my share in the default group. You do that with sharemgr add-share.
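The add-share step is a one-liner; a sketch, assuming the home filesystems live under /export/home (my guess, not necessarily the real path):

```shell
# Add the filesystem to the predefined "default" share group
sharemgr add-share -s /export/home default
```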

Assigning the right permissions to the share was a bit more difficult: in this case the sharemgr set command requires the -S option, and I had no idea which keyword I should associate with it! A bit of research and experimentation, and I found it: -S sys (which translates to the NFS option sec=sys). The share is now ready and persistent. Note that you can still create a share with the share command, or by editing the sharetab file; unfortunately, that's not going to survive the very next reboot.
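Put together, the command looked more or less like this; note that the -P nfs protocol flag and the rw property are my reconstruction of the invocation, only -S sys comes straight from my notes:

```shell
# Set AUTH_SYS security and read/write access on the default group
sharemgr set -P nfs -S sys -p rw=* default
```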

Everything was ready on the Solaris side, so it was time to move on to Linux. The first task, quite time-consuming I must say, was to copy the home directory to the Solaris share via NFS. A mount and an rsync are not difficult, but I had some VirtualBox disk images in my home directory. Ouch!!! The first run took ages to complete!
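The copy itself was nothing fancy; a sketch, with mount points and paths that are illustrative rather than my actual ones:

```shell
# Mount the Solaris share and copy the home directory across,
# preserving permissions, times and hard links
mount -t nfs isaiah:/export/home /mnt
rsync -aH --progress /home/bronto/ /mnt/bronto/
```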

It was now time to freeze the Linux home directory and configure autofs/automount. As a temporary change, I set my user's home directory to "/", so that I could leave the real home directory untouched and frozen while doing the final sync. That done, I installed autofs and added the few needed lines to auto.master and auto.home, and cd-ing into my real home automatically mounted the Solaris filesystem! Done? Well… not yet!
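Those few lines look more or less like this (server name and paths are my own assumptions, adjust to taste):

```
# /etc/auto.master: delegate /home to the auto.home map
/home   /etc/auto.home

# /etc/auto.home: wildcard map, mounts each user's home from the Solaris box
*       -fstype=nfs,rw  isaiah:/export/home/&
```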

I saw that the network interface was being saturated at regular intervals: some ten seconds of fetching data from the network at the maximum possible speed, then quiet, then again after some time, and again… It was clear to me that something was fetching data over NFS, but which process? A tool named iostat came to the rescue, and quickly identified trackerd as the culprit: trackerd was actively "watching" my home directory, trawling through it at regular intervals. I unchecked the watch option in its preferences and restarted trackerd. It then did its "initial" sweep, and then stopped bothering me. :yes:
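On the Linux side, iostat's NFS report (from the sysstat package) is what shows per-mount NFS traffic; run it with an interval to watch the bursts happen:

```shell
# Show NFS client statistics, refreshing every 5 seconds
iostat -n 5
```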

And that was all. I am now looking for more fun, and there's a lot to play with: zfs, crossbow, zones/containers… All this while waiting for the first stable release of OpenIndiana. What's next? 🙂
