
Archive for June, 2007

Solaris Live Upgrade on x4500

2007-06-29 5 comments

I’ve managed to upgrade the OS on our x4500 server from Solaris 10 06/06 to Solaris Express Developer Edition build 64 using Live Upgrade. Here’s the
entire process from beginning to end.

Before I initially set up the x4500, I had done a minimal amount of research on using Live Upgrade that suggested that it would be a really good idea to leave some empty space on my system’s boot disk. I made sure to do this during the initial configuration.

It’s also important to note that if you are running zones (Solaris virtual machines) and you are starting with Solaris 10 06/06 or earlier, you have to stop your zones, uninstall them, and delete their configurations before you do a live upgrade. At some point after Solaris 10 06/06 this became unnecessary, but I don’t know which build first became capable of live upgrading a system with active non-root zones.

First, to see where we were starting from, I looked at the contents of /etc/release, which looked like this:

                Solaris 10 6/06 s10x_u2wos_09a X86
   Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
                Use is subject to license terms.
                     Assembled 09 June 2006

That indicates the June 2006 release of Solaris 10.

Next, I needed to figure out which disk was the boot disk. The x4500 has an amazing 8 SATA controllers and 48 disks, most of which are part of one big giant ZFS pool; two are not, and one of those two on my system is the boot disk. To figure out which, I just ran the mount command and looked for the name of the disk holding the root partition, “/”. It turned out, for some inexplicable reason, to be c6t0d0s0.
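
If you don’t want to scan the whole mount listing, df can answer the same question directly. A minimal sketch (the numbers are elided because I’m going from memory; the device name is the one from my system):

df -k /
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c6t0d0s0     ...       ...    ...     ...     /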

Next, I needed to prepare another partition on the boot disk in the empty space I had prepared earlier (like a TV chef). I ran the format command to do this. The session looked like this:

format c6t0d0
format> partition
partition> 3
Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 14652
Enter partition size[0b, 0c, 14652e, 0mb, 0gb]: 30gb
partition> quit
format> label
format> quit

The value for “new starting cyl” is the first cylinder after the highest occupied cylinder on your disk, ignoring the partition called backup (slice 2), which spans the whole disk without actually consuming the cylinders it appears to be using.
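
If you’re not sure which cylinder that is, the print command in format’s partition menu shows the existing table. A sketch of how I’d look it up (same disk name as above):

format c6t0d0
format> partition
partition> print

Take the largest ending cylinder among the real slices, ignore slice 2 (the backup slice), and add one to get your starting cylinder.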

Now, there is a 30 GB partition called c6t0d0s3 ready and waiting for the Live Upgrade process.

Next, I had to patch the snot out of my server to get it right up to date with all the current patches. I used smpatch to do that, but you can also use the GUI patch management console. There’s probably a more streamlined way to do it than mine, but my way had the advantage that I could figure it out, and it worked. Here’s what I did:

smpatch analyze > list.sh

That puts a list of all the patches that the patch management system thinks this machine should install into a file called list.sh. I then edited list.sh to remove all the patch descriptions so that it contained only patch numbers like 121119-12, and made each line into a command for installing that patch, so after editing the file looked like this:

#!/usr/bin/bash
smpatch upgrade -i 111111-11
smpatch upgrade -i 222222-22
smpatch upgrade -i 333333-33
# ...and so on, one line per patch
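
If I were doing this again, I’d probably skip the hand editing and generate the file in one shot. An untested sketch, assuming the patch ID is the first whitespace-delimited field of each line that smpatch analyze prints:

( echo '#!/usr/bin/bash' ; smpatch analyze | awk '{print "smpatch upgrade -i " $1}' ) > list.sh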

Then I made the file executable with chmod +x list.sh and ran it. It spent a long time installing patches and then completed nicely. I then rebooted the server by typing init 6.

Next, after the server rebooted, I went to SunSolve and looked up document #72099, which describes the patches necessary for using Live Upgrade safely. I downloaded them all for the x86/64 architecture, unzipped them, and applied them in order according to the document, using a shell script. Each line of the shell script looked like this:

patchadd /data/patches/118816-03
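
The unzipping can be scripted too. A sketch, assuming the downloads landed as zip files in /data/patches:

cd /data/patches
for z in *.zip; do unzip -q "$z"; done

I kept the patchadd lines themselves as an explicit hand-written list, since document #72099 specifies the order they must be applied in.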

Once that completed, I rebooted again with init 6.

Next, after the server rebooted, I mounted the build 64 dvd iso and used it to install the Live Upgrade utilities from the version of Solaris that I was going to upgrade to. You need the Live Upgrade packages from whatever version of Solaris you will be upgrading to. To mount an iso image, you do this:

lofiadm -a /data/solaris-dvd.iso
mount -F hsfs /dev/lofi/1 /mnt

This will mount your dvd image at /mnt. Next I used the pkgrm command to remove the old versions of the Live Upgrade utilities and the pkgadd command to add the new ones.

pkgrm SUNWlur
pkgrm SUNWluu
pkgrm SUNWluzone
pkgadd -d /mnt/Solaris_11/Product SUNWlur
pkgadd -d /mnt/Solaris_11/Product SUNWluu
pkgadd -d /mnt/Solaris_11/Product SUNWluzone

Next I created a duplicate of my boot environment using lucreate. I called the existing boot environment 0606 and the new one b64.

lucreate -c 0606 -n b64 -m /:/dev/dsk/c6t0d0s3:ufs

This takes a long time and essentially copies your boot environment onto the disk you specified. When it’s done you have a copy of your working environment that you can upgrade, patch, or whatever, while the system is running, without affecting your working configuration.
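
At this point lustatus is handy for confirming that the copy finished and seeing which environment is active. Roughly what it shows (column layout from memory, so treat this as approximate):

lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ------
0606                       yes      yes    yes       no     -
b64                        yes      no     no        yes    -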

Next, I used the luupgrade command to upgrade my b64 boot environment to Solaris Express DE build 64 using my dvd image, which again was mounted at the /mnt mount point.

luupgrade -u -n b64 -s /mnt/

This takes quite a while. After the first little while, a percent progress indicator shows up to tell you how it’s doing. When it’s done, you just have to activate the new boot environment and reboot, and you will be running your new version.

luactivate b64
init 6

Once the server reboots, there is a new grub menu selection to boot the new environment. The new environment is the default so if you do nothing the server boots into the new version of Solaris. Once mine was done, I checked the /etc/release file, and it said the following:

                    Solaris Nevada snv_64a X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
                Use is subject to license terms.
                      Assembled 18 May 2007

That indicates Solaris Express Developer Edition build 64, which is what I was hoping for. I also checked my customized services that I had configured in Solaris 10, and everything was still working, so the upgrade was successful.
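
For the service check, svcs -x is a quick way to do it: it lists any SMF services that are offline or in maintenance and tries to explain why, so no output means everything is healthy.

svcs -x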

Categories: Sun Solaris

Solaris Live Upgrade Testing

2007-06-26 Leave a comment

In the continuing saga to update Solaris, I’ve added the first successful chapter. I need to get to a Solaris build of about b55 or higher, so that zfs receive supports rolling the target filesystem back before the receive operation (the -F option).
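
For context, the kind of synchronization I’m after looks roughly like this (pool, filesystem, snapshot, and host names are all made up for illustration):

zfs send -i tank/data@yesterday tank/data@today | ssh backuphost zfs receive -F tank/data

The -F tells the receiving side to roll its copy back to the most recent snapshot before applying the incremental stream, which is the piece that needs a new enough build.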

On some of my Solaris machines, the boot disks are small and there is not enough room to do an upgrade over top of the system. I probably have to rebuild those completely, and until I can do that, synchronizing between them will have to be via rsync rather than zfs send / zfs receive. However, our x4500 has a big boot disk, with unpartitioned space, which means I can use Solaris Live Upgrade to update it. Live Upgrade allows you to make a duplicate of your working boot environment on a new partition or disk, upgrade the duplicate to a new version (or even just apply patches to it) and then reboot into that duplicate. It also allows you to revert to the original system in the event you have problems with the upgraded one.
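
Reverting is just a matter of activating the original boot environment again and rebooting. With boot environment names like the ones in the x4500 post above, that would be:

luactivate 0606
init 6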

Before I do this on the x4500, I wanted to try it on a non-production machine. To do that, I built a VMware Server VM with Solaris Express DE build 55, and then yesterday, with the help of these instructions, I did a live upgrade to Solaris Express DE build 64. That worked fine, so I’m encouraged to try a live upgrade on the x4500. It’s running Solaris 10 6/06, however, not Solaris Express DE, so it’s a bigger step to get to build 64. I think I’m going to start my VM over at the same level as the x4500, and re-do the live upgrade to b64 directly, so I can see what all the steps will need to be on the x4500.

Categories: Sun Solaris, zfs

Fun With Das Keyboard

2007-06-19 Leave a comment

When Dom from Graycon was in here last week to set up the Juniper Instant Virtual Extranet device for us on eval, he wanted to leave me with some documentation. I took him to my desk and was impressed that he sat down in front of my “Das Keyboard” and proceeded to log in to Graycon’s extranet without blinking. He did say “this feels really weird” though.

I still love the keyboard after having used it every day now for a few months and I highly recommend it.

Categories: Neat Geek Stuff

GroupWise Migration Complete

2007-06-19 6 comments

Thanks to a lot of hard work by Denys, and no thanks to Novell support, we’ve finished moving all our GroupWise users to our new GroupWise server architecture.

Our old system consisted of six post offices hosted on a two-node NetWare cluster using Novell Cluster Services. There were six post offices as part of a history that included corporate political influences that helped create a functional but sub-optimal technical design. We had an opportunity to upgrade to GroupWise 7, migrate to new hardware, and consolidate to an architecture that included one post office per server on three servers, all in one operation. We built new servers, consisting of three virtual machines running SLES9, hosting one post office each, but with the storage for those post offices held on a fourth server running Solaris, with the post office data on ZFS filesystems, and remote-mounted on the SLES boxes. The beauty of this is that we can do backups with about 10 seconds or less of downtime per post office per day, and we can do speculative changes with near-instant rollback if the changes do anything unexpected. ZFS is your friend.
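
The backup trick itself is nothing exotic; roughly this, with made-up filesystem and snapshot names, pausing the post office agent just long enough to get a consistent snapshot:

# stop the post office agent here (agent command omitted; it varies by setup)
zfs snapshot tank/po1@nightly-20070619
# restart the agent, then back up the snapshot contents at leisure

# and if a speculative change goes badly:
zfs rollback tank/po1@nightly-20070619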

Anyways, the thanks to Denys is for all the effort he put in to move more than 600 users, many resources, and several distribution lists from the six old post offices to the three new ones, and for dealing with the aftermath of broken passwords that ensued.

The no-thanks to Novell support is for the lack of help fixing a bug that causes mailbox caching passwords to break when you move a user from a NetWare-created GroupWise post office to a Linux-hosted GroupWise 7 post office. Despite us filing a Premium Support ticket, and Novell recognizing a bug (GroupWise defect 239947) and refunding our support ticket, they never fixed the bug, and after months of waiting they even stopped responding to our requests for a status update. GroupWise 7 SP2 came out without the bug fixed, and we had to proceed and fix 600 broken passwords manually. Hence the ensuing aftermath.

Anyways, we’re finally done with the migration, and our new architecture is much more scalable than the former one. I expect that by adding post office virtual machines on additional blades, plus adding storage management Solaris blades as required, we’ll be able to scale up to several thousand users, which should work for us for the foreseeable future.

Categories: GroupWise, zfs

Juniper Instant Virtual Extranet

2007-06-18 1 comment

We use a Novell iChain gateway device to provide secure access to our internal intranet resources. We’re presently testing alternatives, because our new financial and project management application, Vision, doesn’t work through iChain. The best candidate so far is the Juniper Instant Virtual Extranet (stupid name, awesome product). So far, Vision works through it, ssh works, nfs shares work, and web stuff works (with the exception of some of the cookie-based navigation in our intranet); even the Novell Client (ncp client) works. That’s pretty cool.

The setup is also very easy, and we had it working with authentication to my RADIUS server with our tokens within about 10 minutes of initially powering it on. We’ll be looking at this very closely and trying several more of our applications through its on-the-fly SSL VPN functionality. In particular, we’re interested in getting several different software license managers for various Autodesk products working so that our users can check out design tool licenses in the field.

Virtualization on Mac OSX

2007-06-07 Leave a comment

I use virtualization extensively at work to run multiple virtual computers on one physical machine. We also use it to disconnect the operating system and application environment from the physical hardware for disaster recovery and hardware agnosticism. Our platforms of choice are VMware Server and VMware ESX server. The first is great because it’s free, and the second is great because it’s amazingly fast and reliable.

Almost all our virtualized workloads run fine in VMware Server in production, which is great because there are no license costs. The only workload we use that runs like crap on VMware Server is SQL Server. It dies like a dog, apparently because of I/O latency, and the only thing we could do to get it working in a virtualized environment was to run it in ESX Server. SQL Server is so flaky that it returns random query results (when it works) or one of several unrelated errors (when it fails) when run in VMware Server or VMware Workstation.

Since I’ve become a switcher I’ve been looking to run virtual machines on my Mac at home. The likely choice for me is VMware Fusion, which is still in beta, even though Parallels is more mature on the Mac platform. The advantage of VMware is that my work virtual machines will run at home. In Beta 3, it seems that 64-bit VMs are not supported, even though my Mac is a Core 2 Duo. The website says you can run 64-bit VMs, but a Solaris VM I built at work won’t run in 64-bit mode in Beta 3. I haven’t updated to Beta 4 yet, but apparently it has a new feature called Unity, which lets you sort of disappear a Windows virtual machine desktop so that the application windows running inside the virtual machine just appear as windows on your Mac desktop. That’s kind of cool, I guess. I’ll update to Beta 4 and see if my 64-bit Solaris VM works.

Categories: Mac Stuff, Virtualization

ZFS on Mac OSX 10.5

2007-06-06 Leave a comment

I read today that Jonathan Schwartz “accidentally” leaked that ZFS would be the filesystem of OSX Leopard. This is very interesting to me, because we use ZFS for doing disk backup snapshots at work, and because I really want a ZFS-based home server too. When I first saw the announcement of Apple’s Time Machine feature for Leopard, it occurred to me that it would be fairly easy to implement it using ZFS as the backing store. I can’t wait to get Leopard and integrate it with my Solaris file server at home. However, if I can be patient enough, FreeBSD 7.0 with ZFS might end up being my server OS instead. I like FreeBSD and just have a lot more experience with it than Solaris, so for a home server it makes more sense for me.
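
A Time Machine-style history is more or less just periodic snapshots. A sketch with a made-up pool name:

zfs snapshot tank/home@$(date +%Y-%m-%d-%H%M)
zfs list -t snapshot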

Categories: Mac Stuff, zfs