Getting Solaris 10 patches with smpatch, PCA and Oracle Support ID (CSI)

Short story:

Download PCA for a system registered with an Oracle CSI. This version of PCA (Patch Check Advanced) lets you maintain Solaris OS patches using an Oracle Solaris Premier Subscription and an Oracle CSI account instead of a SunSolve account with a Sun contract.

Long story:

After a quite successful evaluation of running MySQL on Solaris 10, we decided to move all our production database servers to Oracle Solaris. We purchased Oracle Solaris Premier Subscriptions for Non-Oracle x86 Systems (our servers are Dell PowerEdge R710). I registered my subscription with Oracle support and successfully installed Oracle Configuration Manager on the server. Patch analysis and recommendations didn't work for the server OS. Attempting to download the recommended patch cluster didn't work either: the Sun server where the actual patch file is located responded with 403 (Forbidden) and the message "You are not entitled to retrieve this content." Trying to get to the OS patches via SunSolve finally made the picture clear: the Sun servers that host all Solaris patches don't know about the Oracle CSI (Customer Support ID), and the Oracle support system knows nothing about the Sun Contract Number, which is required for getting any Solaris patch other than public security patches.

My next thought was getting the system updated with the built-in smpatch utility. I was thinking that if Oracle is now packaging the Solaris 10 distribution, everything shipped with the OS should work. Naive me…

The server was installed with the minimal install of the Solaris OS. smpatch is not installed as part of this choice. Installing just the SUNWmga package doesn't work, as it depends on a bunch of other packages that are not installed during the minimal install. To make the story short, you should do the following to get smpatch functional:

  1. Install smpatch related packages from Solaris 10 DVD:
    • Insert the DVD and find out its device:
      ls -al /dev/sr* | awk '{print "/" $11}'
    • Mount it with:
      mount -F hsfs -o ro /dev/<device name> /mnt/dvd, where <device name> is the one you got in the above step (make sure /mnt/dvd exists)
    • execute as root:
      # pkgadd -d /mnt/dvd/Solaris_10/Product/ SUNWupdatemgru SUNWupdatemgrr SUNWccccr SUNWccccrr SUNWccccfg SUNWcctpx SUNWccfw SUNWccsign SUNWbrg SUNWcsmauth SUNWscnprm SUNWscnsom SUNWsensor SUNWscn-base SUNWsam SUNWscnprmr SUNWcacaort SUNWscn-base-r SUNWsamr SUNWbrgr SUNWzoneu SUNWzoner SUNWpool SUNWxcu4 SUNWsensorr SUNWscnsomr SUNWjdmk-base
  2. Follow the excellent article by Kevin Pendleton about registering Solaris 10 and updating patches from the command line (CLI)
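
Put together, the steps above can be sketched as a short shell session (the device path under /dev/dsk is an example only; use the one reported on your system):

```
# Find the DVD device node (the awk prints the target of the /dev/sr* symlink)
ls -al /dev/sr* | awk '{print "/" $11}'

# Mount the Solaris 10 DVD read-only (make sure the mount point exists)
mkdir -p /mnt/dvd
mount -F hsfs -o ro /dev/dsk/c1t0d0s2 /mnt/dvd   # example device name

# Install smpatch and all its dependencies from the DVD
pkgadd -d /mnt/dvd/Solaris_10/Product/ SUNWupdatemgru SUNWupdatemgrr \
  SUNWccccr SUNWccccrr SUNWccccfg SUNWcctpx SUNWccfw SUNWccsign \
  SUNWbrg SUNWcsmauth SUNWscnprm SUNWscnsom SUNWsensor SUNWscn-base \
  SUNWsam SUNWscnprmr SUNWcacaort SUNWscn-base-r SUNWsamr SUNWbrgr \
  SUNWzoneu SUNWzoner SUNWpool SUNWxcu4 SUNWsensorr SUNWscnsomr SUNWjdmk-base
```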

So now smpatch does something useful, other than throwing weird hundred-line Java exception stack traces. Frankly, I don't understand the Sun engineers who decided to develop such a simple and vital system utility in Java, with dependencies on a bunch of other packages. The most annoying thing is that the program doesn't tell you "the SUNWjdmk-base package is missing, please install it". Instead, it throws weird, unreadable Java errors. But the funniest thing is that if you are an Oracle customer without an existing Sun contract, you cannot do much with smpatch. smpatch also fetches patch files from SunSolve servers, which know nothing about the fact that Oracle sells $1K-per-CPU support subscriptions for the operating system it is running on.

While I was digging through the Internet for a solution to apply patches to my system, I found two very important things.

One of them is PCA (Patch Check Advanced) – an outstanding patching utility for Solaris written by Martin Paul. It is an extremely small Perl script that just does the work in the right way. This is exactly what this kind of OS utility should look like. I'm going to manage all my new Solaris systems with PCA. Once you get it to work, you no longer need any heavy Java-based patch management software provided by Sun/Oracle.

Another very useful piece of information was an Oracle article explaining patch download automation for Sun products using wget. In fact, Oracle has migrated all the Sun servers hosting Solaris patches to its own servers. Presumably, this is not yet integrated into the Oracle Support portal and the Solaris system utilities. I hope Oracle is not going to abandon Solaris 10 for the sake of its new Solaris 11 Express release. So if you have a valid Oracle CSI, you can manually download and install the required Solaris patches using this howto. I did it for the recommended patch cluster (10_x86_Recommended.zip).

And finally, when my system was up to date, the only thing I was missing was a patching tool for ongoing system maintenance. I looked into the PCA code; it was quite easy to change the base patch URLs to work with the new Oracle locations. As a result, I've got a PCA that is fully functional and works with Oracle CSI credentials.

Download it: PCA for system registered with Oracle CSI
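
If you haven't used PCA before, a typical session looks roughly like this. This is a sketch based on PCA's standard invocation; verify the exact option names and the credentials file location against the PCA documentation, and the account values below are of course placeholders:

```
# Store My Oracle Support (CSI) credentials where PCA can read them
cat > /etc/pca.conf <<'EOF'
user=my_oracle_account@example.com
passwd=secret
EOF

# List patches missing on this system
pca missing

# Download and install all missing patches
pca -i missing
```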

Update (December 13)

I just received an email from Martin Paul saying that he has also prepared an updated version of PCA, and it is going to be released any time soon. Please check his site for the update. Oracle completed the migration of the Sun servers over the last weekend. I will check the My Oracle Support site for patch availability and its integration with Oracle Configuration Manager.

Update (December 14)

The Blastwave PCA package is still not updated with the latest PCA version. Download it manually from the PCA home page. Oracle Configuration Manager is not updated either. Patches should be downloaded manually, although with PCA it is much easier to do.

Moving MySQL from RHEL (CentOS) to Solaris

After a quite successful evaluation, we have finally decided to move our production MySQL database servers to Solaris. Why Solaris? Well, there are a couple of good reasons for it.

First, a few words about the database part of the system architecture design. If your system is designed according to SOA principles, you probably have one (or more) relatively small databases for each service app. Let's assume the term small database describes a database of up to 200-300GB overall size with up to 50 million rows in its largest table (and no more than 1-2 such tables in a db). This is where MySQL performs quite well. If your database is much bigger and/or cannot be segmented because of the application's nature, then perhaps MySQL is not the right choice.

Traffic pattern is also an important factor in system architecture design. In the most common, Web-oriented applications, the pattern of accessing data usually doesn't differ much from a read/write ratio of 90/10. In practice, many systems that I have seen deal with 95% reads against 5% write requests. For example, if we were designing a simple Users Directory service, we would expect a lot of logins (reads) and a modest amount of new user registrations (writes).

Scalability. This is one of the most popular arguments in almost every technical discussion about the architecture of an application backed by a database engine. The truth is that the database scalability issue in most cases exists only for read requests. Read-only database traffic scales out quite well using the simple technique of separating read and write queries onto different servers with MySQL replication. With this approach, the application queries MySQL slave(s) when reading data, and all data modification statements go to the master. All the complexity in the software design is that the application code should distinguish between read and write database connections. And, of course, there is the system overhead of setting up MySQL replication. The pros and cons of this modus operandi are beyond the scope of this discussion. The only fact that matters is that the master database instance has a much higher SLA than the slaves. One more important requirement for the MySQL server: for the sake of maintainability of the DB servers, only the database(s) that belong to the application should exist on the master and slave(s) instances of MySQL. This limitation comes up because MySQL defines replication per instance, not per database. This requirement rules out putting any additional (even micro-small) database on the master server.
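
To illustrate the read/write split (the hostnames db-master and db-slave1 and the app_db schema are made-up examples), the application directs statements like this, and replication health on a slave can be checked with SHOW SLAVE STATUS:

```
# Writes go to the master instance
mysql -h db-master -e "INSERT INTO users (login) VALUES ('alice')" app_db

# Reads go to a slave
mysql -h db-slave1 -e "SELECT COUNT(*) FROM users" app_db

# Check replication health on the slave (watch Seconds_Behind_Master)
mysql -h db-slave1 -e "SHOW SLAVE STATUS\G"
```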

Hardware. Normally, we prefer cheap and small servers in our system. The standard server is a 1U Dell PowerEdge R410-like machine. We made an exception for database hosts and set them up on bigger Dell PowerEdge R710s with 12 cores and 64GB of RAM. We equipped the database servers with redundant power supplies and remote access (DRAC) cards. Each database host runs multiple instances of MySQL. Currently, the master database is the only piece of the system that must not have any downtime and requires immediate attention in case of failure.

Platforms and versions. The system is 100% Linux. We love Debian. For the database server we made an exception again and installed CentOS on it. The reason for this is MySQL. The packages shipped with Debian are MySQL 5.0, which is as stable as Debian itself. But 5.0 lacks many features that modern web development demands. The newer 5.1 packages are not maintained by the Debian team. Respecting the well-known fact that MySQL 5.1 is a relatively buggy piece of software, we decided to stick with the official binary builds provided by Sun (Oracle now). Today, prebuilt packages are provided for the following Linux distros: RedHat, SUSE Enterprise and Generic Linux, the latter being basically an archive that should be manually unpacked and placed in the right locations. We chose CentOS as the most popular system binary-compatible with RedHat.

MySQL setup layout. We configured our CentOS server to run multiple instances of MySQL. Each instance has its own data directory and listens on its own TCP port and UNIX socket. To achieve this, you have to create a customized version of the init script (/etc/init.d/mysql.<instance_name>), a configuration file (/etc/my.<instance_name>.cnf) and the actual data and log directories for each instance.
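
For example, a per-instance configuration file could look like this (the instance name reports, the port and all the paths below are made-up examples, not our actual layout):

```
# /etc/my.reports.cnf -- configuration for the "reports" instance
[mysqld]
port      = 3307
socket    = /var/lib/mysql.reports/mysql.sock
datadir   = /var/lib/mysql.reports
log-error = /var/log/mysql.reports/error.log
pid-file  = /var/run/mysql.reports/mysqld.pid
```

The matching copy of the init script then starts the server with something like mysqld_safe --defaults-file=/etc/my.reports.cnf, and the console client needs matching parameters, e.g. mysql --socket=/var/lib/mysql.reports/mysql.sock.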

Having the system running more than a year now, I can confirm that it is working fairly well with the following exceptions:

  • Maintenance is somewhat complicated. Configuration of each instance is overhead.
  • Scripting, tracing and debugging require custom/parametrized scripts. Even the console mysql client requires connection parameters for the right instance. A funny and annoying fact is that the command history of the mysql client is shared between all instances. Sometimes it drives me crazy. 🙂
  • There is no way to isolate system resources. We needed it only once or twice in all this history, and I cannot say that this is a big problem. But having such a possibility would be a very nice candy to have.
  • RedHat/CentOS software packages are very outdated. For example, running simple jobs and scripts written in Python (which we use a lot) often becomes a nightmare, as CentOS ships Python 2.4. Code developed for Python 2.5 and later has no chance of running there without massive refactoring.

So we looked at Solaris as the main platform for our database servers. MySQL packages are built by mysql.com, and both Solaris and MySQL now belong to the same company, so Solaris seems to be a good choice for the database server OS. In particular, we wanted to evaluate the features described below.

  1. Using Solaris Containers
    In the past I have played with a few virtualization technologies, attempting to use them for running the MySQL database. VMware ESXi was a complete disaster for me: IO performance is horrible and it simply kills MySQL. OpenVZ performs much better, as it uses a similar approach of operating-system-level virtualization and the performance penalty there is minimal. But it still locks us into RedHat/CentOS. Solaris Containers, with their excellent management tools, allow running multiple instances of MySQL without notable performance loss. Containers are very flexible in everything related to resource management. Almost every system aspect can be either shared between zones or reserved for each container. MySQL installation and management are greatly simplified. We have one single version of the my.cnf configuration file and init script, and the ports and directory layout are simply set to defaults.
  2. Using DTrace
    Tracing, monitoring and debugging MySQL is not a trivial task. DTrace is an awesome troubleshooting technology, and there is DTrace probe support in MySQL. It makes finding and solving server problems much easier. We just want it. Period.
  3. Using ZFS replication and snapshots for DB backup and HA
    Backing up MySQL is always a big headache. mysqldump is slow and sometimes not that reliable. There is no binary backup support out of the box. InnoDB Hot Backup is dead: it was renamed to MySQL Enterprise Backup and is now sold as part of the commercial editions of MySQL. Percona engineers have developed XtraBackup, which does a great job, but it still interferes with the MySQL engine and is quite slow. ZFS replication and snapshots work on a much lower level and are extremely fast. Restore is also almost immediate and reliable.
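
A minimal sketch of the snapshot-based backup idea (the pool/dataset names and the standby hostname are made up; note that in a real script the read lock must be held open in one session while the snapshot is taken, which a one-shot mysql -e cannot do):

```
SNAP="tank/mysql@nightly-$(date +%Y%m%d)"

# Quiesce MySQL so the on-disk state is consistent
# (shown only to mark the step -- see the caveat above)
mysql -e "FLUSH TABLES WITH READ LOCK;"

# Take an instantaneous, space-efficient ZFS snapshot of the MySQL dataset
zfs snapshot "$SNAP"

mysql -e "UNLOCK TABLES;"

# Ship the snapshot to a standby host for HA / offsite backup
zfs send "$SNAP" | ssh standby zfs recv backup/mysql
```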

We have been running our new Solaris servers in production for more than two weeks now. Everything has gone smoothly so far. We will share our experience further each time we have something new to tell about our MySQL life on Solaris.