How do I upgrade the Fusion-io drivers?

This article was last updated April 2011.

Any time the kernel is upgraded, you’ll need to recompile and reinstall the drivers. As such, it is important to plan your kernel upgrades in advance and perform testing after the first boot using a new kernel.
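
For example, after booting into a new kernel you can confirm that the rebuilt driver matches it and has loaded. This is a minimal sanity check, assuming the 2.3.x iomemory-vsl driver described below:

uname -r                            # running kernel version
modinfo -F vermagic iomemory-vsl    # kernel the installed driver was built against
lsmod | grep iomemory_vsl           # confirm the module is loaded
fio-status                          # confirm the card(s) attached correctly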

If you are an IBM customer using IBM High IOPS adapters, please refer instead to the latest IBM documentation, drivers and firmware located at IBM Fix Central.

The 2.3.x driver builds and runs on a much wider range of kernels than the 1.2.x driver series. It accomplishes this using a new portability layer to abstract itself away from the operating system internals. By following the procedure below, you will obtain a working driver, built for the specific kernel running on your system.

Follow the instructions based on the version of your driver.
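
If you are not sure which driver is currently installed, one way to check (assuming the packages were installed via RPM) is:

rpm -qa | grep -Ei 'iomemory|iodrive|fio-|libfio'    # list installed Fusion-io packages and versions
modinfo -F version iomemory-vsl                      # reports the module version if the 2.3.x driver is installed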

Building the Fusion-io Drivers From Source

Before beginning, download the ioDrive driver source rpm for RHEL from the Dell Fusion-io support site to a temporary directory. At minimum, you’ll need the latest version of the following packages:

  • fio-common
  • fio-firmware
  • fio-sysvinit
  • fio-util
  • iomemory-vsl
  • libfio
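
For example, you might stage the downloads in a temporary directory (the path below is just an example) and confirm all six packages are present before continuing:

mkdir -p /tmp/fio-drivers && cd /tmp/fio-drivers
# download the packages listed above into this directory, then verify:
ls -1 *.rpm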

Remove prior versions of the ioDrive driver rpm

yum remove iomemory* iomanager* iodrive* fio-* libfio*
rm -rf /usr/src/redhat/RPMS/x86_64/iomemory-vsl-*
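
Optionally, verify that no Fusion-io packages remain before continuing:

# should return nothing if the old packages were removed cleanly
rpm -qa | grep -Ei 'iomemory|iomanager|iodrive|fio-|libfio'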

For Red Hat Enterprise Linux 5, install gcc 4.x and the kernel-devel package matching your current kernel. The kernel-headers package is also needed, but it is typically installed as part of the base operating system.

yum -y install kernel-headers-`uname -r` kernel-devel-`uname -r` rpm-build gcc lm_sensors net-snmp
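
The kernel-devel package must match the running kernel, or the driver will be built against the wrong headers. A quick way to verify the match:

uname -r                             # running kernel
rpm -q kernel-devel-`uname -r`       # should report the installed package, not "is not installed"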

Change to the directory where you downloaded the ioDrive driver source RPM and begin the rebuild process.

rpmbuild --rebuild iomemory-vsl*.src.rpm
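
On RHEL 5 the rebuilt binary RPM is written under /usr/src/redhat/RPMS/, which is the path used in the install step below. You can confirm it was produced with:

ls -l /usr/src/redhat/RPMS/x86_64/iomemory-vsl-*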

Install the newly-built drivers.

yum install --nogpgcheck fio-sysvinit* fio-common* fio-util* libfio* fio-firmware* iomemory-vsl-source* /usr/src/redhat/RPMS/x86_64/iomemory-vsl-*

Check the status of the ioDrive(s):

fio-status

Found 1 ioDrive in this system

fct0 Attached as 'fioa' (block device)
 Fusion-io ioDIMM3 160GB, Product Number:FS1-001-161-ES SN:6168
 Firmware v5.0.1, rev 42895
 161 GBytes block device size, 198 GBytes physical device capacity.
 PCI:0c:00.0, Slot Number:6
 Internal temperature: avg 42.3 degC, max 44.3 degC
 Media status: Healthy; Reserves: 100.00%, warn at 10.00%

Upgrading the Fusion-io Firmware

In the previous section, you should have already downloaded and installed the latest firmware package. If not, you may download the fio-firmware package from the Fusion-io Support website to proceed. Once the firmware package is installed, it needs to be applied to the cards.

To upgrade the firmware:

fio-update-iodrive /usr/share/fio/firmware/iodrive_*.fff
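
You can record the firmware revision before and after the update from the fio-status output, for example:

fio-status | grep -i firmware      # e.g. "Firmware v5.0.1, rev 42895"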

Watch the output of the upgrade process and reboot when complete.

Update the Fusion-io Init Scripts

NOTE: this is the new recommended configuration for any customer using multiple Fusion-io ioDrives in a RAID array, as it ensures the cards are all initialized before bringing the RAID array and filesystems online. Reference: http://kb.fusionio.com/KB/a64/loading-the-driver-via-udev-or-init-script-for-md-and-lvm.aspx

Uncomment the blacklist line:

vim /etc/modprobe.d/iomemory-vsl.conf
# To keep ioDrive from auto loading at boot when using udev, uncomment below
blacklist iomemory-vsl

Back up /etc/fstab, then modify it so that each datavg filesystem has noauto added to its mount options and 0 0 in the dump and pass fields:

cp /etc/fstab /etc/fstab.`date +%Y%m%d.%H%M%S`
vim /etc/fstab

Example:

/dev/datavg/u2          /u2                     ext3    defaults,noauto        0 0
/dev/datavg/eclipse     /u2/eclipse             ext3    defaults,noauto        0 0
/dev/datavg/edi         /u2/edi                 ext3    defaults,noauto        0 0
/dev/datavg/ereports    /u2/eclipse/ereports    ext3    defaults,noauto        0 0
/dev/datavg/pdw         /u2/pdw                 ext3    defaults,noauto        0 0
/dev/datavg/uvtmp       /u2/uvtmp               ext3    defaults,noauto        0 0
/dev/datavg/kourier     /u2/kourier             ext3    defaults,noauto        0 0
/dev/datavg/crashplan   /usr/local/crashplan    ext3    defaults,noauto        0 0
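
After editing, it is worth confirming that every datavg entry now carries the noauto option, for example:

grep datavg /etc/fstab             # each line should include "noauto" and end with "0 0"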

Uncomment “ENABLED=1” in /etc/sysconfig/iomemory-vsl to enable the init script:

vim /etc/sysconfig/iomemory-vsl
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
# used.
ENABLED=1

In the same /etc/sysconfig/iomemory-vsl file, add the RAID array(s) and mount points. For example:

MD_ARRAYS="/dev/md0"
LVM_VGS="/dev/datavg"
MOUNTS="/u2 /u2/edi /u2/eclipse /u2/eclipse/ereports /u2/pdw /u2/uvtmp /u2/kourier /usr/local/crashplan"

Enable the init script:

chkconfig iomemory-vsl on
chkconfig --list iomemory-vsl
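
Before relying on the next reboot, you can test the configuration by starting the init script by hand and confirming the filesystems come online. This assumes the fio-sysvinit init script behaves as described in the Fusion-io KB article referenced above; the mount point is taken from the example configuration:

service iomemory-vsl start         # loads the driver, then brings up the configured arrays, volume groups and mounts
df -h /u2                          # the example mount point should now be mounted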

Resources