Installing Memory in an IBM p520 or p52a Server

Reminder: always follow our best practices when performing any system maintenance.

For detailed instructions and videos covering the entire memory installation procedure, please see IBM’s website: Install model 285 or 52x memory modules

Installation rules:

  • If your server does not have ports P1-T3 and P1-T4 on the back, install the memory in the following order:
    • The first memory module pair is placed into memory module slots C9 and C16.
    • The second memory module pair is placed into memory module slots C11 and C14.
    • The third memory module pair is placed into memory module slots C10 and C15.
    • The fourth memory module pair is placed into memory module slots C12 and C13.
  • If your server has ports P1-T3 and P1-T4 on the back, install the memory in the following order:
    • The first quad of memory modules is placed into memory module slots C9, C11, C14, and C16.
    • The second quad of memory modules is placed into memory module slots C10, C12, C13, and C15.

If you encounter any issues while performing the upgrade, please contact IBM hardware support.

How do I access Dell OpenManage?

The Dell OpenManage Server Administrator (OMSA) gives system administrators an integrated, Web browser-based graphical user interface (GUI) for managing individual servers, both locally and remotely over the network.

You may access the Dell OpenManage web interface from any computer (replace 192.168.1.100 with your server’s LAN IP address):

  • Launch the OpenManage web interface (https://192.168.1.100:1311/)
  • Log in using your root username and password
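
If the login page does not load, you can confirm that the OMSA web service is listening on port 1311. The commands below are an illustrative check rather than part of the official procedure; srvadmin-services.sh ships with OMSA (its exact path varies by version), and 192.168.1.100 is the same placeholder address used above.

curl -k -I https://192.168.1.100:1311/     # should return an HTTP response from the OMSA web server
srvadmin-services.sh status                # run on the server itself to verify the OMSA services are running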

What type of tasks can be performed using OMSA?

  • View the server’s hardware configuration
  • Check for hardware errors
  • Check for firmware updates
  • Replace or reconfigure storage
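
OMSA also provides a command-line interface. Assuming the srvadmin packages are installed on the server, the omreport commands below are illustrative equivalents for the tasks above:

omreport system summary                 # overall hardware configuration
omreport system version                 # BIOS, firmware, and driver versions
omreport system alertlog                # hardware alerts and errors logged by OMSA
omreport storage vdisk                  # virtual disk (RAID) configuration
omreport storage pdisk controller=0     # physical disks attached to controller 0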

Can Dell monitor my servers for hardware errors?

Yes. If you have an active Dell ProSupport contract, you may sign up for Dell Proactive Support.

Getting started with Dell™ Proactive Systems Management is straightforward, and it is available free of charge for qualified Dell systems covered by a current ProSupport contract.

What You’ll Need First

Before you proceed, please make sure you have the following:

  1. Your Dell My Account or Premier username and password; go to My Account Login to verify that you have a Dell My Account established
  2. The service tag number of a Dell server covered under an active Dell ProSupport™ contract
  3. A Windows® virtual machine or server on which to run the Proactive Systems Management proxy
  4. Administrative credentials (i.e., usernames and passwords) for the systems you plan to monitor
  5. Dell OpenManage™ Server Administrator (OMSA) installed on the systems you plan to monitor; go to www.support.dell.com for more information or to download OMSA
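
If you are not sure whether OMSA is already present on a system you plan to monitor, the commands below are a quick, illustrative check for an RPM-based Linux system with OMSA installed in its default location:

omreport about             # prints the installed OMSA product name and version
rpm -qa | grep srvadmin    # lists the installed OMSA (srvadmin) packages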

Quick and Simple Setup

  1. Confirm that you are a ProSupport customer.
  2. Set up a customer account.
  3. Configure the Proactive Systems Management Web portal.
  4. Download the Proactive Systems Management proxy.
  5. Identify the systems to be monitored.

Questions?

If you have technical questions about Proactive Systems Management, please refer to the Deployment Guide or the Frequently Asked Questions document.

For additional assistance, please call Dell Technical Support in your country and request support for Proactive Systems Management.

How do I upgrade the Fusion-io drivers?

This article was last updated April 2011.

Any time the kernel is upgraded, you’ll need to recompile and reinstall the drivers. As such, it is important to plan your kernel upgrades in advance and perform testing after the first boot using a new kernel.
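
One way to verify that the installed driver matches the kernel you are booting is to compare the module's vermagic string against the running kernel. This is an illustrative check that assumes the iomemory-vsl driver package is installed:

uname -r                               # kernel currently running
modinfo iomemory-vsl | grep vermagic   # kernel the installed driver module was built against

If modinfo cannot find the module for the running kernel, the driver has not yet been rebuilt for that kernel.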

If you are an IBM customer using IBM High IOPS adapters, please refer instead to the latest IBM documentation, drivers and firmware located at IBM Fix Central.

The 2.3.x driver builds and runs on a much wider range of kernels than the 1.2.x driver series. It accomplishes this using a new portability layer to abstract itself away from the operating system internals. By following the procedure below, you will obtain a working driver, built for the specific kernel running on your system.

Follow the instructions based on the version of your driver.

Building the Fusion-io Drivers From Source

Before beginning, download the ioDrive driver source rpm for RHEL from the Dell Fusion-io support site to a temporary directory. At minimum, you’ll need the latest version of the following packages:

  • fio-common
  • fio-firmware
  • fio-sysvinit
  • fio-util
  • iomemory-vsl
  • libfio

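After downloading, the temporary directory should contain one rpm for each of the packages listed above. The listing below is illustrative; the directory name and the version numbers in the file names will differ on your system:

cd /tmp/fusionio     # example path; use the directory where you saved the downloads
ls -1 *.rpm

fio-common-<version>.rpm
fio-firmware-<version>.rpm
fio-sysvinit-<version>.rpm
fio-util-<version>.rpm
iomemory-vsl-<version>.src.rpm
libfio-<version>.rpm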

Remove prior versions of the ioDrive driver RPMs:

yum remove iomemory* iomanager* iodrive* fio-* libfio*
rm -rf /usr/src/redhat/RPMS/x86_64/iomemory-vsl-*

For Red Hat Enterprise Linux 5, install gcc 4.x, rpm-build, and the kernel-devel package that matches your running kernel (the command below also installs the lm_sensors and net-snmp packages). The kernel-headers package is also needed, but it is typically installed as part of the base operating system.

yum -y install kernel-headers-`uname -r` kernel-devel-`uname -r` rpm-build gcc lm_sensors net-snmp

Change to the directory where you downloaded the ioDrive driver source RPM and begin the rebuild process.

rpmbuild --rebuild iomemory-vsl*.src.rpm

Install the newly-built drivers.

yum install --nogpgcheck fio-sysvinit* fio-common* fio-util* libfio* fio-firmware* iomemory-vsl-source* /usr/src/redhat/RPMS/x86_64/iomemory-vsl-*

Check the status of the ioDrive(s):

fio-status

Found 1 ioDrive in this system

fct0 Attached as 'fioa' (block device)
 Fusion-io ioDIMM3 160GB, Product Number:FS1-001-161-ES SN:6168
 Firmware v5.0.1, rev 42895
 161 GBytes block device size, 198 GBytes physical device size
 PCI:0c:00.0, Slot Number:6
 Internal temperature: avg 42.3 degC, max 44.3 degC
 Media status: Healthy; Reserves: 100.00%, warn at 10.00%

Upgrading the Fusion-io Firmware

In the previous section, you should have already downloaded and installed the latest firmware package. If not, download the fio-firmware package from the Fusion-io Support website before proceeding. Once the firmware package is installed, it needs to be applied to the cards.

To upgrade the firmware:

fio-update-iodrive /usr/share/fio/firmware/iodrive_*.fff

Watch the output of the upgrade process and reboot when complete.
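
After the reboot, confirm that the card reports the new firmware revision, for example:

fio-status | grep -i firmware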

Update the Fusion-io Init Scripts

NOTE: this is the new recommended configuration for any customer using multiple Fusion-io ioDrives in a RAID array, as it ensures the cards are all initialized before bringing the RAID array and filesystems online. Reference: http://kb.fusionio.com/KB/a64/loading-the-driver-via-udev-or-init-script-for-md-and-lvm.aspx

Uncomment the blacklist line:

vim /etc/modprobe.d/iomemory-vsl.conf
# To keep ioDrive from auto loading at boot when using udev, uncomment below
blacklist iomemory-vsl

Back up /etc/fstab, then modify it to add the “noauto” mount option and set the last two fields to “0 0” for the datavg filesystems:

cp /etc/fstab /etc/fstab.`date +%Y%m%d.%H%M%S`
vim /etc/fstab

Example:

/dev/datavg/u2          /u2                     ext3    defaults,noauto        0 0
/dev/datavg/eclipse     /u2/eclipse             ext3    defaults,noauto        0 0
/dev/datavg/edi         /u2/edi                 ext3    defaults,noauto        0 0
/dev/datavg/ereports    /u2/eclipse/ereports    ext3    defaults,noauto        0 0
/dev/datavg/pdw         /u2/pdw                 ext3    defaults,noauto        0 0
/dev/datavg/uvtmp       /u2/uvtmp               ext3    defaults,noauto        0 0
/dev/datavg/kourier     /u2/kourier             ext3    defaults,noauto        0 0
/dev/datavg/crashplan   /usr/local/crashplan    ext3    defaults,noauto        0 0

Uncomment “ENABLED=1” in /etc/sysconfig/iomemory-vsl to enable the init script:

vim /etc/sysconfig/iomemory-vsl
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
# used.
ENABLED=1

In the same /etc/sysconfig/iomemory-vsl file, add the RAID array(s) and mount points. For example:

MD_ARRAYS="/dev/md0"
LVM_VGS="/dev/datavg"
MOUNTS="/u2 /u2/edi /u2/eclipse /u2/eclipse/ereports /u2/pdw /u2/uvtmp /u2/kourier /usr/local/crashplan"

Enable the init script:

chkconfig iomemory-vsl on
chkconfig --list iomemory-vsl
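
Before relying on the next reboot, you can confirm that the init script brings the stack up on its own. The commands below are an illustrative check, assuming the configuration above; the init script loads the driver, assembles the listed arrays and volume groups, and mounts the listed filesystems:

service iomemory-vsl start
mount | grep /u2     # the /u2 filesystems from /etc/fstab should now be mounted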

Resources

How do I run standalone diagnostics on my IBM pSeries server?

Additional resources: