Expand Linux Filesystem

If you need to expand or add space to a filesystem under Linux, please follow the procedure outlined below, replacing the directory and filesystem names with your own:

Verify the current filesystem usage and device path:

df -h /u2/eclipse

Verify that the volume group has sufficient free space:

vgs datavg

Set the new logical volume size:

lvextend -L 80G /dev/datavg/eclipse

Resize the filesystem:

resize2fs /dev/datavg/eclipse

Verify the updated filesystem utilization:

df -h /u2/eclipse

Here’s an example:

[root@eclipse ~]# df -h /u2/eclipse
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/datavg-eclipse
                       20G   18G  835M  96% /u2/eclipse
[root@eclipse ~]# vgs datavg
  VG     #PV #LV #SN Attr   VSize   VFree
  datavg   1   5   0 wz--n- 474.22G 406.22G
[root@eclipse ~]# lvextend -L 40G /dev/datavg/eclipse
  Extending logical volume eclipse to 40.00 GB
  Logical volume eclipse successfully resized
[root@eclipse ~]# resize2fs /dev/datavg/eclipse
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/datavg/eclipse is mounted on /u2/eclipse; on-line resizing required
Performing an on-line resize of /dev/datavg/eclipse to 10485760 (4k) blocks.
The filesystem on /dev/datavg/eclipse is now 10485760 blocks long.

[root@eclipse ~]# df -h /u2/eclipse
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/datavg-eclipse
                       39G   18G   20G  48% /u2/eclipse
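As a sanity check, the block count reported by resize2fs matches the new size: 40 GiB of 4 KiB blocks is 40 × 1024³ ÷ 4096 = 10485760. The arithmetic can be confirmed in the shell:

```shell
# 40 GiB expressed as a count of 4 KiB filesystem blocks
echo $(( 40 * 1024 * 1024 * 1024 / 4096 ))   # prints 10485760
```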

If LVM snapshot filesystems are mounted, please follow the procedure outlined below instead.

First, unmount the snap filesystems (and any “child” filesystems, such as /u2/eclipse/ereports under /u2/eclipse):

umount /snap/20110909.1043/u2/eclipse/

Remove the snap logical volumes (which you can identify with the “lvs” command):

lvremove /dev/datavg/lvol1

Set the new logical volume size:

lvextend -L 250G /dev/datavg/eclipse

Resize the filesystem:

resize2fs /dev/datavg/eclipse

If you are unable to unmount or lvremove a snapshot, verify that there are no processes holding the volume open.
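The usual tools for this are fuser -vm MOUNTPOINT and lsof MOUNTPOINT. As a self-contained illustration of what they report, here is a /proc-based sketch (the mount point below is an example path from this article; substitute your own, and run as root to see every process):

```shell
# List PIDs that have files open under a given path, by scanning /proc
# (a portable sketch of what `fuser -m` reports; as a non-root user you
# will only see your own processes)
holders_of() {
  prefix=$1
  for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case $target in
      "$prefix"*) pid=${fd#/proc/}; echo "${pid%%/*}" ;;
    esac
  done | sort -un
}
holders_of /snap/20110909.1043/u2/eclipse
```

Any PIDs printed must exit (or be killed) before the umount and lvremove can succeed.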

How do I back up to multiple destinations using ABS CrashPlan PRO?

Eclipse recommends backing up to multiple destinations wherever possible, echoing the recommendations of CrashPlan themselves:

CrashPlan works very hard to verify the integrity of your data and to repair corrupted archives before you need to restore. However, CrashPlan is not always able to recover from corruption before a restore is needed: hardware fails, system administrators fat-finger commands, and lightning can strike anywhere. So we always recommend that people use multiple backup destinations whenever possible.

Backing up to multiple destinations provides onsite backups for fast and convenient restores and offsite backups in case there is a problem at the local site. CrashPlan PRO supports multiple destinations to provide the highest level of availability and protection at no additional cost, because a client backup destination does not consume a seat license.

Your backup destinations can be a location with either a PRO Server or a PRO Client installed, or even an external hard drive attached to a client. CrashPlan PRO is flexible enough that you can back up to a combination of PRO Servers, PRO Clients, and locally attached drives.

Destinations can be local or remote. A local destination can be a computer on your local network (fast), a USB drive attached to your computer (faster), or another disk attached directly to your motherboard (fastest). A remote destination is one that requires an Internet connection and is generally slower than a local destination.

To maximize your protection and convenience, we recommend backing up to a combination of local and remote destinations.

To complement the off-site ABS Online Backup service, many Eclipse customers choose to add an on-site backup destination. The on-site backup destination is typically one of the following:

  • Local storage: internal or external (USB) disk attached locally to the server and used as a local folder backup destination.
  • Network storage: a UNIX (NFS) or Windows (CIFS/samba) share can be mapped on the local server to be used as a local folder backup destination.
  • Peer-to-peer: by installing the CrashPlan PRO agent on another computer on the customer’s network, the computer can be turned into a “backup server” and configured to accept incoming backups from any or all of the customer’s CrashPlan PRO agents.
  • To configure the inbound backup folder/directory: Settings > Backup > Inbound Backup > “Default backup archive location”

If you require assistance configuring an additional backup destination on an existing CrashPlan PRO agent, please contact CrashplanPro Support or open a service request with the Eclipse Systems support team.

Your data is extremely important to us. So important, in fact, that we strongly believe you shouldn’t trust just ANY single provider or destination, even us. We give you the ability to back up to multiple destinations and always recommend that you back up to a local destination (USB drive, local computer, etc) as well as to a remote destination. This yields multiple benefits:

1) Fast backup to the local destination so you’re protected while the slower, remote backup is completing
2) Fast restore from a local destination in situations where a full restore is required (e.g. laptop theft, hard drive failure)
3) Remote backup security in the event of a local disaster (e.g. fire, flood, etc)

Reference: http://support.crashplanpro.com/doku.php/recipe/back_up_to_multiple_destinations

How do I view my Eclipse tape backup logs on AIX?

The standard tape backup script keeps a number of log files that are accessible to the system administrator. To view the log files, log into your server as root and run the following commands. You may also configure the backup script to send an email every time the backup is run.

/tmp/backup.bak contains the output from the last successful tape backup command

more /tmp/backup.bak

The log for a successful tape backup will show output similar to the following:

Backing up to /dev/rmt0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rmt0
The total size is 19986274993 bytes.
Backup finished on Tue Sep  7 18:11:20 CDT 2010; there are 39071600 blocks on 1 volumes.

/tmp/backup.chk contains the start and stop timestamps for the tape backup operation

more /tmp/backup.chk

The following log file indicates the backup ran for 30 minutes.

Backup started at: Mon Feb 22 21:00:05 EST 2010
Backup ended at: Mon Feb 22 21:30:11 EST 2010
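For a same-day backup, the elapsed time can be computed directly from those two timestamps. A small sketch using awk (so it works on AIX as well as Linux; the times below are copied from the example above):

```shell
# Compute elapsed backup time from the HH:MM:SS fields in backup.chk
start="21:00:05"; end="21:30:11"
echo "$start $end" | awk '{
  split($1, s, ":"); split($2, e, ":")
  secs = (e[1]-s[1])*3600 + (e[2]-s[2])*60 + (e[3]-s[3])
  printf "backup ran for %d minutes, %d seconds\n", secs/60, secs%60
}'
# prints: backup ran for 30 minutes, 6 seconds
```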

/tmp/backup.err contains any potential error messages from the most recent backup

more /tmp/backup.err

The following log file indicates there was no tape in the drive.

backup: 0511-089 Cannot open /dev/rmt0: The device is not ready for operation.
Mount volume 1 on /dev/rmt0.
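The three log files above can also be reviewed together with a short loop (a sketch; the file paths are those used by the standard backup script):

```shell
# Print the tail of each tape backup log, noting empty or missing files
show_backup_logs() {
  for f in /tmp/backup.bak /tmp/backup.chk /tmp/backup.err; do
    echo "== $f =="
    if [ -s "$f" ]; then
      tail -n 5 "$f"
    else
      echo "(empty or missing)"
    fi
  done
}
show_backup_logs
```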

/tmp/snapsave.log logs database suspension and snapshot operations:

less /tmp/snapsave.log

The following log file shows a successful backup and details the database and snapshot operations taking place. This is very useful for troubleshooting backup issues.

--------------------------------------------------------------------------------
Tue Sep  7 21:00:05 CDT 2010: Current snapshot status
Snapshots for /u2
Current  Location          512-blocks        Free Time
*        /dev/fslv00           163840       50432 Mon Sep  6 21:01:32 CDT 2010
Snapshots for /u2/eclipse
Current  Location          512-blocks        Free Time
*        /dev/fslv01          2818048     1512448 Mon Sep  6 21:01:39 CDT 2010
Snapshots for /u2/eclipse/ereports
Current  Location          512-blocks        Free Time
*        /dev/fslv02            32768       32000 Mon Sep  6 21:01:44 CDT 2010
Snapshots for /u2/pdw
Current  Location          512-blocks        Free Time
*        /dev/fslv03          1835008     1833984 Mon Sep  6 21:01:48 CDT 2010
--------------------------------------------------------------------------------
Tue Sep  7 21:00:16 CDT 2010: Releasing and unmounting previous snapshots
Tue Sep  7 21:00:18 CDT 2010: Unmounting /snap/u2/pdw
Tue Sep  7 21:00:23 CDT 2010: Removing snapshot(s) of /u2/pdw
rmlv: Logical volume fslv03 is removed.
Tue Sep  7 21:00:31 CDT 2010: Unmounting /snap/u2/eclipse/ereports
Tue Sep  7 21:00:32 CDT 2010: Removing snapshot(s) of /u2/eclipse/ereports
rmlv: Logical volume fslv02 is removed.
Tue Sep  7 21:00:39 CDT 2010: Unmounting /snap/u2/eclipse
Tue Sep  7 21:00:40 CDT 2010: Removing snapshot(s) of /u2/eclipse
rmlv: Logical volume fslv01 is removed.
Tue Sep  7 21:00:47 CDT 2010: Unmounting /snap/u2
Tue Sep  7 21:00:47 CDT 2010: Removing snapshot(s) of /u2
rmlv: Logical volume fslv00 is removed.
--------------------------------------------------------------------------------
Tue Sep  7 21:00:53 CDT 2010: Suspending database
--------------------------------------------------------------------------------
Tue Sep  7 21:00:59 CDT 2010: Performing snapshots:
Tue Sep  7 21:00:59 CDT 2010: Taking snapshot of /u2
Snapshot for file system /u2 created on /dev/fslv00
Tue Sep  7 21:01:04 CDT 2010: Taking snapshot of /u2/eclipse
Snapshot for file system /u2/eclipse created on /dev/fslv01
Tue Sep  7 21:01:11 CDT 2010: Taking snapshot of /u2/eclipse/ereports
Snapshot for file system /u2/eclipse/ereports created on /dev/fslv02
Tue Sep  7 21:01:16 CDT 2010: Taking snapshot of /u2/pdw
Snapshot for file system /u2/pdw created on /dev/fslv03
--------------------------------------------------------------------------------
Tue Sep  7 21:01:21 CDT 2010: Database suspend released.
--------------------------------------------------------------------------------
Tue Sep  7 21:01:21 CDT 2010: Mounting snapshot filesystems
Tue Sep  7 21:01:23 CDT 2010: Mounting snapshot: /snap/u2
Tue Sep  7 21:01:25 CDT 2010: Mounting snapshot: /snap/u2/eclipse
Tue Sep  7 21:01:27 CDT 2010: Mounting snapshot: /snap/u2/eclipse/ereports
Tue Sep  7 21:01:30 CDT 2010: Mounting snapshot: /snap/u2/pdw
rmt0 changed
Tue Sep  7 21:02:05 CDT 2010: Starting backup from /snap
--------------------------------------------------------------------------------
Tue Sep  7 22:40:15 CDT 2010: Mailing backup report
--------------------------------------------------------------------------------

How do filesystem snapshots work on Linux?

To perform valid backups of your database, it is important to suspend the database. This prevents modifications of files during the backup process. By taking a point-in-time snapshot of your database files, your backup program will be capturing a “frozen” database instead of an “in motion” database.

Our standard backup script uses database suspension with snapshots to create point-in-time images of your database files. The snapshot script itself is located at /u2/UTILS/bin/snapsave_linux.sh (with a symbolic link at /bin/save for backwards compatibility).

The snapshot script is typically scheduled to run at regular intervals via crontab to create new filesystem snapshots.  Here’s an example of a snapshot backup script that is scheduled via crontab to run every night at 12:59 AM:

[root@eclipse ~]# crontab -l
59 0 * * * /u2/UTILS/bin/snapsave_linux.sh

After running the script, the snapshot filesystems are mounted under /snap, allowing read-only access by backup software. For example, the snapshot of the /u2/eclipse/LEDGER file would be located at /snap/u2/eclipse/LEDGER. When configuring backup software, it is recommended to back up every file under /snap/u2.

Since every change (delta) between the snapshot and the “live” filesystem must be recorded, the snapshots have a finite lifespan. By default, the snapshot script is configured to hold 1GB of changes before requiring a refresh. On busier systems, or on systems where the snapshots must be retained for a longer period of time to accommodate a slow backup process, the snapshot volume size may be increased by editing the snapshot backup script. You may check the status of the snapshots using the “lvs” command, which shows a usage percentage for each snapshot volume.

[root@eclipse ~]# lvs
  LV       VG     Attr   LSize   Origin   Snap%  Move Log Copy%  Convert
  eclipse  datavg owi-ao  26.00G
  ereports datavg owi-ao   1.00G
  lvol0    datavg swi-ao   1.00G u2        45.85
  lvol4    datavg swi-ao   1.00G uvtmp      0.00
  lvol5    datavg swi-a-   1.00G ereports   0.00
  lvol6    datavg swi-ao   1.00G eclipse    0.85
  u2       datavg owi-ao   4.00G
  uvtmp    datavg owi-ao   4.00G
  esupport rootvg -wi-ao   6.00G
  root     rootvg -wi-ao  20.00G
  swap     rootvg -wi-ao   4.00G

When the Snap% value reaches 100%, the snapshot volume has reached its maximum capacity for tracking changes and must be recreated by running the snapshot script again.
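A small awk filter over the lvs output can flag snapshots approaching capacity. This is a sketch only: the sample lines are inlined from the output above so the example is self-contained, and field positions may vary between LVM versions.

```shell
# Flag snapshot LVs (Attr beginning "swi") whose Snap% meets a threshold
check_snap_usage() {
  awk -v t="$1" '$3 ~ /^swi/ && $6 + 0 >= t { print $1 " snapshot is " $6 "% full" }'
}
printf '%s\n' \
  '  lvol0    datavg swi-ao   1.00G u2        45.85' \
  '  lvol6    datavg swi-ao   1.00G eclipse    0.85' |
  check_snap_usage 40
# prints: lvol0 snapshot is 45.85% full
# Live usage: lvs --noheadings | check_snap_usage 90
```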

For troubleshooting purposes, a log of the snapshot backup script is kept at /tmp/snapsave.log. Information regarding the creation, removal and expiration of snapshot LVs is also recorded in the system log (/var/log/messages).

How do I check the status of my EVault backup?

To manually check the status of your EVault backup:

  • If it’s not already installed, download and install the EVault CentralControl client from EVault’s support website.
  • If you’re using CentralControl for the first time, you’ll need to add connections to each server being backed up by EVault:
    • File -> New Agent
    • Fill in the Description, Network address, and root/Administrator user name and password fields
    • Press OK
  • Click the “+” sign next to the server you wish to check to expand and display the backup job(s)
  • Click the “+” sign next to the backup job you wish to check to expand and display the Logs folder
  • Select the Logs folder and double-click the most recent entry to display the log. A successful backup will have no errors, with output similar to the following:
06-Sep 21:03 SSET-I-04131 disconnect from the Vault at 06-Sep-2010 21:03:24 -0700
06-Sep 21:03 BKUP-I-00001 errors encountered:                        0
06-Sep 21:03 BKUP-I-00002 warnings encountered:                      0
06-Sep 21:03 BKUP-I-00003 files/directories examined:                280,340
06-Sep 21:03 BKUP-I-00004 files/directories filtered:                0
06-Sep 21:03 BKUP-I-00006 files/directories deferred:                0
06-Sep 21:03 BKUP-I-00007 files/directories backed-up:               280,340
06-Sep 21:03 BKUP-I-00008 files backed-up:                           267,837
06-Sep 21:03 BKUP-I-00009 directories backed-up:                     12,503
06-Sep 21:03 BKUP-I-00010 data stream bytes processed:               14,782,268,676 (13.76 GB)
06-Sep 21:03 BKUP-I-00011 all stream bytes processed:                14,782,268,676 (13.76 GB)
06-Sep 21:03 BKUP-I-00012 pre-delta bytes processed:                 651,912,316 (621.71 MB)
06-Sep 21:03 BKUP-I-00013 deltized bytes processed:                  118,090,710 (112.62 MB)
06-Sep 21:03 BKUP-I-00014 compressed bytes processed:                34,164,618 (32.58 MB)
06-Sep 21:03 BKUP-I-00015 approximate bytes deferred:                0 (0 bytes)
06-Sep 21:03 BKUP-I-00016 reconnections on recv fail:                0
06-Sep 21:03 BKUP-I-00017 reconnections on send fail:                0
06-Sep 21:03 SYST-I-07035 send e-mail: OK
06-Sep 21:03 BKUP-I-04128 job completed at 06-Sep-2010 21:03:26 -0700
06-Sep 21:03 BKUP-I-04129 elapsed time 00:03:11
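If a copy of the log has been saved to a file, the error count can also be checked from the command line. A sketch, assuming the log format shown above (the sample line is inlined here so the example is self-contained):

```shell
# Extract the "errors encountered" count from an EVault log line and
# flag a nonzero value
logline='06-Sep 21:03 BKUP-I-00001 errors encountered:                        0'
errors=$(printf '%s\n' "$logline" | awk '/errors encountered/ { print $NF }')
if [ "$errors" -eq 0 ]; then
  echo "backup OK"
else
  echo "backup reported $errors errors"
fi
# prints: backup OK
```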
