How do I contact CrashPlan for support?

If you have an active subscription to Epicor’s ABS Online Backup or ABS Local Backup services, you may contact Epicor for support.

If you have purchased CrashPlan PRO licenses for your organization, and are running your own CrashPlan PRO server, you may contact CrashPlan directly for support. Here are some helpful tips that will speed up problem resolution:

Before contacting CrashPlan for support, please gather the CrashPlan PRO client backup logs (see: How do I gather CrashPlan logs?).
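
If you need to collect the client logs by hand, here is a minimal sketch, assuming a Linux client with a default install location (the log path is an assumption; adjust it for your platform and install directory):

# Bundle the CrashPlan PRO client log directory for a support ticket.
# /usr/local/crashplan/log is an assumed default install path.
tar -cvf /tmp/crashplan-logs.tar /usr/local/crashplan/log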

Also, you may wish to consult CrashPlan’s list of Frequently Asked Questions and search their Support Forum.

Once you have the logs, contact CrashPlan support through the channels listed on their support site.

How do I back up to multiple destinations using ABS CrashPlan PRO?

Eclipse recommends backing up to multiple destinations wherever possible, echoing the recommendations of CrashPlan themselves:

CrashPlan works very hard to verify the integrity of your data and to repair corrupted archives before you need to restore. However, CrashPlan is not always able to recover from corruption before a restore is needed. Hardware fails, system administrators fat-finger commands, and lightning can strike anywhere. So we always recommend that people use multiple backup destinations whenever possible.

Backing up to multiple destinations provides onsite backups for fast and convenient restores and offsite backups in case there is a problem at the local site. CrashPlan PRO supports multiple destinations to provide the highest level of availability and protection at no additional cost, because a client backup destination does not consume a seat license.

Your backup destination can be a location with either a PRO Server or a PRO Client installed, or even an external hard drive attached to a client. CrashPlan PRO is flexible enough that you can back up to any combination of PRO Servers, PRO Clients, and locally attached drives.

Destinations can be either local or remote. A local destination can be a computer on your local network (fast), a USB drive attached to your computer (faster), or another disk attached directly to your motherboard (fastest). A remote destination is one that requires an Internet connection; remote destinations are generally slower than local ones.

To maximize your protection and convenience, we recommend backing up to a combination of local and remote destinations.

To complement the off-site ABS Online Backup service, many Eclipse customers choose to add an on-site backup destination. The on-site backup destination is typically one of the following:

  • Local storage: internal or external (USB) disk attached locally to the server and used as a local folder backup destination.
  • Network storage: a UNIX (NFS) or Windows (CIFS/Samba) share can be mounted on the local server and used as a local folder backup destination (see the mount example after this list).
  • Peer-to-peer: by installing the CrashPlan PRO agent on another computer on the customer’s network, the computer can be turned into a “backup server” and configured to accept incoming backups from any or all of the customer’s CrashPlan PRO agents.
  • Configuring the inbound backup location: on the receiving computer, go to Settings > Backup > Inbound Backup > “Default backup archive location”.
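
For the network storage option, here is a minimal sketch of mounting a Windows (CIFS) share for use as a local folder destination on a Linux client. The server name, share name, credentials, and mount point are all hypothetical placeholders:

# Mount a CIFS share to serve as a CrashPlan PRO "local folder" destination.
# //fileserver/crashplan-backup and backupuser are placeholder names.
mkdir -p /mnt/crashplan-backup
mount -t cifs //fileserver/crashplan-backup /mnt/crashplan-backup -o username=backupuser
# Then select /mnt/crashplan-backup as the folder destination in the CrashPlan PRO client.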

If you require assistance configuring an additional backup destination on an existing CrashPlan PRO agent, please contact CrashPlan PRO support or open a service request with the Eclipse Systems support team.

Your data is extremely important to us. So important, in fact, that we strongly believe you shouldn’t trust just ANY single provider or destination, even us. We give you the ability to back up to multiple destinations and always recommend that you back up to a local destination (USB drive, local computer, etc.) as well as to a remote destination. This yields multiple benefits:

1) Fast backup to the local destination so you’re protected while the slower, remote backup is completing
2) Fast restore from a local destination in situations where a full restore is required (e.g. laptop theft, hard drive failure)
3) Remote backup security in the event of a local disaster (e.g. fire, flood, etc)

Reference: http://support.crashplanpro.com/doku.php/recipe/back_up_to_multiple_destinations

AIX Printer Setup

NOTE: The instructions below are for configuring a new printer for “traditional” or “legacy” printing on an AIX server, not for printing via Eclipse Forms.

In each of the steps below, substitute your printer’s IP address and new “lp” number.

Add an entry for the printer’s hostname and IP address to the /etc/hosts file:

echo "172.17.189.5	lp1" >> /etc/hosts

Add an AIX print queue:

/usr/lib/lpd/pio/etc/piomisc_ext mkpq_remote_ext  -q 'lp1' -h 'lp1' -r 'lp1' -t 'aix' -T '999' -C 'FALSE'
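
You can verify that the new queue exists and is ready before continuing:

# Check the status of the new AIX print queue.
lpstat -plp1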

Add the UniVerse driver file:

echo "lp -dlp1" > /usr/spool/uv/lp1.dvr
chmod 777 /usr/spool/uv/lp1.dvr
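
At this point you can send a test page through the AIX queue directly, before configuring UniVerse:

# A line of text should print on lp1 if the queue is working.
echo "AIX print queue test" | lp -dlp1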

Add the UV print queue:

cd /u2/uv
uv

Select Spooler -> Device -> Maintain Devices
Use ENTER to advance, selecting all defaults unless otherwise specified.

  • Name = LP1 (shows all caps)
  • Path = /dev/null
  • Driver = lp1.dvr (ignore error)
  • Lock = lock.lp1

Press ESC -> Q -> ENTER to exit

Launch Eterm and log into Eclipse to set up the Eclipse print queue
Select F2 -> F -> P -> A (Assign Printer)

  • Printer/Fax = new
  • Name = 1 (# from lp#)
  • Type = (press F10 to select)
  • ESC to apply and exit
  • L (Location Maintenance)
  • Location = NEW
  • Name = HERE
  • Ship Ticket Branch = (blank)
  • Physical Branch = (blank)
  • Printer = 1 (number of printer)
  • ESC to apply and exit

How do I view my Eclipse tape backup logs on AIX?

The standard tape backup script keeps a number of log files that are accessible to the system administrator. To view the log files, log into your server as root and run the following commands. You may also configure the backup script to send an email every time the backup is run.
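
If you prefer to review every log in one pass, a quick loop over the file paths described below works from the root shell:

# Display each backup log with a separator between files.
for f in /tmp/backup.bak /tmp/backup.chk /tmp/backup.err /tmp/snapsave.log
do
    echo "===== $f ====="
    cat "$f"
done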

/tmp/backup.bak contains the output from the last successful tape backup command

more /tmp/backup.bak

The log for a successful tape backup will show output similar to the following:

Backing up to /dev/rmt0.
Cluster 51200 bytes (100 blocks).
Volume 1 on /dev/rmt0
The total size is 19986274993 bytes.
Backup finished on Tue Sep  7 18:11:20 CDT 2010; there are 39071600 blocks on 1 volumes.

/tmp/backup.chk contains the start and stop timestamps for the tape backup operation

more /tmp/backup.chk

The following log file indicates the backup ran for 30 minutes.

Backup started at: Mon Feb 22 21:00:05 EST 2010
Backup ended at: Mon Feb 22 21:30:11 EST 2010

/tmp/backup.err contains any error messages from the most recent backup

more /tmp/backup.err

The following log file indicates there was no tape in the drive.

backup: 0511-089 Cannot open /dev/rmt0: The device is not ready for operation.
Mount volume 1 on /dev/rmt0.
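
If you see the “device is not ready” error above, confirm that a tape is loaded and that the drive itself shows as Available before re-running the backup:

# List tape devices and their state; the drive should report "Available".
lsdev -Cc tape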

/tmp/snapsave.log logs database suspension and snapshot operations:

less /tmp/snapsave.log

The following log file shows a successful backup and details the database and snapshot operations taking place. This is very useful for troubleshooting backup issues.

--------------------------------------------------------------------------------
Tue Sep  7 21:00:05 CDT 2010: Current snapshot status
Snapshots for /u2
Current  Location          512-blocks        Free Time
*        /dev/fslv00           163840       50432 Mon Sep  6 21:01:32 CDT 2010
Snapshots for /u2/eclipse
Current  Location          512-blocks        Free Time
*        /dev/fslv01          2818048     1512448 Mon Sep  6 21:01:39 CDT 2010
Snapshots for /u2/eclipse/ereports
Current  Location          512-blocks        Free Time
*        /dev/fslv02            32768       32000 Mon Sep  6 21:01:44 CDT 2010
Snapshots for /u2/pdw
Current  Location          512-blocks        Free Time
*        /dev/fslv03          1835008     1833984 Mon Sep  6 21:01:48 CDT 2010
--------------------------------------------------------------------------------
Tue Sep  7 21:00:16 CDT 2010: Releasing and unmounting previous snapshots
Tue Sep  7 21:00:18 CDT 2010: Unmounting /snap/u2/pdw
Tue Sep  7 21:00:23 CDT 2010: Removing snapshot(s) of /u2/pdw
rmlv: Logical volume fslv03 is removed.
Tue Sep  7 21:00:31 CDT 2010: Unmounting /snap/u2/eclipse/ereports
Tue Sep  7 21:00:32 CDT 2010: Removing snapshot(s) of /u2/eclipse/ereports
rmlv: Logical volume fslv02 is removed.
Tue Sep  7 21:00:39 CDT 2010: Unmounting /snap/u2/eclipse
Tue Sep  7 21:00:40 CDT 2010: Removing snapshot(s) of /u2/eclipse
rmlv: Logical volume fslv01 is removed.
Tue Sep  7 21:00:47 CDT 2010: Unmounting /snap/u2
Tue Sep  7 21:00:47 CDT 2010: Removing snapshot(s) of /u2
rmlv: Logical volume fslv00 is removed.
--------------------------------------------------------------------------------
Tue Sep  7 21:00:53 CDT 2010: Suspending database
--------------------------------------------------------------------------------
Tue Sep  7 21:00:59 CDT 2010: Performing snapshots:
Tue Sep  7 21:00:59 CDT 2010: Taking snapshot of /u2
Snapshot for file system /u2 created on /dev/fslv00
Tue Sep  7 21:01:04 CDT 2010: Taking snapshot of /u2/eclipse
Snapshot for file system /u2/eclipse created on /dev/fslv01
Tue Sep  7 21:01:11 CDT 2010: Taking snapshot of /u2/eclipse/ereports
Snapshot for file system /u2/eclipse/ereports created on /dev/fslv02
Tue Sep  7 21:01:16 CDT 2010: Taking snapshot of /u2/pdw
Snapshot for file system /u2/pdw created on /dev/fslv03
--------------------------------------------------------------------------------
Tue Sep  7 21:01:21 CDT 2010: Database suspend released.
--------------------------------------------------------------------------------
Tue Sep  7 21:01:21 CDT 2010: Mounting snapshot filesystems
Tue Sep  7 21:01:23 CDT 2010: Mounting snapshot: /snap/u2
Tue Sep  7 21:01:25 CDT 2010: Mounting snapshot: /snap/u2/eclipse
Tue Sep  7 21:01:27 CDT 2010: Mounting snapshot: /snap/u2/eclipse/ereports
Tue Sep  7 21:01:30 CDT 2010: Mounting snapshot: /snap/u2/pdw
rmt0 changed
Tue Sep  7 21:02:05 CDT 2010: Starting backup from /snap
--------------------------------------------------------------------------------
Tue Sep  7 22:40:15 CDT 2010: Mailing backup report
--------------------------------------------------------------------------------

How do filesystem snapshots work on Linux?

To perform valid backups of your database, it is important to suspend the database. This prevents modifications of files during the backup process. By taking a point-in-time snapshot of your database files, your backup program will be capturing a “frozen” database instead of an “in motion” database.

Our standard backup script uses database suspension with snapshots to create point-in-time images of your database files. The snapshot script itself is located at /u2/UTILS/bin/snapsave_linux.sh (with a symbolic link at /bin/save for backwards compatibility).
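
Conceptually, the script suspends the database and then issues standard LVM snapshot commands for each filesystem. Here is a minimal sketch for a single volume (the volume group and origin names match the lvs output below; the snapshot name and mount point are illustrative assumptions):

# Create a 1GB copy-on-write snapshot of the eclipse logical volume...
lvcreate -L 1G -s -n eclipse_snap /dev/datavg/eclipse
# ...and mount it read-only where the backup software can read it.
mkdir -p /snap/u2/eclipse
mount -o ro /dev/datavg/eclipse_snap /snap/u2/eclipse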

The snapshot script is typically scheduled to run at regular intervals via crontab to create new filesystem snapshots. Here’s an example of a crontab entry that runs the snapshot script every night at 12:59 AM:

[root@eclipse ~]# crontab -l
59 0 * * * /u2/UTILS/bin/snapsave_linux.sh

After running the script, the snapshot filesystems are mounted under /snap, allowing read-only access by backup software. For example, the snapshot of the /u2/eclipse/LEDGER file would be located at /snap/u2/eclipse/LEDGER. When configuring backup software, it is recommended to back up every file under /snap/u2.
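
For example, here is a minimal sketch of archiving the snapshot tree with tar (the destination path is an assumption; point it at your real backup target):

# Archive everything under /snap/u2 into a compressed tarball.
tar -czf /backup/eclipse-snapshot.tar.gz -C /snap u2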

Since every change (delta) between the snapshot and the “live” filesystem must be recorded, the snapshots have a finite lifespan. By default, the snapshot script is configured to hold 1GB of changes before requiring a refresh. On busier systems, or on systems where the snapshots must be retained for a longer period of time to accommodate a slow backup process, the snapshot volume size may be increased by editing the snapshot backup script. You may check the status of the snapshots using the “lvs” command, which shows a usage percentage for each snapshot volume.

[root@eclipse ~]# lvs
  LV       VG     Attr   LSize   Origin   Snap%  Move Log Copy%  Convert
  eclipse  datavg owi-ao  26.00G
  ereports datavg owi-ao   1.00G
  lvol0    datavg swi-ao   1.00G u2        45.85
  lvol4    datavg swi-ao   1.00G uvtmp      0.00
  lvol5    datavg swi-a-   1.00G ereports   0.00
  lvol6    datavg swi-ao   1.00G eclipse    0.85
  u2       datavg owi-ao   4.00G
  uvtmp    datavg owi-ao   4.00G
  esupport rootvg -wi-ao   6.00G
  root     rootvg -wi-ao  20.00G
  swap     rootvg -wi-ao   4.00G

When the Snap% value reaches 100%, the snapshot volume has reached its maximum capacity for tracking changes and must be recreated by running the snapshot script again.
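
A simple way to keep an eye on this is a threshold check built around lvs; a hedged sketch follows (the 90% threshold is an arbitrary choice):

# Print any snapshot LV whose change tracking is more than 90% full.
lvs --noheadings -o lv_name,snap_percent | awk '$2+0 > 90 {print $1" is "$2"% full"}'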

For troubleshooting purposes, a log of the snapshot backup script is kept at /tmp/snapsave.log. Information regarding the creation, removal and expiration of snapshot LVs is also recorded in the system log (/var/log/messages).