We are moving Netware volumes from direct SCSI storage (using Adaptec
RAID controllers) to a SAN using QLogic HBAs. On an older, slower test
server (but with smaller, less complex volumes) we moved a
12 GB volume using VCU in about 13 min. That's just under 1 GB/min.
Extrapolating, we estimated that moving a 146 GB volume would take about 3
hrs. Instead, it took 34 hours! Obviously, we missed something.
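For reference, here's the back-of-the-envelope arithmetic we did (just a quick sketch using the figures quoted above):

```python
# Linear extrapolation from the test-server run -- the estimate that
# turned out to be wildly off.
test_gb = 12.0       # volume size moved on the test server (GB)
test_minutes = 13.0  # observed VCU time (min)

rate = test_gb / test_minutes         # ~0.92 GB/min
estimate_hr = 146.0 / rate / 60.0     # naive prediction for a 146 GB volume
actual_hr = 34.0                      # what actually happened

print(f"rate ~ {rate:.2f} GB/min")
print(f"estimated {estimate_hr:.1f} hr, actual {actual_hr:.0f} hr "
      f"({actual_hr / estimate_hr:.0f}x over)")
```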

Our results:
- The physically faster production servers performed at a fraction of the
speed of the slower test server.
- The physically identical "fast" production servers performed very
differently, with the one running NW6 SP5 MUCH slower than the already
slow NW65 SP5 server. However, this slow-performing "fast" server is
running the same OS version as the physically slower, but effectively
faster, test server.
- Most stats for the two identical production servers were similar, with
some notable differences: The slower one was paging, with 104K requests
in and 197K requests out, while the faster one had no paging. The slower
one had a cache utilization Allocated Block Count of 2,185,195, while
the faster one had 45,760. The slower one had 308,992 page faults, while
the faster one had 39,710. The slower one had Long Term Cache Hits of 13%,
while the faster one had 29%. Short Term Cache Hits for both were 100%.
Both ran at 1-2% utilization with bumps to 28%. Current disk requests on
both ran 0-2 with bumps to 22.

Further details that have us thoroughly confused:

Test server:
- Processor speed: 1000 MHz
- RAM: 1 GB
- Internal Adaptec SCSI controller running ADPT160M.HAM for local drives
- Qlogic 2340 FC HBA to the SAN running QL2X00.HAM (firmware v1.54)
- NW60 SP5
- Server 5.60.05
- eDir 8.7.3.7
- NDS 10552.79
- VCU transfer rate for each volume tried was just under 1 GB/min.
- Using Portlock Storage Manager to copy a volume, performance was
12,500 KB/s.

Production server 1:
- Processor speed: 2993 MHz
- RAM: 2 GB
- External local drives connected to 4 channels configured for RAID 1
on Adaptec 3410S RAID controllers running I20PCI.HAM, BKSTROSM.HAM, &
MEGARIDE.HAM
- Qlogic 2340 FC HBA to the SAN running QL2X00.HAM (firmware v1.54)
- OES NW6.5 SP5
- Server 5.70.05
- eDir 8.7.3.7
- NDS 10552.79
- 2 NSS volumes transferred using VCU
- Vol 1 on a 73 GB RAID 1 mirrored SCSI set had 61 GB of data in 600,000+
files and 14,000+ directories and took 4.5 hr for a transfer rate of 0.23
GB/min (compared to <1 GB/min on test).
- Vol 2 on a 146 GB RAID 1 mirrored SCSI set had 140 GB of data in
1,180,841 files in 253,630 directories and took 34 hr, 12 min for a
transfer rate of 0.07 GB/min (compared to <1 GB/min on test).

Production server 2: (physically identical to Production 1)
- Processor speed: 2993 MHz
- RAM: 2 GB
- External local drives connected to 4 channels configured for RAID 1
on Adaptec 3410S RAID controllers running I20PCI.HAM, BKSTROSM.HAM, &
MEGARIDE.HAM
- Qlogic 2340 FC HBA to the SAN running QL2X00.HAM (firmware v1.54)
- NW60 SP5 (same OS as test)
- Server 5.60.05
- eDir 8.7.3.7
- NDS 10552.79
- 1 NSS volume transferred with VCU
-- Vol 1 on a 73 GB RAID 1 mirrored SCSI set had 75 GB of data in 800,000+
files and 18,000+ directories and took 9 hr for a transfer rate of 0.16
GB/min (compared to <1 GB/min on test and 0.23 GB/min on Prod 1).
- Due to the slowness of VCU on this server, we used Portlock Storage
Manager on the 2nd volume.
-- Vol 2 on a 146 GB RAID 1 mirrored SCSI set had 140 GB of data. This
volume, equivalent to Prod 1's second volume, contained an estimated
1,200,000 files and 275,000 directories. Storage Manager's performance on
the volume copy was about 2,600 KB/s (compared to test's 12,500 KB/s) and
took 16 hours. Because of another problem we did not have time for a
restore.
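To compare the two tools in one unit, here's the trivial conversion of Portlock's KB/s figures into the GB/min we quoted for VCU (decimal units assumed; binary KB would shift the numbers by a few percent):

```python
# Unit check: Portlock reports KB/s; VCU rates above are quoted in GB/min.
def kbs_to_gb_per_min(kbs, kb_per_gb=1_000_000):
    """Convert KB/s to GB/min, assuming decimal (1 GB = 1,000,000 KB)."""
    return kbs * 60 / kb_per_gb

print(f"test server: {kbs_to_gb_per_min(12_500):.2f} GB/min")  # Portlock on test
print(f"prod 2:      {kbs_to_gb_per_min(2_600):.2f} GB/min")   # Portlock on Prod 2
```

So Portlock on the test server ran at roughly the same effective rate as VCU there (just under 1 GB/min), and both tools slowed down by a similar factor on production.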

Is there a way to get a reasonable estimate of how long a volume move
using VCU (or Portlock) would take?

What variables come into play? Obviously you can't do a linear
extrapolation from a simpler test system.

We used VCU a few years ago (to convert from Traditional to NSS) and
it was quite fast. We have used Storage Manager many times and it was
fast too. In both cases, the volumes in question were smaller.

What impact does the number of directories have?
What impact does NSS have in the complexity of data for volume moves?

What else could we examine to explain why the much faster servers had
transfer rates so much slower than the slower test server?

We still have three 146 GB volumes, each with 1.3 million files and
several hundred thousand directories, to transfer to the SAN. How can we
calculate a reasonable estimate?
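One approach we've been toying with (purely an assumed model, not anything VCU or Portlock documents): treat the total time as a per-GB cost plus a per-directory cost, and fit both constants from the two completed Prod 1 runs. The 300,000-directory figure below is a placeholder for "several hundred thousand":

```python
# Sketch: fit time = per_gb * GB + per_dir * directories from two completed
# VCU runs on the same hardware (Prod 1's two volumes), then predict the
# remaining moves. An assumed two-variable model, nothing official.

def solve2(a1, b1, r1, a2, b2, r2):
    """Solve a1*x + b1*y = r1, a2*x + b2*y = r2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    return (r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det

# Prod 1, Vol 1: 61 GB, 14,000 dirs, 4.5 hr = 270 min
# Prod 1, Vol 2: 140 GB, 253,630 dirs, 34 hr 12 min = 2052 min
per_gb, per_dir = solve2(61, 14_000, 270, 140, 253_630, 2052)
print(f"~{per_gb:.2f} min/GB, ~{per_dir * 1000:.1f} min per 1,000 dirs")

# Remaining volume: 146 GB, 300,000 dirs (hypothetical count -- all we
# know is "several hundred thousand")
est_min = 146 * per_gb + 300_000 * per_dir
print(f"estimate: {est_min / 60:.0f} hr")
```

Checking the fit against Prod 2's first volume (75 GB, 18,000 directories) predicts about 5.6 hr versus the 9 hr we observed, so at best this gives a floor; the paging and cache differences noted above clearly matter too.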

Thanks for any thoughts or suggestions.

Lyle