I've read a few posts over the last few months concerning I/O performance
when expanding a pool with multiple partitions. But my question concerns a
particular scenario rather than a general discussion of randomly throwing
new space/partitions into a pool to be consumed.

Let's say I have a 500GB partition/pool that resides on a single device/RAID
group. Now I want to expand to 1TB by adding another 500GB. If I were using
local storage, I would add some more disks and expand the array (which
restripes the existing data) and the logical drive. If I were using SAN
storage, I would create a MetaLUN (striping the existing data across the
added LUN, not concatenating it) using the original 500GB LUN plus another
500GB LUN on its own dedicated RAID group.
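To make the striping-vs-concatenation distinction concrete, here is a toy model (not NSS- or MetaLUN-specific; the block counts and two-LUN layout are illustrative assumptions) of how logical blocks land on two equal-sized LUNs under each approach:

```python
# Toy model: distribution of logical blocks across two equal LUNs.
# BLOCKS_PER_LUN is a made-up small number for illustration only.
BLOCKS_PER_LUN = 4

def striped_lun(block):
    """Striping: consecutive blocks alternate between the two LUNs,
    so sequential I/O is spread across both backend RAID groups."""
    return block % 2

def concatenated_lun(block):
    """Concatenation: LUN 0 is filled completely before LUN 1 is used,
    so I/O against the original data hits only the first RAID group."""
    return 0 if block < BLOCKS_PER_LUN else 1

blocks = range(2 * BLOCKS_PER_LUN)
print([striped_lun(b) for b in blocks])       # alternates between LUNs
print([concatenated_lun(b) for b in blocks])  # LUN 0 first, then LUN 1
```

The point of the sketch is only that with striping, existing data is rewritten across both LUNs, whereas concatenation leaves the original data where it was.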

So, in NSSMU the device that holds my original 500GB of data now shows
500GB of free space, which I partition and add to the pool so that I have
1TB of space in my pool.

My question is: since all the data has been restriped across the backend
disks in both of my scenarios, should I expect lower I/O performance in
this configuration (1TB of space made up of two 500GB partitions) compared
to a device that was created as 1TB up front and presented to the pool as
a single 1TB partition?

Thanks in advance,