Two nodes in a cluster running OES 2015 SP1, connected via FC to a disk array.

I have a shared pool I'm trying to expand but the error given no matter
what I've tried is: "Error: There is no more disk space available. The
disk is full of active or deleted files. If salvage is enabled, purge
deleted files from the salvage area. Delete the files that are no longer
needed. Use move and split options to expand the storage area."

The pool in question is currently made up of 3 segments, and each
segment is 8TB for a total size of 24TB. I have another 8TB lun I need
to add to the pool, and it is from the same disk array as the other
working segments.

The new lun is set to GPT and shared.
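For reference, the nlvm side of that initialization (syntax from memory, so double-check it against the nlvm help output first; "sdx" is a placeholder for the real device name) would look something like:

```shell
# ASSUMPTION: syntax recalled from the OES NLVM docs, not verified here.
# Initialize the new device with a GPT partition table and mark it
# shareable for clustering; -f forces the init on a device nlvm
# thinks is in use. "sdx" is a placeholder device name.
nlvm -f init sdx format=gpt shared

# Confirm the device now shows as GPT and Shared.
nlvm list devices
```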

Things I've tried:

- Re-initializing the new device.

- Rebooting all nodes in the cluster at the same time to fully restart
the cluster.

- Deleting the lun on the storage array and creating a new one, shared
as the same lun #.

- Deleting the lun on the storage array and creating a new one, shared
as a different lun #.

- Making sure the nodes were disconnected from the lun, connecting a
Windows machine to it, deleting the partition, creating an NTFS
filesystem on the lun and then deleting it, then reconnecting the
cluster nodes and initializing the disk, to destroy any remnants of the
old partition table that might make it look like the lun has files on
it.

In between each of the things I've tried listed above I always run the
following on both nodes before trying to do the expansion:
rescan-scsi-bus.sh --forcerescan
nlvm rescan
multipath -v2
multipath -ll

I've tried different versions of iManager.

From either node I can create a new, non-cluster-enabled pool on the
lun with no problem.

I've never had problems adding a lun to this pool before as evidenced by
the fact that it currently has 3 segments.

After everything I've tried I'm guessing the error message I'm getting
is a red herring of sorts, but I can't find any troubleshooting tips to
help deal with it.

All of the documentation I can find says that to expand a cluster pool
I need to use iManager and be connected to the cluster object when
making the expansion. Can I use nlvm or nssmu connected to the master
node instead? If I have to use iManager, can I be connected to the
master node instead of the cluster object? I really don't want to
screw up the existing pool in the process, but I'm running out of
space fast and need to get more added. OES 2015 has been a godsend
since it did away with the 8TB pool limit, and 32TB is nowhere near
the current limit, so any help would be appreciated.

-Mike