Friends,

I just heard from a user who received an internal bounceback with a generic 8200 error:
Generic file I/O error
Error = 8200


The POA says:

23:11:15 288 The database function 50 reported error [8200] on userkfs.db
23:11:29 896 The database function 44 reported error [8201] on useryvr.db
23:11:29 896 Queueing message for retry because of: [8201]
23:11:29 896 The database function 50 reported error [8200] on userr2x.db
23:11:29 896 The database function 50 reported error [8200] on usertop.db
23:11:29 896 The database function 50 reported error [8200] on userrfx.db
23:11:29 896 The database function 44 reported error [8201] on useryvr.db
23:11:29 896 Queueing message for retry because of: [8201]
23:11:29 664 The database function 44 reported error [8201] on useryvr.db
23:11:29 664 Queueing message for retry because of: [8201]
23:11:30 664 The database function 44 reported error [8201] on useryvr.db
23:11:30 664 Deferring message delivery because of: [8201]
23:11:36 056 The database function 29 reported error [8201] on useru6z.db


These are the only such errors in the current logs.


This coincided exactly with the backup of that volume by SEP Sesam (which backs up via SMS/SMDR, using tsafs with the embedded tsagw).
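If it helps anyone reproduce this, one thing I plan to try is watching for open handles on one of the affected databases during the backup window. This is just a sketch; the post office path below is made up and would need to be replaced with the real one.

# see which processes have the user database open while the backup runs
# (example path; substitute your actual post office / ofuser directory)
lsof /media/nss/GWVOL/gwpo/ofuser/userkfs.db
# or just the PIDs and access modes:
fuser -v /media/nss/GWVOL/gwpo/ofuser/userkfs.db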

My question is:
Is there something wrong with the tsagw included in tsafs that is allowing these locks and the subsequent errors? I just had an "oh s&*&" moment, since I've been having some fairly regular issues popping up in my weekly gwchecks.
My understanding is that the tsagw functionality requires no configuration as long as you have the right OES version (we're on OES2 SP3).
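In case it matters for comparing notes, this is roughly how I'd confirm which SMS/tsafs build is actually installed; the package name is what I'd expect on OES2 SP3 and may differ on other releases.

# show the installed Novell SMS packages (smdrd, tsafs, etc.)
rpm -qa | grep -i novell-sms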

SMDR is set to load:
autoload: tsafs --cluster
autoload: tsands
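For completeness, this is how I'd double-check what smdrd actually has loaded and pick up any changes to the autoload lines; the commands are from memory (OES2 SP3), so check the man pages if they don't match your box.

# list the TSAs currently loaded by smdrd
/opt/novell/sms/bin/smsconfig -t
# the autoload lines above live in /etc/opt/novell/sms/smdrd.conf;
# after editing them, restart the SMDR daemon:
rcnovell-smdrd restart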


GroupWise is version 8.0.2-92377.