Station A writes a byte of data to a file. Station B reads a chunk of that file.

On some stations this works fine -- I tested over 200,000 iterations of the routine without a problem. On other stations, though, once a station read the block of data and got a certain result, it kept getting that same result on later reads even though the data had changed in the meantime.

Turning off client caching fixed the problem, which strongly suggests the client is caching data that should not be cached.
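For reference, "turning off client caching" in this era usually meant disabling opportunistic locking via the registry. The values below are the ones described in Microsoft's KB article 296264 on configuring opportunistic locking; treat them as a starting point and verify against that documentation before applying them, since getting them wrong affects every network file operation on the machine.

```
Windows Registry Editor Version 5.00

; Client side (NT/2000/XP): stop the redirector from requesting oplocks
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRxSmb\Parameters]
"OplocksDisabled"=dword:00000001

; Server side: stop granting oplocks to any client
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"EnableOplocks"=dword:00000000
```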

Most of the failures were on Windows 98 stations, but the problem was also observed on XP. Correct behavior has been seen on both as well.

In all cases the file was opened without restrictions, and file locks were not used. In about half the cases someone else definitely had the file open already when the station opened it. (I believe this is irrelevant, though, as it matches the distribution of the opportunities for failure.)

What in the world is going on here???

A note to anyone trying to pick apart the lack of locks: the logic is sound even though the routine is inefficient. Other approaches would cost less CPU time, but this is legacy code that's already on the way out; the developer time simply would not be worth it.