Hi Todd,

I've submitted so many of these surveys that I wonder whether another one will really make any difference. Sure, I like to give the engineer a good rating when (s)he has done a great job, and I have given a few a not-so-good rating when I feel (s)he has dropped the ball, but how many times do I have to say I hate the product or would/wouldn't recommend NTS to someone? After twenty or thirty or forty times, I'm sure you've got the message. :)

I've also noticed that many times the survey questions just don't apply.

If I find myself stuck, open an SR, and the engineer gets me unstuck, the survey questions are quite applicable. On the other hand, if I have a low-impact issue that turns out to be caused by a bug, it is a completely different story.

  • A bug report is opened.
  • At some point it is (hopefully) fixed.
  • It goes through testing and is finally released.

This process can take (and has taken) as long as six months to complete. Is the SR supposed to remain open the whole time, even if I am no longer impacted? In most cases it is closed, with my permission, after a few weeks or months.

Next comes the survey:

  • Am I happy with the response? I don't know.
  • Was it fixed in a timely manner? I don't know.
  • etc.

I'm concerned that I am providing an incorrect answer no matter how I respond, and that your statistics will be skewed because many of the questions are not applicable.

Perhaps adding a "Not Applicable" option to some of the questions would help?