I'm having problems understanding the concepts behind the Validator.
Can anyone describe the thinking behind the design, perhaps with an example test suite setup?

Specifically I don't understand:

* Is the idea that you create small suite files that execute an isolated test?
- What speaks for this is that I cannot continue executing (part of) a suite on failure.
This means that if a test fails, I get no further information on the state of the system.
When testing, I expect to test different functionality or scenarios independently to identify multiple problems as early as possible.
- What also speaks for this is that I cannot execute a single test from the command line if I want to script test execution.
- What speaks against it is that variables are stored inside a single test suite and cannot be overridden.
If I set up multiple small test parts, I don't want to be forced to maintain passwords in multiple files.
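For contrast, here is a minimal sketch of what I mean by keeping credentials in one shared place instead of copying them into every suite file. This is a pyUnit-style illustration with made-up names and an invented config layout, not anything Validator actually provides:

```python
import configparser
import io

# Hypothetical: one shared credentials source (a single checked-in file,
# or one kept outside version control entirely) that every suite reads,
# instead of a password duplicated into each test definition file.
SHARED_CONFIG = io.StringIO("""
[credentials]
user = tester
password = s3cret
""")

config = configparser.ConfigParser()
config.read_file(SHARED_CONFIG)

user = config["credentials"]["user"]
password = config["credentials"]["password"]
```

Changing the password then means editing one place, and the test definitions themselves never churn in version control because of it.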

* Configuration Management
I'm used to having control over changes in my test-cases by checking them in to a source-control system, e.g. git.
I've set up git and tried to check in the test suites. No problem, BUT:
- Since I cannot create temporary variables in each test case/test template, or mark them as volatile, the variables are updated simply by executing tests.
This makes it hard to identify relevant changes in test files and more or less forces committing the irrelevant ones.
- To keep tests isolated, we don't want to run everything from the same Validator instance, so I planned to use git to distribute test cases between servers.
This, too, is more or less blocked by the design decision to put variables in the test definition file itself.
Also, as mentioned in https://validator.ideascale.com/a/dt...a-tab-comments, state is stored in the same file.

* Test execution
If I want to execute a test with a test step temporarily disabled, that flag is stored in the test definition file. Humans being imperfect, this is easily forgotten, and the test may later be executed without all test steps enabled.
Since, as mentioned earlier, I cannot force tests to continue on failure, I have felt forced to disable steps just to get a more complete view of the system state after a run. That in turn increases the risk of executing tests with steps still disabled.

* Test cleanup
To be able to create independent test cases, I want test cases or suites to be self-contained and to clean up after themselves after execution, even on failure.
As mentioned before, I want to be able to execute as many of my test cases as possible, even if some of them fail, to get the big picture as soon as possible.
See https://validator.ideascale.com/a/dt...a-tab-comments

Is anyone doing any serious testing with this tool, or am I just spoiled by working with 'real' source code and the flexibility of frameworks like pyUnit, JUnit, and NUnit?

Anyone using e.g. pyUnit for testing as an alternative?
Maybe NetIQ could provide a python module with appropriate support instead?

The developer responses on IdeaScale seem rather sparse, which contradicts geoff's comment in the 'Will there be a new version of Validator?' thread.
(Not literally, since he says 'active in reading', not responding, but reading alone does not add value.)

Best Regards