Building the MWiF Test Plan
Moderator: Shannon V. OKeets
Building the MWiF Test Plan
Up to this point the testing of MWiF has been somewhat random ("click here and see what happens"), guided mainly by the previous WiF experience of the test team. This model was adequate for the initial "alpha" stages because more sophisticated testing was not possible until the basic MWiF code infrastructure was in place.
We will soon need to be testing against a more sophisticated test plan. This will require the combined experience of WiF rules lawyers and experienced software testers who are familiar with the discipline of writing a test plan against a given specification (in this case, the World In Flames rulebook). I would like to call for two or three volunteers who are happy to share this load.
I am interested in hearing from people who:
* do software testing for a living.
* are willing to accept the shared responsibility of writing a test plan for MWiF.
* are willing to spend a few hours a week in helping us develop and manage this test plan.
These volunteers will not be downloading the software. Like me, their primary role will be to help the other testers use their time most effectively. If you've ever been a test team leader then you know what I mean.
Remember that this role does not involve actually testing the software (although I am sure your dedication will be noticed [;)]).
Volunteers, one step forward!
/Greyshaft
RE: Building the MWiF Test Plan
I do not know the rules well enough to be a good candidate for this myself, but I suspect you'd get a better response if you nominated a small group of testers to do this as well. I understand the desire not to have too many active testers bickering and contradicting one another. But at the same time, it is important to understand the motivational drivers of a volunteer workforce. I don't want to discount the desire of everyone here to help ensure this is a quality product, but even with access to the software you tend to see a degree of turnover among volunteers. Without it, I suspect turnover might be even higher. And if the intent of these positions is to plan and monitor the testing, then I would expect this might be the area where turnover would have an even more pronounced impact.
Furthermore, I would suggest that communication between the planners and the testers would be paramount. To be effective, the planners will certainly need access to the development forums, and differentiating them as non-testers just builds artificial divisions among the community. Finally, allowing the planners to have access to the software would give them additional perspective that would greatly enhance their ability to understand and communicate with the testers.
For a lot of reasons, I would suggest that these planners should have access to the software. Since you have openings now, I would encourage you to fill at least some of those slots with people who would be able and willing to commit some additional time and serve as both tester and planner (and I am saying that as someone who admits he isn't qualified to be a planner and therefore could be dislodged as a tester by my own suggestion).
- Peter Stauffenberg
RE: Building the MWiF Test Plan
I agree with jchastain. Why not let some testers have a dual role, being regular testers and also writing test plans etc.? Taking part in the former shouldn't exclude you from the latter and vice versa.
I think it would be hard to make test plans for a program you haven't even seen. A person dedicated to test planning, and maybe even writing test cases for the others to run, would have to work closely with Steve and also have access to the program so he can better specify what they should test. Steve can provide documentation of the program structures etc.
At my work we always hire some people to be responsible for organizing the testing when we develop new versions of our programs, especially when those developments are project related. These test people make test plans, gather all the bug reports from the testers, and send the relevant ones to the programmers to fix. They even organize the test runs of value chains etc., writing test cases for the testers to perform. The people responsible for all the testing don't have to be experts on the program or regular users, but they need access to the program to verify reported bugs and become familiar with how the program works, so they know what needs to be tested.
RE: Building the MWiF Test Plan
MWiF is an extremely complex game and, unlike other complex games (think of Grigsby's War in the Pacific), it is being developed against a specification. Grigsby and his team had the luxury of being able to define the rules as they went along, and if a feature didn't work they could always delete it and the end user would be none the wiser. The game was defined by the software and that was the end of the story.
MWiF isn't like that. The computer game is defined by the boardgame and apart from documented exceptions (such as changing the scale on the Pacific Map) everybody expects the computer game will adhere to the boardgame rules. In any complex software environment the scope of the application passes beyond the ability of any one individual to comprehend the entire model. Even the WiF lawyers (that doesn't include me) can make mistakes in their interpretation of the rules.
So...
1. How can we make sure that everything is tested unless there is someone with a list of tests asking for a volunteer to (for example) "Run test 1047. Execute an amphibious landing against an empty hex. Confirm that the notional defender contributed to the defense. Run the same landing against an occupied hex. Confirm that the notional defender did not contribute to the defense"? While there will be plenty of invasions executed during the testing cycle, it may well be that no-one stops to check that the notional defender only appears when he is supposed to appear... unless we are working from a Test Plan.
2. Given point 1, how can we create a list of tests unless someone volunteers to do this? Writing a test plan takes time and concentration. Any management role in any profession (medicine, engineering, the armed forces, you name it) involves passing the pointy stick to the crew on the ready line and devoting yourself to doing your management job properly. Like everyone else who jumps into the software, I am tempted to 'play' with it and just report bugs. Unfortunately I can very easily chew up all of my MWiF time doing just that, and we wind up no further along in being able to say MWiF has been properly tested.
Consider the mindset of the testers (this includes me when I am in Tester mode - an observation, not a criticism):
* Do they play with every optional rule or do they avoid optional rules they dislike?
* Do they play every country in every scenario or are they focussed on one or two favorites?
* Do they try every different strategy or have they settled on one or two optimum approaches to victory and pursue those at all costs?
Even testers who make a conscious effort to pursue a balanced approach with their testing will find it difficult to claim they have completely tested the software unless they are working from a documented plan.
I don't see the planner position as being a short cut into testing; otherwise the planner might find themselves doing too much testing and not enough planning. It is frustrating watching others 'playing' the game while my time is spent dividing the rule set up into discrete tests and considering how to sweet-talk the testers into running these tests... but someone's got to do it.
/Greyshaft
RE: Building the MWiF Test Plan
I'm not an expert at game/software testing, but I have some thoughts about what Greyshaft just wrote:
The problem with a game like MWiF is that to be able to "run test 1047", you need to start the game from one of the scenario starts, set up the units, and go through all the steps and phases for HOURS until a situation where you can "run test 1047" comes up.
What I mean is that you can't jump to a specific situation and test it. You have to actually play the game.
I think we will be more efficient in testing MWiF by maintaining a common bug list that each of us can access, so that everyone knows whether a bug has already been reported, can append new details to it, and can check whether it has been fixed when the situation comes up in their own game. I do not believe very much in a test plan - at least not in such a detailed one. A broader test plan that asks players to focus on the naval aspect of the game, or on resource transportation, etc., could work, but testing one specific step in isolation I don't think is possible.
Moreover, I'm quite uneasy about discussing this on the public list. If anyone takes this on, it MUST be someone from the playtest group, as I think talking about this test business here goes against the non-disclosure agreement. Could we continue this on the testing forum?
"Run test 1047. Execute an amphibious landing against an empty hex . Confirm that the notional defender contributed to the defense. Run the same landing against an occupied hex. Confirm that the notional defender did not contribute to the defense"?
Problem is with a game like MWiF, that to be able to "run test 1047", you need to start the game from one of the scenario starts, setup the units, go through all the steps, phases for HOURS until a situation where you can "run test 1047" happens.
What I mean is that you can't jump to a specific situatin to test and test it. You have to actually play the game.
I think that we will be more efficient in testing MWiF in maintaining a common bug list that each of us can access, so that each of us knows if a bug was reported or not, he can append to the list with new details, check that the bug is solved or not when the situation happen to him, but I do not believe very much in a test plan. At least, I do not believe in such a detailed test plan. Now, if the test plan is more broad, and ask players to focus on the naval aspect of the game, or on the resource transportation, etc... but specificaly on one step I think that this is not possible.
Moreover, I'm quite uneasy at speaking about this on the public list. Also, I think that if ever a person did this, it MUST be someone from the playtest group, as I think that this goes against the non disclosure agreement to talk about this test business here. Could we continue this on the testing forum ?
RE: Building the MWiF Test Plan
Patrice,
We are talking about the philosophy and practicalities of software testing. None of this is exclusive to MWiF or even Matrix and does not refer to any information provided by Matrix. There is no NDA issue here. The discussion is in the public forum so we can seek assistance from the wider MWiF audience.
If anything I think readers would be impressed by knowing that the MWiF test team is using formal software testing techniques.
/Greyshaft
RE: Building the MWiF Test Plan
I am not qualified to be a test program designer, but I would love to see it done. Right now I just randomly start a game whenever I feel like it and select any old scenario, some group of options, and whatever other choices exist. I then set up without much of a plan and wait for something to happen.
I tend to wander aimlessly through the process until some error message pops up, and then I report in and start over.
I would feel like I was contributing more if I were given orders to test specific scenarios, options combinations, etc. and told to take particular actions.
It also would help me be more disciplined if I had a particular number of assignments to complete within a certain period of time.
So, kudos to the test design officer corps.
As a beta test soldier, I really look forward to a more structured mission.
RE: Building the MWiF Test Plan
Thanks pak,
Keep wandering aimlessly for the moment, but rest assured that orders... er... um... I mean 'requests for assistance' will soon be forthcoming. For example, you might be asked...
'Excuse me Tester pak, would you be terribly kind and have a look at what happens when Germany tries to DOW Britain in Nov/Dec 1939 after Britain has already done a DOW on Germany in Sep/Oct 1939? Thanks awfully.'
/Greyshaft
RE: Building the MWiF Test Plan
That's what I'm talking about! Troop morale and combat effectiveness will improve considerably (at least this troop).
- Griffitz62
RE: Building the MWiF Test Plan
Wow pak,
You nailed this one on the head for me. I do my testing in exactly the same way you described. Although this can work and does have its uses, I find it to be really ineffective. My brain just wants clear goals, objectives and timelines (20 years in the military will do that to a guy). I feel that by the time I sit down to try something out, another tester has already tried it. I would really like to see a comprehensive test plan. That way the testers can look for very specific things, and we can test with confidence that everything is being tested.
Ken
- Peter Stauffenberg
RE: Building the MWiF Test Plan
I agree that we will all benefit from a test plan. But Patrice has a point that you can't just jump to a specific point and test what's written in a test plan. You can have the test plan in front of you and check some issues when you have advanced far enough in your game to try it. E. g. if you want to test something specific about Sea Lion then you need to play some turns so you have enough German units and have placed your units in position for Sea Lion. So it takes some hours to test those specific things.
The state of the beta game will also decide whether you can test via a test plan or not. If the game is in early beta, it will stop with error messages even before you get to the point where you can test e.g. an amphibious landing on an empty hex. So the beta game needs to be pretty coherent and stable, so you can expect to play several turns in succession without seeing crashes.
But you can have a test plan for the early beta stage as well. That test plan would be to try the most basic aspects of the game. E. g. DoW, movement of naval units or land combat. Then you can report the errors you find when you try to do specific actions.
It's very important to have a database, Excel file or whatever where you report new bugs and can check if some of the reported bugs have been fixed and retested. This would also prevent people from reporting the same bugs several times. At work we use the software Test Director or Bugzilla. That helps a lot.
Having a test coordinator / supervisor is a good idea. This person is responsible for the test plan, but also for collecting all the bugs and sorting out which ones go to the programmers to be fixed. He will also be told by the programmers which bugs have been fixed, so he can ask the testers to retest them. The coordinator (in addition to the programmers) will be the one with a good view of which areas of the program are buggy and which areas are stable, so he can focus more effort on the areas with problems.
The bottleneck in debugging software will often be the programmers. They are the most crucial people, since only they can actually fix bugs. It's therefore vital that the programmers spend their time fixing new bugs instead of spending lots of time checking whether a bug report is real. This is where a testing coordinator can be valuable. He can try to recreate every reported bug, and when he confirms it is a new one he can pass it to the programmers. That means the programmers spend their valuable time on what's most productive, i.e. fixing actual bugs. There's nothing more frustrating to a programmer than trying to recreate a bug reported by a tester, failing, and discovering later that it wasn't a bug at all - the tester simply misunderstood the rules. So the testing coordinator should know every little detail of the rules, so he can tell whether reported behaviour actually breaks them.
If the programming project is large, you can actually have several coordinators: one for writing test plans etc. and one for coordinating the bugs being reported by the testers.
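Test Director and Bugzilla are real tools; purely as an illustration of the kind of shared bug list and retest workflow described above (not MWiF's actual tooling), here is a minimal sketch in Python, with all names and statuses hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical status values matching the workflow described above:
# reported -> fixed by a programmer -> retested by a tester.
STATUSES = ("open", "fixed - not retested", "fixed & retested")

@dataclass
class BugReport:
    bug_id: int
    summary: str
    reporter: str
    reported_on: date
    status: str = "open"

class SharedBugList:
    """A common list every tester can check before filing a new report."""

    def __init__(self):
        self.bugs = []

    def already_reported(self, summary):
        # Crude duplicate check; in practice the test coordinator decides.
        return any(summary.lower() in b.summary.lower() for b in self.bugs)

    def report(self, summary, reporter):
        bug = BugReport(len(self.bugs) + 1, summary, reporter, date.today())
        self.bugs.append(bug)
        return bug

    def needing_retest(self):
        # Bugs the programmers say are fixed but no tester has re-checked yet.
        return [b for b in self.bugs if b.status == "fixed - not retested"]

# Example: a tester checks the list before reporting a suspected bug.
bug_list = SharedBugList()
if not bug_list.already_reported("Notional defender missing on empty hex"):
    bug_list.report("Notional defender missing on empty hex", "some tester")
```

The point of the sketch is only the workflow: one shared list, a duplicate check before reporting, and a queue of fixed-but-not-retested items for the coordinator to hand back to testers.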
RE: Building the MWiF Test Plan
ORIGINAL: Borger Borgersen
...if you want to test something specific about Sea Lion then you need to play some turns so you have enough German units and have placed your units in position for Sea Lion.
There's no reason that Germany can't try an amphibious invasion against the Polish coastal hex in September '39. Even in Barbarossa the Germans can try an invasion on the first turn (then again, so can the Russians [:D]). I'm interested in testing the mechanics of the game. Whether or not the move makes strategic sense is irrelevant.
ORIGINAL: Borger Borgersen
The state of the beta game will also decide whether you can test via a test plan or not. If the game is in early beta it means the game will stop with error messages even before you get to the point where you can test e.g. an amphibious landing on an empty hex.
Quite correct. Steve and I discussed that point quite early in the piece and agreed that we'd leave a test plan until the game was stable enough to support it. We're getting there now (stability-wise), so it's time to start discussing how we will implement it.
(BTW: Testers who want to just poke and prod without a test plan are free to do so. Working within the Test Plan is entirely voluntary.)
ORIGINAL: Borger Borgersen
At work we use the software Test Director or Bugzilla. That helps a lot.
Steve is in negotiation with Matrix to implement bug tracking software.
ORIGINAL: Borger Borgersen
The coordinator (in addition to the programmers) will be the one with a good view about which areas of the program are buggy and which areas are stable. So he can focus more effort into those areas with problems.
Correct again... sounds like you've played 'Bug Hunt' before [:D]
ORIGINAL: Borger Borgersen
If the programming project is large you can actually have several coordinators. One for writing test plans etc. and one for coordinating the bugs being reported by the testers.
That's exactly why I started this thread.

/Greyshaft
- Zorachus99
RE: Building the MWiF Test Plan
I used to take bug reports from support, reproduce them, and then document for the developers exactly how to reproduce the issue. However, unless the GRL is failing (is that right?), it's likely the reproduction steps are already documented. I can say I've had good success reproducing the nastiest types of errors, but this capability depends on experience and familiarity with the product.
I'd be happy to help here by either contributing with Bugzilla or developing a format that helps Steve the most in prioritizing bugs the testers come across. I'd like to say I'd volunteer to triage all the reports, but I'd like to know how big a scope that is before I jump into it.
Most men can survive adversity, the true test of a man's character is power. -Abraham Lincoln
RE: Building the MWiF Test Plan
Thanks Zorachus,
Right now we're just stirring the pot to get people's heads around the idea of a Test Plan.
Stay tuned - you'll hear more about this
/Greyshaft
- composer99
RE: Building the MWiF Test Plan
If I had the software experience and the time, I'd certainly be willing to offer to help in this regard.
As I have neither, I will have to simply cheer from the sidelines.

~ Composer99
- SamuraiProgrmmr
RE: Building the MWiF Test Plan
I would be willing to participate, but as I am in the throes of finishing a very large (300,000+ lines) project myself, I am not sure I can commit time every week. After November or December, I plan to be able to become more involved.
I would highly recommend the following. There may be portions of the testing that can be automated (and repeated ad infinitum after every change to the code base):
http://dunit.sourceforge.net/
Other than that, let me know what I can do.
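DUnit is a unit-testing framework for Delphi. Purely as a language-neutral illustration of the kind of automated, repeatable check being suggested here (this is not MWiF code; the rules-engine function is a hypothetical stand-in), a minimal sketch using Python's unittest and Greyshaft's "test 1047" example from earlier in the thread:

```python
import unittest

# Hypothetical stand-in for a call into the rules engine; MWiF's real code
# and API are not shown here. Per the rule quoted in test 1047, a notional
# defender takes part only when the invaded hex is empty.
def notional_defender_present(hex_is_occupied: bool) -> bool:
    return not hex_is_occupied

class TestNotionalDefender(unittest.TestCase):
    """Automated version of 'test 1047' from earlier in this thread."""

    def test_empty_hex_has_notional_defender(self):
        self.assertTrue(notional_defender_present(hex_is_occupied=False))

    def test_occupied_hex_has_no_notional_defender(self):
        self.assertFalse(notional_defender_present(hex_is_occupied=True))

if __name__ == "__main__":
    unittest.main()
```

The value is that such checks can be re-run automatically after every code change, which is exactly the "repeated ad infinitum" point being made above.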
Bridge is the best wargame going .. Where else can you find a tournament every weekend?
RE: Building the MWiF Test Plan
Many moons ago I started building a Test plan for MWiF in Excel. Bear in mind that a Test Plan is NOT the same as bug tracking software - the documents are complementary but not identical.
The Test Plan tests for application adherence to user requirements by doing the following:
* define the user requirement - (in this case, the WiF:FE ruleset)
* break the WiF ruleset into Application Modules - (logical groupings of functions such as 'naval combat' vs. 'land combat')
* define the expected Behaviours for each Application Module - (what do I expect to happen in 'naval combat'?)
* list the Test Cases for each expected Behaviour - (how do I check if my expectations are fulfilled?)
* provide an overview of the Test Script for each Test Case - (what are the specific mouse click or menu-driven instructions for performing these tests?)
* record the PASS/FAIL results for each Test Case - (did it work as expected?)
* summarise the PASS/FAIL results for each Application Module - (which parts didn't work?)
* summarise the application adherence to user requirements - (how bad is the problem?)
Bug Tracking software performs the following tasks:
* record errors that occurred when executing a Test Script previously defined within the Test Plan.
* record general errors which fall outside the Test Plan - (e.g. MWiF won't load a saved game).
(... then, regardless of which type of error occured...)
* record sufficient information to permit the developer to duplicate the error.
* be categorised in a manner to allow the developer to see all similar errors.
* record additional developer comments
* be flagged as 'open' , 'fixed - not retested' , 'fixed & retested' (or similar categories)
Experienced project managers may quibble about the definition of some of these items (e.g. whether testing for the ability to load a saved game should be part of the Test Plan), but please remember that this is "Big Picture" stuff to help the team understand the difference between the two documents.
A section of the Test Plan spreadsheet is included below. Note that the blue text is actually a quote from the WIF:FE rules.

- Attachments
- MWIFTestPlan.jpg (a section of the Test Plan spreadsheet)
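To make the structure above concrete, here is an illustrative sketch of how one row of such a Test Plan might be modelled. None of it is taken from the actual spreadsheet; the module, behaviour, and script text are hypothetical, and Python is used only as convenient notation for the layout:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: int        # e.g. 1047
    module: str         # Application Module, e.g. "Amphibious landings"
    behaviour: str      # expected Behaviour, drawn from the WiF:FE rules
    script: str         # overview of the Test Script (mouse clicks / menu steps)
    result: str = "NOT RUN"   # later recorded as "PASS" or "FAIL"

def module_summary(cases, module):
    """Summarise PASS/FAIL results for one Application Module."""
    run = [c for c in cases if c.module == module and c.result != "NOT RUN"]
    failed = sum(1 for c in run if c.result == "FAIL")
    return f"{module}: {len(run) - failed} passed, {failed} failed, {len(run)} run"

# Hypothetical row, echoing the example used earlier in the thread.
plan = [TestCase(1047, "Amphibious landings",
                 "Notional defender appears only in an empty hex",
                 "Invade an empty coastal hex, then an occupied one; compare defense",
                 "PASS")]
print(module_summary(plan, "Amphibious landings"))
```

The spreadsheet version carries the same columns; the module roll-up is what feeds the "summarise adherence to user requirements" step at the bottom of the list.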
/Greyshaft
RE: Building the MWiF Test Plan
Don't worry, I'm sure the test plan team can find jobs for you [:D]
We're here for a good time not a long time!
- Peter Stauffenberg
RE: Building the MWiF Test Plan
I think it would be nice, when people report actual bugs found via the test plan, to add columns with the following information:
1. Tester signature or name
2. Date the test was performed
3. Severity of error.
At work we label the severity of each error with one of the following letters.
A - Critical bug. Causes the program to crash or hang
B - Severe bug. Does not cause the program to crash, but the function doesn't work as intended and results in wrong data being stored, etc. E.g. you make a land attack and get a breakthrough result but are not offered the option to advance after combat, and maybe the shattered units don't even appear on the production spiral.
C - Moderate bug. The function doesn't behave as intended, but it doesn't alter data in a wrong way. E.g. you make an air strike and notice that the tank-buster aircraft didn't get the intended bonus for attacking enemy armor; the air strike went on as if it were a normal tac bomber.
D - Minor bug. These bugs are only cosmetic. The function behaves as intended, but it may not be shown the way it's intended. E.g. spelling errors, dialogue boxes with combat results missing some data, wrong status symbol shown, etc.
We have a rule at work that we're not allowed to release new software versions with known unfixed A or B errors. The quality of the release is afterwards measured against the A and B errors reported by users after the release.
I think such a distinction makes it easier for the programmers and testing coordinators to understand the severity of a bug and prioritise which bugs to fix.
It also makes it possible to report internally about the status of the testing. E. g. the report can be like this at release date:
Detected and fixed errors: A: 5, B: 15, C: 42, D: 56
Known errors at release: A:0, B:0, C: 3, D: 4
We also use index numbers with decimal digits to show the progress in bug fixing. IDs should be grouped so you can immediately see from the ID numbers what kind of bug it is. E.g. 216 = the first test, 216.1 is the first retest, 216.2 is the second retest, etc. This way you will see all the former attempts to fix the bug and why they failed.
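As an illustration of how the A-D codes, the "no known A or B errors at release" rule, and the 216 / 216.1 numbering could be tracked, here is a small sketch with invented example data (none of it reflects real MWiF bug counts):

```python
# A-D severity codes as described above; the data below is invented.
SEVERITY = {
    "A": "Critical - program crashes or hangs",
    "B": "Severe - wrong behaviour and wrong data stored",
    "C": "Moderate - wrong behaviour, but no bad data",
    "D": "Minor - cosmetic only",
}

# (bug id, severity, fixed?) - "216" is the first report, "216.1" the retest.
bugs = [
    ("216", "B", True),
    ("216.1", "B", True),
    ("301", "A", True),
    ("417", "C", False),
    ("502", "D", False),
]

def release_allowed(bug_list):
    """The release rule described above: no known unfixed A or B errors."""
    return not any(sev in ("A", "B") and not fixed for _, sev, fixed in bug_list)

def status_report(bug_list):
    """Counts per severity code, split into fixed and still open."""
    report = {code: {"fixed": 0, "open": 0} for code in SEVERITY}
    for _, sev, fixed in bug_list:
        report[sev]["fixed" if fixed else "open"] += 1
    return report

print(release_allowed(bugs))   # True: the only open bugs are severity C and D
print(status_report(bugs))
```

The status report is the same "detected and fixed / known at release" summary shown above, just computed rather than tallied by hand.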
RE: Building the MWiF Test Plan
Yes to all of this. I am using almost the identical system, except for the release stuff.
The codes I have are: Fatal, Critical, Bad, Minor, Cosmetic, and Suggestion
I date each attempt to correct a bug with information about what was tried and what effect it had (if any).
Steve
Perfection is an elusive goal.