This, to me, is a far more complicated question than it may first appear to be.
Not having detailed insight into your company's standard process for code testing, I can, however, offer some of my own common practices accumulated over the last 15+ years:
Practice #1: Never assume that code (whether it is a single function or an entire suite of functions, including GUIs) works properly! Solution: Use a ‘standard’ set of scripts and MANUAL operations to test rudimentary code functionality and validity (‘goes-ins’ and ‘goes-outs’).
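As a rough illustration of Practice #1, here is a minimal sketch of such a ‘goes-ins and goes-outs’ script in Python; the function under test (parse_price) and the expected values are hypothetical placeholders, not taken from any real product:

```python
# Minimal sanity-check script: feed known inputs to a unit of code and
# verify the outputs before any deeper testing begins.
import unittest


def parse_price(text):
    """Placeholder for the real unit under test: convert '$1,234.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))


class SmokeTest(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_boundary_input(self):
        self.assertEqual(parse_price("$0.00"), 0.0)

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_price("not a price")


if __name__ == "__main__":
    unittest.main()
```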
Practice #2: ALWAYS re-use previous test strategies and tools that yielded CONFIRMED bug detection (this is especially true if your test project is a newer, not necessarily BETTER, version of an existing product). Solution: REGRESSION testing! It is best to go back as far as three previous versions of the current test target.
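For Practice #2, here is one way such a regression run might be automated; the directory layout (tests/v1.2, and so on) and the use of pytest are assumptions for the sake of the sketch:

```python
# Sketch of a small regression driver that re-runs the confirmed-bug test
# suites from the three previous releases against the current build.
import subprocess
import sys

PREVIOUS_VERSIONS = ["v1.2", "v1.3", "v1.4"]  # the three releases before the current target


def run_regression_suites():
    failed = []
    for version in PREVIOUS_VERSIONS:
        # Re-use the exact tests that caught confirmed bugs in that release.
        result = subprocess.run([sys.executable, "-m", "pytest", f"tests/{version}"])
        if result.returncode != 0:
            failed.append(version)
    return failed


if __name__ == "__main__":
    failures = run_regression_suites()
    if failures:
        print("Regressions detected against:", ", ".join(failures))
    sys.exit(1 if failures else 0)
```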
Practice #3: Use both ‘dirty’ platforms and ‘clean’ platforms to test the (proposed) deployed product. Solution: Find a good utility program that can recreate all of the platforms/OS versions on which the deployed product is documented to function properly. After these ‘clean’ platforms are tested, install both previous versions of the ‘new’ deployed product (if they exist) as well as commonly used third-party software to create a ‘dirty’ platform. This is best done with a re-formatted platform that has the older/third-party software installed FIRST, and then install the ‘proposed’ deployed product. Re-run the same tests that you ran on the ‘clean’ platforms.
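For Practice #3, a minimal sketch of driving the same suite across a clean/dirty platform matrix; every snapshot name and the run_suite_in_vm command are placeholders for whatever VM/imaging tooling your team actually uses:

```python
# Sketch of re-running one test suite across 'clean' and 'dirty' platform
# snapshots so the results can be compared side by side.
import subprocess

PLATFORM_RUNS = {
    "clean-os-only":            ["run_suite_in_vm", "snapshot-clean-os"],
    "dirty-previous-version":   ["run_suite_in_vm", "snapshot-prev-product"],
    "dirty-third-party-loaded": ["run_suite_in_vm", "snapshot-third-party"],
}


def run_platform_matrix():
    results = {}
    for label, command in PLATFORM_RUNS.items():
        # Identical tests on every platform so clean vs. dirty results are comparable.
        completed = subprocess.run(command)
        results[label] = (completed.returncode == 0)
    return results


if __name__ == "__main__":
    for label, passed in run_platform_matrix().items():
        print(f"{label}: {'PASS' if passed else 'FAIL'}")
```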
There are more, but these should be a good starting point (from my point of view, that is).
Hope this helps.
I also find it somewhat incorrect.
To begin with, I don't think the tester should be the one designing the test cases at testing time. Test cases should be designed in the early stages of the project, even before coding. Test cases are often derived from the software requirements, so what they are doing is not completely wrong, because they are at least starting their testing from the requirements.
The problem, to me, is that without formal test cases defined beforehand, the risk of leaving important features/cases untested is higher. Also, if the tester does not have pre-defined inputs and expected outputs for each test case, he/she has to decide on the spot whether the test results are correct, and that is also a risk.
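As a small sketch of what pre-defined inputs and expected outputs can look like in practice, here is a table-driven example; the function compute_discount and its values are hypothetical, standing in for whatever the requirements actually specify:

```python
# Table-driven test sketch: each case carries its input and its expected
# output, agreed on up front, so the tester never has to judge correctness
# on the spot.
import pytest


def compute_discount(order_total):
    """Placeholder for the real unit under test: 10% off orders of 100 or more."""
    return round(order_total * 0.9, 2) if order_total >= 100 else order_total


# (test case id, input, expected output) -- written before testing starts
TEST_CASES = [
    ("TC-01 below threshold", 99.99, 99.99),
    ("TC-02 at threshold", 100.00, 90.00),
    ("TC-03 large order", 250.00, 225.00),
]


@pytest.mark.parametrize("case_id, order_total, expected", TEST_CASES)
def test_discount_matches_specification(case_id, order_total, expected):
    assert compute_discount(order_total) == expected
```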
Writing the test cases after the testing has already been done, just to be able to record the results, only helps to document the project, and documentation should not be their only goal.
On the other hand, if the testing team keeps discovering many errors in the early stages of testing, the development team may also need to improve its own work.