No. Writing test cases to verify a program is an integral part of writing and debugging the program. Providing the specific test cases would be tantamount to doing this part of the project for you.
The job of a grader (human or auto-grader) is to evaluate how well the student completed the project. The purpose of a grader obviously is not to complete the project for the student. For example, a grader evaluates how well a program works, but the grader doesn't rewrite the program for the student. In the same way, the grader evaluates how well the student verified the program, but this evaluation process shouldn't replace the verification performed by the student.
The range of input values that your program should work for is very clearly defined in the project handout. It is not enumerated as specific test cases; rather, it is defined as a universe of programs. Your program should work for this entire universe of programs. Since it is not possible to evaluate it on the entire universe (which is infinite in size), we evaluate it on a few specific programs to give us a statistical sample of how it would perform over the entire universe. You should be doing the same kind of evaluation yourself; conducting this evaluation is part of the project assignment.
The auto-grader's feedback is designed to inform the student of the project grade, so the student can make an informed decision about continuing to work on the project or not. Any feedback other than the project grade should be considered a bonus.
Even though the auto-grader is not intended to help debug, some students try to use it in this manner anyway. Apparently, frequent feedback about the numeric grade (plus any extra information it happens to provide) can be used, however clumsily, in debugging. The limit on the number of submissions is intended to prevent use of the auto-grader in such a trial-and-error manner.
Trial-and-error debugging with a magical oracle is inefficient and unrealistic. In the real world, you write test cases to debug your program, and you (hopefully methodically) track down the bug, fix it, and verify that it works on the test case. Trying fixes at random should offend your sensibilities as a computer scientist. Before you fix a bug, you should understand why it's a bug and understand why the fix should work. If you try a fix without completely understanding the bug and the fix, you're likely to get it to work for some but not all workloads, and the bugs that result will be even harder to find.
The best way to debug a problem is to write a test case that triggers the bug, trace through the program to figure out why it failed, fix the bug, and then re-run the test case to verify that the fix works. This process is time-consuming, so one submission per day should be plenty. You should catch most bugs without needing to submit to an auto-grader.
The one-per-day submission policy is also intended to encourage you to start the project early. Don't try to do the entire project on the due date!
First, writing test suites and verifying programs is a major part of software development. In fact, the main role of many software developers is to write test suites and verify programs. As much manpower in industry is spent debugging and verifying programs as is spent writing them!
Second, writing a good suite of test cases will deepen your understanding of the program. Thinking through the different possible behaviors the program can exhibit and designing test cases to generate these behaviors helps you thoroughly understand the program.
Third, writing a good suite of test cases will help you debug your program.
Finally, you should have been writing a comprehensive suite of test cases in the normal course of writing your program, so submitting this suite shouldn't be additional work. In practice, however, it is tempting to try to skip writing a good set of test cases, even though this often ends up costing more time in debugging. Requiring you to write a comprehensive suite of test cases counters this temptation.
The auto-grader runs your own test cases on your program. This does not affect the grade; its purpose is to inform you when one of your own test cases triggers a bug in your program. Ordinarily you should already know if your program fails on your own test cases, because you should have figured out the right answer for each test case in advance and compared your program's output to that right answer. However, this may help alert you to a misunderstanding about the project specification.
Once you know that one of your own test cases causes your program to fail, your job becomes much easier (most of my debugging time goes to constructing a test case that causes my program to fail repeatably). You should figure out by hand the right answer (e.g., sequence of output) for this test case (you really should have done this while you wrote the test case), and see if your program matches this right answer. If it does not match the right answer, you're well on your way to debugging the program. If it DOES match the right answer, your "right answer" may be wrong. If you can't figure out how your "right answer" could be wrong, see a TA or professor during office hours.
The set of instructor buggy programs is small (I can't generate as wide a variety of bugs as 100 students). Your program apparently has a bug that is not represented in the set of instructor buggy programs, and your test suite is not exposing that bug. Even though your test suite is getting full credit, you still need to write a test case that will expose and help you find your bug.