You may do this assignment in OCaml, Python, JavaScript, Haskell or Ruby.
You may work in a team of two people for this assignment. You may work in a team for any or all subsequent programming assignments. You do not need to keep the same teammate. The course staff are not responsible for finding you a willing teammate.
You will also write additional code to deserialize the class map, implementation map, parent map, and annotated AST produced by the semantic analyzer.
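As a rough illustration of this step, the sketch below reads a line-oriented serialization in which each value sits on its own line and every list is preceded by its length. The exact field layout here (class name, then attribute name/type pairs) is an assumption for illustration; consult the serialization format actually produced by your semantic analyzer.

```python
# Minimal sketch of a line-oriented deserializer. The specific field
# layout (class name followed by (attribute, type) pairs) is assumed
# for illustration, not prescribed by the assignment.
class Reader:
    def __init__(self, lines):
        self.lines = lines
        self.pos = 0

    def next(self):
        line = self.lines[self.pos]
        self.pos += 1
        return line

    def read_list(self, read_item):
        n = int(self.next())          # each list is preceded by its length
        return [read_item() for _ in range(n)]

def read_class_map(r):
    assert r.next() == "class_map"    # section header line
    def read_attr():
        return (r.next(), r.next())   # assumed: attribute name, then type
    def read_class():
        return (r.next(), r.read_list(read_attr))
    return dict(r.read_list(read_class))
```

The same `Reader` helper can then be reused for the implementation map, parent map, and annotated AST sections.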
However, you must generate file.cl-asm (or file.s) so that it checks for and reports run-time errors. When your file.{cl-asm,s} program detects an error, it should use the Syscall IO.out_string and Syscall exit assembly instructions to cause an error string to be printed to the screen.
To report an error, write the string ERROR: line_number: Exception: message (for example, using Syscall IO.out_string) and terminate the program with Syscall exit. You may generate your file.{cl-asm,s} so that it writes whatever you want in the message, but it should be fairly indicative. Example erroneous input:
class Main inherits IO {
  my_void_io : IO ; -- no initializer => void value
  main() : Object {
    my_void_io.out_string("Hello, world.\n")
  } ;
} ;
For such an input, you must generate a well-formed file.{cl-asm,s} assembly language file. However, when that file is executed (either in a Cool CPU Simulator or on an x86-64 machine), it will produce output such as:
ERROR: 4: Exception: dispatch on void

To put this another way, rather than actually checking for errors directly, you must generate assembly code that will later check for and report errors.
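One way to structure this in your code generator is a helper that emits a guard before every dynamic dispatch: if the receiver is void, jump to code that prints the error string and exits. The mnemonics and register names below are assumptions loosely based on a Cool-ASM-style target; adapt them to whichever instruction set you are emitting.

```python
# Sketch of emitting a void-dispatch guard. The instruction names
# (bnz, la, syscall) and registers (r0, r1) are assumed placeholders
# for your actual target instruction set.
import itertools

_labels = itertools.count()  # fresh label suffixes

def emit_void_dispatch_check(line_number):
    ok = f"dispatch_ok_{next(_labels)}"
    return "\n".join([
        f"bnz r0 {ok}",                                # receiver in r0; skip guard if non-void
        f"la r1 string_dispatch_error_{line_number}",  # address of the error string
        "syscall IO.out_string",                       # print "ERROR: N: Exception: ..."
        "syscall exit",                                # terminate the program
        f"{ok}:",
    ])
```

The error string itself ("ERROR: 4: Exception: dispatch on void") would be emitted once in the data section, keyed by line number.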
You can do basic testing as follows:
$ cool --asm file.cl-type
$ cool file.cl-asm >& reference-output
$ my-code-generator file.cl-type
$ cool file.cl-asm >& my-output
$ diff my-output reference-output
Whitespace and newlines do not matter in your file.{cl-asm,s} assembly code. However, whitespace and newlines do matter for your simulated Cool CPU output. This is because you are specifically being asked to implement IO and substring functions.
You should implement all of the operational semantics rules in the Reference Manual. You will also have to implement all of the built-in functions on the five Basic Classes.
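When implementing the built-ins, it can help to keep a small executable model of their semantics to check your generated code against. For instance, per the Reference Manual, String.substr(i, l) must signal a run-time error when the requested range falls outside the string. A Python oracle for that behavior might look like:

```python
# Oracle for Cool's String.substr(i, l) semantics: returns the
# substring of length l starting at index i, or signals a run-time
# error when the range is out of bounds (as the Reference Manual
# requires). A generated Cool program would print the error and exit.
def cool_substr(s, i, l, line_number):
    if i < 0 or l < 0 or i + l > len(s):
        raise RuntimeError(
            f"ERROR: {line_number}: Exception: String.substr out of range")
    return s[i:i + l]
```

Comparing your generated program's output against such an oracle on edge cases (empty strings, zero lengths, off-by-one indices) is a cheap way to shake out bugs.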
The goal of CA4t is to leave you with a high-quality test suite of Cool programs that you can use to evaluate your own CA4 and CA5 code generators. Writing a code generator requires you to consider many corner cases when reading the formal and informal semantics in the Cool Reference Manual. While you can check for correct "positive" behavior by executing your code generator's output against the reference compiler on existing "good" Cool programs, it is comparatively harder to check for "negative" behavior (i.e., run-time errors, strange corner cases).
If you fail to construct a rich test suite of semantically-valid programs you will face a frustrating series of "you fail held-out negative test x" reports for CA4 and CA5 proper, which can turn into unproductive guessing games. Because students often report that this is frustrating (even though it is, shall we say, infinitely more realistic than making all of the post-deployment tests visible in advance), the CA4t preliminary testing exercise provides a structured means to help you get started with the construction of a rich test suite.
The course staff have produced 21 variants of the reference compiler, each with a secret intentionally-introduced defect related to code generation. A high-quality test suite is one that reveals each introduced defect by showing a difference between the behavior of the true reference compiler and the corresponding buggy version. You desire a high-quality test suite to help you gain confidence in your own CA4 (and CA5) submission.
For CA4t, you must produce semantically-valid Cool programs (test cases). There are 21 separate held-out seeded code generator bugs waiting on the grading server. For each bug, if one of your tests causes the reference and the buggy version to produce different output, you win: that test has revealed that bug. For full credit your tests must reveal at least 15 of the 21 unknown defects.
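The scoring idea is plain differential testing: a test reveals a seeded defect exactly when the reference compiler and the buggy variant disagree on that test's output, and a defect counts as revealed if any single test distinguishes it. A sketch of that logic, with the compiler outputs assumed to be precomputed strings:

```python
# Differential-testing score: how many seeded defects does a test
# suite reveal? Outputs are assumed to be precomputed dictionaries
# mapping test name -> program output.
def reveals_bug(reference_output, buggy_output):
    return reference_output != buggy_output

def score(tests, reference_outputs, buggy_outputs_per_defect):
    revealed = 0
    for defect_outputs in buggy_outputs_per_defect:
        # a defect is revealed if any one test distinguishes it
        if any(reveals_bug(reference_outputs[t], defect_outputs[t])
               for t in tests):
            revealed += 1
    return revealed
```

This also suggests a strategy: aim for tests that each exercise a distinct semantic feature (dispatch, case, arithmetic, string built-ins, run-time errors), since redundant tests rarely reveal additional defects.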
The secret defects that we have injected into the reference compiler correspond to common defects made by students in CA4. Thus, if you make a rich test suite for CA4t that reveals many defects, you can use it on your own CA4 submission to reveal and fix your own bugs!
For CA4t you should turn in (electronically):
Students on a team are expected to participate equally in the effort and to be thoroughly familiar with all aspects of the joint work. Both members bear full responsibility for the completion of assignments. Partners turn in one solution for each programming assignment; each member receives the same grade for the assignment. If a partnership is not going well, the teaching assistants will help to negotiate new partnerships. Teams may not be dissolved in the middle of an assignment.
If you are working in a team, exactly one team member should submit a CA4 zipfile. That submission should include the file team.txt, a one-line flat ASCII text file that contains exactly and only the email address of your teammate. Don't include the @virginia.edu bit. Example: If ph4u and wrw6y are working together, ph4u would submit ph4u-ca4.zip with a team.txt file that contains the word wrw6y. Then ph4u and wrw6y will both receive the same grade for that submission.
In each case we will then compare your output to the correct answer:
Note that this time we do not ignore newlines and whitespace, since we are explicitly testing your implementation of a string IO subsystem. You must get every character correct in non-error instances.

If your answer is not the same as the reference answer, you get 0 points for that testcase. Otherwise you get 1 point for that testcase.
For error messages and negative testcases we will compare your output but not the particular error message. Your generated code need only correctly identify that there is an error on line X; you do not have to faithfully duplicate our English error messages. Many people choose to do so because it makes testing easier, but it is not required.
We will perform the autograding on a 64-bit Linux system. However, your submissions must officially be platform-independent (not that hard with a scripting language). You cannot depend on your compiler running on any particular platform (although you can depend on the resulting assembly code running on its associated platform).
There is more to your grade than autograder results. See the Programming Assignment page for a point breakdown.
Your submission may not create any temporary files. Your submission may not read or write any files beyond its input and output. We may test your submission in a special "jail" or "sandbox".