Programming Assignment 5 - The Interpreter
Project Overview
Programming assignments 2 through 4 involved the construction of the
front-end (lexer, parser) and gatekeeping (semantic analyzer) stages of an
interpreter. In this assignment you will write the code that performs the
actual execution and interpretation.
You may do this assignment in OCaml, Python or Ruby. You must use each
language at least once (over the course of PA2 - PA5); you will use one
language (presumably your favorite) twice.
You may work in a team of two people for this assignment. You may work in a
team for any or all subsequent programming assignments. You do not need to
keep the same teammate. The course staff are not responsible for finding
you a willing teammate. However, you must still satisfy the language
breadth requirement (i.e., you must be graded on at least one OCaml
program, at least one Ruby program, and at least one Python program).
Goal
For this assignment you will write an interpreter. Among other
things, this involves implementing the operational semantics specification
of Cool. You will track enough information to generate legitimate run-time
errors (e.g., dispatch on void). You do not have to worry about "malformed
input" because the semantic analyzer (from PA4) has already ruled out bad
programs.
You will also write additional code to deserialize the class and
implementation maps produced by the semantic analyzer and the
parse tree produced by the parser.
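The serialized input is line-oriented ASCII, as described in PA4. A minimal Python sketch of a reader is below; the record names and field layout ("class_map", "no_initializer", etc.) are illustrative assumptions here, so check them against the actual PA4 format:

```python
# Sketch of a line-oriented reader for the serialized .cl-type input.
# The exact record layout is defined in PA4; this sketch handles only
# attributes without initializers.

class Reader:
    def __init__(self, lines):
        self.lines = lines
        self.pos = 0

    def next(self):
        line = self.lines[self.pos].rstrip("\n")
        self.pos += 1
        return line

    def next_int(self):
        return int(self.next())

def read_class_map(r):
    assert r.next() == "class_map"
    classes = {}
    for _ in range(r.next_int()):
        cname = r.next()
        attrs = []
        for _ in range(r.next_int()):
            kind = r.next()  # "initializer" or "no_initializer"
            attrs.append((r.next(), r.next()))  # attribute name, declared type
            # an "initializer" record would be followed by an expression,
            # which this sketch does not parse
        classes[cname] = attrs
    return classes
```

Keeping a single cursor (`self.pos`) that every helper advances makes the rest of the deserializer (implementation map, AST) a set of small mutually recursive functions over the same `Reader`.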
The Specification
You must create three artifacts:
- A program that takes a single command-line argument (e.g.,
file.cl-type). That argument names an ASCII text file containing a
Cool class map, implementation map, and AST (as described in PA4). Your program must execute the Cool program
described by that input.
If your program is called interp, invoking interp
file.cl-type should yield the same output as cool
file.cl. Your program will consist of a number of OCaml files, a
number of Python files, or a number of Ruby files.
- You will only be given .cl-type files from programs that
pass the semantic analysis phase of the reference interpreter. You are
not responsible for correctly handling (1+"hello") programs.
- A plain ASCII text file called readme.txt describing your
design decisions and choice of test cases. See the grading rubric. A few
paragraphs should suffice.
- Testcases test1.cl, test2.cl, test3.cl and
test4.cl. The testcases should exercise interpreter and run-time
error corner cases.
Error Reporting
To report an error, write the string ERROR: line_number:
Exception: message to standard output and terminate the
program. You may
write whatever you want in the message, but it should be fairly indicative.
Example erroneous input:
class Main inherits IO {
  my_void_io : IO ; -- no initializer => void value
  main() : Object {
    my_void_io.out_string("Hello, world.\n")
  } ;
} ;
Example error report output:
ERROR: 4: Exception: dispatch on void
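One convenient approach is to centralize error reporting in a single helper. A Python sketch (the function names are ours, and the spec does not mandate a particular exit status, so that part is an assumption):

```python
import sys

def format_error(line_number, message):
    # Build the exact string the spec requires on standard output.
    return "ERROR: %d: Exception: %s" % (line_number, message)

def runtime_error(line_number, message):
    # Report the error and terminate. The spec only says "terminate the
    # program"; exiting with status 0 here is an assumption.
    print(format_error(line_number, message))
    sys.exit(0)
```

Calling `runtime_error(4, "dispatch on void")` on the example above produces exactly the report shown.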
Commentary
You will have to handle all of the internal functions (e.g.,
IO.out_string) that you first encountered in PA4.
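For example, IO.out_string must interpret the two escape sequences \n and \t in its argument; whether those escapes arrive as literal backslash characters depends on the PA4 string serialization, so treat this Python sketch as an assumption to verify:

```python
import sys

def cool_unescape(s):
    # Interpret the two escape sequences out_string recognizes;
    # every other character is written verbatim.
    return s.replace("\\n", "\n").replace("\\t", "\t")

def io_out_string(s):
    # Sketch of the IO.out_string internal function.
    sys.stdout.write(cool_unescape(s))
```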
You can do basic testing by running the reference interpreter (cool
file.cl) on a test program and comparing its output against your
interpreter's output on the corresponding .cl-type file.
You should implement all of the operational semantics rules in the
Reference Manual. You will also have to implement all of the built-in
functions on the five Basic Classes.
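As one concrete illustration of the shape such an evaluator can take, here is a Python sketch covering three expression forms; the tuple encoding of expressions is our assumption, and self, the environment, and the store that the real rules thread through are elided:

```python
# Sketch: one branch per operational semantics rule. A full interpreter
# would also pass self, an environment, and a store to each call.

def evaluate(expr):
    kind = expr[0]
    if kind == "integer":
        return expr[1]  # Int rule: a constant evaluates to itself
    if kind == "bool":
        return expr[1]  # Bool rule
    if kind == "plus":
        # Arith rule: evaluate both operands, then add.
        return evaluate(expr[1]) + evaluate(expr[2])
    if kind == "if":
        # Cond rules: evaluate the predicate, then exactly one branch.
        return evaluate(expr[2]) if evaluate(expr[1]) else evaluate(expr[3])
    raise NotImplementedError("no rule for " + kind)
```

Structuring the evaluator this way keeps a one-to-one correspondence between the Reference Manual's rules and branches of the dispatch, which makes it easy to audit that every rule is implemented.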
What To Turn In For PA5
You must turn in a zip file containing these files:
- readme.txt -- your README file
- test1.cl -- a testcase
- test2.cl -- a testcase
- test3.cl -- a testcase
- test4.cl -- a testcase
- source_files -- your implementation
Your zip file may also contain:
- team.txt -- an optional file listing your other team member
(see below -- if you are not working in a team, do not include this file)
Your zip file must be named your_email-pa5.zip. For
example, if your University email address is wrw6y you must
call your zip file wrw6y-pa5.zip. Do not use your gmail address or
whatnot -- we really want your university ID here.
Submit the file using Toolkit (as with PA1-PA4).
Working In Pairs
You may complete this project in a team of two. Teamwork imposes burdens
of communication and coordination, but has the benefits of more thoughtful
designs and cleaner programs. Team programming is also the norm in the
professional world.
Students on a team are expected to participate equally in the effort and to
be thoroughly familiar with all aspects of the joint work. Both members
bear full responsibility for the completion of assignments. Partners turn
in one solution for each programming assignment; each member receives the
same grade for the assignment. If a partnership is not going well, the
teaching assistants will help to negotiate new partnerships. Teams may not
be dissolved in the middle of an assignment.
If you are working in a team, exactly one team member should submit
a PA5 zipfile. That submission should include the file team.txt, a
one-line flat ASCII text file that contains exactly and only the
email address of your teammate. Don't include the @virginia.edu
bit. Example: If ph4u and wrw6y are working together,
ph4u would submit ph4u-pa5.zip with a team.txt
file that contains the word wrw6y. Then ph4u and
wrw6y will both receive the same grade for that submission.
Autograding
We will use scripts to run your program on various testcases. The testcases
will come from the test1.cl through test4.cl files you and your
classmates submit as well as held-out testcases used only for grading.
Your programs cannot use any special libraries (aside from the OCaml
unix and str libraries, which are not necessary for this
assignment). We will use (loosely) the following commands to execute them:
- ocaml unix.cma str.cma *.ml testcase.cl-type >& testcase.out
- python main.py testcase.cl-type >& testcase.out
- ruby main.rb testcase.cl-type >& testcase.out
You may thus have as many source files as you like (although two or three
should suffice) -- they will be passed to your
language interpreter in alphabetical order (if it matters).
In each case we will then compare your output to the correct answer:
- diff -b -B -E -w testcase.out correct-answer.out
If your answer is not the same as the reference answer you get 0
points for that testcase. Otherwise you get 1 point for that testcase.
For error messages and negative testcases we will compare your output
up to, but not including, the particular error message. Basically, your
interpreter need only correctly identify that there is an error on line X. You do not
have to faithfully duplicate our English error messages. Many people choose
to (because it makes testing easier) -- but it's not required.
We will perform the autograding on some unspecified test system. It is
likely to be Solaris/UltraSPARC, Cygwin/x86 or Linux/x86. However, your
submissions must officially be platform-independent (not that hard
with a scripting language). You cannot depend on running on any particular
platform.
There is more to your grade than autograder results. See the Programming
Assignment page for a point breakdown.