Is there a Java library for testing command line applications?

Computer Science Educators Asked by TechnoSam on December 4, 2020

JUnit seems to work well for unit tests, but I'm not interested in unit tests; I want to test an entire command-line application as a black box.

I have created a text-based adventure game project for my students and I am trying to build an auto-grader. I do not have any web infrastructure and would prefer to just create a local application to test students' submissions. I would also like it to be easy enough for students to use to check their own work (JUnit doesn't seem to be easily usable outside of an IDE).

I am looking to create a program that will attempt to compile a student's code and run the game through all the possible paths. I could just build something like this from scratch, but I feel like there should already be some kind of framework that makes handling test cases and printing results easy. I haven't found anything outside of web-based services, which isn't what I'm looking for.

Is there any library to help with this, or am I better off just making my own grader from scratch?

3 Answers

I doubt that there is such a library, but, depending on what you expect from your students, there are things you might be able to build.

First, it is possible to compile and run one Java program from another. Here is a page that describes how to do it:

Once you can do that, you can, perhaps, read a set of regular expression patterns from a file and match them against what the student program produces using the Pattern class.
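To make the idea concrete, here is a minimal sketch of that approach (not an existing library): it compiles a single source file with the JDK's `ToolProvider` compiler, runs the resulting class in a child process, and matches its stdout against a `Pattern`. The `Hello` class and the "adventurer" text are invented for the demo, and a full JDK (not just a JRE) must be installed.

```java
import javax.tools.ToolProvider;
import java.nio.file.*;
import java.util.regex.*;

public class MiniGrader {
    // Compile one source file and run its main class in a child process;
    // return true iff the program's stdout matches the expected pattern.
    static boolean grade(Path src, String mainClass, Pattern expected)
            throws Exception {
        int rc = ToolProvider.getSystemJavaCompiler()     // null on a JRE-only install
                             .run(null, null, null, src.toString());
        if (rc != 0) return false;                        // compilation failed
        Process p = new ProcessBuilder("java", "-cp",
                        src.getParent().toString(), mainClass)
                .redirectErrorStream(true)                // fold stderr into stdout
                .start();
        String out = new String(p.getInputStream().readAllBytes());
        p.waitFor();
        return expected.matcher(out).find();
    }

    public static void main(String[] args) throws Exception {
        // Demo with a one-line stand-in for a student submission
        Path dir = Files.createTempDirectory("grader");
        Path src = dir.resolve("Hello.java");
        Files.writeString(src,
            "public class Hello { public static void main(String[] a) {"
          + " System.out.println(\"Hello, adventurer!\"); } }");
        boolean pass = grade(src, "Hello",
                             Pattern.compile("Hello,\\s+adventurer!"));
        System.out.println(pass ? "PASS" : "FAIL");
    }
}
```

A real grader would loop this over a directory of submissions and a file of expected patterns, and would also want a timeout (e.g. `p.waitFor(10, TimeUnit.SECONDS)`) so an infinite loop in a student's game doesn't hang the run.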

Answered by Buffy on December 4, 2020

There is expect. It is designed to test interactive command-line systems. For non-interactive programs it is easier.

expect is not Java, but as pointed out in the comments, the language is irrelevant: the testing is done from the outside.

Answered by ctrl-alt-delor on December 4, 2020

This answer is based on using features and commands available in a Linux environment. The basic idea is to use file redirection to feed input to the program under test (PUT) and to collect output, both stdout and stderr. This describes a small system that I wrote and that we have used at Colorado State University. If anyone is interested, I can supply both the scripts and some documentation. Using them would require modifications on the part of the tester to accommodate the location of submissions, etc., that are particular to CSU.

The basic idea is to execute a single test case, collect its output, and assess the result. The test framework is language agnostic, as it is used to test programs written in a variety of languages. It is written in bash and takes advantage of many Linux utilities. A test case consists of a single line of text containing a testName, the number of points for the test, and the actual Linux command necessary to run the test. The "language" of the test case has some simple macro capability to make it easy to write multiple test cases. For example, the macro $inputFile is expanded to input/$testName, and likewise for $output. The tester creates multiple input files, named the same as the testName, in the directory input. The required output redirection is done by the framework, so the actual command is often as simple as java SomeProgram < $inputFile or ./myprog < $inputFile.
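A rough sketch of that single-test-case step (this is not the author's actual CSU scripts, just an illustration of the described format): parse the "testName points command" line, expand the $inputFile macro, and apply the output redirection. `sort` plays the program under test in the demo.

```shell
#!/bin/bash
# Run one test-case line of the form "testName points command",
# expanding the $inputFile macro to input/$testName.
run_test() {
    read -r testName points cmd <<< "$1"
    mkdir -p output
    # Macro expansion; the framework supplies the output redirection
    cmd="${cmd//\$inputFile/input/$testName}"
    eval "$cmd" > "output/$testName" 2>&1
}

# Demo: a made-up test case with `sort` as the program under test
mkdir -p input
printf 'banana\napple\n' > input/sort1
run_test 'sort1 10 sort < $inputFile'
cat "output/sort1"    # apple, then banana
```

Note the single quotes around the test-case line: they keep the shell from expanding `$inputFile` before the framework's own macro substitution sees it.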

Assessment is done using diff, comparing the output of the PUT to that of a master solution. diff has lots of options to ignore case, white space, etc., to loosen up the comparison. The master solution's output is collected by simply running the framework and renaming its output as the master. The diff is presented in a colored side-by-side format, as that is easier for students to understand than standard diff output. A student gets no/full points depending on the diff. A typical program has many test cases. Optional post-processing to assess points is possible, as is the ability to add in material generated manually.

In a complete test, the framework takes test cases from a text file, processes them one by one, and collects all output in a single file. At completion, the file is post-processed to extract the individual test case scores and prepend them to the raw output. The student sees a total score, the individual test case scores, and finally the results of each test case.

To test the entire class, the framework processes a list of student IDs from a text file. Each ID corresponds to a directory containing that student's submission. The submission consists of a single file, though that file is frequently a tar/jar/zip.

The actual build of the PUT is driven by a Makefile. This may be supplied by the person running the tests or may be a required part of the submission. The framework simply performs a make and runs the resulting code. Part of the specification of the assignment is the name/class of the executable. For testing scripts, the Makefile may simply ensure that the execute permission is set.

Answered by Fritz Sieker on December 4, 2020
