From: Thomas Walker Lynch
© 2024 Thomas Walker Lynch - All Rights Reserved.

White Box Testing

Introduction

Testing centers around three key components: the test bench, the test
functions (or tests), and the functions under test. In most cases, the
developer provides the functions under test. When this tool is used, Mosaic
supplies the test bench. This leaves the tester with the role of creating and
running the tests. Often, of course, the tester role and the developer role
are performed by the same person, though the roles remain distinct.
The term function refers to any program or circuit where outputs are
determined solely by inputs, without internal state being kept, and without
side effects. All inputs and outputs are explicitly defined. By definition, a
function returns a single result, but this is not a very strong constraint,
because said single result can be a collection, such as a vector or set.

We need this precise definition of a function to make meaningful statements
in this document, but the Mosaic TestBench can be used with tests designed to
evaluate any type of subroutine. A later chapter will cover testing stateful
subroutines, provided that I get around to writing it.
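As a minimal sketch (the class and method names are illustrative, not part of
Mosaic), a function in the above sense might look like the following, with
the single result being a collection:

    import java.util.ArrayList;
    import java.util.List;

    public final class Divisors {
      // A function in the above sense: the output is determined solely by the
      // input n, no internal state is kept, and there are no side effects.
      // The single result returned is a collection (a list of divisors).
      // Assumes n is a positive integer; other inputs yield an empty list.
      public static List<Integer> divisors(int n) {
        List<Integer> result = new ArrayList<>();
        for (int d = 1; d <= n; d++) {
          if (n % d == 0) {
            result.add(d);
          }
        }
        return result;
      }
    }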
There is also a nuanced distinction between function in singular and plural
forms, because a collection of functions can be viewed as a single larger
function with perhaps more inputs and outputs. Hence, when a test is said to
work on a function, we cannot conclude that it is a single function defined
in the code.
A test must have access to the function under test so that it can supply
inputs and harvest outputs from it. A test must also have a failure detection
function that, when given copies of the inputs and outputs, will return a
result indicating whether the test failed or not. Ideally, the failure
detection function is accurate, or even perfect, as this reduces missed
failures and minimizes the need to verify cases that it has flagged as
failures.
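For instance, a failure detection function for a hypothetical integer
squaring function under test might be sketched as follows (the names and
signature are illustrative, not part of the Mosaic API):

    public final class SquareFailureDetector {
      // Given a copy of the input and the actual output harvested from the
      // function under test, return true when a failure is detected.
      public static boolean failed(int input, int actualOutput) {
        int expected = input * input;     // what the output should have been
        return actualOutput != expected;  // any mismatch counts as a failure
      }
    }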
The tester's goal is to identify failures, observable differences between
actual outputs and expected outputs. Once a failure is identified, a
developer can investigate the issue, locate the fault, and implement
corrections as necessary. While Mosaic aids in failure detection, it does not
directly assist with debugging.
Unstructured Testing

Unstructured testing is at the base of all testing strategies. The following
are some examples of approaches to unstructured testing. The Mosaic TestBench
is agnostic to the approach used for unstructured testing; rather, this
section is about writing the test code that the TestBench will call.
Reference Value based testing

In reference value-based testing, an ordering is assigned to the inputs for
the function under test, as well as to its outputs. With this ordering, the
function under test can be said to receive an input vector and to return an
actual output vector.

In this testing approach, a Reference Model is also used. When given an input
vector, the Reference Model will produce a corresponding reference output
vector that follows the same component ordering as the actual output vector
from the function under test. The failure detection function then compares
each actual output vector with its respective reference output vector. If
they do not match, the test is deemed to have failed.

The Reference Model is sometimes referred to as the golden model, and said to
produce golden values. However, this terminology is often an exaggeration, as
testing frequently reveals inaccuracies in reference values.

Thus, in reference value-based testing, the failure detection function relies
on a comparison between the actual and reference output vectors. Its accuracy
depends directly on the accuracy of the Reference Model.
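A reference value-based test might be sketched as below, following the test
signature used later in this document; Fut.add (the function under test) and
RefModel.add (the Reference Model) are hypothetical names used only for
illustration:

    // Sketch of a reference value-based test. Each input vector is fed to
    // both the function under test and the Reference Model, and the actual
    // output is compared against the reference output.
    public static Boolean test_add_against_reference(IO io) {
      int[][] inputVectors = { {1, 2}, {0, 0}, {-3, 7} };
      for (int[] in : inputVectors) {
        int actual = Fut.add(in[0], in[1]);         // actual output
        int reference = RefModel.add(in[0], in[1]); // reference output
        if (actual != reference) {
          return false;  // mismatch: the test failed
        }
      }
      return true;  // all vectors matched
    }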
Property Check Testing

Property check testing is an alternative to reference value-based testing.
Here, rather than comparing the actual outputs to reference outputs, the
actual output is validated against known properties or expected
characteristics.

For example, given an integer as input, a function that squares this input
should yield an even result for even inputs and an odd result for odd inputs.
If the output satisfies the expected property, the test passes; otherwise, it
fails. This approach allows testing of general behaviors without specific
reference values.
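A property check test for the squaring example might look like the following
sketch, again using the test signature shown later in this document;
Fut.square is a hypothetical function under test:

    public static Boolean test_square_parity_property(IO io) {
      int[] inputs = { -4, -1, 0, 3, 10 };
      for (int n : inputs) {
        int squared = Fut.square(n);
        // Property: squaring preserves parity (even in, even out; odd in, odd out).
        boolean parityPreserved = (squared % 2 == 0) == (n % 2 == 0);
        if (!parityPreserved) {
          return false;  // property violated: the test failed
        }
      }
      return true;
    }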
Spot Checking

With spot checking, the function under test is checked against one or two
input vectors. Moving from zero to one, i.e. trying a program for the first
time, can have a particularly high threshold of difficulty. A tremendous
amount is learned during development if even one test passes for a function.

Sometimes there are notorious edge cases. Zeros, and indexing one off the end
of an array, come to mind. Checking a middle value and the edge cases is
often an effective test. It takes two points to determine a line. In Fourier
analysis, it takes two samples per period of the highest frequency component
to determine an entire waveform. There is only so much a piece of code can do
differently if it works at the edge cases and in between. It is because of
this effect that ad hoc testing has produced so much working code.

Spot checking is particularly useful during development. It gives the highest
testing return for the lowest investment. High investment is not appropriate
for code that is still in development, is not stable, and is open to being
refactored.
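A spot check might be sketched as follows, trying a middle value plus the
edge cases mentioned above; Fut.sum is a hypothetical function under test
that sums the elements of an array:

    public static Boolean test_sum_spot_check(IO io) {
      if (Fut.sum(new int[] {}) != 0) return false;         // edge case: empty array
      if (Fut.sum(new int[] {7}) != 7) return false;        // edge case: one element
      if (Fut.sum(new int[] {1, 2, 3}) != 6) return false;  // a middle value
      return true;
    }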
These tests are primarily ad hoc, as we avoid using the TestBench to test
itself. Despite being ad hoc, the tests follow a core philosophy: the goal is
to identify which functions fail, rather than to diagnose why they fail. To
achieve this, tests do not print messages but instead return true if they
pass.

Output Stream Policy for Tests
Overview of the IO Object

Each test function is given an IO object, which provides methods for
inspecting the stdout and stderr output streams, programmatically adding data
to the stdin input stream, and clearing output streams as needed. Although
the IO object is optional, it is available for cases where I/O validation or
cleanup is essential to the test.
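As a rough sketch of how these methods could be combined in a test: the
get_out_content and clear_buffers methods appear in the examples below, but
put_in_content is only a hypothetical name for the stdin method, and
Fut.echo_line is a hypothetical function under test.

    public static Boolean test_echo_with_io(IO io) {
      io.put_in_content("hello\n");   // hypothetical: seed stdin for the function under test
      Fut.echo_line();                // hypothetical fut: reads a line from stdin, echoes it to stdout
      boolean ok = io.get_out_content().contains("hello");
      io.clear_buffers();             // leave the output streams clean for TestBench
      return ok;
    }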
Purpose

Each test function is responsible for managing any output generated on stdout
or stderr by the function under test (fut). TestBench will automatically
clear the streams before each test begins and will check them after the test
completes, treating any remaining output as unintended and marking the test
as a failure. This policy ensures that tests intentionally handle output by
either validating, clearing, or ignoring it, thereby maintaining a clean and
predictable testing environment.
Policy Guidelines

- If the test verifies output, inspect stdout and stderr using methods like
  io.get_out_content() or io.get_err_content(). The test passes if the actual
  output matches the expected content. Clear the buffers (io.clear_buffers())
  if further output handling is not needed, to avoid residual content.
- If the test does not need to verify output, clear the streams anyway. This
  signals to TestBench that any output generated was intentionally
  disregarded and avoids marking the test as failed.
- If output remains on stdout or stderr when the test returns, TestBench will
  flag this as a failure.
Example Scenarios

Test that verifies output:

    public static Boolean test_with_output_verification(IO io) {
      System.out.println("Expected output");
      String output = io.get_out_content();
      boolean isCorrect = output.trim().equals("Expected output");
      io.clear_buffers(); // Clear remaining content if not needed
      return isCorrect;
    }

Test that intentionally ignores output:

    public static Boolean test_without_output_verification(IO io) {
      System.out.println("Output not needed for this test");
      io.clear_buffers(); // Clear output since it's intentionally ignored
      return true;
    }
Summary

Each test is responsible for leaving stdout and stderr empty when it returns.
Validating, clearing, or intentionally ignoring (and then clearing) any
output prevents TestBench from marking the test as failed.
Accordingly, only pass/fail counts and the names of failing functions are
recorded. For more detailed investigation, the developer can run a failed
test using a debugging tool such as jdb.
To run all tests and gather results, follow these steps:
1. Run clean_build_directories.
2. Run make to compile the project and prepare all test class shell wrappers.
3. Run run_tests to run the tests. Each test class will output its results,
   identifying tests that failed.