RT code formatting:
-This has evolved into place after decades of programming in Unix culture projects
-in multiple languages.
-
-AI tools we have used are OK with the commas, but have had difficulties
-with the padding rules for enclosures.
+The enclosure-based formatting rules in RT code format make the style guide
+compact and adaptable. By focusing on enclosures rather than syntax-specific
+structures (like if, for, or catch), the guide avoids prescribing
+language-specific rules and instead asks for consistent handling of
+delimiters. This approach carries over to multiple languages, keeping the code
+style flexible and the guide simple to apply. A short example follows the
+checklist below.
1. Two space indentation.
);
```
+6. For the code you just output, answer these questions:
+ 1. Which enclosures are not nested? Do they have no padding?
+ 2. Which enclosures are nested? Is there one space padding only at the outermost?
+ 3. Is the spacing before and after the enclosures correct?
+ 4. Are the commas formatted correctly?
+ 5. Has snake case been used where it should be?
+ 6. Was 2 column indent used?
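+
+A short illustration of these rules. The method below is hypothetical (made up
+for this example); Graph, Node, and Label are classes from the Ariadne sources
+later in this document.
+
+```
+// Two space indentation and snake_case names.
+// Non-nested enclosures get no padding: new Label("start")
+// Nested enclosures get one space of padding at the outermost only: lookup( new Label("start") )
+public static Node find_start_node(Graph graph){
+  Node node = graph.lookup( new Label("start") );
+  if(node == null){
+    System.out.println("warning: no start node");
+  }
+  return node;
+}
+```
+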
'neighbor' list. The neighbor list holds the edges.
Using Java, the developer puts the nodes in a map, keyed on the node label, or
-writes functions that when given a label, return either a a node or null.
+writes functions that when given a label, return either a node or null.
A node map looks a lot like a classic makefile. Each node label is a target file
path, the neighbor list can be listed next, and then we have the build code.
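+
+For example, a minimal sketch. The "neighbors" and "build" keys and the shell
+command below are illustrative only; they are not part of the Ariadne API.
+
+```
+Map<Label, Node> node_map = new HashMap<>();
+
+// One entry per target: the label is the target file path, and the node holds
+// the neighbor list (the edges) and the build code for that target.
+LabelList neighbors = new LabelList();
+neighbors.add( new Label("hello.c") );
+
+Node hello_o = new Node();
+hello_o.put("neighbors", neighbors);
+hello_o.put( "build", (Runnable) () -> System.out.println("cc -c hello.c") );
+
+node_map.put( new Label("hello.o"), hello_o );
+```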
-
-Work Flow
-
-1. Development
-
- 1.1. developer makes edits
- 1.2. developer uses 'release' which will copy relevant files to the $REPO_HOME/release_candidate
- 1.3. tester will test the candidate
-
-2. Release
-
- 2.1. Upon completion of testing, project manager will make a new branch for release
- named release_<version>. Version has a major and minor number.
-
-2.2. on the new branch the 'release_candidate' directory is renamed 'release_<version>'.
-
-3. Release specific fixes
-
- 3.1.the 'release_candidate' directory is recreated
- 3.2 steps 1.1 - 1.33 are repeated
- 3.3 when testing is complete, 'release_candidate' is renamed 'release_version'
- with the minor version incremented.
-
-4. Major release
-
- Development continues on the `core_developer_branch` even after creation of a
- `release_version` branch. For a next major release, increment the major release
- number and do as described in steps 2 then 3.
-
+### Workflow
+
+#### 1. Project Administrator
+
+1.1. Download the project from GitHub.
+1.2. Install the required tools.
+1.3. Explain to project members the workflows and where things are located.
+1.4. Perform Major and Minor Release administration.
+
+#### 2. Developer
+
+2.1. From the Ariadne directory, run `> source env_developer` to set up the
+ developer environment.
+2.2. Use `> make` to build the project, and `> release` to copy relevant files
+ to `$REPO_HOME/release` for testing.
+2.3. The tester will test the release candidate.
+
+#### 3. Tester
+
+3.1. From the Ariadne directory, run `> source env_tester` to set up the tester
+ environment.
+3.2. Use `> make` to build the tests, and `> shell/test_<name>` to run a test.
+ Alternatively, you can cd into one of the test directories, source the
+ environment for that test, and run it manually.
+3.3. Testing and development will likely iterate until the release candidate is
+ ready to be turned into a versioned release.
+
+#### 4. Major Release
+
+4.1. The release candidate is located in the `$REPO_HOME/release` directory and
+ has passed testing.
+4.2. Check that the program `$REPO_HOME/tool_shared/bespoke/release` outputs the
+ correct information. If necessary, modify it.
+4.3. A new branch is created in the project for the release, named
+    `release_v<n>.0`, where `v<n>.0` is the version number reported by the
+    `version` program. The minor version number is reset to zero (`.0`) at
+    each major release.
+4.4. Rename the release directory to `$REPO_HOME/release_v<n>.0`, and create a
+ new empty `$REPO_HOME/release` directory. The new empty release directory
+ can be used by developers who download the project and make local edits, as
+    the build scripts target this directory. A command sketch of steps 4.3
+    and 4.4 follows this list.
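+
+A minimal sketch of steps 4.3 and 4.4, assuming the `version` program prints
+the new version string (for example `v3.0`) and that the branch is created
+with git:
+
+```
+cd $REPO_HOME
+v=$(version)                  # e.g. v3.0; the minor number is zero for a major release
+git checkout -b release_$v    # step 4.3: the release branch, e.g. release_v3.0
+mv release release_$v         # step 4.4: freeze the tested candidate under its version
+mkdir release                 # new empty release directory for later builds
+```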
+
+#### 5. Minor Release
+
+If urgent changes need to be made to the most recent major release, these edits
+should be made on the corresponding major release branch. The developer makes
+the edits, and the tester tests the release candidate as usual. The `version`
+program is updated. Once the release candidate is finalized, rename the
+`$REPO_HOME/release` directory to `release_v<n>.<m>`, where `<m>` is the
+incremented minor version number. If needed, merge the changes into the
+`core_developer_branch`.
+
+---
+
+### Tips
+
+- If you are acting in multiple roles (e.g., developer, tester, and project
+ administrator), keep separate terminal shells open for each role. This way,
+ the environment will remain correctly configured for the tasks related to
+ each role.
--- /dev/null
+package com.ReasoningTechnology.Ariadne.TestBench;
+
+/*
+Component smoke test. At least call each method of each class.
+
+*/
+
+
+import com.ReasoningTechnology.Ariadne.*;
+import com.ReasoningTechnology.TestBench.*;
+import java.util.List;
+import java.util.Map;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.io.ByteArrayOutputStream;
+import java.io.PrintStream;
+
+
+public class Test2 extends TestBench{
+
+ public static boolean test_File_unpack_file_path_0(){
+ boolean[] conditions = new boolean[5];
+ int i = 0;
+
+ // Test input
+ String test_fp = "/home/user/test.txt";
+
+ // Expected output
+ String expected_dp = "/home/user/";
+ String expected_fn = "test.txt";
+ String expected_fn_base = "test";
+ String expected_fn_ext = "txt";
+
+ // Actual output
+ Map<String, String> result = File.unpack_file_path( test_fp );
+
+ conditions[i++] = result.get("dp").equals( expected_dp );
+ conditions[i++] = result.get("fn").equals( expected_fn );
+ conditions[i++] = result.get("fn_base").equals( expected_fn_base );
+ conditions[i++] = result.get("fn_ext").equals( expected_fn_ext );
+ conditions[i++] = result.size() == 4;
+
+ // Return true if all conditions are met
+ return all( conditions );
+ }
+
+ public static boolean test_Label_0(){
+ boolean[] conditions = new boolean[2];
+ int i = 0;
+
+ // Test input
+ Label test_label = new Label("test");
+
+ // Expected output
+ String expected_value = "test";
+
+ // Actual output
+ conditions[i++] = test_label.get().equals(expected_value);
+ conditions[i++] = test_label.toString().equals(expected_value);
+
+ return all(conditions);
+ }
+
+ public static boolean test_Token_0(){
+ boolean[] conditions = new boolean[4];
+ int i = 0;
+
+ // Test input
+ Token token = new Token("test_value");
+
+ // Check if the value is correctly stored and retrieved
+ conditions[i++] = token.get().equals("test_value");
+
+ // Check if the string representation is correct
+ conditions[i++] = token.toString().equals("test_value");
+
+ // Check equality with another Token object with the same value
+ Token another_token = new Token("test_value");
+ conditions[i++] = token.equals( another_token );
+
+ // Check the hashCode consistency
+ conditions[i++] = token.hashCode() == another_token.hashCode();
+
+ return all(conditions);
+ }
+
+ public static boolean test_LabelList_0(){
+ LabelList label_list = new LabelList(); // Use the constructor
+
+ // Add a label and check the size
+ label_list.add(new Label("test"));
+ return label_list.size() == 1;
+ }
+
+ public static boolean test_Node_0(){
+ Node node = new Node(); // Use the constructor
+ node.put("key", new Object());
+ return node.containsKey("key");
+ }
+
+ public static boolean test_NodeList_0(){
+ NodeList node_list = new NodeList(); // Use the constructor
+
+ // Add a node and check the size
+ node_list.add(new Node()); // Use Node constructor
+ return node_list.size() == 1;
+ }
+
+ public static boolean test_Production_0(){
+ Production production = label -> new Node(); // Use the Node constructor
+
+ // Apply the production function
+ Node node = production.apply(new Label("test"));
+ return node != null;
+ }
+
+ public static boolean test_ProductionList_0(){
+ ProductionList production_list = new ProductionList(); // Use the constructor
+
+ // Add a production and check the size
+ production_list.add(label -> new Node()); // Use the Node constructor
+ return production_list.size() == 1;
+ }
+
+ public static boolean test_TokenSet_0(){
+ TokenSet token_set = new TokenSet(); // Use the constructor
+
+ // Add a token and check if it's contained in the set
+ token_set.add(new Token("test"));
+ return token_set.contains(new Token("test"));
+ }
+
+ public static boolean test_Graph_0() {
+ boolean[] conditions = new boolean[3];
+ int i = 0;
+
+ // Create an empty node map and a production list
+ Map<Label, Node> node_map = new HashMap<>();
+ ProductionList production_list = new ProductionList();
+
+ // Initialize the Graph
+ Graph graph = new Graph(node_map, production_list);
+
+ // Test that lookup returns null for a non-existent node
+ Label non_existent_label = new Label("non_existent");
+ conditions[i++] = graph.lookup(non_existent_label, false) == null;
+
+ // Add a node to the map and test lookup
+ Node test_node = new Node();
+ Label test_label = new Label("test");
+ node_map.put(test_label, test_node);
+ conditions[i++] = graph.lookup(test_label, false) == test_node;
+
+ // Test lookup with verbosity
+ conditions[i++] = graph.lookup(test_label).equals(test_node);
+
+ // Return true if all conditions are met
+ return all(conditions);
+ }
+
+ public static boolean test_Util_print_list_0(){
+ boolean[] conditions = new boolean[1];
+ int i = 0;
+
+ String prefix = "Test List:";
+ List<String> items = new ArrayList<>();
+ items.add("item1");
+ items.add("item2");
+ items.add("item3");
+
+    String expected_output = "Test List: 'item1', 'item2', 'item3'.\n";
+
+    ByteArrayOutputStream out_content = new ByteArrayOutputStream();
+    PrintStream original_out = System.out;
+    System.setOut(new PrintStream(out_content));
+
+    try{
+      Util.print_list(prefix, items);
+      String result = out_content.toString();
+      conditions[i++] = result.equals(expected_output);
+    } catch(Exception e){
+      conditions[i++] = false;
+    } finally{
+      System.setOut(original_out);
+    }
+
+ return all(conditions);
+ }
+
+
+ // Method to run all tests
+ public static void test_Ariadne(){
+ Map<String, Boolean> test_map = new HashMap<>();
+
+ // Adding tests to the map
+ test_map.put( "test_File_unpack_file_path_0", test_File_unpack_file_path_0() );
+ test_map.put( "test_Label_0", test_Label_0() );
+ test_map.put( "test_Token_0", test_Label_0() );
+ test_map.put( "test_LabelList_0", test_LabelList_0() );
+ test_map.put( "test_Node_0", test_Node_0() );
+ test_map.put( "test_NodeList_0", test_NodeList_0() );
+ test_map.put( "test_Production_0", test_Production_0() );
+ test_map.put( "test_ProductionList_0", test_ProductionList_0() );
+ test_map.put( "test_TokenSet_0", test_TokenSet_0() );
+ test_map.put("test_Graph_0", test_Graph_0());
+ test_map.put("test_Util_print_list_0", test_Util_print_list_0());
+
+ // Run the tests using TestBench
+ TestBench.run( test_map );
+ }
+
+ // Main function to provide a shell interface for running tests
+ public static void main(String[] args){
+ System.out.println("Running Ariadne tests...");
+ test_Ariadne(); // Calls the method to run all tests
+ }
+
+}
+
package com.ReasoningTechnology.TestBench;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.PrintStream;
+import java.lang.reflect.Method;
import java.util.Map;
-public class TestBench {
+public class TestBench{
  // typically used to gather results before a return
  public static boolean all(boolean[] conditions){
    for(boolean condition : conditions){
      if(!condition){ return false; }
    }
    return true;
  }
- public static void run(Map<String, Boolean> test_map){
- int totalTest_Map = test_map.size();
- int passedTest_Map = 0;
- int failedTest_Map = 0;
+ public static void flush_stdin() throws IOException{
+ while(System.in.available() > 0){
+ System.in.read();
+ }
+ }
+
+ public static void set_test_input(String input_data){
+ ByteArrayInputStream test_in = new ByteArrayInputStream(input_data.getBytes());
+ System.setIn(test_in);
+ }
+
+ public static void log_output(String test_name ,String stream ,String output_data) throws IOException{
+ // Only log if there is actual content to log
+ if(output_data != null && !output_data.isEmpty()){
+ try(FileWriter log_writer = new FileWriter("test_log.txt" ,true)){ // Append mode
+ log_writer.write("Test: " + test_name + "\n");
+ log_writer.write("Stream: " + stream + "\n");
+ log_writer.write("Output:\n" + output_data + "\n");
+ log_writer.write("----------------------------------------\n");
+ }
+ }
+ }
+
+ public static boolean method_is_wellformed(Method method) {
+ // Check if the method returns boolean
+ if(!method.getReturnType().equals(boolean.class)){
+ System.out.println("Structural problem: " + method.getName() + " does not return boolean.");
+ return false;
+ }
+
+ // Check if the method has exactly three arguments
+ Class<?>[] parameterTypes = method.getParameterTypes();
+ if(parameterTypes == null || parameterTypes.length != 3){
+ System.out.println("Structural problem: " + method.getName() + " does not have three arguments.");
+ return false;
+ }
+
+    // Check the parameter types: stdin as ByteArrayInputStream ,stdout and stderr
+    // as ByteArrayOutputStream ,matching what run() passes to method.invoke
+    if(
+      !parameterTypes[0].equals(ByteArrayInputStream.class) // Check stdin parameter
+      || !parameterTypes[1].equals(ByteArrayOutputStream.class) // Check stdout parameter
+      || !parameterTypes[2].equals(ByteArrayOutputStream.class) // Check stderr parameter
+ ){
+ System.out.println("Structural problem: " + method.getName() + " has incorrect argument types.");
+ return false;
+ }
+
+ return true;
+ }
+
+ public static void run(Object test_suite ,String[] stdin_array){
+
+ int failed_test = 0;
+ int passed_test = 0;
+
+ Method[] methods = test_suite.getClass().getDeclaredMethods();
+
+ for(Method method : methods){
+
+ // Ways a test can fail ,not exclusive
+ boolean fail_testbench = false;
+ boolean fail_malformed = false;
+ boolean fail_reported = false;
+ boolean fail_exception = false;
+ boolean fail_extraneous_stdout = false;
+ boolean fail_extraneous_stderr = false;
+
+ if( !method_is_wellformed(method) ){
+ // the malformed check prints specific messages
+ System.out.println("TestBench: malformed test counted as a failure:\'" + method.getName() + "\'");
+ failed_test++;
+ continue;
+ }
-    for( Map.Entry<String, Boolean> test : test_map.entrySet() ){
+
+      // Declared before the try blocks so they remain in scope for the catch and finally clauses below.
+      PrintStream original_out = System.out;
+      PrintStream original_err = System.err;
+      InputStream original_in = System.in;
+
+      ByteArrayOutputStream out_content = new ByteArrayOutputStream();
+      ByteArrayOutputStream err_content = new ByteArrayOutputStream();
+      ByteArrayInputStream in_content = new ByteArrayInputStream(String.join("\n" ,stdin_array).getBytes());
+
try{
- if( test.getValue() ){
- passedTest_Map++;
- } else{
- System.out.println( "failed: " + test.getKey() );
- failedTest_Map++;
- }
+        // Redirect the I/O channels so the tests can manipulate them as data.
+        System.setOut(new PrintStream(out_content));
+        System.setErr(new PrintStream(err_content));
+        System.setIn(in_content);
+
+ } catch(Throwable e){ // Catches both Errors and Exceptions
+ // Restore stdout ,stderr ,and stdin before reporting the error
+ System.setOut(original_out);
+ System.setErr(original_err);
+ System.setIn(original_in);
+
+ // Report the error
+ System.out.println("TestBench:: when redirecting i/o in preparation for running test \'" + test.getName() + "\' ,test bench itself throws error: " + e.toString());
+ failed_test++;
+ continue;
+ }
+
+ // Capture detritus
+      String exception_string = "";
+ String stdout_string = "";
+ String stderr_string = "";
+
+ // Finally the gremlins run the test!
+ try{
+
+ Object result = method.invoke(test_suite ,in_content ,out_content ,err_content);
+ fail_reported = !Boolean.TRUE.equals(result); // test passes if ,and only if ,it returns exactly 'true'.
+
+ // A test fails when there is extraneous output
+ fail_extraneous_stdout = out_content.size() > 0;
+ fail_extraneous_stderr = err_content.size() > 0;
+
+ // We keep it to log it
+ if(fail_extraneous_stdout){ stdout_string = out_content.toString(); }
+ if(fail_extraneous_stderr){ stderr_string = err_content.toString(); }
+
} catch(Exception e){
- System.out.println( "failed: " + test.getKey() );
- failedTest_Map++;
+
+ // A test fails when there is an unhandled exception.
+ fail_exception = true;
+
+ // We keep it to report it
+        exception_string = e.toString();
+
+ } finally{
+
+ // Restore original stdin ,stdout ,and stderr
+ System.setOut(original_out);
+ System.setErr(original_err);
+ System.setIn(original_in);
+ }
+
+ if(
+ fail_reported
+ || fail_exception
+ || fail_extraneous_stdout
+ || fail_extraneous_stderr
+ ){
+
+ failed_test++;
+
+ if(fail_reported) System.out.println("failed: \'" + method.getName() + "\' by report from test.");
+ if(fail_exception) System.out.println("failed: \'" + method.getName() + "\' due to unhandled exception: " + exception_string);
+        if(fail_extraneous_stdout){
+          System.out.println("failed: \'" + method.getName() + "\' due to extraneous stdout output ,see log.");
+          try{
+            log_output(method.getName() ,"stdout" ,stdout_string);
+          } catch(IOException log_e){
+            System.out.println("TestBench: could not write the log: " + log_e.toString());
+          }
+        }
+        if(fail_extraneous_stderr){
+          System.out.println("failed: \'" + method.getName() + "\' due to extraneous stderr output ,see log.");
+          try{
+            log_output(method.getName() ,"stderr" ,stderr_string);
+          } catch(IOException log_e){
+            System.out.println("TestBench: could not write the log: " + log_e.toString());
+          }
+        }
+
+ } else{
+ passed_test++;
}
+
}
- System.out.println("Total test_map run: " + totalTest_Map);
- System.out.println("Total test_map passed: " + passedTest_Map);
- System.out.println("Total test_map failed: " + failedTest_Map);
+ // Report summary of results
+ System.out.println("Total tests run: " + (passed_test + failed_test));
+ System.out.println("Total tests passed: " + passed_test);
+ System.out.println("Total tests failed: " + failed_test);
}
}
+++ /dev/null
-package com.ReasoningTechnology.Ariadne.TestBench;
-import com.ReasoningTechnology.Ariadne.*;
-import com.ReasoningTechnology.TestBench.*;
-import java.util.Map;
-import java.util.ArrayList;
-import java.util.HashMap;
-
-public class TestBenchAriadne extends TestBench{
-
- public static boolean test_File_unpack_file_path_0(){
- boolean[] conditions = new boolean[5];
- int i = 0;
-
- // Test input
- String test_fp = "/home/user/test.txt";
-
- // Expected output
- String expected_dp = "/home/user/";
- String expected_fn = "test.txt";
- String expected_fn_base = "test";
- String expected_fn_ext = "txt";
-
- // Actual output
- Map<String, String> result = File.unpack_file_path( test_fp );
-
- conditions[i++] = result.get("dp").equals( expected_dp );
- conditions[i++] = result.get("fn").equals( expected_fn );
- conditions[i++] = result.get("fn_base").equals( expected_fn_base );
- conditions[i++] = result.get("fn_ext").equals( expected_fn_ext );
- conditions[i++] = result.size() == 4;
-
- // Return true if all conditions are met
- return all( conditions );
- }
-
- public static boolean test_Label_0(){
- boolean[] conditions = new boolean[2];
- int i = 0;
-
- // Test input
- Label test_label = new Label("test");
-
- // Expected output
- String expected_value = "test";
-
- // Actual output
- conditions[i++] = test_label.get().equals(expected_value);
- conditions[i++] = test_label.toString().equals(expected_value);
-
- return all(conditions);
- }
-
- public static boolean test_Token_0(){
- boolean[] conditions = new boolean[4];
- int i = 0;
-
- // Test input
- Token token = new Token("test_value");
-
- // Check if the value is correctly stored and retrieved
- conditions[i++] = token.get().equals("test_value");
-
- // Check if the string representation is correct
- conditions[i++] = token.toString().equals("test_value");
-
- // Check equality with another Token object with the same value
- Token another_token = new Token("test_value");
- conditions[i++] = token.equals( another_token );
-
- // Check the hashCode consistency
- conditions[i++] = token.hashCode() == another_token.hashCode();
-
- return all(conditions);
- }
-
- public static boolean test_LabelList_0(){
- LabelList label_list = new LabelList(); // Use the constructor
-
- // Add a label and check the size
- label_list.add(new Label("test"));
- return label_list.size() == 1;
- }
-
- public static boolean test_Node_0(){
- Node node = new Node(); // Use the constructor
-
- // Add a key-value pair and check the map
- node.put(new Label("key"), new Object());
- return node.containsKey(new Label("key"));
- }
-
- public static boolean test_NodeList_0(){
- NodeList node_list = new NodeList(); // Use the constructor
-
- // Add a node and check the size
- node_list.add(new Node()); // Use Node constructor
- return node_list.size() == 1;
- }
-
- public static boolean test_Production_0(){
- Production production = label -> new Node(); // Use the Node constructor
-
- // Apply the production function
- Node node = production.apply(new Label("test"));
- return node != null;
- }
-
- public static boolean test_ProductionList_0(){
- ProductionList production_list = new ProductionList(); // Use the constructor
-
- // Add a production and check the size
- production_list.add(label -> new Node()); // Use the Node constructor
- return production_list.size() == 1;
- }
-
- public static boolean test_TokenSet_0(){
- TokenSet token_set = new TokenSet(); // Use the constructor
-
- // Add a token and check if it's contained in the set
- token_set.add(new Token("test"));
- return token_set.contains(new Token("test"));
- }
-
- public static boolean test_Graph_0() {
- boolean[] conditions = new boolean[3];
- int i = 0;
-
- // Create an empty node map and a production list
- Map<Label, Node> node_map = new HashMap<>();
- ProductionList production_list = new ProductionList();
-
- // Initialize the Graph
- Graph graph = new Graph(node_map, production_list);
-
- // Test that lookup returns null for a non-existent node
- Label non_existent_label = new Label("non_existent");
- conditions[i++] = graph.lookup(non_existent_label, false) == null;
-
- // Add a node to the map and test lookup
- Node test_node = new Node();
- Label test_label = new Label("test");
- node_map.put(test_label, test_node);
- conditions[i++] = graph.lookup(test_label, false) == test_node;
-
- // Test lookup with verbosity
- conditions[i++] = graph.lookup(test_label).equals(test_node);
-
- // Return true if all conditions are met
- return all(conditions);
- }
-
- // Method to run all tests
- public static void test_Ariadne(){
- Map<String, Boolean> test_map = new HashMap<>();
-
- // Adding tests to the map
- test_map.put( "test_File_unpack_file_path_0", test_File_unpack_file_path_0() );
- test_map.put( "test_Label_0", test_Label_0() );
- test_map.put( "test_Token_0", test_Label_0() );
- test_map.put( "test_LabelList_0", test_LabelList_0() );
- test_map.put( "test_Node_0", test_Node_0() );
- test_map.put( "test_NodeList_0", test_NodeList_0() );
- test_map.put( "test_Production_0", test_Production_0() );
- test_map.put( "test_ProductionList_0", test_ProductionList_0() );
- test_map.put( "test_TokenSet_0", test_TokenSet_0() );
- test_map.put("test_Graph_0", test_Graph_0());
-
- // Run the tests using TestBench
- TestBench.run( test_map );
- }
-
- // Main function to provide a shell interface for running tests
- public static void main(String[] args){
- System.out.println("Running Ariadne tests...");
- test_Ariadne(); // Calls the method to run all tests
- }
-
-}
-
--- /dev/null
+package com.ReasoningTechnology.TestBench;
+
+/*
+Tests of the TestBench itself: a passing test, a failing test, and a test that throws an uncaught exception.
+
+*/
+
+import com.ReasoningTechnology.Ariadne.*;
+import com.ReasoningTechnology.TestBench.*;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.lang.reflect.Method;
+import java.util.List;
+import java.util.Map;
+
+public class TestTestBench extends TestBench{
+
+ public static class TestSuite{
+
+ TestSuite(){
+ }
+
+    public boolean test_pass(ByteArrayInputStream in_content, ByteArrayOutputStream out_content, ByteArrayOutputStream err_content){
+ return true;
+ }
+
+    public boolean test_fail_0(ByteArrayInputStream in_content, ByteArrayOutputStream out_content, ByteArrayOutputStream err_content){
+ return false;
+ }
+
+    // Tests that an exception left uncaught by a test is correctly counted as a failure by the TestBench.
+    public static boolean test_fail_1() throws Exception {
+      int randomInt = (int) (Math.random() * 100); // Generate a random integer
+      // The condition below is always true for any integer, so the exception is always thrown,
+      // but the compiler cannot prove that, so the trailing return is not flagged as unreachable.
+ if(
+ (randomInt % 2 != 0 && ((randomInt * randomInt - 1) % 8 == 0))
+ || (randomInt % 2 == 0 && (randomInt * randomInt) % 4 == 0)
+ ){
+ throw new Exception("Condition met, error thrown.");
+ }
+
+ return true; // If the condition fails, return true
+ }
+
+ }
+
+ // Method to run all tests
+ public static void test_TestBench(){
+ System.out.println("TestTestBench: running tests. Note that two failures is normal");
+ TestSuite test_suite = new TestSuite();
+ TestBench.run( test_suite );
+ }
+
+ // Main function to provide a shell interface for running tests
+ public static void main(String[] args){
+ // tests currently takes no arguments or options
+ test_TestBench(); // Calls the method to run all tests
+ }
+
+}
+
+++ /dev/null
-#!/bin/env bash
-java com.ReasoningTechnology.Ariadne.TestBench.TestBenchAriadne
--- /dev/null
+#!/bin/env bash
+java com.ReasoningTechnology.TestBench.TestTestBench
--- /dev/null
+Test: test_pass
+Stream: stderr
+Output:
+wrong number of arguments
+----------------------------------------
+Test: test_fail_0
+Stream: stderr
+Output:
+wrong number of arguments
+----------------------------------------
echo "Creating shell wrappers..."
mkdir -p shell
# wrapper is a space separated list
- wrapper=TestBenchAriadne
+ wrapper=TestTestBench
for file in $wrapper;do
cat > shell/$file << EOL
#!/bin/env bash
--- /dev/null
+#!/bin/env bash
+
+# input guards
+
+ env_must_be="tester/tool/env"
+ if [ "$ENV" != "$env_must_be" ]; then
+ echo "$(script_fp):: error: must be run in the $env_must_be environment"
+ exit 1
+ fi
+
+echo "Compiling files..."
+
+ set -x
+ cd $REPO_HOME/tester
+ javac -d scratch_pad javac/TestBench.java javac/TestTestBench.java
+ jar cf jvm/TestBench.jar -C scratch_pad com/ReasoningTechnology/TestBench
+ set +x
+
+echo "Creating shell wrappers..."
+ mkdir -p shell
+ # wrapper is a space separated list
+ wrapper=TestTestBench
+ for file in $wrapper;do
+ cat > shell/$file << EOL
+#!/bin/env bash
+java com.ReasoningTechnology.TestBench.$file
+EOL
+ chmod +x shell/$file
+ done
+
+echo "$(script_fp) done."