--- /dev/null
+> cd Mosaic
+> source env_developer
+> emacs &
+
+...
+
+2024-11-04T11:19:53Z[Mosaic_developer]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic/developer§
+> clean_build_directories
++ cd /var/user_data/Thomas-developer/Mosaic/developer
++ rm -r scratchpad/com
++ rm jvm/Mosaic.jar
++ rm shell/Mosaic
++ set +x
+clean_build_directories done.
+
+2024-11-04T11:20:14Z[Mosaic_developer]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic/developer§
+> wipe_release
++ cd /var/user_data/Thomas-developer/Mosaic
++ rm -rf release/Mosaic release/Mosaic.jar
++ set +x
+wipe_release done.
+
+2024-11-04T11:20:18Z[Mosaic_developer]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic/developer§
+> make
+Compiling files...
++ javac -g -d scratchpad javac/IO.java javac/Mosaic.java javac/TestBench.java javac/Util.java
++ set +x
+Creating JAR file...
++ jar_file=jvm/Mosaic.jar
++ mkdir -p jvm
++ jar cf jvm/Mosaic.jar -C scratchpad .
++ set +x
+JAR file created successfully: jvm/Mosaic.jar
+Creating shell wrappers...
+developer/tool/make done.
+
+2024-11-04T11:20:40Z[Mosaic_developer]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic/developer§
+> release
+Starting release process...
+Installed Mosaic.jar to /var/user_data/Thomas-developer/Mosaic/release with permissions ug+r
+Installed Mosaic to /var/user_data/Thomas-developer/Mosaic/release with permissions ug+r+x
+developer/tool/release done.
+
+2024-11-04T11:20:44Z[Mosaic_developer]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic/developer§
+> clean_make_output
++ cd /var/user_data/Thomas-developer/Mosaic/developer
++ rm -r scratchpad/com/ReasoningTechnology/Mosaic
++ rm jvm/Mosaic.jar
++ rm 'shell/{Mosaic}'
+rm: cannot remove 'shell/{Mosaic}': No such file or directory
++ set +x
+clean_make_output done.
+
the last row of the table, to have reasonable test times, the coverage would be
10<sup>-18</sup> percent. At that level of coverage there is really
no reason to test. Hence, this table is not limited to speaking about exhaustive
- testing, rather is speaks to black box testing in general.</p>
+ testing; rather, it speaks to black box testing in general.</p>
<h3>Informed Spot Checking</h3>
+++ /dev/null
-<!DOCTYPE html>
-<html lang="en">
-<head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0">
- <link href="https://fonts.googleapis.com/css2?family=Noto+Sans+JP&display=swap" rel="stylesheet">
- <title>Output Stream Policy - Mosaic Project</title>
- <style>
- html { font-size: 16px; }
- body {
- font-family: 'Noto Sans JP', Arial, sans-serif;
- background-color: hsl(0, 0%, 10%);
- color: hsl(42, 100%, 80%);
- padding: 2rem;
- margin: 0;
- }
- .page { padding: 1.25rem; margin: 1.25rem auto; max-width: 46.875rem; background-color: hsl(0, 0%, 0%); box-shadow: 0 0 0.625rem hsl(42, 100%, 50%); }
- ul, li { font-size: 1rem; list-style-type: none; }
- li::before { content: "📄 "; margin-right: 0.3125rem; }
- li { margin-bottom: 0.3125rem; }
- .description { margin-left: 0.625rem; color: hsl(42, 100%, 75%); }
- code { font-family: 'Courier New', Courier, monospace; background-color: hsl(0, 0%, 25%); color: hsl(42, 100%, 90%); padding: 0.125rem 0.25rem; border-radius: 0.1875rem; font-size: 90%; }
- h1 { text-align: center; color: hsl(42, 100%, 84%); text-transform: uppercase; margin-bottom: 1.25rem; }
- h2 { color: hsl(42, 100%, 84%); text-transform: uppercase; margin-top: 2.5rem; }
- p { color: hsl(42, 100%, 90%); margin-bottom: 1.25rem; text-align: justify; }
- </style>
-</head>
-<body>
- <div class="page">
- <h1>Output Stream Policy for Tests</h1>
-
- <h2>Overview of the <code>IO</code> Object</h2>
-
- <p>Each test function is given an <code>IO</code> object, which provides
- methods for inspecting <code>stdout</code> and <code>stderr</code> output
- streams, programmatically adding data to the <code>stdin</code> input stream,
- and clearing output streams as needed. Although the <code>IO</code> object is
- optional, it is available for cases where I/O validation or cleanup is
- essential to the test.</p>
-
- <h2>Purpose</h2>
-
- <p>Each test function is responsible for managing any output generated
- on <code>stdout</code> or <code>stderr</code> by the function under test
- (fut). <code>TestBench</code> will automatically clear the streams before
- each test begins and will check them after the test completes, treating any
- remaining output as unintended and marking the test as a failure. This policy
- ensures that tests intentionally handle output by either validating,
- clearing, or ignoring it, thereby maintaining a clean and predictable testing
- environment.</p>
-
- <h2>Policy Guidelines</h2>
- <ul>
- <li><strong>1. Define an Output Handling Policy:</strong></li>
- <ul>
- <li><span class="description">Every test should have a defined policy for how it handles output generated by the fut. There are three primary approaches:</span></li>
- <ul>
- <li><span class="description"><strong>Validation:</strong> Check the fut output and confirm its correctness.</span></li>
- <li><span class="description"><strong>Intentional Ignoring:</strong> If output validation isn’t relevant to the test, the output should still be acknowledged and cleared to avoid unintended failures.</span></li>
- <li><span class="description"><strong>Mixed Policy:</strong> A test can validate specific output while ignoring others, as long as any remaining output is cleared before the test returns.</span></li>
- </ul>
- </ul>
-
- <li><strong>2. When to Validate Output:</strong></li>
- <ul>
- <li><span class="description">If the test expects specific output from the fut, it should retrieve and check the content on <code>stdout</code> and <code>stderr</code> using methods like <code>io.get_out_content()</code> or <code>io.get_err_content()</code>. The test passes if the actual output matches the expected content.</span></li>
- <li><span class="description">After validating, the test should clear the output buffers (<code>io.clear_buffers()</code>) if further output handling is not needed to avoid residual content.</span></li>
- </ul>
-
- <li><strong>3. When to Ignore Output:</strong></li>
- <ul>
- <li><span class="description">If the test does not require output verification, it should acknowledge the output by clearing the streams before returning.</span></li>
- <li><span class="description">This approach signals to <code>TestBench</code> that any output generated was intentionally disregarded and avoids marking the test as failed.</span></li>
- </ul>
-
- <li><strong>4. Failure Due to Residual Output:</strong></li>
- <ul>
- <li><span class="description"><strong>No Defined Policy:</strong> If a test leaves output on the streams without a clear handling policy (validation or intentional clearing), <code>TestBench</code> will flag this as a failure.</span></li>
- <li><span class="description"><strong>Ensuring Clean Tests:</strong> To avoid unexpected failures, verify that each test has no residual output before returning by either validating or clearing output streams.</span></li>
- </ul>
- </ul>
-
- <h2>Example Scenarios</h2>
- <ul>
- <li><strong>1. Output Validation:</strong></li>
- <pre><code>public static Boolean test_with_output_verification(IO io) {
- System.out.println("Expected output");
- String output = io.get_out_content();
- boolean isCorrect = output.equals("Expected output");
- io.clear_buffers(); // Clear remaining content if not needed
- return isCorrect;
-}</code></pre>
-
- <li><strong>2. Ignoring Output:</strong></li>
- <pre><code>public static Boolean test_without_output_verification(IO io) {
- System.out.println("Output not needed for this test");
- io.clear_buffers(); // Clear output since it’s intentionally ignored
- return true;
-}</code></pre>
- </ul>
-
- <h2>Summary</h2>
- <p>Each test should manage its output streams with an intentional policy:</p>
- <ul>
- <li><span class="description"><strong>Validate output</strong> if it is relevant to the test.</span></li>
- <li><span class="description"><strong>Acknowledge and clear output</strong> if it is not relevant.</span></li>
- <li><span class="description"><strong>Avoid residual output</strong> to prevent <code>TestBench</code> from marking the test as failed.</span></li>
- </ul>
- <p>This approach ensures that tests remain clean and focused on their primary objectives without unintended side effects from unhandled output.</p>
- </div>
-</body>
-</html>
--- /dev/null
+
+Bash is inconsistent about how it reports the name of the running script
+across scenarios (sourced, executed directly, from within a function called
+by another function).
+
+1.
+
+BASH_SOURCE[0] was used because $0 did not work with sourced scripts (a
+fact that is leveraged for detecting when in a sourced script).
+
+2.
+
+However, this did not work in all scenarios:
+
+ read -r -d '' script_afp_string <<'EOF'
+ realpath "${BASH_SOURCE[0]}" 2>/dev/null
+ EOF
+
+ script_afp(){
+ eval "$script_afp_string"
+ }
+
+ export script_afp_string
+ export -f script_afp
+
+When `script_afp` was exported, used in another file, and used within a function
+in that other file, it reported `environment` for the script name at
+BASH_SOURCE[0]. In various call scenarios the actual script name appears at
+BASH_SOURCE[1] or even at BASH_SOURCE[2].
+
+3.
+
+As a stable alternative to having a script_afp function, place this line
+at the top of scripts that use the `script_XX` functions, or at the top
+of all scripts:
+
+  script_afp=$(realpath "${BASH_SOURCE[0]}")
+
+Then use $script_afp as a string within other functions. It will have a stable
+value no matter the call structure.
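+
+For example (a minimal sketch; `report_location` is only an illustrative name,
+not part of the project tooling):
+
+  # top of the script: capture the script's absolute path once
+  script_afp=$(realpath "${BASH_SOURCE[0]}")
+
+  # functions defined later read the captured value, not BASH_SOURCE
+  report_location(){
+    echo "script: $script_afp"
+  }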
<li>tester/ <span class="description">Workspace for the tester. Has the test bench, tests, and test scripts.</span></li>
<ul>
<li>document/ <span class="description">Test-specific documentation.</span></li>
- <li>test0/ <span class="description">Test case 0 environment and associated scripts.</span></li>
- <li>test1/ <span class="description">Test case 1 environment and associated scripts.</span></li>
- <li>test2/ <span class="description">Test case 2 environment and associated scripts.</span></li>
+      <li>javac/ <span class="description">Java sources for the tests of the test bench.</span></li>
<li>tool/ <span class="description">Tools needed for testing and managing the test environment.</span></li>
</ul>
<li>tool/ <span class="description">Project administration specific tools.</span></li>
+++ /dev/null
-
-I had a lot of problems in bash scripting language, while trying to export a
-function that could report the name of the script it was called in.
-
-1.
-
-BASH_SOURCE[0] was used because $0 did not work with sourced scripts (a
-fact that is leveraged for detecting when in a sourced script).
-
-2.
-
-Hence, this did not work in general:
-
- read -r -d '' script_afp_string <<'EOF'
- realpath "${BASH_SOURCE[0]}" 2>/dev/null
- EOF
-
- script_afp(){
- eval "$script_afp_string"
- }
-
- export script_afp_string
- export -f script_afp
-
-When `script_afp` was exported, used in another file, and used within a function
-in that other file, it reported `environment` for the script name at
-BASH_SOURCE[0]. In various call scenarios the actual script name appears at
-BASH_SOURCE[1] or even at BASH_SOURCE[2].
-
-3.
-
-As a stable alternative to having a script_afp function, place this line
-at the top of scripts that use the `script_XX` functions, or at the top
-of all scripts:
-
- script_afp=realpath "${BASH_SOURCE[0]}"
-
-Then use $script_afp as a string within other functions. It will have stable
-value no matter the call structure.
4.1. The release candidate is located in the `$REPO_HOME/release` directory and
has passed testing.
-4.2. Check that the program `$REPO_HOME/tool_shared/bespoke/release` outputs the
+4.2. Check that the program `$REPO_HOME/tool_shared/bespoke/version` outputs the
correct information. If necessary, modify it.
4.3. A new branch is created in the project for the release, named
`release_v<n>.0`, where `v<n>.0` is the version number from the `version`
--- /dev/null
+<!DOCTYPE html>
+<html lang="en">
+<head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <link href="https://fonts.googleapis.com/css2?family=Noto+Sans+JP&display=swap" rel="stylesheet">
+ <title>Output Stream Policy - Mosaic Project</title>
+ <style>
+ html { font-size: 16px; }
+ body {
+ font-family: 'Noto Sans JP', Arial, sans-serif;
+ background-color: hsl(0, 0%, 10%);
+ color: hsl(42, 100%, 80%);
+ padding: 2rem;
+ margin: 0;
+ }
+ .page { padding: 1.25rem; margin: 1.25rem auto; max-width: 46.875rem; background-color: hsl(0, 0%, 0%); box-shadow: 0 0 0.625rem hsl(42, 100%, 50%); }
+ ul, li { font-size: 1rem; list-style-type: none; }
+ li::before { content: "📄 "; margin-right: 0.3125rem; }
+ li { margin-bottom: 0.3125rem; }
+ .description { margin-left: 0.625rem; color: hsl(42, 100%, 75%); }
+ code { font-family: 'Courier New', Courier, monospace; background-color: hsl(0, 0%, 25%); color: hsl(42, 100%, 90%); padding: 0.125rem 0.25rem; border-radius: 0.1875rem; font-size: 90%; }
+ h1 { text-align: center; color: hsl(42, 100%, 84%); text-transform: uppercase; margin-bottom: 1.25rem; }
+ h2 { color: hsl(42, 100%, 84%); text-transform: uppercase; margin-top: 2.5rem; }
+ p { color: hsl(42, 100%, 90%); margin-bottom: 1.25rem; text-align: justify; }
+ </style>
+</head>
+<body>
+ <div class="page">
+ <h1>Output Stream Policy for Tests</h1>
+
+ <h2>Overview of the <code>IO</code> Object</h2>
+
+ <p>Each test function is given an <code>IO</code> object, which provides
+ methods for inspecting <code>stdout</code> and <code>stderr</code> output
+ streams, programmatically adding data to the <code>stdin</code> input stream,
+ and clearing output streams as needed. Although the <code>IO</code> object is
+ optional, it is available for cases where I/O validation or cleanup is
+ essential to the test.</p>
+
+ <h2>Purpose</h2>
+
+ <p>Each test function is responsible for managing any output generated
+ on <code>stdout</code> or <code>stderr</code> by the function under test
+ (fut). <code>TestBench</code> will automatically clear the streams before
+ each test begins and will check them after the test completes, treating any
+ remaining output as unintended and marking the test as a failure. This policy
+ ensures that tests intentionally handle output by either validating,
+ clearing, or ignoring it, thereby maintaining a clean and predictable testing
+ environment.</p>
+
+ <h2>Policy Guidelines</h2>
+ <ul>
+ <li><strong>1. Define an Output Handling Policy:</strong></li>
+ <ul>
+ <li><span class="description">Every test should have a defined policy for how it handles output generated by the fut. There are three primary approaches:</span></li>
+ <ul>
+ <li><span class="description"><strong>Validation:</strong> Check the fut output and confirm its correctness.</span></li>
+ <li><span class="description"><strong>Intentional Ignoring:</strong> If output validation isn’t relevant to the test, the output should still be acknowledged and cleared to avoid unintended failures.</span></li>
+          <li><span class="description"><strong>Mixed Policy:</strong> A test can validate specific output while ignoring others, as long as any remaining output is cleared before the test returns (see example 3 under Example Scenarios).</span></li>
+ </ul>
+ </ul>
+
+ <li><strong>2. When to Validate Output:</strong></li>
+ <ul>
+ <li><span class="description">If the test expects specific output from the fut, it should retrieve and check the content on <code>stdout</code> and <code>stderr</code> using methods like <code>io.get_out_content()</code> or <code>io.get_err_content()</code>. The test passes if the actual output matches the expected content.</span></li>
+ <li><span class="description">After validating, the test should clear the output buffers (<code>io.clear_buffers()</code>) if further output handling is not needed to avoid residual content.</span></li>
+ </ul>
+
+ <li><strong>3. When to Ignore Output:</strong></li>
+ <ul>
+ <li><span class="description">If the test does not require output verification, it should acknowledge the output by clearing the streams before returning.</span></li>
+ <li><span class="description">This approach signals to <code>TestBench</code> that any output generated was intentionally disregarded and avoids marking the test as failed.</span></li>
+ </ul>
+
+ <li><strong>4. Failure Due to Residual Output:</strong></li>
+ <ul>
+ <li><span class="description"><strong>No Defined Policy:</strong> If a test leaves output on the streams without a clear handling policy (validation or intentional clearing), <code>TestBench</code> will flag this as a failure.</span></li>
+ <li><span class="description"><strong>Ensuring Clean Tests:</strong> To avoid unexpected failures, verify that each test has no residual output before returning by either validating or clearing output streams.</span></li>
+ </ul>
+ </ul>
+
+ <h2>Example Scenarios</h2>
+ <ul>
+ <li><strong>1. Output Validation:</strong></li>
+ <pre><code>public static Boolean test_with_output_verification(IO io) {
+ System.out.println("Expected output");
+ String output = io.get_out_content();
+ boolean isCorrect = output.equals("Expected output");
+ io.clear_buffers(); // Clear remaining content if not needed
+ return isCorrect;
+}</code></pre>
+
+ <li><strong>2. Ignoring Output:</strong></li>
+ <pre><code>public static Boolean test_without_output_verification(IO io) {
+ System.out.println("Output not needed for this test");
+ io.clear_buffers(); // Clear output since it’s intentionally ignored
+ return true;
+}</code></pre>
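+
+    <li><strong>3. Mixed Policy:</strong></li>
+    <li><span class="description">A sketch of a mixed policy (the method name and the use of <code>contains</code> are illustrative, not prescribed by the test bench): validate <code>stdout</code>, then acknowledge and discard the remaining <code>stderr</code> content.</span></li>
+    <pre><code>public static Boolean test_with_mixed_policy(IO io) {
+  System.out.println("Expected output");
+  System.err.println("Diagnostic detail this test ignores");
+  boolean isCorrect = io.get_out_content().contains("Expected output");
+  io.clear_buffers(); // acknowledge and discard the ignored stderr content
+  return isCorrect;
+}</code></pre>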
+ </ul>
+
+ <h2>Summary</h2>
+ <p>Each test should manage its output streams with an intentional policy:</p>
+ <ul>
+ <li><span class="description"><strong>Validate output</strong> if it is relevant to the test.</span></li>
+ <li><span class="description"><strong>Acknowledge and clear output</strong> if it is not relevant.</span></li>
+ <li><span class="description"><strong>Avoid residual output</strong> to prevent <code>TestBench</code> from marking the test as failed.</span></li>
+ </ul>
+ <p>This approach ensures that tests remain clean and focused on their primary objectives without unintended side effects from unhandled output.</p>
+ </div>
+</body>
+</html>
--- /dev/null
+This shows all tests passing.
+
+It can be a bit confusing to read, but the failure reports from the tests named
+'test_failure_X' are expected: those tests pass by being reported as failures.
+This is because we are testing a test bench, and we are exercising the feature
+of the test bench where it fails bad code.
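+
+As a hypothetical sketch of the shape such a test takes (the actual sources are
+under tester/javac/, e.g. Test_MockClass.java), a test such as 'test_failure_3'
+intentionally leaves output on stdout so that the test bench must flag it as a
+failure:
+
+  public static Boolean test_failure_3(IO io){
+    // TestBench is expected to flag the residual stdout; that flagged
+    // failure is the desired outcome when testing the test bench.
+    System.out.println("Intentional extraneous chars to stdout for testing");
+    return true; // no io.clear_buffers(): the output is left on purpose
+  }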
+
+> cd Mosaic
+> source env_tester
+> emacs &
+
+...
+
+2024-11-04T11:23:08Z[Mosaic_tester]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic§
+> clean_build_directories
++ cd /var/user_data/Thomas-developer/Mosaic/tester
++ rm -r scratchpad/Test0.class scratchpad/Test_IO.class 'scratchpad/Test_MockClass$TestSuite.class' scratchpad/Test_MockClass.class scratchpad/Test_TestBench.class scratchpad/Test_Util.class
++ rm jvm/Test_Mosaic.jar
++ rm shell/Test0 shell/Test_IO shell/Test_MockClass shell/Test_TestBench shell/Test_Util
++ set +x
+clean_build_directories done.
+
+2024-11-04T11:23:23Z[Mosaic_tester]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic§
+> make
+Compiling files...
++ cd /var/user_data/Thomas-developer/Mosaic/tester
++ javac -g -d scratchpad javac/Test0.java javac/Test_IO.java javac/Test_MockClass.java javac/Test_TestBench.java javac/Test_Util.java
++ jar cf jvm/Test_Mosaic.jar -C scratchpad .
++ set +x
+Creating shell wrappers...
+tester/tool/make done.
+
+2024-11-04T11:23:27Z[Mosaic_tester]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic§
+> run_tests
+Running Test0...Test0 passed
+Running Test_Util...Test_Util passed
+Running Test_IO...Test_IO passed
+Running Test_TestBench...Expected output: Structural problem message for dummy_invalid_return_method.
+Structural problem: dummy_invalid_return_method does not return Boolean.
+Test_TestBench Total tests run: 3
+Test_TestBench Total tests passed: 3
+Test_TestBench Total tests failed: 0
+Running Test_MockClass...Test failed: 'test_failure_0' reported failure.
+Structural problem: test_failure_1 does not return Boolean.
+Error: test_failure_1 has an invalid structure.
+Test failed: 'test_failure_2' threw an exception: java.lang.reflect.InvocationTargetException
+Test failed: 'test_failure_3' produced extraneous stdout.
+Test failed: 'test_failure_4' produced extraneous stderr.
+Total tests run: 9
+Total tests passed: 4
+Total tests failed: 5
+
+2024-11-04T11:23:33Z[Mosaic_tester]
+Thomas-developer@Blossac§/var/user_data/Thomas-developer/Mosaic§
+> clean_build_directories
++ cd /var/user_data/Thomas-developer/Mosaic/tester
++ rm -r scratchpad/Test0.class scratchpad/Test_IO.class 'scratchpad/Test_MockClass$TestSuite.class' scratchpad/Test_MockClass.class scratchpad/Test_TestBench.class scratchpad/Test_Util.class
++ rm jvm/Test_Mosaic.jar
++ rm shell/Test0 shell/Test_IO shell/test_log.txt shell/Test_MockClass shell/Test_TestBench shell/Test_Util
++ set +x
+clean_build_directories done.
--- /dev/null
+
+2024-11-04T13:53:57.865246Z -----------------------------------------------------------
+Test: test_failure_3
+Stream: stdout
+Output:
+Intentional extraneous chars to stdout for testing
+
+
+2024-11-04T13:53:57.874296Z -----------------------------------------------------------
+Test: test_failure_4
+Stream: stderr
+Output:
+Intentional extraneous chars to stderr for testing.
+
script_afp=$(realpath "${BASH_SOURCE[0]}")
# 2024-10-24T14:56:09Z project skeleton and test bench files extracted from Ariadne
-echo v0.1
+echo v1.0