From c63be1e1366acefc2d799b8857674f503bea6996 Mon Sep 17 00:00:00 2001
From: Thomas Walker Lynch

+ © 2024 Thomas Walker Lynch - All Rights Reserved.
+
+ White Box Testing
+
+ Introduction
+
+ Testing centers around three key components: the test
+ bench, the test functions (or tests), and
+ the functions under test. In most cases, the
+ developer provides the functions under test. When this tool is used, Mosaic
+ supplies the test bench. This leaves the tester with the role of creating and
+ running the tests. Often, of course, the tester role and the developer role are
+ performed by the same person, though these roles are distinct.
+
+ The term function refers to any program or
+ circuit where outputs are determined solely by inputs, without internal
+ state being kept, and without side effects. All inputs and outputs are
+ explicitly defined. By definition, a function returns a single result, but
+ this is not a very strong constraint because said single result can be a
+ collection, such as a vector or set.
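+
+ As a minimal sketch of such a function (hypothetical, not part of Mosaic),
+ note that its single result is a collection, determined entirely by the
+ inputs, with no internal state and no side effects:
+
+ // a pure function: outputs determined solely by inputs, and the
+ // single result is a collection (an int array)
+ public static int[] pairwise_sum(int[] a, int[] b) {
+   int[] result = new int[Math.min(a.length, b.length)];
+   for (int i = 0; i < result.length; i++) result[i] = a[i] + b[i];
+   return result;
+ }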
+
+ We need this precise definition for a function to make meaningful
+ statements in this document, but the Mosaic TestBench can be used with
+ tests designed to evaluate any type of subroutine. A later chapter will
+ cover testing stateful subroutines, provided that I get around to writing it.
+
+ There is also a nuanced distinction between function
+ in singular and plural forms, because a collection of functions can be viewed as
+ a single larger function with perhaps more inputs and outputs. Hence, when a test
+ is said to work on a function, we cannot conclude that it is a single function
+ defined in the code.
+
+ A test must have access to the function under test so that it can supply
+ inputs and harvest outputs from it. A test must also have a
+ failure detection function that, when given
+ copies of the inputs and outputs, will return a result indicating if a
+ test failed or not. Ideally, the failure detection function is accurate,
+ or even perfect, as this reduces missed failures and minimizes the need
+ to verify cases that it has flagged as failures.
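+
+ In code, a failure detection function might look like the following sketch
+ (the name, the int[] vector types, and the squaring expectation are
+ illustrative only, not a Mosaic interface):
+
+ // returns true when each output is consistent with its input;
+ // here the expectation is that each output is the square of its input
+ public static boolean no_failure_detected(int[] inputs, int[] outputs) {
+   if (inputs.length != outputs.length) return false;
+   for (int i = 0; i < inputs.length; i++) {
+     if (outputs[i] != inputs[i] * inputs[i]) return false;
+   }
+   return true;
+ }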
+
+ The tester's goal is to identify failures:
+ observable differences between actual outputs and expected outputs. Once a
+ failure is identified, a developer can investigate the issue, locate
+ the fault, and implement corrections as
+ necessary. While Mosaic aids in failure detection, it does not directly
+ assist with debugging.
+
+ Unstructured Testing
+
+ Unstructured testing is at the base of all testing strategies. The following are some
+ examples of approaches to unstructured testing. The Mosaic TestBench is agnostic
+ to the approach used for unstructured testing; rather, this section is about writing
+ the test code that the TestBench will call.
+
+ Reference Value Based Testing
+
+ In reference value-based testing, an ordering
+ is assigned to the inputs for
+ the function under test, as well as to
+ its outputs. With this ordering, the function
+ under test can be said to receive an input
+ vector and to return an actual output vector.
+
+ In this testing approach, a Reference Model is also used.
+ When given an input vector, the Reference Model will produce
+ a corresponding reference output vector that follows the
+ same component ordering as the actual output vector from the
+ function under test. The failure detection function then compares each
+ actual output vector with its respective reference output vector. If they do
+ not match, the test is deemed to have failed.
+
+ The Reference Model is sometimes referred to as the golden
+ model, and is said to produce golden values. However, this
+ terminology is often an exaggeration, as testing frequently reveals inaccuracies
+ in the reference values themselves.
+
+ Thus, in reference value-based testing, the failure detection function
+ relies on a comparison between the actual and reference output vectors. Its accuracy
+ depends directly on the accuracy of the Reference Model.
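+
+ A sketch of a reference value-based test in Java: square_under_test stands
+ in for the function under test, and reference_square for the Reference
+ Model (both hypothetical, not part of Mosaic):
+
+ import java.util.Arrays;
+
+ // hypothetical function under test
+ static int[] square_under_test(int[] input) {
+   int[] out = new int[input.length];
+   for (int i = 0; i < input.length; i++) out[i] = input[i] * input[i];
+   return out;
+ }
+
+ // hypothetical Reference Model: an independent implementation that
+ // produces the reference output vector
+ static int[] reference_square(int[] input) {
+   int[] out = new int[input.length];
+   for (int i = 0; i < input.length; i++)
+     out[i] = (int) Math.round(Math.pow(input[i], 2));
+   return out;
+ }
+
+ // failure detection: compare the actual and reference output vectors
+ public static Boolean test_square() {
+   int[] input_vector = {0, 1, 2, 3};
+   int[] actual_output = square_under_test(input_vector);
+   int[] reference_output = reference_square(input_vector);
+   return Arrays.equals(actual_output, reference_output);
+ }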
+
+ Property Check Testing
+
+ Property check testing is an alternative to
+ reference value-based testing. Here, rather than comparing the actual
+ outputs to reference outputs, the actual output is validated against
+ known properties or expected characteristics. For example, given an integer as input, a function that squares this
+ input should yield an even result for even inputs and an odd result for odd
+ inputs. If the output satisfies the expected property, the test passes;
+ otherwise, it fails. This approach allows testing of general behaviors
+ without specific reference values.
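+
+ A sketch of such a property check for the squaring example (the function
+ name is hypothetical, and the squaring expression stands in for a call to
+ the function under test):
+
+ // property: squaring preserves parity, so input and output
+ // must both be even or both be odd
+ public static Boolean test_square_parity() {
+   for (int x = -4; x <= 4; x++) {
+     int y = x * x; // stands in for the function under test
+     if (Math.abs(x) % 2 != y % 2) return false;
+   }
+   return true;
+ }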
+
+ Spot Checking
+
+ With spot checking, the function under test is checked against one or
+ two input vectors. Moving from zero to one, i.e. trying a program for the first time,
+ can have a particularly high threshold of difficulty. A tremendous
+ amount is learned during development if even one test passes for
+ a function. Sometimes there are notorious edge cases; zeros, and running one off the
+ end of an array, come to mind. Checking a middle value and the edge
+ cases is often an effective test. It takes two points to determine a line. In Fourier Analysis,
+ it takes two samples per period of the highest frequency component
+ to determine an entire waveform. There is only so much a piece of
+ code can do differently if it works at the edge cases and in between.
+ It is because of this effect that ad hoc testing has produced so
+ much working code.
+ Spot checking is particularly useful during development. It offers the
+ highest testing return for the lowest investment. Heavy investment is
+ not appropriate for code in development that is not yet stable and is open to
+ being refactored.
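+
+ As a sketch, a spot check for a squaring function might try one middle
+ value plus the edge cases around zero (square here is a hypothetical
+ stand-in for the function under test):
+
+ static int square(int x) { return x * x; } // stand-in for the fut
+
+ public static Boolean test_square_spot() {
+   return square(3) == 9      // a middle value
+       && square(0) == 0      // zero edge case
+       && square(-1) == 1;    // negative edge case
+ }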
+
+ Output Stream Policy for Tests
+
+ Overview of the IO Object
+
+ Each test function is given an IO object, which provides
+ methods for inspecting the stdout and stderr output
+ streams, programmatically adding data to the stdin input stream,
+ and clearing output streams as needed. Although use of the IO object is
+ optional, it is available for cases where I/O validation or cleanup is
+ essential to the test.
+
+ Purpose
+
+ Each test function is responsible for managing any output generated
+ on stdout or stderr by the function under test
+ (fut). TestBench will automatically clear the streams before
+ each test begins and will check them after the test completes, treating any
+ remaining output as unintended and marking the test as a failure. This policy
+ ensures that tests intentionally handle output by either validating,
+ clearing, or ignoring it, thereby maintaining a clean and predictable testing
+ environment.
+
+ Policy Guidelines
+
+ Each test should manage its output streams with an intentional policy:
+
+ - Validate: inspect the content of stdout and stderr using methods like
+   io.get_out_content() or io.get_err_content(). The test passes if the
+   actual output matches the expected content.
+
+ - Clear: clear the streams (io.clear_buffers()) if further output handling
+   is not needed, to avoid residual content.
+
+ - Ignore: clearing also signals to
+   TestBench that any output generated was intentionally disregarded and avoids
+   marking the test as failed.
+
+ If a test finishes while unhandled output remains on stdout or stderr,
+ TestBench will flag this as a failure.
+
+ Example Scenarios
+
+ A test that verifies the output it generates:
+
+ public static Boolean test_with_output_verification(IO io) {
+   System.out.println("Expected output");
+   String output = io.get_out_content();
+   boolean isCorrect = output.equals("Expected output");
+   io.clear_buffers(); // Clear remaining content if not needed
+   return isCorrect;
+ }
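+
+ The same pattern works for the error stream via io.get_err_content(); a
+ sketch, where the diagnostic message is a hypothetical stand-in for fut
+ output:
+
+ public static Boolean test_with_error_output_verification(IO io) {
+   System.err.println("warning: low precision"); // stands in for fut output
+   boolean isCorrect = io.get_err_content().contains("low precision");
+   io.clear_buffers(); // leave the streams clean for TestBench
+   return isCorrect;
+ }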
+
+ A test that generates output but intentionally ignores it:
+
+ public static Boolean test_without_output_verification(IO io) {
+   System.out.println("Output not needed for this test");
+   io.clear_buffers(); // Clear output since it's intentionally ignored
+   return true;
+ }
+
+ Summary
+
+ Every test is responsible for leaving the output streams in a clean state.
+ Validating or clearing any output that was generated prevents
+ TestBench from marking the test as failed. This approach ensures that tests
+ remain clean and focused on their primary objectives without unintended
+ side effects from unhandled output.
+ Test functions return a Boolean result: true if they pass.
Accordingly, only pass/fail counts and the names of failing functions are
- recorded. For more detailed investigation, the developer can run a failed
- test using a debugging tool such as jdb.
+ recorded. For more detailed investigation, the developer can run a failed
+ test using a debugging tool such as
jdb.
To run all tests and gather results, follow these steps:
1. Run clean_build_directories.
2. Run make to compile the project and prepare all test class shell wrappers.
3. Run run_tests to run the tests. Each test class will output
   its results, identifying tests that failed.
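
A hypothetical session (tool locations are an assumption; adjust to the
repository layout):

  $ clean_build_directories   # remove stale build products
  $ make                      # compile and generate the shell wrappers
  $ run_tests                 # run each wrapper named by shell_wrapper_list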
diff --git a/tester/shell/Test_MockClass b/tester/shell/Test_MockClass
new file mode 100755
index 0000000..2e4f2a7
--- /dev/null
+++ b/tester/shell/Test_MockClass
@@ -0,0 +1,2 @@
+#!/bin/env bash
+java Test_MockClass
diff --git a/tester/tool/shell_wrapper_list b/tester/tool/shell_wrapper_list
index 3b46b8d..99bf5de 100755
--- a/tester/tool/shell_wrapper_list
+++ b/tester/tool/shell_wrapper_list
@@ -9,5 +9,5 @@ if [ "$ENV" != "$env_must_be" ]; then
 fi

 # space separated list of shell interface wrappers
-echo Test0 Test_Util Test_IO Test_TestBench
+echo Test0 Test_Util Test_IO Test_TestBench Test_MockClass
--
2.20.1