From c6e643766ed17ce5ecb0760479b493fde9f4bc6d Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 12:33:10 +0000 Subject: [PATCH 01/16] Updated directory structure in examples --- files/code-testing.zip | Bin 9154 -> 25024 bytes 1 file changed, 0 insertions(+), 0 deletions(-) diff --git a/files/code-testing.zip b/files/code-testing.zip index f94670049e8c7b223236e228c4e68f47b9ae32c1..cf5e74530790ee8ee0ea7adf330f4386d8c2a460 100644 GIT binary patch literal 25024 zcmdsfbyU<__czkr(nxn>fTV;-cXz|kT_Q*eNP|*R(gK2#f~YizgmkBb2#7%m`pz&4 zGnC%}696Txv=;h#HseJI^_ZtEX8H|a&sTqfwDhdo- zqkx3rZ3k>qR}VCp3-C*DFfa#iiiba^_BVq1ImVx#vs(aMtj)|E4q)!$2^x}><`Ta+8-EWdwXkVFfRxSf8pi2OR6HIC`8a8E@sXy1_yZT4xR^W)pO?SpTQPB5C|Ej z1jv>$(udX!;5T$w7#KWgL?|#MAbY2P=|#)gHBq7mZCwvaP|H$Rs*PyM`dBzt^E#Sa z-e26v^#14^VJBmI6X|^VmQV_5*Jsitx2sg1Yec`c)ldsU*UI^=uI zs(kQY%+vP>-kgrUL(gr!rYQOLMM2n<5}oIzr{XI#3Q-j-U;R_tC`t#N)zq7p1&ZI2 zXbbO07yH)SRxR+aD&|Itg)4ccaE-Ygd%eqPXuTn1OsJs0eV2G6Yx<6LOsX=!vN)lm zKo$A!8uEII?~!kK8`-Nv+5`2E_AHzLreK?*SK^r70zRks7uW(7x>m}zJY%u%fIwOo zAS`fzp?p(BT<#{jt?7X|gP%&CW`UfjGP_1jbaD4P2xuJNu>tGm@WcY;jPK34`PwR} z@f;<^vT?W9Nple1zofZP@9uulMrRv6tHniOLI`hZHTvnIlxqy(BK61k+x(wL$c5Z~ zR4uqJ8aMRrh{Q;ZC-g1zepMTPMle|NQ=oP-ic7PmY4=s(D3OX<&$rZpE71)N{v7~> z54W|567iXw32wZ{j^p6tb%K2a>%s<@<(d*hRb8OI`T0u2=9@}iO#?dt!$;14+meG!Ab7#r_ zC48`mjzxH)tQGA=tsg75lplyo#nZG-5Xf?%EY01SBKLg7dnxJ`7EVIx2O|<1T7ThQ zY&z`d{JT$$F5)qLZ%9TPrW6ywHzTEF4P-OAB4tv`Nu9y%L;RkNTk1Lumkm5d1mcc` zG<*Zpu@HcfZx6RWMN$_5CKoqF&Eh^ zS+PUCgIXy;#|vIrVjw4gr#ODtS5&vrSfrxI7)SE{5MgQU-96Z6X!qz5vcj$Eb-J=~ zm!*jx(pG0#zj*e-k6SVM0hcph;rzR*fVVO-o=7MU`%7)4Ya2Y7Qn9eRq-X92GjmU} zeV{a4b}QHibahsHy&oq|;HSw>k;ItL-s$po!MM+&=;{B)4t94`F<0oSA$%t8jEIak zj(@n}xKrb`1k?wXGZomW)) z`*e29(e0^vW?+Zn==JqyKOl0D_%GII`_$z}xFfz=zG4Mi{Js>c&JLAnE3xT{-X}xE z+ClA-At#L;J=tq5&pVWuE>wkfjP2&@ejZ?!V+_`QOCCzMYsu~GC6k7FuRxWYj?Y#3 zkprC%+Xqv_IEzPyGGey=t5StiiaA-OcbaREnflztujz`(VkiT$!j%nUU04{@kklf9|GV{yA#x-S%9CV;0vW?UB`X$hxJ6Ydb$tCP)(FG`Rm-#vygol+?|>m+8e8+?^lY1MK#C zw{Plw*GhR}v^^g2Ey#a!;xS6W^|}BG%q0r9#W6x%>sL+*lLSKvkE~0I7$eMMpIlUx z65(-M{aN<(N>?G50*Zt#6Qs&*L9&c6CF$A){IiQX%#%Jv3=oi>(U<4dy5R{-3v zeAvZXh?Q1I0u%TAB|_o{G=p9t`IGad`jodVi$~-|t&4`oeW^zy<5VWxi>#4|``R`l zHhZZSE5$l9ydOs)rEB=}USF<8_0*M>sMd)+E_XYO+UM@emW#Vj^RNUY2&#t$iMa|l zUw8{(IwskfS6U19JI1xdb^9((KCi!!G55LPYDLUZW z{L9&MonB!?YwwIpJp|YG688KkRs14uRll5>sPAxfxXNs$LaN*4)ynWo>Y`6-d4hLi zandY{zMAPeO{S;uBNaTi+pbo=aA${Q@{`dN7)cLu5fdX5%inp{UxPF#+BA)dn^LZBcwA60#$R6!K>m~g8S4yjC4k>+Pv;_P&Rwe(Q~F}Wcb zRtwj_C0Ny5o$<1F^OYs-_J(hTve#n^#hF*pE8qArMO=@2ZVgA_JKvF?p#6ZyU-TL4 zBbr4ELm%_9hc)!5@*x>5t7BhU(J-ByFFz+?Nx!yt>hB$R+p5T@-IgGpN(M zFkt>0Y#I?amnf^AIX`d3F2dPHp!l1%+`#F`Ic5N7%f}~Oy1xVeo0X5?nQCP*fT<=7 z=wbU;cxM+g2X}z8*+Cw3hOu?@pcMp3i1l9y*#VaZ08{6`(*dV%Abm-GYO;+13iAs1 zF#U~=n24Crq@c2$vE=Mk}eD}ot7iG8EjRj`@Xjj|eKdQ6 zQCAJ$uTw0n{9lqRVS^Ny^n|6yn0`b*QhjWVl!G{cIr{C6TH{yu$J8f~;FBr$X%eME; z2zMy^JSGP}9*Y2rtAfwyfbxPKhQG-SS}Fl7?ChP+H6IP7n3I{2i0W{ zy4c$qxd2RzY-~K)QJGPXEPOoqCOyf8fq{ePfp;1h9A)F@rivg?^kIE8N)#9ve_|LI zj`Kke(nb>?ua@?9Y?rw>fq~EDz^I`7QSxLsY9`XNCvA1yT9dq_fKT`KO9(!LU_}{Q zZf^D{V70n%t|^iXjt!?tLMb1)1mB%bUv`q~_AME{QZ_}>()SOue!cbqWD56&)PF0+ zLfCa$((!}!SyGwQ%~JgkZlN879 zqLfQpa_(a`YPlzll<>Rdjm08Aw%e_>OBk_T^Qc_u*I9@x zB`%r1(_4&Ycg^F;hhm}@B)wNC)@tu4oVynr_7b0yuMtNm8O4hy(!$F zE`+6yw_!*IOp%oHGxkowY-is}S4?*+;Pj4OLD|PZNX4C7xRBSxd$r{y0`LvMWk>8l zGV8_sRxr!>oraeF=GBo`7xw)>sVw2GyJ0a>d}wODJ7grkA#h!0zM8sKc$rBMqs13- z4zZ}0B2Um5z$slsmI9l&9l(9HqF1zsFx8|l#_U9($B?-X-YR$I<%f^# zZlkf{6q84ka|=9A>a=mXowxjIt$%v<9@%QiVv3G#*VMYh*JrES4`7?#X~RV};6y{p5i zK^Cl}b^VA{-#svH4m;I0Jee1LH&?tGy%yvmoLx(0dkNQ(WZTQty<)j?$10rvO(DnY 
z9jeY^eZ5+*;umhkUf(4KLbeg|nH@B2ho4|P+)(RRq$qOP6q|@8)lnxF^d-L0-UEVa=jDLSGBqRaC%SqyOSB zn^2zk-IO-&`_p>z=Uq#%y2^UgVd(IZoCP7|Ip~27EC|1Ab}5I!!BMs_>aq z-5H@xj*RETu^ORdChCHls+Rr{3Lh94a4f>B#wQ+|dKQQ;QW3=H(R@WutX=U=mc|EA zI3~e{R#C5}-<#Nd@FS6;y@XlK-h89LgD*Pcc_s~MYr%`}M^ts>zr zXn8gA&7DhB@$t|4a~bB-X+Cl@bA-{O2=OOpv~zZ**fZTFlN{F45Et+ceZAH_I2PwQXUG znTP5aUd3K@j$87=AN;|Z6+ZMf>OI%)UBf5%{=(V2HR+RsF{=y`0OPxt+DTq*j+boW zKE?F(ZG&O04}xhlzZsr2)O;nd{fSCMNtX)pLW_c~EZp|RG-`=LWv-_rA6eP@o&`#v zF6M7e8FERZ(dS&Q=s>3@lYjY+jpYg}3-ii!t*Mq`diiXOzZqARk=87_oe$<4QshbT z(!gncWOk~2%58RktA=Hmtvfh97-5XC9Z&HhJ|_Fz>|SZ!ei&kqsT%EK(yXstBSibl zqKv28eE5?J>W4`I{^o^g>##?NV|;j4Thpp=E3`p+5>hM=iP&~$u?2cX$~5>7<&Md@c>WLR)zwX@u13 znj5g)Jndj?bWN(kezX=|LM4;_5lt2MSR_7Nk15|eOBH?R%1bMw$y{3X!Y*o80 zE*tporZIK5Rd!IK`k_i?HB7M>Okd4HFZQpy7^_AQrxy0A$s`cj3+C2$@o;nL#e7A; z#zi_+^}(j1x=UHulmTx#9p-rkx$3TtyDbi;OQp(l#7?6o+i9Z|6;+K(y!`)}Ev(XI2-p7U8!TRn0@+LU*Sc*B@0eaYC*DKN86u*Xg#f^C%Qb-8t$WOSs`4e18-HZ}PG81H!Ca1jiBal=0vzgxaK#ga@iq-+=^zAXn|e|ZC4JOR&tP{nLVnzYs+w-yhN!i5kA@ycuXCd?bUtgy7oM9M_YL{7wyiJX(N5{ZbF6a9&=BpGnS=3Gg#E91>um>)NG zR;^_xnc31s9rtezS=3+P&=$ndvdS(|3+#A@rp;=h;clR0GekQzWDMxvUX^COX-o%@c;Z0|TczmONdkUi>0 zriA`XF7N`(QQk|Cn4~Cdc)gY!s1?vda++p-YvXqt;H_7-{wWnrb0>RSN_J2VO9`-b zuy=By6f?82VWqqU{FieQ?Zd7E=$9O)to1>p{+eY&uJx&)M1V$wUgm?KPm$#R>N5ZM z8VB0e(2DxK!aqm(;3fEROv-bV@5TYKxIYK^Xv~tiqU7BGdSwFeVg3hX8zW~Ib0dI_ z8Mwo7hJBwbC@U&5OR=%{BH?Vb7hJ#Yg^-}GZm2KiBjRl(?Nu0?AS$Hgn^ZF#5iOzgZ3QdoTy`*!S(-e9RuVGkaY~|bHqJX z!Z<_N|5(C^Q?r8VE#pEn0IZr=x*CHk7X*(b&+k(o$09ow%iZ3|+T6z8{ddRt98|DJ zZ0cAkr;d$&L<73946tB#%$0w0c!UZr+pv2Y+1eb`K`2X3?N|oIa49r$Vh6eZ3L_~G$~y_BhFN!RFw4RX9z ze=u(NIP1A2v;Vq})6_VJk40FkN2SqvV|ZOAVNSD!QJSk-}l#)~G(P-mfZ0~$m z1p8di+Z+G=-PvZ{jSYl^_Rx0Xf{l$`U2R|_?QP#-D$~{NK;xJXyv*_^cF*?c2u z);rn|rl)5X7oikWkX?|oxEnH%rUC~KkjcXrbaxh1{s%iQs0wyQG60^BjsrpW6Apl& zL*{{FodXNjCI}WX$AIo9oB|7-Xn^ofDd7EIO*D=h0lJ@Xine}d8|Rqebn8O*6HWz& zNIA}dKe95cq@s}^&<}%w4|Kiu9Qb2-#~J26%sWCA|Ql(}ddI)83Ph(^EU>i{HZ4gM3hox5=UpNYWAkHCE59mJ1DV(3kKh9wL ze>ndD$qbo)Ku@lx$b2jbIYZ)qPC~##$Qa0l-ff|pn_*?4wJHJl5%(n;K|mU(M97w{C3BW)gUZdlt$ zzHX+sQ@lx&js0l7wVvbGqc&ur#u$7i{6#c+bhms~@>VG&M0BM};rV-XEG%sDgM>VsOnZK<7|4#dLQuV`Rd|X*%>ET^wO1re_iY(R=O`OZwN}*J{VS$md z@`fyKi-_H?pWZXj&)XzO$L>j{tSOd|kX+yjLS|Ma`wCZrk|52MbC2Sx))Z_B(iofg zqe=E#?NS|vvhTBI$Fe7S%bn;t67ijQyrU*Lr9IRMF@1wqWJu>M&=NGmgrDfR%zeLb z6>H-0w|n$VMp{$!7m@>!6eFYno;z-J4L>}Co`0Ub<=cAkeIJRRXU+#wzz#DTm35Ge zX=jslehvZwJDKJR+@m)Vp89@45{8+mpY9SjTCf;%o6(zwB^EI4%2>4AnNLZ{B3a{U ze#cu8=@v!YNXTO7y)Rq7D^0IDBYY$0g}6vgFhZfy3Pzp=r#S~#lSd7UIIAALM0BS& zjDgNqN4Y-3=Nq~X!fn&th$(}P^$~O0_nlr$v^#1#Ur)Y-czw?~KPxx1$}+at-DSer zX?Em$mqB$I!27GF5{KoNOgS;VY%Jda`8ySlu6-+#9e%PYi)r&jFxT?gRMrJ6{kOTR zJP3;QNInp~L2sZp9p(hU52tw3|6f-wtOa=~2N4FQj2Q+-1S&H+a8XFmBX@xjm~WVQ!Wp zPH9e;9N0hyP^g*j(Q&?adkLK)!yxma&)u%2(YWrHANnmcN|9M_3|AWoyk61*(6RXf zMy9?Buq>BpYToUh^3DF-jUuvri)i;=>3BdhrD5sa7%BMKyVeT68+W7akP%8>+ms<0)U% zx)V~o9*f$ge&0rmL7Xk+$)_eb>0mfwS)@DoJUsmQJ4@jocAr>Zr`a=cr>jlMu@2H# z^V2B)aDO0Kdt_p4khBT8>Y?+Dn_J(c`(+Ew{P7qzl*q@qH{MlHC#F!$g&E~HnIcUF z!)1Qi?2GicdjmNpF?nIoh3|@Mt3WnPay*%wP-2M`tj5A1y>+DNr)%p9L*|c#p1qlm zDIs0qunI}Jk2hM@?C{FH=<~~67QCMfbU~U0%38hUxJoP8MH(xpO2v$y*ll^3leG$n z9~kt_AUEjob#Rcgy9bdBg%F`n3v8=gJ%m==Y4QGFD;d=MlFS(05DbBnp4xLud5wnwXbF>x; zCKPU!G4$6#yDu(N8cyIaPSb-+JU2*PSs6U-VBJeKgp)WuX#Rw4c8%|a-8-InURb>G zQJadJxA-D3LSFP6xtajKUfi6}9vvtiLA_jpdaI^_n7IkZ!CK}Wn|oF6m|a8`?UPwO z*CKnG@f7SDB%9gC!I3g8L-%b(2+ZM#jz2Zr+?)eLTcKg<*SH{?i z2pLK5%1nC`F>WI|}9>a_@*^e$dtS6*m zsUysv@EATwlj#_Zs?9LYb}CX~!Z~ukLox|%Y#_jDYe%QHfVF+$M{$|Ss$Re?b+i?$ zJdHS5zJ~f9Q#qJUkpPh4X zxT1wqKAy8*qyR&vyoaUaSdSE|D7ivr*N{o3$w$|xP3<07F0p$a1WWk5W-n~6f4q_$ 
zz-k>gmrP`Wu1f{iX@Z%EVSm@rmyzlnmm%WUi=><}2!`?^>{sum1Ge|+zl8UR zQ~fJNa164ATcv$x#bVm>Bz?KxF|1`M$}%v1sv+daUSi+J?d}1x!O1sUjKYuC_KW0OW2hc-EVwQLI*XyEg?JZZh@ZY`JI8*GS%~Pr4V* zB!=Fd&pU^!uBaOQ%~T?F=DtG<{woK`@tig(tv8eW{PWa;E28Z_Z!Of8U&cL`?kCJP zE>yp#q2*^szH>i~rccI<1HEiWb2!dyGb2QekvuP%*s8lK302<(PPjS12fA5uVtzcgSrSKxWqL|s0is!Q zXR9=e`(P0dO+^w7cT1eZ@GM` zkOIhAxGWbX683UE>R~EgzR&(>r&n3mn^9d|H#0we&}Fk&hak|mF;c~9@mKiMpPkscPR4ls^@NP+8_%O}?~M^1w0XQ^C=?j83L z=p>Q|5XT6PfIy%AtdS69Pj!|EmA7_doY-|=St>iIxDCz+Pf z?!gU~?%Q~~@rj~puV)CY- zBlrE1W-Nek7(=hZ&qgjZr?6=oz3v#(%MURm;*(mcZ_XxGppAI7^zeC+rhN3a3Eru# z91@WbQu@*jhSS<{X@R7(MZtwCk6R!*0kA9y{-j%H*#=%2D|a+ zYDZ<7Pwxc?fI$oN9EPkT&pC|AzgO7hz=Hy(ge-8&7WA_}En9Jb8APx4_<%X=)gDkf zrH%?3K4+9?!Q}xEB4i!)uvdEuB6JBA1PywQjm-b6CDh}phc+^_>VH>H&r$yA@$ax# zdy4X*dbR(A3|W#q?1P?z3{{Oh!@7{wNRUQUEn(&8fiBMid=Bed=g@#GM4mz7un-Aq z1~VnK7aTN$vGsib%JNp;NGLzWf~yPBu4aiXet1`q$kRYj22A*+gq zUCmR(JyuXWL)d>VD4y=L&=c-Kf_6|u1W`LT^aubCr*f1&c6tN!9cJL9dss_4kmLXe z^kxA1`e)X?EPv{(s}?DO+Q5l@bL)C@o*jwK#aevbk6iJC_et`l+Ovt?z*TSFOKYNK zoMkmf_FV5~-lE+c-PEtO*E~^wVHsXNl+afGGr@&q zVUe6SD;rIMP5Q}JOzXI4iFzuFDzJl=vF{UW+QMjr`Qqw`Mmoo2o&e><{}@?*df~wSqWOH6-ZaYz0wZ16lyQ_x%hn0=&MQH@T@m5TjMwmxUnBnSE=~vl|T$ed6bLdUA zy>8PSdaXIsCR#;XQ4PWa5k<0;6TE^r3J8IEX0aYXpHc=kwGYdQrwe=JV-V7wUQRqibUh_OYhb%u zaEC@p4Au`*4uIJA4xyDwOPi5SQF|#iM~7gjGG4+^M?$#`X=GHm?^Sk%%1_ZeX-p{@ z%x~HJQtOiwWp1@7qY`n8RiyBrF&NC3uMWz5yK4BFF@UMijirk1Ml`<);f2c5eUv^D zx6&tDd=gKmhungVV$+;H3VgQYeH5nhN<5BZntp&{uP)H}K^!MdGkv^3E&|(R{!F6^Q@tG$x2e8pUv;zc9It; zYZJWobw1WJq3~mO*V1s3w~(@1D)N-FWsc&;Q^HEb0j}&{XbxBi-#(6`_t@u4Jj+O0 zyQf&}SQ8mJX{K5tx4*B6+(2h`Gd+$jFE;~Fn7F+~arf1wQ0CvW1A4_Ga6#jG+pd;=B>)RKPp@6~ugtrRGQ^VUpypnwiCwA-9 z4n9vBF2?%`>nlvkG`8C5{Bi@EcMGSRZk57cdoG8z^X+R=Pi&D=d)l>E!!5qQ*1fmz zjEE4oYB&{{^$C$l>Dchz-%x)WaWPC?TC6=g|D(SaRn==+_ZLoVw5#G zGQXn1MiO&&X~00}<$^fzc*n1Y<<;Zj3bNriAmdCM1;ZjikwX1obLe_uTEMOf1};YvY8w}?-&*2vWQxuFwzWl z@wNV{#k)fBc4B9jcn79O|^0e5T zls`G>r~gg>9Nh*!re2G7!cSEk2^!>Ooym}Im<){92O}*4!&${5oB}K!JL%T(c!W+M zRQCI(@B8*hoXhqlbUJZUDA@AMl>OrQ?8RBg zMvTC*yCM!f{Kn>tT4_g0$ zU!&TFnDWC2&S;&AT8#yNL3-$!Iq>V^u z+?sVI;Te>}CA5t!$?z?gj2x!g()egJ*+_bHlSQvKt7tz{^0t^x!|C!Bal64wO?zF` zO_Y~L*QsT1xQ%JStQYmN60a!AvR43)_|uy=yE@3eggLx@`AK*nW=+EW0)@!Ncti2U zwSEN)zKsX%*YbXm+CQWW+nz{2v9K4Zr^jl;c;^jL77Vgc#i!D|CC7h4{C#qgj7(T z7k3UibPzm~3|L9Wg&=_vq7Ha((^*3Nj(`6VpArfmoAf+wAlNdbKZ=E&6L zepdfL1#)NVc|LulHORQf2xS~hvcF3O6=nZHI_Q%vkkUE*12xD1`wzsR>$Z@@Sb%5W z&T@zovGq(!M@oc@t<+FTB)N1prDI|BFLaKd(ZGdp!BvqIXgY_dKY&gEI;U~Y_&<(I z28FwLD(=ylje`htCIzrtpF9V0WQ%M+K?*1hoqs9>C=TdU$f@FhiJd$M0ZFWY@;I?S z$0Gh7C{ux>6PUjpz919bRS^v;NMumH2u`d(`)p5T7wkR9L7|h;jFX_Jgy1ttfHeXl z0U3g!lhLiek${fFXVN-d9Oz_}j^>1ppvCzejQ=AM=w#IA??fOX@|mQLbO(Jz;K-ZG z{U-jE)UmL9Ceb4$LWX7N3^;)HuSx{V2Nj#oBzYt!^xl6+IY0d?Nyq^G4-_E-G!sbGeX0W=05zP>!8^J4=?{Cka$=-~niwzeoh{mlIq^(N;PC=-1;{xOdgb!@97@Ng!+)RynmGQvpc7|Den^Rq z66Z5P!8SR$zvIXz*?uohf*#CDkkJIR+v7iscIY{_N8kk4kaN$!4tVHo6JQBIYRdMb z1D;AfnA{FR>w#Jz2^@y5(q(%tJr%&!@vH^C#lWp}Q zHnRN?Zk)mf^c)LnvY%tXlP&f?_@EAo)*t-e?Z*R)fg0|oB7%MHWV;=V2%fysk0C>G*}rYJFb721v$f(=x=5GX7<1c(DpA!*yRU!nLCTO3gT_ClWqP0= wP%rjWB4GWT?8HLqC;RV2AmlpUmj$WtH0lVzZ4WRoQoz4qUr;sWm99YGM%T!O~ zo>~|J0OOr`JSg>GnzvQ(1nt;bD29M<3@iNTw3roPg2)&`9Xx(~oet~6( zNT`KZkF%f&3`??kT{>IfQ;Zmn8UUyQQtWAhYDN)4x9<0RMXKCqhic3##GK@hYR=ou zCcUd1@M4{JZCJfP4RRJ8wq)~QRv?Sk9ULa8)xr_rOC{THC>AaXUW6`L`JznO$9esx zgm@|-7e<5$HeyS~rIkz3QlkipvC<@dG+4yaqqRmZGyu2Uf?a!yjxI!xyg^YI#)$*t zJyTeNcIp9O4lH0=B13{j-isktVoN1500a9^<(9hY@A#tbQI)f0LOF;#7Inb4E$(%B zG1ZO|-Vr?C0_w*&X+)(SI`n6*}pLcm*dyN`UGREW_2-@VE(IcohF3qda`J-E3 z_|w+7MX6&~NVCaJiI1jBHHvA}QVQ?9gdwedA9A}{t)^mgV+89;+2N|UCZxQVlol^x 
z#7$`#Yj-5_d?mXeGhfU6l%LNQ^)Ii?yeMiv^SHd&n{s_!yZa}nId+%1^)I+1dBF|w zk#?ao4UH1rPR3eq{nthJkz?WZrEb@y)nfQ6ykDANm&Lo5`f^#(u-#Dd)}>RXxsqF{ zUmtSfrJK(F*m9@chjKTdVCQM_*$8UyR0s98t$%S&WMr@7$zj@WeU8=JO+KFTYp$&O zg>pl?shi`-6;YENQ>-#5ral34%oy-nDX&>o$mrJS|D|rZS)6F~-!l0N|K3|k8D&X@!Oe{Wc4Ma(VR;Jdl z9`Q!YiQ^BSBK@$*(g$rF{9biV;iTnSc#Pprh66(h(?5HAd&DF#I-^O+JAD zIOxYP!ZPw8$~1<-MlO=!4!+}DMUBY}8V%6#w*z3;85}L4QDoU*3eK9dicV&HgsNB- z$*qJgvYhdctd&?;b)rPWT#SeobyyQ>F$cIPjEzjF!{uQ%b6XcMD|bye8fw2VT{2OY z^380^G!=BE<>#ku+8`$kS87$lv;=IjR1?6iKswGwJ5jKqP{2m4v>_BY>Jj<_V}j}M z0!K@2G^kJ;z>C3_#4+z%*4+Y4?;A+U&jABF4J_Z<4Nyy6eY-{TZTPP z;y<{`HP=g$%@e$}D4fN3cQV5tMdh9a>B{k7Q@F*#?8$WBEu8K+8nnwmIe39^LxG$^ z{54ReaAGAJ={JBE6Aj^eH*FZ0r0Yds>i-@D<>ZzaLP4Sxia!mi@RBHovFhv<(iC-c zMp@1c##L-q9?{^0(yTPcvXL2M_#k%ioYg}vL*GC_3IO_GH0ZbqZ0A}l8L*L8CQuy9 jfR3?xbK5}}R1_WCUdmLeO-&uY5Ej^Pf(k&VDgN(YA4bj- From 2b358c14d3e65e986d056088f090f0efc39e04c6 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 12:51:50 +0000 Subject: [PATCH 02/16] Added code example to 'A double-edged sword' --- _episodes/03-fixtures.md | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/_episodes/03-fixtures.md b/_episodes/03-fixtures.md index 1d4658f..abdf22e 100644 --- a/_episodes/03-fixtures.md +++ b/_episodes/03-fixtures.md @@ -414,7 +414,8 @@ being done once. > behaviour of the tests, and pytest prioritises correctness of the tests over > their performance. > -> What sort of behavior would functions have that failed in this way? +> What sort of behavior would functions have that failed in this way? Can you +> come up with example code for this? > >> ## Solution >> @@ -425,6 +426,19 @@ being done once. >> >> Fixtures should only be re-used within groups of tests that do not mutate >> them. +>> +>> ~~~ +>> @pytest.fixture(scope="session") +>> def initially_empty_list(): +>> return [] +>> +>> +>> @pytest.mark.parametrize("letter", ["a", "b", "c"]) +>> def test_append_letter(initially_empty_list, letter): +>> initially_empty_list.append(letter) +>> assert initially_empty_list == [letter] +>> ~~~ +>> {:. language-python} > {: .solution} {: .challenge} From c661c1c636a9bfeabc65bf1c845af6a76c96cf51 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 13:02:12 +0000 Subject: [PATCH 03/16] Added item about confidence to final exercise --- _episodes/08-exercise.md | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/_episodes/08-exercise.md b/_episodes/08-exercise.md index 691242e..3865834 100644 --- a/_episodes/08-exercise.md +++ b/_episodes/08-exercise.md @@ -34,10 +34,16 @@ the different bacteria are. It already has tests written for most functions. > the badges. > 4. Create a virtual environment on your computer for the project, and install > the project's requirements, so you can run the test suite locally. -> 5. Currently, some of the tests for the repository fail. Work out why this is +> 5. The current code is very outdated by now and you will see in a moment that +> it does not work with a standard contemporary python installation anymore. +> Assuming for a moment the tests would not exist, how would you feel about +> the task of updating the code to run _correctly_ on a modern machine? Where +> would you start? How confident would you feel that each and every line of +> code works as intended? +> 6. Now we turn to the tests. Some of them fail currently. Work out why this is > happening, and fix the issues. Check that they are fixed in the CI workflow > as well. -> 6. 
Currently, the code is only tested for Python versions up to 3.6. Since +> 7. Currently, the code is only tested for Python versions up to 3.6. Since > Python has moved on now, add 3.7, 3.8 and 3.9 as targets for the CI. Do the > tests pass now? If not, identify what has caused them to fail, and fix the > issues you identify. This is an important reason for having a test suite: @@ -45,12 +51,12 @@ the different bacteria are. It already has tests written for most functions. > Without a test suite, you don't know whether this has happened until > someone points out that your new results don't match your older ones! > Having CI set up allows easy testing of multiple different versions. -> 7. Currently the code is being tested against Ubuntu 18.04 (released April 2018). +> 8. Currently the code is being tested against Ubuntu 18.04 (released April 2018). > A new long term support release of Ubuntu came out in April 2020 (version 20.04). > Upgrade the operating system being tested from Ubuntu 18.04 to Ubuntu 20.04. > As with upgrading Python, the test suite helps us check that the code still > runs on a newer operating system. -> 8. Upgrade to the most recent version of Pandas. Again, see if this breaks +> 9. Upgrade to the most recent version of Pandas. Again, see if this breaks > anything. If it does, then fix the issues, and ensure that the test suite > passes again. > From 7d1dc29b2d696878c5dd3d30d6d8fa05f6882e97 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 13:03:09 +0000 Subject: [PATCH 04/16] Changed comparsions with None --- _episodes/04-edges.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/_episodes/04-edges.md b/_episodes/04-edges.md index 7b4228c..f19270b 100644 --- a/_episodes/04-edges.md +++ b/_episodes/04-edges.md @@ -114,7 +114,7 @@ def test_left_edge(): assert c.neighbours() == 3 # Check the coordinates of the neighbours. - assert c.left() == None + assert c.left() is None assert c.right() == (1, 2) assert c.up() == (0, 3) assert c.down() == (0, 1) @@ -146,10 +146,10 @@ def test_bottom_left_corner(): assert c.neighbours() == 2 # Check the coordinates of the neighbours. 
-    assert c.left() == None
+    assert c.left() is None
     assert c.right() == (1, 0)
     assert c.up() == (0, 1)
-    assert c.down() == None
+    assert c.down() is None
 ~~~
 {: .language-python}

From 12ca547b0ad8234f7718974617c008c3808d8c13 Mon Sep 17 00:00:00 2001
From: Julian Lenz
Date: Wed, 15 Mar 2023 14:20:27 +0000
Subject: [PATCH 05/16] Added pitch for ids kwarg

---
 _episodes/02-pytest-functionality.md | 63 ++++++++++++++++++++++++----
 1 file changed, 57 insertions(+), 6 deletions(-)

diff --git a/_episodes/02-pytest-functionality.md b/_episodes/02-pytest-functionality.md
index af90e2c..1780d71 100644
--- a/_episodes/02-pytest-functionality.md
+++ b/_episodes/02-pytest-functionality.md
@@ -36,7 +36,7 @@ Lets add a second test to check a different set of inputs and outputs to the
 ~~~
 from arrays import add_arrays
 
-def test_add_arrays1():
+def test_add_arrays_positive():
     a = [1, 2, 3]
     b = [4, 5, 6]
     expect = [5, 7, 9]
@@ -45,7 +45,7 @@ def test_add_arrays1():
     assert output == expect
 
 
-def test_add_arrays2():
+def test_add_arrays_negative():
     a = [-1, -5, -3]
     b = [-4, -3, 0]
     expect = [-5, -8, -3]
@@ -73,8 +73,8 @@ rootdir: /home/matt/projects/courses/software_engineering_best_practices
 plugins: requests-mock-1.8.0
 collected 2 items
 
-test_arrays.py::test_add_arrays1 PASSED [ 50%]
-test_arrays.py::test_add_arrays2 PASSED [100%]
+test_arrays.py::test_add_arrays_positive PASSED [ 50%]
+test_arrays.py::test_add_arrays_negative PASSED [100%]
 
 ==================== 2 passed in 0.07s =====================
 ~~~
@@ -166,6 +166,57 @@ test_arrays.py::test_add_arrays[a1-b1-expect1] PASSED [100%]
 We see that both tests have the same name (`test_arrays.py::test_add_arrays`)
 but each parametrization is differentiated with some square brackets.
 
+Unfortunately, in the current form this differentiation is not very helpful. If
+you run this test later, you might not remember what `a0-b0-expect0` means, let
+alone the precise numbers or the motivation for choosing them. Were those the
+positive inputs or the negative ones? Did I choose them after fixing a
+particular bug, because they are an important use case, or were they just
+random numbers?
+
+Luckily, we are not the first ones to realise that the above form of
+parametrization misses the expressiveness of explicit function names. That's why
+there is an additional `ids` keyword argument: the following code
+
+~~~
+import pytest
+
+from arrays import add_arrays
+
+@pytest.mark.parametrize("a, b, expect", [
+    ([1, 2, 3], [4, 5, 6], [5, 7, 9]),
+    ([-1, -5, -3], [-4, -3, 0], [-5, -8, -3]),
+    ],
+    ids=["positive", "negative"])
+def test_add_arrays(a, b, expect):
+    output = add_arrays(a, b)
+
+    assert output == expect
+~~~
+{: .language-python}
+
+now results in the significantly more expressive
+
+~~~
+=================== test session starts ====================
+platform linux -- Python 3.8.5, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
+cachedir: .pytest_cache
+rootdir: /home/matt/projects/courses/software_engineering_best_practices
+plugins: requests-mock-1.8.0
+collected 2 items
+
+test_arrays.py::test_add_arrays[positive] PASSED [ 50%]
+test_arrays.py::test_add_arrays[negative] PASSED [100%]
+
+==================== 2 passed in 0.03s =====================
+~~~
+{: .output}
+
+If the arguments are more readily represented as strings than the lists in our
+example here, `pytest` often does a reasonably good job of generating `ids`
+automatically from the values (we will see some examples of this in the next
+section). But this still lacks the intentional communication that is associated
+with manually chosen `ids`, so we strongly recommend using `ids` in all but the
+most trivial cases.
 
 > ## More parameters
 > 
@@ -185,6 +236,7 @@ but each parametrization is differentiated with some square brackets.
 >> ([-1, -5, -3], [-4, -3, 0], [-5, -8, -3]), # Test zeros
 >> ([41, 0, 3], [4, 76, 32], [45, 76, 35]), # Test larger numbers
 >> ([], [], []), # Test empty lists
->> ])
+>> ],
+>> ids=["positive", "negative", "larger numbers", "empty lists"])
 >> def test_add_arrays(a, b, expect):
 >> output = add_arrays(a, b)
@@ -195,7 +247,6 @@ but each parametrization is differentiated with some square brackets.
 > {: .solution}
 {: .challenge}
 
-
 ## Failing correctly
 
 The interface of a function is made up of the _parameters_ it expects and the

From 4a67a8c599ecd77e0fbee7f07978539aad601f9e Mon Sep 17 00:00:00 2001
From: Julian Lenz
Date: Wed, 15 Mar 2023 14:37:38 +0000
Subject: [PATCH 06/16] Add missing ids

---
 _episodes/02-pytest-functionality.md | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/_episodes/02-pytest-functionality.md b/_episodes/02-pytest-functionality.md
index 1780d71..7f7a238 100644
--- a/_episodes/02-pytest-functionality.md
+++ b/_episodes/02-pytest-functionality.md
@@ -331,6 +331,7 @@ from arrays import add_arrays
 @pytest.mark.parametrize("a, b, expect", [
     ([1, 2, 3], [4, 5, 6], [5, 7, 9]),
     ([-1, -5, -3], [-4, -3, 0], [-5, -8, -3]),
-])
+],
+    ids=["positive", "negative"])
 def test_add_arrays(a, b, expect):
     output = add_arrays(a, b)
@@ -358,8 +359,8 @@ rootdir: /home/matt/projects/courses/software_engineering_best_practices
 plugins: requests-mock-1.8.0
 collected 3 items
 
-test_arrays.py::test_add_arrays[a0-b0-expect0] PASSED [ 33%]
-test_arrays.py::test_add_arrays[a1-b1-expect1] PASSED [ 66%]
+test_arrays.py::test_add_arrays[positive] PASSED [ 33%]
+test_arrays.py::test_add_arrays[negative] PASSED [ 66%]
 test_arrays.py::test_add_arrays_error PASSED [100%]
 
 ==================== 3 passed in 0.03s =====================
@@ -377,6 +378,7 @@ test_arrays.py::test_add_arrays_error PASSED [100%]
 >> @pytest.mark.parametrize("a, b, expected_error", [
 >> ([1, 2, 3], [4, 5], ValueError),
 >> ([1, 2], [4, 5, 6], ValueError),
->> ])
+>> ],
+>> ids=["second shorter", "first shorter"])
 >> def test_add_arrays_error(a, b, expected_error):
 >> with pytest.raises(expected_error):
@@ -405,6 +407,7 @@ test_arrays.py::test_add_arrays_error PASSED [100%]
 >> ([6], [3], [2]), # Test single-element lists
 >> ([1, 2, 3], [4, 5, 6], [0.25, 0.4, 0.5]), # Test non-integers
 >> ([], [], []), # Test empty lists
->> ])
+>> ],
+>> ids=["int", "negative int", "single-element", "non-int", "empty lists"])
 >> def test_divide_arrays(a, b, expect):
 >> output = divide_arrays(a, b)
@@ -416,6 +419,7 @@ test_arrays.py::test_add_arrays_error PASSED [100%]
 >> ([1, 2, 3], [4, 5], ValueError),
 >> ([1, 2], [4, 5, 6], ValueError),
 >> ([1, 2, 3], [0, 1, 2], ZeroDivisionError),
->> ])
+>> ],
+>> ids=["second shorter", "first shorter", "zero division"])
 >> def test_divide_arrays_error(a, b, expected_error):
 >> with pytest.raises(expected_error):

From 18b64d85b2cb3eb207fdf6b3ba00988c14dcc82e Mon Sep 17 00:00:00 2001
From: Julian Lenz
Date: Wed, 15 Mar 2023 14:48:05 +0000
Subject: [PATCH 07/16] Amended wording at the end of Testing Randomness

---
 _episodes/05-randomness.md | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/_episodes/05-randomness.md b/_episodes/05-randomness.md
index 4e40863..a3eab52 100644
--- a/_episodes/05-randomness.md
+++ 
b/_episodes/05-randomness.md @@ -173,11 +173,14 @@ that it is relatively bug-free for the cases we've tested for. Of course, so far we've only tested 6-sided dice—we have no guarantee that it works for other numbers of sides, yet. -You can extend this approach to any programming problem where you don't know the -exact answer up front, including those that are random and those that are just -exploratory. Start by focusing on what you do know, and write tests for that. As -you understand more what the expected results are, you can expand the test -suite. +The important upshot of this approach is that despite the fact that we could not +predict the exact return value of our function, we were still able to test for +exactly known invariants and guarantees upheld by it. You can extend this +approach to any programming problem where the exact return value of a function +cannot be meaningfully tested for, including those that are random or out of +your control and those that are just exploratory. Start by focusing on what you +do know, and write tests for that. As you understand more what the expected +results are, you can expand the test suite. > ## Two six-sided dice > From 19052059ebdc709e586dfb82ff052efbc51e9401 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 15:02:06 +0000 Subject: [PATCH 08/16] Mention pre-commit --- _episodes/06-continuous-integration.md | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/_episodes/06-continuous-integration.md b/_episodes/06-continuous-integration.md index 67c7011..03e3033 100644 --- a/_episodes/06-continuous-integration.md +++ b/_episodes/06-continuous-integration.md @@ -338,6 +338,23 @@ next to the first commit, a green tick (passed) next to the second, and nothing > check all code against a defined house style (for example, PEP 8). {: .callout} +> ## pre-commit +> +> Another helpful developer tool somewhat related to CI is +> [pre-commit][pre-commit] (or more generally `git` hooks). They allow to +> perform certain actions locally when triggered by various `git` related events +> like before or after a commit, merge, push, etc. A standard use-case is +> running automated formatters or code linters before every commit/push but +> other things are possible, too, like updating a version number. One major +> difference with respect to CI is that each developer on your team has to +> manually install the hooks themselves and, thus, could choose to not do so. As +> opposed to a CI in a central repository, `git` hooks are therefore not capable +> of enforcing anything but are a pure convenience for the programmer while CI +> could be used to reject pushes or pull requests automatically. Furthermore, +> you are supposed to commit often and, hence, committing should be a fast and +> lightweight action. Therefore, the pre-commit developers explicitly discourage +> running expensive test suites as a pre-commit hook. 
+> {: .callout} > ## Try it yourself > @@ -366,3 +383,4 @@ next to the first commit, a green tick (passed) next to the second, and nothing [pypi]: https://pypi.org [starter-workflows]: https://github.com/actions/starter-workflows [yaml]: https://en.wikipedia.org/wiki/YAML +[pre-commit]: https://pre-commit.com From 88eb92f9ff8738eb08eb3a94dfe89bc4d98df4c4 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Wed, 15 Mar 2023 15:36:44 +0000 Subject: [PATCH 09/16] Added exercise Better ways to (unit) test --- _episodes/03-fixtures.md | 60 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 60 insertions(+) diff --git a/_episodes/03-fixtures.md b/_episodes/03-fixtures.md index abdf22e..0bd6962 100644 --- a/_episodes/03-fixtures.md +++ b/_episodes/03-fixtures.md @@ -442,4 +442,64 @@ being done once. > {: .solution} {: .challenge} +> ## Better ways to (unit) test +> +> The above example was explicitly constructed to acquire an expensive resource +> and exhibit a big advantage when using a fixture but is it actually a good +> way to test the `word_counts` function? Think about what the `word_counts` is +> supposed to do. Do you need a whole book to test this? +> +> List advantages and disadvantages of the above approach. Then, come up with +> another way of testing it that cures the disadvantages (maybe also loosing +> some of the advantages). Is your approach simpler and less error-prone? +> +> It is safe to assume that whenever to test such a function, it is supposed to +> be used in a larger project. Can you think of a test scenario where the +> original method is the best? +> +>> ## Solution +>> +>> The `word_counts` function is designed to count words in any string. It does +>> not need a whole book to test counting, so we could have also used tiny test +>> strings like `""`, `"hello world"`, `"hello, hello world"` to test all +>> functionality of `word_counts`. In fact, the original approach has a number +>> of disadvantages: +>> +>> * It is (time) expensive because it needs to download the book every time the +>> test suite is run. (2s for a test is a very long time if you want to run +>> that a test suite of hundreds of those every few minutes.) +>> * It is brittle regarding various aspects: +>> - If you don't have an internet connection, your test fails. +>> - If the URL changes, your test fails. +>> - If the content changes, your test fails (we had that a few times). +>> * It is very obscure because you cannot know if the numbers we have given you +>> are correct. Maybe the function has a bug that we don't know about because +>> admittedly we also just used the output of that function to generate our +>> test cases. +>> +>> The one big advantage of the above is that you are using realistic test data. +>> As opposed to the string `"hello world"`, the book likely contains a lot of +>> different words, potentially different capitalisation and spellings, +>> additional punctuation and maybe special characters that your function may or +>> may not handle correctly. You might need a lot of different test strings to +>> cover all these cases (and combinations thereof). +>> +>> The alternative approach with tiny test strings cures all of the above +>> listed disadvantages and the tests will be easy to read, understand and +>> verify particularly if you use expressive test function names and parameters +>> `ids`. This is the best way to write a unit test, i.e. 
a test that is
+>> concerned with this single unit of functionality in isolation and will likely
+>> be run hundreds of times during a coding session.
+>>
+>> Nevertheless, in a bigger project you would want to have other kinds of
+>> tests, too. The `word_counts` functionality will probably be integrated into
+>> a larger aspect of functionality, e.g., a statistical analysis of books. In
+>> such a case, it is equally important to test that the integration of the
+>> various individually tested units worked correctly. Such integration tests
+>> will be run less often than unit tests and might be more meaningful for more
+>> realistic circumstances. For such -- and definitely for the even broader
+>> end-to-end tests that run a whole program from the (simulated) user input to
+>> a final output -- the original approach is well-suited.
+{: .challenge}
+
 [urllib-request]: https://docs.python.org/3/library/urllib.request.html

From 9a490f8a846f48b711ddbbd5ba24b43a1eae5bd8 Mon Sep 17 00:00:00 2001
From: Julian Lenz
Date: Wed, 15 Mar 2023 15:40:34 +0000
Subject: [PATCH 10/16] Add missing .solution

---
 _episodes/03-fixtures.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/_episodes/03-fixtures.md b/_episodes/03-fixtures.md
index 0bd6962..89cc0c3 100644
--- a/_episodes/03-fixtures.md
+++ b/_episodes/03-fixtures.md
@@ -500,6 +500,7 @@ being done once.
 >> realistic circumstances. For such -- and definitely for the even broader
 >> end-to-end tests that run a whole program from the (simulated) user input to
 >> a final output -- the original approach is well-suited.
+> {: .solution}
 {: .challenge}
 
 [urllib-request]: https://docs.python.org/3/library/urllib.request.html

From f18abd8d86ebce476242fa884b9859a5afc0de74 Mon Sep 17 00:00:00 2001
From: Julian Lenz
Date: Wed, 15 Mar 2023 15:59:51 +0000
Subject: [PATCH 11/16] Mention coverage configuration

---
 _episodes/07-coverage.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/_episodes/07-coverage.md b/_episodes/07-coverage.md
index 57be482..30f6d6d 100644
--- a/_episodes/07-coverage.md
+++ b/_episodes/07-coverage.md
@@ -98,6 +98,8 @@ the consistency checks in the `__init__()` method of `Cell`, and methods such as
 methods) to have at least one test, so this test suite would benefit from being
 expanded.
 
+> ## How much coverage do I need?
+>
 > It's worth pointing out again that 100% coverage is not essential for a good
 > test suite. If the coverage is below 100%, then that indicates that it's worth
 > understanding where the uncovered lines of code are, and whether it is worth
@@ -116,6 +118,22 @@ expanded.
 > between projects.
 {: .callout}
 
+> ## Configuring `coverage`
+>
+> `coverage` and `pytest-cov` are configurable via an INI-style file called
+> `.coveragerc` by default. Various details about behaviour and output can be
+> adjusted there. Most notably, explicit exceptions can be defined that exclude
+> certain files, blocks or lines from the coverage report.
+>
+> This is useful in various situations; you can, e.g., exclude the test files
+> from the coverage report to reduce noise or change the command-line output.
+>
+> Another opinionated idea is to indeed aim for 100% code coverage but
+> explicitly exclude what you consider unimportant in your testing. While
+> opponents say that is just cheating, you had to make a conscious decision to
+> exclude a piece of code and explicitly document it in a file (ideally with a
+> comment explaining the decision).
+{: .callout} ## Coverage and continuous integration From 6987b703fd1f5ec0278ae748d0bf5ed942f29495 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Thu, 16 Mar 2023 10:50:56 +0000 Subject: [PATCH 12/16] First draft list of test purposes --- _episodes/08-exercise.md | 80 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) diff --git a/_episodes/08-exercise.md b/_episodes/08-exercise.md index 3865834..0e5e03b 100644 --- a/_episodes/08-exercise.md +++ b/_episodes/08-exercise.md @@ -15,6 +15,86 @@ Now we have developed a range of skills relating to testing and continuous integration, we can try putting them into practice on a real piece of research software. +But before we do so, let us quickly recap what we learnt today. + +## The purpose of a test + +When you write a test you do this to gain (and in the case of automated tests +maintain) confidence into the correctness of your code. But in detail your tests +can serve a variety of purposes. It can be useful to keep in mind what you could +use tests for during your coding, so we compiled a certainly non-exhaustive list +of test purposes here. The major ones were discussed in the lesson; some more +exotic ones should be seen as suggestions for you to try. A test can have more +than one of these purposes: + +* test for features - The first and simplest test that is capable of verifying + the correctness of any feature (in the broadest sense) of your code. This is + the obvious purpose of a test and encountered everywhere in the lesson. +* test for confidence (in a narrow sense) - Additional tests of the same + features that are redundant in that they repeat a test for features with + qualitatively similar input just to double-check. This is also encountered in + the lesson, e.g. in the solution of [More parameters][more_parameters] large + numbers aren't really that different from other numbers unless you run into + overflow errors (one could even argue that testing the negative numbers in + [pytest features][pytest_features] is not qualitatively different and rather + to double check). One should be aware of the difference between testing for + confidence and necessary feature tests. The former is a convenience that + comes at the cost of longer test runs and so it is not always desirable to + test redundantly (although certainly better than missing an aspect). +* test for edge-/corner-cases - Test special input or conditions for which a + general algorithm needs to be specialised (e.g., NaNs, infinities, overflows, + empty input, etc.). We did a [whole episode][edge_cases] on this. +* test for failures - This is part of feature testing but important enough to + mention explicitly: The conditions under which your code fails are part of + your interface and need to be tested. The user (that probably includes + yourself) might rely on a raised exception or returned default value in the + case of failure. Make sure they can and think of all the cases that your + current approach cannot handle. Any changes in these (even those for the + better) are changes of the interface and should appear intentionally. This was + discussed in [pytest features][pytest_features]. +* fuzzy testing - This is broader than testing for failures; if you have + unexperienced or even malicious users of your project, they might run your + code with inputs or under conditions that do not make any sense at all and are + almost impossible to predict. 
Fuzzy testing is a strategy where you let the + computer run your code with random input (sometimes down to the bit level) and + make sure that not even the most far-fetched input can break your code. There + are libraries for that, so you don't have to set up all the boilerplate + yourself. +* regression test - After you have found a bug, you can write a test reproducing + the precise conditions under which the bug appeared. Once you fixed it, your + test will work fine and if a later change risks introducing this bug again, you + can rest assured that it will be immediately signalled by a failing test. +* test as a reminder - In most contemporary test frameworks, you can mark a test + as an "expected failure". Such tests are run during your standard test runs + but the test framework will complain if they don't fail. This can be a + convenient way of marking a to-do or a known bug that you don't have time to + fix at the moment. It will preserve your precise intention, e.g., the precise + conditions of the bug in code form and it might be an important information if + a bug disappeared unexpectedly. Maybe another code change had an effect you + did not intend? +* test for fixing an external interface - You can even test code did not write + yourself. If you rely on a particular library, you don't have control over the + evolution of that library, so it can be a good idea to write a few test cases + that just use the interface of that library as you do it in your code. If they + ever change or deprecate something about that interface, you don't have to + chase down a rabbit hole of function calls to get the bottom of that but + instead have a (hopefully well-named) test that immediately signals where the + problem lies. +* test for learning an external interface - When you start using a new library, + you might play around with it for a while before using it in production just + to learn how it's used. Why not preserve this in automated tests? You have the + same effect as if you, e.g., wrote a script or used an interactive session but + you can come back and have a look at it again later. Also, you immediately fix + the external interface (see previous item). + +This is list certainly not something you want to implement as a whole. Some of +the purposes might simply not apply (e.g. fuzzy testing if you don't have +external users) or might not be worth the extra effort (e.g. fixing an external +interface that is expected to be very stable). But you might find yourself in a +situation where some of these are appropriate tools for your problem and you +might want to come back from time to time and refresh your memory. That said, +let's dive into the final exercise. + ## The software We are going to work with `pl_curves`, a piece of research software developed by From ead4f265bc2091287b28c338043d27934f6642e4 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Thu, 16 Mar 2023 11:02:17 +0000 Subject: [PATCH 13/16] Styling list --- _episodes/08-exercise.md | 37 ++++++++++++++++++++----------------- 1 file changed, 20 insertions(+), 17 deletions(-) diff --git a/_episodes/08-exercise.md b/_episodes/08-exercise.md index 0e5e03b..d1142cd 100644 --- a/_episodes/08-exercise.md +++ b/_episodes/08-exercise.md @@ -27,24 +27,25 @@ of test purposes here. The major ones were discussed in the lesson; some more exotic ones should be seen as suggestions for you to try. 
A test can have more than one of these purposes: -* test for features - The first and simplest test that is capable of verifying +* *test for features* - The first and simplest test that is capable of verifying the correctness of any feature (in the broadest sense) of your code. This is the obvious purpose of a test and encountered everywhere in the lesson. -* test for confidence (in a narrow sense) - Additional tests of the same +* *test for confidence (in a narrow sense)* - Additional tests of the same features that are redundant in that they repeat a test for features with qualitatively similar input just to double-check. This is also encountered in - the lesson, e.g. in the solution of [More parameters][more_parameters] large - numbers aren't really that different from other numbers unless you run into - overflow errors (one could even argue that testing the negative numbers in - [pytest features][pytest_features] is not qualitatively different and rather - to double check). One should be aware of the difference between testing for - confidence and necessary feature tests. The former is a convenience that - comes at the cost of longer test runs and so it is not always desirable to - test redundantly (although certainly better than missing an aspect). -* test for edge-/corner-cases - Test special input or conditions for which a + the lesson, e.g. in the solution of "More parameters" in [pytest + features][pytest_features] large numbers aren't really that different from + other numbers unless you run into overflow errors (one could even argue that + testing the negative numbers in [pytest features][pytest_features] is not + qualitatively different and rather to double check). One should be aware of + the difference between testing for confidence and necessary feature tests. The + former is a convenience that comes at the cost of longer test runs and so it + is not always desirable to test redundantly (although certainly better than + missing an aspect). +* *test for edge-/corner-cases* - Test special input or conditions for which a general algorithm needs to be specialised (e.g., NaNs, infinities, overflows, empty input, etc.). We did a [whole episode][edge_cases] on this. -* test for failures - This is part of feature testing but important enough to +* *test for failures* - This is part of feature testing but important enough to mention explicitly: The conditions under which your code fails are part of your interface and need to be tested. The user (that probably includes yourself) might rely on a raised exception or returned default value in the @@ -52,7 +53,7 @@ than one of these purposes: current approach cannot handle. Any changes in these (even those for the better) are changes of the interface and should appear intentionally. This was discussed in [pytest features][pytest_features]. -* fuzzy testing - This is broader than testing for failures; if you have +* *fuzzy testing* - This is broader than testing for failures; if you have unexperienced or even malicious users of your project, they might run your code with inputs or under conditions that do not make any sense at all and are almost impossible to predict. Fuzzy testing is a strategy where you let the @@ -60,11 +61,11 @@ than one of these purposes: make sure that not even the most far-fetched input can break your code. There are libraries for that, so you don't have to set up all the boilerplate yourself. 
-* regression test - After you have found a bug, you can write a test reproducing +* *regression test* - After you have found a bug, you can write a test reproducing the precise conditions under which the bug appeared. Once you fixed it, your test will work fine and if a later change risks introducing this bug again, you can rest assured that it will be immediately signalled by a failing test. -* test as a reminder - In most contemporary test frameworks, you can mark a test +* *test as a reminder* - In most contemporary test frameworks, you can mark a test as an "expected failure". Such tests are run during your standard test runs but the test framework will complain if they don't fail. This can be a convenient way of marking a to-do or a known bug that you don't have time to @@ -72,7 +73,7 @@ than one of these purposes: conditions of the bug in code form and it might be an important information if a bug disappeared unexpectedly. Maybe another code change had an effect you did not intend? -* test for fixing an external interface - You can even test code did not write +* *test for fixing an external interface* - You can even test code did not write yourself. If you rely on a particular library, you don't have control over the evolution of that library, so it can be a good idea to write a few test cases that just use the interface of that library as you do it in your code. If they @@ -80,7 +81,7 @@ than one of these purposes: chase down a rabbit hole of function calls to get the bottom of that but instead have a (hopefully well-named) test that immediately signals where the problem lies. -* test for learning an external interface - When you start using a new library, +* *test for learning an external interface* - When you start using a new library, you might play around with it for a while before using it in production just to learn how it's used. Why not preserve this in automated tests? You have the same effect as if you, e.g., wrote a script or used an interactive session but @@ -150,3 +151,5 @@ the different bacteria are. It already has tests written for most functions. [pl-curves]: https://github.com/CDT-AIMLAC/pl_curves +[pytest_features]: https://edbennett.github.io/python-testing-ci/02-pytest-functionality/index.html +[edge_cases]: https://edbennett.github.io/python-testing-ci/04-edges/index.html From 78b894f3547afa6a23acad5c96e75e12e9653d5d Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Thu, 16 Mar 2023 12:01:06 +0000 Subject: [PATCH 14/16] Updated episode 8 metadata --- _episodes/08-exercise.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/_episodes/08-exercise.md b/_episodes/08-exercise.md index d1142cd..b6e007e 100644 --- a/_episodes/08-exercise.md +++ b/_episodes/08-exercise.md @@ -1,12 +1,14 @@ --- title: "Putting it all together" -teaching: 5 +teaching: 10 exercises: 90 questions: - "How can I apply all of these techniques at once to a real application?" objectives: - "Be able to apply testing and CI techniques to a piece of research software." keypoints: +- "Tests can have very different purposes and you should keep in mind the broad + applicability of automated testing." - "Testing and CI work well together to identify problems in research software and allow them to be fixed quickly." - "If anything is unclear, or you get stuck, please ask for help!" 
--- From 6db8fb4772e221ca34ed31ebe8cd870a2e5d7996 Mon Sep 17 00:00:00 2001 From: Julian Lenz Date: Thu, 16 Mar 2023 12:01:51 +0000 Subject: [PATCH 15/16] Italic to bold --- _episodes/08-exercise.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/_episodes/08-exercise.md b/_episodes/08-exercise.md index b6e007e..5654f70 100644 --- a/_episodes/08-exercise.md +++ b/_episodes/08-exercise.md @@ -29,10 +29,10 @@ of test purposes here. The major ones were discussed in the lesson; some more exotic ones should be seen as suggestions for you to try. A test can have more than one of these purposes: -* *test for features* - The first and simplest test that is capable of verifying +* **test for features** - The first and simplest test that is capable of verifying the correctness of any feature (in the broadest sense) of your code. This is the obvious purpose of a test and encountered everywhere in the lesson. -* *test for confidence (in a narrow sense)* - Additional tests of the same +* **test for confidence (in a narrow sense)** - Additional tests of the same features that are redundant in that they repeat a test for features with qualitatively similar input just to double-check. This is also encountered in the lesson, e.g. in the solution of "More parameters" in [pytest @@ -44,10 +44,10 @@ than one of these purposes: former is a convenience that comes at the cost of longer test runs and so it is not always desirable to test redundantly (although certainly better than missing an aspect). -* *test for edge-/corner-cases* - Test special input or conditions for which a +* **test for edge-/corner-cases** - Test special input or conditions for which a general algorithm needs to be specialised (e.g., NaNs, infinities, overflows, empty input, etc.). We did a [whole episode][edge_cases] on this. -* *test for failures* - This is part of feature testing but important enough to +* **test for failures** - This is part of feature testing but important enough to mention explicitly: The conditions under which your code fails are part of your interface and need to be tested. The user (that probably includes yourself) might rely on a raised exception or returned default value in the @@ -55,7 +55,7 @@ than one of these purposes: current approach cannot handle. Any changes in these (even those for the better) are changes of the interface and should appear intentionally. This was discussed in [pytest features][pytest_features]. -* *fuzzy testing* - This is broader than testing for failures; if you have +* **fuzzy testing** - This is broader than testing for failures; if you have unexperienced or even malicious users of your project, they might run your code with inputs or under conditions that do not make any sense at all and are almost impossible to predict. Fuzzy testing is a strategy where you let the @@ -63,11 +63,11 @@ than one of these purposes: make sure that not even the most far-fetched input can break your code. There are libraries for that, so you don't have to set up all the boilerplate yourself. -* *regression test* - After you have found a bug, you can write a test reproducing +* **regression test** - After you have found a bug, you can write a test reproducing the precise conditions under which the bug appeared. Once you fixed it, your test will work fine and if a later change risks introducing this bug again, you can rest assured that it will be immediately signalled by a failing test. 
-* *test as a reminder* - In most contemporary test frameworks, you can mark a test +* **test as a reminder** - In most contemporary test frameworks, you can mark a test as an "expected failure". Such tests are run during your standard test runs but the test framework will complain if they don't fail. This can be a convenient way of marking a to-do or a known bug that you don't have time to @@ -75,7 +75,7 @@ than one of these purposes: conditions of the bug in code form and it might be an important information if a bug disappeared unexpectedly. Maybe another code change had an effect you did not intend? -* *test for fixing an external interface* - You can even test code did not write +* **test for fixing an external interface** - You can even test code did not write yourself. If you rely on a particular library, you don't have control over the evolution of that library, so it can be a good idea to write a few test cases that just use the interface of that library as you do it in your code. If they @@ -83,7 +83,7 @@ than one of these purposes: chase down a rabbit hole of function calls to get the bottom of that but instead have a (hopefully well-named) test that immediately signals where the problem lies. -* *test for learning an external interface* - When you start using a new library, +* **test for learning an external interface** - When you start using a new library, you might play around with it for a while before using it in production just to learn how it's used. Why not preserve this in automated tests? You have the same effect as if you, e.g., wrote a script or used an interactive session but From 85b853aa985b46412ad8e6218cb2f5babec3bbac Mon Sep 17 00:00:00 2001 From: chillenzer <107195608+chillenzer@users.noreply.github.com> Date: Thu, 16 Mar 2023 12:03:27 +0000 Subject: [PATCH 16/16] Update CITATION --- CITATION | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CITATION b/CITATION index 3139b19..9c29544 100644 --- a/CITATION +++ b/CITATION @@ -1,3 +1,3 @@ Please cite as: -Ed Bennett, Lester Hedges, Matt Williams, "Introduction to automated testing and continuous integration in Python" +Ed Bennett, Lester Hedges, Julian Lenz, Matt Williams, "Introduction to automated testing and continuous integration in Python"