arclight<p>Today's work fun involved my modernized dead-end version of the good sodium fire code (as opposed to SOFIRE II, the bad sodium fire code).</p><p>One of the changes I make in all legacy Fortran codes is to replace 'double precision' numerical types with 'real(kind=real64)'. Modern Fortran has parameterized numerical precision via a 'kind', defined in the standard intrinsic module iso_fortran_env. A 'kind' is effectively an integer constant - the details aren't important - the big idea is that you can define a "working precision" kind constant and use it throughout your code to set the precision of constants and variables. So I define WP as real64 and all my declarations are real(kind=WP). Similarly, all my literals are written as 1.0_WP (the underscore attaches the kind to the literal value).</p><p>The neat thing is that you can redefine WP as real32 or real128, recompile, and all your reals are now single or quad precision. No compiler options, no global search-and-replace - just redefine one constant, recompile, and you're done.</p><p>Why do this? Back in the 32-bit days, double precision meant better accuracy but much slower execution. Engineering codes would often ship in both single- and double-precision versions of an application: run scoping cases quickly in single precision, then switch to double precision for final runs or for cases with noticeable roundoff, stability, or precision issues.</p><p>As an experiment, I rebuilt the code in single and quad precision, ran the test suite, and compared the results against the default real64 results. Results for real64 and real128 were mostly identical except for 1-2 parameters in the first 5-6 timesteps, when conditions are rapidly changing (the start of the fire). What's interesting is that there's not much numerical "noise" after those first 5-6 timesteps (it helps that the code only writes 5 significant figures to the data files, so much of the noise won't be visible). The real32 results showed more deviation, so real64 is clearly adequate for what we're doing. More bits extends runtime significantly while the results are essentially identical; fewer bits gives us less accurate results (no great surprise there).</p><p>I'm curious what I should be looking for in a test like this. I think I need to dig into Nick Higham's book on stability and accuracy <a href="https://nhigham.com/accuracy-and-stability-of-numerical-algorithms/" rel="nofollow noopener noreferrer" translate="no" target="_blank">https://nhigham.com/accuracy-and-stability-of-numerical-algorithms/</a> and see if he has any interesting ideas (that book is above my pay grade...). I really need to generate some plots and look for signs of clipping or instability.</p><p>This is a case where the experiment was easy to run; I did it just to see what would happen. Now I'm at the point of asking what I should expect to see, what kinds of issues can arise from fiddling with the precision knob, and what conclusions I can draw about code stability from the results.</p><p>Parameterized precision makes this test trivial in Fortran; I'm curious whether (and how) other languages can do this without compiler options, macros, a preprocessor, or similar unmaintainable and questionable hackery. It's nice having everything explicitly there in the source code with nary a <a href="https://oldbytes.space/tags/define" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#define</a> in sight.</p>
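<p>For anyone who hasn't seen the pattern, here's a minimal sketch of the working-precision idea described above; the module and variable names are illustrative, not taken from the actual fire code:</p><pre><code>! Minimal sketch of the working-precision pattern (names are illustrative).
module precision_mod
   use, intrinsic :: iso_fortran_env, only: real32, real64, real128
   implicit none
   ! Change this one constant and recompile to switch the whole code
   ! between single (real32), double (real64), and quad (real128) precision.
   integer, parameter :: WP = real64
end module precision_mod

program demo
   use precision_mod, only: WP
   implicit none
   real(kind=WP) :: temperature, heat_flux
   ! The _WP suffix ties each literal's kind to the working precision.
   temperature = 300.0_WP
   heat_flux   = 0.5_WP * temperature
   print *, 'kind =', WP, ' value =', heat_flux
end program demo
</code></pre>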