I thought we'd brought you down to earth in
https://www.homebrewtalk.com/showthread.php?t=598079. We pointed out there that the probability of getting 50 pH measurements in a row accurate to ±0.01 was something like 1E-19 (IIRC), based on the accuracy with which pH can realistically be measured (i.e., assuming your prediction algorithm is perfect, which, of course, it cannot be).
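For a rough sense of where a number like that comes from, here is a minimal sketch. The Gaussian error model and the σ = 0.02 pH figure are my assumptions for illustration (the thread may have used slightly different numbers); a somewhat smaller σ lands the result in the 1E-19 range, so the exponent is quite sensitive to the assumed measurement accuracy.

```python
import math

# Assumed per-measurement error model: zero-mean Gaussian with
# sigma = 0.02 pH, roughly the realistic accuracy of a well-
# calibrated meter. This value is an assumption, not from the thread.
sigma = 0.02
tol = 0.01  # the claimed prediction accuracy, ±0.01 pH

# Probability that a single measurement lands within ±0.01 of truth:
# P(|X| < tol) = erf(tol / (sigma * sqrt(2)))
p_single = math.erf(tol / (sigma * math.sqrt(2)))

# Probability that 50 independent measurements all land within ±0.01:
p_fifty = p_single ** 50

print(f"P(one reading within ±0.01) = {p_single:.3f}")  # ~0.38
print(f"P(50 in a row within ±0.01) = {p_fifty:.1e}")   # ~1e-21
```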
I have before me on the desk a package of Hach pH 4.01 @ 25 °C buffer powder pillows. They are labeled 4.01 ±0.02. This means I cannot, with any algorithm, predict the pH of even this simple buffer to better than ±0.02, and since this buffer is the standard against which the meter is calibrated, every measurement made with that meter inherits at least that uncertainty (see the quick calculation at the end of this post). How, then, can you continue to claim that you can predict the pH of a complicated mash to better accuracy than the standard against which you are measuring it? Using the actual DI pH values of the actual malts will, as discussed in the referenced thread, improve accuracy appreciably, but not, as also discussed, to ±0.01.

If you continue to claim ±0.01 you will only destroy any credibility you may have and thereby potentially consign an important finding (that error in malt DI pH is a bigger contributor to error than error in buffering capacity) to obscurity.
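Here is that calculation: a minimal sketch of how the buffer tolerance sets a floor on overall uncertainty, assuming independent error sources combined in quadrature and treating the ±0.02 label loosely as a 1σ figure (both are my simplifying assumptions, and the meter figure is a hypothetical placeholder).

```python
import math

# Assumed independent error sources, combined in quadrature.
# Only the buffer tolerance comes from the Hach label; the rest
# are illustrative guesses.
buffer_tolerance = 0.02     # Hach label: 4.01 ±0.02
meter_repeatability = 0.01  # hypothetical meter noise
model_error = 0.00          # grant the prediction algorithm perfection

total = math.sqrt(buffer_tolerance**2
                  + meter_repeatability**2
                  + model_error**2)
print(f"Best-case total uncertainty: ±{total:.3f} pH")  # ≈ ±0.022
```

Even with the model error set to zero, the floor is the standard itself: you cannot verify a ±0.01 claim with measurements whose traceability stops at a ±0.02 buffer.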