I have to ask: how serious is the "sometimes serious" error under typical real-world mash scenarios?
That depends on whether you are in a "sometimes serious" situation or not. The first generation programs give a decent answer a lot of the time, but sometimes they are way off base. I think this is because they use largely empirically derived models based on what happens most of the time. As an example, declaring bicarbonate to be 61*alkalinity/50 works pretty well if your water's pH is between 6 and 8. Outside that range the approximation's accuracy starts to fall off. Even at pH 9 this particular error isn't serious, but it is perhaps simpler to understand than the more serious errors that have been discovered recently.

Thus the question you have to ask yourself is "Am I in a potentially serious error situation here?" With respect to the bicarbonate approximation you at least know that you are if your water's pH is 9.8, but who is going to remember that? With respect to the other errors I don't know how to word a caveat that says "Don't use this program when..." because I don't know where the errors lie. I can look at the models used (or what I think are the models used), ask myself "How is this connected to the underlying science?", and only observe that they seem to fit what the science predicts in a lot of cases and not in others.

I have noted here over the years that Bru'n Water, which is, obviously, the most used and thus the most commented on, seems to do better when highly colored malts are not involved or are used at low levels. This makes sense because the non-linearity of their titration curves introduces larger errors in a forced linear model (which all the first generation programs, AFAIK, use). Thus I often tell people "Use the program to get you in the ballpark but, especially if dark malts are involved, do a test mash." If you take that advice then it doesn't really matter how bad the program is, as you will catch it out when you do the test mash and still make a good beer (or at least one mashed at the correct pH).
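The bicarbonate point can be checked directly against full carbonate speciation. A minimal sketch, assuming the standard carbonic acid pK2 of 10.33 and Kw of 1e-14 at 25 °C (the exact constants vary slightly with temperature and ionic strength):

```python
# Why bicarbonate = 61*alkalinity/50 breaks down at high pH.
# Exact relation: alkalinity (eq/L) = [HCO3-] + 2[CO3--] + [OH-] - [H+];
# the approximation assumes all alkalinity is bicarbonate.

def bicarbonate_exact(alk_ppm_caco3, pH, pK2=10.33, pKw=14.0):
    """Bicarbonate (mg/L) solved from the alkalinity definition."""
    alk_eq = alk_ppm_caco3 / 50_000.0           # eq/L (50 g CaCO3 per eq)
    h = 10.0 ** -pH
    oh = 10.0 ** (pH - pKw)
    k2 = 10.0 ** -pK2
    # Substitute [CO3--] = [HCO3-] * K2 / [H+] and solve for [HCO3-]:
    hco3_mol = (alk_eq - oh + h) / (1.0 + 2.0 * k2 / h)
    return hco3_mol * 61_000.0                  # mg/L (61 g/mol HCO3-)

def bicarbonate_approx(alk_ppm_caco3):
    """The first-generation shortcut: 61 * alkalinity / 50."""
    return 61.0 * alk_ppm_caco3 / 50.0

for pH in (7.0, 9.0, 9.8):
    print(pH, round(bicarbonate_exact(100.0, pH), 1),
          bicarbonate_approx(100.0))
```

For 100 ppm alkalinity the shortcut always returns 122 mg/L; the exact figure is essentially 122 at pH 7, about 10% lower at pH 9, and roughly 40% lower at pH 9.8, which is where the approximation gets you into trouble.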
And by typical, I mean in the 1 to 2.5 quart per pound mash thickness range.
Suppose you are a BIAB or no sparge brewer?
I'm not calling the error into doubt, having kept up with the related posts for three weeks now. But on a scale of 1 to 10, how likely is it that someone will encounter the error at all?
And if they do, how badly will it skew their pH prediction results: 10%? 100%?

I don't see the problem as being in pH prediction but in reliance on these programs for acid addition recommendations. The recent big discovery showed that the most popular program gave a wildly wrong answer for that under some conditions. If those recommendations were followed, it is likely that the resulting mash pH would be substantially off the desired value. By how much, I don't know, nor can I tell you how much worse that is going to make your beer than it might otherwise have been. Nor can I even tell you when this problem will occur. I haven't the time or inclination to check all these programs and publish "flight envelopes" for them. I think a much better short term fix is to tell the author there is a problem and have him repair it. Don't you? The long term fix is to start producing second generation spreadsheets and programs which are not subject to these errors and uncertainties.
Finally, is the skew in pH prediction greater than that introduced by incorrect grain DI pH, calcium, or lactic acid potency, etc.?
That would depend on those uncertainties. In science or industry, if we need to estimate something (here the amount of acid we must add to a mash to get a desired pH) and that estimate is derived from a series of measurements (water alkalinity and pH, malt titration parameters, acid strength, malt masses, water volume), it is clear that an error in any of those measurements will contribute to uncertainty in the estimate. The art of error propagation is used to determine how much each measurement uncertainty contributes to the acid addition uncertainty. These contributions are tabulated in an "error budget," and that list is examined to see if there are ways to reduce or eliminate the dominant errors.

There is another item on the error budget besides those attributable to measurements, and that is the error introduced by the algorithm that processes the measurements. To answer your question one needs numbers for the error budget, and those are hard to get. In this case I could look at the apparent error in Brewer's Friend, propagate that through to an uncertainty in acid amount, and propagate that further to an estimate of error in the resulting pH. But I have another option, and that is to replace Brewer's Friend with a program that contributes 0 approximation error, thus removing that item from the budget altogether.

Now you are quite right to point out, as I think is your intention in asking this question, that if the budget item for program errors is 0.02 pH while the RSS (if you know what that means, great; if not, don't worry about it) of the other items is 0.05 pH, then the total error is 0.054 pH. Eliminating the 0.02 program error would only improve things to 0.05, and we might question whether it is worth it to 'fix' the program. But, of course, we don't know what the program error is relative to the rest of the budget and so find it prudent to 'fix' the program if that isn't too expensive (and it isn't).
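The RSS arithmetic in that last paragraph is just root-sum-square combination of independent error terms. A quick sketch using the illustrative 0.02 pH and 0.05 pH figures from the discussion (they are examples, not measured values):

```python
# Root-sum-square (RSS) combination of an error budget: independent
# uncertainties add in quadrature, not linearly.

from math import sqrt

def rss(*terms):
    """Combine independent error contributions by root-sum-square."""
    return sqrt(sum(t * t for t in terms))

with_program_error = rss(0.05, 0.02)   # all other items + program error
without_program_error = rss(0.05)      # program error removed from budget

print(round(with_program_error, 3), round(without_program_error, 3))
```

This reproduces the numbers in the text: 0.054 pH total with the program error in the budget, 0.050 pH with it removed, i.e. fixing the program only helps a little when it is not the dominant term.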
We've deviated quite a bit from the main theme of this thread, and it is doubtful that many will read down this far, but right in line with that theme is the fact that by using the 0 alkalinity method we take a couple more items out of the error budget: those related to the water's volume, pH, and alkalinity measurements.
Example: Say a majority of people enjoy listening to music played on a $200 system reproducing sound with 99% accuracy, while an audiophile may spend 10 or more times that amount trying to reach that last 1% of accuracy. Who stands to benefit the most as this relates to brewing beer?
This is a good analogy. Most loudspeakers deliver THD (distortion) of a couple of percent. Modern amplifiers have THD specs of tenths or hundredths of a percent. Given this, why do we bother with such good specs for the amp when the speakers are relatively so poor? The answer is that it really isn't that difficult, with today's technology, to build an amplifier whose THD is 10 dB below the THD of the speakers (10 dB is the engineer's rule of thumb: if something's error contribution is 10 or more dB below another's, then we can ignore its error).
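The 10 dB rule of thumb is easy to check numerically. A sketch assuming the THD figures combine as uncorrelated amplitudes (RSS) and that "10 dB below" means an amplitude ratio of 10**(10/20), about 3.16:

```python
# Why a source 10 dB below another can be ignored: it barely moves
# the combined (root-sum-square) distortion total.

from math import sqrt

speaker_thd = 2.0                          # percent, a typical loudspeaker
amp_thd = speaker_thd / 10 ** (10 / 20)    # 10 dB lower, ~0.63 percent

combined = sqrt(speaker_thd**2 + amp_thd**2)
print(round(amp_thd, 2), round(combined, 2))
```

The combined THD comes out around 2.1%, versus 2.0% for the speaker alone: less than a 5% increase, which is why the amp's contribution drops off the error budget.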
It's the same here. It really isn't difficult to build a spreadsheet that eliminates program error from the error budget. Watch this space!