What's the most baking soda you've ever added to a dark beer recipe's mash?

I fiddle much less with water and pH than I used to. Didn't see much benefit.
If you are fortunate enough to have water that comes out of the tap with low alkalinity and a blend of the stylistic ions that suits your taste vis-a-vis the styles you like to brew, and your brewing practices are such that mash pH falls into the right band, then there wouldn't be much benefit to tweaking your water. If, OTOH, you are like most, your failure to appreciate the benefits would cut you off from the opportunity to dramatically improve your beers.
 
I have never added alkalinity to brewing liquor. It would seem to me that one's recipe would be highly suspect and that the calcium salts should in that case be added to the boil.
 
I have never added alkalinity to brewing liquor. It would seem to me that one's recipe would be highly suspect and that the calcium salts should in that case be added to the boil.

Unfortunately, a brewer can't escape the pH lowering effect of calcium salts by reserving them for the boil. While it is helpful to avoid an overly low mashing pH since that tends to enhance fermentability of the wort and make the resulting beer thinner than intended, adding those salts to the kettle will cause the kettle wort pH to drop...maybe lower than you might prefer.

Mashing water alkalinity can be an asset in brewing some beer styles.
 
I have never added alkalinity to brewing liquor. It would seem to me that one's recipe would be highly suspect and that the calcium salts should in that case be added to the boil.

Highly suspect in what way? Are you thinking along the lines of an excessive amount of roasted malt in the recipe?

High Lovibond roasted malts in the color range of 600+ L can have DI_mash pH's ranging down to about 4.25 (give or take a bit). And I believe the acidity of such malts in the most extreme of cases may approach or perhaps even exceed 100 mEq/kg (and certainly would if your intended mash pH target is 5.6 instead of 5.4). So even the addition of half a pound of such a malt introduces a load of acidity to a mash. And a search of recipes on a site such as Brewers Friend will reveal that many (scaled to 5.5 gallon) recipes call for one or more malts like this to be added in increments of whole pounds.

One lot of 630 Lovibond Briess malt was lab tested at a DI_mash pH of 4.24, and a 503L Chocolate malt tested at a DI_pH of 4.38. Oddly enough, unmalted dark roasted grains extending as deeply dark as 600L never seem to go much below ~4.55 DI_pH, and most grains in this class generally hover closer to 4.7, regardless of their Lovibond. So deep roasted malted barleys can be (but are not always, this being highly Lovibond dependent) much more acidic overall than is the case for deep roasted unmalted barleys.
 
Does completely ignoring the presence of any quantity of mash water alkalinity (be it inherent, or added) not fly in the face of a lengthy history of brewing which indicates that regions with high alkalinity water often gravitated toward the production of darker brews? I'm amazed that so many contributors to this thread are openly touting their mash water's zero alkalinity as if it were somehow of benefit to them in the mashing of their very dark beer recipes. I'm also amazed that so few have admitted to having added alkalinity at any level, and that some of such admissions almost seem to come with an attached level of trepidation or self-doubt as to having fully done the right thing. Clearly most home brewers seem to be far more comfortable with acidifying a mash than with alkalizing it. I wonder why.
 
It's probably because if one is careful in what he does, bases his practices on pH measurement rather than the predictions of questionable spreadsheets and calculators, and uses reasonable amounts of high colored malts, he finds that he seldom if ever needs to add alkalinity. I probably did it in my youth because I didn't understand all this and did what the 'experts' (the authors of the books and magazine articles) said to do. I'm now older (much older) and wiser (not so much wiser as older) and I haven't added alkalinity to any beer since the renaissance.

I think home brewers have gotten wise to the fact that stouts that leave your mouth tasting like the bottom of your Weber aren't that good. With RO water and enough gypsum or CaCl2 to get 50 ppm calcium, a typical mash of 90% base malt and 10% roast barley is going to give you a mash pH around 5.5. Change that to 80%/20% and the mash pH drops to around 5.4. Now mash those same grists with water of 1.75 mEq/L alkalinity and the pH's rise by about 0.1, to 5.6 for 90/10 and 5.5 for 80/20. In the 90/10 case, which would be a much better beer to my way of thinking, one might be considering acid addition rather than alkali.

Now home brewers (and commercial brewers too) love to experiment, and they do brew beers with incredible amounts of black and other high colored malts. They aren't very good beers, but to make them less bad than they might be one should get the mash pH right, and that will require alkali. It's always hard to draw conclusions from a handful of posts, but I might think of interpreting the answers here by observing that this is the Brewing Science Forum and that, therefore, there may be better cognizance of the science of the mash tun, and that the brewers here may tend to be more experienced than the brewers in other forums, and that, therefore, you may have a group here that has tended to discover that alkali additions aren't needed that often in well balanced beers.

You should consider asking the question in some of the other fora.
 
My Absolute Perfection Stout uses 25% dark specialty malts and my city tap water. It needs just a slight nudge to bring mash pH up into range and take the edge off the acidity. And it tastes, well, absolutely perfect and has medaled. I don't add baking soda or any other salts very often anymore, but I will add a tiny amount in a case like this. In this case I used 1/8 teaspoon in 3 gallons. A little dab'll do ya.
 
A.J., was I off base in my post #36 above? Are not some higher end Lovibond dark roast malts extremely acidic, and would not their use (as opposed to tamer roast barley) more likely necessitate the addition of additional alkalinity?

Some of these malts (if 1 lb. was to be mashed alone) would necessitate the addition of perhaps 2 to 4 g. of baking soda all by themselves to hit pH 5.4-5.5 in the mash. Of course the alkaline base malts are utilized to counter this, as is using less of such deep roasted malt(s), but if the goal is to use them and also to mash at a pH of 5.4 to 5.5 (the latter of which, per Martin, offers flavor and mouthfeel benefits for dark brews, which I welcome him to confirm here), then baking soda or slaked lime additions would seem to be a necessity.
 
I've added up to 4-5 grams of baking soda to mash water in order to bring it into the proper pH range. This was for beers which exceeded 15-20% specialty malts - a blend of both crystal and roasted malts. I have recently opened a bottle of a 9.2% Dark Ale brewed in November 2017, with a recipe which came close to 50% specialty malts (a kitchen-sink recipe), and it was very, very good. I used 6 g of baking soda in the mashing water. My Na levels were probably very high, but I tasted nothing of it in the beer.

All this happened before I actually got my pH meter in January of this year.

I will try to report back here again once I begin brewing some Stouts, Browns, and Baltic Porters toward the end of the year.
 
Good point, and thank you 'thehaze'! Crystal (or caramel) malts can be as acidic as deeply roasted malts. And some sweet stout recipes have loads of caramel/crystal malts, in addition to deeply roasted malts.

I presently believe (subject to highly welcomed correction) that 60-80L crystal is about as acidic pound for pound as 300L roasted, and 180-220L (ish) crystal roughly matches the acidity level (also pound for pound) of the most acidic of the 600L deep roasted malt(s). And 120-130L crystal would fall somewhere in between these, perhaps roughly matching 450-500L roasted malts for acidity.

I look forward to hearing how your baking soda additions measure up when you test the pH of the mash.
 
A.J., was I off base in my post #36 above?
No, not at all.

Are not some higher end Lovibond dark roast malts extremely acidic, and would not their use (as opposed to tamer roast barley) more likely necessitate the addition of additional alkalinity?
Yes. A kg of a 150L Briess crystal measured by Kai is, to pH 5.5, equivalent to 4 mL of 23 Be' HCl. A kg of 600L Crisp black malt I measured is equivalent to 5.2 mL of that same acid (for reference, a kg of sauermalz is equivalent to 28 mL of the 23 Be' HCl). A kg of the 600L malt would require 63 mEq of protons to be absorbed to get it to pH 5.5. At 0.9 mEq/mmol from bicarbonate that's 70 mmol, which is 70*84 = 5880 mg (5.88 grams) of baking soda. But if you mashed a pound of it in water of 0 alkalinity and 2.5 mEq/L calcium hardness with 9 lbs of a typical base malt, you would arrive at pH 5.5 and find that it delivers only 29 mEq of protons while the base malt absorbs 34. Thus unless you get up into higher colored malt percentages you aren't going to need to add alkali.
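
For anyone who wants to reproduce that arithmetic, here is a minimal Python sketch of the mEq-to-grams conversion. The 0.9 mEq/mmol figure for bicarbonate near mash pH and the 63 mEq example come from the post above; the function name is just for illustration.

```python
# Minimal sketch: convert a proton requirement (mEq) into grams of baking soda.
# Assumptions (taken from the post above, not a general-purpose model):
#   - each mmol of NaHCO3 absorbs ~0.9 mEq of protons near mash pH
#   - molar mass of NaHCO3 is 84 g/mol

NAHCO3_MOLAR_MASS = 84.0      # g/mol
MEQ_PER_MMOL_NAHCO3 = 0.9     # mEq of protons absorbed per mmol near pH 5.4-5.5

def baking_soda_grams(protons_meq: float) -> float:
    """Grams of NaHCO3 needed to absorb protons_meq mEq of protons."""
    mmol = protons_meq / MEQ_PER_MMOL_NAHCO3    # mmol of bicarbonate required
    return mmol * NAHCO3_MOLAR_MASS / 1000.0    # mg -> g

# Example from the post: 1 kg of the 600L black malt needs 63 mEq to reach pH 5.5
print(round(baking_soda_grams(63.0), 2))  # 5.88 g
```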

Some of these malts (if 1 lb. was to be mashed alone) would necessitate the addition of perhaps 2 to 4 g. of baking soda all by themselves to hit pH 5.4-5.5 in the mash. Of course the alkaline base malts are utilized to counter this...
There it is!

I think No. 39 is very instructive with respect to this question.
 
Having judged far too many bad beers, a fault that I find more commonly with the wider usage of RO and distilled brewing water is excessive sharpness and acridity (and sometimes thinness) in some porters and stouts. I find it very refreshing to come across a porter or stout that was brewed with more alkaline water, where the sharpness and acridity have been moderated away so that you can actually enjoy elements such as coffee and chocolate. The other significant benefit of avoiding an overly low mashing pH is that body and mouthfeel won't be lost to excessive proteolysis.

It usually doesn't take too much baking soda to provide the alkalinity necessary to avoid an excessive pH drop. For those of you wanting to test a really acidic grist, try the Reaper's Mild recipe that is posted on this forum. It's dominated by a pretty high percentage of dark crystal malts and I recall predicting and measuring a room-temp pH of about 4.9 before lime or baking soda was added to the mashing water.
 
Having judged far too many bad beers, a fault that I find more commonly with the wider usage of RO and distilled brewing water is excessive sharpness and acridity (and sometimes thinness) in some porters and stouts. I find it very refreshing to come across a porter or stout that was brewed with more alkaline water, where the sharpness and acridity have been moderated away so that you can actually enjoy elements such as coffee and chocolate.
How does a judge know that reduced acridity and sharpness came from increased alkalinity as opposed to a more sensible grain bill?

The other significant benefit of avoiding an overly low mashing pH is that body and mouthfeel won't be lost to excessive proteolysis.
Just to be sure: I don't want my remarks to be taken as being supportive of low mash pH. If you need alkali to hit proper mash pH, use it.
 
Unfortunately, a brewer can't escape the pH lowering effect of calcium salts by reserving them for the boil. While it is helpful to avoid an overly low mashing pH since that tends to enhance fermentability of the wort and make the resulting beer thinner than intended, adding those salts to the kettle will cause the kettle wort pH to drop...maybe lower than you might prefer.

Mashing water alkalinity can be an asset in brewing some beer styles.

With all of the technology and information available, there has been a loss of recipe-based approaches (ingredients/process) that can achieve some of the same effects.

A change in process can accomplish some, if not all, of the same. With this recipe, the base grains would be mashed (perhaps even with the calcium salts), and the roasted grains would be steeped.

Adding calcium salts to the boil does lower the pH but it sounds like you fear this change, maybe because it's not well documented or you can't measure/control it.

I can already see the responses to this... something akin to "change in process doesn't produce the same results".
 
..... With RO water and enough gypsum or CaCl2 to get 50 ppm calcium, a typical mash of 90% base malt and 10% roast barley is going to give you a mash pH around 5.5. Change that to 80%/20% and the mash pH drops to around 5.4. Now mash those same grists with water of 1.75 mEq/L alkalinity and the pH's rise by about 0.1, to 5.6 for 90/10 and 5.5 for 80/20. In the 90/10 case, which would be a much better beer to my way of thinking, one might be considering acid addition rather than alkali.

A.J., with my tentative and as yet unreleased 'MME' revision, presently being called version 2.60, I get the following for 300L Roast Barley (DI_pH = 4.7) and 1.8L Pilsner malt (DI_pH = 5.81) mashing into 4.5 gallons of water with 50 ppm Ca++:

1) 9 lbs. Pilsner + 1 lb. 300L Roast Barley = 5.51 pH (90%/10%)

2) 8 lbs. Pilsner + 2 lbs. 300L Roast Barley = 5.36 pH (80%/20%)

These two seem to be in reasonably good agreement with your above quoted assessment.

But I've recently discovered from Briess data that Roast Barley (unmalted) is much less acidic than malted barleys of similar Lovibond color. Per Briess data, even at 600L, Roast Barley does not go lower than about 4.65 to 4.55 DI_pH (call it 4.6), but 600L roasted 'Black' (or Black Patent) malted barley can go as low as between 4.3 and 4.24 DI_pH. That seems to be a huge difference with respect to acidity. For those I get the following with MME version 2.60 (using the same criteria as above):

1) 9 lbs. Pilsner + 1 lb. 600L malted 'Black' = 5.35 pH (90%/10%)

2) 8 lbs. Pilsner + 2 lbs. 600L malted 'Black' = 5.15 pH (80%/20%)

Am I presuming too much acidity for a 600L roasted/malted barley in the DI_pH range of 4.3 to 4.24, when also in the presence of 4.5 gallons of mash water with 50 ppm Ca++?
 
But I've recently discovered from Briess data that Roast Barley (unmalted) is much less acidic than malted barleys of similar Lovibond color. Per Briess data, even at 600L, Roast Barley does not go lower than about 4.65 to 4.55 DI_pH (call it 4.6), but 600L roasted 'Black' (or Black Patent) malted barley can go as low as between 4.3 and 4.24 DI_pH. That seems to be a huge difference with respect to acidity. For those I get the following with MME version 2.60 (using the same criteria as above):

Am I presuming too much acidity for a 600L roasted/malted barley in the DI_pH range of 4.3 to 4.24,

Keep in mind that DI pH is only half the story when it comes to acidity. Remember what the definition of acidity is: it is the quantity of protons which must be absorbed in order to raise the pH of a DI mash of the malt to a pH of interest. There is no way to know what that is from just a DI pH measurement. Whoever is doing the analysis must make a second pH measurement with a known quantity of an alkali of known strength added to a known mass of grain. The alkali added, divided by the pH change and then again by the malt mass, gives the buffering of the malt - the number of mEq per pH per unit mass. We know that for base malts this is a number around -40 mEq/kg/pH and can do rough computations from DI pH assuming that number. But for dark malts that number can vary. The effects of this can be seen in the following graph:


[Attached plot: Acidity.jpg - acidity to pH 5.5 (mEq/kg) vs. DI pH for the malts in the database]

The graph plots the acidity of 1 kg of all the malts in my database which are acidic with respect to a mash pH of 5.5 (excluding Sauermalz entries) vs. the DI pH's of those malts. As you can see, if the DI pH is greater than 5.2 you can get a fair estimate of the malt's acidity from 319.89 - 59.081*pHDI. When the malt pHDI is less than 5.2 the approximation is worse, but not terrible as long as you stay above pHDI 4.8. Below that, i.e. for the highly kilned malts and unmalted grains, things really fall apart. Right at around a pHDI of 4.7, malts can be found with acidities ranging from 35 to over 60 mEq/kg. And you will find at lower pHDI malts with less acidity than some malts with higher pHDI.

So I am afraid that as long as you try to tie acidity to pHDI alone you will be chasing your tail with dark malts. The good news is that they are usually used in small enough quantity that it doesn't matter that much whether their acidity is 30 or 60 mEq/kg.
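
For anyone wanting to play with these relationships, here is a minimal Python sketch of the two ideas above: buffering from a two-point titration, and the rough linear fit to the plot. The example titration numbers are invented, the fit coefficients come from the post, and as noted the fit should not be trusted much below a pHDI of roughly 4.8-5.2.

```python
# Minimal sketch (illustrative numbers only):
#  1) buffering from a two-point titration: alkali added / pH change / malt mass
#     (the post quotes buffering as a signed value, ~ -40 mEq/kg/pH for base malt;
#      the magnitude is used here for simplicity)
#  2) acidity w.r.t. a target pH from that buffering, compared with the rough
#     linear fit 319.89 - 59.081*pHDI quoted for the pHDI > 5.2 region.

def buffering_magnitude(alkali_meq: float, delta_ph: float, malt_kg: float) -> float:
    """mEq of alkali per pH unit per kg of malt, from a DI mash plus one alkali addition."""
    return alkali_meq / delta_ph / malt_kg

def acidity_from_titration(buff: float, ph_di: float, ph_target: float = 5.5) -> float:
    """Acidity (mEq/kg) w.r.t. ph_target; positive means the malt is acidic."""
    return buff * (ph_target - ph_di)

def acidity_from_fit(ph_di: float) -> float:
    """Rough estimate from the linear fit above (only a fair estimate for pHDI > 5.2)."""
    return 319.89 - 59.081 * ph_di

# Hypothetical dark malt: 50 g mini-mash, DI pH 4.70, pH 5.10 after adding 1.0 mEq of NaOH
buff = buffering_magnitude(alkali_meq=1.0, delta_ph=5.10 - 4.70, malt_kg=0.050)  # ~50 mEq/kg/pH
print(round(acidity_from_titration(buff, ph_di=4.70), 1))  # ~40 mEq/kg to reach pH 5.5
print(round(acidity_from_fit(4.70), 1))                    # ~42 mEq/kg, but per the plot real
                                                           # malts at pHDI 4.7 span ~35 to 60+
```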

For those following the "voltmeter" spreadsheet thread - I was able to whip the numbers on the plot out of the database in a few minutes using the dQm1kg function. Why didn't I do this years ago?
 
Not all, but I suspect that some of the scatter may be due to the difference between a deep roasted malt's "reported" Lovibond color and its "actual" Lovibond color.

For example, one lot of a 'Black Patent' type malt may lab measure to be only 520L and another to be 670L, but when purchased, both lots will say 600L on the package. But the two lots will have different levels of acidity.

The same sort of lot to lot color (and therefore acidity) variations can be seen to some degree with crystal malts also. And ditto for all other malts.... One would hope that the maltsters carefully blend lots to mitigate this issue. But since acidity is logarithmic with respect to pH, and color is far more linear, even if linear color blends are achieved so that at retail you get close to the color you pay for, the acidity from retail lot to retail lot will likely still scatter all over the place.

The Briess data I have pretty much scatters all over the place with respect to DI_pH vs. color. And that is before the lots' acidity is even considered.
 
There may even be inherent acidity differences with respect to color between caramel and crystal malts, which are generally considered to be the same. The Briess data I have includes some tests of crystal malts, which I have no way of verifying but suspect may be from competitors (since to my knowledge Briess only sells caramel malts), and for the limited data I have on hand, the crystal malts' correlation of color to DI_pH is far more scattered and unpredictable than it is for the caramel malts.
 
.... And you will find at lower pHDI malts with less acidity than some malts with higher pHDI.

Does this hold true after separating unmalted deep roast from malted deep roast barley? I believe there is a big difference between them. And then there are the deep roast grains that are wheat based. And also those that are modified to give deep color but not pass any acrid roast/burnt taste to your recipe. Who knows where these deep roast types fit into the acidity scale mix, or even if they do fit in?
 
Not all, but I suspect that some of the scatter may be due to the difference between a deep roasted malt's "reported" Lovibond color and its "actual" Lovibond color.
Color was not considered in making the plot. It is based only on pHDI, buffering, and the acidity to pH 5.5 computed from those two parameters. I don't even know what the color of most of those malts is.



The same sort of lot to lot color (and therefore acidity) variations can be seen to some degree with crystal malts also.
Color has nothing to do with this.

But since acidity is logarithmic with respect to pH,
Acidity is approximately linear with pH over the pH region of interest to brewers. It is calculated from acidity = a1*(pH - pHDI) + a2*(pH - pHDI)^2 + a3*(pH - pHDI)^3. As a2 and a3 are usually small in magnitude relative to a1, and pH - pHDI < 1 in most cases, acidity is nearly (but not quite) linear with pH.
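
A minimal sketch of that expression, with invented coefficients chosen only to show how little the quadratic and cubic terms contribute when they are small relative to a1:

```python
# Evaluate acidity = a1*(pH - pHDI) + a2*(pH - pHDI)^2 + a3*(pH - pHDI)^3
# Coefficient values below are made up purely for illustration, not measured
# for any real malt.

def acidity(ph: float, ph_di: float, a1: float, a2: float = 0.0, a3: float = 0.0) -> float:
    d = ph - ph_di
    return a1 * d + a2 * d ** 2 + a3 * d ** 3

# With a2 and a3 small relative to a1 and |pH - pHDI| < 1, the curve is nearly linear:
for ph in (5.3, 5.4, 5.5, 5.6):
    print(ph, round(acidity(ph, ph_di=4.7, a1=45.0, a2=3.0, a3=-1.0), 1))
# 5.3 -> 27.9, 5.4 -> 32.6, 5.5 -> 37.4, 5.6 -> 42.2 (nearly even steps per 0.1 pH)
```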


And that is before the lots' acidity is even considered.
At this point it would probably be prudent to ask "What is your definition of acidity?" I suspect when I use the term it means something substantially different from what you mean by it.
 
There may even be inherent acidity differences with respect to color between caramel and crystal malts.
While one can certainly calculate the acidity of the malt WRT a particular pH (if he has pHDI and buffering data for the malt) and plot that against the color of the malt, I don't see any value in that. Kai published some plots of pHDI vs. color and I seem to recall that Riffe had some plots of buffering vs. color. The correlations certainly aren't tight enough to predict pHDI from color with any accuracy, nor a1 from color with any accuracy, let alone a1*(pH - pHDI), i.e. the acidity.
 
Does this hold true after separating unmalted deep roast from malted deep roast barley? I believe there is a big difference between them. And then there are the deep roast grains that are wheat based. And also those that are modified to give deep color but not pass any acrid roast/burnt taste to your recipe. Who knows where these deep roast types fit into the acidity scale mix, or even if they do fit in?

Well, that's a good question. When I was investigating beer color I found that it could be pretty accurately described by the SRM and three or four parameters which we could call b1, b2 and b3. Over an ensemble of 100 beers the beers did group somewhat by type. IOW, if you plotted b1 vs b2 there was some noticeable clustering. I don't have enough data here for that with malt titrations and no one is willing to do the work to get more. Getting more for the color investigation was much easier: open beer, pour into glass, pipette 2 mL into cuvette, stick cuvette in SA, push Scan, at completion of scan push "Transfer (spectrum) to server", drink rest of beer, repeat until too intoxicated to continue. That's the kind of investigation I like! So is it possible that plots of a1 vs pHDI might cluster? Yes, it seems quite possible. Is it likely that that is the case? Don't know.

Now if it is, to the extent that we can get a good fit between pHDI and a1 for an identifiable group, then we are in good shape and it is sufficient to measure pHDI, because that would lead us to a reasonably accurate value of a1, and with that we can calculate acidity, keeping in mind that if a2 and a3 are insignificant we know the acidity at any pHz because acidity is, in that case, linear with pHz.

I'm doubtful that this would prove to be the case, but to prove or disprove it a lot of complete titration data, or at least enough to determine a1 and a2, would be needed, and I don't know where that's going to come from.
 
A.J., I know that you are working hard to literally deny and thus destroy the malt color vs. acidity relationship in order to promote the superior value of testing for DI_pH and then for 3 titration values (the superiority for this approach of which is undeniable), but since for each maltster it all starts out as essentially the same small set of malts (variant by crossbred inheritance, or via direct induced GMO genetic traits, or by regional soils, or regional rain and temperature seasonality), and darker malts clearly have more acidity when considered along with an added process (or malt class) relationship to said acidity (wherein for example caramel malts by class and process are well more acidic in relation to their color than are roast malts), even you cannot fully eradicate the correlation of color to acidity (by classification, as in caramel, base, roast, brown, Munich, Specialty or toasted, wheat or barley, malted or unmalted, etc...), though for years you have tried to talk this correlation down, and I think I understand why. The why being that the R value of correlation is at best only about 0.8 due to malt data scatter.

In the end, in order to market a software package that can be used by those who will not ever DI_pH measure and then also three-fold titrate each and every individual lot of purchased malt/grain, a similar color relationship to acidity by malt class must sneak into even your software at some juncture, even if somehow kept low key or under the rug. Thus the greatest "initial" defect of the gen 1 programs (that being that there is too much scatter between various lots and across the various maltsters' process differences, and across seasonal or regional acidity differences in malts, or even genetic trait differences between malts due to crossbreeding, or now active GMO manipulation) must at some juncture inevitably pop up for your gen 2 software. That your software is leagues above others in its underlying complexity and coding will assuredly baffle and impress the unaware masses who will believe (without actually knowing or understanding why, no less) that such complexity of coding is in and of itself somehow an indication of inherent accuracy, as well as clearly draw the interest of those who specialize in the latest cutting edge software coding abilities (likely, no less, also without actually understanding why it should work in the end, but merely bringing to the table the ability to make it work, as well as look good), but in the end your product will inevitably make educated guesses related to malt color by malt class along the same lines as for all of the others. The real advantage for any of these software packages comes from breaking down malts by maltster, region, process, seasonality, etc... rather than lumping all of them into a small handful of classes meant to merely average across all of such variants. This same thing could be done for the gen 1 products to raise their potential for accuracy. And they could do it without all of the spectacularly baffling and complex coding. All they need is to incorporate hundreds of "individual" malts "hard" data instead of merely averaging across a small handful of (sometimes inferred) malt "classes" as they presently do, even though in the end there will still inevitably be myriads of guesses as to whether or not the malt in the hand of the brewer will actually match to the malt selection choice residing in a coded database. Just as for your software...

But on top of this, all of the gen 1 software (mine being no exception) is rife with some level of errors that go beyond the "initial" color to acidity related poor 'R' correlation issue, and their programmers must work to squash all of such additional math-model errors. This I hope is where your software's true advantage currently lies. But in reality only time and hard correlation to real batches' mash pH data will tell.
 
To borrow a quote from Arthur Schopenhauer.... 'Talent hits a target no one else can hit. Genius hits a target no one else can see.'

At this point I’m convinced the approach taken for Gen 2 pH predictions will include a combination of DIpH, color and actual mash pH values in order to increase pH prediction accuracy. At least for the foreseeable future.
 
To borrow a quote from Arthur Schopenhauer.... 'Talent hits a target no one else can hit. Genius hits a target no one else can see.'

At this point I’m convinced the approach taken for Gen 2 pH predictions will include a combination of DIpH, color and actual mash pH values in order to increase pH prediction accuracy. At least for the foreseeable future.

And a purist like A.J. (who stamps out error at the level of less than 1% to 2%) will likely reject that color can validly be used at all, due to its poor 'R' value of correlation inevitably leading to errors far more massive than 1% to 2%. This should be inevitably true for all of us as well, so it clearly does not involve seeing beyond what is already known, but if in fact malt color is to be completely left out of the picture (as A.J. stated above for his graph), then all software intended for the non-testing and unaware masses must be invalidated thereby, regardless of its generation or complexity. This is the essential point that I'm making. The point that coders such as yourself are deluding everyone (including yourselves) into somehow believing that such genius will in fact hit a target that no one has ever seen before. If color is involved, then massive error potential (and reality) is inevitable. And in the real world pH targets will be missed.

Did you notice that A.J.'s averaging line passes through the center-line data for exactly none of the malts he tested on a color-free (color-independent) basis? Therefore even if color is stamped out of the picture entirely by replacing it with "averaged" data for hundreds of past tested malts, correlation to the real-world malts of today will be found to be faulty if one does not "properly and validly" test every malt every time it is purchased....

And if malts can have anywhere from 35 to 60+ mEq/kg of acidity at a DI_pH of 4.7, then how can you know whether your roughly 300-350L roasted malt lot will be at 35 or 60+ (or anywhere in between) without independently testing it (beyond simply taking a DI_pH)....

So not only is color an invalid criterion, but so also is DI_pH alone, and so also will be the averaging of a multiplicity of actual "past" data points. Validity at the A.J. level of approaching zero error can only come from testing at a lab-control level of expertise, and real world brewers this side of breweries will not be likely to undertake this, or even understand how to undertake it properly if they do decide to undertake it. Most of us can't even trust our own pH meter readings, let alone do and then trust titration readings. So if the criterion of excellence requires both a precise DI_pH meter reading and then a tri-fold set of titration readings, and we can't even believe our own pH meters, then most of us are dead in the water even if we do intend to test....

Short version: The validity of A.J.'s software is entirely dependent upon a multiplicity of testing procedures, all carried out with the confidence of precision accuracy, and all carried out for each and every malt we purchase, each and every time we purchase it.

We have already witnessed one real world case where A.J.'s software solution failed to match a brewer's real world deep roasted recipe brewing experience. And also several testimonies to mash pH's measured at the level of 4.8 to 4.9 pre-adjustment, wherein A.J.'s model presumably predicts closer to mashing straight-up at (give or take) 5.4 pH for these or similar recipes. So there are already a couple of cracks appearing within the complexity of A.J.'s ointment (and at this juncture I must openly admit that my own software has revealed many cracks of similar magnitude, which I'm continually working to better resolve).
 
A.J., I know that you are working hard to literally deny and thus destroy the malt color vs. acidity relationship in order to promote the superior value of testing for DI_pH and then for 3 titration values (the superiority for this approach of which is undeniable),
Not at all. The color vs acidity relationship must stand on its own merits. If you can find a way to tighten the correlations (e.g. by grouping) beyond what other investigators have found then all I would have to say is "Great!".


but since for each maltster it all starts out as essentially the same small set of malts (variant by crossbred inheritance, or via direct induced GMO genetic traits, or by regional soils, or regional rain and temperature seasonality), and darker malts clearly have more acidity when considered along with an added process (or malt class) relationship to said acidity (wherein for example caramel malts by class and process are well more acidic in relation to their color than are roast malts), even you cannot fully eradicate the correlation of color to acidity
It has nothing to do with me. There is a correlation. It just isn't a very good one. I only caution people to be wary of using the fits as predictors of malt properties.


(by classification, as in caramel, base, roast, brown, Munich, Specialty or toasted, wheat or barley, malted or unmalted, etc...), though for years you have tried to talk this correlation down, and I think I understand why. The why being that the R value of correlation is at best only about 0.8 due to malt data scatter.
Exactly.


In the end, in order to market a software package that can be used by those who will not ever DI_pH measure and then also three-fold titrate each and every individual lot of purchased malt/grain, a similar color relationship to acidity by malt class must sneak into even your software at some juncture, even if somehow kept low key or under the rug.
At this point "my software" should be thought of as a set of functions which can be installed in Excel (it may be offered as an Add In) which will much simplify the process of preparing brewing related spreadsheets. In addition to simplifying life these functions offer a robust implementation of the "proton condition" approach to acid/base chemistry.


Thus the greatest "initial" defect of the gen 1 programs (that being that there is too much scatter between various lots and across the various maltsters' process differences, and across seasonal or regional acidity differences in malts, or even genetic trait differences between malts due to crossbreeding, or now active GMO manipulation) must at some juncture inevitably pop up for your gen 2 software.
You are entirely focused on the malt models. The greatest defect of the first generation programs is that they try to force linearity on a non-linear problem, along with just plain sloppiness in the treatment of weak acids and bases. While one can get away with this because the consequences are small errors, there is really, in today's world of previously undreamed of computing power, no reason to continue to tolerate those errors. Would you run your software on a 16 bit machine? They also use bad malt models, but I readily agree that, practically speaking, there may be no alternative. The difference between first and second generation software is that given good malt data a first generation program can give a bad answer, but a second generation program, given good malt data, will not. In many 1st gen. programs you feed in color or type and the program tries to figure out what the pH will be. In the better ones one can, at least, input pHDI, but one cannot enter anything about the buffering properties of a malt. How can a program figure out the pH of a mix of acids, bases and malts if it doesn't know the buffering properties of the malts? It can't, so it has to guess at what they may be. This merges the malt model with the computational model. They are separate things.

That your software is leagues above others in its underlying complexity and coding will assuredly baffle and impress the unaware masses who will believe (without actually knowing or understanding why, no less) that such complexity of coding is in and of itself somehow an indication of inherent accuracy,
Certainly many will be baffled and some may become acolytes based on faith, but that doesn't matter. What matters is whether the 2nd gen software is indeed superior, and it clearly is. The ability of the software to do the calculations properly is quite independent of the malt model problem. Feed these functions malt data that does not represent what the brewer has in hand, or give it bad water data or incorrect strength information on the available lactic acid, and it will give you bad predictions/recommendations. But at least in the second gen. approach the models have been separated and the computation part is solid.

as well as clearly draw the interest of those who specialize in the latest cutting edge software coding abilities (likely, no less, also without actually understanding why it should work in the end, but merely bringing to the table the ability to make it work, as well as look good),
Does everyone who uses the STDEVP function in Excel know what it does? No, but hundreds of thousands of people use it every day because they are confident that given a set of data it returns the standard deviation. It is the same here. One need not understand the details of the Henderson-Hasselbalch equation to calculate the charge on an acid via an invocation of the QAcid function. It is, of course, tremendously helpful to both the STDEVP and QAcid user to understand what those functions do and how they work. A person who doesn't understand what standard deviation is probably won't find the STDEVP function very useful, and a user who doesn't understand the role of charge in acid base problems probably won't find QAcid very useful either.
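
For readers curious about what a function like that is doing under the hood, here is a generic sketch - not A.J.'s actual QAcid code - of the Henderson-Hasselbalch bookkeeping involved: the average charge per mole of a polyprotic acid at a given pH, computed from its pKa values (approximate textbook pKa's used below).

```python
# Generic sketch of the charge on a polyprotic acid at a given pH.
# Not A.J.'s QAcid implementation; just the standard speciation arithmetic.

def acid_charge(ph: float, pkas) -> float:
    """Average charge (eq/mol) on the fully protonated acid HnA at the given pH."""
    h = 10.0 ** (-ph)
    kas = [10.0 ** (-pk) for pk in pkas]
    # Relative (unnormalized) amounts of HnA, H(n-1)A^-, ..., A^n-
    species = [1.0]
    for ka in kas:
        species.append(species[-1] * ka / h)
    total = sum(species)
    # The j-th species carries charge -j
    return sum(-j * s for j, s in enumerate(species)) / total

# Lactic acid (pKa ~3.86) is nearly fully dissociated at mash pH:
print(round(acid_charge(5.4, [3.86]), 3))         # ~ -0.97
# The carbonic acid system (pKa1 ~6.35, pKa2 ~10.33) sits near charge -0.1 at pH 5.4,
# so bicarbonate (which enters at charge -1) absorbs roughly 0.9 protons per mmol --
# which is where the 0.9 mEq/mmol figure earlier in the thread comes from.
print(round(acid_charge(5.4, [6.35, 10.33]), 3))  # ~ -0.1
```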


but in the end your product will inevitably make educated guesses related to malt color by malt class along the same lines as for all of the others.
A developer using the second gen functions may well choose to do things that way, as it certainly makes things convenient, but it is up to him how he wants to have his user choose malt types. Using myself as a prototype developer and the spreadsheet I've made available to a few guys as an example: malt choice is any malt that I have any knowledge of, plus any malt for which the user has pHDI and buffering numbers. In trying to help the fellow with the stout I was forced to choose from the available selections and pick the malts on which I have data which, in my opinion, are likely to be the most like the ones he used. My choices influence the outcome, of course. He said he used Maris Otter. There are lots of Maris Otters out there. I've tested two and they are quite different. One has, with respect to pH 5.5, 84% more acidity. But WRT pH 5.6 it has 174% more acidity (what I love about the 2nd gen approach is how easy it is for me to get numbers like this). This is why I continually tell the guys that are interested in coding this up (as spreadsheets, as iPhone apps, as Web based calculators) that the challenges don't lie in the coding - that's pretty simple - but in the problem of making malt data conveniently available. In this I think we are in complete agreement. But I am firmly of the opinion that this should be done using the second generation calculations approach as they are sounder. Empiricism has been moved out of the computational model and is completely in the malt model domain where it belongs.
 
The real advantage for any of these software packages comes from breaking down malts by maltster, region, process, seasonality, etc... rather than lumping all of them into a small handful of classes meant to merely average across all of such variants.
The more classes and subclasses you have the more complex the access becomes. How big a tree can a user tolerate before he gets to the point where he'd rather just pick a malt from a list?


This same thing could be done for the gen 1 products to raise their potential for accuracy.
Well no. The first generation approach has the inherent inaccuracies imposed on it by linearization. The problems with mishandling of weak acids can be fixed. As an example Bru'n Water now correctly handles bicarbonate additions.

And they could do it without all of the spectacularly baffling and complex coding.
The second generation coding is certainly not spectacularly baffling nor complex. It is quite simple. OK, it uses Newton-Raphson, and when people see mention of that they throw up their hands. But they were all taught Newton-Raphson in high school and have simply forgotten it. This is high school stuff!


All they need is to incorporate hundreds of "individual" malts "hard" data instead of merely averaging across a small handful of (sometimes inferred) malt "classes" as they presently do, even though in the end there will still inevitably be myriads of guesses as to whether or not the malt in the hand of the brewer will actually match to the malt selection choice residing in a coded database.
But how do you get this "hard data"? You have to go to the lab and titrate. A malt is characterized by its titration curve. Perhaps one can find a grouping scheme that shows tight correlation between buffering and pHDI, but you won't know that until you obtain hard buffering data.

Just as for your software...
Yes, just as for my software. We still have the malt problems but we don't, with "my software" have to live with the computational problems as well.


But on top of this, all of the gen 1 software (mine being no exception) is rife with some level of errors that go beyond the "initial" color to acidity related poor 'R' correlation issue, and their programmers must work to squash all of such additional math-model errors.
That's easily done. I did it in a couple of weeks. Toss the empirical approach and go to the charge balance model. This requires an iterative solution, which is easily implemented with Newton-Raphson.
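
For the curious, here is a heavily simplified Python sketch of that idea: a proton/charge-balance residual driven to zero with Newton-Raphson. The malt numbers are invented and the chemistry is stripped down (no carbonate speciation, no calcium or magnesium reactions with malt phosphate), so it only shows the shape of the calculation; it is not A.J.'s implementation.

```python
# Minimal sketch: mash pH as the root of a proton-balance residual, found by
# Newton-Raphson with a numerical derivative. Linear malt buffering is assumed;
# a fuller model would add nonlinear terms (carbonate system, Ca/Mg, acids)
# to the same residual without changing the solver.

def mash_ph(malts, alkali_meq=0.0, acid_meq=0.0, ph0=5.5, tol=1e-6):
    """malts: list of (mass_kg, di_ph, buffering_meq_per_kg_per_ph) tuples,
    with buffering given as a positive magnitude. alkali_meq / acid_meq are
    protons absorbed / supplied by water treatment additions."""
    def residual(ph):
        # Protons absorbed by each malt when pulled from its DI pH to `ph`
        # (negative means the malt releases protons), plus the additions.
        return sum(m * b * (di - ph) for m, di, b in malts) + alkali_meq - acid_meq

    ph = ph0
    for _ in range(50):
        f = residual(ph)
        dfdph = (residual(ph + 1e-4) - f) / 1e-4   # numerical derivative
        step = f / dfdph
        ph -= step
        if abs(step) < tol:
            break
    return ph

# Hypothetical ~90/10 pale/black grist (all numbers invented for illustration):
grist = [(4.00, 5.75, 40.0),   # 4 kg base malt, DI pH 5.75, ~40 mEq/kg/pH
         (0.45, 4.30, 55.0)]   # 0.45 kg black malt, DI pH 4.30, ~55 mEq/kg/pH
print(round(mash_ph(grist), 2))                 # ~5.56 with no additions
print(round(mash_ph(grist, acid_meq=10.0), 2))  # ~5.50 after a 10 mEq acid addition
```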


This I hope is where your software's true advantage currently lies.
Yes, it clearly is.

But in reality only time and hard correlation to real batches' mash pH data will tell.
The charge balance model is solid. You really can't argue with it (I say this with some trepidation as one of the foundations of science is to question EVERYTHING). If you get a bad answer from a program based on the charge model then
1) You didn't code it right, and/or
2) You fed it bad malt or water data, and/or
3) You took a bad reading.
 
To borrow a quote from Arthur Schopenhauer.... 'Talent hits a target no one else can hit. Genius hits a target no one else can see.'

At this point I’m convinced the approach taken for Gen 2 pH predictions will include a combination of DIpH, color and actual mash pH values in order to increase pH prediction accuracy. At least for the foreseeable future.
One of the main advantages of the Gen II approach is that you can use any or all of the above in arriving at a mash pH prediction. It depends on how you invoke the functions and on what you feed them. As for the quote - wow! I'm afraid I can only sign up for the talent part, if that. I could have, and should have, done this years ago but was too lazy. I have had the privilege of working with several geniuses in my career. They are amazing people. I ain't one of them. One of the most interesting things I noticed about them is that they don't sleep.
 