Acidified malt vs lactic acid

That Brulosophy experiment was interesting. Phosphoric acid. Better taste? More stable than lactic?

You expect Brulosophy to find that it doesn’t matter. But apparently there is a perceived difference between the two acids.

When Brulosophy finds a difference, you can pretty much bank on it. When they don't, see this presentation for one of the reasons (there are others) you might want to be skeptical: https://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf
 
Thanks for that. I hold a few opinions about Brulosophy.

First, I really enjoy them—top five podcasts in the car.

Second, I think some of the pushback against them, even resentment, is due to the fact that they regularly slay sacred cows. We are attached to our sacred cows.

Third, I have made postings poking fun at the Brulosophy testing. @doug293cz will remember a recent one.

However, it's good that some sacred cows get slaughtered. Over the decades we have seen many things we believed to be true turn out to be completely unfounded. Good riddance.

Fourth, I agree with @VikeMan that the personal subjectivity of human tasting is a real problem for Brulosophy. Not their fault. But they have to deal with messy, bias-influenced human perceptions. A great example was given by the Brulosophy presenter in one of their podcasts. He was talking frankly about the difficulty in testing.

He said that where A and B are identical and C is the outlier, there are a number of people who are certain that A and B are different from each other. And further, they are certain that they prefer B over A. These people are shocked when they are told that A and B are identical.

What's more, these are not people off the street. The testers are all experienced in brewing.

Wow.

Anyway. I like Brulosophy.
 
I agree with @VikeMan that the personal subjectivity of human tasting is a real problem for Brulosophy.
Maybe if the results were repeatable over more than one sample group, and if the universe of sample groups and the individuals in them were numerically larger, I'd be more inclined to accept the data as something other than a curiously random compendium of opinions. As it stands, its limited scope and size give it little statistical significance, despite the author's mumbo-jumbo application of the laws of probability.

TL;DR: That said, I still read the Exbeeriments but don't put a lot of stock in their findings.
 
... its limited scope and size give it little statistical significance, despite the author's mumbo-jumbo application of the laws of probability.

This is another problem, one which isn't mentioned (I think) in the presentation. The smallish number of testers results in low statistical power. Real differences that would be slam dunks with a larger number of testers are "suppressed" (for lack of a better word) by the low statistical power that comes with a smaller number of testers. Basically, with a small number of testers you need a higher percentage identifying the different beer than you do with a larger number, in order to clear the p-value hurdle.
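For anyone who wants to see this concretely, here's a minimal sketch (Python, standard library only) of how the hurdle and the power move with panel size. The 25% "true detector" rate is my assumption for illustration, not a number from any xBmt:

from math import comb

def tail_prob(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def critical_value(n, alpha=0.05, chance=1/3):
    # smallest number of correct picks that clears p < alpha in a triangle test
    return next(k for k in range(n + 1) if tail_prob(k, n, chance) <= alpha)

def power(n, p_detect, alpha=0.05, chance=1/3):
    # chance of clearing the hurdle when a fraction p_detect truly detect;
    # the non-detectors still guess correctly 1/3 of the time
    p_correct = p_detect + (1 - p_detect) * chance
    return tail_prob(critical_value(n, alpha, chance), n, p_correct)

for n in (24, 50, 100):
    k = critical_value(n)
    print(f"n={n}: need {k}/{n} correct ({k / n:.0%}); power with 25% detectors: {power(n, 0.25):.0%}")

Under that assumed detector rate, a 24-taster panel needs 13/24 correct (54%) and detects the real difference well under half the time, while a 100-taster panel only needs 42% correct and catches it almost every time.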
 
When Brulosophy finds a difference, you can pretty much bank on it. When they don't, see this presentation for one of the reasons (there are others) you might want to be skeptical: https://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf
I actually take it a step further. I know they don't like collecting preference data, but when there's a significant difference and it shows a clear preference for one or the other, that tends to hold more influence for me. The significant variables that split 50/50 from a preference standpoint make it confusing.
 
Anyway. I like Brulosophy.
I do too. I think, given the state of homebrewing, we should rally around anyone who helps support interest in the hobby. Nothing they are doing warrants the negativity that is thrown their way.
 
I think some of the pushback against them, even resentment, is due to the fact that they regularly slay sacred cows.
Actually almost all of the pushback is due to their statistical illiteracy.
Not their fault. But they have to deal with messy, bias-influenced human perceptions. A great example was given by the Brulosophy presenter in one of their podcasts. He was talking frankly about the difficulty in testing.
They could solve most of this problem by reading a basic stats text and presenting the data the way @VikeMan suggested. Nine times out of ten, when they say "the tasters couldn't tell the difference," they should be saying "we didn't have enough tasters to do the test correctly."
 
Sorry for derailing to Brulosophy. But I have to say that our online community is sophisticated enough to see when a test is strong or weak. If 10 or 14 out of 24 people draw a conclusion, we know that carries little weight.

But sometimes the tests are strong and interesting. Sometimes the tests show that only 4 out of 24 people can spot the outlier. And we don't have to take their word for it. We then test it ourselves and find out.

Everyone knows that their sample numbers are minuscule. I'm sure they would love to have the resources for more longitudinal studies.

It's certainly better than back in the eighties when all we had was Kevin posting in a magazine about his hunch about why something happened.
 
I had no idea Brulosophy got some hate. I enjoy their content and their attempt at a scientific approach, and they're able to test things better than I could (saving me the time and effort of doing it myself), but I'd never take any of their results as gospel, just as a good indicator.
 
Sorry for derailing to Brulosophy. But I have to say that our online community is sophisticated enough to see when a test is strong or weak. If 10 or 14 out of 24 people draw a conclusion, we know that carries little weight.

But that's exactly one of the problems. In a triangle test, if there is no detectable difference, we don't expect half to randomly choose the different beer correctly (but many do expect this!).

We expect 1/3.

10 out of 24: more than 1/3.

Brulosophy conclusion: "these results suggest tasters in this xBmt were unable to reliably distinguish..." It's disingenuous, because most readers' takeaway is "Brulosophy proved that there is no difference." And they know this. It's what keeps them in business.

And no, our online community by and large is not sophisticated enough to see this. Note the number of posts here and elsewhere that claim Brulosophy has "debunked" or "disproven" something. Fact: No Brulosophy experiment has ever "debunked" any claim. Not one. Anyone familiar with triangle testing, null hypotheses, etc. would know this. Most of our online community is not (familiar) and does not (know). After a decade of Brulosophy, I feel very confident saying this.
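For the curious, the 10-of-24 arithmetic is easy to check with a one-tailed exact binomial tail (Python, standard library only; as a later post notes, one-tailed is the kind of test Brulosophy uses):

from math import comb

n, k = 24, 10
# probability of at least k correct picks if every taster is just guessing (p = 1/3)
p_value = sum(comb(n, i) * (1/3)**i * (2/3)**(n - i) for i in range(k, n + 1))
print(f"P(>= {k}/{n} correct by pure chance) = {p_value:.3f}")  # about 0.25

A roughly one-in-four chance of seeing that result from pure guessing is neither a significant finding nor evidence of "no difference"; it's simply inconclusive.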
 
I do too. I think, given the state of homebrewing, we should rally around anyone who helps support interest in the hobby. Nothing they are doing warrants the negativity that is thrown their way.

Misleading homebrewers doesn't warrant criticism? I can't agree with that.
 
Most of the drive behind my lack of interest in that site is old tape now, but jeez, at the start of their notorious "tests" there was one comparing oxidation resulting from racking to kegs using "different techniques" :rolleyes: It was the most appalling abuse of beer (ON BOTH KEGS) I'd ever witnessed or even heard of: both kegs were lidless and the beer air-dropped through the opening, one from up top and the other with the hose inside the keg. Of course both kegs went on to a disastrous future, which the authors still tried to compare to confirm their supposition.

Over and Out right there for me.
 
Brulosophy conclusion: "these results suggest tasters in this xBmt were unable to reliably distinguish..." It's disingenuous, because most readers' takeaway is "Brulosophy proved that there is no difference." And they know this. It's what keeps them in business.
I get your point, but I also get why they say what they do. Imagine the backlash if they did not pick a consistent method. I do like having a conversation about some of the challenges of the method.

Interesting that you think having non-significant results is keeping them in business. I always felt it wasn't helping their cause and that most brewers would get bored reading.
 
Interesting that you think having non-significant results is keeping them in business.

TL;DR: He's not claiming that non-significant results are keeping them in business; he's claiming that representing non-significant results as a negative result is.


13 correct answers would be P<0.05 which means it's 95% certain that the result was positive and wasn't just random guesses. Even that isn't completely conclusive, but it hits the accepted threshold that says it /probably/ is. 7 correct answers would be P>0.95 which means it's 95% certain it's a negative result; anything between those results is considered statistically inconclusive.

But instead of stating a result of 10 as "the result is statistically inconclusive, and neither proves nor disproves the hypothesis," they state it as "these results suggest tasters in this xBmt were unable to reliably distinguish," which, the poster is right, is an incorrect statement. They are wording it as a negative result when it's not. If anything, it actually suggests a possibly positive result (81.5% probability).

His claim is that by misrepresenting the result as negative, it not only misleads the audience into believing it was "debunked" or whatever when it wasn't, but also keeps their audience engaged, whereas declaring a lot of their experiment results as meaningless would drive people away.
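If you want to verify those thresholds, a short sweep of the exact one-tailed binomial tail (Python, standard library only) shows where the inconclusive band sits for a 24-taster panel:

from math import comb

def p_value(k, n=24, chance=1/3):
    # P(at least k correct) under pure guessing in a triangle test
    return sum(comb(n, i) * chance**i * (1 - chance)**(n - i) for i in range(k, n + 1))

for k in range(8, 14):
    print(f"{k:2d}/24 correct: p = {p_value(k):.3f}")

This confirms that 13/24 is the first count with p < 0.05, and that results like 10/24 (p of about 0.25) fall in the in-between zone that should be reported as inconclusive.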
 
^^^This. (Essentially, though Brulosophy uses a one-tailed test, not two.)
 
He's not claiming that non-significant results are keeping them in business; he's claiming that representing non-significant results as a negative result is.
Thanks for the explanation.

I was coming at this from the view that debunking is a pretty poor business model. If a business were their intent, they would be better off proving products right and then branding and selling them as "Brulosophy proven." Who knows, maybe we will get there.
 
Back on topic: I use liquid lactic acid; the Reinheitsgebot can kiss my ass. 💋
I do, however, have a kg of sauer malt that I bought before I came to this conclusion, so I might use it sometime in the future.
 
P<0.05 which means it's 95% certain that the result was positive and wasn't just random guesses
It means that if there were no real difference, a result at least that extreme would arise from random variation less than 5% of the time. But the experiment has still only been conducted once, and repeating it several times with more and more testers can still show that the preliminary result was in fact just random variation. Anybody want to hazard a guess at how many times cancer has been cured in small phase I clinical trials that failed to hold up in larger phase II and III trials?
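To put a rough number on that, here's a tiny simulation (Python, standard library only; the 24-taster panel and 13-correct cutoff come from the discussion above) of how often a one-off experiment "finds" something when there is truly nothing to find:

import random

def one_panel(n=24, p=1/3):
    # no real difference exists, so every taster is guessing
    return sum(random.random() < p for _ in range(n))

random.seed(1)
trials = 10_000
# 13 correct is the p < 0.05 cutoff for a 24-taster triangle test
false_positives = sum(one_panel() >= 13 for _ in range(trials))
print(f"significant-looking results with no real difference: {false_positives / trials:.1%}")

Roughly 3% of such runs clear the bar on luck alone (the exact tail is about 0.028), which is exactly why a single small trial, positive or negative, needs replication before it means much.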
 
It means that if there were no real difference, a result at least that extreme would arise from random variation less than 5% of the time
can still show that the preliminary result was in fact just random variation.
The very next words that you snipped were:

"that isn't completely conclusive"

Which was my way of saying what you did. :) Sorry if that wasn't clear; I was trying to be succinct. My post got very lengthy while I was writing it, and I ended up chopping a lot of the fine detail out to make it more readable.
 
I was trying to be succinct. My post got very lengthy while I was writing it, and I ended up chopping a lot of the fine detail out to make it more readable.
Yeah, I get that. But another big part of the problem is that our brains are pretty much hardwired not to understand statistics, and trying to simplify things generally only leads to more and more misunderstanding.
 
When you say you use 1 or 2 ml, are you doing rough justice based on pale vs. darker grists, or are you using calculation software?
I use BeerSmith. I actually haven't used acid lately; now that I use RO, the malt bill does fine for pH adjustment. With darker grists I occasionally use baking soda. The acid comes into play when you use harder water.
 
I use BeerSmith. I actually haven't used acid lately; now that I use RO, the malt bill does fine for pH adjustment. With darker grists I occasionally use baking soda. The acid comes into play when you use harder water.

Didn't know that about RO. I just got my RO system. Looking forward to using the pH meter on it.
 