Acidified malt vs lactic acid

That Brulosophy experiment was interesting. Phosphoric acid. Better taste? More stable than lactic?

You expect Brulosophy to find that it doesn’t matter. But apparently there is a perceived difference between the two acids.

When Brulosophy finds a difference, you can pretty much bank on it. When they don't, see this presentation for one of the reasons (there are others) you might want to be skeptical: https://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf
 
Thanks for that. I hold a few opinions about Brulosophy.

First, I really enjoy them—top five podcasts in the car.

Second, I think some of the pushback, or even resentment, toward them is due to the fact that they regularly slay sacred cows. We are attached to our sacred cows.

I have made posts poking fun at the Brulosophy testing. @doug293cz will remember a recent one.

Third, it's good that some sacred cows get slaughtered. Over the decades we have seen many things we believed to be true turn out to be completely unfounded. Good riddance.

Fourth, I agree with @VikeMan that the personal subjectivity of human tasting is a real problem for Brulosophy. Not their fault. But they have to deal with messy, bias-influenced human perceptions. A great example was given by the Brulosophy presenter in one of their podcasts. He was talking frankly about the difficulty of testing.

He said that where A and B are identical and C is the outlier, there are a number of people who are certain that A and B are different from each other. And further, they are certain that they prefer B over A. These people are shocked when they are told that A and B are identical.

What's more, these are not people off the street. The testers are all experienced in brewing.

Wow.

Anyway. I like Brulosophy.
 
Maybe if the results were repeatable across more than one sample group, and if the pool of sample groups and of individuals within those groups were larger, I'd be more inclined to accept the data as something other than a curiously random compendium of opinions. Given its limited scope and size, it carries little statistical significance, despite the author's mumbo-jumbo application of the laws of probability.

TL;DR: That said, I still read the Exbeeriments but don't put a lot of stock in their findings.
 
... given its limited scope and size, it carries little statistical significance, despite the author's mumbo-jumbo application of the laws of probability.

This is another problem, one which isn't mentioned (I think) in the presentation. The smallish number of testers results in low statistical power. Real differences that would be slam dunks with a larger panel get "suppressed" (for lack of a better word) by that low power. Basically, with a small number of testers you need a higher percentage identifying the different beer than you would with a larger number in order to clear the p-value hurdle.
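To make that concrete, here's a minimal sketch (my own illustration, not from the presentation or from Brulosophy's write-ups) of the exact binomial math for a triangle test, where guessing alone picks the odd beer out 1/3 of the time. The alpha = 0.05 cutoff and the panel sizes are assumptions chosen for illustration.

```python
# Minimal sketch: smallest number of correct picks k that clears an assumed
# alpha = 0.05 cutoff in a triangle test, using the exact one-sided binomial tail.
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_correct_for_significance(n, alpha=0.05, p_null=1/3):
    """Smallest k such that P(X >= k) <= alpha under the guessing-only null."""
    for k in range(n + 1):
        if binom_tail(k, n, p_null) <= alpha:
            return k
    return None

for n in (12, 24, 48, 100):  # panel sizes chosen only for illustration
    k = min_correct_for_significance(n)
    print(f"n={n:3d}: need {k} of {n} correct ({k/n:.0%}) to reach p <= 0.05")
```

The required fraction of correct picks shrinks toward 1/3 as the panel grows, which is exactly the "higher percentage needed with fewer testers" point above.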
 
When Brulosophy finds a difference, you can pretty much bank on it. When they don't, see this presentation for one of the reasons (there are others) you might want to be skeptical: https://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf
I actually take it a step further. I know they don't like collecting preference data, but when there's a significant difference and a clear preference for one beer or the other, that tends to carry more weight with me. The significant variables that end up split 50/50 on preference just make it confusing.
 
I like them too. Given the state of homebrewing, I think we should rally around anyone who helps support interest in the hobby. Nothing they are doing warrants the negativity that is thrown their way.
 
I think some of the pushback, or even resentment, toward them is due to the fact that they regularly slay sacred cows.
Actually almost all of the pushback is due to their statistical illiteracy.
Not their fault. But they have to deal with messy, bias-influenced human perceptions. A great example was given by the Brulosophy presenter in one of their podcasts. He was talking frankly about the difficulty of testing.
They could solve most of this problem by reading a basic stats text and presenting the data the way @VikeMan suggested. Nine times out of ten, when they say "the tasters couldn't tell the difference" they should be saying "we didn't have enough tasters to do the test correctly."
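As a rough sketch of that "not enough tasters" point (my own illustration, not anything @VikeMan or Brulosophy published), you can compute the power of a triangle test, i.e. the chance a panel of a given size reaches p < 0.05 when a difference really exists. The assumed true hit rate of 0.5 (1/3 guessers plus some genuine discriminators) is made up for the example.

```python
# Rough sketch of triangle-test power under an assumed true hit rate.
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def triangle_power(n, p_true, alpha=0.05, p_null=1/3):
    """Probability of clearing the alpha cutoff when the true hit rate is p_true."""
    k_crit = next(k for k in range(n + 1) if binom_tail(k, n, p_null) <= alpha)
    return binom_tail(k_crit, n, p_true)

for n in (24, 50, 100, 200):  # panel sizes chosen only for illustration
    print(f"n={n:3d}: power = {triangle_power(n, p_true=0.5):.2f}")
```

Under that assumed hit rate, a panel of a couple dozen tasters reaches significance less than half the time, while a few hundred tasters would make the same real difference a near-certain detection.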
 
Sorry for derailing to Brulosophy. But I have to say that our online community is sophisticated enough to see when a test is strong or weak. If 10 or 14 out of 24 people draw a conclusion, we know that carries little weight.

But sometimes the tests are strong and interesting. Sometimes the tests show that only 4 out of 24 people can spot the outlier. And we don't have to take their word for it. We then test it ourselves and find out.

Everyone knows that their sample sizes are minuscule. I'm sure they would love to have the resources for more longitudinal studies.

It's certainly better than back in the eighties when all we had was Kevin posting in a magazine about his hunch about why something happened.
 
I had no idea Brulosophy got some hate. I enjoy their content and I like their attempt at a scientific approach. They are able to test things better than I could (and save me the time and effort of doing so), but I'd never take any of their results as gospel, just as a good indicator.
 
Sorry for derailing to Brulosophy. But I have to say that our online community is sophisticated enough to see when a test is strong or weak. If 10 or 14 out of 24 people draw a conclusion, we know that carries little weight.

But that's exactly one of the problems. In a triangle test, if there is no detectable difference, we don't expect half the tasters to randomly pick the different beer correctly (though many people do expect this).

We expect 1/3.

10 out of 24: more than 1/3.

Brulosophy conclusion: "these results suggest tasters in this xBmt were unable to reliably distinguish..." It's disingenuous, because most readers' takeaway is "Brulosophy proved that there is no difference." And they know this. It's what keeps them in business.

And no, our online community by and large is not sophisticated enough to see this. Note the number of posts here and elsewhere that claim Brulosophy has "debunked" or "disproven" something. Fact: No Brulosophy experiment has ever "debunked" any claim. Not one. Anyone familiar with triangle testing, null hypotheses, etc. would know this. Most of our online community is not (familiar) and does not (know). After a decade of Brulosophy, I feel very confident saying this.
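For anyone who wants to check the arithmetic on the 10-out-of-24 example, here is the exact one-sided binomial p-value under the guessing-only null (a quick sketch of my own, not Brulosophy's analysis):

```python
# Quick check of the 10-out-of-24 arithmetic: exact one-sided p-value when the
# null hypothesis (no detectable difference) predicts a 1/3 hit rate.
from math import comb

n, k, p = 24, 10, 1/3
p_value = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"P(at least {k} of {n} correct by guessing alone) = {p_value:.3f}")
# Well above 0.05: the panel fails to reach significance, which is not the
# same thing as demonstrating that no difference exists.
```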
 
Nothing they are doing warrants the negativity that is thrown their way.

Misleading homebrewers doesn't warrant criticism? I can't agree with that.
 