Do "professional" brewers consider brulosophy to be a load of bs?

Thanks Denny, I kind of knew this, but was trying to be cheap. :D It's not like I'm not already putting 4 or 5 hours aside on brew day, and I have always mashed for at least an hour, usually longer because I forget to get my strike water hot early. Anyway, thanks for the reminder.

:off: Question: If I want more unfermentable sugars, is a shorter mash an option? I understand a higher mash temp (156-160°F) does this, but it would be good to know. :mug:

Theoretically, yes. But malt these days is so high in diastatic power that mash length and temp make less difference than they used to. I mashed a recipe at 153 and 168 and found no detectable difference in body or flavor, and the OG and FG were the same on both. That's only a single data point, but it's interesting enough that I want to look into it further.
 
Theoretically, yes. But malt these days is so high in diastatic power that mash length and temp make less difference than they used to.

I love Scottish ales, and I have had the same experience. Some I mashed at 150 and some at 158 or so and I didn't notice the difference. Although they may have been months apart. I also used long boil times for both, usually 90 to 120 mins. The kettle caramelization does add to the body and unfermentables. :mug:
 
I guess I don't understand how diastatic power is germane to the conversation at hand.
 
I love Scottish ales, and I have had the same experience. Some I mashed at 150 and some at 158 or so and I didn't notice the difference. Although they may have been months apart. I also used long boil times for both, usually 90 to 120 mins. The kettle caramelization does add to the body and unfermentables. :mug:

Although there are chemical reactions that take place in the kettle, caramelization isn't one of them. Unless you're referring to pulling some wort and doing a boildown.
 
ponder
In 1813 the British chemist Edward Charles Howard invented a method of refining sugar that involved boiling the cane juice not in an open kettle, but in a closed vessel heated by steam and held under partial vacuum. At reduced pressure, water boils at a lower temperature, and this development both saved fuel and reduced the amount of sugar lost through caramelization.
 
We live in a world where psychology has a huge influence over every aspect of our lives, where global warming is taken seriously, and XX or XY chromosomes are all but ignored. I'm thinking there's no need to distinguish between science and "real" science where brewing beer is concerned.
 
Here's a bit more perspective on the confusion caused by Brulosophy's misinterpretation of significance (p). There are two types of problems of interest.


  1. We make a change in materials or process in the expectation that the beer will be improved. We hope that most of our 'customers' will notice a change.
  2. We make a change in materials or process in order to save time and/or money. We hope that few, if any, of our 'customers' will notice a change.

'Customers' is in quotes because while in the case of a commercial operation they are literally customers, in the home brewing case they are our families, friends, colleagues or brew club members (or all of these).

The two cases are quite distinct and call for different interpretations of triangle test results. In the first case we do not want to be misled by our interpretation of the test data into thinking that there is a difference in the beers when in fact there isn't. We want the probability of this kind of error (Type I) to be quite low. In the second case we don't want to be misled into thinking that there is no difference when in fact there is. We want the data to imply that the probability of this kind of error (Type II) is quite low.
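For concreteness, here is the binomial model those two error probabilities come from; this restatement is my addition, but it is the standard triangle-test model and is consistent with the numbers worked below. With $n$ panelists, $k$ correct identifications, and a pure guesser picking the odd beer out with probability $1/3$:

\[
p_{\mathrm{I}} = \Pr\!\left(X \ge k \,\middle|\, p = \tfrac{1}{3}\right) = \sum_{i=k}^{n} \binom{n}{i} \left(\tfrac{1}{3}\right)^{i} \left(\tfrac{2}{3}\right)^{n-i}
\]

while under an alternate hypothesis in which a fraction $P_d$ of tasters can truly tell the beers apart, each panelist answers correctly with probability $p_c$ and the Type II risk is the lower tail:

\[
p_c = P_d + \frac{1 - P_d}{3}, \qquad p_{\mathrm{II}} = \Pr\!\left(X \le k - 1 \,\middle|\, p = p_c\right).
\]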

Suppose we have two lagers: one brewed with a triple decoction mash and one brewed with no decoctions (melanoidin malt instead). Further suppose that these beers are presented to a panel of 40 and 14 of these correctly detect the difference. Brulosophy would plug these numbers into the p formula or table, come up with p = 0.47, and declare the test not statistically significant, and those with a passing familiarity with statistics (and, unfortunately, quite a few well acquainted with them too) would declare the test to be of no statistical significance and thus flawed.
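Here is a minimal sketch of that computation under the binomial model above (the use of scipy is my choice for illustration, not anything from Brulosophy's write-ups):

```python
# Type I p-value for 14 of 40 correct in a triangle test,
# where pure guessing is right 1 time in 3.
from scipy.stats import binom

n, k = 40, 14
p_type1 = binom.sf(k - 1, n, 1/3)  # P(X >= 14) under the null
print(round(p_type1, 2))           # ~0.47, the figure quoted above
```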

Thus far we have determined that if the null hypothesis is true (decoction makes no difference), the probability of getting 14 or more correct answers is 47%. A brewer considering going to the extra trouble of decoctions would like to see that probability be very much lower than that, which would suggest the null hypothesis is unlikely to be correct and that, therefore, the alternate hypothesis (that decoction does make a difference) is true. These data don't show that, so this brewer would probably decide that the evidence that decoction makes a perceptible change isn't very strong, and that he should not bother with it. The data are not that usable to him, but they are still sufficiently so for him to decide on a course of action. This does not mean the data aren't significant and should be tossed on the scrap heap while the investigators wring their hands trying to discover what they did wrong.

To see this we look at the data from the perspective of Item 2, i.e. of the brewer who has been doing decoctions and wants to see if he can get away with dropping them in favor of melanoidin malt. He looks at the significance of the test with respect to Type II errors. He assumes that there IS a difference and wants to determine the probability of getting the test data under that assumption. Things are not quite symmetrical here, as there is only one null hypothesis (that the beers are not differentiable) whereas there are an infinite number of alternate hypotheses, e.g. that 1% of the population can differentiate them, that 2% can, that 2.47651182% can, and so on. So he decides he'll drop decoctions if 20% or less of his 'customers' can detect a difference. He now computes the probability of getting 13 or fewer correct answers (fewer than the 14 observed) in a beer that is 20% detectable (he can do the computation with the spreadsheet I offered earlier) and finds that probability to be 0.0495. He can, therefore, be quite confident from these data that he can drop the decoctions, and thus the same test which was not statistically significant to the guy considering adding decoctions to his process is now seen to be significant to the guy considering dropping them.
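The same computation in code form, again a sketch under the binomial model rather than the poster's actual spreadsheet:

```python
# Type II calculation: if 20% of 'customers' can truly tell the
# beers apart, each panelist is correct with probability
# Pd + (1 - Pd)/3 (detectors always right, the rest guessing).
from scipy.stats import binom

n, k, Pd = 40, 14, 0.20
pc = Pd + (1 - Pd) / 3             # ~0.467
p_type2 = binom.cdf(k - 1, n, pc)  # P(fewer than 14 correct)
print(round(p_type2, 4))           # ~0.05, the 0.0495 quoted above
```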

To return this to the original question: a pro brewer who is aware of this aspect of the triangle test might extract some useful guidance from the Brulosophy tests (though he would want to see some improvements in the way they conduct them). Finding a pro brewer who has this awareness, however, would probably not be easy. You'd have to hang out in the hospitality suites at ASBC or MBAA meetings. The guys you'd find who know about this probably wouldn't be brewers but rather the professors and lab rats who go to these things. And I'll note that the ASBC's MOA mentions nothing of Type II error (though I have not seen the latest version of it).
 
Here's a bit more perspective on the confusion caused by Brulosophy's misinterpretation of significance (p). There are two types of problems of interest.

Really interesting way of looking at different sides and perspectives of data. I enjoy your posts very much, thanks.
 
'Customers' is in quotes because while in the case of a commercial operation they are literally customers, in the home brewing case they are our families, friends, colleagues or brew club members (or all of these).

Thus what Brulosophy has done is very applicable to home brewers who want to improve their products or potentially save time - if said home brewers know how to interpret Brulosophy's findings. It would, of course, help immensely if Brulosophy did too.
 
Come on, man! Who amongst us hasn't sat down over a home brew and argued P-values all night!? :mug:

:D

Me, for one...;)

I've said it before and I'll say it again...take these experiments for what they are, not what you want them to be. Use them to find out for yourself. We're making beer at home, not looking for a cure for cancer.
 
I've said it before and I'll say it again...take these experiments for what they are, not what you want them to be. Use them to find out for yourself. We're making beer at home, not looking for a cure for cancer.

Well said, Denny.

It would be nice if the Brulosophy experiments were more accurate, but I take them for what they are: unreliable. Therefore I ignore their results and do my own testing.

As for the amount of effort one puts into one's beer, that of course is a personal decision. I go by the axiom that anything worth doing is worth doing right. Mediocre beer is ubiquitous. To each his own though.
 
What I'd like them to be is a source of useful information, and they are if you know how to interpret the results properly. Why are you so offended that some of us would like to know what the experiments can potentially tell us, beyond that some guys brewed some beer hot and some cold and couldn't draw a conclusion as to whether it made a difference? And they brewed some beers with loose hops and packaged hops and couldn't tell whether it made a difference, or they tried Hochkurz and couldn't tell whether it made a difference. Properly interpreted, there is quite a bit of information of value in the actual results, even though it is clear that they generally used a panel size that was too small and made other errors as well.

Of course in many cases, because of the small panel size, one really can't draw reasonable conclusions, but in others one can. Here's the data from the first few on the list of all the experiments. It's a pain to read, I know. I've seen tables here but darned if I know how to put one in. All the Type II numbers are for a hypothesized 20% differentiability.

Experiment             Panelists  Correct  p Type I  p Type II  MLE Pd  Z      SD     Lower Lim  Upper Lim
Hochkurz               26         7        0.815     0.012      0.000   1.645  0.130  0.000      0.215
Roasted Part 3         24         10       0.254     0.245      0.125   1.645  0.151  0.000      0.373
Hop Stand              22         9        0.293     0.226      0.114   1.645  0.157  0.000      0.372
Yeast Comparison       32         15       0.078     0.441      0.203   1.645  0.132  0.000      0.421
Water Chemistry        20         8        0.339     0.206      0.100   1.645  0.164  0.000      0.370
Hop Storage            16         4        0.834     0.021      0.000   1.645  0.162  0.000      0.267
Ferm. Temp Pt 8        20         8        0.339     0.206      0.100   1.645  0.164  0.000      0.370
Loose vs. bagged       21         12       0.021     0.772      0.357   1.645  0.162  0.091      0.624
Traditional vs. short  22         13       0.012     0.830      0.386   1.645  0.157  0.128      0.645

In any case, the Hochkurz data shows good support (p = 0.012) for the hypothesis that no more than 20% of tasters will be able to detect a difference. The Traditional vs. Short test shows strong support (0.012) for the hypothesis that doing things properly makes a difference. The yeast comparison test shows fair support for the hypothesis that yeast strain makes a difference. The hop storage test shows strong support (0.021) for the hypothesis that the storage methods compared don't make much difference. Etc.
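For anyone who wants to check or extend the table, here is a sketch that reproduces its columns under the assumptions it appears to use (binomial triangle-test model, hypothesized 20% differentiability for the Type II column, one-sided 90% limits with z = 1.645); the function name and structure are mine, not the poster's spreadsheet:

```python
from math import sqrt
from scipy.stats import binom

def triangle_stats(n, k, Pd_hyp=0.20, z=1.645):
    """Triangle-test statistics for k correct answers from n panelists."""
    p_type1 = binom.sf(k - 1, n, 1/3)        # P(X >= k) under pure guessing
    pc_hyp = Pd_hyp + (1 - Pd_hyp) / 3       # P(correct) if Pd_hyp can detect
    p_type2 = binom.cdf(k - 1, n, pc_hyp)    # P(X < k) if the difference is real
    pc_hat = k / n
    Pd_hat = max(0.0, 1.5 * (pc_hat - 1/3))  # MLE of the detectable fraction
    sd = 1.5 * sqrt(pc_hat * (1 - pc_hat) / n)
    lower = max(0.0, Pd_hat - z * sd)
    upper = Pd_hat + z * sd
    return p_type1, p_type2, Pd_hat, sd, lower, upper

# The 'Yeast Comparison' row: 15 of 32 correct
print(triangle_stats(32, 15))  # ~(0.078, 0.441, 0.203, 0.132, 0.000, 0.421)
```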

It's true that robust conduct of a triangle test raises some practical issues, but getting the statistics part right (choosing a proper panel size, interpreting the data) is pretty simple if you own a laptop that runs Excel. The attitude that doing things in a half-assed fashion is fine, when it requires little more effort to do them right, doesn't seem justified. If you don't care about the value of your work, why bother to do it at all?
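As for "choosing a proper panel size," here is one way that step might look in code; the error targets (alpha = 0.05, beta = 0.20) and the search itself are my illustrative choices, not anything from the post:

```python
from scipy.stats import binom

def panel_size(Pd=0.20, alpha=0.05, beta=0.20, n_max=500):
    """Smallest panel n (and cutoff c) keeping both error risks in bounds."""
    pc = Pd + (1 - Pd) / 3
    for n in range(3, n_max + 1):
        for c in range(n + 1):
            if binom.sf(c - 1, n, 1/3) <= alpha:     # Type I: P(X >= c) guessing
                if binom.cdf(c - 1, n, pc) <= beta:  # Type II: P(X < c) at Pd
                    return n, c
                break  # raising c only worsens Type II; grow the panel instead
    return None

print(panel_size())  # around n = 85 -- far larger than the panels tabled above
```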
 
Professional brewers don't want to tell you nada; they just wanna keep it all up their sleeve.

I haven't found that to be true at all. Anytime I visit a brewery and have a question about a yeast they use, or a temperature, or the contents of a spice mix they use, they're super excited to talk about it with a fellow brewer and do everything except write down the exact recipe for you.
 
Me, for one...;)

I've said it before and I'll say it again...take these experiments for what they are, not what you want them to be. Use them to find out for yourself. We're making beer at home, not looking for a cure for cancer.

I have a lot of respect for you and what you've done for the homebrewing community, but this is, honestly, a very defensive-looking post. If you don't enjoy the discussion going on here, then you're free to ignore it.

As AJ said, some of us want to know what the numbers are actually saying, because as we've seen from a few posters in this thread (and even more so ALL OVER THIS SITE), people are taking these exbeeriments and saying, "See, none of this sh!t matters!" Yet we all know from our own experiences that it often does.

Now we know from looking further into the data that their claim that a result is "not significant" (or maybe just the claim of the crazies who go around making it for them) might not mean what people think. For most of these experiments, they haven't reached a low enough threshold to say, "see, this doesn't matter." The only thing they've proven with some of these is that it *might not* matter. But then when we see it taken to a larger sample size, the numbers say, "actually, this likely does matter."

This stuff is important to us. Sure, it might not be attempts at curing cancer, but that doesn't mean we don't find significance in this sort of discussion in OUR lives. That's all there is to it. If you don't enjoy the fact that some of us are enjoying the discussion, then, again, you're free to ignore it.
 
This stuff is important to us. Sure, it might not be attempts at curing cancer, but that doesn't mean we don't find significance in this sort of discussion in OUR lives. That's all there is to it. If you don't enjoy the fact that some of us are enjoying the discussion, then, again, you're free to ignore it.

I'll also add that this discussion can help people understand how all science is reported on, especially in the popular press/media.

Too many headlines start off with "studies say...." when the results aren't actually what the headline says.

Everyone can benefit from a greater understanding of science, statistics, and interpretation of data.
 
I'll also add that this discussion can help people understand how all science is reported on, especially in the popular press/media.

Too many headlines start off with "studies say...." when the results aren't actually what the headline says.

Everyone can benefit from a greater understanding of science, statistics, and interpretation of data.

Oh, so you're saying that this sort of discussion can have implications above and beyond just homebrewing?

Well, who cares!? Because it's not curing cancer...
 
Heck, I am just learning about double and triple bonds and the periodic table of the elements; I need all the science I can git! %)
 
@aj

I am trying really hard to understand the Type I and Type II examples. I feel I am very close; can you maybe put it a little more simply for me, if possible? I would appreciate it. I am starting to get the idea of going less than 14 in one case vs. the other, but I just can't quite put it all together. Thanks again, I would appreciate it.
 
@aj

I am trying really hard to understand the Type I and Type II examples. I feel I am very close; can you maybe put it a little more simply for me, if possible? I would appreciate it. I am starting to get the idea of going less than 14 in one case vs. the other, but I just can't quite put it all together. Thanks again, I would appreciate it.

;)

[Attached image: Type I and II errors diagram]
 
@aj

I am trying really hard to understand the Type I and Type II examples. I feel I am very close; can you maybe put it a little more simply for me, if possible?

It's a little tricky to explain because it is a bit like trying to figure out what the sentence "It's likely that you will find the statement that most people don't like Mozart to be untrue." means.

Do you own a fish finder? If so, think about that. If you set the sensitivity too low, the screen is all white even though Moby Dick is under the boat. This is Type II error (false dismissal). If you set the gain too high, the screen becomes cluttered with targets that aren't actually there. These represent Type I errors (false alarms).
 
to the actual question posted-

i have seen posts that seem useful, pro or homebrewer-- flavor profiles on a new hop, swapping malts/yeasts in a recipe to compare and contrast. that's something you could rely on to try out a new recipe. to the original point- don't think anyone in the pro world is changing their operations based on the "results"

reminds me of a marketing vehicle in the guise of a science program, a la Dr. Oz. a bit of science on top of product placements, branding, merchandising, sponsorships, etc. it's friggin brilliant. enough science to seem meaningful, but no claims to be definitive. "more testing needed" is the equivalent of "tune in next week". people look, and read, and debate it endlessly. even pay for access. virality personified. great business i'd think.

for actual science there's the brewers association, the big schools domestic and abroad, hop growers/brokers funding research and experiments, yeast labs doing all kinds of new research, and so on and so forth. granted, not all, or not even much of that info is available to homebrewers. so they fill the void.

have met some. nice guys. beer lovers, no doubt. don't want to sound harsh or condescending. in summary, i think infomercial might be a little too harsh.....maybe info-tainment? all with a grain of salt.
 
I'll also add that this discussion can help people understand how all science is reported on, especially in the popular press/media.

Too many headlines start off with "studies say...." when the results aren't actually what the headline says.

Everyone can benefit from a greater understanding of science, statistics, and interpretation of data.

I basically never trust the media. There's a simple test:

Take a subject that you're an expert on. Read basically any mainstream journalist discussing the subject, and ask yourself whether their take on it is accurate. You'll find, FAR more often than not, that it's nowhere close.

Once you realize that, you realize that their reporting on all the issues that you're not an expert on is equally suspect.
 
I basically never trust the media. There's a simple test:

Take a subject that you're an expert on. Read basically any mainstream journalist discussing the subject, and ask yourself whether their take on it is accurate. You'll find, FAR more often than not, that it's nowhere close.

Once you realize that, you realize that their reporting on all the issues that you're not an expert on is equally suspect.

I never expect a non-technical reporter to get the technical stuff exactly right. And headlines are written to sell newspapers (or page views). But the popular phrase "I never trust the media" often means "they are deliberately lying to us", which is quite a different proposition.
 
  1. We make a change in materials or process in the expectation that the beer will be improved. We hope that most of our 'customers' will notice a change.
  2. We make a change in materials or process in order to save time and/or money. We hope that few, if any, of our 'customers' will notice a change.

The two cases are quite distinct and call for different interpretations of triangle test results. In the first case we do not want to be misled by our interpretation of the test data into thinking that there is a difference in the beers when in fact there isn't. We want the probability of this kind of error (Type I) to be quite low. In the second case we don't want to be misled into thinking that there is no difference when in fact there is. We want the data to imply that the probability of this kind of error (Type II) is quite low.

I'd think in the first case we would actually want our customers both to notice the difference and to show a preference for one or the other. If planning to make such a change, I'd want to be convinced customers will be able to tell the difference, and that they show a preference for the more expensive process.

In the second case, since we are hoping customers don't notice the change, there would be an expectation going into the study that the preference data will not be meaningful.

To be honest, I still get lost in the math, even though I've taken statistics and am not repelled by the science. But I'd have reached the same conclusions as your two hypothetical brewers in this example simply by looking at the 14/40 result. A third of 40 is 13.33, so you are right on top of the expected result from random chance. Are the beers the same? Who knows. But there is no indication from this experiment that customers can tell them apart. In the first example I am probably going to look for something else to do to improve my product. Maybe an investment in packaging or distribution will be more meaningful.

In the second case I am not going to be inclined to triple decoct for a competition or when brewing for "customers" who won't be impressed by the label. However, if I have a situation where triple decoction can be part of my labeling/branding and I have customers--say other brewers/homebrewers--who will find that label appealing, then I'm probably not convinced by the study that I should stop doing it. Triangle tasting is very different from pouring a pint of beer and saying "hey, come taste my triple decocted lager".
 
I never expect a non-technical reporter to get the technical stuff exactly right. And headlines are written to sell newspapers (or page views). But the popular phrase "I never trust the media" often means "they are deliberately lying to us", which is quite a different proposition.

A different proposition, so what? The media does engage in deception and lying; "deliberate" is superfluous as a lie is deliberate by any definition.
 