Brulosophy expert can't tell a Blonde Ale made with straight RO from RO plus minerals

Homebrew Talk

An acceptable alternative would be to determine whether an alternative is even required. Do we really need someone running these types of experiments to tell us best practice in brewing, especially given the amount of great information out there for free?

My point is not to ditch a discerning eye toward making sure we don’t bias ourselves WRT brewing, but rather to be more cautious about whom we place our trust in, including myself and others.
This post is not responsive to my question. It's just another "appeal to authority" and a way of rephrasing "read the commercial textbooks," which may or may not be directly applicable in a homebrew setting.

Brew on :mug:
 
This post is not responsive to my question. It's just another "appeal to authority" and a way of rephrasing "read the commercial textbooks," which may or may not be directly applicable in a homebrew setting.

Brew on :mug:

Scrap my previous answer. I’ll separate myself from my site for a minute and say this: I’ve had good luck interfacing with people directly and using my knowledge of brewing calculations, practical experience and theoretical knowledge to help them achieve what they feel are worthwhile and essential improvements in their brewing. That’s all I (or We) can do honestly.
 
In my club there is a resounding "Well, the Brulosophy guys have proven that..." ...and they don't seem to grasp isolating and testing one variable, vs. compounding 3-4 "shoddy" methods until you're observably hurting that beer. These are folks who can definitely brew great beer, but they are given license to laziness, and the "science" gets repeated as if it had debunked established practice. Nobody in the club is decocting anymore, but I can sure notice a difference in commercial beers and my own that were decocted.

I like that they are doing these experiments. My favorite thing they do is the Hop Chronicles; if I could donate to have them do those more often, I would. So my major contention is that the Bru crew is a sort of headwind against making increasingly good beer and honing your technique. They are like a think tank that says global warming isn't that bad because an individual commuter contributes an insignificant (p = 0.07!!) amount of carbon emissions, so go ahead and roll coal in that 12 mpg truck. Rather than taking the findings in stride (for that group of 20 it was insignificant... but we know nothing about them), they start repeating it as "debunked" gospel.
 
I've been following this thread since it started and I've stayed quiet until now... y'all are really letting a couple of guys who are doing "experiments" bother you when in reality it doesn't affect you or your brewing in any way, shape, or form.

Marshall and Crew have been more than clear on the website and podcast that their experiments should not be taken as gospel, and on more than a few occasions Marshall and his Crew have said they don't normally do things the way they do in these experiments.

Another thing, for those of you talking about statistical significance: you all realize that saying the accepted quantitative significance threshold of p = 0.05 that researchers use should be changed is comical and frankly ridiculous.

Maybe instead of letting these guys bother you so much, maybe, just maybe, you should take a step back and ask yourself why you're letting it bother you so much. Do you think you can do a better job? Do you think the experiment is poorly designed and executed? Do you not believe the results? If so, then do it yourself, publish your results, and let your data speak. Otherwise, why let it affect you so much?
 
Maybe instead of letting these guys bother you so much, maybe, just maybe, you should take a step back and ask yourself why you're letting it bother you so much. Do you think you can do a better job? Do you think the experiment is poorly designed and executed? Do you not believe the results? If so, then do it yourself, publish your results, and let your data speak. Otherwise, why let it affect you so much?
Well stated. I think some folks just like to be angry.
 
Another thing, for those of you talking about statistical significance: you all realize that saying the accepted quantitative significance threshold of p = 0.05 that researchers use should be changed is comical and frankly ridiculous.

Are you under the impression that p < 0.05 means there was a difference detected and that p >= 0.05 means there was no difference detected? That's not the case. Not in a Brulosophy experiment or in any experiment.

Brulosophy could choose to be very clear about what any given p value means, by stating what the odds were of achieving it if there were no difference. They choose not to do that.

When p = 0.10, for example, that means that, assuming no difference, there was only a 10% chance of getting as many correct choices (or more) as they got. Anyone who understands that would realize that it is not likely there is no difference. But what words do they use? This: "... indicating participants in this xBmt were unable to reliably distinguish ..."

@Sammy86, do you think it would be better if Brulosophy actually explained what the numbers mean in each experiment? All I'm asking for is the truth.
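For anyone who wants to check that arithmetic: the p value in a triangle test is just a one-tailed binomial tail probability with a 1/3 guessing rate. A minimal Python sketch; the 20-taster, 10-correct panel below is a made-up illustration, not any particular xBmt:

```python
from math import comb

def triangle_p_value(n, k, chance=1/3):
    """One-tailed binomial p value: the probability of k or more
    correct picks out of n tasters if everyone is purely guessing."""
    return sum(comb(n, i) * chance**i * (1 - chance)**(n - i)
               for i in range(k, n + 1))

# Hypothetical panel: 10 of 20 tasters pick the odd beer out.
p = triangle_p_value(20, 10)
print(f"p = {p:.3f}")  # roughly 0.09 for this made-up panel
```

A p value near 0.10 here says only that guessing alone would produce this many correct picks about one time in ten, which is rather different from "participants were unable to reliably distinguish."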
 
Another thing, for those of you talking about statistical significance: you all realize that saying the accepted quantitative significance threshold of p = 0.05 that researchers use should be changed is comical and frankly ridiculous.

Not frankly ridiculous per the US Supreme Court in a 2011 decision. A company which claimed zinc shortens the duration of colds stood to lose a lot of money from piling-up lawsuits by people claiming to have lost their sense of smell instead of, or in addition to, their cold. The company's defense, in requesting that the lawsuits be tossed out as frivolous, was that numerous studies with p < 0.05 or better showed that their zinc product did not statistically cause such loss of smell. The Supreme Court said that the statistical studies were completely irrelevant and useless, and that those claiming loss of smell could proceed to sue accordingly. This decision played a big part in the ASA's diligently researching and then concluding in 2016 that p < 0.05 isn't at all what it had previously been purported to be.

Some of the ASA members involved developed a math model showing p < 0.05 to fail ~29% of the time, and p < 0.01 to fail ~11% of the time. This is in good agreement with claims that at least 25% of medical studies, when repeated, fail to arrive at the same conclusion as the initial study.
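One concrete way to see why repeated small studies disagree: even when a real difference exists, a small panel often misses it. A hedged Python sketch; the numbers are assumptions for illustration only (a 50% true-detection rate versus 1/3 by guessing, a 20-taster panel, and 11 correct as the usual one-tailed binomial cutoff for p < 0.05 at n = 20), not drawn from any study mentioned here:

```python
from math import comb

def tail(n, k, p):
    # P(k or more correct out of n) when each pick is correct with prob p
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Assumed scenario: tasters are truly correct 50% of the time,
# and 11/20 correct is the p < 0.05 significance cutoff.
power = tail(20, 11, 0.5)
print(f"chance of reaching significance: {power:.1%}")  # about 41%
```

Under these assumptions the panel reaches significance only about 41% of the time, so the very same true difference would "fail to replicate" in a majority of runs.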

Which of the following is true?
P > 0.05 is the probability that the null hypothesis is true.
1 minus the P value is the probability that the alternative hypothesis is true.
A statistically significant test result (P ≤ 0.05) means that the test hypothesis is false or should be rejected.
A P value greater than 0.05 means that no effect was observed.

If you answered “none of the above,” you may understand this slippery concept better than many researchers.

“P < 0.05” Might Not Mean What You Think: American Statistical Association Clarifies P Values

https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1154108
 
Maybe instead of letting these guys bother you so much, maybe, just maybe, you should take a step back and ask yourself why you're letting it bother you so much?

It bothers me because it misleads people, and leads them to take unwise shortcuts. One of the things I enjoy about this hobby is helping new brewers learn to make better beer. So naturally it grinds my gears when I see them being misled.
 
Are you under the impression that p < 0.05 means there was a difference detected and that p >= 0.05 means there was no difference detected? That's not the case. Not in a Brulosophy experiment or in any experiment.

Brulosophy could choose to be very clear about what any given p value means, by stating what the odds were of achieving it if there were no difference. They choose not to do that.

When p = 0.10, for example, that means that, assuming no difference, there was only a 10% chance of getting as many correct choices (or more) as they got. Anyone who understands that would realize that it is not likely there is no difference. But what words do they use? This: "... indicating participants in this xBmt were unable to reliably distinguish ..."

@Sammy86, do you think it would be better if Brulosophy actually explained what the numbers mean in each experiment? All I'm asking for is the truth.

Truth is you're proving my point... it doesn't matter what I think; they are using the currently accepted research practice of p = 0.05, and they are clear about how many participants are required to show statistical significance based on the p value. In every single article they state the number needed to show statistical significance based on the p value of 0.05.
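That per-article threshold is straightforward to reproduce. A sketch in Python, assuming the standard one-tailed binomial model for triangle tests (1/3 guessing rate); the panel sizes are arbitrary examples:

```python
from math import comb

def binomial_tail(n, k, chance=1/3):
    # P(k or more correct out of n) under pure guessing
    return sum(comb(n, i) * chance**i * (1 - chance)**(n - i)
               for i in range(k, n + 1))

def min_correct_for_significance(n, alpha=0.05, chance=1/3):
    """Smallest number of correct triangle-test picks out of n tasters
    whose one-tailed binomial p value falls at or below alpha."""
    for k in range(n + 1):
        if binomial_tail(n, k, chance) <= alpha:
            return k
    return None

for n in (20, 24, 30):
    print(n, min_correct_for_significance(n))
```

For 20 tasters this gives 11 correct picks as the significance threshold, which matches the commonly published triangle-test tables.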
 
It bothers me because it misleads people, and leads them to take unwise shortcuts. One of the things I enjoy about this hobby is helping new brewers learn to make better beer. So naturally it grinds my gears when I see them being misled.

Again, proving my point. It doesn't affect your brewing in any way, shape, or form. And if people read the article using the same critical thinking skills they were taught in, let's say, middle school, then it shouldn't affect them either.
 
Truth is you're proving my point...it doesn't matter what I think, they are using the current accepted research practice of P=.05 and they are clear with how many participants are required to show statistical significance based on the P value. In every single article they state the number needed to show statistical significance based on the P value of .05.

That's not entirely correct. p < 0.05 simply means that "significance" has been reached. It does not necessarily mean there's a difference. And p >= 0.05 does not necessarily mean there's not a difference. Any scientist or statistician will agree with what I just said. All it actually tells you, vis-a-vis a potential difference in triangle testing, is how likely it was (assuming no difference) that "X" number (or more) would have made the correct selection.

I do not believe any legitimate research paper would take a p value of 0.10 and say "... indicating participants were unable to reliably distinguish..."

They would very likely say "failed to reach significance" and leave it at that, because people who read those papers (not hobbyists) know exactly what that means. There's no need to add the misleading words "... indicating participants were unable to reliably distinguish..." when it is not likely that there was no difference.
 
Again, proving my point. It doesn't affect your brewing in any way, shape, or form.

You are right. It doesn't affect my brewing. But it affects others. I happen to care if others can make good beer.

And if people read the article using the same critical thinking skills they were taught in, let's say, middle school, then it shouldn't affect them either.

Middle school doesn't teach p values. It sounds like you're saying that if someone is misled by misleading words, it's their own fault. That's sad.
 
I'm curious about the science behind the idea of running the 10 semi-blind tests. This is pretty different from triangle testing, since the brewer must know the variable being tested, unlike in triangle testing.

It sounds like a fair amount of work to do, but it certainly sounds like it might be a better tool than pouring a couple of half pints and tasting them back and forth while wondering whether that big new piece of equipment, or the extra hour spent slaving away in the brewery on a complicated technique, made a perceptible difference.

They do a nice job attempting to eliminate confirmation bias, and their so-called surprising results are not really all that different from the many experiments that have been done in wine tasting (I'm thinking of some of Frederic Brochet's experiments).
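If each of those 10 semi-blind tests is a paired comparison where the brewer picks which glass is which, the chance rate per trial is presumably 1/2 rather than the triangle test's 1/3. A hedged sketch of the resulting binomial math (the two-sample forced-choice setup is my assumption about the protocol, not a description of it):

```python
from math import comb

def two_choice_p_value(n, k):
    """One-tailed p value for k correct identifications out of n
    paired (A/B) trials, assuming a 50% guess rate per trial."""
    return sum(comb(n, i) * 0.5**n for i in range(k, n + 1))

# p values for 6 through 10 correct calls out of 10 trials
for k in range(6, 11):
    print(k, round(two_choice_p_value(10, k), 3))
```

Under that assumption, 9 of 10 correct calls are needed before p drops below 0.05 (8 of 10 gives about 0.055), which shows how blunt a 10-trial instrument is.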
 
It sounds like a fair amount of work to do but certainly sounds like it might be a better tool than pouring a couple half pints and tasting them back and forth while thinking about whether that big new piece of equipment or extra hour spent slaving away in the brewing on a complicated technique made a perceptible difference.

I agree that this is better than just tasting back and forth. Completely honest presentation of the results would make it even better-er.
 
I always thought helping people and sharing correct information are what HBT was about. Calling out bad information when you see it is just part of being a good forum citizen. In this case, I am thankful people with greater knowledge on statistics take the time to explain in detail the facts.
 
That's not entirely correct. p < 0.05 simply means that "significance" has been reached. It does not necessarily mean there's a difference. And p >= 0.05 does not necessarily mean there's not a difference. Any scientist or statistician will agree with what I just said. All it actually tells you, vis-a-vis a potential difference in triangle testing, is how likely it was (assuming no difference) that "X" number (or more) would have made the correct selection.

I do not believe any legitimate research paper would take a p value of 0.10 and say "... indicating participants were unable to reliably distinguish..."

They would very likely say "failed to reach significance" and leave it at that, because people who read those papers (not hobbyists) know exactly what that means. There's no need to add the misleading words "... indicating participants were unable to reliably distinguish..." when it is not likely that there was no difference.

You're talking about home brewers, doing home brewer experiments...you could say they're all invalid because they aren't done in a lab in a completely controlled environment. Be real here, it bothers you because you don't like the language used. That's ok too, it's valid.
 
Be real here, it bothers you because you don't like the language used.

:D Yeah, I think I've been pretty clear about that. The language is misleading, and that's the issue I have with the write-ups.
 
You guys are missing the point. Brulosophy is a great website with fun projects to follow in so many different areas of the brewing hobby. Everything from recipes, procedures, temperatures, yeast comparisons – a lot of great stuff!

You are getting bogged down on all having to be “right” about statistical values, about which there are obviously professional differences in methodology. You don’t have to argue some point into the ground.

This is supposed to be a fun hobby. Look at the effort he has put into this; it is fantastic and very interesting. All I hear is a lot of puffery about how you can do it better. Well, do it then; I’m getting tired of hearing the sniping from the peanut gallery.
 
You are getting bogged down on all having to be “right” about statistical values, about which there are obviously professional differences in methodology. You don’t have to argue some point into the ground.

I'll assume you're talking about me? I'm not taking issue with Brulosophy's computation of statistical values or with their experimental methodology. So let's eliminate that strawman.
 
Yes, there are gullible, not so smart people in home brewing. Why do we care how good or bad their beer is? And who really gives a crap about p values?
Good point!! Maybe I should not reveal this info, but I do not have a clue what p values are. I just skim over that part of the Brulosophy posts and look at the actual numbers of folks who like, prefer, or cannot tell the difference.
 
Good point!! Maybe I should not reveal this info, but I do not have a clue what p values are. I just skim over that part of the Brulosophy posts and look at the actual numbers of folks who like, prefer, or cannot tell the difference.
Haha, me too. I mean I may have learned about p values in a statistics course way too many years ago, but it’s long forgotten. This thread has caused me to pay more attention to Brulosophy than I had previously and I dug up an old interview with Marshall which caused me to have a greater affinity for his exbeeriments. I may tweak my process a little, but my learning curve has been a series of process adjustments based on input from a lot of sources.
 
Again, proving my point. It doesn't affect your brewing in any way, shape, or form. And if people read the article using the same critical thinking skills they were taught in, let's say, middle school, then it shouldn't affect them either.

I'll go back to my post, wherein a majority of the college-educated members of our brew club (even one Ph.D. in biology) are now outsourcing their methodology to Brulosophy, citing them as THE reason they no longer follow _______ practice, using it as an excuse to take shortcuts. Most people aren't approaching the hobby with statistical rigor (who learns stats in middle school? You're lucky if your college degree requires it... I digress). They do strive to make better beer, which is one of the prime missions of a homebrew club (it certainly is in ours): a community advancing the pursuit of great homebrew – including the golden, amber, etc. liquid. I'm an educator by trade, and it carries over into my passion projects. Frankly, Brulosophy is a headwind. I have to spend more time explaining why they should consider fermenting a lager at 52°F rather than at ale temps – or at least split their fermentations and compare for themselves instead of accepting p = 0.07.
 
or at least split their fermentations and compare for themselves
What is the p value for that?

Is Brulosophy, despite its well-documented failings, not an order of magnitude better than "my last batch of this in October I think I did X (was drinking while brewing and spilt beer on my notes, so not really sure), and this time I did Y, and OMG best beer ever, X sucks" – which is pretty much my scientific method?
 
This thread is unlocked.

Personal attacks aren't allowed.

Political discussion isn't allowed.

Keep it civil. Stay on topic.

be-excellent-to-each-other
 
This thread looks more and more like a Monty Python sketch every day.

"Is this a methodically proper study then?"
"Yes indeed!"
"Really?"
"No"

Cheers! (A "Cheese Shop" tribute ;))
 