Brulosophy expert can't tell a Blonde Ale made with straight RO from RO plus minerals

It was substantially unsubstantiated and entirely without substance.
Sounds juicy, I wish I had had a chance to read that hot smut before it was removed.
 
As a general rule, and just so members understand the context of posting here at HBT, if a moderator has to delete two posts from a member in the same thread, a temporary ban is likely.
 
So much energy expended. I mean, it's not like any of this thread is going to change the way any of us brew. Give the guy a break - he’s not trashing anyone.
 
So this thread has been kind of interesting (I confess I’ve had a few bourbons), but my primary concern is the Raging Irish Red Ale I have fermenting right now - I added an extra lb of 2-row Canadian AND it was double crushed, so OG was 1.066 instead of 1.050 or 1.052 — but it WILL be excellent! I started this hobby in the 90s and left it (sorry to say) and came back to it 18+ months ago. It’s a great hobby — let’s all enjoy it!!
 
The funny thing is that most Brulosophy posts end with the authors explicitly stating that their results will not change the way they brew.

I've noticed that, pondered it, and never understood it. Why are they going through the nominal motions of learning, only to generally reject what they learn?
 
I've noticed that, pondered it, and never understood it. Why are they going through the nominal motions of learning, only to generally reject what they learn?

Perhaps they understand what their numbers (don't) mean. No need to share that insight with the casual reader, though.
 
I've noticed that, pondered it, and never understood it. Why are they going through the nominal motions of learning, only to generally reject what they learn?

I think the experiments are generally underpowered, but that's partly because the true effects they are looking for are small. Failure to reject the null is not evidence of no effect. So if the time/expense of continuing to brew as usual is low, they choose to keep doing it. It's rational, but I agree it undercuts the purpose enough that I generally categorize their efforts as whimsical.
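To put a rough number on "underpowered" (a quick sketch of my own, not anything from the Brulosophy write-ups): assume a standard triangle test where pure guessers pick the odd beer out with probability 1/3, and suppose, hypothetically, that a quarter of the panel can genuinely tell the difference. Even with that real effect present, a 20-taster panel reaches p < 0.05 well under half the time.

Code:
# Rough power check for a 20-taster triangle test (assumed numbers, not Brulosophy's).
# Guessers pick the odd beer with probability 1/3; hypothetically 25% of tasters are
# true distinguishers, so P(correct) = 1/3 + (2/3) * 0.25 = 0.5.
from scipy.stats import binom

n, alpha = 20, 0.05
p_null, p_alt = 1 / 3, 1 / 3 + (2 / 3) * 0.25

# Smallest number of correct picks that reaches p <= alpha under pure guessing.
critical = next(k for k in range(n + 1) if binom.sf(k - 1, n, p_null) <= alpha)
power = binom.sf(critical - 1, n, p_alt)  # chance the panel actually reaches significance

print(critical)          # 11 correct picks needed out of 20
print(round(power, 2))   # ~0.41: the effect is real, yet "no significance" is the usual result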
 
I've noticed that, pondered it, and never understood it. Why are they going through the nominal motions of learning, only to generally reject what they learn?
Their disclaimer should come with a disclaimer.

New brewers joining the hobby already have a steep learning curve ahead of them; the nonsense Brulosophy peddles is a tantalizing distraction at best and disastrously misleading at worst. Beginners need to be supported and encouraged with clear, easy-to-follow methods and practices. Far too many of them end up leaving the hobby in frustration with how much there is to learn.

I'm all for demystifying the art of brewing, but challenging the validity of established practice based on some "tasters'" inability to reliably pick out a difference is just silly.
 
Their disclaimer should come with a disclaimer.

New brewers joining the hobby already have a steep learning curve ahead of them; the nonsense Brulosophy peddles is a tantalizing distraction at best and disastrously misleading at worst. Beginners need to be supported and encouraged with clear, easy-to-follow methods and practices. Far too many of them end up leaving the hobby in frustration with how much there is to learn.

I'm all for demystifying the art of brewing, but challenging the validity of established practice based on some "tasters'" inability to reliably pick out a difference is just silly.

Agree 100%. But I'll add that the "inability to reliably pick out a difference" is often a misleading exaggeration, given the numbers. When most of the p values are well below 50%, that tells you that there are a lot of differences being detected (but not reaching p < 0.05). When you look at the pages of tables of experimental results, it jumps right out. Now, you can't say for sure which ones, but you can say there are a lot.
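To unpack that a bit (a simulation of my own, using assumed panels of 24 tasters, not Brulosophy's actual data): if nobody could ever tell the beers apart, triangle-test p values would scatter across the whole range, with only around 40% landing below 0.5. With a modest real effect (say, tasters picking correctly 45% of the time instead of the 1-in-3 chance rate), most p values land below 0.5 even though any single panel still usually misses p < 0.05, which is exactly the pile-up of lowish p values described above. The helper below is just for illustration.

Code:
# Simulated triangle-test p values: no effect vs. a modest one (assumed numbers).
# p value = P(at least the observed number correct | pure guessing at 1/3).
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, trials = 24, 10_000

def share_below(p_correct, cutoff):
    correct = rng.binomial(n, p_correct, size=trials)   # simulated panel results
    p_values = binom.sf(correct - 1, n, 1 / 3)          # one-sided exact binomial p values
    return np.mean(p_values < cutoff)

print(share_below(1 / 3, 0.5))    # ~0.4: no real effect, p values spread out
print(share_below(0.45, 0.5))     # ~0.83: modest real effect, low p values pile up
print(share_below(0.45, 0.05))    # ~0.24: ...yet each individual panel usually misses p < 0.05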
 
My biggest gripes and feedback for Marshall & Co. continue to be (and I've said this many times before):

Their goal should not be p<0.05 confidence of a reliable detection of a difference, but rather p<0.20 that MAYBE there MIGHT be a detectable difference. Looked at in this light, their results are far more intriguing, and should spawn additional experimentation to try to repeat/support/refute a potential detectable difference.

Their average tasting population size of ~20 people is far too small for any truly meaningful repeatable results. I know it's a lot to ask, but IF they could get 45-50 people minimum, their results should become far more "reliable" and repeatable.

And a taster population size of 1 during COVID!? Ha!! Basically worthless!... and admittedly about as worthless as my own anecdotes shared over these past couple decades. :ghostly:
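For what it's worth, here is roughly how much a bigger panel would help, using the same assumed effect as the sketch a few posts up (a quarter of tasters genuinely able to tell the difference, so P(correct) = 0.5 in a triangle test). These are my assumptions and my own helper function, not Brulosophy's numbers.

Code:
# Rough triangle-test power at two panel sizes (assumed effect: 25% true
# distinguishers, so P(correct) = 1/3 + (2/3) * 0.25 = 0.5; guessers sit at 1/3).
from scipy.stats import binom

def triangle_power(n, p_true=0.5, alpha=0.05):
    # smallest count of correct picks that is significant under pure guessing
    critical = next(k for k in range(n + 1) if binom.sf(k - 1, n, 1 / 3) <= alpha)
    return binom.sf(critical - 1, n, p_true)

print(round(triangle_power(20), 2))  # ~0.41: a 20-taster panel usually misses this effect
print(round(triangle_power(50), 2))  # ~0.76: a 50-taster panel catches it about three times in four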
 
I guess I'm missing much of the point of the strong opinions here. Brulosophy is fun, magazine-level reading. It's not hardcore science. This is a hobby, I'm an armchair know-it-all (who knows much less than he thinks... wait, is this circular logic?), and I quite enjoy reading the "experiments." In my day job, I'm a researcher who relies on statistics - the Brulosophy stuff I read is nowhere near the rigor I need to pass muster for publication. But the thing is, they know it, and present it as such...

Isn't this just about the fun, anyway? I'm more than happy to scoff at their results and do what I've always been doing, for no other reason than I've always done it that way...

I, for one, think that site makes a nice contribution to the hobby!
 
It's not hardcore science. In my day job, I'm a researcher who relies on statistics - the Brulosophy stuff I read is nowhere near the rigor I need to pass muster for publication. But the thing is, they know it, and present it as such...

IMO, they pretty much do present it as such in the write-ups, or at least that's the way many (maybe most) people take it. And that's a problem. Lots of people believe that if significance was not achieved*, Brulosophy proved that there was no difference. I see it over and over again in forum posts, where people state that Brulosophy has "debunked" something or other.

In the past, I've suggested relatively short disclaimers that could be included in each write-up, that would make it pretty dang clear what the results mean (and don't mean). But I suspect full disclosure would reduce site traffic.

ETA: *I don't mean that most readers know what significance/p values mean. I mean the words that accompany results where p >= .05, i.e. "...indicating participants in this xBmt were unable to reliably distinguish..." Casual readers take that to mean "there's no difference."
 
IMO, they pretty much do present it as such in the write-ups, or at least that's the way many (maybe most) people take it. And that's a problem. Lots of people believe that if significance was not achieved*, Brulosophy proved that there was no difference. I see it over and over again in forum posts, where people state that Brulosophy has "debunked" something or other.

In the past, I've suggested relatively short disclaimers that could be included in each write-up, that would make it pretty dang clear what the results mean (and don't mean). But I suspect full disclosure would reduce site traffic.

ETA: *I don't mean that most readers know what significance/p values mean. I mean the words that accompany results where p >= .05, i.e. "...indicating participants in this xBmt were unable to reliably distinguish..." Casual readers take that to mean "there's no difference."
Perhaps the onus is on the reader to have reasonable critical thinking skills. I mean after all, ONE guy was taste testing. You don’t even need superior critical thinking skills to NOT take that very seriously!
 
Perhaps the onus is on the reader to have reasonable critical thinking skills. I mean after all, ONE guy was taste testing. You don’t even need superior critical thinking skills to NOT take that very seriously!

Absolute blasphemy. You can't possibly be suggesting people should be able to think for themselves? ;)
 
Perhaps the onus is on the reader to have reasonable critical thinking skills.

Have you met people? <jk> Seriously, most people don't understand p values. No amount of critical thinking gets past that without research/education, or an explanation in the write-up.

I mean after all, ONE guy was taste testing. You don’t even need superior critical thinking skills to NOT take that very seriously!

And yet, the stats are presented.
 
In regard to the material being presented as hard science vs fun magazine reading:

they pretty much do present it as such in the write-ups, or at least that's the way many (maybe most) people take it.

I’m not sure I agree with this. I have never viewed their content as irrefutable science and I don’t think “most” people who read it view it that way either. They’re pretty clear about their intent. Additionally, if you’re smart enough to find their opinions, you should be able to find a counter opinion. We need to stop defending the people who can’t think for themselves.
 
I’m not sure I agree with this. I have never viewed their content as irrefutable science and I don’t think “most” people who read it view it that way either. They’re pretty clear about their intent. Additionally, if you’re smart enough to find their opinions, you should be able to find a counter opinion. We need to stop defending the people who can’t think for themselves.

So, is it fair to say it's okay to mislead people, because they should know better?
 
Some interesting discussion here. Reading the comments about how 20 people is too small a sample made me think about all the times I tried a beer because one person recommended it!

I can see the point that new brewers do not need to be exposed to possibly misleading info. Still, I enjoy reading Brulosophy. I am neither very experienced nor very expert, and I do not have any serious opinion about the stuff on Brulosophy. But I will mention that anyone who is new to brewing can very readily access a huge amount of info on the internet, in books, and even here, which is totally overwhelming and mostly useless to a newbie. Brulosophy is just more appealingly presented than a lot of other stuff.
 
So, is it fair to say it's okay to mislead people, because they should know better?
Misleading, as in hiding or manipulating data/info or lying, is not OK. That's not happening here, though. Brulosophy openly explains the process used and openly presents the data that was collected. It's just their methodology and interpretation of the data that doesn't stack up. Anyone with half a brain can decide for themselves how relevant the results are - if people are misled by that then it's their own fault. There's far worse info on the internet, including some provided on this site (and any other discussion forum). But again, if someone takes some random-person-on-the-internet's opinion as gospel then the results are their own fault. Just my opinion.
 
Anyone with half a brain can decide for themselves how relevant the results are - if people are misled by that then it's their own fault.

Nope. I will state again... Most people don't understand p values. No amount of critical thinking gets past that without research/education, or an explanation in the write-up. Combine that with words like "...indicating participants in this xBmt were unable to reliably distinguish..." and you have people being misled. They are not stupid, but they shouldn't be expected to have an advanced understanding of statistics. And Brulosophy could, if they wanted, make the meaning of the results so much clearer, with very little effort.

Just to be clear, I'm not talking about something "subjective" that readers should take with a grain of salt. I'm not talking about Brulosophy's "opinions." I'm talking about the misleading words accompanying the technically correct presentation of the data.
 
Nope. I will state again... Most people don't understand p values. No amount of critical thinking gets past that without research/education, or an explanation in the write-up. Combine that with words like "...indicating participants in this xBmt were unable to reliably distinguish..." and you have people being misled. They are not stupid, but they shouldn't be expected to have an advanced understanding of statistics. And Brulosophy could, if they wanted, make the meaning of the results so much clearer, with very little effort.

Just to be clear, I'm not talking about something "subjective" that readers should take with a grain of salt. I'm talking about the misleading words accompanying the technically correct presentation of the data.
No worries - we'll have to disagree on this one!
 
I'd like to see how many tasters (random people, not experts) in a panel can tell the difference between a Bud Light and a Weihenstephan pils. Or even a Bud Light and a Guinness (without looking). To some people, beer's beer.
 
There's far worse info on the internet, including some provided on this site (and any other discussion forum). But again, if someone takes some random-person-on-the-internet's opinion as gospel then the results are their own fault. Just my opinion.

That could be their new slogan. "Brulosophy, because there is far worse information on the internet"
 
They are not stupid, but they shouldn't be expected to have an advanced understanding of statistics. And Brulosophy could, if they wanted, make the meaning of the results so much clearer, with very little effort.
I agree. I think there should be a standard, but short, explanation of what the P value means that appears in every article. But IMO the Brulosophy crew doesn't fully understand what it means - they would need some help with such an explanation. They would also benefit from doing it.


 
There may be something to the significance of those tests where the P value is at or below 0.005. Perhaps they need to reset their standard from P<=0.05 to P<=0.005.

But the current single-tester version is too much for me to imagine having any validity. A P<=0.005 standard would require at least 14 out of 21 tasters correctly identifying the odd beer out.
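Those thresholds are easy to check (a quick sketch of my own, assuming the standard one-sided exact binomial test used for triangle panels, where pure guessing gives a 1-in-3 chance of picking the odd beer; the helper function is just for illustration):

Code:
# Minimum correct picks a 21-taster triangle test needs at the usual p <= 0.05
# and at the stricter p <= 0.005 suggested above (guessing probability = 1/3).
from scipy.stats import binom

def min_correct(n, alpha, p_guess=1/3):
    # smallest k with P(at least k correct) <= alpha under pure guessing
    return next(k for k in range(n + 1) if binom.sf(k - 1, n, p_guess) <= alpha)

print(min_correct(21, 0.05))     # 12 of 21
print(min_correct(21, 0.005))    # 14 of 21, matching the figure above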
 