Brulosophy expert can't tell a Blonde Ale made with straight RO from RO plus minerals

Homebrew Talk

Status
Not open for further replies.
I think there's a severity level to "wrong/bad" if you're going to say "it's absolutely wrong/bad". There are some good things that come from the articles, and they don't claim to be the end-all/be-all of information (at least from my observation). It's OK to present information and be wrong. We have to remember that they aren't getting paid, and neither is the panel. It's not like they're figuring out how to cure covid; it's just brewing beer, and the sharing of their experiences in a semi-controlled way. I feel like they should really remove the p-values and just present the information as is, with comments from participants along the lines of: "Did you like sample X? Why? Did you like sample Y? Why? Did you taste a difference between the two? What did you taste as different in sample X vs. Y?" This is basically what the writer does at the end with a semi-blind test.
 
The question to me is: while they do explicitly state that their findings are never conclusive, a sole data point, yada yada yada, at what point do you find that disclaimer to be in bad faith? That they know exactly what they're doing and the message they're sending to most of their readership, but the disclaimer allows them to say to those who see through it, "but we didn't say it was conclusive". Like the fine print at the bottom of a contract. A salesman's bait and switch.

I don't know if that's Brulosophy or not. But I'm leaning in that direction.
 
The question to me is: while they do explicitly state that their findings are never conclusive, a sole data point, yada yada yada, at what point do you find that disclaimer to be in bad faith?

I haven't seen any disclaimers of any kind in the experiment write-ups, let alone an explanation of what the numbers actually mean.
 
I haven't seen any disclaimers of any kind in the experiment write-ups, let alone an explanation of what the numbers actually mean.

It's never highlighted as a disclaimer. At least not in the few "xbmts" I could be bothered to suffer through reading. But every single one was caveated with language of "this is a single example so take it for what it's worth". The exact phrasing escapes me, but it was almost identical every time. And read like a disclaimer.
 
It's never highlighted as a disclaimer. At least not in the few "xbmts" I could be bothered to suffer through reading. But every single one was caveated with language of "this is a single example so take it for what it's worth". The exact phrasing escapes me, but it was almost identical every time. And read like a disclaimer.

I've read tons of them and haven't seen what you're talking about. Example?
 
Marshall Schott is a psychologist at a prison.

http://brulosophy.com/contributors/marshall-schott/
He's showing you people the ink blots... what do you see?

He's explained his point of view many times:

http://brulosophy.com/2015/09/17/exbeeriment-results-corrections/
http://brulosophy.com/2017/07/20/nothing-matters-reviewing-the-first-150-exbeeriments/
Others have done things with those p-values:



Have any of you emailed this gentleman and asked him his opinion of the "new" p-value recommendations?

Numbers are slanderous; they can be read to say anything you want them to say.
 
I've read tons of them and haven't seen what you're talking about. Example?

http://brulosophy.com/2015/06/22/fermentation-temperature-pt-3-lager-yeast-exbeeriment-results/
That's the one that came to mind that I saw quoted everywhere at the time.

After all, as we rarely fail to point out, this is but a single point of data involving a beer produced under relatively well controlled conditions using a particular list of ingredients. It’s totally possible a different strain used in a different wort would produce more noticeable differences.

That's the caveat I saw repeated the other times in very similar wording. Though in looking it isn't as common as I thought. Though as others indicate (and the quoted text does as well), it's pointed out often.
 
That's the caveat I saw repeated the other times in very similar wording. Though in looking it isn't as common as I thought. Though as others indicate (and the quoted text does as well), it's pointed out often.

Perhaps it was common back in the day, and now, not so much. I just looked at the most recent 15, and they don't have it. It really should be in every write-up, along with about 50 words explaining what the p value actually means (besides "significant" or "non-significant")
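On the "about 50 words explaining what the p value actually means" point: in a triangle test each taster gets three samples (two identical, one different) and picks the odd one out, so chance alone gets it right 1/3 of the time. The reported p-value is the probability of seeing at least the observed number of correct picks if the beers were truly indistinguishable. Here's a minimal, stdlib-only sketch of that exact binomial calculation; the function name and panel numbers are mine, chosen only for illustration:

```python
from math import comb

def triangle_test_p_value(n_tasters: int, n_correct: int) -> float:
    """Exact one-sided binomial p-value for a triangle test.

    Under the null hypothesis (the two beers are indistinguishable),
    each taster picks the odd sample out by chance with probability
    1/3. The p-value is the probability of at least n_correct correct
    picks out of n_tasters arising by chance alone.
    """
    p0 = 1 / 3
    return sum(
        comb(n_tasters, k) * p0**k * (1 - p0) ** (n_tasters - k)
        for k in range(n_correct, n_tasters + 1)
    )

# Illustrative panel: 20 tasters, 11 correct picks.
print(f"p = {triangle_test_p_value(20, 11):.3f}")  # p = 0.038
```

Crucially, a "non-significant" result only means the data didn't clear the chosen cutoff (usually 0.05) with that panel size; it is not evidence that the two beers taste the same.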
 
Perhaps it was common back in the day, and now, not so much. I just looked at the most recent 15, and they don't have it. It really should be in every write-up, along with about 50 words explaining what the p value actually means (besides "significant" or "non-significant")

If they're not even paying that lip service any more, I'm even more inclined to believe when they do include it that it's in bad faith.

All the ones I actually read in full were from back in the day. If it's gotten even worse since then, I'm all the more glad I disregarded them years ago.
 
This is my all-time favorite Bru experiment (I'm not calling it an exbeeriment). They brewed a German leichtbier and asked their tasters to tell them what style it was. Most said kolsch, with a handful saying some kind of pale lager. I'll give them those. To an uneducated blind palate, it can be tough to pick a specific beer when there are a bunch of styles all very similar to each other. One person said Vienna lager, which should be pretty easy to tell apart, but that isn't the interesting part. People said it was a witbier, saison, gose, sour beer, fruit/spiced beer and autumn seasonal beer. Gose and sour beer? And these are the people they use to test their experiments.

http://brulosophy.com/2018/10/11/short-shoddy-german-leichtbier/
 
Gose and sour beer? And these are the people they use to test their experiments.

This is why I'm actually a fan of a single taster with multiple tests (providing the taster is blind to the difference between the beers). If one good taster can reliably and consistently tell the difference between two beers, that's good enough for me to say they are perceptibly different. I don't need data that includes people that can't tell the difference between a leichtbier and a gose!
 
Every single Brulosophy experiment is invalid.
The way to perform a triangle test in a validated form is laid out in sensory evaluation textbooks; Meilgaard's is the best known.
If the experimental design is invalid, the conclusions are invalid.
Non-validated designs lead to very high random error (in the stats sense), and thus to lots of experiments with no statistical significance.
Prost!
 
To my knowledge everyone is in the same room, and they can perhaps talk among themselves, watch facial expressions, etc. There are also, to my knowledge, no palate cleansers used. I hope they at least totally randomize cup coloration, so no one can take a neighbor's smile or frown at the red cup to mean their own red cup is also suspect in some way.
 
For those taking potshots at Brulosophy, there's nothing stopping brewers from doing their own experiments with/without water manipulations on different styles and using different brewing methods. Then you can set up your own tasting panels and tabulate the conclusions.

Please post your results; it's an interesting subject.
 
For those taking potshots at Brulosophy, there's nothing stopping brewers from doing their own experiments with/without water manipulations on different styles and using different brewing methods. Then you can set up your own tasting panels and tabulate the conclusions.

Please post your results; it's an interesting subject.
I imagine you sending a research article for peer-review, getting criticized on the merits of your methods, process, and data, and responding "well, if you don't like it, you do it".
Science doesn't work this way. At any level.
Anecdotes are a dime a dozen.
Data collected this way are just anecdotes.
 
I imagine you sending a research article for peer-review, getting criticized on the merits of your methods, process, and data, and responding "well, if you don't like it, you do it".
Science doesn't work this way. At any level.
Imagining what other people may or may not do doesn't actually accomplish anything.
Your assertion that "science doesn't work that way" is merely an anecdotal opinion, which you have stated is a "dime a dozen".
Looking for differences between Brulosophy's blog posts and peer reviewed scientific research articles is an unfair comparison. The results are simply what they are for the particular beer that was being sampled. The methods used may or may not be the same if you try to use them.
I stand behind my statements: if you can do it better yourself, and you want to put the time, effort and money into the research, please post the results.
 
Imagining what other people may or may not do doesn't actually accomplish anything.
Your assertion that "science doesn't work that way" is merely an anecdotal opinion, which you have stated is a "dime a dozen".
Looking for differences between Brulosophy's blog posts and peer reviewed scientific research articles is an unfair comparison. The results are simply what they are for the particular beer that was being sampled. The methods used may or may not be the same if you try to use them.
I stand behind my statements: if you can do it better yourself, and you want to put the time, effort and money into the research, please post the results.
Actually, it's not an opinion. The process for doing research and publishing science is well established. Anyone who does research and publishes knows it.
Nice try on your "statement". I actually do it better. Every day. Just not in this area.
And I read lots of papers by people who do real research in brewing.
Feel free to have the last word. I said what I needed to say.
Prost.
 
I think we need to appreciate that Brulosophy is there, and the amount of time and expense Marshall Schott has spent building a creative and interesting website for us to enjoy. His experiments are full of all sorts of relevant information and challenging experiments in this hobby, and bloggers here should quit ripping into the nuances of testing values and procedures. Just be glad the guy is there; he is doing a lot of groundwork and putting it out there in an interesting way that challenges the mind. Great job, and thanks!
 
His experiments are full of all sorts of relevant information and challenging experiments in this hobby, and bloggers here should quit ripping into the nuances of testing values and procedures.

It's precisely because of the nuances that Brulosophy should change the way the data is presented. The way the write-ups are worded is misleading to casual readers (i.e. non-statisticians/non-scientists). Whether that's intentional or not, I can't say.

For many readers, the benefits of all the good stuff you mentioned are lost, or potentially damaging, without clear presentation. Frankly, I'm baffled by the amount of resistance to that fact. (Is it a fan-boy phenomenon?) People are literally changing (or not changing) their brewing processes based on words that do not accurately describe what the numbers mean. Would you agree, in principle, that the words ought to accurately describe what the numbers mean?

Note: I'm referring to "exBEERiments" here. I know there's lots of other stuff on the site.
 
OK, I agree. Most don't understand significance. If something is "significant," it must be important, right? :) :) :)

I taught college-level statistics for... well, 30+ years (retired in May), and one of the best examples of this came from the Brulosophy site.

There was an early Brulosophy experiment that compared Maris Otter with 2-Row. MO is my favorite malt, so naturally I was interested.

The results of the testing showed a significant result. But if you dug down into the preferences, guess what? Exactly the same number of testers preferred MO as did 2-Row.

Even though the result was "statistically significant," there was no actionable intelligence. If I had a brewery for profit, I'd want to know what most people preferred, so I could brew it. But nothing that came out of the results would have been helpful in that way. I suppose, if it doesn't make a difference, the "actionable intelligence" is to choose the cheaper of the two.

I suppose what it indicates, if you can take it at face value, is that some people like a maltier/fuller/richer flavor (MO), and some like a lighter/crisper flavor (2-Row). Who knew that people vary in what they like? :)
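The MO vs. 2-Row example is worth making concrete: a triangle test can show a detectable difference while the preference data carry no signal at all. A quick sketch, using the usual exact binomial test on the preference votes; the function name and the even 5-vs-5 split are made-up illustrations, not numbers from the actual experiment:

```python
from math import comb

def preference_p_value(n_a: int, n_b: int) -> float:
    """Two-sided exact binomial test of a preference split.

    Among tasters who correctly identified the odd beer out, n_a
    preferred beer A and n_b preferred beer B. Under the null
    hypothesis of no overall preference, each vote is a coin flip.
    """
    n = n_a + n_b
    observed = max(n_a, n_b)
    # One-sided tail: probability of a split at least this lopsided.
    tail = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
    # Double for the two-sided test, capping at 1.
    return min(1.0, 2 * tail)

# Hypothetical: 10 tasters beat the triangle test, but split 5 vs 5
# on which beer they preferred.
print(preference_p_value(5, 5))  # 1.0: zero evidence of a preference
```

So "statistically significant difference" and "people prefer one over the other" are independent claims; the triangle test only speaks to the first.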

If everybody liked the same thing in equal amounts there would be no need for more than one brand of beer. We could just all have a (bud?) and call it a day. The macro breweries could run perfectly controlled triangle tests to arrive at the ultimate beer that was seen as most preferable to all tasters and that is what everyone would drink.

Hmm, doesn't work that way, right? I like a super-dry IPA with some bitterness in it and zero caramel notes. My wife likes one with even more bitterness and a substantially stronger malt and caramel background. Another friend doesn't like IPAs at all. I'm pretty sure any of the three of us could pass a triangle test with our preferred beer against one of the others' preferred beers. They are noticeably different beers in aroma, mouthfeel and flavor. But preference-wise, are my wife and I going to come to our senses and realize we don't actually prefer IPA at all, and think of all the time we've been wasting on craft IPA when a nice Miller Lite was really the most preferred beer after all?

I think it is cool that Maris Otter was reliably detectable in a triangle test in opaque cups. Do you think Mecca Grade vs. Rahr would pass a similar test?
 
The worst thing IMHO is their trying to draw general, broad conclusions from a single data point...


They usually mention the experiments are only a single data point and further testing is needed. I don't think they ever advocate to take the results from one experiment and broadly apply it
 
They usually mention the experiments are only a single data point and further testing is needed. I don't think they ever advocate to take the results from one experiment and broadly apply it

As we were discussing above, at what point do you concede that they know exactly how their results will be interpreted (as immutable fact "busting brewing myths", despite caveating against that, which at this point any homebrewer with an internet connection is aware of), and that continuing to toss in a caveat rather than changing their methodology to actually fix the issue is an act of bad faith? Basically, they know they're being misleading and refuse to stop. Presumably because of mouse clicks and ad revenue.
 
So, what are the more acceptable alternatives to Brulosophy? That is: someone doing experiments with homebrew level equipment, not just referencing your favorite commercial brewing texts, or web sites that repeat conventional wisdom.

Brew on :mug:
 
We all have access to the internet, and most of us have a lot of homebrewing equipment, along with the means to productively dispose of homemade beer.

We could all start blogs to add more points of data to the Brulosophy folks. They document their process pretty well. Their experiments can be duplicated.
 
So, what are the more acceptable alternatives to Brulosophy? That is: someone doing experiments with homebrew level equipment, not just referencing your favorite commercial brewing texts, or web sites that repeat conventional wisdom.

Brew on :mug:

An acceptable alternative would be to determine whether an alternative is even required. Do we really need someone running these types of experiments to tell us best practice in brewing, especially given the amount of great information out there for free?

My point is not to ditch a discerning eye toward making sure we don't bias ourselves WRT brewing, but rather to be more cautious about who we place our trust in, including myself and others.
 