Brulosophy expert can't tell a Blonde Ale made with straight RO from RO plus minerals

I'm sure there's a blond joke here I just can't think of it... 😧
Actually, this is an interesting concept. I've read that blonde ale is a blank canvas which exposes any flaws in process very easily...
That being said, the amount of minerals added is small, as the brewer noted.
Tough one... I can't imagine a Burton ale or Dortmunder export tasting very good with straight RO, even though those styles have much more specialty malt to "hide behind."
I know when I've forgotten to add enough carbonate to my water when doing an Irish stout, the specialty malt doesn't make a lick of difference... the beer tastes acidic...
 
Shipping is god-awful expensive, really really bad these days.
Yeah, it is. Plus they may sit in a hot truck.

But they don't have to be shipped; they can be dropped off and/or picked up, without contact. Three bottles marked 1, 2, and 3 for each participant, or use random numbers.
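If you go the random-number route, the blinding is easy to script. A minimal sketch (the `make_triangle_kit` helper and the 3-digit code range are my own invention, not anything standard):

```python
# Randomize each participant's triangle kit: pick which of the three
# bottles is the odd beer out, and label all three with random 3-digit
# codes so the labels give nothing away.
import random

def make_triangle_kit(participant: int) -> dict:
    odd_bottle = random.choice([1, 2, 3])        # position of the different beer
    codes = random.sample(range(100, 1000), 3)   # three unique random labels
    return {"participant": participant, "odd_bottle": odd_bottle, "codes": codes}

for p in range(1, 4):
    print(make_triangle_kit(p))
```

Keep the printout as your answer key so you can score the responses later.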
 
I think beer is a lot harder than we wish it would be. How many times have you heard a brewer, after starting to control pH, ferment temp, water chemistry, proper yeast handling, etc., say it was all a waste of time and go back to extract? If it were easy, brewing schools like TU Munich and UC Davis would not exist.
 
I had the bug for almost three weeks running from late January through February.
Lost my sense of smell and taste for a while and probably wouldn't have been able to tell the difference between my cat's beefy gravy and a good steak - that's how "out of it" I was for a few days.
From what I can tell the yeast probably didn't care and his mineral salt levels were BALANCED for the most part. So what was the point again?
 
From what I can tell the yeast probably didn't care and his mineral salt levels were BALANCED for the most part. So what was the point again?
Probably just to keep their ridiculous website running even in these times?
 
This season has to be one of the worst ... for me, anyway.
Can't get the kitchen to myself because almost everyone is home. Sourdough and kombucha being done by one boy and the wife is home with a busted knee for a couple months. Couldn't get into the kitchen edgewise if I wanted.
Besides, all my yeast is DEAD and out of date. :mad:
 
Brülosophy pokes at people's belief that the harder you work, the better your beer will be. That's why everyone complains when they bust a tradition as being less important than everyone thinks. Personally, examining my own beliefs about having a sensitive palate and knowing what good beer is, I realize how full of B.S. most people are when it comes to good food and good drink.

The big hair watches Beat Bobby Flay, and Flay regularly beats the chefs who make things from scratch and add their secret ingredients, just by knowing his stuff and giving the judges what he thinks they will expect. It's the same with beer.
 
Brulosophy also, conversely, provides lazy brewers a sense of legitimacy. The problem is that you can usually cut one corner, which so many xBmts suggest won't affect beer quality. Viewers then compile a list of 30 things that don't matter and end up brewing a case of swill only a mother could love.

Yes, the harder you work, the better the beer is likely to come out. Suggesting the contrary is wishful thinking.
 
The only thing Brülosophy busts daily is their own credibility. And they prove that some people will just drink anything as long as it has alcohol in it.
What they do is basically brew their next batch while pretending to be "sciency" about it. It's beyond laughable.
 
Can't get the kitchen to myself [...]
Besides, all my yeast is DEAD and out of date. :mad:
Sorry to hear that. How about brewing at night!
Maybe you can reserve the kitchen. Have a meeting about it, or the wife or your son is going to write something similar in their favorite forum: "He hogs the kitchen with his beer." ;)

Regarding the yeast, that depends. Two-year-old yeast, kept in the fridge, can still be viable. You do need to make (step) starters though.
 
Perhaps more is being made of this Brulosophy article than it deserves. In the third paragraph, Carter refers to a previous experiment where "the adjusted RO water beer was handily preferred." In the last paragraph, he even states "...I'll continue adjusting my RO water for every batch, even if I couldn't tell these particular beers apart." While I agree a one-person taste test isn't very scientific, I don't see this experiment as anything to be taken very seriously, because I see it presented only semi-seriously. With one tester, why bother? Nothing to take seriously IMO, though I suppose someone new to AG might.
 
In case you missed it, every review has the not-so-subtle brand marketing:

We brewed this on our SSbrewtech(tm), then checked mash temp with a Thermapen (click for review), chilled with the Hydra immersion coil, crash cooled using our patented brunlock, and drank out of our branded glasses (click to buy).
 
Whatever people think of these "experiments", one can't hide from the fact that most people, experienced beer drinkers or not, usually can't tell the difference between beer A and beer B. It might be a different grain bill, different yeast, different hops, or whatever... served side by side, it's apparently difficult for most people to separate one brew from another.
 
Perhaps rather than focusing upon the ability to distinguish one beer alteration from another, the focus from the very outset should be only in regard to determining which beer is preferred, if either. This requires only two unidentified sample cups, one with each beer. Then, after the preferred beer has been identified (by simple majority), ask those who preferred it why it was preferable to them. And lastly, also ask those who preferred the numerically less popular beer why it was preferable to them. All of this being done before revealing the alteration/difference between them.

And do this in a setting of isolation, whereby only one person at a time samples the beers, with no one else present. Even the one handing out the beers is asked to leave the room. Or better yet, the one setting out the sample beers exits the room through one door as the taster enters through another, such that even body language is removed from the equation.
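Scoring that preference test is simple. A sketch, assuming a plain two-sided exact binomial test against a 50/50 coin-flip split (the panel size and vote count below are made up for illustration):

```python
# Exact two-sided binomial test for a paired preference test:
# under "no real preference," each taster picks either cup with p = 1/2.
from math import comb

def preference_p_value(prefer_a: int, n: int) -> float:
    """P-value for seeing a split at least this lopsided by chance."""
    k = max(prefer_a, n - prefer_a)                        # larger vote count
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
    return min(1.0, 2 * tail)

print(round(preference_p_value(18, 25), 3))  # 0.043: 18 of 25 is unlikely to be chance
```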
 
Preference is meaningless if there is no confirmation that there is a detectable difference. The reason is that people who are forced to make a choice will choose one or the other even if they detect no difference. Since "preference" can neither be proven nor disproven, you'll have no way of telling whether your result is meaningful or just random.
 
I love listening to the Brulosophy podcasts and reading about their 'tests'. They do things I'd like to do if I had the time and money. I see no harm in them mentioning the hardware they use and their soft ads. Sure, it may not be for everyone, but obviously a number of people like it or it would not have taken off like it has.
Testing on a larger group is just not easy during these times. I commend them for keeping the pace up as much as they can.
 
Whatever people think of these "experiments", one can't hide from the fact that most people, experienced beer drinkers or not, usually can't tell the difference between beer A and beer B.

Except that fact is not a fact at all. Here is a rather typical example:

"While 15 tasters (p<0.05) would have had to identify the unique sample in order to reach statistical significance, only 13 (p=0.10) made the accurate selection, indicating participants in this xBmt could not reliably distinguish an IPA made with a 7.4 oz/209.4 g dry hop charge from one made with an 11 oz/311.8 g dry hop charge."

Note that 13 tasters accurately selected the different sample, and that p=0.10.
This means that if there were no detectable difference between the beers, there was only a 10% chance that 13 or more tasters would get it right. But they did. To translate that into "indicating participants in this xBmt could not reliably distinguish" is misleading as hell. Anyone reading those words and not thoroughly familiar with triangle testing and p values would be very likely to think "the results show that there's no difference."
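For anyone who wants to check that arithmetic: the write-up quoted above doesn't state the panel size, but assuming 28 tasters reproduces both quoted figures (13 correct gives p ≈ 0.10, and 15 is the smallest count with p < 0.05):

```python
# Exact binomial tail for a triangle test, where a blind guess is
# correct 1/3 of the time.
from math import comb

def triangle_p_value(correct: int, n: int, p_guess: float = 1/3) -> float:
    """P(at least `correct` tasters are right by pure guessing)."""
    return sum(comb(n, k) * p_guess**k * (1 - p_guess)**(n - k)
               for k in range(correct, n + 1))

print(round(triangle_p_value(13, 28), 3))  # 0.104 -- the quoted p = 0.10
print(round(triangle_p_value(15, 28), 3))  # 0.022 -- why 15 were needed for p < 0.05
```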

More reading: http://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf
 
Nobody here has been more critical of Brulosophy's testing methodology than me. But that said, I've long thought the experimental methods were actually rather good. They take pains to brew or adjust two batches keeping everything the same except the variable of interest, then see if there are any discernible differences. The whole idea is to remove every explanation for a difference except one.

The testing part, not the brewing part, is where I've always had problems. No controls over who the population is, or what they ate or drank beforehand... the testing is just not very robust, and it introduces explanations for differences besides the test samples themselves.

But in the end, what Marshall does has no effect on my life. My beer tastes great; my family still loves me; he doesn't have access to my bank account. He's very straightforward in what he's doing, and the critical-thinking consumer of information can choose to accept it or not.

**********

I met Marshall at the BREW Boot Camp in March 2019. Nice guy--the sort with whom you'd like to sit down and have a beer. Not pretentious, just doing something in the beer area.

I'd brought a few bottles of my Darth Lager along for feedback from people. We were sitting around a coffee table in the lounge area and he had a half-glass of Darth. A few minutes passed and he said "Hey, let me have some more of that." But then, it was just him, and no comparisons. :) :)

**********

In the end, I've always thought this about Brulosophy: at least they're doing something. Is he making money from this? Sure. A lot? Nah. And nothing he does requires any money from me.

Very few of his detractors here--including me--are doing carefully-controlled experiments and then reporting on the results. At least, as Teddy would have said, he's "in the arena."
 
Except that fact is not a fact at all. Here is a rather typical example:

"While 15 tasters (p<0.05) would have had to identify the unique sample in order to reach statistical significance, only 13 (p=0.10) made the accurate selection, indicating participants in this xBmt could not reliably distinguish an IPA made with a 7.4 oz/209.4 g dry hop charge from one made with an 11 oz/311.8 g dry hop charge."

Note that 13 tasters accurately selected the different sample, and that p=0.10.
This means that if there were no detectable difference between the beers, there was only a 10% chance that 13 or more tasters would get it right. But they did. To translate that into "indicating participants in this xBmt could not reliably distinguish" is misleading as hell. Anyone reading those words and not thoroughly familiar with triangle testing and p values would be very likely to think "the results show that there's no difference."

More reading: http://sonsofalchemy.org/wp-content/uploads/2020/05/Understanding_Brulosophy_2.pdf

Not sure what your point was here. Statistical significance is simply an objective way of deciding whether to reject the null hypothesis (in this case, no difference) or not. There's nothing holy about the 5% level, or 1% level. They're just standards that have become....well, standard.

I would agree that translating a non-significant result into "participants....could not reliably distinguish" is misleading. Reliability is consistency of measurement, so I take issue with the use of that word, and in the end, it's the panel, not the individuals. If an individual, in repeated triangle tests, could not consistently pick the odd one out, then that individual could not reliably do so.

And then there's the panel thing. Who constitutes it? To what population of beer drinkers does it generalize? People who drink to get drunk? People who like trying different things? People who love to deconstruct a beer into its various flavor components? Nobody knows. So if the panel can or cannot "reliably" distinguish between the beers, what does it tell us about how beer drinkers perceive it? I don't know.

And then you get to the individuals involved. What were they eating or drinking just prior to the triangle test? A couple of pints of very bitter IPA that fried their taste buds? Heavily spiced food?

So in the end, I don't know if an individual can't tell the difference because A) what they ate/drank killed their taste buds, or B) they don't have a very sensitive palate generally, or C) there really is no difference between the two samples.
 
Not sure what your point was here. Statistical significance is simply an objective way of deciding whether to reject the null hypothesis (in this case, no difference) or not. There's nothing holy about the 5% level, or 1% level. They're just standards that have become....well, standard.

My point here is that average readers don't understand significance, p value numbers, or even the basic fact that in a triangle test a blind guess is expected to be correct 1/3 of the time, and not half. So how could they let people know how to really think about the results? Well, they could append these words (with the appropriate numbers) to every test where p was >= .05...

"It should be noted that if there were no difference, on average we would expect 9 or 10 of the tasters to make the correct selection. But 13 (or more) did, which if there were no difference, was only 10% likely to happen."

But then, you wouldn't have well intentioned people posting things like "one can't hide from the fact that most people, experienced beer drinkers or not, usually can't tell the difference between beer A and beer B." And you wouldn't have people dropping or changing important parts of their process after being misled.
 
The worst thing IMHO is their trying to draw general, broad conclusions from a single data point, which is bad science at best and, in a system as complex as beer and brewing, just laughable.
One example. They try boiling with and without a lid to see whether DMS becomes detectable when boiling with the lid on. The problem is, there are several variables affecting the final level of DMS in beer. First you have the level of DMS precursors in the malt that makes up the grist. Then you have the lid design, which determines the amount of condensation and where that condensate ends up. Then you have DMS scrubbing during fermentation, which can be stronger or weaker depending on fermentation dynamics. Change any of those variables and you could find yourself with DMS levels that are above the detection threshold.
What do they do? They decide the only variable is lid-on/lid-off and test only for that, partly because with their simple means they have no way of testing for or controlling the other variables. All they could derive from such an experiment is, "We brewed this beer with this malt and this equipment and couldn't detect any difference vis-a-vis DMS between boiling with the lid on and off." Which is all jolly good for them, but frankly totally irrelevant for the general public, who under different conditions might very well get completely different results.
But turn that into a BeerXperiment and draw broad, albeit unproven, conclusions, and suddenly you're "radical" and "revolutionary" and you can sell a T-shirt with your name on it for a few bucks...
 
My point here is that average readers don't understand significance, p value numbers, or even the basic fact that in a triangle test a blind guess is expected to be correct 1/3 of the time, and not half. So how could they let people know how to really think about the results? Well, they could append these words (with the appropriate numbers) to every test where p was >= .05...

"It should be noted that if there were no difference, on average we would expect 9 or 10 of the tasters to make the correct selection. But 13 (or more) did, which if there were no difference, was only 10% likely to happen."

But then, you wouldn't have well intentioned people posting things like "one can't hide from the fact that most people, experienced beer drinkers or not, usually can't tell the difference between beer A and beer B." And you wouldn't have people dropping or changing important parts of their process after being misled.

OK, I agree. Most don't understand significance. If something is "significant," it must be important, right? :) :) :)

I taught college-level statistics for....well, 30+ years--retired in May--and one of the best examples of this came from the Brulosophy site.

There was an early Brulosophy experiment that compared Maris Otter with 2-Row. MO is my favorite malt, so naturally I was interested.

The triangle testing showed a significant result. But if you dug down into the preferences, guess what? Exactly the same number of tasters preferred MO as preferred 2-Row.

Even though the result was "statistically significant," there was no actionable intelligence. If I had a brewery for profit, I'd want to know what most people preferred, so I could brew it. But nothing that came out of the results would have been helpful in that way. I suppose, if it doesn't make a difference, the "actionable intelligence" is to choose the cheaper of the two.

I suppose what it indicates, if you can take it at face value, is that some people like a maltier/fuller/richer flavor (MO), and some like a lighter/crisper flavor (2-Row). Who knew that people vary in what they like? :)
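The arithmetic behind that is worth seeing once. A sketch with made-up numbers (not the actual xBmt data): a panel can be clearly significant on the triangle test while the preference question comes back a dead heat:

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 16 of 28 tasters pick the odd beer out in the triangle.
print(round(binom_tail(16, 28, 1/3), 3))   # 0.008: significant at 0.05

# ...but those 16 then split 8 vs 8 on which beer they actually prefer.
pref_p = min(1.0, 2 * binom_tail(8, 16, 0.5))
print(round(pref_p, 2))                    # 1.0: no actionable preference at all
```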
 
This means that if there were no detectable difference between the beers, there was only a 10% chance that 13 or more tasters would get it right. But they did. To translate that into "indicating participants in this xBmt could not reliably distinguish" is misleading as hell.
The worst thing IMHO is their trying to draw general, broad conclusions from a single data point, which is bad science at best and, in a system as complex as beer and brewing, just laughable.
These are valid points, but you can consider what is presented and still get something useful out of their experiments. Think of it as just one data point. When I try something different, my normal method is to try to compare the experimental beer with one that I brewed, and drank, a few months ago. I just have no practical way to do the side-by-side test like they do. Obviously, side-by-side testing has a big advantage.
 
Brülosophy has helped me realize that good/bad beer is subjective and prattling on about how X makes beer better is just talk.

In this world there's talkers and there's doers. Brülosophy is doers. And in the doing, they show how little difference there is in the molehills we (myself included) make mountains of.
 
And in the doing, they show how little difference there is in the molehills we (myself included) make mountains of.

No. They really haven't shown that in most cases, if any. Not if you just go by the words they use to summarize the results, which is all most people can do, in the absence of intimate familiarity with the methodology or a concise explanation of what the results really mean (and don't mean).
 
These are valid points, but you can consider what is presented and still get something useful out of their experiments.

You certainly can, if you understand what the numbers actually mean. But why not share that in the write-ups? That would make the information useful to a lot more people.
 