Brulosophy expert can't tell a Blonde Ale made with straight RO from RO plus minerals

Homebrew Talk - Beer, Wine, Mead, & Cider Brewing Discussion Forum

The truth is that perfectly drinkable yet mediocre beer is really easy to make, good beer takes experience and effort, and great beer requires borderline religious fervor and dedication. Nobody ever 'accidentally' composed a masterpiece; masterpieces are always the result of practice and continual improvement.

Also, humans as a species have an absolutely pathetic sense of taste, being both incredibly subjective and easily fooled.
 
Brülosophy has helped me realize that good/bad beer is subjective and prattling on about how X makes beer better is just talk.

In this world there’s talkers and there’s doers. Brülosophy is doers. And in the doing, they show how little difference there is in the mole hills we (myself included) make mountains of.


This hurts my eyes to read.

Bad beer exists, it is very real and it flows from the bottles and taps of home and craft breweries all over this fine country.
 
I really enjoy reading the Brülosophy site and occasionally listening to their podcast.
I think a lot of the info is very useful and listening to them can be very entertaining. Unlike other podcasts that are industry centric, I like the fact that it’s centered around home brewing. I don’t give a crap about listening to a podcast based on who is who and who they brew for.

The one major thing I disagree with Brülosophy on is the constant drumbeat of drinking beer fresh. I brew mostly lagers and some ales, but I never find them to be really good within two weeks or less.

Recently I’ve been using WLP840 American lager yeast, and I swear the beers taste bland and flabby until they hit the 6-week mark. Ales obviously need less time but still don’t hit their stride until the 4th week or so.

All my beers are kegged so I am sampling throughout.

I occasionally will brew a pale or IPA and those follow the same rule- usually best after 3-4 weeks.
 
"It should be noted that if there were no difference, on average we would expect 9 or 10 of the tasters to make the correct selection. But 13 (or more) did, which if there were no difference, was only 10% likely to happen."

It's been a complaint of mine for years with their exbeeriments. I actually like reading them but wouldn't put any faith in the results. In cases like the one you list, what they've actually shown is that there is quite likely a difference (p<0.10 is quite significant given the small sample size), but it needs a larger sample size (i.e. more tasters) to confirm a difference with statistical significance. Unfortunately, as you say, they imply that it means there's no difference. Really, their studies are at best pilots to see if it's worth studying something properly (even that's a stretch). Don't get me started on the idea that one batch of one beer vs. one batch of another beer can do anything more than prove that those two batches are different (you CAN'T reliably say that the variable is the reason for a difference unless you have multiple batches of each beer).
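For readers who want to check arithmetic like the figures quoted above: under pure guessing, the number of correct picks in a triangle test follows a binomial distribution with success probability 1/3. Here is a minimal stdlib-only sketch; the panel size of 28 is my own assumption, chosen because it makes the expected number of correct guesses about 9.3, consistent with the quoted "9 or 10".

```python
from math import comb

def triangle_test_p(n_tasters, n_correct, p_guess=1/3):
    """One-sided binomial tail: P(at least n_correct right) under pure guessing."""
    return sum(comb(n_tasters, k) * p_guess**k * (1 - p_guess)**(n_tasters - k)
               for k in range(n_correct, n_tasters + 1))

# Assumed panel of 28: about 28/3 = 9.3 correct picks expected by chance alone.
print(f"p = {triangle_test_p(28, 13):.3f}")  # ≈ 0.10, matching the quoted figure
```

Note that this p-value is exactly the "10% likely to happen by chance" in the quote, which is why calling such a result "no difference detected" is a stretch rather than a conclusion.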
 
This hurts my eyes to read.

Bad beer exists, it is very real and it flows from the bottles and taps of home and craft breweries all over this fine country.

I know what you’re saying, but one man’s bad beer is another man’s nectar of the Gods. I had a sour beer from McFates that was aged in Chardonnay barrels. I honestly don’t think I have a great palate, but you didn’t have to be a beer sommelier to get malt vinegar. It was absolutely undrinkable. I have in my life not finished 2 beers on purpose and this was one of them. The bartender took the beer back and told me that people come from all around to get this beer. I believe it’s true too; besides the vinegar, there was a strong flavor of marketing.
 
I know what you’re saying, but one man’s bad beer is another man’s nectar of the Gods. I had a sour beer from McFates that was aged in Chardonnay barrels. I honestly don’t think I have a great palate, but you didn’t have to be a beer sommelier to get malt vinegar.

Yep. All of my favourite Brett beers have noticeable malt vinegar in them (not in-your-face acetic acid, but a background tart sweetness). Favourite style = Flanders Red, where vinegar is part of the flavour profile. I really enjoy it. To each their own!
 
I have in my life not finished 2 beers on purpose and this was one of them. The bartender took the beer back and told me that people come from all around to get this beer. I believe it’s true too, besides the vinegar, there was a strong flavor of marketing.

I regularly leave unfinished beers, more than I could count. Beer is about the enjoyment of the flavors and the craft; I'm rarely drinking for the buzz when I'm out, which is the only reason to drink meh beer. If a beer isn't well done, I would rather spend the calories on something I'd enjoy.
 
I used to be Mr. “I know good and bad beer”. And I still have an opinion on whether it’s good or bad for my taste. Each beer has a story, and I want to be able to express how I feel about a beer because I love talking about it. I don’t believe most of us can understand a beer after just a sip, at least not in depth. Example: Firestone Walker Mind Haze. I get a really strong rosewater-like aroma from this beer. A powerfully flavored IPA is a big plus for a lot of people, but this beer is quite off-putting to me. Am I glad I drank the whole beer? Absolutely; in fact, over some weeks I drank a twelver of it. Was the only reason I drank the beer to get drunk? Absolutely not. Getting drunk on Mind Haze would be a very unpleasant experience for me, but I feel like I have truly experienced this beer and can have an intelligent conversation about what it’s about. I’m not saying that there aren’t some sort of hypothetical standards, but testing ourselves to understand what those standards are would require far more than hanging out in a brewery and drinking beer with friends and family. Brülosophy does way more than even so-called experts do to that end, but even they just touch the tip of the iceberg. Searching for subjective standards has so many opportunity costs that it becomes a game of tilting at windmills.
 
The results of the testing showed a significant result. But if you dug down into the preferences, guess what? Exactly the same number of testers preferred MO as did 2-Row.

This is probably the most important thing that they actually find with every one of their exbeeriments - tasters that can tell the beers apart are always* split with their preferences. That tells us something important - brewers need to stop claiming that one method/ingredient/other makes better beer. Better beer is entirely subjective! Case in point: I love lambics, but many beer drinkers would tip them out in disgust.

*always: means I haven't read one that doesn't, but they may exist.
 
My point here is that average readers don't understand significance, p value numbers, or even the basic fact that in a triangle test a blind guess is expected to be correct 1/3 of the time, and not half. So how could they let people know how to really think about the results? Well, they could append these words (with the appropriate numbers) to every test where p was >= .05...

fwiw, they do talk about the 1/3 thing quite often on the podcast.
 
My 2 cents: get over yourselves, we're all making beer; some good, some bad. We didn't invent this thing called beer; people have been brewing beer for thousands of years, some good, some bad.

So, good beer or bad beer is just random?
 
Let me preface by saying that I don't think what Brulosophy is doing is wrong/bad. I do think there are flaws in the testing that can't really be addressed without a lot of money behind it, which I don't think they have. I personally read Brulosophy (and have done so for many years) for entertainment and brewing interest, with some actual factual information sprinkled in. And really, that's what most of us do when we share our "knowledge" of brewing with others. Our experience, and how much others respect our knowledge of the subject, will influence others and their brewing process/habits. But I'm the type of person that listens, observes, and tests for myself. I don't normally take things at face value; usually, they just make me think of ways to prove them.

With all that said, I can see how it can stifle someone from exploring the craft on their own because someone already told them "do XYZ, because of ABC result". We can't control how people use the information we provide; we just need to be as transparent as possible with the information presented. I think they should always make sure to say that "this is subjective based on many different sensory abilities".
 
I read through the experiment and I was curious about the adjusted water profile. It has a very low mineral content, and I think it would be somewhat hard to tell the beers apart. An approach I would like to see is RO water vs. a more minerally water, something like 190 ppm Ca, 10 ppm Mg, 15 ppm Na, 250 ppm SO4 and 150 ppm Cl, which is a profile I usually use for some beers. It's more "English"-y than anything, but I find I enjoy it more than a lower mineral content.
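As a side note for anyone wanting to build a profile like that up from RO water: the ppm contribution of each brewing salt follows directly from its molar mass and the ion's share of it. A rough sketch (the salt selection and the one-gram-per-gallon example are my own illustration, not from the exbeeriment):

```python
# Ion contributions (ppm = mg/L) from common brewing salts, derived from
# molar masses rather than looked-up tables.
MOLAR = {"Ca": 40.08, "Mg": 24.31, "Na": 22.99, "SO4": 96.06, "Cl": 35.45}  # g/mol

SALTS = {  # salt name: (molar mass in g/mol, {ion: count per formula unit})
    "gypsum (CaSO4.2H2O)":           (172.17, {"Ca": 1, "SO4": 1}),
    "calcium chloride (CaCl2.2H2O)": (147.01, {"Ca": 1, "Cl": 2}),
    "epsom salt (MgSO4.7H2O)":       (246.47, {"Mg": 1, "SO4": 1}),
}

def ppm_added(salt, grams, liters):
    """ppm of each ion contributed by `grams` of `salt` in `liters` of water."""
    mm, ions = SALTS[salt]
    mg_per_l = grams * 1000 / liters
    return {ion: mg_per_l * n * MOLAR[ion] / mm for ion, n in ions.items()}

# 1 g of gypsum in 1 US gallon (3.785 L): Ca ≈ 61.5 ppm, SO4 ≈ 147.4 ppm
print(ppm_added("gypsum (CaSO4.2H2O)", 1, 3.785))
```

The gypsum figure lines up with the usual brewing-water rule of thumb, which is a decent sanity check on the arithmetic.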
 
Each beer has a story and I want to be able to express how I feel about a beer because I love talking about it.

The story often starts with an eyebrow raised at a glaring off-flavor and feelings of disappointment at dropping $6 on a pint of homebrew. The story finishes with me wishing I had just bought a $9-$12 six-pack of Pilsner Urquell or Saison Dupont at the store.
 
I appreciate Brulosophy and accept the limitations of their methods. Their Short and Shoddy method got me back into brewing after a 2 year hiatus. If I had to tolerate the 6-8 hour brew days that the purists advocate, I simply would quit the hobby and go back to buying commercial beer. A 3-hour single-vessel BIAB brew day fits into my schedule and allows me to create good and sometimes great beer that I can share with my friends and family. That's what it's all about for me.
 
After a few years of haggling over it, on January 29, 2016 the ASA (American Statistical Association) finally issued a series of policy statements regarding the use and misinterpretation of the p < 0.05 threshold. The statements begin with the "Introduction" section found roughly 40% down the page at this link:

https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1154108

A couple of culled points from the statement are as follows:

2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
Practices that reduce data analysis or scientific inference to mechanical “bright-line” rules (such as “p < 0.05”) for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making. A conclusion does not immediately become “true” on one side of the divide and “false” on the other.

5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
Statistical significance is not equivalent to scientific, human, or economic significance. Smaller p-values do not necessarily imply the presence of larger or more important effects, and larger p-values do not imply a lack of importance or even lack of effect.

6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
Researchers should recognize that a p-value without context or other evidence provides limited information. For example, a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis.
 
If P<0.05 is factually meaningful in any way, then why are a plethora of medical/clinical/drug studies often found to be unrepeatable? I came across at least one statistician who tossed out ~25% unrepeatability as typical of such studies that rely upon P<0.05. And one might initially presume that such studies are well controlled.
 
If P<0.05 is factually meaningful in any way, then why are a plethora of medical/clinical/drug studies often found to be unrepeatable? I came across at least one statistician who tossed out ~25% unrepeatability as typical of such studies that rely upon P<0.05. And one might initially presume that such studies are well controlled.

How many studies testing similar (or the same) hypotheses had p values >.05 and, as a result (at least in part), were likely not published? Replication "crises" are prevalent in a lot of scientific domains, at least in part due to the publication process itself and the incessant desire for publishing novel/new contributions. But, on the less bleak side, it's part of the natural evolution of a field. Not to argue that we should strive to do bad science, but just that things we think we know often aren't "true", at least unequivocally. I'm an academic psychologist, and one of the watershed moments for each of my Ph.D. students is when we explain that no single study really matters that much in terms of establishing whether a population effect actually exists.

1,500 scientists lift the lid on reproducibility
 
You certainly can, if you understand what the numbers actually mean. But why not share that in the write-ups? That would make the information useful to a lot more people.

Using a generally misunderstood sensory evaluation system means that, in the conclusions, they can give people the information they want to hear and not necessarily what the results actually say.
I wonder why would somebody do that? 🤔
 
You certainly can, if you understand what the numbers actually mean. But why not share that in the write-ups? That would make the information useful to a lot more people.
I've read their explanations, and I think they don't understand what the numbers actually mean.
 
I take beer pretty seriously and used to enjoy reading brulosophy. I've never taken their articles seriously though for the reasons @Bobby_M mentioned.
Lately, I think the formulaic style of the exbeeriment posts has grown long in the tooth, and it's a bit obvious/boring, at least IMNSHO.
Science is formulaic, but blogging isn't...gotta mix it up!
 
Pssst, hey kid, you still measuring out your grains and hops like you were told to? We did a triple-blind triangle tasting and found that nobody could tell the difference.
Scales are slavery!

Good grief.
 
I suppose you're still bittering with hops? Sounds like you need to dose more drywall if you want to brew a RealBeer™
Can you tell hops from lawn clippings? I bet you can't honestly answer that because you haven't done an exbeeriment. And because you can't invite your buddies over for blind beersies YOU WILL NEVER KNOW THE ANSWER.
 
More on the "false alarm" related unreliability of P<0.05.

According to one widely used calculation, a P value of 0.01 corresponds to a false-alarm probability of at least 11%, depending on the underlying probability that there is a true effect; a P value of 0.05 raises that chance to at least 29%.

Here's the link to this quote which comes from an associate professor of statistics at Gallaudet University in Washington DC: Scientific method: Statistical errors
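The "at least 11%" and "at least 29%" figures appear to come from the Sellke-Berger lower bound on the Bayes factor, which Nuzzo's article walks through. A small sketch reproducing them (the 50/50 prior odds are the default assumption in that calculation, and the bound is only valid for p < 1/e):

```python
from math import e, log

def min_false_alarm(p, prior_odds_null=1.0):
    """Sellke-Berger lower bound on the probability that a 'significant'
    result is a false alarm, given a p-value and prior odds on the null.
    Valid for p < 1/e."""
    bayes_factor = -e * p * log(p)        # lower bound on evidence for the null
    odds = prior_odds_null * bayes_factor
    return odds / (1 + odds)

print(f"p=0.05 -> false alarm >= {min_false_alarm(0.05):.0%}")  # ≈ 29%
print(f"p=0.01 -> false alarm >= {min_false_alarm(0.01):.0%}")  # ≈ 11%
```

In other words, even with even prior odds, a p-value of 0.05 leaves roughly a three-in-ten chance that the "effect" is nothing at all, and that's the best case the bound allows.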
 
https://www.nature.com/news/scientific-method-statistical-errors-1.14700

"One researcher suggested rechristening the methodology 'statistical hypothesis inference testing', presumably for the acronym it would yield."

That guy might be a homebrewer.

Seriously though, the big takeaway here is that significance testing doesn't tell you anything about your hypothesis. "It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself." That distinction is lost on most people, even most scientists.

The Exbeeriment being discussed here, for example, did not actually test whether beers made with and without mineral additions taste different. This is why someone else could run exactly the same procedure and come up with a completely different result.

All that being said, I still enjoy reading their results and they provide a lot of food for thought.
 
The Exbeeriment being discussed here, for example, did not actually test whether beers made with and without mineral additions taste different. This is why someone else could run exactly the same procedure and come up with a completely different result.

Absolutely. It can only ever say that those two kegs of beer are probably perceivably different, which is next to useless. In reality, it would be a huge task to set up a rigorous experiment that could properly prove or disprove a brewing hypothesis, and that's impractical at the Brulosophy level (think multiple batches of multiple varieties of beer, a constant panel of at least 100 tasters, randomised tasting order, and palate-cleansing protocols). I still like reading exbeeriments for entertainment.
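To put rough numbers on why panel size matters, here is a stdlib-only power sketch. The panel sizes and the assumed 50% true correct-pick rate are illustrative choices of mine, not anything Brulosophy has published:

```python
from math import comb

def binom_tail(n, c, p):
    """P(X >= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

def triangle_power(n_tasters, p_true, alpha=0.05, p_null=1/3):
    """Chance a triangle test reaches significance when the tasters' true
    correct-pick rate is p_true (pure guessing is p_null = 1/3)."""
    # smallest correct count that clears the alpha threshold under guessing
    crit = next(c for c in range(n_tasters + 1)
                if binom_tail(n_tasters, c, p_null) <= alpha)
    return binom_tail(n_tasters, crit, p_true)

# Assume half the picks are genuinely correct (a fairly obvious difference):
print(f"28 tasters:  power ≈ {triangle_power(28, 0.5):.2f}")   # ≈ 0.43
print(f"100 tasters: power ≈ {triangle_power(100, 0.5):.2f}")  # ≈ 0.96
```

Under these assumptions, a 28-person panel misses a real, fairly large difference more often than it catches it, while a 100-person panel catches it almost every time.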
 
Just posting here to get updates, because this makes me immensely happy. If anyone wants to bring up "experimental" brewing as well, it would really make my day.
 
Let me preface by saying that I don't think what Brulosophy is doing is wrong/bad. I do think there are flaws in the testing that can't really be addressed without a lot of money behind it, which I don't think they have. I personally read Brulosophy (and have done so for many years) for entertainment and brewing interest, with some actual factual information sprinkled in. And really, that's what most of us do when we share our "knowledge" of brewing with others. Our experience, and how much others respect our knowledge of the subject, will influence others and their brewing process/habits. But I'm the type of person that listens, observes, and tests for myself. I don't normally take things at face value; usually, they just make me think of ways to prove them.

With all that said, I can see how it can stifle someone from exploring the craft on their own because someone already told them "do XYZ, because of ABC result". We can't control how people use the information we provide; we just need to be as transparent as possible with the information presented. I think they should always make sure to say that "this is subjective based on many different sensory abilities".

I'm here to say it's absolutely wrong/bad what he's doing. He's basically taking the entirety of the evolution of homebrewing and all of the trial and error, and saying 'it doesn't matter 'cause my buddies can't taste a difference'. It's complete bull. You can't argue with science, no matter how many t-shirts you sell on your blog.
 
Brülosophy is "super rad," as Marshall might say, and Jersey and Tim are the new wave of beer judges. People put waaaay too much energy into perfection, but I’m not knocking it; your beer is your hobby, brew how you want. I for one appreciate the thoughtful work those guys put in for the love of beer.
You should head on over to reddit.com and /r/homebrewing then, they'll love you there
 
Yep it sucks. I went to my LHBS a while ago and they said something along the lines of "there's been studies done on method x and proved it doesn't matter". I asked what studies and they referred to Brulosophy. This bro "science" information is packaged in a way that makes some people think it's valid which can be damaging to homebrewers by justifying bad brewing practice.
 
You should head on over to reddit.com and /r/homebrewing then, they'll love you there
Well I suppose I like it here just fine. It was always my impression that HBT welcomed alternative viewpoints, but it has been a while since I’ve posted here, maybe things have changed.
 
Marshall and co. have said repeatedly that their experiments are a single data point and should not be used to tell you how to brew or not to brew. They have also said repeatedly what amounts to "you do you". They are singling out an interesting variable, telling you what happened when they ran it by a panel of tasters, and then giving their own impressions. That's it. I have some issues with their methods too, and they do (or skip) some things that I wouldn't be brave enough to try, because I don't want to have to dump a batch and then wonder if it was because I dogmatically followed one of their experiment results.

That said, I would love to have the time and keg space to brew experiments so I could compare little changes I may want to make to my process, but I don't and Brulosophy does that and talks about the results. If it rings true to me, I give it a try. If not, I move along.

Marshall also seems like a really chill guy who would be fun to hang out with. Whoever said above that most of their homebrew tasting experiences start with an eyebrow raised at a perceived off-flavor and end with wishing they hadn't tried it: it makes me wonder why the hell you're in this hobby (unless it's just to have your own beer, whatever, you do you), but I would actively avoid your table at the homebrew club meeting.
 