How much does Brulosopher affect your brewing?

Homebrew Talk - Beer, Wine, Mead, & Cider Brewing Discussion Forum

MattyIce

Well-Known Member
Not gonna lie, I like his process. I also tend to really like his stuff when it comes to making brew day easier/shorter. I quit boiling for 90 minutes, even with pilsner malts. I am also considering how often I really should be using pure o2.

What have you changed?
 
Interesting thread.

I wouldn't say his site has changed the way I brew, yet. However, I am more confident in my process thanks to his work.

So cheers to Marshall and friends.

Edit
Maybe his yeast temp work will push me to try a lager soon.
 
I have confidently brewed lagers according to his clearly explained method (which he readily admits he didn't invent). Also, the HSA test helped me get over that issue. And I'm about to dry hop again after forgoing it in favor of hopstands on several previous brews. So I'd say that I do think about the various xBmts often while brewing, actually.
 
He hasn't changed anything for me, really. He says himself that he is just one guy and that his results aren't representative of everything, and he'll admit one experiment isn't really enough to use as a new standard. However, I'll admit his experiments are interesting. I liked the vitality experiment with yeast, but I still make starters, and I've never really worried about pitching at high krausen or decanting.

I've been tempted to repeat one of his experiments, such as the 90- vs. 60-minute boil, myself and bring it to my beer club, but I would need to invest some time to figure out yeast pitching; it might be easier to just buy two packs/vials and ferment at room temp.
 
I think his site kind of blows some doors wide open. After reading a lot of his findings it gives the impression at times that nothing matters! Just soak some grain and put some yeast on it and you get beer.

I will never change anything until I try it myself, but he is making you think about how important certain things are to you. If you just want some beer, then why not shorten everything? How much is that extra 10% worth in effort? I see it as having another tool in your brewing toolbox.
 
I like the site but have had some trouble duplicating his results; clearly he is controlling some unmentioned variables. My biggest one is mash time: shorter mashes just don't ferment as well for me.

That being said, I have a lot of time for his opinions; they are very clearly expressed, and his recipes are also great. "What're we here for" is a great beer, and it's also awesome with all the hops switched out for Centennial.
 
He hasn't changed anything for me, really. He says himself that he is just one guy and that his results aren't representative of everything, and he'll admit one experiment isn't really enough to use as a new standard. However, I'll admit his experiments are interesting. I liked the vitality experiment with yeast, but I still make starters, and I've never really worried about pitching at high krausen or decanting.

I've been tempted to repeat one of his experiments, such as the 90- vs. 60-minute boil, myself and bring it to my beer club, but I would need to invest some time to figure out yeast pitching; it might be easier to just buy two packs/vials and ferment at room temp.

This is me as well. I haven't changed anything in my process because of his experiments. I find them interesting for sure, but as he says, it's 1 data point, and I'm not as confident in the consistency of his process to say he's isolating variables enough to make a difference, nor do I necessarily trust his method of evaluation.

Point is, his experiments certainly warrant discussion, but until they can be replicated on a large scale by many brewers, I will typically continue to side with established brewing science (where the methods are much more rigorous and the testing much more exhaustive) until I'm sufficiently convinced otherwise.

Of course, I'm also never satisfied with "hey, I made beer", so perhaps I hold my brewing to a more anal retentive standard than most.
 
I've read his side by side experiments with interest, and they've gotten me to at least think about some things differently, like being more confident that I don't need to strain out trub and break material when transferring to primary, faster grain to glass timeframes, and use of cold crashing and fining agents.

In addition to what others have said, though, despite the fact that he does claim to be only one data point, it bothers me a bit that he presents his results couched in the language of statistical significance, i.e., 95% confidence intervals and .05 p-values. When your n = 13 people answering a survey about largely subjective impressions, that sort of quantitative analysis is a bit ridiculous. Moreover, talking about them in those terms comes off as self-indulgent (at least to me), and suggests the results are meant to be taken as more conclusive than they should be.
 
Challenges to his scientific method and sample size notwithstanding, his work is clearly more relevant than the countless individual instances of "Works for me so it must be fine" that you read on HBT. ;)

That's certainly true, and useful.
And to be clear, my intention isn't to crap all over what he does or anything. Just that writing the results like it's being submitted to a journal for peer review is a bit over the top, IMO.
 
Challenges to his scientific method and sample size notwithstanding, his work is clearly more relevant than the countless individual instances of "Works for me so it must be fine" that you read on HBT. ;)

^ this. I just used a variation of his quick-lager method and produced a clean, drinkable lager in four weeks (albeit a bit cloudy). That's pretty astounding if you ask me. Or maybe all I really did was start drinking it earlier than I ever would have dared before. Who knows?

Notice I said, "a variation of," which really means I improvised something similar after reading about his method. This pretty much supports what an earlier poster said: mix up a warm grain soup, add yeast, and you will get beer. The Brulosopher has just shown that a lot of the things we do to supposedly influence the process don't have as much impact as we thought, and we do them simply out of habit.

That said, other than my perspective on lagering, I haven't changed much else because it's habit, and I'm not looking to shorten my brew day: I like brewing and don't mind the 6-8 hours it takes. The reason I tried lagering differently is because I was looking to shorten the 3 MONTHS until I could drink the beer :mug:
 
Challenges to his scientific method and sample size notwithstanding, his work is clearly more relevant than the countless individual instances of "Works for me so it must be fine" that you read on HBT. ;)

Absolutely; I don't think his methods are bad at all.

I teach the scientific method and the essence of the method is eliminating, to the extent you can, alternative explanations for the hypothesis. He (and his associates) do a pretty darned good job of it. In fact, I'm using one of his exbeeriments in class to illustrate how it should look when someone takes care in their research.

[BTW, as an aside, using a BEER experiment to illustrate the scientific method in a class where I also teach "Student's" T test, and knowing in what context William Gossett developed the test, makes for an interesting symmetry, don't you think? :) ]

He takes care to use the same process for each of his comparative batches; thus he has a decent control.

I haven't read many critiques of his sample sizes, but they are what they are. In my experience, those who criticize that kind of thing simply don't understand the statistics underlying the results.

What you get for free is sometimes worth what it costs; in this case, it's worth much more.

My 2 cents.
 
That's certainly true, and useful.
And to be clear, my intention isn't to crap all over what he does or anything. Just that writing the results like it's being submitted to a journal for peer review is a bit over the top, IMO.

For me, it's exactly because he writes in that fashion that I believe the results.

The writing reflects the care that he and his associates are putting into the exercise; if he's this careful in his writing, IMO he's likely to be as careful in his exbeeriments.

******************************

To get us back on track w/r/t the OP's intent, here are things I do or don't do that I found in Exbeeriments; I am a newbie to this, so it isn't so much that I changed anything as it is that I decided the processes were decent ones:

1. Pitching dry yeast directly into the fermenter. I'm not so concerned about it, due to this exbeeriment: http://brulosophy.com/2014/09/15/sprinkled-vs-rehydrated-dry-yeast-exbeeriment-results/

2. Aerating with oxygen. I was all set to buy an aeration wand and regulator; then I read this: http://brulosophy.com/2015/05/25/wort-aeration-pt-1-shaken-vs-nothing-exbeeriment-results/ and this: http://brulosophy.com/2015/07/13/wort-aeration-pt-2-shaken-vs-pure-oxygen-exbeeriment-results/

3. Racking to secondary: http://brulosophy.com/2014/08/12/primary-only-vs-transfer-to-secondary-exbeeriment-results/ Lots of people here recommend this as well.

I also realize that the exbeeriments test only a small part of the universe of potential differences; would an exbeeriment turn out differently if a different yeast were used, or a different grain bill, or different fermentation temperatures or different...you get the point.
 
When your n = 13 people answering a survey about largely subjective impressions, that sort of quantitative analysis is a bit ridiculous. Moreover, talking about them in those terms comes off as self-indulgent (at least to me), and suggests the results are meant to be taken as more conclusive than they should be.

In Brulosopher's experiments the T-test/p value is only being applied to the triangle test, which isn't subjective at all.
 
They are interesting, but almost always I would say the sample size is too small. If we could do this with hundreds of trained judges I'd listen more.
 
This is me as well. I haven't changed anything in my process because of his experiments. I find them interesting for sure, but as he says, it's 1 data point, and I'm not as confident in the consistency of his process to say he's isolating variables enough to make a difference, nor do I necessarily trust his method of evaluation.

Point is, his experiments certainly warrant discussion, but until they can be replicated on a large scale by many brewers, I will typically continue to side with established brewing science (where the methods are much more rigorous and the testing much more exhaustive) until I'm sufficiently convinced otherwise.

Of course, I'm also never satisfied with "hey, I made beer", so perhaps I hold my brewing to a more anal retentive standard than most.

He does admit that it is one data point. Denny and Drew of Experimental Brewing are leading some cooperative brewing experiments through their IGOR program. This means many brewers all doing the same experiment on their individual systems (so, multiple data points). Incidentally, the IGORs just completed a 170°F vs. 120°F hop stand experiment and found results similar to Marshall's.

The ugly truth behind science is that individual experiments are almost never reproduced by another lab, for many reasons (the main reason being in a publish or perish environment, why would you re-do experiments that have already been done and published?). Follow-on work may confirm previous results, but they won't be reproduced just to increase N. Statistics is a way to allow a given experiment to effectively stand on its own.
 
They are interesting, but almost always I would say the sample size is too small. If we could do this with hundreds of trained judges I'd listen more.

Trained judges would theoretically be a more sensitive measure of differences between beers (something I disagree with, but that is a conversation for another time). Brulosopher's experiments are done using average Joes and average Janes. If they detect a difference, then you would expect trained judges with well-tuned palates to find it as well. Therefore, hundreds of trained judges would not add anything to the experiment, since the differences are large enough that a small group of average Joes and Janes can detect them.

Again, I don't think people have a good understanding of the statistical power behind these experiments.
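To put a rough number on that power point, you can ask how often a panel of a given size would reach significance if some of the tasters genuinely detect the difference. A sketch under my own assumptions (exact binomial test, alpha = 0.05, and a hypothetical "tasters are right half the time" scenario, which isn't taken from any xBmt):

```python
from math import comb

def tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def triangle_power(n, p_true, alpha=0.05, p_chance=1/3):
    # Smallest number of correct picks that reaches significance
    # under the pure-guessing (1-in-3) null hypothesis.
    k_crit = next(k for k in range(n + 1) if tail(k, n, p_chance) <= alpha)
    # Probability of actually hitting that count if each taster
    # is right with probability p_true.
    return k_crit, tail(k_crit, n, p_true)

k, pw = triangle_power(20, 0.5)  # suppose tasters pick correctly half the time
print(k, round(pw, 2))           # 11 correct needed; power is only ~0.41
```

So even when half the panel genuinely tells the beers apart, a 20-person triangle test misses significance more often than it hits, which is why a "non-significant" xBmt shouldn't be read as proof of "no difference."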
 
I find the articles interesting reading. Like much of homebrewing, I believe they often result in conclusions that amount to "At the end of the day, some of what we do can be done in different fashions with minimal effect on the finished beer."

So while I think he has been doing some good work, there are MANY variables that aren't taken into consideration in some of the experiments. At the very least, though, they are very useful starting points for continued experimentation. The problem is that it would take a LOT of experimentation to gain any definite conclusions, and ain't nobody got time for that!

Perhaps some of these same experiments performed with differing ingredients would be useful in determining whether the conclusion offered applies across the entire gamut of possibilities, or whether the result is specific to the particular experiment performed.
 
I find the articles interesting reading. Like much of homebrewing, I believe they often result in conclusions that amount to "At the end of the day, some of what we do can be done in different fashions with minimal effect on the finished beer."

So while I think he has been doing some good work, there are MANY variables that aren't taken into consideration in some of the experiments. At the very least, though, they are very useful starting points for continued experimentation. The problem is that it would take a LOT of experimentation to gain any definite conclusions, and ain't nobody got time for that!

Perhaps some of these same experiments performed with differing ingredients would be useful in determining whether the conclusion offered applies across the entire gamut of possibilities, or whether the result is specific to the particular experiment performed.


I agree that testing the same variable with different styles would be very interesting. But as you said, ain't nobody got time for that!
 
And how did the lager turn out? Mine got some diacetyl using his method.

I also just brewed my first lager using the lager method presented on the site. At the start of my diacetyl rest I had a hint of diacetyl present and a ton of sulfur. After about 6 days of the rest, all signs of diacetyl were gone and I ramped back down. Did you follow the method as below? If so, you shouldn't have ramped back down until the off-flavors were gone:

"until it reaches 65°-68°F (18°-20°C). Allow the beer to remain at this temp until fermentation is complete and the yeast have cleaned-up after themselves, which can take anywhere from 4 to 10 days."
 
I'd say 0%. I respect and admire it all, but I just do what works for me regardless.

Actually, I'll take that back: 2%, to account for the hoppy cream ale I'll occasionally do that seems to please the masses. I based the fermentation profile off an article there.
 
The fact that the average palate can't detect a difference in most triangle tests doesn't mean there aren't differences in the beers; it means the average person can't detect them. We have the same issue in the audiophile world, where the average listener can't hear or distinguish a difference in audio quality, yet it can be detected by trained listeners (golden ears) and backed up by measurements. For a long time we had audiophiles telling us they could hear a lot of problems with CD audio quality (weak bass, poor imaging, harsh treble), and we had to develop new ways of testing to figure out what was going on. They added dither, developed jitter testing, improved the noise floor, found spurious signals outside the audio band, etc. And now we have 24-bit audio that can sound freaking amazing. I think there are certain individuals with extraordinary palates who can consistently find faults and off-flavors in beer, and their results should be weighed more heavily than the average. Also, it sounds like Brulosophy now has a lab at their disposal. It will be very interesting to see lab results next to taster results.

As for the original question; I have not changed anything about my brewing. But I love reading the experiments.
 
The fact that the average palate can't detect a difference in most triangle tests doesn't mean there aren't differences in the beers; it means the average person can't detect them. We have the same issue in the audiophile world, where the average listener can't hear or distinguish a difference in audio quality, yet it can be detected by trained listeners (golden ears) and backed up by measurements. For a long time we had audiophiles telling us they could hear a lot of problems with CD audio quality (weak bass, poor imaging, harsh treble), and we had to develop new ways of testing to figure out what was going on. They added dither, developed jitter testing, improved the noise floor, found spurious signals outside the audio band, etc. And now we have 24-bit audio that can sound freaking amazing. I think there are certain individuals with extraordinary palates who can consistently find faults and off-flavors in beer, and their results should be weighed more heavily than the average. Also, it sounds like Brulosophy now has a lab at their disposal. It will be very interesting to see lab results next to taster results.

As for the original question; I have not changed anything about my brewing. But I love reading the experiments.

This. I will trust lab analysis over a triangle test performed by average Joes. If the goal is "most people can't tell the difference," then I suppose his methodology is acceptable. But that bar is not set high enough for me.

My goal for my brewing (which I'm not saying I'm anywhere remotely near) is the consistency and quality control of macro lager.
 
I haven't read many critiques of his sample sizes, but they are what they are. In my experience, those who criticize that kind of thing simply don't understand the statistics underlying the results.

I'll admit it's been a couple of years since my graduate statistics and research methods classes. But I recall that if you're using p-values and confidence levels for a survey, the point is to show that the results are representative of the target population, which in this case is presumably American homebrewers and other craft beer enthusiasts with an informed opinion on the relevant metrics he's testing. Even if you arbitrarily peg that population at an extremely conservative, say, 5,000 or even 1,000 people, a sample size of 15 is way, way too low to say your confidence level is 95% (p<.05) that your sample results are representative of the target population. Am I not remembering that correctly?

Again, I like his exbeeriments overall, and the process is great, I'd say. A lot of what he tests comes down to quantifiables that don't rely on survey data, like efficiency. No problem with any of that. And even for the tests that rely on surveys, a sample size of 12-15 is great for what he's doing, particularly considering his constraints. But the statistical terms he uses in the analysis mean specific things the way he's using them, and they strike me as out of place in this context.
 
I'll admit it's been a couple of years since my graduate statistics and research methods classes. But I recall that if you're using p-values and confidence levels for a survey, the point is to show that the results are representative of the target population, which in this case is presumably American homebrewers and other craft beer enthusiasts with an informed opinion on the relevant metrics he's testing. Even if you arbitrarily peg that population at an extremely conservative, say, 5,000 or even 1,000 people, a sample size of 15 is way, way too low to say your confidence level is 95% (p<.05) that your sample results are representative of the target population. Am I not remembering that correctly?

Again, I like his exbeeriments overall, and the process is great, I'd say. And a sample size of 12-15 is great for what he's doing, particularly considering his constraints. But those statistical terms mean specific things the way he's using them, and they strike me as out of place in this context.

Again, you are missing the fact that the statistical analysis is occurring on the triangle test and nothing else. A survey is being done, but no statistics are involved there. He is just giving the # of people who said the same/similar thing. He isn't using statistics to draw a conclusion from it.
 
The fact that the average palate can't detect a difference in most triangle tests doesn't mean there aren't differences in the beers; it means the average person can't detect them. We have the same issue in the audiophile world, where the average listener can't hear or distinguish a difference in audio quality, yet it can be detected by trained listeners (golden ears) and backed up by measurements. For a long time we had audiophiles telling us they could hear a lot of problems with CD audio quality (weak bass, poor imaging, harsh treble), and we had to develop new ways of testing to figure out what was going on. They added dither, developed jitter testing, improved the noise floor, found spurious signals outside the audio band, etc. And now we have 24-bit audio that can sound freaking amazing. I think there are certain individuals with extraordinary palates who can consistently find faults and off-flavors in beer, and their results should be weighed more heavily than the average. Also, it sounds like Brulosophy now has a lab at their disposal. It will be very interesting to see lab results next to taster results.

As for the original question; I have not changed anything about my brewing. But I love reading the experiments.

This. I will trust lab analysis over a triangle test performed by average Joes. If the goal is "most people can't tell the difference," then I suppose his methodology is acceptable. But that bar is not set high enough for me.

My goal for my brewing (which I'm not saying I'm anywhere remotely near) is the consistency and quality control of macro lager.

If a tree falls in the woods, does anyone hear it? If you can't taste a difference in the beer, is it really different?
 
Again, you are missing the fact that the statistical analysis is occurring on the triangle test and nothing else. A survey is being done, but no statistics are involved there. He is just giving the # of people who said the same/similar thing. He isn't using statistics to draw a conclusion from it.

He absolutely does that.
http://brulosophy.com/2015/08/10/ye...saison-vs-safbrew-abbaye-exbeeriment-results/

| RESULTS |

Over the course of 6 days, a pool of 22 people consisting of BJCP judges, Cicerone Certified Beer Servers, experienced homebrewers, and dedicated craft beer geeks participated in this xBmt. At this sample size, 11 (p<0.05) would be required to accurately select the odd-beer-out to reach statistical significance. Each taster was blindly served 1 sample of the Belle Saison beer and 2 of the Abbaye beer then asked to identify the one that was different. In all, 12 (p=0.017) participants made the correct selection, suggesting a statistically significant difference– the beer fermented with Belle Saison was reliably distinguishable from the same beer fermented with Abbaye.

| DISCUSSION |

The fact 2 yeasts intended for different styles produced beers that were reliably distinguishable from each other isn’t all that surprising and confirms what most of us believe– yeast is a huge player in the beer character game. What did surprise me a bit is the number of folks who were unable to tell the beers apart, which included some very experienced craft beer drinkers. Like I mentioned earlier, I was expecting this xBmt to be a slam dunk with a huge majority of tasters accurately selecting the odd-beer-out, which wasn’t the case, significance was reached due to a single response. Still, I feel comfortable saying Belle Saison produces a beer with different character than Safbrew Abbaye.


That was just the first one I clicked on.
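For what it's worth, the numbers in that quote are easy to sanity-check with an exact one-tailed binomial tail. The write-up's p = 0.017 may come from a pre-computed significance table or a normal approximation rather than this exact calculation, so the figure differs a little, but 12 of 22 clears the 0.05 bar either way:

```python
from math import comb

n, correct = 22, 12  # figures from the quoted xBmt results
# P(at least `correct` of n tasters guess right with 1-in-3 odds)
p = sum(comb(n, k) * (1/3)**k * (2/3)**(n - k) for k in range(correct, n + 1))
print(f"P(>= {correct} of {n} correct by chance) = {p:.3f}")  # → 0.033
```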
 
Overall I really enjoy reading the articles on the Brulosophy website, and I take time out nearly every week to do so. In general I take the findings there to be just one interesting data point, based on one particular experiment. However, sometimes the experiments touch on things that I've been thinking about myself, and his results encourage me to try them as well.

For example, I wondered about boiling for 60 vs. 90 minutes, saw his findings, gave it a shot myself, and was happy with the results. After repeating them a few times on my own, I've now switched over to 60-minute boils, and have yet to have any negative impact that I (or others who have drunk my beer) have perceived.

Similarly, I wondered if I really needed to keep everything in the fermenter for two weeks. I saw his quick fermentation article, gave it a shot, was happy with the results, repeated them a few times on my end, and now use a similar profile for most of my "smallish" beers.

Conversely I see other things, think that's all well and good, but don't attempt to change my brewing methods. An example here would be pure O2 when pitching yeast. It's cheap, I've already got the gear, and it almost certainly doesn't hurt anything, so I just keep doing it.
 
I love the work he is doing. I feel inspired to try some experimentation myself. I am happy to see long-standing practices challenged, but I also noticed that the process is meticulously followed and each step truly finished before the next is initiated. That is my biggest takeaway. The motto "Relax, don't worry..." comes to mind.
 
I love the articles. They help me relax a little about brewing, and try new processes without worry.


^^^^^^I agree.

Also, I now find I enjoy using the pair of 3 gallon carboys my daughters gave me after their short career of cider brewing. I will frequently split my batches to try different varieties of yeasts, different varieties of hops for dry hopping, various durations of dry hopping or just comparing dry hopping to no dry hopping at all. Kind of my own little xbeeriments.
 
He does admit that it is one data point. Denny and Drew of Experimental Brewing are leading some cooperative brewing experiments through their IGOR program. This means many brewers all doing the same experiment on their individual systems (so, multiple data points). Incidentally, the IGORs just completed a 170°F vs. 120°F hop stand experiment and found results similar to Marshall's.

The ugly truth behind science is that individual experiments are almost never reproduced by another lab, for many reasons (the main reason being in a publish or perish environment, why would you re-do experiments that have already been done and published?). Follow-on work may confirm previous results, but they won't be reproduced just to increase N. Statistics is a way to allow a given experiment to effectively stand on its own.

Was coming here to ask if anyone else is doing something similar to confirm some of the results from Bru's tests.

Like someone said earlier, it has changed some things, but more so it has instilled some confidence in certain aspects of my process.

Brewing has so many variables. It's not something you realize until you actually start brewing. I think my friends and family think you just boil some grains and hops and everything comes out fine. They have no idea about sanitation, pH readings, water additions, yeast pitch rates, etc. I absolutely LOVE what Brulosopher is doing and look forward to each exBeeriment. Even though it's only one data point on one person's brewing setup, it raises questions about traditionally held views. We need more groups or organizations to do the same to verify the results.

One thing that has changed for me is the yeast vitality starter. I have a 1-year-old, so sometimes it's hard to plan out a brew day 3 days in advance, which means no starter. When I see an opportunity to brew, I'll get 2 vials of yeast and make his vitality starter 4-5 hours before pitching.
 
He absolutely does that.
http://brulosophy.com/2015/08/10/ye...saison-vs-safbrew-abbaye-exbeeriment-results/

| RESULTS |

Over the course of 6 days, a pool of 22 people consisting of BJCP judges, Cicerone Certified Beer Servers, experienced homebrewers, and dedicated craft beer geeks participated in this xBmt. At this sample size, 11 (p<0.05) would be required to accurately select the odd-beer-out to reach statistical significance. Each taster was blindly served 1 sample of the Belle Saison beer and 2 of the Abbaye beer then asked to identify the one that was different. In all, 12 (p=0.017) participants made the correct selection, suggesting a statistically significant difference– the beer fermented with Belle Saison was reliably distinguishable from the same beer fermented with Abbaye.

| DISCUSSION |

The fact 2 yeasts intended for different styles produced beers that were reliably distinguishable from each other isn’t all that surprising and confirms what most of us believe– yeast is a huge player in the beer character game. What did surprise me a bit is the number of folks who were unable to tell the beers apart, which included some very experienced craft beer drinkers. Like I mentioned earlier, I was expecting this xBmt to be a slam dunk with a huge majority of tasters accurately selecting the odd-beer-out, which wasn’t the case, significance was reached due to a single response. Still, I feel comfortable saying Belle Saison produces a beer with different character than Safbrew Abbaye.


That was just the first one I clicked on.

Yes, that is statistical analysis being applied to the triangle test, not a survey. :confused:
 
I just find it quite funny that people are doubting his ability to produce really good beer consistently. I mean he's tweaking recipes in order to try to win the biggest homebrew comps out there. When he went to that one certain really big beer conference, he had way more than 15-20 people trying his beers. Yet many of those tested variables didn't reach statistical significance.

He even parsed out several experiments by judges vs. non-judges. The non-judges have even done better a couple of times. A judge's palate is not necessarily better at discerning differences between two beers. They're trained to recognize different characteristics of the beer (whether intentional or a flaw) and, most importantly, to be able to describe what they're tasting in detail. There's wide scientific evidence that everyone, whether they're a trained BJCP judge or not, has completely different taste thresholds for all the different compounds found in beer. So comparing a beer judge to a sound professional is comparing apples to oranges.

That being said, he specifically states (in literally almost every experiment) that this is one single data point. He encourages others to try these things out for themselves. He often states that these experiments lead to way more questions than they answer. In fact, if you have an issue with his experimental process, I know that he's always willing to accept changes in order to make it a more accepted practice.

I'm with a lot of people on here in that I don't completely change my process after reading these experiments. I don't take them as doctrine. But they do encourage me to at least try things out. In pursuit of consistently great beer, I always want to challenge any process I take as doctrine.

Lastly, for those who do change their brewing techniques because of these experiments, one has to realize that he's controlling for one variable at a time. Under-pitching, or even pitching dry, has been shown to make imperceptible differences... when every other variable is controlled nearly perfectly. Fermenting two lagers with 34/70 seems to create imperceptible differences... when every other variable is controlled nearly perfectly. But that doesn't necessarily mean you can under-pitch and also pitch warm. Maybe those who try his methods without achieving the same results should take that into consideration. He hasn't yet run any tests that compound two or more variables.
 
If a tree falls in the woods, does anyone hear it? If you can't taste a difference in the beer, is it really different?

This^ If it quacks like a duck, walks like a duck, and floats like a duck, it's still a good beer.

They address the "bad palate" argument quite well in http://brulosophy.com/2016/01/21/in...t-xbmt-performance-based-on-experience-level/

People with "good" or "trained" palates performed only marginally better than what some would consider "bad" or "untrained" palates. There may be some golden palates in the mix, but they can't be more than a tiny percentage of the population, so arguably (statistically) their results are not important to the discussion. Trained judges can sometimes pick out different beers simply because they have a wider vocabulary to describe flavors, but if you read that article, they performed barely better than the average Joes. As for competition brewers and people who want to make "perfect" beer: if you submit a "perfect" beer that ignores the Brulosophy findings, another person submits a beer that uses some of the info from that website, and the two can't be distinguished in a triangle test, then the judges have to fall back on intangibles like "which one do I like better," in which case repeatability and perfection don't mean jack when it comes to subjective palates.

To answer the question of the thread, a lot of the exbeeriments end up reinforcing my brewing practices. I didn't want to invest in things like an O2 system, for example, and it turns out that if you at least oxygenate a little and pitch good, healthy, vital yeast, you'll end up with good results regardless. Then again, the sugar-in-the-boil vs. sugar-during-fermentation exbeeriment is making me question my process on Belgian beers, though I add sugar during fermentation since I don't have a 5000 ml flask yet for yeast propagation.

Edit: Not to suggest I *knew* that skipping pure O2 would produce similar results, but I'm not the type of person who invests in every single piece of equipment and practice immediately when starting a hobby. I pick up bits and pieces along the way as I identify them as important. If pure O2 were the only way to oxygenate and make good beer, I would've noticed that immediately in my beers. As soon as I got my pitch rates down, my open-the-spigot-a-few-feet-above-the-fermenter waterfall oxygenation technique definitely appeared to provide enough oxygen to make good beers (considering I've now placed two beers at competition in the past 6 months).
 
Love his stuff; he seems like a great family guy as well. His exbeeriment on trub vs. non-trub totally struck a chord with me, as I have always been and always will be a non-meticulous, corner-cutting, lazy brewer. And ditching the secondary, which I had already done on my own, was completely cemented/justified for me by that experiment. Thanks, Marshall!

I haven't read the yeast experiment mentioned here, but I do my own thing with yeast slurry/starters, different from what seems like just about everybody else here, and it works great for me. Out of the fridge and into the wort in under an hour, and that's with only 1/2 to 1 cup of slurry and the same amount of fresh hot wort from the second runnings tossed in and shaken up. So I'm not sold on the whole have-to-have-a-liter-of-yeast-on-a-stir-plate-for-18-hours train of thought. His experiments are breaking down some long-standing traditions, in my opinion, as do some of my own; YMMV. We all have preconceived ideas about what is and isn't acceptable. I keep an open mind to anything that makes brewing easier, just as I do at my day job.

Enzymes and yeast don't have a brain; we humans mold them to our will within their capabilities. Taste tests are the bottom line and the proof of the pudding for me, as well as for 99.9% of non-brewer beer drinkers. The non-brewer doesn't care about process; it's all about the end product. The shortest route to good taste is what I seek, and I believe Marshall echoes that. Or at least that's my perception.
 