How much does Brulosopher affect your brewing?

Homebrew Talk - Beer, Wine, Mead, & Cider Brewing Discussion Forum

I have definitely changed the way I brew - not 100% because of Brulosopher, but based on what he, Denny/Drew, and others are doing (the Brewing Network has preached the same message for a long time as well).

The thing I have changed about my brewing is not a recipe, ingredient, temperature, timeline, etc. What I have changed is that I have attempted to be more analytical and precise about the beers I brew. I have started to brew the same beers over and over (..... and over, and over.....) tweaking variables, replicating recipes, process, etc. I am not doing it for statistical analysis, not organizing large tastings, not attempting to write up all my results. I am doing it for myself, my system and my beers - to get them the way I like them.

I think the single best piece of information coming from Brulosopher and others is simply this - test things for yourself. Adapt and adjust to your system, your tastes. Often, their message is simply this - "Does the effort/attention to detail matter to the average taster?" Even if there is a difference, if a large portion of people cannot discern it in any measurable way - is it worth doing? Maybe, maybe not.

So, yeah - I think what he (and others) has done has influenced my brewing - not so much any one ingredient or process - but philosophically, in how I analyze and attempt to learn from my own brewing. I think it is an approach that has helped make me a better, more consistent brewer as well.
 
If a tree falls in the woods, does anyone hear it? If you can't taste a difference in the beer, is it really different?


Well...it does, in fact, make a sound but if nobody is around to hear it...

That's my point - for most people it doesn't make any difference. I want to make the most extraordinary beer, and I have a few friends with excellent taste buds (better than mine) who have helped me hone in on certain things in my brewing that needed to be fixed...still fixing...but I'm getting excellent results!
 
Actually, now that I think about it more. The site has inspired me to NEVER try any sort of brewing experiment and report the results since people will $h!t on it no matter how deep of an understanding of statistics I have

LOL! That's the best thing I've read yet! Agreed.
 
I personally really enjoy it. It's helped me save money and make good beer.

One of my favorites was the yeast calculator vs. a single vial.

After going to an off flavor clinic the other day...I haven't ever tasted any off flavors in my beers that have been 'underpitched.'
 
I personally really enjoy it. It's helped me save money and make good beer.

One of my favorites was the yeast calculator vs. a single vial.

After going to an off flavor clinic the other day...I haven't ever tasted any off flavors in my beers that have been 'underpitched.'

That's because you don't have hundreds of trained beer judges tasting it. Then you are bound to pick up off-flavors because...statistics.:)
 
Brulosopher's done a good job of:

1: showing that ability to "taste" basically can't be trained. Either that, or BJCP employs low standards regarding tasting ability.
2: putting generally accepted processes to the test. Too much of home brewing precepts and standards are simply lore. Taking commercial and historical brewing processes, misinterpreting the science, then inappropriately trying to apply them to homebrewing is not helping our hobby. It's nice that Brulosopher is questioning these sacred cows.
3: showing that minor differences reallllly don't matter as much as you would think.
 
I think he mentions this several times in the exbeeriments discussion areas, but a lot of our current mantras on brewing were developed from commercial brewing or from old school homebrewing practices - two contexts with very different goals in mind.

Commercial brewing is trying to create beer that's tasty, marketable, easy to reproduce repeatedly, and quick to turn around. They are also dealing with hundreds of times more volume than the average homebrewer, so they have very different challenges.

Old school homebrewers from the 80s/90s *never* had access to the same variety of ingredients, and the freshness of those ingredients never came close to what we have now. Think of not having Star San or iodophor to sanitize with, or yeast that's *old* as hell and hasn't been kept in good refrigeration, or brown hops, or malt extract that's years old. They had to take every precaution in the book to prevent baby diaper beer or Band-Aid beer.

Pitching a single vial or smack pack of ancient yeast that's not been taken care of, compared to pitching a single vial/smack pack of very fresh, well-cared-for yeast, will produce VERY different results. On the surface, though, both will look like the same bad practice. (I think. Man, I'd hate for them to do an exbeeriment with a 2 year old vial of yeast pitched straight into a batch next to a month old vial of the same yeast and have it turn out there's statistically no difference.)
 
I'll admit it's been a couple years since my graduate statistics and research methods classes. But I recall that if you're using p-values and confidence levels for a survey, the point is to show that the results are representative of the target population, which in this case is presumably American homebrewers and other craft beer enthusiasts with an informed opinion on the relevant metrics he's testing. Even if you arbitrarily peg that number at an extremely conservative, say, 5,000 or even 1,000 people, a sample size of 15 is way, way too low to say your confidence level is 95% (p<.05) that your sample results are representative of the target population. Am I not remembering that correctly?

Well, not exactly. What the p value indicates is how likely the results could have been produced just by random guessing by the raters. It doesn't indicate anything about representativeness in the population because the sample itself wasn't randomly selected from the entire population of raters, beer drinkers, or whatever that population might be--and even then, that "population" is local to Brulosopher. It's not all beer drinkers.

It's sort of like surveying people walking into a Thai restaurant. You're defining your sample as those who presumably like Thai food - and those who would never patronize that restaurant can never appear in the sample.

That's why, IMO, his approach is more than reasonable, and why (IMO) those who fixate on sample size may not really understand either sampling or Type I error.

All he's doing is saying how likely the result would be if the raters were choosing randomly rather than based on ability to distinguish the beers. Nobody should read more into it.

[Side note: a 'sampling frame' is the "list" of all possible participants in a study or survey; it matters in terms of what is truly represented in the results. I don't know what population is represented in Brulosopher's tests--just that it includes people who like to drink beer *and* who are convenient to him. I'm never included, for instance, because I don't live there.]

Again, I like his exbeeriments overall and the process is great, i'd say. A lot of what he tests comes down to quantifiables that don't rely on survey data, like efficiency. No problem with any of that. And even for the tests that rely on surveys, a sample size of 12-15 is great for what he's doing, particularly considering his constraints. But the statistical terms he uses in the analysis of those mean specific things the way he's using them, and strike me as out of place in this context.

Some of the results indicate differences, but he or his associates also note that while flavors might be different, it's up to the individual to decide what they like. I'm always interested when he notes a difference and then the raters differ on which one they like--meaning, the independent variable produced a difference identifiable by the raters beyond what one might expect by chance, but that difference is hedonic in nature, i.e., what pleases them more. Unless a vast majority of raters say one of the two beers is worse or bad, then some of the results are just noting that there *are* differences...in his study....with these people. :)
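For anyone who wants to see what that p value actually is, the math is just the binomial tail under random guessing (a 1-in-3 chance per taster in a triangle test). Here's a quick sketch in Python - the 20-taster numbers are my own toy example, not from any particular exbeeriment:

```python
from math import comb

def triangle_test_p(correct, tasters, chance=1/3):
    """One-sided exact binomial tail: the probability that `correct` or more
    tasters pick the odd beer out if every taster guesses at random (1-in-3)."""
    return sum(
        comb(tasters, k) * chance**k * (1 - chance)**(tasters - k)
        for k in range(correct, tasters + 1)
    )

# With 20 tasters, 11 correct picks is the smallest count that beats
# random guessing at the usual p < 0.05 threshold.
print(round(triangle_test_p(11, 20), 3))   # 0.038 -> significant
print(round(triangle_test_p(10, 20), 3))   # 0.092 -> not significant
```

That's all the p value is saying: how surprising this many correct picks would be from pure guessing. Nothing in that calculation refers to any larger population.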
 
I love the articles. They help me relax a little about brewing, and try new processes without worry.

Brulosophy reads to me like a more modern and data driven RDWHAHB. In the end, beer will be made, and it will probably be good. If I only have 45 minutes for a boil, probably will still be fine. If my pitch rate was off, or all I had was an old leftover slurry, probably still fine.

More than relaxing, it has also given me perspective on how I handle recipe "tweaks". I am no longer going to re-brew a batch with a 154F mash instead of a 151F and hope to experience a change in flavor.


Actually, now that I think about it more. The site has inspired me to NEVER try any sort of brewing experiment and report the results since people will $h!t on it no matter how deep of an understanding of statistics I have

Also this... If I get 15 of my closest beer drinking friends together and more than half can't tell two beers apart (including myself), then they're close enough for me. I'm not going to start hosting wedding receptions just to get an adequately large sample size.
 
Well, not exactly. What the p value indicates is how likely the results could have been produced just by random guessing by the raters. It doesn't indicate anything about representativeness in the population because the sample itself wasn't randomly selected from the entire population of raters, beer drinkers, or whatever that population might be--and even then, that "population" is local to Brulosopher. It's not all beer drinkers.

It's sort of like surveying people walking into a Thai restaurant. You're defining your sample as those who presumably like Thai food - and those who would never patronize that restaurant can never appear in the sample.

That's why, IMO, his approach is more than reasonable, and why (IMO) those who fixate on sample size may not really understand either sampling or Type I error.

All he's doing is saying how likely the result would be if the raters were choosing randomly rather than based on ability to distinguish the beers. Nobody should read more into it.

[Side note: a 'sampling frame' is the "list" of all possible participants in a study or survey; it matters in terms of what is truly represented in the results. I don't know what population is represented in Brulosopher's tests--just that it includes people who like to drink beer *and* who are convenient to him. I'm never included, for instance, because I don't live there.]

Fair enough regarding p-values, although the non-randomness of his sample selection itself doesn't mean much - tons of surveys and other human research suffer from some degree of sampling bias because of funding and other constraints that make them non-random in one way or another. I think their representativeness for the overall population of interest is implied pretty heavily.

More importantly, given that the p-value only measures non-randomness within the sample group, as you say, I still maintain that Brulosopher quite often states conclusions in a way that generalizes his results to all craft beer drinkers. Of course, it's admittedly hard to avoid doing that without super-dry rambling sentences about methodology.;)
 
Yes, that is statistical analysis being applied to the triangle test, not a survey. :confused:

My bad, I misunderstood your original post. You are correct that it's not a survey.

But at the end of the day, it's a distinction without a difference. If the triangle test is what the conclusions are based on, that's the part that matters.
 
If a tree falls in the woods, does anyone hear it? If you can't taste a difference in the beer, is it really different?

Different people have different sensitivities. I can't taste diacetyl to save my life but I'm hypersensitive to isoamyl acetate and pyrazine. Just because I can't taste diacetyl doesn't mean it's not there to another taster. Just because someone else doesn't taste jalapeño in their stout doesn't mean it's not clear as day to me. My goal is that NO ONE could detect any flavor (aroma, mouthfeel, whatever the case may be) that I do not explicitly intend to be there.

Almost certainly an unattainable standard (as when warm I can pick up isoamyl acetate even in most macro lagers), but that's what I strive for.
 
A judge's palate is not better at discerning differences between two beers, necessarily.

Brulosopher's done a good job of:

1: showing that ability to "taste" basically can't be trained. Either that, or BJCP employs low standards regarding tasting ability.

Ding, ding, ding, ding....

"trained beer judges" need to be further explained/defined.

I passed the BJCP test, I'm a recognized BJCP judge yet I wouldn't consider myself to be a "trained judge". There's not much training involved in becoming BJCP recognized. You have to fill out a thorough score sheet and get sorta close to the proctors in your scores/evaluations of 6 beers....




Triangle tests are an important distinguisher from "[insert whatever technique/variable] works fine for me!!!"

If you haven't done a blind triangle test - do it! It's interesting to see how your perceptions of aroma/flavor/mouthfeel/etc. work when you don't know what you're tasting.




Oh, and FWIW, I participated in 2 of Marshall's xBeeriments at NHC, and I didn't pass the triangle test either time. ;)
 
Different people have different sensitivities. I can't taste diacetyl to save my life but I'm hypersensitive to isoamyl acetate and pyrazine. Just because I can't taste diacetyl doesn't mean it's not there to another taster. Just because someone else doesn't taste jalapeño in their stout doesn't mean it's not clear as day to me. My goal is that NO ONE could detect any flavor (aroma, mouthfeel, whatever the case may be) that I do not explicitly intend to be there.

Almost certainly an unattainable standard (as when warm I can pick up isoamyl acetate even in most macro lagers), but that's what I strive for.

I'm totally with you here. Actually, even with the isoamyl acetate - I can't stand most saisons for this reason. Just made a pale ale for a competition that I think has a bit of it in there, and although I know I controlled for the usual suspects, I'm wondering if it's coming from the hops. And then I'm wondering if the judges will be able to pick up on it.

And this is my point about the differences between the average taster and the BJCP judge. One isn't better than the other when it comes to tasting the different characteristics, just at looking for them and knowing how to describe them. And even if a BJCP judge isn't sensitive to banana overload, that won't make them a bad judge.

I also want to strive for the best beer, and won't cut corners unless I can still make the best that I possibly can. I think Marshall and his fellow contributors are the same as you and I. I don't think that they're testing these variables in order to cut corners and still make "good" beer. I think they're just wondering what we do that's actually useful, and what we don't necessarily need to fret about.
 
Not gonna lie, I like his process. I also tend to really like his stuff when it comes to making brew day easier/shorter. I quit boiling for 90 minutes, even with pilsner malts. I am also considering how often I really should be using pure o2.

What have you changed?

Back to regularly scheduled programming....

I now do a 30 minute boil, and my beers are still turning out great.

I've never worried about trub, dumping the entire kettle contents into the fermenter. His trubby vs non-trubby experiment confirmed what I already knew.
 
I also want to strive for the best beer, and won't cut corners unless I can still make the best that I possibly can. I think Marshall and his fellow contributors are the same as you and I. I don't think that they're testing these variables in order to cut corners and still make "good" beer. I think they're just wondering what we do that's actually useful, and what we don't necessarily need to fret about.

Near the end, in the discussion section, they quite often ask "Will this change my brewing process?" Most of the time the answer is "no".

It almost always comes down to a time/money investment equation. There were probably a lot of people who read the decoction exbeeriment and completely stopped decoction mashing. There are probably some that read the pure O2 exbeeriment and decided to skip pure O2 and just get their pitch rate down and started pitching really vital healthy yeast (I was on the fence until I read this exbeeriment and decided that I don't need to buy anything for pure O2).

Then the reverse also probably occurred. I read the decoction mashing exbeeriment and decided I'd still rather do that than having another grain on hand (melanoidin) just to match the same character. And others saw the pure O2 and didn't toss their pure O2 in the garbage but kept it since it would help remove variables.
 
One thing he gave me confidence on: brewing a lager using that one dry lager yeast (can't remember which one right now!) at higher temps than most lagers. At best I can get 52, maybe 50 degree (F) ambient temps in my cooler, but that's not wort temps. Seems that dry lager yeast can handle that no problem.

:ban:
 
Different people have different sensitivities. I can't taste diacetyl to save my life but I'm hypersensitive to isoamyl acetate and pyrazine. Just because I can't taste diacetyl doesn't mean it's not there to another taster. Just because someone else doesn't taste jalapeño in their stout doesn't mean it's not clear as day to me. My goal is that NO ONE could detect any flavor (aroma, mouthfeel, whatever the case may be) that I do not explicitly intend to be there.

Almost certainly an unattainable standard (as when warm I can pick up isoamyl acetate even in most macro lagers), but that's what I strive for.

OK, but you aren't a special snowflake (I hope this doesn't come as a surprise to you). If you can taste it, chances are someone else can too, and with enough tasters you should be able to statistically find a difference. If you were the only one to taste something then we would call into question if you really knew what you were tasting and your observation would likely be discarded as an outlier and your opinion wouldn't matter anyway.
 
OK, but you aren't a special snowflake (I hope this doesn't come as a surprise to you). If you can taste it, chances are someone else can too, and with enough tasters you should be able to statistically find a difference. If you were the only one to taste something then we would call into question if you really knew what you were tasting and your observation would likely be discarded as an outlier and your opinion wouldn't matter anyway.

Yes, but this is where a sample size of 10-15 people becomes the issue. It's too small a sample to imply much. A sample of even 100 people would be a different story.
 
I tend to agree that people need to be careful of taking the statistical results as gospel. For example, take the most recent lager fermentation experiment. 26 testers, with 12 (46%) choosing the correct odd sample. Random would be 33% in the triangle test. While it may not have hit the chosen statistical threshold for significance, it seems that some people could tell a difference in the samples at different ferment temps. Are those people somehow wrong? If you're trying to tailor your beer to the average palate, then is your beer just that, average?

Edited to add that the above comes off as critical but that wasn't intended. I like the blog and enjoy reading all of the articles. It probably has influenced me, for example I don't fret much about not hitting attenuation expectations anymore.
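To put rough numbers on that lager example: plugging 12 of 26 into the exact binomial tail (1/3 chance of a correct guess per taster) shows why it fell short of significance, and what it would have taken. A quick sketch in Python - my own calculation, not from the blog:

```python
from math import comb

def p_value(correct, tasters, chance=1/3):
    """Chance of `correct` or more right picks if all tasters guess randomly."""
    return sum(comb(tasters, k) * chance**k * (1 - chance)**(tasters - k)
               for k in range(correct, tasters + 1))

# The lager-fermentation numbers from the post: 12 of 26 picked the odd beer.
print(round(p_value(12, 26), 3))   # 0.121 -> well short of p < 0.05

# Smallest number of correct picks 26 tasters would have needed:
needed = next(k for k in range(9, 27) if p_value(k, 26) < 0.05)
print(needed)                      # 14
```

So 12 of 26 isn't "wrong", it's just a result that random guessing would produce about 12% of the time - which is exactly the ambiguity the post is pointing at.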
 
Yes, but this is where a sample size of 10-15 people becomes the issue. It's too small a sample to imply much. A sample of even 100 people would be a different story.

Don't they address this issue by the fact that it's not a double blind test? Every time they do the triangle test, the understanding is that one of the beers is different. Each person taking the triangle test ***KNOWS*** that one of the beers is going to be different. Mentally they are trying to pick out which beer is different because they know one of the beers is going to be different.

The fact that the people taking the test know that one of the beers is different and they are STILL not reaching significance is (to me) more important than the sample size.

Expectations vs. perceptions
 
Yes, but this is where a sample size of 10-15 people becomes the issue. It's too small a sample to imply much. A sample of even 100 people would be a different story.

Statistically significant or not, that's still a larger sample size than I am likely to pull together to dispute the results.
 
I successfully used a somewhat modified version of the Lager method. It is my first lager though so I can't really compare the results to the "old school". The method makes total sense to me and seems to jibe with current information on how yeast seems to work, when it adds its own character to the beer, how it cleans up, etc.
Also reading through the xBMTs my take is that some of old methods, by themselves might not make a difference.
Could be that multiple, less-controlled variables resulted in those earlier methods being developed and adopted.
 
I think the experiments are much better than most I've seen applied in the hobby. I give a lot more weight to his experiments than I do to side by side experiments that do not include the blind triangle test.
 
Don't they address this issue by the fact that it's not a double blind test? Every time they do the triangle test, the understanding is that one of the beers is different. Each person taking the triangle test ***KNOWS*** that one of the beers is going to be different. Mentally they are trying to pick out which beer is different because they know one of the beers is going to be different.

The fact that the people taking the test know that one of the beers is different and they are STILL not reaching significance is (to me) more important than the sample size.

Expectations vs. perceptions

Statistically significant or not, that's still a larger sample size than I am likely to pull together to dispute the results.

Not trying to fault them for their methods. They are sound enough for their means and they do obviously take it seriously. Larger samples and laboratory levels of control are obviously beyond their scope.

My point is more a caution to readers against extrapolating more from their results than is due, and they regularly make the same point as well.


Hence why I find the results interesting - definitely warranting further discussion and further experimentation. But they don't "blow open" much as far as I'm concerned.
 
I love reading his articles, but I just hate it when people quote him like he's gospel. Then they post a link to one of his xbeerments as some kind of proof that their chosen method is best.

As has been said, his experiments are great as a basis for further thought and experimentation, but in themselves prove absolutely nothing.
 
I am guilty of throwing in a Brulosopher link, but not as proof of anything. If someone says something isn't right, or even can't be done, I just point out "This guy here did it, and got away with it to boot, so at least there's a question" and leave it at that.

:)
 
I love reading his articles, but I just hate it when people quote him like he's gospel. Then they post a link to one of his xbeerments as some kind of proof that their chosen method is best.

As has been said, his experiments are great as a basis for further thought and experimentation, but in themselves prove absolutely nothing.

They prove plenty - maybe not what you want them to prove, though. ;)
 
I love reading his articles, but I just hate it when people quote him like he's gospel. Then they post a link to one of his xbeerments as some kind of proof that their chosen method is best.

As has been said, his experiments are great as a basis for further thought and experimentation, but in themselves prove absolutely nothing.

Empirical data > anecdotes and lore. Discussing his methodology, results, or statistical analysis is all well and good, but any competing theory or idea must be evaluated to the same rigor. So...if you want to argue the null hypothesis, prove it.
 
If anything, the Brulosophy blog has made me question many "well known brewing truths" and keep an open mind about differing techniques. I've reduced boil times to 45-60 minutes, switched to BIAB and pitched less yeast based on the experiments.

Some people are happy to stay with what works for them and that's great. By nature I love to experiment and appreciate them being the Myth Busters of brewing. I'm also lazy so anything I can do to shorten brew day work and time (without sacrificing quality) is awesome.
 
Yes, but this is where a sample size of 10-15 people becomes the issue. It's too small a sample to imply much. A sample of even 100 people would be a different story.

I don't really know that this is the case, especially if we don't know that the sample is representative. All his samples have been what we call in the business "convenience samples." Nothing wrong with that, and he does not attempt to make his sample out to be something it's not.

As far as sample size--maybe, but quite possibly not.

That's not how this works. In the z formula for proportions, the denominator includes a division by "n" - that's how the formula controls for sample size. The larger the "n", the smaller the denominator, the larger the resulting z value, and the smaller the p-value.

You can have tiny differences between two groups be statistically significant, and large differences between two groups not reach significance. Why? Because of sample size.

As it is, the results speak for themselves....there is a difference, the difference can be perceived by beer drinkers at a rate significantly better than random chance, and that's that.

Now, in fairness--and I'll grant this in relation to sample size, but only to a point--any single experiment is good information but it becomes GREAT information when it can be replicated. And in some cases he tries to do that. Replication would work best when done by someone else, but either way it's a good thing.
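To illustrate the point about "n" in the denominator, here's a toy sketch in Python of the one-sided z test for a proportion against the 1/3 triangle-test chance rate. The numbers are invented for illustration (and this is the normal approximation - a real triangle test analysis would typically use the exact binomial):

```python
from math import sqrt
from statistics import NormalDist

def z_test_p(correct, n, p0=1/3):
    """One-sided z test of an observed proportion against chance rate p0
    (normal approximation; standard error shrinks as n grows)."""
    phat = correct / n
    z = (phat - p0) / sqrt(p0 * (1 - p0) / n)
    return 1 - NormalDist().cdf(z)

# The same 50% correct rate gives opposite verdicts once n changes:
print(round(z_test_p(8, 16), 3))    # ~0.079 -> not significant at n=16
print(round(z_test_p(50, 100), 4))  # ~0.0002 -> highly significant at n=100
```

Same observed difference from chance, wildly different p-values - purely because of sample size, exactly as described above.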
 
RE: triangle test stats - it's based upon generally accepted sensory science methods. That said, the sample sizes are underwhelming.

http://www.sensorysociety.org/knowledge/sspwiki/Pages/Triangle Test.aspx

Brulosopher is a frustrating website for me. Sometimes they do things that really tell me a lot, like the trub test.

Other times, they compare things that I feel should not be compared. Hop Stand vs Dry Hop? I'm doing the hop stand to get most of my IBUs, vs getting them from boil additions - not as a Dry Hop replacement. The added flavor or aroma that I get from it is in comparison to boil hop additions, I (and basically everybody else brewing IPAs) still dry hop. If you compare things that have different purposes and then say "the beer was different!" - I just shake my head and walk away.
 
Maybe his yeast temp work will push me to try a lager soon.


His post made me decide to try a lager in my basement that stays at 62*... Then a different post of his made me say "f it" and use wlp-029 instead of a lager yeast.

I am brewing his Munich helles this weekend. :mug:
 
RE: triangle test stats - it's based upon generally accepted sensory science methods. That said, the sample sizes are underwhelming.

http://www.sensorysociety.org/knowledge/sspwiki/Pages/Triangle Test.aspx

Brulosopher is a frustrating website for me. Sometimes they do things that really tell me a lot, like the trub test.

Other times, they compare things that I feel should not be compared. Hop Stand vs Dry Hop? I'm doing the hop stand to get most of my IBUs, vs getting them from boil additions - not as a Dry Hop replacement. The added flavor or aroma that I get from it is in comparison to boil hop additions, I (and basically everybody else brewing IPAs) still dry hop. If you compare things that have different purposes and then say "the beer was different!" - I just shake my head and walk away.

Lots of problems with whirlpool vs dry hop. No mention of yeast biotransformation of oils (ie, the flavor profile from pre-fermentation hop additions will be DIFFERENT than post-fermentation hopping!) No mention of alpha acid isomerization temperature or boil-off temperature of volatile hop aromatic oils. Also, 20 minutes isn't exactly a great steep time. The literature I've seen suggests 80 minutes is ideal.
 
Lots of problems with whirlpool vs dry hop. No mention of yeast biotransformation of oils (ie, the flavor profile from pre-fermentation hop additions will be DIFFERENT than post-fermentation hopping!) No mention of alpha acid isomerization temperature or boil-off temperature of volatile hop aromatic oils. Also, 20 minutes isn't exactly a great steep time. The literature I've seen suggests 80 minutes is ideal.

I guess I never got the impression they were setting out to explain why something is different or not. They were just testing *IF* something is different.

The science of why something is different is interesting. But I am not a scientist, I just brew beer in my basement. I don't *need* to know all of the science behind why a whirlpool is different than a dry hop, I do understand some of the basics though. I'd bet almost everyone walked in saying "yeah its definitely gonna reach significance" but no one had been able to say "I've tried this side by side and I *know* its different".
 
Lots of problems with whirlpool vs dry hop. No mention of yeast biotransformation of oils (ie, the flavor profile from pre-fermentation hop additions will be DIFFERENT than post-fermentation hopping!) No mention of alpha acid isomerization temperature or boil-off temperature of volatile hop aromatic oils. Also, 20 minutes isn't exactly a great steep time. The literature I've seen suggests 80 minutes is ideal.

If anything, I could have seen comparing these 4: 1. mid-boil, 2. hopburst, 3. hopstand, 4. start of fermentation dry hop. All pre-fermentation hop additions that in some way supposedly enhance flavor or aroma. Hop stand compared to post fermentation dry hop is kind of a "no duh" result.

I guess I never got the impression they were setting out to explain why something is different or not. They were just testing *IF* something is different.

The science of why something is different is interesting. But I am not a scientist, I just brew beer in my basement. I don't *need* to know all of the science behind why a whirlpool is different than a dry hop, I do understand some of the basics though. I'd bet almost everyone walked in saying "yeah its definitely gonna reach significance" but no one had been able to say "I've tried this side by side and I *know* its different".

I think this is a fair assessment of why some people like the things that the site does more than others. If you have a science background, you hear "there's a difference" and immediately say "show me the mechanism." If you don't have a science background, then you don't see what the big deal is - now you know that the two different conditions will make different beer!

I do get irritated when they compare things that inherently have other interfering variables. Even if you show a difference, I can't know if it will apply to my process unless I know the mechanism behind your difference. There was a post they did that was clarity ferm vs gelatin. Those do totally different things! But since all they cared about was the final result, it seemed fine to them.
 
One thing he gave me confidence on: brewing a lager using that one dry lager yeast (can't remember which one right now!) at higher temps than most lagers. At best I can get 52, maybe 50 degree (F) ambient temps in my cooler, but that's not wort temps. Seems that dry lager yeast can handle that no problem.

:ban:

Saflager 34/70. Its ideal range is 12-15C (53.6-59F)! If you can hit 52 ambient, I'm guessing you'll stay in its ideal range, especially since fermentations aren't as vigorous at those temps, so the difference between ambient and wort temps shouldn't be as extreme as with ales. So even without the experiment you read, you could've been making lagers with that yeast ever since you could get ambients down to the low 50s! And that's shooting for the ideal range - its actual range goes as warm as 22C (71.6F)!
 
RE: triangle test stats - it's based upon generally accepted sensory science methods. That said, the sample sizes are underwhelming.

http://www.sensorysociety.org/knowledge/sspwiki/Pages/Triangle Test.aspx

Brulosopher is a frustrating website for me. Sometimes they do things that really tell me a lot, like the trub test.

Other times, they compare things that I feel should not be compared. Hop Stand vs Dry Hop? I'm doing the hop stand to get most of my IBUs, vs getting them from boil additions - not as a Dry Hop replacement. The added flavor or aroma that I get from it is in comparison to boil hop additions, I (and basically everybody else brewing IPAs) still dry hop. If you compare things that have different purposes and then say "the beer was different!" - I just shake my head and walk away.

But you're one person who's using that method in order to get IBUs. Many people are using the hopstand/whirlpool addition in order to enhance the flavor/aroma. Exactly the intentions of the dry hop. To the point that many people are even adding more hops, in multiple temperature rest additions, than dry hops.

So to get upset at why they were testing whether or not there was a difference between the two is a big misunderstanding of the popular techniques now when it comes to IPAs and pale ales.

And they're set up to test the difference between two different variables. Then, when a very commonly preached method gets no statistical significance, their response is usually, "I was surprised by this, as it even goes against the way I brew, so now it raises more questions than it answers." In other words, they're asking for a discussion, for guys like you and me to try this stuff out for ourselves.

So if you want to see 4 or 5 different variables compared to each other, and have the means to do so, and have the means to make it a proper experiment, then by all means, do it! It's what they would want! It's what we all would want to see! But to complain that they're not doing it the way you would want to do it is nothing more than words on a screen. If you don't like the way they're performing their experiments, then feel free to start doing your own.
 
1. In my mind, you aren't in a position to critique how the work was done if you aren't willing to get off your fat ass and do the work yourself.

2. This guy is brewing beer and trying to apply some scientific method to the process (and by his own admission there are holes in the experiments). This isn't a NIH/NSF funded peer-reviewed study.

Take the results for what they are worth.
 