Yeast pitch rate: Single vial vs. Yeast starter | exbeeriment results!

Like your previous Exbeeriments, very interesting!

It would be interesting to know what happens if we consolidate all the results into one "you're doing it wrong" exbeeriment: one half done by conventional wisdom, the other half done with all the trub, pitched warm, fermented warm, no starter, 30 min mash, 30 min boil and hot side aeration.
 
This is my favorite one so far. I love it. I haven't used a yeast calculator for over 6 months and I'm glad. I've felt they are unnecessary. Lo and behold, it's proven.
 
Lo and behold, it's proven.

The experiment in question is a single snapshot with low statistical power that relies mostly on sensory evaluation. Do all yeasts behave this way? Does WLP090 behave like this in other worts? Even in the same wort, what percentage of the time does an underpitched batch still complete fermentation without undesirable off-flavors?

While interesting, I would be very cautious about generalizing from this. Will an underpitched beer necessarily give you problems or taste bad? Certainly not. Does underpitching increase the odds of these things? It certainly does.
 
The experiment in question is a single snapshot with low statistical power that relies mostly on sensory evaluation. Do all yeasts behave this way? Does WLP090 behave like this in other worts? Even in the same wort, what percentage of the time does an underpitched batch still complete fermentation without undesirable off-flavors?

While interesting, I would be very cautious about generalizing from this. Will an underpitched beer necessarily give you problems or taste bad? Certainly not. Does underpitching increase the odds of these things? It certainly does.

This is all conjecture. We've had years and years of articles and calculators from people claiming to have the truth about 'underpitching.' Is there such a thing...really?

Of course this experiment relies on sensory evaluation. After all, every drinker's enjoyment of a beer is subjective, based on personal perception.

If someone cannot distinguish a so-called 'underpitched' beer from one pitched per a yeast calculator (there are many, and they disagree), why should we continue to 'overpitch' beer?

This experiment only proves to me (as I already knew) that pitching a large amount of yeast into a batch of beer starts fermentation faster and more vigorously.

Sure, this used one kind of yeast, but I would truly be shocked if it turned out any other way with any other yeast pitched at the manufacturer's recommended temperatures.
 
The experiment in question is a single snapshot with low statistical power that relies mostly on sensory evaluation. Do all yeasts behave this way? Does WLP090 behave like this in other worts? Even in the same wort, what percentage of the time does an underpitched batch still complete fermentation without undesirable off-flavors?

While interesting, I would be very cautious about generalizing from this. Will an underpitched beer necessarily give you problems or taste bad? Certainly not. Does underpitching increase the odds of these things? It certainly does.

I was thinking the use of a "Super Yeast" like WLP090 might have had an effect on the experiment. A lower-attenuating yeast with a lot of character would have been more interesting.
 
I've pitched single vials of lager yeast into a 3-gallon batch at 50°F a few times, which according to Mr Malty is underpitching by about half. Yes, they were slow starters, 36+ hours, but they still fermented out in about the same time as a properly pitched wort would, and I couldn't tell any taste difference. I don't underpitch anymore because I feel there is more chance of contamination.
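(As a quick sanity check on that "about half" figure: a minimal sketch, assuming a fresh ~100-billion-cell vial, the commonly cited lager rate of 1.5 million cells/ml/°P, and a ~12°P wort, since the actual gravity isn't given.)

```python
# Rough check of the "underpitching by about half" claim.
# Assumptions (not from the post): fresh vial ~100 billion cells,
# lager rate 1.5M cells/ml/degree Plato, wort around 12 degrees Plato.
ML_PER_GAL = 3785.41

target = 1.5e6 * (3 * ML_PER_GAL) * 12   # cells wanted for 3 gallons
print(f"target: {target / 1e9:.0f} billion cells")        # ~204 billion
print(f"one vial covers {100e9 / target:.0%} of target")  # ~49%
```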
 
You have to take these exbeeriments for what they are, of course! Anybody saying that these guys are claiming to be the end-all, know-all is not reading the experiments for what they are. They're not chasing science, per se. They're chasing the average beer drinker, the average homebrewer, the experienced judge. They're chasing perception. It's not like they have all newb beer drinkers on the panels (although I believe a few of them have a "BMC-drinking friend" involved).
Science only proves so much, and then comes perception. The perception of this particular yeast strain is that there is no perceptible difference between the pitching rates for this size of beer.
From the article:
"I’m interested to see this xBmt repeated at some point with a beer that is yeast-driven in nature, like an English or Belgian style ale, as my inclination is that we may potentially see different results when yeast character is the star of the show."
So even the person performing the experiment has the same quandary.
That being said, these experimenters know the parameters of the experiment and are in no way saying these results should become the new method for every single style of beer, simply that we should be rethinking our methods at the homebrew level.
 
Great write-up on the experiment. Quick starts with yeast help prevent potential clove-like flavors from baddies, which are usually subtle and most noticeable in cleaner, malty beer styles. My pro-brewer friend has always scolded me for slow starts, as he picks out the off-flavors rather easily in any beer. So I got into the habit of doing starters or repitching yeast for every batch.

The biggest reason to do a starter, though, is to assure yourself that the yeast is ready to go and not dead or dormant from mistreatment somewhere along the way. Too many times I have done starters that were sluggish even though the vial was not past its date. Doing the starter 2-3 days ahead of time usually (but not always) wakes up dormant yeast.
 
The experiment in question is a single snapshot with low statistical power that relies mostly on sensory evaluation. Do all yeasts behave this way? Does WLP090 behave like this in other worts? Even in the same wort, what percentage of the time does an underpitched batch still complete fermentation without undesirable off-flavors?

While interesting, I would be very cautious about generalizing from this. Will an underpitched beer necessarily give you problems or taste bad? Certainly not. Does underpitching increase the odds of these things? It certainly does.

This experiment only proves to me (as I already knew) that pitching a large amount of yeast into a batch of beer starts fermentation faster and more vigorously.

Sure, this used one kind of yeast, but I would truly be shocked if it turned out any other way with any other yeast pitched at the manufacturer's recommended temperatures.

I think Arcane's point was not that this experiment doesn't provide a data point, but that it should be considered a single data point with little statistical significance. Meaning it doesn't prove anything, really; it just gives data. Imagine flipping a coin a single time: if it lands heads, are you convinced the coin is 100% biased (i.e. that it will never land tails)? I wouldn't be.

Further, my immediate thought on the experiment is that it was done in a beer with (a) a "mild" yeast, as I believe people have commented on already, and (b) a number of additional flavors to "hide behind," since the beer is a hoppy West Coast amber. Though I cannot confirm it because I have not done the experiment, I would bet that if you pitched a single vial of lager yeast (say WLP830) into 5 gallons of a premium American lager, or basically anything from BJCP category 1 over 1.050 O.G., and separately propagated a single vial to what is generally accepted as a "proper" pitch rate for 5 gallons of that style (1.5 million cells/ml/°Plato), *pitched* the yeast into each beer at 50°F, and fermented in the middle of the yeast's optimum range, the results would indicate the opposite of what they do here.
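To make that rate concrete, here's a minimal sketch of the arithmetic (assuming the nominal ~100-billion-cell count for a fresh vial; actual viability varies with age):

```python
# Pitch-rate arithmetic for the hypothetical lager above:
# 1.5 million cells per ml per degree Plato.
ML_PER_GAL = 3785.41

def og_to_plato(og: float) -> float:
    """Common linear approximation: degrees Plato ~ (OG - 1) * 250."""
    return (og - 1) * 250

def target_billion_cells(volume_gal: float, og: float, rate: float = 1.5e6) -> float:
    """Target pitch, in billions of cells, at `rate` cells/ml/degree Plato."""
    return rate * volume_gal * ML_PER_GAL * og_to_plato(og) / 1e9

target = target_billion_cells(5, 1.050)   # the 5 gal / 1.050 O.G. example
print(f"{target:.0f} billion cells, i.e. ~{target / 100:.1f} fresh vials")
# -> ~355 billion cells, i.e. ~3.5 vials vs. the single vial in the
#    underpitched arm of the hypothetical
```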

The results would also be more compelling if the credentials of the beer evaluators were given (i.e. how many are National BJCP judges or Master Cicerones?). Results broken out by this classification would be helpful too. For instance, while the overall population could not determine which beer was different, what if the N people who were BJCP Certified or above had correctly assessed each beer as different?

Still a cool experiment, and I love Ray's stuff. Just wanting everyone to keep in mind these are single data points, and we should all get in on the act and perform our own experiments. I am certainly inspired to do so.
 
When I started homebrewing 5 years ago, the Wyeast smack packs were half the size they are now. The Mr Malty calculator was one of the things that had a huge positive influence on the quality of my beer, but the smack packs have changed and the calculator hasn't. ...and not all yeast is the same. If this experiment is going to be redone with an English or Belgian yeast, go with the super-flocculating, diacetyl-bomb-producing WY1968 London ESB. There is no way you are making a decent 5%+ ABV, 5-gallon batch without a starter with that one.
 
Still a cool experiment, and I love Ray's stuff. Just wanting everyone to keep in mind these are single data points, and we should all get in on the act and perform our own experiments. I am certainly inspired to do so.

And the biggest thing to remember, when people criticize these experiments for being single data points (not saying that you are), is that the authors realize this.

But one question I have about the BJCP judges vs. the average beer drinker: though these judges might be able to tell if a beer is to style, and possibly recognize any off-flavors, what makes everyone think the average beer drinker can't distinguish between two beers that are different? I'd be willing to guess there are plenty of homebrewers out there who are not BJCP judges (or any kind of certified anything, for that matter) who could tell two beers apart if there were perceptible differences. It's just a little pompous to assume that because someone is not qualified to judge contests, they are therefore not qualified to take part in tests of perception.
 
From the link: "Author: Ray Found"

Lol. I don't read that crap! I just want to know wtf is going on in his fermenters.

Well, that answers that. Not the 2 username question, but that's really none of my business so I'll **** right off now.
 
But one question I have about the BJCP judges vs. the average beer drinker: though these judges might be able to tell if a beer is to style, and possibly recognize any off-flavors, what makes everyone think the average beer drinker can't distinguish between two beers that are different? I'd be willing to guess there are plenty of homebrewers out there who are not BJCP judges (or any kind of certified anything, for that matter) who could tell two beers apart if there were perceptible differences. It's just a little pompous to assume that because someone is not qualified to judge contests, they are therefore not qualified to take part in tests of perception.

Completely agreed that there are plenty of homebrewers out there who are so-called "supertasters"; as a matter of fact, one of the best tasters I've ever seen is the wife of one of my homebrewing friends, and she knows little about brewing and is not a member of the BJCP or Cicerone programs. In my experience (I have organized a weekly group focused on BJCP tasting exam preparation for about 2 years), tasting regularly, as well as using off-flavor kits and, in my case, brewing off-batches, very much improves the palate. I'm not saying that one must be a member of one of these groups to be a good taster; however, certification does lend a lot more credibility to experimental results. The evaluators used for Ray's experiments may be good tasters, or may not be, meaning more uncertainty and less significant results. To give them some credibility, I think credentials would help.

Therefore I claim that, no, I am not assuming a person can't taste off-flavors because they are not a member of one of these organizations; I am assuming the converse, actually: if someone HAS become a member of one of these organizations and been certified in some way, then that person's tasting skills are perceived as more trustworthy. Isn't this why these organizations exist? So that you can have a reliable, experienced individual evaluate your beers, whether in competition (BJCP) or on tap at some beer-serving establishment (Cicerone), where you have uncertainty in your own judgment, or just less experience? For me, if credentials were given, I'd have much more confidence in the triangle test results in all of his experiments.

I think the best thing about his stuff is that, like I said, it has inspired both me and many members of my homebrew club to do very similar types of controlled experiments, but on larger scales (multiple controlled batches), and I hope it inspires others to do the same. It's a lot of fun.
 
I would love to see responses from White Labs, Wyeast, etc. as to why they say a single vial/packet is fine. This experiment indicates they are correct, yet many people are still hesitant to believe it (personally, I'll continue to make starters). I know these yeast companies test many strains before they select one to sell commercially. If you go to White Labs, you can sample versions of the same wort fermented with different yeasts to taste the difference. I'm wondering how much yeast they themselves pitch for tests/personal use, and why they say one vial is enough when the conventional wisdom says it's not.
 
I think Arcane's point was not that this experiment doesn't provide a data point, but that it should be considered a single data point with little statistical significance. Meaning it doesn't prove anything, really; it just gives data. Imagine flipping a coin a single time: if it lands heads, are you convinced the coin is 100% biased (i.e. that it will never land tails)? I wouldn't be.

Then we'll have to disagree, because I believe the significance here is huge. Time and time again I read, especially on this forum, about underpitching creating bad (or not-so-good) beer, where supposedly all one needs to do is buy a 5-liter Erlenmeyer flask, a $200 super stir plate, and a pound of DME to correct the 'problem.'

Let's say we do this same experiment again and again... would we get the same results? Yes. It was a very controlled experiment. It's not like flipping a coin, and I find it hard to believe you would even use that comparison.
 
I love these experiments because they are a nice counterpoint to the home brewing absolutists - those people who say you can't possibly be making good beer if you are not doing x, y, z. Are these experiments a single data point? Yes, absolutely, so generalize at your own risk. But, a single data point disproves an absolute statement, and I like to know that there are many ways to make good beer.
 
Completely agreed that there are plenty of homebrewers out there who are so-called "supertasters"; as a matter of fact, one of the best tasters I've ever seen is the wife of one of my homebrewing friends, and she knows little about brewing and is not a member of the BJCP or Cicerone programs. In my experience (I have organized a weekly group focused on BJCP tasting exam preparation for about 2 years), tasting regularly, as well as using off-flavor kits and, in my case, brewing off-batches, very much improves the palate. I'm not saying that one must be a member of one of these groups to be a good taster; however, certification does lend a lot more credibility to experimental results. The evaluators used for Ray's experiments may be good tasters, or may not be, meaning more uncertainty and less significant results. To give them some credibility, I think credentials would help.

Therefore I claim that, no, I am not assuming a person can't taste off-flavors because they are not a member of one of these organizations; I am assuming the converse, actually: if someone HAS become a member of one of these organizations and been certified in some way, then that person's tasting skills are perceived as more trustworthy. Isn't this why these organizations exist? So that you can have a reliable, experienced individual evaluate your beers, whether in competition (BJCP) or on tap at some beer-serving establishment (Cicerone), where you have uncertainty in your own judgment, or just less experience? For me, if credentials were given, I'd have much more confidence in the triangle test results in all of his experiments.

I think the best thing about his stuff is that, like I said, it has inspired both me and many members of my homebrew club to do very similar types of controlled experiments, but on larger scales (multiple controlled batches), and I hope it inspires others to do the same. It's a lot of fun.

I passed the BJCP tasting exam and I suck at picking out some off-flavours (scored an 80!). My wife has no training or certification but is a far better taster than I am. What BJCP judges do have is a vocabulary to describe what they are tasting. I wouldn't want her filling in a scoresheet, but I'd trust her in a triangle test.
 
I passed the BJCP tasting exam and I suck at picking out some off-flavours (scored an 80!). My wife has no training or certification but is a far better taster than I am. What BJCP judges do have is a vocabulary to describe what they are tasting. I wouldn't want her filling in a scoresheet, but I'd trust her in a triangle test.

Again, not a counter-argument to what I'm saying. What I'm saying is that I'd trust your tasting results more than hers because she is not certified; the reason is that I have no other basis to go on. You (and maybe a select few others) are the only people who know she is a great taster. So yes, you may trust her in a triangle test, but how am I supposed to trust her? This is exactly what certification provides (in any field, not just tasting): a way to give the consumer of someone's services some information as to whether (and at what level) that person has experience with the task they are performing.

Thus, I am not saying she is better or worse than you or anyone else; I'm saying that since the person reading the experimental results has no idea what qualifications the evaluators have, there is a great deal of uncertainty in the result, and little, perhaps no, conclusion can be drawn with respect to the higher-level question of "how much does pitch rate matter?" For that, I would rather trust the vast amount of literature and lab experimental results that indeed show pitch rate matters.

I believe that you and I have conversed on this topic before (the validity of experimental results and their impact at the homebrew level), and this is yet another case where I like the experiment, but the data provided is minimal, and effectively proves nothing, because it is a single data point with very subjective metrics (the subjectivity of which, I am arguing, could be mitigated with certification/credentials).
 
Pretty interesting write-up. The author says he'll still be making starters for a variety of reasons.
One quote I heard on a podcast: "Q: Is there such a thing as underpitching? A: If you like the way the beer is coming out, then no, there isn't." (Sorry, I can't remember who actually said it.)
 
Let's say we do this same experiment again and again... would we get the same results? Yes. It was a very controlled experiment. It's not like flipping a coin, and I find it hard to believe you would even use that comparison.


We would probably get similar results, but who knows? We all have different equipment and processes. Obviously the takeaway from the people talking about data points and conclusions is that it would not be the wisest decision to take this one experiment, on a 1.062 beer with a neutral yeast, and apply it to all fermentations.

Just curious: I saw a barleywine in your signature. Do you do no-starter, one-vial/one-smack-pack pitches into big beers like that?
 
We would probably get similar results, but who knows? We all have different equipment and processes. Obviously the takeaway from the people talking about data points and conclusions is that it would not be the wisest decision to take this one experiment, on a 1.062 beer with a neutral yeast, and apply it to all fermentations.

Just curious: I saw a barleywine in your signature. Do you do no-starter, one-vial/one-smack-pack pitches into big beers like that?

The barleywine is the last of an extract brew kit that I had. I used the regular smack pack and made a 1.25L starter, as I always do. Before I pitched, I filled a pint mason jar (not to the top, but close) with starter and pitched the rest into the beer. The kit also came with some dry champagne yeast, half of which was supposed to be pitched into the secondary. I followed directions, and it's just aging now.

Normally I do a 1.25L starter, fill the pint jar, and pitch the rest into the beer. As you can see in one of my other posts on this topic, I may not do that for a 5-gallon batch of a 1.100 beer. Although I might. So for a gravity of 1.070 or less, I believe my method is perfect.
 
Spelling error: "...beer fermented with a “proper” amoutnt of cells propagated..."

Pictures are fun, but where illustrations really count (i.e. the statistical analysis), there's nothing. I like charts and graphs; they go a long way toward illustrating statistical results, IMO.

Good experiment, and not at all surprising. I brewed for years with single smack-pack pitches and made good beer, although lag times were always longer than with larger pitches.
 
For experiments like this, it would probably be a good idea to go with a recipe that doesn't allow any off-flavors to hide. But the experiment shows that at least one can make an amber ale with or without a starter.
 
This is my favorite one so far. I love it. I haven't used a yeast calculator for over 6 months and I'm glad. I've felt they are unnecessary. Lo and behold, it's proven.

Proven is WAAAAY too strong. The correct conclusion from my article is that we

"Failed to prove making a starter with WLP090 resulted in a statistically significant difference, when compared to a direct-pitched vial on this particular wort"
 
Proven is WAAAAY too strong. The correct conclusion from my article is that we

"Failed to prove making a starter with WLP090 resulted in a statistically significant difference, when compared to a direct-pitched vial on this particular wort"

I gotcha ;)

It did prove to me, though, that high pitch rates are virtually meaningless. I realize this is only one particular strain and one particular beer. Time after time (I drink a lot of beer) I make fantastic beer by 'underpitching,' and will continue to do so.
 
Spelling error: "...beer fermented with a “proper” amoutnt of cells propagated..."

Pictures are fun, but where illustrations really count (i.e. the statistical analysis), there's nothing. I like charts and graphs; they go a long way toward illustrating statistical results, IMO.

Good experiment, and not at all surprising. I brewed for years with single smack-pack pitches and made good beer, although lag times were always longer than with larger pitches.

Thanks. Fixed.

I can't think of anything, chart-wise, that would have made this clearer. 9/20 got it right, 8/20 chose one of the two wrong beers, and only 3 chose the other wrong answer.
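For anyone who wants to run the numbers on that, a triangle test can be checked with a one-tailed binomial test (guessing probability 1/3); a minimal sketch using only the 9-of-20 result reported above:

```python
from math import comb

def triangle_p_value(n: int, correct: int, p_guess: float = 1/3) -> float:
    """One-tailed binomial p-value: the chance of at least `correct`
    right picks out of `n` tasters if everyone were purely guessing."""
    return sum(comb(n, k) * p_guess**k * (1 - p_guess)**(n - k)
               for k in range(correct, n + 1))

print(f"p = {triangle_p_value(20, 9):.3f}")  # ~0.191, above the usual 0.05 cutoff
# With 20 tasters, roughly 11 correct picks would be needed for
# significance at the 0.05 level, consistent with the null result.
```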


As for all the discussion about how broadly this could generalize: the prudent behavior is simply not to generalize from a single data point. When we repeat it on a different beer/strain/etc., we can re-evaluate the results and see whether any inferences can be made between the two points. I.e., if we did it again with a similar recipe but a higher gravity, used the same yeast, and found a significant difference... we might be able to infer that higher-gravity worts respond more to pitch rates (which would confirm the conventional wisdom).

With a single data point, we can't really infer anything beyond the very narrow scope.

As for the "You should try it on a different kind of beer to really see what is going on"...

Umm... maybe? I think it is important to understand that at the end of the day, I am still brewing the beers I want to drink, that my family wants, etc. So while, yes, some are going to be more subtle beers, I am not really choosing a beer to chase a particular xBmt result. I am choosing the beer I want to brew next, then picking an xBmt from our queue that is, frankly, easily applied, methodologically speaking, to the brew process of that beer. (For example, I may do sucrose vs. dextrose... and while a Belgian tripel might be a great yeast-test beer, it is also a good sugar-test beer, in a way that I can't really do a sucrose vs. dextrose test on another style... etc.)
 
I was thinking the use of a "Super Yeast" like WLP090 might have had an effect on the experiment. A lower-attenuating yeast with a lot of character would have been more interesting.

I was thinking the same thing. Either way, an interesting article. I will still use starters (as will the author).
 
