Value of Brulosophy exbeeriments, others' experiences, myths and beliefs

If they [Brulosophy] do the experiment and see no difference, everyone from the LoDO camp jumps up and down and says they didn't do it quite right.
With regard to LoDo/LOX, I'm looking forward to the day when a LoDo/LOX expert writes an online book (or other single source of well-edited information outside of their current website) and makes it available for an appropriate one-time fee. If there are things in the book that cause people to 'do it not quite right', update the online book for free. Advances in techniques could be packaged into a new edition of the online book, and an additional one-time fee would be appropriate.
 
Local judges. But I suspect that could be doctored as well. That site, and this site for that matter, are all for-profit entities. If you think they are honest and unbiased, great. But the ads, sponsorships, and censoring don't exactly line up with that.

You do know that Marshall Schott has a full-time job, right? The other contributors do as well (some now as professional brewers). They seem to have too many ads in the podcast, but I don't think these guys are rolling in cash after paying all the bills.

If I might spill the beans as to a suggestion for a Brulosophy Exbeeriment, here is my top suggestion:

My suggestion is that YOU do that experiment. Once you are able to successfully brew two near-identical beers, then try to get 20-30 people to sit through a blind triangle test and see how easy it is. I have a hell of a time just getting people at my homebrew clubs to put down their plate of wings and pay attention to a simple side-by-side test.

At its core, Brulosophy is a group of homebrewers. Marshall got some attention reporting his experiment results to various forums and turned it into a website and podcast. Lots of people find it valuable. I am one of them. When I started incorporating split batches into my brewing routine, I learned more than I had in the prior 20 years. I appreciate it when people report their experiences with different yeasts, different dry hopping techniques, or techniques to avoid oxidation. I am fine if they don't have proof of a 37-point score, or haven't calculated a p-value.
 
It would still have appreciable relevance.

I think first they would need to establish that they can do an experiment with LoDO practices vs. pre-LoDO (non-LoDO) practices and show a measurable difference.

Also, I'm generally against experiments where you change two variables. It seems like you are hoping to show that using LOX malt alone gets you to where you can get with LoDO and regular Pilsner malt. That might be of great value if true, but I think you need four batches of beer to get there: LOX malt or Pilsner malt, each with LoDO or normal brewing techniques. Then you have six possible sets of triangle tests and probably need to do at least 3 or 4 of them to feel strongly about the outcome.

1. Pils with (LoDO vs normal)
2. (LOX vs Pils) with normal
3. LOX with (LoDO vs normal)
4. (LOX vs Pils) with LoDO
5. LOX with normal vs Pils with LoDO
6. LOX with LoDO vs Pils with normal

I think you really need 1 and 6 to show measurable differences, while 3, 4, and 5 can show or not show differences depending on whether LOX+LoDO is even better than LOX+normal. If you see no difference on 1 or 6, you simply can't show LoDO works at all and the whole experiment is a bust. Frankly, I've been working on this post for too long and my brain is hurting. It is just difficult to design a good experiment with two variables.
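
For concreteness, here's a minimal Python sketch of that design: the four batches are the cross-product of malt and process, and the six matchups are just the pairwise combinations of those batches (the labels are purely illustrative):

```python
# Enumerate the 2x2 design described above: four batches (malt x process)
# and every possible head-to-head triangle comparison between them.
from itertools import combinations, product

malts = ["LOX", "Pils"]
processes = ["LoDO", "normal"]

# Cross-product: the four batches you'd need to brew.
batches = [f"{m} + {p}" for m, p in product(malts, processes)]
print("Batches:", batches)

# C(4, 2) = 6 possible pairwise triangle tests.
for i, (a, b) in enumerate(combinations(batches, 2), start=1):
    print(f"{i}. {a} vs {b}")
```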
 
With regard to LoDo/LOX, I'm looking forward to the day when a LoDo/LOX expert writes an online book (or other single source of well-edited information outside of their current website) and makes it available for an appropriate one-time fee. If there are things in the book that cause people to 'do it not quite right', update the online book for free. Advances in techniques could be packaged into a new edition of the online book, and an additional one-time fee would be appropriate.

Colin Kaminski has published an interesting article on LOX vs. LOX-Less malts on the BYO (Brew Your Own) site. Here's the link:

https://byo.com/article/lox-less-malts-their-impact-on-staling-and-head-retention/
Note that a BYO membership is required to read it.
 
I like what Brulosophy is doing. The problem I have is mostly with the lazy consumers of the content. You don't need to spend more than a day on HBT to see another "Brulosophy proved that doesn't matter" retort in a thread where the adults are talking.

Hmmm...trying to discern where the balance point is between the first sentence and the second.

I mean, if folks on HBT are using ill-founded conclusions to support their own outlooks, doesn't that indict the provider of same - which for perhaps most observers would eliminate said provider from the "Like" bin?

Cheers!
 
If I might spill the beans as to a suggestion for a Brulosophy Exbeeriment, here is my top suggestion:

Brew a SMaSH lager of moderate IBUs using a LOX-free Pilsner base malt and standard, good pre-LoDO practices, but with none of the extra-mile LoDO practices or process modifications, such as adding metabisulfite, BTB, ascorbic acid, etc. Then brew an otherwise identical SMaSH beer applying all of the LoDO practices, while using a standard Pilsner base malt (preferably sourced from the same company as the LOX-free malt) with a pre-confirmed, nominally typical quantity of LOX (lipoxygenase) present. Leave them in kegs for 3 or 4 months, and then triangle test them before a standard triangle test participant group.

I think their target audience is the homebrew hobbyist who, if anything like me, or dare I even say the average poster on many brewing forums, is brewing beer, then packaging and beginning to consume it well within the 3-month period your experiment would require. Indeed, my last three 6-gallon batches will do well to see the light of day in 3 months.

Agree with a previous reply; their other LoDO experiment seemed to cause them some process grief. I wonder how many LoDO brewers post on HBT compared to, say, those who BIAB, and then those that have 3-vessel systems. There must be a poll somewhere on that.
 
article on LOX vs. LOX-Less malts on the BYO
Looks interesting - but it's just a "piece of the puzzle" for those that want to try LoDo/LOX brewing.

And at the moment, I have other, more interesting, hobby puzzles that I'm willing to "piece together".

But I also have some extra "home brew hobby money" - so if a LoDo/LOX expert writes an online book (Pragmatic LOX Brewing?), I'm interested.
 
I'm kinda interested to try the no-boil IPA they featured this week. I've seen that thread around here but didn't take it seriously. The article yesterday got me thinking it might be worth trying.
 
Brulosophy is a great source of information, but you have to take it for what it is. The final answer on every tested subject? Of course not, and I'm sure they'd say the same and often imply it at the ends of the articles. Are they great experiments, generally at least, providing useful information? Absolutely.

I'm a fan of the site, love the tests and love the Hops reviews as well. Would I change a few things if I could wave a magic wand? Of course. But I'm still grateful for what they do.

I find them "valuable" and "like" the site. Even if I do occasionally disagree w/ something for various reasons. Very much how I feel about this site as well!
 
I think first they would need to establish that they can do an experiment with LoDO practices vs. pre-LoDO (non-LoDO) practices and show a measurable difference.

Also, I'm generally against experiments where you change two variables. It seems like you are hoping to show that using LOX malt alone gets you to where you can get with LoDO and regular Pilsner malt. That might be of great value if true, but I think you need four batches of beer to get there: LOX malt or Pilsner malt, each with LoDO or normal brewing techniques. Then you have six possible sets of triangle tests and probably need to do at least 3 or 4 of them to feel strongly about the outcome.

1. Pils with (LoDO vs normal)
2. (LOX vs Pils) with normal
3. LOX with (LoDO vs normal)
4. (LOX vs Pils) with LoDO
5. LOX with normal vs Pils with LoDO
6. LOX with LoDO vs Pils with normal

I think you really need 1 and 6 to show measurable differences, while 3, 4, and 5 can show or not show differences depending on whether LOX+LoDO is even better than LOX+normal. If you see no difference on 1 or 6, you simply can't show LoDO works at all and the whole experiment is a bust. Frankly, I've been working on this post for too long and my brain is hurting. It is just difficult to design a good experiment with two variables.
With two categorical variables it's just a two-factor ANOVA (ignoring the triangle test, which is a categorical response): however many levels of the first variable times the number of levels of the second. That would be all the combinations (the cross-product). If there was a significant effect for variable 1, for instance, and it had more than two levels, you could do multiple comparison tests between levels. Those tests can take different forms depending on the secondary hypotheses of interest: for instance, comparing each of the levels within a variable pairwise to the others, or perhaps all levels against just the control.

However, they aren't interested in all the cross-products if what you are saying is their hypothesis (I haven't read the experiment). If they are just comparing LOX malt alone to LoDO with regular Pilsner malt, that part is fine, but you can't say anything about the combinations. And if they are just comparing these two with a triangle test, it only says the testers could or couldn't detect a difference; it says nothing about which is better. The triangle test is simply looking at whether the tester can correctly identify the oddball out of the three. You can't get to the answer you want by doing all the pairwise tests, because the response variable is simply that the beers are different. Now, if you had something like numerical scores, you could potentially get to a measurable difference if it exists.
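
As a rough illustration of that two-factor setup, here is a minimal sketch assuming you had a numeric response (say, judging scores) rather than triangle-test outcomes. The data, and the use of pandas/statsmodels, are purely illustrative assumptions, not anything from the actual xBmts:

```python
# Two-factor ANOVA on made-up judging scores for the 2x2 malt/process design.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical scores: 6 replicates of each of the four malt/process cells.
df = pd.DataFrame({
    "malt":    ["LOX", "LOX", "Pils", "Pils"] * 6,
    "process": ["LoDO", "normal"] * 12,
    "score":   [38, 34, 36, 33, 39, 35, 37, 32, 38, 33, 36, 34,
                40, 34, 35, 33, 39, 36, 36, 32, 38, 35, 37, 33],
})

# Full factorial model: both main effects plus the malt x process interaction.
model = ols("score ~ C(malt) * C(process)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

Note that real replication (several batches per cell) is exactly what the split-batch xBmts lack, which is the point made below.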

Not in regard to this experiment, as I don't know the initial methods, but some of the ones I've read are a disservice. When they split a batch, perform two different treatments, and then use a triangle test, they only have one experimental unit, which is an anecdote. Dressing it up as an experiment and having the results amplified is not good scientific practice. A sample size of one doesn't tell you very much; you haven't overcome the variation inherent in the experimental unit. And I understand it might be prohibitively difficult to conduct the experiment correctly, but doing it incorrectly and disseminating those "results" implying particular conclusions is quite possibly detrimental to the craft.

Here's an example. We want to know whether a sugar substitute tastes the same as sugar in Iced (Sweet) Tea. You brew a big batch, split it, and put sugar in one half and sugar sub in the other and not enough people pick the sugar sub when opposed to the two sugar samples in the triangle test. Does that mean that in all teas there won't be a detected difference? No. You only tested one type of tea. Say it was black tea. Is it true for all black tea? No, you only tested one company's tea. Will it be true for that company's black tea? Who knows, they may have good product manufacturing controls, they may not. You only tested one batch of tea. Sure you had a bunch of taste testers but it was only one batch of tea. The brewer might have made it weak by accident, may have overheated the water, water is polluted, water has too much chlorine, etc. It's not generalizable, even if you have a thousand people in your triangle test.
 
Hmmm...trying to discern where the balance point is between the first sentence and the second.

I mean, if folks on HBT are using ill-founded conclusions to support their own outlooks, doesn't that indict the provider of same - which for perhaps most observers would eliminate said provider from the "Like" bin?

Cheers!

Counterpoint: obesity is a big problem in the U.S., but I don't hold Burger King liable. Brulosophy does a lot to caveat their findings to their audience, and it's not on them when someone eats two Whoppers.

Perhaps a nice balance would be to link to peer-reviewed brewing articles to establish the baseline understanding of the variable, but that text is hard to consume.
 
No-Boil Recipes! New for 2019! - which started out with a "no-boil" DME and included some CaCl & CaSO4.

New England IPA - Blasphemy - No Boil NEIPA - which also started out with a "no-boil" DME + CaCl recipe for a NEIPA.

Yep, I'd seen those threads and read a bit but lost interest. I may have had the wrong impression about the reason for doing no-boil. I don't mind boiling; I built a brewing system around boiling, and I'm not so hyper-focused on shortening my brew day that I'd accept even a small hit on quality to save a little time. The experiment suggested that for a certain type of beer (NEIPA) you might be able to push one quality aspect, color, in a favorable direction while preserving other quality aspects, including flavor and aroma. The lack of a boil will mean tweaking the grain bill to get a higher-gravity wort out of the mash, since you won't be concentrating the wort.

Anyway, I will probably still not do it yet, as I'm more West Coast than NEIPA in my IPAs these days, but it was a thought-provoking read and now I'll probably go back and read those linked HBT threads too.
 
Counterpoint: obesity is a big problem in the U.S., but I don't hold Burger King liable. Brulosophy does a lot to caveat their findings to their audience, and it's not on them when someone eats two Whoppers.

They used to, at least. There used to be a standard sort of disclaimer in each writeup, but that hasn't been there the past few years.

It's the standard blurb that is still there that IMO leads to exactly what you said, i.e. the "Brulosophy proved that doesn't matter" retorts. IMO, it's completely unreasonable to expect average readers to understand what the p-value stats mean (and don't mean!), unless you tell them. And the standard blurb doesn't do that.
 
The examples above are examples where people need to use their brain.

You might very well make a NEIPA that tastes great with no boiling, but that doesn't mean no beers need any boiling. It's just that particular recipe that succeeded without it. Might not even be true for that style in general much less anything else. I'm not going to try making an Imperial Stout that isn't boiled but I have a solid feeling it wouldn't turn out as expected.

Some of the low oxygen stuff, maybe you can't tell a difference at 2 weeks. But let's try 4, or 6 weeks, and see if it doesn't show up.

Not faulting the site; they do a lot. But it's definitely an example where folks will read the results and read into them what they want, like "you don't have to boil any beer" or "low oxygen is a farce".

I know most of us here are smarter than that, but not everyone can see things for what they are.
 
Anyway, I will probably still not do it yet, as I'm more West Coast than NEIPA in my IPAs these days, but it was a thought-provoking read and now I'll probably go back and read those linked HBT threads too.
IIRC, in those threads, there are a couple of people who offered "no boil, DME (or LME), pasteurized" recipes for styles other than NEIPA.
 
I'm at the point in my brewing/BBQ hobbies that if an item, product, or procedure doesn't produce a noticeable difference in the final product, then I won't use it. I've had great results doing "warm lagers" fermented in the 60s. I've tried out different mash temps and done experiments with my local homebrew club. It's interesting what has a perceivable difference vs. what doesn't. I love listening to their podcasts; it's super interesting to me and I can't wait for the next episode.
 
Average reader here. We don't care.

We know this isn't The Journal of the American Medical Association.
I think most of us are probably the average reader.

The thing is, some folks do seem to equate it with JAMA. I suppose that is the issue at hand.

If there is an issue, LOL, this thread has probably gone further than needed.
 
My perspective is that they are clearly beginning to struggle and grope for relevant topics...
After publishing brewing-related content for several years, it can get really difficult to come up with new ideas and ways to present them. Skipping over the basic content showcasing the equipment used, grain milling, temperature checks, etc., I focus more on the testing goals and results.
 
I'd love to see them repeat tests, and would still read the articles, maybe even like them more, to see if something is backed up on a subsequent try, especially w/ a different recipe, brewer, and so on. Poke holes in their own past tests and see if they stand up. It doesn't have to mean the previous test was wrong, just that there's more to some subjects than can be covered in a single shot.

They do this sometimes, starting off citing an earlier test and wondering about a new aspect of it. Won't mind if they do more of it.
 
Average reader here. We don't care.

Do you care that they tell you "... indicating participants in this xBmt were unable to reliably distinguish..." when there's a very good chance that there was actually a difference detected? Serious answer, please.

That's what I'm talking about. It's those words (the standard blurb), coupled with the fact that people don't know/care about what the p-stats actually mean (and don't mean) that causes the "Brulosophy proved that doesn't matter" phenomenon. They could very easily add a single sentence to each writeup that would make it crystal clear. But they don't.

We know this isn't The Journal of the American Medical Association.

Believe me, I'm not asking for that.
 
If I might spill the beans as to a suggestion for a Brulosophy Exbeeriment, here is my top suggestion:

Brew a SMaSH lager of moderate IBUs using a LOX-free Pilsner base malt and standard, good pre-LoDO practices, but with none of the extra-mile LoDO practices or process modifications, such as adding metabisulfite, BTB, ascorbic acid, etc. Then brew an otherwise identical SMaSH beer applying all of the LoDO practices, while using a standard Pilsner base malt (preferably sourced from the same company as the LOX-free malt) with a pre-confirmed, nominally typical quantity of LOX (lipoxygenase) present. Leave them in kegs for 3 or 4 months, and then triangle test them before a standard triangle test participant group.
The test would be meaningless (so, in a way, suitable for Brülosophy ;)) as you'd be basically changing almost all variables at once.

You won't even be using the same malt, plus the process will be completely different. If the Brülosophy panel isn't able to reliably detect a difference, then it's further confirmation that their tasting panels are a joke.
 
One way to look at this, i.e., with respect to the dissing that Brulosophy was handed by the aforementioned "LoDO community" (whomever they are), is to ask a question: given that the Brulosophy guys have unquestionable brewing experience and knowledge, and that on top of this they have at their disposal all of the exemplary brewing equipment they have been so graciously gifted, can the average Joe brewer, with most likely lesser equipment and likely lesser brewing experience, be presumed to somehow routinely exceed the Brulosophy guys in the ability to do LoDO to the satisfaction of that same "LoDO community"? If this community is that hard to please, and has such gravitas as to instill fear even among the experienced and well-equipped among us, then perhaps there is indeed value in us lesser-status brewers giving LOX-less base malt brewing a go. It seems to have the blessing of Colin Kaminski.
 
Do you care that they tell you "... indicating participants in this xBmt were unable to reliably distinguish..." when there's a very good chance that there was actually a difference detected? Serious answer, please.

Nope. I might see that 5 out of 10 chose one thing or preferred one thing, but that is just one day at one time. Who cares?
 
Nope. I might see that 5 out of 10 chose one thing or preferred one thing, but that is just one day at one time. Who cares?

That's where the p-value comes into play. It is a statistical means of attempting to weed out random error. Not that it always outright does, or that it always outright has to. Our Supreme Court has already ruled that the p-value does not have that sort of gravitas.

https://www.homebrewtalk.com/thread...-ro-from-ro-plus-minerals.682220/post-8936890
 
In general I think they do a pretty good job. I have seen several tests where they chose a recipe that, in my opinion, might mask the results. One was hop-related (maybe late boil vs. flameout or something like that), but they used a recipe that didn't really feature hops that much. I remember looking at the recipe as I read the experiment and thinking "they're not going to see a difference, because this recipe would mask any difference." Then, of course, the conclusion was that it didn't matter.

After all the hoops they jumped through to try to follow something resembling scientific process, I felt like I really couldn't trust their conclusion. I sometimes wonder if they're biased towards trying to debunk things (in other words, they want to see no difference).

Overall, though, I think they're doing something that most of us don't have the time or resources to do, so I think they add significant value to the brewing community.
 
Do you care that they tell you "... indicating participants in this xBmt were unable to reliably distinguish..." when there's a very good chance that there was actually a difference detected? Serious answer, please.

That's what I'm talking about. It's those words (the standard blurb), coupled with the fact that people don't know/care about what the p-stats actually mean (and don't mean) that causes the "Brulosophy proved that doesn't matter" phenomenon. They could very easily add a single sentence to each writeup that would make it crystal clear. But they don't.



Believe me, I'm not asking for that.

another nope

Also, you ignored my painstaking research showing that the situation you are describing is the exception, not the rule. Most of the time, when they say the panel was unable to "reliably distinguish," the outcome was right about where you would expect it to fall if every member of the panel hadn't even tasted the beers and had guessed at random. When results have been close to reaching significance, they usually point that out and explore the possibility that there was something there in the brewer's personal evaluation, but I appreciate that they have the discipline to let the actual results dictate the outcome.

Only two outcomes are allowed: the panel was able to reliably detect a difference, or the panel was unable to reliably detect a difference. I don't think this is as complicated as everyone wants to make it out to be. The beers are different, but the panel is handicapped by opaque cups and a lack of knowledge about the beer style, recipe, or tested variable. I read a lot of snarky comments about how bad these panels are when they can't detect obvious differences, but I've tried triangle testing my own split-batch experiments (so without the style and variable handicaps) and still struggled.
 
When they report a p-value from a triangle test of their one batch, that's like putting lipstick on a pig. If they have a hypothesis that there is a difference between two methods, and they split a batch to test that, there's only the one sample to test that hypothesis. The triangle test is just looking at whether a group of testers can discern a difference in that one batch. Reporting the p-value is providing a gravitas it doesn't deserve. The p-value reported is not a p-value for their hypothesis of interest. It's dressing up a sample size of one. Even if they were to brew two batches because the hypothesis precludes splitting one batch, that's one sample per treatment.

Another example. Take two plants of the same species and randomly assign one to receive a fertilizer. One month later you measure the plants. Whether you measure them with a ruler or a more accurate 3D scanner, you still only have measurements on two plants, one per treatment. Sounds fancy to use the 3D scanner, though.
 
When they report a p-value from a triangle test of their one batch, that's like putting lipstick on a pig. If they have a hypothesis that there is a difference between two methods, and they split a batch to test that, there's only the one sample to test that hypothesis. The triangle test is just looking at whether a group of testers can discern a difference in that one batch. Reporting the p-value is providing a gravitas it doesn't deserve. The p-value reported is not a p-value for their hypothesis of interest. It's dressing up a sample size of one. Even if they were to brew two batches because the hypothesis precludes splitting one batch, that's one sample per treatment.

Another example. Take two plants of the same species and randomly assign one to receive a fertilizer. One month later you measure the plants. Whether you measure them with a ruler or a more accurate 3D scanner, you still only have measurements on two plants, one per treatment. Sounds fancy to use the 3D scanner, though.

Are you sure about this? I don't believe the Brulosophy gang invented the triangle test as applied to sensory evaluations of food, or the use of the p-value for setting the number of correct responses needed for a given sample size.
 
Also, you ignored my painstaking research showing that the situation you are describing is the exception, not the rule. Most of the time, when they say the panel was unable to "reliably distinguish," the outcome was right about where you would expect it to fall if every member of the panel hadn't even tasted the beers and had guessed at random.

I never said or meant to imply that it was the rule, but rather used it as an exercise in logic. I could have picked 0.20 or 0.30, or really anything less than 0.50. That's still a large number of potentially misleading write-ups within the groups of experiments with those p-values. That said, I do think low-ish p-values have been more common in the experiments that include a real panel and not just one guy tasting the same beers over and over. If you look back through the archive, you'll find more than a few.

My point is that the standard blurb, when applied to them, is misleading to non-statisticians. Just looking at page two of the experiments, where they were all (I think) panel experiments, there were 37 experiments whose results earned them "the blurb." Of those, 14 (38%) had p-values of less than 0.30. So, for these 38%, the chance of getting at least the observed number of correct choices due to random chance was, in every case, 29% or less. It's likely that a large portion of those panels detected a difference. But each and every one gets the standard blurb. That's my issue, and I think it's the only one I've raised in this thread. (Based on some responses, I think some folks might think I'm basically saying "Brulosophy sucks," but that's not what I'm saying at all.)

When the p-value is, say, 0.20 (or whatever), would it not be helpful to say, "Results indicate that if there were no detectable difference between the beers, there was a 20% chance of X <fill in the blank> or more tasters identifying the beer that was different, but Y tasters actually did"? IMO, that would give readers something much more tangible on which to base their impressions, and their own risk of changing (or not changing) their own processes.
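
That plain-language number is just a binomial tail probability: under the null hypothesis, each taster independently picks the odd beer out with probability 1/3. Here's a minimal sketch; the panel size and the number of correct picks are made up for illustration:

```python
# Chance of at least x correct picks out of n tasters under pure guessing
# (each taster has a 1-in-3 chance of picking the odd beer out).
from scipy.stats import binom

n, x = 24, 11                         # hypothetical panel size and correct picks
p_at_least = binom.sf(x - 1, n, 1/3)  # P(X >= x) if nobody can tell them apart
print(f"If there were no detectable difference, there was a "
      f"{p_at_least:.0%} chance of {x} or more correct picks out of {n}.")
```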

ETA: and it might even reduce the number of *^$&*#&^ "Brulosophy debunked <X>" posts.
 
Are you sure about this? I don't believe the Brulosophy gang invented the triangle test as applied to sensory evaluations of food, or the use of the p-value for setting the number of correct responses needed for a given sample size.
What I am saying is that their p-value applies to their one sample batch. They are only testing one batch and don't repeat the process on other batches. Their treatment is applied to the batch, which in some of their testing is a split sample. Usually they have two treatments. Their one batch is just one batch out of the population of experimental batches possible using their methodology of preparation, which is like saying, "We tried this once." It's an anecdote. It would be a lot more robust if they repeated the experiment a number of times. If you were measuring a more continuous variable, say IBUs, and you brewed 30 batches, split them pre-boil, and threw hops A in one half and hops B in the other, that would be a paired t-test with a "reasonable" sample size.
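
A minimal sketch of what that paired design would look like; the IBU numbers are simulated purely for illustration:

```python
# Paired t-test across 30 batches, each split pre-boil between hops A and B.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
ibu_hops_a = rng.normal(40, 3, size=30)         # hops-A halves of 30 batches
ibu_hops_b = ibu_hops_a + rng.normal(2, 1, 30)  # paired hops-B halves

# Each batch serves as its own control, so batch-to-batch variation cancels.
t_stat, p_value = ttest_rel(ibu_hops_a, ibu_hops_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```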

The researcher decides what significance level is actually significant to them. If the p-value they obtain is less than the significance level they chose, they report it as significant. The p-value is reported because the reader may not feel that the chosen significance level, suppose we pick 0.10, is stringent enough; maybe the reader feels 0.05 or 0.01 is more appropriate. That being said, once you have a significance level and a statistical test picked out, there is a critical value, here a discrete number, which is the number of correct responses needed to arrive at a p-value less than your significance level.

Technically, they are comparing their test statistic to a chi-square distribution, which is continuous. I have not used the triangle test, but I do understand the procedure based on the link provided, and it is just an application of a chi-square test with only two cells, correct and incorrect responses; that's what goes into the summation. I am familiar with quite a few nonparametric tests which are conducted similarly. Yes, they did not invent critical values for a given significance level.
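
For reference, a sketch of that two-cell chi-square; the panel counts are hypothetical:

```python
# Two-cell chi-square for a triangle test: under the null, a third of the
# tasters identify the odd beer correctly by chance.
from scipy.stats import chisquare

n_tasters = 24
correct = 13                               # hypothetical correct identifications
observed = [correct, n_tasters - correct]
expected = [n_tasters / 3, 2 * n_tasters / 3]

chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```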
 
I wholeheartedly agree on p-values and statistical buzzwords being used in a very misleading way, considering the target audience. The fact that many times they didn't reply to comments pointing out fundamental flaws in their experiments doesn't really look good, in my opinion; and I'm not talking about hardcore statistics: some of their experiments are simply not showing what they set out to explore.

On a more fundamental note, I don't even agree with the approach at all. As in many other complex processes, some brewing steps are critical, and others are not critical but still useful. The latter kind may be hard to identify and verify empirically, especially without a scientific approach, and can very easily lead to the misleading conclusion that the variable under scrutiny is irrelevant. Sometimes many small differences that cannot be appreciated by themselves can make a perceivable difference when taken together.
 
it might even reduce the number of *^$&*#&^ "Brulosophy debunked <X>" posts
Doubtful. It appears to be the nature of forum discussion.

Consider accepting that there may be many more years of "Brulosophy debunked <X>" posts.

Just like there ...

... were too many years of "you'll kill half the yeast if you don't rehydrate dry yeast", and

... will be many more years of "you can't make a light colored extract based beer".

There doesn't appear to be a cure for this on the horizon. But there may be hope (and comfort) in this: RDWHAAB.

:mug:
 