the BJCP so called certification drives me crazy

I know there are judges out there that feel EVERY beer can be improved on in some way, and so they will never give a 50 and always provide at least one suggestion on how they feel the beer can be improved when judging. Is that looking for flaws? I'm not sure.

I don't necessarily agree with that POV but I also have never scored a 50 beer yet.
 
There are just too many variables that you can't control and don't know about to blame the judges or the BJCP certification.
Did the judges go to a beer blast the night before and were hung over?
Did the judges sample 50 beers before yours?
Where was your beer in the flight?
What was the serving temperature?
Was a really exceptional version of the style served right before yours?
Having said all that, I agree that there are problems with beer competitions but I disagree that BJCP certifications are completely meaningless. At least the BJCP judges have met SOME minimal standard for qualification.
I don't think you'll ever be able to get personal likes/dislikes out of beer judging.
Your beer might not have any flaws, but the "overall impression" is something that is highly subjective and thus consistent scoring really can't be expected.
I believe the best you can get out of competitions is to take each judge's comments and scores individually and learn what you can from that.
+1
Also not mentioned, unless I missed it...
A bad beer, be it flawed or just over-the-top in some way, can wreck your palate partway through a lineup. Your tastebuds may not be able to fully recover by the time you get to the next beer, or even the next round.

I am not a judge. But I am an experienced taster and know how to cleanse my palate between samples, though it's more like rinsing a dirty dish with tap water than really cleaning your palate.

Just for perspective, I was a judge for a friendly neighborhood wine tasting: 20 wines, 5/hr. I recall being somewhere around sample 10 and thinking, "this might be a really great wine, or maybe it's sh!tty. ****, I can't tell, I'm pretty sure I can still taste that nasty wine #3 from 90 minutes ago."
 
Yes, I KNOW many judges seek flaws where there are none. I have no question of it.

I see this type of behavior in many settings. Food critics, it seems, always need to find some flaw. They're a "critic" so finding something to be critical about seems to be a requirement. For some it's a badge indicating how discerning they are.

I also see this in people who are paid to review others' work. Where I work, we have a team that does nothing but review documents, software, etc. There are a number of them who always find something. Always. I've watched them review a document and approve it after revisions. Then they review it again for another update and find some fault in the portion of the document they approved previously. They just need to find something.

I'm confident there are beer judges that are this way.

I used to score all the beers I've had on Untappd (I rarely do anymore). I never scored a 5 because it left no room for a beer to be better. I would, I think, have the same issue giving out a 50.
 

This comment is not aimed at you, Hwk-I, but rather is just a continuation of the discussion from my own perspective, as I think it is good discussion (and hope that others think so too)...

I have thousands of beers scored in Untappd. I have given 5's to dozens of them. On such a small scale, well, why not? I wish the world was full of 5's. It's not, but if a beer is really world-class with zero flaws, then that's how I score it. It deserves the recognition. Same can be said for 1's and 2's as well of course, dozens of those too unfortunately.

I figure, if we're going to give a range of 1 to 5, or 13 to 50, or whatever, then the full range is intended to be used, not just the middle. Not every beer is just a 3 or a 30 plus or minus a fraction of a point. I am all about normalized distribution of data. I like math, too, so I guess to me it just makes good sense to use the entire range as it is intended, not just start in the middle and work up or down by tiny amounts. By spreading it out I feel I have a better sense of truly how great or how horrible a beer really is, relative to all others.

I think a lot of judges aim for the center then add or deduct points from there. But I don't really agree with that method. Starting in the middle might be okay, IF you also challenge yourself to consider whether your 30 is really a 35 or a 25. Or whether your 3 is really a 3.5 or a 2.5. Or whatever. Use the whole range. Everything is not mediocre. Many beers are mediocre, yes. But sometimes there is an aspect or two that stands out for one reason or another, good or bad. Don't be afraid to spread the scores out a bit so they better characterize what you really taste, rewards or dings for good and bad.
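Just to illustrate what I mean with some made-up numbers (not anyone's real ratings): two raters can rank the same ten beers in the same order, but the one who hugs the middle of the scale throws away most of the information about how far apart the beers actually are. A rough sketch in Python:

    # Made-up ratings for the same ten beers, ranked best to worst by both raters.
    # "compressed" hugs the middle of the 1-5 scale; "full_range" spreads the same
    # ordering across the whole scale.
    from statistics import mean, stdev

    compressed = [3.50, 3.50, 3.25, 3.25, 3.00, 3.00, 3.00, 2.75, 2.75, 2.50]
    full_range = [5.00, 4.50, 4.00, 3.75, 3.50, 3.00, 2.50, 2.00, 1.50, 1.00]

    for name, scores in (("compressed", compressed), ("full range", full_range)):
        print(f"{name}: mean {mean(scores):.2f}, stdev {stdev(scores):.2f}")
    # Both average out to roughly a 3, but the compressed ratings barely separate
    # a great beer from a mediocre one.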

I know I'm just talking in circles now so I might just duck out for a while. Cheers all. :)
 

Regarding distribution of scores... I'd normally expect a bell curve and I would guess mine probably fall into that, maybe skewed a bit high because I don't drink many crappy beers. I use Untappd a lot when choosing beers and also when considering a visit to a brewery. I also try to score based on style, not just my preference, and most commercial beers aren't awful for their style.

If I were scoring a beer for a comp (never done that) I'd have no issue giving high or low scores, but as an aggregation of sub-scores for different aspects (I believe that's how it works: appearance, aroma, flavor, carbonation, etc.), it's pretty hard to get a perfect score across the board.
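For what it's worth, if I remember the scoresheet right, the sections add up to 50 as Aroma 12, Appearance 3, Flavor 20, Mouthfeel 5, Overall Impression 10, so a perfect score really does mean maxing every single section. A quick sketch of that arithmetic (the example sub-scores are made up):

    # Section maxima from the standard BJCP beer scoresheet, as I remember it.
    # The example sub-scores below are invented for illustration.
    MAXIMA = {"Aroma": 12, "Appearance": 3, "Flavor": 20, "Mouthfeel": 5, "Overall": 10}
    example = {"Aroma": 10, "Appearance": 3, "Flavor": 17, "Mouthfeel": 4, "Overall": 8}

    assert all(example[k] <= MAXIMA[k] for k in MAXIMA), "a sub-score exceeds its max"
    print(f"Total: {sum(example.values())} / {sum(MAXIMA.values())}")  # Total: 42 / 50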
 
I am quoting you solely for the attachment.

The scoring system is part of the problem. The system uses numbers to indicate qualitative categories. Entrants may not be aware of this, and so think a mid-range score (30-40) is a poor showing, whereas the judges are reading it as Very Good. 20-29 is still considered Good by the judges. To be specific, the judges are translating numbers to an ordinal scale.

I could be wrong, but I don't think there is a well-defined definition of what a change in value within the sub-categories of the score means. For instance, we know that 4 is one more than 3 and 5 is one more than 4. But within a sub-category on the score sheet, when the judge moves from 3 to 4, or from 4 to 5, that change probably isn't worth a constant value of 1. That's because it's really an ordinal system: it's ordered by quality, not quantity. This is why there is variability between the judges.

On top of which, as a previous poster alluded to, is the idea that the scores follow a normal distribution. They more than likely don't, and I think there is strong anecdotal evidence to say the scores don't follow a normal distribution (bell curve). The scores are very likely not symmetric. They are skewed, with a lot more scores on the lower end versus the rest of the distribution. There aren't as many world-class beers as problematic ones.
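To make the number-to-label translation concrete, here is roughly the mapping printed on the scoresheet as I understand it (treat the exact band edges as approximate); the point is that a 32 reads as "Very Good" to the judge even if it feels middling to the entrant:

    # Rough mapping from total score to the descriptor band on the scoresheet
    # (band edges as I recall them; the label is ordinal, not a quantity).
    BANDS = [(45, "Outstanding"), (38, "Excellent"), (30, "Very Good"),
             (21, "Good"), (14, "Fair"), (0, "Problematic")]

    def descriptor(score: int) -> str:
        for lower_bound, label in BANDS:
            if score >= lower_bound:
                return label
        raise ValueError("score should be between 0 and 50")

    print(descriptor(32))  # "Very Good" -- even though 32/50 feels middling to many entrants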

As far as competitions, I've entered a few. They've always had 3 judges. I look for consistency. If two judges are close and one is off, I lean towards the pair as far as what I think my score was. Unless, of course, the lone judge rates it higher; then of course that's my real score.

(I did actually have a score sheet added incorrectly once, but it was a very local event and the judge may not have been certified. They forgot to carry the one and my score was 10 points off. So maybe out of 100 cases, the 101st had a math problem.)
 
I'm a BJCP National judge and I rarely enter competitions, just a couple of local ones to support the local clubs and occasionally NHC (but I finally won a medal last year, so I'm probably done with that). I decided many years ago that entering a competition is a crapshoot. It depends which judges you get on what day. I do think most judges genuinely try to do a good job.
 
Another way to look at this is that the organic human nature of beer judge palates could be completely overridden by using mass spectrometers and other gear to compare a homebrew with the top commercial example of the style and rate you exactly on how close you got. That would be incredibly FAIR and you couldn't complain about variation anymore. It would also be expensive and boring.

I don't know if judges reach to find flaws. The variation is mostly due to the various concentration thresholds people have. I have had beers with both diacetyl and acetaldehyde at levels that weren't even questionable to me, but my partner didn't pick up either. If we just left the sheets like that, the entrant would just accuse me of fabrication or bias. What do you do there? In my opinion, generally I'd want to defer to the higher rank. It's not perfect but it's better than flipping a coin.
 
I know there are judges out there that feel EVERY beer can be improved on in some way, and so they will never give a 50 and always provide at least one suggestion on how they feel the beer can be improved when judging. Is that looking for flaws? I'm not sure.

I don't necessarily agree with that POV but I also have never scored a 50 beer yet.
This is me. I believe in using the entire range. I have given everything from 13 to 46 in competition. 13 is about as rare as 45-46. I think I've given three 13s and two 45-46. I've also probably evaluated 1000 or so beers in comps. But 18-42 gets used regularly. I'll probably hit both extremes once a competition.

If I can offer you a *single* concrete suggestion to improve it, it's not a 45+ beer. Any improvement beyond that is some intangible magic I couldn't identify. A 50 pointer should be a *life changing* beer. The only three I've had, of every beer I've ever had in comps or otherwise, are (fresh and undamaged) Weihenstephaner Hefe, Saison DuPont, and Cantillon Fou'Foune. Plenty of world class beers sit a couple points lower than that. Admittedly at that point it's purely intangible and purely subjective.
 
generally I'd want to defer to the higher rank.

I totally disagree with this. It's a slice of the problem with BJCP. Lower ranks get less respect, even if the lower ranked guy is a supertaster or even if the Master just had liver & onions or who knows what else for lunch. Any judge of any rank can add a lot of value. I respect all equally (...or equally poorly maybe!).
 

When high ranking judges proctor exams, they're not permitted to alter sheets or scores in the consensus process, precisely because sensitivity thresholds vary so much from judge to judge. When there are significant variations, flaw sensitivity is definitely at play. It's built into the exam grading structure deliberately because it is a VERY real and very present phenomenon. I've graded sets where a proctor was clearly diacetyl blind.

A couple years ago I proctored an exam, and the other proctor and I were in bang-on agreement on everything except one, where we varied by something like 15 pts. It was a Wit, where I'm prone to detecting the ham/hot dog character if cheap coriander (or too much of it) is used. He thought it was delightful.

I wouldn't necessarily defer to higher ranks on flaw sensitivity. Rather, it's knowing what you're abnormally sensitive to and when to give a pass, versus what you're less sensitive to. If you're mostly diacetyl blind but your judging partner isn't, even if you outrank 'em, defer. If you know you're good with diacetyl and you detect it, stick by your guns (though if you know you're extremely sensitive and it's faint, perhaps give the examinee the benefit of the doubt).

For example, I'm ridiculously sensitive to roast-derived pyrazines. Almost ALL coffee beers taste like green pepper or jalapeno to me. Even some darker beers without coffee can give me that character. I'll often note it on a scoresheet, note that I'm super sensitive as well, and not knock any points for it unless it's extreme enough to be present to others.
 
I totally disagree with this. It's a slice of the problem with BJCP. Lower ranks get less respect, even if the lower ranked guy is a supertaster or even if the Master just had liver & onions or who knows what else for lunch. Any judge of any rank can add a lot of value. I respect all equally (...or equally poorly maybe!).
I've seen plenty of low-ranking judges paying as much attention to my sheet as to their own.

Yes, I've worked to train my palate. That hasn't made me any more or less sensitive to anything, just better at identifying what it is.

The worst individual BJCP judges I've seen are National ranked (though on average National judges are solid and quality does align with rank). Folks who passed the legacy exam in the 90s and haven't bothered to grow. Stagnant palate. Poor biased style knowledge. And lazy arrogant sheets.

I honestly think Provisional judges are phenomenal for the most part, since they're often in the midst of studying and preparing for the exam and give it their all. Good sheets. Lots of freshly studied style knowledge. Often off-flavor training fresh as well.
 
I totally disagree with this. It's a slice of the problem with BJCP. Lower ranks get less respect, even if the lower ranked guy is a supertaster or even if the Master just had liver & onions or who knows what else for lunch. Any judge of any rank can add a lot of value. I respect all equally (...or equally poorly maybe!).

Ok, but the rank is the only measuring stick that is somewhat objective. Of course, by comparing rankings you don't know everything, but the higher rank has judged more and has scored higher on the exam, which shows some level of parity with other high-ranking judges. Lower-level judges have less experience and haven't demonstrated that they score beers similarly to other high-scoring judges.

If you totally disagree, what measuring stick would you prefer to use when two judges are far apart on a beer? I'm not talking about cases where one guy acknowledges they have a super high diacetyl threshold and voluntarily defers. I'm talking about two judges who stand their ground but are still 10 points apart. One is Recognized and the other is National. You "totally disagree" that the final score should be more influenced by the higher-ranked judge? If you were the judge coordinator, tell us how you fix that problem on the spot.

If a lower ranked judge really is a superstar, they could easily move up the ranks without breaking a sweat so they should.
 
what measuring stick would you prefer to use when two judges are far apart on a beer? I'm not talking about cases where one guy acknowledges they have a super high diacetyl threshold and voluntarily defers. I'm talking about two judges who stand their ground but are still 10 points apart. One is Recognized and the other is National. You "totally disagree" that the final score should be more influenced by the higher-ranked judge? If you were the judge coordinator, tell us how you fix that problem on the spot.

If a lower ranked judge really is a superstar, they could easily move up the ranks without breaking a sweat so they should.

If two judges cannot come to agreement, get a couple more judges to taste the same beer and then gang up against the one who either missed something or detected a flaw that doesn't actually exist. This can be humbling for any judge of any rank. Some fellow judges and I (all Recognized or Certified) had to do this once against a National who lost the argument; needless to say he was not happy and actually stormed out, but the rest of us shrugged, and I remember saying something to the effect of, truth hurts sometimes. In this particular case the National actually admitted to us that he had a known insensitivity to diacetyl, which was the problem. At least he had the decency to admit that he might be wrong, even though he obviously hated us for having to point it out.

Typically when another judge points out to me "hey this is oxidized" or "don't you taste DMS?" then after I taste again, I'll either agree or I'll hold my ground, say no, I just don't taste it, and refuse to budge. It happens to all of us at one time or another. All you can really do is bring in another judge or two or three to figure out who's right and who's wrong. And if it's me, well okay, I've learned something about my own palate. I know I'm not super sensitive to DMS, but I can usually detect it after someone else points it out to me. Diacetyl and oxidation, on the other hand, I often pick out before others do. Stuff we all learn as we go along on the journey.

But rank has absolutely NOTHING to do with our relative ability to pick up flavors, other than *maybe* the higher ranked guy has had more of these experiences, so they know when to back off and not get overly argumentative, just suck it up and admit they aren't sensitive to this or that... but still should only record on the scoresheet what they themselves are able to taste, except maybe to note with an asterisk: "the other guys said DMS, but personally I don't get it".

If a lower ranked judge really is a superstar, but doesn't give a **** about rank, he/she won't go up in rank. I have enough points to be National. But I don't care. They don't have exams for National in my area; I'd have to drive 100 miles like I did last time just to get to Certified. I'm content to stay at Certified at this moment in time. Does this mean I'm not as skilled a taster as a National? I dunno. Get a scoresheet from me, then you can be the judge of the judge. I think if I took the exam I would likely pass. Meh. Maybe someday. Maybe not.
 

For what it's worth, I don't worry about the "Fair/Good" labels on the scoresheet, as I don't think the words match the descriptions to the right of the score ranges. In my opinion 0-13 is basically Awful, 14-19 is Problematic, 20-25 is generally Fair, 26-32 tends to be Good/Solid, 33-38 Very Good. Or thereabouts; the exact number doesn't matter. Beer is on a continuum, not in little boxes within a scale.
Source: Grandmaster and Guidelines Reviewer/Editor who gives plenty of 40's where they are deserved.
 

Prost to you, Michael. :mug:

Personally I would define 0-22 as Awful, 23-27 as Less Than Good, 28-32 as Pretty Good, 33-38 Great, 39-42 Wow, and 43+ World-Class Awesomeness.

However, quantitative averages are still better than subjective qualitative bins like these. The above are just approximations and very subjective.
 
Given a courtesy floor of 13, I go by: 13-16 unpalatable/nauseating. 17-20 seriously flawed and unpleasant, though not entirely undrinkable. 21-25 generally not great but with some good parts. 26-29 decent but flawed. 30-35 solid but not perfect, flaws are pretty minor. 36-39 excellent and without flaws but lacking a bit in depth/complexity/balance, or excellent with a very slight flaw. 40-44 flawless and bang on style, an excellent beer though missing the magic. 45+ unbelievable in every regard, couldn't improve it if I tried.
 
I've learned to pay more attention to the comments as opposed to the actual score. Scoring is largely subjective because there is no manual that tells you "subtract 2 points for having medium-low diacetyl instead of low in the aroma of a Czech Premium Pale Lager." The score is really more of a range anyway.
 

I struggle with descriptors like awful, less than good etc. only because a person might get low scores solely due to something not being to style. It may be a very good beer, but get a low score due to style variances, right?
 
In social science surveys (think of where you are asked disagree strongly, disagree, neutral, agree, agree strongly), one has to be very careful using numbers for the categories. The categories aren't necessarily evenly spaced, and without strict design of the responses they should only be treated as categorical. You can't average neutral, neutral, and agree, for instance.

Now consider a beer competition. The beers get a score which is averaged by several judges. The numbers are being derived from what is probably more of a categorical grading system. I suspect this influences some of the variability in scores.
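A tiny illustration of the averaging problem (hypothetical responses; coding the categories 1-5 is the usual shortcut, not something inherent in the words):

    # Likert-style responses coded 1-5: taking the mean assumes the gaps between
    # categories are equal, which an ordinal scale never promises.
    from statistics import mean, median

    coding = {"disagree strongly": 1, "disagree": 2, "neutral": 3,
              "agree": 4, "agree strongly": 5}
    responses = ["neutral", "neutral", "agree"]
    coded = [coding[r] for r in responses]

    print(round(mean(coded), 2))  # 3.33 -- a third of the way to "agree"? Hard to defend.
    print(median(coded))          # 3   -- the median at least lands on a real category.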
 
I struggle with descriptors like awful, less than good etc. only because a person might get low scores solely due to something not being to style. It may be a very good beer, but get a low score due to style variances, right?
Style flaws and technical flaws are both flaws.

It depends on the extent to which they're present. An IPA without hop character (and, unless it's a NEIPA, without bitterness) or a Berliner Weisse without sourness I would hit hard, as those are "core" attributes of the style that are missing. Putting new world hops in a Strong Bitter? Not technically to style per BJCP (though one could rightfully take issue with that assessment), but if the balance is there, it's a pretty minor style knock in my eyes. Might not hit the 40s, but could still score very well.

When it comes to an excellent beer that was blatantly miscategorized, I do sometimes give a courtesy 29 (say an entrant entering an excellent specialty beer in its base category). But I usually don't knock people for which specialty category they choose as long as effort was there/unless it's blatantly wrong.
 
A question was asked as to my motivation in entering beer in competitions. I guess early on it was more about getting feedback, but today I would say it's more about supporting my local homebrew club through the bottle fees I pay. I truly appreciate the time, effort, and resources it takes to put on a competition.

My earlier comments were more about the "I'm a homebrew expert" attitude some judges take on just because he/she passed a rudimentary beer test. And what makes things worse, IMHO, is that so many BJCP judges that I know through the clubs I'm a part of rarely brew, or if they do brew, tend to brew only one or two styles. It's really a disservice, for instance, when a BJCP judge only brews meads and is then slotted to evaluate DIPAs.

Circling back to the beginning of my comment here: I will continue to enter competitions with the purpose of supporting my local homebrew clubs. Which means I really need to shut my mouth about the work put in by those who put on the competitions.
 
Entering just to give a club money or to make them feel important seems kind of sad to me. Save your money if you're not getting what you want out of it.

That being said..... I personally haven't entered any competitions for the past 3-4 years. I've judged several, but not entered. :D
 
I enjoy submitting beers to competitions; however, I don't really get too stressed about it. I have done pretty well, and it is a good feeling when others credit your hard work.

Honestly, my best beers don't fit well into style guidelines and I don't enter them into competitions.

I have been thinking a lot lately about hosting a Non-AHA/BJCP Beer Competition. Has anyone ever done this or have any advice? I'm interested in a Farmhouse Ale Comp. Judging would be based more on taste and theory. I think it could be a good opportunity for some homebrewers to showcase great beers that don't fit into BJCP.
 
I have to say that I'm very surprised that there is no hard and set rule for "judges must be within x points of each other." That's standard for any type of competition where opinion/subjectivity is at play, and should be taken care of immediately. I've never entered a comp because I haven't felt the need, but reading a lot of this makes me even less likely. No disrespect to the individual judges for volunteering their time, doing their best, etc.

I also tend to brew a ton of English styles, even historical ones recently, as well as a lot in the rustic Belgian area. I don't feel that these will be judged well when the BJCP guidelines for these styles are so inaccurate, as discussed already in this thread. There's nothing wrong with wanting a competition to do better, but I do agree that if you don't find them valuable, don't enter.
 
I have to say that I'm very surprised that there is no hard and set rule for "judges must be within x points of each other." That's standard for any type of competition where opinion/subjectivity is at play, and should be taken care of immediately. I've never entered a comp because I haven't felt the need, but reading a lot of this makes me even less likely. No disrespect to the individual judges for volunteering their time, doing their best, etc.

The competitions that I've stewarded (I fully recognize that I don't have the palate to judge beers, and am perfectly OK with getting to drink all the same beers without having to work at describing them) usually have that rule... for the most part it's been a 5-point spread. Still, I was a little miffed by a recent comp where I had a beer score 37-38-30 from three judges, all of them ranked. C'est la vie.
 
I have to say that I'm very surprised that there is no hard and set rule for "judges must be within x points of each other." That's standard for any type of competition where opinion/subjectivity is at play, and should be taken care of immediately.

For the two annual competitions that I was a steward at, they had a "hard and set rule" that they followed year after year after year after year: agree within 5 points. As a steward, I saw the numbers - the judges were almost always +/- two or three points. If they were a long ways apart, they brought in a 3rd person to help understand what was going on.
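The check itself is trivial; something like this rough sketch could be run straight off the score spreadsheet (the 5-point threshold is just what those comps used, and the entries and scores below are made up):

    # Flag entries whose judges' scores are further apart than the allowed spread,
    # so a third judge or a consensus discussion can be brought in.
    MAX_SPREAD = 5  # points

    def needs_another_opinion(scores, max_spread=MAX_SPREAD):
        return max(scores) - min(scores) > max_spread

    entries = {"entry 042": [37, 38], "entry 107": [37, 30], "entry 113": [22, 25, 24]}
    for entry, scores in entries.items():
        if needs_another_opinion(scores):
            print(f"{entry}: {scores} -- too far apart, pull in another judge")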

@lowtones84, this isn't aimed at you specifically, but you captured one side of the discussion well. My reply is to "those on the other side" of this discussion, not you specifically.
... but reading a lot of this makes me even less likely. No disrespect to the individual judges for volunteering their time, doing their best, etc.
I've entered competitions, been a steward at competitions, showed up for bottle sorts (including opening shipped packages at regional events).

If you evaluate competitions
based on what you read here,
you are missing out
on what homebrewers,
in the real world,
can do when they come together.

 
Still, I was a little miffed by a recent comp where I had a beer score 37-38-30 from three judges, all of them ranked. C'est la vie.

Yeah, that just...shouldn't be allowed. Very easy for a comp organizer or head judge to lay that ground rule, or even the BJCP itself. It might sound a little over-reaching, but if you're using all BJCP certified judges then I think it would be understandable.
 
Yeah, that just...shouldn't be allowed. Very easy for a comp organizer or head judge to lay that ground rule, or even the BJCP itself. It might sound a little over-reaching, but if you're using all BJCP certified judges then I think it would be understandable.

The BJCP explicitly doesn't set comp rules. There are a few conditions of sanctioning, the biggest ones being blind judging and published and publicly available rules/guidelines (though those don't have to be BJCP guidelines). But apart from those, competitions are allowed to do pretty much whatever they want. The BJCP has recommended best practices, which IIRC includes a max 7 point spread between judges, though I haven't read it in a while. But comps are under no obligation to follow that.
 
Yeah, I get that. Is there anything like a BJCP "sanctioned" event? Then they could have a little more control, I suppose. A 7-point max difference on a 50-point scale is quite high....
 