Competition scoring question

Homebrew Talk - Beer, Wine, Mead, & Cider Brewing Discussion Forum


Kingbogart

Well-Known Member
Joined
Nov 18, 2013
Messages
212
Reaction score
37
I recently entered a competition hoping for some feedback and ways I could improve my beer. My scores came back and weren't exactly what I was hoping for.

According to the results, and comparing with the style guidelines, everything that was written is in style, yet I appear to have lost a significant amount of points with no explanation.

For example, Aroma: "A candied caramel note seems to dominate at first pour as that settles there is some dark fruit ester".

The guidelines for aroma, pertinent to what they wrote are:
" Very rich and strongly malty, often with a caramel-like aroma. May have moderate to strong fruitiness, often with a dried-fruit character."

It might just be me, but that sounds very much correct. The score provided was 5/12.


How are the points given determined? If everything mentioned is in the guideline, and nothing mentioned is negative or out of the style, then where did I get docked 7 points? I know that personal preference is supposed to be removed, but how many points are still determined on "I like this one" vs. "The style says this is allowed"?

I'd also like to add that I'm not criticizing the judges, or the process, so much as trying to understand since it feels like there is something I am missing.
 
You're correct: being dinged that hard on aroma calls for an explanation, whether that's the presence of off-aromas, a lack of expected aromas, or the intensity of the aromas that are present.

There is no real standard way to award or detract points, but there are a few schools of thought on how to approach it that are explained in the BJCP exam prep and judging handbook available on bjcp.org.

Sometimes you're going to get that kind of feedback. I've seen much worse, but also much better.
 
Welcome to the frustration, my friend! When we judge beers, we are trying to take something subjective and make it objective. Try as we might, we don't all see, smell, and taste the same things. Now try putting a numerical value on that! The best you can hope for is that a trio of judges gets together so that your scores are somewhat in line with each other.
 
After the first taste, most judges have a pretty good idea what the total score should be. Most don't give any points out until they've written all their comments, and then go back and fill in the numbers. Then they compare their scores with the other judges' to see if they're all in the same ballpark. If a judge is too high or too low, there's some give and take to get the scores close.

So it's possible that this judge had to lower his score and did it all in aroma. Or that's where he adjusted his number to get the score he wanted. Or a number of other reasons.
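That give-and-take could be sketched as a toy model (the 7-point window is the commonly cited BJCP convention for how close judges' totals should end up; the one-point-per-round convergence is purely illustrative, not an official procedure):

```python
def reconcile(scores, max_spread=7):
    """Nudge a table of judges' total scores toward each other until
    the high and low totals are within max_spread points.

    A toy model of the table discussion, not an official BJCP rule.
    """
    scores = list(scores)
    while max(scores) - min(scores) > max_spread:
        mean = sum(scores) / len(scores)
        # each judge gives a little ground toward the table average
        scores = [s + 1 if s < mean else s - 1 if s > mean else s
                  for s in scores]
    return scores

print(reconcile([25, 38]))  # a 13-point spread converges to [28, 35]
```

One consequence of that adjustment is exactly the scenario described: a judge who started high may have to shave several points, and which section (aroma, flavor, etc.) eats the difference is up to them.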
 

After looking for a while.....not the best-organized site I've run across.....I found what you were talking about. Based on the comments given, I am guessing they used the top-down method: tasted, gave it a score, and then made everything else sorta match up. I did get the same sort of feedback across all the categories, not just aroma.

Based on what the sheet says, I was 6th in the flight, so I assume somewhere in the middle. If they were to say to themselves, "that beer was a 40, and this isn't that good, so.....25" I could potentially see how this would happen. And I can understand the subjective nature, even though I wish it could be more objective.

For what it's worth, I had another judge do an unofficial review, and while the feedback was similar the score was higher, more where I expected.

I was just hoping someone had some personal experience on how they personally judge. Say, if I have an off-flavor, that's usually -1, and if it's extra offensive, -2 to -4 or something like that. I just can't really see how no negative mention merits -7.
 
Off-flavors/aromas would ding way more than just a couple of points. How many judges were there? I judged my first comp the other day and was fairly shocked at some of the other judges' lack of ability to discern certain components (mostly off-flavors, some general components), but everyone has a different palate and ability to taste different components.

What I can say is that, certainly for me vs. the Certified+ judge at the table, we were consistently within a few points of each other regardless of the details of our component scores. The other two guys at my table, who did not possess any level of BJCP or other certification, were often wildly off, and we had to discuss to adjust scores.
 
There were two judges, and both gave me the same score for all categories within 1 point, but neither mentioned any negatives, off-flavors, or issues. Just large amounts of missing points.

I can understand that a difference in palate or perception could lead to a difference in opinion, but I was also under the impression that while judging you are supposed to mention everything you perceive, good or bad, and emphasize the things that are out of style. Since I don't have any mention of 'bad', I can't understand the huge deductions, which would suggest large issues.

Doing some hunting today, I did find that this particular competition, while long-running and popular, was apparently short on judges and was trying to pull brewers and anyone who could help from the crowd on the day of judging. Maybe that affected the scores across the board. Like, larger flights and restricted time?
 
I believe this is the rule rather than the exception in homebrew competitions. It seems you will rarely get the type of feedback you're expecting. They will let you know if it's an awful beer or if it's a great beer, but you can probably figure that much out for yourself. It's disappointing if it's a great beer, as you're just out a couple of great beers and the entry fee.
 
I can say that unless there were significant point differentials between the two judges, there probably wasn't much discussion. There were a bunch of entries for the comp I just judged, and unfortunately things had to be moved along to accommodate all the entries. I am new to my area, and when I started looking into BJCP certification I was surprised how few people actually take the test (probably thanks to the hard-to-navigate website and the need to DIY your own exams). So yeah, there's definitely a shortage of judges, and an even bigger shortage of ranked BJCP judges. There are currently 7-12 of us taking the tasting exam Saturday to help alleviate this problem.

I wish I could give you better answers. The biggest thought in my head the whole time judging was, "I need to give all these beers the best attention I can because if someone else just glanced over my entry, I would be pretty upset"
 
It's a growing "sport". Judges are getting hard to find with all of the competitions popping up. You may be correct about the lack of qualified judging.

I'm attending classes to prepare for the certification tests...just doing my part to help out the community :D
 
I dunno, I think it's a fairly appropriate score...

The guidelines say "may have moderate to strong fruitiness" and a "very rich and strongly malty" aroma....

From the notes given:
"A candied caramel note seems to dominate at first pour as that settles there is some dark fruit ester".

"Some dark fruit ester" isn't "moderate to strong fruitiness" and "a candied caramel note" isn't "very rich and strong caramel-like aroma"

Sounds like you had some of the to-style aroma characteristics, just not the intensity that that particular judge felt it needed to be more to style.

In the aroma, flavor, mouthfeel, and clarity sections, judges are largely just supposed to evaluate the beer on what they're sensing/tasting and not necessarily if it's to style or not (unless it's really drastic). So a description that sounds close to the style description but lacks a big score probably means that the intensity wasn't there...

In the last section (overall comments, or whatever it's called) did it mention anything about the aroma lacking anything, or suggested improvements?
 

The guidelines also included "dried-fruit character" which I attributed to "dark fruit ester"

The only note that semi-agrees with what you're saying is "Could use more complexity, add more specialty malt......maybe", in the overall opinion section. But throughout the scores, there is no indication of off-flavors, off-aromas, or issues other than the above.

Obviously it would be difficult for anyone to judge the beer without actually tasting it, which is why I was more interested in trying to figure out how the scores are actually decided upon. You do hit upon an interesting point: my interpretation of the guidelines may not be the same as the judges'. I thought what they wrote and what the guidelines say were the same, but you're saying maybe not. Either way, it's limited feedback that isn't very helpful, and a large deduction that goes unexplained.
 

When I judge, I start with the idea of an average example of the style at the middle of the section score, then award points for specific complexities and spot-on representation of expected flavors, or ding for off-issues. We are expected to describe the characteristics in as much detail as possible, and most of the experienced judges I know will make that effort. Most judges compete or have competed as well, and follow the idea that we give the feedback detail we would like to receive.
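That "start at the middle, then adjust" approach could be sketched like this (the section maxima are the standard BJCP scoresheet values; the one-point-per-note weighting is purely illustrative):

```python
# Section maxima from the standard BJCP scoresheet:
# Aroma 12, Appearance 3, Flavor 20, Mouthfeel 5, Overall 10 (total 50).
SECTION_MAX = {"aroma": 12, "appearance": 3, "flavor": 20,
               "mouthfeel": 5, "overall": 10}

def section_score(section, pluses=0, minuses=0):
    """Start at half the section maximum for an average example of the
    style, add a point per noted positive, subtract a point per noted
    fault, and clamp to the section's valid range.
    """
    base = SECTION_MAX[section] / 2
    score = base + pluses - minuses
    return max(0, min(SECTION_MAX[section], round(score)))

# e.g. an aroma with two style-accurate notes and one mild fault:
print(section_score("aroma", pluses=2, minuses=1))  # 6 + 2 - 1 = 7
```

Under a heuristic like this, an aroma that merely matches the style with no standout intensity hovers near the midpoint, which may help explain a 5/12 with no faults noted.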

On the other hand, to fill seats, organizers are sometimes forced to bring in lots of novices who don't have a system or good descriptive ability. Last year, one judge without credentials sat on two of my beers and literally wrote six words between the two scoresheets. One beer he scored a 15 (a beer that scored 35 and 40 in comps the previous two weeks), and the only word he wrote was "banana" in the aroma section; he proceeded to score the other sections without comment. The other beer took 2nd in the category, but his score was 7 points lower than that of the Master judge he was sitting with, without explanation. He and the organizer both received emails that were not returned. I was also there judging.

There is some uniformity in the higher ranks, but it is not easy (nor should it be) to get there. The rest is usually more of a luck-of-the-draw situation for most comps.
 
Is it possible for you to upload your scoresheet?
 
Most of the posts seem to be saying that points are subtracted for faults. I don’t know anybody who does that. You could easily have a beer with no obvious problems that scores in the 30's.

If your beer is perfect you get a 40. The points after that are really hard to get. It’s the ‘Wow’ factor. Excellent is nice, but if you want to win, show me amazing.
 
There are a few additional things that haven't been discussed so far either.

Just because your beer had that "caramel note", there is no indication of how strong, weak, or long-lasting it was, etc.

This is something that would hold true for aroma and flavor inclusions.

I am surprised that no one in this thread has brought up water chemistry yet. If you had raised or lowered your total alkalinity, or adjusted chloride for that extra malt punch, would it have pushed things higher?

The judges may not know you have done these things, but the effect on the finished product can easily set you apart from the pack and turn a 35 into a 40.
 

I think that is part of the problem. Out of 50 possible points....13 points are basically "gimmes" and another 10 are hard to get. Even a 40-point beer is essentially an "80", or an "8/10". I love to read the Commercial Calibration section of Zymurgy. Even the best examples of a particular style are only scoring in the low 40's. In my mind those should be in the high 40's...or 4-5 points higher.
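For reference, the BJCP scoresheet ties total scores to descriptive bands, which is why a "mere" 40 already reads as Excellent. A quick sketch, using the bands as published on the standard scoresheet:

```python
# BJCP score bands from the standard scoresheet.
BANDS = [(45, "Outstanding"), (38, "Excellent"), (30, "Very Good"),
         (21, "Good"), (14, "Fair"), (0, "Problematic")]

def descriptor(total):
    """Map a 0-50 total score to its BJCP descriptor band."""
    for floor, name in BANDS:
        if total >= floor:
            return name

print(descriptor(40))  # Excellent -- even though it's only 80% of the scale
```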
 
...this is why I don't enter competitions anymore. Too random, and a lack of skilled judges.
 
Definitely a subjective process. I recently sent the same beer off to two separate competitions, only one week between the competitions. I wanted to get some professional feedback other than what I get from my semi-professional drinking friends. First competition, scored 37 and got some good feedback and suggestions from a Master BJCP judge. Second competition, 28 and no feedback as of yet...interested to see that sheet.
Regardless, I like the beer and my real interest was to see if someone "in the know" thought the same.
 
I'd love to see the score sheets as well. The issue I see is with the score sheet not being filled to completion. If that was all they wrote, then that is very poor on the judges' part indeed. Maybe I'm just long-winded, but when judging I try to make sure that I fill the area, and it is really only difficult to do (especially in the Flavor/Aroma/Overall sections) if the beer is so lifeless that it leaves you speechless (and then you can just write that). So like I said...I'd love to see the sheets.
 