Well, I think you could work out a system if each judge gets three bottles. That way each bottle gets judged by three different people, and every judge tries three beers.
Randomly assign each judge a unique ID number and tie this number to the beer they send in (i.e. Judge 1 = Beer 1). Rather than try to randomize the beers they receive, send beers 2, 3, and 4 to judge 1; beers 3, 4, and 5 to judge 2; and so on, wrapping around at the end so judge 100 gets beers 1, 2, and 3. This ensures that nobody judges their own beer and avoids the hassle of randomizing more than you have to.
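The rotating assignment can be sketched in a few lines (a minimal example, assuming 100 judges/beers as above; the function name is just for illustration):

```python
NUM_JUDGES = 100

def beers_for_judge(judge_id, num_judges=NUM_JUDGES):
    """Judge i tastes beers i+1, i+2, i+3, wrapping around at the end."""
    return [(judge_id + offset - 1) % num_judges + 1 for offset in (1, 2, 3)]

print(beers_for_judge(1))    # beers 2, 3, 4
print(beers_for_judge(100))  # wraps around to beers 1, 2, 3
```

A nice property of this scheme: every beer is tasted exactly three times, and no judge ever draws their own beer.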
Develop a consistent score sheet (or just use the BJCP sheets...) and have each judge score each of their three beers. Send the score sheets in to the coordinator, make a spreadsheet, and take the average/SD for each beer.
If you want to be super fancy, you can correct for harsh and gentle judges by finding the mean and SD of each category subscore across all beers. If a judge's average rating for a category is more than 2 SD from the overall mean, throw out that judge's scores for the category and replace them with the mean value from the other two judges. Since each beer is judged three times, that still leaves you with two independent judges per category per beer even if you have to throw a few out.
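Here's a toy sketch of that trimming step for a single subscore category, assuming 10 judges/beers and made-up scores (judge 4 is deliberately harsh so the flag fires; all the numbers are illustrative). Dropping the flagged judge and averaging the other two gives the same final score as replacing their entry with the other judges' mean:

```python
import statistics

# raw[judge] = that judge's three scores, in the order of the rotating
# assignment (judge j tastes beers j+1, j+2, j+3, wrapping at 10).
raw = {
    1: [8, 7, 8], 2: [7, 8, 7], 3: [9, 8, 8], 4: [2, 1, 2], 5: [8, 8, 7],
    6: [7, 9, 8], 7: [8, 7, 7], 8: [8, 8, 9], 9: [7, 8, 8], 10: [8, 7, 9],
}
# scores[judge][beer] = that judge's score for that beer
scores = {j: {(j + off - 1) % 10 + 1: s for off, s in zip((1, 2, 3), triple)}
          for j, triple in raw.items()}

all_scores = [s for by_beer in scores.values() for s in by_beer.values()]
overall_mean = statistics.mean(all_scores)
overall_sd = statistics.stdev(all_scores)

# Flag any judge whose average for this category is more than 2 SD
# from the mean across all scores.
outliers = {j for j, by_beer in scores.items()
            if abs(statistics.mean(by_beer.values()) - overall_mean) > 2 * overall_sd}

# Final per-beer score: average of the remaining judges' scores.
kept = {}
for j, by_beer in scores.items():
    if j in outliers:
        continue
    for beer, s in by_beer.items():
        kept.setdefault(beer, []).append(s)
final = {beer: round(statistics.mean(vals), 2) for beer, vals in sorted(kept.items())}

print("flagged judges:", outliers)   # judge 4 gets flagged here
print("final scores:", final)
```

One caveat worth knowing: a single extreme judge inflates the overall SD, so with very few judges a 2 SD cutoff may never trigger; with 100 entries that's much less of an issue.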
This would probably be easier to analyze in a statistics program (STATA/SAS/SPSS/R), but with a bit of doing it could work in Excel. Hope this helps!
Edit: Made an example spreadsheet, but can't attach it. It's a fake example with 5 beers/judges that takes averages, finds the score, and flags any judge who is consistently more than 1 SD out of range on a subscore. It's definitely possible to extend it to 100 entries, but I don't have the time or programming skills to do it quickly. PM me if interested.