I never said or meant to imply that it was the rule; I used it as an exercise in logic. I could have picked 0.20 or 0.30, or really anything less than 0.50. That still leaves a large number of potentially misleading write-ups among the experiments with p-values in that range. That said, I do think low-ish p-values have been more common in the experiments that used a real panel rather than one guy tasting the same beers over and over. If you look back through the archive, you'll find more than a few.
My point is that the standard blurb, when applied to them, is misleading to non-statisticians. Just looking at page two of the experiments, which were all (I think) panel experiments, there were 37 experiments whose results earned them "the blurb." Of those, 14 (38%) had p-values below 0.30. So for those 38%, the chance of getting at least the observed number of correct choices purely by random guessing was, in every case, 29% or less. It's likely that a large portion of those panels detected a difference, yet each and every one gets the standard blurb. That's my issue, and I think it's the only one I've raised in this thread. (Based on some responses, I think some folks might think I'm basically saying "Brulosophy sucks," but that's not what I'm saying at all.)
When the p-value is, say, 0.20 (or whatever), would it not be helpful to say "Results indicate that if there were no detectable difference between the beers, there was a 20% chance of "X" <fill in the blank> or more tasters identifying the beer that was different, but "Y" tasters actually did"? IMO, that would give readers something much more tangible on which to base their impressions, and on which to weigh the risk of changing (or not changing) their own processes.
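For anyone who wants to see how that number falls out, here's a quick sketch. It assumes a standard triangle test, where a taster who can't tell the beers apart still picks the odd one out 1/3 of the time by chance; the panel size and tally below are made up for illustration:

```python
from scipy.stats import binom

# Hypothetical triangle-test panel (numbers invented for illustration):
# each guessing taster has a 1/3 chance of picking the odd beer out.
n_tasters = 20
correct = 9
chance = 1 / 3

# p-value: probability of `correct` or more right picks if every
# taster were purely guessing. binom.sf(k, n, p) gives P(X > k),
# so pass correct - 1 to get P(X >= correct).
p_value = binom.sf(correct - 1, n_tasters, chance)

print(f"If there were no detectable difference, there was a "
      f"{p_value:.0%} chance of {correct} or more of {n_tasters} "
      f"tasters identifying the different beer; {correct} actually did.")
```

With those made-up numbers it prints roughly a 19% chance, which is exactly the kind of figure the proposed wording would surface instead of the one-size-fits-all blurb.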
ETA: and it might even reduce the number of *^$&*#&^ "Brulosophy debunked <X>" posts.