And now we're getting into my wheelhouse... When you look into electronics, it's *all* about extracting signal from waveforms that, to the untrained eye, look like noise.
My own industry (data storage) is a telling example. When I first got into hard disk drives, I assumed that the data on the platter would show up as a relatively clear "1" or "0"* in the magnetic read signal. Not true at all... It looks like noise to me.
So how do they get the bits out of your HDD? It's probability-based. They use a method called PRML (partial response, maximum likelihood). In layman's terms, it's basically taking an educated guess and then checking it against the error-correcting codes (ECC); there's a toy sketch of the idea below these links.
http://www.pcguide.com/ref/hdd/geom/dataPRML-c.html
https://en.wikipedia.org/wiki/Partial-response_maximum-likelihood
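To make that concrete, here's a toy sketch of the "maximum likelihood" half in Python. This is not real drive firmware (real detectors use the Viterbi algorithm in dedicated hardware, and the channel model below is a simplified PR4-style one I picked for illustration); it just shows the principle of picking the bit sequence whose ideal waveform best fits the noisy one:

```python
import itertools

def pr4_output(bits):
    """Ideal output of a toy PR4-style (1 - D^2) channel:
    bits map to +1/-1 and each sample is x[k] - x[k-2]."""
    x = [1 if b else -1 for b in bits]
    return [x[k] - (x[k-2] if k >= 2 else 0) for k in range(len(x))]

def ml_detect(samples, n_bits):
    """Brute-force maximum-likelihood detection: try every possible bit
    sequence and keep the one whose ideal output is closest (in squared
    error) to what was actually read. The Viterbi algorithm gets the
    same answer without trying every sequence."""
    best, best_err = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n_bits):
        err = sum((s - y) ** 2 for s, y in zip(samples, pr4_output(bits)))
        if err < best_err:
            best, best_err = bits, err
    return list(best)

# A "read" of the pattern 1,0,1,1,0,1 with noise on every sample.
noisy = [1.7, -2.3, 0.4, 3.6, -1.5, 0.2]
print(ml_detect(noisy, 6))  # -> [1, 0, 1, 1, 0, 1]
```

None of those samples sits on a clean level, yet the closest-fitting sequence is still exactly what was written.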
Now, when you actually think about it, it seems like fantasy that you can tease some of these signals out of that noise. But it works. My livelihood and the sanctity of your digital data rely on it working.
Similar things are used in a lot of data transmission scenarios. Your cell phone signal is relatively weak, and it's sent through the air alongside all manner of other electronic communications, any of which can interfere with yours. Upon receipt at the cell tower, I'll bet that incoming signal looks like noise. Yet your calls go through.
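The classic trick for that is correlating against a known pattern. Here's a toy sketch (not any real cell protocol; the preamble length, offset, and amplitude are all made up for illustration) showing how a pattern buried at noise-floor amplitude still pops out:

```python
import random

random.seed(1)

# A 32-chip pseudorandom "preamble" -- real systems embed known
# training sequences so the receiver has something to lock onto.
preamble = [random.choice([1, -1]) for _ in range(32)]

# Received samples: Gaussian noise, with the preamble buried at
# offset 100 at an amplitude no stronger than the noise itself.
received = [random.gauss(0, 1.0) for _ in range(300)]
for i, chip in enumerate(preamble):
    received[100 + i] += chip

def correlate(samples, pattern):
    """Slide the known pattern across the samples. Noise correlates
    with nothing, but the signal adds up coherently, so the dot
    product peaks where the pattern actually sits."""
    n = len(pattern)
    return [sum(samples[i + j] * pattern[j] for j in range(n))
            for i in range(len(samples) - n + 1)]

scores = correlate(received, preamble)
print("best offset:", scores.index(max(scores)))  # lands at (or near) 100
```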
To bring it back to AJ's point, any time you're dealing with human sensory analysis, you have to assume that there is going to be significant noise. Like the experiments where blindfolded wine experts couldn't reliably tell red wine from white, we are very corrupted instruments. But that DOES NOT mean that all tests are unreliable. It simply means that you have to develop tests which can tease a signal out of the noise.
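That's what repetition plus basic statistics buys you. A sketch, assuming a standard triangle test (pick the odd sample out of three, so pure guessing averages 1/3 correct):

```python
from math import comb

def p_value(correct, trials, chance=1/3):
    """Probability of getting at least `correct` answers right by pure
    guessing. A small p-value means the tasters are pulling a real
    signal out of their own sensory noise."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# 30 tastings: guessing alone averages ~10 right. Getting 16 right
# is unlikely to be luck, even though each single tasting is noisy.
print(f"p = {p_value(16, 30):.4f}")
```

One tasting tells you almost nothing; thirty let the signal accumulate above the noise, the same way the correlator above does.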
* Overly simplified, as the data isn't stored as high/low binary values but as magnetic flux transitions. But that's getting into the weeds.