Technically this is spot-on, and for the highest precision mash pH readings should indeed be taken at mash temperature. But many have observed that the longevity of budget pH meter probes is seriously degraded by measuring at temperatures as elevated as those seen within the mash, so a compensation factor "of sorts" is applied, whereby a mash pH reading taken at 20 degrees C. can be ballparked to what it might have been had it actually been taken at some nominal mash temperature.
The problem then becomes one of asking "What compensation factor is to be used?". Briggs states that in going from a measurement at 65 degrees C. to one at 18 degrees C. he observes an elevation of 0.35 pH points. Palmer states that a similar temperature shift results in only a 0.25 pH point rise. Sadly, this is not very good agreement. My compromise is to presume 0.30 pH points, roughly splitting the difference.
For Briggs (if one can presume, to a first approximation, that pH varies linearly with measurement temperature) the slope works out to ~0.00745 pH points per degree C. of measurement temperature change, or ~0.00414 pH points per degree F.
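Briggs's per-degree figure can be checked directly. A quick sketch, using only the 65 to 18 degrees C. span and the 0.35 pH point shift quoted above:

```python
# Briggs: 0.35 pH points of shift across a 65 C -> 18 C measurement span
shift_ph = 0.35
span_c = 65 - 18  # 47 degrees C

slope_per_c = shift_ph / span_c    # pH points per degree C
slope_per_f = slope_per_c * 5 / 9  # a degree F is 5/9 the size of a degree C

print(round(slope_per_c, 5))  # 0.00745
print(round(slope_per_f, 5))  # 0.00414
```

The same arithmetic with 0.25 in place of 0.35 gives Palmer's slope.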
For Palmer the slope is ~0.0056 pH points per degree C., or ~0.0031 pH points per degree F.
Splitting the difference between Briggs and Palmer gives ~0.0065 pH points per degree C. of measurement temperature change, or ~0.0036 pH points per degree F.
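Put to use, the split-the-difference slope lets a room-temperature reading be ballparked back to mash temperature. A minimal sketch, assuming a 20 degrees C. reading temperature; the function name and interface are illustrative, not from Briggs or Palmer:

```python
def estimate_mash_ph(ph_at_20c, mash_temp_c, slope_per_c=0.0065):
    """Ballpark the pH a 20 C reading would have shown at mash temperature.

    pH reads lower at higher measurement temperatures, so we subtract the
    slope times the temperature difference. The default slope is the
    Briggs/Palmer split-the-difference value of ~0.0065 pH points per
    degree C.
    """
    return ph_at_20c - slope_per_c * (mash_temp_c - 20)

# Example: a 5.70 reading at 20 C, for a mash resting at 66 C:
print(round(estimate_mash_ph(5.70, 66), 2))  # 5.4
```

Passing `slope_per_c=0.00745` or `0.0056` instead swaps in Briggs's or Palmer's figure, which is handy for seeing how much the disagreement between the two actually matters for a given mash temperature.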
We could come closer to a reliable compensation factor if we could resolve why Briggs witnessed 0.35 points of pH shift vs. Palmer's witnessing only 0.25 points of shift.