Having used various pH meters for lab work, I think it's more important to find a durable meter and stick with it. Even with high-end bench meters it's unlikely that two of them will read the same. This is one of those cases where precision is better than accuracy.
In a pH meter accuracy and precision are, at least at the outset, the same thing. OK, now that I have your attention, let's discuss what that really means, and let me start by asking you the question "When a manufacturer specifies that his meter has an accuracy of 0.05 pH, what does that mean?". I have seen lots of pH meter specs and in only one case did the spec say what it means. It refers to the stability of the electrode (and of the electronics, but with modern digital implementations the instability of the electronics is insignificant compared to that of the electrode).

If the meter can measure voltage and temperature to a certain precision, then the least detectable difference in pH is ∆pH = ∆V/57.88, where ∆V is the precision of the voltage measurement in mV and 57.88 mV/pH (at 20 °C) is a constant that depends on temperature. If, for example, ∆V is 0.2 mV, then ∆pH = 0.2/57.88 = 0.0035 pH and, we presume, the manufacturer would display 0.01 pH resolution. We assume that 200 µV (0.2 mV) is the A/D resolution and that the electronic noise in this design would be 63 µV or less (10 dB down). We also note that quantizing the display to 0.01 pH introduces quantizing noise of 0.01/√12 = 0.0029 pH into the readings.
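For the curious, a few lines of Python reproduce these figures. This is just a sketch of the arithmetic above, using the 0.2 mV resolution and 57.88 mV/pH slope already assumed:

```python
import math

# Least detectable pH difference for a 0.2 mV A/D resolution and the
# 57.88 mV/pH electrode slope at 20 °C (both figures from the text above).
SLOPE = 57.88                       # mV per pH unit at 20 °C
DV = 0.2                            # voltage measurement precision, mV

delta_ph = DV / SLOPE               # least detectable difference in pH
display_q = 0.01 / math.sqrt(12)    # quantizing noise of a 0.01 pH display

print(f"least detectable difference: {delta_ph:.4f} pH")   # 0.0035
print(f"display quantizing noise:    {display_q:.4f} pH")  # 0.0029
```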
We dip the electrode into 4 and 7 buffers and solve a pair of linear equations to come up with slope and offset numbers. The offset is added to the electrode voltage reading and the result is then scaled by the slope number. If you do all this and then put the electrode back in the 4 buffer (or leave it there while the calculations are being done) and nothing has changed, the scaled offset voltage calculated by the meter will be (7 - 4.00221)*57.88 = 173.512 ~ 173.6 mV (0.2 mV precision). If moved to the pH 7 buffer, and again nothing has changed, the calculated scaled offset voltage will be (7 - 7.01624)*57.88 = -0.940 ~ -1.0 mV, assuming both buffers are at 20 °C. Here 4.00221 is the pH of the standard 4 buffer at 20 °C and 7.01624 that of the 7 buffer. Given 173.6 mV, the meter divides by 57.88 and subtracts the result from 7 to get 7 - 173.6/57.88 = 4.0007, which it rounds to two decimal places to show 4.00. Relative to the true buffer pH of 4.00221 this represents an error of 0.0022. Doing the same for the 7 buffer, the meter computes 7 + 1.0/57.88 = 7.0173, an error of 0.0010 relative to the true 7.01624, and displays, to two decimal places, 7.02.
Thus, at calibration, the meter is dead on to two decimal places, with its accuracy set by its precision.
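A short Python sketch makes this easy to check. The only assumptions are the 0.2 mV quantization and the 57.88 mV/pH slope, both taken from above:

```python
SLOPE = 57.88   # mV per pH unit at 20 °C
STEP = 0.2      # A/D resolution, mV

def quantize(mv):
    """Round a voltage to the meter's 0.2 mV resolution."""
    return round(mv / STEP) * STEP

# Ideal electrode voltage in each buffer, then the meter's conversion
# back to pH: pH = 7 - V/57.88, displayed to two decimal places.
for true_ph in (4.00221, 7.01624):
    v = quantize((7 - true_ph) * SLOPE)   # 173.6 mV and -1.0 mV
    ph = 7 - v / SLOPE                    # 4.0007 and 7.0173
    print(f"true {true_ph}: reads {ph:.4f}, displays {ph:.2f}")
```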
It would be well at this point to contemplate the fact that the typical NIST technical buffer is manufactured to a tolerance of ±0.02 pH and realize that it is, at calibration, really the buffers that determine the accuracy of the meter.
Now let's go away for 10 minutes and come back. What does the meter read now in the 4 buffer? If the electrode is stable, i.e. it does not drift, it will still read 4.00. How about at 20 minutes, a half hour, etc.? Any real electrode will, in fact, drift and may be off by a few hundredths or, in a cheap meter, many hundredths of a pH unit. It is this drift which really determines the accuracy of which a pH meter is capable. A highly precise meter that drifts is inaccurate (and you're going to have to talk to me a bit to convince me your meter should display 3 digits beyond the decimal point, though quite a few do). This is why I put so much emphasis on the stability test described in the Sticky.
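To put rough numbers on that, here is a toy calculation; the 1 mV/hour drift rate is purely an assumption for illustration, not a property of any particular electrode:

```python
# How a steady (assumed) 1 mV/hour electrode drift moves the reading in
# 4 buffer: each 1 mV of drift shifts the pH by 1/57.88 ~ 0.017 pH.
SLOPE = 57.88           # mV per pH unit at 20 °C
DRIFT_MV_PER_HR = 1.0   # assumed drift rate, for illustration only

for minutes in (0, 10, 20, 30, 60):
    drift = DRIFT_MV_PER_HR * minutes / 60   # accumulated drift, mV
    v = 173.6 + drift                        # reading in 4 buffer, mV
    print(f"{minutes:3d} min: pH {7 - v / SLOPE:.3f}")
```

After an hour at this rate the meter reads 3.983 in a buffer it showed as 4.001 at calibration, i.e. it is already off by nearly two hundredths.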
It is clear from all this that an 'accuracy' spec on a meter is useless unless the conditions (e.g. electrode in 4 buffer in a water bath) and the time duration (e.g. over a period of 1 hour) are specified. As I said at the outset, the only time I have ever seen this done is for the Hach pH Pro+.