

02-02-2013, 12:26 PM

#31

Aug 2010
McLean/Ogden, Virginia/Quebec
Posts: 9,430
Liked 1565 Times on 1191 Posts

Quote:
Originally Posted by Kaiser
If you mixed correctly, both counts should be close. If not, there is a problem. Others recommend filling the hemocytometer three times and counting three times.

Remember what I said in #23. The relative standard deviation will be 1/sqrt(number of cells you count), so for 10,000,000 cells/mL and a large-square volume of 0.0001 mL you'd expect to have 1000 cells in a large square, and the standard deviation between large squares would be sqrt(1000) = 31.6. An estimate of your overall error is 100/sqrt(total_cells_counted), so that if, in this example, you counted one large square your estimated error is 100/sqrt(1000) = 3.2%. If you counted 3 large squares and got a total of 2998 then your estimated error is 100/sqrt(2998) = 1.8%. These are your counting errors only. If you are making dilution errors, those add to the uncertainty. If you find differences between counts of more than sqrt(cells_counted) then you should suspect that you are not doing the dilutions properly.
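The arithmetic above is simple enough to wrap in a few lines. A quick sketch (mine, not from the thread), using the same numbers from the example:

```python
import math

def counting_error_pct(total_cells_counted):
    """Relative counting error (in %) for a Poisson cell count.

    Counting N cells gives a standard deviation of sqrt(N), so the
    relative error is sqrt(N)/N = 1/sqrt(N), i.e. 100/sqrt(N) percent.
    """
    return 100.0 / math.sqrt(total_cells_counted)

# One large square holding ~1000 cells:
print(round(counting_error_pct(1000), 1))   # 3.2 (%)
# Three squares totalling 2998 cells:
print(round(counting_error_pct(2998), 1))   # 1.8 (%)
```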



02-02-2013, 01:25 PM

#32

Oct 2012
Malden, MA
Posts: 2,191
Liked 247 Times on 201 Posts

Quote:
Originally Posted by Hermit
As a practical matter, generally how consistent are your counts?

Sometimes I just need a ballpark number and might only count 100-300 cells. These have an average box-to-box standard deviation of 6%.
If I take my time and do everything right and count 500 cells or more, the standard deviation is 3% on average.
It is quite possible to have a very low standard deviation but have the numbers be way off. Recently I had some alcohol left in a tube from cleaning and didn't realize that it had not completely evaporated. My viability was much lower than expected.
It's important to know what to expect. This might come from taking day-to-day measurements or from running the tests in duplicate or triplicate.



02-02-2013, 02:42 PM

#33

Aug 2010
McLean/Ogden, Virginia/Quebec
Posts: 9,430
Liked 1565 Times on 1191 Posts

Quote:
Originally Posted by WoodlandBrew
Sometimes I just need a ballpark number and might only count 100-300 cells. These have an average box-to-box standard deviation of 6%.

1/sqrt(100) = 10%; 1/sqrt(300) = 5.77%. Ain't science grand.
Quote:
Originally Posted by WoodlandBrew
If I take my time and do everything right and count 500 cells or more, the standard deviation is 3% on average.

To cut your 300-count population sd in half you'd have to count 4*300 = 1200 cells. It's entirely possible that you get, by luck of the draw, a sample standard deviation less than the population sd, but that won't happen very often. I question 3% in the long term. Yes, it can happen, but it's unlikely.
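The "count 4x the cells to halve the sd" claim is easy to check by simulation. A sketch (mine, not from the thread) drawing repeated Poisson counts at the 300- and 1200-cell levels discussed above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate repeated counts of squares holding 300 cells on average,
# and of squares holding 4x as many (1200 cells on average).
counts_300 = rng.poisson(300, size=10_000)
counts_1200 = rng.poisson(1200, size=10_000)

rel_sd_300 = counts_300.std() / counts_300.mean() * 100
rel_sd_1200 = counts_1200.std() / counts_1200.mean() * 100

print(f"{rel_sd_300:.2f}%")   # ~5.77% (= 100/sqrt(300))
print(f"{rel_sd_1200:.2f}%")  # ~2.89% -- counting 4x the cells halves the sd
```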
Quote:
Originally Posted by WoodlandBrew
It is quite possible to have a very low standard deviation but have the numbers be way off. Recently I had some alcohol left in a tube from cleaning and didn't realize that it had not completely evaporated. My viability was much lower than expected.

That's called a 'bias error'. If I snuck into your lab and replaced your 0.0001 mL hemocytometer with one that had a volume of 0.00009 mL, all your counts would be 10% low but your variances would be little changed.
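That swapped-chamber scenario can be simulated in a few lines. A sketch (mine; the 0.00009 mL chamber is the hypothetical from the post, the 10,000,000 cells/mL is the figure from #31):

```python
import numpy as np

rng = np.random.default_rng(7)

true_conc = 10_000_000     # cells/mL
nominal_volume = 0.0001    # mL, what we *think* the large square holds
actual_volume = 0.00009    # mL, the hypothetical mislabeled chamber

# Counts follow the actual volume; concentration estimates use the nominal one.
counts = rng.poisson(true_conc * actual_volume, size=1000)
estimates = counts / nominal_volume

print(f"mean estimate: {estimates.mean():,.0f} cells/mL")  # ~9,000,000: 10% low
print(f"relative sd:   {estimates.std() / estimates.mean() * 100:.1f}%")  # still ~100/sqrt(900)
```

The bias shifts every estimate by the same factor, so the spread between squares (the thing a standard deviation measures) is essentially untouched.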



02-02-2013, 03:13 PM

#34

Oct 2012
Malden, MA
Posts: 2,191
Liked 247 Times on 201 Posts

It might have to do with some post-processing of the data. I throw out the high and low of both the dead and live cell counts. The data includes 21 cell counts of more than 500 cells.



02-05-2013, 03:25 PM

#35

Feb 2012
Baltimore, MD
Posts: 171
Liked 19 Times on 9 Posts

Quote:
Originally Posted by WoodlandBrew
It might have to do with some post-processing of the data. I throw out the high and low of both the dead and live cell counts. The data includes 21 cell counts of more than 500 cells.

Do you have a reason to suspect that the high and low counts are biased somehow? If you don't, then you're really just hurting your estimate by doing this.



02-05-2013, 03:50 PM

#36

Aug 2010
McLean/Ogden, Virginia/Quebec
Posts: 9,430
Liked 1565 Times on 1191 Posts

Remember that the mean isn't that robust an estimator. I'd say that if he has readings that are more than a couple of sigmas out he should toss them, as the probability of getting a reading like that without some manipulation error (counting a square more than once, skipping a square, doing a dilution wrong, or waiting too long after pipetting before filling the slide so that cells settle to the bottom of the pipet) is very small.



02-05-2013, 04:08 PM

#37

Oct 2012
Malden, MA
Posts: 2,191
Liked 247 Times on 201 Posts

Good points. The equation I use in Excel just tosses out the highest and the lowest counts regardless of the deviation. I'll have to rethink this.



02-05-2013, 05:01 PM

#38

Feb 2012
Baltimore, MD
Posts: 171
Liked 19 Times on 9 Posts

Quote:
Originally Posted by ajdelange
Remember that the mean isn't that robust an estimator. I'd say that if he has readings that are more than a couple of sigmas out he should toss them, as the probability of getting a reading like that without some manipulation error (counting a square more than once, skipping a square, doing a dilution wrong, or waiting too long after pipetting before filling the slide so that cells settle to the bottom of the pipet) is very small.

Well, that's why I was asking what the reasoning was behind dropping the highest and lowest counts. If there aren't outliers, then the total number of counted cells gives you the minimum-variance unbiased estimator for a Poisson count. Testing for outliers depends a lot on the actual data, but if the counts in each square are large enough that a Gaussian assumption can be made, then an ANOVA could give you some guidelines for dropping the counts for particular squares.
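For what it's worth, the two schemes being debated can be compared in a few lines. A sketch (my own, with made-up data: nine clean squares of ~500 cells each plus one 650-cell square standing in for a manipulation error; the 3-sigma threshold is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def trim_min_max(counts):
    """Unconditionally drop the single highest and lowest count (the spreadsheet approach)."""
    s = np.sort(counts)
    return s[1:-1]

def reject_outliers(counts, n_sigma=3.0):
    """Keep only counts within n_sigma Poisson standard deviations (sqrt of the mean) of the mean."""
    m = counts.mean()
    keep = np.abs(counts - m) <= n_sigma * np.sqrt(m)
    return counts[keep]

clean = rng.poisson(500, size=9)        # nine clean squares, ~500 cells each
tainted = np.append(clean, 650)         # plus one square with a manipulation error

# Trimming always discards two squares, even when the data are clean;
# sigma-based rejection only discards counts that are implausibly far out.
print(len(trim_min_max(clean)), len(reject_outliers(clean)))
print(len(trim_min_max(tainted)), len(reject_outliers(tainted)))
```

The contrast is the point made above: on clean data, trimming throws away perfectly good counts (hurting the estimate), while a sigma test leaves them alone but still catches the 650-cell square.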





