Measuring accuracy or consistency
Is anyone aware of any research, or at least conclusions, regarding different ways to measure bullet hole groups?
I know the convention is to shoot X number of Y-shot groups, then measure the extreme spread of each group and calculate the average. However, I don't want to shoot more groups/shots than necessary, or waste my time by shooting fewer than necessary. For example, if I can be 90% confident that a 10% difference in group sizes is meaningful by shooting three groups, I'd rather do that than shoot 10 groups to be 95% confident. Heck, I might settle for 80% confidence.

I'm looking for answers to questions like these:

1) How many shots, and how many groups, are enough to be X% confident that a given difference is meaningful?
2) Is one 10-shot group better than two 5-shot groups?
3) If I measure the Average Group Radius (AGR) instead of the Extreme Spread (ES), can I reduce the number of shots required to get a statistically significant answer?

It seems likely that AGR is a better method than ES, but has anybody actually tested it? Just because everybody has been using ES forever doesn't mean it's the best method. One good reason to use ES is that it's far easier to measure than AGR, but I have a computer program that makes AGR easy enough.
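To make question 3 concrete, here is a rough Python sketch of the kind of comparison I have in mind. It's purely illustrative: it assumes shots scatter as a circular bivariate normal around the point of aim (the sigma value, group size, and all function names are made up for this example). It computes ES and AGR for a large number of simulated groups and reports each statistic's coefficient of variation (CV), i.e. its group-to-group noise relative to its mean:

```python
import math
import random
import statistics

def extreme_spread(group):
    """Largest center-to-center distance between any two holes (ES)."""
    return max(math.dist(a, b)
               for i, a in enumerate(group)
               for b in group[i + 1:])

def average_group_radius(group):
    """Mean distance from each hole to the group's mean point (AGR)."""
    cx = statistics.fmean(x for x, _ in group)
    cy = statistics.fmean(y for _, y in group)
    return statistics.fmean(math.dist((cx, cy), p) for p in group)

def simulate(shots_per_group=5, n_groups=10_000, sigma=0.3, seed=1):
    """Draw groups from a circular-normal 'rifle' and compare statistics."""
    rng = random.Random(seed)
    es_vals, agr_vals = [], []
    for _ in range(n_groups):
        group = [(rng.gauss(0, sigma), rng.gauss(0, sigma))
                 for _ in range(shots_per_group)]
        es_vals.append(extreme_spread(group))
        agr_vals.append(average_group_radius(group))
    for name, vals in (("ES", es_vals), ("AGR", agr_vals)):
        mean = statistics.fmean(vals)
        cv = statistics.stdev(vals) / mean  # group-to-group noise
        print(f"{name}: mean = {mean:.3f}, CV = {cv:.1%}")

simulate()
```

The CV is the figure of merit here: roughly speaking, the number of groups needed to resolve a given percentage difference between two loads grows with the square of (CV / difference), so whichever statistic shows the smaller CV under this toy model should need fewer groups, all else being equal. But a simulation is no substitute for the kind of published testing I'm asking about.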