Ask Me Help Desk (https://www.askmehelpdesk.com/forum.php)
-   Mathematics (https://www.askmehelpdesk.com/forumdisplay.php?f=199)
-   -   Is there a way to calculate a probable range for the next single datapoint? (https://www.askmehelpdesk.com/showthread.php?t=468129)

  • Apr 30, 2010, 04:09 PM
    johnnleach
    Is there a way to calculate a probable range for the next single datapoint?
    Given a number of experimental results (n-datapoints), I find a range of reported values. I can use that set of data to calculate a confidence interval (t-distribution or normal). However, the confidence interval tells me nothing about the probable range that my next datapoint will fall within. The confidence interval is actually a probable range for the mean of my datapoints. In other words, it only applies to predicting the next mean of n-datapoints. Is there a way to calculate a probable range for the next single datapoint?

    The more datapoints I have, the narrower the confidence interval. That makes sense, because the more data I have, the more confident I am that the true value is represented by the mean of those points. At some point, more datapoints will tell me nothing new about the range of values that I can expect to see; the additional, largely redundant values just shrink the standard error of the mean (s/sqrt(n)), which is why the confidence interval narrows. But if I'm attempting to establish acceptable limits to ensure that future datapoints are not showing a trend, I need a range that encompasses all of the individual datapoints that are considered to be valid. Each future point needs to be scrutinized on its own as acceptable or not.
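    As a quick illustration of the point above (made-up example numbers, not the poster's data), the confidence interval for the mean shrinks with n, while the spread of the individual points does not:

        # Sketch contrasting the CI for the mean with the spread of single points.
        # The data below are made-up illustrative numbers.
        import numpy as np
        from scipy import stats

        x = np.array([9.8, 10.1, 9.6, 10.4, 10.0, 9.9, 10.2, 9.7])
        n = len(x)
        mean, s = x.mean(), x.std(ddof=1)
        t = stats.t.ppf(0.975, df=n - 1)        # two-sided 95% critical value

        # Confidence interval for the MEAN: half-width shrinks like 1/sqrt(n).
        print(f"95% CI for the mean: {mean:.2f} +/- {t * s / np.sqrt(n):.2f}")

        # Rough spread of SINGLE observations: stays about +/- t*s as n grows.
        print(f"spread of individual points: {mean:.2f} +/- {t * s:.2f}")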
  • May 4, 2010, 03:23 PM
    takeonme

    I think for the normal distribution you would just take your sample mean and sample standard deviation and apply normal-style limits, e.g. mean ± 1.96·s, to get roughly a 95% probability of a new point falling in that range. Strictly, it does matter a little that the mean is estimated (the interval should be widened slightly), but with a decent number of points the effect is small. I'm not sure what you'd do for the t-distribution.
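    A minimal sketch of that suggestion (made-up example numbers), using the sample mean and standard deviation with the 1.96 normal multiplier:

        # Normal-based ~95% range for a single new observation, as suggested above.
        import numpy as np

        x = np.array([9.8, 10.1, 9.6, 10.4, 10.0, 9.9, 10.2, 9.7])  # made-up data
        mean, s = x.mean(), x.std(ddof=1)

        lo, hi = mean - 1.96 * s, mean + 1.96 * s
        print(f"approx. 95% range for the next point: {lo:.2f} to {hi:.2f}")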
  • May 5, 2010, 05:13 PM
    johnnleach

    The normal distribution works reasonably well, sometimes; I guess it depends on whether the data really are "normal". Most of the time my data fall outside the confidence interval more frequently than the alpha value predicts. The t-distribution often works if I use the standard deviation (s) instead of s/sqrt(n) to calculate the interval, but I've never seen a textbook that approves of this approach.
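    For what it's worth, using s with a t multiplier is essentially the textbook prediction interval; the standard form also includes a sqrt(1 + 1/n) factor to account for the estimated mean. A minimal sketch (made-up example numbers):

        # Two-sided 95% prediction interval for the NEXT single observation.
        import numpy as np
        from scipy import stats

        x = np.array([9.8, 10.1, 9.6, 10.4, 10.0, 9.9, 10.2, 9.7])  # made-up data
        n = len(x)
        mean, s = x.mean(), x.std(ddof=1)

        t = stats.t.ppf(0.975, df=n - 1)         # 95% two-sided critical value
        half = t * s * np.sqrt(1 + 1 / n)        # prediction-interval half-width
        print(f"95% prediction interval: {mean - half:.2f} to {mean + half:.2f}")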
  • May 13, 2010, 03:59 PM
    johnnleach

    Is there a way to estimate how well a normal model will fit your data? In other words, is there a way to say that my data are something like 80% "normal", so that I may interpret them as expected to fall within the normal confidence interval 80% of the time? The interval would then be at 80% times the nominal alpha level.
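    One standard way to gauge this is a normality test such as Shapiro-Wilk (it gives a p-value for the hypothesis of normality rather than a "percent normal" figure), or a normal probability plot. A minimal sketch (made-up example numbers):

        # Shapiro-Wilk test: a small p-value suggests the data are not well
        # described by a normal distribution.
        import numpy as np
        from scipy import stats

        x = np.array([9.8, 10.1, 9.6, 10.4, 10.0, 9.9, 10.2, 9.7])  # made-up data
        stat, p = stats.shapiro(x)
        print(f"Shapiro-Wilk W = {stat:.3f}, p-value = {p:.3f}")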
