Best Answer Charles.C, 26 August 2017 - 05:40 AM

Hi MDG,

2 further comments -

(1) I presume yr stated limit of coliform <10cfu/g refers to the finished product and therefore, in practice, an accumulated "__lot__" of the product. It is unclear what/where yr own sampling is being carried out.

If this limit is associated with some particular Standard, it is common to find that the Standard __also__ defines the sampling/analytical procedure used to demonstrate compliance.

IMEX, it is more common to find micro. lot acceptance requirements stated in an n/m/c/M format. For example, I noted an ice cream standard elsewhere of the form 5/10/2/100. A sole criterion such as <10cfu/g is OK as a guideline target but sort of statistically limited from an acceptance POV unless it refers to an average result based on a defined number of samples.

(2a) There is an alternative way to utilise data from yr OP via either basic statistics or a **process** POV.

If you __assume__ that yr set of data is taken from a process "in control", the average/std.dev. of the data can be used to predict the __probability__ that, for the subsequently accumulated batch, any random sample from the batch will yield a result <10cfu/g (ie compliance with spec) (assuming same analytical procedure). Conversely, you can estimate the __necessary__ target average required if the first calculation gives too low a probability of compliance. This uses basic statistics formulae and should be quite easy to apply.
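As a rough illustration of (2a), here is a minimal sketch in Python, assuming the counts are approximately normally distributed (in practice micro counts are often log-transformed first) and using purely hypothetical figures for the average, std. dev. and target probability:

```python
from statistics import NormalDist

# Hypothetical process data (cfu/g), assumed drawn from a process "in control"
proc_mean, proc_sd = 6.0, 1.5
spec = 10.0  # finished-product limit from the OP

# Probability that a random sample from the batch complies with <10 cfu/g
p_comply = NormalDist(mu=proc_mean, sigma=proc_sd).cdf(spec)
print(f"P(result < {spec} cfu/g) = {p_comply:.3f}")

# Conversely: the target average needed for, say, a 99% probability of compliance
z = NormalDist().inv_cdf(0.99)       # one-sided z-value for 99%
target_mean = spec - z * proc_sd
print(f"Required target average: {target_mean:.2f} cfu/g")
```

If `p_comply` comes out too low, the second calculation tells you how far the process average must be pulled down, given the same spread.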

As an extension of (2a) you could consider implementing a process control procedure (again basic textbook stuff) -

(2b) Assuming that you set a target __average__ coliform process level to be such that, theoretically, any random sample of the subsequent accumulated lot **will** "likely" match yr OP-spec (ie <10cfu/g), yr timed sampling data can be used to test whether yr process is in suitable "statistical control" to __consistently__ match such a requirement.

This means that you set yr target process __average__ level of coliform at a certain value, say X (below 10cfu/g), such that a random sample from the resulting batch will have, for example, an approx 99% probability of giving a result of <10cfu/g (3 sigma outer control limit).

This requires (a) defining X from an estimated process std. deviation (via sampling data) and then (b) using timed sampling data to test whether the process conforms to the required control limits. Usually this procedure requires at least two samples to be analysed at each designated time.
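A minimal sketch of steps (a) and (b), again assuming approximate normality and using hypothetical figures for the process std. deviation and the timed subgroup results:

```python
import math
from statistics import mean

spec = 10.0    # cfu/g finished-product limit (from the OP)
sigma = 1.2    # hypothetical process std. dev. estimated from sampling data

# (a) Target process average X set 3 sigma below spec, so a random sample
#     has a high (approx 99%+) one-sided probability of being < spec
X = spec - 3 * sigma

# (b) x-bar control limits for subgroups of n samples per timed check
n = 2
ucl = X + 3 * sigma / math.sqrt(n)
lcl = X - 3 * sigma / math.sqrt(n)

# Hypothetical timed subgroup results (cfu/g, two samples per time point)
subgroups = [(5.8, 6.4), (6.9, 7.3), (6.1, 5.5)]
for t, grp in enumerate(subgroups):
    xbar = mean(grp)
    print(f"t={t}: x-bar={xbar:.2f}, in control: {lcl <= xbar <= ucl}")
```

A subgroup mean outside the limits signals the process is drifting away from the target average X and the accumulated lot may no longer be "likely" compliant.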

The above set-up is fully described in any textbook on process statistical control. Once you've done the initial calculations, you just plot the points with time.

One caveat to the above is that inaccurate analytical data may generate large standard deviations so that the adequacy of process control becomes difficult to evaluate.
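This caveat can be quantified: the observed variance is the sum of the true process variance and the analytical (measurement) variance, so an imprecise assay inflates the estimated std. deviation and widens the control limits. A small illustration with hypothetical figures:

```python
import math

process_sd = 1.0     # hypothetical true process variation
analytical_sd = 1.5  # hypothetical imprecise assay

# Variances add; the observed std. dev. is the root of the sum
observed_sd = math.sqrt(process_sd**2 + analytical_sd**2)
print(f"Observed std. dev.: {observed_sd:.2f}")
```

Here the noisy assay nearly doubles the apparent spread, which is why control limits built from such data can make genuine process shifts hard to detect.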
