Per the process-validation definitions in ISO 13485 and FDA 21 CFR Part 820, I have a concern where help from group members would be really appreciated. I recently read a procedure stating that, because regular lot sizes are too big, sampling can be used to validate the process. This is the first time I have seen this approach, and I am confused, since my understanding is that when process validation is conducted, the regular lots must be built and 100% inspected according to the reliability/confidence sample size drawn from the pFMEA or available risk assessment: for instance, 298 units if 99% reliability at 95% confidence is required. It does not matter if the lots are a little bigger; all units must be inspected, and the same applies for variables data. Is it acceptable to execute a validation by determining a sample size because of a big lot size, e.g., 2,000 units? What about the cost/time? Thanks, Alvaro. Source: https://www.linkedin.com/groups/2070960/2070960-6206319507657494531
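For reference, the 99% reliability / 95% confidence figure in the question comes from the zero-failure "success-run" relationship, n = ln(1 - C) / ln(R), rounded up. A minimal sketch (the formula gives 299 when rounded up; some tables quote 298 or 299 depending on rounding convention):

```python
import math

def success_run_n(reliability: float, confidence: float) -> int:
    # Smallest zero-failure sample size n such that
    # 1 - reliability**n >= confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_n(0.99, 0.95))  # 299
print(success_run_n(0.95, 0.95))  # 59
```

If all n units pass, you can claim the stated reliability at the stated confidence; a single failure invalidates the zero-failure plan.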
Rob Packard
ISO 13485:2016 now requires statistical rationales for sample sizes when you are performing verification and validation. Some standards have specific sample sizes specified--such as sterilization (i.e., 10 samples from 3 lots). Other processes should be sampled 100% until the process is validated. With a lot size of 2,000 a typical approach would be to establish a minimum statistical sample size. If that size was 200 (the actual value would be less), then every 10th part should be sampled. It is also ok to design a validation that exceeds the minimum statistical sampling size. With regard to the application of risk, a pFMEA is typically based upon the results of the validation--not the validation based upon the pFMEA. However, the severity of the potential failure could change the confidence level that is needed for the specific metric.
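The "every 10th part" idea above is systematic sampling. A sketch, using the hypothetical lot and sample sizes from the reply (not a recommendation):

```python
def systematic_sample_indices(lot_size: int, sample_size: int) -> list:
    # Sampling interval: one unit every k parts across the lot.
    k = lot_size // sample_size
    return list(range(0, lot_size, k))[:sample_size]

indices = systematic_sample_indices(2000, 200)  # every 10th part
```

Spreading the sample across the whole lot this way also captures any drift over the production run, which a single grab sample from one point in the lot would miss.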
Jonathan Wacks
The rationale depends on the product, as well as the parameter. For example, tableting machines can spit out product at 1,000+ pills per minute. Damaged pills can potentially be checked 100% with high-speed cameras, but potency or dissolution would best be validated with an X-bar/R chart.
Alvaro Viquez
Thanks, Jonathan Wacks, but as Robert Packard commented above, potency in your example would be validated by sampling, right?
Alvaro Viquez
Thanks, Jonathan Wacks. I understand that technology helps with inspections such as damaged pills, but I guess that evaluating potency via SPC requires sampling, per the valuable comment from Robert Packard. My concern lies in the fact that I have always worked for medical companies where the average lot size was 300 units, so it was feasible to use three lots and inspect 100%. I really appreciate your comments, Jonathan Wacks and Robert Packard. By the way, is there any book, article, or site where I can learn a little more about the statistics behind these sampling methods and estimation for process validation? Thanks in advance.
Alvaro Viquez
Thanks for your comment, Jonathan Wacks; however, the validation for potency would be conducted by sampling, as per Robert Packard's comment above. Am I right?
Jonathan Wacks
It depends. One approach for critical features in large production lots would be to use an X-bar/R methodology, where you would take a minimum of 120 samples, n = 5 per sampling event, and check the parameter. An analysis of the data and the final calculated Cp or Cpk would demonstrate whether the approach is valid. This is a legitimate statistical rationale as well.
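A minimal sketch of the X-bar/R capability calculation described above, with hypothetical data and spec limits (the within-subgroup sigma is estimated as R-bar / d2, using the standard control-chart constant d2 = 2.326 for subgroup size 5):

```python
import statistics

def cpk_from_subgroups(subgroups, lsl, usl):
    # Within-subgroup capability from X-bar/R data:
    # sigma is estimated as R-bar / d2 (d2 = 2.326 for n = 5).
    d2 = 2.326
    xbar = statistics.mean(x for g in subgroups for x in g)
    rbar = statistics.mean(max(g) - min(g) for g in subgroups)
    sigma = rbar / d2
    return min(usl - xbar, xbar - lsl) / (3 * sigma)

# 24 sampling events of n = 5 (illustrative values, spec limits 5..15)
data = [[9.0, 10.0, 10.0, 10.0, 11.0]] * 24
cpk = cpk_from_subgroups(data, lsl=5.0, usl=15.0)
```

In practice you would first confirm the chart shows statistical control before reading anything into the Cpk value; capability indices from an out-of-control process are meaningless.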
G M Butcher
@Alvaro Viquez: I am confused about the question and the replies. The question mentions 21 CFR 820, but the replies use examples from pharma. In what industry is the question based?
G M Butcher
Also, is the data attribute or variables, or both? And what is (are) the statistical distribution(s) of the characteristic/parameter?
Mark Proulx, CQA, cSSBB
Alvaro Viquez: Something I didn't see in all this was whether your testing is destructive or nondestructive. If nondestructive, you should already have performed 100% inspection over the entire population in order to gain an understanding of your failure rate. Combined with the criticality of the component and the severity, you should be able to decide on a proper sampling plan that ensures your confidence of actually catching a "bad" part. I like to use LTPD (Lot Tolerance Percent Defective) because nothing is 100%. Establish what you can live with as a "normal" failure rate, put this in your pFMEA (as Robert Packard suggested), and voila: you have the basis for residual risk. All you would need to do then (in accordance with ISO 14971) is generate an RBA for the risk and an action line for when corrective action is needed, should you fail above that line.
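One way to read the LTPD idea in code: for an accept-on-zero-defects plan, choose the smallest n such that a lot sitting exactly at the LTPD defect rate would be accepted with probability no greater than the consumer's risk (beta, conventionally 10%). The defect rate below is illustrative, not a recommendation:

```python
import math

def ltpd_c0_sample_size(ltpd: float, beta: float = 0.10) -> int:
    # Smallest n with zero allowed defects such that
    # (1 - ltpd)**n <= beta (consumer's risk).
    return math.ceil(math.log(beta) / math.log(1.0 - ltpd))

n = ltpd_c0_sample_size(0.05)  # 5% LTPD at 10% consumer's risk -> 45
```

Tightening the LTPD (driven by severity in the pFMEA) grows n quickly, which is exactly the cost/time trade-off raised in the original question.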
Alvaro Viquez
Hello Jonathan Wacks, I just wonder: what if, after collecting the samples, it is realized that the process is out of control? By the way, I would really appreciate it if you could provide the source of the sample sizes above. Thanks in advance.
Mark Proulx, CQA, cSSBB
Alvaro Viquez: Ahhhhh... now you get to the heart of lean engineering and Six Sigma. You'll need to determine what factors in your process are causing you to be out of control. Things like the Taguchi method for determining your essential variables, and enumerating your processes so you can generate Cp/Cpk, are the tools your design engineering should be using to bring processes under control and stabilize them. You should also have determined, in testing, what a controllable lot size is. All these things should have been designed into your process, and by using the proper Six Sigma tools you could have a much more robust process with a lot more control.
Jonathan Wacks
A non-capable process is an inherently invalid one. For example, if you're making sterility heat seals (where the pFMEA shows a high risk of sepsis), you would want a Cpk of 2.0; anything lower cannot ensure seal stability. You might want to contact AIAG for their SPC manuals; I believe that n = 5 over 24 sampling periods was valid for large manufacturing rates when controlling a single variables datum.
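To put numbers on why a Cpk of 2.0 gets demanded for a seal like this: for a stable, normally distributed process, the defect rate on the worst side is the normal tail beyond 3 * Cpk standard deviations. A rough sketch (one-sided tail only, assuming normality and a stable mean):

```python
import math

def worst_side_defect_ppm(cpk: float) -> float:
    # One-sided normal tail probability beyond z = 3 * Cpk,
    # expressed in parts per million.
    z = 3.0 * cpk
    tail = 0.5 * math.erfc(z / math.sqrt(2.0))
    return tail * 1e6

# Cpk 1.0 -> roughly 1350 ppm on the near side; Cpk 2.0 -> ~0.001 ppm
```

The jump from Cpk 1.0 to 2.0 is a reduction of about six orders of magnitude in the expected defect rate, which is why a high-severity failure mode like loss of sterility drives the higher requirement.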
Alvaro Viquez
@G M Butcher: The data are both variables and attributes, and I work in the medical device industry. Mark Proulx, BSc, CQA, cSSBB: the testing is nondestructive. I posted this question because my intention is to clarify whether sampling is a valid approach to conduct process validation for medical devices, and if so, what the sources are, i.e., standards and statistical rationale. Best regards.
Jonathan Wacks
If the process is out of control, you cannot claim that the process is validated. For example, if pull/burst testing for sterility seals has a minimum burst value and the Cpk is 1.0, the process is not capable of consistently creating a valid seal (you would ideally like a Cpk of 2.0, as noted above).
Hi Alvaro,
I worked for more than 15 years in medical devices, and we used the following standards for process validation, since 100% inspection with lot sizes sometimes around 100,000 is not possible:
- ISO 2859-1, Sampling procedures for inspection by attributes -- Part 1: Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection
- ISO 3951-4, Sampling procedures for inspection by variables -- Part 4: Procedures for assessment of declared quality levels
Hope this will help you. Best regards
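For the AQL-indexed plans in ISO 2859-1, the behavior of any single sampling plan (sample size n, acceptance number c) can be checked through its operating-characteristic curve. A sketch under a binomial model, with illustrative plan parameters (not values taken from the standard's tables):

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    # Probability of accepting the lot: c or fewer defectives found
    # in a sample of n, when the true lot defect rate is p.
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(c + 1))

# Illustrative plan n = 125, c = 3: acceptance falls as quality worsens
good = prob_accept(125, 3, 0.01)
bad = prob_accept(125, 3, 0.05)
```

Plotting prob_accept against p gives the OC curve, which shows both the producer's risk (rejecting good lots) and the consumer's risk (accepting bad ones) for the chosen plan.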
G M Butcher
@Alvaro Viquez - thanks for the follow-up. So, yes, sampling is allowed for process validation. There are some criteria to determine usability: 1. The test under consideration must be statistical (a sampling plan with sample size and acceptance criteria based on risk to the patient). 2. If a standard exists for that test, use it. 3. The validation acceptance criteria would be based on confidence intervals, tolerance intervals, and process capability. 4. Recognize the difference between testing a mean and testing an individual. The GHTF SG3 Process Validation Guidance gives many tools for accomplishing what you want to do.
Caveat: use validation when routine tests have insufficient sensitivity, destructive testing would be required, tests do not reveal all variations in safety and efficacy, or process capability is unknown or not capable. The organization should determine the cost of each path.
Chuck H. Mograbi
You cannot simply declare that your process is too risky or unstable and then double your sample size to compensate for that process risk.
As Jonathan indicated above, you cannot validate a process that is out of control; it will stay out of compliance no matter how many samples you use to validate it. You need to go back to the drawing board and follow Mark Proulx's note: determine the factors that caused your process to be out of control, using Taguchi methods in combination with Six Sigma methodology. My other concern is your company's sampling plan. Make sure that everything you do is documented and based on a company procedure, in accordance with 21 CFR 820, with a rationale connected to these procedures explaining the reasons for your action plans.
Elaine Duncan
Going back to first principles, I would map out the process in a flow chart and be certain that this "process validation" is actually evaluating a contained process. A process validation plan may require multiple "process validation" protocols. In other words, maybe your bite is too big. And yes, the pFMEA must be the guide, as it should be targeting the process risk to be mitigated. If this takes multiple bites out of the apple, it will prove smarter in the long run.