The FDA cited Houston, Texas devicemaker Talon for a multitude of transgressions at its facility, including problems with validations and complaint investigations.
The FDA investigator logged a whopping 13 observations during the March 11-28 inspection—including four repeat violations from an inspection in 2011.
The agency took issue with the facility’s design change procedures, which did not include specific instructions for steps such as how the change would be identified, documented, verified or reviewed for implementation. The company implemented multiple design changes that did not undergo verification.
The investigator found that customer complaint investigation records lacked required information. For example, some corrective action records included only email notification addresses.
The firm’s servicing procedure was also inadequate, as it did not include instructions for ensuring that servicing adhered to specified requirements.
Read the full Talon Form 483 here: www.fdanews.com/06-06-19-talonansstechco483.pdf. — James Miessler
Sources of Error for Verification and Validation
All verification and validation testing must be concerned with variability, also known as error.
There are, essentially, two sources of error. The first source, the most common and the one devicemakers focus on most, is the variability of the device or component itself, according to Steven Walfish, president of Statistical Outsourcing Services. This is the process standard deviation and represents the part-to-part variability in the process.
The second source is measurement variability, which refers to variability in the measurement of a device characteristic or output due to the instrument used. The question asked here is: “How much variability will I get when I take the exact same unit and measure it repeatedly with the same instrument?”
This source of error becomes particularly important when a company wants to look at variability within a part versus variability part-to-part. “If my measurement system variability is very high, then I’m going to need to test the same unit more times than necessarily testing multiple units,” Walfish said. “It becomes very important when we start to talk about how we’re going to partition out our sample size.”
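Walfish’s partitioning idea can be sketched numerically. The following is an illustrative sketch, not from the report: it simulates a hypothetical gauge study in which each part is measured repeatedly on the same instrument, then splits the observed variability into a measurement component and a part-to-part component (all parameter values are invented for demonstration).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical study: 10 parts, each measured 5 times on the same instrument.
n_parts, n_meas = 10, 5
part_sd, gauge_sd = 2.0, 0.5   # assumed true part-to-part and measurement SDs
true_values = rng.normal(100.0, part_sd, size=n_parts)
readings = true_values[:, None] + rng.normal(0.0, gauge_sd, size=(n_parts, n_meas))

# Pooled within-part variance estimates the measurement (repeatability) variance.
meas_var = readings.var(axis=1, ddof=1).mean()

# Variance of the part averages contains part variance plus meas_var / n_meas,
# so subtracting recovers the part-to-part component.
part_var = readings.mean(axis=1).var(ddof=1) - meas_var / n_meas

print(f"estimated measurement SD:  {meas_var ** 0.5:.2f}")
print(f"estimated part-to-part SD: {max(part_var, 0) ** 0.5:.2f}")
```

If the estimated measurement SD is large relative to the part-to-part SD, more of the sampling budget should go to repeated measurements of the same unit, which is exactly the partitioning decision Walfish describes.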
Thus, companies must perform a measurement system analysis (MSA). Walfish suggested that devicemakers do so before undertaking any data collection or decision-making, and repeat the MSA periodically as maintenance.
“Always, always, always get this measurement system under control,” Walfish said. “Understand the variability prior to doing any data collection and, more importantly, before you do any decision-making about the design.”
One key activity that warrants an MSA is making decisions about the product design.
Walfish placed extra emphasis on this point, saying, “If we’re going to make a decision about the product design, we want to make sure that our measurement system is adequate, and if we’ve already done a measurement system analysis, that it’s still valid, before we go about making any decisions about product design.”
Finally, Walfish cautioned against focusing only on Type I errors — the so-called producer risk — when developing sampling plans.
While company risk is important, devicemakers also need to consider the FDA’s priority, which is risk to the patient: Type II errors, also called consumer risk. Thus, the agency would like to see sampling plans that also control for this type of risk.
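The two risks can be made concrete with a simple attribute sampling plan. This is an illustrative sketch, not a plan from the report: the sample size, acceptance number, and quality levels below are assumed for demonstration only.

```python
from math import comb

def accept_prob(p, n, c):
    """Probability that a lot with true defect rate p is accepted by a
    plan that samples n units and accepts on c or fewer defects."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: sample n=50 units, accept the lot if at most c=1 is defective.
n, c = 50, 1
aql, ltpd = 0.01, 0.10   # assumed "good" and "bad" lot defect rates

producer_risk = 1 - accept_prob(aql, n, c)   # Type I: rejecting a good lot
consumer_risk = accept_prob(ltpd, n, c)      # Type II: accepting a bad lot

print(f"producer risk (alpha): {producer_risk:.3f}")
print(f"consumer risk (beta):  {consumer_risk:.3f}")
```

Increasing the sample size while holding the acceptance number fixed lowers the consumer risk but raises the producer risk, which is why a plan must be evaluated against both error types rather than producer risk alone.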
Excerpted from the FDAnews management report: Choosing the Best Device Sample Size for Verification and Validation.