FDAnews
www.fdanews.com/articles/77878-using-risk-analysis-to-simplify-computer-validation

Using Risk Analysis to Simplify Computer Validation

April 11, 2006

Pharma companies should use risk analysis procedures to decide which aspects of their computer systems need validation testing, rather than wasting time and money attempting to test everything, G. Raymond Miller, president of Miller Regulatory Consulting, said during a recent FDAnews audio conference.

Miller noted that the FDA has relaxed some of its regulatory oversight in the past few years, which "allowed us to focus on the good science rather than get into overkill with regulation," but he said some pharma companies are still mired in an attempt to be exhaustive in their testing. "Bad habits have driven validation way overboard," Miller said.

He listed five bad habits to avoid:

- Overly rigid interpretation of standard operating procedures (SOPs);
- Paranoia, or a "better safe than sorry" philosophy;
- Turning off-the-cuff remarks by an FDA investigator into a new SOP;
- An ultra-thorough orientation. "There are some questions you really don't want to ask," said Miller; the purpose is to avoid writing a protocol no system would pass; and
- An accumulation of cultural excesses. Overly meticulous notation, such as insisting that every page of supporting documents be signed, falls under this heading. "From a risk perspective, we spent one-third to one-half of our time messing around with annotation," yet the FDA has never issued a single warning letter based on scripting detail or a failure to annotate documents fully, Miller said.

It's more useful to consider under what circumstances the FDA actually has issued warning letters for inadequate testing, such as:

- The validation did not include testing for volume, stress, performance, compatibility and boundaries;
- The company did not test under worst-case (full-capacity) conditions or show that the functionality worked;
- Only a small range of values was tested;
- There was no verification that the test system was equivalent to a live system; and
- The company failed to check input to and output from its computer (a sample mix-up).
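As a concrete illustration of the boundary and range failures those letters describe, here is a minimal test sketch in Python. The calculate_dose function, its 1-200 kg validated range, and the pytest cases are invented for illustration, not drawn from any cited letter.

```python
# Hedged sketch: boundary-value tests for a hypothetical function.
# 'calculate_dose' and its validated range are illustrative assumptions;
# substitute the real requirement's limits.
import pytest

def calculate_dose(weight_kg: float) -> float:
    """Hypothetical system under test: dose in mg, valid for 1-200 kg."""
    if not 1.0 <= weight_kg <= 200.0:
        raise ValueError("weight out of validated range")
    return round(weight_kg * 2.5, 2)

# Test at and just inside each boundary, not only mid-range values --
# the "small range of values" failure the warning letters cite.
@pytest.mark.parametrize("weight", [1.0, 1.01, 100.0, 199.99, 200.0])
def test_dose_within_boundaries(weight):
    assert calculate_dose(weight) == round(weight * 2.5, 2)

# And confirm out-of-range input is rejected rather than silently computed.
@pytest.mark.parametrize("weight", [0.0, 0.99, 200.01, -5.0])
def test_dose_rejects_out_of_range(weight):
    with pytest.raises(ValueError):
        calculate_dose(weight)
```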

A systematic approach to computer validation using risk analysis is preferable. This does not refer to testing by the developer during the development process, nor at the modular or code stage, Miller said. Instead, it means operational qualification and performance qualification (OQ/PQ), or user testing, at the end-user site -- after the initial hardware and/or software systems are installed. This is what the FDA called for in its guidance, "General Principles of Software Validation," issued Jan. 11, 2002.

One critical question is how much testing is enough. No system should be assumed error-free. "When I first got into the validation business, I was not expecting to find errors. These were commercial systems, and I thought they would be pretty good. But I've found there have been errors in every system we've ever looked at," said Miller.

It is best to start with a risk analysis for each requirement, Miller said. Some of the questions to ask: What might go wrong? What is the most likely failure mode for each requirement? What would the business impact of the potential failure be -- such as the potential for harm or damage if it were to occur? And how likely is it that the failure would be caught if it did occur?
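One way to make those questions actionable is to record an answer to each of them for every requirement. The sketch below is a hypothetical encoding in Python; the 1-to-3 ordinal scales and field names are assumptions, not something Miller prescribed.

```python
# Hedged sketch: recording the per-requirement risk questions.
# The 1-3 ordinal scales and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RequirementRisk:
    requirement_id: str
    failure_mode: str    # what might go wrong, and how
    impact: int          # business impact if it occurs: 1 low .. 3 high
    likelihood: int      # probability of the failure: 1 low .. 3 high
    detectability: int   # 3 = hard to detect, 1 = highly visible

# Example entries for two invented requirements:
risks = [
    RequirementRisk("REQ-001", "wrong unit conversion in report", 3, 2, 3),
    RequirementRisk("REQ-002", "tooltip text truncated on screen", 1, 1, 1),
]
```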

"Some are appropriate [to test for], while others are so farfetched they're not worth testing," Miller said. High impact, high likelihood and hard-to-detect failures should get thorough testing, while low-impact, unlikely and highly visible failures may not need testing at all. The interaction of these various factors can be complex, however, and Miller suggests creating a "traceability matrix [that] shows correspondence between the requirements and the test suite."

For those companies that insist the SOP calls for testing -- even of failures that are very unlikely to occur and would be both low impact and very visible if they did happen -- Miller's response is, "Great, test it -- it's only time and money."

Still, if the decision is made not to test, it's good to be explicit about the rationale to "show at least you considered it," he said. A quantitative risk analysis matrix resulting in a sum (a score) for each potential test item is one way to document this; below a certain score, it is not necessary to test.
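A minimal sketch of such a scored matrix follows. The additive impact-plus-likelihood-plus-detectability formula and the cutoff of 5 are illustrative assumptions; as noted below, summing is Miller's extension rather than an explicit GAMP concept.

```python
# Hedged sketch: a quantitative risk score per potential test item.
# Summing impact + likelihood + detectability and testing only above a
# cutoff is one plausible reading; the 1-3 scales and the threshold of 5
# are illustrative assumptions, not a GAMP or Miller prescription.
THRESHOLD = 5  # items scoring at or below this are documented, not tested

items = {
    # name: (impact 1-3, likelihood 1-3, detectability 1-3 = hard to detect)
    "wrong unit conversion in report": (3, 2, 3),
    "tooltip text truncated on screen": (1, 1, 1),
}

for name, (impact, likelihood, detectability) in items.items():
    score = impact + likelihood + detectability
    decision = "test" if score > THRESHOLD else "document rationale, skip"
    print(f"{name}: score={score} -> {decision}")
```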

The matrices Miller has developed are based on Good Automated Manufacturing Practice (GAMP) guidelines, although he acknowledged that summing is not explicitly a GAMP concept. Still, he said, "it's a justifiable way of defining what things are included in the testing suite. The whole idea behind risk analysis is to say we have examined the matter and will spend a lot of time on this one and less time on that."

To order a copy of the audioconference, visit http://www.fdanews.com/wbi/cds/2242-1.html. -- Martin Gidron (mgidron@fdanews.com)