BY: KARIN LILTORP

Have you ever been involved in a war between scientific specialists and quality assurance (QA)? Specialists say “this requirement does not make sense!” And QA says “the guideline states….” If they do not agree, QA usually has the last word, as they have to give the final approval.

Pharmaceutical guidelines from the Food and Drug Administration (FDA), the International Conference on Harmonisation (ICH), etc. can give you all the answers regarding what is expected during development of a drug product. A very short summary of all these guidelines is: “You should always be in control, be able to document it, and use your common sense to evaluate risks so that patients can feel safe.”

So, if the guidelines are so clear, how come there are often war-like situations between the specialists and QA? I guess it comes back to interpretation! We read the guidelines differently and focus on different paragraphs.

FDA’s homepage makes it clear that “…you can comment on any guidance at any time.” This implies that FDA acknowledges that the guidelines are not laws beyond discussion: If you find the guidelines insufficient, you are encouraged to send your suggestion for improvement to FDA. Thus, the guidelines are living documents where expert knowledge is very welcome: The goal is to ensure patient safety in the most intelligent way.

In many places the guidelines are open for interpretation. For instance, it is stated that you “should” instead of “must,” which indicates that there might be other solutions.

In the guideline describing risk management (ICH Q9), one very important phrase is found: “The protection of the patient by managing the risk to quality should be considered of prime importance…based on scientific knowledge….” This indicates that evaluation of risks should be performed by specialists. This risk evaluation should subsequently be used to define the need for control. If the specialists consider a certain risk insignificant, the workload required to examine this specific risk can be reduced significantly. Thus, the specialists define the risks.

So back to the war between specialists and QA: Of course the problem is two-sided. Specialists know exactly what they are doing and might, for instance, consider some of the rules for documentation as unnecessary control or some validation requirements completely redundant—especially if they consider it low risk.

QA and upper management might not be willing to “interpret” guidelines or to fully benefit from the specialists’ risk evaluation. Better safe than sorry: they might therefore prefer to overdo things instead of relying on the scientific rationales.

Bigger pharmaceutical companies often choose to perform a battery of experiments and implement complex procedures that are actually redundant, rather than risk an observation during an inspection. Spending $1 million might be considered peanuts in comparison to the risk of a negative inspection.

When the rationale is clear, there is no need to perform redundant experiments. So why are they done anyway? Is it because companies are afraid that auditors are not clever enough to understand their risk evaluation? Is it better to over-validate and over-document than to rely on the auditors having the insight to understand the scientific approach?

Who does this approach serve? Definitely not the patients! The time and money spent on redundant work could have been used, for instance, on development of new drugs. Unfortunately, the authorities have not started issuing warning letters based on indirect effects (e.g., “wasted time and material which, in the end, was harmful for the patient”). Further, excessive documentation might lead to a situation where the essential risks are overlooked, buried in the redundant work.

Another indirect effect, the loss of motivation among employees who spend their time on waste instead of actually creating results, should not be forgotten. If they lose motivation, it will negatively impact quality.

To my knowledge, warning letters have never been issued without a justified reason: They are not given for using an approach that is scientifically well-founded. If a company has made a sound risk assessment and is able to show that it has patient safety as its first priority (which implies control and documentation), it will never receive a warning letter. Of course, it might get some observations if the auditors do not find the arguments convincing. However, the auditor’s role is not only to find errors, but to assure patient safety.

What do you think? Should the companies be more proactive in assuring reduction of redundant work? And do you agree that redundancy impacts quality negatively?

KARIN LILTORP, Ph.D., is currently working as a senior CMC specialist at CMC Solutions IVS.