Improving Data Reliability Using a Non-Compliance Detection Method versus Using Pharmacokinetic Criteria
- Cite this article as: Kshirsagar, S.A., Blaschke, T.F., Sheiner, L.B., et al. J. Pharmacokinet. Pharmacodyn. 34:35 (2007). doi:10.1007/s10928-006-9032-2
Data from clinical trials present numerous problems for the data analyst, including non-compliance with the prescribed dosing regimen, inaccurate recollection of dosing history by patients, and mistakes in recording data. Several methods have been proposed to address these issues. One such technique, by Lu et al. (Selecting reliable pharmacokinetic data for explanatory analyses of clinical trials in the presence of possible noncompliance. J. Pharmacokinet. Pharmacodyn. 28:343–362 (2001)), identifies occasions in pharmacokinetic (PK) data where the preceding dosing history is likely to be unreliable. We used this method, implemented in the software program NONMEM (beta) VI, to clean a dataset containing indinavir (IDV) plasma concentrations from HIV-1-infected patients. The data were also cleaned by inspection in Microsoft Excel using clinical PK criteria. A one-compartment model with first-order absorption and elimination was fit to both sets of cleaned data. IDV population PK parameters obtained from these analyses were similar to those reported previously. It is established that IDV nephrotoxicity is related to high IDV exposure. However, no relationships were found between any PK parameters and nephrotoxicity in the “compliance cleaned” dataset. In the “PK cleaned” dataset, the oral clearance and apparent volume were lower by 9.1% and 6.6%, respectively, in patients with any type of nephrotoxicity, and the maximum IDV concentration (Cmax) was 12.1% higher. In patients suffering from nephrolithiasis in particular, Cmax was 15.5% higher. Accordingly, the use of the non-compliance detection method did not improve the reliability of our dataset over the usual method of applying clinical criteria. In fact, analyses on the compliance-cleaned dataset missed some exposure–toxicity relationships.
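For readers unfamiliar with the structural model used here, the concentration-time course of a one-compartment model with first-order absorption and elimination after a single oral dose follows the Bateman equation. The sketch below illustrates this parameterized by apparent oral clearance (CL/F) and apparent volume (V/F); the parameter values are purely illustrative assumptions, not the IDV estimates from this study, and `conc_one_compartment` is a hypothetical helper, not part of NONMEM.

```python
import math

def conc_one_compartment(t, dose, cl_f, v_f, ka):
    """Plasma concentration at time t after a single oral dose for a
    one-compartment model with first-order absorption and elimination.

    cl_f -- apparent oral clearance CL/F (L/h)
    v_f  -- apparent volume of distribution V/F (L)
    ka   -- first-order absorption rate constant (1/h)
    """
    ke = cl_f / v_f  # first-order elimination rate constant (1/h)
    # Bateman equation; assumes ka != ke (flip-flop case not handled)
    return (dose * ka / (v_f * (ka - ke))) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative (hypothetical) parameters for a single 800 mg dose:
dose, cl_f, v_f, ka = 800.0, 40.0, 60.0, 1.5

times = [0.5 * i for i in range(25)]  # 0 to 12 h in 0.5 h steps
concs = [conc_one_compartment(t, dose, cl_f, v_f, ka) for t in times]
cmax = max(concs)                     # empirical Cmax on this grid
tmax = times[concs.index(cmax)]       # time of the peak
```

Because Cmax depends on both CL/F and V/F, lower values of either (as seen in the "PK cleaned" nephrotoxicity subgroup) push the predicted peak concentration upward, which is consistent with the exposure-toxicity pattern described above.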
Thus, automated methods must be tested rigorously with ‘real life’ datasets, used with caution, and always in conjunction with clinical reasoning to avoid overlooking a signal in noisy data.