Software reliability models, such as the Basic (i.e., Exponential) Model and the Logarithmic Poisson Model, make the idealizing assumption that when a failure occurs during a program run, the corresponding fault in the program code is corrected without any loss of time. In practice, it takes time to rectify a fault. This is perhaps one reason why, when the cumulative number of faults is computed using such a model and plotted against time, the fit with observed failure data is often not very close. In this paper, we show how the average delay to rectify a fault can be incorporated as a parameter in the Basic Model, changing the defining differential equation into a differential-difference equation. When this is solved, the time delay for which the fit with observed data is closest can be found. The delay need not be constant during the course of testing, but can change slowly with time, giving an even closer fit. The pattern of variation of the delay with time during testing can be related both to the learning acquired by the testing team and to the difficulty level of the faults that remain to be discovered in the package. This is likely to prove useful to managers of software projects in the deployment of staff.
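As a rough illustration of the kind of modification described above, the sketch below integrates the Basic Model's defining equation with a rectification delay inserted. It assumes the standard form of the Basic Model, dμ/dt = λ₀(1 − μ(t)/ν₀), and replaces μ(t) on the right-hand side with μ(t − τ), where τ is the average fault-rectification delay; the parameter values (λ₀, ν₀, τ) are hypothetical, chosen only for demonstration, and the integration scheme (forward Euler with zero history) is ours, not the paper's.

```python
import math

def basic_model(lam0, nu0, tau, t_end, dt=0.01):
    """Integrate d mu/dt = lam0 * (1 - mu(t - tau) / nu0) by forward Euler.

    lam0 : initial failure intensity (hypothetical value below)
    nu0  : total expected number of faults (hypothetical)
    tau  : assumed average fault-rectification delay; tau = 0 recovers
           the ordinary (exponential) Basic Model
    History assumption: mu(t) = 0 for t <= 0.
    """
    n = int(t_end / dt)
    lag = int(round(tau / dt))
    mu = [0.0] * (n + 1)
    for i in range(n):
        # Use the delayed value mu(t - tau); before testing began, mu = 0.
        delayed = mu[i - lag] if i - lag >= 0 else 0.0
        mu[i + 1] = mu[i] + dt * lam0 * (1.0 - delayed / nu0)
    return mu

# Hypothetical parameters for demonstration only.
no_delay = basic_model(lam0=10.0, nu0=100.0, tau=0.0, t_end=50.0)
with_delay = basic_model(lam0=10.0, nu0=100.0, tau=2.0, t_end=50.0)

# Sanity check: with tau = 0 the closed form is
# mu(t) = nu0 * (1 - exp(-lam0 * t / nu0)).
closed = 100.0 * (1.0 - math.exp(-10.0 * 50.0 / 100.0))
```

In a curve-fitting setting, one would run such an integration over a grid of candidate values of τ (or a slowly varying τ(t)) and keep the value minimizing the discrepancy with the observed cumulative failure data; with the delay present, the fault-correction feedback acts on stale information, so the cumulative curve rises faster early on than the delay-free exponential.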