In this issue of Diabetologia, Monteiro-Soares et al. [1] have produced a timely systematic review of systems that have been described for classifying the relative risk of future ulceration of the foot in diabetic individuals. Disease of the foot represents a considerable threat to both the well-being and survival of the individual [2], as well as posing an enormous burden on healthcare services. And yet this review collates the overwhelming evidence from both cross-sectional and prospective studies to indicate that the level of risk can be predicted, with the likelihood of new ulceration being variously increased by the presence of peripheral neuropathy, peripheral arterial disease, deformity and, in particular, by a history of previous ulceration or amputation. Other factors may be important too, including visual impairment (with its attendant risk of foot trauma), renal dialysis (not referred to in this paper but known to be a powerful risk factor [3]) and HbA1c. The role of the last of these may relate to its association with the development, or progression, of other complications of diabetes [4].

The conclusion to be drawn from these observations is that all people with diabetes should undergo annual screening to have their foot risk status determined. This is seemingly self-evident, and yet the benefit associated with such practice depends on three major assumptions: (1) that the observations made in formal studies can be extrapolated to examinations undertaken by less well-trained clinicians, using simpler clinical techniques in routine clinical practice; (2) that the identification of increased risk is linked to a programme of augmented preventive education, surveillance and care; and (3) that this augmented programme actually reduces the incidence of new ulceration.

The first of these is the easiest to answer, because the likelihood is that the precise clinical methods employed are relatively unimportant. The formal studies cited by Monteiro-Soares et al. [1] used a variety of methods to determine both neuropathy (loss of protective sensation) and peripheral arterial disease, and yet the conclusion from all was the same. Thus, it probably matters little that some of these definitive studies used equipment that is not in general use in the community (such as a biothesiometer) or that there is no consensus on the choice or number of sites to be tested with, for example, a 10 g monofilament. The clinical method used to detect or exclude peripheral arterial disease is even more uncertain: the measurement of ankle–brachial pressure index is potentially insensitive and the equipment is not available for routine use across the community. It follows that the presence or absence of peripheral arterial disease is usually determined by palpation of foot pulses, which many non-specialist practitioners find difficult. However, the evidence from the Scottish community study [5] and the earlier north-west of England study [6] suggests that simple clinical methods can be usefully adopted for this purpose in large community-based populations.

The second assumption is that the detection of increased risk is linked to a programme of augmented education, surveillance and care, as specified in many national and international guidance documents. In many places, and especially in countries where there is ready access to skilled podiatric practice, such a programme will be adopted, at least to the extent of ensuring more frequent surveillance (and the repeated opportunistic education inherent in more frequent clinic visits), as well as the provision of protective footwear. But such services are not universally available, and even when they are, they may not (as in the case of the UK) be properly used for this specific purpose. Since 2004, GPs in the UK have been paid for undertaking regular foot examinations in diabetic individuals, but their obligation is only to undertake the examination, and not to record the result or to take any action based on the findings; this arrangement enhances the income of GPs but may achieve little else.

The third assumption is also of importance: that the adoption of a programme of increased surveillance, education and care is associated with a reduced incidence of new ulceration. Every clinician believes that this is so, and yet the data to substantiate this belief are desperately thin, especially for the role of education [7]. The absence of evidence almost certainly relates less to any lack of effect of the intervention than to the problem of trial design in this difficult area. If, for example, the aim of a trial is to show that enhanced input reduces the incidence of new ulcers in those with neuropathy alone, the difficulty is that the incidence of new clinical events in this group is only two to three times that of an unselected population with diabetes [6]; the sample size for any prospective study would therefore be very large indeed (an illustrative calculation is given below), and the provision of a uniform programme of care for the intervention arm of such a large population would be both difficult and expensive. If, on the other hand, the aim is to study the effect of augmented care in a population at much higher risk (such as those with a recently healed ulcer, in whom the incidence of new ulceration is approximately 40% at 12 months [8]), then it is possible that the incidence of new foot disease is dominated by established physical factors and that educational input and surveillance may have only limited impact. A further problem with any study of patients at the highest risk is that it will almost inevitably be conducted in specialist centres where the level of care may already be high, and it may therefore not be possible to demonstrate the benefit of any new programme. Nevertheless, a single study from the USA reported that the introduction of a specialist podiatry service into a dialysis unit resulted in a prompt and significant reduction in the incidence of amputation in the space of just 1 year [9]. This study needs to be repeated, and if the results are confirmed they could have enormous implications for the management of this particular high-risk group.
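The scale of the sample size problem can be sketched with the standard normal-approximation formula for comparing two proportions. The figures used here are assumptions chosen only to indicate the order of magnitude, not values taken from the cited studies: an annual ulcer incidence of 5% in a neuropathy-only cohort (roughly two to three times an assumed unselected incidence of about 2%), a 30% relative reduction with the intervention, 80% power and a two-sided 5% significance level.

```latex
% Illustrative only: p1 = 0.05 (control), p2 = 0.035 (a 30% relative
% reduction), alpha = 0.05 two-sided, power = 0.80 -- all figures assumed.
\[
n \;\approx\; \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}
      \left[\,p_{1}(1-p_{1}) + p_{2}(1-p_{2})\,\right]}
     {\left(p_{1}-p_{2}\right)^{2}}
  \;=\; \frac{(1.96 + 0.84)^{2}\left[\,0.05 \times 0.95 + 0.035 \times 0.965\,\right]}
             {(0.05 - 0.035)^{2}}
  \;\approx\; 2800 \text{ per arm}
\]
```

On these assumptions, a two-arm trial would require somewhere in the region of 5,500–6,000 participants, before any allowance for dropout.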

The review by Monteiro-Soares et al. [1] collates the available evidence to demonstrate the strong predictive power of foot risk stratification in diabetes. It is very likely that this power will be preserved even when the process is adopted across whole communities, with less well-trained staff using less refined techniques. However, this review should act as a reminder to all that there is little point in documenting risk unless it is linked to an effective intervention strategy. Even if it were eventually shown that, against the odds, an augmented programme had no effect on the incidence of new ulceration, it can be assumed that any such programme would include the simple instruction that all new disease should be referred promptly for expert assessment, and that patients would be told how to ensure that this happens. There are data to show that, while an education programme may not reduce the incidence of ulceration, it may lead to a decreased incidence of amputation [10], and it is known that early expert assessment is associated with improved clinical outcome [11, 12].