Noninvasive ventilation

In 2012, the journal published several articles dealing with the use of noninvasive ventilation (NIV) in patients with acute respiratory failure (ARF). Carrillo et al. [1] prospectively evaluated the characteristics and outcomes of 184 patients with severe ARF due to community-acquired pneumonia treated with NIV, and determined the factors predicting NIV failure and mortality. NIV was successful in 63 % of patients. Confirming previous findings, the authors found that patients with “de novo” ARF failed NIV more frequently than those with previous cardiac or other respiratory disease (46 vs. 26 %). Worsening radiologic infiltrate 24 h after admission, maximum Sepsis-Related Organ Failure Assessment (SOFA) score, and higher heart rate and lower PaO2/FiO2 after 1 h of NIV were among the factors predicting NIV failure. NIV failure, maximum SOFA score, and older age independently predicted hospital mortality. In patients with de novo ARF only, delayed intubation was consistently associated with decreased survival. In 13 patients (six with chronic obstructive pulmonary disease), Piquilloud et al. [2] compared short-term patient–ventilator interaction during NIV with pressure support (PS) and neurally adjusted ventilatory assist (NAVA). As compared with PS, NAVA improved patient–ventilator synchrony during NIV by reducing trigger delay and severe asynchrony and by abolishing ineffective efforts and both delayed and premature cycling. Vaschetto et al. [3] performed a pilot study to assess the feasibility of using NIV to facilitate discontinuation of invasive mechanical ventilation in patients with resolving hypoxaemic ARF. Twenty patients randomly received immediate NIV after early extubation or conventional weaning. At the end of the study, arterial blood gases, success of extubation, septic complications, ICU length of stay, and mortality were similar, but the number of days without invasive ventilation at day 28 was higher in the NIV group than in the control group. The authors concluded that, in a highly experienced centre, early extubation followed by NIV is feasible and might facilitate liberation from mechanical ventilation in selected patients with hypoxaemic ARF. These findings were discussed in the editorial by Laghi and Fernandez [4].

Another study evaluated the optimal type and tolerance of the interface chosen for long-term noninvasive positive pressure ventilation (NPPV) in children with neuromuscular disease or thoracic scoliosis (n = 35), obstructive sleep apnoea with (n = 32) or without (n = 21) maxillofacial deformity, or lung disease (n = 9) [5]. All 25 children 2 years of age or younger, as well as four older children, were fitted with custom-made nasal masks; all other children were fitted with an industrial nasal mask (50 %), a facial mask (16 %), or nasal prongs (2 %). All patients with obstructive sleep apnoea used interfaces with manufactured leaks, whereas all patients with neuromuscular disease or thoracic scoliosis used interfaces without manufactured leaks. Both types of interfaces were used in patients with lung disease. The interface had to be changed in 20 patients because of discomfort (n = 16), leaks (n = 4), facial growth (n = 3), skin injury (n = 2), or change of the ventilatory mode (n = 2); a second or third mask change was necessary in nine and four patients, respectively. These findings suggest that the choice of the interface for NPPV in children is determined by the patient’s age and the underlying disease.

In a bench study, Olivieri and co-workers [6] evaluated NIV devices. NIV is increasingly popular and can now be delivered with a wide range of new equipment, both ventilators and interfaces. To enable comparisons between devices, the authors sought to develop standard references for ventilatory demand and effort, for the mode of generation and extent of air leaks, and for the impact on the patient's respiratory mechanics; terminology also needs to be uniform. To this end, they used model tests aimed at standardising measurement criteria so that such comparisons become possible.

Predicting weaning outcome is clinically relevant. Ouanes-Besbes et al. [7] compared the performance of NT-proBNP levels, plasma protein concentration, haematocrit, and fluid balance for the preceding 24 h in predicting the outcome of the two steps of weaning: (1) the spontaneous breathing trial (SBT) and (2) extubation. A prospective observational study of 143 patients who were mechanically ventilated for more than 48 h (55 % COPD) and were ready to wean was carried out. Patients underwent an SBT and were extubated when they passed the trial. Diagnostic tests were performed immediately before the SBT. Of 143 patients, 80 (56 %) passed the SBT and were extubated. Of these, two were reintubated for laryngeal dyspnoea, 57 had no respiratory problem during the next 48 h, and 21 (26 %) developed post-extubation respiratory distress. Rescue NIV prevented reintubation in 15 (71 %) of these patients. Only NT-proBNP was an independent predictor of post-extubation respiratory distress (OR 1.2; 95 % CI 1.09–1.4; p = 0.003); the area under the ROC curve for NT-proBNP to predict post-extubation respiratory distress was 0.78 (95 % CI 0.67–0.89; p = 0.0001). It was concluded that NT-proBNP levels at the time of the SBT help predict post-extubation respiratory distress and could identify the subgroup of extubated patients requiring close observation and/or prophylactic NIV.

Work of breathing, monitoring and patient–ventilator interaction

The assessment of work of breathing can prove useful in mechanically ventilated patients. Vivier et al. [8] assessed the feasibility and accuracy of ultrasonography for evaluating diaphragmatic function and its contribution to respiratory workload in 12 patients receiving NIV after extubation. The authors reported a significant correlation between the diaphragmatic pressure–time product per breath and the thickening fraction of the diaphragm during tidal ventilation, defined as (diaphragm thickness in the zone of apposition at inspiration − thickness at expiration)/thickness at expiration, suggesting that this method may be useful to assess work of breathing in ICU patients. The coefficient of repeatability for interobserver reproducibility was, however, relatively high (15–18 %). Banner et al. [9] assessed the usefulness of work of breathing for predicting extubation outcome. In 97 patients (38 in a training set and 59 in a prospective validation set), work of breathing was determined noninvasively using an artificial neural network during the spontaneous breathing trial preceding extubation and compared with indices traditionally used to predict extubation outcome. Noninvasively determined work of breathing had the greatest overall predictive accuracy, sensitivity, and positive and negative predictive values of all the indices. In particular, a work of breathing no greater than 10 J/min was observed in 95 % of successfully extubated patients. The authors concluded that noninvasive work of breathing warrants consideration for use in a complementary manner with spontaneous breathing pattern data for predicting extubation outcome.
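In formula form, the thickening fraction described by Vivier et al. is

$$\mathrm{TF_{di}} = \frac{T_{di,\mathrm{insp}} - T_{di,\mathrm{exp}}}{T_{di,\mathrm{exp}}}$$

where $T_{di}$ is the diaphragm thickness measured ultrasonographically in the zone of apposition at end-inspiration and end-expiration, respectively.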

Three studies dealt with technological innovations for ventilatory monitoring and management. Blanch et al. [10] performed a pilot clinical study in a small number of patients to validate a mathematical algorithm for automatic, real-time detection of ineffective efforts during expiration (Better Care®). Better Care® analysis was compared with that of expert clinicians based on visual examination of airway pressure and flow signals and with an automated analysis based on the electrical activity of the diaphragm. The results showed that Better Care® was able to identify ineffective efforts with an accuracy similar to that of expert intensivists and the electrical activity of the diaphragm. These results were discussed in the accompanying editorial [11]. In 50 sedated, passively ventilated patients, Arnal et al. [12] assessed the short-term safety and efficacy of IntelliVent-ASV®, a development of adaptive support ventilation that automatically adjusts ventilation and oxygenation parameters. As compared with adaptive support ventilation, the use of IntelliVent-ASV® was associated with a decrease in minute ventilation, tidal volume, respiratory rate, plateau pressure, positive end-expiratory pressure (PEEP), and FiO2, whereas respiratory mechanics and oxygenation were similar and arterial carbon dioxide tension (PaCO2) slightly increased. The authors concluded that in passive patients and over a short period of time, IntelliVent-ASV® was safe and able to ventilate with less pressure, volume and FiO2 while producing similar results in terms of oxygenation.
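Detectors of this kind generally work by comparing the observed expiratory flow with the smooth passive decay expected from respiratory mechanics and flagging transient distortions as candidate ineffective efforts. The sketch below illustrates that general idea only; the function name, threshold and preprocessing are our assumptions, and this is not the published Better Care® implementation.

```python
import numpy as np

def flag_ineffective_effort(t, flow, rel_threshold=0.4):
    """Illustrative detector of an ineffective inspiratory effort during
    expiration, applied to a single breath.

    t    : sample times (s) covering the expiratory phase
    flow : expiratory flow samples (l/s, negative by convention)

    Fits an exponential decay to the expiratory flow (passive emptying)
    and flags the breath when the observed flow deviates from that fit
    by more than `rel_threshold` of peak expiratory flow. The threshold
    is illustrative, not a validated value.
    """
    f = np.abs(np.asarray(flow, dtype=float))   # work with magnitudes
    t = np.asarray(t, dtype=float) - t[0]

    # Fit ln(f) = ln(A) - t/tau, ignoring the near-zero tail where the
    # logarithm is unstable.
    fit_mask = f > 0.05 * f.max()
    slope, intercept = np.polyfit(t[fit_mask], np.log(f[fit_mask]), 1)
    expected = np.exp(intercept + slope * t)

    # A patient effort shows up as a transient dip of expiratory flow
    # towards zero, i.e. a large deviation from the passive decay.
    deviation = np.max(np.abs(f - expected)) / f.max()
    return deviation > rel_threshold
```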

Prigent et al. [13] evaluated the effects of a tracheostomy speaking valve (SV) on breathing–swallowing interaction in a group of eight tracheostomised neuromuscular patients who were able to breathe spontaneously and were studied with and without an SV. Breathing–swallowing interactions were investigated by chin electromyography, cervical piezoelectric sensor, and nasal and tracheal flow recording. Three water bolus sizes (5, 10 and 15 ml) were tested in random order. The authors’ conclusion was that in tracheostomised patients, protective expiration towards the upper airway after swallowing is restored by the use of an SV.

The state-of-the-art in electrical impedance tomography (EIT) for ventilation and perfusion imaging was reported in an interesting review [14]. EIT is a relatively new technology used to image regional impedance distributions in a cross-sectional area of the body. The authors focused on regional ventilation monitoring using EIT. Several recently presented indices for extracting information from EIT image streams are described. Selected experimental and clinical findings are discussed with respect to future routine applications in intensive care. The authors also provide a number of clinical scenarios in which EIT may be useful as a diagnostic or monitoring tool. Finally, past and ongoing research activities aimed at obtaining cardiac output and regional perfusion information from EIT image streams are summarised. The authors concluded that EIT will change the way we treat mechanically ventilated patients.

Patroniti et al. [15] evaluated the effect of a wide range of assistance levels during NAVA and pressure support ventilation (PSV) on respiratory pattern, breathing variability, and the incidence of tidal volumes (VT) above 8 and 10 ml/kg in ARF patients, reporting that increasing NAVA levels had no effect on respiratory rate and were associated with a small increase in VT and an increase in the variability of both VT and the electrical activity of the diaphragm (EAdi). An effective decrease in EAdi occurred at NAVA levels below 2–2.5 cmH2O/μV, while respiratory variability was preserved and the risk of VT above 8 or 10 ml/kg remained low.

The same innovative mode was also evaluated in paediatric patients by de la Oliva et al. [16] to determine whether NAVA improves asynchrony, ventilatory drive, breath-to-breath variability and COMFORT score when compared to PS. Four sequential 10-min periods of data were recorded after 20 min of ventilatory stabilisation (wash-out) at each of the following settings: baseline PS with the ventilator settings determined by the attending physician (1-PSb); PS after optimisation (2-PSopt); NAVA level set so that maximum inspiratory pressure (Pmax) equalled Pmax in PS (3-NAVA); and the same settings as in 2-PSopt (4-PSopt). The authors concluded that NAVA, as compared to optimised PS, results in improved synchrony, reduced ventilatory drive, increased breath-to-breath mechanical variability and improved patient comfort.

A study by Fine-Goulden and co-workers [17] analysed monitoring of the electrical activity of the diaphragm in children with neuromuscular and respiratory control disorders. The authors used NAVA with an oesophageal catheter to measure diaphragm activity and studied the lowest achievable level of respiratory support in six ventilator-dependent patients. Their major finding was that respiratory dynamic patterns were markedly abnormal despite the absence of clinical signs. The authors conclude that NAVA provides a minimally invasive diagnostic adjunct in ventilator-dependent children with neuromuscular and respiratory control disorders.

In a study of physiological dead space, Sinha and Soni [18] compared classic dead space measurements using a Douglas bag method with volumetric capnography, in which expiratory gas flow and CO2 are measured continuously. In critically ill patients they found good agreement between physiological dead space measured by the Douglas bag technique and by volumetric capnography. One can distinguish between the Bohr and Enghoff dead spaces, the latter being obtained by replacing alveolar CO2 with arterial CO2: the former yields a true dead space, whereas the latter is also influenced by other gas exchange abnormalities. In either case, volumetric capnography was considered useful for studying these critically ill patients.
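For reference, the two formulations mentioned above can be written as

$$\left(\frac{V_D}{V_T}\right)_{\mathrm{Bohr}} = \frac{P_{A}\mathrm{CO}_2 - P_{\bar{E}}\mathrm{CO}_2}{P_{A}\mathrm{CO}_2}, \qquad \left(\frac{V_D}{V_T}\right)_{\mathrm{Enghoff}} = \frac{P_{a}\mathrm{CO}_2 - P_{\bar{E}}\mathrm{CO}_2}{P_{a}\mathrm{CO}_2}$$

where $P_{A}\mathrm{CO}_2$, $P_{a}\mathrm{CO}_2$ and $P_{\bar{E}}\mathrm{CO}_2$ are the alveolar, arterial and mixed expired CO2 tensions, respectively; the Enghoff modification therefore also captures shunt and low ventilation–perfusion effects rather than dead space alone.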

PaO2 and O2Sat

Eastwood et al. [19] retrospectively assessed the relationship between arterial oxygen tension (PaO2) and hospital mortality in 152,680 patients undergoing mechanical ventilation in 150 ICUs in Australia and New Zealand. Mean PaO2 was 152 mmHg and mean inspired fraction of oxygen (FiO2) was 62 %; about 50 % of patients had hyperoxia (PaO2 > 120 mmHg). After adjusting for baseline characteristics including the Simplified Acute Physiology Score (SAPS) II, there was no relationship between PaO2 or hyperoxia and mortality. Subgroup analysis in patients with high or low FiO2 and according to diagnostic categories (elective surgery, sepsis, respiratory diagnosis and surgical procedures) was consistent with the findings in the overall cohort. Thus the impact of early hyperoxia on mortality remains uncertain.

In a study by Nesseler and co-workers [20], pulse oximetry with forehead and finger sensors was compared in patients in shock, a condition in which finger perfusion may be reduced. In a sample of 32 patients, the forehead sensor readings never failed, whereas a few finger sensor recordings did. As hypothesised, the forehead measurements were more accurate when compared with arterial saturation in these critically ill patients, and the authors recommend using the forehead sensor in preference to the finger sensor.

Sleep

Sleep disturbances are extremely common in critically ill patients and sedatives are often used to promote sleep. In 12 ICU patients ventilated on assisted modes, Kondili et al. [21] compared sleep quality with and without propofol infusion (targeted to a Ramsay sedation score of 3) during two consecutive nights. In all patients, sleep architecture was abnormal during both nights, with no difference in sleep efficiency, sleep fragmentation, and sleep stage distribution. In addition, propofol infusion suppressed the REM stage and further worsened the poor sleep quality in these patients. These effects could not be explained by changes in respiratory variables, in gas exchange or in patient–ventilator asynchrony, which were similar during the two study nights.

ARDS

Several articles dealing with acute respiratory distress syndrome (ARDS) were published in the journal in 2012. Ferguson et al. [22] provided an expanded discussion of the rationale underlying the development of the recently published Berlin definition of ARDS [23], and provided further supplementary material to facilitate its application. According to the new definition, ARDS is characterised by an acute onset (within 1 week of a known clinical insult or new/worsening respiratory symptoms), the presence of bilateral opacities on chest radiograph or computed tomography, and respiratory failure not fully explained by cardiac failure or fluid overload (objective evaluation is required to exclude hydrostatic oedema if no risk factor is present). ARDS is categorised as mild, moderate or severe according to the PaO2/FiO2 value measured with a PEEP of at least 5 cmH2O. In ARDS, excessive lung deformation, or strain, can occur during mechanical ventilation and may be linked to the biological lung response. González-López et al. [24] studied the relationship between lung strain and biomarkers of matrix remodelling and inflammation in mechanically ventilated patients with ARDS. Patients with high strain had greater concentrations of IL-6 and IL-8 in bronchoalveolar lavage fluid than patients with normal strain, and matrix remodelling markers, gas exchange, and IL-6 and IL-8 concentrations in bronchoalveolar lavage were correlated with strain values. The authors conclude that increased strain is associated with a proinflammatory lung response in these patients, even though respiratory mechanics were similar between the groups.
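As a compact illustration of the Berlin oxygenation tiers summarised above, the following sketch classifies severity from the PaO2/FiO2 ratio. The function name and interface are our own, and the remaining Berlin criteria (timing, imaging, origin of oedema) are assumed to be met already.

```python
def berlin_severity(pao2_mmhg: float, fio2: float, peep_cmh2o: float) -> str:
    """Oxygenation tier of the Berlin ARDS definition.

    Assumes the other Berlin criteria (onset within 1 week, bilateral
    opacities, oedema not fully explained by cardiac failure or fluid
    overload) are already satisfied. FiO2 is a fraction (0.21-1.0).
    """
    if peep_cmh2o < 5:
        raise ValueError("Berlin tiers require PEEP/CPAP >= 5 cmH2O")
    ratio = pao2_mmhg / fio2
    if ratio <= 100:
        return "severe"
    if ratio <= 200:
        return "moderate"
    if ratio <= 300:
        return "mild"
    return "oxygenation criterion not met"

# Example: PaO2 75 mmHg on FiO2 0.5 with PEEP 8 cmH2O -> "moderate"
print(berlin_severity(75, 0.5, 8))
```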

The receptor for advanced glycation end products (RAGE) is a pattern-recognition receptor and evolutionary member of the immunoglobulin superfamily that is involved in the host response to infection, injury and inflammation. It exists in two forms: a membrane-bound form and a soluble form (sRAGE). The soluble form, sRAGE, acts as a decoy receptor and competitively inhibits membrane RAGE activation. RAGE is constitutively and abundantly expressed in the lung under basal conditions, and this expression is enhanced during inflammatory states such as acute lung injury (ALI) and ARDS. A review by Guo et al. [25] summarised the characteristics of RAGE, RAGE isoforms, RAGE ligands, and signalling pathways in the pathogenesis of ALI and ARDS. Additionally, the review explores the potential of RAGE as an important therapeutic target in ALI/ARDS. The authors concluded that a complete understanding of the relationship between RAGE and other receptors will be beneficial in formulating potential therapeutic approaches, and that limiting RAGE ligation may modulate the intensity of an overwhelming pulmonary inflammatory response.

Two papers evaluated health-related quality of life together with physical and psychological disability in survivors of ARDS. Chiumello et al. [26] compared 1-year pulmonary function and quality of life in 26 survivors of ARDS previously ventilated in the supine or prone position. In the whole population, the 1-year mortality rate was high (60 %), whereas pulmonary function, gas exchange and lung computed tomography scan analysis were within normal values. Health-related quality of life was similar to that of healthy subjects, but these patients showed an impairment of daily activity specifically due to pulmonary disease. No difference was observed between groups in any of these variables, except for overinflated tissue, which was higher in patients treated in the prone position. Cox et al. [27] assessed coping ability (i.e. the adaptive response aimed at diminishing the physical, emotional and psychological burden linked to stressful life events and daily challenges) in ICU survivors (35 % with ALI) and their informal caregivers. A telephone-based coping skills training intervention was then developed and evaluated in survivors of ALI and their caregivers. ICU survivors and their caregivers used adaptive coping infrequently, a pattern that was strongly associated with psychological distress. The telephone-based coping skills training intervention was feasible, acceptable and associated with a decrease in scores of psychological distress, more evident among patients than informal caregivers. These improvements were associated with adaptive coping and high self-efficacy. The authors conclude that this novel telephone-based coping skills training intervention may reduce psychological distress among patients and caregivers.

Extracorporeal support in ARDS

Three articles dealt with the use of extracorporeal support in patients with respiratory failure. Grasso et al. [28] evaluated whether partitioning of respiratory mechanics to target values of end-inspiratory transpulmonary pressure up to 25 cmH2O might lead to a change of ventilatory strategy that improves oxygenation, thus avoiding inappropriate use of extracorporeal membrane oxygenation (ECMO). Among 14 patients with influenza A(H1N1)-associated ARDS referred for ECMO, seven showed lower values of end-inspiratory transpulmonary pressure (17 vs. 27 cmH2O): in these patients, raising PEEP to obtain an end-inspiratory transpulmonary pressure of 25 cmH2O improved oxygenation and allowed conventional ventilatory treatment to be continued without resort to ECMO. The authors concluded that, in some patients with influenza A(H1N1)-associated ARDS, abnormalities of chest wall mechanics may be present: in these patients, titrating PEEP to the end-inspiratory pressure of the respiratory system may overestimate the incidence of refractory hypoxaemia, leading to inappropriate use of ECMO. This article was accompanied by an editorial by Richard and Marini [29].
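The partitioning principle underlying this approach is that the pressure distending the lung is the airway pressure minus the pleural pressure, with oesophageal pressure as its clinical surrogate (a general relation only; the exact computation used in the study, e.g. direct versus elastance-ratio methods, may differ):

$$P_L = P_{aw} - P_{pl} \approx P_{aw} - P_{es}$$

so the end-inspiratory transpulmonary pressure is estimated from the airway plateau pressure and the oesophageal pressure at end-inspiration.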

In extracorporeal veno-venous support, correct positioning of the cannula is important but often difficult to determine. Körver and co-workers [30] used ultrasound dilution to measure recirculation and thereby optimise the position of a double-lumen catheter, the aim being to keep recirculation to a minimum. This was a helpful adjunct to visualisation of the catheter position, but the technique should also be tested by other clinicians before it can be considered safe.

A review article focused on the circuit components and management of adult patients receiving ECMO for respiratory failure [31]. Hollow-fibre oxygenators and Mendler-designed centrifugal pumps have replaced the old silicon oxygenators and roller pumps. Advances in cannula technology allow greater ease of patient positioning, in some cases facilitating extubation and ambulation on ECMO. Improvements in ECMO circuitry have led to a reduction in heparin and blood product requirements, with consequently fewer complications. Greater understanding of severe ARDS has allowed clinicians to successfully support adults on ECMO. Nowadays, this technique is safer, cheaper and simpler than in previous eras. Prospective studies of ECMO for adult respiratory failure are underway. Contemporary ECMO, used in awake, potentially ambulant patients to provide short-term support in acute, reversible respiratory failure and as a bridge to transplantation in irreversible respiratory failure, is now ready for widespread evaluation. This review is accompanied by an editorial comment by Alan H. Morris [32].

In 42 patients with hypercapnic ARF who had failed NIV, Kluge et al. [33] performed a retrospective, matched case–control study to assess feasibility, safety and effectiveness of pumpless extracorporeal carbon dioxide removal (ECCO2r) as compared with conventional mechanical ventilation. Nineteen out of 21 patients in the ECCO2r group did not require intubation. In these patients, PaCO2, pH and respiratory rate significantly improved within 24 h; bleeding (two major and seven minor) was the main complication. There was no difference in hospital length of stay or short-term and long-term mortality between groups, but the duration of invasive support and the tracheostomy rates were lower in the ECCO2r group. These results suggest that ECCO2r can be a feasible option to avoid intubation and invasive mechanical ventilation in selected patients with hypercapnic ARF not responding to NIV.

Gottlieb et al. [34] performed a retrospective analysis of 100 mechanically ventilated patients (60 also under extracorporeal support) accepted for high-urgency lung transplantation in a single German centre to assess outcome and identify factors associated with survival. Sixty patients were transplanted, 38 died while on the waiting list and two recovered. The 1-year survival rate was 36 % for all candidates: 57 % in transplanted patients and 5 % in non-transplanted candidates (p < 0.01). SAPS greater than 24, procalcitonin level greater than 0.5 μg/l and any escalation of bridging strategy were independent predictors of mortality. The authors concluded that (1) high-urgency lung transplantation improves 1-year survival in critically ill, ventilated candidates; and (2) prognostic factors can help identify suitable candidates.

Sedation during mechanical ventilation

Several studies have also been dedicated to new sedation strategies in mechanically ventilated patients. Oto et al. [35] evaluated sleep continuity in mechanically ventilated patients sedated with dexmedetomidine by recording polysomnography (PSG) for 24 h. Dexmedetomidine (0.2–0.7 μg/kg/h) was administered intravenously to maintain the Richmond Agitation Sedation Scale between −1 and −4 only during the night-time (9.00 p.m.–6.00 a.m.). The authors concluded that in mechanically ventilated patients, night-time infusion of dexmedetomidine preserved the day–night cycle of sleep but induced severely disturbed sleep architecture, without evidence of slow-wave sleep (SWS) or REM sleep.

Dexmedetomidine was also evaluated by Burbano et al. [36] in a retrospective study describing changes in haemodynamic variables, sedation, and pain score after discontinuation of prolonged dexmedetomidine infusions in a paediatric population of critically ill cardiac patients. The data suggest that tachycardia, transient hypertension, and agitation are frequently observed in paediatric cardiac intensive care unit patients after discontinuation of prolonged dexmedetomidine infusions.

Paediatric

Respiratory

The monitoring and study of lung ventilation in small, sick infants is often hampered by difficulties in obtaining reliable measurements. Advances in ventilators incorporating accurate flow measurement systems have in recent years started to enable changes in the way we ventilate small babies. Duman and colleagues [37] have, for example, recently shown the feasibility of deploying a ‘volume guarantee’ mode of ventilation in very small infants using a ventilation system incorporating accurate measurements of tidal volume. The development of EIT has likewise enabled researchers to better study the diseased lung in very small subjects. Bhatia et al. [38] have shown, in a piglet model, that EIT can reliably detect very small pneumothoraces before significant changes in clinical parameters occur. Miedema et al. [39] reported on the measurement of lung volume with EIT in preterm infants, in whom lung volume and regional time constants were assessed. The importance of these studies comes not only from their insights into lung recruitment but also from the potential usefulness of EIT in a clinical setting, i.e. the assessment of lung volume or detection of pneumothorax in very small infants [39].

Controversy persists about the relevance of angiotensin-converting enzyme (ACE) insertion/deletion (I/D) polymorphisms to disease severity and outcomes in ARDS. Whilst some adult studies have suggested that the ACE I/D polymorphism increases susceptibility or risk of mortality in patients with severe sepsis or with sepsis-induced ARDS [40], others [41] have failed to demonstrate such associations. Cruces et al. [42] compared 60 children with ARDS to 60 controls and found an increase in the frequency of the ACE I/D genotype in the ARDS group, although severe hypoxaemia was less frequent in patients with the D allele. Further explorative studies are awaited with interest.

Over the past year we have published a number of reports on aspects of ventilatory care in children. Whilst positive pressure ventilation is a relatively mature technology, innovation continues to improve patient care. Da Silva et al. [43] presented a report of a small randomised controlled trial of three doses of nebulised l-epinephrine in children with post-extubation stridor and found no evidence of a dose–response relationship with the doses administered. This is perhaps unsurprising, as, with any nebulised drug, the amount of drug actually delivered to the airway depends significantly on the child’s tidal and minute volumes and the duration of nebulisation, rather than the mass of drug in the nebuliser [43]. The paper by Ganu et al. [44] presents a series of infants requiring respiratory support for bronchiolitis, the most common cause of PICU admission in infancy. Over a 10-year period they documented an increase in the use of noninvasive ventilatory techniques associated with a decreased PICU length of stay. It is sobering to recall that the first description of noninvasive ventilation (CPAP) in bronchiolitis was published by Beasley and Jones in 1981 [45]. Turning from noninvasive and long-term ventilation to the ultimate respiratory support technique, ECMO: a report by Smalley et al. [46] presents a 23-year experience of the use of ECMO in a single Australian paediatric centre. Since 2005, 88 % of children supported with ECMO for refractory pneumonia have survived, suggesting that ECMO is a technically successful support technique in this group of children. Direct comparisons across centres are, however, fraught with difficulty, not least because of variability in case mix, selection and management. Finally, the report by Paulides et al. [47] reminds us that not all ventilation takes place in hospital, and that not all survivors of intensive care are independent of mechanical ventilators. The authors describe a series of 197 children started on home mechanical ventilation between 1979 and 2009. In the early stages, ventilation was mainly provided for boys with respiratory failure due to neuromuscular disorders. In the third decade of the series, many more young children were supported, reflecting the increase in survivors of complex neonatal and paediatric care requiring long-term ventilatory support and the development of ventilatory equipment designed for use in small children.

Cardiovascular

Mortality in paediatric cardiac surgery has fallen dramatically over the past 20 years. The focus in the current era is on reducing morbidity through prediction, prevention and anticipatory management of cardiac critical illness and perioperative care. Care of children with congenital heart disease often requires prolonged periods of complex intensive care, with prolonged central venous access increasing the risk of venous thrombosis. Thrombi in the central veins impair placement of central venous catheters, making vascular access a challenge, and limit or prevent the establishment of cavopulmonary connections to secure pulmonary blood flow. Thrombotic occlusion of femoral veins can impede the performance of cardiac catheterisation for diagnostic or therapeutic interventions. Todd Tzanetos [48] used ultrasound surveillance and serial measurements of biomarkers of coagulation and fibrinolytic pathways to determine the prevalence of thrombus and to describe the time course of biomarkers of coagulation and fibrinolysis in a group of 16 children undergoing early stage palliation for complex congenital heart disease. Thrombus was detected in 31 % of the 16 babies studied despite the administration of heparin and aspirin, with impaired ventricular function, low antithrombin III and increased tissue plasminogen activator levels being associated with the occurrence of thrombus. There is no consensus on how thrombotic risk should be managed in these vulnerable patients, and larger structured studies are needed to further explore risk factors and treatment strategies for central venous thrombosis. Algra et al. [49] studied a cohort of 412 children following cardiac surgery and detected infection defined by Center for Disease Control definitions in 25 % of them. Of the patients developing infections, 26 % developed surgical site infections and 25 % bloodstream infections. A panel of clinical variables available to bedside clinicians at 48 h after cardiac surgery was subjected to regression analysis, and a simple bedside ‘infection prediction’ rule including age (<6 months), delayed sternal closure (>48 h) and PICU stay (>48 h) was tested and shown to have a positive predictive value of 54.5 % and a negative predictive value of 79.5 %. Whilst the rule is helpful, it is probably no surprise for experienced intensivists that smaller children who are sick enough to stay for more than 48 h with an open sternum are at risk of infection. Those interested in this topic should also read the paper by Barker et al. [50], which reports data derived from the large US Society of Thoracic Surgeons database. Clinically important ‘low cardiac output’ is a common complication seen in infants and young children following cardiac surgery. Wernovsky et al. [51] developed a descriptive inotrope score (IS) as a means of comparing levels of inotropic support in a paediatric critical care setting. Gaies et al. [52] updated the original IS, adding noradrenaline and ‘new’ drugs, milrinone and vasopressin, naming this the vasoactive inotrope score (VIS; the formula is shown after this paragraph). Davidson et al. [53] recently undertook an evaluation of the VIS and IS in children following cardiac surgery and determined that a higher VIS at 48 h following cardiac surgery is strongly associated with poorer short-term outcomes, including length of ventilation and PICU stay. VIS was superior to IS in predicting short-term outcome. The VIS may soon have to be modified again, if the use of levosimendan gains traction in paediatric cardiac critical care [53]. Ricci et al.
[54] recently reported on its use in a small randomised controlled trial of levosimendan versus ‘usual inotropes’ in this population. Children were allocated to receive either levosimendan, a novel inodilator agent belonging to the family of calcium sensitiser agents, or ‘usual inotropic therapy’. The incidence of clinically assessed ‘low cardiac output’ was lower in the levosimendan group than in the control patients (37 vs. 61 %) [54]. Whilst the positive inotropic and vasodilating properties and unique pharmacodynamics suggest a strong rationale for the use of levosimendan in paediatric perioperative care, it is a great shame that, despite pressure from the clinical community, the drug’s manufacturer has so far not conducted or supported any sizable clinical trials in children. In a prospective observational study, Hassinger et al. [55] hypothesised that elevated preoperative levels of asymmetrical dimethylarginine (ADMA), an endogenous competitive inhibitor of nitric oxide synthase which is known to be associated with disruption of endothelial function, might be associated with worse outcomes following paediatric cardiac surgery. ADMA levels were measured before cardiac surgery in 100 patients aged 0–18 years. Raised ADMA levels were found in 29 (29 %). Using logistic regression across a range of common short-term clinical outcomes, the authors determined that preoperative ADMA level (odds ratio 452.9; 95 % CI 7.9, 999; p = 0.003), cardiopulmonary bypass (CPB) time (odds ratio 1.03; 95 % CI 1.01, 1.05; p = 0.002) and peak inotrope score (odds ratio 1.27; 95 % CI 1.01, 1.59; p = 0.042) carried independent risk for poor short-term outcomes. Mastropietro et al. [56] sought to investigate the paradox that some children appear to have increased arginine vasopressin (AVP) levels following cardiac surgery whilst others who are hypotensive respond favourably to exogenously administered AVP. A great difficulty in studying AVP is its short half-life and demanding assay techniques. Copeptin is a more stable and easily measured product of pro-AVP metabolism and may therefore act as a useful surrogate for endogenous AVP production. The investigators found that relative AVP deficiency occurred in a proportion of infants and children following cardiac surgery. Plasma AVP and copeptin concentrations were positively associated in these patients, supporting a basis for additional studies of larger cohorts of children to determine if copeptin will prove useful in identifying these patients and therefore children who could benefit from exogenous AVP administration.
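As commonly cited from Gaies et al. [52], the VIS extends the original IS with three additional agents (catecholamines in μg/kg/min, vasopressin in U/kg/min):

$$\mathrm{VIS} = \underbrace{\text{dopamine} + \text{dobutamine} + 100 \times \text{adrenaline}}_{\mathrm{IS}} + 100 \times \text{noradrenaline} + 10 \times \text{milrinone} + 10\,000 \times \text{vasopressin}$$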

The maintenance of stable haemodynamic conditions whilst inducing sedation or anaesthesia during critical illness is challenging. Some of the most commonly used sedatives in critical care, such as midazolam and propofol, are poorly tolerated in cardiovascularly compromised children. Etomidate, an alternative sedative-anaesthetic, has a more favourable cardiovascular profile but is associated with competitive inhibition of cortisol production, which itself may be harmful unless supplemental corticosteroid is given. The inert gas xenon has long been recognised as an effective inhaled anaesthetic agent. Its rarity in the atmosphere, and therefore its expense, has to date severely limited its clinical application. Chakkarapani et al. [57] recently reported the effect of 18 h of 50 % xenon (Xe) inhalation at normothermia (NT 38.5 °C) or hypothermia (HT 33.5 °C) on mean arterial blood pressure, inotropic support and heart rate (HR) following an induced perinatal global hypoxic-ischaemic insult (HI) in newborn pigs. Xe maintained stable blood pressure, thereby reducing inotropic support requirements during and after administration, independently of induced HT, the current treatment for neonatal encephalopathy. If supply, delivery and recapture issues can be resolved, Xe may in future offer haemodynamic benefits during neuroprotection and also in other haemodynamically critical situations in paediatrics, such as the induction of anaesthesia in children with severe septic shock or end-stage heart failure (cardiomyopathy, acute fulminant myocarditis). Whilst ‘myocardial depression’ or ‘low cardiac output’ are often imputed from clinical findings in neonates and young children in the perioperative period, routine measurement of cardiac output is both difficult and as yet of unproven benefit in patient management [57]. Grollmuss et al. [58] compared electrical velocimetry (EV), a noninvasive and continuous method of measuring stroke volume, with intermittent transthoracic Doppler measurements and found the two methods to be interchangeable. EV has the potential to become a useful tool for continuous cardiac output measurement and for guiding goal-directed therapy in critically ill infants with heart failure. Even blood pressure can be challenging to measure in the critically ill preterm neonate [58]. Konig et al. [59] studied 60 infants of less than 32 weeks gestation and compared noninvasive cuff measurements with intra-arterial measurements obtained from umbilical arterial catheters. Although the authors found that overall the average differences between invasive and noninvasive measurements were acceptable, the range of under- and overestimation was large, making reliance on noninvasive measurements in guiding circulatory management ‘problematic’. Clearly, better noninvasive cardiovascular monitoring methods are needed for our smallest patients. The majority of papers published in the paediatric pages of the journal relate to critical care undertaken in the developed world. Nguah et al. [60] remind us of the huge burden of critical illness in underdeveloped and developing countries, and of what can be achieved in those settings. These investigators reported on observations of cardiac function and haemodynamic status of 183 children with severe malaria at presentation and after recovery, including details of echocardiographic evaluation of 121 children in the cohort.
Such settings offer huge potential for studies in sepsis, which could both improve local clinical care and provide key populations for clinical trials with potentially worldwide application, as we have recently seen from the FEAST trial [61].

Cyclosporine has been shown to reduce myocardial cell death following ischaemia–reperfusion. Gill et al. [62] therefore hypothesised that cyclosporine treatment may attenuate asphyxia-related intestinal injury in neonates. In an asphyxiated newborn piglet model, anaesthetised asphyxiated piglets were block-randomised to receive cyclosporine (10 mg/kg) or placebo (normal saline) boluses at 5 min of reoxygenation (n = 8/group). Within 2 h of hypoxia, piglets had cardiogenic shock with hypotension, acidosis and decreased superior mesenteric perfusion. Cyclosporine treatment increased superior mesenteric arterial (SMA) flow (114 ± 6 vs. 78 ± 19 % of baseline in controls) and improved intestinal tissue lactate (all p < 0.05) [62].

According to the authors, this is the first study to demonstrate that post-resuscitation administration of cyclosporine improves mesenteric perfusion and attenuates necrotising enterocolitis (NEC)-like intestinal injury in newborn piglets following asphyxia–reoxygenation.

Outcomes, quality and safety

Simple simulation techniques, as exemplified by practical basic life support training, have been used in medicine for many years. In the aviation industry, sophisticated simulators are used to facilitate whole crew (team) training, and this model has recently been adopted for the training of medical teams. Initially, purpose-built simulation ‘laboratories’ were developed; more recently, the availability of portable high-fidelity simulation equipment has enabled simulation to take place in clinical facilities. Stocker et al. [63] conducted a prospective, single-centre, longitudinal study over the first 2 years of an embedded clinical simulation programme involving a total of 219 paediatric health-care professionals. Approximately 90 % of participants rated the simulation training as effective in improving their practical care delivery and their ability to communicate and work effectively in a team. In a longitudinal analysis there was a stepwise significant (p < 0.05) increase in participants’ confidence, with significantly (p < 0.001) higher confidence in participants who had attended at least three sessions (90.7 vs. 61 %). Repeated exposure to in situ simulation appeared to be more beneficial than isolated exposure [63]. Two studies recently reported in the journal, by Booth et al. [64] and Martinez-Anton et al. [65], focussed on improving patient safety in PICUs by implementing intervention programmes aimed at reducing prescribing errors. Martinez-Anton standardised resources for prescribers and re-educated prescribers on good prescribing practice. Prescriptions were studied over 4 months before and after a 12-month intervention period. The overall error rate fell from 34.2 % prior to the intervention to 21.7 %. Booth focussed on an intervention involving daily error feedback, achieving an absolute risk reduction of 44.5 % (95 % CI 40.8–48 %). These are clinically important reductions which potentially reduced harm to patients and the associated costs. Of course, some change may have occurred through a Hawthorne effect, although this appears unlikely as Martinez-Anton reported that error rates actually increased with time during the initial (pre-intervention) observation period. It would be of great interest to know whether the improvements reported by these investigators were sustained over time, and if so how that was achieved.

Severity of illness scores such as PRISM and PIM are now widely used in paediatric critical care to risk-adjust mortality data, allowing assessment of unit performance and, in particular, comparison between different units within a health-care system. The revised Paediatric Index of Mortality score (PIM2), originally developed in the UK and Australia, uses data readily available at the point of PICU admission and has been validated in a number of different populations. Imamura et al. [66] recently assessed data from 2,536 children admitted to a large PICU in Japan, of whom 62 died. PIM2 was found to have excellent discriminatory power and good calibration, and therefore to be a potentially useful tool in the assessment of quality of care in Japanese PICUs. Likewise, Leteurtre et al. [67] sought to compare the performance of PIM2 in 14 French and 1 Belgian PICUs. The authors undertook a recalibration of the score in both the French-Belgian (FB) and Great Britain and Ireland (GBI) data sets. The PIM2 score and the GBI-recalibrated PIM2 score were found to perform well in both populations, whereas the version recalibrated using the FB data, whilst performing well in that population, was poorly calibrated when applied to the GBI data. The most important finding of this study was that the PIM2 score was valid in the FB population, a result which is encouraging as moves are made towards a Europe-wide paediatric intensive care registry. PIM2 may well evolve as improvements, or at least changes, in paediatric intensive care occur with time. Scores can be updated by recalibration exercises as described by Leteurtre, in which the algorithm generating the score is changed whilst using the same data elements. More radical evolution of a score can be undertaken by adding additional, more discriminatory data elements. Morris et al. [68] reported on the utility of substituting admission blood lactate for the base excess element when calculating the PIM2 score to predict death among a population of 2,380 children from a single English PICU. Lactate significantly predicted mortality, with an odds ratio (OR) for death per unit (mmol/l) increase of 1.11 (95 % CI 1.06–1.16; p < 0.001). Adding lactate to the PIM2 model improved the fit of the model, with an increase in area under the ROC curve from 0.832 to 0.848, an example of how the PIM2 score may be usefully modified to improve its performance further.
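To make the lactate result concrete, the reported OR corresponds, in a logistic model of death, to

$$\log\frac{p}{1-p} = \beta_0 + \beta_1\,[\text{lactate}] + \dots, \qquad e^{\beta_1} = 1.11,$$

so each 1 mmol/l rise in admission lactate multiplies the odds of death by 1.11 (a 5 mmol/l rise multiplies them by 1.11^5 ≈ 1.69), with the other model elements held constant.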

Modelling of capacity in health care is critical to the efficient use of resources. One source of ‘inefficiency’ is the readmission of patients to an ICU soon after primary discharge. This information is important for both clinicians and health-care managers: if readmission rates are benchmarked against appropriate standards or populations, high or low readmission rates may be a sign of inefficient or unsafe systems of care. PICU readmission may of course follow a genuinely unpredictable deterioration, but may also result from preventable problems, including inappropriately precipitous ICU discharge or poor quality care immediately following discharge. Bastero-Minon et al. [69] determined that 2.45 % of 4,625 children discharged from a tertiary paediatric cardiac ICU were readmitted within 72 h of their primary discharge. Respiratory deterioration was the most common cause of ICU readmission within 72 h. In retrospect, changes were detected on pre-discharge chest X-rays in 12.5 % of these children, suggesting that different management might have prevented the discharge–readmission sequence. Frequently there is a lack of certainty about when a patient is “ready” for discharge or when it is “safe” for a patient to be transferred. Although ‘readiness for discharge’ criteria have been suggested, the operational complexity of individual hospitals is likely to limit their applicability. The baseline readmission rate, derived as it is from a large paediatric cardiac ICU, is nevertheless very useful data against which other providers may wish to benchmark [69].

Tibby et al. [70] highlighted another issue relevant to PICU capacity planning and care pathway development. Using a competing risks model, the investigators sought to determine whether mortality or morbidity of children with Down syndrome differed from the non-Down population after adjusting for important confounding variables. They found that children with Down syndrome required a higher proportion of organ support than expected by disease severity at ICU admission, and that deaths in the Down syndrome population occurred significantly later than those in the non-Down group. The authors raise an intriguing ‘hibernation’ hypothesis to explain their findings, suggesting that patients (such as those with Down syndrome) who are able to trigger a prompt hibernation response will tend to avoid death early in an episode of critical illness, though at the cost of a greater requirement for early organ support. This large study provides useful data for PICU clinicians and managers, who must take into account the population prevalence of Down syndrome when planning intensive care provision. Further exploration of the hibernation hypothesis is warranted [70].

As well as child patient outcomes, clinicians need to be aware of the potential impact of an episode of critical illness on family members. Colville and Pierce [71] undertook a longitudinal cohort study of the parents of 66 consecutive 7- to 17-year-olds admitted to intensive care. The significant findings of this study are that nearly half of the families were experiencing significant post-traumatic stress 12 months after discharge, and that many experienced delayed reactions, suggesting that families need longer-term monitoring and support.

SIRS and sepsis in children

Shime et al. [72] recently provided an overview of the incidence of and mortality from paediatric severe sepsis using data drawn from the Japanese national paediatric intensive care registry. A total of 167 patients were studied who matched the definitions of paediatric severe sepsis (systemic inflammatory response syndrome with cardiovascular dysfunction, ARDS or other multiorgan dysfunction caused by infection) as defined by the International Consensus Conference on Pediatric Sepsis, 2005 [73]. The incidence of severe sepsis was 1.4 % of PICU admissions. Crude 28-day mortality was 18.9 %, comparable to the PIM2-predicted mortality of 17.7 %. Children with sepsis associated with haematological disorders and those presenting in shock had a poor prognosis. Well-managed registries are increasingly important in paediatric critical care research. Standard data sets and data management rules allow inter-unit, inter-regional or international comparison of outcomes and permit patients suitable for studies to be identified. This is perhaps even more valuable in paediatric critical care than in adult care, as the numbers of patients are far fewer, including some with very rare conditions which cannot be properly studied unless data from large populations are available. Gatterre et al. [74] provide an interesting insight into the presentation of rare severe forms of Kawasaki disease: 11 cases, the majority presenting with shock and multiple organ dysfunction syndrome (MODS). This is a timely reminder that Kawasaki disease should be considered in the differential diagnosis of children presenting with shock or organ dysfunction. Paize et al. [75] recently presented an elegant study of the microcirculation in paediatric severe meningococcal disease, comparing findings in this cohort with healthy controls. Microcirculatory insufficiency correlated with markers of endothelial activation, and showed longitudinal improvement as disease resolved. Whilst these findings are interesting, further work is required to determine whether the assessment of microcirculatory parameters can be relied upon to guide resuscitation and ongoing therapy in sepsis or other shock states. Finally, prevention of SIRS is clearly better than ‘cure’; Jack and colleagues [76] describe how the use of in-line filtration reduces complications and length of stay in critically ill children admitted to a PICU and receiving intravenous therapy. A total of 807 children were randomly allocated to a control or filter group, with the latter receiving in-line filtration. The incidence of predefined complications, including SIRS, was lower in the filter group than in the control group (30.9 vs. 40.9 %, p = 0.003). Another interesting aspect of paediatric critical care was raised by Lombel et al. [77] in a paper comparing different fluid overload definitions. Surely we know how to measure fluid overload in children in 2012? Apparently not: Lombel studied four weight-based definitions and four fluid-based definitions in 21 children undergoing renal replacement therapy. The proportion of patients considered to have greater than 10 % fluid overload ranged from 14 to 48 % depending on the definition used; percentage fluid overload was not significantly associated with PICU mortality, but five of eight fluid overload definitions were predictive of higher subsequent PELOD scores. Further studies are urgently needed to determine the optimal definition of fluid overload in critically ill children.
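For orientation, a commonly used fluid-based definition (our illustration; Lombel et al. compared eight variants) computes cumulative percentage fluid overload as

$$\%\,\mathrm{FO} = \frac{\sum \text{fluid in} - \sum \text{fluid out}}{\text{ICU admission weight (kg)}} \times 100$$

with fluid volumes in litres, whereas weight-based variants use (current weight − admission weight)/admission weight × 100; the choice of baseline weight and of balance window is precisely what differed between the definitions compared.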

Neurology

Seizures are common in comatose children but may be clinically subtle or detectable only with continuous EEG monitoring. Kirkham et al. [78] addressed this important issue, reporting the use of continuous 1–3 channel EEG monitoring in a series of 204 head-injured children. Of these, 74 % were found to have electroencephalographic seizures (ES), the majority without recognisable clinical signs. Most of the children had clinical seizures (CS) before EEG monitoring, but five had only ES, i.e. no CS. An unfavourable outcome was independently predicted by the presence of ES. Further research on the reliability of one- or two-channel cEEG monitoring devices in diagnosing non-convulsive seizures is required as the technical issues are resolved; these data add to the evidence that outcome may be predicted on the PICU. In another EEG-based study, Gunn et al. [79] reported on the relationship between perioperative electrical seizures, the background pattern of the amplitude-integrated EEG and 2-year developmental outcome in young infants undergoing congenital heart surgery. CPB was used in 83 % of cases. Perioperative electrical seizures were detected in 30 % of cases, of whom 25 % had clinically apparent seizures. The occurrence of seizures did not correlate with 2-year neurodevelopmental outcome assessed using the Bayley Scales of Infant Development [79].

Sedation and analgesia

Accumulation of commonly used opioids can be a problem in paediatric intensive care, especially in children with renal or hepatic impairment. Fentanyl, for instance, has a context-sensitive elimination half-life which increases with time. Remifentanil is a potent opioid with an exceptionally short half-life of 3–5 min and no evidence of prolongation with time. Welzing et al. [80] compared a sedative-analgesic combination of fentanyl–midazolam with remifentanil–midazolam in a randomised double-blind study conducted in 23 infants. Although the study was designed as a pilot, mean extubation time was significantly shorter in the remifentanil group than in the fentanyl group (80 vs. 782 min, p = 0.005). Both groups showed comparable sedation scores, good haemodynamic stability and low rates of side effects. If these results are confirmed in a larger study, remifentanil may become the potent opioid of choice in mechanically ventilated infants. Sedatives are almost certainly overused in paediatric intensive care, whilst causes of distress, including delirium, may be overlooked and therefore undertreated. Silver et al. [81] described the use of a simple rapid assessment tool, the Cornell Assessment of Pediatric Delirium, for the detection of delirium in 50 paediatric intensive care patients. There was excellent concordance (97 %) between the new tool and the reference standard (DSM-IV), which requires a lengthy psychiatric evaluation. Further studies of this potentially very valuable screening tool are required to confirm its validity and establish inter-rater reliability and practicality of use by bedside clinicians.

Endocrine

Measured deficiency tempts intensivists to replace, examples being cortisol, thyroid hormone, albumin and vitamins. The evidence on which clinical practice is based is often weak, but the compulsion to ‘normalise’ is powerful. Karaguzel et al. [82] investigated the incidence of adrenal insufficiency (AI) in three groups of children: 23 with acute critical illness with severe sepsis, 27 with acute critical illness without sepsis and 24 patients following major surgery. Baseline cortisol was measured on admission and a low-dose ACTH stimulation test was then performed. Serial cortisol and ACTH levels were followed for up to 14 days. The authors found that AI, defined as an increment in cortisol level of less than 9 μg/dl following low-dose ACTH testing, occurred in 28 % of patients with acute stress related to these three specific conditions. As previously reported by Menon et al. [83], the lowest rate of AI was seen in sepsis patients, suggesting that the presence of sepsis was not associated with an increased risk of AI. Whether treatment of AI in critically ill children is required remains, however, unclear. Rippel et al. [84] also tempt us to consider replacement, this time with a report of the incidence of vitamin D deficiency in a critically ill population. Hypovitaminosis D is common in critically ill adults. Rippel et al. measured vitamin D levels in 316 children and determined that levels were low in 34.5 % of the study population. Hypovitaminosis D was not, however, associated with differences in PICU length of stay or hospital survival. Further studies of hypovitaminosis D are warranted, especially in children undergoing cardiac surgery or with existing cardiac dysfunction.

Miscellanea

Controversy exists over how to ‘clear’ the spine (i.e. enable the clinician safely to remove spinal precautions on the basis of imaging and/or clinical examination) of significant unstable injury in clinically unevaluable obtunded blunt trauma patients (OBTPs). Clearing the spine in this clinical scenario largely centres on imaging, possibly supported by clinical evaluation if and when patients become evaluable. The review by Plumb and Morris [85] provides a clinically relevant update of the evidence available since 2004. Plain radiography has low sensitivity for detecting unstable spinal injuries in OBTPs, whereas multidetector-row computerised tomography (MDCT) approaches 100 % sensitivity. Magnetic resonance imaging (MRI) is inferior to MDCT for detecting bony injury but superior for detecting soft tissue injury, with a sensitivity approaching 100 %, although 40 % of such injuries may be stable and ‘false positive’. In studies comparing MDCT with MRI for OBTPs, MRI following ‘normal’ CT may detect up to 7.5 % of missed injuries, with operative fixation in 0.29 % and prolonged collar application in 4.3 %. Increasing data are available on the complications associated with prolonged spinal immobilisation in a population in which only a minority have an actual injury. It was concluded that, given the variability of screening performance, it remains acceptable for clinicians to clear the spine of OBTPs using MDCT alone or MDCT followed by MRI, with implications for either approach. With all screening processes, false positive and false negative results will occur and have consequences. It is essential that clinicians and institutions audit their data to determine their likely screening performance in practice.

Regional citrate anticoagulation (RCA) is an attractive anticoagulation mode for continuous renal replacement therapy (CRRT) because it restricts the anticoagulant effect to the extracorporeal circuit and may decrease heparin-induced risks such as bleeding and thrombocytopaenia. Zhang and Hongying [86] performed a systematic review and meta-analysis of randomised controlled trials comparing the efficacy and safety of CRRT anticoagulation therapies. The intervention treatment was RCA and the end-points were circuit life span, bleeding events, metabolic complications and mortality. Six studies met the inclusion criteria, involving a total of 658 circuits; patients with liver failure or a high risk of bleeding were excluded. The circuit life span in the RCA group was significantly longer than that in the control group, with a mean difference of 23.03 h (95 % CI 0.45–45.61 h), but much heterogeneity was found among studies. RCA was able to reduce the risk of bleeding, with a risk ratio of 0.28 (95 % CI 0.15–0.50), but was associated with more episodes of hypocalcaemia. Two studies compared the mortality rate between RCA and control groups, one reporting similar mortality rates and the other reporting superiority of RCA over the control group. It was concluded that RCA is effective in maintaining circuit patency and reducing the risk of bleeding, whereas its impact on mortality remains controversial.

Brotschi and co-workers [87] tested whether an in-line filter inserted in the syringe infusion pump line would influence start-up times and flow irregularities at low infusion rates. The authors conclude that the in-line filters helped to reduce flow irregularities but also, more surprisingly at first glance, introduced a delay in drug delivery. This may be particularly important when highly concentrated drugs are administered at low flow rates.

Mallat and co-workers [88] studied the anion gap in critically ill patients and determined the smallest detectable difference (SDD) in individual patients. In a large sample of 161 patients, they assessed repeatability using different mathematical expressions and concluded that repeatability was good and that the SDD was an appropriate measure of the sensitivity, or precision, of the technique. It was preferable to the coefficient of variation and the ‘least significant change’ because of their dependence on the levels of the variables. All in all, measurements of the anion gap, including the corresponding Stewart-derived parameter, could thus be used to follow intensive care patients over time.
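For reference (our illustration, not the paper's exact derivation), the anion gap and the usual repeated-measurements form of the SDD are

$$\mathrm{AG} = ([\mathrm{Na^+}] + [\mathrm{K^+}]) - ([\mathrm{Cl^-}] + [\mathrm{HCO_3^-}]), \qquad \mathrm{SDD} = 1.96\sqrt{2}\,s_w$$

where $s_w$ is the within-subject standard deviation of replicate measurements; a change in the anion gap smaller than the SDD cannot be distinguished from measurement noise at the 95 % level.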

Hyperbaric environment

The role of hyperbaric oxygen therapy in necrotising soft tissue infections (NSTI) was analysed by Soh et al. [89], who determined the effect of hyperbaric oxygen (HBO) therapy on mortality, complication rate, discharge status/location, hospital length of stay (LOS) and inflation-adjusted hospitalisation cost in a retrospective study of 45,913 patients with NSTI in the US Nationwide Inpatient Sample (NIS) from 1988 to 2009. A total of 405 patients received HBO therapy. The patients with NSTI who received HBO therapy had a lower mortality (4.5 vs. 9.4 %, p = 0.001). After adjusting for predictors and confounders, the authors found that patients who received HBO therapy had a statistically significantly lower risk of dying, a higher hospitalisation cost (US$52,205 vs. US$45,464, p = 0.02) and a longer LOS. Despite the last two factors, the statistically significant reduction in mortality supports the use of HBO therapy.

In a bench study, Lefebvre and co-workers [90] tested a hyperbaric chamber ventilator at different ambient pressures. As gas density increases with rising pressure, resistance within the ventilator increases, reducing the flow it delivers. A new machine (Siaretron IPER 1000) has a built-in compensatory technique to deliver a preset flow irrespective of ambient pressure. Tests covering a range from 1 to 4 atmospheres showed the ventilator to be reliable within a certain range of gas flows; outside this range, tidal volume and minute ventilation were reduced.