
How Investigators Can Answer More Complex Questions About Assess Concrete Strength and Lessons to Draw from a Benchmark

Chapter in: Non-Destructive In Situ Strength Assessment of Concrete

Abstract

This benchmark aims to assess mean compressive strength at several scales and to identify the location and characteristics of possible weak areas in the structure. It concerns synthetic data simulated on a group of four concrete cylindrical structures of identical dimensions with different kinds of strength distribution, based on a real case study. After receiving the test results corresponding to their request (non-destructive or destructive), the experts have to analyze these data, assess the concrete properties and localize possible weak areas. In addition, they have to define their assessment methodology, i.e. the level of investigation and the number, type and location of measurements. This study provides information about how the accuracy of the final estimates depends on choices made at the various steps of the assessment process, from the definition of the testing program to the final delivery of strength estimates.


Notes

  1. This case study has been described by Soutsos et al. [1]. The context of the investigation, the general dimensions of the structure and the access conditions for the benchmark were directly derived from the real case study. The material properties were however changed in order to guarantee the objectivity and confidentiality of the benchmark.

  2. The ratio between these three amounts (1/2/3) corresponds to a quantitative estimate carried out by S. Biondi on the basis of Eurocode 8.

  3. Typically, as an example, this criterion can be expressed as "where the NDT result is as close as possible to the average value of all NDT results" or "where NDT results reach extreme values".

  4. Bob [2], Parrott [3], Duval [4], Basheer et al. [5].

  5. Turgut and Kucuk [6].

  6. JGJ/T23-2011, Technical specification for inspecting of concrete compressive strength by rebound method, 2011 (see also Proceq, Rebound number corrections with JGJ/T23-2011, online technical documentation).

  7. By combining Eqs. 7.2 and 7.11, one can compute the difference between the R values of two concretes (A and B) that have the same strength, concrete A being uncarbonated and concrete B being carbonated. It appeared (after the beginning of the simulations) that there is some compensation effect: since low-strength concrete carbonates more (Fig. 7.2), the k1 correction factor is larger, and the R value of the poor-strength carbonated concrete can be more or less identical to that of a "good" uncarbonated concrete. A side effect is that the contrast in original concrete strength can be masked if the lower-strength concrete is more carbonated. Such situations can be found in real life. In the benchmark, this fact had not been anticipated and resulted in a handicap for some strategies that relied mostly on rebound measurements on the carbonated face (see Sect. 7.6.1 for more details).

  8. Pham [7].

  9. Of course, the reader can come back at any time to Sect. 7.5.1 to check any information.

  10. The numbers given in this section correspond to the mean value and standard deviation of all strength data obtained on cores. They are summarized for all contributors in Table 7.12.

  11. It must be pointed out that, as for the first benchmark (Chap. 6), chance has played some role in the results, and that this comparison is not a ranking of contributors. Furthermore, some important features identified during this second benchmark will, as for the first benchmark, be further analyzed by randomly repeating the full process in a Monte-Carlo simulation (see Chap. 8).

  12. It must also be recalled that several contributors (I, C, F, L, O, M; see Table 7.15 in Sect. 7.6.1) suffered from the influence of carbonation on rebound test results, which led to some wasted resources.

  13. In practice, during a real on-site investigation, refining the extent of these areas is easier because the limits between the different batches may be visible.

  14. Breysse and Fernández-Martínez [8].

  15. Ddl: degrees of freedom (from the French "degrés de liberté").

References

  1. Soutsos et al.: In: Breysse, D. (ed.) Non-Destructive Assessment of Concrete Structures: Reliability and Limits of Single and Combined Techniques. RILEM SOA TC-207, pp. 151–154 (2012)


  2. Bob, C.: Durability of concrete structures and specification. In: Dhir, R.K., Dyer, T.D., Jones, M.R. (eds.) International Congress on Creating with Concrete. University of Dundee, Dundee, 6–10 September 1999, pp. 311–318


  3. Parrott, L.J.: A Review of Carbonation in Reinforced Concrete. Cement and Concrete Association, Slough (1987)


  4. Duval, R.: La durabilité des armatures et du béton d'enrobage. In: La durabilité des bétons, Collection de l'ATILH, pp. 173–226. Presse ENPC, Paris, France (1992)


  5. Basheer, P.A.M., Russell, D.P., Rankin, G.I.B.: Design of concrete to resist carbonation: rate of carbonation of concrete. In: Lacasse, M.A., Vanier, D.J. (eds.) 8th International Conference on Durability of Building Materials and Components, Vol. 1. NRC Research Press, Vancouver, Canada, 30 May–3 June 1999, pp. 423–435


  6. Turgut, P., Kucuk, O.F.: Comparative relationships of direct, indirect and semi-direct ultrasonic pulse velocity measurements in concrete. Russ. J. Nondestruct. Test. 42(11), 745–751 (2006)


  7. Pham, S.T.: Étude des effets de la carbonatation sur les propriétés microstructurales et macroscopiques des mortiers de ciment Portland. Ph.D. thesis, University of Rennes, France (2014)


  8. Breysse, D., Fernández-Martínez, J.L.: Assessing concrete strength with rebound hammer: review of key issues and ideas for more reliable conclusions. Mater. Struct. 47, 1589–1604 (2014)



Acknowledgements

The 2nd author would like to acknowledge the financial support by Base Funding—UIDB/04708/2020 of CONSTRUCT—Instituto de I&D em Estruturas e Construções, funded by national funds through FCT/MCTES (PIDDAC).

Author information

Correspondence to Jean Paul Balayssac.

Appendices

Appendix 7.1: Number of Tests of Each Type for KL1 and KL2 Investigations

These two tables provide information about the number of cores and non-destructive tests for all contributors who considered the KL1 and KL2 levels (see Table 7.11 in the main text for the KL3 level) (Tables 7.19 and 7.20).

Table 7.19 Synthesis about the number of cores and semi-destructive tests for KL1
Table 7.20 Synthesis about the number of cores and semi-destructive tests for KL2

Appendix 7.2: Recommendations Regarding the Conversion Model Identification and Validation

During the benchmark, each contributor received their own set of test results, corresponding to their specific investigation program, and identified their own conversion model(s). It is therefore very difficult to compare the models, as everything differs: data set, model type and identification strategy. To better understand what can be done (and to draw lessons for the RILEM TC 249-ISC Recommendations), some specific post-processing work has been carried out separately, and is described here.

Comparison between conversion models

All datasets of test results were considered as they were obtained by the contributors (their main characteristics are summarized in Table 7.12). For each of these datasets, three conversion models were identified whenever possible, i.e. a linear model with rebound, a linear model with velocity and a combined power law model. A least squares regression was used for the identification of the model parameters. This led to 10, 11 and 6 datasets, respectively. Table 7.21 summarizes what was obtained for both rebound and velocity. For each model, it provides the model parameters a and b, the determination coefficient identified by the regression analysis and the number of degrees of freedom of the model. It must be noted that these models may differ from those identified by the contributors, as they may have used another model type or another calibration approach. The two remaining columns in Table 7.21 (RMSEpred) provide, in MPa, the prediction error (see Sect. 1.5.6.2 for the definition of this parameter). This estimator is the best way to quantify the quality of the assessment, since it measures the average uncertainty when the conversion model is used as a predictor of an individual strength. In real investigations it can be estimated by using the leave-one-out procedure (see Sect. 11.2). However, with synthetic simulations, it is easy to calculate it by using the conversion model to estimate strength at all possible test locations on Tank A from the values of the NDT results.
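As an illustration of how such a model can be identified and how RMSEpred can be estimated in practice, the following minimal Python sketch fits a linear rebound model by least squares and evaluates the leave-one-out prediction error. The rebound and strength values are purely illustrative placeholders, not benchmark data.

    import numpy as np

    def fit_linear(ndt, fc):
        """Least-squares fit of fc = a * ndt + b; returns (a, b)."""
        a, b = np.polyfit(ndt, fc, deg=1)
        return a, b

    def rmse_pred_loo(ndt, fc):
        """Leave-one-out prediction error: refit the model N times,
        each time predicting the strength of the left-out core."""
        errors = []
        for i in range(len(ndt)):
            mask = np.arange(len(ndt)) != i
            a, b = fit_linear(ndt[mask], fc[mask])
            errors.append(fc[i] - (a * ndt[i] + b))
        return float(np.sqrt(np.mean(np.square(errors))))

    # Hypothetical (rebound, core strength) pairs -- placeholders only
    rebound  = np.array([32.0, 35.0, 37.0, 40.0, 42.0, 45.0])
    strength = np.array([24.0, 27.0, 28.5, 31.0, 33.5, 36.0])  # MPa

    a, b = fit_linear(rebound, strength)
    r2 = np.corrcoef(rebound, strength)[0, 1] ** 2
    print(f"fc = {a:.2f} R + {b:.2f}   r2 = {r2:.3f}   "
          f"RMSE_pred (LOO) = {rmse_pred_loo(rebound, strength):.2f} MPa")

With synthetic data, the same rmse_pred_loo idea is simply replaced by a direct comparison between predicted and simulated strengths at all test locations.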

Table 7.21 Characteristics of the linear regression models identified from the available data sets (dof = number of degrees of freedom of the conversion model)

Most comments regarding the results in Table 7.21 are valid for both rebound and velocity test results. The first comment concerns the variety of the identified (a, b) pairs. Figure 7.18a, b illustrates the linear conversion models for rebound (Fig. 7.18a) and velocity (Fig. 7.18b). They show large differences, both in the slope parameter a and in the intercept parameter b.

Fig. 7.18 Illustration of conversion models identified with the different sets of test results (rebound on the left, velocity on the right)

However, when the a and b model parameters are plotted together, a striking feature emerges: these parameters are strongly correlated. This phenomenon, named the "trade-off", was previously pointed out as being a general property, and can be seen to hold for both field data and laboratory data (see Note 14) (Fig. 7.19).

Fig. 7.19 Illustration of the trade-off between conversion model parameters (left: fc = a R + b, right: fc = a V + b)

Assessing the prediction error

The explanation lies in the mathematical inverse problem of identifying the "best" (a, b) pair from a data set of N pairs (fc, NDT) with random uncertainty on the measurements (see Sects. 11.3 and 12.4 for details). Due to (a) the sampling uncertainty, which leads to different data sets, and (b) the minimization of an error function (least squares method), each single identification process leads to a specific (a, b) pair, which belongs to a whole set of "equivalent-error" solutions. The conclusion is that the differences between the (a, b) pairs of the various conversion models are only the result of chance. However, these models do not have the same prediction ability. This is why the issue of identifying the best conversion model must be revisited in order to ensure the best predictive ability of the model.
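The trade-off can be reproduced numerically. The short sketch below uses a hypothetical "true" relation and noise level (not chapter data): it repeatedly draws small noisy datasets from the same underlying relation and fits a linear model to each. The fitted slopes and intercepts scatter widely, yet are strongly correlated.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    true_a, true_b = 1.1, -11.0     # hypothetical underlying relation fc = a R + b
    slopes, intercepts = [], []

    for _ in range(500):                             # 500 simulated campaigns
        rebound = rng.uniform(32.0, 45.0, size=6)    # 6 cores per campaign
        fc = true_a * rebound + true_b + rng.normal(0.0, 2.0, size=6)  # 2 MPa noise
        a, b = np.polyfit(rebound, fc, deg=1)
        slopes.append(a)
        intercepts.append(b)

    # The fitted (a, b) pairs differ from one campaign to the next but fall
    # on a narrow band, i.e. they are strongly (negatively) correlated.
    print("corr(a, b) =", round(np.corrcoef(slopes, intercepts)[0, 1], 2))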

The second interesting point is related to the criterion used to assess the quality of the model. The determination coefficient r2 (Table 7.21) is the most commonly used parameter, but it is not a relevant indicator. In fact, RMSEpred, which is the relevant indicator of predictive ability, appears to be poorly correlated with r2. Furthermore, r2 is highly dependent on the number of cores, since it is easier to fit a model when the size of the dataset (in our case the number of cores) is small, but this is by no means a sign that the model can accurately estimate strength. Table 7.21 shows that the lowest RMSEpred values mostly correspond to larger datasets (i.e. higher dof). A larger dataset leads to a more stable and representative model, an aspect about which the r2 value provides no information.

It is also visible that conversion models identified with velocity test results perform better (lower RMSE) than those identified with rebound test results. With velocity, the best models lead to RMSE values of about 1.6 MPa, while with rebound the best models yield RMSE values of about 2.4 MPa. This is directly linked to the random measurement error affecting the NDT results. This error has a double influence: (a) on the dataset used for identifying the conversion model, and (b) when the model is used for assessing strength from a new test result. This clearly sets a limit on the estimation of any local strength value, which can be assessed from the test result precision (Appendix 7.3). It can be added that prediction errors close to 3 MPa, as obtained with some datasets, are of little interest, since this is approximately the magnitude of the standard deviation of strength for the whole tank.

More complex conversion models: nonlinear models, multivariate models

A usual question when identifying the conversion model is that of the mathematical shape to be chosen (linear, exponential, power…). Most contributors to the benchmark chose linear models. We have, however, tested alternative models (exponential model fc = a exp(bV)) and observed that they exhibit the same trade-off property between the a and b coefficients. While the determination coefficient may be slightly better for nonlinear models, the RMSE values are, for each data set, roughly identical to those of linear models. It thus seems relevant to consider only linear models, as long as the experimental data do not cover a large range and do not exhibit obvious nonlinear features.
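Readers who want to check this on their own data can use the sketch below, which fits both a linear velocity model and an exponential one (the latter by linear regression on ln fc) and compares their fitting RMSE. The velocity and strength values are hypothetical placeholders.

    import numpy as np

    velocity = np.array([3.9, 4.0, 4.1, 4.2, 4.3, 4.4])        # km/s (hypothetical)
    strength = np.array([25.0, 27.5, 29.0, 31.5, 34.0, 36.5])  # MPa (hypothetical)

    # Linear model fc = a V + b
    a_lin, b_lin = np.polyfit(velocity, strength, deg=1)
    rmse_lin = np.sqrt(np.mean((strength - (a_lin * velocity + b_lin)) ** 2))

    # Exponential model fc = a exp(b V), fitted on ln(fc) = ln(a) + b V
    b_exp, ln_a = np.polyfit(velocity, np.log(strength), deg=1)
    rmse_exp = np.sqrt(np.mean((strength - np.exp(ln_a + b_exp * velocity)) ** 2))

    print(f"linear model:      RMSE = {rmse_lin:.2f} MPa")
    print(f"exponential model: RMSE = {rmse_exp:.2f} MPa")

Over the narrow strength range covered here, the two RMSE values are typically very close, which is consistent with the conclusion above.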

A last issue commonly discussed in the literature is that of the possible combination of several non-destructive methods for improving the estimation of concrete strength. The efficiency of this combination remains an open question. Table 7.22 synthesizes, for the 6 datasets which combined rebound and velocity measurements, the (a, b, c) model parameters that can be identified with a double power law (SonReb-type) conversion model. The RMSEpred of the combined model can be compared with those of the two univariate linear models.
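As a sketch of how such a bivariate model can be identified, the following code fits a double power law fc = a R^b V^c by multiple linear regression on logarithms. All numerical values are hypothetical, and the RMSE printed is a fitting error, not the leave-one-out RMSEpred reported in Table 7.22.

    import numpy as np

    rebound  = np.array([32.0, 35.0, 37.0, 40.0, 42.0, 45.0])     # hypothetical
    velocity = np.array([3.9, 4.0, 4.1, 4.2, 4.3, 4.4])           # km/s, hypothetical
    strength = np.array([24.0, 27.5, 28.5, 31.5, 33.5, 36.5])     # MPa, hypothetical

    # ln(fc) = ln(a) + b ln(R) + c ln(V)  ->  ordinary multiple linear regression
    X = np.column_stack([np.ones_like(rebound), np.log(rebound), np.log(velocity)])
    (ln_a, b, c), *_ = np.linalg.lstsq(X, np.log(strength), rcond=None)

    prediction = np.exp(ln_a) * rebound**b * velocity**c
    rmse = np.sqrt(np.mean((strength - prediction) ** 2))
    print(f"fc = {np.exp(ln_a):.3g} * R^{b:.2f} * V^{c:.2f}   fitting RMSE = {rmse:.2f} MPa")

Whether the combined model is worth the extra parameter should be judged on its prediction error, not on its (necessarily higher) r2, as discussed below.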

Table 7.22 Comparison of the predictive ability of univariate and bivariate models

Although the coefficient of determination r2 of the combined models is slightly better than that of the single models (it is easier to fit a model when the number of model parameters increases), this is not the case for the RMSE values.

For five out of six datasets, the performance of the combined model is inferior to that of the univariate models. This comes from the fact that the lower repeatability of rebound measurements has a negative effect on the efficiency of the combined model. This confirms that:

  • the coefficient of determination cannot be used as an indicator of the model predictive ability,

  • the combination of several NDT test results in a single model may not be efficient.

Appendix 7.3: Repeatability of Test Results (or Test Result Precision, TRP)

The Test Result Precision (TRP) is a crucial parameter which indicates the magnitude of the measurement uncertainty associated with each test result. It applies to both destructive and non-destructive tests, and to both on-site and laboratory tests. As explained in Appendix 7.2, a higher TRP may be the reason why the final predictive RMSE is smaller. However, assessing the TRP is not common in engineering practice, and only two contributors (O and E) paid some attention to its assessment. Their approaches and the information that can be deduced from them are described in the following.

Test result repeatability for non-destructive tests (Contributor O, KL3).

The choice was to repeat the testing process 5 times at the same test location. In practice, for Tank A, the reference test location was x = 50 m, y = 1.5 m, and measurements were carried out at 4 additional testing points in its close vicinity, 20 cm apart in all directions (i.e. vertical and horizontal). Table 7.23 provides the series of test results.

Table 7.23 Test results for repeatability assessment (the index indicates the internal or external side)

The standard deviation and, therefore, the coefficient of variation can be compared with those estimated from all tests carried out on the same tank. Table 7.24 synthesizes these results.

Table 7.24 Test results precision assessment (cv is the coefficient of variation)

The value of the information provided by a test result increases if cvrep is small compared with the overall cv calculated on the whole tank. Particularly valuable information can therefore be drawn from these results:

  • the semi-direct velocity tests on the internal face provided the largest amount of information, followed by the direct velocity tests,

  • the semi-direct velocity tests on the external face and the pull-out tests are not as good,

  • the rebound number tests on the internal face have the lowest repeatability.

The main consequence of these results is their direct influence on the quality of the correlations used in the conversion model identification stage. The effect of a lower test result precision (TRP) is a higher RMSE value at the end of the process (see Table 7.22). Another effect is that a test result cannot be taken at face value, but should be considered in a wider context. It is only with a larger number of test results, and the smoothing effect of larger samples, that tests with poor repeatability can be useful. It must be noted that, when compared to the total number of NDTs carried out on the same tank (Table 7.11), the additional effort for assessing repeatability remains marginal (4 more rebound tests vs 91, 12 more velocity tests vs 74).
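A minimal sketch of the comparison underlying Table 7.24 is given below: it computes the repeatability coefficient of variation cvrep from repeated readings at one test location and compares it with the overall cv over the tank. The readings are illustrative placeholders, not the values of Table 7.23.

    import numpy as np

    def cv(values):
        """Coefficient of variation: sample standard deviation / mean."""
        values = np.asarray(values, dtype=float)
        return float(np.std(values, ddof=1) / np.mean(values))

    repeated_readings = [4.21, 4.25, 4.18, 4.23, 4.20]            # 5 readings, one location
    tank_wide_readings = [4.05, 4.35, 4.10, 4.28, 4.40, 3.95, 4.22, 4.15]

    cv_rep, cv_all = cv(repeated_readings), cv(tank_wide_readings)
    print(f"cv_rep = {cv_rep:.1%}   overall cv = {cv_all:.1%}")
    # The smaller cv_rep is relative to the overall cv, the larger the share of
    # the observed variability that reflects the material rather than noise.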

Test result repeatability for core strength tests (Contributor E, KL3).

Contributor E took 12 cores (3 cores on each tank) following a similar process: the second and third cores were taken at +/− 25 cm from the first one. This process makes it possible to quantify the local-scale variability of core strength, which combines the effect of the local-scale material variability with that of the variability of the strength measurement process (including the drilling phase). The average variability for each of the four sets of three cores can be compared with the global variability (on the full set of twelve cores), which combines the variability of the strength measurement process with the effect of the material variability at a larger scale. The available results are summarized in Table 7.25.

Table 7.25 Core strength test results and variability assessment

The local cv varies from 4.7 to 7.3% (average value 6%), while the global one amounts to 9.8%, which corresponds to a global variance two to three times larger than the local one. Therefore, any local strength value must be considered with this +/− 6% uncertainty margin (i.e. roughly +/− 2 MPa), corresponding to the repeatability of the strength measurement process at a local scale. As with the repeatability of non-destructive test results, this variability also has some impact on the accuracy of conversion models.
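The variance comparison quoted above can be checked directly from the reported coefficients of variation (local cv about 6%, global cv about 9.8%):

    # Worked check using the values reported in the text.
    local_cv, global_cv = 0.06, 0.098
    variance_ratio = (global_cv / local_cv) ** 2
    print(f"global variance / local variance = {variance_ratio:.1f}")  # about 2.7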

The large-scale variability is estimated at about 10%, which is more than the local uncertainty. This means that the material variability has an important effect, but these measurements cannot separate the variability at the tank scale from the between-tank variability.


Copyright information

© 2021 RILEM


Cite this chapter

Breysse, D. et al. (2021). How Investigators Can Answer More Complex Questions About Assess Concrete Strength and Lessons to Draw from a Benchmark. In: Breysse, D., Balayssac, JP. (eds) Non-Destructive In Situ Strength Assessment of Concrete. RILEM State-of-the-Art Reports, vol 32. Springer, Cham. https://doi.org/10.1007/978-3-030-64900-5_7


  • DOI: https://doi.org/10.1007/978-3-030-64900-5_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64899-2

  • Online ISBN: 978-3-030-64900-5
