Model Feeding and Data Quality
As seen in previous chapters of this book, models project future cash flows over a long time horizon, under different scenarios and using various types of data. Their results are used by top managers to make important decisions at the company level, so feeding those models is a major issue to be addressed. Moreover, as the saying goes, "garbage in, garbage out": the quality of the results is directly related to the quality of the data. Data quality is therefore a vital subject for top managers of insurance undertakings (life, non-life, and mutual alike), because they need to trust the results of their risk models and use them in their decision-making process.

This article covers the major questions related to data quality. In particular, it explains why data quality should be treated as a continuous process rather than a one-off operation: there is no absolute level of quality, and once the targeted level of data quality is achieved, it has to be maintained in a changing environment.

First, the article focuses on data definitions (contract and asset information, endogenous and exogenous parameters, for example), existing standards, best practices that a company can put in place, and the data life cycle, which needs to be well understood and mastered by top management. Second, it elaborates on the advantages of launching data quality projects, building clear data quality governance, and producing optimal documentation. The article stresses the importance of starting with a well-defined and representative perimeter, so as to test the method and deliver within a constrained time frame, and only then extending the perimeter of data covered by the quality process. The article also considers existing solutions offered by the market, such as packaged data management tools.
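To make the idea of a repeatable data quality process concrete, the following is a minimal illustrative sketch (not taken from the article) of automated completeness and plausibility checks on contract records. All field names, check rules, and thresholds here are hypothetical assumptions; a real implementation would follow the company's own data definitions and governance.

```python
# Hypothetical sketch of a data quality check step, as might be run
# repeatedly over a defined perimeter of contract data.
from dataclasses import dataclass, field


@dataclass
class QualityReport:
    """Collects the outcome of one run of the quality checks."""
    total: int = 0
    issues: list = field(default_factory=list)  # (row index, description)

    @property
    def error_rate(self) -> float:
        return len(self.issues) / self.total if self.total else 0.0


def check_contracts(contracts):
    """Run simple completeness and plausibility checks on contract rows.

    Each contract is a dict; field names below are illustrative only.
    """
    report = QualityReport(total=len(contracts))
    for i, contract in enumerate(contracts):
        # Completeness: mandatory fields must be present and non-empty.
        for key in ("contract_id", "sum_insured", "inception_year"):
            if not contract.get(key):
                report.issues.append((i, f"missing {key}"))
        # Plausibility: the sum insured must be strictly positive.
        value = contract.get("sum_insured")
        if value is not None and value <= 0:
            report.issues.append((i, "non-positive sum_insured"))
    return report


# Example run on three hypothetical contract records.
contracts = [
    {"contract_id": "A1", "sum_insured": 100_000, "inception_year": 2015},
    {"contract_id": "A2", "sum_insured": -5, "inception_year": 2018},
    {"contract_id": "", "sum_insured": 50_000, "inception_year": 2020},
]
report = check_contracts(contracts)
```

Because the checks are scripted, the same run can be repeated after each data refresh, which is what turns data quality from a one-off clean-up into a maintained process.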