One of the forerunners of Data Science, from a structural perspective, is the well-known CRISP-DM (Cross Industry Standard Process for Data Mining), which is organized in six main steps: Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, and Deployment [10]; see Table 1, left column. Ideas like CRISP-DM are now fundamental for applied statistics.
In our view, the main steps in Data Science have been inspired by CRISP-DM and have evolved, leading to, e.g., our definition of Data Science as a sequence of the following steps: Data Acquisition and Enrichment, Data Storage and Access, Data Exploration, Data Analysis and Modeling, Optimization of Algorithms, Model Validation and Selection, Representation and Reporting of Results, and Business Deployment of Results. Note that topics in small capitals indicate steps where statistics is less involved, cp. Table 1, right column.
Usually, these steps are not conducted just once but are iterated in a cyclic loop. In addition, it is common to alternate between two or more steps. This holds especially for the steps Data Acquisition and Enrichment, Data Exploration, and Data Analysis and Modeling, as well as for Data Analysis and Modeling and Model Validation and Selection.
Table 1 compares the different definitions of the steps in Data Science. The relationship between the terms is indicated by horizontal blocks. The fact that the step Data Acquisition and Enrichment is missing in CRISP-DM indicates that this scheme deals with observational data only. Moreover, our proposal adds to CRISP-DM the steps Data Storage and Access and Optimization of Algorithms, in which statistics is less involved.
The list of steps in Data Science may be enlarged even further; see, e.g., Cao [12], Figure 6, cp. also Table 1, middle column, for the following recent list: Domain-specific Data Applications and Problems, Data Storage and Management, Data Quality Enhancement, Data Modeling and Representation, Deep Analytics, Learning and Discovery, Simulation and Experiment Design, High-performance Processing and Analytics, Networking, Communication, Data-to-Decision and Actions.
In principle, Cao's proposal and ours cover the same main steps. However, Cao's formulation is in part more detailed; e.g., our step Data Analysis and Modeling corresponds to Data Modeling and Representation, Deep Analytics, Learning and Discovery. Also, the vocabularies differ slightly, depending on whether the respective background is computer science or statistics. In that respect, note that Experiment Design in Cao's definition refers to the design of the simulation experiments.
In what follows, we highlight the role of statistics by discussing, in Sects. 2.1–2.6, all the steps in which it is heavily involved. These coincide with all steps of our proposal in Table 1 except those in small capitals. The corresponding entries, Data Storage and Access and Optimization of Algorithms, are mainly covered by informatics and computer science, whereas Business Deployment of Results is covered by business management.
Data acquisition and enrichment
Design of experiments (DOE) is essential for the systematic generation of data whenever the effect of noisy factors has to be identified. Controlled experiments are fundamental for robust process engineering aimed at producing reliable products despite variation in the process variables. On the one hand, even controllable factors contain a certain amount of uncontrollable variation that affects the response. On the other hand, some factors, like environmental factors, cannot be controlled at all. Nevertheless, at least the effect of such noisy influencing factors should be controlled by, e.g., DOE.
DOE can be utilized, e.g.,
- to systematically generate new data (data acquisition) [33],
- to systematically reduce data bases [41], and
- to tune (i.e., optimize) parameters of algorithms [1], i.e., to improve the data analysis methods themselves (see Sect. 2.3).
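As a minimal, purely illustrative sketch of the first point, the following Python code (assuming two hypothetical process factors at two coded levels each) generates a replicated full factorial design and estimates the main effects by least squares; factor names, effect sizes, and noise level are assumptions for illustration only.

# Minimal sketch: a replicated 2^2 full factorial design analysed by
# ordinary least squares (factors, effects, and noise are illustrative).
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Coded levels (-1 / +1) for two hypothetical factors, three replicates per cell.
levels = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
design = np.repeat(levels, repeats=3, axis=0)

# Simulated response: assumed true main effects 2.0 and -1.5 plus noise.
y = 10.0 + 2.0 * design[:, 0] - 1.5 * design[:, 1] + rng.normal(0.0, 0.5, len(design))

# Least-squares estimates of intercept and main effects.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated intercept and main effects:", np.round(coef, 2))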
Simulations [7] may also be used to generate new data. A tool for enriching data bases by filling data gaps is the imputation of missing data [31].
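As a minimal sketch of such an enrichment step, assuming a small numeric data set with missing entries, missing values can be filled in by mean imputation with scikit-learn; more refined (e.g., multiple) imputation methods would be used in practice [31].

# Minimal sketch: filling data gaps by mean imputation (illustrative data only).
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan],
              [5.0, 6.0]])

imputer = SimpleImputer(strategy="mean")   # column means replace missing entries
X_filled = imputer.fit_transform(X)
print(X_filled)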
Such statistical methods for data generation and enrichment need to be part of the backbone of Data Science. The exclusive use of observational data without any noise control distinctly diminishes the quality of data analysis results and may even lead to wrong interpretations of the results. The hope for "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete" [4] appears to be wrong because of the noise in the data.
Thus, experimental design is crucial for the reliability, validity, and replicability of our results.
Data exploration
Exploratory statistics is essential in data preprocessing for learning about the contents of a data base. The exploration and visualization of observed data was, in a way, initiated by John Tukey [43]. Since then, the most laborious part of data analysis, namely data understanding and transformation, has become an important part of statistical science.
Data exploration or data mining is fundamental for the proper usage of analytical methods in Data Science. The most important contribution of statistics here is the notion of the distribution. It allows us to represent variability in the data as well as (a-priori) knowledge about parameters, the latter being the concept underlying Bayesian statistics. Distributions also enable us to choose adequate subsequent analytic models and methods.
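As a small, purely illustrative sketch of this idea, the following code computes basic exploratory summaries of a simulated, stand-in feature and fits a normal distribution with scipy; the fitted distribution would then inform the choice of subsequent models.

# Minimal sketch: exploratory summaries and distribution fitting (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=500)   # stand-in for an observed feature

# Location, spread, and quantiles as basic exploratory summaries.
print("mean:", round(x.mean(), 2), "std:", round(x.std(ddof=1), 2))
print("quartiles:", np.percentile(x, [25, 50, 75]).round(2))

# Fit a normal distribution; the estimates inform later modeling choices.
mu_hat, sigma_hat = stats.norm.fit(x)
print("fitted normal: mu =", round(mu_hat, 2), "sigma =", round(sigma_hat, 2))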
Statistical data analysis
Finding structure in data and making predictions are the most important steps in Data Science. Here, in particular, statistical methods are essential since they are able to handle many different analytical tasks. Important examples of statistical data analysis methods are the following.
a) Hypothesis testing is one of the pillars of statistical analysis. Questions arising in data-driven problems can often be translated into hypotheses. Hypotheses are also the natural links between the underlying theory and statistics. Since statistical hypotheses are related to statistical tests, questions and theory can be checked against the available data. Multiple usage of the same data in different tests often makes it necessary to correct the significance levels. In applied statistics, correct multiple testing is one of the most important problems, e.g., in pharmaceutical studies [15]. Ignoring such corrections would lead to many more significant results than justified (see the code sketch after this list).
b) Classification methods are basic for finding and predicting subpopulations from data. In the so-called unsupervised case, such subpopulations are to be found from a data set without any a-priori knowledge of instances of these subpopulations. This is often called clustering.
In the so-called supervised case, classification rules should be found from a labeled data set for the prediction of unknown labels when only influential factors are available.
Nowadays, there is a plethora of methods for the unsupervised [22] as well as for the supervised case [2].
In the age of Big Data, a fresh look at the classical methods appears necessary, though, since most of the time the computational effort of complex analysis methods grows faster than linearly with the number of observations n or the number of features p. In the case of Big Data, i.e., if n or p is large, this leads to prohibitive computation times and to numerical problems. The result is both a comeback of simpler optimization algorithms with low time complexity [9] and a re-examination of the traditional methods of statistics and machine learning for Big Data [46]. A sketch contrasting the unsupervised and the supervised case is given after this list.
c) Regression methods are the main tool for finding global and local relationships between features when the target variable is measured. Depending on the distributional assumptions for the underlying data, different approaches may be applied. Under the normality assumption, linear regression is the most common method, while generalized linear regression is usually employed for other distributions from the exponential family [18]. More advanced methods comprise functional regression for functional data [38], quantile regression [25], and regression based on loss functions other than the squared error loss, e.g., Lasso regression [11, 21]. In the context of Big Data, the challenges are similar to those for classification methods, given large numbers of observations n (e.g., in data streams) and/or large numbers of features p. For the reduction of n, data reduction techniques like compressed sensing, random projection methods [20], or sampling-based procedures [28] enable faster computations. For decreasing the number p to the most influential features, variable selection or shrinkage approaches like the Lasso [21] can be employed, which keep the features interpretable (see the Lasso sketch after this list). (Sparse) principal component analysis [21] may also be used.
d) Time series analysis aims at understanding and predicting temporal structure [42]. Time series are very common in studies of observational data, and prediction is the most important challenge for such data. Typical application areas are the behavioral sciences and economics as well as the natural sciences and engineering. As an example, consider signal analysis, e.g., the analysis of speech or music data. Here, statistical methods comprise the analysis of models in the time and frequency domains. The main aim is the prediction of future values of the time series itself or of its properties. For example, the vibrato of an audio time series might be modeled in order to realistically predict the tone in the future [24], and the fundamental frequency of a musical tone might be predicted by rules learned from preceding time periods [29].
In econometrics, multiple time series and their co-integration are often analyzed [27]. In technical applications, process control is a common aim of time series analysis [34].
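To illustrate the multiple testing issue from a), the following minimal sketch runs 20 hypothetical one-sample tests on pure-noise data and compares the raw rejections with Bonferroni- and Benjamini-Hochberg-adjusted ones using statsmodels; every rejection based on a raw p-value here is a false positive.

# Minimal sketch: correcting significance levels for multiple testing
# (simulated pure-noise data; any rejection based on raw p-values is a false positive).
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
pvals = np.array([stats.ttest_1samp(rng.normal(size=30), 0.0).pvalue
                  for _ in range(20)])

print("raw rejections at the 5% level:", int((pvals < 0.05).sum()))
for method in ("bonferroni", "fdr_bh"):
    reject, p_adjusted, *_ = multipletests(pvals, alpha=0.05, method=method)
    print(method, "rejections:", int(reject.sum()))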
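For b), a minimal sketch on simulated data contrasts the unsupervised case (k-means clustering without labels) with the supervised case (a classification rule learned from labeled data); the data set and the model choices are illustrative only.

# Minimal sketch: unsupervised clustering vs. supervised classification
# on simulated data (data and models are illustrative).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Unsupervised: find subpopulations without using the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))

# Supervised: learn a classification rule from labeled data, predict unknown labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rule = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", round(rule.score(X_test, y_test), 3))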
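For c), the following sketch applies the Lasso [21] to simulated data with many features, only a few of which truly influence the response; the cross-validated Lasso shrinks most coefficients to zero and thereby selects an interpretable subset of features. The data-generating setup is an assumption for illustration.

# Minimal sketch: variable selection with the Lasso on simulated data
# (only the first three of fifty features influence the response).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                     # assumed influential features
y = X @ beta + rng.normal(scale=0.5, size=n)

lasso = LassoCV(cv=5).fit(X, y)                  # penalty chosen by cross-validation
selected = np.flatnonzero(lasso.coef_ != 0)
print("selected features:", selected)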
Statistical modeling
(a) Complex interactions between factors can be modeled by graphs or networks. Here, an interaction between two factors is modeled by a connection in the graph or network [26, 35]. The graphs can be undirected, as, e.g., in Gaussian graphical models, or directed, as, e.g., in Bayesian networks. The main goal of network analysis is to derive the network structure. Sometimes, it is necessary to separate (unmix) subpopulation-specific network topologies [49]. A sketch of estimating an undirected network structure is given after this list.
(b) Stochastic differential and difference equations can represent models from the natural and engineering sciences [3, 39]. Finding approximate statistical models that solve such equations can lead to valuable insights, e.g., for the statistical control of such processes in mechanical engineering [48]. Such methods can build a bridge between the applied sciences and Data Science (a small simulation sketch follows after this list).
(c) Local models and globalization. Typically, statistical models are only valid in subregions of the domain of the involved variables. In such cases, local models can be used [8]. The analysis of structural breaks can be basic for identifying the regions for local modeling in time series [5]. Also, the analysis of concept drift can be used to investigate model changes over time [30].
In time series, there are often hierarchies of more and more global structures. For example, in music, a basic local structure is given by the notes, and more and more global ones by bars, motifs, phrases, parts, etc. In order to find global properties of a time series, the properties of the local models can be combined into more global characteristics [47].
Mixture models can also be used to generalize local models to global ones [19, 23]; see the mixture-model sketch below. Model combination is essential for the characterization of real relationships, since standard mathematical models are often much too simple to be valid for heterogeneous data or larger regions of interest.
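For (a), a minimal sketch estimates an undirected (Gaussian graphical) network structure from simulated data with the graphical Lasso as implemented in scikit-learn; nonzero off-diagonal entries of the estimated precision matrix correspond to edges. The simulated dependence structure is an assumption for illustration.

# Minimal sketch: estimating a Gaussian graphical model (undirected network)
# with the graphical Lasso; the simulated dependence structure is illustrative.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(5)
n, p = 500, 5
X = rng.normal(size=(n, p))
# Introduce dependence: feature 2 is driven by features 0 and 1.
X[:, 2] = 0.7 * X[:, 0] + 0.7 * X[:, 1] + 0.3 * rng.normal(size=n)

model = GraphicalLassoCV().fit(X)
precision = model.precision_
edges = [(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(precision[i, j]) > 1e-6]
print("estimated edges:", edges)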
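For (b), the following minimal sketch simulates a simple stochastic differential equation (an Ornstein-Uhlenbeck process) with the Euler-Maruyama scheme; such simulated paths could then serve as the basis for fitting approximate statistical models. All parameter values are assumptions for illustration.

# Minimal sketch: Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
# dX_t = theta * (mu - X_t) dt + sigma dW_t (parameters are illustrative).
import numpy as np

rng = np.random.default_rng(2)
theta, mu, sigma = 1.0, 0.0, 0.3          # assumed mean reversion, level, noise scale
dt, n_steps = 0.01, 1000

x = np.empty(n_steps)
x[0] = 1.0
for t in range(1, n_steps):
    dw = rng.normal(scale=np.sqrt(dt))    # Brownian increment
    x[t] = x[t - 1] + theta * (mu - x[t - 1]) * dt + sigma * dw

print("final value:", round(x[-1], 3), "sample mean:", round(x.mean(), 3))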
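For (c), a Gaussian mixture model is one standard way to combine simple local (component) models into a global model for heterogeneous data; the following sketch on simulated two-subpopulation data is illustrative only.

# Minimal sketch: a two-component Gaussian mixture as a global model
# for heterogeneous (two-subpopulation) simulated data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
x = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 0.5, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("component means:", gmm.means_.ravel().round(2))
print("component weights:", gmm.weights_.round(2))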
Model validation and model selection
In cases where more than one model is proposed for, e.g., prediction, statistical tests for comparing the models are helpful for structuring them, e.g., with respect to their predictive power [45].
Predictive power is typically assessed by means of so-called resampling methods where the distribution of power characteristics is studied by artificially varying the subpopulation used to learn the model. Characteristics of such distributions can be used for model selection [7].
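A minimal sketch of such a resampling-based comparison uses cross-validation to contrast the predictive power of two candidate models on simulated data; the models and the data-generating process are illustrative choices only.

# Minimal sketch: resampling-based model comparison via cross-validation
# (two illustrative candidate regression models on simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=300)

for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(name, "mean CV R^2:", scores.mean().round(3), "+/-", scores.std().round(3))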
Perturbation experiments offer another possibility to evaluate the performance of models. In this way, the stability of the different models against noise is assessed [32, 44].
Meta-analysis as well as model averaging are methods to evaluate combined models [13, 14].
Model selection has become more and more important in recent years, since the number of classification and regression models proposed in the literature has been increasing ever more rapidly.
Representation and reporting
Visualization for interpreting the structures found and the storage of models in an easily updatable form are very important tasks in statistical analyses for communicating the results and safeguarding the deployment of the data analysis. Deployment is decisive for obtaining interpretable results in Data Science. It is the last step in CRISP-DM [10] and underlies the Data-to-Decision and Action step in Cao [12].
Besides visualization and adequate model storage, the main statistical tasks are the reporting of uncertainties and review [6].