Large-scale predictive modeling and analytics through regression queries in data management systems

Regression analytics has been the standard approach to modeling the relationship between input and output variables, while recent trends aim to incorporate advanced regression analytics capabilities within data management systems (DMS). Linear regression queries are fundamental to exploratory analytics and predictive modeling. However, computing their exact answers leaves a lot to be desired in terms of efficiency and scalability. We contribute a novel predictive analytics model and an associated statistical learning methodology, which are efficient, scalable and accurate in discovering piecewise linear dependencies among variables by observing only regression queries and their answers issued to a DMS. We focus on in-DMS piecewise linear regression, specifically on predicting the answers to mean-value aggregate queries, identifying and delivering the piecewise linear dependencies between variables to regression queries, and predicting the dependent data variables within specific data subspaces defined by analysts and data scientists. Our goal is to discover, only through query-answer pairs, a piecewise linear approximation of the underlying data function that is competitive with the best piecewise linear approximation to the ground truth. Our methodology is analyzed, evaluated and compared with exact solutions and near-perfect approximations of the underlying relationships among variables, achieving orders-of-magnitude improvement in analytics processing.


Introduction
Predictive Modeling and Analytics (PMA) concerns data exploration, model fitting, and regression model learning tasks used in many real-life applications [5,16,22,40,43]. The major goal of PMA is to explore and analyze multidimensional feature vector data spaces [1]. Recently, we have seen a rapid growth of large-scale advanced regression analytics in areas like deep learning for image recognition [22], genome analysis [43], aggregation analytics [9], earth analytics monitoring regions of interest from sensors' acoustic signals, and environmental monitoring for chemical-compound correlation analysis over a given geographical area.
The interactive predictive analytics process conducted by data scientists, engineers, and statisticians is as follows [24,41]: Analysts and data scientists interact with in-DMS analytics tools by issuing selection queries (i.e., dNN queries) to define real-valued data subspaces D ⊂ R d in a d-dimensional data space of interest for exploration and analysis. Then, the local dependencies among the features (dimensions) in those subspaces are extracted, and certain regression models are evaluated for their goodness of fit over those data subspaces D, i.e., by identifying the statistical model that is most likely to have generated the data in D. For concreteness, we focus on defining data subspaces of interest D(x 0 , θ) using a dNN query, notated by Q, as the convex subset of d-dim. data points (row vectors) x = [x 1 , . . . , x d ] ∈ R d lying within a hypersphere (ball) with center x 0 and scalar radius θ , i.e., D(x 0 , θ) contains all x ∈ R d with ‖x − x 0 ‖ 2 ≤ θ , where ‖ · ‖ 2 is the Euclidean norm; for an illustration, see Fig. 1.
A major challenge in PMA is to model and learn the very local statistical information of the data subspaces analysts are interested in, e.g., local regression coefficients and local data approximation functions, and then extrapolate such knowledge to predict this information for unexplored data subspaces [53]. Based on this abstraction of PMA, which is massively applied in the above-mentioned real-life applications, we focus on two important predictive analytics queries for in-DMS analytics: mean-value queries and linear regression queries.
Fig. 2 Mean-value Q1 and linear regression Q2 queries over the data space (u, x 1 , x 2 ).

Example 1 Consider the running example in Fig. 2. Seismologists issue a mean-value query Q1 over a 3-dim. space (u, x 1 , x 2 ) ∈ R 3 , which returns the mean value y of the feature u (seismic signal; P-wave speed) over those spatial points (x 1 , x 2 ) ∈ D(x 0 , θ) ⊂ R 2 (referring to surface longitude and latitude) lying within a disk of center x 0 and radius θ . The query Q1 is central to PMA because the average y is widely used as a linear sufficient statistic for the data subspace D, and it is the best linear predictor of the seismic signal output u over the region identified around the center point x 0 , i.e., for (x 1 , x 2 ) ∈ D(x 0 , θ) [32].
A linear regression query Q2 calculates the coefficients of a linear regression function within a defined data subspace. For example, in Fig. 2, consider geophysicists issuing queries Q2 over a 3-dim. space (u, x 1 , x 2 ) ∈ R 3 , which return the seismic primary-wave (P-wave) velocity u-intercept (b 0 ) and the coefficients b 1 and b 2 for x 1 (longitude) and x 2 (latitude), where the x = [x 1 , x 2 ] points belong to a subspace D(x 0 , θ) ⊂ R 2 . By estimating the linear coefficients, e.g., the parameter row vector b = [b 0 , b 1 , b 2 ], we can then interpret the relationships among the features x and u and assess the statistical significance of each feature of x within D(x 0 , θ). The output of the Q2 query captures the dependency of u on x, which in our example is approximated by a 2-dim. plane u ≈ b 0 + b 1 x 1 + b 2 x 2 , and quantifies how well the local linear model fits the data.
Query Q2 is important in PMA because it supports model fitting through, e.g., piecewise linear regression (PLR) [10], and provides confidence on whether linear models fit the underlying data well or not. To better illustrate the Q1 and Q2 queries, consider their corresponding SQL syntax. The mean-value query Q1 over the data subspace D(x 0 , θ) for the example data space (u, x 1 , x 2 ) in Fig. 2 (relation R(u, x 1 , x 2 )), with D represented by a disk of center x 0 = [x 0(1) , x 0(2) ] and radius θ , is:

Q1: SELECT AVG(R.u) AS y FROM R
WHERE SQRT((R.x 1 − x 0(1) ) * (R.x 1 − x 0(1) ) + (R.x 2 − x 0(2) ) * (R.x 2 − x 0(2) )) <= θ

where SQRT(x) is the square root of real number x. Consider now the regression query Q2 over subspace D(x 0 , θ) for the example data space (u, x 1 , x 2 ) in Fig. 2. Based on the XLeratorDB/statistics LINEST function in SQL Server 2008 syntax, Q2 first defines the subspace D(x 0 , θ), then stores the corresponding tuples (u, x 1 , x 2 ) with (x 1 , x 2 ) ∈ D(x 0 , θ) temporarily in a relation S(u, x 1 , x 2 ), and, finally, invokes the multivariate linear regression function LINEST over relation S. To evaluate queries Q1 and Q2, the system must access the data to establish the data subspace D(x 0 , θ), and then take the average value of u in that subspace for query Q1 (e.g., the average seismic signal speed in the San Andreas, CA, region) and invoke a multivariate linear regression algorithm [32] for Q2. The Q1 and Q2 query types are provided by all modern PMA systems like Spark analytics [47], MATLAB 1 and DMS systems, e.g., XLeratorDB 2 of Microsoft SQL Server 3 and Oracle UTL_NLA. 4 See Table 2 in "Appendix" for the notations and symbols used in this paper.
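To make the evaluation of Q1 and Q2 concrete, here is a minimal Python sketch of what the system computes once the subspace D(x 0 , θ) is established. The relation contents, the disk parameters, and the tiny normal-equation solver are illustrative assumptions, not the paper's implementation:

```python
import math

# Hypothetical in-memory relation R(u, x1, x2): a list of (u, x1, x2) tuples.
R = [(2.0, 0.0, 0.0), (4.0, 1.0, 0.0), (6.0, 0.0, 1.0), (100.0, 5.0, 5.0)]

def q1_mean_value(R, x0, theta):
    """Q1: average of u over the disk D(x0, theta)."""
    sel = [u for (u, x1, x2) in R
           if math.hypot(x1 - x0[0], x2 - x0[1]) <= theta]
    return sum(sel) / len(sel) if sel else None

def q2_linear_regression(R, x0, theta):
    """Q2: OLS coefficients [b0, b1, b2] of u ~ b0 + b1*x1 + b2*x2 over
    D(x0, theta), via the 3x3 normal equations (assumes enough points)."""
    S = [(u, 1.0, x1, x2) for (u, x1, x2) in R
         if math.hypot(x1 - x0[0], x2 - x0[1]) <= theta]
    # Normal equations: (X^T X) b = X^T u for design matrix X = [1, x1, x2].
    A = [[sum(r[i + 1] * r[j + 1] for r in S) for j in range(3)] for i in range(3)]
    b = [sum(r[0] * r[i + 1] for r in S) for i in range(3)]
    # Gaussian elimination with partial pivoting, then back-substitution.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef
```

For instance, q1_mean_value(R, (0.0, 0.0), 1.5) averages u over the tuples inside the disk, and q2_linear_regression returns the fitted coefficient vector [b 0 , b 1 , b 2 ]. Both must touch every tuple of R to establish D(x 0 , θ), which is exactly the per-query cost the rest of the paper seeks to avoid.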

Desiderata
We focus on in-DMS analytics with PMA using models and algorithms for the query types Q1 and Q2. The aim is to meet the following desiderata, providing answers to the following questions:

- D1 Are there linear dependencies among dimensions in unexplored data subspaces, and which are such subspaces?
- D2 If there are data subspaces where linear approximations fit well with high confidence, can the system provide these yet unknown linear regression models efficiently and scalably to the analysts?
- D3 If in some subspaces linear approximations do not fit well w.r.t. analysts' needs, can the system provide fitting models through piecewise local linear approximations?
- D4 A solution must meet scalability, efficiency, and accuracy desiderata as well.
Concerning desideratum D1 We study the regression problem, a fundamental inference task that has received tremendous attention in data mining, data exploration, predictive modeling, machine and statistical learning during the past fifty years. In a regression problem, we are given a set of n observations (x i , u i ), where the u i 's are the dependent variables (outputs) and the x i 's are the independent variables (inputs); for instance, refer to the 3-dim. input-output points (x, u) with input x = [x 1 , x 2 ] and output u shown in Fig. 2 in our Example 1.
We desire to model the relationship between inputs and outputs. The typical assumption is that there exists an unknown data function g that approximately models the underlying relationship and that the dependent observations are corrupted by random noise. Specifically, we assume that there exists a family of functions L such that for some data function g ∈ L the generative model

u i = g(x i ) + ε i , i = 1, . . . , n, (1)

holds, where the ε i 's are independent and identically distributed (i.i.d.) random variables drawn from a distribution, e.g., Gaussian.
Let us now move a step further to provide more information about the modeled relationship function g in (1). The derivation of several local linear approximations, as opposed to a single linear approximation over the whole data space, can provide more accurate and significant insights. The key issue to note here is that a global (single) linear approximation of g interpolating among all items of the whole data space D leaves, in general, much to be desired: The analysts presented with a single global linear approximation might have an inaccurate view due to missing 'local' statistical dependencies within unknown local data subspaces that comprise D. This will surely lead to prediction errors and approximation inaccuracies when issuing queries Q1 and Q2 to the DMS.

Example 2
Consider the input-output pairs in a (u, x) 2-dim. space in Fig. 3 (upper) and the actual data function u = g(x) (in red). A Q2 query issued over the data subspace D(x 0 , θ) will calculate the intercept b 0 and slope b 1 of the linear approximation u ≈ ĝ(x) = b 0 + b 1 x (the green line l) over those x ∈ D(x 0 , θ). Evidently, such a line shows a very coarse and unrepresentative dependency between output u and input x, since u and x do not linearly depend on each other within the entire data subspace D(x 0 , θ). The point is that we should obtain a finer-grained and more accurate dependency between output u and input x. The principle of local linearity [26] states that linear approximations of the underlying data function over certain data subspaces fit the global nonlinearity better than a single linear approximation over the entire data subspace of interest. In Fig. 3 (upper), we observe four local linear approximations l 1 , . . . , l 4 in the data subspace. Therefore, it would be preferable if, as a result of query Q2, the analysts were provided with the list of local line segments S = {l 1 , . . . , l 4 }, a.k.a. piecewise linear regression. These 'local' segments better approximate the linearity of output u. Moreover, in Fig. 3 (lower), the underlying data function u = g(x 1 , x 2 ) in the 3-dim. data space does not exhibit linearity over the entire (x 1 , x 2 ) plane. We can observe how the global linear relationship ĝ(x 1 , x 2 ) cannot capture the very local statistical dependencies between x = [x 1 , x 2 ] and u, which are better captured in certain data subspaces by local line segments ĝ k (x 1 , x 2 ).
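The contrast in Example 2 can be reproduced numerically. The sketch below (Python, with a synthetic tent-shaped data function standing in for g) fits one global OLS line and then one line per linear piece; the per-piece fits drive the error to zero while the global line cannot:

```python
def ols_1d(pts):
    """Closed-form OLS fit u ~ b0 + b1*x for a list of (x, u) pairs."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    mu = sum(u for _, u in pts) / n
    b1 = sum((x - mx) * (u - mu) for x, u in pts) / sum((x - mx) ** 2 for x, _ in pts)
    return mu - b1 * mx, b1

def mse(pts, b0, b1):
    """Mean squared residual of the line b0 + b1*x over pts."""
    return sum((u - (b0 + b1 * x)) ** 2 for x, u in pts) / len(pts)

# Tent-shaped ground truth: slope +1 on [0, 1], slope -1 on (1, 2].
data = [(i / 10.0, i / 10.0 if i <= 10 else 2 - i / 10.0) for i in range(21)]

b0, b1 = ols_1d(data)                       # one global line l
left = [(x, u) for x, u in data if x <= 1.0]
right = [(x, u) for x, u in data if x > 1.0]
err_global = mse(data, b0, b1)
# One line per piece (l1, l2): both local fits are exact, so the error vanishes.
err_local = (mse(left, *ols_1d(left)) * len(left)
             + mse(right, *ols_1d(right)) * len(right)) / len(data)
```

Here err_local is (numerically) zero while err_global stays bounded away from zero, which is precisely why a list S of local segments is preferable to a single line.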
Concerning desiderata D2 and D3 Consider the notion of the mean squared error (MSE) [26] to measure the performance of an estimator. Given the n samples (x i , u i ) and following the generative model in (1) with noise mean E[ε i ] = 0 and variance E[ε i 2 ] = σ 2 , our goal is to estimate a data function ĝ that is close to the true, unknown data function g with high probability over the noise terms ε i . We measure the distance between our estimate ĝ and the unknown function g with the MSE:

MSE(ĝ) = (1/n) Σ i=1..n (ĝ(x i ) − g(x i )) 2 .

In the general case, the data function g is nonlinear and satisfies some well-defined structural constraints. This has been extensively studied in a variety of contexts [11,17,37]. In our desiderata D2 and D3, we focus on the case where the data function g is nonlinear but can be approximated by a piecewise linear function with an unknown number K of unknown pieces (line segments): a K-piecewise linear data function is linear on each of the K subspaces that partition its domain. The case where K is fixed (given) has received considerable attention in the research community [54]. The special case of piecewise polynomial functions (splines) has also been used in the context of inference, including density estimation, histograms, and regression [39].
Let us now denote with L K the space of K -piecewise linear functions. While the ground truth may be close to a piecewise linear function, even in certain subspaces, generally we do not assume that it exactly follows a piecewise linear function (yet unknown). In this case, our goal is to recover a piecewise linear function that is competitive with the best piecewise linear approximation to the ground truth.
Formally, let us define the following problem, where we assume that the data function g in the generative model in (1) is an arbitrary function. We define

OPT K = min h∈L K (1/n) Σ i=1..n (h(x i ) − g(x i )) 2

to be the error of the best-fit K-piecewise linear function to g, and let g * be any K-piecewise linear function that achieves this minimum. Then, the central goal of desiderata D2 and D3 is to discover g * , achieving a MSE as close to OPT K as possible, provided that we observe only queries and their answers and have no access to the actual input-output pairs (x i , u i ).
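For intuition only: when the pairs (x i , u i ) are available and d = 1, OPT K can be computed exactly by a textbook dynamic program over segment boundaries. The sketch below is an illustrative baseline (naive, roughly O(K n 3 ) with per-segment refits), not the query-driven method of this paper, which never sees the (x i , u i ) pairs:

```python
def seg_err(pts, i, j):
    """Sum of squared residuals of the OLS line fitted to pts[i:j] (j exclusive)."""
    seg = pts[i:j]
    n = len(seg)
    mx = sum(x for x, _ in seg) / n
    mu = sum(u for _, u in seg) / n
    sxx = sum((x - mx) ** 2 for x, _ in seg)
    b1 = (sum((x - mx) * (u - mu) for x, u in seg) / sxx) if sxx else 0.0
    b0 = mu - b1 * mx
    return sum((u - (b0 + b1 * x)) ** 2 for x, u in seg)

def best_k_piecewise(pts, K):
    """cost[k][j]: minimal total squared error of a k-piece fit to the first
    j x-sorted points; each piece is one OLS line over a contiguous run."""
    pts = sorted(pts)
    n = len(pts)
    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(K + 1)]
    cost[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            cost[k][j] = min(cost[k - 1][i] + seg_err(pts, i, j)
                             for i in range(k - 1, j))
    return cost[K][n]   # total squared error; divide by n for the MSE

# Tent-shaped g: linear on [0, 1] and on (1, 2], so two pieces fit exactly.
pts = [(i / 10.0, i / 10.0 if i <= 10 else 2 - i / 10.0) for i in range(21)]
opt1 = best_k_piecewise(pts, 1)
opt2 = best_k_piecewise(pts, 2)
```

On this toy data, opt2 is (numerically) zero while opt1 is not, matching the definition of OPT K; the cubic running time of such exact baselines is one motivation for the approximate, query-driven approach.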

Remark 2
If the segments of the data function g were known a priori, the segmented regression problem could have been immediately reduced to K independent linear regression problems. In the general case, where the location of the segment boundaries and their corresponding coefficients are unknown, one needs to discover them using information provided only by the observations of input-output pairs (x i , u i ).
To address this problem, previous works [13,54], while statistically efficient, are computationally slow and prohibitive for large-scale data sets: the running time for a given data subspace scales at least quadratically with the number n of data points in the queried data subspace, thus being impractical for large data subspaces, let alone for the entire data space.
In our context, however, the analysts explore the data space only by issuing queries over specific data subspaces, thus, observing only the answers of the analytics queries. Specifically, the analysts do not know before issuing a query how the data function behaves within an ad hoc defined data subspace D(x 0 , θ). When a query Q2 is issued, it is not known whether the data function g behaves with the same linearity throughout the entire D(x 0 , θ) or not, and within which subspaces, if any, g changes its trend and u and x exhibit linear dependencies. Thus, the desiderata D2 and D3 focus on learning the boundaries of these local subspaces within D(x 0 , θ) and within each local subspace, discovering the linear dependency (segment) between output u and input x. This would arm analysts and data scientists with much more accurate knowledge on how the data function g(x) behaves within a given data subspace D(x 0 , θ). Hence, decisions on further data exploration w.r.t. complex model selection and/or validation can be taken by the analysts.
Concerning desideratum D4 Our motivation comes from the availability of past issued and executed queries over large-scale datasets. In the words of [34], 'As data grows, it may be beneficial to consider faster inferential algorithms, because the increasing statistical strength of the data can compensate for the poor algorithmic quality.' That is, it can be advantageous to sacrifice statistical prediction accuracy in order to achieve faster running times, because we can then achieve the desired error guarantee faster [35].
Our motivation rests on learning and predicting how the data function g behaves differently in certain data subspaces within a greater data subspace defined by past issued and executed queries. The state-of-the-art methods leave a lot to be desired in terms of efficiency and scalability. Our key insight and contribution in this direction lie in the development of statistical learning models which can deliver the above functionality for queries Q1 and Q2 in a way that is highly accurate and insensitive to the sizes of the underlying data spaces in number of data points, and thus scalable. The essence of the novel idea we put forward rests on exploiting previously executed queries and their answers obtained from the DMS/PMA system to train a model and then use that model to: deliver the list S of local linear models in (3), which best explain (fit) the underlying data function g over a given data subspace D(x 0 , θ); predict the answer y of any unseen mean-value query over a data subspace D(x 0 , θ); and predict the output data value û given an unseen input x based on the approximate data function ĝ.

Remark 3
In the prediction phase, that is, after training, no access to the underlying data is required, thus ensuring desideratum D4.

Challenges and organization
In Fig. 4, we show the system context within which our rationale and contributions unfold. A DMS serves analytics queries from a large user community. Over time, all users (data scientists, statisticians, analysts, applications) will have issued a large number of queries Q = {Q 1 , Q 2 , . . . , Q n }, and the system will have produced responses (e.g., y 1 , y 2 , . . . , y n for Q1 queries). Our key idea is to inject a novel statistical learning model and novel query processing algorithms in between the users and the DMS, which monitor queries and responses and learn to associate a query with its response. After training, say after the first m < n queries T = {Q 1 , . . . , Q m }, then for any new/unseen query Q t with m < t ≤ n, i.e., Q t ∈ V = Q\T = {Q m+1 , . . . , Q n }, our model approximates the data function g with an estimator function ĝ through a list S of local linear regression coefficients (line segments) that best fits the actual and unknown function g given the query Q t 's data subspace, and predicts its response ŷ t without accessing the DMS.

The efficiency and scalability benefits of our approach are evident. Computing the exact answers to queries Q1 and Q2 can be very time-consuming, especially for large data subspaces; if our model and algorithms can deliver highly accurate answers, query processing times will be dramatically reduced. Scalability is also ensured for two reasons. Firstly, in the data dimension, as Q1 and Q2 query executions (after training) do not involve data accesses, even dramatic increases in DB size do not impact query execution. Secondly, in the query-throughput dimension, avoiding DMS-internal resource utilization (which would be required if all Q1 and Q2 queries were executed over the DMS data) saves resources that can be devoted to supporting larger numbers of queries at any given point in time.
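As a toy illustration of the train-then-predict pipeline (not the paper's actual model), the following sketch treats each past pair (q i , y i ) as a training point in the query space and answers an unseen query from its nearest past queries, with no DMS access at prediction time; all numbers are hypothetical:

```python
import math

# Hypothetical past query-answer pairs T: q = (x1, x2, theta), y = the exact Q1 answer.
T = [((0.0, 0.0, 1.0), 4.0), ((1.0, 1.0, 1.0), 5.0), ((4.0, 4.0, 1.0), 9.0)]

def predict(q, train, k=2):
    """Predict the answer of an unseen query q as the average answer of its
    k nearest past queries in the query space (L2 distance over [x, theta]).
    A deliberately simple stand-in for the learned model of this paper."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], q))[:k]
    return sum(y for _, y in nearest) / len(nearest)

y_hat = predict((0.5, 0.5, 1.0), T)    # prediction without any DMS/data access
```

The key property shown is structural, not statistical: prediction cost depends only on the number of stored query-answer pairs, never on the size of the underlying dataset.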
Viewed from another angle, our contributions aim to exploit all the work performed by the DMS engine when answering previous queries, in order to facilitate accurate answers to future queries efficiently and scalably.
The research challenge of this rationale is the problem of non-fixed-design segmented regression exploiting only queries and answers, without accessing the underlying data anymore. Specifically, the challenges are:

- Identify the number and boundaries of the data subspaces with local linearities and deliver the local linear approximations for each subspace identified, i.e., predict the list S for an unseen query Q2, thus with no need to execute the query Q2. Clearly, this challenge further copes with the following problem: the boundaries of these data subspaces are unknown and cannot be determined even if we could scan all of the data, which in any case would be inefficient and less scalable.
- Predict the average value of answer y for an unseen query Q1, thus with no need to execute the query Q1.
It is worth noting that these cannot be achieved solely by accessing the data, as we need information on which are the users' ad hoc defined subspaces of interest. One could, in principle, provide this information a priori for all possible data subspaces of interest to analysts, i.e., consider all possible center points x 0 and all possible radius θ values. However, this is clearly impractical: this knowledge is obtained only after the analysts have issued queries over the data, thus reflecting their subspaces of interest and exploration.

The paper is organized as follows: Sect. 2 reports on the related work and states the major contribution of this work. In Sect. 3, we formulate our problems and provide preliminaries, while Sect. 4 provides our novel statistical learning algorithms for large-scale predictive modeling. Section 5 introduces our proposed query-driven methodologies, corresponding algorithms and analyses, while in Sect. 6, we report on the piecewise linear approximation and query-answer prediction methods. The convergence analysis and the inherent computational complexity are elaborated in Sect. 7, and we provide a comprehensive performance evaluation and comparative assessment of our methodology in Sect. 8. Finally, Sect. 9 summarizes our work and discusses our future research agenda in the direction of query-driven predictive modeling.

Related work
Outside DMS environments, statistical packages like MATLAB and R (https://www.r-project.org/) support fitting regression functions. However, their algorithms for doing so are inefficient and hardly scalable. Moreover, they lack support for relational and declarative Q1 and Q2 queries. So, if data are already in a DMS, they would need to be moved back and forth between external analytics environments and the DMS, resulting in considerable inconvenience and performance overheads (if at all possible for big datasets). At any rate, modern DMSs should provide analysts with rich support for PMA.
An increasing number of major database vendors include in their products data mining and machine learning analytics tools. PostgreSQL, MySQL, MADLib (over PostgreSQL) [21] and commercial tools like Oracle Data Miner, IBM Intelligent Miner and Microsoft SQL Server Data Mining provide SQL-like interfaces for analysts to specify regression tasks. Academic efforts include MauveDB [23], which integrates regression models into a DMS, while similarly FunctionDB [50] allows analysts to directly pose regression queries against a DMS. Also, Bismarck [27] integrates and supports in-DMS analytics, while [48] integrates and supports least squares regression models over training datasets defined by arbitrary join queries on database tables. All such works that also support Q1 and Q2 queries can serve as the DMS within Fig. 4. However, in the big data era, exact Q1 and Q2 computations leave much to be desired in efficiency and scalability, as the system must first execute the selection, establishing the data subspace per query, and then access all tuples in it to compute Q1 or Q2.
Apart from the standard multivariate linear regression algorithm, i.e., adopting ordinary least squares (OLS) for function approximation [29], the related literature contains more elaborate piecewise regression algorithms, e.g., [10,12,18,28], which can actually detect the nonlinearity of a data function g and provide multiple local linear approximations. Given an ad hoc exploration query over n points in a d-dim. space, standard OLS regression algorithms have asymptotic computational complexities of O(n 2 d 5 ) and O(nd 2 + d 3 ), respectively [32]. Therefore, OLS algorithms suffer from poor scalability and efficiency, especially as n gets larger and/or in high-dimensional spaces, as will be quantified in Sect. 8. Such methodologies incur these overheads for every query issued, which is highly undesirable. To address this, one might consider performing the data-space analysis only once, seeking to derive all local linear regression models for the whole data space, and using the derived models for all queries. Indeed, a literature survey reveals several methods like [12,42], which identify the nonlinearity of the data function g and provide multiple local linear approximations. Unfortunately, these methods are very computationally expensive and thus do not scale with the number n of data points. All these methods execute queries like query Q2 by going through a series of stages: partitioning the entire data space into clusters, assigning each data point to one of these clusters, and fitting a linear regression function to each of the clusters. However, data clustering cannot automatically guarantee that the within-cluster nonlinearity of the data function g is captured by a local linear fit. Hence, all these methods are iterative, repeating the above stages until convergence to minimize the residual estimation error of all approximated local linear regression functions.
For instance, the method in [10] clusters and regresses the entire data space against K clusters with a complexity of O(K (n 2 d + nd 2 )). Similarly, the incremental adaptive controller method [20] using self-organizing structures requires O(n 2 dT ) for training purposes. The same holds for the methods [12,19,20,28] that combine iterative clustering and classification for piecewise regression, also requiring O(n 2 dT ). These complexities indicate the high cost of linear regression methods when computing exact answers. As all these methods derive regression models over the whole data space, e.g., over trillions of points, the scalability and efficiency desiderata are missed, as Sect. 8 will showcase.
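The iterative cluster-assign-refit loop shared by these methods can be sketched as follows (a one-dimensional pure-Python toy with K given; the surveyed methods operate over the full multidimensional space and differ in their clustering and convergence details):

```python
def fit_line(seg):
    """Closed-form OLS fit u ~ b0 + b1*x for a list of (x, u) pairs."""
    n = len(seg)
    mx = sum(x for x, _ in seg) / n
    mu = sum(u for _, u in seg) / n
    sxx = sum((x - mx) ** 2 for x, _ in seg)
    b1 = (sum((x - mx) * (u - mu) for x, u in seg) / sxx) if sxx else 0.0
    return mu - b1 * mx, b1

def clusterwise_regression(pts, K, iters=20):
    """Iterate the two stages described above: assign each point to the line
    with the smallest squared residual, then refit one OLS line per cluster."""
    pts = sorted(pts)
    n = len(pts)
    # Initialize by splitting the x-sorted points into K contiguous runs.
    assign = [min(i * K // n, K - 1) for i in range(n)]
    for _ in range(iters):
        lines = []
        for k in range(K):
            members = [p for p, a in zip(pts, assign) if a == k]
            lines.append(fit_line(members) if members else (0.0, 0.0))
        new = [min(range(K),
                   key=lambda k: (u - (lines[k][0] + lines[k][1] * x)) ** 2)
               for x, u in pts]
        if new == assign:          # converged: assignments are stable
            break
        assign = new
    return lines, assign

# Tent-shaped data: u = x on [0, 1], u = 2 - x on (1, 2].
pts = [(i / 10.0, i / 10.0 if i <= 10 else 2 - i / 10.0) for i in range(21)]
lines, assign = clusterwise_regression(pts, 2)
```

Each iteration already touches every data point for every cluster, which is where the O(n 2 dT )-style costs quoted above come from once the refits and repeated passes are accounted for.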
This paper significantly extends our previous work [8] for scalable regression queries in the dimensions of mathematical analyses, fundamental theorems and proofs for vector quantization and piecewise multivariate linear regression (Sects. 5 and 6), theoretical analyses and proofs of the PLR data approximation and prediction error bounds (Sect. 5), analysis of the model convergence, variants of partial and global convergence of PLR data approximation, query answer prediction (Sects. 7 and 7.2), comprehensive sensitivity analysis, and comparative assessment of the proposed methodology (Sect. 8).
Our approach accurately supports predicting the result of mean-value Q1 queries, approximating the underlying data function g based on (multiple) local linear models of regression Q2 queries, and predicting the output data values given unseen inputs by estimating the underlying data function. It does so while achieving high prediction accuracy and goodness of fit, after training without executing Q1 and Q2, thus, without accessing data. This ensures a highly efficient and scalable solution, which is independent of the data sizes, as Sect. 8 will show.

Contribution
The contribution of this work lies in efficient and scalable models and algorithms to obtain highly accurate results for mean-value and linear regression queries and for PLR-based data function approximation. This rests on learning the principal local linear approximations of the data function g. Our approach is query-driven: past issued queries are exploited to partition the queried data space into subspaces in such a way as to minimize the induced regression error and the model fitting/approximation error. In each data subspace, we incrementally approximate the data function g based on a novel PLR approximation methodology only via query-answer pairs. Given a query over a data subspace D(x 0 , θ), we contribute how to:

- deliver a PLR-based approximation of the data function g over different unseen subspaces that best explains the underlying function g within D(x 0 , θ),
- predict the data output value û ≈ g(x) for each unseen data input x ∈ D(x 0 , θ),
- predict the average value y of the data output u = g(x) with x ∈ D(x 0 , θ).
The research outcome of this work is:

- A statistical learning methodology for query-driven PLR approximations of data functions over multi-dimensional data spaces. This methodology indirectly extracts information about the unknown data function g only by observing and learning the mapping between aggregation queries and their answers.
- A joint optimization algorithm for minimizing the PLR data approximation error and the answer prediction error in light of quantizing the query space.
- Convergence analyses of the methodology, including variants supporting partial and global convergence.
- Mathematical analyses of the query-driven PLR approximation and prediction error bounds.
- Mean-value and data-value prediction algorithms for unseen Q1 and Q2 queries.
- A PLR data approximation algorithm over data subspaces defined by unseen Q2 queries.
- Sensitivity analysis and comparative performance assessment with PLR and multivariate linear regression algorithms found in the literature, in terms of scalability, prediction accuracy, data-value prediction error, and goodness of fit of the PLR data approximation.

Definitions
Let x = [x 1 , . . . , x d ] ∈ R d denote a multivariate random data input row vector, and u ∈ R a univariate random output variable, with (unknown) joint probability distribution P(u, x). We notate g : R d → R with x → u the unknown underlying data function from input x to output u = g(x).

Definition 3
The p-norm (L p ) distance between two input vectors x and x′ from R d for 1 ≤ p < ∞ is ‖x − x′‖ p = (Σ i=1..d |x i − x′ i | p ) 1/p . Consider a scalar θ > 0, hereinafter referred to as radius, and a dataset B consisting of n input-output pairs (x i , u i ) ∈ B.

Definition 4
Given input x ∈ R d and radius θ , the data subspace D(x, θ) is the convex subspace of R d that includes the input vectors x i with ‖x i − x‖ 2 ≤ θ .

Definition 5 Given an input vector x ∈ R d and radius θ , the mean-value Q1 query over a dataset B returns the average y of those outputs u i = g(x i ) whose corresponding input vectors lie in D(x, θ), i.e., y = (1/n θ (x)) Σ x i ∈D(x,θ) u i , where n θ (x) is the cardinality of the set {x i : ‖x i − x‖ 2 ≤ θ}. We represent a query as the (d + 1)-dim. row vector q = [x, θ]; the space Q of such query vectors is referred to as the query vectorial space. We adopt the compact notation i ∈ [n] for i = 1, . . . , n.

Definition 7
The queries q = [x, θ] and q′ = [x′, θ′], which define the subspaces D(x, θ) and D(x′, θ′), respectively, overlap if the boolean indicator A(q, q′) ∈ {TRUE, FALSE} holds true, i.e., A(q, q′) = TRUE iff D(x, θ) ∩ D(x′, θ′) ≠ ∅, which is the case iff ‖x − x′‖ 2 ≤ θ + θ′. A query q = [x, θ] defines a data subspace D(x, θ) w.r.t. dataset B.
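Definitions 4 and 7 translate directly into code. A small Python sketch (the overlap test below, distance between centers at most the sum of the radii, is the standard ball-intersection condition and stands in for the elided formula):

```python
import math

def in_subspace(xi, x, theta):
    """Definition 4: x_i lies in D(x, theta) iff ||x_i - x||_2 <= theta."""
    return math.dist(xi, x) <= theta

def overlap(q1, q2):
    """Overlap indicator A(q, q'): the balls D(x, theta) and D(x', theta')
    intersect iff the distance between their centers is at most theta + theta'."""
    (x, theta), (x2, theta2) = q1, q2
    return math.dist(x, x2) <= theta + theta2
```

Overlapping queries are the ones whose answers carry shared information about g over the common region, which the methodology later exploits.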

Problem formulation
Formally, our challenges are:

- CH1: predict the aggregate output ŷ of a random query q = [x, θ]. Given an unknown query function f that maps a query q = [x, θ] ∈ R d × R to its answer y, we seek a query-PLR estimate f̂ ∈ L K to predict the actual answer y = f (q) = f (x, θ) of an unseen query q, i.e., ŷ = f̂ (q) = f̂ (x, θ). (We deliberately proceed with the abuse of notation f (q) = f (x, θ) to make the input arguments of the query-PLR function f explicit.)
- CH2: identify the local linear approximations of the unknown data function u = g(x) over the data subspaces D(x, θ) defined by unseen queries q = [x, θ]. Based on the query-PLR estimate f̂, we seek a statistical learning methodology F to extract a data-PLR estimate ĝ ∈ L K from the query-PLR estimate f̂, notated by ĝ = F(f̂ ), to fit the data function g.
- CH3: predict the data output û of a random input data vector x based on the data-PLR estimate ĝ, i.e., û = ĝ(x).
Consider the challenge CH1 and let us adopt the squared prediction error function (y − f (x, θ)) 2 for penalizing errors in the prediction of the aggregate output y given a mean-value query q = [x, θ]. This leads to a criterion for choosing a query-PLR function f which minimizes the Expected Prediction Error (EPE)

EPE( f ) = E[(y − f (x, θ)) 2 ], (5)

for all possible query points x ∈ R d and query radii θ ∈ R.
To calculate the expectation in (5), we rely on two standard results; for the proofs of Theorems 1 and 2, refer to [32]. According to Theorem 1, the aggregate output y can be decomposed as y = E[y|x, θ] + ε, that is, into a conditional expectation function E[y|x, θ], hereinafter referred to as the query regression function, which is explained by x and θ , and a leftover (noisy) component ε which is orthogonal to (i.e., uncorrelated with) any function of x and θ . In our context, the query regression function is a good candidate for minimizing the EPE in (5), envisaged as a local representative value for the answer y over the data subspace D(x, θ). Theorem 2 then asserts that the conditional expectation function is the best predictor of the answer y given D(x, θ), i.e., f (x, θ) = E[y|x, θ] minimizes (5).
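Theorem 2's message, that the conditional mean minimizes the expected squared prediction error, can be checked with a quick Monte Carlo sketch over simulated query answers (all numbers hypothetical):

```python
import random

random.seed(0)
# Simulated answers y = mu + eps for one fixed query region; true mean mu = 5.0.
mu = 5.0
ys = [mu + random.gauss(0.0, 1.0) for _ in range(10000)]

def emp_sq_err(pred):
    """Empirical expected squared prediction error of a constant predictor."""
    return sum((y - pred) ** 2 for y in ys) / len(ys)

# Among candidate constant predictors, the one at the (conditional) mean wins.
best_err, best_pred = min((emp_sq_err(c), c) for c in [3.0, 4.0, 5.0, 6.0, 7.0])
```

The winner is the candidate at the true mean, and its residual error approaches the irreducible noise variance σ 2 , consistent with the decomposition in Theorem 1.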

Remark 4
We rely on Theorems 1 and 2 to build our statistical learning methodology F for estimating a query-PLR f̂; then, based on Theorem 1, we estimate the data-PLR ĝ only through f̂ and the query-answer pairs (q, y) = ([x, θ], y), without accessing the actual data pairs (x, u).
That is, the answer y is the conditional expectation of the output over D(x, θ). However, the number of data points n_θ(x) in D(x, θ) is finite; thus, this conditional expectation is approximated by averaging all data outputs u_i conditioned on x_i ∈ D(x, θ). Moreover, the answer y of a query q refers to the best regression estimator over D(x, θ). Each query center x ∈ D(x, θ) and corresponding answer y provide information to locally learn the dependency between output u and input x, i.e., the data function g. In this context, similar queries w.r.t. the L₂ distance provide insight into the data function g over overlapping data subspaces.
The query-PLR estimate f̂(x, θ) obtained from challenge CH1 is used for estimating the multiple local line segments (i.e., the local linear regression coefficients, intercept and slope) of the data-PLR estimate ĝ. This is achieved by a novel statistical learning methodology F, which learns from a continuous query-answer stream {(q_1, y_1), …, (q_t, y_t)} generated through the interactions between the users and the system. We can then formulate our problems as follows:

Problem 1 Given a finite number of query-answer pairs, approximate the query-PLR function f̂(x, θ) and predict the aggregate answer ŷ of an unseen query q = [x, θ].

Problem 2
Given only the query-PLR function f̂(x, θ) from Problem 1, approximate the data-PLR function ĝ(x) and predict the data output û of an unseen data input x.

Incremental learning and stochastic gradient descent
Stochastic gradient descent (SGD) [14] is an optimization method for minimizing an objective function E(α), where α is a parameter vector and the optimal parameter α* minimizes E. SGD converges quickly to α* by adjusting the current estimate of α in the direction (negative gradient −∇E) that improves the minimization of E, gradually changing α upon reception of each new training sample. The standard gradient descent algorithm updates the parameter α in E(α) as

α_{t+1} = α_t − η ∇E(α_t),

where the expectation in E and its gradient are approximated by evaluation over all training pairs and η ∈ (0, 1) is a learning rate. SGD, on the other hand, computes the gradient of E using only a single training pair at step t; that is, we incrementally optimize the objective E. The update of the parameter α_t at step t is given by

α_{t+1} = α_t − η_t ∇E_t(α_t),

where E_t is the objective evaluated on the t-th training pair. The learning rate {η_t} ∈ (0, 1) is a step-size schedule, a slowly decreasing sequence of scalars satisfying

∑_t η_t = ∞ and ∑_t η_t² < ∞.

Usually, we adopt a hyperbolic schedule from [14], e.g., η_t ∝ 1/t.
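The SGD update with a hyperbolic step-size schedule can be sketched as follows: a minimal Python illustration on a synthetic one-parameter least-squares objective (the data-generating model and all constants here are assumed for illustration only, not part of the paper's method).

```python
import numpy as np

# Minimal SGD sketch: minimize E[(y - a*x)^2] over streaming pairs (x, y)
# with a hyperbolic step-size schedule eta_t = 1/(1 + t), which satisfies
# sum(eta_t) = inf and sum(eta_t^2) < inf.
rng = np.random.default_rng(0)
a_true = 2.0               # assumed ground-truth parameter
a = 0.0                    # current estimate alpha_t
for t in range(5000):
    x = rng.uniform(-1.0, 1.0)
    y = a_true * x + 0.01 * rng.normal()   # single noisy training pair
    eta = 1.0 / (1.0 + t)                  # hyperbolic schedule
    grad = -2.0 * (y - a * x) * x          # gradient of the single-pair loss
    a -= eta * grad                        # alpha_{t+1} = alpha_t - eta_t * grad
```

After the stream is exhausted, `a` is close to the optimal parameter `a_true`, with the decreasing step size damping the gradient noise.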

Adaptive vector quantization
Vector quantization (VQ) refers to a data partitioning process. A prototype w_k represents a subspace of R^d and behaves as a quantization vector. Given a distortion measure, a common measure for the performance of a VQ v is the expected distortion

E[‖x − w(x)‖²] = ∫ ‖x − w(x)‖² dF(x),   (8)

where F(x) is the cumulative distribution of the vectors in R^d and w(x) refers to the prototype selected by the VQ v(x). For each random vector x, the optimal VQ that minimizes (8) selects the best-matched prototype from the codebook w.r.t. the Euclidean distance:

v(x) = arg min_k ‖x − w_k‖₂.

An AVQ algorithm [2,49,57] is a VQ algorithm that learns incrementally: only the closest prototype w_j to the input vector x, i.e., v(x) = j, changes in response to x, which is observed only once. During the incremental partitioning of R^d, a stream of input vectors x is projected onto their closest prototypes (a.k.a. winners), which in turn adaptively move around the space to form optimal partitions (subspaces of R^d) that minimize the Expected Quantization Error (EQE)

E[‖x − w_j‖²],   (10)

with winner prototype w_j such that j = v(x).
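A minimal online VQ sketch of the winner-take-all update, assuming a fixed number of prototypes and uniformly distributed inputs (both assumptions are illustrative only; the paper's AVQ variant grows the codebook adaptively):

```python
import numpy as np

# Online vector quantization sketch: only the winner prototype moves toward
# each incoming vector, incrementally reducing the expected quantization error.
rng = np.random.default_rng(1)
prototypes = rng.uniform(0, 1, size=(4, 2))   # K = 4 prototypes in R^2
for t in range(2000):
    x = rng.uniform(0, 1, size=2)             # streaming input vector
    j = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # winner v(x) = j
    eta = 1.0 / (2.0 + t)                      # decreasing learning rate
    prototypes[j] += eta * (x - prototypes[j]) # move the winner toward x
```

Each prototype converges toward the centroid of the region it wins, which is exactly the condition minimizing the expected distortion in (8).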

Methodology overview
We first solve Problem 1 to approximate the query function f through a query-PLR function f̂. Then, we use f̂ to address Problem 2, i.e., to approximate the data function g by a data-PLR function ĝ. Concerning Problem 1 and Theorem 2, we approximate f(x, θ) = E[y|x, θ], which minimizes (5). However, the answer y in (4) involves the average of the outputs over the data subspace D(x, θ) for an arbitrary radius θ and L_p norm, expressed by definition as

y = (1/n_θ(x)) ∑_{x_i ∈ D(x,θ)} u_i,

where n_θ(x) varies depending on the location of the query point x in the input data space R^d and on the query radius θ. Moreover, the nonlinearity of the function g over certain subspaces is propagated to f by the definition of the aggregate answer y in (4). Hence, we must identify those data subspaces where the data function g behaves almost linearly, which should be reflected in the approximation of f by f̂. This provides the key insight for approximating both f and g through a family of PLR functions by learning the unknown finite set of local linear functions. We call these local linear functions local linear mappings (LLMs) and derive the corresponding query-space LLMs and data-space LLMs for the query function f and the data function g, respectively.
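The mean-value query answer above can be made concrete with a small sketch: the answer y of a query [x, θ] is the average output u_i over the data points that fall inside the ball D(x, θ). The data function g(x_1, x_2) = x_1(x_2 + 1) is borrowed from the paper's own running example; the sample size and query are illustrative assumptions.

```python
import numpy as np

# Mean-value query sketch: the answer y averages the outputs u_i over the
# points x_i falling inside the ball D(x, theta) under the L2 norm.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(10_000, 2))   # data inputs in R^2
u = X[:, 0] * (X[:, 1] + 1.0)             # outputs u = g(x1, x2) = x1(x2 + 1)

def answer(x, theta):
    mask = np.linalg.norm(X - x, axis=1) <= theta  # points in D(x, theta)
    return u[mask].mean(), int(mask.sum())         # (y, n_theta(x))

y, n = answer(np.array([0.5, 0.5]), 0.2)
```

Note how n_θ(x) depends on both the query location and the radius, which is why y is only a finite-sample approximation of the conditional expectation.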
In Problem 1, we approximate the query function f(x, θ) with a set of query-space LLMs (or query-LLMs), each of which is constrained to a local region of the query space Q defined by similar queries w.r.t. the L₂ distance. Similar queries are those queries with similar centers x and similar radii θ. Our general idea for these query-space LLMs is the quantization of the query space Q into a finite number of query subspaces Q_k such that the query function f can be linearly approximated by a query-LLM f_k, k = 1, …, K, i.e., the k-th PLR segment. These query subspaces may be rather large in areas of the query vector space Q where the query function f indeed behaves approximately linearly, and must be smaller where this is not the case. The total number K of such query subspaces depends on the desired approximation quality (goodness of fit) and the query-answer prediction accuracy, and may be limited by the number of available issued queries, since overfitting might occur.
Fundamentally, we incrementally quantize the query space Q over a series of issued queries through quantization vectors, hereinafter referred to as query prototypes, in Q. Then, we associate each query subspace Q k with a query-LLM f k in the query-answer space, where the query function f behaves approximately linear.
In Problem 2, principally each query subspace Q k is associated with a data subspace D k , i.e., for a query q ∈ Q k ⊂ R d+1 , its corresponding query point x ∈ D k ⊂ R d . This implies that the input vector x (of the query q) is constrained to be drawn only from the k-th data subspace D k . Based on that association, we use the query-LLM f k to estimate the data-LLM g k , i.e., estimate the local intercept and slope of the data function g over the k-th data subspace D k .

Query local linear mapping
A query-LLM f_k approximates the dependency between the aggregate answer y and the query q over the query subspace Q_k defined by similar queries under the L₂ distance. For modeling a query-LLM, we adopt the multivariate first-order Taylor expansion of the scalar-valued query function

f(q) ≈ f(q_0) + ∇f(q_0)ᵀ(q − q_0),

where ∇f(q_0) is the gradient of the query function f at the query vector q_0, comprising the partial derivatives ∂f/∂x and ∂f/∂θ. As will be analyzed and elaborated later, the query vector q_0 = [x_0, θ_0] and the gradient of f at q_0 are not randomly selected. Instead, the proposed methodology F attempts to find the query vector q_0 and the gradient of f at q_0 that satisfy the following optimization properties:

-(OP1) minimization of the EPE in (5), as stated in challenge CH1;
-(OP2) minimization of the EQE in (10);
-(OP3) extraction of the data-LLM estimator from the query-LLM estimator such that the query-driven PLR ĝ fits well the underlying data function g, as stated in challenges CH2 and CH3.
As will be proved later by Theorems 7, 8, and 9, we require a query-LLM f_k to derive from a specific Taylor approximation around the local expectation query w_k = E[q | q ∈ Q_k]. Specifically, the coefficients of the query-LLM f_k that satisfy the optimization properties OP1, OP2, and OP3 are:

-The local intercept, with two components: the local expectation of the answer y, notated by the scalar coefficient y_k; and the local expectation query w_k = [x_k, θ_k]. Hereinafter, w_k is referred to as the prototype of the query subspace Q_k.
-The local slope b_k, standing for the gradient of f at w_k.
Based on these constructs that satisfy OP1, OP2, and OP3, the query-LLM f_k is rewritten as

f_k(x, θ) = y_k + b_kᵀ([x, θ] − w_k).

Up to now, our challenge is, for each query-LLM f_k, to estimate the parameter α_k = (y_k, b_k, w_k) in light of minimizing the EPE, as stated in OP1, through the constrained optimization problem (15): minimize E[(y − f_k(x, θ))² | q ∈ Q_k] subject to y_k = f_k(x_k, θ_k), ∀k ∈ [K].

Remark 5
It is worth mentioning that the constraint y_k = f_k(x_k, θ_k), ∀k ∈ [K], in the optimization problem (15) requires that, in each query subspace Q_k, the corresponding query-LLM f_k refers to a (hyper)plane that minimizes the EPE; moreover, given a query q whose query point is the centroid of the corresponding data subspace D_k and whose radius θ = E[θ | x ∈ D(x_k, θ_k), q ∈ Q_k] is the mean radius of all the queries from Q_k, it secures that f_k supports OP2 and OP3.
However, we need to further optimize α k to satisfy also the optimization properties OP2 and OP3.

Our statistical learning methodology
Our statistical learning methodology F departs from the optimization problem in (15) to additionally support the optimization properties OP2 and OP3. Our methodology is formally based on a joint optimization problem of optimal quantization and regression. This is achieved by incrementally identifying within-subspaces linearities in the query space and then estimating therein the query-LLM coefficients such that we preserve the optimization properties OP1, OP2, and OP3.

Joint quantization-regression optimization for query-LLMs
Firstly, we should identify the subspaces Q k , i.e., determine their prototypes w k , their number K , and their coefficients y k and b k , in which the query function f can be well approximated by LLMs. We identify the prototypes w k (associated with Q k , k ∈ [K ]) by incrementally partitioning the query space Q = ∪ K k=1 Q k . Before elaborating on our methodology, we provide an illustrative example on query space quantization.
Each query is represented by a disk with center x_t and radius θ_t. Figure 5(lower) shows the five query prototypes w_k = [x_k, θ_k], k ∈ [5], projected onto the 2D input space. Note that the centers x_k of the prototypes w_k correspond to Voronoi sites under the L₂ distance on the data space.
The introduction of query space quantization before predicting the query's answer, i.e., before regressing the aggregate answer y on the query vector q, raises a natural and fundamental question.

Question: Since query quantization loses information and thus likely damages the prediction performance of the query function approximation f̂, would it not be better to always proceed with regression based on the original, un-quantized query vectors?

Answer: One can consider a VQ as part of the regression estimate function f̂. The overall goal is not purely regression, i.e., query-answer prediction using the query function f, but also PLR fitting of the underlying data function g. The VQ yields several benefits, from constructing the query prototypes {w_k}_{k=1}^K of the query-LLMs f_k, i.e., minimizing the EQE (OP2), to constructing the intercepts and slopes {(y_k, b_k)}_{k=1}^K, which are needed to minimize the EPE (OP1) and also to derive the data-LLMs g_k (OP3). Moreover, based on Theorem 7, the query prototypes w_k converge to the optimal vector prototypes only when adopted by the VQ; specifically, by an incrementally growing AVQ, as will be elaborated later. The inclusion of estimating the query prototypes w_k provides a methodology not suggested by the regression/prediction goal alone, which nonetheless allows one to weight the prediction performance as the more important criterion and which may eventually yield better regression algorithms. In our case, however, the goal is to satisfy the optimization properties OP1, OP2, and OP3 simultaneously with one model; this can be viewed as finding an algorithm for jointly designing a VQ and a PLR-based predictor that yields performance close to that achievable by an optimal PLR-based predictor operating on the original answer-query pairs and input-output data pairs, as will be shown in the performance evaluation in Sect. 8.
Given a finite and unknown number of query prototypes K and a VQ v(q) over the query space, the query quantization performance measured by the mean squared distortion error J is given by

J = ∑_{k=1}^K E[‖q − w_k‖² | v(q) = k] P(v(q) = k),

where P(v(q) = k) is the probability that the VQ maps the query q to the query prototype w_k. We obtain the minimum value of J, which is the lower bound achievable, if each query prototype w_k is chosen by the VQ to be the centroid given by the conditional expectation

w_k = E[q | v(q) = k].

In parallel, within each Q_k, we incrementally estimate the PLR coefficients (y_k, b_k) of each query-LLM f_k. These coefficients are learned only from similar query-answer pairs whose queries belong to the query subspace Q_k.
We propose a hybrid model by partitioning Q into K (unknown) subspaces Q_k, i.e., unsupervised learning of the w_k to minimize the EQE and supervised learning of the coefficients y_k and b_k to minimize the EPE. The idea is that each query subspace Q_k associates the LLM f_k with the query prototype w_k, as shown in Fig. 6 (see Example 4), conditioned on the result of the VQ. In other words, the regression performance is captured by the conditional EPE

H = ∑_{k=1}^K E[(y − f_k(q))² | v(q) = k] P(v(q) = k).

We obtain the minimum value of H, which is the lower bound achievable, if the prediction for each query is derived by the k-th query-LLM f_k corresponding to the closest query prototype w_k, i.e., the VQ chooses k = v(q) such that w_k is the winner prototype.
The joint quantization-regression optimization incrementally minimizes the two objective functions, the EQE J and the conditional EPE H, upon receiving a new query-answer pair (q, y); that is, it constitutes our constrained joint optimization problem in (21) and (22). The objective function in (21) corresponds to the optimal partitioning of the query space into K partitions, each with a prototype (OP2). The objective function in (22) corresponds to a conditional EPE conditioned on the k-th query prototype w_k, which is the closest to the query q (OP1). The constraints ensure the derivation of the data-LLM ĝ_k from the query-LLM f_k, as will be shown later (OP3). The quantization of the query space Q operates as a mechanism to project an unseen query q onto the closest query subspace Q_k w.r.t. the L₂ distance from the prototype w_k, wherein we learn the dependency of the aggregate answer y on the query point x and radius θ. Figure 6 depicts the association from the query space to the 3D data space. A query prototype w_j, a disk on the input space (x_1, x_2), is now associated with the query-LLM f_j(x, θ) and its corresponding regression plane u_j = f_j(x, θ_j) on the data space (u, x_1, x_2), which approximates the actual data function u = g(x_1, x_2) = x_1(x_2 + 1). Note that in each local plane we learn the local intercept y_j and slope b_j, where x_j is the representative of the data subspace D_j (see Theorems 7, 8 and 9).
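The two objectives can be estimated empirically from a sample of queries. The following sketch evaluates J and H for hypothetical prototypes w_k and query-LLM coefficients (y_k, b_k); the surrogate answer function and all constants are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Empirical estimates of the two objectives on a query sample:
# J - mean squared distortion of queries to their winning prototype (EQE),
# H - mean squared prediction error of the winning query-LLM (conditional EPE).
rng = np.random.default_rng(3)
W = rng.uniform(0, 1, size=(5, 3))     # K = 5 query prototypes in R^(d+1), d = 2
Y = rng.uniform(0, 2, size=5)          # hypothetical local intercepts y_k
B = rng.normal(size=(5, 3))            # hypothetical local slopes b_k

Q = rng.uniform(0, 1, size=(200, 3))   # sample queries q = [x1, x2, theta]
y = Q[:, 0] * (Q[:, 1] + 1.0)          # surrogate answers for the sketch

j = np.argmin(((Q[:, None, :] - W[None]) ** 2).sum(-1), axis=1)  # v(q) per query
J = ((Q - W[j]) ** 2).sum(-1).mean()                  # EQE estimate
pred = Y[j] + ((Q - W[j]) * B[j]).sum(-1)             # f_j(q) = y_j + b_j.(q - w_j)
H = ((y - pred) ** 2).mean()                          # conditional EPE estimate
```

In the actual methodology, both quantities decrease jointly as the prototypes and LLM coefficients are updated online.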

Data-LLM function derivation from query-LLM function
Concerning Problem 1, the prediction of the aggregate output y of an unseen query q is provided by neighboring query-LLM functions f_k, as will be elaborated later. Concerning Problem 2, we derive the linear data-LLM function g_k (intercept and slope) between the output u and the input x over the data subspace D_k given the query-LLM function f_k. Then, we approximate the PLR estimate of the data function g by interpolating many data-LLMs. Based on Theorems 1 and 2, we obtain that the data output u = g(x) = E[u|x] + ε. In that context, we can approximate the data function g(x) over the data subspace D_k, i.e., the PLR segment g_k, from the corresponding query-LLM function f_k conditioned on the mean radius θ_k.

Theorem 3
The data function g(x) in the data subspace D_k is approximated by the linear regression function

g_k(x) = f_k(x, θ_k),

i.e., the query-LLM f_k evaluated at the mean radius θ_k.

Proof For any random variables u, x, θ, and y, we obtain by the definition of the answer y and the independence assumption of x and θ that E[y | x, θ] approximates E[u | x] over D_k. Through the decomposition in Theorem 1, we thus approximate the dependency of u on x through the conditional expectation function, evaluated at the mean radius θ_k. □

Example 5 We provide the visualization in Fig. 7 to better explain and give insight into the derivation of the data-LLMs from the query-LLMs. Specifically, Fig. 7 interprets the mapping methodology F from the query-LLMs to the data-LLMs after obtaining the optimal values for the parameters that satisfy the optimization properties OP1, OP2, and OP3. We observe three regression planes in the query-answer space (x, θ, y), which are approximated by the three query-LLMs f_1, f_2 and f_3. This indicates the PLR approximation of the query function f. Now, focus on the regression plane of a query-LLM f_k: as proved in Theorem 7, the radius θ_k is the expected radius of all the queries q within Q_k. The data-LLM is represented by the linear regression approximation g_k(x) lying on the regression plane defined by the query-LLM f_k. We obtain the PLR data approximation ĝ over the whole data input space by following the data-LLMs g_k over the planes defined by the query-LLMs f_k, as illustrated in the inner plot in Fig. 7. As we move from one query-LLM f_{k−1} to the next one f_k, we derive the corresponding data-LLMs g_{k−1} and g_k by setting θ = θ_{k−1} and θ = θ_k in the query-LLM definitions (linear models), such that g_{k−1}(x) = f_{k−1}(x, θ_{k−1}) and g_k(x) = f_k(x, θ_k), respectively. Hence, based on this trajectory, we derive the PLR estimate ĝ of the underlying data function g.

Remark 6
It is worth noting that the approximation of the data function g based on Theorem 3 is achieved only through the knowledge extracted from answer-query pairs, and not by accessing the data points.

Query-driven statistical learning methodology
In this section, we propose a query-driven statistical learning algorithm for our methodology, through which all the query-LLM parameters α_k minimize both (21) and (22). Then, we provide the PLR approximation error bound of the estimate functions f_k of the query function f and the impact of our VQ algorithm on this error. Let us focus on the EQE J in (21) and liaise with Example 3 (Fig. 5). We seek the best possible approximation of a random query q out of the set {w_k}_{k=1}^K of finitely many query prototypes. We consider the nearest-neighbor projection of the query q onto a query prototype w_j, which represents the j-th query subspace. We incrementally minimize the objective function J in the presence of a random query q and update the winning prototype w_j accordingly. However, the number of query subspaces and, thus, of query prototypes K > 0, is completely unknown and not necessarily constant. The key problem is to decide on an appropriate value of K. A variety of AVQ methods exists in the literature; however, they are not suitable for incremental implementation, because K must be supplied in advance.
We propose a conditionally growing AVQ algorithm under L 2 distance in which the prototypes are sequentially updated with the incoming queries and their number is adaptively growing, i.e., the number K increases if a criterion holds true. Given that K is not available a-priori, our VQ minimizes the objective J with respect to a threshold value ρ. This threshold determines the current number of prototypes K . Initially, the query space has a unique (random) prototype, i.e., K = 1. Upon the presence of a query q, our algorithm first finds the winning query prototype w j and then updates the prototype w j only if the condition q − w j 2 ≤ ρ holds true. Otherwise, the query q is currently considered as a new prototype, thus, increasing the value of K by one. Through this conditional quantization, our VQ algorithm leaves the random queries to self-determine the resolution of quantization. Evidently, a high ρ value would result in coarse query space quantization (i.e., low resolution partition) while low ρ values yield a fine-grained quantization of the query space. The parameter ρ is associated with the stability-plasticity dilemma a.k.a. vigilance in Adaptive Resonance Theory [30]. In our case, the vigilance ρ represents a threshold of similarity between queries and prototypes, thus, guiding our VQ algorithm in determining whether a new query prototype should be formed.
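The conditionally growing AVQ described above can be sketched as follows; the learning-rate schedule and query distribution are illustrative assumptions.

```python
import numpy as np

# Conditionally growing AVQ sketch: the winner prototype is updated only if it
# lies within vigilance rho of the query; otherwise the query itself becomes a
# new prototype (K grows by one).
def grow_avq(queries, rho, eta0=0.5):
    prototypes = [np.array(queries[0], dtype=float)]   # K = 1 initially
    for t, q in enumerate(queries[1:], start=1):
        q = np.asarray(q, dtype=float)
        d = [np.linalg.norm(q - w) for w in prototypes]
        j = int(np.argmin(d))                          # winner prototype index
        if d[j] <= rho:
            prototypes[j] += (eta0 / (1 + t)) * (q - prototypes[j])
        else:
            prototypes.append(q.copy())                # spawn a new prototype
    return prototypes

rng = np.random.default_rng(4)
qs = rng.uniform(0, 1, size=(500, 3))   # queries q = [x1, x2, theta]
fine = grow_avq(qs, rho=0.1)            # low rho: fine-grained quantization
coarse = grow_avq(qs, rho=0.8)          # high rho: coarse quantization
```

Running both settings on the same query stream shows the vigilance trade-off directly: the low-ρ run ends with many more prototypes than the high-ρ run.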

Remark 7
To give a physical meaning to the vigilance parameter ρ, we express it through a set of coefficient percentages a_i ∈ (0, 1) and a_θ ∈ (0, 1) of the value ranges of each dimension x_i of the query center x ∈ R^d and of the radius θ, respectively. Then, we obtain ρ = ‖[a_1, …, a_d]‖₂ + a_θ and, if we let a_i = a_θ = a ∈ (0, 1), ∀i, the vigilance parameter is rewritten as

ρ = a(√d + 1).

A high quantization coefficient a over high-dimensional data results in a low number of prototypes, and vice versa.
Let us now focus on the EPE H in (22) and liaise with Examples 3 and 4 (Figs. 5 and 6). The objective function H is conditioned on the winning query-prototype index j = arg min k q − w k 2 , i.e., it is guided by the VQ v(q) = j. Our target is to incrementally learn the query-LLM coefficients offset y j and slope b j of the LLM function f j , which are associated with the winning query prototype w j ∈ Q j for a random query q.
We incrementally minimize both objective functions J and H given a series of issued query-answer pairs (q_t, y_t) to estimate the unknown parameter set α = ∪_{k=1}^K α_k, with LLM parameter α_k = (y_k, b_k, w_k), through SGD. Our algorithm processes successive query-answer pairs (q_t, y_t) until a termination criterion is met, where Γ_t^J and Γ_t^H refer to the distance between successive estimates at steps t − 1 and t of the query prototypes w.r.t. the objective J and of the query-LLM coefficients w.r.t. the objective H, respectively. The algorithm stops at the first step/observation t* where max(Γ_{t*}^J, Γ_{t*}^H) ≤ γ, for a convergence threshold γ > 0. The update rules for the optimization parameter α in our SGD-based dual EQE/EPE optimization of the objective functions J and H are provided in Theorem 4.

Theorem 4
Given a query-answer pair (q, y) and its winning query prototype w_j, the optimization parameter α converges to the optimal parameter α* if it is updated as follows. If ‖q − w_j‖ ≤ ρ, then

Δw_j = η (q − w_j),
Δy_j = η (y − f_j(q)),
Δb_j = η (y − f_j(q)) (q − w_j).

If ‖q − w_j‖ > ρ, then Δw_j = 0, Δb_j = 0, Δy_j = 0. For any prototype w_k that is not the winner (k ≠ j), Δw_k = 0, Δb_k = 0, Δy_k = 0. The learning rate η ∈ (0, 1) is defined in Sect. 3.3.

Proof
We adopt SGD to minimize both (21) and (22). J and H are minimized by updating α = {y_k, w_k, b_k} in the negative direction of the sum of their gradients, which yields the set of update rules in the theorem. The objective function J requires competitive learning; thus, at each step we update only the winner w_j, while H is conditionally updated with respect to j = v(q). The prototype w_j converges when E[Δw_j] = 0, given that ‖q − w_j‖ ≤ ρ. We require at convergence that each query q is assigned to its winner w_j with probability 1, that is, P(‖q − w_j‖ ≤ ρ) = 1, which means that no other prototypes are generated. Based on Markov's inequality, we obtain P(‖q − w_j‖ ≥ ρ) ≤ E[‖q − w_j‖]/ρ. However, ρ is a finite and relatively small real number, since it captures the concept of a neighborhood. Hence, we require that E[‖q − w_j‖] → 0, i.e., E[q − w_j] = 0, or E[Δw_j] = 0, which completes the proof.
The training Algorithm 1 processes one random query-answer pair at a time from a training set T = {(q, y)}; see also Fig. 4. In the initialization phase, there is only one query prototype w_1, i.e., K = 1, which corresponds to the first query, while the associated query-LLM coefficients b_1 and y_1 are initialized to 0. For the t-th random pair (q_t, y_t), t ≥ 2, the algorithm either updates the closest prototype to q_t (out of the K prototypes so far) if their L₂ distance is less than ρ, or adds a new prototype, increasing K by one, and initializes the new LLM coefficients. The algorithm stops updating the query prototypes and query-LLM coefficients at the first step t where max(Γ_t^J, Γ_t^H) ≤ γ. From that time onwards, the algorithm returns the parameter set α and no further modification is performed, i.e., the algorithm has converged.
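The training loop can be sketched end to end: prototype growth under the vigilance ρ combined with SGD updates of the LLM coefficients in the spirit of Theorem 4. The synthetic query stream, step-size schedule, and the simplified convergence check are illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

# Training-loop sketch: grow/update query prototypes w_k and learn query-LLM
# coefficients (y_k, b_k) by SGD from a stream of query-answer pairs.
def train(stream, rho, gamma=1e-6):
    q0, _ = stream[0]
    W = [np.array(q0, float)]            # K = 1: first query is the prototype
    Y = [0.0]                            # intercepts y_k initialized to 0
    B = [np.zeros(len(q0))]              # slopes b_k initialized to 0
    for t, (q, y) in enumerate(stream[1:], start=1):
        q = np.asarray(q, float)
        j = int(np.argmin([np.linalg.norm(q - w) for w in W]))
        eta = 1.0 / (1.0 + t)
        if np.linalg.norm(q - W[j]) <= rho:
            err = y - (Y[j] + B[j] @ (q - W[j]))   # prediction error of f_j
            dw = eta * (q - W[j])
            W[j] = W[j] + dw                       # Delta w_j
            Y[j] += eta * err                      # Delta y_j
            B[j] = B[j] + eta * err * (q - W[j])   # Delta b_j
            if max(np.linalg.norm(dw), abs(eta * err)) <= gamma:
                break                              # simplified convergence test
        else:
            W.append(q.copy()); Y.append(0.0); B.append(np.zeros_like(q))
    return W, Y, B

rng = np.random.default_rng(5)
stream = [(q, q[0] * (q[1] + 1.0)) for q in rng.uniform(0, 1, size=(3000, 3))]
W, Y, B = train(stream, rho=0.4)
```

Only the winner's parameters move at each step, matching the competitive-learning structure of the joint EQE/EPE optimization.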
Through the incremental training of the parameter set α = {(y_k, b_k, w_k)}_{k=1}^K, each query-LLM function f_k has estimated its parameters. The PLR approximation error bound for the LLM function f_k around the query prototype w_k depends on the dimension d and on the curvature (second derivative) of f_k in the query subspace Q_k, as provided in Theorem 5. The approximation also depends on the resolution of quantization K: the more prototypes K, the better the approximation of the query function f achieved by the query-LLMs, as proved in Theorem 6.
Theorem 5 For a random query q with closest query prototype w_k, the conditional expected approximation error bound for the LLM function f_k in the query subspace Q_k around w_k is

E[|f(q) − f_k(q)| | q ∈ Q_k] ≤ C_k (d + 1),

with the curvature constant C_k defined in the proof.

Proof
The query-LLM f_k(x, θ) in the query subspace Q_k refers to the first-order Taylor approximation of f(x, θ) around the prototype w_k = [x_k, θ_k]. Assume that f(x, θ) is at most twice differentiable on Q_k. For simplicity of notation, let q = [x_1, …, x_d, θ] = [q_1, …, q_{d+1}], and let f_(i)(q) and f_{k,(i)}(q) be the actual and approximating functions along dimension q_i with all other dimensions fixed. Then, by Taylor's inequality [33] (based on the Mean Value Theorem), the approximation error along dimension q_i is bounded by

λ_(i) ≤ (C_(i)/2)(q_i − w_{k,i})²,

where C_(i) bounds the second derivative of f_(i) on Q_k. By accumulating the approximation error bounds λ_(i), ∀i, we obtain

|f(q) − f_k(q)| ≤ C_k ‖q − w_k‖₂²,  with C_k = (1/2) max_{i∈[d+1]} C_(i).

Now, from the convergence Theorem 7, the query prototype w_k is the centroid of all queries q ∈ Q_k. If we define the random vector z = q − w_k, then the norm ‖z‖₂² = ‖q − w_k‖₂² is distributed according to the χ² (chi-squared) distribution with d + 1 degrees of freedom, given that E[z] = E[q] − w_k = 0 from Theorem 7 and q ∈ Q_k. Hence, E[‖z‖₂²] = d + 1, and the expected approximation error bound is C_k(d + 1). □

Theorem 6 For a random query q, the expected approximation error given K query-LLMs is bounded by (1/K) ∑_{k=1}^K λ_k.

Proof Upon a random query and the quantization of the query space Q into K LLMs, each with a query prototype w_k, the approximation error of f through all f_k, k ∈ [K], is

λ = ∑_{k=1}^K λ_k P(w_k),

where λ_k is the conditional approximation error bound given that q is assigned to the prototype w_k, and P(w_k) is the prior probability of w_k. Provided that all w_k are equiprobable, i.e., P(w_k) = 1/K, ∀k, then λ = (1/K) ∑_{k=1}^K λ_k, with λ_k = C_k(d + 1) and C_k as defined in Theorem 5. □

Data and query functions approximation and prediction
In this section, we propose an algorithm that uses the query-LLM functions to approximate the PLR data function g over a data subspace, given the corresponding data-LLM functions, and an algorithm to predict the aggregate answer y of an unseen query based on the query-LLM functions. Our algorithms use the query-LLM functions previously trained on the query-answer pairs of the training set T to answer unseen queries Q1 and Q2 from the test set V; see also Fig. 4. We adopt the principle of nearest-neighbors regression for prediction [32]. The notion of neighborhood here is materialized by the overlap of an unseen query with the query prototypes in the quantized space Q (see Example 4, Fig. 6). By Definition 7, the queries q = [x, θ] and q' = [x', θ'] overlap if the condition A(q, q') = TRUE. To quantify the degree of overlap between those queries, represented as hyper-spheres in the (d + 1)-dim. space, we require that the two spheres partially intersect. Consider the ratio of the L₂ distance of the centers of the data subspaces D(x, θ) and D(x', θ') to the sum of their radii, i.e., ‖x − x'‖₂/(θ + θ'). This ratio takes values in [0, 1] in the case of overlap, with a value of unity when the two spheres just touch. In the concentric case, the degree of overlap should also take into consideration the area remaining from this perfect inclusion. We hence define the degree of overlap of two queries as a normalized ratio δ(q, q') ∈ [0, 1] that decreases with ‖x − x'‖₂/(θ + θ'). The data subspaces D(x, θ) and D(x_k, θ_k), defined by the query q and the query prototype w_k = [x_k, θ_k] respectively, exhibit the highest overlap when δ(q, w_k) = 1. Given a query q = [x, θ], we define the set W(q) of overlapping query prototypes of the query subspaces Q_k, corresponding to data subspaces D_k, as the prototypes with a positive degree of overlap with q. The mean-value query Q1 and the linear regression query Q2 are answered based on the neighborhood set W(q) of an unseen query q.
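The overlap test and neighborhood set can be sketched as follows. The center-distance ratio is taken directly from the text; the specific normalization δ = 1 − ratio is a hypothetical stand-in for the paper's exact normalized degree, used here only for illustration.

```python
import numpy as np

# Overlap sketch between two range queries viewed as balls: the ratio
# ||x - x'|| / (theta + theta') lies in [0, 1] iff the balls intersect
# (1 when they just touch). delta = 1 - ratio is an assumed normalization.
def overlap_ratio(q1, q2):
    (x1, t1), (x2, t2) = q1, q2
    return np.linalg.norm(np.asarray(x1) - np.asarray(x2)) / (t1 + t2)

def delta(q1, q2):
    return max(0.0, 1.0 - overlap_ratio(q1, q2))  # 0: disjoint, 1: concentric

def neighborhood(q, prototypes):
    # W(q): indices of prototypes whose data subspace overlaps the query's
    return [k for k, w in enumerate(prototypes) if overlap_ratio(q, w) < 1.0]

q = (np.array([0.5, 0.5]), 0.2)
W = [(np.array([0.45, 0.5]), 0.15), (np.array([0.9, 0.9]), 0.1)]
```

Here the first prototype overlaps the query and enters W(q), while the second is too far away relative to the combined radii.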
Figure 6 shows the mean-value and regression query prediction: an unseen query q = [x, θ] is projected onto the input space x = (x_1, x_2) to derive the neighborhood set of prototypes W(q) = {w_i, w_k, w_l}. Then, we access the query-LLM functions f_i, f_k, f_l to predict the aggregate output ŷ for query Q1 (see Algorithm 2), and retrieve the regression-plane coefficients S of the data-LLM functions g_i, g_k, g_l from the query-LLM functions f_i, f_k, f_l, respectively, for query Q2 (see Algorithm 3).

Query Q1: mean-value aggregate prediction
Our algorithm predicts the aggregate output value y given an unseen query q = [x, θ] over a data subspace D(x, θ).
The query function f between the query q and the answer y over the query space Q is approximated by K query-LLM functions (hyperplanes), one over each query subspace Q_k; see Fig. 8(lower). Given a query q, we derive the overlapping prototype set W(q). For those query prototypes w_k ∈ W(q), we access the local coefficients (y_k, b_k, w_k) of the query-LLM f_k. Then, we pass q = [x, θ] as input to each function f_k and predict the aggregate output ŷ through a weighted average based on the normalized degrees of overlap

δ̃(q, w_k) = δ(q, w_k) / ∑_{w_l ∈ W(q)} δ(q, w_l).

The aggregate output prediction ŷ derives from the weighted W(q)-nearest-neighbors regression

ŷ = ∑_{w_k ∈ W(q)} δ̃(q, w_k) f_k(q).

In the case where W(q) ≡ ∅, we extrapolate the similarity of the query q with the closest query prototype and associate the answer with the estimate ŷ derived only from the query-LLM function f_j whose query prototype w_j is closest to q. Through this projection, i.e., j = arg min_{k∈[K]} ‖q − w_k‖₂, we get the local slope and intercept of the local mapping of the query q onto the aggregate answer y.
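The weighted nearest-neighbors prediction can be sketched as follows; the 1 − ratio overlap weight is an assumed stand-in for the paper's normalized degree, and the example prototypes/coefficients are hypothetical.

```python
import numpy as np

# Mean-value prediction sketch: a delta-weighted average of overlapping
# query-LLM predictions; if no prototype overlaps, the single closest
# query-LLM is used (extrapolation).
def predict(q, W, Y, B):
    q = np.asarray(q, float)
    x, theta = q[:-1], q[-1]
    deltas, preds = [], []
    for w, yk, bk in zip(W, Y, B):
        ratio = np.linalg.norm(x - w[:-1]) / (theta + w[-1])
        if ratio < 1.0:                        # overlapping prototype
            deltas.append(1.0 - ratio)         # assumed overlap degree
            preds.append(yk + bk @ (q - w))    # f_k(q)
    if not deltas:                             # W(q) empty: closest prototype
        j = int(np.argmin([np.linalg.norm(q - w) for w in W]))
        return float(Y[j] + B[j] @ (q - W[j]))
    d = np.array(deltas) / np.sum(deltas)      # normalized weights
    return float(d @ np.array(preds))

W = [np.array([0.5, 0.5, 0.2]), np.array([0.9, 0.9, 0.1])]
Y = [0.75, 1.71]
B = [np.zeros(3), np.zeros(3)]
yhat = predict([0.5, 0.5, 0.2], W, Y, B)
```

In this toy configuration only the first prototype overlaps the query, so the prediction reduces to that prototype's local intercept.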
The prediction of the query answer depends entirely on the query similarity and the neighborhood W(q). The mean-value prediction algorithm is shown in Algorithm 2. Figure 8(lower) shows how accurately the K = 7 query-LLM functions (as green covering surfaces/planes over the query function f) approximate the linear parts of the query function f(x, θ) over a 2D query space Q defined by the queries (x, θ).

Query Q2: PLR-based data function approximation
The algorithm returns a list of the local data-LLM functions g_k of the underlying data function g over the data subspace D(x, θ), given an unseen query q = [x, θ] (see Example 4). An unexplored data subspace D defined by an unseen query might:
-(Case 1) partially overlap with several identified convex data subspaces D_k (corresponding to query subspaces Q_k); or
-(Case 2) be contained in, or contain, a data subspace D_k; or
-(Case 3) lie outside every data subspace D_k.
In Cases 1 and 2, the algorithm returns the derived data-LLMs of the data function g interpolating over the overlapping data subspaces, using the corresponding query-LLMs, as proved in Theorem 3. In Case 3, the best possible linear approximation of the data function g is returned through the extrapolation of the data subspace D k whose query prototype w k is closest to the query q. For Cases 1 and 2, we exploit the neighborhood W(q) of the query q = [x, θ]. For Case 3, we select the data-LLM function, which corresponds to the query-LLM function with the closest query prototype w j to query q since, in this case, W(q) ≡ ∅.
The PLR approximation of the data function g(x) over the data subspace D(x, θ) involves both the radius θ and the query center x, using their similarity with the radius θ_k and the point x_k, respectively, from W(q). For Cases 1 and 2, the set of data-LLMs for a PLR approximation of the data function g(x) is provided directly from those query-LLMs f_k whose query prototype w_k ∈ W(q); that is, for x ∈ D_k(x_k, θ_k), we obtain g_k(x) = f_k(x, θ_k). For Case 3, the PLR approximation of the data function g(x) derives by extrapolating the linearity trend of u = g(x) = f_j(x, θ_j), j = arg min_k ‖q − w_k‖_2, over the data subspace, with u-intercept y_j − b_{X,j}^⊤ x_j and u-slope b_{X,j}.
The PLR approximation of the data function is shown in Algorithm 3, which returns the set of data-LLM functions S defined over the data subspace D(x, θ) for a given unseen query q = [x, θ]. Note that, depending on the query radius θ and the overlapping neighborhood set W(q), we obtain 1 ≤ |S| ≤ K, where |S| is the cardinality of the set S. Figure 8(upper) shows how accurately the data function u = g(x) is approximated by K = 6 data-LLMs (green interpolating local lines) compared with the global linear regression function (REG, in red) over the data subspace D(0.5, 0.5). We also illustrate the K linear models derived by the actual PLR data approximation algorithm [44], i.e., the best possible PLR data approximation should we have access to that data subspace, which corresponds to OPT_K in (3). Unlike our model, PLR needs access to the data and is thus very expensive; specifically, it involves a forward/backward iterative approach to produce the multiple linear models [44]. Our model, instead, incrementally derives the data-LLMs based on the optimization problems in (21) and (22). Note that the derived data-LLMs are highly accurate.
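A condensed sketch of Algorithm 3's case analysis follows. The names and the prototype layout are illustrative, and the overlap test is simplified to a sphere-intersection check on centers and radii; the paper's exact overlap criterion may differ.

```python
import math

def data_llms(x, theta, prototypes):
    """Return the set S of (intercept, slope) data-LLMs for query q = [x, theta].
    prototypes: list of dicts with center 'x', radius 'theta', mean 'y' and
    data-space slope 'b' of a query-LLM; the layout is illustrative."""
    # Cases 1 and 2: prototypes whose (hyper)sphere overlaps the queried subspace.
    overlapping = [p for p in prototypes
                   if math.dist(x, p["x"]) <= theta + p["theta"]]
    if not overlapping:
        # Case 3: extrapolate from the single closest prototype.
        overlapping = [min(prototypes, key=lambda p: math.dist(x, p["x"]))]
    S = []
    for p in overlapping:
        # Data-LLM with u-intercept y_k - b_k . x_k and u-slope b_k.
        intercept = p["y"] - sum(bi * xi for bi, xi in zip(p["b"], p["x"]))
        S.append((intercept, p["b"]))
    return S
```

The returned list has cardinality 1 ≤ |S| ≤ K, mirroring the bound stated for Algorithm 3.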

Global convergence analysis
In this section, we show that our stochastic joint optimization algorithm is asymptotically stable. Concerning the objective function J in (21), the query prototypes w_k = [x_k, θ_k] converge to the centroids (mean vectors) of the query subspaces Q_k. This convergence reflects the capability of our proposed AVQ algorithm to partition the query space into the prototypes of the query subspaces. The query subspaces naturally represent the (hyper)spheres of the data subspaces that the analysts are interested in, accessed by their query centers x_k and radii θ_k, ∀k.
Concerning the objective function H in (22), the approximation coefficients (slope and intercept) in Theorem 3 converge, too. This convergence refers to the linear regression coefficients that would have been derived had we been able to fit a linear regression function over each data subspace D_k, given access to the data.
Theorem 7 refers to the convergence of a query prototype w k to the local expectation query E[q|Q k ] = E[q|v(q) = k] given our AVQ algorithm.

Theorem 7 If E[q|Q_k] = E[q|v(q) = k] is the local expectation query of the query subspace Q_k and the query prototype w_k is the subspace representative from our AVQ algorithm, then P(w_k = E[q|Q_k]) = 1 at equilibrium.

Proof
The update rule for a prototype w_k based on Theorem 4 is Δw_k = η(q − w_k), given that P(‖q − w_k‖_2 ≤ ρ) = 1. Let the k-th prototype w_k reach equilibrium: Δw_k = 0, which holds with probability 1. By taking the expectation of both sides, we obtain E[Δw_k] = η(E[q|Q_k] − w_k). This indicates that w_k is constant with probability 1, and then, by solving E[Δw_k] = 0, w_k equals the centroid E[q|Q_k].
We provide two convergence theorems for the coefficients y_k and b_k of the query-LLM f_k. Firstly, we focus on the aggregate answer prediction y = y_k + b_k^⊤(q − w_k). Given that the query prototype w_k has converged, i.e., w_k = E[q|Q_k] from Theorem 7, the expected aggregate value E[y|Q_k] converges to the y_k coefficient of the query-LLM f_k. This also reflects our assignment in the statistical mapping F of the local expectation query w_k to the mean of the query-LLM f_k, i.e., y_k = E[y|Q_k]. This refers to the local associative convergence of coefficient y_k given a query q ∈ Q_k; in other words, convergence in the query subspace enforces convergence in the output domain as well.

Theorem 8 At equilibrium, P(y_k = E[y|Q_k]) = 1.

Proof Based on the law of total expectations, we write the expectation of Δy_k given the output variable y as E[Δy_k] = ∫_R E[Δy_k|y] p(y) dy. By using the update rule in Theorem 4, the conditional expectation term is E[Δy_k|y] = η(y − y_k − b_k^⊤(E[q|y] − w_k)).

By replacing E[Δy_k|y] into E[Δy_k], we obtain E[Δy_k] = η(E[y|Q_k] − y_k − b_k^⊤(E[q|Q_k] − w_k)) = η(E[y|Q_k] − y_k), given that E[q|Q_k] = w_k from Theorem 7. By solving E[Δy_k] = 0, which implies that y_k is constant with probability 1, we obtain y_k = E[y|Q_k].
Finally, we provide a convergence theorem for b_k as the slope of the linear regression of q − w_k onto y − y_k.

Theorem 9 Let β_k = (E[(q − w_k)(q − w_k)^⊤])^{-1} E[(q − w_k)(y − y_k)] be the linear regression population coefficient of all pairs (q − w_k, y − y_k) for a LLM function y = y_k + b_k^⊤(q − w_k). Then P(b_k = β_k) = 1 at equilibrium.
Proof Based on the law of total expectations, for the LLM coefficient b_k we obtain E[Δb_k] = ∫_R E[Δb_k|y] p(y) dy. By using the update rule in Theorem 4, we write the conditional expectation term as E[Δb_k|y] = η E[(q − w_k)(y − y_k − b_k^⊤(q − w_k)) | y]. Hence, by replacing E[Δb_k|y] into E[Δb_k], we obtain E[Δb_k] = η E[(q − w_k)(y − y_k − b_k^⊤(q − w_k))]. By solving E[Δb_k] = 0, which implies that b_k is constant with probability 1, we obtain b_k = β_k. This corresponds to the population normal equations for the multivariate linear regression model within the subspace Q_k × R.
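These equilibria can be sanity-checked numerically. The sketch below uses a hedged reading of the Theorem 4 update rules (Δw = η(q − w), Δy_k = η e, Δb_k = η e (q − w), with residual e = y − y_k − b_k(q − w)) on noiseless 1-D answers y = 3 + 2q; the two-stage schedule and all names are illustrative, not the paper's exact procedure.

```python
import random

random.seed(0)
qs = [random.random() for _ in range(50_000)]  # 1-D query centers in [0, 1]

# Theorem 7: with eta_t = 1/t the prototype w is exactly the running mean
# of the observed queries, i.e., it converges to the centroid E[q | Q_k].
w = 0.0
for t, q in enumerate(qs, start=1):
    w += (1.0 / t) * (q - w)

# Theorems 8 and 9: with w held at its equilibrium, SGD on the residual
# drives y_k to E[y | Q_k] and b_k to the population regression slope.
y_k, b_k, eta = 0.0, 0.0, 0.1
for q in qs:
    y = 3.0 + 2.0 * q                   # noiseless linear ground truth
    e = y - (y_k + b_k * (q - w))
    y_k += eta * e
    b_k += eta * e * (q - w)
```

On this toy stream, w lands on the sample mean of the centers, b_k on the true slope 2, and y_k on the conditional mean 3 + 2w, matching the three equilibria.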

Partial convergence analysis
The entire statistical learning model runs in two phases: the training phase and the prediction phase. In the training phase, the query-LLM prototypes (w_k, b_k, y_k), k ∈ [K], are updated upon the observation of a query-answer pair (q, y) until convergence w.r.t. the global stopping criterion in (24). In the prediction phase, the model proceeds with the mean-value prediction of the aggregate answer ŷ, the PLR data approximation of the data function g, and the output data value prediction û, without executing any incoming query after convergence at t*. The major requirement for the model to transition from the training to the prediction phase is the triggering of the global stopping criterion at t* w.r.t. a fixed convergence threshold γ > 0.
Let us now provide some insight into this global criterion. Model convergence means that, on average over all the trained query-LLM prototypes, the improvement induced by a new incoming query-answer pair is no longer as significant as it was at the early stage of the training phase. The rate of updating these prototypes, reflected by the norms of the difference vectors between (w_{k,t}, b_{k,t}, y_{k,t}) and (w_{k,t−1}, b_{k,t−1}, y_{k,t−1}) at observations t and t − 1, respectively, decreases as the number of query-answer pairs increases, i.e., as t → ∞; refer also to the convergence analysis in Sect. 7.
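One plausible reading of this update-rate criterion can be sketched as follows; the exact aggregation in (24) is not reproduced in the text, so the averaging below and all names are our own assumptions.

```python
import math

def prototype_change(prev, curr):
    # Norm of the difference vector between the flattened parameters
    # (w_k, b_k, y_k) of one prototype at observations t-1 and t.
    return math.sqrt(sum((c - p) ** 2 for p, c in zip(prev, curr)))

def globally_converged(prev_params, curr_params, gamma):
    """prev_params/curr_params: per-prototype flattened parameter tuples.
    Converged when the average per-prototype update norm drops below gamma."""
    changes = [prototype_change(p, c) for p, c in zip(prev_params, curr_params)]
    return sum(changes) / len(changes) <= gamma
```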
In a real-world setting, however, we cannot obtain an infinite number of training pairs to ensure convergence. Instead, we are sequentially provided a finite number of training pairs (q_t, y_t) from a finite training set T. We obtain model convergence given that there are enough pairs in the set T such that the criterion in (24) is satisfied. More interestingly, we have observed that some of the query-LLM prototypes, say L < K of them, converge with fewer training query-answer pairs than all |T| provided pairs. Specifically, for those L prototypes, which represent certain data subspaces D_ℓ and query subspaces Q_ℓ, ℓ = 1, ..., L, the convergence criterion max{Γ_J, Γ_H}_t ≤ γ holds for some t < t*, where t* corresponds to the last observed training pair at which the entire model has globally converged, given a fixed convergence threshold γ. In this case, we introduce the concept of partial convergence: at least a subset of query-LLM prototypes have already converged w.r.t. γ at an earlier stage than the entire model (the entire set of parameters). Interestingly, those query-LLM prototypes transition from their training phase to the prediction phase. The partial convergence on those data subspaces is due to the fact that relatively more queries were issued to those data subspaces than to others up to the t-th observation, with t < t*. Moreover, by construction of our model, only a relatively small subset of query-LLM prototypes is required for mean-value prediction and PLR data approximations (refer to the overlapping set W in Sect. 6). Hence, based on the flexibility of partial convergence, we can proceed with prediction and data approximation for certain incoming queries issued onto those data subspaces whose corresponding query-LLM prototypes have partially converged, while the entire model is still in the training phase, i.e., it has not yet globally converged.
The advantage of this methodology is that we deliver predicted answers to the analysts' queries without imposing the execution delay for those queries. Evidently, we obtain the flexibility to either proceed with the query execution after the prediction, to further refine the converged data subspace, or not. In both options, the analysts 'do not need to wait' for the system to first execute the query and then deliver the answers. This motivated us to introduce a progressive predictive analytics or intermediate phase, where some parts of the model can, after their local convergence, provide predicted answers to the analysts without waiting for the entire model to converge.
The research challenge in supporting the progressive analytics phase arises when some of the involved query-LLM parameters have not yet converged while others have locally converged. Specifically, assume that at the t-th observation (with t < t*) there are L query-LLM prototypes that have converged and the query q_t = (x_t, θ_t) arrives at the system (note: L < K at observation t). The overlapping set W(q_t) consists of ℓ ≤ L query prototypes w_i, i = 1, ..., ℓ, which have converged, and κ < K − L query prototypes w_j, j = 1, ..., κ, which have not yet converged, i.e., W(q_t) = {w_i} ∪ {w_j}. In this case, the mean-value prediction and the PLR data approximation over the data subspace D(x_t, θ_t) involve ℓ + κ prototypes, with ℓ = |C(q_t)| and κ = |U(q_t)| as in (32), where C(q_t) and U(q_t) denote the converged and un-converged subsets, respectively. We adopt a convergence voting/consensus scheme for supporting this intermediate phase between the training and prediction phases, in light of delivering either predicted or actual answers to the analysts.
-Case A If the consensual ratio ℓ/(ℓ+κ) ≥ r, i.e., more than r·100% of the query prototypes in W(q_t) have locally converged, with r ∈ (0.5, 1), then two options are available: -Case A.I The model predicts and delivers to the analysts the answer based only on those query prototypes that have converged and, then, executes the query to update the κ not-yet-converged query prototypes so that they align with the model's convergence mode. In this case, the analysts are delivered a predicted answer whose degree of confidence is regulated through the consensual ratio r. The mean-value prediction and PLR data approximation are achieved as described in Algorithms 2 and 3 by replacing W(q) with the locally converged query prototypes C(q) in (32). After the query execution, the query-LLM prototypes from the un-converged set U(q) in (32) are updated as described in Algorithm 1. Obviously, if the consensual ratio ℓ/(ℓ+κ) = 1, then there is no such intermediate phase.
-Case A.II The model predicts and delivers to the analysts the answer based only on those query prototypes that have converged, and does not execute the query; thus, no update is performed for those κ query prototypes. The mean-value prediction and PLR data approximation are achieved as described in Algorithms 2 and 3 by replacing W(q) with the locally converged query prototypes C(q) in (32). This obviously delays the global convergence and reduces the number of queries executed for convergence. This option is preferable only when most of the incoming queries focus on specific data subspaces and not on the entire data space. In other words, there is no point in the entire model globally converging to transition from the training phase to the prediction phase if most of the queries are issued on very specific data subspaces. In the extreme case, the model could significantly delay its convergence if more than 50% of the query prototypes are involved in the overlapping sets of all incoming queries. To alleviate this case, our model creates new prototypes (incrementally) only when there is at least some interest in a specific data subspace, as discussed in Sect. 5, adopting the principles of adaptive resonance theory [30].
-Case B Otherwise, i.e., if the consensual ratio ℓ/(ℓ+κ) < r, the model acts as usual in the training phase: it first executes the query and delivers the actual answer to the analyst, and then, based on this actual answer, it updates the prototypes as discussed in Sect. 5.
Algorithm 4 shows the partial convergence methodology for the model's transition from the training phase to the intermediate phase, and then to the prediction phase.

ALGORITHM 4: Partial Convergence Algorithm.
Input: convergence threshold γ; consensual threshold r
Result: query-LLM parameters and query prototypes
begin
  Get the incoming query-answer pair request (q, y);
  Observe only the query q;
  Derive the ℓ converged and κ un-converged query-LLM prototypes, respectively, from W(q);
  if ℓ/(ℓ+κ) ≥ r then
    Call prediction Algorithm 2 or Algorithm 3, replacing W(q) with C(q);
  else
    Execute the query and obtain the query-answer pair (q, y);
    Call training Algorithm 1;
end

Our progressive predictive analytics methodology allows a combined mode of operation, whereby the training and prediction phases overlap. In this combined mode, the model runs its training and prediction algorithms based on the consensual threshold r. Let us define t′ as the first observation at which the consensual ratio exceeds the threshold r, i.e., t′ = arg min{t > 0 : ℓ_t/(ℓ_t + κ_t) ≥ r}. For any observation t < t′, the model is in the single training phase, while for any observation t′ ≤ t < t*, the model is in the intermediate phase, i.e., the prediction and/or training phase depending on the consensual ratio at the t-th observation (Cases A and/or B). At t > t*, the model transitions to the single prediction phase. Figure 9 illustrates the activation of the training, intermediate, and prediction phases over the observation time axis and the landmarks t′ and t*. The landmark t′ denotes the minimum number of training pairs the model requires in order to deliver to the analysts predicted and/or actual answers w.r.t. Cases A and B, while only predicted answers are delivered after t* training pairs.
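The consensus dispatch at the heart of this methodology can be sketched as follows; the function and return labels are hypothetical, while the C(q)/U(q) bookkeeping follows the text.

```python
def dispatch(overlapping, r):
    """overlapping: list of (prototype_id, has_converged) pairs in W(q).
    Returns the action the model takes for the incoming query."""
    C = [p for p, ok in overlapping if ok]        # converged set C(q)
    U = [p for p, ok in overlapping if not ok]    # un-converged set U(q)
    l, kappa = len(C), len(U)
    if l + kappa == 0 or l / (l + kappa) < r:
        return "execute-and-train"                # Case B: usual training step
    if kappa == 0:
        return "predict"                          # fully converged neighborhood
    return "predict-with-C"                       # Case A: predict from C(q) only
```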

Remark 9
The prediction performance of the model in the intermediate phase is bounded by the performance of the model in the single prediction phase. This is attributed to the predicted answers being based on partial convergence w.r.t. the consensual threshold r, where only the converged fraction (at least r) of the query-LLM prototypes from the overlapping set W(q) is used for prediction given an unseen query q. The prediction performance is a non-decreasing function of the number of observations t, with t′ ≤ t ≤ t*, as will be shown in our performance evaluation in Sect. 8.

Computational complexity
In this section, we report on the computational complexity of our model during the training and prediction phases. In the global convergence mode, the model 'waits' for the triggering of the criterion in (24) to transition from the training to the prediction phase. Under SGD over the objective minimization functions J and H, with the hyperbolic learning schedule in (7), our model requires O(1/γ) [15] training pairs to reach the convergence threshold γ. This means that the residual difference between the objective function value J_{t*} after t* pairs and the optimal value J*, i.e., with the optimal query-LLM parameters, asymptotically decreases exponentially, also known as linear convergence [25]. In this mode, there is a clear separation between the training and prediction phases, while the upper bound of the expected excess difference E[J_t − J*] after t training pairs is O(log t / t) [52], given the hyperbolic learning schedule in (7). In the prediction phase, which is the operational mode of our model, given a mean-value query Q1 or a linear regression query Q2, we require O(dK) time to calculate the neighborhood set W and deliver the query-LLM functions, respectively, i.e., independent of the data size, thus achieving scalability. We also require O(dK) space to store the query prototypes and the query-LLM coefficients. The derivation of the data-LLMs is then O(1), given that we have identified the query-LLMs for a given linear regression query.

Performance metrics
The proposed methodology deals with two major statistical learning components: prediction of the aggregate answer and data output, and data function approximation over data subspaces. For evaluating the performance of our model in light of these components, we should assess the model predictability and goodness of fit, respectively.
Predictability refers to the capability of a model to predict an output given an unseen input, i.e., an input-output pair not provided during the model's training phase. Measures of prediction focus on the differences between predicted values and actually observed values. Goodness of fit describes how well a model fits a set of observations that were provided in the model's training phase. It provides an understanding of how well the selected independent (input) variables explain the variability in the dependent (output) variable. Measures of goodness of fit summarize the discrepancy between the actual/observed values during training and the values approximated under the model in question.
We compare our statistical methodology against its ground-truth counterparts: the multivariate linear regression model over data subspaces, hereafter referred to as REG, and the piecewise linear regression model (PLR) over data subspaces, both of which have full access to the data. Note that the PLR data approximation is the optimal multiple linear modeling over data subspaces we can obtain, because it is constructed by accessing the data. Hence, we demonstrate how effectively our data-LLMs approximate the ground-truth data function g and the optimal PLR data approximation. Specifically, we compare against the REG model using the PostgreSQL DMS and MATLAB, and against the PLR model using the ARESLab (MATLAB) toolbox 7 for building PLR models based on the multivariate adaptive regression splines method in [44]. We show that our model is scalable and efficient, as accurate as (or even more accurate than) the REG model w.r.t. predictability and goodness of fit, and close to the accuracy obtained by the optimal PLR model. Our model is dramatically more scalable and efficient since, unlike the REG and PLR models, it does not need access to the data, yielding up to six orders of magnitude faster query execution.

Predictability
Predictability in the query space The Mean-Value Accuracy (A1 metric) refers to the prediction of the average value ŷ given an unseen Q1 query q = [x, θ]. Based on the EPE in (5), the A1 metric is the root-mean-square error (RMSE) A1 = ((1/M) Σ_{i=1}^{M} (y_i − ŷ_i)²)^{1/2}, where y = f(x, θ) and ŷ are the actual and the predicted average value of the data output u, respectively, from Algorithm 2, given M unseen Q1 queries.

Predictability in the data space The data output accuracy (A2 metric) refers to the prediction of the data output u = g(x) given an unseen input x ∈ R^d. Here, the query-LLM functions are exploited to predict the data output u by approximating the data function g(x) as in (31) through aggregation of the neighboring query-LLMs f_k(x, θ_k), i.e., the PLR-based data-LLMs g_k(x). Let u and û be the actual and the predicted data output value of g(x) given M unseen points x. Based on (29) and (31), we predict û = Σ_{w_k ∈ W(q)} δ̄(q, w_k) g_k(x), where W(q) is the overlapping set for query q defined in (27). Note that the query-LLM function f_k(x, θ_k) provides the intercept y_k and slope b_{X,k} over the data input space by setting the radius θ = θ_k in the function f_k. For a given data input x, the A2 metric is the RMSE of the predicted output over M unseen inputs: A2 = ((1/M) Σ_{i=1}^{M} (u_i − û_i)²)^{1/2}.
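Both accuracy metrics are plain root-mean-square errors over M held-out queries or points; a minimal helper (with illustrative names) makes this concrete:

```python
import math

def rmse(actual, predicted):
    # A1/A2-style root-mean-square error over M unseen queries or inputs.
    m = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / m)
```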

Goodness of fit
PLR approximation in data space Given an unseen Q2 query q = [x, θ] defined over the data subspace D(x, θ), we evaluate how well our methodology approximates the data function g through the data-LLM functions, compared with the REG model and the optimal PLR data approximation model over the same data subspace D. For goodness of fit we adopt the metrics Fraction of Variance Unexplained (FVU) s and Coefficient of Determination (CoD) R² [26]. FVU indicates the fraction of the variance of the dependent data output variable u that cannot be explained, i.e., that is not correctly predicted by the explanatory data input variable x. Given a data subspace D(x, θ), consider the data pairs (x_i, u_i), i = 1, ..., m, with outputs u_i = g(x_i) and approximations û_i for each input x_i. The sum of squared residuals (SSR) and the total sum of squares (TSS) over D(x, θ) are then defined as SSR = Σ_{i=1}^{m} (u_i − û_i)² and TSS = Σ_{i=1}^{m} (u_i − ū)², respectively, where ū = (1/m) Σ_{i=1}^{m} u_i is the average output value. The FVU and CoD are then defined as s = SSR/TSS and R² = 1 − s, respectively. The FVU metric indicates how closely the approximation of the data function g over a data subspace D matches the actual data function g over that data subspace. If the FVU value is greater than 1, the explanatory input variable x does not convey any information about the output u, in the sense that the predictions û do not covary with the actual output u; in this case, the data approximation function is a bad fit. The approximation is considered good when the FVU metric assumes a low value, less than 1. Given an unseen query q over the data subspace D(x, θ), we measure the FVU and CoD metrics for the REG and PLR models, and the average FVU value s̄ = (1/|S|) Σ_{ℓ=1}^{|S|} s_ℓ of the FVUs s_ℓ (and, analogously, the CoDs) corresponding to the set of data-LLM functions S, |S| ≥ 1, derived by our Algorithm 3.
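With SSR and TSS in hand, FVU and CoD reduce to a few lines (illustrative names):

```python
def fvu_cod(outputs, approximations):
    # Fraction of Variance Unexplained s = SSR / TSS and CoD R^2 = 1 - s.
    m = len(outputs)
    u_bar = sum(outputs) / m
    ssr = sum((u - a) ** 2 for u, a in zip(outputs, approximations))
    tss = sum((u - u_bar) ** 2 for u in outputs)
    s = ssr / tss
    return s, 1.0 - s
```

Predicting the per-subspace mean ū yields s = 1 exactly, the boundary between a useful and a useless fit noted above.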
In our experimental evaluation and comparative assessment of our model with the PLR and REG models (pair-wise), in each performance metric, we adopted the paired-sample two-tailed Student t test using a 95% confidence interval, i.e., significance level 0.05.

Real and synthetic datasets
Our goal is to evaluate accuracy in terms of predictability and goodness of fit, along with efficiency and scalability, over real and synthetic datasets. For accuracy, using the A1, A2, FVU, and CoD metrics, we intentionally sought multivariate real data functions g that exhibit extreme nonlinearity in many data subspaces. To this end, to assess the A1 metric for Q1 queries, the A2 metric for data output predictions, and the FVU and CoD metrics for Q2 queries, we used two real datasets, R1 from [45] and R3 from [16], and a synthetic dataset referred to as R2.
Real datasets The real dataset R1 consists of 6-dim. feature vectors corresponding to the concentration levels of 6 gases, namely Ethanol (E1), Ethylene (E2), Ammonia (A1), Acetaldehyde (A2), Acetone (A3), and Toluene (T), derived from chemical sensors. The sensor measurements of dataset R1 were gathered over 36 months in a gas delivery platform facility situated at the ChemoSignals Laboratory in the BioCircuits Institute (BCI 8), University of California. We expanded the R1 size by adding extra 6-dim. vectors with Gaussian noise; thus, in total, the R1 dataset contains 15·10^6 multi-dimensional data vectors of gas concentration levels.
With the R1 dataset we wished to delve into accuracy issues; this dataset was chosen because its data exhibit nonlinear relationships among features. All d-dim. real-valued vectors are scaled and normalized in [0,1] (d ∈ {1, ..., 6}), with significant nonlinear dependencies among the features, evidenced by a high FVU = 4.68. This indicates that a linear approximation over the entire data space is definitely to no avail, making this a challenging dataset for our approach. Figure 10 shows the R1 scatter plot matrix for all gas concentrations (before scaling and normalization), depicting the dependencies between gases and the corresponding histograms of each dimension. We observe significant correlations among many gases: indicatively, E1 with A1, A2, and A3, with Pearson correlation coefficients 0.41, 0.23, and 0.98 (p < 0.05), respectively, and E2 with A2 with correlation 0.36 (p < 0.05). By further analyzing the R1 dataset, the first three Principal Components (PCs) explain 99.73% of the total variance, by 73.57%, 23.94%, and 2.22%, respectively; these are used for the Q1 and Q2 analytics queries (prediction of the mean value and model fitting).
The R3 real dataset contains environmental sensor data used for data-driven predictive models of the energy use of appliances [16]. The data include measurements of temperature and humidity sensors from a wireless network in a house located in Stambruges, Belgium. The sensors read contextual data every 10 min for about 4.5 months. The house temperature and humidity conditions were monitored with an in-house ZigBee wireless sensor network built with XBee radios, Atmega328P micro-controllers, and DHT-22 sensors. The digital DHT-22 sensors have an accuracy of ±0.5 Celsius for temperature and ±1% for relative humidity. The environmental parameters are: temperature in the kitchen area (T1), humidity in the kitchen area (H1), temperature in the living room area (T2), humidity in the living room area (H2), temperature in the laundry room area (T3), and humidity in the laundry room area (H3). The real dataset R3 consists of 6-dim. feature vectors corresponding to the above-mentioned six contextual parameters. We expanded the R3 size by adding extra 6-dim. vectors with Gaussian noise; thus, in total, the R3 dataset contains 10·10^6 multi-dimensional data vectors of temperature and relative humidity of different areas within the house. All d-dim. real-valued vectors are scaled and normalized in [0,1] (d ∈ {1, ..., 6}), with significant nonlinear dependencies among the features, evidenced by an FVU = 7.32, indicating that a single linear approximation over the entire data space is not an option.
Figure 11 shows the R3 scatter plot matrix for all dimensions (before scaling and normalization), along with their dependencies and histograms. The first four Principal Components (PCs) explain 99.08% of the total variance, by 67.82%, 19.60%, 9.29%, and 2.37%, respectively; these are used for the Q1 and Q2 analytics queries (prediction of the mean value and model fitting).
Synthetic dataset To further evaluate scalability and efficiency along with accuracy, we use a big synthetic dataset derived from a benchmark function that also guarantees significant nonlinearity. The R2 synthetic dataset of input-output pairs (u, x) contains 10^10 d-dim. real data vectors generated by the Rosenbrock function [46], u = g(x), with d ∈ {1, ..., 6}. This is a popular benchmark function for testing nonlinear, gradient-based optimization algorithms. It has a global minimum inside a long, narrow, parabolic-shaped flat valley, where convergence to the global minimum, however, is extremely non-trivial [46]. We adopt the Rosenbrock function u = Σ_{i=1}^{d−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²], with attribute domain |x_i| ≤ 10 and global minimum 0 at x_i = 1, ∀i. Obviously, there is no linear dependency among the features in the data space, evidenced by an FVU = 12.45. In addition, we generate 10^10 vectors adding noise ∼ N(0, 1) to each dimension. For illustration purposes, Fig. 12 shows the R2 dataset of the Rosenbrock function u = g(x_1, x_2).

System implementation The real datasets R1 and R3 and the synthetic dataset R2 are stored in a PostgreSQL server with 2x Intel Xeon E5645, 96 GB RAM, and a Seagate Constellation 1 TB HD with 32 MB cache. We use R1, R2, and R3 to assess the query Q1 prediction accuracy of the aggregate answer y (A1 metric), corresponding to the average of the dimensions of the first three and four PCs in R1 and R3, respectively, and to the average data output u of the Rosenbrock function in R2. The PLR approximation of the data function g regarding Q2 queries over the R1, R2, and R3 datasets is conducted over data dimensions d ∈ {2, 3, 5, 6} for the metrics FVU, CoD, and the data output prediction accuracy metric A2. For scalability and efficiency, our method is compared against PostgreSQL with a B-tree index on the input vector x (d ∈ {2, 5, 6} over Q1 queries, d = 2 over Q2 queries) and MATLAB (d = 5 and d = 6 over Q2 queries) using the regress function on the server, and the ARESLab tool for PLR data approximation.
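As a sketch, R2-style input-output pairs can be generated from the standard d-dimensional Rosenbrock function; the helper names and sample counts are illustrative, and the N(0, 1) noise step is omitted.

```python
import random

def rosenbrock(x):
    # u = sum_{i=1}^{d-1} [100 (x_{i+1} - x_i^2)^2 + (1 - x_i)^2];
    # global minimum 0 at x_i = 1 for all i.
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def generate_pairs(n, d, seed=1):
    # Inputs drawn uniformly from the attribute domain |x_i| <= 10.
    rng = random.Random(seed)
    xs = [[rng.uniform(-10.0, 10.0) for _ in range(d)] for _ in range(n)]
    return [(x, rosenbrock(x)) for x in xs]
```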

Query workloads, training and testing sets
Query workload First, we generate query workloads to train and test our model. The random queries q = [x, θ], with centers x and radii θ over the data subspaces, are generated with uniformly distributed centers x ∈ [0, 1]^d for the R1 and R3 datasets and x ∈ [−10, 10]^d for the R2 dataset (recall that the data vectors in R1 and R3 are scaled and normalized in [0,1]). That is, the query centers can appear uniformly at random over the whole data space defined by the domains of the dataset dimensions, d ∈ {1, ..., 6}. The query radius θ affects the training time and the prediction quality (both predictability and goodness of fit); in brief, a larger (smaller) θ implies shorter (longer) training times, as will be elaborated later. For each query, the radius θ ∼ N(μ_θ, σ_θ²) is generated from a Gaussian distribution with mean μ_θ and variance σ_θ². We set the random radius θ ∼ N(0.1, 0.01) for the R1 and R3 datasets and θ ∼ N(1, 0.25) for the R2 dataset, covering ∼20% of each feature's data range; the justification for this setting is discussed later. Section 8.7 provides an extensive experimental and theoretical analysis of the impact of θ on the model performance. Based on this setup, we generated random queries q issued over the data spaces of the R1, R2, and R3 datasets. We use these queries for training and testing our models as follows.
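The workload generator for the R1/R3 setting can be sketched as follows. The helper is hypothetical; in particular, negative radii from the Gaussian tail are clipped to 0, a choice the paper does not specify.

```python
import random

def make_queries(n, d, mu_theta=0.1, var_theta=0.01, seed=0):
    # Uniform centers x in [0, 1]^d and radii theta ~ N(mu_theta, var_theta),
    # matching the R1/R3 workload setup described in the text.
    rng = random.Random(seed)
    sigma = var_theta ** 0.5
    queries = []
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        theta = max(0.0, rng.gauss(mu_theta, sigma))  # clip negative radii
        queries.append((x, theta))
    return queries
```

For the R2 workload, the same sketch applies with centers drawn from [−10, 10]^d and θ ∼ N(1, 0.25).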
Training and testing sets We now describe how we generate the training and testing query-answer sets from the above-mentioned query workload methodology. To train our model, we generate training files T consisting of random queries q, as described above, along with their actual aggregate answers y after executing them. To test the performance of our models, we generate different testing files V, dedicated only to predictions, containing random queries, of various sizes |T| ∈ {10³, ..., 10⁴} and M = |V| ∈ {10³, ..., 2·10⁴}, respectively. Specifically, the training sets T and testing sets V contain pairs of queries and answers, i.e., (q, y), where the queries were executed over the R1, R2, and R3 datasets (see also Fig. 4). We adopted the cross-validation technique [51] to evaluate all the predictive models by partitioning the original query-answer sets into a training set to train the models and a test set to evaluate them. We use 10-fold cross-validation, where the original query-answer set is randomly partitioned into 10 equally sized subsets. Of the 10 subsets, a single subset is retained as the validation set for testing the models, and the remaining 9 subsets are used as training data. The cross-validation process is then repeated 10 times (the folds), with each of the 10 subsets used exactly once as the validation data. The 10 results from the folds are then averaged to produce a single estimate of the above-mentioned performance metrics.

Model training and convergence
We train our model with the training set T and then evaluate and compare it with the ground truths REG (ProstgreSQL and MATLAB) and PLR (MATLAB) with the testing set V examining the statistical significance in accuracy with respect to paired-sample two-tailed Student t test with significance level 0.05. Note, the T and V sets contain explicitly different queries as discussed in Sect. 8.2.2. The granularity of quantization for our model is tuned by the percentage coefficient a ∈ [0.05, 1], involved in vigilance parameter ρ = a(d 1/2 + 1) (see Sect. 5 and Remark 7). Specifically, a value of quantization coefficient a = 1 corresponds to the generation of only one prototype, i.e., K = 1 (that is, coarse quantization), while any value of coefficient a < 1 (that is, fine grained quantization) corresponds to a variable number of prototypes K > 1 depending on the underlying (unknown) data distribution. The default value for the quantization percentage coefficient is a = 0.25 in our experiments. The model is adapting its parameters/prototypes in a stochastic manner every time a new query-answer pair (q, y) ∈ T is present. We set model convergence threshold γ = 0.01 in Algorithm 1 to transit from the training phase to the prediction phase. Moreover, the hyperbolic learning rate schedule for the t-th training pair is: η t = (1 + t) −1 [14] for the stochastic training of the prototypes as query-answer pairs (q, y) are retrieved (one at a time) from the training set T . Notably, to fairly compare against the optimal PLR data approximation, we set its maximum numbers of the automatically discovered linear models (in the forward building phase of the PLR algorithm) equal to K and the generalized crossvalidation penalty per PLR knot to 3 as suggested in [44]. The proposed statistical methodology requires training and prediction phases. We also introduce the intermediate phase in Sect. 
7.2, which is controlled by the consensual threshold r = 0.7 of the partially converged query prototypes involved in queries. That is, the model starts providing predictions in the intermediate phase when 70% of the query prototypes have converged. We first examine the global convergence of the training phase of our model and then study the variant of partial convergence w.r.t. the landmarks t and t* and the impact on the required number of training pairs and accuracy. Table 1 shows the experimental parameters and their range/default values. Figure 13 examines the termination criterion of the training Algorithm 1, Γ = max(Γ_J, Γ_H), against the number of training pairs (q, y) in training set T for d ∈ {2, 5} over the R1 and R2 datasets with quantization coefficient a = 0.25; similar results are obtained for the R3 dataset. The training phase terminates at the first instance t* when Γ ≤ γ, which is obtained for |T| ≈ 5300 training pairs. The total average training time, which includes both Q1 execution time and model update time, is (0.41, 0.36, 2.38) h for the R1, R3, and R2 datasets, respectively. This should not be perceived as overhead of our approach, as 99.62% of the training time is devoted to executing the queries over the DMS/statistical system, which we cannot avoid anyway, even in the typical case as shown in Fig. 4. Any traditional approach would thus also pay 99.62% of this cost. This only affects how early our approach switches to using the trained model versus executing the queries against the system. Our experiments show that excellent-quality results can be produced using a reasonable number of past executed queries for training purposes. Obviously, this can be tuned by setting different model convergence threshold γ values. We set γ = 0.01, where Γ is (stochastically) trapped around 0.0046, with deviation 0.0023 in R1 and 0.0012 in R3. In R2, Γ is strictly less than γ for |T| > 5300. 
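The stochastic training regime described above can be illustrated with a minimal sketch. This is not the paper's exact update rule from Algorithm 1 (the real criterion Γ = max(Γ_J, Γ_H) tracks two objective functions); as a simplifying assumption, we move the winning prototype toward each incoming query under the hyperbolic schedule η_t = (1 + t)^{-1} and use the displacement magnitude as a convergence proxy against γ.

```python
import numpy as np

def train_prototypes(queries, prototypes, gamma=0.01):
    """Stochastically adapt query prototypes, one training query at a time.

    Illustrative sketch: each incoming query vector q pulls its nearest
    prototype with hyperbolic learning rate eta_t = 1 / (1 + t); training
    stops once the convergence proxy (largest recent displacement) drops
    below the threshold gamma, transiting to the prediction phase.
    """
    W = np.array(prototypes, dtype=float)
    for t, q in enumerate(queries):
        eta = 1.0 / (1.0 + t)                          # hyperbolic schedule
        k = np.argmin(np.linalg.norm(W - q, axis=1))   # winner prototype
        delta = eta * (q - W[k])                       # stochastic update
        W[k] += delta
        if np.linalg.norm(delta) <= gamma:             # convergence proxy
            return W, t + 1                            # pairs consumed
    return W, len(queries)
```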
Figure 14 shows the relation between the percentage of training pairs |T|% used and the percentage of query prototypes that have partially converged in the intermediate phase, for d ∈ {2, 5} over the R1 dataset with quantization coefficient a = 0.25. Specifically, we observe that with only 35% of the training pairs, i.e., with almost 1800 query-answer pairs (landmark t ≈ 1800), we obtain convergence of 70-80% of the query-LLM prototypes. This indicates that the model has partially converged to a great extent, w.r.t. the number of query-LLM prototypes, while requiring a relatively small number of training pairs. In this case, the intermediate phase is deemed highly important for delivering predictions to the analysts while the model is still in a 'quasi-training' mode. The model converges at a high rate as more training pairs from the training set T are observed after the convergence of 70% of the query-LLM prototypes. This suggests setting the consensual threshold for the intermediate phase to r = 0.7. However, during this phase, the predictions delivered to the analysts have to be assessed w.r.t. prediction accuracy, as will be discussed in Sect. 8.4. Figure 15 (upper) shows the evolution of the joint objective functions J and H, and Fig. 15 (lower) shows the evolution of the individual norm difference of each query prototype from the current average, i.e., the individual convergence criterion, against the percentage of training pairs. We observe that after 37% of the training pairs of the training set T, all the prototypes start to transit from their training phase to the prediction phase, while minimizing their deviations from the average convergence trend of the model. 
This indicates the flexibility of our model: it can be in the prediction phase, proceeding with predictions for those data subspaces whose corresponding query prototypes have already converged, and, at the same time, in the training phase for those query prototypes that have not yet converged, thus remaining in a learning mode. After the global convergence, i.e., when the consensual ratio reaches unity, the model transits entirely into the prediction phase, thus achieving significantly fast query execution without accessing the data. This signifies the scalability of our query-driven approach. Figures 16 and 18 (upper) show the RMSE e of the predicted answer y (A1 metric) against the resolution of the quantization coefficient a over the R1, R2, and R3 datasets, respectively, for Q1 queries using the generated testing set V (see Sect. 8.2.2). For different quantization coefficient a values, our model identifies subspaces where the function f(x, θ) behaves almost linearly; thus, the query-LLM functions approximate such regions with high accuracy. Interestingly, by quantizing the query space with a small coefficient a value, i.e., a fine-grained resolution of the query space, high accuracy is achieved (low RMSE e values) even in the presence of obvious nonlinear dependencies among dimensions. This is due to the fact that with small coefficient a values, we focus on very specific query subspaces where linear approximations suffice to approximate the query function f. This is expected to result in a high number of prototypes K, in order to build many query-LLM functions to capture all the possible nonlinearities of the query function f, as will be discussed below. Figure 23 (lower) shows the number of query prototypes formed during the query space quantization. Indicatively, we obtain K = 450 prototypes for quantization coefficient a = 0.25. 
This indicates that only 450 prototypes are required to accurately predict the aggregate answer y for data dimension d = 5; that is, 450 query-LLM functions are required to capture the curvature of the query function f over the query subspaces. As the quantization coefficient a → 1, we quantize the query function f into fewer query-LLM approximations, thus yielding higher RMSE values, as expected, due to the coarse approximation of the function f. Figures 17 and 18 (lower) show the robustness of our model w.r.t. predictability with various testing file sizes |V| for the R1, R2, and R3 datasets, respectively. Once the LLM model has converged, it provides a low and constant prediction error in terms of RMSE for different data dimensions d, indicating the robustness of the training and convergence phase of the proposed model. This means that, after transiting into the prediction phase, the model can accurately predict the aggregate answer y via the identified and optimized query-LLM functions; thus, no query processing and no data access are needed at that phase.
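How the number of prototypes K grows as the coefficient a shrinks can be sketched with a vigilance-style quantization, assuming (as a simplification of the paper's query space quantization of Sect. 5) that a query joins its nearest prototype when within ρ = a(√d + 1) and spawns a new prototype otherwise.

```python
import numpy as np

def quantize_queries(queries, a=0.25):
    """Grow query prototypes under a vigilance test (illustrative sketch).

    The vigilance parameter rho = a * (sqrt(d) + 1) bounds how far a query
    may lie from its nearest prototype: within rho, the prototype absorbs
    the query (running mean update); beyond rho, a new prototype is created.
    Smaller a gives finer quantization and hence more prototypes K.
    """
    d = len(queries[0])
    rho = a * (np.sqrt(d) + 1.0)
    prototypes, counts = [], []
    for q in np.asarray(queries, dtype=float):
        if prototypes:
            dists = [np.linalg.norm(q - w) for w in prototypes]
            k = int(np.argmin(dists))
            if dists[k] <= rho:
                counts[k] += 1
                prototypes[k] += (q - prototypes[k]) / counts[k]  # running mean
                continue
        prototypes.append(q.copy())   # vigilance failed: new prototype
        counts.append(1)
    return prototypes                 # K = len(prototypes)
```

With a = 1 over data confined to the unit square, ρ exceeds the square's diameter, so a single prototype results (K = 1), matching the coarse-quantization extreme described above.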

Evaluation of Q1 query: predictability and scalability
To assess the efficiency and scalability of the mean-value prediction query Q1, Fig. 24 (upper) shows in log scale the average Q1 execution time over the dataset R2 for LLM (with quantization coefficient a = 0.25), corresponding to K = 92 and K = 450 query prototypes for dimensions d ∈ {2, 5}, respectively. Our method requires just 0.18 ms per query over massive data spaces in its prediction phase, offering up to five orders of magnitude speedup (0.18 ms vs up to 10^5 ms/query). This is expected, since the LLM-based model predicts Q1's outputs and does not execute the query over the data during prediction, while achieving high prediction accuracy.
We now examine the impact of the model's partial convergence on predictability, i.e., when the model is in the intermediate phase between the training and prediction phases. Figure 19 (upper) shows the partial RMSE ẽ of the predicted aggregate answer y (A1 metric) during the intermediate phase of the model, and the achieved RMSE e during the prediction phase, against the percentage of training pairs for consensual threshold r = 0.7 over dimensions d ∈ {2, 5} for the dataset R1. Similar results are obtained for the R2 and R3 datasets. Specifically, the partial RMSE ẽ is obtained only from the converged query prototypes during the intermediate phase, as described in Case A.I in Sect. 7.2 for r = 0.7, that is, from those query prototypes for which any additional training pair (q, y) does not significantly move them in the query space. We observe the predictive capability of our model w.r.t. the number of training pairs: with almost 35% for d = 2 (and 45% for d = 5) of the observed training pairs, the model achieves an RMSE value close to that obtained in the full prediction phase, i.e., after observing 100% of the training pairs from T. This indicates the flexibility of our model to proceed with accurate predictions even while in the intermediate phase, where some of the query prototypes are still in training mode until the model entirely converges.
More interestingly, Fig. 19 (lower) shows the efficiency of our model in achieving high prediction accuracy even during the intermediate phase described above. While in the intermediate phase, the model can provide RMSE values close to those at the end of the training phase, with 70% of the prototypes converged after observing 37% of the training pairs from the training set T. This demonstrates the fast convergence of the model and its immediate applicability for delivering predictions to analysts and real-time predictive analytics applications while not yet fully converged.

Remark 10
The RMSE in Fig. 19 (lower) is normalized to [0, 1] for comparison with the percentages of converged prototypes and training pairs.

Evaluation of Q2 query: PLR data approximation and scalability
We evaluate the Q2 queries using our query-/data-LLM model against the REG and PLR models and show the statistical significance of the derived accuracy metrics. The linear/nonlinear behavior of the data function g is interpreted through the variance-explanation and model-fitting metrics, the fraction of variance unexplained (FVU) and the coefficient of determination (CoD), against the quantization resolution coefficient a and the number of model prototypes K. Figures 20 and 21 (upper) show the sum of squared residuals SSR between the actual answers and the predicted answers for the data-LLMs and the REG model with d ∈ {2, 5, 6} over the datasets R1, R2, and R3 with p < 0.05. Figures 22 (upper) and 21 (lower) show that the fraction of variance unexplained of the function approximation is FVU < 1 for our model, while for the REG model we obtain FVU > 1, p < 0.05. This indicates the capability of our model to capture the nonlinearities of the underlying data function g over all the data subspaces, compared with the REG model provided in modern DMSs. Specifically, as the query space quantization gets coarser, that is, a low resolution with a → 1, our model approaches the FVU of the REG model, i.e., resulting in a small number of data-LLM functions. This is because we force our model to generate fewer data-LLM functions, which thus cannot effectively capture the nonlinearities of the data function g over all possible data subspaces. Indicatively, we obtain only one data-LLM when a = 1, i.e., only a global linear model approximates the data function g. As expected, the optimal PLR model achieves the lowest FVU by capturing the nonlinearity of the data function g with multiple linear basis functions. 
For quantization coefficient a < 1, we achieve a low FVU, and our model effectively captures the nonlinearity of the data function g by autonomously deriving multiple data-LLMs; for quantization coefficient a < 0.1, this is very close to the actual PLR approximation, with p < 0.05. As the Rosenbrock function g is nonlinear, our model attempts to decompose it into data-LLMs and provide a fine-grained explanation of the behavior of the data function g. This cannot be achieved by the in-DMS model REG, since the Rosenbrock function cannot be expressed by a 'global' line within the entire data space D(x, θ). The PLR model shows statistically superior FVU performance (p < 0.05), while being dramatically inefficient compared to our model, as shown in Fig. 24 (lower). Our model, on the other hand, conditionally quantizes the data function g into data-LLMs, thus providing the list S of local lines that significantly better explain the data subspace, without accessing the data, and thereby delivering highly accurate model fitting.
In Fig. 22 (lower) and Fig. 21 (lower), the data function g in the R1 and R3 datasets does not behave linearly in all the random data subspaces. This is evidenced by the FVU metric of the REG model, which is relatively close to or over 1 for d = 2, d = 5, and d = 6 with p < 0.05. This information is unknown a priori to analysts; hence, results using the REG model would be fraught with approximation errors, indicating 'bad' model fitting. It is worth noting that the average number of data-LLM functions returned to the analysts for all the issued testing queries in the testing set V is |S| = 4.62 per query, with variance 3.88. This denotes the nonlinear behavior of the data function g and the fine-grained and accurate explanation of the function g within a specific data subspace D(x, θ) per query q = [x, θ]. Here, the PLR model achieves the lowest FVU value, i.e., the best model fitting, as expected (p < 0.05), but note that this is also achieved by our data-LLM functions with a quantization coefficient a < 0.1. Figure 23 (upper) shows the coefficient of determination CoD (R^2) for the LLM, REG, and PLR models over the R1 dataset (similar results are obtained for datasets R2 and R3) at a significance level of 5%. A positive value of R^2 close to 1 indicates that a linear approximation is a good fit for the unknown data function g, while a value of R^2 close to 0, and especially a negative value, indicates a significantly bad fit, signaling inaccuracies in function approximation. In our case, with K > 60 query prototypes, our model achieves a high and positive R^2, indicating that it better explains the random queried data subspaces D(x, θ) than the current in-DMS REG model over exactly the same data subspaces, with p < 0.05. The REG model achieves low R^2 values, including negative ones; thus, it is inappropriate for predictions and function approximation. 
This indicates that the underlying data function g exhibits strong nonlinearities, which are not known to the analysts a priori. By adopting our model, the analysts progressively learn the underlying data function g and, via the derived data-LLM functions, also capture the hidden nonlinearities over the queried data subspaces. Such subspaces could never be known to the analysts unless exhaustive data access and exploration took place. This capability is only provided by our model. Notably, as the quantization becomes finer (lower coefficient a, hence higher K), our model significantly increases the coefficient of determination R^2, indicating a better capture of the specificities of the underlying data function g and thus providing more accurate linear models. Again, the data-access-exhaustive PLR model achieves the highest CoD values, however at the cost of high inefficiency; see Fig. 24 (lower). Regardless, note that our model can match PLR's CoD value by simply increasing K, i.e., the granularity of the query space quantization.
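The two goodness-of-fit metrics used throughout this comparison can be computed as follows; this is the standard definition (FVU = SSR over total variance, R^2 = 1 − FVU), which matches the interpretation above that FVU > 1 (negative R^2) signals a fit worse than predicting the plain mean.

```python
import numpy as np

def fvu_and_cod(actual, predicted):
    """Fraction of variance unexplained (FVU) and coefficient of
    determination (CoD, R^2) for a fitted model: FVU = SSR / SST and
    R^2 = 1 - FVU, where SSR is the sum of squared residuals and SST
    the total sum of squares around the mean of the actual values.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ssr = np.sum((actual - predicted) ** 2)        # sum of squared residuals
    sst = np.sum((actual - actual.mean()) ** 2)    # total sum of squares
    fvu = ssr / sst
    return fvu, 1.0 - fvu                          # (FVU, R^2)
```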
Figures 24 (lower) and 26 (lower) show the Q2 execution time over the datasets R2 and R3, respectively, for data-LLM (a = 0.25, i.e., K = (92, 450) for d = (2, 5)) through Algorithm 3, the REG model from PostgreSQL (d = 2) (REG-DBMS), the REG model from MATLAB (d = 5) (REG-MATLAB), and the optimal PLR, against dataset size. The derived results are statistically significant with p < 0.05. Our model is highly scalable (note the flat curves) in both datasets and highly efficient, achieving 0.56 ms/query and 0.78 ms/query (even for massive datasets), up to six orders of magnitude better than the REG and PLR models for the R2 and R3 datasets, respectively. The full picture, then, is that our model provides ultimate scalability (being independent of the size of the dataset) and many orders of magnitude higher efficiency, while ensuring great goodness of fit (CoD, FVU), similar to that of PLR.
The PLR model combined with data sampling techniques could also be considered as an effective efficiency-accuracy trade-off. Figure 24, however, shows the efficiency limitations of such an approach. The PLR model, even over a very small random sample of size 10^6 (0.01% of the 10^10 dataset), is shown to be more than 3 orders of magnitude less efficient than our model. Also, recall that PLR here is implemented over MATLAB, with all data in memory, hiding the performance costs of a full in-DBMS implementation of the selection operator (computing the data subspace), the sampling of the data space, and the PLR algorithm (whose performance is shown in Fig. 24). All of this is in stark contrast to the O(1) cost of our model. Finally, note that, to our knowledge, PLR is not currently implemented within DMSs, regardless of its cost.

Data output predictability
We compare our data-LLM functions against the in-RDBMS REG and PLR models for providing accurate data output predictions w.r.t. the A2 metric, i.e., data prediction performance at a 5% level of statistical significance. We use our data-LLM functions to provide data output u predictions over unseen data subspaces D(x, θ), using (31) and (29) for prediction. Figures 25 and 26 (upper) show the RMSE v for the LLM, REG, and PLR models over the R1, R2, and R3 datasets against the testing set size |V|.
The LLM model can successfully predict the data output u, being statistically robust in terms of the number of testing pairs |V| (p < 0.05), and achieves comparable or even lower prediction error than the REG model. This denotes that our model, by fusing different data-LLM functions that better capture the characteristics of the underlying data function g, provides better data output u predictions than a 'global' REG model over random queried data subspaces D. Evidently, the PLR model achieves the lowest RMSE value by actually accessing the data and capturing the actual nonlinearity of the data function g through linear models. However, this is achieved with relatively high computational complexity, higher than that of the REG model, including a polynomial-complexity data-access process [44]. Note that the data output prediction times for the LLM, REG, and PLR models in this experiment are the same as those presented in Fig. 24: the LLM model executes our Algorithm 2 by replacing θ = θ_k in (30), ∀w_k ∈ W(q); the REG model creates the linear approximation over the data space D; and PLR adaptively finds the best linear models for data fitting in each prediction request.
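The idea of fusing data-LLM functions for output prediction can be sketched as follows. This is an illustrative assumption, not the paper's exact Eqs. (29)-(31): each prototype w_k carries a hypothetical local line u = b_k + m_k · x, and the prediction is their Gaussian-kernel-weighted average; the bandwidth h is an invented free parameter.

```python
import numpy as np

def predict_output(x, prototypes, models, h=0.2):
    """Predict a data output u at input x by fusing local linear models.

    Illustrative LLM-fusion sketch: `models` is a list of (b_k, m_k) pairs,
    one local line u = b_k + m_k . x per prototype w_k in `prototypes`.
    The fused prediction weights each local line by a Gaussian kernel on
    the distance from x to its prototype, so nearby models dominate.
    """
    x = np.asarray(x, dtype=float)
    W = np.asarray(prototypes, dtype=float)
    weights = np.exp(-np.sum((W - x) ** 2, axis=1) / (2.0 * h ** 2))
    weights /= weights.sum()                       # normalize to a convex mix
    locals_ = np.array([b + np.dot(m, x) for b, m in models])  # per-LLM output
    return float(np.dot(weights, locals_))
```

When all local lines agree, the fusion reproduces that line exactly; when they differ, the prediction interpolates between the subspace-specific behaviors, which is what lets a piecewise model outperform a single global REG line.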
Overall, the proposed LLM model, through its training, intermediate, and prediction phases, achieves statistically significant scalability and accuracy performance compared with the in-RDBMS REG model and the data-access-intensive PLR model (p < 0.05). The scalability of the proposed model in the predictive analytics era is achieved by predicting the query answers and delivering to analysts the statistical behavior of the underlying data function without accessing the raw data and without processing/executing the analytics queries, as opposed to the data-driven REG and PLR models in the literature.

Impact of radius θ
In this section, we examine the impact of the query radius θ on the predictive and scalability performance of our query-driven approach. Consider that the query function f is approximated by some function f̂. Then, the expected prediction error (EPE) in (5) for a random query q with actual aggregate answer y decomposes as EPE(q) = (f(x, θ) − E[f̂(x, θ)])^2 + Var(f̂(x, θ)) + σ^2. The first term is the squared bias, i.e., the difference between the true function f and the expected value of the estimate E[f̂], where the expectation averages over the randomness in the dataset. This term will most likely increase with radius θ, which implies an increase in the number of input data points n_θ(x) from the dataset B. The second term is a variance that decreases as the query radius θ increases. The third term is the irreducible error, i.e., the noise ε in the true relationship, with variance σ^2 = Var(ε), which cannot fundamentally be reduced by any model. The θ value controls the influence that each neighboring query point has on the aggregate answer prediction. As radius θ varies, there is a bias-variance trade-off. An increase in radius θ results in a smoother aggregate answer prediction but increases the bias. On the other hand, the variance goes to zero since, for instance, with a high radius θ, the prediction f̂(x, θ) ≈ E[y], i.e., unconditioned on the query q. Imagine queries whose radii include all data points x_i in the entire data space. In that case, we obtain a constant aggregate answer for each issued query, i.e., the aggregate answer y = (1/n) Σ_{i=1}^{n} u_i, which is the average of all data outputs u_i; thus, there is no need to predict the aggregate answer y. By selecting a radius θ such that n_θ(x) ≈ n, the aggregate answer prediction is a relatively smooth function of the query q, but has little to do with the actual positions of the data vectors x_i over the data space. Evidently, the variance contribution to the expected error is then small. 
On the other hand, the prediction for a particular query q is systematically biased toward the population response, regardless of any evidence for local variation in the data subspace D(x, θ). The other extreme is to select a radius θ such that n_θ(x) = 1 for all queries. We can then expect less bias, and, in this case, the bias goes to zero as n → ∞ [32]. Finally, given the true model and infinite data, we should be able to reduce both the bias and variance terms to 0. That is, as the number of input data points n and n_θ(x) → ∞ such that n_θ(x)/n → 0, then f̂(x, θ) → E[y|x, θ]. However, in the real world, with approximate models and finite data, the radius θ controls the trade-off between minimizing the bias and minimizing the variance.
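The bias-variance behavior of the radius can be demonstrated with a small simulation. This is a sketch under invented assumptions (a 1-D synthetic ground truth u = sin(2πx) + ε with noisy observations, not the paper's Rosenbrock datasets): for a mean-value query centered at x, a large radius θ drags the answer toward the population mean (high squared bias, low variance across datasets), while a small radius tracks the local structure (low bias, higher variance).

```python
import numpy as np

def mean_value_answer(data_x, data_u, x, theta):
    """Exact mean-value query: average u over the subspace |x_i - x| <= theta."""
    mask = np.abs(data_x - x) <= theta
    return data_u[mask].mean()

def bias_variance_vs_radius(theta, trials=200, n=2000, x=0.25, seed=0):
    """Empirical squared bias and variance of the range-mean answer at x for a
    given radius theta, over repeated noisy datasets drawn from the assumed
    ground truth u = sin(2*pi*x) + N(0, 1).
    """
    rng = np.random.default_rng(seed)
    answers = []
    for _ in range(trials):
        data_x = rng.uniform(0.0, 1.0, n)
        data_u = np.sin(2.0 * np.pi * data_x) + rng.normal(0.0, 1.0, n)
        answers.append(mean_value_answer(data_x, data_u, x, theta))
    answers = np.array(answers)
    truth = np.sin(2.0 * np.pi * x)          # g(x) at the query center
    bias = answers.mean() - truth
    return bias ** 2, answers.var()          # (squared bias, variance)
```

Comparing a small radius (θ = 0.05) against a large one (θ = 0.5) exhibits the trade-off stated above: the squared bias grows with θ while the variance of the answer across datasets shrinks, because the larger subspace averages many more noisy points.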
We experiment with different mean values μ_θ of the radius θ ∼ N(μ_θ, σ_θ^2), with a fixed variance σ_θ^2, to examine the impact on model training, the quality of aggregate answer prediction, and the PLR approximation of the underlying data function g. We examine the number of training pairs |T| that our method requires to reach the convergence threshold γ = 0.01. We also examine the impact of radius θ on the RMSE and CoD metrics. Hence, three factors (|T|, RMSE, and CoD) are influenced by the radius θ. We experiment with mean radius μ_θ ∈ {0.01, . . . , 0.99} over the R1 dataset (similar results are obtained for the R2 and R3 datasets). Consider queries with high radius θ drawn from a Gaussian N(μ_θ, σ_θ^2) with a high mean radius μ_θ. Then, the radius θ nearly covers the entire input data range, and the aggregate answer y is close to the average value of the output u for all queries, i.e., n_θ contains all x input data points. In this case, all query prototypes w_k correspond to constant query-LLM functions f_k(x, θ) ≈ y_k = y, where the aggregate answer y = E[u], unconditioned on x and radius θ. Hence, the training and convergence of all LLMs is trivial, since there is no specificity to be extracted from each query-LLM function f_k. Our method converges with a low number of training pairs |T|, as shown in Fig. 27 (lower). On the other hand, a small θ value entails learning 'meticulously' all the specificities of all LLMs. In this case, our method requires a relatively high number of training query-answer pairs |T| to converge; see Fig. 27 (lower).
In terms of accuracy, the higher the radius θ, the lower the RMSE e becomes. With a high radius θ, all query-LLM functions reduce to constant functions, with the extreme case f_k ≈ E[u], ∀k, as discussed above; thus, the RMSE e = ((1/M) Σ_{i=1}^{M} (y_i − ŷ_i)^2)^{1/2} → 0 with y_i ≈ E[u], due to the fact that n_θ contains all input data and, thus, ŷ_i ≈ E[u] (see Fig. 27 (upper) with |T| = 5359 training pairs required for convergence w.r.t. γ = 0.01). However, this comes at the expense of a low CoD R^2, since the data function g is approximated 'solely' by a constant approximate function g(x) ≈ E[u] (see Fig. 27 (lower)). When the radius θ is small, we attempt to estimate the query function f over (x, θ) and, thus, approximate the data function g(x). This, however, requires many training query-answer pairs |T|; see Fig. 27 (lower). Overall, there is a trade-off between the number of training pairs |T| and the approximation and accuracy capability. To obtain a quality approximation, the CoD metric should be strictly greater than zero. This is achieved by setting the mean radius value μ_θ < (0.4, 0.5) for d ∈ (5, 2). We can then balance the RMSE against the training time (number of training pairs |T|), as shown in Figs. 27 and 28. In addition, there is a trade-off between training effort and predictability. As shown in Figs. 27 and 28, and as explained above, a low μ_θ value results in a high RMSE and a high training effort in terms of |T| size. Combining these trade-offs, to achieve a low RMSE and a high goodness of fit (i.e., a high positive CoD value) for a reasonable training effort, we set the mean radius value μ_θ = 0.1, which corresponds to ∼20% of the data range for σ_θ^2 = 0.01. Finally, Fig. 29 shows the impact (trajectory) of the mean radius μ_θ on the training set size |T|, the prediction accuracy w.r.t. RMSE e, and the goodness of fit w.r.t. the CoD R^2 metric for d ∈ (2, 5) for R1; we obtain similar results for the R2 and R3 datasets.

Conclusions and future plans
We focused on the inferential tasks of piecewise linear regression and predictive modeling, which are central to in-DMS predictive analytics. We introduced an investigation route whereby answers from previously executed aggregate and regression queries are exploited to train novel statistical learning models, which discover and approximate the unknown underlying data function with piecewise linear regression planes, predict future mean-value query answers, and predict the data output. We contribute a statistical learning methodology that yields highly accurate answers and data function approximation based only on the query-answer pairs, avoiding data access after the model training phase. The performance evaluation and comparative assessment revealed very promising results.
Our methodology is shown to be highly accurate, extremely efficient in computing query results (with sub-millisecond latencies even for massive datasets, yielding up to six orders of magnitude improvement compared to computing exact answers, produced by piecewise linear regression and global linear approximation models), and scalable, as predictions during query processing do not require access to the DMS engine, thus being insensitive to dataset sizes.
Our plans for future work focus on developing a framework that can dynamically and optimally switch between the training/intermediate phases and the query prediction phase as analysts' interests shift between data subspaces. Moreover, the framework is expected to cope with nonlinear approximations by evolving and expanding the fundamental representatives of both data and query subspaces, for supporting robust query subspace adaptation and for dealing with data spaces with online data updates.