Hybrid design for sports data visualization using AI and big data analytics

In sports data analysis and visualization, understanding collective tactical behavior has become an integral part. Interactive and automatic data analysis is instrumental in making use of growing amounts of compound information. In professional team sports, gathering and analyzing sportsperson monitoring data are common practice, with the aim of evaluating fatigue and subsequent adaptation responses, analyzing performance potential, and reducing injury and illness risk. Data visualization technology, born in the era of big data analytics, provides a good foundation for further developing fitness tools based on artificial intelligence (AI). Hence, this study proposes a video-based effective visualization framework (VEVF) based on artificial intelligence and big data analytics. This study uses a machine learning method to categorize sports videos by extracting both the videos' temporal and spatial features. Our system is based on convolutional neural networks united with temporal pooling layers. The experimental outcomes demonstrate that the recommended VEVF model achieves an accuracy ratio of 98.7%, recall ratio of 94.5%, F1-score ratio of 97.9%, precision ratio of 96.7%, error rate of 29.1%, performance ratio of 95.2%, and efficiency ratio of 96.1% compared to other existing models.

Sports contain players' physical activity, and behavioral actions between athletes allow time, description, and count data on actions to be recorded [2]. The advent of competitive data has, therefore, given fuel to research in competitive sports and offers a platform to investigate the laws of human life and human inclinations [3]. The big data age has had an unparalleled influence on sport's development [4]. Closely connected big data services, like health data, exercise data, training statistics, and analysis, may successfully assist athletes in creating game plans and have become vital ways to win contests [5]. Advanced big data technology has changed the realm of sports, and the increase in sports data has created new difficulties and opportunities in big sports data [6]. Big sports data are the result of the growth of the Internet and of sports. For all major sports, analysts may frequently extract enormous amounts of data that media, fans, athletes, and organizations can use [7]. These efforts frequently go hand in hand with top technology providers that have realized the enormous benefits of sports analytics. The ubiquity, diversity, and easy accessibility of sports data make them especially appealing to many visualization researchers [8].
Artificial intelligence and computer vision technologies are becoming trendy for analyzing videos in the sports field [9]. Convolutional neural network (CNN)-based models have been extensively utilized to efficiently solve complex machine translation, signal processing, and computer vision tasks [10]. Computer vision is presently shifting from mathematical methods to machine learning approaches due to their effectiveness in extracting compound features without human involvement [11]. In this expanding field of study, there is a tremendous opportunity for innovation and development. For example, predictive analysis by artificial intelligence can be applied to enhance health and fitness [12]. Wearable applications can offer information on player wear and strain, reducing damage to sportspeople. In addition, AI can discover trends in gaming tactics, plans, and weaknesses [13].
Data are a significant part of the sports industry for trainers, performers, management, sports medicine employees, and supporters [14]. Data analytics can aid teams in winning games; these data can help enhance sportspersons' performance, reduce injuries, and inspire fans to join games [15]. In addition, big data help us to develop improved sports strategies. Whether for an individual or a team sport, strategic management is essential, and these strategies depend upon how professional athletes and teams compete against their opponents. Modern coaching leverages large data sets to develop successful tactics for individual and team athletes [16]. People will indeed follow credible leaders, and famous coaches have the edge over younger ones since they have access to a far greater bank of information. For a coach, the most obvious application of data is in the collection of figures and statistics. While data analytics can assist a team in winning games and enhance player performance, the same numbers can also motivate spectators to attend games. Data science enables coaches of professional teams to build hyper-personalized athlete match plans for each match played by the team; the team's strategies remain unexpected yet effective [17].
Analyzing performance in sports helps coaches and players reach their goals by identifying activities that can guide decision-making, maximize performance, and assist them on their road to excellence. This often includes tactical evaluation, movement analysis, video and statistical databasing/modeling, and coach/player data displays. Due to the improvements in technology, data collection, storage, and coaching requirements for data presentation have changed, and analysts now require much more knowledge of many tracking devices and software packages. A video recording of a game can help eliminate biases and give a more impartial perspective of what transpired on the field. For players and coaches to understand what went well and what went wrong, performance analysts collect data from all events occurring on-field. Sports like baseball, soccer, football, and basketball, and fields like fantasy sports, have used big data to measure players' effectiveness and forecast future performance [18]. Whether it is historical data, vital scorekeeping, algorithmic forecasts, or plain player statistics, big data are an indispensable element of the sports industry [19]. Big data enables teams and organizations to stay updated on performance, carry out forecasts, and be determined in the world of sports [20]. Beyond that, all parties involved in the industry, including analysts, experts, and supporters, continuously process data to update play-by-play coverage or make predictions [21]. Big data analytics and artificial intelligence have revolutionized the sports sector by clarifying statistical information and turning quantitative and qualitative data into understandable and stable content [22].
The first step towards making sense of data is visualizing it. Using data visualization, users can tell stories by organizing data into a form that is simpler to grasp, highlighting patterns and outliers. In addition to removing the noise from the data and displaying the relevant information, excellent visualizations tell a story. It takes a careful balance between design and function to create an effective data visualization. As a result, data analysts utilize a wide range of tools, such as graphs, diagrams, and maps, among others, to translate and show data and data connections. To make data comprehensible, the proper approach and its setup are typically required.
The major contributions of the study are:

• Designing the VEVF model for sports data visualization based on AI and big data analytics.

• Assessing the mathematical model of a convolutional neural network for sports video classification.

• Performing numerical experiments showing that the recommended model enhances classification accuracy, precision, sports performance, and prediction ratio and reduces error rate and computation time compared to other existing models.
The rest of the study is arranged as follows: "Literature survey" discusses the existing models and frameworks of sports data visualization and sports monitoring. Then, in "Video-based effective visualization framework (VEVF)", the proposed VEVF model design has been discussed with significant theoretical validation and statistical modeling. Next, in "Performance analysis", the experimental results have been implemented with critical discussion, and finally, "Conclusion" concludes the research paper.

Literature survey
Rafiq et al. [23] suggested transfer learning (TL) for sports video summarization via scene classification. This paper focused on practical implementations of existing approaches and provided a way to obtain high-grade scene categorization. Cricket was considered as a case study, with five categories, i.e., batting, bowling, boundary, crowd, and close-up, classified for scene categorization using the previously trained AlexNet. The suggested technique used newly added, fully connected layers. Data augmentation over smaller data sets was applied to attain a high accuracy of 99.26%. Finally, they compared the results to baseline techniques to demonstrate the superiority of the strategy over the latest models. Their performance results were evaluated on cricket videos and compared against several deep learning models like Inception V3, VGGNet16, and AlexNet. Their studies showed that the AlexNet-based technique delivers superior outcomes to current suggestions.
Li et al. [24] proposed neutrosophy theory (NT) to visualize sports news data. This paper selected two large sporting events of different types and sizes and used Excel data statistics techniques, Newtonian analysis of mechanical systems, chaotic dynamics, etc., to identify and classify events in sports media and to research visualization reports on the development of sports media rules. Sports media events have a beneficial impact on the scope of sporting activities. Their evolution comprises three periods, namely a "beginning period," a "high-tide period," and a "descent period," with comparable process characteristics and evident chaos, such as starting values, innate randomness, fractal resemblance, etc. Finally, the visualization of sporting news data based on the idea of neutrosophy was applied in the news sector.
Fenil et al. [25] introduced the Histogram of Oriented Gradients and Bidirectional Long Short-Term Memory (HOG-BDLSTM) for real-time violence detection in a football stadium. A system for perceiving violence in real time was suggested in this work, in which vast information could be processed and aggression recognized by simulating human intelligence. The system's input is the huge number of real-time video feeds from diverse sources, analyzed by Spark. Spark separated the video into frames and extracted features from each frame by utilizing the HOG algorithm. The frames were then labeled based on a violence model, a model for the human aspect of the person, and a negative model used to train the BDLSTM network to identify violent situations. The output was, therefore, created in connection with both past and future information.
Pavitt et al. [26] discussed natural language processing and conversational interfaces (NLP-CI) for supporting match analysis and scouting through artificial intelligence. Here, they explained how sports professionals could be given analytical help in exploring and gaining insight into traditional data sources with some frequently available AI applications. In particular, they focused on leveraging natural language processing and conversational interfaces so that users could explore their data sets, and the results derived from the analyses done on them, with a simple and time-saving toolbox. They demonstrated the benefit of presenting powerful AI and analytic techniques to domain experts, showing the potential for an impact both at sport's elite level, where AI and analysis are available, and at the more popular level, where access to specialist resources is generally limited.
ShuttleSpace [27] was developed by Ye et al. to help badminton experts evaluate trajectory data. Sports trajectory data, such as the movement of players and balls, hold a wealth of information on player behavior. As a result, coaches and analysts use them extensively to enhance their players' performance. Thanks to recent advances in immersive technologies such as virtual reality, researchers have progressively been able to experience these 3D trajectories immersively.
A web server named SGDB, which stands for Sports Gene DataBase, was built by Cao et al. [28] after combining eight published gene expression datasets from human skeletal muscle. Searching for genes that were expressed with and without exercise is possible using the SGDB database. While allowing for the visualization of changes in females and males, the database makes it possible to identify the effects of physical activity on gene expression by analyzing the data. Additional information can be found on different types of exercise and the link between activity and age.
Based on the survey, there are several challenges in implementing sports data visualization. Hence, in this paper, the VEVF model has been suggested. The following section discusses the proposed VEVF model briefly.

Video-based effective visualization framework (VEVF)
Sports analytics and data visualization offer player selectors, managers, and players a broader platform to enhance performance on the field. Within this framework, policymakers and analysts use statistical instruments and algorithms on the data to gain insight into the future. Data visualization is one of the most important outcomes in the field of sports analytics: the visual representation of data is easier to grasp than numbers and words. Big data analysis principles include learning analytics, gaming analytics, predictive analysis, and data viewing to evaluate significant user-generated game analytics data. Artificial and real-world datasets involve several visualization techniques: uncertainty visualization, data collections, and multidimensional/multivariate data viewing. Differences in the distribution of the ensemble are the most critical elements in proper game analysis, and presenting the data in graphs and charts is the most important visualization and predictive analytics component. The data gathered are shown to the team's selectors, the captain, and the executives of the next auction to gain a better and clearer knowledge of all the season factors, teams, all-rounders, and batsmen. Figure 1 shows the proposed VEVF model. Successive frames of a video are correlated in terms of semantic content; thus, the classification system can obtain a promising outcome if the frames' temporal relationships are monitored. Therefore, selecting a certain number of frames to evaluate the video data is a necessity. In this case, a certain number of successive RGB color frames are selected from the input video. The raw frames then undergo additional processing, including resizing, rescaling, multiplying, and normalization. These pre-processing stages convert the raw frames into processed frames suitable for neural network analytics.
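The pre-processing stage described above (resizing, rescaling, and normalization of raw RGB frames) can be sketched in plain Python. The frame size, target resolution, and per-frame statistics below are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the pre-processing pipeline: a raw RGB frame (nested lists of
# 0-255 pixel values) is resized, rescaled to [0, 1], and normalized.

def resize_nearest(frame, out_h, out_w):
    """Nearest-neighbour resize of a frame given as rows x cols x 3 lists."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def rescale(frame):
    """Map 0-255 pixel values to [0, 1]."""
    return [[[ch / 255.0 for ch in px] for px in row] for row in frame]

def normalize(frame):
    """Subtract the per-frame mean and divide by the standard deviation."""
    vals = [ch for row in frame for px in row for ch in px]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = var ** 0.5 or 1.0
    return [[[(ch - mean) / std for ch in px] for px in row] for row in frame]

# A tiny 4x4 "frame" reduced to 2x2, rescaled, and normalized:
raw = [[[r * 10 + c * 10] * 3 for c in range(4)] for r in range(4)]
processed = normalize(rescale(resize_nearest(raw, 2, 2)))
```

In a real pipeline the same chain would be applied to every selected frame before it enters the network.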
Before processing, it is necessary to extract representative information to create optimally structured and logically arranged data structures that reflect the dependencies between rules. Each incoming data frame is compared against this data structure to determine the least expensive matching rule. The spatial features are then collected from the processed video images via a convolutional neural network. The proposed CNN has convolution layers followed by max-pooling and an activation function to extract the features from the processed frames. Next, the retrieved features are passed to the classification stage to collect the temporal features through the CNN. Finally, the sport class is classified using a Softmax-function-based output layer fed by fully connected layers. Figure 2 shows the sports data visualization management system. The model has the following functions: (1) real-time gathering of athletes' motion data via wearable devices and harmonization of the gathered information into data server databases; (2) the motion visualization management system reads data from the data servers to drive the virtual character to move in real time, visualizing the information; (3) the CNN is utilized to forecast the future state of motions in line with the athlete's prior workout information; (4) multi-virtual-role management and human-computer interaction. Practitioners can utilize various analytical methods and techniques in athlete monitoring systems. Numerous variables need to be considered while gathering such data, determining relevant changes, and choosing among different ways of presenting data. The capacity of practitioners to convey essential information and provide significant insights to their coaches leads to improved athletic performance and is the basis of a successful athlete monitoring system.
Let us consider the issue of classifying a video sequence $\vec{y} = (y_t \mid t = 1, \dots, T)$ by allocating it a label $x$ from a discrete set of classes $\mathcal{X}$. For instance, the series can be videos, and the labels can be the activities present in the videos. Now, $\mathcal{X}$ is the set of identifiable activities like jumping, running, skipping, etc. This study assumes that every component $y_t$ of the video series is an object from input field $\mathcal{Y}$ (e.g., video frames). Our initial task is to convert the arbitrary-length video sequences into a form that is agreeable to classification. To this end, this study first maps every component of the video sequence into a $q$-dimensional feature vector through a parameterizable feature function $\phi_\theta(\cdot)$:

$$u_t = \phi_\theta(y_t), \quad t = 1, \dots, T. \tag{1}$$

The feature function can be a convolutional neural network (CNN) applied to video frames, with features extracted from the last activation layer of the network. This study introduces the shorthand $\vec{u} = (u_1, \dots, u_T)$ to signify the video sequence of component feature vectors. When coupled with feature attribution methods, which explain which pixels were essential for the categorization, feature visualization can be a powerful tool for learning. An individual categorization can be explained and seen locally using both techniques. A feature function should describe the feature's advantage, or "what it does." Subsequently, the component feature vector sequence is mapped into a single $p$-dimensional feature vector defining the whole video sequence through a temporal encoding function:

$$v = \psi(u_1, \dots, u_T). \tag{2}$$

The vector $v$ is now a fixed-length depiction of the video sequence, which can be utilized for classification. In computer vision and deep learning, a feature vector is an $n$-dimensional vector of numerical characteristics that describe a certain item. Since numerical representations improve processing and statistical analysis, many machine learning methods require numerical representations of objects.
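The interface of the feature function $\phi_\theta(\cdot)$ can be illustrated with a minimal stand-in. The paper uses a CNN's last activation layer for this role; the hand-crafted descriptor below (per-channel means and maxima) is a hypothetical substitute chosen only to show the frame-to-vector mapping $u_t = \phi_\theta(y_t)$.

```python
# A stand-in feature function: each frame y_t (rows x cols x 3 lists)
# is mapped to a fixed q-dimensional feature vector u_t.

def phi(frame):
    """Map one RGB frame to a 6-d feature vector (channel means + maxima)."""
    channels = list(zip(*[px for row in frame for px in row]))
    means = [sum(ch) / len(ch) for ch in channels]
    maxes = [max(ch) for ch in channels]
    return means + maxes  # q = 6

def extract_sequence(frames):
    """Apply phi to every frame, giving the sequence u_1, ..., u_T."""
    return [phi(f) for f in frames]

# T = 4 toy frames of size 3x3, with pixel values growing with t:
frames = [[[[t, t + 1, t + 2] for _ in range(3)] for _ in range(3)]
          for t in range(4)]
u = extract_sequence(frames)  # 4 feature vectors, each 6-dimensional
```

Swapping `phi` for a trained CNN's penultimate-layer activations leaves the rest of the pipeline unchanged, which is exactly the point of the abstraction.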
Feature vectors are equivalent to the vectors of explanatory variables employed in statistical techniques such as linear regression. An RGB (red-green-blue) color descriptor is a familiar example of a feature vector: an image region can be characterized by the amounts of red, green, and blue it contains.
A characteristic temporal encoding function contains adequate data computations or a simple pooling operation, like avg or max pooling. The temporal encoding function can, however, be much more refined, such as the rank-pool operator. Unlike the avg or max-pooling operators, which can be expressed in closed form, rank pooling requires solving an optimization problem to identify the representation. Among many pooling methods, we chose the rank-pooling strategy since it can significantly reduce the input dimension:

$$\psi(\vec{u}) \in \operatorname*{argmin}_{v} f(v, \vec{u}). \tag{3}$$

As inferred from Eq. (3), $f(\cdot, \cdot)$ is a measure of how well a video sequence is defined by a given depiction, and this research seeks the best such depiction. It is a category of temporal encoding functions. With its wide receptive fields, this temporal encoding uses causal convolutions and dilations to adapt to sequential data. Figure 3 shows the CNN model. Convolutional layers in CNNs methodically apply learned filters to the input image to make a feature map that encapsulates the existence of those features in the inputs. A pooling layer is a new layer added after the convolutional layer; in particular, pooling is applied to the output feature maps following a nonlinearity (e.g., ReLU). The feature map output of convolutional layers is restricted by recording the accurate location of features in the inputs. This means that tiny adjustments in the location of a feature in the input video images lead to another feature map. This can happen when the input image is re-cropped, rotated, moved, or otherwise changed.
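The closed-form pooling operators contrasted above with rank pooling can be written directly: both collapse the sequence $u_1, \dots, u_T$ into a single fixed-length vector $v$, discarding temporal order.

```python
# Closed-form temporal encoding functions over a list of feature vectors.

def avg_pool(u_seq):
    """v_i = (1/T) * sum_t u_t[i] -- the minimizer of sum_t ||v - u_t||^2."""
    T = len(u_seq)
    return [sum(u[i] for u in u_seq) / T for i in range(len(u_seq[0]))]

def max_pool(u_seq):
    """v_i = max_t u_t[i]."""
    return [max(u[i] for u in u_seq) for i in range(len(u_seq[0]))]

u_seq = [[1.0, 4.0], [3.0, 2.0], [5.0, 0.0]]   # T = 3 frames, q = 2
v_avg = avg_pool(u_seq)    # [3.0, 2.0]
v_max = max_pool(u_seq)    # [5.0, 4.0]
```

Note that reversing `u_seq` leaves both outputs unchanged, which is precisely the order-insensitivity that motivates rank pooling.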
The optimization problem is generic and can contain constraints on the solution that reduce the objective $f$. Furthermore, many normal pooling operations can be expressed in this way. For instance, average pooling can be formulated as:

$$\psi_{\text{avg}}(\vec{u}) = \frac{1}{T}\sum_{t=1}^{T} u_t = \operatorname*{argmin}_{v} \sum_{t=1}^{T} \lVert v - u_t \rVert^2. \tag{4}$$

Significantly, the rank-pool operator encodes the temporal dynamics of video sequences, which the avg and max-pooling operators do not. Precisely, the rank-pool operator attempts to capture the order of components in the video series by determining a vector $v$ such that $v^\top u_b < v^\top u_a$ for all $b < a$, i.e., the function $u \mapsto v^\top u$ honors the comparative order of the components in the video series. This is attained by regressing the component feature vectors onto their index in the video sequence, resolved by utilizing regularized support vector regression (SVR) to provide point-wise ranking functions. Furthermore, rank-pool$(\vec{u})$ is defined as:

$$\text{rank-pool}(\vec{u}) \in \operatorname*{argmin}_{v} \frac{1}{2}\lVert v \rVert^2 + D \sum_{t=1}^{T} \big[\, |t - v^\top u_t| - \varepsilon \,\big]_{\geq 0}^{2}, \tag{5}$$

as shown in Eq. (5) and demonstrated in Fig. 4, where $[\cdot]_{\geq 0} = \max\{\cdot, 0\}$ projects onto the positive reals, $D$ denotes the learning parameter, and $\varepsilon$ indicates the residual error. Convolutional neural networks have rich characteristics that may be captured via rank-pooling algorithms. To recognize actions from the video sequences, the designed network creates a new representation by learning to rank the frame-level characteristics of a video in chronological order.
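The rank-pooling optimization described above can be sketched by plain (sub-)gradient descent on the objective of Eq. (5), regressing each frame's index $t$ onto its feature vector $u_t$ with an $\varepsilon$-insensitive squared loss. A dedicated SVR solver would be used in practice; the hyperparameters `D`, `eps`, and the learning rate below are illustrative assumptions.

```python
# Sketch of rank pooling: find v minimizing
#   (1/2)||v||^2 + D * sum_t max(|t - v.u_t| - eps, 0)^2
# by subgradient descent, so that t -> v . u_t preserves temporal order.

def rank_pool(u_seq, D=1.0, eps=0.1, lr=0.01, steps=2000):
    q = len(u_seq[0])
    v = [0.0] * q
    for _ in range(steps):
        grad = list(v)                        # gradient of (1/2)||v||^2
        for t, u in enumerate(u_seq, start=1):
            r = t - sum(vi * ui for vi, ui in zip(v, u))  # residual
            if abs(r) > eps:                  # inside the loss region
                coeff = -2.0 * D * (abs(r) - eps) * (1.0 if r > 0 else -1.0)
                for i in range(q):
                    grad[i] += coeff * u[i]
        v = [vi - lr * g for vi, g in zip(v, grad)]
    return v

# Frames whose first feature grows with time: the learned v should weight
# it positively, so the ranking scores increase with the frame index.
u_seq = [[1.0, 0.5], [2.0, 0.5], [3.0, 0.5]]
v = rank_pool(u_seq)
scores = [sum(vi * ui for vi, ui in zip(v, u)) for u in u_seq]
```

The resulting `v` is both the pooled descriptor and the parameter of the ranking function, which is what lets it encode temporal dynamics.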
With a fixed-length video sequence descriptor $v \in \mathbb{R}^p$, the prediction task is to map $v$ to one of the discrete class labels. Let $g_\alpha$ be a predictor parameterized by $\alpha$. This paper can encapsulate the classification channel from a random-length video sequence $\vec{y}$ to a label $x \in \mathcal{X}$ as:

$$x = g_\alpha\big(\psi(\phi_\theta(y_1), \dots, \phi_\theta(y_T))\big). \tag{6}$$

To classify a video is to assign it an appropriate label based on the frames that make up the video's content. Videos must be classified so that the labels accurately represent their whole content based on its features and annotations. Typical predictors comprise softmax and (linear) support vector machine (SVM) classifiers. For two-group classification issues, an SVM is a machine learning model that employs supervised methods; with the use of labeled training data, an SVM model can classify fresh inputs. Video data are growing exponentially; thus, it is necessary to categorize video clips depending on their content, and determining how to categorize videos into different genres automatically is, therefore, a top priority. With hyperparameter learning, the SVM algorithm performs a bi-level optimization in this case. For the softmax predictor, the likelihood of label $x$ given sequence $\vec{y}$ has been used in this work, and it can be written as:

$$p(x \mid \vec{y}) = g_\alpha(v)_x \propto \exp(\alpha_x^\top v). \tag{7}$$

Here $g_\alpha(v)$ indicates the (discrete) probability distribution over labels, and $\alpha = \{\alpha_x\}$ are the learned parameters of the model. Any random variable can take a number of potential values with associated probabilities within a given range, and the probability distribution characterizes them all. Our objective is to learn both the classifier's variables and the depiction of the components in video sequences. Therefore, let $\ell(\cdot, \cdot)$ be a loss function.
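A linear softmax predictor of the kind described above can be sketched as follows; the class names and weight vectors $\alpha_x$ are illustrative.

```python
# A linear softmax classifier g_alpha mapping the pooled video descriptor v
# to a probability distribution over class labels.

import math

def softmax_predict(v, alpha):
    """alpha: dict label -> weight vector; returns dict label -> p(x | y)."""
    scores = {x: sum(wi * vi for wi, vi in zip(w, v))
              for x, w in alpha.items()}
    m = max(scores.values())                 # shift for numerical stability
    exps = {x: math.exp(s - m) for x, s in scores.items()}
    z = sum(exps.values())
    return {x: e / z for x, e in exps.items()}

alpha = {"running": [1.0, 0.0], "jumping": [0.0, 1.0]}
probs = softmax_predict([2.0, -1.0], alpha)
label = max(probs, key=probs.get)
```

The predicted label is simply the class with the highest probability mass under $g_\alpha(v)$.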
For instance, when utilizing the softmax classifier, a classic selection would be the cross-entropy loss:

$$\ell\big(x, g_\alpha(v)\big) = -\log p(x \mid \vec{y}). \tag{8}$$

The present study jointly evaluates the feature function and prediction function parameters by reducing the regularized empirical risk. As a general rule, normalizing the data speeds up learning and leads to faster convergence. The logarithm in the cross-entropy function helps the network to detect and remove tiny mistakes: a score/loss is generated to punish a projected class probability depending on how distant it is from the actual anticipated value. Our learning problem is:

$$\min_{\theta, \alpha} \; \frac{1}{n}\sum_{j=1}^{n} \ell\big(x^{(j)}, g_\alpha(v^{(j)})\big) + R(\theta, \alpha) \quad \text{s.t.} \quad v^{(j)} \in \operatorname*{argmin}_{v} f\big(v, \vec{u}^{(j)}\big), \tag{9}$$

as discussed in Eq. (9), where $R(\cdot, \cdot)$ is a regularization function, normally the $\ell_2$-norm of the variables, and $\theta$ appears in the definition of $\vec{u}^{(j)}$ via Eq. (1). Losses can be reduced by modifying the weights and learning rate of the neural network; as a consequence of optimization, fewer losses are incurred, and the outcomes are more accurate. Equation (9) is an example of a bilevel optimization problem, which has recently been studied in the setting of hyperparameter learning and support vector machines (SVM).
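The cross-entropy loss and the regularized empirical risk can be sketched as below, assuming the predictor already outputs probability distributions over labels; the regularization weight and toy predictions are illustrative.

```python
# Cross-entropy loss plus an l2 penalty as the regularizer R: a sketch of
# the normalized empirical risk minimized in the learning problem.

import math

def cross_entropy(probs, true_label):
    """-log of the probability assigned to the true label."""
    return -math.log(probs[true_label])

def empirical_risk(predictions, labels, params, lam=0.01):
    """Average cross-entropy over n examples plus lam * ||params||^2."""
    n = len(labels)
    data_term = sum(cross_entropy(p, x)
                    for p, x in zip(predictions, labels)) / n
    reg_term = lam * sum(w * w for w in params)
    return data_term + reg_term

preds = [{"run": 0.9, "jump": 0.1}, {"run": 0.2, "jump": 0.8}]
risk = empirical_risk(preds, ["run", "jump"], params=[1.0, -1.0])
```

A confident correct prediction (0.9) contributes little loss, while a hesitant one (0.8) contributes more, which is the penalizing behavior the text describes.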
Here, an upper-level problem is resolved subject to restraints imposed by a lower-level problem. Several solution approaches have been suggested for bilevel optimization problems. However, given our interest in learning video depictions from powerful CNN features, gradient-based methods are most suitable for fine-tuning the convolutional neural network parameters.
When the temporal encoding function $\psi$ can be assessed in closed form (e.g., avg or max), this study can substitute the constraints in Eq. (9) directly into the objective and use (sub-)gradient descent to resolve for (globally or locally) optimal variables. Fortunately, when the lower-level objective is twice differentiable, this study can calculate the gradient of the argmin function, as other authors have observed. It is then merely a matter of employing the chain rule to find the derivative of the loss function with respect to any variable in the model. Figure 5 shows the big data-based sports visualization model, including the data collection, data source layer, central repository layer, exchange layer, application layer, and data analysis layer. The data source layer primarily involves sportspersons' historical data, sportspersons' behavior trajectories, Internet data sources, and video information. The data source layer is the basis for the analysis and predictive use of diverse sports big data. The next layer is the data gathering layer, which gathers and processes data from the data source layer through data collection, data storage, data interchange, manual imports, and the web server. Next, the data obtained are cleansed, and the required processing is carried out following the varied application requirements. Finally, the information is categorized and saved. The processed data are saved in the main storage layer, including unstructured, structured, and file storage. The data analysis layer is based on individual applications and performs feature selection, relationship analysis, statistical analysis, and social network analysis to determine possible knowledge, laws, and patterns in sporting big data according to the demands of specific applications.
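The chain-rule argument above can be checked numerically for the closed-form case of average pooling: since each frame feature contributes $1/T$ to the pooled vector, the loss gradient with respect to a frame feature is $1/T$ times the gradient with respect to $v$. The toy squared loss and finite-difference check below are illustrative.

```python
# Chain rule through closed-form average pooling:
#   d v_i / d u_t[i] = 1/T, so dL/du_t[i] = (1/T) * dL/dv_i.

def avg_pool(u_seq):
    T = len(u_seq)
    return [sum(u[i] for u in u_seq) / T for i in range(len(u_seq[0]))]

def loss(v, target):
    return sum((vi - ti) ** 2 for vi, ti in zip(v, target))

u_seq = [[1.0, 2.0], [3.0, 0.0]]
target = [0.0, 0.0]
T = len(u_seq)

v = avg_pool(u_seq)
analytic = (2 * (v[0] - target[0])) / T      # chain rule: dL/du_0[0]

h = 1e-6                                      # finite-difference check
u_seq[0][0] += h
numeric = (loss(avg_pool(u_seq), target) - loss(v, target)) / h
u_seq[0][0] -= h
```

Because this substitution makes the whole pipeline differentiable end-to-end, standard backpropagation applies without bilevel machinery.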
Based on the research results above, machine learning and big data technology can boost sports applications. It is possible to improve every aspect of sports training with machine learning and big data analytics. A sports organization's performance, player recruiting, and ticket sales may all be improved with the aid of predictive analytics. A team needs to know how to evaluate players to maximize their value in the future. The examination of sports big data is a crucial use of big data in sport. When evaluating athletes, it is crucial to look at how they perform in different phases. Here, tutors and other users can learn about player performance factors and data-driven evaluation models. Valuable sports big data may enhance individuals' and teams' competitive level and encourage the growth of fitness. Therefore, prediction is an essential research issue for applications of sports big data. The success of a career in sports depends on the player's skill and is associated with the athlete's team and nation; therefore, a team or a country needs human and material resources to cultivate an exceptional athlete. A growing sports star refers to an athlete who is not yet outstanding amongst peers and is at the start of a sports career but tends to become a sports star. Finding a growing sports star gives positive guidelines on investing national funding and helps sportspeople achieve great results earlier. The proposed VEVF model enhances the accuracy, recall, precision, F1-score, performance, and efficiency and decreases the error rate compared to other existing approaches.

Performance analysis
This study used objective metrics to assess the performance of the suggested VEVF model based on big data analytics. For this reason, this study utilized accuracy (A), recall (R), precision (P), F1-score, and error rate (E) to evaluate the performance. These metrics are calculated in terms of correct/incorrect classification of data types for every class in the videos. Finally, the outcomes of all categories of videos or shots are averaged to determine the concluding value.

Accuracy ratio analysis
For video scene classification of sports videos, accuracy denotes the ratio of properly classified scenes [true positives (TP) and true negatives (TN)] out of the overall number of video scenes. Therefore, accuracy is calculated by:

$$A = \frac{TP + TN}{TP + TN + FP + FN}.$$

Figure 6 represents the accuracy ratio of the recommended VEVF model.

Error rate
Error rate denotes the ratio of miscategorized shots [false negatives (FN) and false positives (FP)] to the entire number of observed shots. Therefore, the error rate is calculated as:

$$E = \frac{FP + FN}{TP + TN + FP + FN}.$$

If the error rate is acceptable, the training is done. Otherwise, the error is backpropagated through the network: the error rates are fed backward towards the input layer. The weight updates of the hidden layers require information from the next layer, and thus the error rate is sent backward through the network. Figure 7 demonstrates the error rate of the suggested VEVF model.
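The accuracy and error-rate definitions above can be computed directly from confusion counts; the counts below are illustrative.

```python
# Accuracy and error rate from confusion-matrix counts:
#   A = (TP + TN) / total,  E = (FP + FN) / total.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def error_rate(tp, tn, fp, fn):
    return (fp + fn) / (tp + tn + fp + fn)

acc = accuracy(tp=90, tn=5, fp=3, fn=2)     # 0.95
err = error_rate(tp=90, tn=5, fp=3, fn=2)   # 0.05
```

By construction the two are complementary: accuracy plus error rate always equals 1.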

Precision rate
The suggested method outperforms classification tasks using convolutional neural networks without reflecting multi-shot existence in mean average precision. Precision represents the ratio of properly labeled scenes over the whole number of scenes labeled as that sports class and is calculated as follows:

$$P = \frac{TP}{TP + FP}.$$

A support vector machine classifier was investigated to classify new videos utilizing the floating text in video frames and the visual contents, significantly improving accuracy and retrieval rates. Furthermore, a spatio-temporal vector is utilized to categorize video types into cartoon, news, commercial, and sports utilizing probabilistic analysis and principal component modeling, demonstrating an overall performance increase. Figure 8 shows the precision ratio of the suggested VEVF model.

Recall rate
A common metric utilized to specify classification model quality is the F-score, the harmonic mean of recall and precision. Recall is the proportion of truly predicted shots of a class over the overall number of shots of that class in the videos and is calculated as:

$$R = \frac{TP}{TP + FN}.$$

The standard CNN architecture achieved shot-classification results with high precision, recall, and F1-score, a lower error rate, and an increased accuracy ratio. Figure 9 demonstrates the recall ratio of the suggested VEVF model.

F1-score ratio
The F1-score or F-measure denotes the harmonic mean of recall and precision. The F-measure is a beneficial performance metric when one technique attains better precision but a lesser recall ratio than another; in this situation, the recall and precision ratios on their own are unable to deliver a true evaluation, so the F-measure can reliably be utilized for performance appraisal. The F-measure is calculated as:

$$F1 = \frac{2 \times P \times R}{P + R}.$$

After complete training, the model trained to recognize drains was virtually flawless, while the model for low streams had the smallest F-score. Figure 10 illustrates the F1-score ratio of the suggested model.
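Precision, recall, and their harmonic mean, as defined above, follow directly from confusion counts; the counts below are illustrative.

```python
# Precision, recall, and the F1-score from confusion-matrix counts.

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(p, r):
    return 2 * p * r / (p + r)

p = precision(tp=80, fp=20)      # 0.8
r = recall(tp=80, fn=10)         # 80/90
f1 = f1_score(p, r)
```

Being a harmonic mean, F1 is pulled toward the smaller of the two ratios, which is why it exposes a technique with high precision but poor recall (or vice versa).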

Sports performance ratio
Tactical data are essential for assessing a team's performance in a game and form the basis for training players and studying in-game decisions. Statistical information has no time or space characteristics; it focuses more on individual or competition information for players and on behavioral decisions or movement on the ground. Statistical information includes scores, shooting times, and free throws. Such data can be easily recorded during a game and represent the performance of the teams: they can provide a comprehensive analysis of whole games as well as a granular analysis of a particular event or object. Team data include shot timings, free throws, mistakes, and other quantitative performance and efficiency metrics. In studies of competitive sports data visualization, team information frequently refers to formation and tactical information and is often used to compare and assess team efforts. Figure 11 shows the sports performance ratio.

Efficiency ratio
Possession and shooting data are commonly used to examine a player's performance and assess a player's contribution. A player's action is a time event: a shot taken at a certain coordinate point at a certain moment in a time series. Other events include errors, fouls, assists, steals, player substitutions, and off-court issues. Many researchers study players' performance over time and calculate player effectiveness to assess whether players increased their scoring or weakened in their defense. Time-series events are hence another way of measuring a player's performance and contribution to games. These key indicators form the basis for analyzing player behavior patterns and are the focal point of most sports data displays. Figure 12 demonstrates the efficiency ratio of the proposed VEVF model. The proposed VEVF model enhances the accuracy, recall, precision, F1-score, sports performance, and efficiency and decreases the error rate compared to the existing transfer learning (TL), neutrosophy theory (NT), Histogram of Oriented Gradients with Bidirectional Long Short-Term Memory (HOG-BDLSTM), and natural language processing and conversational interfaces (NLP-CI) methods.

Conclusion
This study presents the VEVF model for sports visualization based on a convolutional neural network with big data analytics. Integrating deep learning with coaches and analysts in applied contexts may account for more interactive factors, delivering useful knowledge to teams faster. The temporal features of video sequences matter more than the features of static images alone, and they can be further exploited in future work. Our system will be extended with different strides, padding, and convolution layers, and with different optimizers, to solve the classification problem and increase accuracy and performance. Our temporal pooling layer may reside above any CNN architecture and allows end-to-end learning of all model parameters via bilevel optimization. Compared with manual digitization, the time reduction makes performance-measurement feedback possible in training environments and competitive sports simulation. Our results indicate that the proposed VEVF model achieves an accuracy ratio of 98.7%, a recall ratio of 94.5%, an F1-score ratio of 97.9%, a precision ratio of 96.7%, an error rate of 29.1%, a performance ratio of 95.2%, and an efficiency ratio of 96.1% compared to other existing models. Even though the proposed approach can best suit competitive sports training through effective visualization with a CNN and bilevel optimization, the framework demands significant network design strategies for optimal energy conservation and secured network communication. Hence, to overcome this demerit, this study plans to extend the proposed model with an energy-optimized communication system, a security scheme, and a real-time visual analytics implementation.
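The idea of a temporal pooling layer sitting above a per-frame CNN can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the frame count, feature dimension, and use of max pooling are all assumptions, standing in for whatever backbone and pooling operator the full system uses.

```python
import numpy as np

# Assume a CNN backbone has already produced one feature vector per frame:
# T = 16 frames, D = 512 features (both hypothetical).
rng = np.random.default_rng(0)
frame_features = rng.standard_normal((16, 512))

# Temporal max pooling collapses the time axis into a single clip-level
# descriptor, making the representation independent of clip length.
clip_descriptor = frame_features.max(axis=0)

print(clip_descriptor.shape)  # → (512,)
```

Because the pooling operator is differentiable almost everywhere, gradients flow through it back into the frame-level CNN, which is what permits the end-to-end training described above.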
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.