Background

Many medical records are stored primarily as text, and these documents must be analyzed to extract meaningful information. According to the National Science Foundation, managing and analyzing scientific data at scale is a major challenge for current and future research [1]. The massive amount of biomedical text data can be a valuable source of knowledge for biomedical researchers. Biomedical texts, such as scientific publications and brief case reports, contain unstructured information. Text mining seeks to discover knowledge from unstructured text sources by drawing on tools and techniques from fields such as machine learning, information extraction, and cognitive science. Text mining is therefore a promising approach of great scientific interest in the biomedical domain. These biomedical text documents require new tools for searching related documents within a collection. Biomedical text data is now created and stored at a rapid pace: in 2015, the number of papers available on the PubMed website exceeded six million, and on average more than 30 million hospital discharge records are generated in the United States [2]. Companies can therefore reduce annual costs by applying advanced, machine learning-based analysis to biomedical text data. Consequently, there is a need for efficient topic modeling techniques, built on advanced machine learning, to discover the hidden topics in complex biomedical texts.

One way to represent biomedical text documents in natural language processing is the bag-of-words (BOW) model. In the BOW model, a document collection is represented as a matrix of word frequencies, and word order within documents has no effect on the representation. Because each document uses only a small fraction of the overall vocabulary, the resulting matrix consists mostly of zeros and is called a sparse matrix [3].

Text mining is applied to all kinds of text corpora, not only biomedical ones, and several biomedical text resources such as Medline and PubMed already exist. However, because most biomedical data is in unstructured text format, analyzing it is a difficult task. Numerous text mining techniques have been developed for the biomedical domain to turn unstructured data into structured data. For unstructured biomedical text, topic modeling techniques such as latent Dirichlet allocation (LDA) [4], latent semantic analysis (LSA) [5], fuzzy latent semantic analysis (FLSA) [6], and the fuzzy k-means topic model (FKTM) [7] have been developed. LDA performs well in the classification of clinical reports [8]. LDA is used in a variety of applications, including genome sequence classification [9], discovery of discussion concepts in social networks [10], patient data modeling [11], topic extraction from medical reports [12], and discovery of scientific and biomedical relationships [13, 14]. In another investigation, the LDA method identifies important clinical problems and structures clinical text reports [15]. In other work, [16] used topic modeling to represent scientific reports efficiently, allowing collections to be analyzed more quickly. Probabilistic topic modeling has been applied to find the underlying topics of biomedical text collections. Topic models are used in a variety of tasks such as computational linguistics, summarization of source code documents [17], brief opinion summaries of product reviews [18], description of thematic evolution [19], aspect discovery in document analysis [20], sentiment analysis [21], and analysis of Twitter text messages [22]. LSA has been used to analyze clinical records from psychiatric narratives [19], where a semantic space is built from psychological terms. LSA is also used to reveal semantic insights and domain ontologies for building a speech act model of spoken dialogue [23], and it performs well at topic identification and segmentation in clinical studies [24]. The RedLDA topic model is used in the biomedical field to identify redundancy in patient records [25]. LSA has also been applied to the automatic analysis and summarization of clinical cases [26]. Topic models are used on biomedical data for a variety of purposes, such as finding hidden themes in documents and document search [27], document classification [28], and document analysis [29]. Topic modeling is an effective way to mine biomedical text, but word redundancy negatively affects topic models [30], and since most biomedical documents contain many duplicate words, topic modeling still needs improvement [31]. Answering biological factoid questions is a crucial part of the biomedical question answering domain [32]. In [33], relationships are discovered from text data.

Clustering is used in biomedical research to extract meaningful information from large datasets. Fuzzy clustering is an alternative to hard clustering algorithms for dividing data into subgroups with similar characteristics [34]. The nonlinear nature of fuzzy clusters and their flexibility on large-scale data distinguish fuzzy clustering from hard clustering: it offers more accurate partitioning and additional options for decision-making. Fuzzy clustering is based on fuzzy logic and reflects the degree, or score, with which a data item belongs to multiple groups. Once the data is partitioned, the cluster centers, rather than the data points, are updated. Clustering is commonly used to identify patterns in large datasets and to retrieve valuable information [35]. Fuzzy clustering techniques are frequently used in applications where overlapping and ambiguous elements must be grouped. In the biomedical field, experience has been gathered in diagnosis and decision support systems, where a wide range of measurements forms the input space and a decision is reached by suitably grouping the symptom data. Fuzzy clustering is used for applications such as medical diagnosis, biomedical signal classification, and diabetic neuropathy [36, 37]. It can also detect topics in biomedical documents and support decisions about radiation therapy. Fuzzy clustering has several uses in the biomedical field, especially in image processing and pattern recognition, but it is rarely used in topic modeling. In this study, we present a multiple kernel fuzzy topic modeling method for biomedical text data. The main contributions of this research are summarized below.

  • We propose a novel multiple kernel fuzzy topic modeling (MKFTM) technique, which addresses the problems of sparsity and redundancy in biomedical text mining.

  • We propose FP-IDF (fusion probabilistic inverse document frequency) for global term weighting, which is very effective at filtering out common high-frequency words.

  • We conduct extensive experiments and show that MKFTM achieves better classification and clustering performance than state-of-the-art topic models, including LDA, LSA, FLSA, and FKTM.

  • We also compare the execution time of MKFTM and show that it remains stable as the number of topics varies.

Materials and methods

This section describes our proposed multiple kernel fuzzy topic modeling method, which uncovers the hidden topics in biomedical text documents. The two main approaches to clustering are hard clustering and fuzzy (soft) clustering. In hard clustering, objects are partitioned so that each object belongs to exactly one partition. MKFTM takes a multiple-kernel fuzzy view, a unique approach to topic modeling, and is validated through various experiments on medical documents. LDA performs well for topic modeling, but redundancy always has a negative impact on its performance. MKFTM is therefore able to deal with redundancy and discover more accurate topics in biomedical documents, with higher performance than competitors such as LDA and LSA.

Multiple kernel fuzzy topic modeling (MKFTM)

In MKFTM, the documents and the words in these documents are fuzzily grouped into multiple clusters. Fuzzy logic extends classic 0/1 logic to truth values between 0 and 1. Through MKFTM, documents and words are fuzzily clustered, and each cluster corresponds to a topic, so documents are distributed across multiple topics. MKFTM computes the different probability matrices through the following steps:

Pre-processing

This step preprocesses the input text collection. Text documents contain a lot of noise, such as inflected word forms, special characters, punctuation marks, and stop words. Several preprocessing steps are used to clean the text data: punctuation is removed from the document collection, the text is converted to lowercase, and the documents are tokenized. After that, empty and very short tokens are removed. Finally, the words are normalized with the Porter stemmer [38].
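A minimal sketch of these preprocessing steps, assuming NLTK is available for the stop-word list and the Porter stemmer; the token-length threshold is illustrative, since the exact value is not specified in the text:

```python
import re
import string

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()

def preprocess(document, min_len=3):
    """Clean one document: remove punctuation, lowercase, tokenize,
    drop stop words and very short tokens, then apply Porter stemming."""
    text = document.translate(str.maketrans("", "", string.punctuation)).lower()
    tokens = re.findall(r"[a-z]+", text)      # keep alphabetic runs as tokens
    tokens = [t for t in tokens if t not in STOP_WORDS and len(t) >= min_len]
    return [STEMMER.stem(t) for t in tokens]

corpus = ["Arthroscopy of the knee joint was performed.",
          "The patient was discharged after treatment."]
print([preprocess(d) for d in corpus])
```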

Bag-of-words (BOW) and term weighting

The bag-of-words model represents text documents and extracts features from them for machine learning algorithms. BOW is a systematic method for counting the words of a document [39]. After collecting and preprocessing the document text, the BOW model is applied. The BOW model converts unstructured text data into word-based structured data, ignoring grammar, as in information retrieval [40]. For a collection of \(m\) documents over a vocabulary of \(k\) words, the association between words and documents is found and the frequency of each word in each document is calculated. Equation 1 gives the term frequency \(tf_{i,j}\) of word \(i\) in document \(j\), where \(n_{i,j}\) is the number of times word \(i\) occurs in document \(j\) and the sum in the denominator runs over all words of document \(j\).

$$tf_{i,j} = \frac{n_{i,j}}{\sum_{k} n_{k,j}}$$
(1)

Local term weights are computed after applying BOW, and term frequency is one such local weighting method. Term frequency [41] evaluates how often a term appears in a document. Because documents have different lengths, a term may appear more often in long documents than in short ones. Equation 2 shows a typical term weighting that normalizes the weight vector by its length. The term weight \(w_{dk}\), which down-weights over-represented terms, always varies between 0 and 1; here, \(d\) denotes a document, \(k\) denotes the term, and \(w_{dk}\) is the weight of term \(k\) in document \(d\). Higher weights are assigned to the most important terms and weights near zero to the least important ones. In some cases a standard weight assignment is useful, and the final weight depends on several factors, with different terms weighted individually within each vector.

$$\frac{w_{dk}}{\sqrt{\sum_{i \in vector} (w_{di})^{2}}}$$
(2)

This gives the weight \(w\) of term \(k\). If a term with index \(k_{i}\) appears with frequency \(f_{i,j}\) in document \(d_{j}\), the overall collection frequency \(F_{i}\) of the term is defined in Eq. 3.

$$F_{i} = \sum_{j = 1}^{N} f_{i,j}$$
(3)

\(N\) is the number of documents in the text corpus. The document frequency of term \(k_{i}\) is the number \(n_{i}\) of documents in which it occurs, and \(n_{i} \le F_{i}\).
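A small sketch of the BOW construction and the local weighting above, assuming scikit-learn's CountVectorizer for the word-document counts; the normalization follows Eq. 2 and the documents are illustrative:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["arthroscopy of the knee joint",
        "bacterial infection of the knee",
        "viral infection treatment"]

# Word-document counts n_{i,j} (documents in rows, vocabulary in columns).
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs).toarray().astype(float)

# Eq. 1: term frequency of word i in document j (row-normalized counts).
tf = counts / counts.sum(axis=1, keepdims=True)

# Eq. 2: length-normalize each document's weight vector.
w = tf / np.sqrt((tf ** 2).sum(axis=1, keepdims=True))

# Eq. 3: collection frequency F_i of each term, and document frequency n_i.
F = counts.sum(axis=0)
n = (counts > 0).sum(axis=0)

print(dict(zip(vectorizer.get_feature_names_out(), F)))
```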

Fusion probabilistic inverse document frequency (FP-IDF)

The global term weight (GTW) is estimated at this stage. The GTW provides a "discrimination value" for each term: terms that occur in fewer documents of the collection are more discriminating [42]. The symbol \(tf_{ij}\) denotes the number of times word \(i\) appears in document \(j\), \(N\) is the number of documents, and \(n_{i}\) is the number of documents in which term \(i\) appears. The GTW is calculated using \(b(tf_{ij})\) and \(P_{ij}\), given by Eqs. 4 and 5.

$$b\left( {tf_{ij} } \right) = \left\{ {\begin{array}{*{20}l} 1 \hfill & {if} \hfill & {tf_{ij} > 0} \hfill \\ 0 \hfill & {if} \hfill & {tf_{ij} = 0} \hfill \\ \end{array} } \right\}$$
(4)
$$P_{ij} = \frac{{tf_{ij} }}{{\mathop \sum \nolimits_{j} tf_{ij} }}$$
(5)
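A tiny sketch of these two quantities, assuming a word-document count matrix with documents in rows and terms in columns, and assuming every vocabulary term occurs in at least one document (names are illustrative):

```python
import numpy as np

def binary_and_prob(counts):
    """Eq. 4: binary indicator b(tf_ij); Eq. 5: P_ij, the share of term i's
    occurrences that fall in document j."""
    b = (counts > 0).astype(int)
    P = counts / counts.sum(axis=0, keepdims=True)  # normalize each term over documents
    return b, P
```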

The values \(b(tf_{ij})\) and \(P_{ij}\) are used in calculating the fusion probabilistic inverse document frequency. We propose FP-IDF by combining the hybrid inverse document frequency \(\left( {{\text{Hybrid}} - {\text{IDF}}} \right)\) and the probabilistic inverse document frequency \(\left( {{\text{Probabilistic}} - {\text{IDF}}} \right)\) for global term weighting. Equations 6 and 7 show the formulas for \({\text{Hybrid}} - {\text{IDF}}\) and \({\text{Probabilistic}} - {\text{IDF}}\).

$$Hybrid - IDF = \log \left( \max_{t^{\prime} \in d} n_{t^{\prime}} \left( \frac{N}{n_{t}} \right) \right)$$
(6)
$$Probabilistic - IDF = \log \left( \frac{N - n_{t}}{n_{t}} \right)$$
(7)
$$Fusion\,Probabilistic - IDF = \log \left( \max_{t^{\prime} \in d} n_{t^{\prime}} \left( \frac{N}{n_{t}} \right) \right) + \log \left( \frac{N - n_{t}}{n_{t}} \right)$$
(8)

Use the product property of logarithms, \(\log_{b} x + \log_{b} y = \log_{b} xy\).

$$Fusion\,Probabilistic - IDF = \log \left( \max_{t^{\prime} \in d} n_{t^{\prime}} \left( \frac{N}{n_{t}} \cdot \frac{N - n_{t}}{n_{t}} \right) \right)$$
(9)

Combine \(\max_{t^{\prime} \in d} n_{t^{\prime}}\) and \(\frac{N}{n_{t}}\).

$$Fusion\,Probabilistic - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N}{n_{t}} \cdot \frac{N - n_{t}}{n_{t}} \right)$$
(10)

Multiply \(\frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N}{n_{t}}\) and \(\frac{N - n_{t}}{n_{t}}\).

$$Fusion\,Probabilistic - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N \left( N - n_{t} \right)}{n_{t} \, n_{t}} \right)$$
(11)

Write the first \(n_{t}\) in the denominator as \(n_{t}^{1}\).

$$Fusion\,Probabilistic - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N \left( N - n_{t} \right)}{n_{t}^{1} \, n_{t}} \right)$$
(12)

Write the second \(n_{t}\) in the denominator as \(n_{t}^{1}\).

$$Fusion\,Probabilistic - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N \left( N - n_{t} \right)}{n_{t}^{1} \, n_{t}^{1}} \right)$$
(13)

Use the power rule \(a^{m} a^{n} = a^{m + n}\) to combine the exponents.

$$Fusion\,Probabilistic - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N \left( N - n_{t} \right)}{n_{t}^{1 + 1}} \right)$$
(14)

Adding the exponents gives the proposed FP-IDF in Eq. 15.

$$FP - IDF = \log \left( \frac{\max_{t^{\prime} \in d} n_{t^{\prime}} \, N \left( N - n_{t} \right)}{n_{t}^{2}} \right)$$
(15)
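A small sketch of the FP-IDF weight of Eq. 15, assuming a word-document count matrix as input. Following the formula, the maximum is taken over the document frequencies of the terms appearing in each document, and a small constant guards the logarithm when a term occurs in every document (an implementation detail not specified in the text):

```python
import numpy as np

def fp_idf(counts, eps=1e-12):
    """FP-IDF per Eq. 15 for every (term, document) pair.

    counts : (n_docs, n_terms) word-document count matrix tf_{ij}.
    Returns a matrix of the same shape with the FP-IDF weight of term t in
    document d: log( max_{t' in d} n_{t'} * N * (N - n_t) / n_t^2 ).
    Assumes every vocabulary term occurs in at least one document.
    """
    N = counts.shape[0]                       # number of documents
    n_t = (counts > 0).sum(axis=0)            # document frequency of each term
    present = counts > 0
    # max_{t' in d} n_{t'}: largest document frequency among the terms of document d
    max_df_per_doc = np.where(present, n_t[None, :], 0).max(axis=1)
    numerator = max_df_per_doc[:, None] * N * (N - n_t)[None, :]
    return np.log(np.maximum(numerator, eps) / (n_t ** 2)[None, :])

counts = np.array([[2, 0, 1],
                   [1, 1, 0],
                   [0, 3, 1]])
print(fp_idf(counts))
```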

Principal component analysis (PCA)

After the FP-IDF global term weighting, PCA is applied. The PCA technique [43] is used to avoid the adverse effects of the very high dimensionality of the globally weighted term matrix. This method removes redundant dimensions from the data and retains only the most important ones. PCA computes new variables, called principal components, obtained as linear combinations of the initial variables.
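A brief sketch of this reduction step, assuming scikit-learn's PCA and an FP-IDF-weighted document matrix as input; the number of retained components is illustrative, since the text does not specify it:

```python
from sklearn.decomposition import PCA

def reduce_dimensions(weighted_matrix, n_components=100):
    """Project the (documents x terms) FP-IDF-weighted matrix onto its
    principal components, keeping only the most informative dimensions.
    n_components must not exceed min(n_documents, n_terms)."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(weighted_matrix)
```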

Multiple kernel fuzzy c-means clustering

At this step, the multiple kernel fuzzy c-means clustering algorithm [44] is used to fuzzily group the documents represented by the GTW method. In the multiple kernel fuzzy c-means algorithm, \(B\) is the number of data points \(Y = \{ Y_{i} \}_{i = 1}^{B}\), \(\{ G_{g} \}_{g = 1}^{S}\) are the kernel functions, \(F\) is the number of desired clusters, and the output is the membership matrix \(V = \{ v_{if} \}_{i,f = 1}^{B,F}\) together with the weights \(\{ Z_{g} \}_{g = 1}^{S}\) for the kernels. The multiple kernel fuzzy c-means algorithm has the following steps:

  1: Procedure MKFCM(data \(Y\), clusters \(F\), kernels \(\{ G_{g} \}_{g = 1}^{S}\))

  2: Initialize the membership matrix \(V^{(0)}\).

  3: Repeat

  4: \(\hat{v}_{if}^{(l)} = \frac{v_{if}^{(l)^{s}}}{\sum_{i = 1}^{B} v_{if}^{(l)^{s}}}\) \(\triangleright\) Calculate the normalized memberships.

  5: \(\triangleright\) Calculate the coefficients of Eq. 16:

  \(\alpha_{ifg} = G_{g} (y_{i}, y_{i}) - 2\sum_{j = 1}^{B} \hat{v}_{jf} G_{g} (y_{i}, y_{j}) + \sum_{j = 1}^{B} \sum_{j^{\prime} = 1}^{B} \hat{v}_{jf} \hat{v}_{j^{\prime} f} G_{g} (y_{j}, y_{j^{\prime}}) \quad (16)\)

  6: for (i = 1…B; f = 1…F; g = 1…S) do

  7:
  $$\alpha_{ifg} \leftarrow G_{g} (y_{i}, y_{i}) - 2\sum_{j = 1}^{B} \hat{v}_{jf} G_{g} (y_{i}, y_{j}) + \sum_{j = 1}^{B} \sum_{j^{\prime} = 1}^{B} \hat{v}_{jf} \hat{v}_{j^{\prime} f} G_{g} (y_{j}, y_{j^{\prime}}) \quad (17)$$

  8: end for

  9: \(\triangleright\) Calculate the coefficients \(\beta_{g}\) by Eq. 18.

  10: for (g = 1…S) do

  11:
  $$\beta_{g} \leftarrow \sum_{i = 1}^{B} \sum_{f = 1}^{F} \left( v_{if}^{(l)} \right)^{s} \alpha_{ifg} \quad (18)$$

  12: end for

  13: \(\triangleright\) Update the kernel weights by Eq. 19.

  14: for (g = 1…S) do

  15:
  $$z_{g}^{(l)} \leftarrow \frac{1/\beta_{g}}{1/\beta_{1} + 1/\beta_{2} + \cdots + 1/\beta_{S}} \quad (19)$$

  end for

  16: \(\triangleright\) Calculate the distances by Eq. 20.

  17: for (i = 1…B; f = 1…F) do

  18:
  $$D_{if}^{2} \leftarrow \sum_{g = 1}^{S} \alpha_{ifg} \left( z_{g}^{(l)} \right)^{2} \quad (20)$$

  19: end for

  20: \(\triangleright\) Update the memberships by Eq. 21.

  21: for (i = 1…B; f = 1…F) do

  22:
  $$v_{if}^{(l)} \leftarrow \frac{1}{\sum_{f^{\prime} = 1}^{F} \left( \frac{D_{if}^{2}}{D_{if^{\prime}}^{2}} \right)^{\frac{1}{s - 1}}} \quad (21)$$

  23: end for

  24: until \(\left\| V^{(l)} - V^{(l - 1)} \right\| < \epsilon\)

  25: return \(V^{(l)}, \{ z_{g}^{(l)} \}_{g = 1}^{S}\)

  26: end procedure
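A compact NumPy sketch of the procedure above, assuming precomputed Gram matrices for the kernels, a fuzzifier \(s\), and random initialization; the initialization scheme, tolerance, and iteration limit are implementation choices not fixed by the text:

```python
import numpy as np

def mkfcm(kernels, n_clusters, fuzzifier=2.0, max_iter=100, tol=1e-5, seed=0):
    """Multiple kernel fuzzy c-means, following the listed steps (Eqs. 16-21).

    kernels : list of precomputed B x B Gram matrices (one per kernel).
    Returns the membership matrix V (B x F) and the kernel weights z.
    """
    rng = np.random.default_rng(seed)
    B = kernels[0].shape[0]
    S, F, s = len(kernels), n_clusters, fuzzifier

    V = rng.random((B, F))
    V /= V.sum(axis=1, keepdims=True)                # rows sum to one

    for _ in range(max_iter):
        V_prev = V.copy()
        Vs = V ** s
        v_hat = Vs / Vs.sum(axis=0, keepdims=True)   # normalized memberships (step 4)

        # Coefficients alpha[i, f, g] (Eqs. 16-17).
        alpha = np.empty((B, F, S))
        for g, G in enumerate(kernels):
            diag = np.diag(G)                                   # G_g(y_i, y_i)
            cross = G @ v_hat                                   # sum_j v_hat[j,f] G_g(y_i, y_j)
            within = np.einsum("jf,jk,kf->f", v_hat, G, v_hat)  # sum_{j,j'} v_hat[j,f] v_hat[j',f] G_g(y_j, y_j')
            alpha[:, :, g] = diag[:, None] - 2.0 * cross + within[None, :]

        # Kernel weights (Eqs. 18-19).
        beta = np.maximum(np.einsum("if,ifg->g", Vs, alpha), 1e-12)
        z = (1.0 / beta) / (1.0 / beta).sum()

        # Distances and membership update (Eqs. 20-21).
        D2 = np.maximum(np.einsum("ifg,g->if", alpha, z ** 2), 1e-12)
        ratio = (D2[:, :, None] / D2[:, None, :]) ** (1.0 / (s - 1.0))
        V = 1.0 / ratio.sum(axis=2)

        if np.linalg.norm(V - V_prev) < tol:
            break
    return V, z

# Illustrative usage on a PCA-reduced document matrix X, e.g. with two kernels:
# from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
# V, z = mkfcm([linear_kernel(X), rbf_kernel(X)], n_clusters=50)
```

Here each document (a row of the reduced, FP-IDF-weighted matrix) is a data point and each cluster corresponds to a topic, so the returned memberships can serve as the topic-document associations used in the probability calculations below.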

Probabilistic distribution of documents

The document-term matrix produced by the GTW method (a words × documents matrix) is used to find the probability of a document, \(P(D_{j})\), calculated by Eq. 22. Here \(i\) indexes the words and \(j\) indexes the documents.

$$P(D_{j} ) = \frac{{\sum\nolimits_{i = 1}^{m} {(W_{i} ,D_{j} )} }}{{\sum\nolimits_{i = 1}^{m} {\sum\nolimits_{j = 1}^{n} {(W_{i} ,D_{j} )} } }}$$
(22)

Probabilistic distribution of the topics for documents

The joint probability of document \(j\) and topic \(k\), \(P(D_{j}, T_{k})\), is obtained from \(P(T_{k} |D_{j})\) and \(P(D_{j})\), as described in Eq. 23.

$$P(D_{j} ,T_{k} ) = P(T_{k} |D_{j} ) \times P(D_{j} )$$
(23)

Then, to find \(P(D_{j} |T_{k})\), \(P(D,T)\) is normalized over the documents for each topic using Eq. 24.

$$P(D_{j} |T_{k} ) = \frac{{P(D_{j} ,T_{k} )}}{{\sum\nolimits_{j = 1}^{n} {P(D_{j} ,T_{k} )} }}$$
(24)

Probabilistic distribution of words in documents

This step calculates the probability of word \(i\) in document \(j\) using Eq. 25.

$$P(W_{i} |D_{j} ) = \frac{{P(W_{i} ,D_{j} )}}{{\sum\nolimits_{i = 1}^{m} {P(W_{i} ,D_{j} )} }}$$
(25)

Probabilistic distribution of words in topics

The probability of word \(i\) in topic \(k\), \(P(W_{i} |T_{k})\), is calculated from \(P(W_{i} |D_{j})\) and \(P(D_{j} |T_{k})\) using Eq. 26.

$$P(W_{i} |T_{k} ) = \mathop \sum \limits_{j = 1}^{n} P(W_{i} |D_{j} ) \times P(D_{j} |T_{k} )$$
(26)
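A short sketch of the four probability matrices of Eqs. 22-26, assuming a weighted word-document matrix and a topic-document membership matrix (for example, from the fuzzy clustering step) as inputs; names are illustrative and every document is assumed to have at least one weighted term:

```python
import numpy as np

def topic_distributions(WD, PT_given_D):
    """Compute the probability matrices of Eqs. 22-26.

    WD        : m x n weighted word-document matrix (words x documents).
    PT_given_D: n x K matrix of topic memberships per document, P(T_k | D_j).
    """
    # Eq. 22: probability of each document.
    P_D = WD.sum(axis=0) / WD.sum()                      # shape (n,)

    # Eq. 23: joint probability of document and topic.
    P_DT = PT_given_D * P_D[:, None]                     # shape (n, K)

    # Eq. 24: P(D_j | T_k), normalize the joint over documents for each topic.
    P_D_given_T = P_DT / P_DT.sum(axis=0, keepdims=True)

    # Eq. 25: P(W_i | D_j), normalize word weights within each document.
    P_W_given_D = WD / WD.sum(axis=0, keepdims=True)     # shape (m, n)

    # Eq. 26: P(W_i | T_k) = sum_j P(W_i | D_j) * P(D_j | T_k).
    P_W_given_T = P_W_given_D @ P_D_given_T              # shape (m, K)
    return P_D, P_D_given_T, P_W_given_D, P_W_given_T
```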

Datasets

In this research, we used six publicly available benchmark datasets. The first dataset is the medical abstracts of the English scientific corpus from the MuchMore Springer Bilingual Corpus, a labeled dataset. We used two journal categories, the federal health standard and Arthroskopie, for experimentation. Table 1 shows the statistics of the datasets.

  • The medical abstracts from MeSH categories of the Ohsumed Collection form the second labeled corpus. The experiments are conducted on three categories: virus diseases, bacterial infections, and mycoses.

  • Biotext [45] is the third dataset, containing summaries of diseases and treatments collected from Medline.

  • The fourth dataset is the GENIA corpus [46], a collection of Medline abstracts from the molecular biology literature.

  • The fifth is the synthetic redundant WSJ corpus, which is commonly used in natural language processing (NLP) [47, 48].

  • The sixth dataset is the health news tweets dataset (T-dataset), an unlabeled dataset.

Table 1 Dataset statistics

Results

Experiments performed

The experiments cover classification, clustering, execution time, and the redundancy issue, using the six benchmark datasets. The first two datasets, the MuchMore Springer Bilingual Corpus and the Ohsumed Collection, are labeled and are therefore used for classification. The Biotext and Genia datasets are unlabeled and are therefore used for clustering. The synthetic redundant WSJ corpus is used for the redundancy comparison because the literature mostly uses this dataset for the redundancy issue, so we use it for a fair comparison. Execution time is compared on the health news tweets dataset, which contains the largest number of documents.

Experimental setup

Experiments were run on a laptop with an Intel Core i7 processor and 16 GB of RAM using MATLAB.

Baseline topic models

In this section, our proposed MKFTM topic model is compared with the state-of-the-art LDA [4], LSA [5], FLSA [6], and FKTM [7] topic models. Experiments are performed for both classification and clustering. We also compare our proposed topic model with RedLDA [25] and FKTM, which address the redundancy problem.

Classification of documents

The first classification evaluation is performed with Bayesian optimization on two datasets, the MuchMore Springer Bilingual Corpus and the Ohsumed Collection. Optimization refers to searching for the point that minimizes a real-valued function, known as the objective function. Bayesian optimization models the objective function with a Gaussian process and uses this model to evaluate it; here it minimizes the cross-validation error. MATLAB fit functions are used for Bayesian optimization. MKFTM performance is compared to the LDA, LSA, FLSA, and fuzzy k-means topic models using tenfold cross-validation. Document classification is performed on the topic probabilities per document, P(T|D), with a discriminant analysis classifier [49] tuned by Bayesian optimization. Discriminant analysis is described in Eq. 27.

$$\hat{y} = \mathop{\arg\min}\limits_{y = 1, \ldots, K} \sum_{k = 1}^{K} \hat{P}(k|x)\, C(y|k)$$
(27)

Here \(\hat{y}\) is the predicted class and \(K\) is the number of classes. \(\hat{P}(k|x)\) is the posterior probability of class \(k\) given observation \(x\), and \(C(y|k)\) is the cost of classifying an observation as \(y\) when its true class is \(k\). The discriminant analysis classifies the document features for different numbers of topics, namely 50, 100, 150, and 200. The classification performance of MKFTM is measured using precision, recall, accuracy, and F1-score. The classification results for the two labeled datasets, MuchMore Springer and Ohsumed, are shown in Tables 2 and 3, where MKFTM is compared with the state-of-the-art LDA, LSA, FLSA, and FKTM topic models for biomedical text corpora.
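A simplified sketch of this evaluation in Python, using scikit-learn's linear discriminant analysis and 10-fold cross-validation on the P(T|D) features; the Bayesian hyperparameter optimization used in the MATLAB experiments is omitted, so this is only an illustrative approximation:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_validate

def evaluate_classification(topic_features, labels):
    """10-fold cross-validated classification on the document-topic features.

    topic_features : (n_docs, n_topics) matrix of P(T|D) values.
    labels         : array of document class labels.
    """
    clf = LinearDiscriminantAnalysis()
    scores = cross_validate(
        clf, topic_features, labels, cv=10,
        scoring=("accuracy", "precision_macro", "recall_macro", "f1_macro"),
    )
    return {metric: np.mean(values)
            for metric, values in scores.items() if metric.startswith("test_")}
```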

Table 2 Classification results (MuchMore Springer Bilingual Corpus)
Table 3 Classification results (Ohsumed Collection dataset)

Clustering of documents

The clustering performance is measured on two datasets, Genia and Biotext. Document clustering is performed by applying k-means clustering to P(T|D). There are two approaches to clustering validation, and internal validation is more accurate than external validation [50]. We use the internal Calinski-Harabasz index to evaluate multiple numbers of topics and clusters. The Calinski-Harabasz (CH) index [51] is a widely used internal validation method. The CH index relates between-cluster separation to within-cluster cohesion, both estimated as distances from cluster centers, as shown in Eq. 28, where \(K\) is the number of clusters and \(N\) is the total number of observations.

$$CH(C) = \frac{N - K}{K - 1} \cdot \frac{\sum_{C_{k} \in C} |C_{k}| \, d_{e}(\overline{C}_{k}, \overline{X})}{\sum_{C_{k} \in C} \sum_{x_{i} \in C_{k}} d_{e}(x_{i}, \overline{C}_{k})}$$
(28)

The Calinski-Harabasz index assesses the quality of a clustering by comparing between-cluster and within-cluster dispersion; the highest Calinski-Harabasz index indicates the best clustering and identifies the most appropriate clusters. Figures 1, 2, 3, 4, 5, 6, 7, and 8 show the CH index for clustering performance on the Genia and Biotext datasets.
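A minimal sketch of this evaluation, assuming scikit-learn's KMeans and its built-in Calinski-Harabasz score applied to the document-topic matrix; the range of cluster counts is illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def ch_scores(topic_features, cluster_range=(2, 3, 4, 5, 6)):
    """Cluster P(T|D) with k-means and report the CH index for each cluster count."""
    scores = {}
    for k in cluster_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(topic_features)
        scores[k] = calinski_harabasz_score(topic_features, labels)
    return scores
```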

Fig. 1 CH-index results for the Genia dataset with K = 50

Fig. 2 CH-index results for the Genia dataset with K = 100

Fig. 3 CH-index results for the Genia dataset with K = 150

Fig. 4 CH-index results for the Genia dataset with K = 200

Fig. 5 CH-index results for the Biotext dataset with K = 50

Fig. 6 CH-index results for the Biotext dataset with K = 100

Fig. 7 CH-index results for the Biotext dataset with K = 150

Fig. 8 CH-index results for the Biotext dataset with K = 200

Redundancy issue

This experiment examines the influence of the redundancy problem using the synthetic redundant WSJ corpus. MKFTM is compared with LDA and with RedLDA, which was developed to address redundancy issues in biomedical documents [25]. LDA, RedLDA, FLSA, the fuzzy k-means topic model, and MKFTM are trained on the same redundant synthetic WSJ corpus to compare their performance. Table 4 shows the log-likelihood on the synthetic redundant WSJ dataset with topics ranging from 50 to 200.

Table 4 Comparison of loglikelihood for WSJ corpora

Execution time

The health news tweets dataset is used to compare the MKFTM runtime with LDA, LSA, and FLSA. Figure 9 shows the runtime performance of MKFTM, LDA, and LSA.

Fig. 9 Comparison of execution times on the health news tweets dataset

Discussion

The classification, clustering, redundancy, and execution time experiments measure overall performance. The document classification results are presented in Tables 2 and 3. Table 2 shows the classification results for the MuchMore Springer dataset. The classification results are measured with 50, 100, 150, and 200 topics on both datasets. MKFTM achieved 99.04%, 99.62%, 99.69%, and 99.61% accuracy with 50, 100, 150, and 200 topics on the MuchMore Springer dataset, while FKTM accuracy is 98.29%, 98.87%, 98.97%, and 98.86% for the same numbers of topics. FLSA accuracy is higher than that of LDA and LSA on the MuchMore Springer dataset, and FKTM accuracy is better than FLSA. However, MKFTM achieved the highest accuracy compared to the baseline topic models FKTM, FLSA, LDA, and LSA on the MuchMore Springer dataset. We also measured precision, recall, and F1-score for all topic models. The precision, recall, and F1-score of FKTM are better than those of FLSA on the MuchMore Springer dataset, and the LDA and LSA values are lower than those of FLSA. Overall, MKFTM attained the highest precision, recall, and F1-score on the MuchMore Springer dataset. Table 3 shows the Ohsumed dataset classification results: MKFTM achieved 94.10%, 89.45%, 92.91%, and 90.35% accuracy with 50, 100, 150, and 200 topics, respectively, while FKTM accuracy is 92.35%, 87.70%, 90.16%, and 88.25% for the same numbers of topics. On the Ohsumed dataset, FLSA accuracy outperforms LDA and LSA, and FKTM accuracy is higher than FLSA. However, the accuracy of MKFTM on the Ohsumed dataset is higher than that of the baseline topic models FKTM, FLSA, LDA, and LSA. On the Ohsumed dataset, FKTM also outperforms FLSA in precision, recall, and F1-score, and the LDA and LSA values are lower than those of FLSA. The precision, recall, and F1-score of MKFTM are the highest among FKTM, FLSA, LDA, and LSA. The classification results show that MKFTM is superior to FKTM, FLSA, LDA, and LSA on the MuchMore Springer and Ohsumed datasets.

Document clustering performance is measured using the Calinski-Harabasz index for the Genia and Biotext datasets with 50, 100, 150, and 200 topics. Figures 1, 2, 3, and 4 show that the CH-index values of LDA and LSA are lower than those of FLSA for the Genia dataset, the FKTM CH-index values are higher than those of FLSA, and the MKFTM CH-index values are higher than those of FKTM. Therefore, the clustering performance of MKFTM is the highest among the topic models FKTM, FLSA, LDA, and LSA for the Genia dataset. Figures 5, 6, 7, and 8 show that the CH-index values of LDA and LSA are lower than those of FLSA for the Biotext dataset, the FKTM CH-index values are higher than those of FLSA, and the MKFTM CH-index values are higher than those of FKTM. As a result, MKFTM outperforms FKTM, FLSA, LDA, and LSA in clustering on the Biotext dataset. Therefore, MKFTM achieves better clustering performance on both the Genia and Biotext datasets.

Table 4 shows the log-likelihood for the WSJ dataset with 50, 100, 150, and 200 topics. The log-likelihood results of MKFTM are better than those of FKTM, FLSA, LDA, and LSA across the different numbers of topics. Therefore, MKFTM also mitigates the redundancy issue and achieves better performance on redundant corpora than FKTM, FLSA, LDA, and LSA.

The execution time performance on the health news tweets dataset is shown in Fig. 9. Execution time is measured with 50, 100, 150, 200, 250, 300, and 350 topics. The execution time of LDA and LSA increases as the number of topics increases, whereas the execution time of MKFTM remains stable.

Conclusion

The volume of biomedical text is growing rapidly, and analyzing these documents is extremely important for discovering valuable sources of information. Biomedical databases like PubMed provide valuable services to scientific communities. Topic modeling is a popular technique for revealing the hidden thematic structure of biomedical text documents, and the resulting structure makes these documents easier to search, index, and summarize. In advanced machine learning, fuzzy methods are mostly utilized in medical imaging, and existing topic modeling methods are based on linear and statistical distributions. This paper presented a new multiple kernel fuzzy topic modeling (MKFTM) approach for biomedical text documents, together with a new fusion probabilistic inverse document frequency. MKFTM mitigates the negative consequences of redundant words in biomedical text documents and performs better than LDA and RedLDA. MKFTM also reduces the sparsity problem in biomedical text documents. Experimental results indicate that MKFTM performs better on biomedical document classification and clustering tasks than the state-of-the-art topic models LDA, LSA, FLSA, and FKTM. MKFTM is a new approach to topic modeling with the flexibility to work with a variety of clustering and scaling techniques. Furthermore, the MKFTM method can use discrete and continuous data to extract topics from biomedical documents. The quantitative evaluation on six datasets shows that MKFTM outperforms strong baselines with significant improvements.