Introduction

Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) creates images of internal organs and physiological processes by combining strong magnetic fields and radio waves. It is widely used in radiology to examine a broad range of health conditions and illnesses without exposing patients to ionizing radiation, and more than 35,000 MRI scanners are in use worldwide. The modality has revolutionized medical diagnosis and significantly affected patient care in many specialties, although much remains to be learned about how it translates into better health outcomes. Because MRI, unlike computed tomography (CT), involves no ionizing radiation, it is generally advised to choose MRI over CT whenever either modality can provide the same diagnostic information. Although MRI is typically regarded as a safe imaging method, reports of patient injury have grown over time [11, 12].

Even though magnetic resonance imaging (MRI) [1,2,3] is a helpful diagnostic tool, it is not recommended in specific circumstances, such as for patients with cochlear implants, cardiac pacemakers, metallic foreign bodies in their eyes, or certain ferromagnetic surgical implants [9]. Although it may be a better option than other imaging techniques, the safety of MRI during the first trimester of pregnancy remains under debate. As MRI is used more widely in healthcare, questions have also been raised about its cost-effectiveness and the potential for overdiagnosis. To ensure that MRI is used appropriately and effectively, healthcare providers must weigh its potential advantages against its costs and the risks to patients.

Prostate Cancer

Prostate cancer, which develops in the glandular cells of the male reproductive system, is the second most commonly diagnosed cancer in men and the sixth leading cause of cancer-related mortality in men. Although it usually progresses slowly, it can occasionally spread quickly to other parts of the body, including the lymph nodes and bones. The prostate is an exocrine gland about the size of a walnut, situated beneath the bladder and in front of the rectum. Management of prostate cancer should be based on the severity of the disease, with active surveillance being a viable option for low-risk tumors. Surgery, radiation therapy, proton therapy, and cryosurgery are potential curative treatments, whereas hormonal therapy and chemotherapy are generally reserved for advanced cases. Age, overall health, extent of metastasis, and response to treatment are all crucial factors in determining the outcome of the disease. Differentiating between benign and malignant tumors is difficult, but dynamic contrast-enhanced MRI (DCE-MRI) [5] is a dependable imaging method that can provide crucial information about the tumour microenvironment and angiogenesis. DCE-MRI can estimate blood perfusion and capillary permeability using pharmacokinetic models, assisting with the treatment of prostate cancer.

VBIR Technique

Over the past decade, visual-based image retrieval (VBIR) has become a predominant area in the field of computer vision. The technique analyzes an image's visual characteristics rather than relying on metadata such as keywords, tags, or textual descriptions. Medical images are often complex and composed of various structures, so feature extraction and image classification are necessary for efficient retrieval. VBIR automatically retrieves images based on properties such as color composition, shape, and texture. Numerous medical images are produced every day in hospitals and medical centers, including dental, endoscopy, skull, MRI, ultrasound, and radiology images. Medical image retrieval is critical for diagnosis, education, and research, as it enables retrieval of similar cases and leads to more accurate diagnoses and appropriate treatment decisions [13, 14].

This work aims to present and analyse a new technique, adaptive complex independent components analysis (ACICA), for measuring intravascular contrast agent concentration in prostatic tissue. Specifically, it focuses on the first phases of contrast agent transit through the tumour vasculature. Furthermore, the research attempts to demonstrate increased concentration in the prostate's peripheral zone, which has previously been recognised by hyperintensity in T2-weighted MRI and apparent diffusion coefficient maps. In addition, the research presents a unique partitioning clustering approach for visual-based image retrieval, with the goal of outperforming conventional k-means clustering in terms of both intra- and inter-cluster similarity metrics. Future objectives involve investigating quantitative analysis and hyperspectral imaging to improve the detection of prostate cancer by identifying spectral signatures and enhancing the assessment of these characteristics.

Proposed Architecture

The proposed architecture shown in Fig. 1 introduces a new method, adaptive complex independent components analysis (ACICA), for measuring the concentration of contrast agent in prostate tissue. The ACICA method estimates the intravascular components and converts them into a curve that represents the concentration of contrast agent in the blood vessels within the tumor. The method does not require information about contrast uptake in an artery outside the tumor, which was previously necessary to measure the arterial input function (AIF) accurately during the early stages of contrast agent passage through the prostate's blood vessels. A reference-region (RR)-based method is then used to correct the intravascular concentration curve computed by ACICA through a clustering and thresholding mechanism. This process is automated, supporting accurate MR image diagnosis as shown in Fig. 2. The corrected intravascular concentration curve is then subjected to pharmacokinetic (PK) analysis using a fuzzy C-means mechanism, yielding a high Ktrans value that corresponds to the suspicious region in the prostate tumor. Finally, clustering-based VBIR techniques are employed to find relevant images from a large image database using their visual features.
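Purely to illustrate the clustering step mentioned above (this is not the authors' implementation), a minimal NumPy sketch of fuzzy C-means applied to a flattened Ktrans map is given below; the function names, the number of clusters, and the fuzziness exponent are assumptions.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means on a 1-D array of values (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)        # (N, 1) samples
    u = rng.random((len(x), n_clusters))                 # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]   # (C, 1) cluster centers
        dist = np.abs(x - centers.T) + 1e-12             # (N, C) distances to centers
        u_new = 1.0 / (dist ** (2.0 / (m - 1.0)))        # standard FCM membership update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Hypothetical usage: cluster a Ktrans map and keep the highest-Ktrans cluster.
# ktrans_map = ...  # 2-D array of per-voxel Ktrans estimates
# centers, u = fuzzy_c_means(ktrans_map.ravel(), n_clusters=3)
# labels = u.argmax(axis=1).reshape(ktrans_map.shape)
# suspicious_mask = labels == centers.argmax()
```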

Fig. 1

General structure of the proposed technique

Fig. 2

General structure of VBIR system (enhancement part)

Visual-based image retrieval (VBIR) stands apart from traditional information retrieval because image databases lack inherent structure and meaning. Unlike text databases, which contain logically structured ASCII character strings, digitized images consist exclusively of pixel arrays and must be processed to extract relevant information before any analysis can take place [21]. VBIR draws heavily on image processing and computer vision techniques, but focuses specifically on retrieving images with particular features from large databases. While there is some overlap between VBIR and related fields such as object recognition by feature analysis, VBIR's focus on image retrieval distinguishes it from broader image processing applications such as compression and interpretation. For example, police may use a face recognition system to match a single image against an individual's database record for identity verification, which would not be considered VBIR; searching an entire database for images that closely match a given query image, however, would be.

VBIR raises various research and development concerns that overlap with mainstream image processing and information retrieval. Some of the most significant include understanding the needs of image users and their information-seeking behavior, finding appropriate methods to describe image content, extracting relevant features from raw images, storing large image databases compactly, matching query and stored images in a way that reflects human similarity judgments, efficiently accessing stored images by visual content, and designing user-friendly interfaces for VBIR systems. In video retrieval, key research issues include developing techniques suited to retrieval and designing procedures that enable efficient and effective search of video data.

Recent advancements underscore challenges tied to contrast-enhanced imaging in prostate cancer analysis [26]. Varied uptake patterns within tumors complicate differentiation, and subtle differences in contrast agent uptake make distinguishing cancerous regions from healthy tissue difficult [27]. Limited spatial resolution hampers precise lesion identification, affecting staging accuracy. Prostate movement leads to motion artifacts, potentially compromising image quality [28]. Optimal timing of contrast agent injection is a challenge, and renal dysfunction can lead to complications. Variability in quantitative metrics due to measurement techniques introduces inconsistencies, and a lack of standardized protocols yields varied outcomes [29]. Patient-specific factors, such as physiological differences, affect contrast agent distribution and image quality [30]. Contrast agents, pharmaceutical compounds that enhance the diagnostic information in images, play a pivotal role [31]. They boost imaging sensitivity and specificity by modifying tissue attributes and influencing contrast mechanisms. Unlike single-property-based modalities, MRI employs multiple intrinsic tissue characteristics for image creation [32] and, in contrast to computed tomography, does not rely on a single contrast mechanism [33]. In the contemporary medical landscape, the significance of dynamic contrast-enhanced MRI (DCE-MRI) with models such as the Tofts kinetic model extends beyond prostate cancer detection. This approach holds broad relevance across medical fields because it provides a noninvasive [34] means of assessing tissue perfusion and capillary permeability. Such insights are essential for understanding the pathophysiology of numerous diseases, including cancers, neurodegenerative disorders, and cardiovascular conditions. The integration of DCE-MRI into clinical practice meets the growing need for accurate and early disease detection and monitoring without invasive procedures, and it empowers healthcare professionals [35] with a powerful tool to evaluate treatment responses, differentiate benign from malignant lesions, and optimize therapeutic strategies [36].
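For reference, the standard Tofts model referred to above relates the tissue concentration curve \(C_t(t)\) measured by DCE-MRI to the arterial plasma concentration \(C_p(t)\) through the transfer constant \(K^{trans}\) and the efflux rate constant \(k_{ep} = K^{trans}/v_e\) (the extended variant adds a plasma-volume term \(v_p\,C_p(t)\)):

$$ C_t(t) = K^{trans} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau . $$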

Methods Used for Implementing VBIR

Two types of visual descriptors are used in the VBIR system:

Color Layout Descriptor

The colour layout descriptor (CLD) [4] is a feature extraction approach that captures the spatial distribution of colour in an image. It combines a grid-based selection of representative colours with a discrete cosine transform (DCT) and quantization. Colour is a key component of visual perception, which makes it well suited to describing and conveying an image. The CLD, one of the colour descriptors defined by the MPEG-7 standard, allows colour relationships between collections or sequences of pictures to be described. It records the spatial arrangement of representative colours on a grid superimposed on a region or image and represents the data using DCT coefficients, producing a very compact descriptor, as shown in Fig. 3.

Fig. 3

Schema of CLD

The extraction process of this color descriptor consists of four stages. The descriptor can be viewed as a variant of the well-known color histogram, widely used in computer vision and image processing, with additional processing steps that improve its invariance to image resolution and capture more useful information about the image's color distribution. The four stages are:

  1. (a)

    Image partitioning: The input image is divided into 64 blocks to ensure that the descriptor is invariant to image rescaling. This step helps to capture the local color information of the image effectively.

  2. (b)

    Representative color selection: A single representative color is selected from each block. This step reduces the dimensionality of the descriptor and helps to eliminate noise or irrelevant information. The resulting miniature 8 × 8 image icon contains 64 representative colors, one for each block.

  3. (c)

    DCT transformation: An 8 × 8 discrete cosine transform (DCT) is applied to the luminance and the blue and red chrominance (Y, Cb, and Cr) values of the miniature image. This step captures the frequency-domain properties of the color layout.

  4. (d)

    Zigzag scanning: Zigzag scanning is carried out on the three resulting sets of 64 DCT coefficients. Rearranging the coefficients groups the low-frequency coefficients of the 8 × 8 DCT matrix and makes the descriptor more robust to small changes in the image's colour distribution.

Overall, this color descriptor captures the local color information of the image effectively while also considering the frequency-domain characteristics of the color layout. The resulting descriptor is compact and robust to variations in the image's resolution or scale. A minimal sketch of this extraction pipeline is given below.
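As a rough illustration only, and not the authors' implementation, the following Python sketch follows the four stages above: partition the image into an 8 × 8 grid of blocks, take each block's mean colour as its representative, convert to YCbCr, apply an 8 × 8 DCT per channel, and zigzag-scan the coefficients. The function names and the number of retained coefficients per channel (6/3/3) are assumptions.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT; assumes SciPy is available

def zigzag_indices(n=8):
    """Return (row, col) pairs of an n x n matrix in zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def extract_cld(rgb, n_coeffs=(6, 3, 3)):
    """Illustrative colour layout descriptor: 8x8 grid -> YCbCr -> DCT -> zigzag."""
    h, w, _ = rgb.shape
    tiny = np.zeros((8, 8, 3))
    for i in range(8):                                   # representative (mean) colour per block
        for j in range(8):
            block = rgb[i * h // 8:(i + 1) * h // 8, j * w // 8:(j + 1) * w // 8]
            tiny[i, j] = block.reshape(-1, 3).mean(axis=0)
    r, g, b = tiny[..., 0], tiny[..., 1], tiny[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b              # RGB -> YCbCr (ITU-R BT.601)
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128
    zz = zigzag_indices(8)
    desc = []
    for chan, keep in zip((y, cb, cr), n_coeffs):        # keep low-frequency coefficients
        coeffs = dctn(chan, norm='ortho')
        desc.extend(coeffs[r_, c_] for r_, c_ in zz[:keep])
    return np.asarray(desc)
```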

Edge Histogram Descriptor

Histograms are widely used to represent the content of an image and can be used for indexing and retrieval. The edge histogram descriptor proposed in MPEG-7 focuses on capturing the local edge distribution in the image. The standard edge histogram in MPEG-7 comprises only 80 histogram bins and is designed to store metadata in a space-efficient manner. However, because these local histogram bins may not be sufficient to capture the overall nature of the edge distribution, global and semi-global edge histograms may be needed to improve retrieval performance. Both can be produced from the local histogram bins, and all three types of histograms (global, semi-global, and local) can be combined to measure the similarity between images. In short, edge features are important for representing image content, and histograms capture and represent these features efficiently for indexing and retrieval, while the limitations of local bins alone motivate the use of global and semi-global histograms.
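As a small, indicative illustration of how coarser descriptions can be derived from the 80 local bins (16 sub-images × 5 edge types), the sketch below averages the local bins into a 5-bin global histogram and groups rows and columns of sub-images into semi-global histograms; the exact groupings standardized in MPEG-7 differ in detail, so this should be read only as a sketch.

```python
import numpy as np

def derive_histograms(local_bins):
    """local_bins: (16, 5) array -- 16 sub-images x 5 edge types (local EHD bins)."""
    local = np.asarray(local_bins, dtype=float).reshape(4, 4, 5)
    global_hist = local.mean(axis=(0, 1))            # 5 bins over the whole image
    row_hists = local.mean(axis=1)                   # (4, 5): one histogram per sub-image row
    col_hists = local.mean(axis=0)                   # (4, 5): one histogram per sub-image column
    semi_global = np.vstack([row_hists, col_hists])  # indicative semi-global groupings
    return global_hist, semi_global
```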

Normative Semantics

This section describes the normative part of the edge histogram descriptor, which comprises 80 local edge histogram bins. The following subsections define the semantics of these histogram bins in more detail and explain the significance of each bin in relation to the image's edge distribution.

Each histogram bin captures a specific aspect of the local edge distribution, such as edge orientation or strength. Using a fixed set of histogram bins allows the metadata to be stored efficiently while still capturing important information about the edge distribution of the image.

Edge Partition of Image Space

Localization and Identification

This is a method for localizing the edge distribution within a certain region of the image. As seen in Fig. 4, the image space is partitioned into 4 × 4 sub-images, and an edge histogram is created for each sub-image to show how the edges are distributed there. To specify the various edge types within a sub-image, each sub-image is further divided into small square blocks known as image-blocks, and the edge histogram for the sub-image is generated from the edge types identified within these image-blocks. This approach allows more precise localization of the edge distribution and captures more nuanced information about the edge types present in the image.

Fig. 4

Sub-image and image-block

Edge Types

Figure 5 illustrates the five edge types that the edge histogram descriptor uses for creating edge histograms: four directional edges and one non-directional edge. The directional edges, extracted from the image-blocks of the sub-images, are the vertical, horizontal, 45-degree diagonal, and 135-degree diagonal edges. A non-directional edge is one that has no obvious directionality within an image-block. This categorization allows a more detailed characterization of the edge distribution within the image, enabling more nuanced analysis and comparison of images based on their edge features.

Fig. 5

Five types of edges

Data stream diagrams are used to illustrate the flow of data between and among modules. The following sections cover several important topics related to software design, including the design considerations that must be addressed before attempting a complete solution, the development methods, the architectural strategies, and the system architecture. The System Architecture section is particularly important, as it describes the data stream diagrams (DSDs) that serve as the root of the scheme.

Design Considerations

This stage represents the initial step in transitioning from the problem to the solution domain. Essentially, the design process involves determining how to satisfy the identified needs. The quality of the software is heavily influenced by the design phase and can have significant impacts on the subsequent stages, such as testing and maintenance. During the system design phase, the main focus is on determining the various modules that will be included in the system, their specifications, and the manner in which they will interact with each other to achieve the desired results. Upon the conclusion of this phase, the key data structures, output and file formats, and module specifications are solidified.

Development Methods

The development of the project in this research followed the Waterfall lifecycle model, originally proposed by Royce. This model, shown in Fig. 6, is characterized by a sequential approach in which each activity is completed before moving on to the next. The project is divided into smaller tasks in order of precedence, and each task is designed individually to ensure its proper functionality before the next one is started. Each completed task is verified to ensure that it meets all requirements and is error-free.

Fig. 6

Waterfall model

The Waterfall model offers two main advantages. First, it offers simplicity: breaking the entire project into smaller activities makes it easier to develop code that can be combined into the complete project. Second, the verification step ensures that any errors in a task are eliminated before proceeding to dependent tasks, reducing the likelihood of errors remaining undetected in the overall task hierarchy.

Schematic illustration of waterfall model:

The model is divided into stages. The problem is identified during the requirements analysis and definition stage, and service objectives and constraints are defined. During the system and software design stage, the system specifications are translated into a software representation, including the data structures, software architecture, process details, and interface representations. During implementation, the designs are translated into code. Unit, integration, and system testing aim to identify errors and ensure that the software fulfills its specification. Finally, during operations and maintenance, the software is updated to satisfy changing customer needs, adapt to changes in the external environment, and correct errors and oversights.

System Architecture

In software engineering, designing large systems involves breaking them down into smaller sub-systems that offer related services. The primary goal of the architectural design process is to establish a basic structural framework for a system by identifying its major components and their communications. The system's structure is presented in Fig. 7, illustrating the steps involved in pre-processing the MRI image of prostate cancer. The input image is processed by a novel technique, adaptive complex ICA, in which information about intravascular contrast agent concentration is extracted using Independent Component Analysis (ICA) [6,7,8]. The models extract the independent components, which are stored in a matrix and analyzed by a Visual-Based Image Retrieval (VBIR) system. The proposed architecture strives to accomplish the required results effectively. To calculate intravascular contrast agent concentration in prostate cancer, Mehrabian's system builds on Independent Components Analysis [10]; the proposed system, however, adds a VBIR technique to improve the result. The key contribution of the proposed system is an affordable and computationally efficient feature extraction technique that uses robust descriptors to obtain superior outcomes.

Fig. 7

Proposed system architecture

Research Methodologies

To calculate the intravascular concentration in prostate tissue, the proposed technique uses the adaptive complex independent components analysis procedure. A reference-region (RR)-based method is used to correct the intravascular concentration curve. The visual-based image retrieval (VBIR) technique is used to create various MR image databases, from which similar images can be retrieved using automated feature extraction methods. The medical imaging field has shown increased interest in methods and tools for analyzing medical images, and VBIR is a useful technique for comparing images based on specified features stored in a database. The proposed method is applied to a database of MR images to retrieve the required image based on similarity matching with respect to the Ktrans map value.

The proposed system defines classes based on parts or segments of medical images, such as regions, objects, or segments, which are then classified into classes corresponding to regular or irregular anatomical structures such as tumors, ventricles, or hematomas, organized by diagnostic and anatomical hierarchies. The classification is performed by experts who select appropriate names from the hierarchy. The dataset used comprises approximately 5,000 simulated but realistic MRI and CT images from three different MR image catalogues, covering almost all classes related to prostate cancer.

In [16, 17], the authors conducted studies on the effectiveness of CNNs in generating effective image representations; these networks have found applications in various machine learning tasks. In [18], the authors presented a medical image retrieval system called LBpDAD that combines the advantages of local bit-plane decoding with features extracted by a neural network. [19] introduced a novel approach based on coefficients of compressed histograms that utilizes a specific variant of deep networks. Additionally, [20] proposed handling distinct forms of compressed data, namely data concentration and canonical correlation analysis.

Data Stream Diagram

A data stream diagram (DSD) is a visual representation of how data moves through an information system, depicting its process aspects. DSDs are often used as a preliminary step to provide an overview of a system, which can then be elaborated. They can also be used to represent data processing in structured design. A DSD shows the input and output data and the origin and destination of the data, but it does not specify the timing of the processes or whether they run sequentially or concurrently. The DSDs for the proposed system are described below.

In Fig. 8, we can observe the level zero data stream diagram, where the input is an MRI image. This input image is processed by the proposed architecture, which is illustrated in the level one DSD (Fig. 9) and explained in detail in the level two DSDs (Figs. 10, 11, 12). The final output is the retrieved image that indicates prostate cancer.

Fig. 8

Level zero data stream diagram

Fig. 9

Level one data stream diagram

Fig. 10

Level two data stream diagram

Fig. 11

Level two data stream diagram

Fig. 12

Level two data stream diagram

In Fig. 9, we can observe the level one data stream diagram of the proposed model depicted in Fig. 8. The model is divided into three sub-processes: (i) Dynamic Contrast Enhancement 1.1, (ii) Adaptive Complex ICA (ACICA) 1.2, and (iii) Visual-based Image Retrieval (VBIR) 1.3. The detailed steps of each sub-process are explained using the level two data stream diagrams (Figs. 10, 11, 12).

In Fig. 10, the level two data stream diagram for the first sub-process, DCE 1.1, is presented. The process starts with an input MRI image (1.1.1), which is digitized and then converted to grayscale (1.1.2) to simplify the computations. The grayscale image is then transformed to enhance its intensity (1.1.3). The output of this process is the enhanced image, the DCE-MRI, which serves as input to the next sub-process, ACICA 1.2.
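A minimal sketch of this pre-processing step is given below, assuming scikit-image is available; the percentile-based contrast stretch is an assumed choice, not the authors' exact transformation.

```python
import numpy as np
from skimage import io, color, exposure  # assumes scikit-image is installed

def preprocess_slice(path):
    """Illustrative pre-processing: load, convert to grayscale, stretch intensity."""
    img = io.imread(path)
    if img.ndim == 3:                          # colour-rendered slice -> grayscale
        img = color.rgb2gray(img)
    img = img.astype(float)
    # Contrast stretch between the 2nd and 98th percentiles (an assumed choice)
    p2, p98 = np.percentile(img, (2, 98))
    return exposure.rescale_intensity(img, in_range=(p2, p98), out_range=(0.0, 1.0))
```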

Figure 11 represents the decomposition of the second sub-process, ACICA 1.2. Initially, the enhanced image obtained from sub-process 1.1.3 is taken as input. The process involves capturing the real and imaginary parts of the image components (1.2.1), followed by fitting them with a Gaussian distribution (1.2.2), which estimates the likelihood that an observation lies between two real limits. This output is then analyzed to approximate the model parameters (1.2.3) and subsequently to determine the probability density function (PDF) (1.2.4). The PDF, which describes the relative likelihood of the random variable taking a given value, is used to extract the non-linearity (1.2.5). Finally, the independent components are derived (1.2.6) for use by the VBIR system in the later stage.
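For orientation only, the sketch below separates a DCE-MRI time series into spatially independent components using scikit-learn's FastICA, a real-valued stand-in for the complex-valued ACICA described above; the intravascular component would then be identified, for example by its early, sharp enhancement, and its time course taken as the intravascular concentration curve. All names and the selection heuristic are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA  # real-valued stand-in for complex ACICA

def separate_components(dce, n_components=4, seed=0):
    """dce: 4-D DCE-MRI array (time, z, y, x). Returns IC maps and their time courses."""
    t = dce.shape[0]
    x = dce.reshape(t, -1).T                      # voxels as observations, time points as features
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
    ic_maps = ica.fit_transform(x)                # (n_voxels, n_components) spatial ICs
    time_courses = ica.mixing_                    # (t, n_components) mixing columns ~ dynamics
    return ic_maps.T.reshape((n_components,) + dce.shape[1:]), time_courses

# Hypothetical selection of the intravascular component: the one whose time course
# peaks earliest after contrast injection.
# maps, tc = separate_components(dce_series)
# iv_idx = np.argmax(tc, axis=0).argmin()
# intravascular_curve = tc[:, iv_idx]
```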

Figure 12 depicts the level two data stream diagram that provides an overview of the third sub-process, Visual-Based Image Retrieval (VBIR) 1.3. In the first stage, the independent components and image information are used as input for creating a database (1.3.4); the visual descriptor is then applied (1.3.5), and features are extracted (1.3.6). In the second stage, the query image is subjected to the same operations, and the features of the query and the trained images undergo similarity matching (1.3.7). Indexing and retrieval processes (1.3.8) are used to retrieve similar images from a large image dataset. Finally, a similarity matrix is computed, and the system retrieves several sets of the best-matched images. By combining adaptive complex independent component analysis (ACICA) with the VBIR system, the proposed procedure provides better confirmation of the images.
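A minimal sketch of the matching stage, under the assumption that each database image has already been reduced to a fixed-length feature vector (for example, concatenated CLD and EHD features): Euclidean-distance ranking returns the k most similar images. The function names are hypothetical.

```python
import numpy as np

def build_database(feature_vectors):
    """Stack per-image feature vectors (all of equal length) into an (N, D) matrix."""
    return np.vstack(feature_vectors)

def retrieve(db, query_vec, k=5):
    """Return indices and distances of the k database images closest to the query."""
    dists = np.linalg.norm(db - query_vec[None, :], axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

# Hypothetical usage:
# db = build_database([extract_features(img) for img in training_images])
# top_idx, top_dist = retrieve(db, extract_features(query_image), k=5)
```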

Process Stream Chart

The stream chart shown in Fig. 13 provides an overview of the proposed work. The process begins with the input of an MRI image, which undergoes dynamic contrast enhancement (DCE), digitization, and transformation. This output is passed through ACICA, followed by the reference-region-based method. The resulting output is then subjected to feature extraction using a visual descriptor. Simultaneously, the query image is obtained from an image collection and subjected to the same feature extraction. The resulting feature vectors from both the MRI and query images are subjected to similarity matching. Finally, the system retrieves several sets of the best-matched images and the process stops. By following this process, the proposed procedure aims to detect prostate cancer effectively and efficiently through the extraction and analysis of intravascular contrast agent concentration, and to retrieve visually similar medical images through the use of visual descriptors and feature vectors.

Fig. 13

Process stream chart of the proposed work

Procedure Description

A brief description of the procedure of the proposed system is as follows:

Procedure for Cancer Detection Using ACICA and VBIR

Begin
  1. o

    The proposed procedure involves the following steps:

  2. o

    Take an input image.

  3. o

    Convert it to grayscale.

  4. o

    To initiate the ACICA process:

  5. o

    Transform the image intensity.

  6. o

    Apply Adaptive Complex ICA.

  7. o

    Evaluate the intravascular curve.

  8. o

    Correct the intravascular curve using the RRB method.

  9. o

    Apply pharmacokinetic modelling.

  10. o

    Apply descriptors to the image.

  11. o

    Extract feature vectors.

  12. o

    To initiate the VBIR process:

  13. o

    Create feature vectors database.

  14. o

    Provide a query image.

  15. o

    Apply Steps 7 and 8 to the query image.

  16. o

    Perform a similarity match between the query image and the database.

  17. o

    Retrieve the most similar image from the database.

  18. o

    By employing these steps, the proposed procedure aims to efficiently detect prostate cancer through the extraction and analysis of intravascular contrast agent concentration, and to retrieve visually similar medical images through the use of visual descriptors and feature vectors.

END

The primary objective of the procedure discussed above is to extract the intravascular contrast agent concentration from the image. To accomplish this, the Visual-Based Image Retrieval (VBIR) component of the procedure employs visual descriptors. The descriptor chosen for the project is the EHD, a texture-based descriptor that allows effective feature extraction across a range of medical images, facilitating faster and more efficient retrieval through the VBIR process. As a result, the VBIR system is expected to deliver highly efficient and cost-effective prostate cancer detection with reduced time and space complexity [15].

Procedure for Feature Extraction

Given an input image, the proposed procedure extracts features using two techniques: the Color Layout Descriptor and the Edge Histogram Descriptor.

To implement the CLD procedure, the input image is first subdivided into an 8 × 8 grid of blocks, and the values of each block are saved in a cell. The DCT is applied to each block, and the lowest-frequency coefficient is stored in its respective place. The coefficients are saved, and the CLD is then extracted.

To implement the EHD procedure, the input image is filtered both vertically and horizontally. The filtered image is then divided into 2 × 2 blocks, and an empty vector is initialized to store the histogram values. Each filter is multiplied with its corresponding block, and the maximum value is found. If that value is above a certain threshold, the block is considered an edge block, and the corresponding bin is incremented. The histogram is then normalized. Once both techniques have been applied, the resulting extracted features can be used for further analysis or processing (Table 1).
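The sketch below is a simplified, illustrative take on this extraction, not the normative MPEG-7 algorithm: the grayscale image is split into 4 × 4 sub-images, each sub-image into 2 × 2-pixel blocks, the five edge filters are applied to every block, and the bin of the strongest edge type is incremented when its response exceeds a threshold. The filter coefficients and the threshold value are commonly cited defaults and should be treated as assumptions here.

```python
import numpy as np

S2 = np.sqrt(2.0)
# Edge filters commonly associated with the MPEG-7 edge histogram descriptor (assumed here)
FILTERS = {
    "vertical":        np.array([[ 1, -1], [ 1, -1]], float),
    "horizontal":      np.array([[ 1,  1], [-1, -1]], float),
    "diag_45":         np.array([[S2,  0], [ 0, -S2]]),
    "diag_135":        np.array([[ 0, S2], [-S2,  0]]),
    "non_directional": np.array([[ 2, -2], [-2,  2]], float),
}

def extract_ehd(gray, threshold=11.0):
    """Illustrative 80-bin EHD; gray: 2-D array with intensities in [0, 255]."""
    h, w = gray.shape
    hist = np.zeros((4, 4, 5))
    for si in range(4):
        for sj in range(4):
            sub = gray[si * h // 4:(si + 1) * h // 4, sj * w // 4:(sj + 1) * w // 4]
            sh, sw = sub.shape
            for bi in range(0, sh - 1, 2):            # 2 x 2 pixel image-blocks
                for bj in range(0, sw - 1, 2):
                    block = sub[bi:bi + 2, bj:bj + 2]
                    strengths = [abs(np.sum(block * f)) for f in FILTERS.values()]
                    k = int(np.argmax(strengths))
                    if strengths[k] > threshold:      # count only clear edge blocks
                        hist[si, sj, k] += 1
            n_blocks = max((sh // 2) * (sw // 2), 1)
            hist[si, sj] /= n_blocks                  # normalize per sub-image
    return hist.reshape(80)
```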

Table 1 Statistical features for the test and trained data samples

Implementation Data

  1. 1.

    Then we will get the output as shown in Fig. 14.

  2. 2.

    Then we will get the output as shown in Fig. 15.

  3. 3.

    Select the input image by clicking the MRI-image button in the diagnosis user interface module, as shown in Fig. 16.

  4. 4.

    Next, the VBIR system button is clicked in the diagnosis module, and the VBIR window, which is also part of the proposed model, is opened.

  5. 5.

    Then we will get the output as shown in Fig. 17.

  6. 6.

    Create the database by clicking the create-database button in the VBIR system user interface module and select the test data as shown in Fig. 18.

Fig. 14

Snapshot of main user interface output window

Fig. 15

Snapshot of diagnosis user interface module output window

Fig. 16

Snapshot of VBIR system user interface module output window

Fig. 17

Snapshot of VBIR system user interface module output window

Fig. 18

a–e Snapshot showing the step-by-step output of the VBIR system user interface module

This system utilizes adaptive complex independent components analysis (ICA) and visual-based image retrieval (VBIR) to achieve improved image identification relevant to the clinical diagnosis of prostate cancer by extracting distinctive features in the process. The findings obtained using this approach for the proposed VBIR technique are shown in Table 2.

Table 2 Recall and precision values of database

We used a 3 T Philips MRI machine to perform a thorough multiparametric MR imaging assessment for our investigation. T2-weighted imaging, diffusion-weighted imaging (DWI) at several b-values (100, 400, and 1000 s/mm2), and dynamic contrast-enhanced MRI (DCE-MRI) were all performed during this examination. Magnevist (Gd-DTPA) was the contrast agent employed for image enhancement, administered intravenously at 3 mMol/kg. A 3D spoiled gradient echo (SPGR) sequence with the following settings was used for the DCE-MRI acquisition: TR/TE = 3.8/1.7 ms, Flip Angle = 7°, Nx/Ny/NEX = 128/128/2, FOV = 20 cm, and Slice Thickness = 3.5 mm. Images were acquired over the course of five minutes, yielding a temporal resolution of 3.85 s per image. Adaptive complex independent components analysis (ACICA) was applied to the dynamic contrast-enhanced MRI images in order to determine the intravascular contrast agent concentration in the target tissue. The premise behind this technique was that intravascular and extravascular MR signals are spatially independent. The time lag was subsequently optimised using pharmacokinetic modelling, especially for the curve that was approximately derived from an artery. Furthermore, we used the reference region (RR)-based technique to correct the intravascular concentration curve. The independent component (IC) image that corresponds to the intravascular signal was used to identify the reference region. The recall values, which vary from 83 to 99%, show that the model identified a sizable portion of the true positive instances in the various datasets. Similarly, the precision values, ranging from 82.8 to 99.6%, show that the model's positive predictions were generally correct. Across all datasets, the average recall and precision are 92.86% and 92.82%, respectively, indicating that the model performed well overall in terms of both recall and precision.
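For completeness, retrieval recall and precision of the kind reported above are conventionally computed per query as in the generic sketch below; this is not the authors' evaluation code.

```python
def retrieval_metrics(retrieved, relevant):
    """retrieved: list of retrieved image ids; relevant: set of ground-truth relevant ids."""
    hits = sum(1 for r in retrieved if r in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0   # fraction of retrieved that are relevant
    recall = hits / len(relevant) if relevant else 0.0        # fraction of relevant that were retrieved
    return precision, recall

# Example: 5 images retrieved, 4 of them relevant, 6 relevant images in total
# precision, recall = retrieval_metrics(["a", "b", "c", "d", "e"], {"a", "b", "c", "d", "f", "g"})
# -> precision = 0.8, recall ≈ 0.667
```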

Conclusion

The proposed system introduces a novel adaptive complex independent components analysis (ACICA) method for assessing intravascular concentration inside prostatic tissue. The early stages of contrast agent transit through the tumor vasculature are also addressed by this method. The prostate's peripheral region, which had previously displayed hyperintensity in both the T2-weighted MRI and the apparent diffusion coefficient map, is demonstrated to have enhanced concentration. In the study, the intravascular concentration curves for a large artery and a corrected artery are compared. The proposed research also presents a novel partitioning clustering method founded on the idea of a data point's contribution. The method is applied to visual-based image retrieval, and its effectiveness is compared with that of the k-means clustering method. The proposed method improves both intra-cluster and inter-cluster similarity measures, in contrast to the k-means method, and consists of three passes, each of which takes the same amount of time as a k-means iteration. One possible way to enhance the proposed system is to incorporate hyperspectral imaging and quantitative analysis to aid prostate cancer diagnosis. This could involve extracting spectral signatures and comparing them between cancerous and normal tissue samples to improve accuracy. Furthermore, future developments could focus on enhancing the evaluation of these spectral signatures for more effective cancer detection.

Future research directions include integration with other imaging modalities such as DWI or MRS, clinical validation on larger patient cohorts, and investigation of sophisticated machine learning methods. Standardising protocols and automating processes will also improve efficiency and ensure uniformity across imaging centres. Integration with medical decision-support systems is important for enhancing diagnostic accuracy, and longitudinal studies offer valuable perspectives on the course of disease and the effectiveness of therapy. Collaboration among research centres will increase the generalisability of findings, and prognostic value will be established through correlation studies. Further lines of inquiry include optimizing imaging methods and investigating applicability to cancer types other than prostate cancer.