Methods of Machine Learning-Based Chimeric Antigen Receptor Immunological Synapse Quality Quantification

Chimeric Antigen Receptor (CAR)-mediated immunotherapy shows promising results for refractory blood cancers. Currently, six CAR-T drugs have been approved by the U.S. Food and Drug Administration (FDA). Theoretically, CAR-T cells must form an effective immunological synapse (IS, an interface between effector cells and their target cells) with their susceptible tumor cells in order to eliminate them. Previous studies show that CAR IS quality can be used as a predictive functional biomarker for CAR-T immunotherapies. However, quantification of CAR-T IS quality is clinically challenging. Machine learning (ML)-based CAR-T IS quality quantification has been proposed previously. Here, we show an easy-to-use, step-by-step approach to predicting the efficacy of CAR-modified cells using ML-based CAR IS quality quantification. This approach guides users through ML-based CAR IS quality quantification in detail, including: how to image the CAR IS on a glass-supported planar lipid bilayer, how to define the CAR IS focal plane, how to segment CAR IS images, and how to quantify IS quality using ML-based algorithms. This approach will significantly enhance the accuracy and proficiency of CAR IS prediction in research.


Introduction
The chimeric antigen receptor (CAR) immunological synapse (IS) is the interface between CAR-modified cells and their susceptible target cells [1,2]. This interface encompasses several key steps of CAR-modified cell activation and cytotoxicity. Specifically, these steps include (1) the initiation of CAR-T/NK cell engagement with tumor cells, (2) activation of CAR signaling, (3) mobilization of cytotoxic machinery (e.g., lytic granules) into the IS area, and (4) degranulation and killing of tumor cells via an effective IS between CAR-T cells and tumor cells. IS formation by CAR-T cells begins upon tumor antigen-specific interaction with the CAR. This initial contact forms a cluster of tumor antigens, analogous to the central cluster of T-cell receptors (TCR) at the synapse [3]. After the CAR-tumor antigen complexes accumulate in combination with other co-stimulatory molecules in the IS, these clusters can trigger the activation of CAR signaling, beginning with the phosphorylation of intracellular downstream signaling molecules, such as CD3ζ.
Additionally, phosphorylation and micro-clustering of other TCR signaling molecules, such as ZAP70 and Lck, are important indicators of CAR-T cell activation [4,5]. Following CAR activation, re-organization and accumulation of the F-actin ring around the synapse stabilize the IS and lead to the polarization of lytic granules, such as perforin and granzymes, to the synapse [6,7]. Functional CAR IS formation can lead to degranulation of the cytotoxic granules and efficient killing of tumor cells via the synapse. Thus, the effective formation and quality of the CAR IS can be imaged and quantified [8,9].
Through a new development inspired by machine learning (ML), our team applies instance segmentation [8,10] to high-resolution CAR IS images (Fig. 1), highlighting the CAR-modified cells and quantifying their fluorescence intensities. Using neural networks that perform pattern recognition, object detection, and cross-validation, our program automates the process of CAR IS quantification (Fig. 2). Keypoint detection is used to find the top-left, top-right, bottom-left, bottom-right, and center of each cell individually. Bounding boxes generated using keypoint detection have proven to be more accurate than previous methods [11]. Once bounding box generation is complete, non-maximum suppression is applied to prevent multiple detections of the same object [12]. There are two methodologies for identifying cells: semantic segmentation and instance segmentation (Fig. 3). Semantic segmentation identifies multiple objects within the same category as one object. In contrast, instance segmentation distinguishes individual cells as unique objects.
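As a concrete illustration of the non-maximum suppression step mentioned above, the sketch below keeps only the highest-scoring box among heavily overlapping detections. It is a minimal, self-contained example rather than the program's actual implementation; the box coordinates and the IoU threshold of 0.5 are illustrative assumptions.

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # Greedily keep the highest-scoring box, then discard any remaining
    # box that overlaps it by more than the IoU threshold.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        order = np.array([j for j in order[1:]
                          if iou(boxes[i], boxes[j]) < iou_threshold])
    return keep

# Two detections of the same cell plus one distinct cell:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
print(non_max_suppression(boxes, [0.9, 0.8, 0.7]))  # [0, 2]
```

Here the second box overlaps the first with IoU ≈ 0.68, so it is suppressed, while the distant third box is kept.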
Introducing instance segmentation to CAR imaging has proven more effective and efficient than other ML methods in immunotherapy [8]. With this approach, our team improves on conventional ML methods, producing more accurate results at any cell density. By utilizing pre-built deep learning modules such as artificial neural networks (ANNs) and their various configurations [13], our program is easy to install and simple to use.
Applying these principles yields a high-throughput system, leading to a faster evaluation of CAR IS quality. Our program also reduces labor costs and the margin of error. We aim to create a user-friendly method of quantifying CAR IS data while improving cost and efficacy.
The ML-based CAR IS quantification software has hardware and software requirements as described below.

Recommended Hardware Configuration
1. A Compute Unified Device Architecture (CUDA)-capable NVIDIA graphics card is recommended (see Note 1). Since CUDA is backward compatible, older NVIDIA graphics card series may be used without issue.

2. You can obtain the software by downloading "Package1" and "checkpoint.pth.tar".

3. Download and install the latest version of Python 3 from: https://www.python.org/downloads/.

4. Open the command terminal and enter the installation command for each library (Table 1) (see Note 2).

1. As previously described, preparing the glass-supported planar lipid bilayer with CAR-NK cells is important for IS data acquisition [8]. To acquire images, turn on all the necessary hardware and software modules for the Nikon A1R confocal microscope with a motorized stage. Choose the 60× 1.4 NA oil objective and select the desired fluorescent channels and their laser settings.

2. To find the CAR IS focal plane, identify the highest intensity peak of the tumor antigen channel. Upon CAR interaction, the tumor antigen clustering on the lipid bilayer can best capture the IS focal plane.
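The brightest-plane criterion described in this step can be expressed in a few lines of Python. This is an illustrative helper (not part of the provided package), assuming the tumor antigen channel has been loaded as a NumPy array of shape (slices, height, width):

```python
import numpy as np

def brightest_focal_plane(z_stack):
    # Mean intensity of each z-slice; the IS focal plane is taken as
    # the slice where the tumor antigen channel is brightest.
    per_slice = z_stack.reshape(z_stack.shape[0], -1).mean(axis=1)
    return int(np.argmax(per_slice))
```

For example, on a five-slice stack whose fourth slice carries the antigen cluster, the function returns index 3.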

3. Set the Z-stack to 0.25 μm per slice for 5 slices relative to the focal plane position (see Note 4).
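For clarity, the slice positions implied by this setting (five slices centered on the focal plane, 0.25 μm apart, spanning 1 μm total as described in Note 4) can be computed as below; `z_positions` is a hypothetical helper, not part of the provided package.

```python
def z_positions(focal_z_um, step_um=0.25, n_slices=5):
    # Slice positions centered on the focal plane: two slices above and
    # two below at 0.25 um steps, spanning 1 um in total.
    half = n_slices // 2
    return [focal_z_um + step_um * (i - half) for i in range(n_slices)]

print(z_positions(2.0))  # [1.5, 1.75, 2.0, 2.25, 2.5]
```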

5. Start acquiring images of the CAR IS with the motorized stage (see Note 5).

6. Export the images in 16-bit TIFF format to quantify the CAR IS quality.

Automated CAR IS Quantification Using Machine Learning
The following steps apply to Linux operating systems. A user-friendly interface was in development at the time of writing (see Note 6).

1. From the package provided in Subheading 2.2, item 1, open the run file using a text editor such as Notepad (see Note 7).

2. On line 2 of the run file, after "cd," update the file location to the folder containing the run file (see Note 8).

3. Line 4 determines where processed image results will be saved. Update the folder location within the single quotes to the preferred folder path.

4. On line 5, within the single quotes, update the folder location to the folder containing the unprocessed image data.

5. On line 6, enter the order of the images from the image data folder. After each channel name is a space followed by a number representing the order of the channel: "0" represents the first channel, "1" represents the second channel, and so on.

6. On line 7, write the names of the folders containing the images that need to be compared. For example, "-compares 'folder1' 'folder2' \", with "folder1" and "folder2" each being a folder containing ordered image data.

7. Click "Save" to save the changes made to the run file.

8. Open a terminal (see Note 9), type "cd" followed by the folder location of the package, and press "Enter."

9. Type and enter "./run" to run the program. Results (Fig. 4) can be saved to the directory specified in step 3. Quantification results (Fig. 5) include the total fluorescence intensity (Table 2), the mean fluorescence intensity (MFI), and the synapse area.
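The three reported quantities can be illustrated with a short sketch. `quantify_synapse` is a hypothetical helper, not the package's actual code; the pixel size is an assumed calibration, and the boolean mask stands in for the segmentation output.

```python
import numpy as np

def quantify_synapse(intensity, mask, pixel_size_um=0.1):
    # intensity: 2D fluorescence image; mask: boolean segmentation of
    # one synapse. pixel_size_um is an assumed microns-per-pixel value.
    pixels = intensity[mask]
    total = float(pixels.sum())                  # total fluorescence intensity
    mfi = float(pixels.mean())                   # mean fluorescence intensity
    area_um2 = int(mask.sum()) * pixel_size_um ** 2  # synapse area
    return {"total_intensity": total, "mfi": mfi, "area_um2": area_um2}
```

For instance, a two-pixel mask over intensities 10 and 30 yields a total of 40, an MFI of 20, and an area of two pixels times the squared pixel size.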
4. To take the 3D image of the synapse, we take two slices above and two slices below the focal plane, with a Z-stack step of 0.25 μm, for a total of 1 μm. The number of Z-stacks can be increased or decreased accordingly.
5. For faster acquisition of the images on the Nikon A1R microscope, you can choose the resonant scan option with Denoise.ai for noise removal.
6. Our development team was working on a graphical user interface (GUI) at the time of writing. A GUI will provide a simpler way to access and use the machine learning program.
7. Text editors such as Visual Studio Code are recommended, as they provide greater functionality and are easy to use.
8. The run folder is simply the unzipped package we provide in Subheading 2.2, item 1. To view this folder's location, right-click the folder and select "Properties."
9. On Linux operating systems, open the menu and search for "terminal." On Windows, search for "cmd" to open the Command Prompt.

3. Under "Image," select "Stacks" and then "Plot Z-axis Profile" to identify the brightest focal plane.

4. Choose the three brightest focal planes and combine them into a single image by navigating to "Image," selecting "Stacks," and then "Z Project."
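This projection can also be reproduced programmatically. The sketch below mimics ImageJ's Z Project on a substack of the three brightest slices; it is illustrative only and assumes the stack is a NumPy array of shape (slices, height, width).

```python
import numpy as np

def z_project_brightest(z_stack, n=3, method="max"):
    # Pick the n brightest slices (by mean intensity), then collapse
    # them into a single image with a max or average projection.
    means = z_stack.reshape(z_stack.shape[0], -1).mean(axis=1)
    brightest = np.sort(np.argsort(means)[-n:])
    sub = z_stack[brightest]
    return sub.max(axis=0) if method == "max" else sub.mean(axis=0)
```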

5. To quantify the MFI at the synapse, go to "Analyze," select "Tools," and then "ROI Manager."

6. On the DIC slide, circle the cells individually, selecting "Show All" and "Labels." Then transfer these labels to the other channel images.

8. To calculate the background intensity, repeat steps 5-7, creating 5 ROIs (4 corner regions and a center region) for each channel's background. Subtract the background mean value from the MFIs of the synapse.
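The background correction in this step can be sketched as follows; the ROI placement (four corners plus the center) follows the text, while `roi_size` itself is an illustrative assumption.

```python
import numpy as np

def background_corrected_mfi(synapse_mfi, image, roi_size=20):
    # Estimate background from four corner ROIs and one center ROI,
    # then subtract the mean background from the synapse MFI.
    h, w = image.shape
    s = roi_size
    rois = [image[:s, :s], image[:s, -s:],      # top corners
            image[-s:, :s], image[-s:, -s:],    # bottom corners
            image[h//2 - s//2:h//2 + s//2,
                  w//2 - s//2:w//2 + s//2]]     # center region
    background = float(np.mean([r.mean() for r in rois]))
    return synapse_mfi - background
```

On an image with uniform background 10, a synapse MFI of 100 is corrected to 90.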

Fig. 1. Illustrative key steps of CAR IS image processing. Using neural networks requires manually annotating training datasets by exposing the program to large amounts of training data (a). Once trained, image processing can begin. Raw images (b) are processed through a series of machine learning algorithms (c) used to identify cell instances and later to quantify each cell. The final processed image (d) shows the computer's identification of individual cells.

Fig. 2. Illustrative model of CAR IS quantification. (a) The machine processes raw images (b) to identify cell instances. (c) From this, quantification data is automatically transferred to an Excel sheet and graphed.

1. If using an NVIDIA graphics card is impossible, you may instead use the CPU-only build of PyTorch, installable from the PyTorch official website. Select the "CPU" variant of the installation package.
2. The use of a Python virtual environment is not necessary but recommended.
3. Check the compatibility of the CUDA toolkit with your NVIDIA driver version. An NVIDIA driver update may be required. For more information, see: https://docs.nvidia.com/deploy/cuda-compatibility/index.html.

Table 2
Sample data illustration. The machine learning program will generate an Excel file containing the total fluorescence intensity of each cell in an image.

Methods Mol Biol. Author manuscript; available in PMC 2024 January 26.