Introduction 

In the current digital era, eight out of ten internet users access health information online [1]. Among the various sources, YouTube is the second most popular search platform worldwide [1]. It is also one of the most visited platforms for seeking health-related information, especially for rare health disorders. Abundant information about different health-related topics is available on the platform, and its audio-visual interface makes it easier to register and retain information. Although content on YouTube is subjected to a stringent copyright check, the same cannot be said about its reliability and quality. The prevalence of misinformation on the internet makes it imperative to understand the quality of disease-specific health information online. Recent quality assessment studies on rheumatic diseases such as systemic lupus erythematosus, rheumatoid arthritis and gout have identified content that may not be useful and is even misleading at times [2,3,4]. As patients increasingly turn to the internet to seek health information, it becomes pertinent for healthcare providers to identify platforms or sources providing reliable, high-quality information and to assist patients in evaluating the same [5, 6].

The idiopathic inflammatory myopathies are not only rare but also complex conditions, often associated with comorbidities and complications of prolonged treatment [7]. A typical consultation for myositis or another complex connective tissue disease requires several aspects to be addressed and presumably takes longer than other general medical consultations. Patients often feel overwhelmed by the information provided to them in a short interview, leaving gaps in understanding that result in poor drug compliance. This creates an unmet need for additional informational support, which patients seek through online resources. Identifying the determinants of quality videos can help time-constrained physicians guide patients to appropriate, reliable, accessible and understandable information.

We undertook a cross-sectional review of the reliability and quality of YouTube videos on myositis using validated assessment tools (modified DISCERN criteria, Global Quality Scale (GQS) and the JAMA scoring system) [3, 8, 9]. We triangulated baseline features, views and usage data with video utility using predefined criteria to identify the determinants of useful videos. Sources of useful information were identified.

The results of this study will help determine the proportion of misinformation about myositis on YouTube and identify the type, distribution and quality of content present, which will help physicians direct patients to useful videos. Knowing the type and quality of available content will help plan future strategies for creating videos to supplement patient and physician education, which may include the production of videos on less prevalent content and the debut of renowned healthcare providers on YouTube.

Methodology

In a cross-sectional design, we evaluated the quality of videos on YouTube® related to myositis after a thorough search using the terms “Myositis”, “Idiopathic Inflammatory Myositis”, “Dermatomyositis”, “Polymyositis”, “Cancer Associated Myositis”, “Inclusion Body Myositis”, “Immune Mediated Necrotizing Myopathy”, “Juvenile Dermatomyositis” and “Overlap Idiopathic Inflammatory Myositis”, applied sequentially in March 2021. Search terms were determined based on the types and classification of myositis. The internet browser was cleared of cookies and search history prior to the search, which was carried out in incognito mode to avoid bias from previous searches.

The first ten pages, i.e. one hundred videos per search term, were included in the evaluation, as viewers rarely go beyond this point. We searched videos using the ‘relevance’ filter, the default setting on YouTube®, thus replicating the search results a typical user would see. The first 100 videos for each term were saved to a playlist for later analysis. Two authors (MJ and RG) conducted the search and evaluated the videos independently, while LG resolved conflicts, if any.

Inclusion and exclusion criteria

We included videos with primary content related to myositis, in the English language, with acceptable audio–video quality, that were available on 13 March 2021. Multi-part videos were evaluated as a single entity. We excluded videos that did not relate to myositis, were repeats or duplicates, or were advertisements. Of the 900 videos initially screened, 453 were included. The details are presented in Fig. 1.

Fig. 1
figure 1

Flowchart of selection of YouTube® videos for the study 

Video characteristics

The videos were classified as useful, not very useful or misleading, or patient narratives by two independent assessors, as defined in Table 1 [2]. A third independent assessor resolved disagreements, if any. A total of 21 video characteristics were collected, including title, channel name, number of subscribers, category (treatment, aetiology, diagnosis, signs and symptoms, ancillary care, diet, physiotherapy, adverse drug reactions, physical examination, risk factors, pathogenesis, patient experience and miscellaneous), upload date, duration, number of views, days since upload, viewing rate, daily viewership, number of likes, number of dislikes, interaction index, source (hospital, group practice or physician, non-medical independent user, non-medical media organization, professional medical body/patient support group or pharmaceutical company) (Online Resource 1), hashtag use, age restriction (> 18 or all), number of comments, intended audience (anyone/general public, specifically for patients, healthcare providers including students, and caregivers), YouTube ID/channel URL, image quality (poor, good and high), training level of the speaker (formal medical training or no formal medical training) and speciality of the speaker (if a doctor: rheumatologist, dermatologist, neurologist, general medicine physician or others/unknown).

Table 1 Video content evaluation [2]

A daily viewership rate was calculated to avoid bias arising from the date of upload to YouTube®, using the formula: total view count/duration of availability. The total view count was noted at the time of viewing. The duration of availability was calculated as the number of days between the date of upload and the date of viewing. The interaction index and viewing rate were calculated as (number of likes − number of dislikes)/total number of views × 100% and (number of views/number of days since upload) × 100%, respectively.
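As an illustration of the three usage metrics defined above, a minimal sketch follows; the helper function, its name and the example numbers are hypothetical and are not taken from the study's own analysis code.

```python
from datetime import date

def video_metrics(views, likes, dislikes, uploaded, viewed):
    """Illustrative helper computing the paper's three usage metrics."""
    days = (viewed - uploaded).days                  # duration of availability in days
    daily_viewership = views / days                  # total view count / duration of availability
    viewing_rate = views / days * 100                # (views / days since upload) x 100%
    interaction_index = (likes - dislikes) / views * 100  # net engagement as % of views
    return daily_viewership, viewing_rate, interaction_index

# Hypothetical video: 5,000 views, 120 likes, 20 dislikes,
# uploaded 3 December 2020 and viewed 13 March 2021 (100 days of availability)
dv, vr, ii = video_metrics(5000, 120, 20, date(2020, 12, 3), date(2021, 3, 13))
```

With these hypothetical figures, the video accrues 50 views per day, a viewing rate of 5000% and an interaction index of 2%.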

Video content evaluation

The reliability of the videos was determined using the modified DISCERN criteria, a scale for the assessment of audio–video content (Table 1) [3]. It is a 5-point scale in which each element is scored individually. The quality of the videos was determined by the GQS (Table 1) [8]. A higher score on either scale implies greater reliability and quality, respectively. Additionally, the JAMA scoring system was used to determine the quality of videos (Table 1) [9]. It is a 4-point scale in which videos are scored on four parameters: authorship, attribution, disclosure and currency. As with the GQS, a higher score on this scale implies greater video quality.

Statistical analysis

Continuous variables are presented as mean (± standard deviation) or median (interquartile range) depending upon the normality of data, and categorical variables are presented as counts and percentages. The results were analysed using the nonparametric Kruskal–Wallis test and the Pearson chi-squared test for continuous and categorical variables, respectively. A p-value of < 0.05 was considered significant. Inter-rater reliability was evaluated for mDISCERN, JAMA and GQS scores using intraclass correlation (ICC) estimates and their 95% confidence intervals based on a mean-rating (k = 3), absolute-agreement, two-way mixed-effects model.
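The two hypothesis tests named above can be sketched with standard SciPy routines; the view counts and the 2 × 2 contingency table below are invented for illustration and do not reproduce the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical view counts for two usefulness groups (illustrative data only)
useful_views = [985, 3388, 540, 2100, 812]
not_useful_views = [300, 150, 980, 410]

# Kruskal-Wallis test for a continuous variable across groups
h_stat, p_kw = stats.kruskal(useful_views, not_useful_views)

# Hypothetical 2x2 table: video source (rows) vs usefulness category (columns)
table = np.array([[158, 5],
                  [74, 3]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Significance judged against the paper's alpha of 0.05
significant = p_kw < 0.05
```

For a 2 × 2 table, `chi2_contingency` returns one degree of freedom and, by default, applies Yates' continuity correction.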

Ethical considerations

No human subjects were involved in the conduct of this study. The study comprised a review of video content posted on YouTube®, which is publicly available. The study was exempt from ethical review.

Results

A total of 900 videos were screened for inclusion in the study. Of these, 372 duplicates, 25 videos in a language other than English and 50 considered irrelevant were excluded, leaving 453 videos for further analysis. After detailed viewing, 108 were found to be patient narratives and experiences, and eight were identified as not very useful for patients and doctors.

Video characteristics

Of the analysed content, 191 (42%) videos were uploaded by a professional medical society or a patient support group (PSG). Furthermore, 79 (17%) were developed by a group practice or physician and 62 (14%) by a hospital. On review, 316 (70%) videos were presented by a speaker with formal medical training, including rheumatologists (16%) and neurologists (12%).

Audience reception and viewership

The median number of views was 985 (IQR 339–3388), and the median daily viewership was 139 (IQR − 412 to 2069). The detailed baseline characteristics of the videos are presented in Table 2. While there was no statistically significant difference in median views between the useful and not very useful videos, the median number of likes (p = 0.02) and the median number of views per day, or daily viewership (p = 0.046), were significantly higher for useful videos. The interaction index for useful videos was also significantly higher (p = 0.01).

Table 2 Baseline characteristics of YouTube videos

Target audience

Among the included videos, 324 (71%) were developed for a target audience of patients, 313 (69%) for healthcare providers including students and 75 (17%) for caregivers. Videos contained information on the treatment (143, 32%), diagnosis of myositis (137, 30%), signs and symptoms (125, 28%) and miscellaneous information (138, 31%).

Video usefulness and reliability

Of the analysed videos, 74.3% (337/453) provided information considered useful and 1.7% (8/453) not very useful. Content uploaded by non-medical media organizations accounted for the largest share of not very useful videos (3 of 8 videos, 38%). Content created by professional medical societies/patient support groups was most often considered useful (158 of 337; 47%), followed by content created by a group practice or physician (74 of 337; 22%). As many as 310 (92%) of the useful videos were created by professionals with formal medical training. The detailed baseline characteristics of the videos, stratified by source, are presented in Online Resource 2.

Reliability

The reliability of the videos as assessed by the modified DISCERN criteria was higher for useful videos than for not very useful videos (4.0, range 3.0–4.0 vs 2.0, range 1.0–3.0), a statistically significant difference (p ≤ 0.001). The useful videos also had higher scores for quality by GQS (4.5, range 3.5–5.0 vs 1.0, range 1.0–2.8) and JAMA (3.0, range 3.0–4.0 vs 2.2, range 2.0–3.0) than the not very useful videos, and this was statistically significant (p ≤ 0.001 and p = 0.004, respectively). A comprehensive analysis of video characteristics by usefulness category is given in Table 3.

Table 3 Analyses of video characteristics by usefulness category

The useful videos had higher scores for reliability and quality by modified DISCERN (4.0, range 3.0–4.0 vs 2.0, range 2.0–3.0), GQS (4.5, range 4.0–5.0 vs 3.0, range 3.0–3.5) and JAMA (3.0, range 3.0–4.0 vs 2.0, range 2.0–2.5) than the misleading videos, and this was statistically significant (p < 0.001 for all three scoring systems). A detailed analysis of video characteristics by score-based usefulness is given in Online Resource 3.

Agreement

The cut-off for a useful video was a score of ≥ 4 for the modified DISCERN criteria and GQS, and ≥ 3 for the JAMA scoring system. Of the analysed videos, 67.1% (304/453) were categorized as useful and 9.0% (41/453) as not very useful based on these criteria. The Kappa values for agreement between modified DISCERN ≥ 4, GQS ≥ 4 and JAMA ≥ 3 and physician-decided usefulness were 0.061, 0.083 and 0.104, respectively. The combined Kappa value for agreement between score-based and physician-decided usefulness was 0.129. The Cohen’s Kappa statistics demonstrating inter-observer agreement were 0.686, 0.469 and 0.537 for mDISCERN, GQS and JAMA scores, respectively.
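Agreement between a score-based rule (e.g. GQS ≥ 4) and physician-decided usefulness can be quantified with Cohen's Kappa as sketched below; the eight binary labels are hypothetical and chosen only to make the calculation concrete.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels for 8 videos: 1 = useful, 0 = not very useful
score_based = [1, 1, 0, 1, 0, 1, 1, 0]   # e.g. videos meeting GQS >= 4
physician   = [1, 0, 0, 1, 1, 1, 0, 0]   # physician-decided usefulness

# Cohen's Kappa corrects observed agreement for agreement expected by chance
kappa = cohen_kappa_score(score_based, physician)
```

Here the raters agree on 5 of 8 videos (observed agreement 0.625) while chance agreement from the marginals is 0.5, giving a Kappa of 0.25, i.e. (0.625 − 0.5)/(1 − 0.5).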

Predicting video usefulness

To identify the features of a video with useful content, we performed a multivariable binary logistic regression analysis. The GQS score was a statistically significant predictor (p = 0.002) of video usefulness.
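A binary logistic regression of this kind can be sketched as follows; the feature matrix (GQS, mDISCERN and JAMA scores per video) and the usefulness labels are invented for illustration and do not reproduce the study's model or coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per video, columns = [GQS, mDISCERN, JAMA]
X = np.array([[4.5, 4, 3], [1.0, 2, 2], [4.0, 3, 3], [2.0, 1, 2],
              [5.0, 4, 4], [3.0, 2, 2], [4.5, 4, 3], [1.5, 1, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = physician-decided useful

# Fit the multivariable binary logistic regression
model = LogisticRegression().fit(X, y)

# Predicted probability that each video is useful
probs = model.predict_proba(X)[:, 1]
```

In a full analysis, the p-value for each coefficient (reported in the paper for GQS) would come from a statistical package such as `statsmodels` rather than scikit-learn, which reports coefficients but not p-values.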

Discussion

The present study explored the characteristics, reliability and quality of the video content on myositis available on YouTube and found that three-quarters of the videos were produced by trained medical professionals. We evaluated a total of 453 videos, with a median of 985 views, addressing different aspects of the disease including signs and symptoms, diagnosis and treatment options. The videos had a median viewing rate of 126 views per day, indicating that audiences frequently access content on this platform to gain knowledge of myositis.

Among the top YouTube channels producing content on myositis were the Myositis Association, Myositis Support and Understanding, Myositis, the Cure JM Foundation and Johns Hopkins Rheumatology. Because myositis is a rare, underdiagnosed and heterogeneous group of systemic autoimmune diseases, poorly understood by generalist doctors, there is a felt need for acceptable, readily available and valid content developed by reliable sources. The significant difference identified between the quality of videos developed by medical groups indicates a need for specialists to develop valid and reliable content [10].

The majority of the videos were targeted toward patients (63%) and students (90%). Interestingly, the median views of the videos our reviewers identified as useful were not significantly different from those of the not very useful ones. Indeed, the “not very useful” videos had a higher median number of subscribers than the useful ones. Most of the not very useful videos were produced by non-medical media organizations and professional societies/support groups, with patients as the primary intended audience; however, their quality and flow were generally poor, with many essential topics missing. One may postulate that viewers relate more closely to, and identify with, the presenters in videos developed by non-medical individuals, making them more likely to subscribe to those channels. The disparity in subscriber numbers might also reflect the large difference in the number of useful and not very useful videos. Nevertheless, this dynamic may allow misinformation to spread, making it important for health workers to direct patients toward appropriate content on the platform during clinical consultations.

As anticipated, non-medical organizations uploaded the highest proportion of not very useful videos (3/37) as per physician-determined usefulness. Score-based usefulness, on the other hand, indicated that the highest number of not very useful videos came from “group practice/physician” (54%, n = 22), compared with only 13% (n = 1) as per physician-decided usefulness. This disagreement between physician-decided and score-based usefulness (Kappa value 0.129) could be attributed to physician bias or to the objectivity of score-based systems. Although the poor agreement between mDISCERN, GQS and JAMA scores and physician-based usefulness (Kappa values 0.061, 0.083 and 0.104) is likely due to the mutually exclusive nature of the scores, which judge different aspects of the videos, objective scores should be applied to all videos providing healthcare-related information. The GQS, which takes into account the quality and flow of a video and the extent of relevant information covered, was found to best predict physician-identified video usefulness in a binary logistic regression analysis.

In the long run, accreditation metrics for validity and guidelines for uploading factually correct, scientifically backed information may be needed. Other video characteristics, such as duration, upload date, audio–video quality and the credentials of the speaker, also tend to affect which videos people choose to watch. Moreover, YouTube records a view when a user watches at least 30 s of a video, and an average person takes longer than this to judge a video’s usefulness. As expected, the videos classified as useful attracted a higher median number of likes (useful 12 vs not very useful 1) and daily viewership (useful 95 vs not very useful − 1136), an indirect indicator of viewers’ acknowledgement of their quality.

While YouTube has become the second most popular social network and the third most trustworthy source of health-related information after physicians and government healthcare institutions, a further rise in usage of social media platforms is expected in the near future, especially in seeking health-related information [1, 11, 12]. Unfortunately, the ease of use also includes a potential for misinformation. Moving forward, development of an algorithm to predict usefulness of videos with badges to identify usefulness and signify verified information on the platform may be the way to promote scientific and reliable information and curb misinformation on platforms such as YouTube.

While several authors have found social media to be a source of misinformation, as in COVID-19, rheumatoid arthritis, vaccines or drugs, we found this not to be the case with myositis, possibly owing to it being a niche area [3, 13, 14]. Different surveys have found the proportion of misinformation to range from 8.6% in Sjögren’s syndrome, 11.5% in SLE, 12.28% in gout, 14% in spondyloarthritis, 17.4% in systemic sclerosis, 27.5% in methotrexate self-injection education (2016), 30% in rheumatoid arthritis, 50% in self-administered subcutaneous anti-tumour necrosis factor agent injections and 57.3% in methotrexate self-injection education (2022) to 67% in COVID-19 [2,3,4, 13, 15,16,17,18,19,20] (Table 4). This is understandable given the niche position of myositis and its complicated pathophysiology.

Table 4 Misinformation present on social media platforms

Like many others, our study is bound by certain limitations. With a cross-sectional design, we could not capture the dynamic nature of a social media platform that is updated continuously, and the findings of other investigators may vary for this reason. Moreover, we evaluated only videos with English content, as evaluating other languages was beyond the scope of our current resources. Additionally, we included only the first one hundred search results on a single social media platform; broader inclusion of content and platforms would likely reveal a more comprehensive picture. Nevertheless, we have reason to believe that our results are reasonably representative of the larger body of content across platforms, and physicians should befriend social media while empowering patients to identify valid, reliable and effective content. Our study systematically evaluated video content available on YouTube and provides a reliable tool for clinicians and patients to predict usefulness. Future studies could include videos in languages other than English, review content available on other social media platforms and explore the psychological and motivational factors influencing patients’ decisions to subscribe to not very useful videos, possibly through thematic analysis.

The present study is among the first to examine the characteristics of information on sub-specialisms such as myositis available on YouTube and to report that the content is reliable, with minimal misinformation. Nearly three-quarters of the videos are from verifiable sources, and the results are quite reassuring. While it is challenging to eliminate misleading information readily available online, the present study also concludes that the GQS is a significant marker for identifying reliable, valid and useful content on YouTube for myositis, and possibly for other medical subjects. Specialists should actively participate in the development of medical videos, using validated tools to disseminate appropriate health information.