Keywords: Bibliometrics, scientometrics, CiteScore, journal impact factor, citations, journal-based metric, key performance indicator, paradox, non sequitur, cognitive bias, specificity, sensitivity.

This article accompanies a more serious debate on the value of citations as a measure of research quality [1]. This article has a purpose described in the debate. This article is largely pointless (any points made are unintentional and purely accidental), it is of poor quality (e.g. the first 3 sentences start with ‘This article’), has too many keywords, and contains speeling misteaks, and, questionable, grammar (plus, the tense in the title may or may not be correct). In fact, the web pages of Wikipedia [2] and Youtube [3] are cited for no reason (and incorrectly). It is bad enough that the editor would never accept it into this journal if written by another person or for another purpose. It is, therefore, an editor’s rant.

There is conjecture over whether the number of citations an article or journal receives determines its quality and impact. If the argument that citations equal quality is true, then this article should deservedly receive no citations and no further attention. However, if it gains citations or any other attention, does this disprove the notion?

Ironically, should this article not gain citations, and thus support the argument, it might then be cited as supporting evidence. But the very act of proving the argument would then disprove the argument it proves.