This year, we mark a special anniversary: the 170th year of publication of Graefe’s Archive for Clinical and Experimental Ophthalmology. The journal was established by its namesake, Albrecht von Graefe, who is considered by many to be the founder of modern-day ophthalmology. This is a notable milestone, as Graefe’s Archive is the oldest ophthalmic journal still in existence worldwide. Graefe was a true visionary who made landmark contributions to our field, including the first recognition of central retinal artery occlusion, cupping of the optic nerve in glaucoma, optic neuritis as a manifestation of neurologic disorders, optic disc edema in the setting of intracranial hypertension, and the use of iridectomy as a treatment for glaucoma. In creating a journal distributed worldwide, Graefe recognized the importance of global scientific exchange in advancing the field of ophthalmology [1].

This reality is all the more apparent today, when the digital world has dramatically facilitated the rapid and efficient review and dissemination of scientific content. While I still enjoy the touch and feel of a print journal, the reality is that most readers now likely consume the publication in an electronic format. The pace of advances in medicine, and in ophthalmology in particular, is truly staggering. Many diseases, including common conditions such as age-related macular degeneration (AMD), were not even well recognized during Graefe’s lifetime. Yet today, the pathophysiology and the molecular and genetic basis of these conditions are being uncovered in exquisite and unprecedented detail. For the journal, we have sought to capture these dramatic advances in “hot topic” areas by aggregating related articles into topical collections, with the intention that our readers can survey the breadth of the scientific landscape on a given topic in an efficient manner.

Rapid technological advances, however, have introduced new challenges. For example, artificial intelligence has exploded onto the scene, owing to the availability of large datasets and advances in high-speed computing capable of taking advantage of these data. Large language models (LLMs), such as GPT-4, now have the capability to ingest large masses of data, analyze them, and provide answers to complex questions. Indeed, these LLMs can generate extensive, well-considered sequences of text and images, effectively synthesizing the equivalent of a review manuscript, perhaps more comprehensively than a human could achieve. Many individuals from various fields have taken advantage of these LLMs to improve their efficiency, for example by generating automated e-mail responses to a variety of queries. Physicians and other clinical practitioners have explored the possibility of using LLM-generated responses to address questions from patients and to aid in patient education. Indeed, we have accepted a number of papers on these topics and concepts in Graefe’s Archive [2]. LLMs have progressed to the point where some have questioned whether AI systems can themselves serve as authors of scientific manuscripts. Most would agree, however, that current AI-based approaches would not meet all ICMJE criteria to qualify as an author, and most journals have taken the position that the use of AI in the generation of data or content for a scientific publication should be disclosed in the methods or acknowledgements section of the manuscript, as appropriate.

While AI-based approaches may offer many benefits in improving the efficiency of clinical care and science, as with many advances, AI also has a number of drawbacks that can affect scientific pursuits and scientific publications such as Graefe’s. For one, the results of queries to LLMs may not always be accurate: because the models are trained on a large swath of data, they lack the ability to reliably filter out inaccurate material or disinformation. Thus, the use of LLMs to generate content requires appropriate supervision by an expert to correct erroneous material produced by the model. Second, AI-based models can be manipulated to facilitate scientific misconduct. While presumably uncommon, falsification or fabrication of data and results in research remains an important problem in science and scientific publications. Journals rely upon the peer review process to ferret out potentially fraudulent results and protect the community from their dissemination. Researchers are also asked to preserve their source data and results, so that these can be made available for inspection by third parties should questions arise regarding the reliability of the reported findings. In the era of AI and the availability of generative adversarial networks (GANs), it is possible to produce synthetic images or results, so-called “deepfakes.” While experts may sometimes be able to distinguish these synthetic outputs from real data, this is not always possible, and one would expect that, as AI models continue to advance, detection of synthetically created results will become increasingly difficult. On the other hand, one might also expect these concerns to spawn the development of a new generation of tools specifically designed to recognize such synthetic creations.

Another continual challenge for all scientific journals, including Graefe’s, is identifying high-quality reviewers who can provide comprehensive, insightful, and timely scientific appraisals of submitted manuscripts. All of us in the scientific community are, of course, busy with many competing obligations; a peer review assignment can appear burdensome, and it is all too easy to decline the opportunity. As an Editor-in-Chief of Graefe’s, I am extremely grateful to our Editorial Board and to a large family of peer reviewers who tirelessly take on these assignments, allowing Graefe’s to maintain a robust and rapid timeframe from submission to publication. As mentors to the next generation of clinicians and scientists, however, it is incumbent upon senior researchers to highlight to our trainees the critical importance and sanctity of the peer review process. We must emphasize that participation in peer review is a fundamental responsibility shared by all of us in the field, much like the civic duty of jury service. We must also emphasize the personal benefits of serving as a reviewer: the lessons learned during the peer review process sharpen our own skills as scientists and science writers. In addition, we need to make instruction in how to perform a thoughtful scientific review of a manuscript an integral part of our trainees’ education. Most training programs include journal clubs or other such scientific exchanges in their didactics, which are certainly helpful in this regard; however, more formal instruction in reviewing and critiquing a manuscript would create a generation of capable reviewers who will enhance our scientific literature to the benefit of us all. Some have suggested that AI-based approaches and LLMs may also play a role in the peer review process. While I expect such tools may offer benefits in an assistive capacity, I would contend that human oversight will remain a crucial element of this process.

Thus, as we look back at the 170-year history of Graefe’s Archive for Clinical and Experimental Ophthalmology, we now stand at a critical juncture in human history, as AI plays an increasingly significant role in scientific discovery as well as in our daily existence. How we responsibly manage the use of these powerful tools will dictate the shape of the next 170 years.