Throughout history science has been an important tool of humanity, even if used haphazardly. Humans have used science, without even knowing they were scientists, to understand and improve every aspect of their lives. For many thousands of years, the way to handle, make sense of, and address the experiences of life was through science (and myth). “The purpose of science and art is one: to render experiences intelligible, i.e., to assist man to adjust himself and the environment in order that he may live” (White 1938). Although the term ‘science’ had a different meaning from the one we use today, throughout history the knowledge created by science enabled humanity to create technologies. These technologies assisted us with everyday tasks and eased the physical burden—instead of chiseling words on rocks or clay we moved to ink and softer materials like papyrus and paper, and instead of long, intense physical labor we farmed with the ox and the plow. Using science, we continuously improved technologies and their design to create better ones.

These innovations augmented people, enabling them to do more and to do it better. People were, and are, always looking for ways to leverage tools to improve the tasks at hand—agriculture, construction, textiles, and the waging of war. While these technologies were operated by laypeople, it was scientists who created and improved the underlying knowledge and artisans (engineers) who built and perfected the tools.

Throughout history, technologies have worked beside us and been operated by us. Yet even before the first industrial revolution, advanced technologies were perceived as able to do the work instead of us. This fear of automation was best articulated in the late sixteenth century by Queen Elizabeth I, when William Lee sought a patent for his stocking frame knitting machine. Her reply was: “thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars”.

Since then, and throughout all the industrial revolutions, this fear of automation has ebbed and flowed. The fear that machines would no longer be operated and controlled by us, and might even replace us, drove discussions and predictions about “the end of jobs” and structural-technological unemployment. Having a job and being employed is part of one’s identity, providing status, relationships, and purpose.

Workers, employees, and laypeople were not the only ones using technology; scientists also leveraged it to augment and ease their work and to improve the quality of their science. Whether it was the abacus, the telescope, the microscope, or the computer, these technologies were themselves the result of science and enabled improved scientific exploration. They were, however, always operated by the scientists themselves to assist them in their work. This virtuous cycle of observation, hypothesizing, and experimentation became the basis of the scientific method as we know it today.

From the 1950s onward, a groundbreaking new technology was developed that enabled computers to learn and improve by themselves—it was named artificial intelligence (AI). This technology drove global discussions and fears in the 1960s about the potential replaceability of human labor by AI. It took AI almost 70 years to mature, evolve, and transform into the advanced technology we have today. We now understand its real potential and its ability to automate human tasks and even entire jobs.

In their 2013 paper, Frey and Osborne discussed the potential of AI, and specifically machine learning (ML), to automate jobs and professions. They analyzed the 702 occupations described in the O*NET database and calculated the probability of their computerization (automation). They showed that science and engineering professions have a low probability of automation owing to the “high degree of creative intelligence” they require. On the augment–replace continuum of automation, the place of the scientific professions, according to Frey and Osborne, is clear: “while it is evident that computers are entering the domains of science and engineering, our predictions implicitly suggest strong complementarities between computers and labor in creative science and engineering occupations”. This means that scientists and engineers may use AI to augment every aspect of their work, but the core “sciencing” will remain human-based.

Since their publication, AI has evolved and advanced at an accelerated pace, reaching human parity on several skills. The fourth industrial revolution, which began around 2014, is based on and driven by the use of robotics and AI to augment human work and drive economic growth. During the past seven years we have seen and experienced the transformation of several industries by robotics and AI, a transformation whose pace only accelerated throughout the COVID-19 pandemic.

The work of science and of scientists is also transforming: more and more physicists, chemists, and biologists, among others, are using AI for their scientific work and data analysis. Today we have very advanced technologies that enable much better data collection, storage, and analysis, and we need to adapt our scientific work to address these major changes. The “sciencing” is evolving rapidly but still rests on the scientists themselves.

As early as 2014, Rob Kitchin stated that “big data and new data analytics are disruptive innovations which are reconfiguring in many instances how research is conducted” (Kitchin 2014). The rise of data-driven science is relevant to each step of the scientific process, augmenting the work and even directing it.

During the last two and a half years we have seen major achievements and breakthroughs in AI, all published in academic journals. This may mark the beginning of the end of human-driven science as we know it. One of the first was DeepMind’s AlphaFold (and AlphaFold2), an AI that predicts the 3D structure of proteins from their 1D amino acid sequence—resolving a problem that had defied scientists for 60 years. Second, we saw that AI can design better experiments than human scientists could have conceived. Furthermore, AI can automatically derive the underlying physics principles and governing equations of a system, and, given the same observational data the original scientists used, rediscover the physical laws they formulated. It was also shown by Raayoni et al. in Nature that AI can generate mathematical conjectures, replacing “the mathematical intuition of great mathematicians and providing leads to further mathematical research”.

This is a transformative period for science and the scientific method. From these examples it is clear that AI can analyze nearly unlimited data better than humans, discover patterns (laws and principles) faster and better than we can, and devise experiments in ways that we cannot. In a world where IoT technologies become ever more ubiquitous, AI will also be able to manage the observation and data-collection stages. Perhaps in the near future, using OpenAI’s GPT-3, it will be able to write scientific papers describing its discoveries and scientific work (something that already happened during the review process of this article), while human scientists are left only to write the limitations and discussion sections, with peer review handled by a new AI peer reviewer.

Finally, scientists in numerous fields are already using AI to augment their research and scientific work, creating new scientific facts and “truths”. The inductive versus deductive approaches to science have been debated for centuries, but the scientific method itself was not. Today AI is empirically “stepping in” and may be changing it. “Truth in science, however, is never final, and what is accepted as a fact today may be modified or even discarded tomorrow” (National Academies of Sciences 1999).

The collaborative work of science may change, and we already know that human + AI teams perform much better than either alone. But perhaps in this case the direction is AI + human, as AI becomes the scientist’s scientist.