Alien Reasoning: Is a Major Change in Scientific Research Underway?

Abstract

Are we entering a major new phase of modern science, one in which our standard, human modes of reasoning and understanding, including heuristics, have decreasing value? The new methods challenge human intelligibility. The digital revolution (deep connectionist machine learning, big data, cloud computing, simulation, etc.) inspires such claims, but they are not new. During several historical periods, scientific progress has challenged traditional concepts of reasoning and rationality, intelligence and intelligibility, explanation and knowledge. The increasing intelligence of machine learning and networking is a deliberately sought, somewhat alien intelligence. As such, it challenges the traditional, heuristic foresight of expert researchers. Nonetheless, science remains human-centered in important ways—and yet many of our ordinary human epistemic activities are alien to ourselves. This fact has always been the source of “the discovery problem”. It generalizes to the problem of understanding expert scientific practice. Ironically, scientific progress plunges us ever deeper into complexities beyond our grasp. But how is progress possible without traditional realism and the intelligibility realism requires? Pragmatic flexibility offers an answer.

Notes

  1. As excerpted in translation, in Du Châtelet (1740/2009). A bit later she is more confident that the basic truths have already been found. Like many of her day, she places metaphysics at “the summit of the edifice” of science. Thanks to Katherine Brading for calling this passage to my attention.

  2. About rejecting the traditional conception of knowledge, Kuhn (2000, p. 111) remarked: “Perhaps knowledge, properly understood, is the product of the very process these new studies describe. I think something of that sort is the case”. He was referring to the micro-sociological, social constructivist studies, although he believed they came to the wrong conclusions about scientific knowledge. The early Kuhn (1970) had similar views on the rationality of science: the historical trajectory of the sciences sets the standard for what counts as progress and rationality, not vice versa.

  3. For a computer to justify a knowledge claim is not in itself new. The first use of a computer to justify a major mathematical conjecture was Appel and Haken's 1977 proof of the four-color theorem for maps (Appel and Haken 1977). The computer assisted them by running systematically through all the possible cases, a task that would have been practically impossible by hand. The current digital revolution is, of course, a huge leap beyond the capabilities of those days.
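
To convey the flavor of such exhaustive machine checking, here is a minimal Python sketch (my own toy illustration of brute-force case enumeration, not Appel and Haken's discharging procedure; the example graph and all names are invented). It confirms that one small planar graph cannot be properly colored with three colors but can with four, by trying every assignment.

```python
# Toy illustration of settling a claim by exhaustive machine enumeration
# (not Appel and Haken's method; the graph and names are invented).
from itertools import product

def k_colorable(k, n_vertices, edges):
    """True if some assignment of k colors gives every edge
    differently colored endpoints (checked by brute force)."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=n_vertices))

# Wheel graph W5: hub vertex 0 joined to the 5-cycle 1-2-3-4-5.
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(k_colorable(3, 6, edges))  # False: three colors are not enough
print(k_colorable(4, 6, edges))  # True: four colors suffice
```

Appel and Haken's real contribution, of course, was to reduce the infinite family of planar maps to a finite set of configurations that a machine could then check in this exhaustive spirit.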

  4. In Brockman (2015b), a long series of short pieces by many authors from the online Edge.org conversation.

  5. Dennett (2017, Chap. 4) notes that being able to do something significant does not imply explicit know-how. See below.

  6. Symptomatic of the crude state of today’s deep learning (as compared with the ultimate goals) is the report that Google and Facebook have hired thousands of people to supplement algorithm-based search for inappropriate web content.

  7. Among philosophers, see, e.g., Bishop and Trout (2005).

  8. For entry into the styles of reasoning discussion, see Hacking (2012), his contribution to a special issue of Synthese on the topic.

  9. Today, leading physical scientists typically write about such topics not in formal papers but in informal settings such as the popular books edited by Brockman (e.g., 2015a).

  10. See the short piece by Luca de Biase in Brockman (2015b) and Emiliano Ippoliti’s contribution on “dark data” in Ippoliti and Chen (2017).

  11. And that’s just logic and math. Historiography is even worse.

  12. What counts as understanding is itself a contested issue. See Trout (2002) and the essays in De Regt et al. (2009).

  13. Perhaps I am wrong, but I think of shallow correlations as triggering responses similar to predators’ cognitive responses to the “eyes” on butterfly wings, leading to the action-decision “not-food,” or at least, “not-safe-food”. Similarly for peahens appraising the virility of peacocks on the basis of the number of “eyes” on their spread tails. These are evolved heuristics that are fast and frugal in something like Gigerenzer’s sense, but they convey no explanatory depth and are false, strictly speaking.
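
A one-cue decision rule of this fast-and-frugal kind is easy to render in code. The following is my own toy sketch (not a model from Gigerenzer and Todd 1999; the cue, the items, and all names are invented): the rule consults a single shallow cue and nothing else, which makes it cheap and often useful but, as a generalization, strictly false.

```python
# Toy sketch of a one-cue, fast-and-frugal decision rule (invented example,
# not a published model): decide "not-safe-food" from the single shallow cue
# "has eyespots", ignoring every other feature of the prey item.
def safe_food(prey):
    # One reason decides; no further cues are consulted.
    return not prey["has_eyespots"]

prey_items = [
    {"name": "plain moth",        "has_eyespots": False},
    {"name": "peacock butterfly", "has_eyespots": True},   # harmless, yet rejected
    {"name": "watchful owl",      "has_eyespots": True},   # genuinely risky
]
for p in prey_items:
    print(p["name"], "->", "safe" if safe_food(p) else "not-safe-food")
```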

  14. See also Eubanks (2018), Noble (2018), and Pasquale (2015). As I write this, several tech people are meeting at a San Francisco event called “Data for Good Exchange,” to consider a digital ethics code or digital “Hippocratic Oath” as a form of self-regulation of the industry.

  15. There are other, more generic challenges to machine learning that must be addressed in individual applications (Domingos 2015). A major one is overfitting—finding significant patterns where there are none. Overfitting makes correct generalization difficult; so does underfitting, the opposite failure of missing patterns that really are there. Calude and Longo (2015) worry about “the deluge of spurious correlations in big data”. A third challenge is scaling, one form of which is plagued by “the curse of dimensionality”. As dimensions increase, the data density, for a given data set, decreases exponentially. And as dimensions increase, say in robotic control, efficiency also decreases exponentially, e.g., when every orientation of each finger joint needs to be explicitly controlled. “The catastrophic forgetting problem” occurs when new learning overwrites old, although some forgetting is necessary for learning, especially for generalization (Tishby and Zaslavsky 2015). A practical problem is that much deep learning remains terribly expensive. This may well change, although the size of big data from world-wide sensors also increases at a tremendous rate.
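
For a concrete handle on the overfitting/underfitting contrast, here is a minimal numerical sketch (my own illustration; the polynomial degrees, noise level, and sample size are arbitrary choices, and nothing here is specific to deep learning): a low-degree fit misses the real pattern, while a high-degree fit tends to chase the noise and so generalizes worse.

```python
# Minimal overfitting/underfitting illustration with polynomial fits
# (arbitrary toy setup; not tied to any particular machine-learning system).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 15)
x_test = np.linspace(0, 1, 200)

def true_f(x):
    return np.sin(2 * np.pi * x)

y_train = true_f(x_train) + rng.normal(0, 0.2, x_train.size)  # noisy samples

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - true_f(x_test)) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# Typically: degree 1 underfits (high error everywhere), degree 9 tends to
# overfit (low training error, higher test error), and degree 3 does best here.
```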

  16. But not by those epistemologists who take seriously the work on bounded rationality (as Herbert Simon termed it), heuristic judgment, cognitive biases, and behavioral economics.

  17. See Norman (1993, 2004), Petroski (2003), and Meikle (2005).

  18. Simon’s early model, based on studying human problem-solving protocols, was a “conscious model” that problematically assumes we have access to the sources of our thoughts and decisions. Simon became aware of the problem, and knowledge engineering exacerbated it. The gist of my paper is that we philosophers need to become more aware of it!

  19. For a small sample, see, e.g., Rheinberger (1997), Collins (2010), Chang (2012), Leonelli (2016). See also Ericsson et al. (2006).

  20. I am sympathetic to the naturalistic approach to these topics in the Ippoliti and Cellucci articles in this issue (see also Ippoliti 2008 and Cellucci 2017). However, I am less confident that the complex neural-causal processes even in visual processing can be reduced to humanly intelligible rules. This is not the place to engage theories of vision or theories of practice.

  21. Some groundbreaking work was done by scientific philosophers nearly twenty years ago, e.g., Glymour and Cooper (1999) and Spirtes et al. (2000). Pearl (2000) has been fundamental.
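
To give a flavor of the conditional-independence reasoning on which this computational causal-discovery work builds, here is a hand-rolled toy sketch (my own; it is not the PC algorithm of Spirtes et al. or Pearl's machinery, and the simulated chain and coefficients are invented): in a chain X → Y → Z, X and Z are strongly correlated, yet the correlation nearly vanishes once Y is controlled for, which is the kind of pattern such algorithms exploit when reconstructing causal structure.

```python
# Toy conditional-independence check for a simulated causal chain X -> Y -> Z
# (hand-rolled sketch; not the algorithms of Spirtes et al. 2000 or Pearl 2000).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)    # X causes Y
z = -1.5 * y + rng.normal(size=n)   # Y causes Z

def residuals(a, b):
    """Residuals of a after regressing out b (simple least squares)."""
    a, b = a - a.mean(), b - b.mean()
    slope = np.cov(a, b)[0, 1] / np.var(b)
    return a - slope * b

print("corr(X, Z):        ", round(float(np.corrcoef(x, z)[0, 1]), 3))  # clearly nonzero
print("corr(X, Z given Y):", round(float(np.corrcoef(residuals(x, y),
                                                     residuals(z, y))[0, 1]), 3))  # about 0
```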

  22. More work is needed to sort out different sorts of unintelligibility. Complex as they are, we don’t find the entities and processes to which molecular biologists generally appeal to be as weirdly alien as those in fundamental physics.

  23. That is, rhetorical tropes—the ancient enemy of logic.

  24. I do not deny the heuristic value of individual, intentional realism. What bothers me is strong social or community realism in complex domains, the idea that the specialist community must agree on the near truth and the metaphysical interpretation of anything that is “licensed” as a product on which others may build. Recall the individual, intentional realism of Popper, who, simultaneously, vehemently denied that mature science today is close to the truth. He was not a strong realist in my sense. We can have realist-inspired heuristics without commitment to strong realism.

  25. I have argued for the latter points in Nickles (2017, 2018, forthcoming) and elsewhere. See also my personal recollections in this special issue, where I explain why, because of the strong realist tenor of the achievement term ‘scientific discovery’, I now prefer to speak of scientific innovation and of creative work at research frontiers. Standard analytic epistemology has not contributed much to what I call “frontier epistemology”.

  26. Ryle (1949) was a major, early exception. Recently, Stanley (2011) has rejected Ryle’s arguments and argued that knowledge-how is reducible to knowledge-that. It seems to me that we are very far, scientifically, from being able to make that sort of case. Even if he has succeeded in refuting Ryle’s ordinary language arguments, that is not enough.

  27. On the two-context distinction, see Schickore and Steinle (2006). My own, Dennett-like view involves a variation-selection process that cannot be reduced to logic “all the way down”.

References

  • Anderson C (2008) The end of theory: the data deluge makes the scientific method obsolete. Wired Magazine 16(7)

  • Appel K, Haken W (1977) Every planar map is four colorable. Part I: Discharging. Illinois J Math 21(3):429–490

  • Baird D (2004) Thing knowledge: a philosophy of scientific instruments. University of California Press, Berkeley

  • Bishop M, Trout JD (2005) Epistemology and the psychology of human judgment. Oxford University Press, New York

  • Brockman J (ed) (2015a) This idea must die: scientific theories that are blocking progress. Harper, New York

  • Brockman J (ed) (2015b) What to think about machines that think. Harper, New York

  • Calude CS, Longo G (2015) The deluge of spurious correlations in big data. http://www.di.ens.fr/users/longo/files/BigData-Calude-LongoAug21.pdf. Accessed 4 Feb 2018

  • Cellucci C (2017) Rethinking knowledge: the heuristic view. Springer, Cham

  • Chang H (2012) Is water H2O? Springer, Dordrecht

  • Collins H (2010) Tacit and explicit knowledge. University of Chicago Press, Chicago

  • Daston L (1988) Classical probability in the Enlightenment. Princeton University Press, Princeton

  • Daston L (2016) History of science without structure. In: Richards R, Daston L (eds) Kuhn’s structure of scientific revolutions at fifty. University of Chicago Press, Chicago

  • Dawes R (1988) Rational choice in an uncertain world. Harcourt, New York. 2nd edn. Sage, Thousand Oaks

  • De Langhe R (2014) To specialize or to innovate? An internalist account of pluralistic ignorance in economics. Synthese 191:2499–2511

  • De Regt H, Leonelli S, Eigner K (eds) (2009) Scientific understanding: philosophical perspectives. University of Pittsburgh Press, Pittsburgh

  • Dear P (2009) Revolutionizing the sciences: European knowledge and its ambitions, 1500–1700, 2nd edn. Princeton University Press, Princeton

  • Dennett D (1971) Intentional systems. J Philos 68:87–106

  • Dennett D (1995) Darwin’s dangerous idea. Simon & Schuster, New York

  • Dennett D (2017) From bacteria to Bach and back: the evolution of minds. Norton, New York

  • Dewey J (1929/1984) The quest for certainty. In: Boydston J (ed) John Dewey: the later works, vol 4. Southern Illinois University Press, Carbondale

  • Domingos P (2015) The master algorithm: how the search for the ultimate learning machine will remake our world. Basic Books, New York

  • Du Châtelet É (1740/2009) Institutions de physique (translated as The foundations of physics). Paris

  • Du Châtelet E (1739) Selected philosophical and scientific writings. In: Zinsser J, Bour I (eds and translators). University of Chicago Press, Chicago

  • Ericsson KA, Charness N, Feltovich P, Hoffman RR (2006) The Cambridge handbook of expertise and expert performance. Cambridge University Press, New York

  • Eubanks V (2018) Automating inequality: how high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York

  • Funkenstein A (1986) Theology and the scientific imagination. Princeton University Press, Princeton

  • Giere R (2006) Scientific perspectivism. University of Chicago Press, Chicago

  • Gigerenzer G, Todd P (eds) (1999) Simple heuristics that make us smart. Oxford University Press, Oxford

  • Glymour C, Cooper GF (eds) (1999) Computation, causation, & discovery. MIT Press, Cambridge

  • Gomez MA, Skiba RM, Snow JC (2017) Graspable objects grab attention more than images do. Psychol Sci. https://doi.org/10.1177/0956797617730599

  • Hacking I (2012) ‘Language, truth and reason’ 30 years later. Stud Hist Philos Sci A 43:599–609

  • Heng K (2014) The nature of scientific proof in the age of simulations. Am Sci 102:174–177

  • Hume D (1738) A treatise of human nature. Everyman, London

  • Humphreys P (2004) Extending ourselves: computational science, empiricism, and scientific method. Oxford University Press, New York

  • Ippoliti E (2008) Inferenze ampliative: Visualizzazione, analogia e rappresentazioni multiple. Lulu Press, Morrisville

  • Ippoliti E, Chen P (eds) (2017) Methods and finance: a unifying view on finance, mathematics and philosophy. Springer, Cham

  • James W (1907/1981) Pragmatism. Hackett, Indianapolis

  • Knight W (2017) The dark secret at the heart of AI: no one really knows how the most advanced algorithms do what they do. MIT Technology Review, Cambridge, pp 55–63

  • Koza J (1992) Genetic programming: on the programming of computers by means of natural selection. MIT Press, Cambridge

  • Kuhn TS (1962/1970) The structure of scientific revolutions, 2nd edn. University of Chicago Press, Chicago

  • Kuhn TS (1977) The essential tension. University of Chicago Press, Chicago

  • Kuhn TS (2000) The road since structure. University of Chicago Press, Chicago

  • Laudan L (1981) Science and hypothesis. Reidel, Dordrecht

  • Leonelli S (2016) Data-centric biology: a philosophical study. University of Chicago Press, Chicago

  • Loghmani RL, Caputo B, Vincze M (2017) Recognizing objects in-the-wild: where do we stand? arXiv.org:1709.05862v1. Accessed 5 Feb 2018

  • Lynch MP (2016) The internet of us: Knowing more and understanding less in the age of big data. Liveright/W.W. Norton, New York

  • Marcus G (2018a) Deep learning: a critical appraisal. arXiv.org:1801.00631. Accessed 5 Feb 2018

  • Marcus G (2018b) In defense of skepticism about deep learning. Submitted to arXiv.org

  • Marcus G (2018c) Innateness, AlphaZero, and artificial intelligence. arXiv.org:1801.05667. Accessed 5 Feb 2018

  • McLuhan M (1964) Understanding media: the extensions of man. McGraw-Hill, New York

  • Meehl P (1954) Clinical versus statistical prediction: a theoretical analysis and a review of the evidence. University of Minnesota Press, Minneapolis

  • Meikle J (2005) Ghost in the machine: why it’s hard to write about design. Technol Culture 46(2):385–392

  • Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs

  • Nguyen A, Yosinski J, Clune J (2015) Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 427–436

  • Nickles T (1987) From natural philosophy to metaphilosophy of science. In: Kargon R, Achinstein P (eds) Kelvin's Baltimore lectures and modern theoretical physics: historical and philosophical perspectives. MIT Press, Cambridge, MA, pp 507–541

  • Nickles T (2017) Strong realism as scientism: are we at the end of history? In: Boudry M, Pigliucci M (eds) Science unlimited? The challenges of scientism. University of Chicago Press, Chicago

  • Nickles T (2018) TTT: a fast heuristic to new theories? In: Danks D, Ippoliti E (eds) Building theories. Springer, Cham, pp 169–189

  • Nickles T (forthcoming) Do cognitive illusions make scientific realism deceptively attractive? In: González WJ (ed) New approaches to scientific realism

  • Nielsen M (2012) Reinventing discovery: the new era of networked science. Princeton University Press, Princeton

  • Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. New York University Press, New York

  • Norman D (1993) Things that make us smart: defending human attributes in the age of the machine. Addison-Wesley, Reading

  • Norman D (2004) Emotional design: why we love (or hate) everyday things. Basic Books, New York

  • O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown, New York

  • Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge

  • Pearl J (2000) Causality: models, reasoning and inference, 2nd edn. Cambridge University Press, Cambridge

  • Petroski H (2003) Small things considered: why there is no perfect design. Alfred Knopf, New York

  • Polanyi M (1958) Personal knowledge. University of Chicago Press, Chicago

  • Rescher N (1984) The limits of science. University of California Press, Berkeley

  • Rheinberger H-G (1997) Toward a history of epistemic things: synthesizing proteins in the test tube. Stanford University Press, Stanford

  • Rozenblit L, Keil F (2002) The misunderstood limits of folk science: an illusion of explanatory depth. Cogn Sci 26(5):521–562

  • Ryle G (1949) The concept of mind. Hutchinson, London

  • Schickore J, Steinle F (2006) Revisiting discovery and justification: historical and philosophical perspectives on the context distinction. Springer, Dordrecht

  • Shapere D (1984) Reason and the search for knowledge. Reidel, Dordrecht

  • Shapin S, Schaffer S (1985) Leviathan and the air-pump. Princeton University Press, Princeton

  • Shapiro B (1983) Probability and certainty in seventeenth-century England. Princeton University Press, Princeton

  • Somers J (2017) Is AI riding a one-trick pony? MIT Technology Review, Cambridge

  • Spirtes P, Glymour C, Scheines R (2000) Causation, prediction, and search, 2nd edn. MIT Press, Cambridge

  • Stanley J (2011) Know how. Oxford University Press, Oxford

  • Sweeney P (2017) Deep learning, alien knowledge and other UFOs. https://medium.com/inventing-intelligent-machines/machine-learning-alien-knowledge-and-other-ufos-1a44c66508d1. Accessed Nov 18 2017

  • Szegedy C, Zaremba W et al (2014) Intriguing properties of neural networks. arXiv.org 1312.6199. Accessed 5 Feb 2018

  • Teller P (2001) Twilight of the perfect model model. Erkenntnis 55(3):393–415

  • Tishby N, Zaslavsky N (2015) Deep learning and the information bottleneck principle. arXiv.org:1503.02406v1 [cs.LG]. Accessed 5 February 2018

  • Trout JD (2002) Scientific explanation and the sense of understanding. Philos Sci 69:212–233

  • Wachter-Boettcher S (2017) Technically wrong: sexist apps, biased algorithms, and other threats of toxic tech. Norton, New York

  • Weinberger D (2014) Too big to know: rethinking knowledge. Basic Books, New York

  • Weinberger D (2017) Alien knowledge: when machines justify knowledge. Wired Magazine

  • Wilson T (2002) Strangers to ourselves. Harvard University/Belknap Press, Cambridge, MA

  • Wimsatt WC (2007) Re-engineering philosophy for limited beings. Harvard University Press, Cambridge

  • Wise MN (2011) Science as (historical) narrative. Erkenntnis 75:349–376

  • Wittgenstein L (1953) Philosophical investigations. Macmillan, London

  • Zenil H et al (2017) What are the main criticisms and limitations of deep learning? https://www.quora.com/What-are-the-main-criticsm-and-limitations-of-deep-learning. Accessed 5 Feb 2018

Acknowledgements

Thanks to Emiliano Ippoliti for helpful suggestions.

Author information

Corresponding author

Correspondence to Thomas Nickles.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

About this article

Cite this article

Nickles, T. Alien Reasoning: Is a Major Change in Scientific Research Underway?. Topoi 39, 901–914 (2020). https://doi.org/10.1007/s11245-018-9557-1
