Correction to: Ethics and Information Technology (2024) 26:38

https://doi.org/10.1007/s10676-024-09775-5

In this article, the following corrections have been made:

The sentence beginning ‘Solutions such as…’ should read: ‘Solutions such as connecting the LLM to a database don’t work because, if the models are trained on the database, then the words in the database affect the probability that the chatbot will add one or another word to the line of text it is generating.’

The sentence beginning ‘We will argue…’ should read: ‘We will argue that these falsehoods aren’t hallucinations later.’

The sentence beginning ‘In Sect. 3.2 we consider…’ should read: ‘In our final section, we consider whether ChatGPT may be a hard bullshitter, but it is important to note that it seems to us that hard bullshit, like the two accounts cited here, requires one to take a stance on whether or not LLMs can be agents, and so comes with additional argumentative burdens.’

The sentence ‘We canvas a few ways in which ChatGPT can be understood to have the requisite intentions in Sect. 3.2.’ should read: ‘We then canvas a few ways in which ChatGPT can be understood to have the requisite intentions.’

The sentence ‘We are not confident that chatbots can be correctly described as having any intentions at all, and we’ll go into this in more depth in the next Sect. (3.2)’ should read: ‘We are not confident that chatbots can be correctly described as having any intentions at all, and we’ll go into this in more depth in the next section.’

The sentence beginning ‘In Sect. 1, we argued…’ should read: ‘Earlier, we argued that ChatGPT is not designed to produce true utterances; rather, it is designed to produce text which is indistinguishable from the text produced by humans. It is aimed at being convincing rather than accurate.’

The sentence ‘We will consider these questions in more depth in Sect. 3.2.2.’ should read: ‘We will consider these questions in more depth below.’

The sentence beginning ‘We don’t think…’ should read: ‘We don’t think that ChatGPT is an agent or has intentions in precisely the same way that humans do (see Levinstein and Herrmann (forthcoming) for a discussion of the issues here).’

The sentence ‘In the next Sect. (3.2.3), we will argue that ChatGPT has no similar function or intention which would justify calling it a confabulator, liar, or hallucinator.’ should read: ‘In the next section, we will argue that ChatGPT has no similar function or intention which would justify calling it a confabulator, liar, or hallucinator.’

The sentence beginning ‘But there are strong…’ should read: ‘But there are strong reasons to think that it does not have beliefs that it is intending to share in general–see, for example, Levinstein and Herrmann (forthcoming).’

In addition, there are typographical errors in the References section.

The corrected references should read as follows:

Levinstein, B. A., & Herrmann, D. A. (forthcoming). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies, 1–27.

and

Levy, N. (2023). Philosophy, bullshit, and peer review. Cambridge University Press.

The original article has been corrected.