In this paper I am concerned with the question of whether degrees of belief can figure in reasoning processes that are executed by humans. It is generally accepted that outright beliefs and intentions can be part of reasoning processes, but the role of degrees of belief remains unclear. The literature on subjective Bayesianism, which seems to be the natural place to look for discussions of the role of degrees of belief in reasoning, does not address the question of whether degrees of belief play a role in real agents’ reasoning processes. The philosophical literature on reasoning, by contrast, relies much less heavily on idealizing assumptions about reasoners than Bayesianism does, but it is almost exclusively concerned with outright belief. One possible explanation for why no philosopher has yet developed an account of reasoning with degrees of belief is that such reasoning is simply not possible for humans. In this paper, I consider three arguments for this claim. I show why each of these arguments is flawed, and conclude that, at least as far as these arguments are concerned, there is no good reason why the topic of reasoning with degrees of belief has received so little attention.