Encyclopedia of Educational Philosophy and Theory

2017 Edition
Editors: Michael A. Peters

Informal Assessment

  • Maria Araceli Ruiz-Primo
Reference work entry
DOI: https://doi.org/10.1007/978-981-287-588-4_390

Assessment is practiced every day in all social interactions. It is the way we naturally, even automatically, assess each other as we socialize in a variety of situations: “Whenever we take the time to look and notice, we find naturally occurring, unremarkable, and unremarked assessment activities that nevertheless are fundamental in making human collaboration possible […] in-a-glance, on-the-fly mutual assessments underlie all of human sociality, and in fact, the sociality of all social species” (Jordan and Putz 2004, p. 348). In education, this on-the-fly assessment has come to be known as interactive assessment (Bell and Cowie 2001) or informal assessment (Shavelson et al. 2003); in noneducation organizations, it is known as inherent assessment (Jordan and Putz 2004). These terms distinguish it from assessment at the other end of the continuum, i.e., more planned or formal assessment as applied in education (Bell and Cowie 2001; Shavelson et al. 2003; Wiliam and Black 1996) and documentary assessment as applied in organizations (Jordan and Putz 2004).

At any point of the continuum between informal and formal, the assessment process is a cycle of gathering, interpreting, and acting upon the information collected, all guided with a particular goal in mind. This goal could be to know if our interlocutor understands what we are saying or to know if students are achieving the intended learning goals.

Any assessment process can be characterized by the following dimensions: (a) the agents involved (i.e., the assessed and the assessor); (b) the role that each agent plays in a larger social context in relation to other individuals (e.g., supervisor/supervisee; expert/novice; teacher/student/peer); (c) the purpose of the assessment or how the information will be used (e.g., to check understanding in a conversation, to improve the assessed performance, to judge and assign a number to the assessed performance); (d) the type of evidence collected (i.e., incidental or purposeful; informal or formal); and (e) the strategies used to communicate the results of the assessment (e.g., body language, tone of speech, gestures, actions, oral feedback, graphic representations, a score).

The informal-to-formal continuum includes intermediate points. The extremes of the continuum and their intermediate points vary according to the characteristics of each of these dimensions. Informal assessment at one end of the continuum may be completely implicit, as, for example, when a listener in a conversation “looks perplexed and confused” and his interlocutor rephrases what was just said without any explicit request for clarification (Jordan and Putz 2004). In this situation, the speaker relied on the listener’s facial gestures to gauge what was not fully understood, then rephrased to clarify the intended message. As we move to the other end of the continuum, what was implicit gradually becomes explicit, and the assessment message is shared with the assessed (an individual, a pair, or a group). This explicitness is made evident across all the dimensions.

Informal assessment can be practiced minute by minute because everything can be used as a source of information to guide our next move. For example, we can use the interlocutor’s gestures and body language, the questions she may ask us or others, the conversation she may be having with another person, or how she responds (orally or in writing) to the questions posed by us or others.

An important element in informal assessment is to maintain an interpretive state of mind (Davis 1997), which allows us to gather the desired information and to know what to do next. Only by maintaining this state of mind is it possible to notice – that is, to attend, interpret, and decide what to do next (Jacobs et al. 2010).

A good way to exemplify informal formative assessment and its impact is to focus on a context. The rest of this entry therefore considers informal formative assessment in the context of teaching and learning in classrooms. There are at least two reasons for doing so. First, classroom interaction is a core part of teaching (Alexander 2008) and, as such, offers the ideal setting for analyzing informal formative assessment at a more molecular level. Second, informal formative assessment in classrooms allows the process to be analyzed with the single goal of improving students’ learning.

Informal Formative Assessment in the Classroom Context

Daily ongoing informal assessment in the classroom makes use of various informal strategies to gather information about students’ learning. When this process is conducted for the primary purpose of advancing students’ learning, it becomes informal formative assessment, or informal assessment for learning.

A critical premise of informal formative assessment is that much of what teachers and students do in the classroom can potentially serve as an assessment providing evidence about where students are in their learning. Informal formative assessment, then, involves taking advantage of the numerous opportunities within a class period to collect evidence about where students are relative to the learning goals, and then using this information to help improve students’ learning. That is, much of what occurs every day in the classroom offers potential assessment opportunities. Informal formative assessment has been called on-the-fly assessment because it takes advantage of opportunities, both planned (such as a carefully thought-out question to ask students) and unplanned (such as a question asked by a student in the middle of a discussion), to collect evidence about students’ level of understanding. For example, when a student asks a question indicating confusion about something, teachers can use this as an opportunity to better understand the student’s thinking – that is, they interpret. The important characteristic of this interpretation is to find out what the student knows, understands, and can do, and not only whether the student knows, understands, and can do something. The latter involves a yes/no approach rather than a search for what exactly students know and for how this knowledge can contribute to reaching higher levels of understanding. Only with this type of interpretation can teachers decide which action to take to facilitate, support, and advance the student’s learning (e.g., provide feedback, make an instructional adjustment, or suggest additional practice).

Based on classroom observations (Ruiz-Primo et al. 2015), we know that informal formative assessment is not necessarily done consciously: something is noticed, and how to respond to what is noticed is decided very quickly. Proficient teachers attend; they observe; they look at what students are doing, writing, or saying; and they ask questions that help them understand their students’ thinking. All this in turn informs the teacher’s next steps and decisions. For these strategies to work, there is a critical requisite: the teacher needs to fully understand what to look for when attending to what students are thinking. That is, the learning goal should be very clear so it is obvious what to attend to and how to respond to what is noticed. Informal formative assessment, then, can be generally characterized by the conscious discovery of novel information about student understanding in any interaction at any given point in time; it is the constant search for making sense of students’ responses, actions, comments, and behaviors (Ruiz-Primo 2011).

Unlike formal formative assessment, informal formative assessment is not necessarily associated with a particular assessment instrument; rather, teachers use four sources of information readily accessible to them that offer opportunities to gather information: what students say, what they write, what they do, and what they make. Among all these sources, what students say during instructional interactions is critical: “…talk is the most pervasive in its use and powerful in its possibilities…[it]…vitally mediates what the child knows and understands and what he or she has yet to know and understand” (Alexander 2008, p. 118). Thus, one of the most important tasks for teachers is to create interaction opportunities that directly and appropriately engineer such mediation (Alexander 2008; Bellack et al. 1966; Edwards and Westgate 1994). This is true regardless of whether the interactions are between the teacher and one student, between the teacher and many students, or among the students themselves.

Instructional dialogues can be viewed as assessment conversations (Duschl and Gitomer 1997) in which dialogues become a source of information for interpretation and deciding what to do with the information collected. Assessment conversations are dialogues usually embedded in any activity that occurs in the classroom. “The main purpose of assessment conversations is to make students’ thinking explicit, or to voice their understanding so that teachers can recognize and act on it to promote learning. Assessment conversations make evident what and how students are thinking, enabling teachers to recognize their students’ conceptions, mental models, strategies, language use, and/or communication skills. Teachers can then use this information accordingly to guide the next activities” (Ruiz-Primo 2011, p. 17).

As mentioned earlier, the formative assessment process is a cycle of gathering, interpreting, and acting upon the information collected, guided by a particular goal. In what follows, the critical characteristics of informal formative assessment are presented in terms of this cycle. The discussion starts with what should guide all the other activities: clarifying the learning goal and expectations.
  1.

    Clarify what students are expected to learn and how they can show that they are learning. Assessment conversations can only take place when there is a clear goal that guides the interaction. The classroom instructional activities assigned can be both a learning experience and a window into students’ thinking. When the most central things students need to learn are clear, the ways to gather information can be streamlined and efficient. The learning goals that guide informal formative assessment conversations tend to be discrete and immediate (e.g., what students need to get from a class discussion or from a task). Reminding students about the learning goals and targets during the interactions makes the interactions more purposeful (e.g., reminding students of the purpose of an activity or a conversation).

  2.

    Gather information. How can students’ thinking be made explicit? The key is to design questions and activities that create windows into students’ thinking. They should be designed to reveal how students arrived at an answer and the thinking behind it. Asking questions, observing, and listening become critical tools for informally gathering evidence about student learning:

    Questioning. Questions are asked for a variety of reasons: to make sure students are on task (“Does everyone have their planners out?”), to see if students are paying attention (“Can everyone see this?”), to model our thought process (“Okay, what is the next thing we do when we are solving problems of this type?”), to see if students know or can do something (“What is the density of water?,” or “How do we read volume with a graduated cylinder?”), to push students’ thinking (“What do you think would happen if we continued this experiment for two more weeks?”), and to understand students’ thinking (“Can you tell me more about why you think that?,” or “Show me step by step how you solved this problem?”).

    Whether conducted with a student, a group of students, or the whole class, questioning, when done purposefully, can reveal a great deal about students’ thinking. The questions asked and the interpretation of the students’ responses should be based on the learning goal(s). The most informative questions ask students to explain their answer, elaborate on their response, or provide information about why they think something. Asking multiple questions with increasing focus can help to home in on the rationale behind students’ thinking or the source of their confusion. Questions that begin with “why does…?”; “how would you…?”; “could you explain…?”; “why do you think…?”; “why is _______an example of ______?”; “why is ______and not _____?” can help to develop dialogical interactions. When we seek to learn something about students’ thinking, these types of questions become the most fruitful. “Good diagnosis relies on rich questions that elicit learners’ higher-order thinking” (Stobart 2014, p. 118). Closed questions (with a yes/no answer or with specific correct answers in mind) can help to find out if students know something, but they may not help to find out what students know.

    Observing and listening. Most teachers observe or listen to students while they are working. What is less common is to observe and listen to students with the purpose of noticing – that is, with the purpose of learning about students’ thinking. It has been observed that when teachers circulate around their classrooms they focus mainly on making sure students are on task (Ruiz-Primo et al. 2015). When teachers focus on knowing more about students’ thinking, they use circulating around the room as an opportunity to observe students working and to listen to their conversations, which in turn helps them to learn more about individual student progress. Observing and listening are usually event-oriented activities; therefore, the teacher should use her knowledge about the content, students’ potential misconceptions, and the nature of the task to decide whether or not to intervene, based on her interpretations of what is happening during the event. As teachers walk around, they can use the information gathered to adjust their instruction in the next instructional episode, or they can decide to intervene with a student or a group on the spot. There is a fine line between becoming intrusive and becoming a learning facilitator when a student is working autonomously or a group of students is discussing a task.
When to intervene will depend on the stage of the event (e.g., is it the beginning of the task, the middle, the end?), the nature of the intervention based on the teacher’s interpretation of the situation (e.g., asking a question that can help a student see another perspective, or deciding that a deeper conversation is necessary), the time constraints (e.g., will the intervention require a conversation with the student that is longer than the time needed to complete the task?), and the student’s affective state (e.g., is the student becoming frustrated because of a lack of progress on the task while others are moving forward?).

  3.

    Interpret the collected information. As mentioned above, on-the-fly formative assessment requires teachers to make quick decisions about how to help students, and teachers typically make these decisions based on quick interpretations of what they noticed. Most of the time, these interpretations are unobservable to another person. They are derived naturally from what teachers know about the content being taught and how to teach it, students’ conceptions, and potential issues the students may have with certain tasks. For example, while walking around the room observing and listening, the teacher may intervene by asking a question of students working in pairs, based on her interpretation of her observations and of the students’ conversations. The teacher may have decided at a particular moment that a group of students was having a circular discussion and therefore was not advancing its reasoning about how to solve a problem. A question is asked as a way to assist the students in their discussion; the question becomes a scaffold (e.g., “Have you all considered the volume of the object?”).

  4.

    Use the evidence of students’ understanding to decide what to do next. The purpose of gathering information on-the-fly about students’ thinking is to use it. A critical characteristic of formative assessment, what defines it, is the use of the collected information to reduce the gap between where the student is and where she should be. If teachers do not decide what to do next using the relevant collected information, the gap is not altered. It follows then that there is no improvement of learning and no formative assessment.

    Some strategies are more effective than others in helping students move forward in their learning. For example, rather than just evaluating a student’s response (i.e., stating that it is correct or incorrect), teachers can explain or elaborate on why something is correct or incorrect, when something (such as a procedure) can be used, and how to use it. Teachers can decide on-the-fly to reteach something or, if the source of the student’s confusion is centered in a particular problem, guide the student through its solution by modeling, with or without the help of students. Sometimes all a student needs is for the teacher to clarify or reexplain the task to be accomplished. At other times a student’s response reflects clear and deep understanding of the content, in which case the teacher might do something to further push the student’s thinking.

There are two additional aspects related to the implementation of informal formative assessment:
  • Create ways for all students to have the same opportunities to participate in formative assessment. Asking a question and receiving an answer from one student or calling on the students who most often provide correct answers makes it difficult to conclude that everybody in the class is thinking the same thing. However, it is natural to make assumptions about the whole class from only one or two students’ responses. Thus, to make instructional decisions for the whole class (e.g., deciding to move on or assign additional work), it is important to ask a handful of students the same question, including those who are often too shy or tend not to volunteer to participate. When teachers pay attention only to those students who volunteer their participation, they can inappropriately decide to move on even though many students are not ready; therefore, the achievement gap within the classroom can widen. It is important to allow all students the opportunity to advance their learning.

  • Interact with students while they work independently. A good time to work with individual students is when they are working on something by themselves, in pairs, or in small groups. This can be the best opportunity to visit the particular students who could benefit from individualized feedback and assistance. Research shows that the more proficient teachers tend to provide feedback to almost all their students during this independent work period (Ruiz-Primo et al. 2015).

Informal formative assessment can be conceptualized around the idea that everyday activities can be treated as potential sources of assessment information, from physical gestures during a conversation to body posture or words written or spoken. Such information is used by participants to guide their personal social interactions in a variety of nonschool settings. In the classroom context, informal formative assessment is critical to supporting continuing academic progress for every student. When students’ thinking can be explicitly described, it can be thoughtfully examined, questioned, and shaped into an “active object of constructive learning” (Glaser 1995, cited in Duschl and Osborne 2002).


References

  1. Alexander, R. (2008). Essays on pedagogy. New York: Routledge.
  2. Bell, B., & Cowie, B. (2001). Formative assessment and science education. Dordrecht: Kluwer.
  3. Bellack, A. A., Kliebard, H. M., Hyman, R. T., & Smith, F. L. (1966). The language of the classroom. New York: Teachers College Press.
  4. Davis, B. (1997). Listening for differences: An evolving conception of mathematics teaching. Journal for Research in Mathematics Education, 28(3), 355–376.
  5. Duschl, R. A., & Gitomer, D. H. (1997). Strategies and challenges to changing the focus of assessment and instruction in science classrooms. Educational Assessment, 4, 37–73.
  6. Duschl, R. A., & Osborne, J. (2002). Supporting and promoting argumentation in science education. Studies in Science Education, 38, 39–72.
  7. Edwards, A. D., & Westgate, D. P. G. (1994). Investigating classroom talk. London: Falmer Press.
  8. Jackson, P. W. (1968). Life in classrooms. New York: Holt, Rinehart and Winston.
  9. Jacobs, V., Lamb, L., & Philipp, R. (2010). Professional noticing of children’s mathematical thinking. Journal for Research in Mathematics Education, 41(2), 169–202.
  10. Jordan, B., & Putz, P. (2004). Assessment as practice: Notes on measures, tests, and targets. Human Organization, 63, 346–358.
  11. Leahy, S., Lyon, C., Thompson, M., & Wiliam, D. (2005). Classroom assessment: Minute by minute, day by day. Educational Leadership, 63(3), 18–24.
  12. Mercer, N., Dawes, L., Wegerif, R., & Sams, C. (2004). Reasoning as a scientist: Ways of helping children to use language to learn science. British Educational Research Journal, 30(3), 359–377.
  13. Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues in assessing students’ learning. Studies in Educational Evaluation, 37(1), 15–24.
  14. Ruiz-Primo, M. A., & Furtak, E. M. (2006). Informal formative assessment and scientific inquiry: Exploring teachers’ practices and student learning. Educational Assessment, 11(3–4), 205–235.
  15. Ruiz-Primo, M. A., & Furtak, E. M. (2007). Exploring teachers’ informal formative assessment practices and students’ understanding in the context of scientific inquiry. Journal of Research in Science Teaching, 44(1), 57–84.
  16. Ruiz-Primo, M. A., Kroog, H., & Sands, D. (2015). Teachers’ judgments on-the-fly: Teachers’ response patterns to students’ responses in the context of informal formative assessment. Paper presented at the symposium “From teachers’ assessment practices to shared professional judgment cultures,” European Association for Research on Learning and Instruction, Limassol.
  17. Shavelson, R. J., Black, P. J., Wiliam, D., & Coffey, J. (2003). On linking formative and summative functions in the design of large-scale assessment systems (Internal manuscript, Stanford Education Assessment Laboratory). Stanford: Stanford University.
  18. Shavelson, R. J., Yin, Y., Furtak, E. M., Ruiz-Primo, M. A., Ayala, C. C., Young, D. B., et al. (2008). On the role and impact of formative assessment on science inquiry teaching and learning. In J. Coffey, R. Douglas, & C. Stearns (Eds.), Assessing science learning (pp. 21–36). Arlington: NSTA Press.
  19. Stobart, G. (2014). The expert learner: Challenging the myth of ability. Berkshire: Open University Press.
  20. Wiliam, D. (2011). Embedded formative assessment. Bloomington: Solution Tree Press.
  21. Wiliam, D., & Black, P. (1996). Meaning and consequences: A basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5), 537–548.

Copyright information

© Springer Science+Business Media Singapore 2017

Authors and Affiliations

  1. Stanford University, Stanford, USA