Participants
The participants were recruited at a local elementary school serving both middle- and lower-income communities in a southeastern city; approximately 64% of the student population received free or reduced-price lunches. A total of 93 children (52 males, 41 females) between the ages of 5 and 8 years (Mage = 6.95 years, SDage = 0.89 years, range = 4.9 to 8.4 years) participated in the study. The children were from various racial and ethnic groups (i.e., 34% African American, 29% White or Caucasian, 22% Hispanic, and 15% Other). Parental consent was obtained for all participants, and each child received stickers and pencils in appreciation for participation.
Of the 93 children who participated in the study, 4 (3 kindergartners, 1 second grader) were not included in the analyses of performance on the LSS task because they failed to meet the practice-trial criterion (described below). Thus, the final sample for these analyses was 89 children (50 males, 39 females).
Of the 93 children who participated in the study, 12 (9 kindergartners, 1 first grader, and 2 second graders) were not included in the analyses of performance on the DCCS task because they failed to reach the postswitch criterion, as outlined by Zelazo (2006) and described below. Thus, the final sample for these analyses was 81 children (47 males, 34 females).
For analyses comparing children’s performances on the LSS relative to the DCCS, children who did not meet the inclusion criteria for one or both tasks were removed from the analyses (n = 16). As a result, the final sample for these analyses was 77 children (45 males, 32 females).
Apparatus and stimuli
E-Prime 2.0 computer software was used to create and administer the computerized versions of the LSS picture-word task and the DCCS task (Zelazo, 2006). Both tasks were administered using a PC laptop connected to a 15-in. ELO touch-screen computer monitor. The touch-screen monitor and E-Prime software were also used to record participants’ responses (e.g., accuracy, reaction time). Auditory stimuli were presented through noise-reducing headphones (Sony Model MDR-7506).
LSS task
Three sorting “gobblers” (i.e., yellow happy faces) were created using PowerPoint clip art and Adobe Illustrator. The “object gobbler” was depicted holding a box and was positioned midway down and on either the right or the left side of the computer screen. The “color gobbler” was depicted holding a color palette and a paintbrush and was positioned on the side of the screen opposite the object gobbler. The “mismatch gobbler” had a mischievous grin and a Mohawk hairstyle and was always positioned at the bottom center of the computer screen (see Fig. 2).
Demonstration stimuli
Photographs of a blue fish, a green dress, a black car, and a purple flower were used during the demonstration phase. The blue fish was labeled “blue” (i.e., color match), the green dress was labeled “orange” (i.e., color mismatch), the black car was labeled “car” (i.e., object match), and the purple flower was labeled “school” (i.e., object mismatch).
Practice stimuli
The labels used for the 16 practice trials are depicted in Appendix A.
Test stimuli
Appendixes A and B present the test stimuli (labels and pictures) used in the present study. The test stimuli comprised 25 object labels, 20 object photographs, and 9 color labels. The 25 object labels were selected based on young children’s familiarity with them. More specifically, we consulted the MacArthur Communicative Development Inventory: Words and Gestures (Fenson et al., 1993) to select object labels that referred to concrete nouns that children as young as 30 months were reported to both understand and say. We selected only labels that referred to concrete nouns to ensure that the corresponding objects could be clearly represented in still photographs (e.g., shoe, cup, book). Using this criterion, we generated a list of 25 words to use as object labels. Of these 25 labels, 5 were not paired with representative photos and always served as the object mismatch labels (i.e., “leg,” “bag,” “hat,” “bottle,” and “meat”). The remaining 20 object labels were paired with corresponding still photographs.
The 20 still photographs were selected because they were (1) easily recognized when presented in isolation and (2) monochromatic. Object photos were only selected if they could be clearly interpreted without additional cues. For example, the concrete nouns “chin” and “tongue” were difficult to visually represent without simultaneously showing the “mouth” attached to them. Similarly, the word “beach” was difficult to visually represent without simultaneously showing “sand” and “ocean.” Furthermore, photographs were only selected if the depicted object could readily be identified as one color (e.g., black dog, red plane), so that clear color match and color mismatch labels could be appropriately applied to each photograph.
Naturally, the color match labels were selected based on the colors of their corresponding objects. As a result, nine different color labels were utilized. The color mismatch labels were selected and paired with objects based on two considerations. First, similarities in hue (e.g., purple/blue, red/orange, or black/brown) were considered, such that a red object would never be paired with the word “orange” as a color mismatch label. Secondly, similarities between the initial sounds of the words (e.g., “blue” vs. “black”) were considered, such that a black object would never be paired with the word “blue” as a color mismatch label.
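The two constraints on color mismatch labels can be sketched as a simple validity check. This is an illustrative sketch, not the authors' materials: the hue-similarity groups below are taken from the examples in the text, and initial-sound similarity is approximated by the first letter.

```python
# Hypothetical sketch of the two constraints used to select color mismatch
# labels. Hue groups come from the examples in the text; the onset check is
# a rough first-letter approximation of "similar initial sounds."
HUE_GROUPS = [{"purple", "blue"}, {"red", "orange"}, {"black", "brown"}]

def similar_hue(a, b):
    """True if the two color words fall in the same hue-similarity group."""
    return any(a in g and b in g for g in HUE_GROUPS)

def similar_onset(a, b):
    """True if the two words share an initial sound (first-letter proxy)."""
    return a[0] == b[0]

def valid_mismatch(object_color, candidate_label):
    """A color mismatch label must differ from the object's actual color,
    must not be a similar hue, and must not share the initial sound."""
    return (candidate_label != object_color
            and not similar_hue(object_color, candidate_label)
            and not similar_onset(object_color, candidate_label))
```

Under this check, "orange" is rejected as a mismatch label for a red object, and "blue" is rejected for a black object, matching the examples above.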
After selecting the 20 pictures and the corresponding object and color match/mismatch labels, we devised a counterbalancing strategy to ensure that each label category was equally represented across the 20 trials (i.e., 5 object matches, 5 object mismatches, 5 color matches, and 5 color mismatches). To accomplish this, we sorted the 20 pictures into four sets of 5 pictures while keeping three specific factors in consideration. The first factor was the conceptual categories of the objects. The concern was that if selection were left unconstrained, it was possible that several items from one conceptual category (e.g., animals) could be disproportionately paired with one type of label (e.g., object match labels). To minimize this possibility, objects in the same conceptual category were split across the four sets. For example, pictures of the dog, cow, and horse were assigned to separate sets, because they all fell under the category of animals. The specific categories taken into account when creating the list included animals, food, home/furnishings, and body parts/clothing.
The second factor considered in determining the sets was the objects’ perceptual cues (i.e., color, shape), as represented in the pictures. Pictures of objects that were similar in shape (e.g., ball, cookie, apple) or color (light green apple vs. dark green cup) were assigned to different sets. Again, the goal was to ensure that objects that were similar in shape or color were not disproportionately paired with one type of label (e.g., color match labels). For example, two or more objects of a similar color (e.g., light blue vs. dark blue) were never presented with the same color match label (e.g., “blue”). Thus, for every set, the five objects were depicted in five different colors.
The third factor considered in determining the sets was the phonetic similarity of the labels. Specifically, similar-sounding words were not clustered within the same set. As a result, “cow,” “cup,” and “cookie” were intentionally separated due to the similarity in their initial velar /k/ sound. See Appendix A for a sample depiction of the counterbalancing strategy used in creating the four object sets, with corresponding object match, object mismatch, color match, and color mismatch labels.
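The three balancing factors above can be expressed as a checker over a candidate set of five pictures. This is an illustrative sketch, not the authors' procedure: the item fields, the first-letter proxy for phonetic similarity, and the cap of two items per conceptual category are all assumptions made for the example.

```python
# Illustrative checker (not the study's code) for the three set constraints:
# categories spread out, five distinct colors, and no repeated initial sound.
from collections import Counter

def check_set(items):
    """items: list of 5 dicts with 'name', 'category', and 'color' keys.
    Returns True if the candidate set satisfies all three constraints."""
    colors = [it["color"] for it in items]
    onsets = [it["name"][0] for it in items]          # first-letter proxy
    cats = Counter(it["category"] for it in items)
    return (len(set(colors)) == 5                     # five different colors
            and len(set(onsets)) == len(onsets)       # no clustered onsets
            and max(cats.values()) <= 2)              # assumed category cap
```

A set containing both “cow” and “cookie” would fail the onset check, and a set with two green objects would fail the color check.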
DCCS task
PowerPoint clip art pictures of red and blue rabbits and trucks were used as the stimuli for the DCCS. For the border version of the task, each combination (i.e., blue rabbit, red rabbit, blue truck, or red truck) had an identical counterpart surrounded by a 5-mm black border (Zelazo, 2006; see Fig. 2 for an example).
Peabody Picture Vocabulary Test–4th edition (PPVT-IV)
The PPVT-IV is a standardized test of receptive vocabulary in English (Dunn & Dunn, 2007). Participants are presented with a page divided into four equal quadrants, each depicting an image. Participants are given an auditory label (e.g., “ball”) and instructed to select or point to the image that correctly corresponds to the label. Testing is concluded when a participant makes eight incorrect responses within a 12-item section. Per the PPVT-IV instruction manual (Dunn & Dunn, 2007), the raw score was calculated by subtracting the number of errors committed in the entire assessment from the ceiling item (i.e., the last item administered). Raw scores were translated into standardized scores according to the age of the participant. The PPVT is standardized with a mean score of 100 and a standard deviation of 15.
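The discontinuation and raw-score rules above amount to two small computations. This is a minimal sketch of those rules as described in the text, with illustrative numbers; it is not the publisher's scoring software.

```python
# Sketch of the PPVT-IV stopping rule and raw-score rule described above.
def should_discontinue(responses):
    """responses: list of booleans (True = correct) for one 12-item section.
    Testing stops once eight errors occur within the section."""
    return responses.count(False) >= 8

def ppvt_raw_score(ceiling_item, total_errors):
    """Raw score = ceiling item number minus total errors in the assessment."""
    return ceiling_item - total_errors
```

For example, a child whose ceiling item is 120 with 15 total errors would receive a raw score of 105, which would then be converted to an age-normed standard score via the manual's tables.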
Comprehensive Test of Phonological Processing (CTOPP)
The CTOPP is a standardized measure of phonological processing comprising several subtests (Wagner, Torgesen, & Rashotte, 1999). For the present study, we used the Phonological Awareness subtest, which is composed of two tasks: elision and blending words. During the elision task, children heard a word and then had to repeat it while omitting one of its sounds. For example, when hearing the word “tiger,” participants had to repeat it without the “g” (i.e., \tī-ər\). During the blending-words task, participants combined various sounds in order to form a word. Participants heard a word broken up into individual sounds (e.g., /t/ + /oi/) and had to merge the sounds to form the given word (i.e., “toy”). Raw scores were calculated by counting the number of correct responses on each task (i.e., elision and blending). To calculate the standard scores, the children’s scores on the two tasks were added together to create a total raw score for phonological awareness. This score was then transformed into a standardized score, as indicated by the instruction manual (Wagner et al., 1999), based on the relevant age group (i.e., 5–11 years). As with the PPVT, the composite score for the CTOPP is standardized with a mean score of 100 and a standard deviation of 15.
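The scoring steps above can be sketched in two lines. This is an illustrative outline only: the actual raw-to-standard conversion comes from the test manual's age-band tables, which are represented here by a caller-supplied lookup.

```python
# Sketch of the CTOPP scoring described above. The standard-score conversion
# table belongs to the manual; `age_norms` is a stand-in supplied by the caller.
def ctopp_raw(elision_correct, blending_correct):
    """Total raw score is the sum of the two task raw scores."""
    return elision_correct + blending_correct

def ctopp_standard(raw, age_norms):
    """age_norms: dict mapping raw score -> standard score for the child's
    age band (values would come from the manual, not this sketch)."""
    return age_norms[raw]
```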
Procedure and coding
After receiving parental consent, children were tested individually during school hours. A female experimenter tested each participant in a small, quiet room at the school. Children were tested during two sessions on two different days. Before each session, participants were informed that they could stop the session and return to their classroom without any consequence at any time. After child assent was given, the testing session began. Each session lasted approximately 20 min.
For both the DCCS and LSS tasks, children were seated in front of the touch-screen monitor and alongside an experimenter who was positioned in front of the laptop. The DCCS and LSS tasks were administered on different days in order to reduce the possibility that performance on one task would directly influence performance on the subsequent task. As a result, children were given the DCCS and the LSS in two different sessions with at least a 1-week gap between them. The order in which the tasks were given (i.e., LSS first, DCCS second, or vice versa) was counterbalanced across participants.
LSS task
For the LSS task, children were seated in front of the touch-screen monitor, next to the experimenter. The experimenter initiated the task by depressing a key on the laptop connected to the monitor. First, children were introduced to the three “gobblers.” Children were told that the object gobbler was moving and needed help packing the correct objects in his box. The color gobbler was painting a picture, but was running low on paint and needed help collecting colors for her paint palette. The mismatch gobbler liked to trick people and was trying to prevent the other gobblers from being helped. Second, children were informed that it was their job to help the object and color gobblers while avoiding being tricked by the mismatch gobbler (see Appendix C for the instructions). Finally, children were told that they were going to see a picture and hear a word. If the word matched the color of the picture, they were to touch the color gobbler. If the word matched the object, they were to touch the object gobbler. However, since the mismatch gobbler liked to switch the pictures around, sometimes the word would not match the object or the color, so they were to “give it back to him” by touching the mismatch gobbler.
Demonstration phase
After the rules of the game were explained, the experimenter invited the participant to play the game with her for four demonstration trials (color match, object match, color mismatch, and object mismatch examples). During this phase, the experimenter narrated the task and asked the child for his/her help.
Practice phase
After the demonstration phase, children were initially given 16 practice trials (4 from each label type category). The experimenter watched and recorded the numbers of correct and incorrect responses. If a child responded correctly on at least 10 of the 16 practice trials (62.5%), he or she proceeded to the test phase. If a child got fewer than 10 correct, he or she was reminded of the instructions and allowed to complete 8 more practice trials that were identical to 8 of the 16 trials previously seen during practice. It should be noted that 70% (i.e., 65 of 93) of the participants met or exceeded this criterion. Only 24 of the 93 children completed the additional practice trials, over half of whom were kindergartners (n = 14). Of these 24 children, 4 (3 kindergartners, 1 second grader) were removed from all analyses for the LSS because they failed to respond correctly on at least 5 of the 8 additional trials (62.5%).
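The inclusion logic of the practice phase can be summarized as a small decision function. This is a sketch of the rules stated above (10 of 16 on the first block, 5 of 8 on the extra block); the function name and return labels are invented for the example.

```python
# Sketch of the practice-phase inclusion rules described above.
def practice_outcome(first_block_correct, extra_block_correct=None):
    """Returns 'pass', 'retry' (child needs 8 more practice trials),
    or 'exclude' (child is dropped from the LSS analyses)."""
    if first_block_correct >= 10:          # 10 of 16 = 62.5% criterion
        return "pass"
    if extra_block_correct is None:
        return "retry"
    return "pass" if extra_block_correct >= 5 else "exclude"  # 5 of 8 = 62.5%
```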
Test phase
After the practice phase, children proceeded to the test phase. The experimenter told participants that it was now their turn to play the game by themselves with the headphones on. Children were informed that the experimenter could not help them during the task because she was unable to hear the words. If they did not hear or know a word, they were instructed to make their best guess and keep going. Once children put on the headphones, a volume check trial was conducted.
During this volume check trial, a prerecorded audio file instructed participants to touch a specific gobbler (e.g., “touch the object gobbler”), which was also designated by an arrow. Once the participants had correctly touched each of the three gobblers, as instructed by the audio and arrows, the experimenter removed their headphones and asked if the volume was at an appropriate level. Next, the experimenter reminded participants about the rules of the game and informed them that their responses would be timed, so they should try to make their choices as quickly as possible. Once a participant acknowledged that he or she was ready to play the game, the test phase was administered.
During the test phase, participants completed 20 trials. The presentation order of the trials was randomly determined. While the mismatch gobbler was always on the bottom center of the screen, the left/right orientations of the object gobbler and the color gobbler were also counterbalanced across all participants (see Fig. 2). In order for a response to be recorded, the participant had to touch somewhere within a 3-in. radius of a gobbler. A response made outside of this radius was recorded as “no response.” A trial did not proceed unless a touch somewhere on the screen was registered. The entire task took approximately 5–7 min to complete.
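The response-registration rule above is essentially a circular hit test. The sketch below is illustrative only: coordinates are in inches and the gobbler positions are made-up values, not the actual screen layout.

```python
# Illustrative hit test for the 3-in. response radius described above.
# Gobbler center coordinates are hypothetical.
import math

def registered_response(touch, gobblers, radius=3.0):
    """touch: (x, y) in inches; gobblers: dict name -> (x, y) center.
    Returns the touched gobbler's name, or 'no response' if the touch
    falls outside every gobbler's radius."""
    for name, (gx, gy) in gobblers.items():
        if math.hypot(touch[0] - gx, touch[1] - gy) <= radius:
            return name
    return "no response"
```

A touch landing between gobblers, outside all three radii, is still registered (so the trial advances) but is scored as “no response.”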
Coding
Children’s responses during the test phase were coded as correct, incorrect, or “no response.” Correct, incorrect, and no responses for each of the four sorting categories (i.e., color match, color mismatch, object match, object mismatch) were tallied separately, and these tallies were converted to percentage scores. The percentages of correct and incorrect responses in each sorting category were calculated by dividing the number of correct or incorrect responses by 5 (the number of trials in the category) minus the number of “no responses.” Across all categories, the total numbers of correct responses, incorrect responses, and no responses for the match and mismatch trials were also calculated. Overall percentage accuracy was calculated by dividing the number of correct responses by 20 (the total number of test trials) minus the total number of “no responses.” Reaction time, measured in milliseconds from the initial presentation of the object and auditory label to the response/selection on the touch screen, was recorded via the E-Prime software.
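The percentage computations above, with “no responses” removed from the denominator, can be sketched as follows. Function and parameter names are invented for the example.

```python
# Sketch of the LSS percentage scoring described above. Each sorting category
# has five trials; 'no response' trials are dropped from the denominator.
def category_percentages(correct, incorrect, no_response, trials=5):
    """Returns (pct_correct, pct_incorrect) for one sorting category."""
    scorable = trials - no_response
    return correct / scorable * 100, incorrect / scorable * 100

def overall_accuracy(total_correct, total_no_response, total_trials=20):
    """Percentage accuracy over all test trials, excluding 'no responses'."""
    return total_correct / (total_trials - total_no_response) * 100
```

For instance, a category with 3 correct, 1 incorrect, and 1 “no response” yields 75% correct, since only 4 trials are scorable.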
DCCS task
For the DCCS, we followed the procedure outlined by Zelazo (2006), except that we created a computerized version using children’s fingerpresses on the touch-screen monitor as the behavioral response. Following Zelazo’s (2006) protocol, children were given two demonstration trials during which they were introduced to the stimulus pairs (e.g., “Here’s a red truck and here’s a blue rabbit”) and told about the sorting game (e.g., “This is the shape game”; see Zelazo, 2006, for detailed instructions). On the computer screen, the two pictures (i.e., rabbit and truck) were presented side by side at the top of the screen. At the bottom of the screen, a new “card” (i.e., a picture of a truck or rabbit) was presented (see Fig. 1). During this demonstration phase, the experimenter sorted the upcoming pictures to the appropriate place on the monitor while explaining these actions (e.g., “See, here’s a rabbit. So it goes here” [touches rabbit on left side of monitor]). After this demonstration phase, children proceeded to the preswitch phase.
During the preswitch phase, children were given six trials to sort by the predefined rule (e.g., shape game = rabbits on the right, trucks on the left). The presentation of the six trials was randomized. After completing the preswitch phase, children proceeded to the postswitch phase. During this phase, children were instructed to sort again, but this time by a different rule (e.g., sort by color instead of shape). Again, the presentation of the six postswitch trials was randomized. After completing the six postswitch trials, children proceeded on to the border version of the task.
During the border version of the task, children were asked to sort the cards that had a border by color and the cards with no border by shape (see Fig. 1). For example, if the picture of the red rabbit had a border around it, children were to play the “color game.” However, if there was no border, children were to play the “shape game.” After these instructions were explained to the participants, the experimenter encouraged them to do their best and to always make a selection, even if they were unsure of the answer. At this point, the experimenter answered any questions, then instructed the participant to put on the headphones to begin the task. Before each trial, prerecorded verbal instructions reminding the participant how to sort the cards were repeated through the headphones (Zelazo, 2006). A trial ended only when the participant made a selection. The task was complete after the child completed 12 trials (see Zelazo, 2006, for a detailed description of the task). For the present study, the left/right orientations of the objects (rabbit/truck) on the monitor and the colors (red/blue) of each object were counterbalanced across all participants. The entire task took approximately 5–7 min to complete.
Coding
For the preswitch phase, the numbers of correct and incorrect responses out of six trials were recorded. For the postswitch phase, the numbers of correct responses and perseverative errors (i.e., selections made based on the initial rule) out of six trials were recorded. Children who responded correctly on at least five trials were classified as “passing” the postswitch phase (Zelazo, 2006). Using this criterion, 12 children did not pass this phase and thus did not continue on to the border version of the task. For the border version of the task, the numbers of correct and incorrect responses out of 12 trials were recorded. Children who responded correctly on at least 9 of the 12 trials (75%) were classified as “passing” the task (Zelazo, 2006). For the DCCS, reaction time (in milliseconds) was coded from the presentation of the object or “card” to the instance of a registered touch/selection and was recorded via the E-Prime software.
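The two DCCS pass criteria described above reduce to simple thresholds. This sketch restates the criteria from the text (at least 5 of 6 postswitch trials; 9 of 12 border trials); the function names are invented.

```python
# Sketch of the DCCS pass criteria described above (following Zelazo, 2006).
def passes_postswitch(correct_of_6):
    """Pass requires at least 5 of 6 correct postswitch trials."""
    return correct_of_6 >= 5

def passes_border(correct_of_12):
    """Pass requires at least 9 of 12 (75%) correct border trials."""
    return correct_of_12 >= 9
```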