Behavior Research Methods, Volume 49, Issue 6, pp 2093–2112

BACS: The Brussels Artificial Character Sets for studies in cognitive psychology and neuroscience

  • Camille Vidal
  • Alain Content
  • Fabienne Chetail


Written symbols such as letters have been used extensively in cognitive psychology, whether to understand their contributions to written word recognition or to examine the processes involved in other mental functions. Sometimes, however, researchers want to manipulate letters while removing their associated characteristics. A powerful solution to do so is to use new characters, devised to be highly similar to letters, but without the associated sound or name. Given the growing use of artificial characters in experimental paradigms, the aim of the present study was to make available the Brussels Artificial Character Sets (BACS): two full, strictly controlled, and portable sets of artificial characters for a broad range of experimental situations.


Keywords: Artificial characters · Letters · Uppercase/lowercase · Similarity
Whatever the differences between characters across scripts (e.g., ξ, a, and 也 in Greek, English, and Chinese, respectively, or the symbols of the Thai and Cherokee scripts), written symbols constitute the basic elements of any transcription of language into print. Among the different types of characters, letters (i.e., the elements of alphabetic scripts) have been under deep scrutiny, be it to understand the processes by which the activation of letter representations enables readers to recognize words (e.g., McClelland & Rumelhart, 1981) or to understand how letter feature analysis leads to the activation of abstract letter identities (e.g., Grainger, Rey, & Dufau, 2008). However, beyond the field of visual word recognition, letters are also frequently used to examine processes involved in other mental functions (e.g., global/local processing in visual perception: Navon, 1977; memory span: Pollack, 1953; attentional blink: Raymond, Shapiro, & Arnell, 1992). This is because letters are simple objects, highly familiar to individuals, without associated meaning, and easy to manipulate. Sometimes, however, researchers want to manipulate such simple objects while removing their associated characteristics, such as shape or sound. To do so, new characters are used, which are either borrowed from scripts unknown to the participants (e.g., Thai for monolingual French speakers) or devised by the researchers (referred to as pseudoletters, artificial characters, or false fonts; see Fig. 1).
Fig. 1

Examples of artificial characters used in previous studies (from left to right: Yoncheva et al., 2010; Bitan et al., 2003; Brooks, 1977; Williams, 1969; Jeffrey et al., 1967; and Taylor et al., 2011)

Given the growing use of artificial scripts in cognitive sciences, the aim of the present study was to generate and make available a full, strictly controlled, and portable set of artificial characters. In the following, we first review the different types of studies using unknown or artificial characters, and we then present the critical elements to take into consideration when devising and using a set of artificial characters.

Using unknown and artificial characters: State of the art

Researchers resort to unfamiliar symbols in three main situations: (1) to understand letter/word recognition processes, (2) to create a control condition in experiments involving letters, and (3) to investigate nonlinguistic learning processes.

Letter/word recognition

Unsurprisingly, most of the studies using artificial characters are found in the field of psycholinguistics. The starting point was the hot debate about reading instruction (Valentine, 1913): Should reading be taught by means of a phonic method (systematic teaching of print-to-sound mappings) or a whole-word method (teaching associations between orthographic word forms and meaning, without the code being explicitly provided)? Artificial or unknown scripts started to be used in the 1960s to investigate this issue, by manipulating print-to-sound mapping. Typically, each character was mapped onto a phoneme or a syllable of the language (most of the time a phoneme, to mimic the print-to-sound mapping of English), so that word-like pronunciations could be generated from groups of characters. This mapping either was or was not explicitly taught to new readers. The first studies showed that explicit teaching of print-to-sound correspondences facilitates novel word reading in the unfamiliar script (e.g., Bishop, 1964, in adults; Jeffrey & Samuels, 1967, in children), thus supporting the phonic method. Follow-up studies were mostly run with adults (who could be extensively trained and who quickly became able to use a completely new alphabet) and largely confirmed the first results (e.g., Baron & Hodge, 1978; Bitan & Booth, 2012; Bitan & Karni, 2003, 2004; Bitan, Manor, Morocz, & Karni, 2005; Brooks, 1977, 1978; Yoncheva, Blau, Maurer, & McCandliss, 2010; Yoncheva, Wise, & McCandliss, 2015). Along the same lines, artificial characters were used to examine the impact of letter-sound or letter-name knowledge (e.g., Chisholm & Knafle, 1975; Jenkins, Bausell, & Jenkins, 1972; Samuels, 1972) and phonetic feature knowledge (e.g., Byrne, 1984; Byrne & Carroll, 1989) on reading acquisition. Other factors potentially influencing learning were also examined (e.g., letter discrimination: Williams, 1969; the grain size of print-to-sound mappings: Hirshorn & Fiez, 2014).

Gradually, unknown and artificial scripts came to be used differently. The aim was no longer to simulate reading acquisition per se (which is in fact hardly possible; see Knafle & Legenza, 1978, for a discussion), but rather to examine the developmental course of letter string processing (e.g., acquisition of visual expertise in reading: Maurer, Blau, Yoncheva, & McCandliss, 2010; development of high-quality lexical representations: Hart & Perfetti, 2008; letter position coding: García-Orza, Perea, & Muñoz, 2010) or to investigate in fine detail the processes that occur during letter/word processing (e.g., effects of orthographic or graphotactic regularities: Samara & Caravolas, 2014; Singer, 1980; Mason & Katz, 1976; print-to-sound consistency effects: Taylor, Plunkett, & Nation, 2011; influence of first-language characteristics on the acquisition of a second language: Ehrich & Meuter, 2009; Meuter & Ehrich, 2012; influence of handwriting knowledge on letter recognition: Longcamp, Boucard, Gilhodes, & Velay, 2006). For such studies, the relevance of using unknown or artificial characters lies in the possibility of investigating issues in a "pure" way, in the sense that the degree of familiarity with the script is fully controlled and that it is easier to avoid confounds that are inevitable with natural stimuli. We know, for example, that in real orthographies frequent words entail frequent letter clusters. Because of this confound, it can be tricky to examine pure effects of cluster frequency (i.e., disentangled from word frequency effects; see Chetail, 2015). With an artificial script, on the contrary, it is possible to devise combinations of artificial characters made of either rare or recurrent character clusters, while holding constant the frequency at which each artificial word is presented to the participants.
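The kind of design just described, in which cluster frequency is manipulated while word frequency is held constant, can be sketched in a few lines of Python. Everything here is hypothetical: the character labels, the choice of clusters, and the four-character word length are illustrative, not the actual BACS materials.

```python
import random

# Hypothetical ten-character artificial alphabet, labelled c1..c10.
alphabet = [f"c{i}" for i in range(1, 11)]

# Two "frequent" clusters recur across several words; two "rare"
# clusters each appear in a single word.
frequent = [("c1", "c2"), ("c3", "c4")]
rare = [("c5", "c6"), ("c7", "c8")]

words = []
# Three distinct words per frequent cluster (high cluster frequency).
for cluster in frequent:
    for filler in (("c9", "c10"), ("c10", "c9"), ("c9", "c5")):
        words.append(cluster + filler)
# One word per rare cluster (low cluster frequency).
for cluster in rare:
    words.append(cluster + ("c9", "c10"))

# Present every word equally often: cluster frequency varies across
# words while word frequency is held constant.
presentation_list = words * 10
random.seed(0)
random.shuffle(presentation_list)
```

Each of the eight artificial words appears exactly ten times in the presentation list, so any effect of the frequent versus rare clusters cannot be attributed to differences in word frequency.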

More recently, there has been a renewal of interest in artificial scripts, combined with the development of neuroimaging techniques (especially electroencephalography and functional magnetic resonance imaging). Using characters unfamiliar to readers makes it possible to precisely track the development of the neural networks underpinning letter and written word recognition, from complete lack of knowledge of the script to high familiarity (e.g., Callan, Callan, & Masaki, 2005; Moore, Brendel, & Fiez, 2014; Xue, Chen, Jin, & Dong, 2006). Brain plasticity associated with the development of orthography–phonology relationships has also been examined (e.g., Hashimoto & Sakai, 2004), as has the impact of script characteristics on neural activation during reading (e.g., Mei et al., 2013).

Control condition

In other studies, researchers have used unfamiliar characters without these characters being the focus of interest. For example, to investigate the mechanisms of letter perception (in real scripts), one can use an alphabetic decision task (e.g., Cosky, 1976; Marzouki, Grainger, & Theeuwes, 2007; New & Grainger, 2011). Symbols are presented (either letters or unknown characters), and participants have to decide whether each symbol is a letter of the Latin alphabet. With this task, New and Grainger (2011) tested the effect of letter frequency on letter recognition. The artificial characters were therefore used only as filler items, for the negative responses.

More generally, pseudoletter strings are frequently used to provide a control condition (usually referred to as a false-font condition). In this case, the experiment deals with the processing of real letters or written words, and pseudoletters are used as a baseline to control for the task execution processes that are not specific to real letters/words (e.g., detection of visual features; Ben-Shachar, Dougherty, Deutsch, & Wandell, 2007; Longcamp, Anton, Roth, & Velay, 2003; Turkeltaub, Gareau, Flowers, Zeffiro, & Eden, 2003). Another reason to use unknown characters in the control condition is that doing so reduces familiarity with the symbols while maintaining visual characteristics identical to those of the letters used in the experimental conditions (e.g., Chanceaux, Mathôt, & Grainger, 2014; Petersen, Fox, Snyder, & Raichle, 1990; Vinckier et al., 2007).

Importantly, the use of false fonts as a baseline or filler condition is not restricted to experiments on letter/word processing (e.g., Awh & Jonides, 2001; de Gardelle, Sackur, & Kouider, 2009; Maki & Mebane, 2006). For example, to show that the richness of phenomenal experience (i.e., the feeling that our perceptual experience is richer than what we can express) is an illusion, de Gardelle et al. (2009) used a classical partial-report paradigm with letters. Participants were briefly presented with a matrix of letters and they had to report the cued row. In some trials, the uncued rows contained pseudoletters. The results of free reports showed that in these rows, participants had the illusory impression that there were only letters.

Nonlinguistic learning

Unknown characters are also used to investigate learning beyond print, because they offer a good alternative to the objects, letters, or digits traditionally used in learning paradigms. In the field of concept learning, for example, it is well known that people can learn a new concept from a few examples (see Feldman, 1997), leading to the acquisition of rich representations that enable them to generate new exemplars and parse objects. In several concept-learning experiments, artificial characters were used to understand how people learn categories (e.g., Lake, Salakhutdinov, & Tenenbaum, 2015; see also Feldman, 1997). For example, participants are first exposed to a target image and to new examples of that character, and they are then asked to devise a new exemplar or to parse the exemplars into parts. The reason to use pseudoletters in such experiments is that they are cognitively natural and can serve as a benchmark for comparing learning algorithms. Moreover, parsing (on the basis of visual features) can easily be tested, as can generalization (be it by humans or machines; Lake et al., 2015). Yet another example comes from the field of sequence learning, dedicated to understanding how we use the sequences of information or actions to which we are exposed. Sequence learning is also used to examine the acquisition of new skills, such as the capacity to draw inferences. For instance, participants first learn the sequential relations between adjacent elements (e.g., A < B, B < C, C < D), and they are then tested on their capacity to infer the transitive relations between nonadjacent stimulus elements (e.g., B < D; see, e.g., Van Opstal, Verguts, Orban, & Fias, 2008). In such experiments, letters or digits are frequently used, but to avoid the highly reinforced knowledge of the ordinal sequence of numbers and letters acquired throughout the lifespan, one can instead use pseudoletters or shapes (e.g., Acuna, Sanes, & Donoghue, 2002; Van Opstal et al., 2008).
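The transitive-inference design sketched above (training on adjacent pairs, testing on nonadjacent ones) can be made concrete with a short Python sketch. The six-item sequence and its labels are hypothetical placeholders for pseudoletter stimuli:

```python
import itertools
import random

# Hypothetical ordered sequence of six pseudoletter stimuli: s1 < s2 < ... < s6.
items = ["s1", "s2", "s3", "s4", "s5", "s6"]

# Training phase: only adjacent pairs are presented (s1 < s2, s2 < s3, ...).
training_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]

# Test phase: all nonadjacent pairs probe transitive inference (e.g., s2 < s4).
test_pairs = [(a, b) for a, b in itertools.combinations(items, 2)
              if items.index(b) - items.index(a) > 1]

random.shuffle(test_pairs)
```

With six items, this yields five training pairs and ten nonadjacent test pairs, none of which were seen during training.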

Why and how to devise a set of artificial characters?

The previous overview showed the advantages of using artificial characters, whatever the domain of research. In the field of visual word recognition specifically, designing experiments with an artificial script is a unique way to thoroughly examine the developmental course of a given orthographic process or effect that is stable in adults. Children could still be tested in their native writing system, but this is often made difficult by the presence of developmental confounds and by the practical difficulties of running training experiments with children. In addition, using an artificial script enables one to perfectly control the amount of exposure to the symbols across participants, so that one can be sure that there is no difference in familiarity. It also makes it possible to independently manipulate variables that covary in real scripts and that are therefore hard to isolate in native-language studies. Moreover, it is easy to take into account the mapping of "artificial words" onto linguistic features (phonology, semantics), either to avoid confounds or to examine their impact, while generating a large number of stimuli. More generally, in any study including letter stimuli, unknown or artificial characters are an ideal control condition (as long as they have characteristics similar to those of letters; see below). Furthermore, they enable one to use letter-like stimuli while eliminating the knowledge associated with letters (e.g., shape, sound, ordinal arrangement).

Until now, the character sets used have varied strongly from one study to another, and there is no accepted rule of thumb for selecting or devising symbols. Sometimes the new characters are devised from a recombination of the features of real letters (e.g., Park et al., 2014; Stevens et al., 2013). Sometimes characters are simply borrowed from other, unfamiliar scripts (e.g., Bishop, 1964; Callan et al., 2005) or result from modifications of borrowed symbols (e.g., Williams, 1969). Sometimes pseudoletters are just nonalphanumeric symbols (e.g., *, /, ^), which do not necessarily entail letter features and are rather familiar to the participants (e.g., Bitan & Karni, 2003; Gombert & Peereman, 2001). Thus, the character sets used vary widely and are more or less similar to the native script of the participants. Furthermore, the characters used in previous studies are most often not available, so that replications are difficult and comparisons among studies are questionable (e.g., Knafle & Legenza, 1978). In the following, we highlight the characteristics that need to be considered when devising and using artificial characters. This enables us to present the main features of the Brussels Artificial Character Sets (BACS).

Configurations of strokes

Despite a great deal of variation, the characters of different writing systems share several properties. A cross-linguistic study comparing more than 100 alphabetic and nonalphabetic scripts showed that writing systems share a similar number of strokes per symbol, with three strokes per character on average (e.g., Changizi & Shimojo, 2005). Moreover, there is high redundancy within sets (around 50%), reflecting a tendency to reuse the same types of strokes rather than to create new ones. Along the same lines, Changizi, Zhang, Ye, and Shimojo (2006) showed that the topological configurations of strokes (i.e., the organization of strokes relative to each other) are very similar across writing systems. According to them, the high similarity of basic features and stroke configurations between strongly different scripts can be explained by the fact that characters are largely made of strokes that are commonly found in natural scenes, and that are thus easily processed by the visual system. The first criterion for devising BACS was therefore to meet these characteristics shared by most writing systems.

Similarity with the native script

As we already mentioned, characters borrowed from unknown scripts can be used (e.g., Thai characters for monolingual French speakers) rather than artificial symbols. In that case, the characters are necessarily made of attested configurations of strokes. However, as characters vary in complexity, the risk is to use symbols of higher complexity (e.g., a higher average number of strokes) than those of the native script. This can be an issue because characters more complex than those of the native script could alter processing relative to simpler symbols. Knafle and Legenza (1978) showed, for example, that the positive influence of letter-name knowledge on reading acquisition in English (see Levin, Shatil-Carmon, & Asif-Rave, 2006) was present in artificial scripts only when the characters were of similar complexity to the letters of the Latin alphabet. To meet the complexity and familiarity constraints, devising new characters (based on the native system) thus appears to be a good alternative. The BACS characters were therefore designed to share most of the characteristics of the Latin alphabet (note that the procedure described in the next section can be applied to any writing system).

BACS provides two sets of characters. In the first one (BACS-1), character strokes were borrowed from existing writing systems, and the characters were controlled overall relative to major features of the Latin alphabet. Thus, the set shares the same average number of strokes per character and the same number of different types of strokes as the Latin alphabet. In the second set (BACS-2), each character was matched with a Latin letter on size and number of strokes. Characters were also matched on the number of junctions (i.e., vertices; e.g., Lanthier, Risko, Stolz, & Besner, 2009; Szwed, Cohen, Qiao, & Dehaene, 2009; Szwed et al., 2011) and on the number of terminations (Fiset et al., 2008; Fiset et al., 2009). Some studies have shown that these characteristics are critical for letter identification (e.g., Fiset et al., 2008, 2009; Lanthier et al., 2009; Szwed et al., 2009, 2011), although this result has not been consistently replicated (e.g., Petit & Grainger, 2002; Rosa, Perea, & Enneson, 2016). Most characters were also matched on the presence/absence of axes of symmetry.

Note that although BACS-2 is more strictly matched to Latin letters than BACS-1 is, the characters of the latter set are more distinct from the Latin letters, which may be preferable for certain studies.

Similarity between characters

Similarity within the set of characters should also be taken into account when devising new symbols. In the Latin alphabet, as in any other system, some symbols are highly similar (e.g., O, Q), whereas others are very different (e.g., O, W). It is well known that similarity between characters influences their identification, with similar letters being less easily recognizable than dissimilar letters (see Mueller & Weidemann, 2012, for a review). An artificial set of characters mimicking a real script should therefore include both high- and low-similarity symbols. This was taken into account in BACS-1 and BACS-2. Furthermore, we provide objective measures of similarity (i.e., similarity matrices and clustering; Mueller & Weidemann, 2012; Podgorny & Garner, 1979; Simpson et al., 2013), so that researchers can easily select more similar or less similar characters.
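For readers who want to derive groupings from such similarity matrices themselves, the following sketch shows one standard approach: converting similarities to distances and applying average-linkage agglomerative clustering with SciPy. The 4 × 4 matrix is invented for illustration and is not BACS data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical mean similarity ratings (0-1) for four characters,
# already averaged over the two presentation orders of each pair.
sim = np.array([
    [1.0, 0.8, 0.2, 0.1],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])

# Convert similarity to distance, then cluster hierarchically.
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")

# Cut the tree into two groups of mutually similar characters.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # characters 1-2 and characters 3-4 fall in separate groups
```

Researchers could then sample stimuli from within one cluster (high similarity) or across clusters (low similarity), depending on the needs of the experiment.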

Script extensivity

Many studies with artificial characters have used restricted sets (e.g., only 6–12 characters; e.g., Bitan & Karni, 2004; Jeffrey & Samuels, 1967; Singer, 1980; Yoncheva et al., 2010). This may be sufficient for certain nonlinguistic studies, but it is not adequate when the aim is to closely reproduce situations of exposure to natural print. In real writing systems, the number of characters varies from 6 to 180, but only two writing systems (out of more than 100) have fewer than ten characters, and the average number is 32 (Changizi & Shimojo, 2005). In both sets, we therefore created a number of characters similar to the number of letters in the Latin alphabet (i.e., 24 for BACS-1 and 26 for BACS-2). An additional strength of our scripts, rendering them complete and unique, is that they contain three different series: uppercase and lowercase computerized characters (delivered as OpenType fonts) as well as lowercase handwritten characters. Furthermore, BACS-2 comes in one version with serifs and one without.

BACS: Presentation

For each set, three groups of characters were devised, corresponding to the three usual versions of alphabets: uppercase characters, lowercase computerized characters, and handwritten lowercase characters (see Appendices 1 and 2). The uppercase and computerized lowercase characters were generated with the FontCreator software (.otf format). The fonts can be used in text editors as well as in experimental programming software (e.g., PsychoPy: Peirce, 2007; Psychophysics Toolbox: Brainard, 1997). Size and colour can be changed, and bold and italic variants are available. The handwritten lowercase characters were created manually on a sheet of paper before being scanned. All the files are available online. In the following sections, we present the procedure followed to design the sets.


BACS-1

Uppercase characters

First, a standard character size was defined. The font "Courier New" was taken as a baseline, given that it is very frequently used in psycholinguistic experiments thanks to its monospaced format (i.e., all letters occupy the same horizontal space, including the space around the letter). The height and width of this font were used to define a frame within which the new characters were created (Fig. 2). All characters therefore have the same size. We then selected strokes on the basis of the definition used by Changizi and Shimojo (2005): a character stroke is a pencil trace that ends when the movement slows down to mark an angle or when the pencil is raised. The letter "A," for example, has three strokes, whereas the letter "S" has just one. The strokes we used were taken from real characters belonging to various script systems (e.g., the Latin alphabet, Chinese characters, and the Cyrillic alphabet). The new characters were then created, either by assembling these strokes or by modifying the shapes of existing foreign characters (Fig. 3). Moreover, the types of strokes used were chosen so that the total number of different strokes was similar to the total number of different strokes in the Latin alphabet. According to Changizi and Shimojo, parsing all the letters into strokes leads to 65 strokes, among which 17 different types can be distinguished, yielding a type/token ratio of 0.26. In the uppercase characters of BACS-1, the total number of strokes is 59, for 18 different types, leading to a similar type/token ratio, namely 0.3 (Fig. 4C, D). Finally, we ensured that the average numbers of strokes per character were matched between the two alphabets, with 2.50 strokes in the Latin alphabet and 2.46 in our set (Fig. 4B).
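The reported figures can be verified with a few lines of arithmetic; the counts below are simply those quoted in the text (26 Latin uppercase letters and 24 BACS-1 uppercase characters):

```python
# Reported stroke inventories (Changizi & Shimojo, 2005, for the Latin
# alphabet; the present article for BACS-1 uppercase characters).
scripts = {
    "Latin (26 chars)":  {"chars": 26, "tokens": 65, "types": 17},
    "BACS-1 (24 chars)": {"chars": 24, "tokens": 59, "types": 18},
}

for name, s in scripts.items():
    mean_strokes = s["tokens"] / s["chars"]   # average strokes per character
    ttr = s["types"] / s["tokens"]            # type/token ratio (stroke re-use)
    print(f"{name}: {mean_strokes:.2f} strokes/char, type/token = {ttr:.2f}")
```

This reproduces the 2.50 versus 2.46 strokes per character and the 0.26 type/token ratio for the Latin alphabet reported above.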
Fig. 2

Letters in Courier New font (upper line) compared to the BACS-1 characters (lower line)

Fig. 3

Characters from BACS-1 (on the right) created from existing characters (on the left)

Fig. 4

Strokes used for the BACS-1 uppercase characters. (A) Full characters. (B) Characters parsed into strokes. (C) The individual strokes used (type frequency). (D) The total strokes used (token frequency)

Handwritten lowercase characters

For lowercase characters, parsing Latin letters into strokes is less intuitive than for uppercase characters. For example, Changizi and Shimojo (2005) mentioned 14 different types of strokes, but it is not straightforward to isolate them. Hence, while paying attention to character complexity, we focused on other properties when creating the lowercase characters. First, artificial handwritten lowercase characters should be as easy to produce as Latin symbols. A critical factor in that respect is the direction of drawing. In the Latin alphabet, the great majority of letters are written from left to right, with the starting and stopping points on the line, so that it is not necessary to raise the pencil between letters. Hence, we devised the characters so that this characteristic was preserved (Fig. 5). Moreover, unlike the uppercase characters, the lowercase characters include ascending and descending strokes (13 out of 26 letters in the Latin alphabet do). Consistently, 13 of the 24 BACS-1 lowercase characters include such features (see Appendix 1). The same reference as for the uppercase characters was used to define the frame of character drawing (i.e., the size of letters in the Courier New font), except that letters with ascending or descending strokes have a different height than letters without such strokes. Finally, regarding similarities between uppercase and lowercase characters, most letters of the Latin alphabet differ strongly across cases (e.g., A/a, H/h, G/g), with only a few having similar shapes (e.g., Z/z, Y/y, U/u). In BACS-1, we took care to create some characters sharing a similar global form across cases, but overall there was no direct link between the uppercase and lowercase characters.
Fig. 5

Directions of drawing in the Latin alphabet (letter n, on the left) and in the handwritten lowercase characters of BACS-1 (on the right). Triangles and circles represent starting and stopping points, respectively

Computerized lowercase characters

The lowercase computerized characters were then derived from the handwritten lowercase characters, with some modifications. The line strokes included for character linking were removed (as when comparing the handwritten and computerized versions of the letter m), and the curved strokes that had been added to facilitate handwriting were removed or replaced by straight lines (as for the character corresponding to j). Finally, as in the Latin alphabet, some of the lowercase computerized characters differ from their handwritten versions (e.g., the character corresponding to r).


BACS-2

Contrary to BACS-1, BACS-2 was devised by directly pairing each character with a Latin letter. In addition to overall control of the type, number, and configuration of strokes, this set provides characters paired with letters on size, number of strokes, presence/absence of symmetry, number of junctions, and number of terminations. Furthermore, given the fairly high number of fonts with serifs, we devised for each case one version of the characters with serifs and one without.

Uppercase characters

As for BACS-1, the first step was to define the sizes of the characters. Here, however, the size was defined individually for each character. We used the actual height and width of the Latin letters (with and without serifs) written in the "Courier New" font to define the frame of each character (Fig. 6). Then, each character was designed so that it shared its number of strokes, junctions, and terminations, and its axis of symmetry (as well as its number of serifs, for the serif version), with its model. For example, to design the character corresponding to the letter "A," we used three strokes and organized them so that the character would entail three junctions, two terminations, and an axis of symmetry, as well as three serifs in the serif version (Fig. 7). Moreover, as in BACS-1, the types of strokes used were chosen so that the total number of different strokes was similar to the total number of different strokes in the Latin alphabet (Latin alphabet: 17 different types of strokes among 65, type/token ratio = 0.26; BACS-2: 18 different types of strokes among 65, type/token ratio = 0.28) (Fig. 8C, D). Note that for the letter "O," it was not possible to respect all these constraints, since O is the only possible shape made of one stroke with no junctions or terminations.
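The matching logic can be sketched as a simple feature comparison. The feature counts below mirror the "A" example in the text, but the dictionaries and the helper function are hypothetical illustrations, not part of the BACS materials:

```python
# Hypothetical feature descriptions for a Latin letter and its BACS-2
# counterpart; the attribute names mirror the constraints in the text.
letter_A = {"strokes": 3, "junctions": 3, "terminations": 2,
            "symmetry": True, "serifs": 3}
bacs2_A = {"strokes": 3, "junctions": 3, "terminations": 2,
           "symmetry": True, "serifs": 3}

def is_matched(letter, character, with_serifs=True):
    """Check that a character satisfies the BACS-2 matching constraints."""
    keys = ["strokes", "junctions", "terminations", "symmetry"]
    if with_serifs:
        keys.append("serifs")  # serif count only matters for the serif version
    return all(letter[k] == character[k] for k in keys)

print(is_matched(letter_A, bacs2_A))  # True
```

A character that diverged on any of these counts (say, two junctions instead of three) would fail the check and be redesigned.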
Fig. 6

Letters in Courier New font (upper lines) compared to the BACS-2 characters (lower lines)

Fig. 7

Examples of controls for the BACS-2 uppercase characters (Courier New font on the left, BACS on the right)

Fig. 8

Strokes used for the BACS-2 uppercase characters. (A) Full characters. (B) Characters parsed into strokes. (C) The individual strokes used (type frequency). (D) The total strokes used (token frequency)

Computerized lowercase characters

The computerized lowercase characters were created before the handwritten ones. We followed the same procedure as for the uppercase characters, first defining frames and then matching characters on the numbers of strokes, junctions, terminations, and serifs (except for the letter "o"; Fig. 9). In addition, characters were matched on ascending and descending strokes. Finally, when the uppercase and lowercase forms of a Latin letter were very similar (e.g., "J" and "j"), the corresponding characters were created consistently. In the end, the lowercase computerized letters of the Latin alphabet include 51 strokes, among which are 15 different types (type/token ratio = 0.3), and the lowercase computerized characters also include 51 strokes, among which are 16 different types (a similar type/token ratio; Fig. 10).
Fig. 9

Examples of controls for the BACS-2 computerized lowercase characters (Courier New font on the left, BACS on the right)

Fig. 10

Strokes used for the BACS-2 lowercase characters. (A) Full characters. (B) Characters parsed into strokes. (C) The individual strokes used (type frequency). (D) The total strokes used (token frequency)

Handwritten lowercase characters

Characters were derived from the computerized form without additional changes (see Appendix 2).

Character similarity measurements

Letter similarity can be a source of confusion when perceiving strings, leading to false recognitions (e.g., reporting P instead of R; see Mueller & Weidemann, 2012), but similarity between characters is inherent to any script, since the same strokes are used in several characters (cf. the type/token ratio in the Latin alphabet; e.g., Changizi et al., 2006). To mimic real scripts, BACS includes both similar and dissimilar characters. To enable researchers to precisely select groups of characters according to their similarity, and to facilitate cross-script comparisons, we provide objective measures of similarity here.

Among the different methods of measuring letter similarity (e.g., Bagnara, Boles, Simion, & Umiltà, 1983; Boles & Clifford, 1989; Mueller & Weidemann, 2012), we used a similarity judgment task with a rating scale (e.g., Podgorny & Garner, 1979; Simpson, Mousikou, Montoya, & Defior, 2013). In this task, two characters are presented, and participants have to assess how similar or dissimilar they are. This technique was favoured over others (e.g., speeded same–different matching) because it does not require rapid presentation. Although rapid presentation may be adequate for familiar symbols, which have robust memory representations, one cannot be sure that new characters would be precisely processed under such conditions.




Method

Participants

Separate groups of 31 and 75 students estimated the similarity of the characters of BACS-1 and BACS-2, respectively. For BACS-2, the pool of participants was divided into four groups of 18–19 participants, so that each participant was exposed to only one of the four versions of the set (sans-serif lowercase, sans-serif uppercase, serif lowercase, serif uppercase). All were native French speakers and reported having normal or corrected-to-normal vision. They received a small financial compensation for their participation. Four participants were excluded (n = 2 in BACS-2 sans-serif lowercase, n = 2 in BACS-2 sans-serif uppercase) because they misunderstood the instructions (e.g., wrong use of the scale, incomplete task); they were not considered in the following analyses.

Stimuli
For the first group of participants, the stimuli were the computerized BACS-1 characters (24 uppercase characters, 24 lowercase characters). The 576 (24 × 24) combinations of each case were presented, including both presentation orders of each pair as well as identical pairs, leading to a total of 1,152 trials. In the second group, each participant was exposed to one version of the computerized BACS-2 characters (26 sans-serif lowercase, 26 sans-serif uppercase, 26 serif lowercase, or 26 serif uppercase characters), again including both orders of each pair as well as identical pairs. This led to 676 (26 × 26) trials per participant.
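The trial counts above follow directly from taking all ordered combinations of the characters. A minimal sketch (in Python rather than the authors' R, with integer indices standing in for the BACS glyphs):

```python
from itertools import product

def make_trials(n_chars):
    """All ordered pairs of n_chars characters, including reversed and identical pairs."""
    return list(product(range(n_chars), repeat=2))

# BACS-1: 24 characters per case, hence 576 pairs per case
trials_one_case = make_trials(24)
assert len(trials_one_case) == 576
assert len(trials_one_case) * 2 == 1152  # lowercase + uppercase together

# BACS-2: 26 characters per version, hence 676 trials per participant
assert len(make_trials(26)) == 676
```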

Procedure
Participants were tested individually or in groups of up to six, in sessions lasting approximately 35 to 45 min. The task was programmed with the PsychoPy toolbox (Peirce, 2007, version 1.81). The session started with a familiarisation phase: each character was presented once on the screen and participants had to copy it by hand on paper. Before moving to the similarity judgment task, they received a sheet with all the characters and had to examine the whole set for 45 s. Then, on each trial, a pair of characters was displayed at the centre of the screen, together with a continuous rating scale (ranging from 0 to 1) at the bottom of the screen, until response. Participants were asked to judge to what extent the two characters were similar by placing the cursor on the scale, with 0 and 1 corresponding to very dissimilar and very similar characters, respectively. They were encouraged to use the whole scale, not just its extremes. For BACS-1, the 1,152 pairs were randomly distributed among 16 blocks of 72 trials, separated by brief breaks, mixing uppercase and lowercase pairs. For BACS-2, the 676 pairs were randomly distributed among 13 blocks of 52 trials. The order of presentation was randomized for each participant. The computer recorded the similarity score, corresponding to the distance from 0 to the position of the cursor on the scale, thus ranging from 0 to 1.
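The per-participant randomization into blocks can be sketched as follows. This is an illustration in Python, not the authors' PsychoPy code, and the `(case, a, b)` pair encoding is hypothetical:

```python
import random

def to_blocks(trials, block_size, seed=0):
    """Shuffle trials and split them into consecutive equal-sized blocks."""
    rng = random.Random(seed)
    shuffled = trials[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + block_size] for i in range(0, len(shuffled), block_size)]

# BACS-1: 576 lowercase + 576 uppercase pairs, mixed into 16 blocks of 72 trials
pairs = [(case, a, b) for case in ("lower", "upper")
         for a in range(24) for b in range(24)]
blocks = to_blocks(pairs, 72)
assert len(blocks) == 16 and all(len(b) == 72 for b in blocks)
```

For BACS-2, the same helper splits the 676 pairs into 13 blocks of 52 trials.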

Results
All the analyses were performed with the R software (R Development Core Team, 2015). Overall, the estimated similarities were low and very close across the lowercase and uppercase versions (BACS-1: .23 and .26, respectively; BACS-2 sans: .22 and .26; BACS-2 serif: .24 and .31). Two checks supported the validity of the ratings. First, pairs of identical characters were judged as virtually identical (mean estimated similarity of .99 in all versions of BACS-1 and BACS-2). Second, we found very high correlations between the estimated similarities for the two presentation orders of each pair—namely, r = .96 and .95 for lower- and uppercase, respectively, in BACS-1; .95 and .93 for BACS-2 sans; and .93 and .93 for BACS-2 serif (Fig. 11).
Fig. 11

Scatterplots of the estimated similarities for the different versions of BACS, plotting each pair presented in one order against the same pair presented in the reverse order

Distribution plots show that the estimates are not uniformly distributed (Fig. 12C–H). The distributions are right-skewed, with most of the observations falling in the lowest fifth of the scale. The results are highly similar for lower- and uppercase characters, for BACS-1 and BACS-2, and for the versions with and without serifs. Critically, the distributions are also very close to those computed for the Latin alphabet (data from Simpson et al., 2013), despite a slightly different design (a discrete scale from 1 to 7; Fig. 12A and B).
Fig. 12

Histograms of estimated similarity scores, by case. (A and B) Latin alphabet (N = 650, values ranging from 1 to 7). (C and D) BACS-1 (N = 552, values ranging from 0 to 1). (E–H) BACS-2 (N = 650, values ranging from 0 to 1). The data for the Latin alphabet come from Simpson et al. (2013)

On the basis of the participants’ estimates, we computed similarity matrices, displayed as heat maps (Fig. 13). The numerical similarity tables are available in Appendix 3. From these matrices, we derived distance matrices [dist() function in R] and then performed hierarchical cluster analyses [hclust() function]. The results are presented in Fig. 14. As in the Latin alphabet, characters sharing many features or devised to mirror each other were estimated as highly similar.
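The dist()/hclust() pipeline can be mirrored in Python. The sketch below converts similarity to distance (d = 1 − s) and runs a hand-rolled single-linkage merge on hypothetical toy data; in practice one would use scipy.cluster.hierarchy.linkage, but the point here is only to show that high rated similarity translates into early merging in the dendrogram:

```python
def similarity_to_distance(sim):
    """Distance matrix as 1 - similarity, analogous to dist() on the ratings."""
    return {k: 1.0 - v for k, v in sim.items()}

def single_linkage(dist, items):
    """Minimal agglomerative clustering (single linkage), akin to hclust()."""
    clusters = [frozenset([i]) for i in items]
    merges = []
    while len(clusters) > 1:
        # find the pair of clusters with the smallest cross-cluster distance
        a, b = min(
            ((a, b) for i, a in enumerate(clusters) for b in clusters[i + 1:]),
            key=lambda ab: min(dist[(x, y)] for x in ab[0] for y in ab[1]),
        )
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        merges.append((set(a), set(b)))
    return merges

# Toy symmetric ratings: A and B are rated as highly similar, C stands apart
sim = {("A", "B"): .8, ("B", "A"): .8, ("A", "C"): .2, ("C", "A"): .2,
       ("B", "C"): .1, ("C", "B"): .1}
merges = single_linkage(similarity_to_distance(sim), ["A", "B", "C"])
assert merges[0] == ({"A"}, {"B"})  # the most similar pair clusters first
```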
Fig. 13

Similarity matrices for both BACS-1 and BACS-2 according to case

Fig. 14

Dendrograms of character similarity for BACS-1 and BACS-2, uppercase and lowercase sets. The lower the height for each pair, the stronger the estimated similarity

Conclusion
BACS provides two original collections of artificial characters devised to closely match the visual characteristics of Latin letters. In both sets, the total number and types of strokes are similar to what is found in scripts overall, and in the Latin alphabet in particular. Moreover, in BACS-2, each character is paired with a Latin letter for its number of strokes, junctions, terminations, and serifs, and the distribution of stroke types is similar to that of the Latin alphabet. Furthermore, the similarity matrices confirmed that, as in the Latin alphabet, most characters are relatively dissimilar, with a few exceptions, so that it is possible to select very similar symbols as well as very dissimilar ones.

BACS is therefore well suited to investigating letter and word processing through artificial scripts. It enables one to create new print-to-sound correspondence systems (alphabetic or not), and thus to examine the developmental course of a given orthographic process or effect, to precisely control or manipulate print exposure, or to disentangle variables that are confounded in real scripts. Furthermore, the three versions of the two sets (uppercase, handwritten lowercase, and computerized lowercase) strengthen the similarity with existing scripts. The different versions can be used to address specific issues, such as upper-/lowercase learning or the development of abstract letter representations; alternatively, one can choose a subset of symbols among the 3 × 24 characters. More generally, thanks to its precise controls, BACS can be used in any experimental situation requiring either the manipulation of letter-like stimuli or a baseline condition for letters (e.g., Lake et al., 2015; Longcamp et al., 2003; Vinckier et al., 2007). Finally, because BACS is freely available to the research community and easy to use when designing experiments, its adoption should improve comparability between studies using artificial characters.


Author note

The work reported here was supported by the Interuniversity Attraction Poles Program of the Belgian Science Policy Office (Project P7/33). We adhere to the PRO initiative for open science. All of the files (e.g., BACS fonts, raw data, analysis scripts, similarity matrices) are publicly available. We thank the anonymous reviewers for their helpful comments on an earlier version of the manuscript.

References
  1. Acuna, B. D., Sanes, J. N., & Donoghue, J. P. (2002). Cognitive mechanisms of transitive inference. Experimental Brain Research, 146, 1–10. doi: 10.1007/s00221-002-1092-y
  2. Awh, E., & Jonides, J. (2001). Overlapping mechanisms of attention and spatial working memory. Trends in Cognitive Sciences, 5, 119–126. doi: 10.1016/S1364-6613(00)01593-X
  3. Bagnara, S., Boles, D. B., Simion, F., & Umiltà, C. (1983). Symmetry and similarity effects in the comparison of visual patterns. Perception & Psychophysics, 34, 578–584. doi: 10.3758/BF03205914
  4. Baron, J., & Hodge, J. (1978). Using spelling–sound correspondences without trying to learn them. Visible Language, 12, 55–70.
  5. Ben-Shachar, M., Dougherty, R. F., Deutsch, G. K., & Wandell, B. A. (2007). Differential sensitivity to words and shapes in ventral occipito-temporal cortex. Cerebral Cortex, 17, 1604–1611. doi: 10.1093/cercor/bhl071
  6. Bishop, C. H. (1964). Transfer effects of word and letter training in reading. Journal of Verbal Learning and Verbal Behavior, 3, 215–221. doi: 10.1016/S0022-5371(64)80044-X
  7. Bitan, T., & Booth, J. R. (2012). Offline improvement in learning to read a novel orthography depends on direct letter instruction. Cognitive Science, 36, 896–918. doi: 10.1111/j.1551-6709.2012.01234.x
  8. Bitan, T., & Karni, A. (2003). Alphabetical knowledge from whole words training: Effects of explicit instruction and implicit experience on learning script segmentation. Cognitive Brain Research, 16, 323–337. doi: 10.1016/S0926-6410(02)00301-4
  9. Bitan, T., & Karni, A. (2004). Procedural and declarative knowledge of word recognition and letter decoding in reading an artificial script. Cognitive Brain Research, 19, 229–243. doi: 10.1016/j.cogbrainres.2004.01.001
  10. Bitan, T., Manor, D., Morocz, I. A., & Karni, A. (2005). Effects of alphabeticality, practice and type of instruction on reading an artificial script: An fMRI study. Cognitive Brain Research, 25, 90–106. doi: 10.1016/j.cogbrainres.2005.04.014
  11. Boles, D. B., & Clifford, J. E. (1989). An upper- and lowercase alphabetic similarity matrix, with derived generation similarity values. Behavior Research Methods, Instruments, & Computers, 21, 579–586. doi: 10.3758/BF03210580
  12. Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. doi: 10.1163/156856897X00357
  13. Brooks, L. (1977). Visual pattern in fluent word identification. In A. S. Reber & D. L. Scarborough (Eds.), Toward a psychology of reading (pp. 143–181). Hillsdale, NJ: Erlbaum.
  14. Brooks, L. (1978). Non-analytic correspondences and pattern in word pronunciation. In J. Requin (Ed.), Attention and performance VII (pp. 163–177). Hillsdale, NJ: Erlbaum.
  15. Byrne, B. (1984). On teaching articulatory phonetics via an orthography. Memory & Cognition, 12, 181–189. doi: 10.3758/BF03198432
  16. Byrne, B., & Carroll, M. (1989). Learning artificial orthographies: Further evidence of a non-analytic acquisition procedure. Memory & Cognition, 17, 311–317. doi: 10.3758/BF03198469
  17. Callan, A. M., Callan, D. E., & Masaki, S. (2005). When meaningless symbols become letters: Neural activity change in learning new phonograms. NeuroImage, 28, 553–562. doi: 10.1016/j.neuroimage.2005.06.031
  18. Chanceaux, M., Mathôt, S., & Grainger, J. (2014). Effects of number, complexity, and familiarity of flankers on crowded letter identification. Journal of Vision, 14(6), 7. doi: 10.1167/14.6.7
  19. Changizi, M. A., & Shimojo, S. (2005). Character complexity and redundancy in writing systems over human history. Proceedings of the Royal Society B, 272, 267–275. doi: 10.1098/rspb.2004.2942
  20. Changizi, M. A., Zhang, Q., Ye, H., & Shimojo, S. (2006). The structures of letters and symbols throughout human history are selected to match those found in objects in natural scenes. American Naturalist, 167, 117–139. doi: 10.1086/502806
  21. Chetail, F. (2015). Reconsidering the role of orthographic redundancy in visual word recognition. Frontiers in Psychology, 6, 645. doi: 10.3389/fpsyg.2015.00645
  22. Chisholm, D., & Knafle, J. D. (1975). Letter-name knowledge as a prerequisite to learning to read. Reading Improvement, 15(1), 2.
  23. Cosky, M. J. (1976). The role of letter recognition in word recognition. Memory & Cognition, 4, 207–214. doi: 10.3758/BF03213165
  24. de Gardelle, V., Sackur, J., & Kouider, S. (2009). Perceptual illusions in brief visual presentations. Consciousness and Cognition, 18, 569–577. doi: 10.1016/j.concog.2009.03.002
  25. Ehrich, J. F., & Meuter, R. F. (2009). Acquiring an artificial logographic orthography: The beneficial effects of a logographic L1 background and bilinguality. Journal of Cross-Cultural Psychology, 40, 711–745. doi: 10.1177/0022022109338624
  26. Feldman, J. (1997). The structure of perceptual categories. Journal of Mathematical Psychology, 41, 145–170. doi: 10.1006/jmps.1997.1154
  27. Fiset, D., Blais, C., Arguin, M., Tadros, K., Éthier-Majcher, C., Bub, D., & Gosselin, F. (2009). The spatio-temporal dynamics of visual letter recognition. Cognitive Neuropsychology, 26, 23–35. doi: 10.1080/02643290802421160
  28. Fiset, D., Blais, C., Éthier-Majcher, C., Arguin, M., Bub, D., & Gosselin, F. (2008). Features for identification of uppercase and lowercase letters. Psychological Science, 19, 1161–1168. doi: 10.1111/j.1467-9280.2008.02218.x
  29. García-Orza, J., Perea, M., & Muñoz, S. (2010). Are transposition effects specific to letters? Quarterly Journal of Experimental Psychology, 63, 1603–1618. doi: 10.1080/17470210903474278
  30. Gombert, J. E., & Peereman, R. (2001). Training children with artificial alphabet. Psychology, 8, 338–357.
  31. Grainger, J., Rey, A., & Dufau, S. (2008). Letter perception: From pixels to pandemonium. Trends in Cognitive Sciences, 12, 381–387.
  32. Hart, L., & Perfetti, C. A. (2008). Learning words in Zekkish: Implications for understanding lexical representations. In E. L. Grigorenko & A. J. Naples (Eds.), Single word reading: Behavioral and biological perspectives (pp. 107–128). New York, NY: Taylor & Francis.
  33. Hashimoto, R., & Sakai, K. L. (2004). Learning letters in adulthood: Direct visualization of cortical plasticity for forming a new link between orthography and phonology. Neuron, 42, 311–322. doi: 10.1016/S0896-6273(04)00196-5
  34. Hirshorn, E., & Fiez, J. (2014). Using artificial orthographies for studying cross-linguistic differences in the cognitive and neural profiles of reading. Journal of Neurolinguistics, 31, 69–85. doi: 10.1016/j.jneuroling.2014.06.006
  35. Jeffrey, W. E., & Samuels, S. J. (1967). Effect of method of reading training on initial learning and transfer. Journal of Verbal Learning and Verbal Behavior, 6, 354–358. doi: 10.1016/S0022-5371(67)80124-5
  36. Jenkins, J. R., Bausell, R. B., & Jenkins, L. M. (1972). Comparisons of letter name and letter sound training as transfer variables. American Educational Research Journal, 75–86. doi: 10.3102/00028312009001075
  37. Knafle, J. D., & Legenza, A. (1978). External generalizability of inquiry involving artificial orthography. American Educational Research Journal, 15, 331–347. doi: 10.3102/00028312015002331
  38. Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350, 1332–1338. doi: 10.1126/science.aab3050
  39. Lanthier, S. N., Risko, E. F., Stolz, J. A., & Besner, D. (2009). Not all visual features are created equal: Early processing in letter and word recognition. Psychonomic Bulletin & Review, 16, 67–73. doi: 10.3758/PBR.16.1.67
  40. Levin, I., Shatil-Carmon, S., & Asif-Rave, O. (2006). Learning of letter names and sounds and their contribution to word recognition. Journal of Experimental Child Psychology, 93, 139–165. doi: 10.1016/j.jecp.2005.08.002
  41. Longcamp, M., Anton, J.-L., Roth, M., & Velay, J.-L. (2003). Visual presentation of single letters activates a premotor area involved in writing. NeuroImage, 19, 1492–1500. doi: 10.1016/S1053-8119(03)00088-0
  42. Longcamp, M., Boucard, C., Gilhodes, J.-C., & Velay, J.-L. (2006). Remembering the orientation of newly learned characters depends on the associated writing knowledge: A comparison between handwriting and typing. Human Movement Science, 25, 646–656. doi: 10.1016/j.humov.2006.07.007
  43. Maki, W. S., & Mebane, M. W. (2006). Attentional capture triggers an attentional blink. Psychonomic Bulletin & Review, 13, 125–131. doi: 10.3758/BF03193823
  44. Marzouki, Y., Grainger, J., & Theeuwes, J. (2007). Exogenous spatial cueing modulates subliminal masked priming. Acta Psychologica, 126, 34–45. doi: 10.1016/j.actpsy.2006.11.002
  45. Mason, M., & Katz, L. (1976). Visual processing of nonlinguistic strings: Redundancy effects and reading ability. Journal of Experimental Psychology: General, 105, 338–348. doi: 10.1037/0096-3445.105.4.338
  46. Maurer, U., Blau, V. C., Yoncheva, Y. N., & McCandliss, B. D. (2010). Development of visual expertise for reading: Rapid emergence of visual familiarity for an artificial script. Developmental Neuropsychology, 35, 404–422. doi: 10.1080/87565641.2010.480916
  47. McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological Review, 88, 375–407. doi: 10.1037/0033-295X.88.5.375
  48. Mei, L., Xue, G., Lu, Z.-L., He, Q., Zhang, M., Xue, F., & Dong, Q. (2013). Orthographic transparency modulates the functional asymmetry in the fusiform cortex: An artificial language training study. Brain and Language, 125, 165–172. doi: 10.1016/j.bandl.2012.01.006
  49. Meuter, R. F. I., & Ehrich, J. F. (2012). The acquisition of an artificial logographic script and bilingual working memory: Evidence for L1-specific orthographic processing skills transfer in Chinese–English bilinguals. Writing Systems Research, 4(1), 8–29. doi: 10.1080/17586801.2012.665011
  50. Moore, M. W., Brendel, P. C., & Fiez, J. A. (2014). Reading faces: Investigating the use of a novel face-based orthography in acquired alexia. Brain and Language, 129, 7–13. doi: 10.1016/j.bandl.2013.11.005
  51. Mueller, S. T., & Weidemann, C. T. (2012). Alphabetic letter identification: Effects of perceivability, similarity, and bias. Acta Psychologica, 139, 19–37. doi: 10.1016/j.actpsy.2011.09.014
  52. Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353–383. doi: 10.1016/0010-0285(77)90012-3
  53. New, B., & Grainger, J. (2011). On letter frequency effects. Acta Psychologica, 138, 322–328. doi: 10.1016/j.actpsy.2011.07.001
  54. Park, J., Chiang, C., Brannon, E. M., & Woldorff, M. G. (2014). Experience-dependent hemispheric specialization of letters and numbers is revealed in early visual processing. Journal of Cognitive Neuroscience, 26, 2239–2249. doi: 10.1162/jocn_a_00621
  55. Peirce, J. W. (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162, 8–13. doi: 10.1016/j.jneumeth.2006.11.017
  56. Petersen, S. E., Fox, P. T., Snyder, A. Z., & Raichle, M. E. (1990). Activation of extrastriate and frontal cortical areas by visual words and word-like stimuli. Science, 249, 1041–1044.
  57. Petit, J. P., & Grainger, J. (2002). Masked partial priming of letter perception. Visual Cognition, 9, 337–354. doi: 10.1080/13506280042000207
  58. Podgorny, P., & Garner, W. R. (1979). Reaction time as a measure of inter- and intraobject visual similarity: Letters of the alphabet. Perception & Psychophysics, 26, 37–52. doi: 10.3758/BF03199860
  59. Pollack, I. (1953). Assimilation of sequentially encoded information. American Journal of Psychology, 66, 421–435. doi: 10.2307/1418237
  60. R Development Core Team. (2015). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
  61. Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849–860.
  62. Rosa, E., Perea, M., & Enneson, P. (2016). The role of letter features in visual-word recognition: Evidence from a delayed segment technique. Acta Psychologica, 169, 133–142. doi: 10.1016/j.actpsy.2016.05.016
  63. Samara, A., & Caravolas, M. (2014). Statistical learning of novel graphotactic constraints in children and adults. Journal of Experimental Child Psychology, 121, 137–155. doi: 10.1016/j.jecp.2013.11.009
  64. Samuels, S. J. (1972). The effect of letter-name knowledge on learning to read. American Educational Research Journal, 9, 65–74. doi: 10.3102/00028312009001065
  65. Simpson, I. C., Mousikou, P., Montoya, J. M., & Defior, S. (2013). A letter visual-similarity matrix for Latin-based alphabets. Behavior Research Methods, 45, 431–439. doi: 10.3758/s13428-012-0271-4
  66. Singer, M. H. (1980). The primacy of visual information in the analysis of letter strings. Attention, Perception, & Psychophysics, 27, 153–162. doi: 10.3758/BF03204304
  67. Stevens, C., McIlraith, A., Rusk, N., Niermeyer, M., & Waller, H. (2013). Relative laterality of the N170 to single letter stimuli is predicted by a concurrent neural index of implicit processing of letter names. Neuropsychologia, 51, 667–674. doi: 10.1016/j.neuropsychologia.2012.12.009
  68. Szwed, M., Cohen, L., Qiao, E., & Dehaene, S. (2009). The role of invariant line junctions in object and visual word recognition. Vision Research, 49, 718–725. doi: 10.1016/j.visres.2009.01.003
  69. Szwed, M., Dehaene, S., Eger, E., Kleinschmidt, A., Valabregue, R., Amadon, A., & Cohen, L. (2011). Specialization for written words over objects in the visual cortex. NeuroImage, 56, 330–344. doi: 10.1016/j.neuroimage.2011.01.073
  70. Taylor, J. S. H., Plunkett, K., & Nation, K. (2011). The influence of consistency, frequency, and semantics on learning to read: An artificial orthography paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 60–76. doi: 10.1037/a0020126
  71. Turkeltaub, P. E., Gareau, L., Flowers, D. L., Zeffiro, T. A., & Eden, G. F. (2003). Development of neural mechanisms for reading. Nature Neuroscience, 6, 767–773. doi: 10.1038/nn1065
  72. Valentine, C. W. (1913). Experiments on the method of teaching reading. Journal of Experimental Pedagogy, 2, 99–112.
  73. Van Opstal, F., Verguts, T., Orban, G. A., & Fias, W. (2008). A hippocampal–parietal network for learning an ordered sequence. NeuroImage, 40, 333–341. doi: 10.1016/j.neuroimage.2007.11.027
  74. Vinckier, F., Dehaene, S., Jobert, A., Dubus, J., Sigman, M., & Cohen, L. (2007). Hierarchical coding of letter strings in the ventral stream: Dissecting the inner organization of the visual word-form system. Neuron, 55, 143–156. doi: 10.1016/j.neuron.2007.05.031
  75. Williams, J. P. (1969). Training kindergarten children to discriminate letter-like forms. American Educational Research Journal, 6, 501–514. doi: 10.3102/00028312006004501
  76. Xue, G., Chen, C., Jin, Z., & Dong, Q. (2006). Cerebral asymmetry in the fusiform areas predicted the efficiency of learning a new writing system. Journal of Cognitive Neuroscience, 18, 923–931. doi: 10.1162/jocn.2006.18.6.923
  77. Yoncheva, Y. N., Blau, V. C., Maurer, U., & McCandliss, B. D. (2010). Attentional focus during learning impacts N170 ERP responses to an artificial script. Developmental Neuropsychology, 35, 423–445. doi: 10.1080/87565641.2010.480918
  78. Yoncheva, Y. N., Wise, J., & McCandliss, B. (2015). Hemispheric specialization for visual words is shaped by attention to sublexical units during initial learning. Brain and Language, 145, 23–33. doi: 10.1016/j.bandl.2015.04.001

Copyright information

© Psychonomic Society, Inc. 2017

Authors and Affiliations

  • Camille Vidal (1)
  • Alain Content (1)
  • Fabienne Chetail (1)

  1. Laboratoire Cognition Langage, Développement (LCLD), Centre de Recherche Cognition et Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium
