Insights from Exploration of Engaging Technologies to Teach Reading and Writing: Story Baker

A chapter in Frontiers in Pen and Touch, part of the book series Human–Computer Interaction Series (HCIS).
Abstract

To engage children in learning to write, we spent several years exploring tools that let children create and view stories. Our central focus was the automatic generation of animations. The tools included a digital stylus for writing and sketching and, in some cases, simple robots and tangible, digitally recognized objects. In pilot studies, children found the prototypes engaging. In 2007 a decision was made not to develop new hardware, but at today's greatly reduced tablet cost and with more capable touch and pen technology, these experiments could inspire further research and development.


Notes

  1. List of actions (verbs) initially in the Story Baker prototype: chase, create, dance, eat, fall, fly, jump, kick, love, marry, meet, play, and see.

  2. List of actors (nouns) initially in the Story Baker prototype: Alligator (an), apple (an), astronaut (an), bag, ball, banana, bee, beehive, belt, bicycle, book, boy, broccoli (some), cake, car, cat, chair, cheese (some), clock, clown, cookie, couch, cow, dinosaur, dog, donut, dragon, duck, fish, girl, Gus (actor name), hand, hippo, horse, house, invention (an), Jack (actor name), Jish (actor name), kangaroo, King, lasergun, lemon, letter, lion, mailbox, man, Michel (actor name), microscope, monkey, mouse, pancake, pants (a pair of), penguin, phone, pig, pizza, plane, prince, princess, queen, racecar, Red (actor name), robot, rock, rocket, shark, shoe, snake, strawberry, table, toy, tree, truck, turtle, underpants (a pair of), and woman.

  3. List of locations initially in the Story Baker prototype: backyard, beach, bedroom, castle, classroom, den, factory, farm, forest, kitchen, moon, park, party, store, town, and zoo.

  4. Physical: Weight [0..9] (0 – Light, 9 – Heavy), Strength [0..9] (0 – Weak, 9 – Strong), Soft/Hard [0..9] (0 – Soft, 9 – Hard). Personality: Type (Human, Animal, Vegetable, Mineral), Serious/Silly [0..9] (0 – Serious, 9 – Silly), Shy/Outgoing [0..9] (0 – Shy, 9 – Outgoing), Lazy/Hard worker [0..9] (0 – Lazy, 9 – Hard worker), Grumpy/Happy [0..9] (0 – Grumpy, 9 – Happy), Dumb/Smart [0..9] (0 – Dumb, 9 – Smart), Sleepy/Awake [0..9] (0 – Sleepy, 9 – Awake).

References

  1. Aikawa, T., Schwartz, L., Pahud, M.: NLP Story Maker (2005)

  2. CPTTE demo page. https://aka.ms/mpahud-CPTTE2016-demos/

  3. DC Comics’ “Scribblenauts”. www.scribblenauts.com

  4. Lee, J.C., Forlizzi, J., Hudson, S.E.: The kinetic typography engine: an extensible system for animating expressive text. In: Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology, pp. 81–90. ACM (2002)

  5. Lee, J., Kim, D., Wee, J., Jang, S., Ha, S., Jun, S.: Evaluating pre-defined kinetic typography effects to convey emotions. J. Korea Multimed. Soc. 17(1), 77–93 (2014)

  6. Loop, C., Blinn, J.: Resolution independent curve rendering using programmable graphics hardware. In: ACM Transactions on Graphics (TOG), vol. 24, pp. 1000–1009. ACM (2005)

  7. Macro actions. https://aka.ms/mpahud-CPTTE2016-MacroActions/

  8. Mueller, P.A., Oppenheimer, D.M.: The pen is mightier than the keyboard advantages of longhand over laptop note taking. Psychol. Sci. 25(6), 1159–1168 (2014)

  9. Oviatt, S.: The design of future educational interfaces. Routledge, New York (2013)

  10. PixelSense table. https://www.windowscentral.com/microsoft-surface-pixelsense-table

  11. Strapparava, C., Valitutti, A., Stock, O.: Dances with words. In: IJCAI, pp. 1719–1724 (2007)

  12. Visual sentence API documentation. https://aka.ms/mpahud-CPTTE2016-VisualSentenceAPI/

  13. WowWee. http://wowwee.com/products/robots

Acknowledgements

This work would not have been possible without Chuck Thacker, Craig Mundie, Margaret Johnson, Takako Aikawa, Lee Schwartz, Bill Buxton, Jonathan Grudin, Howard Phillips, Tom Steinke, Duncan, John D. Bransford, Diana Sharp, Heinz Schuller, Margaret Winsor, Allison Druin, Tim Kannapel, Arin Goldberg, Keith Daniels, Xiang Cao, Anoop Gupta, Chris Quirk, Bill Dolan, John Paquin, Neal Noble, Chad Essley, Lindsey Sparks, John Manferdelly, Xiaolong Li, the children participating in studies, and perhaps others I forgot to list.

Author information

Correspondence to Michel Pahud.

Appendix

Appendix

A.1 Prototype Tablet

In 2007, we made a prototype device with a 7″ 800 × 480 pixel display (Fig. 12.17a) and 4 GB of solid-state memory, running Windows CE on a 500 MHz AMD Geode processor. It had a FinePoint digitizer with an active stylus tethered by a retractable power cord. The device was designed to survive 3-foot drops. Its overall size was 11 × 6.25 × 1.4 inches, and it weighed 0.8 kg. By design, the device looked like a painter’s palette to emphasize the stylus and the creative experience, and it had a handle to make it easy to carry. The stylus was tethered to avoid losing it and to provide power without requiring a battery (Fig. 12.17a). A polished version of Story Baker was implemented on the device (Fig. 12.17b).

Fig. 12.17 The prototype hardware (2007). (a) The prototype device. (b) Story Baker running on the device

A.2 Sentence Structure

Figure 12.18 shows the sentence structure with some of the objects and actions. Sentences have a primary actor, an action, a secondary actor, and an optional background. We created an API to convert and animate a sentence (see the Visual Sentence API documentation [12]).
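The structure above can be sketched as a small data type. This is a hypothetical illustration only; the chapter does not show the real Visual Sentence API types, and the field names and `describe` helper are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the sentence structure described above; the
# actual Visual Sentence API types are documented in [12].
@dataclass
class VisualSentence:
    primary_actor: str                 # e.g. "dragon" (a noun from Note 2)
    action: str                        # e.g. "eat" (a verb from Note 1)
    secondary_actor: str               # e.g. "banana"
    background: Optional[str] = None   # e.g. "forest" (a location from Note 3)

def describe(s: VisualSentence) -> str:
    """Render the sentence back as plain text for display or debugging."""
    text = f"The {s.primary_actor} {s.action}s the {s.secondary_actor}"
    if s.background:
        text += f" in the {s.background}"
    return text + "."

print(describe(VisualSentence("dragon", "eat", "banana", "forest")))
# The dragon eats the banana in the forest.
```

The optional background mirrors the chapter's description: a sentence still animates without a location, defaulting to no scene.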

Fig. 12.18 Animation system overview

The top row of Fig. 12.19 illustrates a generic eating animation. Beneath it is an instance of a dragon eating a banana: from left to right, the dragon faces the banana, its mouth opens, the banana enters its mouth, and the dragon chews. Chewing uses rotation and scaling, and it can dynamically recolor the character; for example, the head and/or body could turn green to depict disgust or overeating. After chewing, the primary actor ejects the food, to avoid trauma if the secondary actor is a favorite character. The animation is accompanied by audio to make it convincing.
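The generic-versus-instance split described above can be sketched as an ordered list of abstract steps bound to concrete actors. The step names are assumptions for illustration; the chapter describes the animation only in prose.

```python
# Illustrative sketch: the generic "eat" animation as an ordered list of
# abstract steps. These step names are made up; the system's real
# instruction names are not given in the chapter.
GENERIC_EAT = [
    "face_target",          # primary actor turns toward the food
    "open_mouth",
    "target_enters_mouth",
    "chew",                 # rotation/scaling, optional recoloring
    "eject_target",         # food is ejected rather than destroyed
]

def instantiate(steps, primary, secondary):
    """Bind a generic action's steps to concrete actors."""
    return [f"{primary}: {step}({secondary})" for step in steps]

for frame in instantiate(GENERIC_EAT, "dragon", "banana"):
    print(frame)
```

The same generic step list would serve any primary/secondary actor pair, which is the point of keeping the action definition actor-independent.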

Fig. 12.19 Animating a dragon eating a banana in the forest. Top: the generic eating animation; bottom: the animation instantiated for a dragon eating a banana in the forest

Figure 12.20 illustrates how our system transforms a sentence into an animation. The digital DNA, image, and audio of the primary and secondary actors are drawn from the KidWords dictionary. Behavior instructions are fetched for each actor, and the resulting trajectories and sounds are modified to accord with the actors' metadata. A synchronization barrier keeps the actors' animations in step; for example, an actor that is kicked should not begin flying off before the kick is delivered. Since the primary actor will likely have different digital DNA than the secondary actor, the two can behave quite differently (e.g., a “silly” primary actor might stumble around before kicking). Actions are defined as a sequence of behavior instructions, each of which indicates what the actor is to do, but not how: that is defined by the digital DNA. For instance, an instruction to move from point A to point B could be executed by striding, stumbling, or hopping. For more details, see the list of macro actions [7].
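A minimal sketch of this "what, not how" split: a macro action is a list of abstract instructions, and each actor's digital DNA decides how an instruction is realised. The instruction names, the barrier placement, and the DNA-to-gait mapping rule are assumptions for illustration; only the 0..9 attribute scales come from Note 4.

```python
# Hypothetical sketch of behavior instructions plus digital DNA. The
# "silly" and "weight" 0..9 scales follow Note 4; the thresholds and
# instruction names are assumptions, not the system's real ones.
def gait_for(dna):
    """Choose a locomotion style from the actor's digital DNA."""
    if dna.get("silly", 0) >= 7:
        return "stumble"      # a very silly actor stumbles around
    if dna.get("weight", 5) <= 2:
        return "hop"          # a light actor hops
    return "stride"

KICK_PRIMARY   = ["move_to_target", "kick", "barrier"]  # the kicker
KICK_SECONDARY = ["barrier", "fly_off"]  # target waits at the barrier for the kick

def realise(instructions, dna):
    """Expand abstract instructions (the 'what') into DNA-specific ones (the 'how')."""
    return [f"move_to_target[{gait_for(dna)}]" if i == "move_to_target" else i
            for i in instructions]

silly_clown = {"silly": 9, "weight": 5}
print(realise(KICK_PRIMARY, silly_clown))
# ['move_to_target[stumble]', 'kick', 'barrier']
```

The shared `barrier` instruction in both lists is where the two actors' scripts would wait for each other, so the fly-off never starts before the kick lands.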

Fig. 12.20 Model for animating with digital DNA variations

Figures 12.21 and 12.22 show generic animations for “kick” and “pick up” actions, respectively, without variants due to digital DNA.

Fig. 12.21 Example of kicking

Fig. 12.22 Example of picking up

A.3 Fontlings Data Structure

In the data structure for Fontlings (Fig. 12.23), each letter (or number or special character) has a split line and a rotation point for each part (top and bottom). Each letter has a position, angle (rotation), and scaling parameter (not shown) for each part, and each letter animates independently (Top split position, Top split angle, Bottom split position, Bottom split angle, etc. in Fig. 12.23). An instruction index (Instruction Index in Fig. 12.23) tracks the letter's position in the execution of its script. One letter or word can be in idle mode while another is happy; in this way, a specific word can highlight something while the other words remain alive but less dramatic. For each letter, scripts define behaviors (idle, happy, misunderstood, etc.). The Splits/Rotation Points Data Structure is a hash table keyed by font type (Arial, Times, etc. in Fig. 12.23); it specifies the rotation points and the height of the split line as a percentage of the character's height. The Behavior Scripts Data Structure is a hash table keyed by behavior name (Happy, Misunderstood, Idle, etc. in Fig. 12.23); it contains a script for that behavior for each character. Figure 12.23 also highlights a specific example for the font type Arial (in orange) and the behavior Idle (in green).
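The two hash tables and the per-letter instruction index can be sketched as follows. The split heights and script entries are made-up placeholders (the real values come from the XML init files), and the `(part, op, value)` script format is an assumption for illustration.

```python
# Minimal sketch of the Fontlings data structures described above.
SPLITS = {   # font type -> per-character split-line height (fraction of char height)
    "Arial": {"A": 0.45, "B": 0.50},
    "Times": {"A": 0.48, "B": 0.52},
}

BEHAVIORS = {  # behavior name -> per-character script of (part, op, value) steps
    "Idle":  {"A": [("top", "rotate", 2), ("top", "rotate", -2)]},
    "Happy": {"A": [("top", "rotate", 15), ("bottom", "scale", 1.1)]},
}

class Fontling:
    def __init__(self, char, font):
        self.char, self.font = char, font
        self.split = SPLITS[font][char]   # where this letter splits in two
        self.behavior = "Idle"            # current behavior; letters act independently
        self.instruction_index = 0        # position in the current script

    def step(self):
        """Execute the next instruction of the current behavior script."""
        script = BEHAVIORS[self.behavior][self.char]
        instr = script[self.instruction_index]
        self.instruction_index = (self.instruction_index + 1) % len(script)
        return instr

f = Fontling("A", "Arial")
print(f.step())  # ('top', 'rotate', 2)
print(f.step())  # ('top', 'rotate', -2)
```

Because each `Fontling` carries its own behavior and instruction index, one letter can loop through Idle while a neighboring letter plays Happy, matching the independence described above.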

Fig. 12.23 Fontlings data structure

The data structures are initialized from two XML files: Fontlings Split Init File, which contains the height of the split line for each character of a given font, and Fontlings Behavior Init File, which contains the scripts that animate each character for each behavior (see bottom of Fig. 12.23).
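A hypothetical fragment of what the two init files might look like. The element and attribute names here are invented for illustration; the chapter does not reproduce the actual file format.

```xml
<!-- Fontlings Split Init File (hypothetical): split-line height per character, per font -->
<splits font="Arial">
  <char value="A" splitHeightPercent="45"/>
  <char value="B" splitHeightPercent="50"/>
</splits>

<!-- Fontlings Behavior Init File (hypothetical): per-character script for each behavior -->
<behavior name="Idle">
  <char value="A">
    <step part="top" op="rotate" degrees="2"/>
    <step part="top" op="rotate" degrees="-2"/>
  </char>
</behavior>
```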


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Pahud, M. (2017). Insights from Exploration of Engaging Technologies to Teach Reading and Writing: Story Baker. In: Hammond, T., Adler, A., Prasad, M. (eds) Frontiers in Pen and Touch. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-64239-0_12

  • DOI: https://doi.org/10.1007/978-3-319-64239-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64238-3

  • Online ISBN: 978-3-319-64239-0

  • eBook Packages: Computer Science (R0)
