Automated Essay Scoring System Based on Rubric

Applied Computing & Information Technology (ACIT 2017)

Part of the book series: Studies in Computational Intelligence (SCI, volume 727)

Abstract

In this paper, we propose an architecture for a rubric-based automated essay scoring system that combines automated scoring with human scoring. Rubrics are valid criteria for grading students’ essays. Our proposed rubric has five evaluation viewpoints, “Contents, Structure, Evidence, Style, and Skill,” and 25 evaluation items that subdivide these viewpoints. The system is a cloud-based application and consists of several tools, including Moodle, R, MeCab, and RedPen. First, the system automatically scores the 11 items belonging to Style and Skill, such as sentence style, syntax, usage, readability, and lexical richness. It then predicts the Style and Skill scores from these item scores with a multiple regression model, and predicts the Contents score from the cosine similarity between the topic and the description. Moreover, the system classifies essays into five grades, “A+, A, B, C, D,” as useful information for teachers, using machine learning techniques such as support vector machines. We are improving the automated scoring algorithms and the variety of input essays in order to raise classification accuracy above 90%.
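
The pipeline sketched in the abstract combines three standard techniques: cosine similarity for the Contents score, multiple regression for the Style and Skill scores, and a support vector machine for the five-grade classification. A minimal sketch of how these stages might fit together is shown below; it uses Python with scikit-learn purely as a stand-in (the actual system is built on Moodle, R, MeCab, and RedPen), and the training data, score mappings, and names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the three-stage pipeline described in the abstract.
# scikit-learn stands in for the paper's R-based tooling; all data,
# mappings, and names below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

def contents_score(topic: str, essay: str) -> float:
    """Predict the Contents score (0-9) from topic/essay cosine similarity.

    Japanese text would first be tokenized (e.g. with MeCab); plain TF-IDF
    over whitespace tokens is used here only to keep the sketch runnable.
    """
    tfidf = TfidfVectorizer().fit_transform([topic, essay])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]  # value in [0, 1]
    return 9.0 * similarity  # assumed linear mapping onto the 0-9 scale

rng = np.random.default_rng(0)

# Stage 2: predict the Style/Skill viewpoint scores from the 11 automatically
# scored items (sentence style, syntax, usage, readability, lexical richness,
# and so on) with a multiple regression model.
item_scores = rng.uniform(0, 9, size=(100, 11))  # placeholder item scores
human_scores = item_scores.mean(axis=1)          # placeholder human ratings
style_skill_model = LinearRegression().fit(item_scores, human_scores)

# Stage 3: classify essays into the five grades from their viewpoint scores
# with a support vector machine, as useful information for teachers.
viewpoint_scores = rng.uniform(0, 9, size=(100, 5))        # placeholder features
grades = rng.choice(["A+", "A", "B", "C", "D"], size=100)  # placeholder labels
grade_classifier = SVC(kernel="rbf").fit(viewpoint_scores, grades)
```

In practice both models would be trained on essays scored by teachers against the rubric; the random arrays above only mark where that data plugs in.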

Author information

Correspondence to Megumi Yamamoto.

Appendices

Appendix 1: Proposed Rubric for Human Scoring

Each evaluation viewpoint is scored on five achievement levels: D (0–1), C (2–3), B (4–5), A (6–7), and A+ (8–9).

[Content] Understanding of the assigned task and validity of contents

  • D: Misunderstands the assigned task, or the contents are not related to the topic at all
  • C: Understands the assigned task, but includes some errors
  • B: Understands the assigned task, but the contents are insufficient
  • A: Understands the assigned task, but has some points to improve
  • A+: Appropriate contents with relevant terms; no need for improvement

[Structure] Logical development

  • D: No structure or logical development
  • C: There is a contradiction in the development of the argument
  • B: Develops the argument in order, but there are some points to be improved
  • A: Develops the argument in order, but the argument is not compelling
  • A+: The argument is compelling and conveys the writer’s understanding

[Evidence] Validity of sources and evidence

  • D: Shows no evidence
  • C: Demonstrates an attempt to support ideas
  • B: The sources referenced are inappropriate or unreliable
  • A: Uses relevant and reliable sources, but the way of referencing is not suitable
  • A+: Demonstrates skillful use of high-quality and relevant sources

[Style] Proper usage of grammar and elaboration of sentences

  • D: There are some grammatical errors; many corrections required
  • C: Does not follow the rules; some corrections required
  • B: Almost follows the rules; a few corrections required
  • A: Error-free, but some improvement would be better
  • A+: Virtually error-free and well elaborated; no points to improve

[Skill] Readability and writing skill

  • D: The sentences are hard to read; writing skills are lacking
  • C: There are several points to be improved, such as the length of sentences
  • B: Sentences are generally readable, but some improvement would be better
  • A: Easy to read; rich in vocabulary
  • A+: Easy to read; skillfully communicates meaning to readers; rich in vocabulary

Appendix 2: Proposed Rubric for Automated Scoring

Each of the 25 evaluation items is scored on a 0–9 scale; “Applicable” marks items the system can score automatically. A sketch of how some of the Skill items might be computed follows the list.

[Content]

  • 1. Similarity between topic and description: Applicable
  • 2. Presence of keywords: Applicable
  • 3. Understanding of the writing task: Not applicable
  • 4. Comprehensive evaluation of contents: Not applicable
  • 5. Understanding of learning contents: Not applicable

[Structure]

  • 6. Logic level: Not applicable
  • 7. Validity of opinions and arguments: Not applicable
  • 8. Division of facts and opinions: Not applicable
  • 9. Persuasiveness: Not applicable

[Evidence]

  • 10. Quality level of reference material: Not applicable
  • 11. Relevance of reference material: Not applicable
  • 12. Validity of reference material: Not applicable
  • 13. Explanation about tables and figures: Not applicable
  • 14. Validity of the quantity of citations: Conditionally applicable

[Style]

  • 15. Unification of writing style: Applicable
  • 16. Elimination of misuse and misspellings: Applicable
  • 17. Validity of syntax: Applicable
  • 18. Dependency of subject and predicate: Applicable
  • 19. Proper punctuation: Applicable
  • 20. Elimination of redundancy and double negation: Applicable
  • 21. Elimination of notation variability and ambiguity: Applicable

[Skill]

  • 22. Kanji usage rate: Applicable
  • 23. Validity of sentence length: Applicable
  • 24. Lexical richness: Applicable
  • 25. Lexical level: Applicable
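
Because the chapter does not spell out formulas for the individual items, the following Python sketch shows one plausible way items 22–24 could be computed; the kanji character range, the sentence delimiter, and the type-token ratio measure are assumptions, and real input would be tokenized with MeCab rather than split naively.

```python
# Illustrative sketch of Skill items 22-24: kanji usage rate, sentence
# length, and lexical richness. The exact formulas are not given in the
# chapter; everything below is an assumption for illustration only.
import re

def kanji_usage_rate(text: str) -> float:
    """Item 22: share of non-space characters in the CJK ideograph range."""
    chars = [c for c in text if not c.isspace()]
    kanji = [c for c in chars if "\u4e00" <= c <= "\u9fff"]
    return len(kanji) / len(chars) if chars else 0.0

def mean_sentence_length(text: str) -> float:
    """Item 23 input: average sentence length in characters, splitting on
    the ideographic full stop (U+3002)."""
    sentences = [s for s in re.split("\u3002", text) if s.strip()]
    return sum(len(s) for s in sentences) / len(sentences) if sentences else 0.0

def type_token_ratio(tokens: list) -> float:
    """Item 24: lexical richness as the type-token ratio (one common measure)."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0
```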

Copyright information

© 2018 Springer International Publishing AG

About this chapter

Cite this chapter

Yamamoto, M., Umemura, N., Kawano, H. (2018). Automated Essay Scoring System Based on Rubric. In: Lee, R. (eds) Applied Computing & Information Technology. ACIT 2017. Studies in Computational Intelligence, vol 727. Springer, Cham. https://doi.org/10.1007/978-3-319-64051-8_11

  • DOI: https://doi.org/10.1007/978-3-319-64051-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64050-1

  • Online ISBN: 978-3-319-64051-8

  • eBook Packages: Engineering (R0)
