
Introduction

This chapter focuses on using Personal Learning Environments (PLEs) in formal learning in higher education (HE). Here, formal learning means that the PLE and its widget bundles support the established, “traditional” way of teaching in a lecture. The teaching and learning activities are not newly created and centered on the PLE; rather, the PLE extends the existing teaching context and provides additional activities within it. Therefore, the primary audience of PLEs consists of teachers rather than students. While this might sound like a contradiction to the paradigm of personal learning, we argue that a valuable goal of ROLE technology is to increase the range of interactive and social learning opportunities.

In the ROLE project, three test beds served to explore such a setting:

  • RWTH Aachen University, whose department of mechanical engineering was ranked 17th in the world (best in Germany) in the 2012 QS University Subject Rankings.

  • The School of Continuing Education (SOCE) of Shanghai Jiao Tong University (SJTU), a blended learning institution whose students are young, working adults who study part-time.

  • Uppsala University, Sweden’s oldest university (founded in 1477), which has a long tradition of distance education.

Although all three test beds are located within higher education, they cover quite different contexts. For instance, SOCE is located in China. With its approach to teaching and learning based on the Confucian tradition (Zhang 2007), it is quite different from RWTH as an example of a Western university. Also, SOCE students study part-time and most of them have a job and a family, while RWTH students are younger, full-time students. Furthermore, SOCE is a blended institution, where a significant part of teaching and learning takes place online, while RWTH is a traditional on-campus university. Uppsala University, in turn, is for the most part an on-campus university but offers a wide selection of distance courses with very limited or no on-campus participation required. It proved particularly interesting to investigate what different forms ROLE technology could take in these settings, and it speaks to its flexibility that this was possible at all. Last but not least, the number of participants in the test bed classes ranges from 20 (Uppsala Scenario) via 250 (SOCE Scenario) to 1,600 (RWTH Scenario). Growing numbers of students combined with limited teaching resources are often an important motivation to look for better support through e-learning tools.

The chapter starts with a review of related work, followed by the description of the test beds in separate sections. In each section, we describe the learning context, the tools and bundles employed, the most relevant evaluations we performed and the lessons we learned. We end this chapter with a brief conclusion that summarizes the main differences and similarities regarding the usage of ROLE technology in the three test beds.

Related Research

The presented approach addresses various recent research issues such as teaching and learning in large classes as well as using cloud services and Web 2.0 applications for e-learning support.

The usage of PLE technology has been investigated by a few studies, albeit in small-scale environments. Blees and Rittberger (2009) describe the usage of a learning environment assembled from different Web 2.0 services in a course on “Social Software.” The 13 participants were familiar with Web 2.0 technology and rated the usage of the service relatively high. The 26 case studies reported by Minocha (2009) mainly cover studies where students worked with a single Web 2.0 service integrated into a PLE. Law and Nguyen-Ngoc (2008) present a social network and a content analysis of interactions in a collaborative learning environment. Their data show that some, but not all, students profit from such environments. The challenge of teaching large classes has been a research issue for many years (cf. Leonard et al. 1988; Knight and Wood 2005). The more technical background of building e-learning tools from Web 2.0 components is discussed by Palmér et al. (2009). Their approach uses six dimensions for the mapping of Web 2.0 applications to personalized learning environments. The capabilities of ROLE-based cloud learning services are investigated in Rizzardini et al. (2012). Their evaluation shows that cloud-based learning support with ROLE environments is possible, but that learners may need an introduction and time to become familiar with interactive e-learning tools. The particular aspect of navigation guidance for learning questions in Java programming is discussed in Hsiao et al. (2010).

While these studies shed light on specific questions regarding PLEs, no prior work has investigated how a single PLE platform can be adapted to suit the needs of different formal learning scenarios in higher education and how it performs in such scenarios over long periods of time and with a significant number of users. More specifically, the case studies described in this chapter add to the mentioned evaluations insofar as they investigate (1) the use of learning environments that contain components apart from Web 2.0 tools (in a narrower sense), (2) with large user groups, (3) of both teachers and learners, (4) from different cultural contexts. The results are primarily relevant for ROLE-based environments but can easily be transferred to other kinds of environments and systems.

RWTH Aachen University: ROLE for Full-Time Students in Large Classes

Large classes at universities create their own challenges for teaching and learning. Audience feedback is often lacking, and the individual needs of students are hard to address sufficiently. At RWTH Aachen University, a ROLE-based knowledge map learning tool was developed and embedded in the context of a large-class course on computer science in mechanical engineering. The objective of this PLE was to support the individual learning of students during exam preparation. Theme-based exercises were developed and evaluated. The tool was grounded in the notion of self-regulated learning (SRL) with the goal of enabling students to learn independently.Footnote 1

Learning Scenario

The Institute of Information Management in Mechanical Engineering (IMA) of RWTH Aachen University offers a lecture on computer science in mechanical engineering, which was attended by 1,600 students in 2012 (see Fig. 1). The lecture is part of the curriculum for the bachelor’s degree in mechanical engineering (second semester) and business engineering (fourth semester).

Fig. 1 Lecture for computer science in mechanical engineering given in the RWTH Auditorium Maximum (made by David Emanuel)

In 2012, the lecture focused on object-oriented software development with Java and on software engineering (for details see Ewert et al. 2011). The lecture is accompanied by a programming lab, a group exercise, and exam preparation courses. In the lab, the students are taught to program Lego Mindstorms NXT robots with Java. They work in teams of two students in problem-based learning scenarios and were asked to program a robotic gripper inspired by industrial robots.

The robots simulated pick-and-place machines (P&Ps) of the kind used for placing surface-mount devices (SMDs). The resemblance to industrial robots was meant to give the students a better understanding of mechanical engineering principles. To support Java programs on the NXT controller, leJOS was used (Solorzano 2012).

The lab ran in parallel with the lecture during the 2012 summer term. The lecture period started in April and ended in July. Exam preparation courses were provided in September, just before the final test. These courses gave the students the opportunity to practice the targeted competences in smaller groups.

All parts of the course received good feedback and ratings from the students in the evaluation. Nevertheless, the students were challenged by learning in large classes in the programming lab. Individual support was often requested, but the number of supporting tutors was limited.

Therefore, one important objective for the course revision in 2012 was better support for individual learning through e-learning tools. The e-learning system L2P of RWTHFootnote 2 is already used as a Learning Management System (LMS) in the lecture, the group exercises, and the lab mentioned above. However, additional learning support was requested to assist students in and out of class, but particularly when learning autonomously. Two major challenges of the described scenario are:

  • A wide range of pre-course programming skills among the students.

  • Individual support for learning despite limited teaching personnel.

To meet these requirements, a Web-based e-learning test bed was designed and implemented which supports different kinds of learning situations such as SRL, peer instruction, and email support by tutors. The test bed learning content ranged from exam preparation exercises for all students to additional background information for advanced students. It extends the L2P learning room with interactive learning capabilities and is described in the next section.

The Personal Learning Environment

The development of the interactive e-learning platform was part of the ROLE project. Beginning with the summer semester 2010, a previous version of a Web 2.0 Knowledge Map (WKM) was enhanced with ROLE technology. In particular, it was transferred to a widget-based environment (cf. von der Heiden et al. 2011), that is, a bundle of widgets interacting via ROLE Inter-widget Communication (IWC) (Renzel 2011).

The WKM is an electronic reference book, which can be regarded as a kind of improved Wiki system. It won the second prize in the 2010 International E-Learning Association Awards, in the category “Academic Blended Learning.” The application supports students in looking up factual knowledge. Students can search for articles by entering topic keywords and by navigating from their current article to related articles following hyperlinks. It is based on semantic net technology, where hyperlinks are not just plain links but belong to predefined categories, each bearing a meaning as a named relation. The object-oriented content organization distinguishes classes and objects of knowledge: a class is a predefined template, such as an “Exercise,” and an object is its realization. Similar to a Wiki, the WKM supports the creation of new content. A dedicated rights management supports different roles such as authors, administrators, and users. Authoring is currently restricted to lecturers and tutors. Additionally, the content visualization capabilities based on hypermedia support nonlinear learning approaches.
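
To make this content organization more concrete, the following TypeScript sketch shows one possible shape of a knowledge object with typed, named relations. The type and field names (RelationType, KnowledgeObject, etc.) are illustrative assumptions, not the actual WKM data model.

```typescript
// Illustrative sketch of a semantic-net content model (not the actual WKM schema).

// Named relation categories carry meaning, unlike plain hyperlinks.
type RelationType = "explains" | "requires" | "exercises" | "relatedTo";

interface Relation {
  type: RelationType; // the predefined, named category of the link
  targetId: string;   // id of the related knowledge object
}

// A class (e.g., "Exercise") acts as a template; objects are its realizations.
interface KnowledgeObject {
  id: string;
  knowledgeClass: string; // e.g., "Exercise", "Definition", "Example"
  title: string;
  body: string;           // article content shown in the WKM widget
  relations: Relation[];  // typed links used for nonlinear navigation
}

// Example: an exam exercise linked to the definition it trains.
const loopExercise: KnowledgeObject = {
  id: "ex-42",
  knowledgeClass: "Exercise",
  title: "Counting with for loops",
  body: "Write a Java for loop that prints the numbers 1 to 10.",
  relations: [{ type: "exercises", targetId: "def-for-loop" }],
};
```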

For the ROLE project, the WKM was redesigned as an interactive learning tool and as a test bed for ROLE technology in a higher education scenario. The new design was motivated by the following main goals:

  • Guide and support students in a self-regulated and nonlinear learning process.

  • Motivate, introduce, and provide high-quality basic knowledge using multimedia material.

  • Provide an interactive reference book on lecture contents for exam preparation.

  • Support interest-based real-time communication and collaboration in learner communities.

Thus, the former WKM has been extended with a chat to provide theme-based learning communication between users. A learning history completes the setup. Built with ROLE technology, the “new” WKM is composed of three intercommunicating widgets (see Fig. 2), namely:

Fig. 2 Screenshot of the RWTH test bed with the Web 2.0 knowledge map, chat, and history widgets

  • Web 2.0 knowledge map widget: access to and reading of topic articles as well as exam exercises.

  • Chat widget: general or topic-related group chats and presence information for individual tutor support or peer-to-peer instruction.

  • History widget: tracks individual learning activities and shows personal history of visited topics.

The test bed scenario was deployed for the course lab and also for the students’ individual exam preparation in August and September. The WKM aimed to provide the students with information covered in the lecture and in the lab. It was filled with additional SRL-adapted content, thus focusing on typical SRL situations such as the exam preparation phase. It contained explanations and motivations for notions, definitions, or examples, e.g., for basic Java programming constructs. Background information was provided as well, e.g., about software installation. Exercises for exam preparation were associated with lecture content. The presentation and organization of the WKM followed the paradigm of object-oriented analysis and design in software development. Relations between objects and classes of objects were visualized (see Fig. 3) to underline knowledge associations. Functionalities for annotations, remarks, and feedback were provided.

Fig. 3 Screenshot of the Web 2.0 knowledge map at RWTH (start page)

The second widget, a chat widget, was embedded to offer students the possibility to ask and answer topic-related questions. Other students answered the posed questions while a tutor moderated the chat.

Finally, a history widget was embedded into the learning environment. It supported the backward navigation within the environment by offering the last five activated knowledge objects. The widget uses data from the WKM widget to support the learner with his or her own learning history.

The WKM was maintained by the IMA; the test bed was hosted by the department of information science at RWTH. Access to the WKM was granted via the login for the course lab. For the first time in the course’s history, this WKM learning environment gave students the opportunity to receive individual support during their exam preparation.

Technical realization: The WKM has been bundled with a chat widget and a personal history widget. The chat widget allows learners to communicate with instant chat messages and to see the online status of other learners. It is integrated with the WKM by automatically creating a separate chat room for each topic that is currently read by the learner. Learners can see the topics of other learners and can quickly join them in the topic-specific chat room to discuss their understanding of the topic and how it relates to their current work. The personal history widget records the topics visited in the knowledge map and allows quickly navigating back to previous topics. The three widgets interoperate via IWC; the following examples of widget communication events demonstrate a selection of the implemented interactions (a brief code sketch of how such an interaction can be wired up follows the list):

  • Entering a topic-based chat room on topic selection: When a student selects a topic from the WKM, a corresponding chat room is entered in the chat widget. At the same time, the student’s online status is changed to the new topic in real time and is visible to and clickable by other students.

  • Following a user’s activity: When a student clicks the online status of another student, he or she navigates to the corresponding topic in the WKM, which in turn triggers an event to enter the corresponding topic-specific chat room.

  • Real-time updates of learning history: When a student selects a topic from the WKM, the selected topic appears at the top of his or her personal history.
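
As an illustration of these interaction patterns, the following TypeScript sketch shows how a topic selection could be propagated from the WKM widget to the chat and history widgets via the Google Gadget publish/subscribe mechanism used for local IWC (see the technology list below). The channel name, payload shape, and helper functions are illustrative assumptions rather than the actual WKM event format, and the three widgets are shown in one file only for brevity.

```typescript
// Sketch of local inter-widget communication via gadget publish/subscribe.
// Channel name, payload, and helpers are illustrative assumptions.
declare const gadgets: any; // provided by the OpenSocial/Shindig container

const TOPIC_CHANNEL = "wkm.topicSelected";

// WKM widget: announce the newly selected topic to the other widgets.
function onTopicSelected(topicId: string, title: string): void {
  gadgets.pubsub.publish(TOPIC_CHANNEL, { topicId, title });
}

// Chat widget: switch to the topic-specific chat room on topic selection.
// (The exact callback signature depends on the pubsub feature version.)
gadgets.pubsub.subscribe(TOPIC_CHANNEL, (message: { topicId: string; title: string }) => {
  joinChatRoom(message.topicId); // e.g., an XMPP multi-user chat room per topic
});

// History widget: put the selected topic on top of the personal history.
gadgets.pubsub.subscribe(TOPIC_CHANNEL, (message: { topicId: string; title: string }) => {
  prependToHistory(message.topicId, message.title);
});

// Widget-local placeholders (hypothetical helpers).
function joinChatRoom(topicId: string): void { /* join the room via the chat backend */ }
function prependToHistory(topicId: string, title: string): void { /* update the history view */ }
```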

Following the overall ROLE approach of open-standard-compliant widget-based learning environments, the WKM test bed was implemented using the following enabling technologies:

  • OpenSocialFootnote 3: OpenSocial is a set of common application programming interfaces (APIs) for Web-based social network applications, developed by Google along with MySpace and a number of other social networks. Applications implementing the OpenSocial APIs are interoperable with any social network system that supports them. The ROLE version of the WKM is deployed in Apache Shindig,Footnote 4 the open-source reference implementation of an OpenSocial container.

  • Extensible Messaging and Presence ProtocolFootnote 5 (XMPP) is an open standard technology for real-time communication, which powers a wide range of applications including instant messaging, presence, multiuser chat, and collaboration. The ROLE version of the WKM offers XMPP-based features such as topic-based chat rooms and real-time information on current presence and learning activities.

  • Inter-widget Communication (IWC) (cf. Renzel 2011; Zuzak et al. 2011): With IWC, individual widget functionalities can be combined to realize complete application workflows. ROLE leverages various forms of both local and remote collaboration and communication among widgets. The ROLE version of the WKM demonstrates local IWC using technologies such as PMRPCFootnote 6 and Google Gadget PubSub, which is part of the OpenSocial specifications. A basic form of remote IWC was demonstrated with the new WKM chat functionality.

  • Monitoring: All learning activities are tracked by the history widget and persisted as Contextualized Attention Metadata (CAM; cf. Schmitz et al. 2011). The ROLE version of the WKM was the first test bed producing real-life usage data, which later served for producing recommendations and as a basis for the further development of the WKM and ROLE technologies in general.
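
To give an impression of what the monitored data could look like, the following TypeScript sketch shows one possible shape of an attention event as the history widget might record it. The field names are illustrative and do not reproduce the actual CAM schema.

```typescript
// Illustrative shape of a tracked learning event (not the actual CAM schema).
interface AttentionEvent {
  user: string;      // (pseudonymized) learner identifier
  widget: string;    // originating widget, e.g., "wkm-history"
  action: string;    // e.g., "topicSelected"
  item: string;      // identifier of the knowledge object concerned
  timestamp: string; // ISO 8601 time of the interaction
}

// Example event as it could be persisted by the monitoring component.
const sampleEvent: AttentionEvent = {
  user: "student-0815",
  widget: "wkm-history",
  action: "topicSelected",
  item: "def-for-loop",
  timestamp: new Date().toISOString(),
};
```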

A detailed description of the ROLE framework technology can be found in chapter VIII “Lessons learned from the development of the ROLE framework.”

Evaluation and Methodology

In addition to the tool development, a test bed evaluation was designed to analyze how the environment influenced the students’ learning processes. The RWTH ROLE test bed work in 2012 was initiated with a Web-based survey that aimed to collect details about the students’ experience with e-learning and SRL at the beginning of the lab in April 2012. The ROLE widget environment was introduced to the students during the second week of their studies. The enriched ROLE-based learning environment offered additional opportunities for SRL. It also provided information about programming in general, related tools, and modeling, as well as Java itself. Around 1,600 students participated in the course. All students were informed about the ROLE-enhanced learning environment via several announcements during lectures and labs as well as via email. During the standard midterm teaching evaluation, a short ROLE-related survey was issued. At the end of the lecture period, the ROLE test bed was also adapted for individual exam preparation during the summer. Finally, after the exam, the educational staff were interviewed to evaluate the environment and its application within the course. The lab sessions took place in the largest computer pool of the RWTH, which is equipped with approximately 200 workstations. This, however, restricted the maximum number of students that could attend the lab in parallel to 200, who then worked with 100 Mindstorms NXT robots. Since those 100 robots could not be dismounted and reassembled in each lesson, the lab was based on a standardized and preassembled robot model.

The ROLE environment was used during the lab time from April to June. Usage grew significantly in September, when the students started their individual exam preparations some weeks before the exam. The access peak was reached in the days just before the exam, when students switched to “power learning.” This is illustrated in Fig. 4, which shows the number of accessed knowledge objects per day. The number of generated views corresponds with the access rate and indicates the intensity of usage by the students. Figure 4 underlines the exam-oriented learning during the preparation phase, which restricts the leeway in learning and thus the autonomy of the learner. This characteristic learning activity trend was repeated during the next exam period in March 2013.

Fig. 4 Requested knowledge objects per day in the RWTH test bed

Results

In June 2012, before the summer break (i.e., at the end of the lab session but before the exam preparation), the students were asked about the usefulness of the e-learning environment and rated it positively. 162 students rated the statement that the application of the computer-based learning environment was useful. On the given scale from 1 (strongly disagree) to 5 (strongly agree), the arithmetic mean (AM) of the results was 3.7, with a standard deviation (SD) of 1.3. Since 3 would be neutral, the students rated the environment positively, though not enthusiastically.

After the course, the environment was evaluated by the teaching staff. We conducted four interviews, three of them with student assistants who acted as tutors within the practical exercise and the exam preparation. They were responsible for adding contents to the knowledge map and for solving technical issues. One interview was conducted with the lecturer who was responsible for the overall coordination and who was involved in the planning and conception of the whole course. In the interviews, we asked the participants to rate several statements on a scale from 1 (strongly disagree) to 5 (strongly agree) and to explain their ratings. Moreover, we asked them to comment on the strengths and weaknesses of the environment and to suggest improvements. The students’ positive judgment of the environment was corroborated by the teachers. For each statement, the arithmetic mean (AM) and the standard deviation (SD) are given (when interpreting these measures, one has to keep in mind that only four persons rated the statements):

  • The environment was useful for the students. AM: 4.25, SD: 0.43

  • The environment was useful for me in my role as a lecturer/tutor. AM: 4.00, SD: 0.71

  • The students reached the learning goals better because of the environment. AM: 4.00, SD: 0.71

  • I reached my teaching goals better because of the environment. AM: 3.50, SD: 1.12

  • I would advise the students to use such environments more often if they had access to them. AM: 4.75, SD: 0.43

  • I would use such environments more often for teaching if I had access to them. AM: 4.67, SD: 0.47

  • I would use such environments more often for learning if I had access to them. AM: 3.25, SD: 1.79 (This is an interesting result: Why do the lecturers/tutors rather advise their students to use such an environment than use it themselves? The interviewees answered that their personal learning style is not optimally supported by such an environment, because firstly they prefer not to browse through learning contents but to study textbooks and other material, in particular exercises and exam questions from previous semesters, from beginning to end. Secondly, they prefer using pen and paper over doing all exercises with the computer. Therefore, they request an export to PDF so that they can print selected parts of the material.)

  • I consider the environment used within this course as a didactically sound means. AM: 4.50, SD: 0.50

According to the interviewees, the strengths of the environment were, firstly, that the knowledge map gave a clear overview of the course contents and their interrelationships. The students got a starting point for browsing through the material and exploring the themes independently. Questions could be answered by pointing to specific objects on the knowledge map, and students could (and did) answer their follow-up questions themselves by exploring the surrounding/linked objects. Thereby, the autonomy of the students was effectively supported. Secondly, the chat widget allowed fast feedback from the students. Questions could be answered immediately. Since all students could read the answers, questions did not have to be answered twice. Thereby, the tutors’ explanations became more efficient, and the tutors saved time, which they could use for helping with truly individual problems. Thirdly, the environment improved the communication among the students and, thereby, collaborative learning. After a short time span, the students began to answer questions asked by other students. Fourthly, the environment rendered the students more flexible regarding their time management and learning speed. They were able to repeat lessons and exercises without losing track of the course or slowing others down.

Concerning weaknesses, the interviewees mentioned technical and usability issues, in particular regarding the administration of the environment and the addition of new content to the knowledge map. These issues have to be solved, but they affect neither the concept nor the general design of the environment. Moreover, the interviewees proposed the following extensions of the environment:

The chat widget should be replaced or supplemented by a forum for general questions and by a commentary function for the elements of the knowledge map. This would improve the linking of contents with questions and comments. The interviewees consider a learning planner, consisting of a simple to-do list with links to exam-related material and topics, self-tests, and a visualization of the current level of knowledge/exam preparation progress (related to the self-test results), to be extremely useful. They agree that the contents are the most important feature of the environment and have to be updated regularly. So far, the contents of the knowledge map are explored by browsing; an additional search engine for the direct search of specific content would be a reasonable addition. One interviewee considers a recommender system that suggests related external material useful.

One aim of offering the ROLE environment was to support SRL. Has this goal been reached, that is, did the environment effectively support self-regulation? The interviewees claim that this is in fact the case. While a lot of trivial questions were asked in the beginning, the students were soon able to find the answers to such simple questions themselves. (The question is, however, whether we can attribute this development to an improvement of self-regulation or rather to a learning effect regarding the course contents.) The interviewees considered it important to support SRL. They estimated that by far most of their students had a medium SRL level. They correlated the SRL level with the general knowledge level and acknowledged that students with a high SRL level learned better and faster. However, as tutors and lecturers they generally preferred teaching students with a medium SRL level over students with a high SRL level. They justified this preference as follows: A tutor could lead interesting discussions with high SRL-level students. However, these students did not need a tutor that much and therefore did not get into close contact with the tutors; often, teaching did not really take place. Moreover, these students tended to be good students who asked difficult questions. A teacher had to be well prepared and feel confident about the course topic to cope with these questions. This sometimes made it harder to teach students with a higher SRL level.

Medium SRL-level students were intelligent but still sought interaction with a teacher. The teacher got into contact with them, observed their learning progress, and saw the positive effect of explanations and assistance. The interviewees found this very rewarding.

The interviewees considered a low SRL level to be correlated with rather low learning success. Teaching students with a generally low level was considered cumbersome and not very rewarding. Feedback given through the environment was recognized by the teachers as very important; the interviewees emphasized the role of the chat (or a forum). Feedback was deemed important for estimating the students’ progress and thus adjusting interventions. Moreover, it made teaching more satisfying.

Conclusions

The evaluation proved the necessity of intensive promotion for new and additional e-learning tools. Tool objectives and advantages must be clearly communicated (at the right time) to the students. Nevertheless, only a minority of all students used the test bed for a longer time. Here, guidance with learning questions as in Hsiao et al. (2010) may motivate students and foster communication.

Until now, overview and learning guidance are given by the visualization of topic relations on the start page, the hierarchical and object-oriented organization of knowledge in the map, and the linking of knowledge objects. The evaluation showed that the environment supports SRL and collaborative learning in large classes. Answering student questions was easier via the chat widget than by email, as all students were able to see the answer. Additionally, the chat fostered student-to-student support. Even though the test bed offered support for learning early on, the peak of usage was reached just before the exam, indicating that the students stuck to “power learning.”

The test bed was implemented as a cloud learning application combining widgets as services in an overall application and using IWC for communication between the widgets. Since different people were responsible for the particular widgets, it was sometimes hard to fix problems, e.g., when a server was not accessible.

So far, the test bed aimed to demonstrate the possibilities of ROLE technology in large classes. The demonstration was successful, and further development has to focus more on the learning requirements of students. Therefore, future improvements are seen in better communication and feedback support to strengthen, e.g., learning motivation. Suggested improvements comprise, firstly, better collaboration support, which can be implemented by adding or improving topic-related communication (a forum, a notepad linked to contents of the knowledge map), and secondly, better SRL support, which can be implemented by adding a learning planner that supports planning (to-do list) and reflection (self-tests, visualization of progress). Offering learning strategies such as learning questions (Hsiao et al. 2010) within the learning tool may provide new benefits and motivation for the students.

Lessons Learned

The evaluation resulted in the following recommendations, which are focused on the context of large classes in HE:

  1. New e-learning tools—especially if they compete with existing solutions—require intensive promotion. Students need a clear motivation and benefits for their own learning objectives. Announcing the tool multiple times is helpful, including in situations in which students will actually “hear” the message. Here, the usage of the exam preparation tool was fostered by an announcement only a few weeks before the exam, when the students’ individual learning situations corresponded to the “message.”

  2. If a new e-learning tool has to fit into an already existing process of learning and organization in HE, a good approach is to better support the students’ individual learning processes, e.g., when the student is not present at the university and not in contact with tutors.

  3. During their learning workflow, students want to ask and answer questions. Therefore, e-learning tools for individual exam preparation are more attractive if combined with communication services such as chats and email. Peer-to-peer communication among the students is welcome and can reduce the mentoring effort for tutors. Communication can even provide qualitative feedback on learning content such as exercises and sample solutions.

  4. One chat for all—instead of multiple chats for different content—generates more communication activity. Chat activities can link to the corresponding content to clarify the context of questions.

Outlook

The design of the complete course was analyzed after the exam in September 2012. The usage of the ROLE test bed, the participation, and other indicators showed that the students’ SRL capabilities did not meet the assumptions underlying the lab design. On the one hand, the students’ prior programming skills differ significantly; as a consequence, the strict schedule of the lab was often too slow or too fast for them. On the other hand, their capabilities for self-organizing teaching material from different sources are not very well developed, and students expect to have all pieces of information in one place. The lab learning activities were therefore often frustrating for the students and tutors.

Therefore, we developed completely new teaching material which guides the students through the learning process. In the beginning, the basics of programming are explained in every detail; step by step, the autonomous work phases of the students get longer and longer. This teaching material also supports an individual schedule of learning activities, which corresponds better to the different programming skills. First evaluation results show that the students are much more interested in the learning activities. The tutors, too, prefer the new learning design, since the students are motivated and ask interesting questions.

Shanghai Jiao Tong University: ROLE for Employed, Part-Time Students

The SJTU case study set out to address the issue of using Personal Learning Environments in adult higher education. It is associated with the SOCE, whose blended classrooms are based on the Standard Natural Classroom model (Shen et al. 2008), providing face-to-face interaction with the instructor as well as online courses. Its students, mostly adult learners who have a job, take classes in the evening or at the weekend, either attending in person in the classroom or watching live over the Web. Improving their competencies via degree education or certificate training enables them to increase their chances in the highly competitive Chinese job market—a market that is also characterized by frequent job-hopping.

Teaching and learning in this case study follows a traditional pattern: it has a teacher-centric focus, with a “broadcast” model, where most students watch the lectures rather passively. In this instance, the ROLE project offered SJTU an opportunity to explore and investigate how existing ROLE technologies and tools could provide more opportunities for learner and teacher interaction, enabling further creative ideas for both parties. For instance, selected bespoke ROLE tools, such as those offering voice recording and text-to-speech, allow foreign language students to practice their pronunciation by recording themselves and comparing their speech to the “original” one, thus providing the students with an active learning opportunity.

Learning Scenario

One central aspect of PLEs is that learners can assemble their own learning environments from existing services. They decide which services to use, assemble them, and use the result for learning. Such usage presupposes active, technically savvy students. From our experience we knew that the students at SOCE do not fall into this category. Most students have limited knowledge about Web tools (RSS is virtually unknown), only limited time at their disposal, and limited technical expertise. Furthermore, in the Confucian culture of China learning is still very teacher-centered (Zhang 2007), and students are not used to actively contributing to class. We therefore decided to build a PLE according to the learning scenarios specified by the teachers of the courses and to make the pre-built PLE accessible to the students.

One example scenario devised by the teacher of the French course is as follows. His course aims at helping the students master the first steps in spoken and written French, as well as at learning about and mastering tools that help them in their working life. These two goals are supported by activities that require using the tools. For instance, starting with an English (or Chinese) sentence, such as “Hello, my name is Tianxiang,” the teacher shows how to use a translation tool to get a first rough French translation, and how it can be refined by using a dictionary and spell checker. The French sentence is read aloud by a text-to-speech tool and repeated by the student until it can finally be recorded. This recorded introduction can then be uploaded to social networks. At a later point in the lecture, the students receive a similar task, such as describing their job, without being shown how to use the tools.

Figure 5 contains a screenshot of a PLE whose basic functionality is similar to start pages such as Netvibes and iGoogle. It provides a single page from which the students can access different language-related services and sites. The widgets of this PLE facilitate learning a foreign language. For instance, the top right widget performs spell checking; the second one below enables the translation of texts using Google Translate; the third widget accesses a text-to-speech synthesizer; and the bottom left widget allows the student to record and play back his or her voice.

Fig. 5 Screenshot of a SOCE PLE

As another example of a learning scenario, for the course on Data Structures, the teacher wanted to use a PLE that supports rehearsing for the exam. This PLE consisted of a large number of exercises, which trained different concepts in this specific domain, such as linked lists, sorting, etc. The PLE was not tightly integrated into the weekly teaching, but made accessible at the end of the semester, a few weeks before the exams.

To summarize, the PLE usage serves different purposes: the students have a chance to acquire knowledge about existing tools that will be helpful even outside class. Communication in a foreign language is facilitated and empowered when supported by translation tools and text-to-speech. The former allow learners to understand and produce content that they cannot yet master on their own due to insufficient vocabulary. The latter enables the students to listen to new texts, copied from any source or written by themselves. Together with a recording device, they can compare their speech to the artificially produced one. Furthermore, the PLE provides opportunities to train domain knowledge, in this case multiple-choice exercises that cover topics taught during the lecture (gender of nouns, prepositions, linked lists, sorting, etc.). By reusing existing services, we were not required to build our own versions of these tools. The free text-to-speech service offers an astonishing quality, close to native speakers, that would have been difficult to achieve on our own. By assembling all the services on one page, access to these services is facilitated.

The Personal Learning Environment

During the ROLE project, SOCE moved from a self-developed, proprietary LMS to MoodleFootnote 7 as the Online Learning Environment. Thus, in a first phase, ROLE technology was developed and evaluated in an additional system (Liferay), which offers widget support. Once the shift to Moodle began, the ROLE evaluations were carried out using the ROLE Moodle extensions. This section provides additional information on the technical realization.

Technical realization: The technical realization of the PLE used at SOCE underwent an early and an advanced phase. In the early phase, SJTU explored the first usage of PLEs in a technical environment that allowed only rudimentary PLE features, as the advanced features made possible later by technology developed in ROLE were not yet available. In the advanced phase, on which we focus in this chapter, ROLE technology was developed and integrated into the SOCE learning environment.

Starting in 2011, SOCE moved from its proprietary LMS to Moodle. Moodle is a popular LMS for managing courses and a de facto standard among many educational institutions. It is a plugin-based PHP application that can be extended by installing additional modules. These modules have to be installed on a Moodle server by a system administrator. The Moodle view, as shown to students and teachers, consists of a main center area and a rather narrow left column (see Fig. 5 for an example). The center area contains the main course resources, such as a wiki page, a forum, a lesson, a quiz, etc.

Moodle’s flexibility and adaptability are achieved via visual themes and server-side plugins; thus, an intervention by system administrators is required every time a change has to be made. Teachers and students are not involved in the customization process. Teachers, for example, cannot add or remove plugins on their own. Unlike Moodle plugins, widgets are client-side applications that can be added to a system without a server-side installation, which makes them easy to adopt.

Our OpenSocial plugin for Moodle allows a simple, teacher/student-driven extension of Moodle’s functionality. Once the plugin is installed in Moodle, a teacher can add a “Widget space” to the course, specify a set of widgets for it, and choose whether a 1-, 2-, or 3-column view should be used for the widget display (Fig. 6). The resulting page (as displayed to students) shows the widgets in an iGoogle-like fashion, where students can work with several widgets simultaneously (see Fig. 5).

Fig. 6 A teacher creates a space with widgets for a course

From the implementation perspective, the plugin consists of two main parts (Bogdanov et al. 2013). The first part is an engine that renders OpenSocial apps on a page. This engine is Apache Shindig, the open-source reference implementation of the OpenSocial specification. The second part is a PHP module that is responsible for the configuration of a page with widgets, for adding and removing them, and for gluing Moodle to the Shindig engine. The OpenSocial API provides a standardized way to retrieve and exchange information between different Moodle installations and other social networks, which improves data portability and interoperability. More precisely, widgets can query Moodle for data via the Shindig engine: they can retrieve the currently logged-in user, the current course, and its participants, as well as save and get arbitrary data. Privacy and security are managed via the Shindig engine, which is under the full control of university administrators. However, a widget installed within a course runs on behalf of the teacher who added it and can retrieve or update any information that the teacher can normally access in the course. Thus, teachers are responsible for checking the trustworthiness of a widget before adding it to their environments. The ability to retrieve course information and its participants is achieved via the OpenSocial Space extension, which allows widgets to adapt to the specific context of the course (contextual widgets). For example, a wiki widget can save data for a course and restrict access to only the people engaged in this course. The same wiki widget will behave differently when added to another course: it will have a different wiki history and a different list of participants.
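
From the widget side, such container calls can look like the following TypeScript sketch, which uses the standard OpenSocial JavaScript API (osapi) exposed by Shindig to read the current viewer and to save and retrieve a small piece of arbitrary data. The keys and values are illustrative, and the course (“space”) scoping provided by the ROLE Space extension is only hinted at in the comments.

```typescript
// Sketch: a contextual widget talking to Moodle through the Shindig container
// using the standard OpenSocial JavaScript API. Keys and values are illustrative.
declare const osapi: any; // provided by the OpenSocial container (Shindig)

// Who is the logged-in Moodle user rendering this widget?
osapi.people.getViewer().execute((viewer: any) => {
  console.log("Viewer:", viewer.id, viewer.displayName);
});

// Save a small piece of arbitrary data (e.g., a wiki draft). With the ROLE
// Space extension such data would be scoped to the current course; here it is
// simply stored for the viewer to keep the example minimal.
osapi.appdata.update({ data: { wikiDraft: "Linked lists: my notes ..." } }).execute(() => {
  // Read the value back; the result is keyed by user id.
  osapi.appdata.get({ keys: ["wikiDraft"] }).execute((result: any) => {
    console.log("Stored draft:", result);
  });
});
```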

Bundles

Two bundles were created by SJTU/SOCE: “Creating an audio self-presentation” and the SRL bundle. Both are available in the ROLE widget store.Footnote 8

Creating an audio self-presentation: The bundle for creating an audio self-presentation in French includes four main widgets (a translator widget, a spell checker, a text-to-speech engine, and a recording widget) as well as some additional tools such as a CAM widget, a business dictionary, and a conjugation tool. The four main widgets are used to create a self-presentation in French; the additional widgets assist the student in his or her learning activities and collect usage data. This widget bundle is helpful in a language learning context and can be used to complete different tasks, such as learning vocabulary, improving pronunciation, producing texts and audio files, etc. The precise functionality of the widgets is as follows.

The Translator widget allows the user to translate entered terms or sentences. The Translator is linked with a translator homepage and a dictionary homepage, which the user can call up. A user who is a beginner in French may work with this tool to translate his or her self-presentation text from English or Chinese into French.

The Spell Checker widget serves to fine-tune the translation from the Translator widget or any other (self-produced) text. The user may examine his or her self-presentation text with the help of this tool and correct spelling mistakes introduced during translation. To check spelling, the user types or pastes a text into the text entry field and clicks the “Spell Check” button.

The Text-to-Speech Engine allows listening to the pronunciation of a text. The user may listen to the pronunciation of his or her self-presentation text in order to create or check the audio file produced with the Recording widget. To check the pronunciation, the user enters a term or a text into the text entry field and clicks the “Say it” button. The voice (male or female), the language (including British and US pronunciation for English), as well as additional effects can be selected.

The Recording widget can be used to check the user’s own pronunciation or to produce audio files such as pronunciation samples, audio texts, or presentations. With this tool the user may record his or her self-presentation and compare it with the pronunciation given by the Text-to-Speech Engine.

Some additional tools, such as a business dictionary and a conjugation tool (to check the modification of a verb from its basic form), can be added to the bundle to assist the user in his or her learning activities.

SRL bundle: The students at the SOCE of Shanghai Jiao Tong University are young adult learners who typically have a job and family, and thus only limited time at their disposal. Also, the students are average learners in the sense that they have little knowledge about how to learn, specifically only limited SRL skills.

For that reason, SJTU, together with Koblenz and supported by FIT and Uppsala, devised a bundle for supporting SRL (see Fig. 7). The bundle consists of three widgets: the To-Learn list, the Activity Recommender, and the Contextualized Attention Metadata monitor. It also contains a video illustrating the usage of the widgets.

Fig. 7 A screenshot of the self-regulated learning bundle

The Activity Recommender widget (bottom right in Fig. 7) gives hints about how to process a given task. In this bundle, the task consists of “how to create a presentation.” With the tool, users compile a learning plan consisting of learning activities. The tool shows the current task, matching learning strategies, a list of concrete learning steps, and additional information.

The To-Learn list widget (bottom left in Fig. 7) allows users to compile and modify a learning plan. Users can add, rearrange, delete, and rename recommended learning activities or add their own activities. The Activity Recommender sends learning activities to the To-Learn list; students can keep or discard them as they wish.

The CAM monitor tracks how students use the widgets by storing the events sent from the other widgets in a central database. This widget is not directly useful for the students in this case, but it allows evaluating their usage.
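
The hand-over from the Activity Recommender to the To-Learn list is again realized through inter-widget communication. The following TypeScript sketch outlines how such a hand-over could be wired up with the gadget publish/subscribe mechanism; the channel name and payload are illustrative assumptions, not the widgets’ actual message format.

```typescript
// Sketch: the Activity Recommender publishes a suggested activity; the
// To-Learn list subscribes and appends it. Channel and payload are illustrative.
declare const gadgets: any; // provided by the widget container

interface LearningActivity {
  title: string;    // e.g., "Collect key points for the product slides"
  dueHint?: string; // optional hint, e.g., "before next class"
}

// In the Activity Recommender widget:
function recommend(activity: LearningActivity): void {
  gadgets.pubsub.publish("srl.activityRecommended", activity);
}

// In the To-Learn list widget:
gadgets.pubsub.subscribe("srl.activityRecommended", (activity: LearningActivity) => {
  addToLearnItem(activity); // the student can keep, reorder, rename, or delete it
});

function addToLearnItem(activity: LearningActivity): void { /* render a new list entry */ }
```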

The case study done at SJTU that evaluated the bundle highlighted several points that are of utmost importance when using such a bundle in class. First, due to the novelty of the widgets, students will probably be unclear about how precisely they should use each tool on its own. It is thus recommended that extensive support is provided and that instructions are available that explain the tools. Secondly, the two main widgets in this bundle interact, which is not frequently observed as a behavior in existing systems. This has to be explained in the tool itself. The instructions should cover the usage but also clearly explain the purpose of the widgets, since again the offered functionality is seldom encountered in existing software.

Evaluation and Methodology

The evaluations of PLE usage at SOCE started in 2009 and continued until the end of the ROLE project in 2013. Each semester, PLE technology was used in a number of courses, mostly language learning courses, namely several English courses (English Listening, English Speaking, Critical Reading) and two two-semester-long introductions to French and German, but also a Computer Science course (Data Structures). The number of students varied over the courses and semesters. On average, about 200 students were enrolled in each language learning course, about 20–25 came to the lectures in person, and about 50 (25%) took the final exams. These numbers are typical for the second foreign language (which is deemed rather unimportant by the students). About 1,200 students per semester took part in the Computer Science course, with a percentage of students participating in the final exams similar to that of the language learning courses. In each course, an example PLE was assembled by the teacher, supported by a team of researchers. In the language learning courses, the PLE was introduced and used in class, and the students were expected and encouraged to use it outside class. In the Data Structures course, the PLE was created at the end of the semester, and students were supposed to use it for preparing for their exams.

Results

Results from the PUEU surveys: The survey used at SOCE is based on the Technology Acceptance Model (TAM, cf. Venkatesh and Bala 2008) and consists of a set of statements that measure the Perceived Usefulness and Ease of Use (PUEU) of PLEs. The PUEU questionnaire has therefore constituted the “core” part of the surveys used among the users (educators, learners, employees, trainers, etc.) of the ROLE test beds. Naturally, the rest of the questions included in these surveys targeted the specific context of each test bed. At SOCE, surveys were created for delivery in both English and Chinese. Students were informed that they should complete them once they had finished their course of study; participation was entirely voluntary. For example, the survey for the French and German courses was completed by only 20 of the 150 students. This was despite the fact that the bespoke Moodle spaces were regularly used by students in class and were actively demonstrated by the teacher. The survey results show that 65% of students indicated that they used the PLE, while 20% stated that they did not use it, with the remaining 15% reporting that they did not know what was meant by the term PLE. Similarly, it was noted that 65% of students felt that their knowledge of e-learning was quite good, with the remaining 35% rating their knowledge as high or better.

Students reported an overall positive experience of using the PLE. Using a Likert-type rating scale with a range from 1 (strongly disagree) to 5 (strongly agree), all but two students either agreed or strongly agreed with the specified statements, such as “I find a PLE useful for my work,” “I accomplish my work more effectively with a PLE,” “I find the exercises provided in the PLE helpful for my learning,” “It is easy for me to use a PLE,” and so on (see Appendix x for the detailed results of this PUEU survey). Interestingly, the two negatively formulated questions, namely “I find using a PLE frustrating” and “I find interacting with a PLE requires a lot of my mental effort,” received less positive answers: 7 students agreed or strongly agreed, and 13 students were neutral or disagreed.

Several questions in the PUEU survey inquired whether students would like to modify their PLE by adding new widgets or by replacing widgets. The majority of students (15 or more) agreed with this premise. Furthermore, the perceived value of specific widgets was also investigated. The students remarked that they thought the preassembled PLE/Moodle space was well designed, which manifested as very positive ratings for all widgets. The actual Moodle usage logs, however, indicate that only a few ROLE widgets were actually used by the students (mainly the translation tool and some of the exercises), although the teacher encouraged usage in every class. The results from the survey, therefore, can be seen as representative of potential interest rather than reflecting active ROLE widget usage.

Significantly, the English students completed fewer surveys than those on other courses, and correspondingly their rating was also less positive. In the English courses, it appears that the focus was on the exercises and that, in contrast to the German/French courses, the teacher used the PLE/Moodle spaces less frequently in class. The survey also revealed that the majority of English students did “not know what was meant by PLE” (more than half of the given answers). Those English students who declared an understanding of the term PLE also rated the perceived usefulness and ease of use positively, albeit less so than the students in the German/French courses. It also appears that the English students are less inclined to assemble or change widget spaces (as evidenced by the answers to the respective questions). In addition, the responses relating to the perceived value of the provided tools appear to be mostly average, with only the translator and the exercises being rated slightly more positively.

Results from the Activity Recommender and To-Learn task list: In addition, SJTU conducted a further evaluation of two specific widgets, namely the To-Learn list and the Activity Recommender, within the Business English class. The To-Learn list enabled students to define and work on to-do lists specific to learning. The Activity Recommender supported students in preparing a presentation and could add tasks to the To-Learn list if the student agreed. For this evaluation, the students were given the mandatory homework of preparing a set of slides for a product presentation and also needed to use a PLE space that contained the two widgets. Because the usage of the widgets and the completion of the survey were mandatory, a large number of students (240) completed the survey.

Initially, a preliminary evaluation took place a few months before the deployment of the second survey, which evaluated the usage of the implemented ROLE tools. In this first evaluation, the Activity Recommender was implemented as a mock-up using slides to introduce the widget. The immediate student feedback and a recording of the class’s reaction to the idea of this widget served to inform the ROLE partners of some potential shortcomings. The second evaluation, using the implemented tools, took place over a period of 4 weeks. The students received the (previously described) task of authoring a set of slides portraying a real or imagined product that a company would sell. The task was mandatory homework, and the students were instructed to use the two ROLE tools while authoring it.

The evaluation revealed several problems: some related to technology (e.g., features did not run due to browser issues) and some related to task/tool understanding (e.g., students did not understand the purpose of the tools). Thus, while the overall student participation was very high (given that this was mandatory homework), the dissatisfaction of the teacher and students (as conveyed orally and by email) was also high, resulting in the conclusion that the tools needed to be improved for more effective usage in a classroom environment.

The quantitative and qualitative outcomes arising from this evaluation are based on survey results with a sample size of N = 239. The answers to the free text question “Did you have any problems using the Activity Recommender and the To-Learn-List?” revealed a series of technical issues and disclosed a lack of understanding among the student group, which manifested as significant usability problems for them. That may have been influenced by the fact that SRL was a concept these learners were not used to. Since the learners’ level of English was relatively high, the problems were not primarily caused by comprehension difficulties. It is important, however, to take into account that the Learning Activity Recommender widget they were using was originally designed for a specific group of learners who would have needed substantial support and guidance related to their cognitive level and, therefore, was not designed with all learners in mind. Nonetheless, the overall survey results can be interpreted as quite positive with regard to the perceived usefulness of the ROLE widget bundle. Since the To-Learn list in the evaluated widget bundle offered support for learners with good metacognitive competences, it could be used as a convenient tool.

Widget Authoring Tool and teacher interviews: One of the major problems regarding the usage of ROLE technology at SJTU is that most teachers do not appear to be interested in the tools, as they feel that there are too few appropriate widgets available that they can use in their courses. To overcome this problem, the ROLE staff at SJTU developed an easy-to-use authoring tool that requires little technical knowledge. To ensure that the authoring tool fulfills the teachers’ expectations and also meets the ease-of-use requirement, separate interviews with five teachers were arranged in which a mock-up of the tool was presented to the teachers. During the interviews, the mock-up was used to go through the authoring process. This helped to identify any immediate difficulties and enabled the immediate collection of suggestions from the teachers. The tool has now been made available at SJTU. It uses ROLE technology such as inter-widget communication to capture interaction data and to allow students to rate widgets.

The SJTU teachers involved in the courses using ROLE technology were invited to a semi-structured interview with the aim of recording their experiences with ROLE and how they would like to see it improve in the final year. Two of the teachers agreed to be interviewed online, and a third teacher provided a paper-based response to the interview questions. Overall, the respondents considered ROLE extremely useful, as it allowed learners to access materials in a more flexible manner, enabled them to self-assess their skills, and gave both learners and teachers enhanced access to course metadata.

All of the respondents stated that they would continue to use ROLE tools in the future because they were impressed by the benefits the tools brought to their learners (e.g., access to more and better resources) and to themselves (e.g., monitoring, portability). The respondents zeroed in on two areas of improvement, which they mentioned throughout the interviews. Firstly, the need to demonstrate the value of developing more contextual or subject-specific widgets and bundles:

I think what we have to do is to show more clearly the value that ROLE can bring to the teacher, so I think the basic technology is there. But it’s not very visible in the current widgets, the current tools that are available. I think this is one place where it really needs to improve.

Maybe the developers should understand the course, because different courses need different learning pedagogies. The theory and experiment methods depend on the course.

Secondly, the need to make the widgets and associated end-user-facing technologies much easier to use, requiring no more technical knowledge than using Facebook, for example.

… but maybe just something like Facebook, where I can just upload, share something, and press some simple buttons. And I think it’s still too complicated for me, and also for my students, as I described before. I’m getting a lot of questions. So I think the usability definitely needs to be improved.

Conclusions

Lessons Learned

Two major lessons were learned from the evaluations at the SOCE test bed: first, the significant role of guidance in such an HE setting, and second, the importance of having a sufficient number of widgets.

The first lesson became obvious quickly after the initial evaluations based on the Liferay system (cf. Ullrich et al. 2010). The initial approach to PLE usage consisted of introducing the PLE during class on the basis of an example. Then, the students received homework that required them to use the PLE as demonstrated. For instance, the teacher showed how to record a video using a Web 2.0 service, and the students’ homework consisted of recording a video, without a specific topic being given. This open approach was motivated by the thought that the more open the task, the more motivated the students would feel. However, it failed: none of the students did this or any similar open homework, even though, interestingly, they rated the perceived usefulness (PU) of the PLE very highly. In the next iteration, when the teacher gave more guidance and set specific tasks to perform, the number of handed-in homework assignments increased. We believe the low initial uptake despite the high PU is due to several reasons. First, students quickly become overtaxed: the concept of a PLE is unfamiliar, the embedded services are new to them, and they have only limited experience with Web 2.0 in general. Second, students often do not see the value in learning how to use these tools; they feel it distracts from learning grammar and vocabulary and does not prepare them for the exam. Third, most of the students (and teachers as well) are not intrinsically motivated to use Web services, and the majority of our students feel that the time could be spent more effectively. Thus, the task of the teacher is to demonstrate and highlight the advantages of a PLE and to guide the students through it, so that they can arrive at an understanding of what a PLE offers.

Secondly, uptake of ROLE was significantly hindered because only a few domain-specific widgets were available. In the case of SJTU, teachers were less interested in general-purpose widgets and asked for widgets covering very specific domain knowledge. Content available in existing Learning Object Repositories was not used in a single case, since these resources differed too much from the teachers’ specific needs. For instance, existing learning objects for French depended too heavily on their original course book and were not usable in the SJTU courses because the vocabulary differed too much. Yet, teachers did find usable resources on websites not available in learning object repositories. We therefore had to enable teachers to turn these Web resources into widgets usable in their PLEs by means of a widget authoring tool. In addition, ROLE staff at SJTU offer extensive support to the courses that use ROLE technology: technical teams set up the widget spaces and create widgets for those teachers who wish to avail of this service. We observed that only those teachers who were extensively supported actually used ROLE technology.

Finally, it is important to note the discrepancy between the often very positive ratings given in the PUEU surveys on the one hand and, on the other, the often negative vocal feedback or confusion observed in classes using ROLE technology as well as the low actual usage recorded in the logs. This indicates that, at least in the Chinese context of SJTU, such survey results should be interpreted with caution. Notwithstanding this caveat, SJTU remains convinced that ROLE technology can enable the easy creation and usage of interactive activities that make the overall classroom experience more interesting and thereby empower teachers to offer extra learning activities that go beyond what a standard Moodle online course can offer.

Outlook

SOCE will continue using ROLE technology after the official ROLE project has ended. The teachers who used PLEs in their classes during the project’s runtime have carried their PLE spaces over into the course Moodle sites of the new semesters. Also, during presentations of the authoring tool, new teachers expressed interest in creating their own widgets; one teacher from the Social Science department created widgets from Web games about different political topics.

Uppsala University: ROLE for Distance Students Working Collaboratively in Small Groups

The Uppsala University test bed was set up within the distance course “Social software and Web 2.0.” The course was given during the summer semester in June and July 2011 with 34 students and in the spring of 2012 with around 20 students; it corresponds to 5 weeks of full-time study. A university-wide LMS installation was used throughout the course for the main interaction with the students. The course consists of four assignments, each containing a part to be done individually and a part to be done within a group.

Learning Scenario

The test bed involved one of the four assignments in the course. The assignment aimed to give the students a deeper look into how social interactions are used professionally, specifically by a corporation that communicates with its customers via Twitter. The students were tasked with finding patterns of behavior and examining how social interactions work in a specific medium (in this case, micro-blogging for customer relations).

The goal of the assignment, as stated by the teacher of the course, was to use a typology presented by Shaw et al. (2011) to categorize tweets (i.e., 140-character statements made on the social media platform Twitter) sent to and from the Swedish train company SJ during the winter of 2010/2011. This particular winter was unusually problematic for Swedish train traffic, with extreme weather conditions resulting in severe delays all over the country. The assignment thus involved analyzing Twitter discussions about traffic disruptions, mainly concerning commuting. In addition, the students were required to read a paper coauthored by the teacher (Larsson et al. 2011) in which a more condensed typology was used to categorize the same tweets. The students were divided into six groups of 3–5 for the whole duration of the course and were told to categorize the tweets collaboratively within these groups.

The Personal Learning Environment

The assignment was carried out with the ROLE Uppsala prototype (a slight adaptation of the ROLE SDK with single sign-on for the students), mainly using a widget bundle that was developed specifically for the assignment. The bundle resulted from discussions with the course’s teacher about which tools he would find desirable for the assignment.

Technical realization: The Analyze Tweets bundle allows students to take a closer look at tweets. Figure 8 shows the bundle’s five widgets. First, tweets and tweet conversations are presented in a timeline. Second, the same tweets are presented in a list and can be tagged with 18 different categories. Third, a pie chart shows the number of tweets tagged with each category. Fourth, the tweets can be discussed in a forum, and references to individual tweets can be inserted into posts as links; this allows students, at a later time, to refocus on the referenced tweet by clicking the link in the post (the timeline and tweet list adjust to show the tweet). Finally, any related content can be shown in the content viewer; for example, links to Web pages mentioned in tweets appear here when clicked. This bundle makes heavy use of inter-widget communication, and most interactions performed inside an individual widget have consequences in other widgets.

Fig. 8 The Analyze Tweets bundle
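As a sketch of the inter-widget communication just described, the following TypeScript fragment shows how clicking a tweet reference in a forum post could broadcast a “tweet selected” event that the timeline and tweet-list widgets react to by refocusing on that tweet. The event name, payload, and simple in-page event bus are assumptions made for illustration; the Analyze Tweets bundle’s actual implementation may differ.

```typescript
// Illustrative sketch; names and transport are assumptions, not the bundle's code.

interface TweetSelectedEvent {
  type: "tweet.selected";   // hypothetical event name
  tweetId: string;          // id of the tweet to focus on
  source: string;           // widget that emitted the event, e.g., "forum"
}

type Handler = (event: TweetSelectedEvent) => void;

// A minimal in-page event bus standing in for real inter-widget messaging.
class WidgetBus {
  private handlers: Handler[] = [];
  subscribe(handler: Handler): void {
    this.handlers.push(handler);
  }
  publish(event: TweetSelectedEvent): void {
    this.handlers.forEach(h => h(event));
  }
}

const bus = new WidgetBus();

// Forum widget: clicking an inline tweet link broadcasts the selection.
function onTweetLinkClicked(tweetId: string): void {
  bus.publish({ type: "tweet.selected", tweetId, source: "forum" });
}

// Timeline and tweet-list widgets: scroll to and highlight the selected tweet.
bus.subscribe(event => {
  if (event.type === "tweet.selected") {
    console.log(`Refocusing timeline and list on tweet ${event.tweetId}`);
  }
});
```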

The bundle is intended to be useful when investigating tweets and the conversations they form. Students can make sense of the various ways people use tweets to communicate by applying the provided vocabulary to categorize tweets. The forum allows students to discuss problems that appear, for instance, when some tweets do not fit into any of the provided categories.

The purpose is to give students a chance to experiment with one way of doing research in social science. A teacher can assess students’ performance by examining the categorizations the students make as well as their activity in the forum.

Evaluation and Methodology

An initial pre-study was conducted as a survey during the summer course and gave input for the design of the full study in the spring of 2012. One of the results showed the importance of providing a prepared environment rather than relying on the students’ ability to assemble a suitable environment from scratch. On the positive side, the students reported that multitasking and having multiple tools on the screen at once was nothing new to them (Jonsson 2012).

Based on the pre-study and the constraints on what we could do within the given course, we decided to test a limited set of the functionality provided by ROLE. We chose to focus on inter-widget communication and on how students perceive a user interface in which multiple widgets are used in combination to reach a predefined goal.

Surveys were chosen as the evaluation method for the test bed because the evaluation took place in a distance course, where it was problematic to conduct interviews or use other methods that require the investigator to be physically co-located with the users. The survey consisted of 28 questions and was answered by 16 of the approximately 20 students taking the course (~80%). The total number of students is approximate because of a discrepancy between the number of students registered for the course and the number actually attending it.

Out of the 28 questions, 23 were formulated as statements, and the students were asked to position themselves on a 5-point Likert scale with the polar values labeled “I fully agree” (1) and “I do not agree at all” (5). As we did not label the intermediate scale steps between the extremes, we assume an interval scale between them. In order to detect survey artifacts, some of the statements were formulated with a mirrored scale; responses to these were inverted a posteriori to make it easier to interpret averages and correlations between statements. Of the five remaining questions, four were free text and one consisted of multiple-choice checkboxes.
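As a small illustration of the inversion step and the correlation analysis mentioned above, the sketch below inverts mirrored items on the 1–5 scale (a response x becomes 6 − x) and computes a Pearson correlation between two statements. The data structures and statement identifiers are hypothetical; this is not the analysis code used in the study.

```typescript
// Hypothetical survey analysis sketch, not the study's actual scripts.

type Response = Record<string, number>; // statement id -> rating 1..5

// Invert mirrored items so that higher values always point in the same direction.
function invertMirrored(responses: Response[], mirroredIds: string[]): Response[] {
  return responses.map(r => {
    const out: Response = { ...r };
    for (const id of mirroredIds) {
      if (id in out) out[id] = 6 - out[id];
    }
    return out;
  });
}

// Pearson correlation between two statements across all respondents.
function pearson(responses: Response[], a: string, b: string): number {
  const xs = responses.map(r => r[a]);
  const ys = responses.map(r => r[b]);
  const mean = (v: number[]) => v.reduce((s, x) => s + x, 0) / v.length;
  const mx = mean(xs);
  const my = mean(ys);
  let cov = 0, vx = 0, vy = 0;
  for (let i = 0; i < xs.length; i++) {
    cov += (xs[i] - mx) * (ys[i] - my);
    vx += (xs[i] - mx) ** 2;
    vy += (ys[i] - my) ** 2;
  }
  return cov / Math.sqrt(vx * vy);
}

// Example: invert the (hypothetical) mirrored statement "q7", then correlate
// perceived collaboration support ("q12") with perceived usefulness ("q15").
const cleaned = invertMirrored([{ q7: 2, q12: 4, q15: 5 }, { q7: 5, q12: 2, q15: 2 }], ["q7"]);
console.log(pearson(cleaned, "q12", "q15"));
```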

For more details about the evaluation, please see the extended report concerning the evaluation of the Uppsala University ROLE prototype by Lind and Laaksoharju (2012).

Results

Most of the students were at some point annoyed with something about the system (AM = 2.25, SD = 1.07). This judgment co-varied negatively with the impression that the system was working flawlessly.

The students did not report any difficulties in learning how the system worked and generally considered the platform to provide good support for solving the assignment (AM = 3.50, SD = 0.82). That widget content changed automatically when different actions were performed in the system was conclusively seen as not confusing (AM = 4.1, SD = 0.93). The students also seemed to think that the system was relatively easy to work in (AM = 3.19, SD = 0.75). However, there was a tendency to judge the number of required mouse clicks as too high (AM = 2.44, SD = 1.32).

Students’ overview of their working process was perceived as good, with an average of 3.5 and a standard deviation of 1.0. The fairly high standard deviation stems from responses being spread over the whole scale, though the majority (12 respondents) chose a rating of 3 or 4, as reflected by the average above and by the median of 4. Significant correlations were observed between this statement and the level of participation and collaboration, the perception of the system as practical, and the system’s impact on the motivation to complete the assignment.

The students did not seem to have any significant problems understanding how to use the system for the assignment (AM = 3.75, SD = 0.86). Interestingly, this statement did not have any significant correlation with the other statements in the survey.

The support from the system for doing the assignment was perceived as fairly high (AM = 3.5, SD = 0.82). This statement correlates with 11 of the other statements, making it the second best predictor after the perceived support for collaboration.

Applicability in other areas was generally seen as high (AM = 3.38, SD = 1.03). Significant correlations were found between this statement and the perception of the system as motivating/tiring (inverted), the perceived support for collaboration and the statement that the student would have preferred another tool (inverted).

Perceived support for collaboration received the highest count of correlations with other statements, 12 out of 22, making it a good predictor of the overall perception of the system’s usefulness. The statement itself received an average of 2.75 with a standard deviation of 1.07, which is on the lower half of the scale, though still relatively high considering that the system is at a development stage.

The statement that the system was practical to use received an average of 3.31 with a standard deviation of 0.95, indicating that the students’ perception of the system’s practicality was fairly high. This statement is also strongly correlated with the perceived lack of problems in the system, the support for collaboration, and the perception that the system increased the students’ motivation.

The students felt to a high degree like they were part of a team while working in the system (AM = 3.94, SD = 0.93).

Conclusions

Generally the results of the evaluation were positive. They indicate that the students were quite satisfied with the overall usability of the system and perceived that the system increased their motivation for learning and collaboration.

Many of the students were, however, at some point annoyed with the system. We have seen in other studies that students have a fairly low tolerance for usability problems in systems that they are expected to use in their studies. The perception of the system as a whole may thus be biased toward a more negative impression than would have been the case if the system setup had been more stable.

An interesting, positive result concerning the users’ perception of one of the unique features of this prototype, namely that widget content changed automatically when different actions were performed in the system, is that it was conclusively seen as not confusing, thus supporting this novel avenue toward automating internal communication between widgets. Initially there were concerns that this technologically rather advanced approach would also be perceived as complicated by the users, a fear which proved to be unfounded. However, since the students did not get the opportunity to compose their own, unique set of widgets, we cannot determine whether this acceptance was due to a perception of the system as one united whole or whether widget behavior will remain predictable when users pick and choose widgets at will. Future evaluations should address this question.

The results for how useful the different widgets were for solving the assignment are interesting, especially the low score for the tweet timeline. The ambition with this widget is to visualize patterns of interaction between Twitter users over time. However, either few of the students had use for this functionality or the interaction with the widget was not satisfactory. Looking at the classification data, almost half of the total classifications concern reactions and discussions, for which the widget should be a useful tool. The conclusion is that the widget functionality was not apparent to the students, which can be due either to interaction problems or to learning problems.

The best predictor for the perceived value of the platform was how well it supported collaboration. This means that students who considered the platform to support collaboration also considered it to be valuable and vice versa. The conclusion we draw from this is that the collaborative aspects of the tool were something that the students expected, and either perceived as present or not. This suggests that the system was perceived primarily as a tool to support collaboration in the solving of an assignment. The fulfillment of the perceived purpose of a tool determines its value assessment. This is also a good result for the prototype as it shows that the students who thought the system was a good platform for collaboration also thought that the system would be useful in the context of other courses.

A perceived good overview seems to be very important for the overall impression of the system as it correlated with other important features, viz., the level of participation and collaboration and the system’s impact on the motivation to complete the assignment.

Generally the students felt highly engaged in the team effort of solving their assignment. It is not a controversial claim that a good overview in the system also positively affects the perceived presence of team members. Thus, if an important goal for the system is to stress the value in collaboration, creating a sense of good overview should be highly prioritized.

Lessons Learned and Outlook

The evaluation resulted in the following recommendations:

  1. Investigate why some participants perceived the platform as good for supporting collaboration and others did not, as this appears to be an important determinant of the general impression of the platform.

  2. Keep the number of required interactions (such as mouse clicks) to a minimum. This can be achieved by exploiting inter-widget communication even more; the participants in the study had no trouble understanding the automated behavior of widgets even though it appears conceptually rather complex.

  3. Reevaluate the platform in a course where it is possible to fully implement the SRL pedagogy. Currently, we do not know how users of the system would cope if they were required to choose widgets themselves. In the present study, the students may not have been aware of the intricate, self-establishing nature of the widget communication, and it is necessary to find out how this is perceived when users set up the learning space themselves.

  4. To assess how complete SRL platforms are for students, future evaluation surveys should include questions about whether other means of contacting group members were used in parallel, e.g., instant messaging (such as MSN Messenger or Facebook chat), conference calls (such as Skype), or collaborative documents (e.g., Google Drive). The aim should not be to create tools that replace existing, well-functioning communication solutions but to complement them.

On the whole, the results of the evaluation were encouraging and showed that continued work on the ROLE SDK and platform was justified.

Widget Bundles for Formal Learning: Lessons Learned

In this chapter, we described the usage of ROLE technology in three different settings for formal learning in higher education. Despite the differences between the three test beds, the simple fact that ROLE technology was used to support teaching and learning at each of the locations supports the claim of flexibility made by the ROLE project. RWTH and SOCE were able to apply ROLE technology rather extensively and from early on in the ROLE project, and their evaluation results had significant influence on the later course of the project. The UU test bed took place at a later stage in the project and was therefore able to use a prototype closely resembling the project’s end product.

The evaluation work done at these three test beds has taught the ROLE consortium valuable lessons that shaped the further work:

  • The added value of ROLE has to be clearly visible to its users. What became apparent in the SOCE and RWTH test beds is that technology may not be attractive in itself; its added value needs to be clearly visible to the users. This was voiced by users in the SJTU test bed, most of whom have a full-time job, limited time available for study, and low digital literacy, but it was also found in the RWTH test bed, which involves students with high digital literacy (similarly, evaluation outcomes from the BILD test bed pointed to the need for communicating the benefits of the ROLE project to organizations).

  • The value of ROLE that needs to be made visible includes technological solutions, but also psycho-pedagogical results, e.g., the benefits of SRL, group work, etc. These results and solutions have to be explained in a way the average user (organization, student, and teacher) understands.

  • Uptake of ROLE technology depends significantly on the availability of widgets. If too few widgets are available for immediate use, then only a few teachers will employ the technology. Having a handful of general-purpose widgets, such as widgets teaching SRL or enabling discussions, is insufficient to attract users; instead, they ask for very domain-specific resources. Such demand can be met by offering authoring tools.

  • Finally, the primary users of PLEs in formal learning in higher education in the three test beds described in this chapter were teachers, not students. In these settings, teachers used ROLE technology to integrate tools into daily teaching that enabled additional learning activities. Teachers thus served as multipliers who demonstrated the potential of personal learning environments to their learners. For instance, the SOCE French teacher reported that, having worked with the PLE during the semester for their homework, his students asked whether and how they could access the PLE even after the lecture was over: they found it so useful that they wanted to continue using it.