Introduction

In problem-based learning (PBL) settings (Barrows and Tamblyn 1980; Dochy et al. 2003), groups of students are guided by staff tutors but also by student tutors (Schmidt et al. 1995). Tutors guide group sessions and promote in-depth discussion. They are also expected to encourage students to use specific cognitive skills, such as making connections, giving appropriate feedback and monitoring learning processes (Dolmans et al. 2002; Norman and Schmidt 1992). Student (peer) tutors can be fellow students (i.e. same-level tutoring) or more advanced students (i.e. cross-level tutoring). A recurring question is whether student tutors are able to successfully fulfil the complex responsibilities of a tutor.

Theoretical background

Student (peer) tutor

De Smet et al. (2009) define a student tutoring setting as a specific type of collaborative learning (Griffin and Griffin 1997; Topping 1996): students work together in small groups and a peer takes a supportive role as a student tutor. Through the scaffolding offered by their peers, students learn and co-construct knowledge (Duran and Monereo 2005).

Searching for advantages of teaching by student tutors in a PBL environment, we found that students who are familiar with PBL are better able to adjust to the difficult role of a PBL tutor (Lockspeiser et al. 2008; Schmidt et al. 1995). Although student tutors have less domain specific knowledge than staff tutors, they have the advantage of greater cognitive and social congruence with students. Student tutors are therefore likely to be as capable as staff tutors of promoting the learning of their ‘peers’ (Lockspeiser et al. 2008; Schmidt et al. 1995). Concluding his review, Topping (1996) argues that cross-level small group tutoring is an effective teaching method that merits wider use in practice. The review by Secomb (2008) reported mostly positive outcomes for the effectiveness of peer teaching: it can increase students’ confidence and improve learning.

Study achievements

The achievements of students exposed to a student tutor versus a staff tutor can provide information about the quality of teaching by student tutors (Kassab et al. 2005; Schmidt et al. 1995; Marsh and Roche 1997; McKeachie 1979). Results from earlier studies are diverse and their conclusions are not unequivocal. Schmidt et al. (1995) surveyed 800 health sciences students and found differences in study achievement between students taught by cross-level student tutors and those taught by staff tutors, with the latter group obtaining higher grades. De Volder et al. (1985) also found variable study achievements in a study of cross-level student tutors who attended the same 3-day training course as the staff tutors. Student tutors were not selected: volunteering students were accepted until the required number of student tutors was reached. This study involved 148 first-year students. In one course students with a student tutor scored lower than students with a staff tutor, but other groups showed no such differences. Kassab et al. (2005) likewise reported no differences in student achievement in a study with 91 participants taught by same-level student tutors versus staff tutors. Steele et al. (2000) investigated same-level peer tutoring versus staff tutoring in a group of 127 students and also found no differences in student achievement. Furthermore, no differences in student achievement were found in a study of cross-level tutoring with 230 (course A) and 177 (course B) students by Moust and Schmidt (1994). De Grave et al. (1990), whose study involved 165 participants and cross-level student tutors who had not undergone any selection procedure, found no differences in achievement either. Sobral (1994) reported no negative effect of cross-level student tutoring on students’ acquisition of knowledge (N = 479).

Gielen et al. (2010) examined whether peer feedback can have an equally positive effect on learning as teacher feedback in a study comparing the effects of various forms of peer feedback. The results showed no significant difference in students’ progress in essay marks after plain substitutional peer feedback versus teacher feedback, and the authors concluded that peer feedback can substitute for teacher feedback without any significant loss of effectiveness in the long run (Gielen et al. 2010). Cho and Schunn (2007) report similar findings.

Students’ perceptions

Students’ perceptions of student tutors versus staff tutors also vary. Feedback (Kassab et al. 2005) and cognitive congruence (Lockspeiser et al. 2008; Schmidt et al. 1995; Moust and Schmidt 1993) were perceived more positively in groups with a student tutor. Students also indicated that staff tutors used more domain specific knowledge (Moust and Schmidt 1995; Schmidt et al. 1995).

Peterson and Swing (1985) stated that PBL tutors should facilitate students in an indirect manner by asking stimulating questions and regularly evaluating the group process. In a study examining the perceptions of students in relation to staff versus student tutors, Schmidt et al. (1995) found first-year students had a higher opinion of the relevant contribution of student tutors and their ability to encourage questioning, whereas staff tutors were more appreciated by more senior students. Compared to staff tutors, student tutors paid more attention to the evaluation of group functioning.

Sobral (1994) found that in a PBL setting cross-level tutoring increased students’ motivation and participation. Yang et al. (2006) reported that teachers, drawing on their wide range of domain specific knowledge, often provide feedback that students fail to understand or misinterpret, because it rests on extensive knowledge of the complexities of the subject and on domain specific considerations. Cho and Schunn (2007) also found that feedback from experts is often unhelpful or sometimes even harmful to novice writers’ revision.

Training and selection

Research has taught us that it is of the utmost importance that student tutors are specially trained for their task (Arco et al. 2006; Kassab et al. 2005; Lockspeiser et al. 2008; Nestel and Kidd 2003, 2005; Parr and Townsend 2002; Wadoodi and Crosby 2002). Training can enhance the didactic skills of student tutors and thereby positively affect students’ study achievements and their perceptions of student tutors. A study by Groves et al. (2005) has implications for the recruitment and training of PBL tutors; training should focus on the development of a wide range of strategies to encourage optimal group functioning and stimulate the learning of students.

Research on peer feedback (Min 2008; Sluijsmans et al. 2002) also showed that training in peer assessment skills can make peer feedback as effective as teacher feedback.

Aim and research question

The preceding shows that studies into student tutoring report differing results and that their conclusions are not unequivocal. Better evidence is needed. After all, with growing attention to and recognition of the crucial importance of educational quality, institutions have to ensure that the teaching of their tutors is effective and excellent. Improving teaching has become a major topic in higher education (Biggs 2003). Although there is still no consensus about the concept of ‘teaching effectiveness’, researchers refer to teaching effectiveness as “the degree to which an instructor facilitates student achievement” (McKeachie 1979). Citing Marsh and Roche (1997, p. 1189): ‘The most widely accepted criteria of effective teaching involves student’s learning’. Furthermore, they stress the importance of combining such findings with other criteria, such as students’ evaluations of teaching. Students’ perceptions (student ratings of instruction) can be seen as one of the most influential measures of teaching effectiveness (d’Appollonia and Abrami 1997). The meta-analysis by Cohen (1981) provided strong support for the use of student ratings of instruction as a valid method to measure teaching effectiveness: students are able to distinguish among teachers based on how much they have learned, and the relation between ratings and achievement is strong. Whereas most earlier studies of staff and student tutors focus mainly on student achievement or examine process variables by seeking students’ perceptions, we conducted a study in which we examined both.

Recent studies (Groves et al. 2005; Kassab et al. 2005; Lockspeiser et al. 2008; Nestel and Kidd 2003, 2005; Parr and Townsend 2002; Arco et al. 2006) emphasise the importance of training student tutors. Research on the effects of student and staff tutoring should therefore incorporate a thorough training process for peer tutors and staff tutors, and this study takes this into account. Furthermore, we worked with rigorously selected student tutors, as the importance of selection has been accentuated in previous research (Weyrich et al. 2008).

In order to study the effects of student and staff tutoring, we conducted a comparative study. The design was influenced by a study (Dolmans et al. 2002) proposing that research on student tutoring should focus on student achievement and combine qualitative and quantitative methods. We therefore used a mixed-methods design and investigated student tutors who had been selected from high-achieving students and had received extensive training. The first indicator of tutor effectiveness that we examined was students’ study achievement; this indicator was supplemented by students’ perceptions obtained from a questionnaire and a focus group interview.

The study investigates the following research question: Is there a difference between staff tutors and rigorously selected and well trained student tutors with respect to students’ achievements and perceptions?

Methods

Setting

The study was conducted at the Faculty of Law of Maastricht University, a university with a fully problem-based curriculum. Hung (2009) describes PBL as one of the most widely adopted instructional methods across disciplines and professional studies, all age groups of learners, and around the globe. PBL is distinguished by its student-centred character, by significant, contextualized, real-world, ill-structured situations, and by the provision of resources, guidance, instruction and opportunities for reflection to learners as they develop content knowledge and problem-solving skills (Hoffman and Ritchie 1997). PBL promotes the development of reflective thinking (Yuen Lie Lim 2011).

In this study the curriculum is taught in 8-week courses during which students work on assignments that require them to tackle real-life problems. Small groups of students (10–14) meet twice weekly. During these group sessions the students prepare for self-study activities and report and reflect on the results of those activities. Group sessions are guided by a tutor. New groups of students are composed for each course, and students have a different tutor in each course. Students were randomly assigned to either a staff tutor or a student tutor condition. In addition to the tutorials, students attend weekly lectures and practical classes.

Selection of student tutors

We invited the students with an average final mark of seven or higher (ten-point scale) at the end of the first year to apply for a student tutorship. All the applicants took part in a rigorous selection procedure, based on the assessment centre method (Dochy and de Rijke 1995). The following selection criteria were used: motivation, knowledge, study achievements and inherent tutor skills. A committee consisting of two educationalists, a senior student and the dean of the faculty judged the students based on interviews, assignments and simulations.

The tutor training programme and the tasks of the student tutors

During their second year, the selected student tutors (N = 23) received 36 h of intensive training in tutoring skills, built around the following themes: stimulating cognitive processes, stimulating active involvement of students, scaffolding, fostering meta-cognitive strategies, reflecting on one’s own conceptions of learning and teaching, and creating awareness of one’s own (individual) tutoring style and those of others. The interactive training methods that were used included observation with elaborate reflection, peer coaching, simulations and collaborative learning. These methods are based on Dolmans et al. (2002) and are in line with De Smet et al. (2007).

During the third year of their own study the student tutors guided first-year students and attended further training and personal coaching (supervision and intervision) as well as weekly tutor meetings with staff tutors, led by the course supervisor, in which assignments and the best way to approach them were discussed. Before their actual work started, the student tutors observed each tutorial (14 different sessions) with an experienced staff tutor. This provided student tutors with new ideas and enabled them to learn from experienced tutors. Bell and Mladenovic (2008) emphasise the potential benefits from observing peers, especially when observation is integrated with an academic development programme.

The training course for the student tutors was similar to the regular 38 h teacher training course that is obligatory for newly recruited teaching staff during the first 2 years of their appointment. Other staff members are offered a variety of faculty and university based staff development activities that are tailored to their needs.

Instruments

Study achievement

The use of an achievement measure, such as a course final examination, can be seen as the most appropriate way to assess student achievement (Cohen 1981). Study achievement was measured by the grades (1–10; ≥5.6 is a pass) on the end-of-course exams, which consisted of 40 multiple choice questions and one or two open-ended questions. In order to assure the quality of these exams, a content expert and an assessment expert evaluate whether the questions are well constructed, whether the answer options for the multiple choice questions are appropriate, and whether the content and difficulty of the exam reflect the subject matter covered by the course.

Student perceptions

Student perceptions were elicited by an online questionnaire (five-point Likert scale) consisting of 12 closed questions, administered after each end-of-course exam. The questionnaire was based on a questionnaire for retrospective quality assurance (Biggs 2001) developed by Pletinckx and Segers (2001), and contained items about the tutor, such as: ‘The tutor encouraged the students to participate actively in group discussions’; ‘The tutor encouraged the use of existing knowledge.’

In order to establish relationship patterns between the dependent variables, and to explore the nature of the independent variables affecting them, factor analysis (n = 683) was performed on the 12 items (Table 1), using principal component analysis followed by a Varimax rotation. Applying a cut-off criterion of factor loadings above 0.35 and a cross-loading discrepancy of 0.20 (Nunnally and Bernstein 1994), two items (‘The tutor understood the problems faced by the tutorial group regarding the subject’ and ‘The tutor made regular use of his/her expert knowledge in guiding the group’) were removed. The remaining items of the questionnaire were reduced to four factors: stimulating function (α = 0.85), cognitive congruency (α = 0.87), use of domain specific expertise (α = 0.83) and social congruency (α = 0.80).

Table 1 Factor analysis: rotated component matrix

The four factors together explained 81% of the variance
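
As an illustration of the analytic steps described above (principal component extraction with Varimax rotation, the 0.35 loading and 0.20 cross-loading criteria, and Cronbach's alpha per scale), the following Python sketch shows one way these steps could be reproduced. It is not the analysis actually used in this study: the factor_analyzer package, the placeholder file and item names, and the reading of the cross-loading criterion as the gap between the two highest absolute loadings are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package, assumed available

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the items (columns) of one scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def item_retained(loading_row: np.ndarray, min_loading: float = 0.35, min_gap: float = 0.20) -> bool:
    """One possible reading of the criteria: keep an item if its highest absolute
    loading is at least 0.35 and exceeds the next-highest loading by at least 0.20."""
    ranked = np.sort(np.abs(loading_row))[::-1]
    return ranked[0] >= min_loading and (ranked[0] - ranked[1]) >= min_gap

# Hypothetical data: one row per respondent (n = 683), one column per questionnaire item.
responses = pd.read_csv("questionnaire_items.csv")  # placeholder file name

# Principal component extraction followed by a Varimax rotation, four factors.
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

removed = [item for item in responses.columns
           if not item_retained(loadings.loc[item].to_numpy())]
print("items removed:", removed)
print("cumulative variance explained:", fa.get_factor_variance()[2][-1])

# Reliability of one resulting scale; the item grouping below is a hypothetical example.
stimulating_items = ["item_01", "item_02", "item_03"]
print("alpha (stimulating function):", cronbach_alpha(responses[stimulating_items]))
```

In practice the factor model would be refitted on the retained items before interpreting the four factors and computing the scale reliabilities.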

A semi-structured focus group interview was conducted after the end-of-course exams to gain more in-depth insight into students’ perceptions of student and staff tutors. The participating students were encouraged to express their opinions about student and staff tutors and to react to each other’s opinions. The questioning route for the interview (Krueger and Casey 2000) was based upon the online questionnaire. The students were asked to identify and elaborate on differences between student and staff tutors in relation to each factor: stimulating function, cognitive congruency, use of domain specific expertise and social congruency. Additionally, they were asked to indicate differences between student and staff tutors in relation to the 12 questionnaire items and to discuss these differences. Two educationalists moderated the discussion.

A recapitulation of the participants’ answers was presented to them so that they could reconsider their answers. Subsequently, the participants had the opportunity to reformulate or enrich their opinions.

The focus group interview was audio-taped and transcribed verbatim.

Participants

Study achievement

The study was conducted among first-year students (novice students) in two consecutive years. Data were collected for four courses (A, B, C and D). This led to two cohorts of participants: cohort 1, course A (N = 102), course B (N = 124), course C (N = 114) and course D (N = 56); and cohort 2, course A (N = 107), course B (N = 85), course C (N = 81) and course D (N = 82). Exam results were collected for all the students who attended the courses. In analysing study achievement we distinguish between cohorts 1 and 2, as they received different end-of-course exams for security reasons.

Questionnaire

All the students who attended these courses were requested to fill out the student perception questionnaire after the end-of-course exam. Informed consent was acquired. As cohort is not a variable in this part of the study, there is no need to distinguish between the two cohorts. Respondents with missing values were removed from the dataset. The remaining numbers of questionnaire participants are presented in Table 2.

Table 2 Number of questionnaire participants per course

Focus group interview

For the focus group interview we selected students who had been tutored by two student tutors and two staff tutors (Bloor et al. 2001). From this group six students were randomly selected from each cohort and invited to take part in a focus group interview.

Data analysis

Quantitative

We used SPSS 15 to conduct the quantitative analyses. ANOVA was conducted to identify significant differences between students in study achievement and in answers to the questionnaire. When the assumption of homogeneity of variance was not met, a Kruskal–Wallis test was performed instead. Effect sizes (Cohen’s d) were calculated using pooled variances weighted by sample size (Hojat and Xu 2004).
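
As an illustration of this decision logic (ANOVA when variances are homogeneous, a Kruskal–Wallis test otherwise, and Cohen's d based on the sample-size-weighted pooled variance), a minimal Python/SciPy sketch is shown below. The analyses in this study were run in SPSS 15; the use of Levene's test to check homogeneity of variance, the 0.05 threshold and the example grades are assumptions made only for this sketch.

```python
import numpy as np
from scipy import stats

def cohens_d(group_a, group_b):
    """Cohen's d based on the sample-size-weighted pooled variance."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def compare_conditions(staff_scores, student_scores, alpha=0.05):
    """Run ANOVA if Levene's test suggests homogeneous variances,
    otherwise fall back to a Kruskal-Wallis test."""
    if stats.levene(staff_scores, student_scores).pvalue < alpha:
        name, result = "Kruskal-Wallis", stats.kruskal(staff_scores, student_scores)
    else:
        name, result = "ANOVA", stats.f_oneway(staff_scores, student_scores)
    return name, result.statistic, result.pvalue, cohens_d(staff_scores, student_scores)

# Hypothetical exam grades (1-10 scale) for one course, per tutor condition.
staff_condition = [6.4, 7.1, 5.8, 6.9, 7.3, 6.2]
student_condition = [6.6, 6.8, 6.1, 7.0, 6.5, 6.3]
print(compare_conditions(staff_condition, student_condition))
```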

Qualitative

The data were transcribed and indexed (Bloor et al. 2001) to combine all the data pertaining to a particular factor (stimulating function, cognitive congruency, use of domain specific expertise and social congruency). First the focus group responses were organized according to the question to which they responded. Next, we coded the responses in accordance with the four factors. As the goal of the focus group interview was to gain more in-depth insight into students’ perceptions of student and staff tutors, the following questions served as guidelines while interpreting the focus group data: What was known from the results of the questionnaire and is confirmed or contested by the focus group data? What is new that was not previously suspected from the results of the questionnaire?

Two researchers, one of whom had no involvement in the actual focus group interview, interpreted the data separately. Through reflection and discussion they reached consensus. The results are illustrated by quotes representing opinions that were consistently expressed during the interviews.

Results

The results are presented separately for study achievements and perceptions of staff tutors versus student tutors. The results for student perceptions are organized according to the four factors: stimulating function, cognitive congruency, use of domain specific expertise and social congruency.

Study achievements

Descriptive statistics are presented in Table 3.

Table 3 Mean study achievements (on a ten-point scale) and standard deviations, per course and cohort

The differences in achievement between students guided by a student tutor and those guided by a staff tutor are not significant and all effect sizes are small (d ≤ 0.50) (Table 4).

Table 4 Study achievements of students: results of the analysis of variance and effect size

Student perceptions

The analyses of the questionnaire and the focus group show positive perceptions of both student and staff tutors. There are some significant differences but these are not consistent across courses. Table 5 shows the results on the questionnaire for the four factors.

Table 5 Student perceptions (questionnaire): mean scores (1–5) and standard deviations, per course

The results for course A show more positive perceptions of staff tutors compared to student tutors for three factors: stimulating function (χ² = 7.8, df = 1, P = 0.005), cognitive congruency (χ² = 12.3, df = 1, P < 0.001) and use of domain specific expertise (χ² = 10.7, df = 1, P = 0.001). The effect sizes for these three factors are medium (d > 0.50). There is no significant difference for social congruence in course A, and its effect size is small. There are no significant differences between perceptions of staff and student tutors for courses B, C and D, and all effect sizes for these courses are small.

During the focus group interview the importance of a good tutor was strongly emphasized by the students. The general view is that a good tutor is enthusiastic, knowledgeable and keeps students focused. The results of the focus group interview also indicate that students’ perceptions of both staff tutors and student tutors are positive with regard to all the factors.

The stimulating function of the tutor

The focus group interview shows that the stimulating function of the tutor is deemed very important by the students and that students see no differences between student and staff tutors in this respect:

“If you feel comfortable in a group and the tutor makes sure that all the students are actively involved and not afraid to ask questions, the discussion is better. In this respect, I don’t see any difference between student tutors and staff tutors. No, they do it both, it depends on individual tutors.”

Students say that both student and staff tutors ask stimulating questions. Differences are related to individual tutors. The tutor’s enthusiasm is considered a very important aspect of the stimulating function and students report no differences between staff and student tutors in this respect. According to the students, student tutors pay more attention to the introduction of new assignments:

“Student tutors take more time for the preliminary talk. They spend more time discussing the learning goals. Staff tutors are more likely to state: ‘this is important’.”

Students agree that during the group sessions student tutors give more feedback about the assignments provided in the course book. Student and staff tutors both stimulate in-depth reporting of the results of self-study activities.

Cognitive congruency

Students indicate that student tutors show more cognitive congruency than do staff tutors.

“A staff tutor knows the literature so well that they don’t see a difficult question as a problem. A student tutor can better relate to the students.” “Student tutors are better able to give clear explanations. Student tutors do not use difficult terminology so often.” “Student tutors can explain things more clearly, because to staff tutors everything is self-evident.” “Staff tutors explain things differently. Student tutors are better at explaining things. Of course the best part is that I understand it.” “Student tutors make remarks with respect to content at the right time. Staff tutors elaborate more on a subject because they have more knowledge.”

Students also remark that student tutors make more use of schemes and the whiteboard. This contributes to students’ perception that student tutors explain more clearly. Additionally, students say that student tutors formulate questions in such a way that they are easier to understand.

“Student tutors ask a question that is clear and staff tutors ask such vague questions that everybody thinks: what is he talking about. Then a whole explanation has to follow. And then you think: oh yes, this is how we should interpret the question.”

Students also say that student tutors have a better idea of students’ prior knowledge:

“Student tutors know better what you already know and they can work with that. That’s an advantage.”

Domain specific expertise

Students think that staff tutors have more domain specific expertise and make more use of it. According to the students this can be an advantage. Nevertheless, respondents noted that in the first year domain specific expertise is not very important.

“Staff tutors use more difficult terminology. At first you think ‘what am I supposed to do with that’, but it is also nice to look it all up. Staff tutors are more aware of the latest developments in their domain of expertise. When they tell you about that, you remember it.”

Social congruence

There is unanimity among students that, compared to staff tutors, student tutors are more involved with the group and more open to their opinions:

“Some staff tutors may be quite open to students’ opinions, but you don’t have to give your opinion very often. The advantage of student tutors is that they know what it is like to be a student. That studying is not the only thing you do, because older people think all you do is study and that is not true. With student tutors the atmosphere is more open, because they know what it is like to study and that is also very nice.”

Discussion and conclusions

Based on the assumption that the tutor’s domain specific expertise can play an important role in the learning processes of students, one would expect groups with a staff tutor to do better on exams than groups with a student tutor (Schmidt et al. 1995). However, like the studies by Kassab et al. (2005), Steele et al. (2000) and Moust and Schmidt (1994), our study finds no such differences. The definition of domain specific expertise is of course arguable. The level of domain specific expertise required to promote effective learning in a PBL environment is not a given, but depends on students’ prior knowledge and familiarity with PBL (Neville 1999). Considering that tutors’ domain specific expertise gains importance as students advance in knowledge (Moust 1993; Schmidt et al. 1995), this factor might be of less importance in the first year of the curriculum. This appears to be borne out by the results of the interviews in this study, which show that first-year students do not attach great importance to the tutor’s domain specific expertise. Studies have shown that student tutors are likely to show more cognitive congruency (Moust and Schmidt 1995; Moust 1993) and social congruency (Lockspeiser et al. 2008; Schmidt et al. 1995; Moust and Schmidt 1995) with students. The results of the questionnaire do not support this, but the results of the focus group interview are in line with the differences reported in the literature between student and staff tutors in domain specific expertise and in cognitive and social congruency. These results appear to support the finding by Moust and Schmidt (1995) that student tutors’ strong cognitive congruency compensates for their lack of domain specific expertise. The results of the focus group interview are also in line with claims that teacher-initiated revisions are less successful than peer-initiated revisions because teacher feedback is more often misinterpreted (Yang et al. 2006). The results of the focus group interview indicate that differences between staff and student tutors in domain specific expertise and in cognitive and social congruency do not affect students’ general perceptions of tutors. Finally, it appears from the interviews that students see the tutor role as very important to their learning and think that staff and student tutors are equally able to perform this role effectively. In general, students showed no preference for either group of tutors.

The quantitative results for perceptions in one of the four courses indicate that students take a more positive view of staff tutors than of student tutors. These significant differences appeared in the first course of the year and may be explained by the fact that first-year students are unfamiliar with student tutors. The differences in perceptions are not found consistently across courses. Further research would be needed to identify the causes of these incidental differences in perceptions of student and staff tutors.

Several limitations of the current study should be mentioned.

A recurring question in research and in educational practice is what really influences achievement in PBL. This study focuses on the advantages and disadvantages of working with student tutors in PBL. Research on group dynamics, the quality of course materials, tutor interventions, motivation, expertise and the effects of reflective thinking could also contribute to a clearer view of this important question. The limited number of variables is therefore a limitation of the current study.

The grades on the end-of-course exams, consisting of multiple choice questions and open-ended questions, were used to determine study achievement. It would be interesting to look for effects with different assessment forms. A limitation of the current assessment form, a combination of multiple choice and open-ended questions, is that such exams may mainly assess a knowledge construct, whereas the tutorial within PBL emphasizes other aspects.

Although the focus group sample size was appropriate for our goal, given the specific selection of participants it may be useful to work with more focus groups.

The study was conducted in a particular setting: first-year PBL courses at a single university.

For better generalization of the findings, larger numbers of respondents and tutors would be desirable.

Notwithstanding these limitations, the results of our study suggest several valuable and promising directions for future research. Overall, the added value of this study compared to earlier studies of peer tutoring is that the student tutors were carefully selected and extensively trained. The results of this study do not warrant conclusions about the concrete impact and importance of tutor selection and training. Because training and selection of tutors in a PBL curriculum are believed to be conducive to successful task performance (Groves et al. 2005), it seems worthwhile to examine whether there is a relationship between the selection and training of student tutors and their performance. The design of this study, with rigorously selected student tutors and a thorough training process, could explain why some previous studies comparing student tutors with staff tutors, in which tutors were not selected or trained in this way, found effects disadvantageous for the student tutors. Further research confirming these findings with well selected and well trained student tutors is needed to elucidate this.

In this study we examined student achievement and perceptions. As there are many more variables that could give valuable information about this student-staff comparison, we would like to make some suggestions for further research. Future studies could focus on the level of interactivity in the groups, motivation, the quality of course materials, expertise or the effects of reflective thinking. It would also be very interesting to analyze tutors’ contributions in this research setting in a future study. Furthermore, research on differences in deep and surface approaches to learning between the student and staff tutor conditions would be useful.

Looking at students’ study achievements as an indicator of tutor quality, it is interesting to ask whether increasing grades over time and across courses could be attributed to the student tutors’ growth in expertise. It would be useful to offer all the student tutors in this study a second year of tutoring and then compare students’ study achievements and perceptions over time in relation to the student tutors’ growing expertise. It is also a challenge to find out whether working with other assessment forms within a PBL setting yields similar results.

New studies should try to verify our findings in other knowledge domains and other educational settings. Future research could use a proxy measure in order to compare equal groups. Furthermore, it would be very interesting to examine the effects of tutoring on the student tutors themselves. Individual characteristics of the student tutors, such as experience in working with groups, could also be considered in further research.

In conclusion, the results for students’ perceptions and exam results suggest that carefully selected and trained student tutors have neither a positive nor a negative impact compared with staff tutors. Student tutors are inevitably less experienced than staff tutors, but in the first curricular year this apparently does not translate into poorer exam results. The results of this study therefore warrant a negative answer to the research question: there appears to be no difference between staff tutors and rigorously selected and well trained student tutors with respect to students’ achievements and perceptions. This study indicates that well selected and well trained student tutors are able to successfully undertake complex tutor responsibilities (Dolmans et al. 2002; Norman and Schmidt 1992). Giving good students the opportunity to participate in a student tutor programme thus appears to be justified, and can offer first-year students an extra stimulus to obtain high grades in order to be selected for the programme.