Introduction

Developments in education in general and in tertiary vocational education in particular illustrate a shifting balance from external teacher regulation to student (self-)regulation. In recent years, there has been an abundance of literature which contrasts the ‘new’ learning with the ‘old’ learning (e.g. Boekaerts et al. 2000; Grabinger 1996; Simons et al. 2000). According to self-determination theory (e.g. Deci and Ryan 2000), student autonomy is beneficial for intrinsic motivation. This is in line with the socio-constructivist plea for activating students and stimulating inquiry, self-regulation and collaboration. Self-regulated learners are motivated, independent and metacognitively active participants in their own learning (e.g. Bastiaens and Martens 2000; Boekaerts and Martens 2006). Others contest this view. Mayer (2004), for instance, argues against discovery learning because it makes learning environments disordered and unpredictable, and thus ineffective: such environments increase task complexity in an unwanted way. This debate on self-regulation is important, but little research provides empirical evidence for either position. On the one hand, student-controlled learning environments can have motivational benefits, because “students fed a continuous diet of well-structured tasks might shortcut learning and self-regulation” (Lodewyk and Winne 2005, p. 3). On the other hand, research on cognitive load shows that teacher-controlled learning environments lead to more effective learning with less extraneous load (Mayer 2004). Because empirical explorations of student-regulated programs are scarce, we initiated the current study.

Self-determination theory (SDT; Deci and Ryan 1985, 1995; Ryan and Deci 2000) puts forward three basic psychological needs that are preconditions for personal growth, integrity and well-being: the needs for autonomy, competence and relatedness. The need for autonomy refers to freedom of action, mainly to being self-initiating and to self-regulating one’s own actions. It is defined as the awareness of sovereignty in choosing and designing courses of action. Notably, sense of autonomy involves the level of autonomy that people experience, not the autonomy actually granted. The level of experienced autonomy depends on whether people’s opportunities for autonomous decision making lie within the realm of their proximal development. Therefore, even small opportunities for choice can already increase self-determination (Anderman and Midgley 1997). In other words, SDT stresses the importance of students’ perceptions of their learning environment.

The need for competence involves understanding how to attain various external and internal outcomes and being effective in performing the requisite actions. The need for competence indicates a need to experience satisfaction in exercising and extending one’s capabilities. Naturally, people seem to seek out challenges that are optimal for their level of development (Levesque et al. 2004). The need for relatedness involves developing secure and satisfying connections with others in one’s social milieu (Deci and Ryan 2000).

The conceptualisation of these three needs opens up the possibility to specify conditions that are relevant to learning and personal growth. There is ample evidence that a learning environment that satisfies students’ basic needs of autonomy, competence and relatedness promotes learning (e.g. Connell and Wellborn 1991; Deci and Ryan 2002; Deci et al. 1991; Grolnick and Ryan 1989). So, the level of need satisfaction provides an informative criterion in the comparison of learning environments that differ in the amount of self-regulation: if student and teacher regulation make a difference, then this difference would surface in measurements of students’ sense of autonomy, competence and relatedness.

At this point, one can raise the worry that it is hard to combine all the ‘basic needs’ into the same learning environment. Indeed, research has shown that, although it is clear that perceptions of control, relatedness and competence are related to intrinsic motivation, it is unclear how exactly they are interrelated (Sheldon and Niemiec 2006). This study focused on the difficulty of combining all these aspects into a learning environment. After all, too much freedom and low levels of teacher control in ill-structured tasks will cause a perception of high autonomy, but might cause a perception of low self-efficacy. On the other hand, high teacher control can occur simultaneously with perception of low autonomy. Critics of SDT and ‘new learning’ argue that students still might perceive personal efficacy but will perceive less ‘contextual’ efficacy. According to Lodewyk and Winne (2005, p. 3):

…well-structured tasks can be identified as those with straightforward operations for constructing products, predictable evaluations, and agreed-upon standards for their products. In contrast, ill-structured tasks do not make obvious the operations to use in creating products, offer erratic evaluations, and have moot standards for judging the product.

On the distinction between ill- and well-structured tasks, see also Hew and Knapczyk (2006).

Many researchers point at this complicated relation between students’ self-regulation and external regulation in the learning environment (e.g. Ten Cate et al. 2004; Vermunt 2007). In the first place, we need a clear distinction between learning environments with different amounts or degrees of regulation. As stated above, too much or too little guidance in the learning environment can hinder students’ development or (intrinsic) motivation; a balance should be found between guidance and self-regulation. Vermunt and Verloop (1999) call this constructive friction between learning and teaching, which refers to the distance between the actual developmental level, as determined by independent problem-solving capability, and the level of potential development with the assistance of others (see also Ten Cate et al. 2004). These authors distinguish between learning environments that rely almost completely on teacher guidance and those that rely fully on internal guidance. Between these extremes, they describe a stage that is often found in ‘new’ learning environments such as those described above: shared guidance (from both teacher and student), in which, at a cognitive level, students are helped to determine the importance of issues themselves; at an affective level, students are stimulated to figure out their motives; and, at a metacognitive level, students are given neither more nor less help than they actually need. As the authors state, this internalisation of teacher functions in the learner is not an easy task and might involve constructive friction.

Solving this problem is multi-faceted and complex. One of the key issues is that, although many authors agree that students’ perceptions of these different types of regulation are highly important, the exact measurement of the distinction between efficacy related to personal aspects and efficacy related to contextual aspects, as a criterion for the comparison of learning environments, has proven to be very difficult. As yet, SDT does not provide such a potentially relevant distinction in its measures, and neither do other tools for assessing students’ perceptions of learning environments, such as the What Is Happening In this Class? (WIHIC) questionnaire (den Brok et al. 2006), the Questionnaire on Teacher Instructional Behaviour (QIB) or other instruments (e.g. den Brok et al. 2004; Masui and de Corte 2005). Therefore, this article aims to unravel this distinction in learning environments that vary in their level of student regulation.

Student-regulated learning in tertiary vocational education

The basic features of student-regulated learning environments, taken from Simons et al. (2000), are displayed in Table 1. Students typically receive an educational credit to be spent on fields and topics which they judge to be important. A pivotal role is ascribed to the establishment of a personal development plan. Learning takes place in authentic environments, that is, in either simulated or real work conditions. Students are encouraged to actively (re)create knowledge from their concrete work experiences through reflection and investigation. They are responsible for initiating their learning activities. They monitor and evaluate their progress independently, but in consultation with their teachers.

Table 1 Guidelines for developing student-regulated learning (Simons et al. 2000)

Sense of efficacy

To conceptualise sense of competence in this article, we elaborate on Bandura’s (1986, 1994) concept of efficacy. Perceived self-efficacy is defined as “people’s judgments of their capabilities to organize and execute courses of action required to attain designated types of performances” (Bandura 1986, p. 391). According to Bandura, self-efficacy cannot be conceived of as a general characteristic but is task specific: it can be incorrect to extrapolate a positive level of self-efficacy from one domain of tasks to other domains. Depending on their efficacy expectancies, people anticipate likely outcomes. The anticipation of outcomes as such is an idiosyncratic process: every person foresees different outcomes. Outcomes can be compared, however, with respect to the value that they represent. Thus, the efficacy construct entails two important aspects that are quantifiable: efficacy expectancies (how successfully I can perform specific courses of action); and valence expectancies (how valuable to me the likely outcomes of these courses of action are). Of course, valence expectancy is akin to the concept of ‘subjective task value’ (Eccles and Wigfield 2002; Pintrich 2003; Pintrich and Schunk 2002), but we prefer to handle these concepts within the unifying domain of a single theory.

The learning environment, of course, is not immediately reflected in students’ self-efficacy. Therefore, we theorised that, for the investigation of their relationship, it is necessary to expand the efficacy construct by drawing a distinction between personal and environmental aspects. Despite its importance, to date much is still unclear about the concept, such as the conceptual distinction between goal orientation and related constructs like self-efficacy (Zweig and Webster 2004). In this study, we tried to deepen understanding of the efficacy concept by drawing a distinction between the person and his or her environment. The personal side of efficacy expectancies involves judgements of personal capabilities, and thus is more or less equivalent to the original definition of self-efficacy. The environmental side, however, involves judgements of the extent to which conditions in the environment are conducive or obstructive to the execution of courses of action aimed at designated types of performances. The environmental side of efficacy expectancy makes explicit what remains implicit in Bandura’s theory: deliberating the viability of a certain course of action, people not only estimate their personal capabilities, but also make judgements about the context in which a specific course of action is to be executed. Are the conditions favourable, and are there obstacles in the context that have to be dealt with? In an analysis of teachers’ efficacy expectancies, Imants and de Brabander (1996) showed that the distinction between perceived self-efficacy and perceived school efficacy shed interesting light on the efficacy expectancies of male and female teachers. Depending on the characteristics of the context, environmental efficacy expectancies can be given more specific names. In the context of a school organisation, for instance, the label ‘organisational’ efficacy expectancy seems natural.

The distinction between person and environment applies equally well to valence expectancies. Personal benefits indeed can be the most important determinants of action choices or behavioural persistence. However, this does not preclude people from considering benefits that might result for other people. Reflecting on courses of action in which they might get involved, people do take into account the benefits that might result for their social support group, for the organisation for which they work, or even for planet earth. Indeed, in many contexts, the primary goal of courses of action that people undertake is not personal benefit, but benefit for other people. Teaching, for instance, is a pre-eminent example in this regard. In this study, however, because of concentration on the expectancies of students, who are clients of the school organisation rather than members, we did not consider non-personal valence expectancies relevant enough. Therefore, we addressed only personal valence expectancies that we define as the total value of the outcomes of a course of action that people anticipate for themselves personally.

The aim of this study was to investigate personal and organisational efficacy expectancies and personal valence expectancies in student- and teacher-controlled learning environments. We studied the empirical feasibility of these constructs and their capacity to discriminate between learning environments. For this first, exploratory objective, we formulated no expectation. If the personal and environmental aspects can be shown to be separately measurable, it becomes possible to test hypotheses. Based on the debate on ill-structured versus well-structured environments described in the sections above, we formulated one hypothesis, which reflects the notion of constructive friction: we anticipated that students in the student-regulated environment would report a higher level of personal efficacy expectancy, a lower level of organisational efficacy expectancy, and a higher level of personal valence expectancy.

Method

Sample

‘Student-regulated’ programs were recruited from a Dutch network of schools for tertiary vocational education that aims to promote student-regulated learning. Confronted with discrepancies and incompatibilities between the knowledge with which students are equipped on completion of their preservice education and the needs of employing companies, these institutions have developed educational programs which aim to increase student regulation. The development of these programs is based on heuristics such as the 12 guidelines in Table 1 formulated by Simons et al. (2000). All of these schools have learning environments that can be characterised as ‘shared guidance’ learning environments (see introduction). In order to introduce some variation in disciplines, an ‘informatics’ program and a ‘small business and retail management’ program were selected. Recruiting second- and fourth-year students allowed the tracing of quasi-developmental aspects. The sample was completed by recruiting two comparable programs from traditional, teacher-regulated schools. The final sample consisted of 163 participants from four schools (see Table 5). The age of the students in the sample ranged from 18 to 29 years, with a mean of 21.56 years and a standard deviation of 1.89 years. The male domination of about 85% in the attendance of these programs was reflected in the sample: only 21 female students participated (15%).

Learning tasks

In accordance with the task-specific nature of efficacy judgements, we used a task-specific approach (Bakkenes et al. 1993; Imants and de Brabander 1996) to develop measures of sense of efficacy. In this approach, judgements are acquired with respect to a series of specific tasks. For such an approach, we needed to identify learning tasks that were general enough to apply to different types of programs and different environments. Vermunt (1992) compiled a set of cognitive, affective and regulative processing activities (Table 2) that met this prerequisite. These activities were transformed into descriptions of learning tasks that would be comprehensible to students. However, this was not possible for all processing activities; the category of affective processing activities in particular proved difficult. Eventually, we were able to come up with a list of 17 tasks (Appendix). A few examples of these task descriptions are: “Identifying relationships between different parts of the subject matter, and between new information and what you know already” (Relating), “Motivating yourself to realise the learning goals you planned, building and sustaining the willpower to learn” (Motivating), and “Handling distractions, thoughts, and emotions that threaten to disturb the learning process” (Concentrating).

Table 2 Catalogue of processing activities according to Vermunt (1992)

Variables

Sense of efficacy

To measure the three aspects of sense of efficacy, the 17 learning tasks described above were used. The student reported on all three aspects of sense of efficacy using a five-point scale, ranging from 1 (Not At All or Hardly) to 5 (Very Much So) for each of these 17 tasks. We adapted the precise wording of the scale positions to the specific response-eliciting statement that we used for each efficacy aspect. Personal efficacy expectancy was measured in response to the statement: “I have enough skills and abilities to accomplish this task successfully.” Organisational efficacy expectancy was measured in response to the statement: “In our school all conditions are fulfilled that are necessary to accomplish this task successfully.” Personal valence expectancy was measured in response to the statement: “Accomplishing this task, and the results I obtain in doing so, are very valuable to me personally.”

The three a priori aspects of efficacy would normally suggest the use of confirmatory factor analysis. However, the psychological structure of the tasks was unknown in advance: the logical categories of cognitive, affective and regulative tasks do not necessarily constitute a psychological structure as well. Therefore, we based the development of the efficacy scales on exploratory factor analysis. The eigenvalues of a principal components analysis of sense of efficacy for the data in our final sample allowed for the extraction of four or even five components. However, the fifth component in the unrotated solution appeared to be dominated by the fifth task, namely, preparing for tests. Moreover, from the loadings on the fourth component, we were not able to identify a clear contrast between different types of tasks. Therefore, we limited the number of components to three. These three components clearly corresponded to the three efficacy aspects defined in advance (eigenvalues: 14.109, 4.133 and 3.399). To ease interpretation, we rotated the principal components solution (varimax method with Kaiser normalisation). The component loadings in the rotated solution are given in Table 3. The first component was defined by the organisational efficacy expectancies, the second by the personal efficacy expectancies, and the third by the personal valence expectancies. All items had their highest loading in their own category. Subsequently, we formed three scales and analysed their reliability. For each scale, all tasks appeared to contribute to the reliability. Calculated with Cronbach’s α coefficient, the reliabilities of the personal efficacy, personal valence and organisational efficacy scales were 0.90, 0.89 and 0.93, respectively.
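For readers who wish to retrace these steps computationally, the procedure (principal components of the item correlations, varimax rotation with Kaiser normalisation, and Cronbach’s α for the resulting scales) can be sketched as follows. This is a minimal, self-contained illustration using randomly generated placeholder data and hypothetical column names, not a reproduction of the reported analysis.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of a loading matrix with Kaiser normalisation."""
    p, k = loadings.shape
    norms = np.sqrt((loadings ** 2).sum(axis=1, keepdims=True))
    L = loadings / norms                      # Kaiser normalisation
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag((LR ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d_old * (1 + tol):         # convergence of the varimax criterion
            break
        d_old = d_new
    return (L @ R) * norms                    # undo the normalisation

# Placeholder responses: 3 efficacy aspects x 17 tasks, 163 respondents.
# Column names are hypothetical; real data would come from the questionnaire.
rng = np.random.default_rng(0)
cols = [f"{aspect}_{t:02d}" for aspect in ("org", "pers", "val") for t in range(1, 18)]
data = pd.DataFrame(rng.normal(size=(163, len(cols))), columns=cols)

# Principal components of the item correlation matrix
corr = np.corrcoef(data.values, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_comp = 3                                    # number of components retained on substantive grounds
loadings = eigvecs[:, :n_comp] * np.sqrt(eigvals[:n_comp])
rotated = varimax(loadings)                   # inspect per-item loadings per component

# Scale scores and reliabilities per a priori aspect (values are meaningless for random data)
for aspect in ("org", "pers", "val"):
    items = data[[c for c in cols if c.startswith(aspect)]]
    print(aspect, "alpha =", round(cronbach_alpha(items), 2))
```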

Table 3 Component factor loadings for sense of efficacy in learning tasks

Sense of autonomy

A measure of autonomy was developed using the same list of tasks. Friedman (1999) has used a comparable approach in the field of autonomy perception. His Appropriate Teacher Work Autonomy instrument comprises a set of 32 teacher tasks. The teacher was asked to judge the level of autonomy for each task on a five-point scale. Likewise, for our instrument, we asked the student to indicate the level of autonomy which he or she experienced in the fulfilment of each of 17 tasks on a five-point scale ranging from 1 (Not Autonomous At All) to 5 (Fully Autonomous).

Because of our uncertainty about the psychological structure of the tasks, sense of autonomy in the different tasks was again analysed with principal components analysis. The scree plot did not warrant extraction of more than one component, and we could not find a plausible interpretation of subsequent components. With an eigenvalue of 5.325, the first component explained 31.3% of the variance. A reliability analysis with Cronbach’s α coefficient showed that the reliability could be improved by removing Item 5, which also had a low factor loading of 0.231 (Table 4). The task addressed in this item involved preparation for tests which, with respect to autonomy, is apparently different from the other tasks; no-one but the students themselves can prepare for tests. Without this item, the final value of Cronbach’s α for the autonomy scale was 0.861.
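The reliability check that flagged Item 5 corresponds to an ‘alpha if item deleted’ inspection. A minimal sketch of that step, again with placeholder data and hypothetical column names (the α function is repeated so that the snippet stands alone):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Placeholder autonomy ratings: 17 tasks on a five-point scale (hypothetical column names)
rng = np.random.default_rng(1)
autonomy = pd.DataFrame(
    rng.integers(1, 6, size=(163, 17)),
    columns=[f"aut_{t:02d}" for t in range(1, 18)],
)

full_alpha = cronbach_alpha(autonomy)
for col in autonomy.columns:
    alpha_without = cronbach_alpha(autonomy.drop(columns=col))
    flag = "  <- removal improves alpha" if alpha_without > full_alpha else ""
    print(f"alpha without {col}: {alpha_without:.3f}{flag}")
print(f"alpha, full scale: {full_alpha:.3f}")
```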

Table 4 Component loadings for sense of autonomy in learning tasks

Procedures

We invited the respondents by email to participate. This email presented a link to an online version of the questionnaire where the respondent could enter his or her responses. The student received a reminder if he or she had not responded within 2 weeks.

Design

First, the composition and characteristics of the sample were explored in different ways. Next, the effect of regulation source was investigated with analysis of variance. In all analyses, regulation source, discipline and year of study served as independent variables. The analysis model was limited to main effects and all two-way interactions. The dependent variables, autonomy perceptions and sense of efficacy, were analysed separately; depending on the number of dependent variables, we used either univariate or multivariate analysis of variance.

Results

Descriptive analyses

In total, 163 students gave a usable response. Table 5 gives the distribution over regulation sources, disciplines and years of study. The response rate was low (21%), but similar to that of other comparable inquiries. We considered this response rate acceptable given that the objective of this study was not generalisation, but testing the feasibility of an approach. In the student-regulated programs, the response rate was slightly higher than in the teacher-regulated programs. The number of responses from second-year small-business students in the teacher-regulated program was very low. A univariate analysis of variance with age as dependent variable and source of regulation, discipline and year of study as independent variables, with all main effects and all two-way interactions in the analysis model, showed a significant year-of-study effect (F[1, 154] = 16.666, p < 0.0001) but also a significant interaction between discipline and year of study (F[1, 154] = 15.538, p = 0.0001). In the group of small-business students, the age difference between second-year (mean = 20.7 years) and fourth-year (mean = 22.7 years) students was 2 years, as was to be expected, but the second-year (21.1 years) and fourth-year (21.7 years) informatics students were, on average, much closer in age.

Table 5 Sample distribution

Though the chi-square value of a cross tabulation of prior education and source of regulation was not statistically significant, in the student-regulated programs the percentage of students with vocational secondary education was 10 percentage points higher than the percentage with general secondary education. This might be interpreted as an indication that students with a vocational secondary background found the student-regulated programs more attractive, possibly because of the practice- and action-oriented components of a typical student-regulated program.

Neither regulation source nor year of study was related to sex: the unequal distribution between men and women applied to all regulation sources and all years of study. But there was a significant association between sex and discipline (χ2 [1] = 11.481, p = 0.0007), showing that male overrepresentation was stronger in the informatics programs than in the small business programs (Table 6).
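Associations such as the one between sex and discipline are tested here with a chi-square test of independence on a cross tabulation. A minimal sketch with scipy, using hypothetical cell counts rather than the study’s data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = sex (male, female), columns = discipline
# (informatics, small business); only the pattern is illustrated, not the actual counts.
table = np.array([
    [80, 62],
    [ 5, 16],
])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}")
```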

Table 6 Distribution of men and women

Sense of autonomy

Differences in sense of autonomy were examined with a univariate analysis of variance. We used regulation source, discipline and year of study as independent variables. Because of the interaction effect between discipline and year of study on age level, age level was added as a covariate. In addition to all main effects, the two-way interactions between regulation source, discipline and year of study were tested. Significance tests were based on Type III sums of squares (unique effects). The analysis revealed a significant main effect for regulation source (F[1, 153] = 3.991, p = 0.0475) and a significant interaction effect between regulation source and discipline (F[1,153] = 4.919, p = 0.028). The graphical representation of this interaction (Fig. 1) shows that the main effect can be explained by the interaction between regulation source and discipline: in the small business programs students in the student-regulated program had a slightly higher sense of autonomy than students in the teacher-regulated programs (means of 3.83 and 3.44), but there was no difference between regulation sources among informatics students (means of 3.57 and 3.61).
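The univariate model just described, with main effects of regulation source, discipline and year of study, their two-way interactions, age as a covariate, and Type III sums of squares, can be specified with statsmodels roughly as follows. The data frame and variable names are assumptions for illustration only; sum-to-zero coding is used so that the Type III tests remain interpretable in the presence of interactions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder data frame with hypothetical variable names and values
rng = np.random.default_rng(2)
n = 163
df = pd.DataFrame({
    "autonomy":   rng.normal(3.6, 0.5, n),
    "source":     rng.choice(["student", "teacher"], n),
    "discipline": rng.choice(["informatics", "small_business"], n),
    "year":       rng.choice(["second", "fourth"], n),
    "age":        rng.normal(21.6, 1.9, n),
})

model = smf.ols(
    "autonomy ~ age"
    " + C(source, Sum) * C(discipline, Sum)"
    " + C(source, Sum) * C(year, Sum)"
    " + C(discipline, Sum) * C(year, Sum)",   # main effects + all two-way interactions
    data=df,
).fit()
print(anova_lm(model, typ=3))                 # Type III (unique) sums of squares
```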

Fig. 1 Interaction between regulation source and discipline for sense of autonomy

The interaction between regulation source and year of study was not statistically significant, although a trend can be signalled (F[1, 153] = 2.984, p = 0.086). According to the observed means, only one group had a lower sense of autonomy: the second-year students in the teacher-regulated environment (mean = 3.39), compared with 3.75 (second year, student regulated), 3.67 (fourth year, teacher regulated) and 3.70 (fourth year, student regulated).

Sense of efficacy

A multivariate analysis of variance was used to reveal any differences in sense of efficacy. Independent variables were source of regulation, discipline and year of study, with age again as covariate. In addition to the covariate, the model of analysis included the main effects of regulation source, discipline and year of study and the three two-way interactions that could be formed from them. Significance tests were based on Type III sums of squares (unique effects).
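A multivariate counterpart of this model, with the three efficacy scales jointly as dependent variables, can be sketched with statsmodels’ MANOVA. Variable names and values are again hypothetical placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Placeholder data: three efficacy scales as dependent variables (hypothetical names/values)
rng = np.random.default_rng(3)
n = 163
df = pd.DataFrame({
    "pers_efficacy": rng.normal(3.7, 0.5, n),
    "org_efficacy":  rng.normal(3.3, 0.6, n),
    "pers_valence":  rng.normal(3.6, 0.5, n),
    "source":        rng.choice(["student", "teacher"], n),
    "discipline":    rng.choice(["informatics", "small_business"], n),
    "year":          rng.choice(["second", "fourth"], n),
    "age":           rng.normal(21.6, 1.9, n),
})

mv = MANOVA.from_formula(
    "pers_efficacy + org_efficacy + pers_valence ~ age"
    " + source * discipline + source * year + discipline * year",
    data=df,
)
print(mv.mv_test())   # Wilks' lambda, Pillai's trace, etc., per model term
```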

The multivariate tests yielded two significant results, namely, the main effect of regulation source (F[3, 148] = 4.630, p = 0.004) and the interaction effect between regulation source and discipline (F[3, 148] = 2.713, p = 0.047). Subsequent univariate tests revealed that the main effect of regulation source was attributable to an effect on organisational efficacy expectancy (F[1, 150] = 11.651, p = 0.0008): student regulation was superior to teacher regulation in terms of organisational efficacy expectancy (Table 7). The effect on personal valence expectancy was not significant (F[1, 150] = 3.023, p = 0.084), but there was a slight trend in the same direction that might be meaningful (Table 7).

Table 7 Mean personal valence expectancy and organisational efficacy expectancy

Univariate tests for the interaction effect between regulation source and discipline, however, failed to reach statistical significance, although the effect on organisational efficacy expectancy showed a trend (F[1, 150] = 3.212, p = 0.075). The graphical representation of the interaction (Fig. 2) suggests that the difference in organisational efficacy expectancy between student regulation and teacher regulation was more distinct in the informatics programs. According to the univariate tests, it might also be worthwhile to investigate the main effect of discipline on organisational efficacy expectancy (F[1,150] = 3.932, p = 0.049).

Fig. 2 Interaction between regulation source and discipline for organisational efficacy expectancy

Discussion

Consistent with the objectives of this study, the discussion of results below concentrates on the usefulness and the feasibility of specific constructs and measures which we used in the evaluation of student-regulated versus teacher-regulated ‘shared guidance’ learning environments.

Sense of efficacy

In an elaboration of the SDT concept of perceived competence, the three efficacy aspects consistently surfaced in the principal components analysis. The response-eliciting statements apparently had clear and different meanings to the respondents.

With respect to sense of efficacy, we anticipated that the students in the student-regulated environment would have a higher level of personal efficacy expectancy, a lower level of organisational efficacy expectancy, and a higher level of personal valence expectancy. Although the analysis revealed interesting results, these expectations were not confirmed. First, we found no difference in personal efficacy expectancy between student-regulated and teacher-regulated environments. Second, there was a sizable difference between teacher and student regulation in terms of organisational efficacy expectancy, but the size of the difference might be bigger in the informatics programs. However, contrary to our hypothesis, students judged the conditions in the student-regulated environment in general as more favourable than in the teacher-regulated environment. They clearly saw the conditions in the student-regulated environments as more conducive to the execution of their learning tasks than in the teacher-regulated environment. Apparently, a low level of organisational efficacy expectancy is not a hallmark of a student-regulated learning environment as such, but appears to depend on how support in that environment is organised. The results in this study suggest at the very least that student-regulated environments have potential to offer adequate guidance. Regarding the third aspect, personal valence expectancy, we found no significant differences but, if we take the observed tendency seriously, then the students found that the learning activities in the student-regulated environment were slightly more valuable than in the teacher-regulated environment. This conforms to our expectation.

The primary goal of this study was the conceptualisation of personal and environmental aspects of the efficacy construct. In regard to this objective, we conclude that the proposed distinction appears to be appropriate and fruitful: the two types of expectancies made sense to the respondents and made the instrument clearly more sensitive to variability in the learning environments. Even if the difference between the learning environments is not strong enough to produce differences in personal efficacy expectancy, a measurement of organisational efficacy expectancy apparently can still detect environmental variability.

Task-specific approach

With respect to both sense of autonomy and sense of efficacy, we find it somewhat disconcerting that the principal components analysis did not detect multiple components contrasting different types of tasks. If the task descriptions had had a clear and discriminating meaning for the respondents, then the principal components analysis would have detected components related to contrasts between different types of tasks. We explain the absence of task contrasts in terms of a deficiency in the task descriptions, which apparently were not clear enough to enable the respondents to grasp the differences between the tasks. This might have several causes. In the first place, the descriptions of the tasks could have been too abstract, making it difficult for the respondents to connect them with their concrete activities. Another possibility is that these tasks describe courses of action that contain a set of activities that is rather heterogeneous with respect to regulation/autonomy and efficacy. In either case, what was clear to the respondents was that all tasks had something to do with learning, and this general resemblance between tasks elicited more or less equal responses to different items. Ironically, then, the very satisfactory internal consistency of our scales was actually boosted by one or more inadequacies in the task descriptions.

Regulation source

We found some partial effects for sense of autonomy that were consistent with expectation. Students in the student-regulated environment in the small-business program reported a higher sense of autonomy, but this was not the case for students in the informatics program. We also found a tendency for second-year students in the teacher-regulated environment to report a lower sense of autonomy than the other groups. Although sense of autonomy was not our primary interest, we did not expect the differences to be this small. With respect to personal efficacy expectancy, we found no influence of regulation source at all. There was a possibly meaningful difference in personal valence expectancy in favour of student regulation. Setting aside organisational efficacy expectancy, the overall impact of regulation source was rather small.

This lack of effects of regulation source cannot simply be attributed to the less-than-optimal characteristics of the dependent measures. The task descriptions were not perfect but, because all of them clearly referred to activities in a learning context, sense of autonomy and sense of efficacy were still adequately, if globally, measured. If source of regulation had made a substantial impact, it would have surfaced in more measures. We therefore conclude that student and teacher regulation, as implemented here, were either insufficiently different or not relevant to the dependent measures.

Presumably, source of regulation was mistakenly conceived as an all-or-none distinction. Teacher-regulated environments also vary substantially with respect to the level of self-regulation. Educational programs that, according to the subject matter and transmission model, would be characterised as teacher regulated can allow for considerable self-regulation (e.g. when they stress independent learning). On the other hand, because educational traditions are rather persistent, it is safe to assume that actual practice in student-regulated environments is less student regulated than it is assumed to be. And if student regulation were in fact implemented as a laissez-faire type of coaching, the opportunity to act autonomously would be severely hampered. It might be possible to isolate the effects of weak differences in source of regulation in a very large sample. It would be wiser, however, to analyse the type of regulation actually employed in educational programs in a more fine-grained manner. Such analyses can be based on the characteristics of the learning environments themselves or on more specific instruments for assessing students’ opinions, such as the What Is Happening In this Class? (WIHIC) questionnaire described by den Brok et al. (2006) and Rickards et al. (2005) or the Questionnaire on Teacher Instructional Behaviour (QIB; den Brok et al. 2004).

The relation between regulation context and self-regulation is further complicated by the fact that they are not directly linked. Even the most rigorous external regulation by itself would not preclude people from self-regulation. As deCharms (1976) remarked long ago, ‘origins’ and ‘pawns’ are equally subject to external constraints but, while pawns feel defeated by their constraints and complain about them, origins are not obsessed with them and strive to visualise paths to their personal goals through the requirements with which they are faced. The contexts in which students of more than 20 years of age follow their courses of study might differ in multiple respects, but students will find their own path through their obligations, using whatever resources they see fit. This would imply that there is no immediate relationship between the regulative context and either subjective sense of autonomy or personal efficacy expectancy. Finally, as also indicated by den Brok et al. (2006) and Dhindsa and Fraser (2004), differences between male and female students’ opinions of the learning environment, or, more precisely, of the distinction between its personal and environmental aspects, might be an interesting subject for future investigations.

In summary, it can be concluded that the distinction between personal and environmental aspects of efficacy is promising, although we have no data on non-personal valence expectancies. The measurement of environmental efficacy expectancies proved very sensitive to source of regulation, despite the difficulties that we identified with the concept of regulation source. Their importance lies in the fact that environmental efficacy expectancies involve perceptions of the support offered by the environment and, therefore, have a more immediate relationship with the regulative context. If the description of learning tasks can be improved to permit the comparison of different types of tasks, our conceptualisation of efficacy could contribute substantially to the evaluation of teacher and student regulation, and also more generally to the understanding of phenomena for which task performance is relevant. This eventually could contribute to the difficult and lengthy discussion about the benefits and disadvantages of student-regulated learning environments. The notion of regulation, however, needs further conceptual clarification: important dimensions of regulation must be identified, and the relation between subjective and objective perspectives must be clarified.

With these comments in mind, the results of this study are taken not as definitive answers, but as encouragement to pursue research on this topic in the direction chosen here.