Welcome to the final issue of the 2013 volume of Educational Assessment, Evaluation and Accountability (EAEA). As we go to print, in London, we are easing out of autumn, and the start of the new academic term is fading into the distance. For those who will be celebrating a new year and/or a new term in January, there is perhaps a mounting sense of the dawn of another opportunity to start afresh with anticipation and hope.

I recently had occasion to pause and consider some of the benefits of academic life. I concluded that at times the greatest gift is the sense of beginning that accompanies each new academic year and term. When combined with an annual calendar of holidays, observations and celebrations, education professionals have many chances to reflect and start over. I wonder if perhaps it is this regularized practice of restarting that keeps us going.

The idea of new beginnings and fresh starts serves as a good departure point for this final issue of the volume. Guaranteed, whenever you choose to peruse it, you will be close to a fresh start or new beginning yourself—and the work included herein will provide some food for thought and a chance to reflect on your own work.

When the papers for this issue arrived, I was again on the road, en route to Canada for data collection. Quite thankfully, the start of my trip coincided with the arrival of the newest member of our extended family, and I was able to sneak in a visit with my fabulous 5-day-old nephew, Ryker. Our fleeting visit was truly amazing—albeit filled with a little editorial multi-tasking. For example, combining my doting-aunt role and editor persona while his parents snuck in a short nap, I read through the manuscripts as my nephew slept on the sofa.

As intriguing as the papers were, I could not resist the lure of a cuddle and took a break from reading to simply sit and chat with my nephew. We had a lovely time—although there was not much in terms of reciprocal conversation! With Ryker in hand (quite literally), I had some time to reflect on what emerged as a clear line of similarity between the papers in this issue.

The papers share a commitment to the rigorous gathering and understanding of evidence to improve individual and collective outcomes. As in most issues of EAEA, the authors span the globe and present work from the USA, Singapore and Portugal. Similarly broad in scope, the papers explore individual assessment of primary school student behaviour, classroom-level democratic engagement, teacher training ICT course outcomes, multiple city-level school accountability strategies and national-level higher education policies. Together, the papers highlight the value and importance of assessment and evaluation across the full spectrum of educational provision.

The issue begins with an important paper by Steiner and colleagues entitled “Development and testing of a direct observation code training protocol for elementary-aged students with attention deficit/hyperactivity disorder”. The paper explores the challenge of developing and sustaining consistency between individual users of an assessment instrument. Based in the USA, the authors examine their own process of developing and testing an approach for training observers and improving consistency in the use of the Behavioural Observation of Students in Schools. The paper provides food for thought for those engaged in multi-researcher observational research in any setting. The authors offer a proposed protocol and guidelines and underline the importance of accuracy and consistency in this important area of assessment.

Our second paper by Hur et al. is entitled “Finding autonomy in activity: Development and validation of a democratic classroom survey”. Similarly, these authors discuss both the design and testing of an instrument but focus on measuring student perceptions of the democratic classroom environment. As such, this paper tackles the timely issue of individual student experiences of autonomous learning and engagement in distributed learning. The authors root their work in the assumption that a student's perception of the democratic learning atmosphere is tightly coupled with their personal sense of autonomy within their individual and collective activities. While the testing of the instrument occurred in an American university setting, the implications and applications of the instrument are pertinent across all phases of education.

Koh et al. present a paper called “Understanding the relationship between Singapore pre-service teachers' ICT course experiences and technological pedagogical content knowledge (TPACK) through ICT course evaluation”. The authors' analysis is predicated on the assumption that teacher education, and perhaps more widespread practices in higher education course evaluation, focus too narrowly on student experience. To address this, their study examines the design and validation of a tool to assess not only students' perceptions of their learning experience but also their learning. More specifically, within the context of teacher education-focused ICT modules, the authors propose and test an instrument to measure both teacher experience and the development of technological pedagogical content knowledge. The end goal of the work is to assist programmes in gathering evidence on the relationship between student experience, learning and intention to integrate learning into future practice.

In the fourth paper, Ehren and Hatch draw on evidence from a recent study of the multiple accountability measures employed in the New York City school system and their collective influence on schools. The paper, entitled “Responses of schools to accountability systems using multiple measures: The case of New York City elementary schools”, finds that testing tends to dominate as a driver of school-level decision-making even within the context of multiple and diverse measures of school performance.

Our final paper, by Sarrico et al., focuses on the implications of national shifts in governance and accountability for the quality of higher education delivery. While this is a global issue in universities and colleges, the authors contribute to the debates and discussion with a specific focus on the experience of four institutions in Portugal. Their paper, “The long road—how evolving institutional governance mechanisms are changing the face of quality in Portuguese higher education”, reports on the experience of senior leaders to highlight the diversity of challenges and sets out recommendations for future larger-scale research studies.

As we close this volume, we can proudly reflect on EAEA and our shared voyage in 2013. EAEA completed its first full year as an ISI-listed journal and experienced a significant upswing in submissions. As we go to print, with 2 months left in the year, we appear to be heading toward at least a 40% increase in contributions over the previous year. We believe this is a great sign for the overall health of the publication. Our ongoing growth would not be possible without the support of our colleagues at Springer, or without the continuous and dedicated work of our editorial board members and reviewers in support of EAEA. From this vantage point, we are excited to share the final issue of the 2013 volume of EAEA and look forward to kicking off the 2014 volume with a special issue on Participatory Evaluation and Research in Developing Country Contexts.

From our EAEA team, we wish you all the best for your transition from 2013 to 2014. And we wish Ryker, and his fellow members of the 2013 birth cohort around the globe, a wonderfully inspiring educational life—one that benefits from the hard work of academics and professionals around the world who are relentlessly working to understand and improve educational practice. And, on that note, happy new beginnings to everyone! We look forward to receiving and publishing your work in the future.