Introduction

As we showed in Chap. 2, the idea of access to a shared culture has always been a central component of legitimising funding for arts and cultural activities. In the UK, the first trusts and foundations were set up in the nineteenth century to share the collections belonging to the wealthy in museums and libraries. In the twentieth century, the formation of the Arts Council of Great Britain, as well as the permission for local authorities to fund the arts and entertainment, both introduced public money from taxation, which likewise assumed some public benefit. It is only since the beginning of the twenty-first century, however, that an explicit drive to increase participation has been a growing part of cultural policy discourse.

What has been defined as a shift “from supply to demand” (Bunting, 2006), in other words, from a focus on support for the artist to support for the participant, is in part a response to a failure of cultural policy to equitably address the cultural needs of the wider population. Academics have accused cultural policymakers of perpetuating elite hierarchies of taste (Bourdieu, 1984), which can be witnessed in many countries through a focus on funding specific art form practices while excluding others, or supporting professional practice to the exclusion of amateur cultural activities. On a global scale, data from surveys has further demonstrated an “elitism hypothesis” (Courty & Zhang, 2018) based on a correlation between those who take part in subsidised culture and their wealth, education, and social status. Cultural policymakers have therefore increasingly been under pressure to justify the legitimacy of their decision-making processes (Holden, 2006a).

Despite these apparent failures, both to distribute funds equitably and to legitimise cultural policy, the language of policy documents more commonly focuses on narratives of success rather than acknowledging failures. As we argued in Chap. 3, this resistance to discussing failure reduces the potential for the kind of learning which is necessary to facilitate the changes that might create a more equitable cultural sector. Furthermore, we argue that the language of participation has taken a performative turn, through the use of multiple and at times even contradictory definitions, which obscure its meaning, making meaningful change even more difficult to achieve. We also demonstrated in Chap. 2 how policymaking is not just written but enacted by people with power not only to speak but also to be heard. This chapter therefore draws on data collected from workshops and interviews with policymakers in the UK, as described in detail in Chap. 1, to examine how the meanings they give to participation affect policy development and the extent to which acknowledgement of failure contributes to policy learning.

In defining policymakers, we acknowledge that the word policy suggests two meanings: the politics of decision making and the policing of those decisions. Policymaking may therefore be seen as the preserve of politicians, or as something enacted by anyone with the power to do so within the cultural sector. One local authority officer we interviewed said, “politicians get very jumpy about the use of the word ‘policy’ because they believe that’s their domain and their domain only”. In this chapter, however, since cultural policy is said to be delivered at “arm’s length” from the government in Britain (Madden, 2009), while we consider the influence of politics on policymaking, we have defined policymakers as those who police the sector: in other words, anyone with control over the funds to which organisations and individuals apply in order to undertake cultural participation work. We therefore focus on those working for local authorities, arts councils, and trusts and foundations (which, by dint of the independent funds they receive from endowments, also distribute funds), as well as the consultants who advise them. We then discuss the influence of practitioners on policy in Chap. 5 and participants in Chap. 6.

In the workshops we undertook as part of this research, we asked policymakers to discuss the meanings they give to participation and what both success and failure look like from a policy perspective. We conducted these workshops within a safe space among their peers. The aim of this process was to test the level of agreement over how the purpose of cultural participation policies is understood, as well as to observe variations in attitudes to failure regarding the implementation of these policies. The follow-up interviews we undertook with these policymakers then explored their personal attitudes to failure and the processes of both policymaking and review within their own organisations. We asked each of them to share specific examples of policy learning and to reflect on what part acknowledging failure played in that learning process. In completing this work, we identified Creative People and Places (CPP) (https://www.artscouncil.org.uk/creative-people-and-places-0) as a policy intervention that was frequently cited as a success story, not only by policymakers but also in the national media. Through interviews with staff from CPP and “deep hanging out” (Walmsley, 2018) with participants in one CPP area, we explore what might be learnt from re-examining its perceived successes through the alternative lens of failure.

The chapter therefore begins by exploring the different meanings that participation has for policymakers. It then examines the attitudes to talking about failure among our sample and suggests what barriers may be preventing them from doing so. The final section then considers where they locate the failure of participation policy, as described above, and the extent to which cultural policy not only learns from this but acts upon this learning. Throughout the chapter, we illustrate key points with reference to CPP and, unless otherwise stated, all quotes come from the policymakers in our sample.

Meaning of Participation

Through our research, we observed a clear consensus among policymakers, whether from arts councils, local authorities, or trusts and foundations, that “participation” was a policy agenda which they all saw as one of their priorities. While one member of staff from Arts Council England defined participation in artistic terms as “a cultural form […] a practice”, most defined it in relation to phrases such as “social justice”, “the cultural rights argument”, or “our equalities work”. In other words, participation was seen as a duty or responsibility to ensure universal and equitable opportunities to take part in cultural activities, rather than as a separate form of cultural practice.

As we show in Chap. 5, the idea of participation as a form of artistic practice, undertaken specifically by participatory artists, is more common among practitioners, and draws heavily from art theory (Bourriaud et al., 2002; Rancière & Schad, 2004) which shifts the focus of attention from art as an object to art as a relational process between the artist and the audience or viewer. In such a conception, there is space for “agonism” (Miller, 2016) or dissenting voices to challenge the status quo. For most of our policymakers, however, participation was described as a way of “fitting in” to systems far more than challenging them. All saw participation as a public benefit, which promised positive outcomes for both society and individuals, and they supported claims in the literature that participation plays a key role in the creation of healthy individuals and sustainable communities (see e.g., Keaney, 2006). Words and phrases such as “social inclusion”, “wellbeing”, and “personal development” were commonplace.

In such a conception of participation, there is an implied assumption that people do not naturally choose to participate, and the role of policymakers is therefore to persuade them to do so for their own good. Success and failure therefore relate to whether people take part in the activities prescribed for them by policymakers, as well as the benefits that participation brings to them both socially and personally. This framework ignores the “everyday participation” that has been identified by other writers as more prevalent (Taylor, 2016) and which we shall show in Chap. 6 to be most important to participants.

The majority of our sample agreed that one of their primary interests therefore was in “who” participates in what they fund, and this is at least in part influenced by the type of surveys, such as Taking Part (DCMS, 2018) and Active People (Sport England, n.d.), which we showed in Chap. 2 to have supported the elitism hypothesis. Many also stated that this evidence placed a responsibility to increase participation on the whole cultural sector, rather than being the preserve of those who define themselves as participatory artists. It is therefore common practice for policymakers to ask the organisations or artists they fund to report on rates of participation in their funded activities, particularly from what the surveys suggest are under-represented groups. When pushed to consider why, despite this policy focus, these surveys suggest that rates of participation have failed to increase, these same tools were accused of measuring the wrong criteria. Policymakers therefore seemed happy to use such surveys to inform policy development, but were less keen on using them to evaluate the effectiveness of their interventions.

An Acknowledgement of Failure or a Failure to Acknowledge?

Creative People and Places (CPP) is an Arts Council England (ACE) initiative that was launched in 2012 as a response to data from the Active People Survey. This survey was devised by Sport England to obtain a granular analysis of participation in different types of physical activities or sport at a local level, but Arts Council England augmented this lengthy survey with only one additional question about “attendance in arts, libraries and participation in any creative, artistic, theatrical or musical activity or crafts in the last 12 months” (Sport England, n.d.).

The results showed significant differences in rates of participation between different locations, which were seen by some as “an acknowledgement of failure” in participation. From the outset, however, there was disagreement within ACE about whether this failure should be understood as a social failure on the part of those not participating or a failure of policy to distribute funding equitably. Many people with whom we spoke acknowledged a correlation between places that the data suggested had low rates of cultural participation, areas of socio-economic deprivation, and low levels of public investment. As a result, an ACE senior manager said: “[…] we had a long debate when we were setting up Creative People and Places about what is the data that we should use. You know, should we use deprivation, should we use our own funding levels blah blah blah—and in the end we came down to the cultural engagement stat, just because we’re saying all of the others are not ones that I think we can claim the territory to try and shift”.

But while ACE might not be able to eradicate social deprivation on their own, there seems no reason they could not address the unequal levels of their own funding in different parts of the country. As we have written elsewhere (Jancovich, 2017a), however, opposition within ACE to owning the failure meant that the only way to secure internal support to release money to these areas was to use the narrative of social rather than policy failure. This was said by some to have “enforced a kind of deficit model about certain places and certain communities which was really unhelpful” in supporting people’s participation.

ACE initially discussed whether to use the data to “level up” their funding by giving all the underperforming areas more significant investment. Instead, however, it was decided to develop CPP as short-term action research, making £37 million available by competitive application to a small number of locations whose residents were in the bottom 20% of participation in the country (this was an arbitrary cut-off which has since shifted to the bottom 30%). In so doing, it may be argued that they continued their long-standing logic of funding “few, but roses” (Arts Council of Great Britain, 1951, p. 51), which we argue has contributed to inequality between places. We therefore support the view that, as one policymaker said, the failure to acknowledge that “people have got a right to that investment” meant the policy was flawed from the outset.

Furthermore, despite basing their criteria on the data from the Active People survey, ACE withdrew their investment in the survey, thereby limiting the possibility of measuring the programme’s success or failure against the very outcomes it was set up to address. As one policymaker stated, this was because it was believed that it would be “setting the projects up to fail”, which ACE could not countenance. This clearly demonstrates a reticence to acknowledge even the possibility of failure to reach the very “non-participant” whom the policy was designed to target, and undermines their claim that it is evidence-based policy in action.

While there was consensus among our sample of policymakers about the importance of who participates in these initiatives, there was less consensus on what they were being asked to participate in. Those from arts councils and some local authorities, who directly spend money obtained from taxation, most frequently defined their goal as increasing participation in the activities they already fund. This relates to the democratisation of culture, or public participation (Brodie et al., 2009) discussed in Chap. 2. Both employ a model of participation based on the relationship between the public and the institutions of state, which assumes that the institutions currently funded are the best mechanism to achieve any policy goal. Success and failure are therefore defined by whether people engage with these cultural institutions, and how they feel after doing so. Since the publication of Arnstein’s ladder of participation in 1969, however, many theorists from other disciplines have questioned whether participation in pre-determined activities by state-sanctioned institutions should be described as participation at all. For Arnstein, only control over the decisions about the types of activities, services, or cultural projects offered should be defined as participation.

This approach has informed the broader direction of public policy, which increasingly defines participation in relation to the level of power and agency that participants have. Many theorists suggest that the opportunity to bring about desired outcomes is directly proportional to the power wielded by the participant (Bevir & Rhodes, 2010; Dryzek & List, 2003; Ostrom, 1990). In England, this approach was evident in a government edict, the “duty to involve” (DCLG, 2008), which required public involvement in decision making about all public services, including cultural policy, and which came into force briefly in 2008. In Scotland, this edict remains within the Community Empowerment Act (The Scottish Government, 2015), from which the National Standards for Community Engagement were developed. While the duty was later removed in England, all the policymakers to whom we spoke across England, Scotland, and Wales referred to the idea of participatory decision making, and in principle, many supported the view that “if you say you want to reach everyone, then what you fund in terms of the type of cultural activity needs to shift”. This approach defines participation in relation to notions of cultural democracy or social participation and horizontal relationships between peers. In line with theories on everyday participation, this challenges the assumption described above that the role of cultural policy is to persuade people to participate in activities in which they do not currently participate. Instead, it posits that culture is something we are already all part of, and that cultural policy fails to provide adequate resources to the diverse cultures and cultural activities in which people take part (Miles & Gibson, 2017).

Many of the policymakers to whom we spoke saw this as the inevitable direction of travel for cultural policy, and some cited a significant shift in language within the national policy bodies as an example of this. The change from the Scottish Arts Council to Creative Scotland, for instance, was described as moving the emphasis from the professional arts to wider definitions of creativity. Similarly, the change in title between Arts Council England’s ten-year strategies, from “Great Art for Everyone” (2010–2020) to “Let’s Create” (2020–2030), was seen by many as part of this shift. Furthermore, under the banner of supporting cultural democracy, each of the public bodies supporting arts and culture in England, Scotland, and Wales has developed schemes, of which Creative People and Places is one, which invest directly in communities and allow them some form of participation in decisions about what cultural activities are funded.

The Shift from Participation in Art to Participation in Decision Making

From the outset, Creative People and Places aimed to “test new approaches” to increase participation, including participation in decision making about the types of cultural activities people want to see funded locally. As a result, one of the conditions of funding is that areas are managed by consortia of local groups, and many have programming panels made up of local residents.

The intention is that including those outside of the cultural policy sector will challenge what might be defined as cultural participation, and many of those working for CPP said that this was influencing thinking across the cultural sector.

They also claimed that it was clear when working in these areas that the barriers to participation implied by the Active People survey, which defines eligible places as having low levels of cultural participation, are not in fact present. Instead, most of the CPP staff we spoke to acknowledged a desire to take part in local activity, whether that activity was what ACE deemed to be culture or not. This clearly supports the idea that the failure in cultural participation may be more to do with what is funded and measured as participation than with a social failure in these places.

Despite this, however, CPP staff said they felt limited in how radical they could be in terms of redefining cultural participation because they were under pressure to achieve ambitious targets in terms of the number of people they engaged, and to report on the “quality of art as much as the quality of participation”. This was said to lead them to put on crowd pleasers rather than take risks with their programming or provide the space for a deliberative decision-making process. Some of the participants we spoke to also expressed anger that it often resulted in bringing in high profile artists, many of whom were not from the area, rather than adding resources to existing cultural activities.

A requirement from ACE for “arts expertise” within the governance of CPP and the exclusion of local authorities further means that, in practice, arts organisations hold the majority of places in the consortia (Fleming & Bunting, 2015), and although local panels might have influence over the artistic programme, they have little power in terms of managing the programmes themselves. The staff we spoke to from local authorities said their exclusion risked the sustainability of these activities.

As a result, we argue that while CPP may have raised the profile of participatory decision making, rather than solely participation in cultural activities, it has left power very much with professional artists and established cultural organisations.

It should be noted, however, that the shift from defining participation in relation to taking part in cultural activities to defining participation in relation to power and decision making was not universally supported by the policymakers we interviewed. Many also expressed concern that once this shift occurred, questions such as “where is the art in all of this?” abounded. Furthermore, even those who supported this shift questioned whether it was more than rhetorical and whether there was any real evidence of change in the distribution of funding.

Arts Council England’s strategy, for example, states that “we only have limited investment available to support new initiatives…this means many of our arts organisations… will need to change [how they operate]” (2020, pp. 3–6), thus placing the onus on funded organisations to change rather than changing what they choose to fund. In Scotland, the leadership of Creative Scotland attempted to redistribute funds but failed to deliver real change in the face of opposition from those already funded, who had a vested interest in maintaining the status quo (Stevenson, 2014). Some of those we interviewed from local authorities also claimed that it is difficult to reallocate funds to support people’s participation in their own cultural practices within a context where officers are fighting to safeguard any funding for culture, and where any suggestion of change is more likely to see reductions in funding rather than reallocation. Even for some of those working for trusts and foundations, which are less affected by variations in government funding, there was acknowledgement that they continue to focus funding on delivering participatory activity via professional cultural intermediaries rather than resourcing the everyday cultural activities in which people might already be engaging. We argue, however, that it is this gap between cultural policy discourse and a “failure to follow through” in terms of funding that perpetuates inequality and is at the heart of the crisis of legitimacy that policymakers claimed the participation agenda aimed to address. As our primary interest in this book is to consider how such failures inform policy learning, the following section explores the attitudes to talking about and learning from failure among our sample of policymakers.

Attitudes to Failure

As we have demonstrated above, the disparate meanings of participation make this a nebulous area of public policy. Some policymakers claimed that this situation was exacerbated by shifting political priorities between different governments and that it was particularly acute in a context where most agreed that governments, of all political shades, want “quick results”. As a result, although most of the people in our sample believe that they are personally open to talking about failure, many felt that the context within which they work makes it difficult to do so in practice.

This is supported in the literature by those who argue that policymakers and funders closest to government are often least honest, as success stories are politically expedient (Howlett et al., 2015). We found this assertion to hold true, at least in part, in our own research. There was more evidence of organisational discussion about failure and what can be learnt from it by those working for independent trusts and foundations, as compared to those working in arm’s length bodies or local authorities. Many supported the view that those who “don’t have that accountability […] to the public and to the politicians” therefore have space to be more reflective. The representatives of trusts and foundations to whom we spoke were all part of a peer network in which they openly discuss failure by constantly “asking three questions. What’s gone well, what’s gone wrong, what have you learnt?”

Despite the fact that the Arts Councils of both England and Wales and Creative Scotland are all notionally shielded from direct government interference through the arm’s length principle, they were no more confident in openly discussing failures than those working directly for local authorities. Those working in the arm’s length bodies were particularly conscious of “[…] managing a delicate balance with national politicians. Both for us as an organisation and for the sector as a whole, being honest about when things do or don’t work can have consequences”.

This made them particularly reticent to talk about failures. In fact, some of those from local authorities claimed that because they had a more direct relationship with local politicians, in some cases they could be more honest and open. Unlike the trusts and foundations, whose money, as well as governance, is independent of government, however, both the representatives of local authorities and arm’s length bodies said that it was their dependence on non-statutory government spending that meant they were always in “lobbying mode”. As such, the desire to review and learn from the policies they implement and the projects they fund is replaced by a desire to please those who provide the funds they distribute. As a result, success and failure are often defined more by the ability to raise the profile of the work that they fund than by the delivery of stated policy goals.

Peer Learning or Controlling the Narrative

Creative People and Places (CPP) was said to be the first instance in which Arts Council England (ACE) employed an action research approach to test new ways of working, and therefore a large emphasis was initially placed on learning and knowledge exchange. A budget was allocated for a peer learning network as well as an independent evaluation, and both were supposed to identify “what worked and what did not work” (Ecorys, Year 1). Through these mechanisms, most of those directly involved in the programme felt that there was more honesty about failure internally than elsewhere in the cultural sector. Many practitioners outside of CPP questioned whether this was in fact true, and if it was, they wondered how this was shared. Some of the practitioners we spoke to said that the communications promoted a celebratory tone, which “offer PR for the CPP brand” rather than critical reflection on learning or comparison between different approaches.

While some of the CPP staff claimed this was because ACE “keep a tight control over the narrative” about CPP externally, when asked to talk about what they saw as the successes and failures within the programme, both policymakers and CPP staff were reticent to compare different approaches. Instead, the responses largely related to the programme as a whole and its success at raising the profile of the initiative rather than any evidence that it had succeeded in increasing participation, which was its stated aim. For ACE, CPP was said to have helped persuade politicians of the value of the arts in general, and some staff from CPP defined their successes in relation to the fact that their funding had been renewed, rather than whether or not they had succeeded at raising rates of participation in their town.

It was acknowledged that none of the areas receiving funding had moved up to, let alone above, the national average of cultural participation as defined by the survey through which they were originally identified as being eligible for funding. Rather than viewing this as a failure from which something might be learnt (whether that be about the design or the implementation of the initiative), alternative measurements were employed to create narratives of success. Box office data and postcode analysis celebrated the increased number of participations (not people) and the likelihood of their being new to the arts based on where they lived, and case studies told stories about how the arts have changed the lives of individuals, rather than celebrating the fact that individuals might already have been participating in their own cultural activities, something the data overlooks.

Significantly, despite Arts Council Wales and Creative Scotland also developing place-based approaches to funding after CPP was established, there remained a sense of competition rather than collaboration between countries. Different approaches taken in each country were said to be the result of different nations wanting to own their own policy rather than of policy learning to avoid repeating past mistakes. This suggests that where profile, and the ability to advocate, become the main criteria of success for policymakers, the impetus to learn from failure is reduced.

The desire of policymakers to advocate for the cultural sector, rather than reflect on its successes and failures, was not seen as solely a feature of the relationship with politicians, as mentioned above. Some of the policy consultants to whom we spoke saw the barrier to acknowledging failure as having more to do with the level of familiarity between policymakers and those they regularly fund. This was seen as particularly prevalent in arm’s length national bodies, where, despite some supporting the view that “the way we’re funding things at the moment isn’t working [so we] need to be more comfortable about a higher turnover of organisations”, many still firmly see their priorities as supporting the “long term stability of the cultural sector”. This was defined in relation to protecting the existing organisations they fund. As a result, despite the prevalent discourse surrounding participation, as one policymaker noted, it was simply “a veneer, and there was no real will to make a difference”. This, we argue, reduces the value placed on honesty about failure.

Those we spoke to from trusts and foundations, however, said that their charitable status requires them to have narrower remits and more clearly defined goals than arm’s length bodies or local authorities. This was seen by those working within trusts and foundations as helpful in maintaining their distance from politicians and practitioners alike. Some argued this means they have a clearer understanding of their purpose in relation to cultural participation as well as a clearer sense of how to evaluate success or failure. They therefore saw the openness to discuss failure among themselves, as mentioned above, as stemming not only from their independence but equally from their clarity of purpose and the value they place on evaluation. Conversely, some of those from trusts and foundations criticised the cultural sector for “want[ing] the right to fail without taking responsibility for learning to prevent repetition of mistakes”, which should be the true purpose of evaluating success and failure.

Many theorists have also claimed that the level of confidence in acknowledging failure is related to the function evaluation plays, and whether accounting for money spent or improving services provided is most valued by policymakers (Jancovich & Stevenson, 2021). Where evaluation is aimed at improvement, it is argued, there is a more open attitude towards failure, and more willingness to change, while a focus on accountability encourages success stories that support maintaining the status quo (Hogwood & Gunn, 1984). Both the policymakers and policy consultants we spoke to felt that the audit culture which predominates in the public sector in the UK has encouraged a focus on “monitoring backwards not evaluating forward”, or “a prove [rather than improve] agenda” in which the priority is accountability rather than learning. As a result, “[…] it wouldn’t be in anyone’s interests to stand back and go ‘I’ll tell you what, it’s not really working is it, can we do this differently’”.

This suggests therefore that the lack of a learning culture may make acknowledging failure difficult for certain policymakers. At the start of our interviews, most policymakers agreed that the aim of the participation agenda is addressing inequality, and they expressed an openness to acknowledge failures, but it was apparent in many of our conversations that as the interviews progressed, they became less comfortable with the idea of acknowledging failure, especially when confronted with the idea that this might mean significantly changing their policy or funding decisions.

When asked about the nature of the policy learning that had taken place, or to provide specific examples of changes in policy in response to failure, policymakers were vague, and most admitted that there remained a gap between acknowledging failure and learning from it: “I really recognise when things haven’t worked. I think probably the area where we don’t then sort of go forward on is […] what then should we do differently?”

For many, this had less to do with a lack of will, and more to do with the fact that the practical process of policymaking is difficult, as you “need to align so many different forces in order to make policy in the first place”. Identifying a problem, holding a consultation with stakeholders, and reformulating the programme design are lengthy processes which require much negotiation and persuasion to convince anyone to act. As a result, policymakers “[…] do a lot of work upfront when things are very hypothetical, and analysing applications and things like that, and less work on analysing the reality of what really happened and what we learnt”.

The fear that these efforts would be unpicked, thus returning them to square one, seemed to be the most unifying factor in rendering policymakers averse to admitting when their policies or projects were failing to deliver the results for which they had hoped. Furthermore, many policy bodies were described as not being set up to capture learning for the long term, such that “when staff change … very often a lot of that knowledge base goes with the person who leaves, rather than being embedded in that cultural body”. We therefore support the view that policymakers must become better at reviewing their own policies and challenging themselves by asking, “not what’s the evidence to do something new [but rather] what’s your evidence for doing it exactly the same”.

Locating Failure and Learning from It

As we have already shown, there exists an acceptance from policymakers that the participation agenda is in and of itself an acknowledgement of a failure to address the inequalities in participation, both in terms of audience and workforce. Everyone we spoke to used the statistics from surveys as evidence for these observations and tended to see addressing this as at least part of their mission. As we discussed earlier, however, there exist differences of opinion about where this failure is located.

Some of the policymakers to whom we spoke saw the failure of cultural participation as related to wider social problems and “a whole dynamic to that which is about social economic class”. They therefore felt it was unrealistic to suggest that cultural policy could ever “see a shift in that data”; rather, cultural participation offered an escape for those who chose to participate despite their circumstances and embrace the possibility of transforming their lives. The responsibility of policymakers was therefore to ensure that people had the opportunity to participate in the cultural sector. Failure was thus seen as not having articulated well enough, to both politicians and the public, what cultural participation could do, and as a failure to convince these same groups of the value of the cultural sector.

A larger number of the policymakers to whom we spoke, however, as discussed in Chap. 2, supported the findings from the everyday participation research (Miles & Gibson, 2017) that people are already active participants in their own cultural lives. One of the failures is therefore the very measurements commonly cited as evidence of a problem, which define those people as cultural non-participants. Many agreed that the survey data only captures participation in specific artistic practices, mainly those currently subsidised by cultural policymakers. Some suggested this means that the problem of participation is overstated, and the failure is that policymakers have been pulled too far into focusing on participation at the expense of artistic practice. This also supports the view that it is not cultural policy that is failing, which we argue limits learning, let alone change.

Some of the policymakers to whom we spoke, including some who were no longer working within the institutions concerned, claimed that they had wanted cultural policy to shine a light on the greater diversity of cultural participation experiences. For them, the failure was in not redistributing funding accordingly. Some policy consultants suggested that this was not happening because the increased interest in cultural democracy, discussed above, and the desire to capture the wider range of activities in which people participate had more to do with a desire for “more pleasing statistics” than with changing the direction of policy or funding decisions. As such, some consultants claimed that policymakers are less interested in learning from failure and are instead more interested in disguising it. Some further argued that the diverse interpretations of participation employed by policymakers contribute to this by rendering the term meaningless, enabling practice to remain unchanged. This supports our argument in Chap. 2 that broad definitions render participation an empty signifier, in which different approaches are not acknowledged as such.

The majority of policymakers, however, saw these variations in meaning as important to providing space for a diversity of approaches and cultural practices. Even those who showed an interest in the public policy discourse that defined participation in relation to power and agency preferred to see participation as a continuum, inclusive of everything from being an audience member to being a creative participant through to having a say in decision making, rather than as a hierarchy in which some forms of participation are more valued than others. They were not interested in comparative analysis of their varying benefits, impact, or value. Indeed, many policymakers stated they sought not to define participation at all but preferred to leave it for practitioners to define for themselves. We argue that employing such broad definitions and this hands-off approach makes honesty about failure more important rather than less. Without distinguishing between different practices and understanding their different purposes, it is not possible for them to be contrasted, or discussed in terms of the extent to which each advances equity in the cultural sector.

The Failure to Compare Different Approaches to Participation

One of the biggest challenges for Creative People and Places (CPP) was understood to be how they would achieve the policy aim of increasing participation while also allowing for bespoke approaches in each local area. It is clear from our research that CPP areas are free to follow their own approaches to defining participation. Some areas worked hard at involving the community in decision making, while others claimed that they did not see this as their role at all. While some defined their target participants as those who had not previously engaged in the arts, others discussed engaging the artist or business communities within their locations. As action research with the objective of attempting different approaches, this is key to learning what can be gained from each approach. The reliance on self-reporting on the successes and failures of each location, however, was seen by many to limit understanding about which approaches achieve which aims.

Rather than CPP leading to a change in understanding, many of the practitioners we spoke to saw it as “Arts Council England’s defence” against accusations of elitism, and rather than being the vanguard of greater change, one staff member of CPP acknowledged that “there can be a danger in thinking that CPP is the box that ticks cultural democracy” to avoid further change. Despite CPP being cited as a success story, at the time of writing this book it accounts for only 2% of ACE’s spending, and the overall budget has decreased from £37 million when it started in 2012 to £25 million in 2020. As a result, one person argued, “[…] when you hear senior figures in the Arts Council standing up and saying Creative People and Places is the best thing the Arts Council’s done, as the chair said recently, and then you look at where their resources are going and you think, well, how can that make any sense—how you can stand by that decision?”

While there were differences of opinion concerning the value of defining different approaches to participation in order to facilitate comparative research, there was broader acceptance of a failure in the processes of cultural policy and of the need to change these, because “[…] we’ve done equalities this way for a generation […] so obviously if you keep on doing the same things you’re going to get the same results, so I think we’re recognising we’ve got to do that sort of thing in a completely different way”.

Many accepted that the nature and structure of funding applications reinforces inequality, and that “funders kind of set the tone for a lot of the bad practice that goes on because of what we ask for, how we ask for it, who the calls go out to”. Most application processes were said to be biased towards those who were already part of the system and to exclude newcomers. Furthermore, participatory processes specifically were seen to require policymakers to be more comfortable with funding having “no outcomes defined”. Some policymakers, including those from national arm’s length bodies, cited examples where they had attempted to provide funding without pre-established targets, but all agreed that they found it difficult to give away that much control: “I tell you what I think was the failure, was not to let go fully, you know, we were really bold and innovative, and we’re doing exactly the right thing, but then we’d get cold feet at the end, and that that was what led to the failures”.

Some policymakers claimed that this was because there was a lack of confidence that they could encourage deliverers to undertake any evaluation or learning without targets or objectives to measure against. Most policymakers blamed the lack of honesty about failure on individual organisations, rather than themselves, being uninterested in learning; objective setting was thus their way of trying to embed practices of reflection and knowledge generation. As we have shown above, however, there is little evidence that policymakers themselves do in fact prioritise learning, and this suggests a potential case of blame avoidance, discussed in Chap. 3. One consultant suggested that the model of reporting against objectives or targets encouraged the problem of placing more importance on accountability than on improvement, which we have argued limits discussion about failure. Instead, they suggested a “patient capital model”, which would allocate money without timescales or outcomes attached, devolving budgets in response to learning about what interventions were needed. This would also begin to address what was agreed by all to be the biggest failure of the policy process: the short-term project mentality built into the funding system.

All the policymakers to whom we spoke accepted that short-term project funding meant that there was often no space for reflection and learning, and more significantly that “creating that kind of systemic or long-term change is not going to be possible”. In other words, short-term projects are inherently counter to the aims of the participation agenda. Everyone agreed that participatory work was slower to deliver than other creative work, and it therefore follows that it should require more, not less, long-term investment. Yet policymakers recognised that short-term funding was not only endemic in the cultural sector in general but was at its worst in participatory work.

The Failure of Short-Term Funding

Applicants for Creative People and Places (CPP) originally applied for “3 years funding for a ten-year vision”. The Arts Council England (ACE) staff to whom we spoke claimed that they could not promise longer term funding because of the nature of their own funding from government, but that they did recognise the importance of a long-term commitment to embedding systemic change. As we have written elsewhere, however, according to internal minutes of meetings, the programme was also seen as high-risk internally, and senior management themselves were loath to sign up to taking responsibility for a long-term commitment (Jancovich, 2017a). Instead, they put the onus on the areas in receipt of funding to consider from the outset how to create practices that could be self-sustaining, even though, as one ACE staff member acknowledged, “we don’t expect that at all for our NPOs [national portfolio organisations]”.

They did, however, give significant levels of investment to successful areas (£1–3 million each) to demonstrate what one called their “serious intent”. In reality, at the end of the first three years, all the areas that successfully applied in phase 1 had their funding extended, but significantly, despite the prevalent rhetoric about how successful it had been as a policy initiative, they were all offered a decreased level of funding.

All the CPP staff to whom we spoke stated that having large investments upfront meant that they were under pressure to deliver activity at speed, and that the expectation to make these practices self-sustaining made it harder to test out new approaches, which require time to experiment. It is also counter to the aim of involving people in decision making, which all acknowledged takes time, in order to develop the necessary contacts and build trust.

Action research requires the ability to test and adapt approaches, in much the same way as the theories of fast failure discussed in Chap. 3. Many areas said they assumed that this was how they would work: by experimenting and learning about what works, they would then have the space to build on that learning in their own areas, as well as share that learning further afield. This would require an increase in the level of CPP investment, both to allow for growth within existing areas and to bring in new locations. Instead, the fact that funding decreased over time was said to be deeply problematic, as it meant CPP “raised expectation they could not sustain”. This further created a competitive mentality in which, the CPP staff we interviewed said, the activities they continued to fund were chosen not on the basis of learning about which approaches were successful, but rather on what was most cost efficient.

As a result, everyone we spoke to, including those from ACE, supported the view that, in hindsight, it would have been better to give smaller amounts to begin with, “[…] adding on a little bit extra each year rather than trying to have a big burst which then you can’t sustain […] that’s not the way to do it, to throw large amounts of money at [something]—it’s to grow something, learn from it”.

Despite recognising this, ACE has not changed its approach. We argue therefore that the problem is not simply the short-term nature of investment from government to the Arts Council, but also the project mentality within the Arts Council that perpetuates the problem.

Another recognised failure was that policymakers “are very removed from the recipients or beneficiaries or participants”. As a result, although they might claim to be able to learn from what works, they are reliant on what those they fund (and therefore those with most to lose from any change in funding) tell them. Many policymakers claimed that one of the failures was that the evaluations they receive are too reliant on “self-reporting”, but some, particularly those from the local authorities we spoke to, also acknowledged that despite (or maybe because of) their close accountability to an electorate, they do not “do as much as we should do in terms of reflecting with our communities”. Both the arm’s length bodies and the trusts and foundations further questioned how they would do this when working on a national scale. We argue that it is reliance on a single narrative from the sector that is most dominant in informing policy, and it is this failure by policymakers themselves to hear other perspectives that contributes to what all accepted was a breakdown of trust between policymakers and the public they serve.

Despite the wealth of data those in receipt of funding are asked to produce by policymakers, there exists nevertheless a scepticism among practitioners about whether it is ever even looked at, let alone used for policy learning. Where the data was used to improve policy decisions, many said that they did not know how these insights, or the changes they had brought about, were shared. As one practitioner said, “[…] if [policymakers] were actually interested in learning anything […] what they should be doing is hiring somebody […] to read all the evaluations […] and tell us what pattern is emerging from that […] that would challenge practice, but none of that happens”.

There were differences of opinion among policymakers about whether it was true that evaluations were largely unread. While most of the policymakers we spoke to claimed to support the view that “we want people to evaluate in order that they learn”, and that acknowledging failure should therefore be part of that learning and something which funding bodies should encourage and exemplify, some acknowledged “how rare it is that funders learn from their own projects”. This suggests that learning is devolved to the sector rather than being something that policymakers feel they should do themselves. Some did say, however, that they both read and used the evaluations they received for policy learning, but either way most acknowledged there is a problem with the communication of their learning: “[…] we read everything […] we shaped and developed new funds based on the learning […] but we didn’t share it publicly you know, it was very much within the team […] that is a big gap actually, particularly for major funders”.

It was therefore broadly accepted that policymakers themselves must improve at both modelling a spirit of learning from failure and being “clear that if we ask organisations to provide us the information, they’re clear on how we will use it” if they expect those they fund to be honest.

Through our interviews, it became clear that this is a two-way problem. It was seen as inevitable that the public and practitioners would not trust policymakers because of the “power imbalance, because you know obviously we hold the big power chip of the money”, but it was also seen as related to their failure “to walk the walk, so that if we’re asking people we fund to talk to us about failure, we need to evidence that we’re prepared to be vulnerable in that way too”. The main reason this was proving so hard to do, however, was a lack of confidence among policymakers about being able to bring about necessary, but difficult, change. Most policymakers accepted the view that one of the biggest failures “was to keep pushing money at those same organisations and say to them, ‘you must widen participation’, and that just didn’t work”. Despite this view being widespread, nobody believed this would change, and most supported the view that “it would be really difficult to get out of the cage that we’re in” because nobody wanted to be blamed for defunding those already funded. There was thus a resignation to the fact that it is easier not to do something new than to actively change the way they currently operate, and it was accepted that this was the primary reason for the “failure of follow through” in terms of redistribution of funding.

Conclusion

This chapter has demonstrated a gap between the values that are said to underpin the participation agenda in cultural policy and the practice of policymaking. We have shown how policymakers acknowledge the importance of survey data that demonstrates inequalities in who participates in certain activities only when it suits their purposes to do so, and then question the efficacy of those very same surveys when they do not suit their narrative. We have also shown how goals concerned with addressing such inequalities are used interchangeably with those focused on providing aspirational experiences for individuals.

This was shown to be the result not only of the relationships between policymakers and politicians, but also of those between policymakers and those they fund. The structures within public administration encourage a vested interest in maintaining the status quo among those currently in receipt of funding, and this has contributed to a lack of trust and respect which we argue makes it difficult to acknowledge, let alone learn from, failure. Instead, a focus on accountability to justify expenditure replaces any desire to learn from what has gone wrong with a view to improving services offered to the public. An acceptance of all definitions of participation in practice, moreover, contributes to rendering the word meaningless. We argue that the combination of these factors makes it impossible for policymakers to create the legitimacy that would facilitate their purported aims.

We have identified tensions between those who define participation as taking part in the existing cultural infrastructure and those who define it as having agency over what they get to participate in. Despite almost universal acceptance that current methods are not working, we have shown how patterns of professional practice and funding are replicated, at least in part because of a lack of confidence among policymakers that they would be able to bring about significant change.

While policymakers all claimed that they valued the importance of learning and acknowledging failure, the tendency to devolve this responsibility to those they fund limits the potential for policymakers to learn themselves. Instead, policymakers were more comfortable discussing the failures of others: of politicians, participants, and in some cases their funded practitioners.

We have also shown that policymakers describe the success and failure of policies very differently from how they define their purpose, often highlighting the profile and reception of funded projects more than any evidence of sustained change. Similarly, while the purpose might be to increase participation in the cultural sector, the success is often measured in relation to the quality of the artistic product. We argue that evaluations focused on outcomes and end results fail to learn from the processes that inform and shape them. We argue, therefore, for a more nuanced understanding of success and failure that considers different facets of a policy or project separately. We will discuss this in more detail in Chap. 7, where we introduce our new framework for talking about failures in cultural policies and projects. However, we will first explore the implications of the cultural participation agenda for artistic practice in Chap. 5, before then considering the participant perspective in Chap. 6. In doing so, we will also show how success and failure mean different things to different people, and thus also the value in seeking out different narratives when designing, implementing and evaluating cultural policy and projects.