Abstract
We introduce a new way to measure interest group agendas and demonstrate an approach to extending the CAP topic coding scheme to policy domains at lower levels of analysis. We use public comments on regulatory proposals in US education policy to examine the topics contained in policy arguments. We map the education policy space using a data set of 493 comments and 5315 hand-coded comment paragraphs. A unique measurement model accounts for group and topic diversity and allows us to validate our approach. The findings have implications for measuring topic agendas in lower-level policy domains and for understanding group coalitions and competition in education policy. We contribute to text-as-data approaches for tracing policy change in the study of public policy. The findings also suggest a relationship between the issue attention observed by scholars and larger policy reform movements.
Introduction
Scholars of public policy recognize the importance of issue attention in explaining agenda change (Baumgartner and Jones 2009; Jones and Baumgartner 2005). Originally, scholars understood these attention shifts as the result of breakdowns in public and elite understanding of an issue or as shifts in the decision venues for substantive policy (Pralle 2006). Increasingly, however, shifting issue attention spills across traditional substantive and jurisdictional boundaries. Modern policy problems like climate change, food security, and education span these boundaries and introduce tradeoffs and complexity among components of a problem. This complexity demands more integrative approaches to government problem solving and scholarly thinking (McGee and Jones 2019; Jones and Jenkins-Smith 2009).
These concerns require new ways of approaching measurement. Measures of agenda change should be adaptable to levels of analysis and substantive complexity and should span the conceptual boundaries of policy problems. These concerns are especially acute in social policy, where natural, social, and governing systems collide to produce “wicked problems” (Conklin 2005; Rittel and Webber 1973). Interest group arguments are key to understanding agendas in contexts where substantive and jurisdictional lines are blurry. Groups make it possible to trace policy reforms through governing systems because they permeate the boundaries that constrain bureaucracies, legislative bodies, and other institutionalized actors.
This research measures the topical focus of actors involved in the regulatory process. Our baseline question is a simple one. Can we use policy arguments to measure group issue attention systematically? Do these systematic measures tell us something about the policy debate not evident in popular depictions of education policy? The answers to these questions offer opportunities to understand agenda change and problem definition in complex issues. The research builds on fundamental concerns about how groups compete, collaborate, and organize on the policy agenda (Fagan et al. 2019; Halpin and Thomas 2012).
We use the US Department of Education’s (USED) regulatory proposals to measure shifts in topical attention within the policy domain and offer a “map” of the agenda space for our sample of comments and groups. We use the notice-and-comment process for regulatory proposals defined by the Administrative Procedure Act (APA) of 1946.Footnote 1 Government agencies, interest groups, policy experts, and citizens provide comments on these proposals, which serve as an indicator of issue attention within the policy domain. We examine the comments for 31 proposed rules issued across offices in USED and employ a coding scheme to categorize 493 comments by paragraph. We hand-coded 5315 comment paragraphs, including information on the characteristics of the commenter and the policy topic. The resulting data set is the first of its kind in the study of education policy.
Our findings indicate that the topics within education policy bundle together in intuitive ways. Interest group comments are useful indicators of agenda change, and these arguments exhibit substantive foci distinct from the larger public debate about education policy. We demonstrate how to extend the Comparative Agendas Project (CAP) coding scheme to more detailed issues. Our contributions relate to developing measures of interest group agendas within a policy domain. The study has descriptive value beyond the arguments made here and builds on Yackee’s (2005, 2012) work on interest group politics in the regulatory process. It also extends the study of interest groups and agenda-setting at the federal level (Baumgartner et al. 2009; Baumgartner and Leech 1998). The measurement model contributes to understanding attention shifts and group alignment in coalitional settings where groups pursue diverse and conflicting agendas.
Next, we discuss the motivations for the study, particularly the need for attention measures applicable at lower levels of the policy process to understand agenda change within issues. From there, we offer a conceptualization of attention shifts rooted in group policy argumentation. We bring these concerns together in a measurement model, mapping the agenda space for education policy groups.
Interest groups and policy agendas
We build on empirical scholarship on interest groups’ role in agenda-setting, public policy, and regulatory change. This literature contains three central findings. First, interest group mobilization is uneven, largely reinforcing the representation of private interests, especially businesses. Second, much of this influence comes from efforts at coalition building and maintenance. Third, much of interest groups’ strategizing and argumentation aims to maintain the status quo. The thread running through this scholarship is the importance of interest groups in agenda-setting, especially in curating and perpetuating how governing systems understand issues.
Baumgartner et al. (2009) examine interest group lobbying across the entire range of issues in American politics at the federal level. In a comprehensive and systematic examination of interest group behavior and strategy, they find substantial evidence for the mobilization of bias in federal policy—lobbying to support the status quo. Even in instances of policy reform, those interests and experts involved in an issue long term will channel reform to benefit them over time (Baumgartner et al. 2009, p. 260). Much of this bias comes from the issue agenda, where interest groups perpetuate curated problem definitions. Consistent with theories of public policy (Baumgartner and Jones 2009), interest groups employ strategies aimed at stabilizing problem definitions within supportive policy venues by using time-tested frames crafted to support preferred definitions of problems (Pralle 2006; Klüver and Mahoney 2015).
Lobbying in favor of the status quo is easier for interest groups, indeed for most actors, in the American political system. Several studies suggest that proactive lobbying by interest groups yields very little in the way of policy gains, or even outright losses in the face of public opposition (Haider-Markel 2006; Smith 2000; Heinz et al. 1997). Moreover, coalition building and lobbying are dependent on the substantive nature of issues and on the institutions serving as the venues for participation (Mahoney 2007). Fagan et al. (2019) find that parties draw interest groups into conflicts they may otherwise avoid. These forces collectively bracket the nature of interest group influence in lobbying for policy change.
In her pioneering work on interest groups and regulatory policy, Susan Yackee finds that many of these themes persist in federal agency policymaking. Her studies of interest group influence in regulation demonstrate a proactive role for groups. She finds that the interest group preferences contained in comments on regulatory proposals significantly alter how much “regulation” is contained in final versions (Yackee 2005). Consistent with the large-scale policy studies, Nelson and Yackee (2012) find that interest group lobbying on regulation is most effective when coalition building and lobbying are employed in tandem. Finally, she finds that interest groups can shape agency agendas by providing information to regulators, heading off proposals before they are fully developed (Yackee 2012).
The informational value of public comments is key to understanding interest group influence on agency policymaking. Libgober and Rashin (2018) posit that interest groups choose between providing comments that threaten versus inform the regulator. Using data on financial regulation, they find that informing the regulator yields greater marginal benefit to the group. We complement their study by conceiving of information more broadly as the topical dimensions that characterize a problem, not merely data, research, or bits of factual information.
Regulation and policy arguments
Public comments are ideal for capturing the policy arguments of groups and other actors. Comments offer some advantages over strategies like canvassing websites, surveys, or media accounts of group arguments. They are shaped in a broad sense by the content of the rules agencies issue but not dictated by them. Group comments must target aspects of the specific policy laid out in the proposal to be effective. Thus, comments are more likely to be policy oriented and less likely to be crafted to shape public opinion or curate a public image.
The substantive specificity of regulatory proposals and public comments offers an opportunity to measure the topics in policy arguments at a finer level of detail than is characteristic of macro-level studies, which locate measurement within a single institution. Big policy reforms are rarely confined to one or even a few institutions. Instead, they ripple vertically over layered governance structures (e.g., federalism) and horizontally over policy domains (e.g., climate change). We aim to develop a system to measure interest group agendas in an intuitive way that incorporates the advances made by existing approaches. The focus on groups extends the possibility of studying issues spanning these structures and domains.
Actors commenting on regulatory proposals self-select into the process, unlike, for example, giving testimony to legislative bodies. Arguments are bound only by the nature of the issue and the venue addressing the substantive problem. The process has great potential for understanding the dynamics of issue attention within policy domains that span governing structures. Many policy areas have ragged or fuzzy edges separating one problem from its nearest neighbor. Ill-formed, nascent, or non-coherent (Ingold et al. 2016; May et al. 2006) policy subsystems are an understudied feature of governance. The commenting process allows for examining issue attention at these ragged edges, where the complexity of policy problems blurs the boundaries of subsystems.
Bureaucracies play a central role in the allocation of attention regarding regulatory topics. After all, comments must target a proposal or program. In the background, bureaucracies set the general topical boundaries by issuing the proposals—setting the agenda. Interest groups and other actors are dependent on these proposals to allocate attention and map arguments that shape policy. Bureaucracies set the agenda working within policy regimes geared toward an overarching approach to related problems (May et al. 2008). Our measurement strategy is attentive to this background structure while tapping these larger, more visible reform efforts.
Research design
We extend the CAP coding scheme for education policy (Jones 2016).Footnote 2 We adapt the system to lower levels of analysis and issues that span substantive institutional jurisdictions by deriving a more nuanced set of topic codes that reflect the full range of issues within the federal education policy domain. If topic codes are mutually exclusive and new topics maintain the hierarchy in the system, the CAP topic coding system is almost infinitely adaptable and extensible (Workman et al. 2021).
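To make the hierarchical structure concrete, the sketch below renders an illustrative slice of a three-layer hierarchy as a simple data structure. The labels are paraphrased stand-ins used for illustration, not the exact entries of our codebook.

```python
# Illustrative slice of a three-layer topic hierarchy: CAP major topic ->
# CAP subtopic -> domain-specific detailed topic. Labels are paraphrased examples.
education_hierarchy = {
    "Education": {                                  # CAP major topic
        "Higher Education": [                       # CAP subtopic
            "Student Financial Aid",
            "Institutional Costs and Accountability",
        ],
        "Elementary and Secondary Education": [     # CAP subtopic
            "K-12 Accountability",
            "Educational Standards",
            "School Choice",
        ],
    }
}

# Mutual exclusivity at the detailed level means each paragraph maps to exactly
# one detailed topic, which in turn maps to one subtopic and one major topic.
detailed_to_subtopic = {
    detail: subtopic
    for subtopics in education_hierarchy.values()
    for subtopic, details in subtopics.items()
    for detail in details
}
print(detailed_to_subtopic["School Choice"])  # -> "Elementary and Secondary Education"
```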
We hand-code 5315 paragraphs of text from public comments. While this may sound prohibitively costly for scholars to replicate, modern computing capabilities and machine learning techniques for classification create economies of scale for future work (see Loftis and Mortensen 2020). Our effort builds on this growing body of “text-as-data” approaches in public policy (Pencheva et al. 2020; Greene and Cross 2017; Wilkerson et al. 2015; Grimmer 2010).
An indicator of group agendas
Regulations.gov houses an archive of public comments on proposed rules.Footnote 3 The site grew out of eRulemaking initiatives undertaken in 2002 and is managed by the Program Management Office at the Environmental Protection Agency (EPA). The Chief Information Officers, Regulatory Policy Officers, and Deputy Secretaries from over 40 federal agencies make up the executive committee.Footnote 4 The site archives public comments on each regulation issued by participating bureaucracies. The comments span citizens, interest groups, policy experts, and governmental units across the federal, state, tribal, and local levels.
Rulemaking involves the development of a proposed rule by an agency. The Federal Register publishes the proposed rule, and the agency requests comments from the public and other interested parties. Normally, this comment period lasts 30–60 days but may be expedited or extended for several reasons. Once the formal comment period ends, the agency revises the rule pursuant to these comments. Legally, the agency must respond to comments in the revision of the rule or its preamble or provide a reasoned legal argument for not doing so. Once comments are incorporated, a “final rule” is issued and eventually becomes law. So, there is a proposal, followed by an intervention in the form of public comments, followed by a final rendition of the rule, ostensibly responsive to public comments.
The rulemaking process is a good general measure of attention for bureaucracies (see Workman 2015, pp. 90–92) and for the interest groups who engage the policy debates surrounding proposals. It is the key way that congressional and bureaucratic policies are institutionalized and endure, since other avenues (e.g., guidance documents) are ephemeral. As Kerwin (2003) notes, rulemaking is one of the most time-consuming and resource-intensive activities bureaucracies undertake. The agenda-setting value of public commenting lies in information provision. Opting out of the institutionalized process cedes prominence to other groups hoping to steer the debate. The informational value of public commenting lies both in signaling the salience of the issue for interested groups and in providing policy or substantive information.
Data collection
We collect data from the regulatory agenda of USED from 2007 to 2016.Footnote 5 Our population of rules contains those published during this period that received 1–100 comments. Most rules appeared as both proposed and final rules; we used the proposed rule for our sample.Footnote 6 This process led to a population of 82 unique rules, of which the sample contains 31 (37.8%). We selected rules using a stratified random sample, strategically representative of the offices within USED, accounting for the possibility that offices differed in terms of potential commenters. This sample also accounts for offices’ ability to strategize the rulemaking process both procedurally and substantively (Potter 2019; Workman 2015).
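The sampling step can be illustrated in a few lines of code. The sketch below is a minimal illustration under assumed inputs (a hypothetical rules.csv with one row per eligible rule and a column for the issuing office); it mimics a proportional stratified draw rather than reproducing the exact strategic weights used for our sample.

```python
import pandas as pd

# Hypothetical file: one row per USED rule published 2007-2016 that drew 1-100 comments.
rules = pd.read_csv("rules.csv")  # columns: rule_id, office, n_comments

def draw(group: pd.DataFrame, frac: float = 0.38) -> pd.DataFrame:
    # Sample within each office; guarantee at least one rule per office so that
    # small offices are not dropped by rounding.
    n = max(1, round(frac * len(group)))
    return group.sample(n=n, random_state=42)

sample = rules.groupby("office", group_keys=False).apply(draw)
print(len(sample), "rules sampled across", sample["office"].nunique(), "offices")
```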
We collected the full text of public comments submitted to regulations.gov and hand-coded variables of interest at the paragraph level for each comment. We chose the paragraph as the unit of observation, reasoning that a paragraph is the best expression of a coherent idea. A key feature of our design is capturing topical complexity in the public comments as groups ply various substantive dimensions of argument. The sentence level is too confined to assess topical complexity, while the document level washes away much of the topical nuance of argumentation. Flexibility concerning the unit of text is consistent with CAP data sets, which contain data coded at the quasi-sentence level (e.g., presidential and Queen’s speeches) up to summary descriptions (e.g., congressional hearings) and entire law titles (e.g., US public laws).
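Because the paragraph is our unit of observation, each comment must first be segmented. Below is a minimal sketch of that step, assuming comments stored as plain text with blank lines separating paragraphs; PDF or Word submissions would require text extraction first.

```python
import re

def split_paragraphs(comment_text: str) -> list[str]:
    """Split a comment into paragraphs on one or more blank lines,
    dropping whitespace-only fragments."""
    parts = re.split(r"\n\s*\n", comment_text)
    return [p.strip() for p in parts if p.strip()]

comment = "We support the proposed rule.\n\nHowever, the reporting burden on districts is too high."
print(split_paragraphs(comment))  # two paragraphs, each coded separately
```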
These comments represent feedback on the rule from interested parties, including state education organizations, think tanks and academics, teachers’ unions, interest groups, and individual citizens. Altogether, our sample of 31 rules provided us with 493 comments and 5315 total paragraphs. Each paragraph was nested hierarchically within a comment and rule. Consistent with CAP coding rules, each paragraph received one and only one topic code. We collected metadata at the comment level, including the commenter category (e.g., individual, school district, state agency), the organization name, and the comment’s length.
Extending CAP’s education policy topic
To build our topic coding scheme for education, we drew from the CAP’s description of education topics.Footnote 7 The CAP coding scheme includes a major topic category for education policy broken down into nine subtopics, each supplemented by five to twenty descriptive examples. For instance, the subtopic “Higher Education” includes “student financial aid programs” and “rising costs of operating higher education institutions” as examples of content in this category. To create our detailed topic coding scheme, we compiled the descriptive examples from all of these subtopics into a master list (n = 104), which three education policy experts then evaluated and condensed into 26 initial topic codes.
We iteratively refined the initial set of topic codes in the early stages of coding based on a series of inter-coder reliability assessments. Three additional topic codes were added through this process, leading to a final set of 29 mutually exclusive and exhaustive topics. These topics nest hierarchically under the CAP’s nine subtopics for education policy, creating three layers of mutually exclusive codes for the topic of education policy and allowing the analyst to study education in finer detail. We achieved 90% inter-coder agreement among three undergraduate coders on the final set of topics.
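Percent agreement of this kind can be computed directly from parallel codings. The sketch below is a toy illustration with three hypothetical coders and five paragraphs; it computes the share of paragraphs on which all three coders assigned the same topic code.

```python
# Hypothetical parallel codings of the same five paragraphs by three coders.
coder_a = ["GovOps", "HigherEd", "Choice", "K12Acct", "GovOps"]
coder_b = ["GovOps", "HigherEd", "Choice", "K12Acct", "Standards"]
coder_c = ["GovOps", "HigherEd", "Choice", "K12Acct", "GovOps"]

codings = list(zip(coder_a, coder_b, coder_c))

# Share of paragraphs on which all three coders agree on the topic code.
all_agree = sum(len(set(codes)) == 1 for codes in codings) / len(codings)
print(f"Full agreement: {all_agree:.0%}")  # 80% in this toy example
```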
Categorizing commenters
We also refined the categorization of commenter type. For each USED rule, the agency requires commenters to select a category from an exhaustive list of 42 options, including individuals, businesses, members of Congress, and state agency officials. We noted that some of these categories overlapped, while others were unnecessarily broad. In light of this, we created a modified categorization variable by collapsing some categories and expanding others (see the online Appendix for additional details).
The left side of Fig. 1 presents the number of paragraphs devoted to each of the education topics.Footnote 8 The topics display a considerable amount of variation. The amount of attention devoted to Government Operations is especially striking. This topic pertains to staffing, budgets, and organizational relationships between public and private sector actors and touches on the education system at all levels.
The Government Operations topic also receives attention from more distinct commenters than the other issues. This facet of the topic is important because it departs from popular depictions of education policy as a tug-of-war between traditional public education and alternatives to that sector like charter schools or voucher systems. This makes sense because all these more salient aspects of the problem boil down to governance—who will hold sway in the system.
Figure 1 also suggests an important difference between elite policy debates and the public perception of the issue in the broader policy community, where the question of governance is not so visible. There, issues like school choice or STEM education are prominent. Though important here, they take a back seat to debates about governance, given that the federal government’s main role in education policy is the allocation of money through grants and programs. Debates about the governance of these programs are therefore akin to debates about the allocation of resources.
On the right, Fig. 1 displays the raw frequencies of commenting in the data set by group.Footnote 9 The figure displays data for the 30 commenting groups that contributed the most paragraphs across our topics.Footnote 10 To give the reader a sense of the specific organizations involved in advocacy, we present the raw organizations rather than the categorizations used in the model below. The figure suggests two things crucial for the development of our approach. First, activity levels vary tremendously across topics and commenters, even in this top slice of the data. The National Education Association (NEA) contributed the most paragraphs at 137. Considering that a typical page contains 2–3 paragraphs, this amounts to 45–69 pages of written comments, or 1.5–2 pages per proposal. The Foundation for Individual Rights in Education, a nonprofit focused on campus free speech, contributed the fewest paragraphs among the top 30 groups; its 29 paragraphs amount to 9–15 pages. The median number of paragraphs provided by these top groups was 50 (16–25 pages).
Figure 1 also reveals variation in the type, not just the quantity, of commenter content. By looking at organization names rather than categories, we can see that the variation occurs along three components: sector, career and vocational interests, and geography. At the top of the figure are stalwarts like the NEA or the American Federation of Teachers (AFT), the two largest teachers’ unions in the USA. They are well-known education advocates from the public and nonprofit sectors commenting on USED rules. However, lesser-known entities like the Career Education Corporation and the American Association of Cosmetology Schools join them. These entities represent for-profit education and embody an important second component of variation—career and vocational interests. The third component of variation involves geographical representation. Commenters such as the Colorado Department of Education, the National Indian Education Association, and the Texas Classroom Teachers Association suggest the data’s geographical coverage. Finally, state-level bureaucracies are prominent commenters, actively steering the understanding of education policies at the federal level. This finding parallels similar state-level involvement in federal criminal justice policy and advocacy (Miller 2004).
Overall, the figure suggests two features of our sample of comments on education regulations. The variation in the data gives us some confidence in our data collection strategy, and the figure offers face validity to our approach. The data vary in intuitive ways that, were they absent, would stop exercises like ours in their tracks. At the same time, the data suggest some non-intuitive facets of the education agenda that promise a better understanding of education politics. Given these data features, we proceed with a measurement model.
Methodological approach
Our data set of paragraphs coded by topic allows us to analyze interest group activity across the set of education policy topics. We want to uncover two features of the data. The first is the underlying structure of the topical agenda—its dimensionality. Second, we want to assess similarities between types of groups in the policy space regarding topical foci. We can then assess whether the bundling of topics and group types makes sense regarding what we know about education politics. The underlying structure gives us a chance to evaluate the face validity of our extension of the CAP coding system.
Advocacy around education contains vastly different problem definitions for education policy, leading to different topical foci. We know from the descriptive analysis of the comments that groups exhibit heterogeneity in their topical agendas. Traditional factor analysis is deficient for uncovering the agenda’s underlying structure because it assumes homogeneity, or similar topical mixes, across commenter types. We therefore use a factor mixture model (FMM) to gauge group topical mixes directly (Viroli 2012; Montanari and Viroli 2011; Lubke and Muthén 2005; Ahlquist and Breunig 2012). The online Appendix contains a complete explication of the FMM, its specification, and model selection.
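For readers unfamiliar with factor mixture models, a standard formulation (in the spirit of Lubke and Muthén 2005) is sketched below. This is a generic statement of the model; the exact parameterization we estimate, including which parameters are free to vary across clusters, is given in the online Appendix.

```latex
% Observed topic-attention vector y_i; latent cluster c_i with K classes;
% latent factors eta_i within each class.
\begin{aligned}
\mathbf{y}_i \mid (c_i = k) &= \boldsymbol{\nu}_k + \boldsymbol{\Lambda}_k \boldsymbol{\eta}_i + \boldsymbol{\varepsilon}_i,
\qquad \boldsymbol{\eta}_i \sim \mathcal{N}(\boldsymbol{\alpha}_k, \boldsymbol{\Psi}_k),
\quad \boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Theta}_k), \\
f(\mathbf{y}_i) &= \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\bigl(\mathbf{y}_i;\ \boldsymbol{\nu}_k + \boldsymbol{\Lambda}_k \boldsymbol{\alpha}_k,\ \boldsymbol{\Lambda}_k \boldsymbol{\Psi}_k \boldsymbol{\Lambda}_k^{\top} + \boldsymbol{\Theta}_k\bigr),
\qquad \sum_{k=1}^{K}\pi_k = 1 .
\end{aligned}
```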
We use the FMM to estimate the latent structure contained in the paragraphs across our topics and ask whether this structure makes sense in the context of the politics of education reform. We also use the model to map the agenda space for education commenters and locate commenter types within this space. Our findings reveal components of group argumentation not reflected in public debates and relationships between groups in the policy space heavily influenced by institutional players.
Findings
We have no way to know in advance how many dimensions or clusters the model should assume to describe and reduce the data accurately. Our approach to model selection is empirical, relying on the tremendous effort we have devoted to data collection and coding. We programmed the FMM algorithm to iterate through combinations of up to three dimensions and up to five clusters of group types, returning the Bayesian information criterion (BIC) for the measurement model at each combination. The BIC was minimized at three dimensions and four clusters.Footnote 11
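The grid-search logic can be illustrated with a simplified two-stage stand-in using scikit-learn and simulated data: reduce the topic counts to a candidate number of factors, cluster in the reduced space, and record a BIC for every combination. This is only a sketch of the bookkeeping, not the joint FMM we estimate; in the joint model the BIC is computed on the observed data and is therefore comparable across dimension counts, which is not true of this two-stage shortcut.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated stand-in for a commenter-by-topic attention matrix (rows: commenter types).
X = rng.poisson(lam=3.0, size=(200, 29)).astype(float)

results = {}
for n_factors in range(1, 4):            # up to three latent dimensions
    scores = FactorAnalysis(n_components=n_factors, random_state=0).fit_transform(X)
    for n_clusters in range(1, 6):       # up to five clusters of group types
        gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(scores)
        # Caution: BICs from different score dimensionalities are not strictly
        # comparable; this loop only illustrates the iteration structure.
        results[(n_factors, n_clusters)] = gmm.bic(scores)

best = min(results, key=results.get)
print(f"BIC minimized at {best[0]} dimension(s) and {best[1]} cluster(s)")
```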
Latent dimensions of the education agenda
We surmise that topical attention should reflect larger, integrative policy reforms. To that end, Fig. 2 displays the factor loadings for the topics in education regulatory proposals on the three dimensions estimated in our model. The figure double encodes loading strength with both bar length and shading to highlight the differences: darker, longer bars are stronger loadings; lighter, shorter bars are weaker loadings. The three dimensions are ordered from left to right by decreasing contribution to the variance in topical attention.
Social scientists often reduce models like ours to two dimensions for simplicity. We choose not to do so because we are not using the resulting factors for causal modeling or hypothesis testing. Instead, we think the third dimension is instructive in capturing the relevant, real-world components of debate surrounding education policy (i.e., it speaks to the validity of the coding system).
We label the first dimension “Legacy Agenda.” It represents the federal government’s historical partitioning of education into higher education and K-12 education. The topics divide along this distinction from top to bottom. The federal bureaucracy has long been involved in issues at the top of the Legacy Agenda, such as tuition assistance for veterans, the Federal Pell Grant, and funds for institutions serving historically underrepresented student populations. All are key issues in higher education. The bottom of the legacy agenda, in contrast, is occupied by K-12 education issues. The federal government has only become meaningfully involved in these issues since the No Child Left Behind Act of 2001.Footnote 12
We label the second dimension the “Reform Agenda.” Policy reform does not occur in a vacuum—reform layers on existing policies and programs (Patashnik 2008). The topics emergent in the reform agenda herald a shift in substance and governance that gets layered onto the Legacy Agenda. Simultaneously, the old partitioning between federal involvement in higher education and K-12 education persists.
The Reform Agenda bundles topics that strongly relate to education reform’s core ideas over the past two decades. Issues like School Choice and K-12 Accountability not only emerge out of the reform movement in K-12 education but come to organize this movement and its set of reforms. This substantive shift and its structural consequences pierce the veil of the older partition and seep into the broader issue of higher education with outcome-based reforms, accountability, assessment, and emphasis on skill development. Many issues appearing along this second dimension relate to specific populations of students or specific stages of educational development (e.g., Early Childhood Education). These issues will likely continue to inform education reform as concerns mount over social, civic, and economic inequities in education systems at all levels.
We label the third dimension “College and Career Readiness.” This dimension can be understood in terms of various initiatives to prepare students for college or the workforce. In K-12 education, these reforms increase the focus on Educational Standards, which are intended to prepare all students for college or careers. For example, the college and career readiness reforms sparked by the Carl D. Perkins Career and Technical Education Act of 2006 emerged from the recognition of a skills gap in many STEM careers, which required advancing technical training in K-12 systems.Footnote 13 Moreover, many industry vacancies did not require a four-year degree, prompting an expansion of the certificate programs students could complete in high school. These programs accommodated low-income families’ financial constraints and provided an avenue for social mobility without student loan debt.
In higher education, the college and career readiness dimension reflects the movement toward skills development. This orientation is most evident in STEM training to fill industry needs for engineers, computer scientists, and mathematicians contributing to rapid technological advancements. Finally, Adult Literacy and Education is salient as commenters focus on adult learners seeking career changes and improvements.
Figure 2 identifies the latent structure along topical lines and relates it to broader reforms in education. Operationally, this gives us confidence that the topic and group coding approach captures meaningful conceptual variation in the real-world policy debate. The bundling of topics along reform lines is an important point. These larger reforms alter education’s implementation at the ground level and are reflected in the regulatory agenda at the elite level. They modify the resources and solutions applied to the various problems in education and alter how we understand those problems.
Mapping the agenda space for group types
Figure 3 displays the scores for group types within the space defined by the three substantive dimensions of education policy. The Legacy Agenda appears on the y-axis and the Reform Agenda on the x-axis. The size of the plotted point for each group type relates to the College and Career Readiness dimension. Larger circles indicate more focus on College and Career Readiness, while small circles indicate less attention.
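The three-way encoding in Fig. 3 is straightforward to reproduce from estimated factor scores. The sketch below uses made-up scores for a few group types to show the mapping: Reform Agenda on the x-axis, Legacy Agenda on the y-axis, and point size for College and Career Readiness. The values are purely illustrative, not our estimates.

```python
import matplotlib.pyplot as plt

# Made-up factor scores for illustration: (Reform, Legacy, College & Career Readiness).
groups = {
    "Higher education institutions": (-0.8,  1.2, 0.2),
    "State agencies":                ( 1.4, -0.3, 0.6),
    "Teachers' unions":              (-0.9, -1.0, 0.1),
    "Think tanks / policy orgs":     ( 0.2,  0.1, 0.9),
}

fig, ax = plt.subplots()
for name, (reform, legacy, ccr) in groups.items():
    ax.scatter(reform, legacy, s=200 * ccr + 30)   # point size encodes the third dimension
    ax.annotate(name, (reform, legacy), textcoords="offset points", xytext=(5, 5))
ax.set_xlabel("Reform Agenda")
ax.set_ylabel("Legacy Agenda")
plt.show()
```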
The historical partitioning arising from the legacy agenda is evident. Groupings of higher education institutions and organizations anchor the agenda on one end, with organizations dedicated to K-12 education opposite these. We are encouraged that the model places these groups in their correct polar spheres—a rudimentary check on our coding approach. Along the Legacy Agenda dimension, institutions of higher education and higher education organizations compete for policy attention as the federal government considers changes to major programs that could shift their bottom line and resulting oversight. For example, the federal crackdown on poorly performing programs in the gainful employment rule incited for-profit institutions to become active participants on the Legacy Agenda. K-12 organizations, such as local school districts and teacher unions, are less involved in these debates and have little stake in expanding accountability or funding for higher education.
While this characterization generally holds, think tanks and policy organizations occupy the space between the ends. This placement reflects the diverse agenda of many of these policy organizations and think tanks, such as The Education Trust, which works to reduce inequality in both Higher Education and K-12 Education. It also speaks to the trend of tethering higher education and K-12 education debates, especially along the lines of accountability, assessment, and student outcomes, and to the prominence of think tanks and other research-producing organizations as brokers for ideas and problem definitions (Fagan 2020; Rich 2004).
State agencies anchor the far-right end of the Reform Agenda on the x-axis. Charter schools, unions, and their allies fall on the lower left of the figure. The agenda along this dimension is probably best understood in terms of questions of governance. Those organizations seeking to craft reform policies and programs fall to the left, while those opposing them fall to the right. Many of these debates are about the distribution and oversight of resources, program location, and system structure. Occupying the central ground are those institutions typically the target of reform efforts.Footnote 14
Think tanks, policy organizations, and K-12 organizations occupy a middle ground on each of the dimensions. We think this reflects their importance in structuring debates about education reform along the Reform and College and Career Readiness dimensions. These organizations are the standard-bearers for discussions of student outcomes, accountability, and assessment. They are also the conduits for similar debates in higher education. In this, our research dovetails with the findings in political science generally.
Taken together, Figs. 2 and 3 suggest the validity of our extension of the CAP coding scheme and the measurement exercise. Using the agenda as revealed in the public comments on education regulations, we uncover underlying structure to the policy topics that strongly relate to greater reform efforts in education policy. The model also enables us to map the agenda space for groups in ways that relate well to the latent dimensions. Groups map into the agenda space in intuitive ways as measured by their attention to topics. Our attempt to measure interest group regulatory agendas holds some promise for better understanding interest group competition, coalitions, and framing strategies. It is also instructive for researchers studying policy agendas at lower levels of analysis, especially for interest groups.
Mixture classifications
Recall that four clusters of group types minimized the BIC in our model.Footnote 15 Table 1 presents the classifications for all group types in the analysis in the first column. The columns to the right show each classification’s mean scores on the factors. Taken overall, Table 1 demonstrates the utility of our measurement model in accounting for heterogeneous variance across group types. The group types classified in the same cluster make sense given perceptions in the policy community. They also conform to the map of the agenda space presented in Fig. 3 and relate well to the factors.
Perhaps the most important finding in Table 1 concerns the fourth cluster. These organizations provide research directly or fund it. They score high on the Reform Agenda as expected but moderately high on the Legacy Agenda and College and Career Readiness. Taken together, Fig. 3 and Table 1 show these organizations’ centrality in defining the policy debate, specifically in supplying policy research, crafting solutions, and curating problem definitions. Think tanks and research and policy organizations are important at a lower level of analysis and within a policy domain, replicating trends observed nationally.
The cluster of higher education groups scored highly on the Legacy Agenda and very low on the Reform Agenda. Note that these organizations also score low on College and Career Readiness. These debates have only recently become features of higher education policy with moves toward accountability and assessment. We expect their topical focus to shift accordingly in the coming years.
The top two clusters of groups operate primarily in K-12 education. They tend to have more parochial interests in the debate. Their argumentation is limited in scope and usually addresses a specific solution that does not integrate concerns across the policy space (e.g., charter schools or unions). Both clusters score low on the Legacy Agenda and College and Career Readiness. The distinction between the first and second clusters is on the Reform Agenda. The first cluster of group types is concerned with rights and liberties for students, teachers, protected minorities, and local autonomy. These concerns for inequity and its remedies are part and parcel of the Reform Agenda.
Conclusions
Ours was a measurement exercise. We set out to measure topical agendas at a finer level of detail than existing coding systems allow. Our goal was to assess the worth of public commenting as a measure of the agendas of education policy and of the interest groups active in it. In doing so, our exercise extends the CAP coding scheme in two ways, both of which adapt it to lower levels of analysis. The first is the derivation of a domain-specific coding scheme from, and still nested within, the CAP topics for education. Our contribution is an approach to extending the topic categorization scheme and a method to validate the extension (for alternatives, see Fagan and Shannon 2020). The second is its extension to the study of group agendas in public commenting. We thus build on the development of the measurement system itself (Jones 2016) and further the study of interest group agendas (Baumgartner et al. 2009; Crosson et al. 2021) and advocacy in the bureaucracy (Haeder and Yackee 2020).
Our data collection and coding effort represent a tremendous amount of work. In our view, the result is a template for building out a CAP coding scheme for studying agenda-setting within specific policy domains. Our coding system preserves the basic hierarchical logic of the CAP system and integrates well with it. A scholar using our data could now study the education agenda from initial hearings and public laws through implementing regulations. Our data would also serve as a convenient training data set for supervised machine learning (ML) on education issues. We further contribute to the burgeoning text-as-data approaches to the study of policy (Wilkerson et al. 2015; Workman 2015; Grimmer 2010). These approaches hold out the prospect of economies of scale for projects like ours, especially when conducting supervised ML, where comparatively small hand-coding investments can propagate to much larger data sets (Loftis and Mortensen 2020).
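To illustrate how the hand-coded paragraphs could seed such a classifier, the sketch below trains a simple TF-IDF and logistic-regression pipeline on labeled paragraphs and evaluates it on a held-out split. The file and column names are hypothetical, and in practice rare topics and class imbalance would require more care than this sketch shows.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical export of the hand-coded data: one row per paragraph.
df = pd.read_csv("coded_paragraphs.csv")  # columns: text, topic_code

train_text, test_text, train_y, test_y = train_test_split(
    df["text"], df["topic_code"], test_size=0.2, stratify=df["topic_code"], random_state=0
)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_text, train_y)
print(classification_report(test_y, clf.predict(test_text)))
```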
We would note there are many ways to study bureaucratic policymaking and interest group influence. Carpenter et al. (2020) outline the various options and strategies for data collection and analysis. In particular, ex parte meeting logs and agency guidance documents are key departures from the institutionalized rulemaking process and warrant further attention. Still, our public comments indicator tells us something important about extending topical measurement systems. It also enables us to visualize the agenda space for education policy, lending validity to our measure and teaching us important facets of the issue and of how groups align on the arguments.
Theoretically and conceptually, we find that public commenting on regulations is a useful way to understand interest group lobbying and strategizing and tells us much about how actors understand policy problems. We think the way interest groups bundle policy topics holds promise for understanding competition and cooperation in a policy domain and how policy proposals shape both. Broader reform efforts are reflected in the comments on regulatory proposals and can be understood in bundles of topics.
Finally, we suggest that latent, substantive structure undergirds issue attention as a general feature of agenda-setting. The underlying latent structure of the policy agenda is not static. Like tectonic plates, shifting substantive structure leads to the emergence of new issues, the receding of others, or the bundling of existing issues, forming a completely different understanding of the broader problem. We argue that greater efforts at policy reform (e.g., No Child Left Behind or Every Student Succeeds) from the top serve as one of the more visible of these structural changes, realigning topics and advocacy groups.
Notes
P.L. 79–404, 60 Stat 237; 5 U.S.C. ch. 5, subch. I 500 et seq.
The webpage for the Comparative Agendas Project and codebook is found at https://www.comparativeagendas.net.
The electronic interface for rulemaking is found at http://www.regulations.gov.
Among these, the Department of Education is a member, meaning that organizations within this department submit regulatory information to the site.
This range includes all years for which USED regulations were published on regulations.gov at the time of collection. As noted, regulations.gov is the federal government database of regulations and related documents.
The final rule was included in the few instances where the proposed rule was not available.
For specific counts and percentages for each topic and commenter category, see Tables A2 and A3 in the online Appendix.
These figures do not include data for the topic labeled “No Substantive Information.”
There were 291 distinct commenters in the data set.
Code for the model iteration and the resulting heatmap is available in the online Appendix.
P.L. 107–110.
P.L. 109–270.
Those groups or categories not labeled include businesses, teachers, parents, consultants, and individuals from specific offices not representing their organization among others.
The BIC for this model was 1112.72, which was more than 261 below the next smallest BIC value.
References
Ahlquist, John S., and Christian Breunig. 2012. Model-Based Clustering and Typologies in the Social Sciences. Political Analysis 20 (1): 92–112. https://doi.org/10.1093/pan/mpr039.
Baumgartner, Frank R., Jeffery M. Berry, Marie Hojnacki, David C. Kimball, and Beth L. Leech. 2009. Lobbying and Policy Change: Who Wins, Who Loses, and Why. Chicago: University of Chicago Press.
Baumgartner, Frank R., and Bryan D. Jones. 2009. Agendas and Instability in American Politics, 2nd ed. Chicago: University of Chicago Press.
Baumgartner, Frank R., and Beth L. Leech. 1998. Basic Interests: The Importance of Groups in Politics and in Political Science. Princeton, NJ: Princeton University Press.
Carpenter, Daniel, Deven Judge-Lord, Brian Libgober, and Steven Rashin. 2020. Data and Methods for Analyzing Special Interest Influence in Rulemaking. Interest Groups and Advocacy. 9: 425–435. https://doi.org/10.1057/s41309-020-00094-w.
Crosson, Jesse M., Alexander C. Furnas, and Geoffrey M. Lorenz. 2021. Resources and agendas: Combining Walker’s insights with new data sources to chart a path ahead. Interest Groups and Advocacy. 10: 85–90. https://doi.org/10.1057/s41309-021-00113-4.
Conklin, Jeff. 2005. Dialogue Mapping: Building Shared Understanding of Wicked Problems. Chichester, UK: Wiley.
Fagan, E. J. 2020. Information Wars: Party Elites, Think Tanks and Polarization in Congress. Dissertation. https://repositories.lib.utexas.edu/handle/2152/33349.
Fagan, E.J., and Brooke Shannon. 2020. Using the Comparative Agendas Project to Examine Interest Group Behavior. Interest Groups and Advocacy 9: 361–372. https://doi.org/10.1057/s41309-020-00081-1.
Fagan, E.J., Zachary McGee, and Herschel F. Thomas III. 2019. The Power of the Party: Conflict Expansion and the Agenda Diversity of Interest Groups. Political Research Quarterly 71 (4): 90–102.
Greene, Derek, and James P. Cross. 2017. Exploring the Political Agenda of the European Parliament Using a Dynamic Topic Modeling Approach. Political Analysis 25 (1): 77–94. https://doi.org/10.1017/pan.2016.7.
Grimmer, Justin. 2010. A Bayesian Hierarchical Topic Model for Political Texts: Measuring Expressed Agendas in Senate Press Releases. Political Analysis 18 (1): 1–35. https://doi.org/10.1093/pan/mpp034.
Haeder, Simon F., and Susan Yackee. 2020. Out of the public’s eye? Lobbying the President’s Office of Information and Regulatory Affairs. Interest Groups and Advocacy. 9: 410–424. https://doi.org/10.1057/s41309-020-00093-x.
Haider-Markel, Donald P. 2006. Acting as Fire Alarms with Law Enforcement? American Politics Research 34 (1): 95–130. https://doi.org/10.1177/1532673x05275630.
Halpin, Darren R., and Herschel F. Thomas III. 2012. Evaluating the Breadth of Policy Engagement by Organized Interests. Public Administration 90: 582–599. https://doi.org/10.1111/j.1467-9299.2011.02005.x.
Heinz, John P., Edward O. Laumann, and Robert L. Nelson. 1997. The Hollow Core: Private Interests in National Policy Making. Cambridge, MA: Harvard University Press.
Ingold, Karin, Manuel Fischer, and Paul Cairney. 2016. Drivers for Policy Agreement in Nascent Subsystems: An Application of the Advocacy Coalition Framework to Fracking Policy in Switzerland and the UK. Policy Studies Journal 45 (3): 442–463. https://doi.org/10.1111/psj.12173.
Jones, Bryan D. 2016. The Comparative Policy Agendas Projects as Measurement Systems: Response to Dowding, Hindmoor and Martin. Journal of Public Policy 36 (1): 31–46. https://doi.org/10.1017/S0143814X15000161.
Jones, Bryan D., and Frank R. Baumgartner. 2005. The Politics of Attention: How Government Prioritizes Problems. Chicago: University of Chicago Press.
Jones, Michael D., and Hank C. Jenkins-Smith. 2009. Trans-Subsystem Dynamics: Policy Topography, Mass Opinion, and Policy Change. Policy Studies Journal 37 (1): 37–58. https://doi.org/10.1111/j.1541-0072.2008.00294.x.
Kerwin, Cornelius M. 2003. Rulemaking: How Government Agencies Write Law and Make Policy. Washington, DC.: Congressional Quarterly Press.
Klüver, Heike, and Christine Mahoney. 2015. Measuring Interest Group Framing Strategies in Public Policy Debates. Journal of Public Policy 35 (2): 223–244. https://doi.org/10.1017/s0143814x14000294.
Libgober, Brian and Steven Rashin. 2018. “What Public Comments During Rulemaking Do (and Why).” Working Paper. Accessed at https://libgober.files.wordpress.com/2018/09/what-comments-do-and-why-libgober-rashin.pdf.
Loftis, Matt W., and Peter B. Mortensen. 2020. Collaborating with the Machines: A Hybrid Method for Classifying Policy Documents. Policy Studies Journal 48: 184–206. https://doi.org/10.1111/psj.12245.
Lubke, Gitta H., and Bengt Muthén. 2005. Investigating Population Heterogeneity with Factor Mixture Models. Psychological Methods 10 (1): 21–39. https://doi.org/10.1037/1082-989x.10.1.21.
Mahoney, Christine. 2007. Networking Vs. Allying: The Decision of Interest Groups to Join Coalitions in the US and the EU. Journal of European Public Policy 14 (3): 366–383. https://doi.org/10.1080/13501760701243764.
May, Peter J., Joshua Sapotichne, and Samuel Workman. 2006. Policy Coherence and Policy Domains. Policy Studies Journal 34 (3): 381–403. https://doi.org/10.1111/j.1541-0072.2006.00178.x.
May, Peter J., Samuel Workman, and Bryan D. Jones. 2008. Organizing Attention: Responses of the Bureaucracy to Agenda Disruption. Journal of Public Administration Research and Theory 18 (4): 517–541. https://doi.org/10.1093/jopart/mun015.
McGee, Zachary A., and Bryan D. Jones. 2019. Reconceptualizing the Policy Subsystem: Integration with Complexity Theory and Social Network Analysis. Policy Studies Journal 47 (S1): S138–S158. https://doi.org/10.1111/psj.12319.
Miller, Lisa L. 2004. Rethinking bureaucrats in the policy process: Criminal justice agents and the national crime agenda. Policy Studies Journal 32 (4): 569–588.
Montanari, Angela, and Cinzia Viroli. 2011. Dimensionally Reduced Mixtures of Regression Models. Journal of Statistical Planning and Inference 141 (5): 1744–1752. https://doi.org/10.1016/j.jspi.2010.11.024.
Nelson, David, and Susan Webb Yackee. 2012. Lobbying Coalitions and Government Policy Change: An Analysis of Federal Agency Rulemaking. The Journal of Politics 74 (2): 339–353. https://doi.org/10.1017/s0022381611001599.
Pencheva, Irina, Marc Esteve, and Slava Jankin Mikhaylov. 2020. Big Data and AI—A Transformational Shift for Government: So, What Next for Research? Public Policy and Administration 35: 24–44. https://doi.org/10.1177/0952076718780537.
Potter, Rachel Augustine. 2019. Bending the Rules: Procedural Politicking in the Bureaucracy. Chicago, IL: University of Chicago Press.
Pralle, Sarah B. 2006. Branching Out, Digging in: Environmental Advocacy and Agenda-setting (American Government and Public Policy). Georgetown University Press.
Rich, Andrew. 2004. Think Tanks, Public Policy, and the Politics of Expertise. New York, NY: Cambridge University Press.
Rittel, Horst, and Melvin Webber. 1973. Dilemmas in a General Theory of Planning. Policy Sciences 4: 155–169. https://doi.org/10.1007/BF01405730.
Smith, Mark A. 2000. American Business and Political Power: Public Opinion, Elections, and Democracy. Chicago: University of Chicago Press.
Viroli, C. 2012. Using Factor Mixture Analysis to Model Heterogeneity, Cognitive Structure, and Determinants of Dementia: An Application to the Aging, Demographics, and Memory Study. Statistics in Medicine 31 (19): 2110–2122. https://doi.org/10.1002/sim.5320.
Wilkerson, John, David Smith, and Nicholas Stramp. 2015. Tracing the Flow of Policy Ideas in Legislatures: A Text Reuse Approach. American Journal of Political Science 59: 943–956. https://doi.org/10.1111/ajps.12175.
Workman, Samuel. 2015. The Dynamics of Bureaucracy in the US Government: How Congress and Federal Agencies Process Information and Solve Problems. Cambridge, UK: Cambridge University Press.
Workman, Samuel, Frank R. Baumgartner, and Bryan D. Jones. 2021, forthcoming. The Code and Craft of Punctuated Equilibrium. In Methods of the Policy Process, ed. Christopher M. Weible and Samuel Workman. Milton Park, UK: Routledge Press.
Yackee, Susan Webb. 2005. Sweet-Talking the Fourth Branch: The Influence of Interest Group Comments on Federal Agency Rulemaking. Journal of Public Administration Research and Theory 16: 103–124.
Yackee, Susan Webb. 2012. The Politics of Ex Parte Lobbying: Pre-Proposal Agenda Building and Blocking During Agency Rulemaking. Journal of Public Administration Research and Theory 22: 373–393.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Workman, S., Carlson, D., Bark, T. et al. Measuring interest group agendas in regulatory proposals: a method and the case of US education policy. Int Groups Adv 11, 26–45 (2022). https://doi.org/10.1057/s41309-021-00129-w
Keywords
- Interest groups
- Agenda-setting
- Regulatory policy
- Education
- Measurement
- Issue attention