Introduction

The inequitable distribution of impacts from the development of the modern highway transportation system in the United States (US) is well documented and discussed (Bullard et al. 2004; Golub et al. 2013; Nall 2018; Sanchez et al. 2003; Yarbrough 2021). Environmental justice (EJ) is a concept developed in the US that seeks to address social inequalities resulting from environmental impacts of development, such as transportation infrastructure. Historically defined in the US as EJ communities, Black, Latine, low-income, and low-English proficiency neighborhoods often experience marginalization, displacement, inequitable impacts, and a lack of meaningful engagement in the transportation project delivery process (Karner and Marcantonio 2018). Since the turn of the century, transportation literature has developed policies, methods, and frameworks for adequately identifying and accounting for the distribution of impacts to EJ communities. Subsequently, decades of federal policy and guidelines have helped transportation agencies further consider impacts to historically marginalized communities, promoting EJ and social equity in transportation infrastructure development (Sanchez et al. 2003). These impacts are primarily investigated at a project level under the National Environmental Policy Act (NEPA), regionally in regional transportation plans (RTPs) and long-range transportation plans (LRTP) by metropolitan planning organizations (MPOs), and statewide by state departments of transportation (DOTs). State and regional plans provide program-level analyses of EJ impacts, while specific efforts to mitigate EJ impacts are incorporated into individual projects (Federal Highway Administration 2015).
Recent federal programs and initiatives such as Justice40, Equity Action Plan, Reconnecting Communities, and others, seek to deliver prescriptive evaluations of project impacts in a continued effort to shift benefits toward historically marginalized neighborhoods, thus overcoming decades of disproportionate impacts and underinvestment.

Transportation EJ scholarship and practice predominantly evaluates and operationalizes quantitative measures to identify (Rowangould et al. 2016) and assess impacts to EJ communities within these varying planning scales and programs using indices (Chakraborty 2006), GIS (Chakraborty et al. 1999; Forkenbrock and Schweitzer 1999), and other methods to avoid, minimize, or mitigate impacts to EJ communities in transportation infrastructure development. More recently, community impact assessment (CIA) has emerged as an extension of EJ analysis, evaluating a more comprehensive set of measures above and beyond impacts to EJ communities, attempting to evaluate impacts to a community’s quality of life from transportation actions (FHWA 2018). While CIA is not a specifically federally mandated analysis, it does support additional non-discrimination measures like EJ, Title VI of the Civil Rights Act of 1964, the Americans with Disabilities Act, and others (FHWA 2018).

Studies have argued technical “agency-led,” or “state-centered,” equity analyses (not to be conflated with EJ analyses) and actions “mainly reform—rather than transform—the transportation system” and do little to further justice for disadvantaged populations (Karner et al. 2020, pp. 441–442; Pulido and De Lara 2018; Sanchez et al. 2003). As Kågström and Richardson (2015) note, there is also limited state-centered action or research seeking to understand how agency practitioners interpret and operationalize equity and justice analyses. This gap leaves unexamined a critical component of how practitioners influence and expand the practice, potentially contributing to variation in state-centered EJ improvements (Karner et al. 2020). From a governance perspective, policies and practices are only as effective as the means, methods, and individuals who carry them out. To further advance EJ in transportation project delivery, there is a need to study the planners and engineers responsible for implementing equity-based policies at state DOTs, investigate workforce development and organizational practices, and examine the associated impacts for EJ in transportation project delivery. The purpose of this case study is to uncover state DOT EJ/CIA practitioner (referred to simply as “practitioners” in the remainder of the article) operationalization of EJ and CIA analysis through the following research questions: (1) What are practitioners’ experiences carrying out EJ assessments in federally funded highway transportation projects? (2) How do practitioners understand and operationalize federal policies into state level action?

Literature review

Three strands of transportation EJ literature will be reviewed as part of this case study: (1) existing practitioner studies, (2) methodological considerations for EJ/CIA analysis, and (3) evaluation of agency actions regarding EJ.

Existing practitioner studies

No known qualitative studies of CIA practitioners exist in the transportation literature and few qualitative EJ practitioner studies exist (Fields et al. 2020; Sen 2008; Strelau and Köckler 2016), with none evaluating practitioner experiences implementing EJ policy in state level transportation governance.

A salient theme over decades throughout existing studies is the impact of individual practitioner discretion on EJ policy implementation. Through individual discretion, environmental assessment practitioners may have varying experiences conceptualizing, applying, and implementing an impact analysis (Zhang et al. 2018). While discretion and variability are inherent in nearly any human process, human variability is not something to be eliminated but rather understood in how it impacts justice in a particular governance circumstance like a transportation project (Zhang et al. 2018). An early study into experiences of municipal, state, and regional EJ practitioners in the Baltimore-Washington DC region in 2002 highlighted practitioner discretion in the study’s finding that “human agency plays a significant role in determining the level of concern for EJ among [agencies]” (Sen 2008, p. 123). Strelau and Köckler (2016) found similar difficulties over a decade later in Germany, where practitioners in environmental agencies expressed varying degrees of legitimization of EJ, or other social dimensions of policies, as part of their assessment purview. Practitioners in largely technical capacities may not perceive social or justice-oriented issues as within the mandate of an organization (Holifield 2004; Strelau and Köckler 2016). The ways practitioners understand and conceptualize their own individual practice, shaped by social forces, policies, and governance frameworks, have significant implications for how they take action and influence assessment practice at an individual and institutional level (Kågström and Richardson 2015).

Existing studies highlight project governance and management as elements that influence the successful application of social impact analysis (Mottee 2022). Through a systematic review of the literature, Wong and Ho (2015) identified that practitioners may engage with social impact assessment through project, community, or social impact assessment development.

Studies have also shown that practitioners and community members alike recognize the need for early, integrated, and interdisciplinary approaches to incorporating social impacts like EJ into transportation project governance (Fields et al. 2020; Lucas et al. 2022; Mottee et al. 2020a, b; Mottee 2022; Sen 2008). Successfully integrating social impact measures into a governance framework can be hindered by unclear and loosely enforced regulations, long project planning and construction timeframes lasting years or decades, and power dynamics in decision-making processes both internal and external to an implementing agency like a state DOT (McNair 2020; Mottee et al. 2020a, b; Mottee 2022). Legal or regulatory catalysts for EJ and broad public participation in the project delivery process are shown to be important factors influencing the integration of EJ within an implementing agency’s assessment and governance strategy (Sen 2008; Strelau and Köckler 2016). Additionally, studies suggest interdisciplinary collaborative approaches have benefits for transportation practitioners (Fields et al. 2020) and students (Miller et al. 2019), raising the importance of interprofessional governance approaches as a possibility of effectively meeting the needs of EJ communities in transportation project delivery.

Methodologically, previous research has utilized policy review, surveys, interviews, and focus groups of transportation project stakeholders (e.g., Mottee et al. 2020a, b; Mottee and Howitt 2018; Sen 2008). Often, practitioner research utilizes interviews and focus groups with project management or impact assessment personnel and/or community members involved in a specific project (Lucas et al. 2022; Mottee et al. 2020a, b; Mottee 2022). While the context specific nature of evaluating project-level governance is an important and holistic approach, a gap remains in understanding the implications surrounding a specific practitioner role within a particular, yet contextually variable, transportation governance structure impacting many projects across many regions.

EJ/CIA analysis in transportation project delivery

Methodological considerations for impact analysis

Early efforts to evaluate EJ in transportation projects primarily focused on utilizing geographical information systems (GIS) to understand the interactions between EJ communities and transportation infrastructure. In particular, disproportionate impacts from proximity to air pollution and noise have been (and still are) a primary analysis lens as they generally represent more direct and traditionally understood health and environmental impacts stemming from transportation infrastructure (Chakraborty et al. 1999; Forkenbrock and Schweitzer 1999; Most et al. 2003). As Bowen and Wells (2002) critique, EJ analysis typically relies on proximity rather than risk, which has the potential to overrepresent and underrepresent the risk of exposure to environmental hazards a particular population may experience, presenting a methodological concern for agencies to consider.
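The proximity-versus-risk distinction raised by Bowen and Wells (2002) can be made concrete with a toy calculation. The sketch below is purely illustrative and all distances, emission rates, and the decay function are invented assumptions, not drawn from the studies cited above: a community just outside a fixed screening buffer can nonetheless carry a higher exposure score than a flagged community near a lower-emission source.

```python
# Toy contrast between proximity-only screening and a distance-decay
# exposure score. All distances, emission rates, and the decay form are
# invented for illustration; real analyses use dispersion modeling.

BUFFER_M = 500  # assumed fixed screening buffer, in meters

def proximity_flag(distance_m: float) -> bool:
    """Proximity screen: flag any community within the buffer."""
    return distance_m <= BUFFER_M

def decay_exposure(distance_m: float, emission_rate: float) -> float:
    """Toy risk-style score: source strength attenuated with distance."""
    return emission_rate / (1.0 + (distance_m / 100.0) ** 2)

# Community A sits near a lightly trafficked segment; community B sits
# just outside the buffer beside a heavy corridor.
a = {"distance_m": 400, "emission_rate": 10.0}
b = {"distance_m": 600, "emission_rate": 120.0}

print(proximity_flag(a["distance_m"]))            # True: A is flagged
print(proximity_flag(b["distance_m"]))            # False: B is not
print(decay_exposure(**a) < decay_exposure(**b))  # True: B's exposure score is higher
```

The point of the sketch is only that the two screens can rank the same two communities in opposite orders, which is the over- and under-representation concern the critique identifies.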

Methodological considerations are crucial for ensuring social impact analyses like EJ are not “lost, buried or undervalued and marginalized within the wide and largely monetized [transportation project] appraisal process” (Lucas et al. 2022, p. 3). Of particular note is the difficult process of attempting to quantify social impacts in transportation infrastructure development which can create conflicting transportation project valuations based on the methodology and factors considered (Mouter et al. 2021). This critique highlights limitations identified in broader transportation equity research pointing out the difficulty of quantitative methods, like cost–benefit analysis, to holistically incorporate intangible social impacts (Martens 2011; Martínez-Muñoz et al. 2022; Thomopoulos et al. 2009).

The process of assessing disproportionate impacts in transportation planning is often influenced by the method of analysis used to identify EJ communities at the outset. Techniques employed by agencies to compare impacts on exposed EJ communities to surrounding unexposed areas can be loosely characterized as thresholds, graduated scales, weighted averages, or indices (Oswald Beiler and Mohammed 2016; Rowangould et al. 2016). These scales/indices attempt to draw statistical conclusions about the level of proportionate, or disproportionate, impacts on EJ communities. These methods may be limited in their usefulness when populations of interest are integrated amongst other population groups rather than segregated (Duthie et al. 2007). EJ analysis is further complicated by the geographic unit of analysis used, a sensitivity referred to as the “modifiable areal unit problem” (MAUP) (Pereira et al. 2019). MAUP can potentially underrepresent EJ impacts from a particular project, either intentionally or unintentionally (Baden et al. 2007). Separate from MAUP, McNair (2020) noted this potential, or tendency, to find no impact in a review of airport expansion environmental impact statements (EIS) in the US between 2000 and 2010, even when impacts were identified by agencies. Sensitivity of EJ analysis to geographic units can shift the determination of a disproportionate impact from significant to null (i.e., show no significant impact), and is often highly sensitive to changes in infrastructure placement/alignment, leading many authors to recommend sensitivity analysis as a critical component of transparent EJ analysis (Baden et al. 2007; Chakraborty 2006; Davis and Jha 2011; Most et al. 2003; Noonan 2008; Pereira et al. 2019; Rowangould et al. 2016).
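A minimal numeric sketch, with entirely hypothetical figures, illustrates how MAUP-style sensitivity can flip a threshold determination: the same block-level data produces different flags at the block, tract, and whole-area scales.

```python
# Hypothetical demonstration of the modifiable areal unit problem (MAUP):
# identical block-level data, aggregated at different unit scales, flips a
# simple disproportionality threshold test. All figures are invented.

THRESHOLD = 0.50  # assumed screening rule: flag if EJ share exceeds 50%

# Four adjacent blocks as (EJ population, total population), all within
# the impact area. Blocks 1-2 form tract X; blocks 3-4 form tract Y.
blocks = [(450, 500), (400, 500), (50, 500), (100, 500)]

def ej_share(units):
    """EJ population share across a set of (ej, total) units."""
    return sum(u[0] for u in units) / sum(u[1] for u in units)

# Block scale: each block tested on its own.
block_flags = [b[0] / b[1] > THRESHOLD for b in blocks]

# Tract scale: blocks merged into two larger units.
tract_flags = [ej_share(blocks[:2]) > THRESHOLD,
               ej_share(blocks[2:]) > THRESHOLD]

print(block_flags)                   # [True, True, False, False]
print(tract_flags)                   # [True, False]
print(ej_share(blocks) > THRESHOLD)  # False: whole area sits exactly at 50%
```

Here the whole-area aggregation finds no disproportionate share at all, while finer units flag half the blocks, which is why the literature above recommends sensitivity analysis across geographic scales.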

A study of rail projects in Amsterdam and Sydney found “transport planning practice is slow to adopt [the shift toward more qualitative methods] and continues to be dominated by a focus on technical and economic concerns” (Mottee et al. 2020a, b, p. 52). In a US case study, quantitative information generated within the regional transportation planning agency’s travel demand model was found to be privileged over the experiential knowledge gained within the transportation planning process (Nostikasari and Casey 2020). Research continues to interrogate quantitative methodologies that lack robust equity and justice measures for regional and other levels of transportation governance (Bills 2022). Paradoxically, while there is a heavy reliance on quantitative performance measures in tracking equity and EJ in transportation planning, many state and regional transportation agencies still acknowledge data availability and disaggregation as a constraint on effectively carrying out such analysis (Barajas et al. 2022). In short, lagging advances in technical approaches like long-range transportation planning may undermine rapidly advancing goals to incorporate measures like EJ and other social impact priorities in transportation plans, but methodological discussions should not overtake the focus of addressing contextually specific distributive concerns and mitigating unintended consequences (Field et al. 2022; Handy 2008; Martens 2011).

Evaluation of agency actions regarding EJ

Ensuring consistent application of EJ/CIA analysis across the US is imperative not only to achieving more just and equitable transportation infrastructure, but also to advancing more just governance at all levels of government. However, as Karner notes in relation to equity analysis in regional governance, “despite the proliferation of academic studies, sophisticated data and methods are slow to diffuse to practice” (Karner 2016, p. 47). Agencies do not mature in their EJ governance at the same pace or scale (Amekudzi et al. 2012). Variations in agency and state resources impact the integration and application of EJ policies, procedures, and practices, influencing the effectiveness of EJ analyses at the state and regional level (Amekudzi et al. 2012; Amekudzi and Meyer 2006; Barajas et al. 2022; Karner 2016). Despite a broadly increasing focus on social equity in transportation agencies, practitioners face significant challenges as they attempt to integrate new policies and procedures while also attempting to evaluate the social equity impacts of their plans and policies, at times leaving specific social objectives or performance measures by the wayside (Manaugh et al. 2015). On the other hand, some states are considered highly effective in their EJ/CIA implementation and stand out as examples for other agencies, creating a need for broad-scale analysis of nationwide variations (Amekudzi et al. 2012; Amekudzi and Meyer 2006; Ward 2005). This variation may be due to differing environmental paradigms (e.g., EJ and sustainable development), which shape competing priorities that may be approached in either a proactive or reactive manner depending on the level of integration of priorities into the institutional framework (Amekudzi and Meyer 2006).
Persistent inconsistencies in transportation governance have led to recent arguments that the focus on state-centric actors “have generally not worked to achieve transformative outcomes in the transportation system and mitigate prior injustice” (Karner et al. 2020, p. 442).

Finally, elements of embedded structural racism play a significant role in both the procedural and distributional impacts and benefits, creating unacknowledged systems of privilege and status quo in transportation infrastructure development (Pulido 2000). Personal experiences of practitioners and communities alike are “grounded to different degrees in particulars and abstractions” (Kurtz 2009, p. 686). These differences in experience necessitate longitudinal evaluation of EJ across multiple contexts to understand the structurally disproportionate ways EJ communities are affected by less visible racialized impacts (Kurtz 2009). In their EJ evaluations, practitioners both knowingly and unknowingly operate within a racialized system of governance with significant inertia that keeps benefits and impacts flowing in particular directions (Táíwò 2022). Organizational inertia tends to lead toward a governance system which does not seek to disrupt itself but instead performs the motions laid out by a regulatory framework (Pulido 2017). Shifting toward a more just EJ governance system requires agencies to interrogate not merely how stakeholders are included in the transportation planning and project delivery process but, rather, how power is actualized within the process and whether it is leveraged in a way that continues to privilege state interests or cedes power to communities impacted by a transportation project (Karner et al. 2020). These interrogations of the operationalization of power, or social capital, must shift the research focus to the contextual elements creating conditions for inequality to perpetuate in a system (Schwanen et al. 2015).

Given the lack of research focused on the practitioners responsible for EJ analysis, and the critical role governance structures play in perpetuating the inertia of environmental racism and injustice, we raise the importance of speaking with practitioners themselves, as actors within their unique governance systems. Through their lenses, we seek to understand how EJ analysis is contextually carried out from personal and agency perspectives and to identify areas of transformative change within the practice.

Methods

Case study context

This study focuses on the experiences of state DOT practitioners involved in evaluating impacts to EJ communities adjacent to transportation infrastructure projects. Previous qualitative studies of EJ practitioners in the US have not covered statewide contexts and have been primarily focused on urbanized areas (Fields et al. 2020; Sen 2008). To bridge this gap, this qualitative case study encompasses practitioners working at state DOTs in the US who are federally responsible for carrying out transportation infrastructure EJ reviews for projects in both urban and rural contexts.

From a governance perspective, state DOTs are responsible for planning and coordinating transportation infrastructure projects within their own states and are further responsible for implementation and construction of projects in coordination with regional, municipal, and community stakeholders (Fig. 1). State DOTs can be thought of as implementing agencies. USDOT, the federal transportation agency in the US, is responsible for planning and coordinating federally funded transportation projects across road, air, transit, and sea modes, with each mode generally administered by a different agency (e.g., Federal Highway Administration (FHWA), Federal Aviation Administration, Federal Transit Administration). When USDOT provides funding for a project implemented by a state DOT or any other level of government, such as a regional (MPO) or municipal government, the implementing agency must meet federal requirements like the NEPA environmental review process, which includes EJ analysis. While additional scales of long- and short-range transportation planning also incorporate EJ impact analysis, this case study focuses on project-level experiences. Agencies like FHWA interact with each state DOT through local (state-level) FHWA offices. State DOTs may be additionally responsible to their state governments for requirements above and beyond federal standards. These state-by-state experiences in carrying out federal standards at the project level are the contextual backdrop for this transportation governance case study.

Fig. 1
figure 1

Simplified state DOT project delivery governance process

Purposeful sampling and participant bounding criteria sought out practitioners in urban and rural contexts with at least two years’ experience in EJ/CIA work at state DOTs. When possible, multiple practitioners from the same agency were interviewed to increase opportunities for data triangulation, validity, reliability, and credibility (Lincoln and Guba 1985; Yin 2006). Nineteen practitioners were interviewed across fourteen states (Fig. 2). Their EJ/CIA experience ranged between two and thirty years with a median experience level of thirteen years. Eleven practitioners self-identified as female and the remaining eight self-identified as male. Sixteen practitioners identified as white, two identified as Asian, and one identified as Latinx (Table 1).

Fig. 2
figure 2

State and practitioner case study participant geographic distribution (US Census Bureau 2011)

Table 1 Participant demographic summary (n = 19)

From an organizational structure perspective, agencies sampled in this study ranged in personnel size from 1,000 to nearly 20,000 employees, with twelve agencies having 5,000 employees or fewer. All states within the sample are divided into operational support networks, or regions/districts: geographic subdivisions grouping parts of a state in close proximity to one another. States in this study had 3–12 regions or districts. A large majority of practitioners interviewed serve in the environmental departments of their respective agencies and support NEPA review processes (Fig. 1). These practitioners either serve projects within specific regions or districts or serve as a shared resource for projects across the entire state. While practitioners described elements of EJ in other programs and contexts like Title VI or equity, NEPA was the primary programmatic engagement point for most practitioners interviewed. Practitioners engaged with the project lifecycle at multiple points: some reported being involved in the delivery process through construction, operation, and maintenance, while others reported only being involved in project planning and NEPA documentation preparation. Some variation in NEPA engagement between states and FHWA was also observed. Two states have agreements for what is referred to as NEPA assignment, meaning their agencies assume control and responsibility for federal environmental reviews and NEPA decision-making within their states, eliminating FHWA project-specific reviews. Put simply, states with NEPA assignment assume federal jurisdiction for their decisions to allow for a faster NEPA review process. Categorical exclusions (CEs) represented the largest proportion of environmental review designations received by federal-aid projects in each state included in the sample.
Every practitioner interviewed indicated that 90% or more of their agency's federal-aid projects qualified for a CE. Federal-aid projects receiving CE designations are considered to have no significant environmental impacts and may undergo less extensive NEPA reviews. This means projects with a higher potential for impact made up 10% or less of the projects reviewed by practitioners in this study.

Positionality

The first author is a white, male, upper-middle-class American citizen with a background as a professional engineer in the utility industry, where he interacted with state DOTs and other local and regional transportation agencies on many occasions and in varied contexts. This previous experience contributed to his understanding of variability in state and regional project management practices. The second author is a Black, female, upper-middle-class American citizen with a background as a professor and as a professional engineer at a state DOT. Participants in this study work at organizations with which neither author has previous project experience, helping reduce bias. Additionally, both authors' identities as engineers are traditionally interpreted as positions of power in design and construction environments and organizations, requiring critical reflection at all stages of the study. No previous connections existed between the authors and participants.

Data collection

Interviews, agency documents, and artifacts were collected as primary data sources. When available, agency documents such as publicly available standards, guidelines, process diagrams, records of decision (ROD), websites, and other artifacts were collected. A semi-structured interview protocol was implemented, with the protocol adjusted throughout the data collection phase to address emerging themes (Creswell and Poth 2018). Virtual interviews (n = 18) were carried out with participants between May and July 2022, with interview times ranging from 28 to 65 min. All interviews were one-on-one, except for three interviews in which two participants from the same agency and department were interviewed at one time. Audio recordings of interviews were initially transcribed with the assistance of either Otter.ai software or a professional transcription service; the first author finalized transcripts.

Adherence to confidentiality and voluntary participation ensured participants had free, prior, and informed consent before, during, and after completion of participation (Hanna and Vanclay 2013; Vanclay et al. 2013). Participant names, agencies, and any other identifying information were removed from transcripts and kept completely anonymous with names recorded in a double password protected spreadsheet on a locked computer only available to the authors. All names used in this study are pseudonyms selected by the participants and unmodified by the authors. This case study received Southern Methodist University Institutional Review Board approval.

Data analysis

Transcripts were read through several times to become familiar with the information before engaging in a first round of reflexive memoing of all transcripts, documents, and artifacts (Creswell and Poth 2018). Qualitative data underwent an initial round of open, descriptive coding, with reliability increased through the development of a codebook (Bazeley 2013; Saldaña 2016). Codes and themes were formed, arranged, and validated through the constant comparative method throughout the data analysis process as new data was compiled (Glaser 1965). The codebook and a second round of reflexive memos informed the emergence of themes in the qualitative data (Bazeley 2013; Creswell and Poth 2018).

Results

While practitioners reported many similarities in their EJ assessment experiences, results from the practitioner interviews revealed several key themes (summarized in Fig. 3) relating to: (1) variations in practitioner roles, (2) challenges accessing data, (3) differences in the subjective nature of assessing disproportionate impacts, and (4) internal and external EJ collaboration. Each of the following sections will describe the main themes using quotes to reinforce thematic findings.

Fig. 3
figure 3

Main themes from EJ/CIA practitioner interviews

Practitioners support EJ in the NEPA process with varying levels of specialization and ability to influence EJ/CIA practice in the agency

Three types of practitioner roles and orientations to EJ were revealed through the interviews: Type 1 being a general NEPA practitioner, Type 2 being a NEPA practitioner with additional EJ/CIA subject matter expert (SME) responsibilities, and Type 3 being a dedicated EJ/CIA SME who does not necessarily engage directly in the NEPA review process. Type 1 practitioners, operating primarily at the region or district level, often described EJ as part of many other environmental review processes they carried out within their broader NEPA responsibilities and rarely identified specialization in the topic. Type 2 practitioners were formally designated as an EJ or CIA SME in their department, carrying out similar responsibilities as a Type 1 practitioner but with the added responsibility of being an EJ specialist contributing somewhat to the development of internal EJ guidance. Finally, Type 3 practitioners operated as dedicated EJ resources in the agency with their primary responsibility being EJ or CIA review, often engaging in more formalized EJ policy and guidance development and collaboration for the agency. Type 2 and 3 practitioners often also described operating in a support role for the entire state.

NEPA emerged as a prevailing cornerstone for practitioners to orient themselves to EJ within their agencies, either through environmental review or public involvement (PI) in project delivery. Frank, a district/regional environmental planner with over twenty years’ experience, described his role primarily “as a NEPA practitioner in [his agency’s] transportation planning and project development arena, preparing NEPA documentation, submitting NEPA documentation for approval, and documentation for EJ or community impact assessment, as needed.” Type 1 practitioners like Frank noted having other responsibilities in the environmental permitting process, such as wetland permits, state permits, US Army Corps of Engineers permits, and other related permits.

Environmental planners in several states had a designated role as either a socioeconomic or EJ SME. This can be thought of as a designated specialty on top of their primary role as environmental planners (NEPA + SME). As Mr. O indicated,

“I review any possible socioeconomic or environmental justice concerns and also help with the guidance and implementation of those processes, as well as overseeing most of the environmental portion of a project…any of our seven sections. But, my job as the subject matter expert is to be well versed in the socioeconomic and environmental justice aspect.”

Additional specialized knowledge sets Type 2 practitioners like Mr. O apart from traditional environmental planners (Type 1), as it formalizes EJ expertise within the agency, providing a focal person for resources, review, and implementation.

Finally, Type 3 practitioners share a common trait within their respective agencies as dedicated socioeconomic, CIA, or EJ SMEs responsible for overseeing EJ/CIA at some level within the agency. Angie and Heather described their experiences in their agency, where they “oversee that whole (EJ/CIA) process” and “develop the guidance that [they] give to [their] staff and district staff.” Further formalizing EJ/CIA within the agency, agencies with dedicated EJ/CIA SMEs described more elements relating to structure in EJ/CIA review, guidance, and implementation. Juliet expressed the tension that not having a dedicated specialist (Type 3) creates within an agency, and the questions arising from that tension,

“‘Should we bring in an actual specialist just to focus on [EJ]?’...my title is NEPA Program Manager. I’m not an EJ specialist. So, I’m teaching myself while doing this, so that I’m constantly asking myself, ‘Are my efforts enough?’”

This tension highlights a desire among some practitioners for the creation of Type 3 roles within their agency to carry out EJ analysis more effectively, a step often described as limited internally by resource constraints, particularly when an agency contained only Type 1 practitioners.

Where Type 1 and Type 2 or 3 practitioners existed within an agency (typically at the district/region and headquarters/division levels, respectively), Type 1 practitioners described having an opportunity to readily interact with the other types of practitioners in their agency for guidance, document review, and support. Additionally, Type 2 and 3 practitioners discussed being resources for other departments on topics of socioeconomics or EJ and, in some cases, interacting directly with EJ peers in other state agencies and the state FHWA office in the review process. Both states with NEPA assignment had a combination of Type 1 and Type 3 practitioners.

Access to updated and disaggregated data is a challenge for both urban and rural practitioners

While discussing data analysis, many practitioners noted the importance of going beyond quantitative data to support analysis with qualitative data collection and impact assessment. Ground truthing data was described as an extension of initial quantitative data assessments and not a substitute. Additionally, data access and aggregation were noted as an issue for both urban and rural practitioners.

Decennial Census, American Community Survey (ACS), and EJSCREEN data are a nearly unanimous starting point for all practitioners when determining the level of EJ analysis or impact a project may initiate. However, practitioners often followed their list of quantitative data sources with descriptions of the need for on-the-ground assessment and verification of data. As Juliet noted, “when you’re looking at numbers, you take away the human part of it…the numbers [aren’t] going to tell you what’s important to that community.” Similarly, Swan noted that “impacts to people [aren’t] the same analysis as analyzing soil conditions.” Practitioners expressed how quantitative data can only get them so far, often serving merely as a baseline for further, deeper qualitative EJ impact analysis.

While qualitative analysis is important, several practitioners described a tension between gathering data in a computer environment and what could be described as a reticence of practitioners or project team members to engage with the community. One practitioner explained how “a lot of times what [she sees] is folks will look at Census data, but there’s no ground truthing…there may still be something else missing if you’re not out in the community, understanding the situation.” Patto summarized this point with a reflection on the impact a practitioner’s perspective, often that of a community outsider, can have absent further contextual and relational understanding,

“…there are a lot of things that I don’t think are immediately obvious until you really start talking to people within the community, and then you find out some stuff that otherwise, without being a member of that community, you would never know.”

Other practitioners emphasized the consequences of failing to engage in ground truthing early enough, noting that a project not fully leveraging community context, engagement, and input may be vulnerable to delays and increased costs if EJ impacts are not identified sooner in the process.

Finally, while quantitative data may be generally available, several practitioners in both urban and rural contexts noted data access limitations and difficulties with aggregation. These limitations often leave practitioners wanting greater levels of information and seeking additional methods for understanding impacts. Angie, an urban practitioner, noted her agency’s struggle:

“it’s not always easy to get the information we need as far as…who’s living where, who might we impact. The Census data is only so good, it’s only so up to date, it only gets down to a certain level, and it’s not all easily available.”

Grace B., a rural practitioner, also described struggles with Census data not being “super accurate, or not accurate” at a rural aggregation level and being let down by other data sources as well because “the information is just not available for the area that [they] are looking at.” Several practitioners noted their agencies leveraging relationships in the community, performing door-to-door surveys, capturing information in the PI process, or developing their own internal data sources as ways of combating these publicly available data limitations. As a result, practitioners and agencies operate under considerable uncertainty when carrying out EJ community identification and impact analysis. These uncertainties create opportunities for differences of opinion in the process, and for challenges to results and findings.

Differences of opinion and the subjective nature of disproportionate impacts can make reaching consensus difficult

Disproportionate impact determinations were described by many practitioners as either “murky,” a “smell test,” or “subjective” in nature, creating opportunities for differences of opinion. The need for increased guidance and definition of EJ was discussed as an approach to improve clarity; however, practitioners also noted increased guidance could restrict decision-making flexibility.

Determining what qualifies as an impact to EJ communities varied from state to state, consistent with other EJ practitioner experiences, and as Mr. O noted, “once we start getting into impacts and remediation, it gets a little bit fuzzy.” The geographic area and unit of analysis can influence the calculated severity of a particular impact, especially when impacts can be direct or indirect. As Stacy discussed, “it totally depends based on your project. We don’t have definitive [limits], it’s just what are your potential impacts and how will that relate to the area that you’re in? So, it’s context and the intensity of the impact.” Stacy was not alone, as other practitioners noted using a “context and intensity” approach in their agency assessments of impacts. One practitioner described it as “a kind of…smell test just to see if it registers in your gut as though it may be a disproportionate effect…to an EJ community.”

While some practitioners admitted they had yet to encounter a disproportionate impact assessment due to the nature of their agency context and personal background, those with experience assessing disproportionate impacts often provided uncertain responses about process and outcome. As Frank noted, “there’s no broad understanding of [disproportionate impact], that [he’s] aware of, that transfers from one person to the other.” One practitioner described determining disproportionate impacts as “a little black box.” Discussions of the subjective, individually determined and experienced nature of impacts yielded no clear consensus among practitioners, other than agreement that none exists. Practitioner rationale around this problem indicated it may be an inherent feature of the entire impact determination process: “it’s going to always be a qualitative exercise and you’ll always have room for disagreement.” Ultimately, several practitioners explicitly described working toward consensus through the PI process, in conjunction with the local FHWA office, as the current solution.

Further complicating practitioners’ ability to develop consensus, a lack of EJ guidance and definitions was often cited as a source of confusion and “a floating atmosphere.” Mike, a Type 3 (SME) practitioner, described the complex relationship between practitioners and guidance in this way:

“Every year I hope FHWA is going to define some of these things a little more clearly. I mean, it’s good, sometimes, to have room to maneuver, so you can try to do your best job of calling out what the true impacts are and bringing adequate mitigation into play, but it would be helpful [to have] some clear guidelines.”

Other practitioners echoed Mike’s sentiment and expressed frustration with sparse guidance on EJ impacts and interdependencies, which can hinder more accurate representations of impacts in project documentation and overall project decision-making.

Thresholds were a useful tool for some practitioners in determining when potential impacts might require further evaluation. One practitioner described how their agency recently changed its tract demographic threshold from 40 to 15%, where “if [the tract has] more than 15% low-income or People of Color, then [they] need to take a closer look at the impacts.” While not all practitioners used thresholds, those who did not indicated that clearer guidance from either FHWA or their own agency would make “decision-making easier.”
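The screening rule this practitioner described can be expressed as a simple filter. The sketch below is purely illustrative: the field names, data values, and helper function are hypothetical, and real agency screens draw on ACS or EJSCREEN tract tables rather than hand-entered shares.

```python
# Illustrative sketch of a demographic threshold screen, assuming
# hypothetical tract records with low-income and People of Color shares.
# The 15% cutoff mirrors the practitioner's description (updated from an
# earlier 40% threshold); "more than 15%" is read as strictly greater.
THRESHOLD = 0.15

def needs_closer_look(tract: dict) -> bool:
    """Flag a tract for closer review if either share exceeds the threshold."""
    return (tract["pct_low_income"] > THRESHOLD
            or tract["pct_poc"] > THRESHOLD)

# Hypothetical tract data for demonstration only.
tracts = [
    {"id": "001", "pct_low_income": 0.12, "pct_poc": 0.22},
    {"id": "002", "pct_low_income": 0.08, "pct_poc": 0.10},
]

flagged = [t["id"] for t in tracts if needs_closer_look(t)]
print(flagged)  # tract 001 exceeds the People of Color share
```

As the surrounding findings emphasize, a screen like this is only a trigger for deeper qualitative analysis, not a determination of impact in itself.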

While thresholds were described as helpful for determining whether further review or a higher NEPA documentation level was required, the individual opinions of practitioners and stakeholders could still lead to disagreement about impact assessment. One practitioner noted “how people use the tools and what they ultimately conclude, or may say in a conclusion, can be different from one… practitioner to the next.” Differences in practitioners’ individual judgment and assessment highlight a significant methodological and interpersonal hurdle practitioners face within their own agencies and in interacting with EJ communities. Further, these differences create a potential source of disparity in the treatment of impacts and mitigation across and between practitioners.

Practitioner collaboration in the form of internal and external working groups is becoming more common and can help create more cohesive approaches and reduce functional and individual isolation

In several interviews, practitioners described collaboration among EJ specialists as a common benefit. Internal (within the agency) and external (statewide) working groups focusing on EJ, CIA, civil rights (Title VI, etc.), or other topical intersections provided practitioners with opportunities to develop cohesion of vision and process with existing resources in their own contexts. Less commonly, several practitioners noted their isolation or compartmentalization on project teams and within their agencies.

Collaboration served as a benefit to practitioners on several levels. Practitioners described collaborative efforts as an informal back-and-forth or checks-and-balances system between practitioners and other agency departments or state agencies. Examples ranged from project-level decision making to statewide agreement on factors for EJ impact identification and analysis. Peter, and other practitioners, noted how “competing definitions and interests [between Title VI, EJ, equity, etc.]…has made things complicated.” Collaborative measures appear to be one step practitioners and agencies are taking to reduce complexity and increase efficiency in their processes.

Aside from these process benefits, Mr. O discussed how the good relationships his agency developed with other state agencies helped facilitate the EJ impact analysis process and created space where project teams are willing to hear each other out while working toward a decision. While practitioners primarily discussed benefits to their own work and processes, benefits were not described only in unilateral terms but included discussions of cross-agency impacts, such as benefits for FHWA and other collaborating agencies. One increasingly prevalent mechanism practitioners described for facilitating internal and external collaboration was topic-specific working groups. EJ, CIA, civil rights, equity, and similarly aligned topics served either individually or collectively as the organizing charter for working groups. Often, this varied based on the degree of specialization present within an agency; larger, more specialized agencies created working groups within departments and across the agency, while smaller agencies tended to discuss developing statewide working groups. Reported meeting frequencies included monthly, quarterly, and annually, generally increasing as the degree of practitioner specialization increased.

Angie, who works in an agency with Type 1 and 2 practitioners, described how their EJ working group started “based on the expectations and new guidance coming out of the federal level on EJ. [They]…pulled together a group from environmental…planning…MPOs…[DEI]…planning…transit…local programs…a whole host of people who do different things.” A few practitioners echoed Angie’s experience in starting their own working groups. Agencies with primarily Type 1 practitioners noted state agencies reaching out to them to better understand their process, and vice versa. Taking a more ad hoc approach, Stacy described being “curious about what other government agencies within [her] state are doing…[and is] looking to understand [adverse impacts] better.” Carol, who meets with her working group monthly, discussed how working groups in her agency created an environment for sharing “ideas, interesting projects, innovative practices, new guidance, [and asking] questions.” Practitioners at large agencies noted this was especially important for them, as it is difficult to coordinate actions across multiple groups who all share the same goal.

Further highlighting the importance of collaboration and working groups, practitioners in some states expressed frustration with the isolation or compartmentalization of their departments or agencies. On one level, practitioners hoped other departments involved in project delivery would catch “the EJ fever,” allowing for earlier or even clearer integration into the project delivery process. John, a Type 2 practitioner, described how it is “easy to get siloed within my discipline. I don’t always have a clear idea or understanding of where I (CIA) will fit for each specific project.” Others described how even similar topical groups in an agency do not interact with each other, keeping their processes separate in a way one practitioner frustratedly described as a “you just do your thing” mentality. Michelle highlighted how, without this type of working group collaboration inside her agency, she “[feels] so alone in [her state]” and relies on outside conferences and groups to find a shared experience that gives her “the will to continue moving forward” in her EJ SME role.

These examples illustrate the importance of collaboration in any form for supporting practitioners and improving project delivery through increased information sharing and collective goal orientation. While the benefits appear to accrue regardless of agency size and specialization, it also appears that not every agency is engaging in collaborative project delivery practices, let alone across topical areas like EJ. The following section discusses each of these themes in greater depth, compares them with extant literature, shares limitations, and offers opportunities for further research and development.

Discussion

Differences between EJ practitioners and practices from state to state, region to region, and within the same agency have serious and persistent implications for ensuring consistent, equitable EJ assessment across the US. As the key findings highlight, agencies face challenges due to (1) practitioner roles, (2) data accessibility, (3) differences of opinion in impact assessment, and (4) variations in internal and external collaboration (Fig. 3).

Across the study, it is striking how similar practitioners were within their respective specializations (Type 1, 2, or 3) and how much that specialization shaped practitioners’ EJ work within the agency. As Kågstrom and Richardson (2015) note, “commitment to the advancement of [EJ/CIA] and the ‘team’ affect capacity to influence” spaces for action (p. 115). The ‘capacity to influence’ appeared to be moderated by an agency’s perceived resource availability and ability to go above and beyond what was required, as was particularly evident when talking with Type 1 practitioners. Type 2 and 3 practitioners appeared to have the greatest opportunity for engaging in EJ action in their agencies, even if their capacity for influence was highly contextual. This contextual variation was consistent with existing ideas of agencies having differing EJ maturation (Amekudzi et al. 2012). However, simply having an EJ specialist did not appear to be a panacea if the agency context was not set up to support broader action. Agencies seeking to implement greater EJ practitioner specialization could benefit from reflecting on their capacity for implementing broader agency-wide EJ actions if they and the communities they serve are to sustain the greatest EJ benefits from agency workforce development and morale.

In terms of the tools and data used in impact analysis, practitioners echoed the three challenges identified in the literature by Duthie et al. (2007) about effective EJ analysis in long-range transportation planning: collecting data, generating consensus on the definition or application of EJ, and choosing an appropriate unit of analysis. These challenges are documented in previous practitioner and agency studies (Barajas et al. 2022; Sen 2008) and noted as having varying impacts based on agency resources. Several practitioners in different regions noted their agency’s strained resources as limiting their ability to perform more extensive EJ/CIA analysis. This raises the questions of whether dedicated resources are a requirement for effective EJ analysis and of what constitutes a meaningful level of analysis.

Practitioner calls for the need to ground truth quantitative data are in line with existing desires to fill a gap in data availability across multiple contexts. While quantitative EJ identification and impact analysis methods are pervasive in literature and practice, albeit not always accessible for practitioners (Karner 2016), qualitative approaches for gathering information and assessing impacts are also significant elements of the EJ process, and rightfully so. Whether gathering survey data, public or EJ stakeholder comments, or site visit information, practitioners are consistently engaging in qualitative methods. This highlights an important gap in the literature supporting qualitative methods and the application of mixed-methods frameworks in transportation planning related to EJ analysis. Emerging SIA research highlights the importance of integrating quantitative and qualitative methods to assess impacts more comprehensively. State DOTs have many elements already in place to leverage comprehensive mixed-methods approaches in their decision making (Lucas et al. 2022). Techniques like those demonstrated in this case study can be used to develop more formal themes that explore and expand understanding of EJ stakeholder concerns about the impact assessment process. Opportunities for increased community-based participatory approaches, borrowing from the broader EJ research community (Sadd et al. 2014), may help bridge agency-community divides and move toward more equitable impact assessments.

Practitioners’ repeated calls for clearer guidance in determining EJ impacts highlight a significant challenge in successfully translating an impact analysis into an adverse or disproportionate impact determination (Mottee et al. 2020a, b; Petti et al. 2018). The tension between greater clarity and decision flexibility, while frustrating to practitioners, is an important element of impact assessment and, as some practitioners noted, forces practitioners to engage in deep reflection as opposed to a simple check-box exercise. One-size-fits-all approaches cannot possibly meet the seemingly infinite contexts practitioners operate in across the US, and “having room to maneuver” in impact determination was an explicit and implicit subtext in several interviews. However, how EJ is defined and interpreted at scale is a crucial consideration for policymakers and state DOTs alike, as practitioner- and community-level differences are an inescapable element of EJ analysis.

Continually evolving federal- and state-level differences in policy emphasis on areas such as EJ, Title VI, and equity can be at odds with each other, potentially compounding differences in EJ review across the country. Additionally, current actions by US states and courts to overturn legal precedent in areas such as affirmative action and diversity programs, while other states double down on such programs, also have significant implications for EJ and NEPA review work. Practitioners’ real or perceived understandings of EJ, its importance, and its legal enforceability will inherently influence the level of EJ integration and application within the infrastructure delivery process (Strelau and Köckler 2016; Zhang et al. 2018). It is not the goal of this paper to determine what is or is not an appropriate definition of EJ. We highlight instead how state DOTs may benefit from understanding the ways in which their unique mix of federal, state, project, and practitioner contexts influences EJ analysis, particularly as agencies increasingly look to their peers for collaboration and inspiration. Context matters, especially for states with NEPA assignment and greater responsibility for decision-making.

Working group collaborations are among the many ways that agencies manage uncertainty in guidance. In the presence of increasingly divergent views and confusion around EJ practices, practitioners and agencies showed evidence of engaging in collaborative alignment efforts to build capacity. In a resource-scarce environment, working groups appear to offer a relatively low-burden approach not only to share information and advance cohesive EJ methodologies but also to reduce functional and social isolation for practitioners. With significant workforce turnover, and time and financial resources perennially limited, agencies at all levels of governance may have an opportunity for expertise development through working groups. The degree of meeting intensity did appear to be highly variable across contexts, but the benefits appeared to accrue nonetheless, even at minimal levels of interaction (i.e., annually) and regardless of whether the collaboration was internal to the agency or external across the state. This is an important note because agencies do not appear to need pre-existing EJ/CIA specialization to engage in this type of collaboration. Working groups instead provide an entry-level opportunity for agencies who may feel that specialized data or analysis tools are out of reach given their available resources. Additionally, working groups often supplement existing intra-agency governance structures while also reducing the risk of practitioner siloing and isolation, themes that impact practitioner functioning, as described by John and Michelle, in an increasingly contentious subject area. Reductions in isolation through working groups may also open the door to the creation of more formalized codes of conduct or ethical standards for EJ practice, as well as a heightened sense of responsibility and identity as EJ practitioners.
An increased sense of identity can provide less specialized or under-resourced practitioners with greater confidence to influence practice in their agencies, helping them define actions and appropriate responses to relevant challenges.

“There’s always room for improvement” was one of the most common phrases offered by practitioners across the study. This response is emblematic of the openness of EJ practitioners at state DOTs to change and their willingness to continually push for improvement in their processes. While many practitioners described being proud of their existing EJ processes, they often followed up with recommendations to further enhance and develop them. This does not imply that practitioners or their agencies engage in rapid processes of change and improvement, merely that there is active reflection and a desire for continuous process improvement. Evidence of this improvement, and the desire to continue improving EJ processes, is apparent across all four themes identified in this case study. This undercurrent of improvement echoes calls for increased attention to practitioner capacity building and training to further diffuse best practices into wider practice (Karner 2016). USDOT and federal programs like the Thriving Communities Program provide a template for developing capacity building across the EJ ecosystem (United States Department of Transportation 2023). Academic institutions are also recognized as playing a role in future capacity building within DOTs and transportation agencies. Academic institutions, often criticized for past extractive and marginalizing approaches to research, are beginning to shift research practices toward more participatory and community-centered approaches (e.g., community-based participatory action research), increasingly in line with EJ principles and research (Bacon et al. 2013; Sadd et al. 2014). If agencies are as inclined toward improvement as the practitioners in this study suggest, then agencies would be well served to continue evaluating how equitably engaging partners like communities and academic institutions may help catalyze symbiotic capacity building within DOTs and across the communities they serve.

This study is limited in a few ways. First, the findings represent only one to three people in each agency and do not fully represent all EJ practitioners within an agency. Additionally, not all agencies across the US participated in the study, and the experiences provided are a sample from fewer than half of all states. With a focus on EJ practitioners, no data triangulation or counterarguments were gathered from other intra- or inter-agency personnel interacting with the practitioners. Finally, finding the correct practitioners in each state responsible for EJ/CIA review proved to be a significant methodological challenge in carrying out the study due to variations in publicly available information, diversity of job functions, and role descriptions.

Future work could further analyze intra-agency practitioner understandings and develop categorizations of agencies based on varying EJ maturity, providing a framework for progressive agency development of EJ expertise and building on existing frameworks proposed by Amekudzi et al. (2012). More robust and integrated mixed-method EJ/CIA/SIA methodologies are needed to provide platforms for practitioners and agencies to integrate quantitative and qualitative data assessment more effectively. Additionally, governance interventions, like the work groups described by practitioners, could be explored in more depth to evaluate potentialities for procedural justice within agencies.

Conclusions

The practitioner case study presented in this paper helps bridge the gap in qualitative research on practitioner experiences carrying out EJ analysis in transportation project delivery. Practitioner experiences shared in this study offer significant insight into the day-to-day EJ analysis processes carried out in state DOTs across the US. Major themes emerging from practitioners’ interviews centered on practitioner roles, access to data, differences of opinion in impact determination, and developments in internal and external practitioner collaboration. A limitation of this study is that neither all states nor all practitioners within a given agency are represented. Practitioners were found to generally fall into three categories of specialization: Type 1, a general NEPA practitioner; Type 2, a NEPA practitioner with additional EJ/CIA SME responsibilities; and Type 3, a dedicated EJ/CIA SME who does not engage in the NEPA review process. Practitioners described using similar quantitative data sources to develop impact profiles and highlighted the limitations of quantitative sources in providing the full extent of information needed. The use of ground truthing to bring the qualitative perspective of EJ community impacts into analysis was also discussed. Differences of opinion about what is determined to be adverse or disproportionate remain a hurdle for practitioners to overcome in effective impact assessment. Finally, practitioners and agencies are engaging in internal and external working groups to collaborate on EJ, CIA, civil rights, and equity topics both within and between agencies, with the aim of creating greater cohesion. As noted, state DOTs operate within different contexts, resource capabilities, and methodological frameworks, and it remains important for policymakers to keep these differences top of mind when developing EJ policies, tools, and strategies.
The insights highlighted in this research allow state DOTs and practitioners to perform a self-examination of existing processes and evaluate areas for potential improvements, to ultimately advance more holistic EJ assessments in transportation project delivery.