Rigor in social life cycle assessment: improving the scientific grounding of SLCA

Emily Grubert



Purpose

Social life cycle assessment (SLCA) is developing rapidly and represents a valuable complement to other life cycle methods. As methodological development continues, a growing number of case studies have noted the need for more scientific rigor in areas like data collection, allocation methods, and incorporation of values and cultural context. This work aims to identify opportunities, especially in the social sciences, to improve rigor in SLCA.


Methods

A review of existing literature and tools is based on both hand coding of the SLCA literature as represented in Web of Science’s “All Collections” database and on computer-aided review of the SLCA and other related literatures (including social impact assessment (SIA), life cycle sustainability assessment (LCSA), and corporate social responsibility (CSR)) using a text mining technique known as topic modeling. Rapid diagnosis of potentially valuable contributions from literatures outside of SLCA through computer-aided review led to more detailed, manual investigation of those literatures for further insight.

Results and discussion

Data collection can benefit from increased standardization and integration with social science methods, especially frameworks for surveys and interviews. Sharing examples of questionnaires and ethics committee protocols will likely improve SLCA’s accessibility. SIA and CSR also represent empirical data sources for SLCA. Impact allocation techniques can benefit from reintegration with those in ELCA, in particular by allocating (when necessary) at facility—rather than product—level. The focus on values and subjectivity in SLCA is valuable not only for SLCA but also for other methods, most notably ELCA. Further grounding in social science is likely to improve rigor in SLCA.


Conclusions

SLCA is increasingly robust and contributing to interdisciplinary discussions of how best to consider social impacts. This work makes three major recommendations for continued growth: first, that SLCA standardize human subject research used for data gathering; second, that SLCA adopt allocation techniques from ELCA; and third, that SLCA continue to draw on social science and other literatures to rigorously include value systems.


Keywords: Life cycle assessment (LCA), Life cycle sustainability assessment (LCSA), Social impact assessment (SIA), Social life cycle assessment (SLCA), Social science, Values

1 Introduction

Social life cycle assessment (SLCA) is developing as a major methodological contribution to the goal of full triple bottom line life cycle assessment (LCA) focused on environment, society, and financial considerations. Like its counterparts in the environmental and financial spheres, environmental life cycle assessment (ELCA) and life cycle costing (LCC), respectively, one of the major values of SLCA is its commitment to providing sufficient information about social impacts such that impacts are not shifted across life cycle stages in service of improving conditions at one location or one stage in a product, process, system, or service’s progression from design and creation to use and disposal.

SLCA as a term was originally coined by O’Brien et al. as part of an integrated social and environmental life cycle assessment (SELCA) approach (1996). Since then, about 65 articles dealing with methodological development and presenting case studies have been published in the English language Web of Science-indexed literature, about half of which have been published since 2013. A frequent call for more case studies is also being answered: about half of the literature comprises case studies on topics ranging from informal recycling to ski manufacturing, 14 of which use the SLCA methodology described in the United Nations Environment Programme (UNEP) and Society of Environmental Toxicology and Chemistry (SETAC) methodological sheets introduced by Benoît-Norris et al. (2011). Thus, not only has SLCA’s representation in the literature recently grown rapidly, but there is also evidence of both interest in and willingness to use a standard methodology for SLCA studies.

Making SLCA rigorous is a challenge anticipated by the methodological development literature and confirmed by case studies. The power and interdisciplinary nature of a tool like SLCA demand that it be understood as rigorous by diverse stakeholders with significantly different epistemological groundings, such as social scientists, engineers, ELCA practitioners, and decision makers on company, community, and government levels, among others.

The challenge of rigor in SLCA appears in two major settings: first, in questions about how data are collected; and second, in whether the fundamental questions and approaches in SLCA are defensible. The first issue, rigor in data collection, concerns data specificity and quality, particularly given the length and intensity of data collection (e.g., Dreyer et al. 2010). Ensuring a common definition of rigor concerning data is particularly challenging, as epistemological positions and methodological choices might differ substantially across practitioners (see e.g., Iofrida et al. 2014 for an in-depth discussion). As Ekener-Petersen and Finnveden note, SLCA has sometimes been criticized as a method that will not bring new information to light (although their pre- and post-study focus groups indicated that even a generic analysis was able to highlight some unexpected potential hotspots) (2013). A major reason for this concern is the sensitive nature of much of the data that SLCA practitioners need, preferably at highly resolved levels. As Lehmann et al. note, applications like comparing technologies for use in a project require much more detailed information at company and facility level than is available (2013). This resolution requirement means not only that many practitioners need nonanonymized, sensitive data about social impacts but also that they must usually collect their own, given geographic and temporal resolution requirements. However, the expertise required to conduct rigorous social scientific inquiry into controversial topics is significant (Freudenburg 1986; Burdge et al. 1995). Furthermore, the consequences of poor data collection could be significant if SLCA achieves its goal of use in decision making.
As one example, international bodies face a moral hazard to release data for generic inventory tools that appear socially better than reality (and conversely, generic “poor” country ratings might reduce the incentives of specific actors within those countries to improve conditions); such effects could become noticeable were SLCA to be deployed at scale.

The second arena where rigor remains challenging is defending the topics of inquiry and the lenses of interpretation applied to results. Some examples of where this challenge appears include the choice of impact subcategories and indicators, the choice of activity unit for allocating impacts across products, and, even more fundamentally, the choice of the area (or areas) of protection (AoP) that SLCA should address. Additionally, the challenge of incorporating diverse value systems is large, particularly in a field as human-focused as SLCA. Authors like Baumann et al. have noted that indicators might be interpreted as ideological rather than science-based and that one’s view of harm is deeply embedded in one’s culture (2013). Arvidsson et al. (2014) also note that indicators are actively treated differently across case studies depending on the context: for example, working hours are seen as monotonically positive in some case studies and as likely negative in others, suggesting that more consistent treatment of issues with U-shaped benefit curves might be beneficial for SLCA. Additionally, there remain questions about what SLCA is designed to do and whom it is designed to help: for example, a company trying to design or improve its processes (Dreyer et al. 2010; Luthe et al. 2013; Musaazi et al. 2015), a consumer trying to choose a preferred product (Franze and Ciroth 2011), or a government trying to design policy (Macombe et al. 2013; Manik et al. 2013; Ekener-Petersen et al. 2014).

These challenges of rigor have been sufficiently highlighted in the growing SLCA literature to enable a reflective review that seeks to identify resources from outside the LCA community that might effectively advance the method. Other authors have reviewed the SLCA literature in order to advance topics like indicator choice and data resolution (Jørgensen et al. 2008), characterization models (Wu et al. 2014), and life cycle impact assessment (LCIA) approaches (Parent et al. 2010; Chhipi-Shrestha et al. 2015). This work uses both manual and digital review techniques to identify major needs in the SLCA literature that could potentially be aided by use of techniques from outside the current SLCA community. Given the increasingly robust SLCA literature, it seems timely to reflect on issues that have persisted across multiple methods papers and case studies and to consider how to broaden the valuable interdisciplinary network of SLCA with the experience of other communities.

Specifically, this work advances three major arguments: first, that data collection can benefit from increased standardization and integration with social science methods; second, that impact allocation techniques can benefit from reintegration with those in ELCA; and third, that the focus on values and subjectivity in SLCA is extremely valuable not only for SLCA but also for other methods, most notably ELCA. As Benoit et al. write, “Social life cycle assessment is a systematic process using best available science to collect best available data” (2010): This work aims to highlight some of the sources of that best available science in order to advance SLCA.

2 Methods

This work is based on two major sources of information. The first source is full-text reading and hand coding of 65 SLCA articles gathered from Web of Science. These articles were downloaded in June 2015 based on a topic search for “social life cycle assessment” in the “All Collections” database, which returned 71 articles. Of the 71, two were duplicates based on name spellings with nonstandard characters, three were conference papers with no full text access, and one was an industry publication behind a paywall. These six articles were excluded from analysis. Of the 65 unique and available articles, 53 (51 journal publications and two conference papers) were deemed closely related to SLCA and retained as the corpus (set of texts for analysis). Articles not considered closely related were those that, for example, noted the value of SLCA or commented on an interest in using SLCA but did not add new methodological or data gathering insight beyond noting that the method is still developing and can be challenging to implement alongside other work. Not all articles discovered through this search process use social life cycle assessment as a keyword: Topic searches are slightly broader than keyword searches. A list of texts included in this analysis can be found in Table S1 in the Electronic Supplementary Material.

Note that not all of the relevant literature is returned with a search for social life cycle assessment: in particular, articles using the earlier term “societal life cycle assessment” (e.g., Hunkeler 2006), referring to social impacts (e.g., Norris 2006), or not indexed in Web of Science (e.g., O’Brien et al. 1996) are excluded, as are many conference papers and nonacademic literature. While many of these articles that are frequently cited as part of the SLCA literature but excluded from the Web of Science list were discovered and read, they were not formally included in the coding. The rationales for this choice are twofold. First, social life cycle assessment as a method and keyword has standardized significantly since the design and publication of the United Nations Environment Programme/Society of Environmental Toxicology and Chemistry (UNEP/SETAC) methodological sheets (UNEP/SETAC 2013). Second, the majority of frequently cited but excluded papers have been well reviewed elsewhere and are generally slightly older on average than the reviewed literature in this study, and their notes on methodological gaps and advances are by now reasonably well incorporated into the literature. These exclusions notwithstanding, note that other recent reviews using a broader topic keyword search returned fewer total articles using Web of Knowledge (Chhipi-Shrestha et al. 2015), which implies that this Web of Science search is reasonably comprehensive.

Articles were hand-coded based on attributes listed in Table 1, which were based on author judgment after reading a sample of about one third of the corpus for insight.
Table 1

Coding attributes used in hand coding

Publishing year
Closely SLCA related?
Corresponding author country
Corresponding author field or department
Corresponding author field or department is identifiably social science
Article published in special issue?
Article focuses on method or application?
Method discussed or utilized
Main subject matter
Quick notes
Area of protection noted
Impact pathways noted
Corporate focus?
CSR mentioned
SETAC guidelines mentioned
Indicators are individual or society focused
Inventory unit/activity variable used
Number of subcategories used
Number of indicators used
End product is single score

The second source of information used in this work is computer-aided review based on topic modeling through the MAchine Learning for LanguagE Toolkit (MALLET) (McCallum 2002) via the Topic Modeling Tool UI. Topic modeling works by sorting all words in an input corpus, or collection of documents like journal articles or abstracts, into clusters of words called “topics” that tend to appear together. Topic modeling is a many-to-many mapping between documents and topics, intuitively described as a way of sorting all of the words in a given document into themes that might or might not be coherent. For more information on topic modeling, see e.g., Blei et al. 2003, Blei and Lafferty 2006, and Jockers 2011.
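As an intuition aid, the word-to-topic sorting that topic modeling performs can be sketched as a toy collapsed Gibbs sampler for latent Dirichlet allocation in pure Python. This is an illustrative stand-in, not MALLET’s implementation; the hyperparameters `alpha` and `beta` and the corpus are hypothetical.

```python
import random
from collections import defaultdict

def gibbs_lda(docs, k, iters=1000, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler: each word token is assigned to one of
    k topics, and assignments are resampled from the conditional topic
    distribution over many iterations (hyperparameters are hypothetical)."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    n_tw = defaultdict(int)          # (topic, word) -> count
    n_dt = defaultdict(int)          # (doc index, topic) -> count
    n_t = [0] * k                    # total tokens per topic
    z = []                           # topic assignment for each token
    for d, doc in enumerate(docs):   # random initialization
        z.append([])
        for w in doc:
            t = rng.randrange(k)
            z[d].append(t)
            n_tw[(t, w)] += 1
            n_dt[(d, t)] += 1
            n_t[t] += 1
    for _ in range(iters):           # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t_old = z[d][i]      # remove token from the counts
                n_tw[(t_old, w)] -= 1
                n_dt[(d, t_old)] -= 1
                n_t[t_old] -= 1
                weights = [          # P(topic | all other assignments)
                    (n_dt[(d, t)] + alpha)
                    * (n_tw[(t, w)] + beta) / (n_t[t] + beta * V)
                    for t in range(k)
                ]
                t_new = rng.choices(range(k), weights=weights)[0]
                z[d][i] = t_new      # add token back under the new topic
                n_tw[(t_new, w)] += 1
                n_dt[(d, t_new)] += 1
                n_t[t_new] += 1
    return z, n_dt
```

The `(topic, word)` counts can then be normalized into the word clusters reported as topics; the many-iteration default mirrors the role of repeated Gibbs sampling passes in improving topic stability.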

Using a computer-aided review technique like topic modeling in this work accomplishes two primary objectives. First, computer-aided review augments manual review by contributing a naive and algorithmically grounded perspective on what the SLCA literature contains. Hand coding is typically biased by author perspectives and contexts and can be subject to issues like reader fatigue and confirmation bias, particularly as more documents are read and coded. Computer-aided review provides a basis for external judgment that is replicable and more easily shared for validation across human readers. Second, computer-aided review enables cross-corpus analysis via relatively quick analysis of large corpora that are too resource intensive for manual review. Specifically, in this work, topic modeling was used to review the contents of the social impact assessment (SIA) literature (n = 434), corporate social responsibility (CSR) literature (n = 8759), and life cycle sustainability assessment (LCSA) literature (n = 53) to compare with the SLCA literature (both full text and abstracts + titles only) without requiring hand coding of thousands of documents. These literatures were reviewed as sets of article titles and abstracts based on Web of Science results given searches summarized in Table 2, then manually investigated further based on insights from topic modeling.
Table 2

Web of Science topic searches (English language; all collections)

Social life cycle assessment: “Social life cycle assessment” or “societal life cycle assessment” or “social LCA” or “SLCA”

Life cycle sustainability assessment: “Life cycle sustainability assessment”a

Social impact assessment: “Social impact assessment” or “SIA”

Corporate social responsibility: “Corporate social responsibility” or “CSR”b

a “LCSA” was excluded as it is an acronym used more commonly in the life sciences

b Filtered to research areas related to social sciences, environmental sciences, and business

For this work, topic thresholds were set at 0.05, which means that any set of clustered words (a topic) comprising less than 5 % of the total content-bearing words was excluded. This 0.05 threshold is common for topic modeling and helps ensure emergence of robust topics. Classification of words as “content-bearing” proceeds by removing a set of words that do not provide information considered relevant to the question at hand, known as a stopwords list: Here, stopwords include very common words like “a,” “it,” and “than,” defined by the Journal of Machine Learning Research list, plus context-specific additions based on common phrases specific to this study that tend to appear in most or all topics, like “social,” “life,” “cycle,” “assessment,” and others (Table S2, Electronic Supplementary Material). Assignment of words to topics takes place over 1000 iterations of Gibbs sampling, where the algorithm loops through each word during each iteration and updates the word’s assignment to a topic based on the prevalence of both the word and current formulation of topics. Topic coherence, or stability, is improved with more iterations, and 1000 iterations is a relatively high value.
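The preprocessing and thresholding steps described above can be sketched briefly; the stopword list and topic counts below are hypothetical stand-ins (the study’s actual stopwords are listed in Table S2).

```python
from collections import Counter

# Hypothetical stand-in for the common-word stoplist plus context-specific
# additions like "social", "life", "cycle", and "assessment".
STOPWORDS = {"a", "it", "than", "social", "life", "cycle", "assessment"}

def content_words(text, stopwords=STOPWORDS):
    """Keep only content-bearing words: lowercase, alphabetic, not stopwords."""
    return [w for w in text.lower().split() if w.isalpha() and w not in stopwords]

def robust_topics(topic_word_counts, threshold=0.05):
    """Drop any topic whose clustered words account for less than the
    threshold share (here 5 %) of all content-bearing words."""
    total = sum(sum(c.values()) for c in topic_word_counts.values())
    return {tag: c for tag, c in topic_word_counts.items()
            if sum(c.values()) / total >= threshold}

# A topic holding ~4 % of content-bearing words falls below the threshold.
topics = {
    "supply chain": Counter({"supplier": 60, "chain": 40}),
    "noise": Counter({"misc": 4}),
}
print(sorted(robust_topics(topics)))  # ['supply chain']
```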

3 Results

Hand coding of the 53 available, unique, and relevant full-text articles returned through a topic search for social life cycle assessment in Web of Science’s English-language “All Collections” revealed three major questions that are explored further in the discussion. First, how can SLCA improve data collection efforts? Second, how should impacts be allocated when direct links between impacts and outputs are unclear? And third, how can SLCA use scientifically valid methods to address inherently value-laden issues? These questions are also reflected in the continued focus on methodology implied by topic modeling of the SLCA full texts (Table 3).
Table 3

SLCA topic model (k = 7) based on full texts (n = 53)

[Topic-word table not reproduced; recovered topic tags: supply chain, waste management (two topics), culture and values, and country-level data]
Comparative topic modeling of abstracts from the SLCA, LCSA, SIA, and CSR literatures suggests that SIA is likely the most valuable source of data for SLCA practitioners based on its topical focus on particular social impacts and contexts; largely due to its breadth, the CSR literature is also likely useful as a source for case studies. LCSA draws from many of the same communities as SLCA and is itself an emerging literature. Given the close connections between SLCA and LCSA, no additional recommendations are made based on the LCSA literature. Topic words for adjacent literatures can be found in Tables S3–S5 (Electronic Supplementary Material).

4 Discussion

4.1 Advancing data collection for SLCA

Multiple authors reference a desire to base SLCA on rigorous science and note that a more rigorous grounding in the social sciences and explicit data quality standards could add significant value (see e.g., Arvidsson et al. 2014; Brent and Labuschagne 2006). One of the most immediate ways that SLCA practitioners can likely benefit from turning to the social sciences is in data collection. Many of the applications where authors are able to collect their own data are reliant on core social scientific methods like surveys (e.g., Arcese et al. 2013; Aparcana and Salhofer 2013; Bouzid and Padilla 2014; Foolmaun and Ramjeeawon 2013; Hayashi et al. 2010; Manik et al. 2013; Ramirez et al. 2014; Umair et al. 2015) and open-ended interviews (e.g., Hosseinijou et al. 2014; Umair et al. 2015; Vavra and Bednarikova 2013). Furthermore, many authors recognize the particular challenges of eliciting reliable data about the types of sensitive social impact topics frequently covered during data gathering. However, authors might or might not have ready solutions to issues like treating discrepancies between accounts by those in authority roles versus others, ensuring a representative sample of respondents, or avoiding situations where respondents are motivated to provide incorrect information (see e.g., Bouzid and Padilla 2014; Dreyer et al. 2010; Franze and Ciroth 2011; Foolmaun and Ramjeeawon 2013). In addition, frequent references to authors developing new questionnaires and analytical approaches to scoring responses, aggregating data, and other tasks (see e.g., Chhipi-Shrestha et al. 2015 for a review) suggest that there is significant value to be gained both from standardization and from looking to examples that exist in other literatures. 
Many social science applications deal with very similar problems as those articulated for SLCA, including reconciliation of multiple value systems, gathering rigorous and replicable qualitative data, using qualitative and semi-qualitative data to support quantitative conclusions, and others (e.g., Lindblom and Cohen 1979).

A particularly actionable recommendation for SLCA, drawing on the social science literature, is that questionnaires used for data collection (both through surveys and interviews) be standardized and shared. Some SLCA researchers already have published survey instruments and metadata about their sample groups (see e.g., Hosseinijou et al. 2014; Ramirez et al. 2014), and increased transparency and external review of data collection schemes will likely improve the trustworthiness of results. This is true for instrument-based data collection for both generic and site-specific assessments, though the benefits of standardization and sharing of questionnaires focused on gathering generic data (e.g., across an entire industry) might manifest primarily as an ability to repeat data collection over a period of time, while the benefits for site-specific research might manifest primarily as an ability to compare outcomes in different contexts.

Research in survey methodology suggests that consistent survey design is important for validity across studies and replicability, particularly given well-known sources of bias from certain design elements and ongoing questions about effects of specific word choices (e.g., Choi and Pak 2005; Cohen et al. 1996; McGorry 2000; Schuldt et al. 2011; Villar and Krosnick 2011). The existence of such biases implies that internally consistent SLCA inventories could be built more quickly if practitioners used standardized questionnaires. The design of such questionnaires would benefit from demonstrated principles of question design, question ordering, and administration techniques, among others, that increase data reliability (e.g., Billiet and Loosveldt 1988; Groves et al. 2011; Johnson et al. 2002; Krosnick 1999; Presser et al. 2004; Schaeffer and Presser 2003; Tourangeau and Smith 1996). Such standardization and care in question design are especially critical for the types of controversial questions often core to an SLCA analysis. Similarly, results can be biased by population characteristics (e.g., Schuldt et al. 2015), so researchers should aim to demonstrate understanding of potential sampling error. When results from questionnaire-based research are reported, authors can improve confidence in data quality by including basic information on how subjects and/or informants are chosen and recruited, statistics on the surveyed or interviewed groups and how they differ from the population at large, and other information that aids in quality assessment of human subjects research (see e.g., Assael and Keon 1982). 
Standardization and acknowledgment of the substantial scientific grounding of such research would also likely help in the uptake of SLCA, as researchers who are less familiar with details of participant recruitment, questionnaire design, qualitative and semi-qualitative data analysis, and Institutional Review Board (IRB) or other ethical body processes for human subjects research approvals would be able to draw upon implemented examples of well-documented methods for their own work. Easing entry to SLCA data collection would likely lead to more comfort with the method for new practitioners.

In addition to standardizing methods for data collection based on tested and scientific principles, SLCA practitioners looking to the literature for context on the types of social impacts they might expect to find might benefit from the relatively deep literature and published case studies from SIA and CSR. Topic models of the SIA and CSR literatures indicate relatively coherent topics (Tables S4 and S5, Electronic Supplementary Material). SIA appears to address both social impacts in particular contexts, like mining projects, and specific types of social impacts, like cultural and economic changes, while the CSR literature perhaps predictably reflects a company-level focus and emphasis on stakeholder engagement. Both the SIA and CSR literatures contain substantial amounts of empirical data that could be very useful as facility- and industry-level data sources for SLCA. SIA practitioners gather empirical impact data to support projections of potential impacts (Burdge et al. 1995), and CSR expectations drive increasingly widespread and consistent disclosure of social impact by diverse businesses, perhaps especially in industries like mining that generate significant public concern about social and environmental performance (Higgins et al. 2015; Jenkins and Yakovleva 2006; Peck and Sinding 2003). Some examples of potential data sources include commentary on data availability and reporting (e.g., Higgins et al. 2015) and case studies focused on impacts at various life cycle phases, including extraction, production, use, waste disposal, and low-probability events (e.g., Karnani 2007; Lockie et al. 2009; Slovic et al. 1991; Webler and Lord 2010; Wolsink 1988; Yu 2008). While SIA and CSR, like SLCA, continue to struggle with issues like data quality, consistency, and robustness given resource constraints, they provide substantial additional experience and case studies that can contribute to advancement of SLCA inventories.
Ongoing work to improve links between SIA and CSR (e.g., Bice 2015) could be extended to include and benefit SLCA as well.

4.2 Impact allocation procedures

Not all impacts, social or environmental, are easily attributed to a specific process in an explicit way, particularly when a given impact is associated with a complex system that results in multiple (or no) products. While the challenge of attributing a given outcome to a specific product system is perhaps more common for SLCA, for example because impacts might result more clearly from a company’s overall decisions than from the production line for a given product, both SLCA and ELCA have had to consider methods for allocating impacts when impacts are not injectively matched with a functional unit. Currently, SLCA and ELCA use considerably different approaches to allocation that limit the interoperability of simultaneous SLCA and ELCA studies of the same analyte, for example in an LCSA framework. This work suggests that SLCA adopt the ELCA approaches to allocation in order to improve consistency as well as to avoid overweighting some impacts relative to others.

Currently, the allocation metric for SLCA relies on activity variables or inventory units that reflect some quantifiable characteristic of the end product, such as working hours devoted to production or final mass contribution by stage. Such activity variables, particularly working hours, have gained some traction: for example, the Social Hotspots Database uses working hours (Benoit-Norris et al. 2014). However, some authors have expressed concerns about the impact of activity variable choice on final results (Ekener-Petersen and Moberg 2013; Hauschild et al. 2008). This review finds that activity variables that assign impact proportional to a facility’s contribution to the end product rather than proportional to the amount of facility output dedicated to the end product can produce distortionary effects inconsistent with ELCA processes.

To illustrate the problem of allocation based on contribution to the end product rather than proportional output from an upstream process, consider an example of allocating social impacts from mining to end products based on the proportion of total worker hours devoted to a product that are associated with the mining process. Assume that the output of a gold mine is split evenly between jewelry manufacturing and computer manufacturing but that labor hours associated with mining account for only 10 % of total labor hours for each end product. In this case, the social impacts of mining are capped at 10 % of the impact associated with jewelry and 10 % of the impact associated with computer manufacturing, regardless of the relative magnitude of mining’s social impacts relative to other input impacts (Fig. 1).
Fig. 1

Assessing or allocating impact based on contributions as a proportion of a facility’s outputs preserves the magnitude of impacts better than as a proportion of an end product’s inputs. To illustrate this, consider the case where each of two products consumes half of a mine’s output (a), but the mine output comprises only 10 % of each product’s inputs (b). In a, 50 % of the mine’s impacts are allocated to each product, for a total of 100 % of the total impact accounted for. In b, 10 % of 100 % (10 %) of the mine’s impacts are allocated to each product, for a total of 20 % of the total impact accounted for
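The accounting difference illustrated in Fig. 1 can be checked with a short sketch; the impact value and shares are the hypothetical numbers from the gold mine example.

```python
def facility_allocation(facility_impact, output_share_by_product):
    """Allocate a facility's impact in proportion to how much of the
    facility's output each product consumes (shares sum to 1)."""
    return {p: facility_impact * s for p, s in output_share_by_product.items()}

def end_product_allocation(facility_impact, input_share_by_product):
    """Allocate a facility's impact to each end product in proportion to the
    facility's share of that product's inputs (e.g., worker hours)."""
    return {p: facility_impact * s for p, s in input_share_by_product.items()}

# The mine's output is split evenly between jewelry and computers, but
# mining accounts for only 10 % of each end product's inputs.
facility = facility_allocation(100.0, {"jewelry": 0.5, "computers": 0.5})
product = end_product_allocation(100.0, {"jewelry": 0.1, "computers": 0.1})

print(round(sum(facility.values()), 2))  # 100.0: all mine impact captured
print(round(sum(product.values()), 2))   # 20.0: 80 % of mine impact lost
```

As in Fig. 1, only the facility-level scheme guarantees that allocated shares of the mine’s impact sum to 100 %.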

Such a hypothetical represents a fairly common situation where extractive stages of a product’s life cycle (e.g., mining, timbering, agriculture) pose great potential for major social impact, such as community relocation, localized health effects, or regional economic disruption. However, particularly in an increasingly mechanized world, the total labor hours in these extraction phases associated with the end product are likely to be measured in seconds or less, often in locations and industries with lower wages, benefits, and worker protections, while the crafting of the final product might be measured in hours. Thus, the acknowledged social impacts of these upstream processes are distorted (and likely to be underweighted) by using worker hours as a metric. Other metrics based on end products lead to similar concerns. Weighting based on value added is similar to weighting based on worker hours. Weighting based on contribution to end product mass is also distortionary: Consider a piece of gold jewelry made by sweatshop workers from gold mined in an exemplary fashion. Even when activity units are used to identify parts of the life cycle that bear greater investigation rather than as an explicit weighting factor (Benoît et al. 2009), they are unlikely to capture potential impact magnitude better than generic empirical results from, e.g., social impact assessments of facilities of a given type. Further, the choice of activity variable fundamentally privileges one stakeholder category (like workers) over others (like communities affected by highly mechanized mining activity) by weighting on their behalf.

Instead of assigning impact weights to suppliers based on end product attributes in order to aggregate social impacts (Hauschild et al. 2008), this review recommends that SLCA return to allocation solutions used in ELCA when impacts are not easily linked to a particular process, such as complicated factory flows, accidental releases, and others. In ELCA, when necessary, impacts are allocated by either (1) expanding the system boundary or (2) allocating impact at the facility level based on some measure of how significant the product is at the facility (ISO 14040 2006). Facility-level allocation still requires some activity variable like worker hours devoted to the process (and is thus not favored relative to system expansion). However, by allocating at the facility level rather than the end product level, social impact magnitudes are preserved and a global assessment will capture the entirety of the social impacts experienced, as expected. As shown in Fig. 1, end product-level allocation results in distortion of total impact since the sum of contributions to all end products on some social indicator basis, like work hours, is not guaranteed to be 100 %. Facility-level allocation will result in proper accounting since total impacts will always sum to 100 %. Facility-level rather than end product-level allocation is both more consistent with ELCA and more able to preserve impact magnitudes. During prescreening exercises where the goal is simply to determine where impacts might be, reliance on data and databases comprised of empirical information by facility type is likely to be more representative than assuming impact scales with measures of contribution to end products.

4.3 Values in LCA

The treatment of activity variables as a concept, and particularly the acknowledgment of activity variables as a weighting technique (Benoît et al. 2009), reflects a strength of the growing SLCA literature: its acknowledgment of the subjectivity and value systems inherent to any method that seeks to assess a reasonable number of noncomparable characteristics. SLCA authors acknowledge subjectivity, the importance of highly varied cultural contexts, and the importance of being transparent about value choices (e.g., Baumann et al. 2013; Franze and Ciroth 2011; Feschet et al. 2013; Ekener-Petersen and Moberg 2013; Hosseinijou et al. 2014; Neugebauer et al. 2014; Pizzirani et al. 2014). Reitinger et al. (2011) propose a normative foundation for SLCA in an effort to make presumptions of value more explicit. Some authors express discomfort that such subjectivity might make SLCA less scientific than ELCA, but ELCA faces many of the same challenges in subjective and value-laden methodological choices, like which indicators to use and how to interpret results across multiple criteria (Grubert 2016, under review). ELCA guidelines explicitly forbid weighting across these criteria when comparative analyses are intended for public disclosure (ISO 14044 2006), despite the problem acknowledged in the SLCA literature that not weighting is itself a form of weighting, one that implies all elements are equally important (Ekener-Petersen and Moberg 2013; Martínez-Blanco et al. 2014). Given that scientific data (like LCA outputs) are often afforded deference by those who view themselves as less expert (Anderson et al. 2012), this default weighting is likely to affect decision makers’ views of the data they receive.
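The point that equal weights are still weights can be made concrete with a toy aggregation. The options, indicators, scores, and weights below are all invented for illustration; they are not drawn from any study or guideline.

```python
# Invented normalized impact scores (lower = better) on two indicators.
scores = {
    "option_A": {"worker_safety": 0.2, "community_disruption": 0.9},
    "option_B": {"worker_safety": 0.6, "community_disruption": 0.4},
}

def aggregate(weights):
    """Weighted sum of indicator scores for each option."""
    return {option: sum(weights[k] * v for k, v in indicators.items())
            for option, indicators in scores.items()}

# "Not weighting" amounts to implicit equal weights: option_B ranks better.
equal = aggregate({"worker_safety": 0.5, "community_disruption": 0.5})

# Stakeholder-elicited weights prioritizing safety reverse the ranking.
elicited = aggregate({"worker_safety": 0.8, "community_disruption": 0.2})
```

Because the ranking flips between the two weight sets, declining to weight does not produce a value-neutral result; it silently selects the equal-weights outcome.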

Particularly since a stated goal of the LCA methods is to affect decision outcomes, continuing to explore how values and perceptions can shape results, and how to incorporate them as transparently and scientifically as possible, is critical to the methods’ value and is already being explored in SLCA (e.g., Ekener-Petersen et al. 2014; Foolmaun and Ramjeeawon 2013; Lehmann et al. 2011; Manik et al. 2013). Indeed, as written in the SIA Guidelines, “the decision to accept one set of perceptions while excluding another[,] may not be scientifically defensible” (Burdge et al. 1995). How to identify, measure, and use information about individual and community preferences are matters of long-term and active study in many social science fields: while such issues are not considered fully resolved, there is a strong opportunity for the SLCA community to learn from and help advance existing research.

A major component of making value-based assessments scientifically defensible is ensuring appropriate representation of perspectives and perceptions in assessing who stakeholders are, which impacts and indicators to use in determining existence or potential for social impact, and how results should be interpreted. Major methodological steps like the release of the UNEP/SETAC Guidelines (Benoît et al. 2009) have dramatically improved both standardization in SLCA and pathways to future agreement on methodological changes in SLCA by establishing a recommended set of stakeholder and impact categories that are intended to be adapted to particular contexts. Given that framework, explicit methods of appropriately capturing necessary perspectives can be drawn from other contexts. The diversity of existing methods for identifying and approaching stakeholders is particularly relevant given SLCA’s use of both generic and site-specific assessments of social impact, especially given that generic assessment is often more concerned with quantifying and cataloging impact, while site-specific assessment can be more concerned with prioritizing and alleviating impact. Thus, approaches to interacting with stakeholders, as well as the type of information that is sought, differ between the two. For example, research on generic social impacts might wish to engage large-group representatives from companies, industry groups, and the public to gather and validate data, while research on site-specific social impacts might focus more closely on the felt impacts and needs of subpopulations in an area. Methods currently employed in CSR research might be more appropriate for generic assessments, with their focus on company-wide assessment; on the other hand, methods employed in SIA research might be more appropriate for site-specific assessments, with their focus on identifying the most significant social impacts faced by affected communities. While both assessment types can benefit from high-quality stakeholder identification, engagement, and data gathering, the scale and goal of such pursuits differ somewhat.

Many frameworks for stakeholder engagement exist, at a variety of levels. One important framework described in a recent contribution to the SLCA methodological literature is the use of participatory approaches (Mathe 2014). This contribution is grounded in a model of stakeholder theory holding that stakeholders are those who can affect or are affected by a given activity (Freeman 1984), rather than only those with some form of special expertise or power, which is quite similar to models used for social impact assessment (e.g., Burdge et al. 1995; Burdge and Vanclay 1996). Korhonen (2003) writes (alongside an early, if unnamed, call for SLCA in CSR) that CSR increasingly demands such broad views of who stakeholders are. Using these models to identify stakeholders and elicit information about preferences, priorities, and existing knowledge is one way to operationalize parts of the SLCA Guidelines. The broader notion of who stakeholders are also offers an entry point into the substantial research that has been done in areas like social justice theory (Mertens 2007; Sen 1999), ways of knowing (Ottinger 2013), and methods for gathering information and preferences from stakeholders in quantitative and qualitative ways. For example, asking stakeholders to participate in processes like choice experiments and contingent valuation, risk registers, and engaged anthropology enables researchers to collect data that remain subjective but reflect a broader, methodically identified community of stakeholders rather than the choices of researchers who might or might not be intimately familiar with the cultural and other contexts of the study settings (Adamowicz et al. 1998; Hanley et al. 1998; Kirsch 2010; Patterson and Neailey 2002; Williams 1994).

Just as SLCA can benefit from research on how to include a broader base of stakeholders, SLCA can draw on related literatures to broaden notions of impact and identify methods for assessing such impact. Currently, much of the SLCA literature reflects a focus on society as a collection of individuals, with impact categories used in the case studies more often focusing on an impact that falls on an individual rather than an impact to a community overall (methodological contributions examining community-level impacts include Feschet et al. 2013 and Bocoum et al. 2015, which both address relationships between economic activity and population health). Indicators in approaches like SIA, by contrast, tend to focus more strongly on impacts to communities rather than individuals (Burdge et al. 1995; Brent and Labuschagne 2006). Drawing on the experiences of SIA could be useful to those who wish to incorporate social justice and ethical acceptability issues, particularly regarding the social value of a product, process, system, or service in the use phase, into their work (see e.g., Baumann et al. 2013; Ekener-Petersen et al. 2014; Ekener-Petersen and Moberg 2013; Lehmann et al. 2013; Parent et al. 2013 for SLCA experiences and interest; see e.g., Walker 2010 for SIA-based discussion). Ongoing work to improve the scientific grounding and usefulness of impact indicators in fields like SIA (e.g., Cloquell-Ballester et al. 2006; Vanclay 2002; Vanclay 2006) can potentially be leveraged in SLCA as well. As with environmental impact assessment (EIA) and ELCA (Tukker 2000), SIA and SLCA are compatible and can likely strengthen one another.

5 Conclusions and recommendations

Social life cycle assessment has advanced significantly in the last several years, adding to the rapidly growing life cycle literature as a whole. Many important methodological contributions, including the UNEP/SETAC methodological sheets, have helped increase standardization and have spurred the completion of nearly 30 case studies in the peer-reviewed English language literature indexed on Web of Science. Reviewing these case studies alongside methodological papers shows that data collection, impact allocation, and value-based science represent challenges that are likely to spur further activity in SLCA.

Fortunately, many of the issues now facing SLCA also exist for other methods, such as ELCA and SIA. This work makes three recommendations for the field, aligned with the questions noted above:
  1.

    Involve and incorporate social scientists and social science literature, especially in data collection. Retain pathways to data inclusion that account for the epistemological diversity of the social sciences. Designing and releasing standardized data collection questionnaires (both surveys and interviews), releasing examples of IRB or other ethics committee protocol applications for human subjects research, and using empirical data from the social science literature addressing how social impacts occur and are felt can strengthen the scientific grounding of SLCA data and help broaden access to rigorous SLCA studies that need data more specific than that found in generic databases.

  2.

    Adopt ELCA techniques for impact allocation when links between impact and product are unclear, specifically the principles of system expansion or facility-level allocation. Such techniques both improve the cross validity of studies that perform both ELCA and SLCA and more accurately account for true social impact than weighting based on inputs to the final product.

  3.

    Continue work on incorporating values and diverse cultural contexts to SLCA and other methods, including ELCA, in particular by exploring the application of multiparadigmatic scientifically grounded techniques like participatory research and preference elicitation. Consider more formal introduction of scenario analysis, sensitivity testing, and other similar techniques for incorporating different worldviews.


As part of a broader research agenda supporting high-resolution life cycle assessment of environmental and social impacts at community scales, this work investigating the SLCA literature has sought to identify areas in the field where interdisciplinary applications might be most valuable. Future work will focus on community-anchored, rapid, and scientifically defensible techniques for assessing social and environmental priorities, with the intent of being able to generate both situation-specific prioritization schemas for LCA-based analysis and archetypical prioritization patterns for use in sensitivity analysis of value frameworks at the interpretation stage.



Acknowledgements

The contributions of several anonymous reviewers are gratefully acknowledged. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-114747. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.

Supplementary material

11367_2016_1117_MOESM1_ESM.docx (41 kb)
ESM 1 (DOCX 40 kb)

References

  1. Adamowicz W, Boxall P, Williams M, Louviere J (1998) Stated preference approaches for measuring passive use values: choice experiments and contingent valuation. Am J Agric Econ 80:64–75
  2. Anderson AA, Scheufele DA, Brossard D, Corley EA (2012) The role of media and deference to scientific authority in cultivating trust in sources of information about emerging technologies. Int J Publ Opin Res 24:225–237
  3. Aparcana S, Salhofer S (2013) Application of a methodology for the social life cycle assessment of recycling systems in low income countries: three Peruvian case studies. Int J Life Cycle Assess 18:1116–1118
  4. Arcese G, Lucchetti MC, Merli R (2013) Social life cycle assessment as a management tool: methodology for application in tourism. Sustainability 5:3275–3287
  5. Arvidsson R, Baumann H, Hildenbrand J (2014) On the scientific justification of the use of working hours, child labour and property rights in social life cycle assessment: three topical reviews. Int J Life Cycle Assess 20:161–173
  6. Assael H, Keon J (1982) Nonsampling vs. sampling errors in survey research. J Mark 46:114–123
  7. Baumann H, Arvidsson R, Tong H, Wang Y (2013) Does the production of an airbag injure more people than the airbag saves in traffic?: opting for an empirically based approach to social life cycle assessment. J Ind Ecol 17:517–527
  8. Benoît C, Mazijn B, Andrews ES (2009) Guidelines for social life cycle assessment of products: social and socio-economic LCA guidelines complementing environmental LCA and Life Cycle Costing, contributing to the full assessment of goods and services within the context of sustainable development. United Nations Environment Programme
  9. Benoît C, Norris GA, Valdivia S, Ciroth A, Moberg A, Bos U, Prakash S, Ugaya C, Beck T (2010) The guidelines for social life cycle assessment of products: just in time! Int J Life Cycle Assess 15:156–163
  10. Benoît-Norris C, Vickery-Niederman G, Valdivia S, Franze J, Traverso M, Ciroth A, Mazijn B (2011) Introducing the UNEP/SETAC methodological sheets for subcategories of social LCA. Int J Life Cycle Assess 16:682–690
  11. Benoit-Norris C, Norris GA, Aulisio D (2014) Efficient assessment of social hotspots in the supply chains of 100 product categories using the Social Hotspots Database. Sustainability 6:6973–6984
  12. Bice S (2015) Bridging corporate social responsibility and social impact assessment. Impact Assess Proj Apprais 33:160–166
  13. Billiet J, Loosveldt G (1988) Improvement of the quality of responses to factual survey questions by interviewer training. Publ Opin Q 52:190–211
  14. Blei D, Lafferty J (2006) Correlated topic models. Adv Neural Inf Proces Syst 18:147
  15. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022
  16. Bocoum I, Macombe C, Revéret JP (2015) Anticipating impacts on health based on changes in income inequality caused by life cycles. Int J Life Cycle Assess 20:405–417
  17. Bouzid A, Padilla M (2014) Analysis of social performance of the industrial tomatoes food chain in Algeria. New Medit: Mediterr J Econ, Agric Environ 13:60–65
  18. Brent A, Labuschagne C (2006) Social indicators for sustainable project and technology life cycle management in the process industry. Int J Life Cycle Assess 11:3–15
  19. Burdge RJ, Vanclay F (1996) Social impact assessment: a contribution to the state of the art series. Impact Assess 14:59–86
  20. Burdge R, Fricke P, Finsterbusch K, Freudenburg W, Gramling R, Holden A, Llewellyn L, Petterson J, Thompson J, Williams G (1995) Guidelines and principles for social impact assessment. Environ Impact Assess Rev 15:11–43
  21. Chhipi-Shrestha GK, Hewage K, Sadiq R (2015) “Socializing” sustainability: a critical review on current development status of social life cycle impact assessment method. Clean Technol Environ Policy 17:579–596
  22. Choi BC, Pak AW (2005) A catalog of biases in questionnaires. Prev Chronic Dis 2:A13
  23. Cloquell-Ballester VA, Cloquell-Ballester VA, Monterde-Díaz R, Santamarina-Siurana MC (2006) Indicators validation for the improvement of environmental and social impact quantitative assessment. Environ Impact Assess Rev 26:79–105
  24. Cohen G, Forbes J, Garraway M (1996) Can different patient satisfaction survey methods yield consistent results? Comparison of three surveys. BMJ 313:841–844
  25. Dreyer LC, Hauschild MZ, Schierbeck J (2010) Characterisation of social impacts in LCA. Part 2: implementation in six company case studies. Int J Life Cycle Assess 15:385–402
  26. Ekener-Petersen E, Finnveden G (2013) Potential hotspots identified by social LCA—part 1: a case study of a laptop computer. Int J Life Cycle Assess 18:127–143
  27. Ekener-Petersen E, Moberg Å (2013) Potential hotspots identified by social LCA—part 2: reflections on a study of a complex product. Int J Life Cycle Assess 18:144–154
  28. Ekener-Petersen E, Höglund J, Finnveden G (2014) Screening potential social impacts of fossil fuels and biofuels for vehicles. Energy Policy 73:416–426
  29. Feschet P, Macombe C, Garrabé M, Loeillet D, Saez AR, Benhmad F (2013) Social impact assessment in LCA using the Preston pathway: the case of banana industry in Cameroon. Int J Life Cycle Assess 18:490–503
  30. Foolmaun RK, Ramjeeawon T (2013) Comparative life cycle assessment and social life cycle assessment of used polyethylene terephthalate (PET) bottles in Mauritius. Int J Life Cycle Assess 18:155–171
  31. Franze J, Ciroth A (2011) A comparison of cut roses from Ecuador and the Netherlands. Int J Life Cycle Assess 16:366–379
  32. Freeman RE (1984) Strategic management: a stakeholder approach. Cambridge University Press
  33. Freudenburg WR (1986) Social impact assessment. Annu Rev Sociol 12:451–478
  34. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, Tourangeau R (2011) Survey methodology. John Wiley & Sons
  35. Hanley N, Wright RE, Adamowicz V (1998) Using choice experiments to value the environment. Environ Resource Econ 11:413–428
  36. Hauschild MZ, Dreyer LC, Jørgensen A (2008) Assessing social impacts in a life cycle perspective—lessons learned. CIRP Ann Manuf Technol 57:21–24
  37. Hayashi K, Sato M, Darnhofer I, Grötzer M (2010) Farmers’ responses to social impact indicators for agricultural and community practices: a case study of organic rice production in Japan. 9th European IFSA Symposium, Vienna, pp 4–7
  38. Higgins C, Milne M, Gramberg B (2015) The uptake of sustainability reporting in Australia. J Bus Ethics 129:445–468
  39. Hosseinijou SA, Mansour S, Shirazi MA (2014) Social life cycle assessment for material selection: a case study of building materials. Int J Life Cycle Assess 19:620–645
  40. Hunkeler D (2006) Societal LCA methodology and case study. Int J Life Cycle Assess 11:371–382
  41. Iofrida N, De Luca AI, Strano A, Gulisano G (2014) Social life cycle assessment in a constructivist realism perspective: a methodological proposal. In: Macombe C and Loeillet D (eds) Social LCA in progress. Pre-proceedings of the 4th International Seminar in Social LCA. Montpellier, France, November 19–21 2014, ISSN 1256–5458
  42. ISO 14040:2006 (2006) Environmental management -- Life cycle assessment -- Principles and framework. Accessed 28 June 2015
  43. ISO 14044:2006 (2006) Environmental management -- Life cycle assessment -- Requirements and guidelines. Accessed 28 June 2015
  44. Jenkins H, Yakovleva N (2006) Corporate social responsibility in the mining industry: exploring trends in social and environmental disclosure. J Clean Prod 14:271–284
  45. Jockers M (2011) The LDA buffet is now open; or, latent Dirichlet allocation for English majors. Stanford University. Accessed 26 May 2015
  46. Johnson TP, O’Rourke D, Burris J, Owens L (2002) Culture and survey nonresponse. In: Survey nonresponse, pp 55–69
  47. Jørgensen A, Le Bocq A, Nazarkina L, Hauschild M (2008) Methodologies for social life cycle assessment. Int J Life Cycle Assess 13:96–103
  48. Karnani A (2007) Doing well by doing good—case study: “Fair & Lovely” whitening cream. Strateg Manag J 28:1351–1357
  49. Kirsch S (2010) Experiments in engaged anthropology. Collab Anthropol 3:69–80
  50. Korhonen J (2003) Should we measure corporate social responsibility? Corp Soc Responsib Environ Manag 10:25–39
  51. Krosnick JA (1999) Survey research. Annu Rev Psychol 50:537–567
  52. Lehmann A, Russi D, Bala A, Finkbeiner M, Fullana-i-Palmer P (2011) Integration of social aspects in decision support, based on life cycle thinking. Sustainability 3:562–577
  53. Lehmann A, Zschieschang E, Traverso M, Finkbeiner M, Schebek L (2013) Social aspects for sustainability assessment of technologies—challenges for social life cycle assessment (SLCA). Int J Life Cycle Assess 18:1581–1592
  54. Lindblom CE, Cohen DK (1979) Usable knowledge: social science and social problem solving. Yale University Press
  55. Lockie S, Franettovich M, Petkova-Timmer V, Rolfe J, Ivanova G (2009) Coal mining and the resource community cycle: a longitudinal assessment of the social impacts of the Coppabella coal mine. Environ Impact Assess Rev 29:330–339
  56. Luthe T, Kägi T, Reger J (2013) A systems approach to sustainable technical product design: combining life cycle assessment and virtual development in the case of skis. J Ind Ecol 17:605–617
  57. Macombe C, Leskinen P, Feschet P, Antikainen R (2013) Social life cycle assessment of biodiesel production at three levels: a literature review and development needs. J Clean Prod 52:205–216
  58. Manik Y, Leahy J, Halog A (2013) Social life cycle assessment of palm oil biodiesel: a case study in Jambi Province of Indonesia. Int J Life Cycle Assess 18:1386–1392
  59. Martínez-Blanco J, Lehmann A, Muñoz P, Antón A, Traverso M, Rieradevall J, Finkbeiner M (2014) Application challenges for the social life cycle assessment of fertilizers within life cycle sustainability assessment. J Clean Prod 69:34–48
  60. Mathe S (2014) Integrating participatory approaches into social life cycle assessment: the SLCA participatory approach. Int J Life Cycle Assess 19:1506–1514
  61. McCallum AK (2002) MALLET: a machine learning for language toolkit. Accessed 26 May 2015
  62. McGorry SY (2000) Measurement in a cross-cultural environment: survey translation issues. Qual Mark Res 3:74–81
  63. Mertens DM (2007) Transformative paradigm: mixed methods and social justice. J Mixed Methods Res 1:212–225
  64. Musaazi MK, Mechtenberg AR, Nakibuule J, Sensenig R, Miyingo E, Makanda JV, Hakimian A, Eckelman MJ (2015) Quantification of social equity in life cycle assessment for increased sustainable production of sanitary products in Uganda. J Clean Prod 96:569–579
  65. Neugebauer S, Traverso M, Scheumann R, Chang YJ, Wolf K, Finkbeiner M (2014) Impact pathways to address social well-being and social justice in SLCA—fair wage and level of education. Sustainability 6:4839–4857
  66. Norris GA (2006) Social impacts in product life cycles—towards life cycle attribute assessment. Int J Life Cycle Assess. doi:10.1065/lca2006.04.017
  67. O’Brien M, Doig A, Clift R (1996) Social and environmental life cycle assessment (SELCA). Int J Life Cycle Assess 11:97–104
  68. Ottinger G (2013) Refining expertise: how responsible engineers subvert environmental justice challenges. NYU Press, New York
  69. Parent J, Cucuzzella C, Revéret JP (2010) Impact assessment in SLCA: sorting the sLCIA methods according to their outcomes. Int J Life Cycle Assess 15:164–171
  70. Parent J, Cucuzzella C, Revéret JP (2013) Revisiting the role of LCA and SLCA in the transition towards sustainable production and consumption. Int J Life Cycle Assess 18:1642–1652
  71. Patterson FD, Neailey K (2002) A risk register database system to aid the management of project risk. Int J Proj Manag 20:365–374
  72. Peck P, Sinding K (2003) Environmental and social disclosure and data richness in the mining industry. Bus Strateg Environ 12:131–146
  73. Pizzirani S, McLaren SJ, Seadon JK (2014) Is there a place for culture in life cycle sustainability assessment? Int J Life Cycle Assess 19:1316–1330
  74. Presser S, Couper MP, Lessler JT, Martin E, Martin J, Rothgeb JM, Singer E (2004) Methods for testing and evaluating survey questions. Publ Opin Q 68:109–130
  75. Ramirez PKS, Petti L, Haberland NT, Ugaya CML (2014) Subcategory assessment method for social life cycle assessment. Part 1: methodological framework. Int J Life Cycle Assess 19:1515–1523
  76. Reitinger C, Dumke M, Barosevcic M, Hillerbrand R (2011) A conceptual framework for impact assessment within SLCA. Int J Life Cycle Assess 16:380–388
  77. Schaeffer NC, Presser S (2003) The science of asking questions. Annu Rev Sociol 29:65–88
  78. Schuldt JP, Konrath SH, Schwarz N (2011) “Global warming” or “climate change”?: whether the planet is warming depends on question wording. Publ Opin Q 75:115–124
  79. Schuldt JP, Roh S, Schwarz N (2015) Questionnaire design effects in climate change surveys: implications for the partisan divide. Ann Am Acad Polit Soc Sci 658:67–85
  80. Sen A (1999) Development as freedom. Oxford University Press
  81. Slovic P, Layman M, Kraus N, Flynn J, Chalmers J, Gesell G (1991) Perceived risk, stigma, and potential economic impacts of a high-level nuclear waste repository in Nevada. Risk Anal 11:683–696
  82. Tourangeau R, Smith TW (1996) Asking sensitive questions: the impact of data collection mode, question format, and question context. Publ Opin Q 60:275–304
  83. Tukker A (2000) Life cycle assessment as a tool in environmental impact assessment. Environ Impact Assess Rev 20:435–456
  84. Umair S, Björklund A, Petersen EE (2015) Social impact assessment of informal recycling of electronic ICT waste in Pakistan using UNEP SETAC guidelines. Resour Conserv Recycl 95:46–57
  85. Vanclay F (2002) Conceptualising social impacts. Environ Impact Assess Rev 22:183–211
  86. Vanclay F (2006) Principles for social impact assessment: a critical comparison between the international and US documents. Environ Impact Assess Rev 26:3–14
  87. Vavra J, Bednarikova M (2013) Application of social life cycle assessment in metallurgy. METAL 2013: 22nd International Conference on Metallurgy and Materials
  88. Villar A, Krosnick JA (2011) Global warming vs. climate change, taxes vs. prices: does word choice matter? Clim Chang 105:1–12
  89. Walker G (2010) Environmental justice, impact assessment and the politics of knowledge: the implications of assessing the social distribution of environmental outcomes. Environ Impact Assess Rev 30:312–318
  90. Webler T, Lord F (2010) Planning for the human dimensions of oil spills and spill response. Environ Manag 45:723–738
  91. Williams TM (1994) Using a risk register to integrate risk management in project definition. Int J Proj Manag 12:17–22
  92. Wolsink M (1988) The social impact of a large wind turbine. Environ Impact Assess Rev 8:323–334
  93. Wu R, Yang D, Chen J (2014) Social life cycle assessment revisited. Sustainability 6:4200–4226
  94. Yu X (2008) Impacts of corporate code of conduct on labor standards: a case study of Reebok’s athletic footwear supplier factory in China. J Bus Ethics 81:513–529

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Emmett Interdisciplinary Program in Environment and Resources, Stanford University, Stanford, USA
