International Journal of Primatology, Volume 37, Issue 6, pp 617–627

Editorial: Changes and Clarifications to the Policies of the International Journal of Primatology to Promote Transparency and Open Communication

  • Joanna M. Setchell
  • Eduardo Fernandez-Duque
  • James P. Higham
  • Jessica M. Rothman
  • Oliver Schülke

The joint meeting of the International Primatological Society and the American Society of Primatologists in Chicago 2016 provided an opportunity to discuss and update the policies of the International Journal of Primatology, the official journal of the International Primatological Society. As a result, we have made several changes and clarifications to journal policy. Most of these are to improve transparency. Scientific progress requires transparency and open communication among scientists. However, an emphasis on innovation, combined with insufficient and selective reporting of methods and results, impedes progress. In this editorial we clarify our policies on replication, reproducibility, null results, statistical reporting, and methods validation. We have updated the Instructions for Authors and introduced badges to acknowledge open science. We also take this opportunity to summarize other changes to the International Journal of Primatology.


Replication

It is the policy of the International Journal of Primatology to encourage sound research that addresses new questions and ideas in primatology. We also encourage studies designed to assess the validity and generality of previously reported empirical studies. In other words, we encourage replication studies (new tests of existing ideas: Endler 2015) when new, robust data sets become available for a species or question for which existing data are limited.

Replication of findings is an essential component of scientific progress. Replication and quantitative synthesis (e.g., meta-analysis) allow researchers to assess the validity of findings from individual studies and to probe their generality. Replication can be exact, partial, or conceptual (Kelly 2006). Exact replication aims to duplicate an earlier study completely, and can never be attained perfectly. This is particularly the case for primatology, where many studies concern individual study subjects, groups, or populations living in a particular location during a particular period. Close replication, however, is possible, e.g., by using the same methods at the same site but at a different time, or by partitioning long-term data sets (Nakagawa and Parker 2015). Partial replications lie on a continuum from close replication to studies with some methodological differences. Conceptual replication involves a distinctly different study, with very different methods, evaluating the same hypothesis. Partial and conceptual replications may also take the form of quasi-replications, which expand the scope of study to a new species or system (Nakagawa and Parker 2015; Palmer 2000) and allow us to understand the generalizability of findings across species, and to examine factors underlying species differences. The different levels of replication involve a trade-off between testing the validity of findings and the scope of generality (Nakagawa and Parker 2015). We encourage all forms of replication, as well as meta-analyses to synthesize findings.

A policy of encouraging replication raises the question of when a finding becomes common knowledge. For example, this might include confirmation of the diet or activity budget of a well-described species. In our view, this occurs with observations that are basic descriptions of the natural history of a well-studied species and have previously been made by dozens of researchers. Descriptive papers reporting such characteristics may still be valuable for little-known species, but may not be sufficiently novel for publication in the International Journal of Primatology if reported for well-described species.


Data Archiving

It is the policy of the International Journal of Primatology to strongly encourage public archiving of all data required to repeat the analyses presented. These are the data required to support the claims made in the publication, not the entire data set, although there are also good reasons to archive additional data (see Caetano and Aisenberg 2014).

It is the policy of the International Journal of Primatology to strongly encourage public archiving of analysis code if statistics were not run in widely available statistics programs.

It is the policy of the International Journal of Primatology to require that users of archived data cite both the data and the original publication. In addition, users must ensure that they read the data correctly, know the strengths and weaknesses of the data set, and have details of all methods and data management required for a meaningful interpretation of new analyses or reanalyses. While the ideal of data archiving is that other researchers should be able to understand the data as it has been archived, users should contact the original authors about their planned analyses and ask for their advice.

Sharing the evidence for claims, including data, details of methods, and computer code, is good scientific practice and facilitates evaluation, interpretation, critique, extension, synthesis, and application (Borries et al. 2016). We aim to promote a data-sharing culture and to move our field toward mandatory data archiving. Data archiving in a public repository is required by many journals in ecology and evolutionary biology, among other disciplines (see the Joint Data Archiving Policy). It is not yet a requirement of any primatological journal. Data not stored in a repository are lost rapidly (Vines et al. 2013). This is particularly problematic for primatology, given the difficulties involved in replication, and the threats to our study species, which make some data truly irreplaceable. Data repositories provide more reliable archiving than supplementary information attached to articles.

Sharing data brings benefits to the individual researcher, as well as to the scientific community (Caetano and Aisenberg 2014). For example, data archiving promotes careful and efficient data organization, and provides a stable backup. Sharing data also facilitates collaboration and promotes discoverability of research. Researchers are often concerned that other researchers may perform and publish an analysis before the original authors. However, embargo periods alleviate such concerns. Moreover, reviewers and editors can detect the use of data without permission or attribution.

Examples of public, open-access repositories include the Open Science Framework and the various Dataverse networks; numerous other data and materials repositories also qualify.

We expect authors to attend to the FAIR principles when archiving data:
  • Data should be Findable.

  • Data should be Accessible.

  • Data should be Interoperable.

  • Data should be Reusable.


Authors submitting a manuscript to the International Journal of Primatology must indicate whether they will make their data available to other researchers in the “Data Availability” section of the manuscript. There are circumstances in which it is not possible or advisable to share any or all data and materials publicly, including human participants’ data or the location of Endangered species. Authors may include an explanation of such circumstances in their manuscript.

Our data availability policies align with Research Data Policy Type 3 (for life sciences) of our publisher Springer Nature's standardized research data policies. Type 3 journals encourage data sharing and require statements of data availability.

Null Results

It is the policy of the International Journal of Primatology to publish scientifically rigorous research, including research that does not reject the null hypothesis (often termed “negative” results), provided the analyses include adequate reporting of effect sizes and an a priori power analysis (Johnson et al. 2015).

The absence of an effect, or the existence of only a weak effect, is as important to our understanding of a phenomenon as a strong or statistically significant effect. Publishing a biased subset of results obscures our view of the true effect. Scientists have a responsibility to publish their results, to avoid the “file drawer effect,” which describes a tendency to publish statistically significant results, but not to publish null results, and resultant publication bias (Møller and Jennions 2001; Rosenberg 2005).
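For authors unfamiliar with a priori power analysis, a minimal sketch may help. This illustration uses a normal approximation to the two-sided, two-sample test; the function names are ours, and dedicated tools such as G*Power or the statsmodels power module offer more complete calculations.

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample comparison for a
    standardized effect size d (Cohen's d), via a normal approximation."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

def n_for_power(d: float, target: float = 0.8, alpha: float = 0.05) -> int:
    """Smallest per-group sample size reaching the target power."""
    n = 2
    while approx_power(d, n, alpha) < target:
        n += 1
    return n
```

For a medium effect (d = 0.5) this reproduces the familiar requirement of roughly 64 subjects per group for 80% power, which illustrates how demanding a well-supported null-result claim is at the sample sizes typical of primatology.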

Comprehensive Statistical Reporting

It is the policy of the International Journal of Primatology to require comprehensive details of data selection, data manipulation, and all data analyses conducted as part of a study, such that analyses can be reproduced, replicated, and fully understood. Authors should report full outcomes from all statistical analyses in the results, including alternative tests of the same hypothesis and all covariates tested.

It is the policy of the International Journal of Primatology to require numerical or graphical summaries of data; to show the full distribution of the data, rather than summary statistics for small sample sizes; and to report effect sizes (means, slopes of regressions, correlation coefficients, Cohen’s d, odds ratios, etc.) in addition to the statistical significance of analyses.

Results reported in scientific articles are often a biased subset of the results of the statistical analysis conducted for a study (Parker et al. 2016). The selected results may be those that are statistically significant in null hypothesis testing, consistent with a favored hypothesis, or surprising. Selective analysis and reporting conceals the number of comparisons made, and therefore the likelihood of a false positive (type I error), a practice that has been termed “P-hacking” (Head et al. 2015; Simonsohn et al. 2014). Linked to this is the practice of first exploring the data, then building an article around the strongest results, leading to hidden “researcher degrees of freedom” (Simmons et al. 2011) and “hypothesizing after the results are known,” or HARKing (Kerr 1998).
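The inflation of false positives from undisclosed multiple comparisons is easy to quantify. As a sketch (assuming independent tests, which is a simplification), the familywise error rate grows quickly with the number of comparisons:

```python
def familywise_error_rate(alpha: float, m: int) -> float:
    """Probability of at least one false positive among m independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** m
```

At alpha = 0.05, twenty undisclosed comparisons give roughly a 64% chance of at least one spurious "significant" result, which is why reporting all comparisons made, not just the significant ones, matters.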

Full transparency requires thorough reporting of how data were treated and analyzed (archiving analysis code addresses this) and full reporting of results. The relevant information differs by analysis, but for most analyses, this includes, but is not limited to, basic parameter estimates of central tendency (e.g., means) or other basic estimates (regression coefficients, correlation) and variability (e.g., standard deviation) or associated estimates of uncertainty (e.g., confidence/credible intervals). In the case of model building, model selection, and multimodel inference, authors should report the results of all models. Authors employing model selection should not interpret their results as support for or evidence against a hypothesis, but as generating predictions to be tested in future studies. Where hypotheses were formulated after data analysis, this should be acknowledged. The line between preplanned and post hoc analyses can be blurred in long-term studies, when existing data are analyzed to test new predictions in a planned fashion. We consider this preplanned analysis. However, if that analysis leads to further hypotheses, which are then tested with the same data, this would be post hoc analysis.

Primatology and related fields are often characterized by small sample sizes (Garamszegi 2015; Taborsky 2010) because of limited availability of study subjects, observation conditions, and field logistics, or the conservation status of a study species. This limits our ability to detect patterns and estimate parameters reliably (Garamszegi 2015). Small sample sizes yield low statistical power and increase the rate of false negatives (type II error) in a null hypothesis testing framework. In other words, biologically important effects may not be statistically significant when sample sizes are small. Reporting effect sizes separates the strength of the biological effect from whether the findings are likely to be due to chance. Primatologists should move toward the use of effect sizes rather than just null hypothesis testing, and report parameters with the associated confidence intervals, rather than binary decisions based on null hypothesis testing.
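As an illustration of effect-size reporting, the following sketch computes Cohen's d with a large-sample confidence interval. The variance approximation is the standard Hedges & Olkin form; specialized statistics packages offer exact methods, and the function names here are ours.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                     / (nx + ny - 2))
    return (mean(x) - mean(y)) / pooled_sd

def d_conf_int(d, nx, ny, level=0.95):
    """Normal-approximation confidence interval for Cohen's d."""
    se = sqrt((nx + ny) / (nx * ny) + d * d / (2 * (nx + ny)))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return d - z * se, d + z * se
```

Reporting the interval alongside d separates the estimated strength of the biological effect from the binary significant/nonsignificant decision.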

In addition to increasing the risk of false negatives, underpowered studies also increase the risk of false positives (type I error) (Parker et al. 2016), make it impossible to control for confounding variables, and mean that single data points can be highly influential. We encourage authors to conduct randomization and simulation-based analyses to examine the stability of results and the influence of single data points (Garamszegi 2015). We also encourage the use of Bayesian inference in addition to, and as an alternative to, traditional hypothesis testing (Congdon 2016).
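Two simple stdlib sketches of the randomization-style checks encouraged above: a leave-one-out pass that flags influential single data points, and a permutation test for a difference in group means. Both functions are illustrative, not prescriptions.

```python
import random
from statistics import mean

def loo_influence(data):
    """Change in the mean when each observation is dropped in turn;
    large entries flag single data points that drive the result."""
    full = mean(data)
    return [mean(data[:i] + data[i + 1:]) - full for i in range(len(data))]

def permutation_p(x, y, n_perm=5000, seed=42):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(mean(x) - mean(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:len(x)]) - mean(pooled[len(x):])) >= observed:
            hits += 1
    return hits / n_perm
```

Because the permutation test builds its null distribution from the data themselves, it makes fewer distributional assumptions than a t-test, which is helpful at the small sample sizes discussed above.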

Many data sets in primatology are pseudo-replicated; for example, they may include multiple observations of the same individual, violating the assumption of independence of data that underlies many statistical approaches. Traditionally, researchers addressed this by calculating an average for each individual, discarding all intraindividual variation and losing a great deal of information about individual plasticity. However, various approaches allow researchers to use the entire data set. These include randomization procedures, spatial or temporal autocorrelation models, and mixed models that allow parallel investigation of intra- and interindividual levels of variation and hierarchical levels of organization (Janson 2012). The latter, in particular, have revolutionized the analysis of animal behavior. Nevertheless, such models come with assumptions, and authors must explore their data, ensure that these assumptions are not violated, and include this information in their description of methods.
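A mixed model is the tool of choice for such data; as a minimal stdlib illustration of the underlying idea, the one-way intraclass correlation partitions total variance into inter- and intraindividual components. This sketch assumes a balanced design (the same number of repeated measures per individual) and is not a substitute for a full mixed-model analysis.

```python
from statistics import mean

def icc_oneway(per_individual):
    """One-way ANOVA intraclass correlation for balanced repeated
    measures: per_individual is a list of equal-length measurement
    lists, one per subject."""
    n = len(per_individual)        # number of individuals
    k = len(per_individual[0])     # repeated measures per individual
    grand = mean(v for g in per_individual for v in g)
    ms_between = k * sum((mean(g) - grand) ** 2
                         for g in per_individual) / (n - 1)
    ms_within = sum((v - mean(g)) ** 2
                    for g in per_individual for v in g) / (n * (k - 1))
    var_between = max((ms_between - ms_within) / k, 0.0)
    return var_between / (var_between + ms_within)
```

An ICC near 1 means most variation lies between individuals (averaging per individual loses little), while an ICC near 0 means most variation is intraindividual, exactly the information that per-individual averaging throws away.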

Methods Validation

It is the policy of the International Journal of Primatology to require authors to report validation for field and laboratory methods, or to indicate the location of such published information.

Noninvasive methods, in particular, require rigorous validation to ensure that a proxy variable predicts the target variable reliably (i.e., that the two are strongly correlated) and that measures are repeatable (i.e., that measurement error is acceptable). Measurement error itself does not introduce bias, but can lead to type II statistical error (failing to reject a false null hypothesis) and to inaccurate parameter estimates (see Garamszegi 2015 for further details).
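A basic validation step might quantify how strongly a noninvasive proxy tracks the target measure; a sketch using Pearson's correlation is below. Real validations would also assess repeatability (e.g., with an intraclass correlation) and report the sample and conditions under which the correlation was estimated.

```python
from math import sqrt
from statistics import mean

def pearson_r(proxy, target):
    """Pearson correlation between paired proxy and target measurements."""
    mx, my = mean(proxy), mean(target)
    num = sum((a - mx) * (b - my) for a, b in zip(proxy, target))
    den = sqrt(sum((a - mx) ** 2 for a in proxy)
               * sum((b - my) ** 2 for b in target))
    return num / den
```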

Updates to the Instructions for Authors

To further promote transparency and reproducibility, and to ensure that authors comply with journal policy, including the policies reported in this editorial, we have added a checklist to the Instructions for Authors (Table I), and ask authors to confirm that they comply with the checklist when submitting a manuscript. The checklist is adapted from the Tools for Transparency in Ecology and Evolution (TTEE). The Tools facilitate the promotion of transparency by academic journals in ecology, evolutionary biology, and other fields. The questions rest primarily within the Transparency and Openness Promotion (TOP) framework, designed for use across empirical disciplines.
Table I

Author checklist for transparency in empirical studies




 Study purpose

State the original purpose for which the study was conducted and data were gathered.



If the study is a meta-analysis, comply with the required components of the meta-analysis checklist (see the TTEE checklist).


If the article reports results from a portion of a larger study, include a statement about the broader scope of the larger study and, if appropriate, indicate other publications from this study.


If possible, data recorders should be blind to the experimental treatment imposed on the subjects when gathering data. Report whether or not blinding was implemented.


For field studies, include specific location(s) (e.g., latitude and longitude, elevation).

 Timing of study

Report study start date, end date, duration, and justification for duration and end date.

 Timing of sampling

Report timing (date, time of day if appropriate, etc.) and frequency of sampling, including storage duration for samples.

 Study conditions

Describe environmental or other conditions that may be relevant to the study question and taxa (e.g., temperature, light:dark cycle, etc.).

 Subjects and treatments

Where relevant, report methods used to choose subjects and to allocate subjects to treatments (e.g., randomized assignment), including organism taxon/taxa, source, and background (e.g., inbred lines, commercial seed, wild caught from X number of males and females and laboratory bred for Y generations, etc.) with institutional approvals as required and appropriate.


Describe the design of experiment or study, including complete treatment factors and interactions, design structure (e.g., factorial, blocked, nested, hierarchical), nature of experimental units and replicates.

 Magnitude of treatment

Report both treatment and control values (with units and variation) for independent (explanatory/predictor) variables.

 Sample size determination

Report how sample size was decided or determined. If sample size was not set prior to initiation of study, explain stopping rule for sampling.

 Sample sizes

Report sample sizes for all data, including subsets of data (e.g., each treatment group, other subsets), and sample size used for all statistical analyses.

 Analysis methods

We encourage authors to provide the precise details of data analysis (including information on computer software programs and packages, and annotated full code or set of commands) as supplementary materials with submission, and to archive them on a permanently supported platform on publication.


We encourage authors to post the data on which analyses are based as supplementary materials with submission, and to archive them in a permanently supported, publicly accessible database on publication.


We encourage authors to provide comprehensive materials as supplementary documentation with submission, and to archive them on a permanently supported platform on publication. These are materials that are excluded from the methods section but that might be important for interpreting results or for later attempts to replicate the study.

 Voucher specimens

If relevant, possible and allowable, deposit voucher specimens of the studied taxon/taxa in an appropriate curated collection.


If study is a replication, identify it as such and identify differences in methods between this study and the original.

 Funding and conflicts of interest

Disclose all funding sources and potential conflicts of interest.

 Ethics and permit

Provide relevant details of ethical and other required permits if applicable (e.g., name of permit, permit number, etc.)


 Complete statistical reporting

List each statistical test and analysis conducted in sufficient detail that it can be replicated and fully understood by those experienced in the methods.

Fully report outcomes from each statistical analysis. For most analyses, this includes (but is not limited to) basic parameter estimates of central tendency (e.g., means) or other basic estimates (regression coefficients, correlation) and variability (e.g., standard deviation) or associated estimates of uncertainty (e.g., confidence/credible intervals)

Thorough and transparent reporting will involve additional information that differs depending on the type of analyses conducted.

For null hypothesis tests, this also should at a minimum include the test statistic, degrees of freedom, and p-value.

For Bayesian analyses, this also should at a minimum include information on choice of priors and MCMC (Markov chain Monte Carlo) settings (e.g. burn-in, the number of iterations, and thinning intervals).

For hierarchical and other more complex experimental designs, full information on the design and analysis, including identification of the appropriate level for tests (e.g. identifying the denominator used for split-plot experiments) and full reporting of outcomes (e.g. including blocking in the analysis if it was used in the design).

Relevant information will differ among other types of analyses but in all cases should include enough information to fully evaluate the design and analysis.

 Post hoc acknowledgment

Acknowledge when hypotheses were formulated after data analysis.


 Citation of archived data, code, and materials

Properly cite any archived data, code, or materials made available by others and used in this manuscript

 Literature cited

By citing an article, authors certify they have read the original article

Modified from Tools for Transparency in Ecology and Evolution (TTEE) 1.0, downloaded August 31, 2016.

Many of the questions in Table I are standard components of a manuscript, but some require information that does not always appear in a manuscript (e.g., justifications for the sample size, selection of subjects, and study duration in a field study). We have also added this checklist to the reviewer guidelines and we (the editors) will check these points to assess author compliance.

Badges to Acknowledge Open Science

To further encourage transparency, the International Journal of Primatology provides an incentive for researchers to share the data and materials underlying their articles by acknowledging open practices with Open Science Framework badges in publications. Badges indicate that the paper conforms to specific transparency standards and are displayed on the first page of a paper. Badges are effective in promoting data archiving (Kidwell et al. 2016). For complete details, consult the Open Science Framework.

The International Journal of Primatology awards badges for:

  1. Open Data. The Open Data badge is earned for making the digitally shareable data necessary to reproduce the reported results publicly available.

  2. Open Materials. The Open Materials badge is earned by making publicly available the components of the research methodology needed to reproduce the reported procedure and analysis.

Authors may apply for one or both badges (Fig. 1) when submitting the final version of their manuscript. For each badge selected, authors complete the disclosure items. They are checked by the editor, but accountability remains with the author. If authors cannot meet badge criteria, they may provide text to appear in the manuscript such as “We will grant all reasonable data requests from qualified researchers.”
Fig. 1

Badges awarded for Open Data and Open Materials in the International Journal of Primatology. Badges indicate that the paper conforms to specific transparency standards and are displayed on the first page of a paper. For more details, see the Open Science Framework badges documentation.

If the application for a badge or badges is accepted, the disclosures and badges are printed in the journal article.

Other Changes to the Journal

Submission Categories

In addition to original research articles, which can include short articles, the International Journal of Primatology considers review articles and book reviews for publication. To these, we have added the category of News & Views pieces. News & Views pieces are either short communications reporting brief new observations or results, or critical commentaries on papers recently published in the International Journal of Primatology or other journals. They are limited to 1000 words and 5 references, with a maximum of one figure or table and no abstract. Short communications should have important implications for our understanding of primates and theoretical significance beyond the species involved.

We continue to welcome proposals for Special Issues or Special Sections on a particular theme. A Special Issue is one whole issue of the journal and should include approximately 12–14 articles. A Special Section is a smaller collection of articles. Articles in a Special Issue or Section can include original research articles, reviews, commentaries, and guest editorials. The Editor-in-Chief provides full support and advice to Guest Editors as needed. To propose a Special Issue, please send the following information to the Editor-in-Chief, Jo Setchell:
  1. A proposed title

  2. Proposed Guest Editors

  3. A 250-word abstract that explains why the topic is important

  4. A list of the intended contributions

  5. An estimated timeline for submissions



Thanks to Reviewers

Conscientious peer review is a time-consuming task, but it is essential to ensure the quality of scientific research. The International Journal of Primatology is very grateful to reviewers for the time and effort they invest in the review process. We have initiated a new process to offer our thanks more explicitly. We will publish a list of reviewer names with our thanks in the last issue of the journal published before each IPS congress (a 2-year cycle). We will also announce in that issue, and at the following IPS congress, an award for the best reviewer, which will be presented at the congress. This award will be determined on the basis of confidential editor (Editor-in-Chief, Associate Editor, and Guest Editor) scores of the usefulness of all provided reviews.

Associate Editors and the Editorial Board

We are in the process of adding a new Associate Editor, as well as revising the Editorial Board. The Editorial Board serves to broaden the scope and range of expertise beyond that of the editors. It also reflects the international nature of the journal. Members of the Editorial Board are listed on the journal homepage and in the printed issues. The members act as ambassadors for the journal, support and promote the journal, seek out the best work and actively encourage submissions, and review submissions on a more regular basis than other reviewers.


Authorship

It is the policy of the International Journal of Primatology to encourage authors to give details of their contributions in the acknowledgments. The Instructions for Authors refer to the Committee on Publication Ethics (COPE) guidelines on authorship.


Conclusion

The policies we outline here are intended to facilitate the scientific process by promoting transparency, openness, and reproducibility in primatology. In implementing these policies, we join editors of other journals in promoting a more open research culture (Nosek et al. 2015). The International Journal of Primatology is the official journal of the International Primatological Society, and we encourage submissions from all areas of primatology, including, but not limited to, anthropology, anatomy, ethology, paleontology, psychology, sociology, and zoology. If you are interested in submitting your work and have questions, please send an e-mail inquiry to any of the editors.



Acknowledgments

We, as editors of the International Journal of Primatology, gratefully acknowledge the authors who provide us with potential content for the journal and reviewers for their constructive comments on manuscripts. At Springer Nature, we are grateful to Janet Slobodien, life sciences editor; Edlyn Apolonio, the journal editorial assistant; Terry Kornak, the copyeditor; and Leonora Panday, the production assistant.


References

  1. Borries, C., Sandel, A. A., Koenig, A., Fernandez-Duque, E., Kamilar, J. M., et al. (2016). Transparency, usability, and reproducibility: guiding principles for improving comparative databases using primates as examples. Evolutionary Anthropology, 25, 232–238.
  2. Caetano, D. S., & Aisenberg, A. (2014). Forgotten treasures: the fate of data in animal behaviour studies. Animal Behaviour, 98, 1–5.
  3. Congdon, P. (2016). Bayesian statistical modelling (2nd ed.). Chichester: John Wiley & Sons.
  4. Endler, J. A. (2015). Writing scientific papers, with special reference to evolutionary ecology. Evolutionary Ecology, 29, 465–478.
  5. Garamszegi, L. Z. (2015). A simple statistical guide for the analysis of behaviour when data are constrained due to practical or ethical reasons. Animal Behaviour, 120, 223–234.
  6. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of P-hacking in science. PLoS Biology, 13, 1–15.
  7. Janson, C. H. (2012). Reconciling rigor and range: observations, experiments, and quasi-experiments in field primatology. International Journal of Primatology, 33, 520–541.
  8. Johnson, P. C. D., Barry, S. J. E., Ferguson, H. M., & Müller, P. (2015). Power analysis for generalized linear mixed models in ecology and evolution. Methods in Ecology and Evolution, 6, 133–142.
  9. Kelly, C. D. (2006). Replicating empirical research in behavioral ecology: how and why it should be done but rarely ever is. The Quarterly Review of Biology, 81, 221–236.
  10. Kerr, N. L. (1998). HARKing: hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217.
  11. Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., et al. (2016). Badges to acknowledge open practices: a simple, low-cost, effective method for increasing transparency. PLoS Biology, 14, e1002456.
  12. Møller, A. P., & Jennions, M. D. (2001). Testing and adjusting for publication bias. Trends in Ecology and Evolution, 16, 580–586.
  13. Nakagawa, S., & Parker, T. H. (2015). Replicating research in ecology and evolution: feasibility, incentives, and the cost-benefit conundrum. BMC Biology, 13, 88.
  14. Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., et al. (2015). Promoting an open research culture. Science, 348, 1422–1425.
  15. Palmer, A. (2000). Quasi-replication and the contract of error: lessons from sex ratios, heritabilities and fluctuating asymmetry. Annual Review of Ecology and Systematics, 31, 441–480.
  16. Parker, T. H., Forstmeier, W., Koricheva, J., Fidler, F., Hadfield, J. D., et al. (2016). Transparency in ecology and evolution: real problems, real solutions. Trends in Ecology and Evolution, 31, 1–9.
  17. Rosenberg, M. S. (2005). The file-drawer problem revisited: a general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59, 464–468.
  18. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
  19. Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). p-curve and effect size: correcting for publication bias using only significant results. Perspectives on Psychological Science, 9, 666–681.
  20. Taborsky, M. (2010). Sample size in the study of behaviour. Ethology, 116, 185–202.
  21. Vines, T. H., Albert, A. Y. K., Andrew, R. L., Débarre, F., Bock, D. G., et al. (2013). The availability of research data declines rapidly with article age. Current Biology, 24, 94–97.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Joanna M. Setchell (1)
  • Eduardo Fernandez-Duque (2)
  • James P. Higham (3)
  • Jessica M. Rothman (4, 5)
  • Oliver Schülke (6, 7)

  1. Department of Anthropology, Durham University, Durham, UK
  2. Department of Anthropology, Yale University, New Haven, USA
  3. Department of Anthropology, New York University, New York, USA
  4. Department of Anthropology, Hunter College of the City University of New York, New York, USA
  5. New York Consortium in Evolutionary Primatology, New York, USA
  6. Department for Behavioral Ecology, Georg August University, Göttingen, Germany
  7. Primate Social Evolution Group, German Primate Centre – Leibniz Institute for Primate Research, Göttingen, Germany
