Indicator system for managing science, technology and innovation in universities

  • Soleidy Rivero Amador
  • Maidelyn Díaz Pérez
  • María José López-Huertas Pérez
  • Reinaldo Javier Rodríguez Font
Open Access

Abstract

The formulation of standardized indicators for measuring science, technology and innovation at the international, regional and institutional levels remains a continuing need. Although there are various schools of thought and different ways of obtaining information for measurement, one of the most promising approaches today in the development of measuring instruments is the use of the researcher's Curriculum Vitae. The objective of this research is to design a system of indicators to measure the performance of science, technology and innovation in universities. The proposal includes a specific analysis for the definition of each indicator, the mathematical procedure for its calculation, its aggregation levels and time periods, and its meaning and usefulness. The study draws on documentary analysis of the theoretical and conceptual references that support the proposal in the Latin American context. Furthermore, an empirical survey method is used to assess specific contexts in the institution under study. As a result, a system of indicators adjusted to the characteristics of university institutions and to current trends in the Latin American region is designed. The use and analysis of these indicators make it possible to establish patterns, trends and regularities in the organization that favor institutional knowledge management of science, technology and innovation processes, and deliver adequate information and institutional knowledge management for decision-making.

Keywords

Metric indicators; Science, technology and innovation; Curriculum vitae; Curricular information system; Information and knowledge management

Introduction

Much of the effort of science itself focuses on developing appropriate indicators that reflect standardized measurement of scientific and technological activities at the regional and international levels. The calculation of inputs, however, is a task more closely related to the economic sciences, statistics and administration, fields whose methodologies and procedures are largely standardized worldwide. By contrast, the theoretical-methodological concepts of science required to formulate science and technology indicators make this a complex and difficult undertaking (Albornoz 2007; Chavarro et al. 2014; Moravcsik 1986; Spinak 2001; Peralta et al. 2015; Sancho 2003). Measurement techniques for research results have existed for only a few decades and are not completely consolidated. There are excellent standards set by bibliometrics, such as patent metrics and scientometrics, expressed in indicators that are classified and applied to different situations; but issues remain pending for the accurate measurement of results at the institutional level, adapted to regional peculiarities, as well as for the use of other sources of information to establish measurement indicators (Rodrígues and Mello 2016; Spinak 2001).

In essence, scientific results, the knowledge generated, their impact and benefits to society are very difficult to quantify. However, the study of scientific literature (books, articles, reports, patents, new products, etc.) gives an approximate measure of results. It is usual to assess performance and productivity through the number of publications and citations in specialized, international, refereed and indexed journals. This practice can accurately reflect the work and quality of certain areas or fields such as physics, chemistry and biomedicine. But in other specialties and fields of application (such as in the social sciences) results and differentiated products are distributed through channels that are not always scientific journals with broad international impact (González and Molina 2009).

In bibliometrics, relevant methods have been established, as well as indicators and patterns to follow in the application of measurement tools, using scientific publications and traditional citation indexes which have been constantly improving (Peralta et al. 2015). From another perspective, innovative proposals can be found that use alternative information sources for the application of indicators, such as the Curriculum Vitae (CV) (Báez et al. 2008; Sempere and Rey-Rocha 2003; Rey-Rocha et al. 2006; Barandiarán and D´Onofrio 2013; Solís et al. 2010; Picinin et al. 2016). This approach reaffirms the need to develop Scientific Information Systems (SIS) to facilitate access to information related to the scientific results of research groups, institutions and regions to establish important parameters in the development of indicators adjusted to regional particularities and institutional realities (Cañibano and Bozeman 2009; Navarro et al. 2016).

SIS using the CV of the researcher as a source of information are called Curricular Information Systems and may have an institutional, national or regional level of aggregation. This type of computer system favorably influences the development of measuring instruments, complements quantitative analysis based on scientific publications and offers possibilities for normalization at the institutional and regional levels (Barandiarán and D'Onofrio 2013; Díaz et al. 2016). The CV has become a source of information that favors the measurement of science, technology and innovation and which can be supplemented by other sources of information such as surveys, bibliographic databases and patents. Despite this, CV standardization at the field level is insufficient (Martín and Rey-Rocha 2009; Navarrete et al. 2005). However, significant progress has been made in metric resources in the Latin American region and in the integration of Curricular Information Systems. The following representative examples in the Hispanic world may be mentioned: Andalusia's Scientific Information System (SICA, its Spanish acronym) and the Latin American and Caribbean CV project in Science and Technology (CvLAC, its acronym in Spanish) (Ríos et al. 2016; Ríos and Santana 2001).

In this context, Cuba, like any other nation, needs to improve regulations, national policies and data sources, as well as the design and scope of its scientific indicators, adjusted to the new potential of the Latin American region. The challenge for quantitative studies of science is to go beyond a merely quantitative approach and to influence the process of strategic decision-making designed to promote, consolidate or improve the assessment of scientific activity in the country (Arencibia 2012; Chía and Escalona 2009). The Cuban university sector, as in other Latin American nations, is the main producer and disseminator of knowledge in society. Consequently, the application of tools to manage science and technology in these institutions becomes a determining factor in promoting scientific production and its management in other institutions within the region (Arencibia et al. 2012; Barandiarán and D'Onofrio 2013; Miguel et al. 2006).

In this sense, there are still some gaps in the measurement of science and technology, such as the need to know the level of specialization in several topic areas and the structural dimension of disciplinary and interdisciplinary phenomena of scientific results, among other outstanding issues (Arencibia et al. 2013).

The present research takes place within this whole context and investigates part of the problem, in this case the measurement and design of indicators tailored to data sources. The overall objective is to design a system of indicators to measure the performance of science, technology and innovation in universities. The proposal includes a specific analysis of the definition of each indicator, the procedure for its calculation, its mathematical expression, aggregation levels and temporality, as well as its meaning and usefulness.

The use and analysis of these indicators will allow patterns, trends and regularities in the institutional knowledge organization to be established, favoring the management of the institution's science, technology and innovation processes, as well as an adequate level of institutional knowledge and information management for strategic, operational and functional decision-making in the organization.

Methodology

This paper uses, as a starting point, documentary analysis of important methodological and conceptual referents, internationally recognized and specific to the Latin American context. The main manuals consulted were the Frascati Manual (2002), the Canberra Manual (1995), the Manual of Bogota (2005), the Manual of Lisbon (2007) and the Manual of Santiago (2007) (Organización para la Cooperación y el Desarrollo Económico [OCDE] 1995, 2003; Red Iberoamericana de Indicadores de Ciencia y Tecnología [RICYT] 2007, 2009). In addition, the so-called Manual of Buenos Aires was consulted; it was conceived with a view to using the researcher's CV as a source of insider information for the construction of indicators of the trajectories of scientific and technological researchers (D'Onofrio et al. 2010).

The proposal uses as a tool the Information Management and Institutional Knowledge System at Pinar del Río University (CV-UPR), developed by its Information, Knowledge and Technology Management Group (proGINTEC). The curricular structure of the platform is adjusted to the characteristics of the institution and to national regulations. To manage the researcher's CV, the CV-UPR system uses the structural foundations established by CvLAC, also known as Curriculum Lattes. This regional platform is widely used in our countries, so its structural premise favors normalizing CV fields to generate measurement indicators (Díaz et al. 2016). In addition, the survey is used as an empirical method, with the questionnaire as the tool for obtaining information on the science and technology processes observed in the institution. A questionnaire was applied to researchers who coordinate research projects, aiming to explore in depth the characteristics of the results obtained and their interdisciplinary relationships. The population was composed of researchers from the university who are responsible for coordinating research projects. For this study, the list of research projects in the period 2011–2014 was taken as the source. A population of 33 researchers was identified and the questionnaire was applied to all of them. The Statistical Package for the Social Sciences software (SPSS version 11.5, 2004) was used for data processing, and Mindjet MindManager software (version 8.0.217) was used to create diagrams visualizing the structures of variables and indicators.

The indicators obtained were grouped into six variables with common measurement objectives. This structure allows specific analysis of certain activities related to science and technology management and, at the same time, comparison of the metric values of the different variables. The variables cover the institutional research process from academic-research results and scientific publishing to institutional visibility at the territorial and international levels. Each group of indicators describes the dimensions of one variable, aimed at identifying specific patterns in measuring science and technology at the institutional level which characterize institutional knowledge in its various dimensions. The values of the indicators can be compared to establish a relationship in the behavior of each variable. In this way, the science and technology process can be characterized in a more comprehensive way at the institutional level.
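The grouping described above can be pictured as a simple nested structure. The sketch below is purely illustrative (the abbreviated variable and category names are our assumptions, not the authors' code); it shows how indicators hang off categories, which hang off the six variables.

```python
# Illustrative sketch: the indicator system as a nested mapping of
# variables -> categories -> (sub)indicator names. Names are abbreviations
# of those in the text, not an actual data model from the study.
indicator_system = {
    "I. Characterization of researchers": {
        "Personal characteristics": ["by sex", "by age"],
        "Level of training": [],
    },
    "II. Scientific and technological production": {
        "Institutional production": [],
        "Publication in scientific journals": ["productivity and source",
                                               "quality and authorship"],
        "Research projects": [],
    },
    # Variables III-VI follow the same pattern.
}

def categories(system, variable):
    """Return the measurement categories grouped under one variable."""
    return list(system[variable])
```

A structure like this makes it straightforward to compute each variable's metric values separately and then compare them across variables, as the text proposes.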

Variable I: Characterization of researchers, as its name suggests, characterizes researchers based on the scientific results evaluated in the institution. The parameters characterizing researchers, and their behavior over time, help us to understand favorable or unfavorable trends in the scientific results of the institution. From this perspective, the measurement analysis focuses on the researchers and the assessment of their performance, their different activities and the aspects that distinguish them. The goal of this measure is to focus on the relationship between researcher performance evaluation and the institution they belong to. Although this type of assessment is complicated when approached statistically, it can be balanced with other kinds of qualitative analysis and other personnel management tools within the institutional management framework (Wildgaard 2016). The variable is structured according to the following categories and subcategories:
  • Category: Personal characteristics:
    • Subcategory: according to sex

    • Subcategory: according to age

  • Category: Level of training of researchers.

  • Category: Teaching activities and directives of researchers

  • Category: Typology of researchers according to their scientific production

  • Category: Academic and research trajectory of the researcher.

Variable II: Scientific and technological production looks at specific aspects of this type of production in the institution. The grouping of categories is based on the concept of the institution's scientific and technological production. This covers scientific publication, the results of research projects, participation in scientific events, patents and registrations obtained, and other activities of institutional relevance (Piedra and Martínez 2007). The major types of institutional scientific and technological results are easily identified in the researcher's CV data. Indicators can therefore be obtained that reflect institutional and personal performance in the production of scientific and technological knowledge. The advantage of the CV format as a source of information for measuring research results has been exploited in other studies at the institutional and regional levels (Barandiarán and D'Onofrio 2013; Dietz et al. 2000; Milanés 2016). Accordingly, Variable II is divided into the following categories and subcategories:
  • Category: Institutional Production

  • Category: Characteristics of publication in scientific journals: divided into two subcategories aimed at characterizing the process of publication in scientific journals
    • Subcategory: Productivity and source of publications

    • Subcategory: Quality and authorship of publications.

  • Category: Research projects

Variable III: Academic and research trajectory complements the previous variable by focusing on the impact that scientific research has on the development of institutional academic activities. This feature, typical of university institutions, requires accurate information on academic and research processes in order to assess institutional performance while balancing these two aspects, both highly relevant for university excellence. Given this close relationship, institutional knowledge is in constant interaction with academic training and the development of scientific knowledge. This third variable is structured into two categories:
  • Category: Teaching activities.

  • Category: Research activities.

Variable IV: Dynamics and scientific collaboration allows the study of the interaction between researchers and institutions in obtaining science and technology results, an aspect that expresses the level of institutional socialization and dissemination of scientific knowledge. It is particularly beneficial to merge CV data to analyze the different ways in which collaboration leads to scientific results, as reflected in research mobility histories. Mobility commonly increases scientific production (Sandström 2009; Gaughan 2009). This fourth variable is divided into three categories:
  • Category: Collaboration in scientific publications

  • Category: Institutional collaborations

  • Category: Support for research

Variable V: Territorial visibility focuses on the local impact of the institution. One way to enrich the process of measuring science and technology management in universities is to highlight the strategic role and influence they have in the development of the local area and the nation. This mission of the university to reach out to local and national communities justifies the need for measurement standards that enhance scientific results and visibility at the national level. The author-affiliation approach, together with the analysis of the researcher's CV, is a commendable way to interpret scientific collaboration at the institutional and regional levels, as it encourages the analysis and interpretation of the results obtained (Moed and Halevi 2014). From this perspective, this variable is composed of four main categories:
  • Category: Awards

  • Category: Projects

  • Category: Training activities and advice

  • Category: Relevance of publications in scientific journals in the territory.

Variable VI: International visibility measures the internationalization of science at the institutional level and allows the international visibility of the institution to be assessed for any given period, as a result of researchers' performance in international cooperation activities. The results of the Variable IV indicators should be consulted to deepen the analysis of scientific results arising from research grants and interaction with international universities (Cañibano et al. 2010). This last variable is divided into the following categories:
  • Category: Awards

  • Category: Projects

  • Category: Training activities and advice

  • Category: Visibility of scientific results

Results and discussion

The proposed system of indicators characterizes a group of activities within the institution, linking researchers' behavior with the institutional environment. The interest is therefore not focused on obtaining specific values, but rather on the possibilities offered by contrasts and comparisons between observations, approaches and analyses of the variables that describe the process of science, technology and innovation through the study of scientific and academic results. In this way, the analysis performed by applying the indicator system can be interpreted as a measure of institutional capacity for the generation, dissemination and evaluation of institutional knowledge.

Each indicator was identified with a name and a number referring to the variable to which it belongs. A specific analysis was made of the definition of each indicator, the procedure for its calculation, its mathematical expression, and its meaning and usefulness, and its level of aggregation and temporality were examined (Rivero 2016). These aspects favor the implementation of this measurement system as a tool for managing the science and technology process at the institutional level. The Electronic Supplementary Material of this article summarizes these aspects and the specifics of each indicator.

Figure 1 shows the set of 15 indicators used to characterize the researchers working in the period chosen by the evaluator. From the generational point of view (age and institutional entry dates), it is possible to analyze how long researchers have been in the institution and to evaluate their training and their degree of involvement in teaching or management activities related to science and technology. In this dimension of analysis, researchers are classified according to their productivity levels in scientific journals and the areas of knowledge in which they publish.
Fig. 1

Indicator structure. Variable I

Indicator 13: Researchers who have scientific publications in various areas of knowledge identifies scientific publications whose results fall into several knowledge areas. To obtain this measurement, the results of the researcher's scientific publications (in various formats) are classified from the items in their CV.

The CV-UPR system uses the taxonomic classification of the Organisation for Economic Co-operation and Development (OCDE, for its acronym in Spanish). This classification of scientific knowledge has featured in the main internationally established manuals as a methodological tool for science and technology measurement. Its greatest influence is in European countries, but it has also been widely used in Latin America. Highlighted among its benefits is a more harmonious treatment of the social science disciplines, which allows a closer approximation to social reality (Red Internacional de Fuentes de Información y Conocimiento para la Gestión de la Ciencia y la Tecnología e Innovación [Red ScienTI] 2004). Assessing results classified into different areas of science can identify, at least preliminarily, processes of interdisciplinarity and transdisciplinarity in science (Elleby and Ingwersen 2010). The study of this aspect is proposed through the researcher's classification of the items in their CV, specifically by selecting OCDE classification patterns (Hjørland and Albrechtsen 1995). It is therefore possible in the CV-UPR to assign various areas or disciplines of knowledge to the same scientific result in order to identify interdisciplinary intersections.
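The multi-area assignment just described can be sketched in a few lines. This is a hypothetical illustration (the record layout and field names are our assumptions, not the CV-UPR schema): a CV item tagged with two or more OCDE knowledge areas is flagged as a candidate interdisciplinary result, in the spirit of Indicator 13.

```python
# Hypothetical CV items; "areas" holds the OCDE knowledge areas the
# researcher assigned to each result (field names are assumptions).
publications = [
    {"title": "Soil dynamics model", "areas": ["Agricultural Sciences", "Engineering"]},
    {"title": "Curricular analytics", "areas": ["Social Sciences"]},
]

def interdisciplinary(items):
    """Return the items classified in more than one knowledge area."""
    return [item for item in items if len(set(item["areas"])) > 1]
```

As the text notes, such a flag is only a preliminary signal; confirming genuine interdisciplinarity requires discussion within the relevant discourse communities.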

To enrich the analysis of this variable, the researcher's history is studied together with their academic and research relationship with the institution. The average index of research performance and the average academic performance index are two indicators that evaluate researcher performance in relation to their research and academic results in a specific period. Furthermore, they can be calculated at the individual, group or institutional level (D'Onofrio et al. 2010).

Indicator 14: Average index of research performance refers to the average number of activities carried out by the researcher in their field of scientific research over a certain period. Interpreting this type of indicator requires data collection over a given period and measurement of the growth rate at least annually. It is very feasible to compare this indicator over an accumulated 5-year period. From the mathematical point of view, as the sum in the numerator increases, so does the result. The denominator is conditioned by the number of years analyzed and, as this is constant for each researcher, any increase or decrease is due only to the sum in the numerator. The most productive researchers will have a high index, related to the amount produced and not its quality. For this reason, we suggest comparing this indicator with the percentages of publication in high-impact journals among the Variable II indicators.
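The calculation described above reduces to a simple average. The sketch below is a minimal reading of that description, not the authors' exact formula: the yearly counts of research activities (an assumed input drawn from the CV) are summed and divided by the constant number of years in the window.

```python
def average_research_performance(activities_per_year):
    """Indicator 14 sketch: total research activities recorded over the
    window divided by the (constant) number of years in the window."""
    return sum(activities_per_year) / len(activities_per_year)

# Accumulated 5-year window; each entry is one year's count of research
# activities (e.g. publications + projects + events), values illustrative.
index = average_research_performance([3, 5, 2, 4, 6])  # 20 activities / 5 years
```

Because the denominator is fixed for a given window, two researchers evaluated over the same period can be compared directly on the numerator, which is why the text pairs this index with quality-oriented Variable II indicators.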

From another perspective, Indicator 15: Average academic performance index refers to the researcher's activities in the educational sphere over time, namely activities in teaching undergraduate and graduate students, averaged over a defined period. With the implementation of these two indicators, the researcher's history in academic and research activities is linked within the same time window. The analysis combines the two performances: the teaching activities carried out by the researcher during the same period in which results are obtained through scientific research. The minimum and maximum standard values of these indicators depend on the number of researchers in the institution, the number of accumulated years in the period selected by the analyst, and the total scientific production of the institution. Based on these parameters, a default value is set to limit the maximum value attained by the researcher, balancing the two performance indices.

The second variable groups a total of 14 indicators, covering in the first instance scientific production in its various types, with emphasis on research projects and publication in scientific journals (see Fig. 2). Traditional bibliometric indicators are applied and combined, exploiting the benefits of using the CV as a source of information (Arencibia et al. 2013; Fernández et al. 1998; Peralta et al. 2015). For example, Indicator 22: Origin of the publication identifies the origin of the scientific journal in which results are published, while Indicator 23, on impact levels, analyzes the databases in which the journal is indexed. For more information, see the Electronic Supplementary Material of this article. During the design of the indicators and their contextualization within the institution under study, differences in the classification of certain scientific results were addressed. The questionnaire technique allowed digging deeper into research projects whose results interact with various scientific disciplines; 80% of project coordinators agreed with this classification. Indicator 29: Research projects with results in several areas of knowledge is designed with this purpose in mind and takes into consideration the field classification of research projects in the researcher's CV. This aspect of the measurement must be supplemented by in-depth analysis and discussion within the discourse communities of researchers grouped into projects specialized in each area of science (Hjørland and Albrechtsen 1995).
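Indicators 22 and 23 are essentially frequency counts over journal metadata recorded in the CV. The following sketch assumes a hypothetical record layout (the field names and journal/database values are illustrative, not the CV-UPR schema) and tallies publications by journal origin and by indexing database.

```python
from collections import Counter

# Hypothetical CV publication records (field names are assumptions).
records = [
    {"journal": "Revista A", "origin": "national", "indexes": ["SciELO"]},
    {"journal": "Journal B", "origin": "international", "indexes": ["Scopus", "WoS"]},
    {"journal": "Journal C", "origin": "international", "indexes": ["Scopus"]},
]

# Indicator 22 sketch: origin of the journal in which results are published.
origin_counts = Counter(r["origin"] for r in records)

# Indicator 23 sketch: databases indexing the journal, as an impact proxy.
index_counts = Counter(db for r in records for db in r["indexes"])
```

Counts like these can then be turned into the publication percentages that the text recommends comparing against Indicator 14.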
Fig. 2

Indicator structure. Variable II

The third variable (see Fig. 3) focuses on measuring the relationship between teaching and research activities, allowing further analysis. It groups a set of 12 indicators that interact in measuring undergraduate activity, graduate studies and scientific research. These indicators provide a preliminary measure of the impact of the research process on the development of the academic activities of the institution.
Fig. 3

Indicator structure. Variable III

The indicators grouped into Variable IV concentrate on measuring institutional and author collaboration in obtaining shared scientific results. The proposal integrates collaboration in academic and research activities, as detailed in the researcher's CV, bringing together collaboration among researchers and among institutions (see Fig. 4).
Fig. 4

Indicator structure. Variable IV

The last two variables contrast influence at regional level and the visibility of the institution at international level. Nine indicators are grouped in the regional perspective, which are related to territorial and national impact, scientific awards, research projects linked directly to identified national or regional priorities and the participation of the institution in the postgraduate training of the territory in which it operates (see Fig. 5). The relevance of scientific journal publication for the territorial role of the university is also considered.
Fig. 5

Indicator structure. Variable V

From the perspective of international visibility, nine indicators with a similar structure are proposed, but set in the context of university internationalization (see Fig. 6). The indicators explore scientific and technical consulting activities, publication in scientific journals, advising on graduate academic research, and interaction in the financing and co-authorship of scientific research projects; these aspects can be identified in the CV and make scientific findings visible internationally.
Fig. 6

Indicator structure. Variable VI

Concluding remarks

The proposed system of indicators allows precise monitoring of the results of an institution's research activity, in close interaction with its academic activity. Knowing the results for any given period from the calculation of this indicator system is essential for managing the science, technology and innovation process in any university. The analysis and interpretation of results reveal the research and academic strengths and weaknesses of the organization, aspects that inform strategic improvement, plans of action, measurement criteria and policies of the institution in the short, medium and long term.

Sources of reliable, standardized and accessible data to optimize measurement processes of scientific results are a requirement for a university. This study considers teacher-researcher CV data to manage the process of science, technology and innovation. The proposed indicators system is a working tool for the measurement, analysis and forecasting of scientific results in keeping with the characteristics of this type of institution.

Acknowledgements

Our thanks to the professors of the Information, Knowledge and Technologies Management Group (proGINTEC) of Pinar del Río University, to the professors who collaborated by updating their CVs, and to those who translated the article into English.

Supplementary material

Supplementary material 1 (DOCX, 71 kb)

References

  1. Albornoz, M. (2007). La RICYT: Resultados y desafíos pendientes. Ponencia presentada en el VII Congreso Iberoamericano de Indicadores de Ciencia y Tecnología. Red Iberoamericana de Indicadores de Ciencia y Tecnología (RICYT) y la Fundación de Amparo a la Investigación del Estado de Sao Paulo (FAPESP). Sao Paulo, Brazil.Google Scholar
  2. Arencibia, J. R. (2012). Sistematicidad en la evaluación de la actividad científica desde una perspectiva cienciométrica. Acimed, 23(3), 215–218.Google Scholar
  3. Arencibia, J. R., Corera, E., Chinchilla, Z., & de Moya-Anegón, F. (2013). Relaciones intersectoriales, producción científica y políticas nacionales para el desarrollo de la investigación: un estudio de caso sobre Cuba 2003–2007. Revista Cubana de Información en Ciencias de la Salud, 24(3), 111–157.Google Scholar
  4. Arencibia, J. R., Vega, R. L., Araújo, J. A., Corera, E., & de Moya-Anegón, F. (2012). Hitos de la ciencia cubana en el siglo XXI, una revisión a partir de los trabajos más citados en Scopus en el período 2001–2005. Acimed, 23(1), 45–58.Google Scholar
  5. Báez, J. M., Peset, F., Núñez, F., & Ferrer, A. (2008). CVN: normalización de los currículos científicos. El Profesional de la Información, 17(2), 213–220.CrossRefGoogle Scholar
  6. Barandiarán, S., & D´Onofrio, M. G. (2013). Construción y Aplicación de una tipología de perfiles de diversidad profesional de los investigadores argentinos: aportes al Manual de Buenos Aires. Ponencia presentada en el IX Congreso de Indicadores de Ciencia y Tecnología de la RICIYT. Bogotá. Colombia.Google Scholar
  7. Cañibano, C., & Bozeman, B. (2009). Curriculum vitae method in science policy and research evaluation: The state-of-the-art. Research Evaluation, 18(2), 86–94. (Special issue on the use of CVs in research evaluation).CrossRefGoogle Scholar
  8. Cañibano, C., Otamendi, J., & Solís, F. (2010). Investigación y movilidad internacional: Análisis de las estancias en centros extranjeros de los investigadores andaluces. Revista Española de Documentación Científica, 33(3), 428–457.CrossRefGoogle Scholar
  9. Chavarro, D., Tang, P., & Rafols, I. (2014). Interdisciplinarity and research on local issues: Evidence from a developing country. Research Evaluation, 23(195–209), 2017.  https://doi.org/10.1093/reseval/rvu012.Google Scholar
  10. Chía, J., & Escalona, C. I. (2009). La medición del impacto de la ciencia, la tecnología y la innovación en Cuba: Análisis de una experiencia. Revista CTS, 13(5), 83–96.Google Scholar
  11. D’Onofrio, M. G., Solís F., Tignino, M. V., & Cabrera, E. (2010). Indicadores de trayectorias de los investigadores iberoamericanos: Avances del Manual de Buenos Aires y resultados de su validación técnica. Informe de la Red de Indicadores de Ciencia y Tecnología Iberoamericana e Interamericana (RICYT). Elaboración del Manual de Buenos Aires. http://www.ricyt.org/manuales/doc_view/144-indicadores-de-trayectorias-de-los-investigadores-iberoamericanos-avances-del-manual-de-buenos-aires-y-resultados-de-su-validacion-tecnica. Accessed 11 March 2015.
  12. Díaz, M., Peña, D. A., Rodríguez, R. J., & Carrillo-Calvet, H. (2016). Sistemas curriculares para la gestión de información y conocimiento institucional. Estudio de caso. Revista General de Información y Documentación, 26(1), 11–24.
  13. Dietz, J. S., Chompalov, I., Bozeman, B., O’Neil Lane, E., & Park, J. (2000). Using the curriculum vita to study the career paths of scientists and engineers: An exploratory assessment. Scientometrics, 49, 419–442. https://doi.org/10.1023/A:1010537606969.
  14. Elleby, A., & Ingwersen, P. (2010). Publication point indicators: A comparative case study of two publication point systems and citation impact in an interdisciplinary context. Journal of Informetrics, 4, 512–523. https://doi.org/10.1016/j.joi.2010.06.001.
  15. Fernández, M. T., Gómez, I., & Sebastián, J. (1998). La cooperación científica de los países de América Latina a través de indicadores bibliométricos. Interciencia, 23(6), 328–337.
  16. Gaughan, M. (2009). Using the curriculum vitae for policy research: An evaluation of National Institutes of Health center and training support on career trajectories. Research Evaluation, 18(2), 117–124.
  17. González, M.V., & Molina, M. (2009). La evaluación de la ciencia: revisión de sus indicadores. Revista Contribuciones a las Ciencias Sociales, noviembre. http://www.eumed.net/rev/cccss/06/ggmp.htlm. Accessed 16 December 2014.
  18. Hjørland, B., & Albrechtsen, H. (1995). Toward a new horizon in information science: Domain analysis. Journal of the American Society for Information Science, 46(6), 400–425.
  19. Miguel, S., de Moya, F., & Herrero, V. (2006). Aproximación metodológica para la identificación del perfil y patrones de colaboración de dominios científicos universitarios. Revista Española de Documentación Científica, 29(1), 36–55.
  20. Milanés, Y. (2016). Evaluación Multidimensional de la Investigación. Análisis micro en la Universidad de Granada durante el período 2009–2013. Universidad de Granada. Tesis Doctorales. ISBN: 978-84-9125-583-3. http://hdl.handle.net/10481/42894. Accessed 24 March 2016.
  21. Moed, H. F., & Halevi, G. (2014). A bibliometric approach to tracking international scientific migration. Scientometrics, 101, 1987–2015. https://doi.org/10.1007/s11192-014-1307-6.
  22. Moravcsik, M. J. (1986). The classification of science and the science of classification. Scientometrics, 10(3–4), 179–197.
  23. Navarrete, J., Santa, S., Rios, C., González, A., de Moya, F., Banqueri, J., & Solis, F. (2005). Sistema de Información Científica de Andalucía (Spain). Un Modelo para la Gestión de la Ciencia y Tecnología. Revista CENIC Ciencias Biológicas, 36(Especial).
  24. Navarro, C., Vidal, A., González de Dios, J., & Aleixandre, R. (2016). Comunicación científica (XXXII). Cómo hacer un currículum vítae. Revista Acta Pediátrica Esp, 74, 3–4.
  25. Organización para la Cooperación y el Desarrollo Económico [OCDE]. (1995). Manual on the measurement of human resources devoted to S&T “Canberra Manual”.
  26. Organización para la Cooperación y el Desarrollo Económico [OCDE]. (2003). Manual de Frascati, 2002. Definiciones y convenciones básicas. http://www.edutecne.utn.edu.ar/ocde/frascati-03-30-34.pdf. Accessed 4 May 2009.
  27. Peralta, M. J., Frías, M., & Chaviano, O. G. (2015). Criterios, clasificaciones y tendencias de los indicadores bibliométricos en la evaluación de la ciencia. Revista Cubana de Información en Ciencias de la Salud, 26(3), 290–309.
  28. Picinin, C. T., Pilatti, L. A., Kovaleski, J. L., Graeml, A. R., & Pedroso, B. (2016). Comparison of performance of researchers recipients of CNPq productivity grants in the field of Brazilian production engineering. Scientometrics, 109, 855–870. https://doi.org/10.1007/s11192-016-2070-7.
  29. Piedra, Y., & Martínez, A. (2007). Producción científica. Ciencias de la Información, 3(38), 33–38.
  30. Red Iberoamericana de Indicadores de Ciencia y Tecnología [RICYT]. (2007). Programa Iberoamericano de Ciencia y Tecnología para el Desarrollo (CYTED). Manual de indicadores de internacionalización de la ciencia y la tecnología, Manual de Santiago.
  31. Red Iberoamericana de Indicadores de Ciencia y Tecnología [RICYT]. (2009). Manual de Lisboa. Pautas para la interpretación de los datos estadísticos disponibles y la construcción de indicadores referidos a la transición de Iberoamérica hacia la Sociedad de la Información.
  32. Red Internacional de Fuentes de Información y Conocimiento para la Gestión de la Ciencia y la Tecnología e Innovación [Red ScienTI]. (2004). Normalización de Clasificaciones. III Reunión de Coordinación Regional de la Red ScienTI. Buenos Aires, Argentina.
  33. Rey-Rocha, J., Garzon-García, B., & Martín-Sempere, J. (2006). Scientists’ performance and consolidation of research teams in biology and biomedicine at the Spanish Council for Scientific Research. Scientometrics, 69(2), 183–212.
  34. Ríos C., Navarrete, J., Santa, S., Solis, F., Fernández, J. A., & Chaichio, J. A. (2006). Sistema de Información Científica de Andalucía: Una herramienta para la evaluación y gestión de los resultados de la actividad científica. Actas del 8vo Congreso Nacional de Bibliotecología y Ciencias de la Información. Cartagena de Indias (Colombia).
  35. Ríos, R., & Santana, P. H. A. (2001). El espacio virtual de intercambio de información sobre recursos humanos en ciencia y tecnología de América Latina y el Caribe del CV Lattes al CvLAC. Ciência da Informação, 30, 42–47.
  36. Rivero, S. (2016). Sistema de indicadores para la gestión de la ciencia y la tecnología en la Universidad de Pinar del Río (Cuba), mediante la utilización del Curriculum Vitae del investigador como fuente principal de información. Universidad de Granada. Tesis Doctorales. ISBN: 978-84-9125-583-3. http://digibug.ugr.es/handle/10481/43331. Accessed 24 March 2016.
  37. Rodrígues, A., & Mello, C. F. (2016). Importance and susceptibility of scientific productivity indicators: Two sides of the same coin. Scientometrics, 109, 697–722. https://doi.org/10.1007/s11192-016-2047-6.
  38. Sancho, R. (2003). Versión española de la sexta edición del Manual de Frascati: Propuesta de norma práctica para encuestas de investigación y desarrollo experimental. http://redc.revistas.csic.es/index.php/redc/article/viewFile/200/255. Accessed 14 May 2009.
  39. Sandström, U. (2009). Combining curriculum vitae and bibliometric analysis: Mobility, gender and research performance. Research Evaluation, 18(2), 135–142.
  40. Sempere, J. R., & Rey-Rocha, J. (2003). El currículum vitae y la encuesta como fuentes de datos para la obtención de indicadores de la actividad científica de los investigadores. https://www.researchgate.net/publication/242594292_El_’Curriculum_Vitae’_y_la_Encuesta_como_fuente_de_datos_para_la_obtencion_de_indicadores_de_actividad_cientifica_de_los_investigadores. Accessed 16 December 2016.
  41. Solís, F. M., Milanés, Y., & Navarrete, J. (2010). Evaluación de la investigación científica. El caso de Andalucía. Revista Fuentes, 10, 83–100.
  42. Spinak, E. (2001). Indicadores cienciométricos. Revista Acimed, 9(Suppl), 42–49.
  43. Wildgaard, L. (2016). A critical cluster analysis of 44 indicators of author-level performance. Journal of Informetrics, 10, 1055–1078.

Copyright information

© The Author(s) 2018

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Soleidy Rivero Amador (1)
  • Maidelyn Díaz Pérez (2)
  • María José López-Huertas Pérez (3)
  • Reinaldo Javier Rodríguez Font (4)
  1. Faculty of Economics and Business, University of Pinar del Río, Pinar del Río, Cuba
  2. Department of Publications and the Information and Knowledge Management Group (proGINTEC), University of Pinar del Río, Pinar del Río, Cuba
  3. Department of Library Science, University of Granada, Granada, Spain
  4. Information and Knowledge Management Group (proGINTEC), University of Pinar del Río, Pinar del Río, Cuba
