
University reputation and technology commercialization: evidence from nanoscale science


We study how universities' scientific reputations and technology transfer policies affect patenting by university researchers, in particular whether researchers assign patent ownership to their university or to an outside firm. Using data on the career output of over 33,000 researchers in the nanosciences, we find a strongly positive relationship between a university's reputation in nanosciences and the number of university-assigned patents, but a negligible association with firm assignment of patents. University technology transfer office resources are positively related to both types of patents, but with diminishing returns. In contrast, the share of license revenue offered upfront to researchers is positively associated with university-assigned patents but negatively related to firm-assigned patents. Taken together, our results suggest that universities that streamline their technology transfer efforts and improve their research reputation through support for basic research will see long-term success in technology commercialization.




  1. Licensing revenue is net of payments to other institutions and legal fees. TTO staffing cost is based on the number of full-time staff and the median wage bill of $638,000 per institution as reported in Thursby and Thursby (2007).

  2. These figures are based on our calculations from Figure 1 in Fini et al. (2010, p. 1063).

  3. Passage of the Bayh–Dole Act in 1980 gave universities the right to own inventions arising from federally funded research.

  4. We are not aware of specific lawsuits filed by universities against faculty for non-disclosure. Our conversations with TTO practitioners suggest, anecdotally, that such conflicts are typically resolved internally. We are aware of one case in which a university sued a company to which a faculty member had assigned a valuable HIV-testing patent. This case, Stanford v. Roche, clarified that the Bayh–Dole Act does not automatically vest in a university the rights to discoveries made in its labs.

  5. Researchers can publicly access Nanobank data after clearing a brief application process; for details, refer to the Nanobank website.

  6. Since ISI lists for each article the names of authors separately from the list of organizational affiliations, one cannot unambiguously match authors with organizations in a given article.

  7. Hence, researchers who were never listed as a corresponding author of an article were excluded from our analysis.

  8. This scheme treats all researchers with the same name within the same state as the same person, but treats the same person patenting or publishing in different states as different people. However, our identification scheme shows an over 95% congruence rate with the identification by another research team (Lai et al. 2011), who devised a Bayesian-learning-based algorithm to disambiguate inventor names in the U.S. patent data.
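As an illustration, the name–state matching rule described in this footnote can be sketched as follows. This is a minimal sketch, not the authors' actual matching code; the function name and the example records are invented.

```python
# Sketch of the name-state disambiguation rule: records sharing the same
# (normalized) author name within the same state collapse to one person;
# the same name in different states is treated as different people.
def assign_researcher_ids(records):
    """records: iterable of (name, state) pairs.
    Returns a dict mapping each normalized (name, state) key
    to a stable integer researcher ID."""
    ids = {}
    for name, state in records:
        key = (name.strip().lower(), state.strip().upper())
        if key not in ids:
            ids[key] = len(ids)  # next unused ID
    return ids

# Hypothetical example: "J. Lee" in GA collapses to one researcher,
# while "J. Lee" in ID is treated as a different person.
records = [("J. Lee", "GA"), ("j. lee", "GA"), ("J. Lee", "ID")]
ids = assign_researcher_ids(records)
```

Grouping on a composite (name, state) key is what makes the scheme both conservative within a state and blind to cross-state moves, which is exactly the limitation the footnote acknowledges.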

  9. This does not include patents outside nanotechnology fields.

  10. Patents are admittedly a crude measure of commercialization. Not all disclosures intended for commercialization are patented, and not all patents ultimately find their way into commercial products or applications. Nevertheless, we believe our measure reasonably captures a researcher's proclivity to commercialize, particularly because it is aggregated over multiple years of academic activity.

  11. Excluding the last four of these fields does not alter the results.

  12. Criticisms exist of the statistics contained in the NRC report, particularly the rankings reported in its 2006 version. Unlike the 1993 version, in which a single quality score was assigned to department performance and could readily be converted into a reputational ranking, the 2006 NRC study provided two different sets of rankings ("R" and "S") expressed only as ranges. This method caused confusion and controversy within the academic community, making the direct use of these rankings to represent institutional reputation potentially problematic (we owe this point to an anonymous reviewer). With this issue in mind, we were careful not to base our reputation measure on either the R or S rankings, which were essentially subjective scores taken from surveys of faculty, students, and administrators, but instead drew on the raw data that directly reflect the quality of research activity. As explained below, we also used the 1993 NRC rankings in a robustness test and found the results consistent.

  13. Citation-based metrics such as total citations or the number of citations per article may be a better measure of researcher reputation. However, we lack such citation data.

  14. We used the 2003 values for the category-specific median journal impact factors, as 2003 was the earliest year for which the median scores were tracked.

  15. We thank Anne Fuller and Henry Sauermann for generously sharing their data.

  16. Using a logit model instead produced almost identical results. Because the hurdle model relies on maximum likelihood estimation, linear models such as the linear probability model are not an alternative for this part of the estimation.

  17. For consistency with the baseline model, we would ideally use a truncated negative binomial estimation; with this approach, however, the model failed to converge. See McDowell (2003) for a discussion of the estimation of hurdle models in Stata.

  18. With logged explanatory variables in the negative binomial regression, the coefficient estimates can be interpreted as elasticities.
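The elasticity interpretation noted in this footnote follows from the log link of the negative binomial mean function:

```latex
% Log-linked count model with a logged regressor:
E[y \mid x] = \exp\!\left(\beta_0 + \beta_1 \ln x\right)
\;\Rightarrow\;
\frac{\partial \ln E[y \mid x]}{\partial \ln x} = \beta_1 ,
% so \beta_1 is the percentage change in the expected count for a
% one-percent change in x, i.e., an elasticity.
```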

  19. The calculation is as follows: \( \left. \frac{d(\text{university patenting})}{d(\text{TTO resources})} \right|_{\text{mean}} = 14.189 - 2 \times 64.882 \times 0.02 = 11.594 \). This means that a one-unit increase in per-faculty TTO full-time staff is associated with a 1159.4% increase in patenting through the university. For a one-standard-deviation (0.04) increase, this corresponds to a 46.4% increase.
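The arithmetic in this footnote can be verified directly from the coefficients and moments reported above (the variable names here are illustrative):

```python
# Check of the marginal-effect arithmetic in footnote 19. The
# coefficients (14.189 linear, -64.882 quadratic) and the mean/SD of
# per-faculty TTO staff (0.02 / 0.04) are taken from the text.
b_linear, b_quadratic = 14.189, -64.882
mean_tto, sd_tto = 0.02, 0.04

# d(patenting)/d(TTO resources), evaluated at the mean of TTO resources:
marginal_effect = b_linear + 2 * b_quadratic * mean_tto  # ≈ 11.594

# Percentage change associated with a one-SD increase in TTO staff:
one_sd_effect_pct = marginal_effect * sd_tto * 100  # ≈ 46.4
```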

  20. The coefficients imply a U-shaped relationship, with a turning point (minimum) at around 9.5%. Given that the lowest inventor share in our sample is 20%, the relationship is practically positive over the entire range of support.

  21. The calculation is analogous to that of TTO size.

  22. Including the (average) tenure in the model did not alter the results.


  • Agrawal, A., & Henderson, R. (2002). Putting patents in context: Exploring knowledge transfer from MIT. Management Science, 48, 44–60.

  • Audretsch, D., Aldridge, T., & Oettl, A. (2006). The knowledge filter and economic growth: The role of scientist entrepreneurship. Kansas City, MO: Ewing Marion Kauffman Foundation Large Research Projects Research Paper Series.

  • Azoulay, P., Ding, W., & Stuart, T. (2007). The determinants of faculty patenting behavior: Demographics or opportunities? Journal of Economic Behavior and Organization, 63, 599–623.

  • Bradley, S., Hayter, C., & Link, A. (2013). Proof of concept centers in the United States: An exploratory look. Journal of Technology Transfer, 38, 349–381.

  • Cameron, A. C., & Trivedi, P. K. (2010). Microeconometrics using Stata. College Station, TX: Stata Press.

  • Darby, M. R., & Zucker, L. G. (2003). Grilichesian breakthroughs: Inventions of methods of inventing and firm entry in nanotechnology. NBER Working Paper No. 9825.

  • Di Gregorio, D., & Shane, S. (2003). Why do some universities generate more start-ups than others? Research Policy, 32, 209–227.

  • Fini, R., Lacetera, N., & Shane, S. (2010). Inside or outside the IP system? Business creation in academia. Research Policy, 39, 1060–1069.

  • Gulbranson, C., & Audretsch, D. (2008). Proof of Concept Centers: Accelerating the commercialization of university innovation. Journal of Technology Transfer, 33, 249–258.

  • Gurmu, S., Black, G. C., & Stephan, P. E. (2010). The knowledge production function for university patenting. Economic Inquiry, 48, 192–213.

  • Hayter, C. S., & Link, A. N. (2015). On the economic impact of university proof of concept centers. Journal of Technology Transfer, 40(1), 178–183.

  • Jensen, R., & Thursby, M. (2001). Proofs and prototypes for sale: The licensing of university inventions. American Economic Review, 91, 240–259.

  • Jensen, R., Thursby, J., & Thursby, M. (2003). Disclosure and licensing of university inventions: ‘The best we can do with the s**t we get to work with’. International Journal of Industrial Organization, 21, 1271–1300.

  • Jensen, R., Thursby, J., & Thursby, M. (2010). University-industry spillovers, government funding, and industrial consulting. NBER Working Paper No. 15732.

  • Jung, H., & Lee, J. (2014). The impacts of science and technology interventions on university research: Evidence from the U.S. National Nanotechnology Initiative. Research Policy, 43, 74–91.

  • Lach, S., & Schankerman, M. (2008). Incentives and invention in universities. RAND Journal of Economics, 39, 403–433.

  • Lai, R., D’Amour, A., Yu, A., Sun, Y., & Fleming, L. (2011). Disambiguation and co-authorship networks of the U.S. patent inventor database 1975–2010. Cambridge, MA: Harvard Institute for Quantitative Social Science.

  • Maia, C., & Claro, J. (2013). The role of a proof of concept center in a university ecosystem: An exploratory study. Journal of Technology Transfer, 38(5), 641–650.

  • Markman, G., Gianiodis, P., & Phan, P. (2008). Full-time faculty or part-time entrepreneurs? IEEE Transactions on Engineering Management, 55, 29–36.

  • McDowell, A. (2003). From the help desk: Hurdle models. Stata Journal, 3, 178–184.

  • Mowery, D., Nelson, R., Sampat, B., & Ziedonis, A. (2004). Ivory tower and industrial innovation: University-industry technology transfer before and after the Bayh–Dole Act. Stanford: Stanford University Press.

  • Murray, F., & Stern, S. (2007). Do formal intellectual property rights hinder the free flow of scientific knowledge? An empirical test of the anti-commons hypothesis. Journal of Economic Behavior and Organization, 63, 648–687.

  • Owen-Smith, J., & Powell, W. (2001). To patent or not: Faculty decisions and institutional success at technology transfer. Journal of Technology Transfer, 26, 99–114.

  • Rivette, K., & Kline, D. (2000). Discovering new value in intellectual property. Harvard Business Review, 55, 54–66.

  • Sauermann, H., Cohen, W., & Stephan, P. E. (2012). Complementing Merton: The motives, incentives, and commercial activities of academic scientists and engineers. Working Paper, Georgia Institute of Technology, Duke University and Georgia State University.

  • Siegel, D., Waldman, D., & Link, A. (2003). Assessing the impact of organizational practices on the relative productivity of university technology transfer offices: An exploratory study. Research Policy, 32, 27–48.

  • Sine, W., Shane, S., & Di Gregorio, D. (2003). The halo effect and technology licensing: The influence of institutional prestige on the licensing of university inventions. Management Science, 49, 478–496.

  • Stephan, P. E. (1996). The economics of science. Journal of Economic Literature, 34, 1199–1235.

  • Thursby, J., Fuller, A., & Thursby, M. (2009). US faculty patenting, inside and outside the university. Research Policy, 38, 14–25.

  • Thursby, J., Jensen, R., & Thursby, M. (2001). Objectives, characteristics and outcomes of university licensing: A survey of major US universities. Journal of Technology Transfer, 26, 59–72.

  • Thursby, J., & Thursby, M. (2007). Knowledge creation and diffusion of public science with intellectual property rights. Frontiers in Globalization, 2, 199–232.

  • Zucker, L. G., & Darby, M. R. (2009). Nanobank data description, release 2.0 (beta-test). Los Angeles, CA: UCLA Center for International Science, Technology, and Cultural Policy.

  • Zucker, L. G., Darby, M. R., & Armstrong, J. S. (2002a). Commercializing knowledge: University science, knowledge capture, and firm performance in biotechnology. Management Science, 48, 138–153.

  • Zucker, L. G., Darby, M. R., & Torero, M. (2002b). Labor mobility from academe to commerce. Journal of Labor Economics, 20, 629–660.

Author information


Corresponding author

Correspondence to Jeongsik Lee.

About this article

Cite this article

Lee, J., Stuen, E. University reputation and technology commercialization: evidence from nanoscale science. J Technol Transf 41, 586–609 (2016).


Keywords

  • Technology transfer
  • Nanoscience
  • Commercialization
  • Reputation
  • Prestige
  • Circumvention

JEL Classification

  • O30
  • O31
  • O32
  • O34