Detecting bursty terms in computer science research
Abstract
Research topics rise and fall in popularity over time, some more swiftly than others. The fastest rising topics are typically called bursts; for example “deep learning”, “internet of things” and “big data”. Being able to automatically detect and track bursty terms in the literature could give insight into how scientific thought evolves over time. In this paper, we take a trend detection algorithm from stock market analysis and apply it to over 30 years of computer science research abstracts, treating the prevalence of each term in the dataset like the price of a stock. Unlike previous work in this domain, we use the free text of abstracts and titles, resulting in a finer-grained analysis. We report a list of bursty terms, and then use historical data to build a classifier to predict whether they will rise or fall in popularity in the future, obtaining accuracy in the region of 80%. The proposed methodology can be applied to any time-ordered collection of text to yield past and present bursty terms and predict their probable fate.
Keywords
Computer science · Bibliometrics · Term life cycles · Machine learning · DBLP · MACD
Introduction
In 2012, a group of scientists from the University of Toronto built a convolutional neural network (CNN) and applied it to a well-known image classification task. Their paper (Krizhevsky et al. 2017) sparked a revolution in the field of deep learning: an explosion of popularity and interest that continues today. CNNs have since spread outwards from their original domain and can be found in diverse fields, such as biomedicine (Chen et al. 2018) and astronomy (Dambre et al. 2015).
Fig. 1: Number of abstracts containing the term “Convolutional neural network(s)” over time (1988–2017) in DBLP. While there were isolated mentions of CNNs in the 1990s and 2000s, the topic underwent exponential growth in popularity beginning in 2012, which continues to the present
Being able to detect bursty terms automatically in scientific literature would have a number of applications. Firstly, early detection might allow funding agencies and publishers to take note of the most promising new ideas and channel new support that way. For newcomers to a field and researchers in the sociology of science, automatically listing the hottest topics over time would give an instant snapshot of the evolution of the field. Finally, compiling a corpus of historical bursty terms over time might make it possible to characterise the life cycles that new ideas go through as they develop.
In this paper, we explore a burst detection methodology that requires little tuning and can be used on a large dataset. We build on work by He and Parker (2010), who used a technique from stock market analysis to detect bursty keywords in PubMed, a very large online bibliography of biomedical citations.
Our aims in this paper are threefold:
- To adapt an existing burst detection methodology to the free text of a large corpus of computer science abstracts. To our knowledge, this is the first use of this method on free text rather than a controlled vocabulary of keywords.
- To report a list of historical and current bursts in the computer science literature.
- To predict the future prevalence of existing bursty terms using machine learning.
Background
Burst detection
The problem of tracking topics in time-ordered corpora was formalised by a DARPA-sponsored initiative under the name Topic Detection and Tracking (TDT). Early research focused on segmenting a corpus into topics, finding the first mention of each topic and then tracking and plotting their popularity over time (Allan et al. 1998). As computer hardware improved, it became common to use Latent Dirichlet Allocation (LDA) for this kind of topic modelling (Blei et al. 2003). A typical method involves splitting the corpus into time steps, finding topics in each chunk and then linking them together across time steps based on some measure of similarity (Griffiths and Steyvers 2004; Steyvers et al. 2004; Mei and Zhai 2005). The prevalence of each topic can then be tracked over time and bursty periods identified. However, this comes with a number of disadvantages, such as the lack of interpretability of the results and the difficulty in coherently linking LDA topics together between subsequent time-steps.
The opposite approach is to first identify the bursty terms in a dataset, and then cluster them together into topics, using, for instance, Kleinberg (2002)’s burst detection algorithm. Originally developed to detect topics in email chains, Kleinberg’s method assumes that terms in documents are emitted by a two-state automaton. The automaton may spontaneously transition from a non-bursty state to a bursty state, or vice versa. Variants of this have been applied across several domains: Diao et al. (2012) and Mathioudakis and Koudas (2010) used it to detect bursty topics in Twitter data, the latter in real time, while Fung et al. (2005) and Takahashi et al. (2012) applied it to news streams.
However, when it comes to scientific literature, there are a few reasons why Kleinberg’s method is a less natural fit. He and Parker (2010) point out that, unlike Tweets and news articles, scientific papers tend to enter the world in batches, such as when a new edition of a journal or the proceedings of a conference is published. This violates Kleinberg’s underlying assumption that new items enter the dataset in a continuous fashion. It also forces us to impose longer time steps, such as years rather than seconds. This causes a second problem: the quantity of data available.
While there are several large open-access corpora of scientific abstracts, such as PubMed (biomedicine), arXiv (physics and computer science), Semantic Scholar (assorted) and DBLP (computer science), all of them cover short intervals relative to the size of the time steps. Even in the best-case scenario, we are likely to have less than a hundred years' worth of usable data, which means approximately a hundred time steps. There has also been a vast change in the underlying landscape over the span of the dataset, because science in general (Bornmann and Mutz 2015) and computer science in particular (Wu et al. 2018) have both seen strong and sustained growth in the last century. By contrast, unless one collects many years of Twitter data, the size and characteristics of the dataset do not change substantially over time.
Several burst detection methods from other domains have been proposed for use on scientific documents. For instance, Stroup et al. (1989) take inspiration from epidemiology, and Zhang and Shasha (2006) take inspiration from gamma rays. However, of particular interest to us is He and Parker (2010)’s work that takes a popular technique from stock market analysis and applies it to PubMed data. This is an attractive idea: a great deal of work has been done in analysing stocks, because some people are highly motivated to predict what prices will do in later time steps.
Moving average convergence divergence
The basic item in the toolkit of the stock market analyst is the moving average. While a moving average necessarily lags behind real-time data, it smooths out random fluctuations to reveal underlying trends. The simple moving average (SMA) of a time series is the sum of its values over a set interval (called the span of the SMA), divided by the width of that interval. More advanced methods use exponential moving averages (EMAs), which assign more weight to more recent data. MACD builds on these: the MACD line is the difference between a fast EMA (span \(n_1\)) and a slow EMA (span \(n_2\)), the signal line is an EMA (span \(n_3\)) of the MACD line, and the histogram is the difference between the MACD and signal lines. Crossovers between these lines, and the sign and size of the histogram, are conventionally read as trading signals.
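To make these definitions concrete, the following is a minimal sketch in Python using pandas; the function name and example series are ours, and the default (6, 12, 3) spans anticipate the configuration chosen in "Applying MACD" below:

```python
import pandas as pd

def macd(series, n1=6, n2=12, n3=3):
    """Return the MACD line, signal line and histogram of a time series.

    The MACD line is the difference between a fast EMA (span n1) and a
    slow EMA (span n2); the signal line is an EMA (span n3) of the MACD
    line; the histogram is the difference between MACD and signal.
    """
    ema_fast = series.ewm(span=n1, adjust=False).mean()
    ema_slow = series.ewm(span=n2, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal = macd_line.ewm(span=n3, adjust=False).mean()
    return macd_line, signal, macd_line - signal

# Toy example: the yearly prevalence of a term that starts to burst
prevalence = pd.Series([0.001, 0.001, 0.002, 0.004, 0.009, 0.018, 0.030],
                       index=range(2011, 2018))
macd_line, signal, hist = macd(prevalence)  # hist > 0: upward momentum
```

A positive, growing histogram indicates accelerating growth in a term's prevalence, while a sign change marks a potential trend reversal.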
Fig. 2: Illustration showing how moving averages can be used to detect changepoints. (a) shows how the crossover of two simple moving averages, one with a span of 12 and another with a span of 6, can generate a sell signal; (b) shows the same phenomenon with exponential moving averages; (c) shows the MACD graph of the time series. Note how the sell signal comes earlier in (c) than in (a) and (b)
Using a controlled vocabulary rather than free text has the advantage that the terms in the vocabulary are guaranteed to be meaningful, unique and free of spelling errors. However, a controlled vocabulary will necessarily lag behind the forefront of scientific development, because new terms can only be added after they have risen to prominence. Additionally, keywords are not available for all datasets, limiting the scope of the method. For these reasons, we use the free text of abstracts and titles.
Prediction
As well as detecting historically bursty terms, we would like to be able to predict whether they will become more or less prevalent in the future. Some related work has been done on this task. Prabhakaran et al. (2016) took 2.4 million abstracts from Web of Science and used an implementation of LDA to identify 500 topics. They then used a logistic regression classifier to predict whether their topics would rise or fall in popularity over subsequent time steps, yielding an accuracy of 70%. Balili et al. (2017) looked at a slightly different task; they took 21.2 million PubMed abstracts, clustered their MeSH keywords based on co-occurrence in article annotations and then trained a gradient-boosted trees classifier to predict whether individual clusters would survive or dissolve. Finally, in their 2011 paper, He and Parker took a database of approximately 100,000 Californian grant abstracts which were pre-labelled with “project terms” (keywords from a defined vocabulary of biomedical concepts (RePORT 2018) that were automatically assigned to grant applications) (He and Parker 2011). They calculated the MACD and histogram values for each term, then used various classifiers to predict whether the histogram itself would rise or fall in the future. While this is not exactly the same as predicting whether term prevalence would increase, their best classifier had an accuracy of 88%.
Materials and method
The code used in this section can be found on GitHub at https://github.com/etattershall/burst-detection
Dataset
We use a corpus gathered from DBLP, which is a large computer science bibliography hosted by Trier University in Germany. DBLP is considered to be reasonably comprehensive in its coverage of the field: Cavacini (2015) compared a number of bibliographies and found that DBLP had the greatest number of unique computer science articles indexed. It is freely available to download, either directly (DBLP 2019), or via Semantic Scholar (Allen Institute for Artificial Intelligence 2015).
After downloading, cleaning, and filtering out non-English abstracts, we had a dataset of 2.6 million articles spanning the years 1988–2017. For each article, we combined the title and abstract to form a single document, then used Python's Natural Language Toolkit (NLTK) (Bird et al. 2009) to tokenise, lemmatise and remove a short list of standard English stopwords.
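A minimal sketch of this preprocessing step is shown below; the choice of the WordNet lemmatiser and NLTK's standard English stopword list is our assumption, as the exact configuration is not specified:

```python
import string
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# Assumes the 'punkt', 'stopwords' and 'wordnet' NLTK resources are installed
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))

def preprocess(title, abstract):
    """Combine title and abstract into one document, then tokenise,
    lemmatise and drop stopwords and punctuation."""
    tokens = word_tokenize((title + ' ' + abstract).lower())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t not in stop_words and t not in string.punctuation]
```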
From there, we read in the dataset year by year and formed a vocabulary of uni-, bi- and tri-gram terms. We calculated the document frequency of each term in each year, which left us with a 31 (year) \(\times\) 4.1 million (term) matrix. We then filtered again, keeping only terms whose document frequency exceeded 0.02% of abstracts for at least three consecutive years. This substantially reduced the amount of noise in the dataset from digitisation errors and very rare bi- and tri-gram terms. The final size of our vocabulary was approximately 70,000 terms.
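The sketch below illustrates the document frequency computation and the consecutive-year filter; the function names and exact filtering logic are ours:

```python
from collections import Counter
from nltk.util import ngrams

def document_frequencies(docs):
    """Fraction of one year's documents containing each uni-, bi- and
    tri-gram term. `docs` is a list of token lists for one year."""
    counts = Counter()
    for tokens in docs:
        terms = set()
        for n in (1, 2, 3):
            terms.update(' '.join(gram) for gram in ngrams(tokens, n))
        counts.update(terms)                 # count each term once per document
    return {term: c / len(docs) for term, c in counts.items()}

def keep_term(df_by_year, threshold=0.0002, run=3):
    """True if the term's document frequency exceeds 0.02% in at least
    `run` consecutive years."""
    streak = 0
    for df in df_by_year:                    # one value per year, in order
        streak = streak + 1 if df > threshold else 0
        if streak >= run:
            return True
    return False
```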
Normalisation
As Figs. 3 and 4 show, both the number of documents added to DBLP per year and the length of abstracts change substantially over the span of the dataset. Raw term counts are therefore not directly comparable between years, so we work throughout with each term's prevalence: the fraction of a given year's documents that contain it.
Fig. 3: The number of documents added to DBLP per year (1988–2017). There is a modest dip in 2017 because documents are often added to DBLP retroactively and backdated
Fig. 4: Changes in DBLP title and abstract length over time. While the number of characters in titles seems to increase linearly, the length of abstracts fluctuates
Applying MACD
The first parameters we chose were the moving average spans (\(n_1\), \(n_2\), \(n_3\)). In stock market analysis, it is common to use (12, 26, 9) months (Murphy 1999). However, our dataset has just 31 time steps, so moving averages of this length would leave us too little data to work with. Therefore, after some experimentation, we used spans of (6, 12, 3) years.
He and Parker (2010) used the raw value of the histogram as their metric for burstiness. However, when we applied it to our dataset, we found problems of scale: common terms, such as “data”, often show large numerical shifts on the histogram that are nevertheless insignificant when compared to their historical baseline level. Therefore, we introduced a scaling factor.
Initially, we experimented with using the mean or median value of each term’s historical prevalence. However, this biased the metric in favour of new terms which did not exist in the dataset before becoming popular. Then we tried the historical maximum, but found that this produced variable results; the prevalence over time is not generally smooth, so anomalous spikes occur frequently. Finally, we decided on the square root of the historical maximum since this produces more consistent results than the other metrics.
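In code, the resulting burstiness score might look like the following; this is a sketch of the scaling described above, and the exact formulation used in our pipeline may differ in detail:

```python
import numpy as np

def burstiness(latest_histogram, prevalence):
    """Scale the most recent MACD histogram value by the square root of
    the term's historical maximum prevalence, so that large absolute
    shifts in very common terms do not dominate the ranking."""
    return latest_histogram / np.sqrt(np.max(prevalence))
```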
Predicting the future prevalence of terms using MACD features
For each year \(y_i\) in the dataset, we perform the following steps (a code sketch is given at the end of this section):
1. Take the last 20 years of data (\(\mathcal {D}(y_{i-20}, y_i)\)).
2. Apply our burst detection method to \(\mathcal {D}(y_{i-20}, y_i)\) and select all terms above a burstiness threshold.
3. Extract time series features such as the MACD and histogram values, the standard deviation, min and max. This forms the X part of our dataset (Fig. 5).
4. Take a smoothed value of term prevalence during \(y_{i+I}\), where I is the prediction interval, and record whether it is above or below the prevalence during \(y_i\). This forms the Y part of our dataset.
5. Append X and Y onto the data for the previous year.
Fig. 5: The feature extraction process. For instance, for the year 2008, we take data in the range 1998–2008 and extract features to form the X part of our dataset. We then take the prevalence of each term during 2011 as our Y
Fig. 6: Process diagram for the methodology
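The following sketch pulls the five steps together, reusing the macd() and burstiness() functions sketched earlier. Here P is a (terms × years) array of document frequencies; the exact feature set and smoothing used in our pipeline may differ:

```python
import numpy as np
import pandas as pd

def features_and_labels(P, i, interval=3, threshold=0.0010):
    """One reference year's contribution to the training set, where i is
    the column index of the reference year y_i."""
    rows, labels = [], []
    for t in range(P.shape[0]):
        series = pd.Series(P[t, i - 20:i + 1])            # step 1: last 20 years
        macd_line, signal, hist = macd(series)
        if burstiness(hist.iloc[-1], series) <= threshold:
            continue                                       # step 2: bursty terms only
        rows.append([macd_line.iloc[-1], hist.iloc[-1],    # step 3: features (X)
                     series.std(), series.min(), series.max()])
        # Step 4: smoothed prevalence at y_{i+I} (approximated here by a
        # three-year mean) compared against prevalence at y_i (label Y)
        future = P[t, i + interval - 2:i + interval + 1].mean()
        labels.append(int(future > P[t, i]))
    return np.array(rows), np.array(labels)                # step 5: one year's X, Y
```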
Results and discussion
Burst detection
Fig. 7: The terms “novel” and “state [of the] art” have increased substantially in popularity over the last 30 years
Fig. 8: The prevalence of the terms “social network”, “social media”, “twitter” and “facebook” over time
Fig. 9: Changing word use over time as “web site” is replaced with “website”
There are, however, some issues with the terms detected. For example, the top 30 bursty terms over the span of the dataset are:
deep, neural network, neural, convolutional, reserved, right reserved, science bv right, bv right, bv right reserved, convolutional neural, convolutional neural network, elsevier bv, deep learning, elsevier bv right, right, elsevier science bv, science bv, elsevier science, spl, bv, elsevier, cnn, iot, learning, deep neural, deep neural network, elsevier ltd, xml, internet thing, elsevier ltd right
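Many of these entries are copyright boilerplate (e.g. “elsevier bv right reserved”) or near-duplicate variants of the same concept, so the copyright declarations were removed and related terms grouped into clusters. One way to recover such clusters is to group terms whose prevalence curves are highly correlated, for example with SciPy's hierarchical clustering (SciPy 2019); the sketch below is an illustrative approach of this kind, and is our assumption rather than the exact procedure used:

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_terms(P, threshold=0.5):
    """Group terms whose yearly prevalence curves are highly correlated.
    P is a (terms x years) array; returns one cluster label per term."""
    distances = pdist(P, metric='correlation')   # 0 = identical trend lines
    tree = linkage(distances, method='average')
    return fcluster(tree, t=threshold, criterion='distance')
```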
Table 1: The 114 clusters of the burstiest terms, after copyright declarations had been removed
- journal economic, classification number, economic literature classification, journal economic literature, literature classification number, economic literature, literature classification
- non orthogonal, multiple access noma, non orthogonal multiple, access noma, orthogonal multiple, noma, orthogonal multiple access
- short term, term memory lstm, long short, memory lstm, term memory, short term memory, long short term, lstm
- available http github, com, github com, http github com, availability implementation, http github, github
- cnn, convolutional neural, convolutional, network cnn, neural network cnn, convolutional neural network
- service oriented architecture, soa, web service, service oriented, oriented architecture
- network cnns, neural network cnns, deep convolutional, deep convolutional neural, cnns
- router, relay, throughput, qos, wireless network, traffic, ip, aware, packet, end end
- expression data, gene expression, protein, microarray, gene, gene expression data
- wireless sensor, wireless, node, sensor, sensor network, wireless sensor network
- task, efficient, novel, paper propose, network, proposed, propose, different
- fifth, 5g, 5g network, fifth generation, generation 5g, fifth generation 5g
- mmwave, millimeter, wave mmwave, millimeter wave, millimeter wave mmwave
- particle swarm, particle swarm optimization, swarm, swarm optimization
- software defined, software defined networking, sdn, defined networking
- cognitive radio network, cognitive, cognitive radio, primary user
- smartphones, android, app, smartphone, tablet, smart phone, apps
- massive, massive multiple input, massive mimo, massive multiple
- dnn, deep, deep learning, deep neural network, deep neural
- iot, internet thing iot, internet thing, thing, thing iot
- show proposed, result show, simulation result, simulation
- embedding, word embeddings, embeddings, word embedding
- recurrent neural network, recurrent neural, recurrent
- 0point18 spl, mu cmos, mu, spl mu cmos, spl, spl mu
- content, technology, service, web, internet, online
- wireless mesh network, mesh network, wireless mesh
- network architecture, neural network architecture
- differentiated service, differentiated, diffserv
- business, electronic commerce, market, commerce
- antenna, fading, fading channel, mimo, channel
- ad hoc network, hoc, ad, hoc network, ad hoc
- cmos, 0point18, mum, 0point18 mum, mum cmos
- acm subject, acm subject classification
- cloud service, cloud, cloud computing
- deep learning method, learning method
- reinforcement, reinforcement learning
- network rnn, rnn, neural network rnn
- deep learning model, learning model
- learning, machine, machine learning
- convolutional network, convolution
- facebook, twitter, social medium
- semantic web, ontology, semantic
- energy, efficiency, significant
- mobile ad hoc, mobile ad, manet
- peer network, peer peer network
- intranet, knowledge management
- originality value, originality
- network trained, deep network
- gan, generative, adversarial
- state art method, art method
- virtual reality, virtual, vr
- iot device, iot application
- xml document, xml data, xml
- lte, 3g, wimax, small cell
- multicore, multi core, gpu
- portal, keywords, metadata
- streaming, poster, session
- 17point00 ieee, 17point00
- analytics, data analytics
- warehouse, data warehouse
- wearable device, wearable
- overlay, overlay network
- trained, train, training
- cdma, turbo, space time
- representation learning
- smart grid, grid, smart
- 802point11, ieee, wlan
- learned, learns, learn
- neural, neural network
- social, social network
- nfv, network function
- semantic segmentation
- state, state art, art
- sub, baseline, sparse
- peer peer, p2p, peer
- web 2point0, 2point0
- convolutional layer
- data mining, mining
- dataset, datasets
- soc, work present
- security, attack
- uml, corba, java
- wsn, sensor node
- rfid, bluetooth
- grid computing
- mobile, device
- remote sensing
- site, web site
- word embedding
- big, big data
- crowdsourcing
- loss function
- spl time, sup
- deep feature
- mobile agent
- propose deep
- autoencoder
- cross layer
- feature map
- health care
- pre trained
- smart city
- autonomic
- avc, h264
- cnn model
- key word
- sub spl
- sub sub
- kinect
- ar iv
- drone
- spark
- fog
- owl
Fig. 10: A sample of the burstiest clusters over time, ordered approximately by the date at which the bulk of their activity occurred. Note the different scales of the subgraphs
In Fig. 10, we notice that many of the later bursts seem to involve deep learning in some way. The earlier ones are more diverse. There are a number of different growth patterns. “Security” undergoes a nearly linear increase, as does “dataset”. Others peak twice, such as “neural networks” and “virtual reality”. This ties in neatly with what we know of the history of these two ideas: the first consumer VR headsets were in the headlines in the 90s (Kahaner 1994), and a number of neural network breakthroughs happened in a similar time period (Rumelhart et al. 1986). Some of the terms reach peak popularity, then persist, such as “data mining” and “gene expression”. Others fall out of favour fairly swiftly, such as “intranet” and “web 2.0”. We note that this does not mean that these concepts are no longer used, only that these particular terms no longer find their way into titles and abstracts.
Prediction
As described in “Predicting the future prevalence of terms using MACD features” section, we aim to predict whether a given term will rise or fall in popularity after a time interval I. Since the classes (rise, fall) are unbalanced, we subsampled the majority class, and trained a random forest classifier on the data. We chose the number of trees and the maximum depth by considering the training/testing error, then experimented with a range of prediction intervals (1–5 years) and burstiness thresholds (0.0006–0.0016).
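A minimal sketch of this training procedure follows, reusing the X, y arrays produced by the feature extraction sketch above; the hyperparameter values shown are placeholders rather than the tuned ones:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def subsample_majority(X, y, seed=0):
    """Balance the rise/fall classes by subsampling the larger one."""
    rng = np.random.default_rng(seed)
    rise, fall = np.where(y == 1)[0], np.where(y == 0)[0]
    n = min(len(rise), len(fall))
    keep = np.concatenate([rng.choice(rise, n, replace=False),
                           rng.choice(fall, n, replace=False)])
    return X[keep], y[keep]

X_bal, y_bal = subsample_majority(X, y)
clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
scores = cross_val_score(clf, X_bal, y_bal, cv=10, scoring='f1')
print(f"F1: {scores.mean():.2f} +/- {scores.std():.2f}")
```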
Fig. 11: Choosing a prediction interval. We vary the interval I and measure the F1 score of the classifier (how often it correctly predicts whether terms will rise or fall) for a number of different burstiness thresholds (see legend). Optimal performance is reached when the prediction is made 3–4 years into the future. The error bars represent the standard deviation over 10 folds
Table 2: The effect of changing the burstiness threshold on classifier accuracy
Burst threshold | Dataset size (terms) | Accuracy | F1 |
---|---|---|---|
0.0006 | 56136 | 0.71 ± 0.01 | 0.72 ± 0.01 |
0.0008 | 25570 | 0.75 ± 0.01 | 0.75 ± 0.01 |
0.0010 | 12978 | 0.79 ± 0.01 | 0.79 ± 0.01 |
0.0012 | 7450 | 0.82 ± 0.01 | 0.82 ± 0.01 |
0.0014 | 4886 | 0.82 ± 0.02 | 0.83 ± 0.02 |
0.0016 | 3332 | 0.83 ± 0.02 | 0.84 ± 0.02 |
Increasing the burstiness threshold increases the performance of the classifier substantially. However, thresholding this way comes at the cost of the amount of data available. At the highest threshold, there are just over 3000 terms, some of which describe the same concept (e.g. “convolutional neural”, “convolutional neural network”). The prediction interval also matters. When we vary both parameters together, we see that performance is highest when the prediction is made 3 years into the future.
Prediction of term prevalence in 2020
Table 3: Predicting whether bursty computer science terms will rise or fall in popularity in the future
Prediction | Terms |
---|---|
Predicted to be more prevalent in abstracts in 2020 | novel, state [of the] art, machine learning, big data, dataset, http://github.com, neural, deep neural network, convolutional neural network (cnn), convolution, recurrent neural network, neural network architecture, deep learning model, autoencoder, learn, train, training, adversarial, deep learning method, long short term memory (lstm), pre-trained, word embeddings, internet [of] things (iot), iot device, iot application, sensor, wearable, cloud, fog, smart, vr, fifth generation (5g), millimetre wave (mmwave), mu, spl, massive mimo, software defined (networking) (sdn), end-to-end |
Predicted to be less prevalent in abstracts in 2020 | efficient, originality value, different, semantic web, ontology, science, social media, social network, twitter, analytics, web service, web site, service oriented, cloud computing, 5g network, mobile, smartphone, tablet, android, app, network, wireless mesh network, wireless sensor network, grid computing, node, electronic commerce, cognitive radio, manet, web 2.0, ad hoc network, peer [to] peer (p2p), parallel, differentiated, packet, ip, relay, channel, cross layer, lte, xml, uml, ion, h264, java, sup, cdma, kinect, expert system, microarray, gene expression |
Most machine learning terms are expected to rise, while some web, networking and social media terms are expected to fall. Encouragingly, the only overlap between the two groups was “5g” and “fifth generation” in the rising group, and “5g network” in the falling group.
Limitations
Using titles and abstracts only: While abstracts are much more accessible than the full text of papers, they give us a somewhat limited view of scientific research. Terms that have become less common in abstracts may simply have moved to the methods sections of papers, as they come to be seen as mature technologies that can be used as tools (e.g. “cloud computing”, “xml”, “java”). When abstracts alone are used, there is no way to distinguish between these terms and ideas that have genuinely fallen out of use.
Validation: It is not trivial to validate the list of bursty terms in Fig. 10. So far as we are aware, there is no gold-standard list of “hot topics” covering the last thirty years of computer science. The scope of the dataset is also quite large; validation by a domain expert would be likely to have low coverage over the different sub-disciplines.
Historical data: The burst detection method we have used requires a span of historical data to detect bursts. This means that it cannot effectively detect bursts in the earliest years of the dataset.
The trade-off between accuracy and dataset size: By choosing a higher burstiness threshold, we can obtain higher accuracy (in the region of 84%), but only at the cost of discarding most of our data.
Scientific progress can happen without warning: There is no way to predict the future prevalence of terms that have not yet appeared in our dataset, and some trends grow swiftly and suddenly; see “big data”, “Kinect” and “smart grid”.
Conclusion
We have explored a stock market-inspired burst detection algorithm, and used it to find bursty terms in over thirty years of computer science abstracts. These terms represent a snapshot of computer science research over the years, from Java, e-commerce and peer-to-peer networking, to fog computing, 5G, word embeddings and deep learning. We see terms that have peaked twice, such as neural networks and virtual reality, and terms which have experienced a linear increase in popularity, such as “novel”. Most interestingly though, we find that many of our terms display a characteristic life cycle in their popularity over time, and note that it shares some similarities with the famous Gartner hype cycle (Fenn and Raskino 2008). Our classifier, which is, to our knowledge, the first built using only bursty terms, is able to predict whether terms will rise or fall in popularity with accuracy in the region of 80%.
Acknowledgements
This research was supported by the Manchester Centre for Doctoral Training in Computer Science, EP/I028099/1.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
References
- Allan, J., et al. (1998). Topic detection and tracking pilot study final report. In Proceedings of the DARPA broadcast news transcription and understanding workshop (pp. 194–218).
- Allen Institute for Artificial Intelligence. (2015). Semantic Scholar. Retrieved April 13, 2019 from https://www.semanticscholar.org/.
- Appel, G. (2005). Technical analysis: Power tools for active investors. Upper Saddle River: FT Press.
- Balili, C., Segev, A., & Lee, U. (2017). Tracking and predicting the evolution of research topics in scientific literature. In 2017 IEEE international conference on big data (big data) (pp. 1694–1697).
- Bird, S., Klein, E., & Loper, E. (2009). Natural language processing with Python. Sebastopol: O'Reilly Media Inc.
- Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993–1022.
- Bornmann, L., & Mutz, R. (2015). Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. Journal of the Association for Information Science and Technology, 66(11), 2215–2222.
- Cavacini, A. (2015). What is the best database for computer science journal articles? Scientometrics, 102(3), 2059–2071.
- Chen, H., Engkvist, O., Wang, Y., Olivecrona, M., & Blaschke, T. (2018). The rise of deep learning in drug discovery. Drug Discovery Today, 23(6), 1241–1250.
- Dambre, J., Dieleman, S., & Willett, K. W. (2015). Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450(2), 1441–1459.
- DBLP. (2019). DBLP bulk download. Retrieved April 13, 2019 from https://dblp.uni-trier.de.
- Diao, Q., Jiang, J., Zhu, F., & Lim, E.-P. (2012). Finding bursty topics from microblogs. In Proceedings of the 50th annual meeting of the association for computational linguistics: Long papers-volume 1, ACL '12 (pp. 536–544).
- Fenn, J., & Raskino, M. (2008). Mastering the hype cycle: How to choose the right innovation at the right time. Gartner series. Brighton: Harvard Business Press.
- Fung, G. P. C., Yu, J. X., Yu, P. S., & Lu, H. (2005). Parameter free bursty events detection in text streams. In Proceedings of the 31st international conference on very large data bases, VLDB '05 (pp. 181–192). VLDB Endowment.
- Griffiths, T. L., & Steyvers, M. (2004). Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1), 5228–5235.
- He, D., & Parker, D. S. (2010). Topic dynamics: An alternative model of bursts in streams of topics. In Proceedings of the 16th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 443–452).
- He, D., & Parker, D. S. (2011). Learning the funding momentum of research projects. In Advances in knowledge discovery and data mining (pp. 532–543).
- Investopedia. (2019). How is the exponential moving average (EMA) formula calculated? Retrieved April 13, 2019 from www.investopedia.com/technical-analysis-basic-education-4689655.
- Kahaner, D. (1994). Japanese activities in virtual reality. IEEE Computer Graphics and Applications, 14(1), 75–78.
- Kleinberg, J. (2002). Bursty and hierarchical structure in streams. In Proceedings of the eighth ACM SIGKDD international conference on knowledge discovery and data mining, KDD '02 (pp. 91–101).
- Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84–90.
- Mane, K. K., & Borner, K. (2004). Mapping topics and topic bursts in PNAS. Proceedings of the National Academy of Sciences, 101(suppl 1), 5287–5290.
- Mathioudakis, M., & Koudas, N. (2010). TwitterMonitor: Trend detection over the Twitter stream. In Proceedings of the 2010 ACM SIGMOD international conference on management of data, SIGMOD '10 (pp. 1155–1158).
- Mei, Q., & Zhai, C. (2005). Discovering evolutionary theme patterns from text: An exploration of temporal text mining. In Proceedings of the eleventh ACM SIGKDD international conference on knowledge discovery in data mining, KDD '05 (pp. 198–207).
- Murphy, J. (1999). Technical analysis of the financial markets: A comprehensive guide to trading methods and applications. New York: New York Institute of Finance.
- National Library of Medicine. (2019). Medical subject headings (MeSH). Retrieved April 13, 2019 from https://www.nlm.nih.gov/mesh/meshhome.html.
- Prabhakaran, V., Hamilton, W. L., McFarland, D., & Jurafsky, D. (2016). Predicting the rise and fall of scientific topics from trends in their rhetorical framing. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 1: long papers) (pp. 1170–1180).
- RePORT. (2018). The research, condition, and disease categorization (RCDC) system. Retrieved April 13, 2019 from https://report.nih.gov/rcdc/process.aspx.
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536.
- SciPy. (2019). Hierarchical clustering (scipy.cluster.hierarchy). Retrieved April 13, 2019 from https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html.
- Steyvers, M., Smyth, P., Rosen-Zvi, M., & Griffiths, T. (2004). Probabilistic author-topic models for information discovery. In Proceedings of the tenth ACM SIGKDD international conference on knowledge discovery and data mining, KDD '04 (pp. 306–315).
- Stroup, D., David Williamson, G., Herndon, L. J., & Karon, J. (1989). Detection of aberrations in the occurrence of notifiable diseases surveillance data. Statistics in Medicine, 8, 323–329.
- Takahashi, Y., Utsuro, T., Yoshioka, M., Kando, N., Fukuhara, T., Nakagawa, H., & Kiyota, Y. (2012). Applying a burst model to detect bursty topics in a topic model. In H. Isahara & K. Kanzaki (Eds.), Advances in natural language processing (pp. 239–249). Berlin: Springer.
- Wu, Y., Venkatramanan, S., & Chiu, D. (2018). A population model for academia: Case study of the computer science community using DBLP bibliography 1960–2016. IEEE Transactions on Emerging Topics in Computing. https://doi.org/10.1109/TETC.2018.2855156.
- Zhang, X., & Shasha, D. (2006). Better burst detection. In 22nd international conference on data engineering (ICDE'06) (pp. 146–146).
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.