Introduction

In 2012, a group of scientists from the University of Toronto built a convolutional neural network (CNN) and applied it to a well-known image classification task. Their paper (Krizhevsky et al. 2017) sparked a revolution in the field of deep learning; an explosion of popularity and interest that is still continuing today. CNNs have since spread outwards from their original domain and can be found in diverse fields, such as biomedicine (Chen et al. 2018) and astronomy (Dambre et al. 2015).

If we were to imagine the trajectory of the phrase “convolutional neural network” in terms of its popularity over time, we might imagine an exponential curve upwards. And indeed, when we search for it in a large database of computer science abstracts (DBLP 2019), that is what we see (Fig. 1). This behaviour—a sudden and sustained rise in popularity relative to some historical baseline level—is referred to as a burst in most of the literature (Kleinberg 2002).

Fig. 1

Number of abstracts containing the term “Convolutional neural network(s)” over time (1988–2017) in DBLP. While there were isolated mentions of CNNs in the 90s and 00s, the topic underwent exponential growth in popularity beginning in the year 2012, which continues to the present

Being able to detect bursty terms automatically in scientific literature would have a number of applications. Firstly, early detection might allow funding agencies and publishers to take note of the most promising new ideas and channel new support that way. For newcomers to a field and researchers in the sociology of science, automatically listing the hottest topics over time would give an instant snapshot of the evolution of the field. Finally, compiling a corpus of historical bursty terms over time might make it possible to characterise the life cycles that new ideas go through as they develop.

In this paper, we explore a burst detection methodology that requires little tuning and can be used on a large dataset. We build on work by He and Parker (2010), who used a technique from stock market analysis to detect bursty keywords in PubMed, a very large online bibliography of biomedical citations.

This work has three main research objectives:

  • To adapt an existing burst detection methodology to the free text of a large corpus of computer science abstracts. To our knowledge, this is the first use of this method on free text rather than a controlled vocabulary of keywords.

  • To report a list of historical and current bursts in the computer science literature.

  • To predict the future prevalence of existing bursty terms using machine learning.

Background

Burst detection

The problem of tracking topics in time-ordered corpora was formalised by a DARPA-sponsored initiative under the name Topic Detection and Tracking (TDT). Early research focused on segmenting a corpus into topics, finding the first mention of each topic and then tracking and plotting its popularity over time (Allan et al. 1998). As computer hardware improved, it became common to use Latent Dirichlet Allocation (LDA) for this kind of topic modelling (Blei et al. 2003). A typical method involves splitting the corpus into time steps, finding topics in each chunk and then linking them together across time steps based on some measure of similarity (Griffiths and Steyvers 2004; Steyvers et al. 2004; Mei and Zhai 2005). The prevalence of each topic can then be tracked over time and bursty periods identified. However, this comes with a number of disadvantages, such as the lack of interpretability of the results and the difficulty in coherently linking LDA topics together between subsequent time steps.

The opposite approach is to first identify the bursty terms in a dataset, and then cluster them together into topics, using, for instance, the burst detection algorithm of Kleinberg (2002). Originally developed to detect topics in email chains, Kleinberg’s method assumes that terms in documents are emitted by a two-state automaton. The automaton may spontaneously transition from a non-bursty state to a bursty state, or vice versa. Variants of this have been applied across several domains: Diao et al. (2012) and Mathioudakis and Koudas (2010) used it to detect bursty topics in Twitter data, the latter in real time, while Fung et al. (2005) and Takahashi et al. (2012) applied it to news streams.

However, when it comes to scientific literature, there are a few reasons why Kleinberg’s method is a less natural fit. He and Parker (2010) point out that, unlike Tweets and news articles, scientific papers tend to enter the world in batches, such as when a new edition of a journal or the proceedings of a conference is published. This violates Kleinberg’s underlying assumption that new items enter the dataset in a continuous fashion. It also forces us to impose longer time steps, such as years rather than seconds. This causes a second problem: the quantity of data available.

While there are several large open-access corpora of scientific abstracts, such as PubMed (biomedicine), arXiv (physics and computer science), Semantic Scholar (assorted) and DBLP (computer science), all of them cover relatively short intervals compared to the length of the time steps. Even in the best case scenario, we are likely to have less than a hundred years' worth of usable data, which means approximately a hundred time steps. There has also been a vast change in the underlying landscape over the span of the dataset, because science in general (Bornmann and Mutz 2015) and computer science in particular (Wu et al. 2018) have both seen strong and sustained growth over the last century. By contrast, unless one collects many years of Twitter data, the size and characteristics of the dataset do not change substantially over time.

Several burst detection methods from other domains have been proposed for use on scientific documents. For instance, Stroup et al. (1989) take inspiration from epidemiology, and Zhang and Shasha (2006) take inspiration from gamma rays. However, of particular interest to us is the work of He and Parker (2010), which takes a popular technique from stock market analysis and applies it to PubMed data. This is an attractive idea: a great deal of work has been done in analysing stocks, because some people are highly motivated to predict what prices will do in later time steps.

Moving average convergence divergence

The basic item in the toolkit of the stock market analyst is the moving average. While a moving average necessarily lags behind real time data, it can smooth out random fluctuations to reveal underlying trends. The simple moving average (SMA) of a time series is the sum of its values in a set interval (called the span of the SMA), divided by the width of that interval. More advanced methods use exponential moving averages (EMAs), which assign more weight to more recent data.

For a given span, n, the exponential moving average of a time series, y(t), is (Investopedia 2019):

$$\begin{aligned} \text {EMA}(t_i) = \text {EMA}(t_{i-1}) + \frac{2}{n+1}(y(t_i)-\text {EMA}(t_{i-1})) \end{aligned}$$
(1)
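
As a minimal illustration, Eq. (1) can be computed with a short recursive loop. The paper does not state how the first value of the average is seeded, so initialising it with the first observation is an assumption in the sketch below.

```python
import numpy as np

def ema(y, span):
    """Exponential moving average (Eq. 1) of a 1-D series y with a given span."""
    alpha = 2.0 / (span + 1)            # weighting factor from Eq. (1)
    out = np.empty(len(y), dtype=float)
    out[0] = y[0]                       # assumed seed: the first observation
    for i in range(1, len(y)):
        out[i] = out[i - 1] + alpha * (y[i] - out[i - 1])
    return out
```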

Buy and sell signals for stocks can be generated by taking two EMAs with different spans and seeing where they cross. The moving average with the longer span responds more slowly to new data, so when there is a sudden change in the price of the stock, the shorter moving average will cross it in an upwards or downwards direction (Murphy 1999) (Fig. 2b). Moving Average Convergence Divergence (MACD) takes this idea a step further (Appel 2005). The long EMA is subtracted from the short EMA to give the MACD line. This MACD line is then itself averaged, to create a fourth time series called the signal line (Fig. 2c). The difference between the MACD and signal lines is called the histogram of the data, and can be thought of as an approximate measure of curve acceleration. When the histogram is positive, the price of the stock is accelerating upwards. When it is negative, the reverse is happening.

Fig. 2

Illustration showing how moving averages can be used to detect changepoints. a Shows how the crossover of two simple moving averages, one with a span of 12 and another with a span of 6, can generate a sell signal, b shows the same phenomenon, but with exponential moving averages, c shows the MACD graph of the time-series; note how the sell signal comes earlier than in (a) and (b)

Here we introduce these notions more formally. For two moving averages with spans \(n_1\) and \(n_2\):

$$\begin{aligned} \text {MACD}[n_1, n_2] = \text {EMA}[n_1] - \text {EMA}[n_2] \end{aligned}$$
(2)

The signal line is the MACD line, smoothed with an EMA of span \(n_3\):

$$\begin{aligned} \text {Signal}[n_1, n_2, n_3] = \text {EMA}[n_3](\text {MACD}[n_1, n_2]) \end{aligned}$$
(3)

And the histogram is:

$$\begin{aligned} \text {Histogram}[n_1, n_2, n_3] = \text {MACD}[n_1, n_2] - \text {Signal}[n_1, n_2, n_3] \end{aligned}$$
(4)

This is the technique that He and Parker (2010) applied to PubMed data from 1950 to 2008. Instead of stock prices, they looked at the frequency of MeSH terms assigned to scientific papers over time [MeSH terms, or Medical Subject Headings, are a hierarchically-ordered taxonomy maintained by the National Library of Medicine (National Library of Medicine 2019)]. Their model was evaluated by comparing their detected bursts to real events; for instance, they found that “Morphine” was a popular keyword during the Vietnam war, and “Sexually Transmitted Diseases” was a popular keyword during the AIDS crisis. They also compared several of their bursts to the results reported by Mane and Borner (2004) in a similar burst detection study. He and Parker found that both methods identified the same bursty periods for the terms.

Using a controlled vocabulary rather than free text has advantages in that it ensures the terms in the vocabulary will be meaningful, unique and free of spelling errors. However, a controlled vocabulary will necessarily lag behind the forefront of scientific development because new terms can only be added after they have risen to prominence. Additionally, keywords are not available for all datasets, limiting the scope of the method. For this reason, we use the free text of abstracts and titles.

Prediction

As well as detecting historically bursty terms, we would like to be able to predict whether they will become more or less prevalent in the future. Some related work has been done on this task. Prabhakaran et al. (2016) took 2.4 million abstracts from Web of Science and used an implementation of LDA to identify 500 topics. They then used a logistic regression classifier to predict whether their topics would rise or fall in popularity over subsequent time steps, yielding an accuracy of 70%. Balili et al. (2017) looked at a slightly different task; they took 21.2 million PubMed abstracts, clustered their MeSH keywords based on co-occurrence in article annotations and then trained a gradient-boosted trees classifier to predict whether individual clusters would survive or dissolve. Finally, in their 2011 paper, He and Parker took a database of approximately 100,000 Californian grant abstracts which were pre-labelled with “project terms” (keywords from a defined vocabulary of biomedical concepts (RePORT 2018) that were automatically assigned to grant applications) (He and Parker 2011). They calculated the MACD and histogram values for each term, then used various classifiers to predict whether the histogram itself would rise or fall in the future. While this is not exactly the same as predicting whether term prevalence would increase, their best classifier had an accuracy of 88%.

Materials and method

The code used in this section can be found on GitHub at https://github.com/etattershall/burst-detection

Dataset

We use a corpus gathered from DBLP, which is a large computer science bibliography hosted by Trier University in Germany. DBLP is considered to be reasonably comprehensive in its coverage of the field: Cavacini (2015) compared a number of bibliographies and found that DBLP had the greatest number of unique computer science articles indexed. It is freely available to download, either directly (DBLP 2019), or via Semantic Scholar (Allen Institute for Artificial Intelligence 2015).

After downloading, cleaning, and filtering out foreign language abstracts, we had a dataset of 2.6 million articles spanning the years 1988–2017. For each article, we combined title and abstract to form a document, then used Python’s Natural Language Toolkit (NLTK) (Bird et al. 2009) to tokenise, lemmatise and remove a short list of standard English stopwords.
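
A minimal sketch of this preprocessing step is shown below. The specific NLTK components (the WordNet lemmatiser and NLTK's standard English stopword list) are assumptions, since the paper does not name them explicitly.

```python
import string

from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

LEMMATIZER = WordNetLemmatizer()
STOPWORDS = set(stopwords.words('english'))

def preprocess(title, abstract):
    """Combine title and abstract, tokenise, lemmatise and drop stopwords."""
    text = (title + ' ' + abstract).lower()
    tokens = word_tokenize(text)
    return [LEMMATIZER.lemmatize(tok) for tok in tokens
            if tok not in STOPWORDS and tok not in string.punctuation]
```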

From there, we read in the dataset year by year and formed a vocabulary of uni-, bi- and tri-gram terms. We calculated the document frequency of each term in each year, which left us with a 31 (year) \(\times\) 4.1 million (term) matrix. We filtered again, removing the terms that did not occur in more than 0.02% of abstracts for at least three consecutive years. This substantially reduced the amount of noise in the dataset from digitization errors and very rare bi- and tri-gram terms. The final size of our vocabulary was approximately 70,000 terms.
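
The per-year document frequencies and the 0.02%/three-consecutive-years filter can be computed along the following lines. This is an illustrative sketch rather than the authors' exact implementation; in particular the data structures are assumptions.

```python
from collections import Counter

def document_frequencies(docs, max_n=3):
    """Document frequency of every uni-, bi- and tri-gram in a list of
    token lists (one list per document)."""
    counts = Counter()
    for tokens in docs:
        terms = set()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                terms.add(' '.join(tokens[i:i + n]))
        counts.update(terms)          # each document counted at most once per term
    return counts

def filter_vocabulary(df_by_year, docs_per_year, min_frac=0.0002, min_run=3):
    """Keep terms whose document frequency exceeds min_frac (0.02%) of that
    year's documents for at least min_run consecutive years."""
    years = sorted(df_by_year)
    all_terms = set().union(*(df_by_year[y].keys() for y in years))
    vocabulary = set()
    for term in all_terms:
        run = 0
        for y in years:
            if df_by_year[y][term] / docs_per_year[y] > min_frac:
                run += 1
                if run >= min_run:
                    vocabulary.add(term)
                    break
            else:
                run = 0
    return vocabulary
```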

Normalisation

Over the 31-year span of the dataset, the number of documents published each year has risen substantially (Fig. 3). The mean length of titles has increased, while the mean length of abstracts has fluctuated (Fig. 4). This presents a problem: terms later in the dataset are more likely to be flagged as bursty simply because of the underlying growth in the size of the dataset. Therefore, we normalised the document frequency counts twice, first by dividing the data for each year by the total number of documents in that year, and then by dividing by the mean number of tokens per document. This means that each element in the year-term matrix can be viewed as a normalised measure of prevalence.
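
In code, this double normalisation amounts to two element-wise divisions of the year-by-term count matrix; the array layout below (years as rows) is an assumption.

```python
import numpy as np

def normalise(doc_freq, docs_per_year, mean_tokens_per_doc):
    """Turn raw document-frequency counts into normalised prevalence.

    doc_freq:            (n_years, n_terms) matrix of raw counts
    docs_per_year:       (n_years,) total documents published each year
    mean_tokens_per_doc: (n_years,) mean document length in each year
    """
    per_doc = doc_freq / docs_per_year[:, None]       # fraction of documents
    return per_doc / mean_tokens_per_doc[:, None]     # correct for document length
```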

Fig. 3

The number of documents added to DBLP per year (1988–2017). There is a modest dip in 2017 because documents are often added to DBLP retroactively and backdated

Fig. 4

Changes in DBLP title and abstract length over time. While the number of characters in titles seems to increase linearly, the length of abstracts fluctuates

Applying MACD

The first parameter we chose was the length of the moving average spans, (\(n_1\), \(n_2\), \(n_3\)). In stock market analysis, it is common to use (12, 26, 9) months (Murphy 1999). However, our dataset has just 31 time steps, so moving averages of this length would leave us with too little data with which to work. Therefore, after some experimentation, we used (6, 12, 3) years.

He and Parker (2010) used the raw value of the histogram as their metric for burstiness. However, when we attempted to apply it to our dataset, we discovered that there were some issues with scale. Common terms, such as “data”, often showed large numerical shifts on the histogram that are still insignificant when compared to their historical baseline level. Therefore, we introduced a scaling factor.

Initially, we experimented with using the mean or median value of each term’s historical prevalence. However, this biased the metric in favour of new terms which did not exist in the dataset before becoming popular. Then we tried the historical maximum, but found that this produced variable results; the prevalence over time is not generally smooth, so anomalous spikes occur frequently. Finally, we decided on the square root of the historical maximum since this produces more consistent results than the other metrics.

Therefore, for the prevalence \(p(w,t)\) of term w at time t, we have:

$$\begin{aligned} \text {Burstiness}[n_1, n_2, n_3](p(w,t)) = \frac{\text {histogram}[n_1, n_2, n_3](p(w,t))}{\sqrt{\mathrm{max}(p(w,t))}} \end{aligned}$$
(5)
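
Equation (5) then becomes a simple scaling of the histogram. The sketch below reuses the `macd_lines` function from the earlier snippet; the (6, 12, 3) spans are those chosen above.

```python
import numpy as np

def burstiness(prevalence, n1=6, n2=12, n3=3):
    """Burstiness score (Eq. 5) for one term's prevalence series.

    Uses the macd_lines() function sketched earlier.
    """
    _, _, histogram = macd_lines(prevalence, n1, n2, n3)
    return histogram / np.sqrt(np.max(prevalence))
```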

Predicting the future prevalence of terms using MACD features

In order to train a supervised classifier, we need to create a training set, (X, Y), where X is a matrix of terms and features and Y is a binary class indicating whether each term rose or fell in prevalence after a number of years. First, we choose how far into the future we want to predict (e.g. 3 years) and call this the prediction interval I. Then, for each year, \(y_i\), we perform the following steps (a code sketch follows the list):

  1. Take the last 20 years of data (\(\mathcal {D}(y_{i-20}, y_i)\)).

  2. Apply our burst detection method to \(\mathcal {D}(y_{i-20}, y_i)\). Select all terms above a burstiness threshold.

  3. Extract time series features such as the MACD and histogram values, the standard deviation, min and max. This forms the X part of our dataset (Fig. 5).

  4. Take a smoothed value of term prevalence during \(y_{i+I}\) and calculate whether it is above or below the prevalence during \(y_i\). This forms the Y part of our dataset.

  5. Append X and Y onto the data for the previous year.
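
A sketch of this loop is given below, under the assumption that prevalence is stored as a dictionary mapping each term to a numpy array of yearly values. The exact feature set and the smoothing applied to the target year are illustrative; the paper lists MACD and histogram values, standard deviation, min and max.

```python
import numpy as np

def build_training_set(prevalence, interval=3, threshold=0.0012, window=20):
    """Assemble (X, Y) following steps 1-5.

    prevalence: dict of term -> numpy array of yearly prevalence values.
    Reuses the ema(), macd_lines() and burstiness() functions sketched above.
    """
    X, Y = [], []
    n_years = len(next(iter(prevalence.values())))
    for i in range(window, n_years - interval):
        for term, series in prevalence.items():
            recent = series[i - window:i + 1]               # step 1: last `window` years
            if burstiness(recent)[-1] < threshold:          # step 2: burstiness threshold
                continue
            macd, signal, hist = macd_lines(recent, 6, 12, 3)
            X.append([macd[-1], hist[-1],                   # step 3: time series features
                      recent.std(), recent.min(), recent.max()])
            future = ema(series[:i + interval + 1], 3)[-1]  # step 4: smoothed future prevalence
            Y.append(int(future > series[i]))               # 1 = rise, 0 = fall
    return np.array(X), np.array(Y)                         # step 5: accumulate over years
```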

Both He and Parker (2011) and Balili et al. (2017) used a tree-based method to predict whether their clusters or terms would rise or fall in popularity. We follow them, using a random forest classifier, tested via 10-fold cross validation. A process diagram for the full methodology can be found in Fig. 6.
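
A minimal version of this evaluation, using scikit-learn, could look like the following; the hyperparameter values are placeholders, since the paper chose the number of trees and maximum depth by inspecting training and testing error.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X, Y as produced by the training-set construction sketched above.
clf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0)
scores = cross_val_score(clf, X, Y, cv=10, scoring='f1')   # 10-fold cross validation
print(f'F1: {scores.mean():.3f} +/- {scores.std():.3f}')
```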

Fig. 5

The feature extraction process. For instance, for the year 2008, we take data in the range 1998–2008 and extract features to form the X part of our dataset. We then take the prevalence for each term during 2011 as our Y

Fig. 6

Process diagram for the methodology

Results and discussion

Burst detection

As an initial evaluation, we applied the burst detection method to our dataset and sorted the burstiness scores in descending order. Some terms with high burstiness were surprising, such as “novel” and “state [of the] art”. However, when we plotted them on a graph over time, we found that they have indeed become more popular over the span of the dataset (Fig. 7).

Fig. 7

The terms “novel” and “state [of the] art” have increased substantially in popularity over the last 30 years

Social media is another interesting example (Fig. 8). “Social network” began to climb in popularity around 1998, with a growth curve that mirrors that of “social media” 5 years later. “Twitter” and “facebook” climb together, but “facebook” reaches a plateau earlier. We also find some cases where the orthography of words has changed over time. For instance, “web site” peaked in 2001 and was then gradually replaced by “website” (Fig. 9).

Fig. 8

The prevalence of the terms “social network”, “social media”, “twitter” and “facebook” over time

Fig. 9

Changing word use over time as “web site” is replaced with “website”

There are, however, some issues with the terms detected. For example, the top 30 bursty terms over the span of the dataset are:

deep, neural network, neural, convolutional, reserved, right reserved, science bv right, bv right, bv right reserved, convolutional neural, convolutional neural network, elsevier bv, deep learning, elsevier bv right, right, elsevier science bv, science bv, elsevier science, spl, bv, elsevier, cnn, iot, learning, deep neural, deep neural network, elsevier ltd, xml, internet thing, elsevier ltd right

This list contains terms that refer to the same idea, such as “convolutional neural” and “convolutional neural network”. It also contains publishing artefacts such as “elsevier bv right”. This is part of a copyright declaration that was often included at the end of abstracts—e.g. “©1999 Elsevier Science B.V. All rights reserved”. In order to remove these and merge duplicates, we cluster the top 500 bursty terms based on their co-occurrence in abstracts using SciPy’s hierarchical clustering algorithm (SciPy 2019), then manually remove the clusters containing publishing artefacts. This leaves 114 clusters, which are displayed in Table 1.
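
A sketch of this clustering step is given below. The construction of the co-occurrence matrix, the linkage method and the cut threshold are illustrative assumptions; only the use of SciPy's hierarchical clustering is taken from the text.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_terms(cooccurrence, cut=0.9):
    """Cluster terms by co-occurrence using SciPy's hierarchical clustering.

    cooccurrence: symmetric (n_terms, n_terms) matrix counting how often
    pairs of terms appear in the same abstract.
    """
    similarity = cooccurrence / cooccurrence.max()
    distance = 1.0 - similarity
    np.fill_diagonal(distance, 0.0)                    # required by squareform
    condensed = squareform(distance, checks=False)
    Z = linkage(condensed, method='average')
    return fcluster(Z, t=cut, criterion='distance')    # one cluster label per term
```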

Table 1 The 114 clusters of the burstiest terms, after copyright declarations had been removed

To investigate how bursty terms have developed over time, we sort the clusters by year, based on when the bulk of the activity occurred, then manually choose a sample that is fairly evenly spread over the span of the dataset. For each of the 52 chosen clusters, we choose a single representative term, or a term and an acronym, such as [recurrent neural, rnn] for [recurrent neural network, recurrent neural, recurrent, network rnn, rnn, neural network rnn], then track the term over time and display it on a graph (Fig. 10).

Fig. 10

A sample of the burstiest clusters over time, ordered approximately by the date at which the bulk of their activity occurred. Note the different scales of the subgraphs

In Fig. 10, we notice that many of the later bursts seem to involve deep learning in some way. The earlier ones are more diverse. There are a number of different growth patterns. “Security” undergoes a nearly linear increase, as does “dataset”. Others peak twice, such as “neural networks” and “virtual reality”. This ties in neatly with what we know of the history of these two ideas: the first consumer VR headsets were in the headlines in the 90s (Kahaner 1994), and a number of neural network breakthroughs happened in a similar time period (Rumelhart et al. 1986). Some of the terms reach peak popularity, then persist, such as “data mining” and “gene expression”. Others fall out of favour fairly swiftly, such as “intranet” and “web 2.0”. We note that this does not mean that these concepts are no longer used, only that these particular terms no longer find their way into titles and abstracts.

Prediction

As described in “Predicting the future prevalence of terms using MACD features” section, we aim to predict whether a given term will rise or fall in popularity after a time interval I. Since the classes (rise, fall) are unbalanced, we subsampled the majority class, and trained a random forest classifier on the data. We chose the number of trees and the maximum depth by considering the training/testing error, then experimented with a range of prediction intervals (1–5 years) and burstiness thresholds (0.0006–0.0016).
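
The class balancing step can be done with a simple random subsample of the majority class, for example as in the illustrative sketch below.

```python
import numpy as np

def subsample_majority(X, Y, seed=0):
    """Balance rise/fall classes by randomly subsampling the majority class."""
    rng = np.random.default_rng(seed)
    rise = np.where(Y == 1)[0]
    fall = np.where(Y == 0)[0]
    minority, majority = (rise, fall) if len(rise) < len(fall) else (fall, rise)
    keep = np.concatenate([minority,
                           rng.choice(majority, size=len(minority), replace=False)])
    rng.shuffle(keep)
    return X[keep], Y[keep]
```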

The results are shown in Fig. 11, while the effect of the burstiness threshold on dataset size is shown in more detail in Table 2.

Fig. 11

Choosing a prediction interval. We vary the interval I and measure the F1 score of the classifier (in terms of how often it correctly predicted whether terms would rise or fall) for a number of different burstiness thresholds (see legend). We find that optimal performance is reached when the prediction is made 3–4 years into the future. The error bars represent the standard deviation over 10 folds

Table 2 The effect of changing the burstiness threshold on classifier accuracy

Increasing the burstiness threshold increases the performance of the classifier substantially. However, thresholding this way comes at the cost of the amount of data available. At the highest threshold, there are just over 3000 terms, some of which describe the same concept (e.g. “convolutional neural”, “convolutional neural network”). The prediction interval also matters. When we vary both parameters together, we see that performance is highest when the prediction is made 3 years into the future.

Prediction of term prevalence in 2020

To predict the future prevalence of the discovered bursty terms, we chose parameters \(I=3\) and \(B_{\mathrm{pred}} = 0.0012\) and trained our classifier on data from 1988–2014. Given the results in Table 2, we expect this classifier to achieve approximately 81% accuracy. We then selected terms that were above a significance threshold in 2017 and generated predictions of whether their prevalence in abstracts will rise or fall in 2020. Table 3 shows the results.

Table 3 Predicting whether bursty computer science terms will rise or fall in popularity in the future

Most machine learning terms are expected to rise, while some web, networking and social media terms are expected to fall. Encouragingly, the only overlap between the two groups was “5g” and “fifth generation” in the rising group, and “5g network” in the falling group.

Limitations

There are several limitations to this work:

  • Using titles and abstracts only: While abstracts are much more accessible than the full text of papers, they give us a somewhat limited view of scientific research. Terms that have become less common in abstracts may have moved to the methods section of papers, as they are seen as more mature technologies that can be used as tools—e.g. “cloud computing”, “xml”, “java”. When abstracts alone are used, there is no way to distinguish between these terms and ideas that have genuinely fallen out of use.

  • Validation: It is not trivial to validate the list of bursty terms in Fig. 10. So far as we are aware, there is no gold standard list of “hot topics” that covers the last thirty years of computer science. The scope of the dataset is also quite large; validation by a domain expert would be likely to have low coverage over the different sub-disciplines.

  • Historical data: The burst detection method we have used requires a span of historical data to detect bursts. This means that it cannot effectively detect bursts in the earliest years of the dataset.

  • The trade-off between accuracy and dataset size: By choosing a higher burstiness threshold, we can achieve high accuracy (in the region of 84%), but only at the cost of discarding most of our data.

  • Scientific progress can happen without warning: There is no way to predict the future prevalence of terms that have not yet appeared in our dataset. Some trends grow swiftly and suddenly; see “big data”, “Kinect” and “smart grid”.

However, despite these limitations, this method has a number of strengths. It can be applied to datasets for which the authors have little domain knowledge to create a snapshot of the history of the field. It also has an obvious use to funding agencies and researchers exploring the research landscape.

Conclusion

We have explored a stock market-inspired burst detection algorithm, and used it to find bursty terms in over thirty years of computer science abstracts. These terms represent a snapshot of computer science research over the years, from Java, e-commerce and peer-to-peer networking, to fog computing, 5G, word embeddings and deep learning. We see terms that have peaked twice, such as neural networks and virtual reality, and terms which have experienced a linear increase in popularity, such as “novel”. Most interestingly though, we find that many of our terms display a characteristic life cycle in their popularity over time, and note that it shares some similarities with the famous Gartner hype cycle (Fenn and Raskino 2008). Our classifier, which is, to our knowledge, the first built using only bursty terms, is able to predict whether terms will rise or fall in popularity with accuracy in the region of 80%.