Economic policy statements, social media, and stock market uncertainty: An analysis of Donald Trump’s tweets

This paper investigates the impact of economic policy communication via social media on stock market uncertainty. Using a sample of Donald Trump's tweets, it identifies and clusters policy-related tweets with a double unsupervised machine learning approach based on natural language processing. The response of uncertainty to these tweets is then estimated using an event-study design. Tweets about foreign policy and trade, monetary policy, and immigration policy significantly increase market uncertainty as measured by the VIX. Independent of their content, the frequency of tweets and the intensity of tweet sharing also matter for stock market uncertainty. Most of the effects are transitory, reaching their peaks around two hours after the publication of tweets.


Introduction
Social networks have rapidly gained relevance as a source of information for stock market participants when forming expectations about future events, especially when the government is involved. Enikolopov et al. (2018) show how firms mentioned in Alexey Navalny's blog posts, which uncovered corruption scandals in Russian state-controlled companies, subsequently exhibit negative market valuations 1 . Further research based on Twitter data, such as Yang et al. (2015), Piñeiro-Chousa et al. (2016), and Schnaubelt et al. (2020), shows that the continuous flow of information through this social network coincides with comovements in market indicators and asset prices. Economic and financial media coverage of Donald Trump's social media behavior reflects the general concern about the adverse effects of the misuse of official social media channels, especially when used to disclose information regarding future national economic policies 2 . This case provides an opportunity to understand the increasing role of social networks in real-time economic policy communication and its immediate effect on stock market uncertainty.
To uncover this relationship, I retrieve tweets and retweets from Donald Trump's Twitter account for the period between December 31, 2015, and October 21, 2019, with their respective metadata. 3 Text data is aggregated in five-minute intervals to match the frequency of the market uncertainty measure, here given by the closing value of the Chicago Board Options Exchange (CBOE) Volatility Index (VIX). Thereafter, I develop a double unsupervised machine learning approach, similar to Bybee et al. (2020), to identify and cluster policy-related statements from Trump's tweets and retweets. This approach is based on two algorithms. The first retrieves a set of possible topics using Latent Dirichlet Allocation (LDA). The second clusters topics hierarchically based on a measure of semantic distance. I aggregate similar economic-related topics into cluster topics and label them according to their implicit economic policy issue. The high-frequency effect of policy-related tweets on market uncertainty is estimated in an event study context, as in Beechey and Wright (2009). For this, I generate two sets of indicator variables: the first set is based on policy-related cluster topics, hereafter policy statement events; the second set is based on content-independent measures, such as posting or retweeting frequency. Finally, I estimate the effect of selected events on the change in the VIX over an estimation window covering from 15 minutes before the event up to five hours afterward.
The results for policy statement events suggest that tweets about foreign policy, trade, and immigration have a statistically significant uncertainty-promoting effect. Tweets regarding monetary policy, with high levels of sharing (retweeting), show the highest estimated impact on stock market volatility. Tweets about fiscal and health care policies did not increase perceived uncertainty. Content-independent events regarding unexpected changes in disclosure frequency and sharing levels have a statistically significant positive impact on market volatility. The intensity of these effects increases as the underlying event becomes more unusual. Most of the estimated effects are short-lived: significant responses to statement and content-independent events appear between one and four hours after the occurrence of the event, except for immigration, which becomes significant at the end of the estimation window.
This paper complements and adds evidence to similar studies based on Trump's tweet data, such as Colonescu et al. (2018), who study the effect of tweets on foreign exchange markets; Bianchi et al. (2019), who provide evidence on the impact of tweets on Fed funds futures; and Fan et al. (2020), who study firm-level exposure around political events using a measure of (dis)agreement among social media users who jointly mention Trump and firms from the S&P 500 composite. In terms of data frequency, this paper follows Kinyua et al. (2021), who document the intraday response of the S&P 500 and DJIA indexes to Trump's tweets using sentiment analysis. Studies concerning uncertainty measures, such as Baker et al. (2019), Burggraf et al. (2020a), and Burggraf et al. (2020b), also demonstrate the direction of the causal relationship between Trump's announcements regarding trade policy and an increase in stock market volatility. Klaus and Koser (2020) show a similar effect for European financial markets.
However, this paper deviates from previous related literature in three aspects. First, the estimated uncertainty effect is not based on a particular set of tweets including a word or single estimated topic. Instead, similar topics are clustered in broad but recognizable policy categories. Second, the high-frequency evolution of the uncertainty response to policy statements and to different levels of disclosure and sharing are described. Finally, it combines content-dependent and content-independent information to identify particular scenarios in which policy statements generate more uncertainty.
It is possible to circumscribe this paper within two similar strands of literature: financial market reactions to news or announcements and financial market reactions to policy uncertainty. Authors who have provided evidence on the sensitivity of asset prices to the disclosure of unexpected macroeconomic indicators and FOMC statements include Beechey and Wright (2009) for Treasury inflation-protected securities and Lapp and Pearce (2012) for federal funds rate futures prices. More recently, Gilbert et al. (2017) showed that the heterogeneity in asset price responses depends on the forecasting power of the announcement.
The literature on asset volatility, such as Graham et al. (2003), suggests that announcements regarding employment, NAPM (manufacturing), producer price indices, import and export price indices, and the employment cost index have a significant impact on implied volatility and thus on stock valuation. In terms of index volatility, Clements et al. (2007) show that the VIX falls significantly on FOMC meeting days. Bomfim (2003) and Lee and Ryu (2019) reaffirm the importance of the timing of announcements for volatility dynamics. Clements et al. (2007) link FOMC preannouncement periods to relatively calm levels of conditional volatility. Finally, Lee and Ryu (2019) suggest that the effects of announcements, especially monetary policy ones, are also more pronounced in crisis and postcrisis periods than in the precrisis period.
Early literature on policy uncertainty, such as Bittlingmayer (1998) and Voth (2002), provides evidence of financial market reactions to political and policy uncertainty from a historical perspective using German and U.S. data, respectively. Liu and Zhang (2015) and Baker et al. (2016) document the predictive power of the Economic Policy Uncertainty (EPU) Index when forecasting realized and implied volatility of the S&P 500 index. The positive relationship among political uncertainty, economic policy, and options volatility is documented in Pástor and Veronesi (2013), Kelly et al. (2016) for political events, 4 and Amengual and Xiu (2018) for monetary policy uncertainty. Finally, it is important to consider how firms' behavior changes in high economic policy uncertainty scenarios. Goel and Nelson (2021) and William and Fengrong (2022) document a negative effect on technological innovation at the country and industry levels, respectively. This effect is heterogeneous and depends on country and firm characteristics; for example, Goel and Nelson (2021) find that R&D-oriented firms may introduce innovations to hedge against economic policy uncertainty. William and Fengrong (2022) and Liu and Ma (2020) show that in countries with high levels of trade and financial market liberalization, the negative effect of policy uncertainty on technological innovation is milder.

This paper is structured as follows. Section 2 presents the data. Section 3 describes the policy statements identification approach and the event study design. Section 4 summarizes the results. Section 5 concludes.

Data
I use three data sources for the empirical analysis: Donald Trump's tweets, the closing prices of the CBOE VIX at a five-minute frequency, and the daily U.S. EPU index. The tweet sample covers the period between December 31, 2015, and October 21, 2019. I retrieved Twitter data from the @realdonaldtrump account using the Twitter Application Programming Interface (API) and from trumptwitterarchive.com. Each observation or post 5 in this sample is either a tweet or retweet text, accompanied by metadata such as a timestamp 6 , an indicator of whether the text is a tweet or a retweet, the number of times it was retweeted, and the number of times it was marked as a favorite. The VIX data sample spans the same period as the Twitter sample.
I concatenate posts' text within a 5-minute interval and aggregate post metadata at the same level. The resulting 5-minute tweet sample is then merged with the VIX to generate the consolidated data sample. After the merging process, I set tweet variables to 0 for periods without tweet activity and discard observations with tweet activity but missing uncertainty measure data. Table 1 presents the summary statistics of the Twitter sample and provides insights into Trump's tweet behavior. It shows that he publishes more than one post per 5-minute interval on average. A breakdown of the total post counts shows that he prefers to tweet himself, almost ten tweets per day, as opposed to retweeting external content. The maximum observed tweet count is seven tweets in less than 5 minutes. Favorite and retweeted counts report followers' behavior in terms of how many times a post is liked and how many times it is shared.

Note to Table 1: This table presents summary statistics for the tweet sample retrieved from the account @realdonaldtrump via the Twitter API and trumptwitterarchive.com. The sample period ranges from 31.12.2015 23:11 CST to 21.10.2019 12:31 CST. Note that the favorite count for retweets is always zero. The VIX series is used as the base for the merging process given its larger data availability.

Footnote 4: For literature related to elections, uncertainty, and high abnormal stock returns, see Pantzalis et al. (2000), Li and Born (2006), and Bialkowski et al. (2008). For studies featuring changes in option-implied volatility around elections, see Gemmill (1992) and Goodell and Vähämaa (2013).
Footnote 5: Hereafter I use the word post to refer to an observation regardless of whether it is a tweet or retweet. Within a time interval, posts refers to the sum of tweet and/or retweet counts. I use the terms total count and posts interchangeably.
Footnote 6: Timestamps originally come in UTC. I transform them to CST to match the VIX data sample.
These two variables are observed ex post, i.e., their values represent the counts at the time the data were downloaded from the Twitter API, not during the 5-minute interval. Finally, variable averages and standard deviations change after merging with the VIX series, since 228,456 periods without tweet activity were added.
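As an illustration of this aggregation and merging step, the following sketch uses pandas with invented column names and toy data (the paper does not publish its code, so every identifier and value here is an assumption): timestamps are converted from UTC to Chicago time, texts are concatenated within 5-minute bins, and the VIX index serves as the merge base so that inactive periods receive zero counts.

```python
import pandas as pd

# Toy stand-in for the tweet sample (texts and timestamps are invented).
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(
        ["2019-10-21 14:01", "2019-10-21 14:03", "2019-10-21 14:12"], utc=True),
    "text": ["tariffs on china", "great meeting", "fed should cut rates"],
    "is_retweet": [False, True, False],
})

# Timestamps arrive in UTC; convert to Chicago time to match the VIX series.
tweets["created_at"] = tweets["created_at"].dt.tz_convert("America/Chicago")

# Concatenate texts and aggregate metadata within 5-minute intervals.
tweets["interval"] = tweets["created_at"].dt.floor("5min")
agg = tweets.groupby("interval").agg(
    text=("text", " ".join),
    post_count=("text", "size"),
    retweet_count=("is_retweet", "sum"),
)

# Toy 5-minute VIX closes over the same window (values are invented).
vix = pd.DataFrame(
    {"vix": [13.2, 13.3, 13.1]},
    index=pd.date_range("2019-10-21 09:00", periods=3, freq="5min",
                        tz="America/Chicago"),
)

# Use the VIX series as the merge base; intervals without tweet activity
# get zero counts and an empty text, mirroring the paper's treatment.
merged = vix.join(agg)
merged["post_count"] = merged["post_count"].fillna(0).astype(int)
merged["retweet_count"] = merged["retweet_count"].fillna(0).astype(int)
merged["text"] = merged["text"].fillna("")
```

Merging onto the VIX index (rather than onto tweet times) reflects the paper's choice of the VIX series as the base, given its larger data availability.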

Methodology
The methodology in this paper consists of three steps. Section 3.1 describes how topics from the tweet sample are estimated and then clustered into cluster topics representing different narratives. 7 Section 3.2 uses the estimated probabilities of topics and cluster topics as an intermediate input for the generation of event variables. Section 3.3 describes the regression model used to estimate the effect of selected events on the high-frequency uncertainty measure.

Topic modeling and clustering
This section introduces the basic elements of the topic modeling and clustering algorithms (for further details, definitions, and extended results, see Appendix A). The approach presented here is based on Bybee et al. (2020), and the intuition behind it is that the combination of these two algorithms simulates the behavior of a representative market participant who follows Trump's Twitter account and derives the main topic k, out of K possible topics, for each tweet.
To accomplish this, I rely on an unsupervised learning algorithm based on the LDA model described in Blei et al. (2003). The input for the LDA model is the corpus, which is composed of a set of D documents, d = (d_1, . . . , d_D), each being the concatenation of tweet or retweet texts within a 5-minute interval before merging with the VIX data. 8 Each document in the corpus is represented as a collection of unique and preprocessed terms w_i for all i = 1, . . . , V from a vocabulary of size V. The LDA algorithm interprets each document in the corpus as a mixture of K topics, given by a topic-term distribution β_k and a topic-document distribution θ_k. Posterior inference about the β and θ distributions is obtained using the Gibbs sampling algorithm 9 as in Griffiths and Steyvers (2004).
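The estimation step can be sketched as follows. The paper fits the LDA via Gibbs sampling (Griffiths and Steyvers 2004); this sketch substitutes scikit-learn's variational-Bayes implementation, and the toy corpus and K = 2 are purely illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: each "document" stands in for the concatenated, preprocessed
# text of one 5-minute interval (all texts are invented).
docs = [
    "trade china tariff deal",
    "fed rate cut dollar powell",
    "border wall immigration security",
    "trade tariff china farmers deal",
]

# Build the document-term matrix over a vocabulary of V unique terms.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Fit an LDA model with K topics. Note: the paper uses Gibbs sampling;
# scikit-learn uses variational Bayes, which serves the same purpose here.
K = 2
lda = LatentDirichletAllocation(n_components=K, random_state=0, max_iter=50)
theta = lda.fit_transform(X)  # theta: document-topic distributions (D x K)

# Normalize components_ to obtain topic-term distributions beta (K x V).
beta = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
```

Each row of `theta` is a document's mixture over the K topics, and each row of `beta` is a topic's distribution over the vocabulary, mirroring the θ and β objects described in the text.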
I define the optimal prior number of topics K* for the LDA model based on two measures: an in-sample coherence score and an out-of-sample perplexity score. Topic coherence ranks topic models by measuring the degree of semantic similarity between high-scoring words within a set of topics, helping to identify which topics are semantically interpretable (Stevens et al. 2012). The coherence measure (UMass) used in this paper is based on co-occurrences of word pairs within the corpus used to train the topic model. Given an ordered list of words T_k = w_1, . . . , w_n for each resulting topic k ∈ K, the UMass coherence is defined as

$C_{UMass}(T_k) = \sum_{i=2}^{n} \sum_{j=1}^{i-1} \log \frac{D(w_i, w_j) + 1/D}{D(w_j)}$,  (1)

where D(w_j) is the number of documents containing word w_j and D(w_i, w_j) the number containing both w_i and w_j. The smoothing count 1/D is added to avoid calculating the logarithm of zero. The perplexity measure, based on Newman et al. (2009), evaluates how well a probability topic model predicts a sample of held-out data. A lower perplexity value is desirable.
Perplexity over a held-out sample is defined as

$perplexity(D^{test}) = \exp\left(-\log p(\mathbf{w}) / N\right)$,  (2)

where N is the total number of tokens in the held-out sample. The log-likelihood is given by

$\log p(\mathbf{w}) = \sum_{d=1}^{D} \sum_{i=1}^{V} n_{d,i} \log \sum_{k=1}^{K} \hat{\theta}_{d,k} \hat{\beta}_{k,i}$,  (3)

where n_{d,i} is the count of term w_i in document d. These measures are calculated for a series of models with different values of K. Figure 1 compares these alternative models in terms of perplexity and coherence. I define the optimal prior number of topics K* as the value where the mean and median coherence are highest in the region where perplexity is strictly below the average over all possible specifications. This approach suggests 50 topics as the optimal number. Table 2 below shows the top-ten keywords of selected economic topics.
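A minimal stdlib implementation of the UMass coherence in Eq. 1 for a single topic, assuming the top words all occur in the (toy) reference corpus:

```python
import math

def umass_coherence(top_words, docs):
    """UMass coherence of one topic's ordered top-word list (Eq. 1).

    D(w_j) counts documents containing w_j; D(w_i, w_j) counts documents
    containing both words. The smoothing count 1/D avoids log(0).
    Assumes every top word appears in at least one document."""
    doc_sets = [set(d) for d in docs]
    D = len(doc_sets)

    def count(*words):
        # Number of documents containing all the given words.
        return sum(all(w in s for w in words) for s in doc_sets)

    score = 0.0
    for i in range(1, len(top_words)):
        for j in range(i):
            w_i, w_j = top_words[i], top_words[j]
            score += math.log((count(w_i, w_j) + 1.0 / D) / count(w_j))
    return score

# Toy corpus of tokenized documents (invented for illustration).
docs = [["trade", "china", "tariff"], ["trade", "china"], ["fed", "rate"]]
print(umass_coherence(["trade", "china"], docs))  # higher is better
```

In practice the score is computed per topic and averaged over the K topics of each candidate model, which yields the mean and median coherence compared across values of K.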
The representative market participant cares only about policy insights, so they ignore non-policy-related topics and associate similar policy-related topics with a broad policy category. To achieve this association, I hierarchically cluster these topics based on their linguistic distance, given by the Hellinger distance $H_d$ between the estimated topic probabilities $\hat{\theta}_k$:

$H_d(\hat{\theta}_i, \hat{\theta}_j) = \frac{1}{\sqrt{2}} \left( \sum_{v} \left( \sqrt{\hat{\theta}_{i,v}} - \sqrt{\hat{\theta}_{j,v}} \right)^2 \right)^{1/2}$  (4)
I define similar topics within a cluster as cluster topics, s_i for i = 1, . . . , S, and label them according to their implied narrative. 10 For example, topics k = 36, 7, and 16 belong to the same cluster and share trade- and foreign-policy-related content. These individual stories converge to a broader narrative of the type "President Trump's statements regarding foreign trade and foreign policy issues." For simplicity, economic policy-related cluster topics will be labeled as policy statements hereafter.

Figure 2 plots the weekly distribution of the identified cluster topics over time and summarizes the main results for this section. One can identify 12 cluster topics from the 5-minute sample. The first six cluster topics appear in gray tones and refer to political affairs, whereas the remaining six relate to economic policy issues. On average, around three-quarters of the cluster topic proportions per week relate to political issues and only a quarter to economic policy issues. This figure is also a good indicator of the accuracy of the LDA model, given that the distribution of topics over time matches the timing of the main events of Trump's presidency, such as the presidential campaign and debates in 2016, the hurricane in 2017, the Tax Cuts and Jobs Act in 2017/2018, the 2018/2019 trade wars, and the 2019 impeachment inquiry.

Footnote 10: Cluster topics are defined by the largest cluster below a distance threshold, set here to H_d = 0.7. See Fig. 7 for the distribution of topics by cluster.
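The clustering step can be sketched with SciPy as follows. The three toy topic distributions and the average-linkage choice are illustrative assumptions; the Hellinger metric and the 0.7 distance cut come from the text.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy topic distributions (rows sum to 1); the real input would be the
# estimated distributions from the fitted LDA model.
topics = np.array([
    [0.70, 0.20, 0.05, 0.05],   # a "trade"-flavored topic
    [0.65, 0.25, 0.05, 0.05],   # a semantically similar topic
    [0.00, 0.00, 0.60, 0.40],   # an unrelated topic
])

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Pairwise distances, then hierarchical clustering (average linkage is
# an assumption; the paper does not name the linkage rule).
dist = pdist(topics, metric=hellinger)
Z = linkage(dist, method="average")

# Cut the dendrogram at the 0.7 distance threshold to form cluster topics.
labels = fcluster(Z, t=0.7, criterion="distance")
```

With these toy inputs, the first two topics fall below the 0.7 threshold and merge into one cluster topic, while the third remains on its own.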

Fig. 2
Cluster topics distribution over time, weekly aggregation. Note: Average probability of a cluster topic over a seven-day period. The probability of a cluster topic is the sum of the probabilities of its constituents. Cluster topics in gray tones refer to politics-related clusters

Events generation
For the event study approach, I define event variables based on three criteria: i) different points in the distribution of metadata variables, ii) the occurrence of topics, and iii) the occurrence of policy statements.
An event occurs each time President Trump generates a new post which is identified with an existing economic policy narrative. The event triggers a transmission mechanism, which could be the retweeting channel, the news, or word of mouth that allows the implied story to spread to the general public, including market participants. In cases where the post is not identified with an existing narrative, the event does not occur, and the spreading mechanism is not triggered.
The first type of events are content-independent or metadata-based events. They occur if, by a given time t, a milestone in the distribution of metadata variables is reached. They can be interpreted as unusual tweeting behavior by Trump himself or his followers, materialized in tweet/retweet or retweeted count variables lying above their expected values. Specialized and general media explicitly focus on these types of events; one example is the media coverage of the Volfefe Index created by JP Morgan. 11 Equation 5 describes how the metadata event event_m, given condition m, is generated:

$event_{m,t} = \mathbb{1}\{\text{condition}_m \text{ holds at time } t\}$  (5)
The condition in Eq. 5 could take any form, such as "more than two tweets in 5 minutes," "retweeted count is above the sample mean," or a specific milestone in the distribution of count variables, such as "retweeted count > 80th percentile." These types of events will later be labeled directly by their generating condition.
The second type of events, topic-specific events, are based on the estimated topic probabilities $\hat{\theta}_k$ of the K topics identified by the LDA algorithm. Equation 6 shows how they are generated: for a specific topic k, I create an indicator variable that equals 1 if, at period t, the topic probability $\hat{\theta}_{k,t}$ exceeds a threshold c, and 0 otherwise:

$event_{k,t} = \mathbb{1}\{\hat{\theta}_{k,t} > c\}$  (6)

I set the threshold to c = 0.1 for all topics, based on the distribution of the maximum probability value per document. This threshold guarantees that the topic is primary for the document and allows for the co-occurrence of multiple primary topics.
Analogous to the previous type of events, policy statement events refer to the occurrence of an economic policy-related cluster topic s at time t. They occur if the cluster topic probability, defined as the sum of the individual probabilities of the topics composing the cluster, exceeds a predefined threshold, as in Eq. 7:

$event_{s,t} = \mathbb{1}\left\{\sum_{k \in s} \hat{\theta}_{k,t} > c\right\}$  (7)
The threshold for policy statement events is set to 0.1, as in the previous case, such that a single topic within the cluster suffices to generate the event. In this paper, I concentrate on events based on policy statements for two reasons. First, presidential tweets seek to address the public and spread widely, so one can expect and observe ambiguous language in the tweets, such that similar keywords identify more than one topic. Second, inference about the effect of single-topic events relies on fewer observations, leading to low variation in the predictor variables and thus to lower precision of the estimated responses. Table 3 provides a summary of the resulting policy statement events at the 5-minute frequency before and after merging with the VIX series. In the 5-minute sample, the number of events and their probabilities decrease drastically after the merge due to the high number of VIX observations without corresponding tweet activity.
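The three event definitions (Eqs. 5-7) reduce to simple indicator constructions. A sketch with invented variable names and toy values:

```python
import pandas as pd

# Hypothetical 5-minute sample: post counts, retweeted counts, and two
# estimated topic probabilities (all values are invented).
df = pd.DataFrame({
    "post_count":      [0, 3, 1, 0, 5],
    "retweeted_count": [0, 9000, 120, 0, 25000],
    "theta_topic7":    [0.0, 0.22, 0.04, 0.0, 0.15],
    "theta_topic16":   [0.0, 0.05, 0.01, 0.0, 0.12],
})

# i) Metadata event (Eq. 5): a condition on metadata, e.g.
#    "retweeted count above the 80th percentile".
p80 = df["retweeted_count"].quantile(0.8)
df["event_share80"] = (df["retweeted_count"] > p80).astype(int)

# ii) Topic event (Eq. 6): topic probability above the 0.1 threshold.
df["event_topic7"] = (df["theta_topic7"] > 0.1).astype(int)

# iii) Policy statement event (Eq. 7): the summed probability of the
#      cluster's constituent topics exceeds the same threshold.
cluster_prob = df[["theta_topic7", "theta_topic16"]].sum(axis=1)
df["event_trade"] = (cluster_prob > 0.1).astype(int)
```

The column names `theta_topic7`/`theta_topic16` and the grouping of these two topics into one "trade" cluster are purely illustrative; in the paper, cluster membership comes from the hierarchical clustering step.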

Estimation of events' effects on uncertainty
To estimate the high-frequency effect of policy statement or metadata events, I follow an event study approach as in Beechey and Wright (2009) and Gilbert et al. (2017), using an uncertainty measure as the dependent variable, similar to Graham et al. (2003). In general, I estimate the following time series regression to examine the impact of the selected events on the uncertainty proxy:

$\Delta vix_{t+h} = \alpha + \sum_{j} \beta_j \, event_{j,t} + \gamma' x_t + \varepsilon_{t+h}$,  (8)

where $\Delta vix_{t+h}$ is the change in percentage points (pp) of the VIX in the window from 15 minutes before the event to h periods afterward, up to 60 periods or 5 hours. 12 The time-invariant intercept is denoted by α. Standardized events of type j enter the regression as independent variables. This specification allows for multiple events of the same or different types (m, k, or s) to happen simultaneously. The controls in the vector x_t are raw counts of posts, tweets, and retweets as well as the log-transformed values of favorite and retweeted counts. Additional controls are explained in Section 4.4 (robustness checks). The regression runs over all windows with at least one event. Events and count variables are set to 0 for periods in which the number of posts equals 0.

Equation 8 implies a multi-step forecast that can be formulated in terms of local projections (LPs), as proposed by Jordà (2005). Equation 9 shows the LP representation for an identified shock event at period t:

$\Delta vix_{t+h} = \alpha_h + \beta_h \, event_t + \gamma_h' x_t + \varepsilon_{t+h}, \quad h = 1, \ldots, 60$  (9)

Impulse responses (IRs) for the shock event are directly computed for each period in the estimation horizon.

Note to Table 3: Pre- and post-merge refer to cluster-topic counts and probabilities before and after the merge with the VIX sample. Topic coherence ranks topic models by measuring the degree of semantic similarity between high-scoring words within a set of topics; see Eq. 1 for a formal definition of the coherence measure.
In this specification, additional policy statement and/or metadata events enter as controls in the local projection, in addition to the count variables in the vector of controls x_t. Estimated IRs are given by $\hat{IR}(h) = \{\hat{\beta}_h\}$, where $\hat{\beta}_h$ is the h-step-ahead estimated coefficient. Confidence bands are reported at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range.
Finally, I examine the connection between policy statements and the level of sharing by allowing for interaction terms between statement and metadata events in Eq. 8.

The resulting estimation equation is given by Eq. 10:

$\Delta vix_{t+h} = \alpha + \beta_s \, event_{s,t} + \beta_m \, event_{m,t} + \beta_{sm} \left( event_{s,t} \times event_{m,t} \right) + \gamma' x_t + \varepsilon_{t+h}$  (10)

Coefficients from Eqs. 8, 9, and 10 are obtained via ordinary least squares with heteroskedasticity-consistent standard errors.
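Eq. 10 is a standard OLS regression with an interaction term. A sketch on simulated data with invented names, using heteroskedasticity-consistent standard errors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
T = 400

# Simulated stand-ins: a policy statement event, a high-sharing metadata
# event, a post-count control, and the h-period change in the VIX.
df = pd.DataFrame({
    "statement": (rng.random(T) < 0.1).astype(float),
    "high_share": (rng.random(T) < 0.2).astype(float),
    "posts": rng.poisson(1.0, T).astype(float),
})
df["dvix"] = (0.02 * df["statement"]
              + 0.03 * df["statement"] * df["high_share"]
              + rng.normal(0, 0.05, T))

# Main effects plus the statement x sharing interaction (Eq. 10), with
# HC1 as one concrete choice of robust covariance estimator.
res = smf.ols("dvix ~ statement * high_share + posts", data=df).fit(cov_type="HC1")
```

The `statement * high_share` formula term expands to both main effects plus their interaction, so the fitted model carries the coefficients α, β_s, β_m, β_sm, and the control γ.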

Results
This section presents the high-frequency effects of selected events on stock market uncertainty, given by the change in the VIX, $\Delta vix_{t+h}$, from 15 minutes before the event to h periods afterward. The first part of this section concentrates on the effect of content-independent events regarding high posting and sharing frequency. The second part examines the effect of events based on identified policy statements. The section concludes with the interaction between content-independent and content-related events in a scenario of policy statements with a high level of diffusion. For each type of event, I first present the impulse response functions for selected events, estimated as in Eq. 9. The estimated impulse responses serve as an overview of the size of the effect and its dynamics. Thereafter, I examine the effect at specific time periods, namely 2 hours and 5 hours after the event. These effects are estimated as in Eq. 8, including concurrent events and controls.

Impact of content-independent events on market volatility
The objective of this section is to test whether less frequent events originating from observations in the tails of the distribution of metadata variables can be interpreted as uncertainty shocks, as proposed in Kozeniauskas et al. (2018). Two types of events are of special interest, namely events regarding post frequency and events regarding sharing frequency. These events are generated as in Eq. 5 using the following conditions: "post count > threshold" with threshold = {0, 1, 2} for post frequency events and "retweeted count > percentile" with percentile = {50, 60, 80} for sharing events. Figure 3 presents estimated responses from Eq. 9 to the events mentioned above, excluding controls.
The left panels of Fig. 3 show the progression in the magnitude of the estimated coefficients as the event condition is defined further to the right in the distribution of post counts. Panel (a) shows the estimated effect of at least one tweet or retweet (post count > 0). There is a medium-sized 13 positive effect in the period between 1.25 and 3.3 hours after the event, significant at the 10% level. The effect reaches its peak after 2.5 hours with a magnitude of 0.023 pp.

The right panels of Fig. 3 depict a similar pattern for three events based on sharing (retweeted) frequency. As the percentile threshold increases, meaning that a tweet or retweet must be retweeted more often to satisfy the event's condition, the response peak almost doubles, from 0.019 pp in panel (d) to 0.037 pp in panel (f). This progression suggests that as posts become viral (massively retweeted), they have a larger impact on uncertainty. A closer look at these responses at periods h = 24 (2 hours) and h = 60 (5 hours) in Table 4 (columns 2-3 and 7) shows that the effect of sharing events is not robust after controlling for the total number of posts and is not statistically different from 0 at the end of the estimation window. Altogether, these results suggest that unexpected changes in Trump's tweeting behavior or his followers' behavior can be considered uncertainty shocks in the options market, and that the magnitude of their effects increases as the underlying event becomes less likely.

Table 4: Cumulative effect of metadata and policy statement events on change in the VIX. Estimated coefficients from Eq. 8 using 5-minute frequency data for the VIX. Robust standard errors are shown in parentheses. Significance at the 10%, 5%, and 1% levels is denoted by *, **, and ***, respectively.

Impact of policy statements events on market volatility
The second set of results focuses on the content of the tweets and retweets. Figure 4 presents the impulse responses of five policy statement events regarding foreign trade and policy, monetary policy, fiscal policy, immigration policy, and health care policy, estimated as in Eq. 9 with the remaining statement events as controls but excluding the count controls x.
The first panel in Fig. 4 presents the market response in terms of perceived uncertainty to statements regarding foreign policy and trade. This response smoothly builds up and becomes significant at the 10% level after 1 hour and 15 minutes. It remains near 0.05 pp for around 2 hours, reaching its maximum level of around 0.06 pp at 4 hours and 10 minutes (or 50 periods) after the event; it then slowly decreases and is no longer statistically different from 0 at the end of the horizon. Table 4 (columns 4-6) shows that the effect size at the 2-hour horizon decreases once the number of posts and the number of tweets are controlled for. This result supports the findings reported in Burggraf et al. (2020a), suggesting that Trump's tweets dealing with foreign policy and international trade issues have a significant negative (uncertainty-increasing) effect on options market volatility.
A breakdown analysis of the partial contribution of each topic composing this policy statement in Fig. 5 suggests that the aggregate positive response is mostly driven by topic 7 with a maximum effect of 0.17 pp after 38 periods, topic 10 with roughly 0.12 pp after 38 periods, and topic 36 with 0.075 pp after 25 periods. These maximum estimates are significant at the 1%, 5%, and 10% levels, respectively. The other constituent of this policy statement, topic 16, displays an overall negative but not significant pattern. Because all four topics share some of the same top-30 words but differ in their ranks within each topic, it is not possible to say which topic identifies a specific well-known narrative, such as the "trade war with China" or the "tentative agreement with China regarding trade." However, the topic model approach allows us to identify the overall story even when it is told with different words.

Panels (b) and (c) in Fig. 4 present the estimated effects of statement events regarding monetary and fiscal policies. These responses are not statistically significant at the 10% level over the whole horizon; however, one of the two constituents of the monetary policy statement, topic 14 in Fig. 6, is significant at the 10% level for 30 minutes, starting 2 hours after the occurrence of the topic event. The response to the topic 14 event reaches its maximum value of about 0.075 pp after 28 periods (2 hours and 20 minutes).
The last two panels of Fig. 4 show the cumulative effects of policy statements regarding immigration policy and health care policy. Immigration policy has been a workhorse for President Trump since he campaigned for the Republican candidacy in 2015. The main narratives implied in these policy statements include building a wall on the border with Mexico and revoking the DACA program. Panel (d) shows that the response to this event wanders around 0 for the first 3 hours after the event and slowly builds up thereafter. It becomes significantly different from 0 in the last two periods (after 4 hours and 50 minutes) and reaches its maximum value of 0.081 pp in the last period. This effect decreases slightly to 0.075 pp once post count controls are added, as in Table 4 (column 9). The sign of this response coincides with the concerns of many CEOs who warned about the negative consequences of a restrictive immigration policy 14 . Finally, the VIX responses to health care policy statement events, as well as to their components, are not significantly different from 0 at any time horizon.

Interaction between policy statements and content-independent events
This section concludes with a scenario analysis based on policy statements with a high level of diffusion. To do this, I introduce interaction terms between content-independent and content-related events as in Eq. 10, controlling for the number of posts. Two levels of diffusion are considered: i) "above the expected level," given by the event "retweeted count above the 50th percentile," and ii) "high level," given by the event "retweeted count above the 80th percentile." Table 5 summarizes the cumulative effects of the main events and interaction terms at the 15-minute, 2-hour, and 5-hour horizons.
The overall effect of statements regarding "foreign policy and trade" with above-median sharing (retweet) levels is negative and significant at the horizons specified in Table 5 (columns 1-4, 7, 8). The driver of this effect is the interaction term, which is slightly larger than the main effect. The interaction term creates a subset of the statement event in which the relative weights of the topics composing the cluster change. In this case, the subset is strongly influenced by topic 16, which also displays a negative slope. The size of the overall negative effect is smaller at the 2-hour horizon (columns 3 and 4), which coincides with the positive effect in Table 4. In a scenario with a high sharing level, when the retweeted count is above the 80th percentile (columns 5-6 in Table 5), the interaction effect is not significant, and the main effect is positive. In summary, the level of sharing decreases the uncertainty generated by foreign policy and trade statements if it exceeds the expected (median) value. This effect disappears as the sharing level increases.

Fig. 6: Responses of the change in the VIX to constituents of the monetary policy cluster topic. Note: This figure plots the estimated coefficients β_h in Eq. 9 against h. Blue shaded areas represent confidence bands at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range based on robust standard errors.
Similar to the monetary policy statement events in Table 4, neither the main effect of this event nor its interaction with a retweet level above the 50th percentile is statistically significant at the selected horizons in Table 5 (1-4, 7, 8). However, the interaction term with a high level of sharing in (5) and (6) is significant, positive, and robust to the inclusion of count controls. Topic 14 (see Fig. 6) appears to be the driving force behind the response to the interaction effect. The dynamics of this topic closely resemble the dynamics of the aggregate effect, and it includes most of the keywords directly related to monetary policy. 15 The response to the monetary policy statement event fits well with the narrative on Trump's threats to central bank independence already documented in Bianchi et al. (2019). The results discussed in this section suggest that Trump's advocacy for lower interest rates via Twitter not only threatens central bank independence and affects Fed funds futures, as documented in Bianchi et al. (2019), but also affects S&P 500 options prices.
Finally, the interaction between immigration policy statements and a retweet count above the 50th percentile is positive and significant at the 2-hour and 5-hour horizons. These effects are robust to the inclusion of the total post control, and the overall effect is 0.052 pp after 2 hours. The interaction effect increases with the estimation horizon, whereas the main effect is not significant after 5 hours. The results for this topic suggest that, in a scenario with high levels of sharing, the impact of this narrative starts earlier than observed in Table 4.
The results from this section highlight the relevance of the propagation mechanism of the narratives implied in the policy statements. The subset of events with higher levels of sharing becomes significant as early as 15 minutes in the case of foreign policy and trade, and the size of the interaction-term effects is larger than the effect observed for the main events in Table 4. In an unusual scenario with high levels of sharing, monetary policy statements become relevant for stock market uncertainty.

Table 5 Cumulative effect of interactions between metadata and policy statement events on change in the VIX. Note: Estimated coefficients from Eq. 10 using 5-minute frequency data for the VIX at the 15-minute, 2-hour, and 5-hour horizons. Robust standard errors are shown in parentheses. Significance at the 10%, 5% and 1% levels is denoted by *, ** and ***, respectively. The 50th percentile is used for interaction terms in regressions (1-4, 7-8); the 80th percentile is used in regressions (5-6).

Robustness Checks: FOMC announcements and news coverage
In this section, I examine the robustness of the main results to two important additional sources of variation in the VIX index: Federal Open Market Committee (FOMC) announcements and the flow of economic news. Du et al. (2018) provide evidence that the implied volatility of options on S&P 500 listed stocks is sensitive to monetary policy announcements, particularly to those disclosed after FOMC meetings. Taking advantage of the fact that the start and end dates and times of these meetings are publicly available, it is possible to construct a set of indicator variables for the exact start and end of each FOMC meeting. Based on 161 FOMC start values and 79 FOMC end values 16 , I construct a variable named FOMC Composite, which is equal to one for two hours (24 periods) after the start of a meeting and 5 hours (60 periods) after it concludes, and zero in all other five-minute intervals.
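The FOMC Composite indicator can be reconstructed as follows. The meeting timestamps and the sample window below are hypothetical placeholders; only the 24-period and 60-period window lengths come from the text.

```python
import pandas as pd

# Hypothetical meeting timestamps; the paper's actual FOMC dataset is not shown.
fomc_starts = pd.to_datetime(["2018-06-12 09:00"])
fomc_ends   = pd.to_datetime(["2018-06-13 14:00"])

# 5-minute grid covering (part of) the sample period.
grid = pd.date_range("2018-06-12 08:00", "2018-06-13 20:00", freq="5min")
composite = pd.Series(0, index=grid)

for t in fomc_starts:
    # 1 for 24 five-minute periods (2 hours) after a meeting start
    composite[(grid > t) & (grid <= t + pd.Timedelta(hours=2))] = 1
for t in fomc_ends:
    # 1 for 60 five-minute periods (5 hours) after a meeting end
    composite[(grid > t) & (grid <= t + pd.Timedelta(hours=5))] = 1
```

Each start contributes 24 flagged intervals and each end 60, so a single non-overlapping start/end pair flags 84 five-minute intervals in total.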
To control for the effect of policy-related news on the VIX index, I create a set of variables based on the traffic of news searches using Google Trends. Google Trends provides the frequency of search queries based on one or more keywords on a daily basis. I restrict the results to Google News coverage to guarantee, to some degree, the institutional character of the sources on which the trend is constructed. I start by constructing five news indices, one for each of the policy statement variables 17 . Then I aggregate these indices to generate a variable named Economic Policy News Index. Additionally, I generate two indices to capture general Economy and Stock Market news, which may include relevant information for the formation of options market volatility and price expectations. Finally, I aggregate these three major indices into a News Composite index. Table 6 reproduces the main results already presented in Table 4, adding FOMC Composite and News Composite as controls in Eq. 8.
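The two-stage aggregation might look as follows. The paper does not state the aggregation rule, so the simple mean used here, the topic column names, and the index values are all assumptions for illustration.

```python
import pandas as pd

# Daily Google Trends (News-restricted) series, one per policy topic.
# Values and column names are illustrative; the mean as aggregation rule
# is an assumption, since the paper does not spell it out.
policy = pd.DataFrame({
    "foreign_trade": [40, 55], "monetary": [10, 20], "fiscal": [5, 5],
    "immigration": [30, 60], "health_care": [15, 10],
}, index=pd.to_datetime(["2019-08-01", "2019-08-02"]))

econ_policy_news = policy.mean(axis=1)                   # Economic Policy News Index
economy      = pd.Series([50, 40], index=policy.index)   # general Economy news
stock_market = pd.Series([20, 80], index=policy.index)   # Stock Market news

# Aggregate the three major indices into the News Composite index.
news_composite = pd.concat(
    [econ_policy_news, economy, stock_market], axis=1).mean(axis=1)
```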
Comparing the results from Table 4 (5) and (9) with the results from Table 6 (4) and (6), one can see that the estimated coefficients for the effect of policy statements change little, if at all, in magnitude and significance after the inclusion of the new controls. Similar results can be observed when the new controls are included in the model with interaction terms in Eq. 10 (see Table 12 in the appendix). These results speak to the robustness of the effect of policy statements delivered via Twitter on uncertainty perception at the high-frequency level. They also hold for the individual components of both composite controls. 18 One can see from Table 6 that the News Composite is positive and significantly different from zero from 15 minutes to 5 hours, and its magnitude increases with the time horizon. In the case of the FOMC Composite, it is not significant at the 15-minute horizon, is negative and significant at the 2-hour horizon, and finally flips sign while remaining significant at the end of the estimation horizon. The negative effect of the FOMC Composite at the two-hour horizon is driven by the FOMC start component, while the positive effect at the end of the estimation window is given by the FOMC end component. This section reinforces the informative value of news announcements and statements for the formation of volatility expectations, at least at short time horizons.

Note: Estimated coefficients from Eq. 10 using 5-minute frequency data for the VIX. Robust standard errors are shown in parentheses. Significance at the 10%, 5% and 1% levels is denoted by *, ** and ***, respectively.

Conclusions
Economic policy narratives extracted from Trump's Twitter data using machine learning, without any sort of political bias in their construction, explain a fraction of the variation in stock market uncertainty as represented by the VIX. Content-independent events also explain some variation in the VIX. Combined, these results provide evidence of significant high-frequency market uncertainty movements after presidential announcements made on Twitter. Economic policy statements regarding foreign policy, trade, and immigration generate an overall increase in uncertainty, whereas monetary policy statements are only significant in combination with high levels of retweet activity. Fiscal and health care policy statements did not influence perceived uncertainty significantly. Content-independent events regarding unexpected changes in disclosure frequency and sharing levels have a statistically significant positive impact on market volatility. The intensity of these effects increases as the event becomes more unusual. The dynamics of both statement and content-independent events suggest a short-lived uncertainty effect. The typical response becomes significant between 1 and 4 hours after the occurrence of the event, except for immigration statements, which become significant at the end of the estimation window. These results are robust to the inclusion of further sources of variation in the VIX index, such as economic news and monetary policy (FOMC) announcements.
The policy implications of the results presented in this paper should not be confined to the behavior of a particular politician (here Mr. Trump). Instead, they should call the attention of policymakers, independent of political affiliation, to the negative economic impact of uncontrolled social network activity. As social networks become more popular among politicians as the preferred channel to connect with the general public, these effects may become more pronounced, and policymakers should, to some extent, be held accountable for the social and economic consequences of their social network activity.

A.1: Preprocessing
I drop stop-words and special characters such as emojis, links, and symbols. The preprocessing is completed by defining collocations, i.e., sets of terms that appear together at least 25 times in the corpus, such as "Hillary Clinton", and by reducing terms to their lemmas.
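A minimal pure-Python sketch of the cleaning and collocation steps is shown below. The stop-word list is a tiny illustrative subset, the demo uses a collocation threshold of 2 instead of the paper's 25, lemmatization is omitted (the paper's actual tooling is not specified), and the example tweets are invented.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "is", "to", "of"}  # tiny illustrative list

def tokenize(text):
    """Drop links and non-letter symbols (incl. emojis), lowercase,
    and remove stop-words."""
    text = re.sub(r"https?://\S+", " ", text)        # strip links
    tokens = re.findall(r"[a-z]+", text.lower())     # keep letter runs only
    return [t for t in tokens if t not in STOPWORDS]

def join_collocations(docs, min_count=25):
    """Merge adjacent term pairs co-occurring at least `min_count` times,
    e.g. ('hillary', 'clinton') -> 'hillary_clinton'."""
    pairs = Counter(p for d in docs for p in zip(d, d[1:]))
    collocs = {p for p, n in pairs.items() if n >= min_count}
    out = []
    for d in docs:
        merged, i = [], 0
        while i < len(d):
            if i + 1 < len(d) and (d[i], d[i + 1]) in collocs:
                merged.append(d[i] + "_" + d[i + 1]); i += 2
            else:
                merged.append(d[i]); i += 1
        out.append(merged)
    return out

# Demo with invented tweets and min_count=2 (the paper uses 25).
docs = [tokenize("Crooked Hillary Clinton lost https://t.co/x"),
        tokenize("Hillary Clinton failed again")]
docs = join_collocations(docs, min_count=2)
```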

A.2: Estimation
The generative model for the LDA, as described in Blei et al. (2003), consists of the following steps.
1. Determine the term distribution, β_k, for each topic k = 1, …, K, which is given by:

$$\beta_k \sim \text{Dirichlet}(\delta)$$

2. Determine the proportions, θ, of the topic distribution for each document, w:

$$\theta \sim \text{Dirichlet}(\alpha)$$

then, for each term position i in the document, draw a topic z_i ∼ Multinomial(θ) and a term w_i ∼ Multinomial(β_{z_i}).

This model is estimated using Gibbs sampling, as proposed in Griffiths and Steyvers (2004). Draws from the posterior distribution p(z|w) are obtained by sampling from:

$$p(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \propto \frac{n^{(w_i)}_{-i,k} + \delta}{n^{(\cdot)}_{-i,k} + W\delta} \, \frac{n^{(d_i)}_{-i,k} + \alpha}{n^{(d_i)}_{-i,\cdot} + K\alpha}$$

where n^{(w_i)}_{-i,k} counts how often term w_i is assigned to topic k, n^{(d_i)}_{-i,k} counts how often topic k appears in document d_i, in both cases excluding the current assignment i, and W is the vocabulary size. The dot (·) implies that summation over the index is performed. The hyperparameter α, the prior parameter for the distribution of topics over documents, is set to 0.1, and δ, the prior parameter for the distribution of words over topics, is set to 0.1. The optimal number of topics K = K* will be defined in the next section.
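A collapsed Gibbs sampler for LDA in the spirit of Griffiths and Steyvers (2004) can be sketched as follows. This is an illustrative NumPy implementation, not the paper's actual estimation code.

```python
import numpy as np

def gibbs_lda(docs, K, V, alpha=0.1, delta=0.1, iters=200, seed=0):
    """Collapsed Gibbs sampler for LDA (after Griffiths & Steyvers, 2004).
    docs: list of word-id lists; returns point estimates (beta_hat, theta_hat)."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, K))           # topic counts per document
    nkw = np.zeros((K, V))           # term counts per topic
    nk = np.zeros(K)                 # total assignments per topic
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):   # initialize counts from random assignments
        for i, w in enumerate(doc):
            k = z[d][i]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]          # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional p(z_i = k | z_-i, w)
                p = (nkw[:, w] + delta) / (nk + V * delta) * (ndk[d] + alpha)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    beta_hat = (nkw + delta) / (nk[:, None] + V * delta)
    theta_hat = (ndk + alpha) / (ndk.sum(1, keepdims=True) + K * alpha)
    return beta_hat, theta_hat

# Tiny toy corpus: three "documents" over a vocabulary of four word ids.
beta, theta = gibbs_lda([[0, 0, 1, 1], [2, 2, 3, 3], [0, 1, 2, 3]],
                        K=2, V=4, iters=50)
```

By construction each row of `beta` and `theta` is a proper probability distribution, matching the smoothed point estimates used after sampling.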
Estimates β̂ and θ̂ are given by:

$$\hat{\beta}^{(w)}_{k} = \frac{n^{(w)}_{k} + \delta}{n^{(\cdot)}_{k} + W\delta}, \qquad \hat{\theta}^{(d)}_{k} = \frac{n^{(d)}_{k} + \alpha}{n^{(d)}_{\cdot} + K\alpha}$$

Table 7 presents summary statistics for the text corpus and provides additional insight into Trump's tweeting behavior. The difference in the average vocabulary size between the two corpora is only slightly more than three terms, even though the typical document in the day corpus is nine times larger. This indicates that Trump's topics do not vary much within a day, or that he handles very few of them, so one can expect similar estimated topics from the LDA algorithm at both frequencies.

Fig. 8 Responses of the change in the VIX to main counts. Note: This figure plots the estimated coefficients β_h in Eq. 9 against h. Blue shaded areas represent confidence bands at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range based on robust standard errors.

Fig. 9 Responses of the change in the VIX to constituents of fiscal policy. Note: This figure plots the estimated coefficients β_h in Eq. 9 against h. Blue shaded areas represent confidence bands at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range based on robust standard errors.

Fig. 10 Responses of the change in the VIX to constituents of immigration policy. Note: This figure plots the estimated coefficients β_h in Eq. 9 against h. Blue shaded areas represent confidence bands at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range based on robust standard errors.

Fig. 11 Responses of the change in the VIX to constituents of health care. Note: This figure plots the estimated coefficients β_h in Eq. 9 against h. Blue shaded areas represent confidence bands at the 5th to 95th percentile range, the 15th to 85th percentile range, and the interquartile range based on robust standard errors.

Note: Estimated coefficients from Eq. 10 using 5-minute frequency data for the VIX. Robust standard errors are shown in parentheses. Significance at the 10%, 5% and 1% levels is denoted by *, ** and ***, respectively. The 50th percentile is used for interaction terms in regressions (1-4, 7-8); the 80th percentile is used in regressions (5-6).