
Part of the book series: Dynamic Modeling and Econometrics in Economics and Finance ((DMEF,volume 17))

Abstract

We use a Bayesian time-varying parameter structural VAR with stochastic volatility to investigate changes in the reduced-form relationship between vacancies and the unemployment rate, and in their relationship conditional on permanent and transitory output shocks, in the post-WWII United States. The evidence points to both similarities and differences between the Great Recession and the Volcker disinflation, and to widespread time variation along two key dimensions. First, the slope of the Beveridge curve exhibits a large extent of variation from the mid-1960s onward. It is also notably pro-cyclical, with the gain positively correlated with the transitory component of output. The evolution of the slope of the Beveridge curve during the Great Recession is very similar to its evolution during the Volcker recession in terms of both its magnitude and its time profile. Second, both the Great Inflation episode and the subsequent Volcker disinflation are characterized by a significantly larger negative correlation between the reduced-form innovations to vacancies and the unemployment rate than the rest of the sample period. Those years also exhibit a greater cross-spectral coherence between the two series at business-cycle frequencies, suggesting that the two series are driven by common shocks.


Notes

  1.

    From an empirical perspective, we prefer their methodology over, for instance, structural break tests because it is robust to uncertainty about the specific form of time variation present in the data. While time-varying parameter models can successfully track processes subject to structural breaks, Cogley and Sargent (2005) and Benati (2007) show that break tests possess low power when the true data-generating process (DGP) is characterized by random-walk time variation. Generally speaking, break tests perform well if the DGP is subject to discrete structural breaks, while time-varying parameter models perform well under both scenarios.

  2.

    We also note that the coverage regions are tightly clustered around the median estimate during the period of greatest instability, namely the late 1970s and the Volcker disinflation, whereas they are more dispersed at the beginning and towards the end of the sample.

  3.

    While this rules out strict hysteresis effects, whereby temporary shocks have permanent effects, it can still generate behavior that, over typical sample periods, looks hysteresis-induced. Moreover, the empirical evidence concerning hysteresis is decidedly mixed.

  4.

    The specification of the prior follows Lubik (2013). Posterior estimates and additional results are available from the authors upon request.

References

  • Andrews, D. W. K. (1991). Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica, 59, 817–858.

  • Barnichon, R. (2010). Building a composite help-wanted index. Economics Letters, 109, 175–178.

  • Benati, L. (2007). Drifts and breaks in labor productivity. Journal of Economic Dynamics & Control, 31, 2847–2877.

  • Blanchard, O. J., & Diamond, P. (1989). The Beveridge curve. Brookings Papers on Economic Activity, 1, 1–60.

  • Blanchard, O. J., & Quah, D. (1989). The dynamic effects of aggregate demand and supply disturbances. The American Economic Review, 79, 655–673.

  • Carter, C. K., & Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika, 81, 541–553.

  • Cogley, T., & Sargent, T. J. (2002). Evolving post-World War II U.S. inflation dynamics. NBER Macroeconomics Annual, 16, 331–388.

  • Cogley, T., & Sargent, T. J. (2005). Drifts and volatilities: monetary policies and outcomes in the post-WWII U.S. Review of Economic Dynamics, 8, 262–302.

  • Fernández-Villaverde, J., & Rubio-Ramírez, J. (2007). How structural are structural parameters? NBER Macroeconomics Annual, 22, 83–132.

  • Furlanetto, F., & Groshenny, N. (2012). Mismatch shocks and unemployment during the Great Recession. Manuscript, Norges Bank.

  • Galí, J. (1999). Technology, employment, and the business cycle: do technology shocks explain aggregate fluctuations? The American Economic Review, 89, 249–271.

  • Galí, J., & Gambetti, L. (2009). On the sources of the Great Moderation. American Economic Journal: Macroeconomics, 1, 26–57.

  • Jacquier, E., Polson, N. G., & Rossi, P. E. (1994). Bayesian analysis of stochastic volatility models. Journal of Business & Economic Statistics, 12, 371–389.

  • Lubik, T. A. (2013). The shifting and twisting Beveridge curve: an aggregate perspective. Manuscript, Federal Reserve Bank of Richmond.

  • Newey, W., & West, K. (1987). A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica, 55, 703–708.

  • Primiceri, G. (2005). Time varying structural vector autoregressions and monetary policy. Review of Economic Studies, 72, 821–852.

  • Rubio-Ramírez, J., Waggoner, D., & Zha, T. (2010). Structural vector autoregressions: theory of identification and algorithms for inference. Review of Economic Studies, 77, 665–696.

  • Sahin, A., Song, J., Topa, G., & Violante, G. L. (2012). Mismatch unemployment (Staff Reports No. 566). Federal Reserve Bank of New York.

  • Shimer, R. (2005). The cyclical behavior of equilibrium unemployment and vacancies. The American Economic Review, 95, 25–49.

  • Stock, J. H., & Watson, M. W. (1996). Evidence on structural instability in macroeconomic time series relations. Journal of Business & Economic Statistics, 14, 11–30.

  • Stock, J. H., & Watson, M. W. (1998). Median-unbiased estimation of coefficient variance in a time-varying parameter model. Journal of the American Statistical Association, 93, 349–358.


Acknowledgements

The views in this paper are those of the authors and should not be interpreted as those of the Federal Reserve Bank of Richmond, the Board of Governors, or the Federal Reserve System. We are grateful to participants at the Applied Time Series Econometrics Workshop at the Federal Reserve Bank of St. Louis and the Midwest Macroeconomics Meetings at the University of Colorado Boulder for useful comments and suggestions.

Author information

Correspondence to Thomas A. Lubik.


Appendices

Appendix A: The Data

The series for real GDP (‘GDPC96, Real Gross Domestic Product, 3 Decimal, Seasonally Adjusted Annual Rate, Quarterly, Billions of Chained 2005 Dollars’) is from the U.S. Department of Commerce, Bureau of Economic Analysis. It is collected at the quarterly frequency and is seasonally adjusted. A quarterly seasonally adjusted series for the unemployment rate has been computed by converting the series UNRATE (‘Civilian Unemployment Rate, Seasonally Adjusted, Monthly, Percent, Persons 16 years of age and older’) from the U.S. Department of Labor, Bureau of Labor Statistics to the quarterly frequency by taking averages within the quarter. A monthly seasonally adjusted series for the vacancy rate has been computed as the ratio between the Help Wanted Index (HWI) and the civilian labor force. The HWI is from the Conference Board up until 1994Q4, and from Barnichon (2010) thereafter. The labor force series is from the U.S. Department of Labor, Bureau of Labor Statistics (‘CLF16OV, Civilian Labor Force, Persons 16 years of age and older, Seasonally Adjusted, Monthly, Thousands of Persons’). The monthly vacancy rate series has likewise been converted to the quarterly frequency by taking averages within the quarter.
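For instance, the within-quarter averaging used throughout can be sketched as follows; the monthly values below are purely hypothetical stand-ins, not the actual FRED data:

```python
import numpy as np

# Hypothetical monthly observations (two quarters' worth); in the text
# the actual inputs are FRED series such as UNRATE and CLF16OV.
monthly = np.array([5.0, 5.2, 5.4, 5.1, 5.0, 4.9])

# Convert to the quarterly frequency by taking averages within each
# quarter, as done for both the unemployment and vacancy rate series.
quarterly = monthly.reshape(-1, 3).mean(axis=1)
```

Reshaping into rows of three months and averaging row-wise implements exactly the within-quarter mean described above.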

Appendix B: Deconvoluting the Probability Density Function of \(\hat{\lambda}\)

This appendix describes the procedure we use in Section 2 to deconvolute the probability density function of \(\hat{\lambda}\). We consider the construction of a (1−α) % confidence interval for \(\hat{\lambda}\), [\(\hat{\lambda}_{(1-\alpha )}^{L},\hat{\lambda}_{(1-\alpha )}^{U}\)]. We assume for simplicity that \(\lambda _{j}\) and \(\hat{\lambda}\) can take any value over [0;∞). Given the duality between hypothesis testing and the construction of confidence intervals, the (1−α) % confidence set for \(\hat{\lambda}\) comprises all the values of \(\lambda _{j}\) that cannot be rejected based on a two-sided test at the α level. Given that an increase in \(\lambda _{j}\) automatically shifts the probability density function (pdf) of \(\hat{L}_{j}\) conditional on \(\lambda _{j}\) upwards, \(\hat{\lambda}_{(1-\alpha )}^{L}\) and \(\hat{\lambda}_{(1-\alpha )}^{U}\) are therefore such that:

$$ P \bigl( \hat{L}_{j}>\hat{L} | \lambda _{j}=\hat{ \lambda}_{(1-\alpha )}^{L} \bigr) =\alpha /2, $$
(18)

and

$$ P \bigl( \hat{L}_{j}<\hat{L}| \lambda _{j}=\hat{ \lambda}_{(1-\alpha )}^{U} \bigr) =\alpha /2. $$
(19)

Let \(\phi _{\hat{\lambda}}(\lambda _{j})\) and \(\varPhi _{\hat{\lambda}}(\lambda _{j})\) be the pdf and the cumulative distribution function (cdf) of \(\hat{\lambda}\), respectively, defined over the domain of \(\lambda _{j}\). The fact that [\(\hat{\lambda}_{(1-\alpha )}^{L},\hat{\lambda}_{(1-\alpha )}^{U}\)] is a (1−α) % confidence interval automatically implies that (1−α) % of the probability mass of \(\phi _{\hat{\lambda}}(\lambda _{j})\) lies between \(\hat{\lambda}_{(1-\alpha )}^{L}\) and \(\hat{\lambda}_{(1-\alpha )}^{U}\). This, in turn, implies that \(\varPhi _{\hat{\lambda}}(\hat{\lambda}_{(1-\alpha )}^{L})=\alpha /2\) and \(\varPhi _{\hat{\lambda}}(\hat{\lambda}_{(1-\alpha )}^{U})=1-\alpha /2\). Given that this holds for any 0<α<1, we therefore have that:

$$ \varPhi _{\hat{\lambda}}(\lambda _{j})=P ( \hat{L}_{j}> \hat{L}|\lambda _{j} ) . $$
(20)

Based on the exp-Wald test statistic, \(\hat{L}\), and on the simulated distributions of the \(\hat{L}_{j}\)’s conditional on the \(\lambda _{j}\)’s in Λ, we thus obtain an estimate of the cumulative distribution function of \(\hat{\lambda}\) over the grid Λ, \(\hat{\varPhi}_{\hat{\lambda}}(\lambda _{j})\). Finally, we fit a logistic function to \(\hat{\varPhi}_{\hat{\lambda}}(\lambda _{j})\) via nonlinear least squares and compute the implied estimate of \(\phi _{\hat{\lambda}}(\lambda _{j})\), \(\hat{\phi}_{\hat{\lambda}}(\lambda _{j})\), rescaling its elements so that they sum to one.
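The final step can be illustrated with a minimal sketch (not the authors' code): given a grid and stand-in values for the estimated cdf, it fits a logistic function by nonlinear least squares and recovers a normalized pdf by differencing. The grid and the "true" logistic parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical grid of lambda_j values and stand-in rejection
# frequencies playing the role of Phi_hat(lambda_j); in the actual
# procedure these come from simulated distributions of the statistic.
grid = np.linspace(0.0, 0.1, 21)
true_c, true_k = 0.04, 120.0
cdf_hat = 1.0 / (1.0 + np.exp(-true_k * (grid - true_c)))

def logistic(x, c, k):
    # Logistic cumulative function fitted to the estimated cdf.
    return 1.0 / (1.0 + np.exp(-k * (x - c)))

(c, k), _ = curve_fit(logistic, grid, cdf_hat, p0=[0.05, 50.0])

# Implied density: first differences of the fitted cdf over the grid,
# rescaled so that the elements sum to one.
pdf = np.diff(logistic(grid, c, k))
pdf /= pdf.sum()
```

Differencing the fitted cdf over the grid and rescaling mirrors the normalization of \(\hat{\phi}_{\hat{\lambda}}(\lambda _{j})\) described in the text.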

Appendix C: Details of the Markov-Chain Monte Carlo Procedure

We estimate (4)–(12) using Bayesian methods. The first two subsections describe our choices for the priors and the Markov-Chain Monte Carlo algorithm we use to simulate the posterior distribution of the hyperparameters and the states conditional on the data. The third subsection discusses how we check for convergence of the Markov chain to the ergodic distribution.

C.1 Priors

The prior distributions for the initial values of the states, \(\theta _{0}\) and \(h_{0}\), which we postulate to be normally distributed, are assumed to be independent both from each other and from the distribution of the hyperparameters. In order to calibrate the prior distributions for \(\theta _{0}\) and \(h_{0}\) we estimate a time-invariant version of (4) based on the first 15 years of data. We set:

$$ \theta _{0}\sim \mathcal{N} \bigl[ \hat{\theta}_{OLS},4\cdot \hat{V}(\hat{\theta}_{OLS}) \bigr] , $$
(21)

where \(\hat{V}(\hat{\theta}_{OLS})\) is the estimated asymptotic variance of \(\hat{\theta}_{OLS}\). As for \(h_{0}\), we proceed as follows. Let \(\hat{\varSigma}_{OLS}\) be the estimated covariance matrix of \(\epsilon _{t}\) from the time-invariant VAR, and let C be its lower-triangular Cholesky factor, \(CC^{\prime }=\hat{\varSigma}_{OLS}\). We set:

$$ \ln h_{0}\sim \mathcal{N}(\ln \mu _{0},10\times I_{N}), $$
(22)

where \(\mu _{0}\) is a vector collecting the logarithms of the squared elements on the diagonal of C. As stressed by Cogley and Sargent (2002), this prior is weakly informative for \(h_{0}\).
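The training-sample calibration can be sketched as follows; this is an illustration, not the authors' code: the data are simulated stand-ins for the 15-year pre-sample, and the VAR has one lag for brevity. The sketch computes \(\hat{\varSigma}_{OLS}\), its Cholesky factor C, and the prior mean \(\mu _{0}\) in (22).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training sample: 60 quarters (15 years) of N = 2 series,
# standing in for the pre-sample used to calibrate the priors.
T, N = 60, 2
Y = rng.standard_normal((T, N))

# Time-invariant VAR(1) by OLS: regress Y_t on a constant and Y_{t-1}.
X = np.column_stack([np.ones(T - 1), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
eps = Y[1:] - X @ B
Sigma_ols = eps.T @ eps / (T - 1 - X.shape[1])

# Lower-triangular Cholesky factor C with CC' = Sigma_ols; mu_0 collects
# the logs of the squared diagonal elements of C, the prior mean in (22).
C = np.linalg.cholesky(Sigma_ols)
mu0 = np.log(np.diag(C) ** 2)
```

The prior for \(\ln h_{0}\) would then be centered at `mu0` with covariance \(10\times I_{N}\), as in (22).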

Turning to the hyperparameters, we postulate independence between the parameters corresponding to the two matrices Q and A for convenience. Further, we make the following standard assumptions. The matrix Q is postulated to follow an inverted Wishart distribution:

$$ Q\sim \mathcal{IW} \bigl( \bar{Q}^{-1},T_{0} \bigr) , $$
(23)

with prior degrees of freedom \(T_{0}\) and scale matrix \(T_{0}\bar{Q}\). In order to minimize the impact of the prior, we set \(T_{0}\) equal to the minimum value allowed, the length of \(\theta _{t}\) plus one. As for \(\bar{Q}\), we calibrate it as \(\bar{Q}=\gamma \times \hat{\varSigma}_{OLS}\), setting \(\gamma =1.0\times 10^{-4}\), as in Cogley and Sargent (2002). This is a comparatively conservative prior in the sense of allowing little random-walk drift. We note, however, that it is smaller than the median-unbiased estimates of the extent of random-walk drift discussed in Section 2, which range between 0.0235 and 0.0327 for the vacancy rate equation, and between 0.0122 and 0.0153 for the unemployment rate equation. As for \(\alpha \), we postulate it to be normally distributed with a large variance:

$$ f ( \alpha ) =\mathcal{N}(0,10000\cdot I_{N(N-1)/2}). $$
(24)

Finally, we follow Cogley and Sargent (2002, 2005) and postulate an inverse-Gamma distribution for the variances of the stochastic volatility innovations, \(\sigma _{i}^{2}\equiv \operatorname {Var}(\nu _{i,t})\):

$$ \sigma _{i}^{2}\sim \mathcal{IG} \biggl( \frac{10^{-4}}{2}, \frac{1}{2} \biggr) . $$
(25)
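As an illustration of working with the inverse-Wishart distribution in (23), a draw of Q (here from a hypothetical conditional posterior whose scale adds stand-in state innovations to the prior scale) can be generated by inverting a Wishart draw; all dimensions and values below are illustrative assumptions, not the estimated quantities.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(2)

# Illustrative dimensions: k states, T0 prior degrees of freedom,
# T stand-in state innovations w_t = theta_t - theta_{t-1}.
k, T0, T = 3, 4, 100
Q_bar = 1e-4 * np.eye(k)                 # prior scale divided by T0
W = 0.01 * rng.standard_normal((T, k))   # stand-in state innovations

# Conjugate update: add the innovations' outer products to the scale.
scale_post = T0 * Q_bar + W.T @ W
df_post = T0 + T

# A draw Q ~ IW(scale_post, df_post) is the inverse of a Wishart draw
# whose scale matrix is the inverse of scale_post.
Q_draw = np.linalg.inv(
    wishart.rvs(df=df_post, scale=np.linalg.inv(scale_post), random_state=3)
)
```

Inverting a Wishart draw with the inverted scale is the standard way to sample from an inverse-Wishart when no direct sampler is available.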

C.2 Simulating the Posterior Distribution

We simulate the posterior distribution of the hyperparameters and the states conditional on the data using the following MCMC algorithm (see Cogley and Sargent 2002). In what follows, \(x^{t}\) denotes the entire history of the vector x up to time t, that is, \(x^{t}\equiv [ x_{1}^{\prime }, x_{2}^{\prime },\ldots , x_{t}^{\prime } ]^{\prime }\), while T is the sample length.

  1.

    Drawing the elements of \(\theta _{t}\): Conditional on \(Y^{T}\), \(\alpha \), and \(H^{T}\), the observation equation (4) is linear with Gaussian innovations and a known covariance matrix. Following Carter and Kohn (1994), the density \(p(\theta ^{T}|Y^{T},\alpha ,H^{T})\) can be factored as:

    $$ p\bigl(\theta ^{T}|Y^{T},\alpha ,H^{T}\bigr)=p \bigl(\theta _{T}|Y^{T},\alpha ,H^{T}\bigr)\prod _{t=1}^{T-1}p\bigl(\theta _{t}| \theta _{t+1},Y^{T},\alpha ,H^{T}\bigr). $$
    (26)

    Conditional on \(\alpha \) and \(H^{T}\), the standard Kalman filter recursions determine the first element on the right-hand side of (26), \(p(\theta _{T}|Y^{T},\alpha ,H^{T})=N(\theta _{T},P_{T})\), with \(P_{T}\) being the precision matrix of \(\theta _{T}\) produced by the Kalman filter. The remaining elements in the factorization can then be computed via the backward recursion algorithm found in Cogley and Sargent (2005). Given the conditional normality of \(\theta _{t}\), we have:

    $$ \theta _{t|t+1}=\theta _{t|t}+P_{t|t}P_{t+1|t}^{-1} ( \theta _{t+1}-\theta _{t} ) , $$
    (27)

    and

    $$ P_{t|t+1}=P_{t|t}-P_{t|t}P_{t+1|t}^{-1}P_{t|t}, $$
    (28)

    which provides, for each t from T−1 to 1, the remaining elements in (26), \(p(\theta _{t}|\theta _{t+1},Y^{T},\alpha ,H^{T})=N(\theta _{t|t+1},P_{t|t+1})\). Specifically, the backward recursion starts with a draw from \(\mathcal{N}(\theta _{T},P_{T})\), call it \(\tilde{\theta}_{T}\). Conditional on \(\tilde{\theta}_{T}\), (27)–(28) give us \(\theta _{T-1|T}\) and \(P_{T-1|T}\), thus allowing us to draw \(\tilde{\theta}_{T-1}\) from \(N(\theta _{T-1|T},P_{T-1|T})\), and so on until t=1.

  2.

    Drawing the elements of \(H_{t}\): Conditional on \(Y^{T}\), \(\theta ^{T}\), and \(\alpha \), the orthogonalized innovations \(u_{t}\equiv A(Y_{t}-X_{t}^{\prime }\theta _{t})\), with \(\operatorname {Var}(u_{t})=H_{t}\), are observable. Following Cogley and Sargent (2002), we then sample the \(h_{i,t}\)’s by applying the univariate algorithm of Jacquier et al. (1994) element by element.

  3.

    Drawing the hyperparameters: Conditional on \(Y^{T}\), \(\theta ^{T}\), \(H^{T}\), and \(\alpha \), the innovations to \(\theta _{t}\) and to the \(h_{i,t}\)’s are observable, which allows us to draw the hyperparameters, namely the elements of Q and the \(\sigma _{i}^{2}\)’s, from their respective distributions.

  4.

    Drawing the elements of \(\alpha \): Finally, conditional on \(Y^{T}\) and \(\theta ^{T}\), the \(\epsilon _{t}\)’s are observable. They satisfy:

    $$ A\epsilon _{t}=u_{t}, $$
    (29)

    with \(u_{t}\) being a vector of orthogonalized residuals with known time-varying variance \(H_{t}\). Following Primiceri (2005), we interpret (29) as a system of unrelated regressions. The first equation in the system is given by \(\epsilon _{1,t}=u_{1,t}\), while the following equations can be expressed as transformed regressions:

    $$\begin{aligned} \begin{aligned} \bigl( h_{2,t}^{-\frac{1}{2}}\epsilon _{2,t} \bigr) &=-\alpha _{2,1} \bigl( h_{2,t}^{-\frac{1}{2}}\epsilon _{1,t} \bigr) + \bigl( h_{2,t}^{-\frac{1}{2}}u_{2,t} \bigr), \\ \bigl( h_{3,t}^{-\frac{1}{2}}\epsilon _{3,t} \bigr) &=-\alpha _{3,1} \bigl( h_{3,t}^{-\frac{1}{2}}\epsilon _{1,t} \bigr) -\alpha _{3,2} \bigl( h_{3,t}^{-\frac{1}{2}}\epsilon _{2,t} \bigr) + \bigl( h_{3,t}^{-\frac{1}{2}}u_{3,t} \bigr) , \end{aligned} \end{aligned}$$
    (30)

    where the residuals are independently standard normally distributed. Assuming normal priors for each equation’s regression coefficients, the posterior is also normal and can be computed as in Cogley and Sargent (2005).
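As an illustration of step 1, the following minimal sketch (not the authors' code) runs the forward Kalman filter and the backward sampling recursion (27)–(28) for a scalar local-level model; the variances and simulated data are purely illustrative assumptions standing in for the full TVP-VAR state equation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar stand-in: y_t = theta_t + e_t, theta_t = theta_{t-1} + w_t,
# with illustrative standard deviations sig_e and sig_w.
T, sig_e, sig_w = 50, 0.5, 0.1
theta_true = np.cumsum(sig_w * rng.standard_normal(T))
y = theta_true + sig_e * rng.standard_normal(T)

# Forward pass: Kalman filter, storing filtered means m and variances P.
m = np.zeros(T)
P = np.zeros(T)
m_prev, P_prev = 0.0, 1.0
for t in range(T):
    P_pred = P_prev + sig_w**2              # predict
    K = P_pred / (P_pred + sig_e**2)        # Kalman gain
    m[t] = m_prev + K * (y[t] - m_prev)     # update with y_t
    P[t] = (1 - K) * P_pred
    m_prev, P_prev = m[t], P[t]

# Backward pass: draw theta_T from N(m_T, P_T), then recurse as in
# (27)-(28), conditioning each theta_t on the draw of theta_{t+1}.
draw = np.empty(T)
draw[-1] = m[-1] + np.sqrt(P[-1]) * rng.standard_normal()
for t in range(T - 2, -1, -1):
    P_pred = P[t] + sig_w**2                # P_{t+1|t}
    gain = P[t] / P_pred                    # P_{t|t} P_{t+1|t}^{-1}
    mean = m[t] + gain * (draw[t + 1] - m[t])   # (27)
    var = P[t] - gain * P[t]                    # (28)
    draw[t] = mean + np.sqrt(var) * rng.standard_normal()
```

In the actual algorithm the state is a vector and the gain terms are matrix products, but the recursion has exactly this shape.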

Summing up, the MCMC algorithm simulates the posterior distribution of the states and the hyperparameters, conditional on the data, by iterating on steps 1–4 above. In what follows, we use a burn-in period of 50,000 iterations to converge to the ergodic distribution. After that, we run 10,000 more iterations, sampling every 10th draw in order to reduce the autocorrelation across draws.
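The transformed regressions in step 4 can likewise be illustrated. This is a hedged sketch, not the authors' code: the data are simulated stand-ins for the \(\epsilon _{t}\)'s, and it draws the coefficient of the first transformed equation in (30) from its normal posterior under the diffuse prior (24).

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated stand-ins: eps_2 = -alpha21 * eps_1 + u_2, where u_2 has
# time-varying variance h2. True alpha21 is illustrative.
T, alpha21 = 200, 0.5
eps1 = rng.standard_normal(T)
h2 = np.exp(0.1 * rng.standard_normal(T))
u2 = np.sqrt(h2) * rng.standard_normal(T)
eps2 = -alpha21 * eps1 + u2

# Divide through by sqrt(h_{2,t}) so the residual is standard normal,
# as in the transformed regressions (30).
x = eps1 / np.sqrt(h2)
z = eps2 / np.sqrt(h2)

# Normal prior N(0, 10000) on the coefficient -alpha_{2,1}; the
# posterior is normal with the usual conjugate moments.
prior_var = 1e4
post_var = 1.0 / (1.0 / prior_var + x @ x)
post_mean = post_var * (x @ z)
coef_draw = post_mean + np.sqrt(post_var) * rng.standard_normal()
```

With the diffuse prior, the posterior mean is essentially the OLS coefficient of the transformed regression, and a draw from the normal posterior delivers one element of \(\alpha \).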

Appendix D: A Simple Search and Matching Model of the Labor Market

The model specification follows Lubik (2013). Time is discrete and the time period is a quarter. The model economy is populated by a continuum of identical firms that employ workers, each of whom inelastically supplies one unit of labor. Output \(Y_{t}\) of a typical firm is linear in employment \(N_{t}\):

$$ Y_{t}=A_{t}N_{t}. $$
(31)

\(A_{t}\) is a stochastic aggregate productivity process. It is composed of a permanent productivity shock, \(A_{t}^{P}\), which follows a random walk, and a transitory productivity shock, \(A_{t}^{T}\), which is an AR(1) process. Specifically, we assume that \(A_{t}=A_{t}^{P}A_{t}^{T}\).

The labor market matching process combines unemployed job seekers \(U_{t}\) with job openings (vacancies) \(V_{t}\). This can be represented by a constant-returns matching function, \(M_{t}=m_{t}U_{t}^{\xi }V_{t}^{1-\xi }\), where \(m_{t}\) is stochastic match efficiency, and 0<ξ<1 is the match elasticity. Unemployment is defined as those workers who are not currently employed:

$$ U_{t}=1-N_{t}, $$
(32)

where the labor force is normalized to one. Inflows to unemployment arise from job destruction at rate \(0<\rho _{t}<1\), which can vary over time. The dynamics of employment are thus governed by the following relationship:

$$ N_{t}=(1-\rho _{t}) \bigl[ N_{t-1}+m_{t-1}U_{t-1}^{\xi }V_{t-1}^{1-\xi } \bigr] . $$
(33)

This is a stock-flow identity that relates the stock of employed workers \(N_{t}\) to the flow of new hires \(M_{t}=m_{t}U_{t}^{\xi }V_{t}^{1-\xi }\) into employment. The timing assumption is such that once a worker is matched with a firm, the labor market closes. This implies that if a newly hired worker and a firm separate, the worker cannot re-enter the pool of searchers immediately and has to wait one period before searching again.
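To illustrate, iterating on (32)–(33) with constant, purely illustrative parameter values (not the calibration used in the text) converges to a steady-state employment level:

```python
# Deterministic sketch of the employment accumulation equation (33)
# with a constant separation rate rho, match efficiency m, match
# elasticity xi, and vacancy level V; values are illustrative only.
rho, m, xi, V = 0.1, 0.6, 0.7, 0.05

N = 0.9                                # initial employment, labor force = 1
for _ in range(500):
    U = 1.0 - N                        # unemployment, equation (32)
    matches = m * U**xi * V**(1 - xi)  # new hires from the matching function
    N = (1 - rho) * (N + matches)      # employment accumulation, (33)
```

At the fixed point, job destruction exactly offsets the inflow of new matches, which is the steady-state counterpart of the stock-flow identity above.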

The matching function can be used to define the job finding rate, i.e., the probability that a worker will be matched with a firm:

$$ p(\theta _{t})=\frac{M_{t}}{U_{t}}=m_{t}\theta _{t}^{1-\xi }, $$
(34)

and the job matching rate, i.e., the probability that a firm is matched with a worker:

$$ q(\theta _{t})=\frac{M_{t}}{V_{t}}=m_{t}\theta _{t}^{-\xi }, $$
(35)

where \(\theta _{t}=V_{t}/U_{t}\) is labor market tightness. From the perspective of an individual firm, the aggregate match probability \(q(\theta _{t})\) is exogenous and unaffected by individual decisions. Hence, for individual firms new hires are linear in the number of vacancies posted: \(M_{t}=q(\theta _{t})V_{t}\).
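The two rates in (34)–(35) can be illustrated numerically (with illustrative values for \(m\) and ξ): \(p(\theta )\) rises with tightness, \(q(\theta )\) falls with it, and the identity \(p(\theta )=\theta q(\theta )\) holds by construction.

```python
import numpy as np

# Job finding rate p (34) and job matching rate q (35) as functions of
# tightness theta; m and xi take illustrative values, not calibrated ones.
m, xi = 0.6, 0.7
theta = np.array([0.5, 1.0, 2.0])

p = m * theta**(1 - xi)   # worker's matching probability, rises with theta
q = m * theta**(-xi)      # firm's matching probability, falls with theta
```

This monotonicity is what makes tightness the key equilibrium object: a tighter market helps workers and hurts firms searching for matches.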

A firm chooses the optimal number of vacancies \(V_{t}\) to be posted and its employment level \(N_{t}\) by maximizing the intertemporal profit function:

$$ E_{0}\sum_{t=0}^{\infty }\beta ^{t} [ A_{t}N_{t}-W_{t}N_{t}- \kappa _{t}V_{t} ] , $$
(36)

subject to the employment accumulation equation (33). Profits are discounted at rate 0<β<1. Wages paid to the workers are \(W_{t}\), while \(\kappa _{t}>0\) is a firm’s time-varying cost of opening a vacancy. The first-order conditions are:

$$\begin{aligned} &N_{t}{:}\quad \mu _{t}=A_{t}-W_{t}+ \beta E_{t} \bigl[ (1-\rho _{t+1})\mu _{t+1} \bigr] , \end{aligned}$$
(37)
$$\begin{aligned} &V_{t}{:}\quad \kappa _{t}=\beta q(\theta _{t})E_{t} \bigl[ (1-\rho _{t+1})\mu _{t+1} \bigr] , \end{aligned}$$
(38)

where \(\mu _{t}\) is the multiplier on the employment equation.

Combining these two first-order conditions results in the job creation condition (JCC):

$$ \frac{\kappa _{t}}{q(\theta _{t})}=\beta E_{t} \biggl[ (1-\rho _{t+1}) \biggl( A_{t+1}-W_{t+1}+\frac{\kappa _{t+1}}{q(\theta _{t+1})} \biggr) \biggr] . $$
(39)

This captures the trade-off faced by the firm: the marginal effective cost of posting a vacancy, \(\frac{\kappa _{t}}{q(\theta _{t})}\), that is, the per-vacancy cost \(\kappa _{t}\) adjusted for the probability that the position is filled, is weighed against the discounted benefit from the match. The latter consists of the surplus generated by the production process net of wage payments to the workers, plus the benefit of not having to post a vacancy again in the next period.

In order to close the model, we assume in line with the existing literature that wages are determined based on the Nash bargaining solution: surpluses accruing to the matched parties are split according to a rule that maximizes their weighted average. Denoting the workers’ weight in the bargaining process as η∈[0,1], this implies the sharing rule:

$$ \mathcal{W}_{t}-\mathcal{U}_{t}=\frac{\eta }{1-\eta } ( \mathcal{J}_{t}-\mathcal{V}_{t} ) , $$
(40)

where \(\mathcal{W}_{t}\) is the asset value of employment, \(\mathcal{U}_{t}\) is the value of being unemployed, \(\mathcal{J}_{t}\) is the value of the marginal worker to the firm, and \(\mathcal{V}_{t}\) is the value of a vacant job. By free entry, \(\mathcal{V}_{t}\) is assumed to be driven to zero.

The value of employment to a worker is described by the following Bellman equation:

$$ \mathcal{W}_{t}=W_{t}+E_{t}\beta {}\bigl[ (1- \rho _{t+1})\mathcal{W}_{t+1}+\rho _{t+1} \mathcal{U}_{t+1}\bigr]. $$
(41)

Workers receive the wage \(W_{t}\), and transition into unemployment next period with probability \(\rho _{t+1}\). The value of searching for a job, when the worker is currently unemployed, is:

$$ \mathcal{U}_{t}=b_{t}+E_{t}\beta {}\bigl[ p_{t}(1-\rho _{t+1})\mathcal{W}_{t+1}+ \bigl(1-p_{t}(1-\rho _{t+1})\bigr)\mathcal{U}_{t+1} \bigr]. $$
(42)

An unemployed searcher receives stochastic benefits \(b_{t}\) and transitions into employment with probability \(p_{t}(1-\rho _{t+1})\). Recall that the job finding rate \(p_{t}\) is defined as \(p(\theta _{t})=M(V_{t},U_{t})/U_{t}\), which is increasing in tightness \(\theta _{t}\). It is adjusted for the probability that a completed match is dissolved before production begins next period. The marginal value of a worker \(\mathcal{J}_{t}\) is equivalent to the multiplier on the employment equation, \(\mathcal{J}_{t}=\mu _{t}\), so that the respective first-order condition defines the Bellman equation for the value of a job. Substituting the asset equations into the sharing rule (40) results in the wage equation:

$$ W_{t}=\eta ( A_{t}+\kappa _{t}\theta _{t} ) +(1-\eta )b_{t}. $$
(43)

Wage payments are a weighted average of the worker’s marginal product \(A_{t}\), of which the worker appropriates the fraction η, and the outside option \(b_{t}\), of which the firm obtains the portion (1−η). Moreover, the presence of fixed vacancy posting costs leads to a hold-up problem whereby the worker extracts an additional \(\eta \kappa _{t}\theta _{t}\) from the firm.

Finally, we can substitute the wage equation (43) into (39) to derive an alternative representation of the job creation condition:

$$ \frac{\kappa _{t}}{m_{t}}\theta _{t}^{\xi }=\beta E_{t} \biggl[ (1-\rho _{t+1}) \biggl( (1-\eta ) ( A_{t+1}-b_{t+1} ) -\eta \kappa _{t+1}\theta _{t+1}+\frac{\kappa _{t+1}}{m_{t+1}}\theta _{t+1}^{\xi } \biggr) \biggr] . $$
(44)


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Benati, L., Lubik, T.A. (2014). The Time-Varying Beveridge Curve. In: Schleer-van Gellecom, F. (eds) Advances in Non-linear Economic Modeling. Dynamic Modeling and Econometrics in Economics and Finance, vol 17. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-42039-9_5
