Renormalization schemes for SFT solutions

In this paper, we examine the space of renormalization schemes compatible with the Kiermaier and Okawa [arXiv:0707.4472] framework for constructing Open String Field Theory solutions based on marginal operators with singular self-OPEs. We show that, due to freedom in defining the renormalization scheme which tames these singular OPEs, the solutions obtained from the KO framework are not necessarily unique. We identify a multidimensional space of SFT solutions corresponding to a single given marginal operator.


Introduction
The problem of finding analytic String Field Theory (SFT) solutions corresponding to different boundary CFTs has received considerable attention in recent years. There are several different approaches to this problem, including a perturbative approach based on marginal deformations of the boundary CFT, as well as a more general non-perturbative approach based on boundary condition changing (bcc) operators. In much of this work, analytic solutions to SFT are constructed as wedge states with insertions. Operators inserted on the boundary of the wedge state are built using either a marginal operator or a bcc operator, together with ghosts and other universal elements such as the stress energy tensor. The largest technical challenge to be overcome in constructing SFT solutions is the generically singular nature of marginal deformation operators and bcc operators: when these operators have singular OPEs, regularization schemes must be introduced. The earliest attempts to construct SFT solutions dealt mainly with finite boundary operators (for example, [2,3]), but the challenge posed by singular operators has been mostly overcome: for marginal operators, in [1,4], and, for bcc operators, in several recent works, including [5,6,7]. Other approaches have also been considered in [8] and [9,10,11].
One issue that has received little attention is the uniqueness of the SFT solutions being constructed. In particular, there is the question of whether different renormalization schemes could result in different SFT solutions. Here, we attempt to study the issue of uniqueness, focusing on the formalism developed in [1]. This is one of the formalisms able to handle a time-dependent singular marginal deformation, such as the time-symmetric rolling tachyon solution generated by the exactly marginal operator $\sqrt{2}\cosh(X^0/\sqrt{\alpha'})$. Since the SFT solution is built out of renormalized operators, we can ask whether different choices of renormalization scheme will lead to different SFT solutions.
We do uncover a two-parameter family of SFT solutions all corresponding to the same marginal operator and discuss the possibility that more might exist. We do not, at this point, have a clear interpretation of these solutions. Notice that previous numerical studies of the rolling tachyon solutions with regular OPE have not always agreed on coefficients [2,5,12], and a few possible explanations for the meaning of those coefficients have been suggested in [13]. If these different solutions are gauge equivalent, our analytic approach might make it easier to demonstrate that fact. Should the solutions, however, prove not to be gauge equivalent, under the equivalence of boundary CFTs and open SFT solutions each of them would correspond to a new boundary CFT.
The structure of this paper is as follows: In section 2 we review the conditions that renormalized operators must meet for the construction of [1] to be valid, and provide a description of our initial approach. In section 3 we study the space of all possible renormalization schemes at second order in the deformation parameter. In section 4 we consider the cubic order. In section 5 we consider all orders for a particular two-parameter family of renormalization schemes and prove that all the conditions set out by [1] are satisfied for this family. In section 6 we discuss which of the free parameters present in the renormalization scheme actually affect the SFT solution, and how. Finally, we propose the existence of an even larger family of solutions. Appendix C contains the proof of an important technical result needed for the first BRST condition of [1], a proof which was not included in that paper.

Setup
The approach taken in [1] starts with a marginal operator V(t) with a self-OPE given by $V(t)V(0) \sim \frac{1}{t^2}$, with no $\frac{1}{t}$ term. A deformed boundary condition on the interval (a, b) is achieved by inserting an exponential of the marginal operator integrated between a and b, defined in terms of a Taylor series in the deformation parameter λ: $e^{\lambda V(a,b)} = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\,V(a,b)^n$, where $V(a,b) = \int_a^b dt\, V(t)$. Since V(t) has a singular self-OPE, the above expressions need to be regulated. We will denote the regulated (or renormalized) operators by enclosing them with $[\;]_r$. In [1], a list of conditions which must be satisfied by the renormalization procedure is given. If these conditions are satisfied, the formal solution constructed in [1] will satisfy the SFT equations of motion and be real; however, different renormalization schemes can possibly lead to different SFT solutions. The main goal of this paper is to examine the space of possible renormalization schemes compatible with the conditions required for a real SFT solution. We begin by reviewing the conditions that any renormalization scheme must satisfy to construct a SFT solution using the approach of [1]. These conditions are basically physical conditions which ensure that when $[e^{\lambda V(a,b)}]_r$ is inserted on the boundary, the effect is a conformal change of boundary conditions on the interval (a, b), and nothing else.
The first condition ensures that the insertion $[e^{\lambda V(a,b)}]_r$ does not modify the boundary conditions away from the interval (a, b). In particular, it requires that when products of operators that are inserted away from each other are renormalized, it is sufficient to renormalize each term separately. In other words, the renormalized operator factorizes for operators with disjoint support. For example:
$$\big[\ldots\, e^{\lambda_1 V(a,b)}\, e^{\lambda_2 V(c,d)} \ldots\big]_r = \big[\ldots\, e^{\lambda_1 V(a,b)}\big]_r\, \big[e^{\lambda_2 V(c,d)} \ldots\big]_r\,, \qquad \text{for } b < c\,.$$
Further, changing the boundary condition on the intervals (a, b) and (b, c) using the same deformation parameter should be the same as changing the boundary condition on the interval (a, c). In other words, renormalization should not spoil factorization of exponentials.
This condition was called the 'replacement condition' in [1] to differentiate it from the factorization condition (4a). We will continue to use this term. The next two conditions ensure that the resulting boundary condition is conformal. The first of these defines two local (unintegrated) operators $O_L$ and $O_R$, which play an important role in the solution; it requires the existence and finiteness of the renormalized operators $[O_L(a)\,e^{\lambda V(a,b)}]_r$ and $[e^{\lambda V(a,b)}\,O_R(b)]_r$, implying that the OPE of the marginal operator V with $O_{L,R}$ is not so singular that it cannot be renormalized within the scheme we choose. The second of these two assumptions expresses the fact that $Q_B$ is anti-commuting. To obtain a real solution, it is important not to violate the reflection symmetry. The last condition requires that the subtractions involved in renormalizing operators depend only on the operators being renormalized, and not on the size of the wedge state on which they are inserted. To achieve this last property, renormalization in [1] took two steps: in the first, the infinities were cancelled by subtracting the two-point function; in the second, the finite part of the two-point function (which depends on the width of the wedge) was compensated for. In contrast, we use the divergent part of the two-point function alone to cancel the divergence and then study the impact of the finite part on the renormalization scheme, which means that in our approach, (4f) is automatically satisfied. In addition to these explicitly stated conditions, a very natural condition of translation invariance was also implicit in [1].
At this point, it is relevant to ask what classes of operators we need to provide a renormalization scheme for. Clearly, we need to be able to renormalize exponentials and their products. This is done order by order, so operators such as $V(a,b)^n$ must be renormalizable. Further, the action of the BRST operator produces unintegrated insertions, so we must at least be able to write down operators such as $[V(a)\,e^{\lambda V(a,b)}]_r$. In fact, we will see that this is sufficient: we need to renormalize products of exponentials of integrated operators with possible insertions of a single unintegrated V on either the left, or the right, or both. These operators also arise naturally when derivatives are taken, for example: $\frac{\partial}{\partial a}\big[e^{\lambda V(a,b)}\big]_r$.
Once we have decided on the renormalization scheme for $[e^{\lambda V(a,b)}]_r$, derivative operators such as $Q_B[e^{\lambda V(a,b)}]_r$ and $\frac{\partial}{\partial a}[e^{\lambda V(a,b)}]_r$ will be fixed. The choice of renormalization scheme for operators such as $[V(a)\,e^{\lambda V(a,b)}]_r$ can influence the explicit form of the operators $O_{R,L}$ and the validity of natural properties such as (5), but it does not change $Q_B[e^{\lambda V(a,b)}]_r$ or $\frac{\partial}{\partial a}[e^{\lambda V(a,b)}]_r$ themselves. In other words, our choice of renormalization scheme for operators with unintegrated insertions will not affect the SFT solution. However, it does affect the linearity of the renormalization scheme (for example, property (5)).
We then need to ask: does the set of assumptions (4) imply that the renormalization scheme is linear? The answer is that the replacement condition (4b) can be interpreted as a statement about linearity. If we accept that $\ldots\, e^{\lambda V(a,c)} \ldots = \ldots\, e^{\lambda V(a,b)}\, e^{\lambda V(b,c)} \ldots$
(a statement about the singular operators and not about the renormalization), then the replacement condition seems to be a tautology. Its true meaning is revealed when we rewrite it order by order and then bring the combinatorial sum outside the renormalization. Viewed this way, the replacement condition becomes a nontrivial statement about linearity of the renormalization scheme when applied to exponentials and their products. We will see that this condition places restrictions on possible renormalization schemes. Repeated application of the replacement condition implies linearity for all operators of the form $V^{(n_1)}(a_1,b_1) \ldots V^{(n_k)}(a_k,b_k)$ with $a_1 < b_1 \le a_2 < \ldots \le a_k < b_k$, as long as the lengths $b_i - a_i$ of the intervals involved are all finite. It also applies to operators with an extra insertion of an unintegrated operator on either the left, the right or both (for example: $V(a_0)\,V^{(n_1)}(a_1,b_1) \ldots V^{(n_k)}(a_k,b_k)$ where $a_0 \le a_1$), but with one restriction: in all the parts of the sum, the unintegrated operator must be inserted at the same point. Therefore, the replacement condition does not imply such linearity properties as (5), which requires that linearity be extended to sums of operators where the unintegrated operator is inserted at different points. Such 'extended' linearity holds only for some choices of renormalization schemes involving unintegrated operator insertions. Linearity beyond the replacement condition does not seem necessary to construct the SFT solution, and does not affect the details of this solution. However, it is implicitly assumed in the analysis of conformal properties of the renormalized operator, for example in [4] (for more details, see section 3.3).
Our initial approach to renormalization will be to consider products of integrated operators, regulate them with a cutoff $\epsilon$ by modifying the domain of integration so that all insertions are separated by a minimum distance of $\epsilon$, and introduce counterterms that cancel the divergences as $\epsilon$ approaches zero. We will also modify the domain of integration so that the insertions of integrated operators are separated by at least $\epsilon$ from any fixed insertions. We will use the notation $(\;\cdot\;)_\epsilon$ to denote regulated operators. A crucial property of our regularization is that it is linear. To see this, consider the most general operator with n V-insertions, $\int \mathcal{M}\; V(t_1) \ldots V(t_n)$, where $\mathcal{M}$ is some measure on the insertion points: the $\epsilon$-regularization acts by restricting the measure, and therefore commutes with taking linear combinations of such operators. This linearity property will be important to ensure that our renormalization satisfies the replacement condition.
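As an elementary illustration of this cutoff-and-counterterm procedure, the following sketch treats the toy correlator $\langle V(t_1)V(t_2)\rangle = 1/(t_1-t_2)^2$ literally as a function, evaluates the regulated quantity $(V(a)V(a,b))_\epsilon = \int_{a+\epsilon}^b dt\,(t-a)^{-2}$ numerically, and checks that subtracting the $1/\epsilon$ divergence leaves an $\epsilon$-independent remainder. The function name `regulated_VV` and the specific numerical setup are our own, purely illustrative choices, not part of the scheme defined in the text.

```python
def regulated_VV(a, b, eps, n=200_000):
    # (V(a) V(a,b))_eps in the toy model <V(t1)V(t2)> = 1/(t1-t2)^2:
    # integrate 1/(t-a)^2 over [a+eps, b] by the midpoint rule, i.e. with
    # the integration domain modified to keep insertions eps apart.
    lo, hi = a + eps, b
    h = (hi - lo) / n
    return sum(h / (lo + (i + 0.5) * h - a) ** 2 for i in range(n))

a, b = 0.0, 1.0
for eps in (1e-1, 1e-2, 1e-3):
    bare = regulated_VV(a, b, eps)
    subtracted = bare - 1.0 / eps      # remove the divergent counterterm
    print(f"eps={eps:g}  bare={bare:10.3f}  subtracted={subtracted:.4f}")
# the bare value grows like 1/eps, while the subtracted value
# approaches the finite limit -1/(b-a)
```

The exact value here is $1/\epsilon - 1/(b-a)$, so any counterterm differing from $1/\epsilon$ by a finite constant also yields a finite limit; that residual constant is precisely the kind of finite ambiguity studied below.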
We should point out that our implementation here differs slightly from [1]. In particular, the $\epsilon$-regularization of the operator V(a,b)V(b,c) in that work was not linear. The difference between our definition (8b) and the definition of [1], equation (9), is illustrated in figure 1. This lack of linearity makes it difficult to see whether there exists a complete renormalization scheme consistent with the assumption (4b) using the approach in [1].
Figure 1: Comparison of the integration regions in our definition (8b) and in equation (9). The difference is the gray strips, which are not covered using the latter choice. The dashed line indicates the location of a singularity due to colliding operators.
In the $\epsilon \to 0$ limit, finite operators can be constructed by canceling the divergences in the regulated operators with counterterms, so that schematically $[A]_r = (A)_\epsilon - \text{counterterms}$. While the divergent part of the counterterms is fixed by the OPEs of the operators in question, the finite part is constrained only by the assumptions (4), and we will see that there is considerable freedom there.

Renormalization of operators quadratic in V
In this section, we begin to construct a general renormalization scheme, starting with the simplest nontrivial situation: operators quadratic in V . We will discuss some (but not all) of the conditions (4). Those we do not discuss in this section will be proved in all generality in section 5.
To start with, setting any other operator insertions aside, consider the operator V(a)V(a,b). The corresponding $\epsilon$-regulated operator has the following behaviour for small $\epsilon$: $(V(a)V(a,b))_\epsilon = \frac{1}{\epsilon} + \text{terms finite as } \epsilon \to 0$.
Therefore, the corresponding renormalized operator can be defined as $[V(a)V(a,b)]_r = (V(a)V(a,b))_\epsilon - G^L_{ab}$, where the counterterm $G^L_{ab}$ consists of the divergent part $\frac{1}{\epsilon}$ together with a finite part $C^L_{ab}$. Note that, by translation invariance, the counterterm $G^L_{ab}$ depends on a and b only through the difference b − a. The discussion for $G^R_{ab}$ parallels that of $G^L_{ab}$. Now, consider another operator requiring regularization, $V(a,b)^2$. (Our factor of $\frac{1}{2}$ is convenient when evaluating the integral: the two components in definition (8a) are equal to each other, and therefore either one of them is equal to $\frac{1}{2}(V(a,b)^2)_\epsilon$.) The corresponding renormalized operator is then defined by subtracting the "double" counterterm $G^D_{ab}$ for doubly integrated operators, with finite part $C^D_{b-a}$. By a similar process, we also define the renormalized product $[V(a,b)V(b,c)]_r$, where $G^E_{abc}$ is the "edge" counterterm for operators meeting only at a single shared edge, with finite part $C^E_{c-b,b-a}$. As we have discussed already in section 2, our choice of $C^{L,R}_{ab}$ cannot influence the SFT solution, but the choice of $C^D_{b-a}$ and $C^E_{c-b,b-a}$ certainly can. We now find restrictions on these finite parts of the counterterms due to the replacement condition (4b) and the more general assumption of linearity.
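The structure of the double counterterm can also be made concrete in the toy model $\langle V(t_1)V(t_2)\rangle = 1/(t_1-t_2)^2$. The sketch below (our own construction; the names `bare_V2` and `renorm_V2` and the parametrization of the finite part are illustrative assumptions) evaluates $(V(a,b)^2)_\epsilon$ over the cut region with $|t_1-t_2| > \epsilon$, subtracts the divergent part $2\Delta t/\epsilon + 2\ln\epsilon$, and adds a free finite part written as $C_0 + C_1\,\Delta t$, anticipating the form found in the next subsection.

```python
import math

def bare_V2(a, b, eps, n=100_000):
    # (V(a,b)^2)_eps in the toy model: the cut region consists of two
    # symmetric halves, so integrate t1 over [a, b-eps] numerically with
    # the exact inner integral over t2 in [t1+eps, b].
    lo, hi = a, b - eps
    h = (hi - lo) / n
    inner = lambda t1: 1.0 / eps - 1.0 / (b - t1)
    return 2.0 * sum(inner(lo + (i + 0.5) * h) * h for i in range(n))

def renorm_V2(a, b, eps, C0=0.0, C1=0.0):
    # subtract the divergent part of the double counterterm and add a
    # free finite part C0 + C1*(b-a)
    dt = b - a
    return (bare_V2(a, b, eps)
            - (2.0 * dt / eps + 2.0 * math.log(eps))
            + C0 + C1 * dt)

for eps in (1e-2, 1e-3):
    # eps-independent finite value, equal to -2 - 2 ln(b-a) for C0 = C1 = 0
    print(eps, renorm_V2(0.0, 1.0, eps))
```

Only the divergent part of the subtraction is forced; the finite remainder can be shifted freely at this stage, which is exactly the freedom that the conditions (4) will constrain.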

Replacement condition (4b)
We begin by using condition (4b) with an insertion of V(a) on the left. For the last term we have used the factorization condition (4a) to remove the renormalization.
Since the corresponding $\epsilon$-regulated expressions are trivially equal, we obtain that the finite part of the counterterm for $[V(a)V(a,b)]_r$ is a constant $C^L$ which does not depend on the size of the integration region. A similar argument, condition (4b) with an integrated operator inserted on the left, expanded to first order in λ and $\lambda_1$ and combined with translation invariance, together with the analogous statement for an extra integrated operator inserted on the right, implies that $C^E_{c-b,b-a} = C^E$ is independent of the values of a, b and c. Next, we examine condition (4b) without any extra insertions at second order in λ. Using the linearity of $(\;\cdot\;)_\epsilon$ and canceling the $\epsilon$-dependent terms, we find that the finite counterterm must be of the form $C^D_{\Delta t} = C_0 + C_1\,\Delta t$, and we must have $C_0 = -C^E$. At this point, our renormalization scheme is parametrized by four parameters: $C_0$, $C_1$, $C^R$ and $C^L$. We will see that $C^R$ and $C^L$ do not affect the eventual SFT solution (and we could have set them to zero without loss of generality) but that $C_0$ and $C_1$ do. It is worth mentioning that, at this order, our scheme reproduces that of [1] if we take $C_1 = 0$ and $C_0 = -1$, though comparison for operators with multiple integration regions is complicated by the discrepancy described in section 2.
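The origin of the linear form $C^D_{\Delta t} = C_0 + C_1\,\Delta t$ can be seen concretely in the toy model: the regulated double integrals over adjoining intervals recombine exactly, so the replacement condition forces all interval-length dependence of the finite counterterms to be additive. The closed forms below are our own elementary evaluations for the model correlator $1/(t_1-t_2)^2$ (the names `D` and `E` are illustrative, not notation from the text).

```python
import math

def D(dt, eps):
    # exact regulated integral of 1/(t1-t2)^2 over the square (a,b)^2
    # with |t1 - t2| > eps, written in terms of dt = b - a
    return 2.0 * (dt - eps) / eps - 2.0 * math.log(dt / eps)

def E(d1, d2, eps):
    # exact regulated 'edge' integral over (a,b) x (b,c) with
    # t2 - t1 > eps, where d1 = b - a and d2 = c - b
    return (1.0 + math.log((d2 + eps) / eps)
            - math.log((d2 + eps) / d2)
            - math.log((d1 + d2) / d1))

# joining (a,b) and (b,c) into (a,c): the bare integrals recombine
# exactly, so the finite counterterms must satisfy an additivity
# relation, which forces the linear form C0 + C1 * dt
for (d1, d2, eps) in [(0.3, 0.7, 1e-2), (1.2, 0.4, 1e-3)]:
    lhs = D(d1 + d2, eps)
    rhs = D(d1, eps) + D(d2, eps) + 2.0 * E(d1, d2, eps)
    print(lhs - rhs)   # zero to machine precision
```

Because the bare quantities satisfy this identity exactly at every $\epsilon$, any non-linear $\Delta t$ dependence in $C^D_{\Delta t}$ would survive the recombination and violate replacement.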

The BRST assumptions (4c) and (4d)
These two assumptions are easily proven at second order in λ. Throughout this section we will omit the limit $\epsilon \to 0$; it should be inferred. Recall the first BRST assumption, whose second-order (in λ) form, equation (23), involves a local operator $O_{L/R}$ to be determined. The behaviour of primitive operators when acted on by the BRST charge is not difficult to determine by integrating the BRST current on a contour around the operator in question. Using these results and the definition of the renormalization scheme, we can start working out the left-hand side of (23) explicitly.
The next step is to integrate by parts. The integral in the last term is equal, neglecting all terms that go to zero as $\epsilon \to 0$, to a total-derivative contribution, which brings the result to the form of (23) and allows us to read off $O_L$ and $O_R$. Our operators $O_{R,L}$ depend explicitly on the renormalization parameters $C^{L,R}$. However, this dependence only serves to cancel the dependence of the renormalization scheme on these parameters, so that in fact $[O_L(a)\,e^{\lambda V(a,b)}]_r$ and $[e^{\lambda V(a,b)}\,O_R(b)]_r$ are independent of $C^{L,R}$ (as they must be). Notice that condition (4e) implies a reflection relation between $O_L$ and $O_R$, but this does not restrict $C^R$ and $C^L$ to be equal. For the second BRST condition, (4d), at this order we just need to show that the corresponding expression vanishes. Writing $V(a)V(a+\epsilon) = \epsilon^{-2} + \text{finite terms}$ and $c(a)c(a+\epsilon) = \epsilon\, c\partial c(a) + \frac{1}{2}\epsilon^2\, c\partial^2 c(a) + \ldots$, several terms cancel, and what remains vanishes as $\epsilon \to 0$. Thus, the second BRST condition is satisfied at this order as well.

Linearity and boundary condition changing operators
In this section, we discuss linearity of the renormalization scheme beyond the replacement condition. We investigate the consequences of natural and related assumptions such as (34a) and (34b). One can ask why we would be interested in such conditions, given that they do not seem to be needed to construct a SFT solution. The answer is that these properties are related to the conformal properties of the corresponding boundary condition changing (bcc) operator. We might assume (as has been the focus of recent work, for example [5,6,7]) that the point where the boundary condition is changed behaves as if a bcc operator σ were inserted there. If this operator is primary and has conformal weight h(λ), we would expect a definite form for the BRST variation [4]. Thus, to compare with our formula for $O_L$, we need to know the form that the derivative takes. However, as the equivalent assumption (34a) is easier to investigate, we start there. Subtracting the expression implied by the definition of the $\epsilon$-regularization from equation (34a), and demanding that the result hold for arbitrary a and b, we obtain a new constraint $C^L = C_1$ and confirm our previous result that $C_0 = -C^E$. A similar argument with the unintegrated operator on the right implies that $C^R = C_1$.
Let us now examine the statement about a derivative, (34b). Using that $C_0 = -C^E$, we carefully examine the integration regions for the two $\epsilon$-regulated expressions and discover that they can be recombined, which yields a formula for the derivative, equation (40). The linear result (34b) holds only for $C_1 = C^L$, which is (unsurprisingly) the same condition that we obtained from requiring (34a). Now, using equation (40), we can compare equations (35) and (29). We see that the bcc operator must have conformal weight $\frac{1}{2}\lambda^2$ (this was already discussed in [4]) and that $C_1$ must be zero. Thus, interestingly, while we can assume any $C_1$ to construct a SFT solution, only for $C_1 = 0$ will this solution have a primary bcc operator.
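The derivative relation can be checked explicitly in the minimal toy-model scheme with all finite constants set to zero (consistent with the condition $C_1 = C^L$). The finite parts used below, $-2 - 2\ln(b-a)$ for $[V(a,b)^2]_r$ and $-1/(b-a)$ for $[V(a)V(a,b)]_r$, are our own evaluations for the model OPE $1/(t_1-t_2)^2$ with $C_0 = C_1 = C^L = 0$; the function names are likewise ours.

```python
import math

def V2_r(a, b):
    # finite value of [V(a,b)^2]_r in the toy model with C0 = C1 = 0
    return -2.0 - 2.0 * math.log(b - a)

def VL_r(a, b):
    # finite value of [V(a) V(a,b)]_r in the toy model with C_L = 0
    return -1.0 / (b - a)

a, b, h = 0.2, 1.0, 1e-6
# finite-difference estimate of (d/da) [V(a,b)^2]_r
dda = (V2_r(a + h, b) - V2_r(a - h, b)) / (2 * h)
print(dda, -2.0 * VL_r(a, b))   # both equal 2/(b-a) = 2.5 here
```

In this scheme the derivative of the doubly integrated operator reproduces (minus twice) the operator with an unintegrated insertion, as the linearity assumption demands; a nonzero $C_1 \neq C^L$ would shift the left-hand side by a constant and break the match.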
In our computation of the derivative, we were careful not to bring the $\Delta \to 0$ limit inside the renormalization bracket $[\ldots]_r$, as pathologies can develop when doing so. For example, an explicit computation using the VV OPE gives $[V(a-\Delta,a)^2]_r = -2\ln\Delta + \text{finite}$, which is infinite in the $\Delta \to 0$ limit. Naively, $\lim_{\Delta\to 0} V(a-\Delta,a)^2$ might be thought to be zero, since the operators are integrated over a set whose measure approaches zero. But this too is suspect, as it is not clear what $\lim_{\Delta\to 0} V(a-\Delta,a)^2$ means without any regularization. Further, at fixed $\epsilon$, $\lim_{\Delta\to 0}\,(V(a-\Delta,a)^2)_\epsilon = 0$, so we could write $\lim_{\Delta\to 0}\,[V(a-\Delta,a)^2]_r = -\lim_{\Delta\to 0} G^D_{\Delta}$, which is again infinite.
The divergence for $\Delta \to 0$ in equation (41) is necessary, and it has a simple interpretation in terms of the OPE of the corresponding boundary condition changing (bcc) operator, $\sigma(s)\sigma(0) \sim s^{-2h}\,(1 + \ldots)$, where h is the conformal weight of the bcc operator σ. As we already saw, the conformal weight is related to λ by $2h = \lambda^2$. At the lowest nontrivial order in λ, the divergent part of the above OPE is
$$\sigma(s)\sigma(0) = e^{-\lambda^2 \ln s} + \ldots = -\lambda^2 \ln s + \text{terms that are finite or higher order in } \lambda\,. \qquad (44)$$
The term $-\lambda^2 \ln s$ is exactly what we obtained in equation (41). Finally, we conclude this subsection with a warning. Naively, the two regulated operators in (46) should be equal. However, it is easy to see that the right-hand side of (47) is not the same as $[V(a,b)^2]_r$. In particular, (47) is missing the divergent $\ln\epsilon$ part of the counterterm, so it is not even finite. What went wrong? On the left-hand side of equation (47) we included a small operator $\lim_{t\to b} V(t)V(t,b)$, which is divergent even when regulated. The two operators in (46) would only be equal if we were able to commute the order of integration and regularization, which fails when the operators involved are small. Notice that we were careful not to use small operators when we wrote down equation (34a).
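The match between the $-\lambda^2 \ln s$ term of the σσ OPE and the logarithmic divergence of the renormalized operator at order $\lambda^2$ can be checked numerically in the toy model (finite constants set to zero; the function name `V2_r_width` is our own label for the finite part quoted above):

```python
import math

def V2_r_width(delta):
    # finite part of [V(a - delta, a)^2]_r in the toy model, C0 = C1 = 0
    return -2.0 - 2.0 * math.log(delta)

lam = 0.1
for delta in (1e-1, 1e-2, 1e-3):
    order_lam2 = 0.5 * lam**2 * V2_r_width(delta)  # (lam^2/2) [V^2]_r term
    ope_term = -lam**2 * math.log(delta)           # -lam^2 ln s from sigma-sigma
    print(delta, order_lam2 - ope_term)  # constant -lam^2: same divergence
```

The difference is $\Delta$-independent, so the two expressions diverge in the same way as $\Delta \to 0$, which is the content of the interpretation above.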
To summarize section 3, we have found that the factorization and replacement conditions restrict the finite parts of the counterterms to $C^D_{\Delta t} = C_0 + C_1\,\Delta t$, together with the constants $C^E$, $C^L$ and $C^R$. The parameters $C^R$ and $C^L$ do not change the SFT solution and could be set to zero without loss of generality. Insisting on linearity implies the further condition $C^L = C_1 = C^R$, while the bcc operator corresponding to the renormalized boundary deformation is primary only if $C_1 = 0$.

Third order
Before we plunge into a computation at all orders, we will consider our renormalization scheme at third order, i.e. the renormalization of a product of three Vs. At this order, we define a regularized operator involving a single integrated operator by following the same regularization pattern as we did for the quadratic operator, with the counterterms now carrying an extra superscript (3) to indicate that they arise at third order. We also define regularized operators involving two integrated operators and three integrated operators. Notice that we have four new and potentially different counterterms. Using translation invariance together with the factorization and replacement conditions in a way similar to that presented in the quadratic case, we can show that the constants $C_1$ and $C_0$ appearing here are necessarily the same as the ones used at quadratic order, but $C^{(3)}_0$ is a new independent constant. One can check, by examining all combinations, that the replacement condition at third order is satisfied for any value of $C^{(3)}_0$. We also need to define renormalized operators involving unintegrated insertions; using factorization and replacement conditions, these can be constrained as well. There are two new constants, the third-order analogues of $C^L$ and $C^R$. Just like $C^L$ and $C^R$, however, these constants cannot change the SFT solution and can only affect the form of the BRST insertions $O_L$ and $O_R$. For example, an explicit computation shows that the first BRST condition holds with corrected boundary operators. As we did at quadratic order, this should be compared with the expected BRST variation of a primary bcc operator. We see that the bcc operator corresponding to our solution is still primary at this order as long as $C_1 = 0$. At this order we did find one new free parameter that can affect the SFT solution: $C^{(3)}_0$. It is clear that if we were to continue our order-by-order approach to renormalization, we would find new free parameters.
However, at quartic and higher orders, this approach is unwieldy: it is hard to write down the most general renormalized operator that is demonstrably finite. To study renormalization to all orders, we will no longer try to map the space of all renormalizations, and will instead focus on a particular renormalization scheme. The scheme we choose will have $C_0$ and $C_1$ as free parameters; however, we will not add new constants at every order. We will return to the question of classifying all renormalization schemes in section 6.

Renormalization to all orders
In this section, we present an example renormalization scheme at all orders. Our scheme is demonstrably finite, and we prove that it satisfies all the conditions set out in section 2. At second order, our scheme matches that described in section 3, and it has the same two free parameters, $C_0$ and $C_1$.
To define the full renormalization scheme, we need to consider what kinds of singularities can appear when considering products of three or more operators. One class of singularities appears when any two of these operators are inserted at the same point; we can deal with this class of singularities by recursively subtracting the divergences that occur when any two operators are inserted at the same point. However, it is also possible to have additional singularities. Since the finite part of the OPE of any two operators that are close together will contain operators other than the identity, another operator inserted close by can then have a singular OPE with these operators. In other words, we can have additional divergences caused by three or more operators inserted at the same point. Following equation (4.10) of [1], we require that such singularities are not present, and restrict our arguments to a class of operators V for which the correlator with all pairwise divergences subtracted remains finite even when more than two of the coordinates $t_i$ collide simultaneously. This implies that to decide whether any renormalization scheme leads to a finite operator, we only have to ensure that the renormalized operator stays finite in the limit $t_i \to t_j$ for any pair of coordinates $t_i$ and $t_j$. With this restriction in place, it is sufficient, for composite operators with more than two factors, to subtract the divergence which results from any two operators coming together. Now, consider for example $e^{\lambda V(a,b)}$. As $\epsilon \to 0$, this operator diverges, and to regulate it we might propose an expression such as (58), built from the quadratic counterterm alone. Using the quadratic counterterm $G^D_{ab}$ to define the renormalization at all orders would correspond to making many choices about finite parts of higher order counterterms, for example choosing $C^{(3)}_0 = C_0$ at third order. However, as we will see in Appendix A, this definition does not lead to a finite operator.

The renormalization scheme
To obtain an operator that is demonstrably finite and that, for simplicity's sake, can be obtained from the quadratic counterterm alone, we will generalize equation (58) to include finite terms in the counterterm, using a function $g(t_1,t_2) = \frac{1}{(t_1-t_2)^2} + \text{finite terms}$. We can rewrite equation (14) using this new notation. To compare with equation (14), we notice that since the integrand in the resulting expression is finite, we can equivalently split the integral into two pieces (neither of which is finite for $\epsilon \to 0$). This introduces a new notation: $\Gamma^\epsilon_{a,b}(t_1,\ldots,t_n) := \{(t_1,\ldots,t_n) \mid a \le t_i \le b,\ |t_i - t_j| > \epsilon\}$. This is the same region of integration that is used for $(V(a,b)^n)_\epsilon$. For the sake of brevity, we often omit the list of parameters $(t_1,\ldots,t_n)$.
Requiring that the function $g^D_{ab}$ not depend on $\epsilon$, to match (14) we must impose condition (63a), which can be satisfied by, for example, the choice (63b). We will be able to show shortly that the details of the function $g^D_{ab}$ are not important as long as (63a) is satisfied. However, notice that the counterterm built from $g^D_{ab}$ does depend on a and b: it is not a 'local' regulator like that in (58).
At higher orders, we now make the following definition for a specific higher-order regularization scheme, (64). The guiding principle of this scheme, which makes it easier to prove that it satisfies all the required conditions, is to use the same integration region for every term related to a single renormalized integrated operator. This requirement fixes the finite parts of the counterterms at higher order in terms of those at quadratic order, so the only free parameters are $C_0$ and $C_1$ (which enter through the specific counterterm g we are using). For example, in the language of the previous section, we have $C^{(3)}_0 = C_0$. The renormalization scheme has a simple form when applied to an exponential, equation (65). The notation here is similar to that commonly used for the Chern–Simons action on a D-brane: under an n-dimensional integral, we include all the terms from the Taylor expansion of the integrand that have the right number of variables to saturate the integral. It is easy to see that this is the same definition as that in equation (64). Expanding the above in powers of λ, we obtain a different form, equation (66), involving a sum over permutations of the insertion points.
Reinstating the regularization allows us to remove the cumbersome symmetrization sum from the above expression, giving equation (67). In contrast to equation (66), the integrand in equation (67) is not finite, and the integration region must be modified appropriately. Now, consider a renormalization scheme with a different function $\tilde g^D_{ab}(t_1,t_2) = g^D_{ab}(t_1,t_2) + \Delta^D_{ab}(t_1,t_2)$, where the difference $\Delta^D_{ab}$ is assumed to be a finite function of $t_1$ and $t_2$. This implies, in particular, that if $\int_a^b ds_1\, ds_2\, \Delta^D_{ab}(s_1,s_2) = 0$, then the operator renormalized using $\tilde g^D_{ab}$ is the same as that renormalized using $g^D_{ab}$. However, if $\bar\Delta_{ab} := \int_a^b ds_1\, ds_2\, \Delta^D_{ab}(s_1,s_2) \neq 0$, the new operator is different, but the difference exponentiates.
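The exponentiation of the counterterms, and of a finite shift in them, can be checked in a scalar toy model where the integrated operator is replaced by a number v and the pairwise counterterm integral by a constant G. The order-by-order sum with k pairwise subtractions per order then resums in closed form, and shifting G by a finite amount multiplies the result by an overall exponential. This mimics only the combinatorial structure of the scheme, not its operator content; the function `renorm_exp` is our own construction.

```python
import math

def renorm_exp(lam, v, G, nmax=60):
    # order-by-order sum with k pairwise counterterm subtractions:
    # sum_n lam^n sum_k (-G)^k v^(n-2k) / (k! (n-2k)! 2^k)
    total = 0.0
    for n in range(nmax):
        for k in range(n // 2 + 1):
            total += (lam**n * (-G) ** k * v ** (n - 2 * k)
                      / (math.factorial(k) * math.factorial(n - 2 * k) * 2**k))
    return total

lam, v, G, dG = 0.7, 1.3, 0.9, 0.4
x = renorm_exp(lam, v, G)
print(x, math.exp(lam * v - lam**2 * G / 2))   # counterterms exponentiate
print(renorm_exp(lam, v, G + dG) / x,
      math.exp(-lam**2 * dG / 2))              # a finite shift in G exponentiates too
```

The sum factorizes as $e^{\lambda v}\,e^{-\lambda^2 G/2}$, so replacing $G \to G + \bar\Delta$ multiplies the whole operator by $e^{-\lambda^2 \bar\Delta/2}$, which is the scalar analogue of the statement that the difference between two schemes exponentiates.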

Renormalization of unintegrated operators
Having defined a regularization scheme for $V(a,b)^n$, we now move on to $V(a)V(a,b)^n$. Again, we want to exponentiate our second-order scheme. With a slight abuse of notation, we will use the same bracket to mean an operator in which a pairwise divergence between any two insertions is regulated by subtracting, as required, either $g^D_{ab}$, $g^L_{ab}$ or $g^R_{ab}$, where $g^L_{ab}$ is the counterterm function for a pair involving the left unintegrated insertion, and where the definition of $g^R_{ab}$ follows along similar lines. As was the case with $g^D_{ab}$, the exact form of the finite parts of the functions $g^L_{ab}$ and $g^R_{ab}$ is not important, and only their average value affects the operator. We have used a convenient and simple constant form in our definition above.
So, for example, this notation is used in the context of operators such as $[V(a)\,e^{\lambda V(a,b)}]_r$.

Multiple regions of integration and replacement condition (4b)
Using the exponential notation, we extend our definition (65) to more complicated operators with several regions of integration, as in (74), where we must define another counterterm function $g^E$. In equation (74), all functions should be considered zero outside of their natural domain, such as $(a_i, a_{i+1})^2$ for $g^D_{a_i,a_{i+1}}$. To remove ambiguity, we have decorated $\lambda_i V$ with its appropriate domain as well: $\lambda_i V_{a_i,a_{i+1}}$. Finally, special attention needs to be paid to the domain of $g^E_{a_i,a_{i+1},a_{i+2}}$. We have taken it to be the region $(a,b) \times (b,c) \cup (b,c) \times (a,b)$, instead of $(a,b) \times (b,c)$. This choice to double the domain of the function, which will be convenient below, has resulted in a factor of $\frac{1}{2}$ in the exponent containing $g^E_{a_i,a_{i+1},a_{i+2}}$. To verify that our renormalization scheme satisfies the replacement condition, we write a simpler version of (74) with only two exponentials and equal couplings $\lambda_1 = \lambda_2 = \lambda$. We can now prove replacement in exponential notation: the two renormalized expressions agree as long as $g^D_{ac} - (g^D_{ab} + g^D_{bc} + g^E_{abc})$ is a finite function and $\int_a^c d^2t\, \big[g^D_{ac} - (g^D_{ab} + g^D_{bc} + g^E_{abc})\big] = 0$. That $g^D_{ac} - (g^D_{ab} + g^D_{bc} + g^E_{abc})$ is finite is obvious from equations (63b) and (75a), keeping in mind that the union of the natural domains of $g^D_{ab}$, $g^D_{bc}$ and $g^E_{abc}$ is the same as the domain of $g^D_{ac}$. Further, $\int_a^c d^2t\, \big[g^D_{ac} - (g^D_{ab} + g^D_{bc} + g^E_{abc})\big] = G^D_{ac} - (G^D_{ab} + G^D_{bc} + G^E_{abc})$, which vanishes if $C_0 = C^E$, the same condition we obtained from the replacement condition at second order.
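The domain statement used in this argument, that the natural domains of $g^D_{ab}$, $g^D_{bc}$ and the doubled edge domain exactly cover the domain of $g^D_{ac}$, can be spot-checked by random sampling. The predicate names below are our own shorthand for the regions described in the text.

```python
import random

def in_gamma(t1, t2, lo, hi, eps):
    # the two-variable region Gamma^eps_{lo,hi}: both points inside
    # (lo, hi) and separated by more than eps
    return lo <= t1 <= hi and lo <= t2 <= hi and abs(t1 - t2) > eps

a, b, c, eps = 0.0, 0.6, 1.5, 0.05
random.seed(1)
ok = True
for _ in range(100_000):
    t1, t2 = random.uniform(a, c), random.uniform(a, c)
    whole = in_gamma(t1, t2, a, c, eps)
    # doubled edge domain: (a,b) x (b,c) together with (b,c) x (a,b)
    edge = ((a <= t1 <= b and b <= t2 <= c) or
            (b <= t1 <= c and a <= t2 <= b)) and abs(t1 - t2) > eps
    parts = in_gamma(t1, t2, a, b, eps) or in_gamma(t1, t2, b, c, eps) or edge
    ok = ok and (whole == parts)
print(ok)   # True: the subdomains exactly cover Gamma^eps_{a,c}
```

Since the regions coincide pointwise, the finiteness of $g^D_{ac} - (g^D_{ab} + g^D_{bc} + g^E_{abc})$ reduces to a statement about the finite parts of the counterterm functions alone.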

Assumptions (4a), (4e) and (4f)
To start with, we notice that the factorization assumption (4a) follows quite obviously from our renormalization scheme: renormalized operators which are inserted away from each other do not undergo any further renormalization when combined.
The assumption (4e) is also fairly straightforward to verify. Examining equation (65), we see that assumption (4e) is satisfied because the region of integration relevant to $[V(a,b)^n]_r$, parametrized by $t_1, \ldots, t_n$, is invariant under the map $t_i \to (a+b) - t_i$, and because $g^D_{ab}(t_1, t_2)$ is invariant under the same map. The last assumption, (4f), is trivial in our construction, since at no point in the renormalization of the integrated operators have we considered the wedge state on which they are embedded. By constructing the counterterms using the local OPE rather than the two-point functions, we have avoided any difficulties that this assumption may have caused. It is here that our approach differs from that of [1].
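To spell out the symmetry argument: under the reflection used for (4e), separations flip sign, so any counterterm that is an even function of the separation is unchanged. A minimal sketch, assuming (as for the double-pole subtraction) that the singular part of $g^D_{ab}$ depends only on $t_1 - t_2$ and is even in it:

```latex
t_i' = (a+b) - t_i
\;\Longrightarrow\;
t_1' - t_2' = -(t_1 - t_2), \qquad
g^D_{ab}(t_1', t_2') = g^D_{ab}(t_1, t_2) ,
```

while the region of integration (and its $\epsilon$-regulated version, which is symmetric about the midpoint $\frac{a+b}{2}$) maps to itself, so the renormalized operator is invariant under the reflection.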

The first BRST condition (4c)
Our proof follows that in [1] quite closely, while filling in some missing technical steps. We present it here in detail for completeness and to highlight where our lemma (92) comes in.
The renormalized operator we start with this time is the one given in (64). The limit $\epsilon \to 0$ will be implied throughout, but not stated explicitly. Also, because the counterterm $g^D_{ab}$ appears very frequently, we will drop the indices and simply refer to it as $g$ when this does not result in ambiguity.
We wish to show that the first BRST condition (4c) holds. The BRST operator $Q_B$ acts like a derivative on the marginal operators $V$ (see equation (24)), but not on the counterterms $g$. If the BRST operator acted on both, then its action on the renormalized operator would naturally contain complete total derivatives and the proof of the BRST condition would be simple. Since it does not, we effectively proceed as if it did and then subtract the unnecessary extra terms this generates. To do so, we need to give a precise implementation of this morally correct statement, which we will achieve with an add-and-subtract trick. The final result of this lengthy calculation is presented in equations (95) and (96). To begin, we use the action of the BRST operator on the marginal deformation, equation (24). We have left the factor $(n-2k)$ explicit (instead of canceling it against the exponential) so that the sum can be extended to $k = n/2$ for $n$ even. We now add and subtract a quantity, written in two equivalent forms in (83a) and (83b). In going between the two lines, we have shifted the range of $k$, cancelled a factor of $2k$ against the combinatorial factor in front, and relabeled the integration variables $t_i$ for $i < n$. Now we take (82), add (83a) and subtract (83b). This gives us two pieces, $A$ and $B$. To evaluate $A$, we observe that if we symmetrize its integrand over the variables $t_1, \ldots, t_{n-1}$, it becomes completely symmetric in all $n$ variables $t_i$, except for the factor of $c(t_n)$.
As we already showed when demonstrating the finiteness of the renormalization scheme, this integrand is completely finite. In this symmetrized form, it is safe to change the integration region to $(a,b)^n$ and perform the (trivial) integral over $t_n$ using the fundamental theorem of calculus. We can then change the remaining $(n-1)$-dimensional region of integration back to an $\epsilon$-regulated one, $\Gamma_{a+\epsilon,b-\epsilon}(t_1, \ldots, t_{n-1})$. Finally, we relabel the integration variables and obtain a simpler expression. This expression has a form reminiscent of $[V(a,b)^{n-1}(cV(b) - cV(a))]_r$, as required; however, one more adjustment is necessary: in $[V(a,b)^{n-1}(cV(b) - cV(a))]_r$, $c(a)\, g^L_{ab}(t_i, a)$ and $c(b)\, g^R_{ab}(t_i, b)$ should appear in the appropriate places, but in the expression above it is $c(a)\, g^D_{ab}(t_i, a)$ and $c(b)\, g^D_{ab}(t_i, b)$ that appear instead (we have restored the decorations on $g$ here to make this more apparent). Fortunately, the difference between $g^D_{ab}(a, t_i)$ and $g^L_{ab}(a, t_i)$ is finite, so we can trade one for the other at the cost of finite terms, whose form follows from the definitions in (63b) and (71a). To evaluate $B$, we notice that the integrand diverges whenever $t_{n-1}$ and $t_n$ approach each other, but not when these two variables approach any of the others. This alone is not enough to factorize the region of integration, but with (81) in mind we notice that the rest of the integrand (including the sum and combinatorial factors) is what we would see for $[V(a,b)^{n-2}]^g_r$, so there are no divergences due to $t_i$ approaching any other $t_j$ as long as $i < n-1$. In appendix C, we show that the regulated region factorizes for any function $f(\vec t\,)$ which is finite on $(a,b)^n$. Thus, the domain of integration can be changed to $\Gamma_{a,b}(t_1, \ldots, t_{n-2}) \times \Gamma_{a,b}(t_{n-1}, t_n)$ and we evaluate the integrals with respect to $t_{n-1}$ and $t_n$. Putting (91) and (93d) together, several terms cancel. Multiplying the result by $\lambda^n$ and then summing over $n$, we arrive at the precise form we wanted, presented in (95) and (96). As has already been discussed, the explicit dependence of $O_L$ and $O_R$ on $C_L$ and $C_R$ is there to cancel the dependence of the renormalization scheme on these parameters.
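The step that produces the boundary insertions from $A$ is, schematically, just the fundamental theorem of calculus applied to the symmetrized, finite integrand. Here $F$ stands for the full symmetrized integrand, a placeholder of ours rather than the paper's notation:

```latex
\int_{(a,b)^n} d^n t \; \partial_{t_n} F(t_1, \dots, t_n)
= \int_{(a,b)^{n-1}} d^{n-1} t \;
\Big[ F\big|_{t_n = b} - F\big|_{t_n = a} \Big] ,
```

which is how the combination $cV(b) - cV(a)$ appears, up to the finite counterterm mismatches discussed above.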

The second BRST condition (4d)
To avoid clutter, in this section we will set $C_L$ and $C_R$ to zero. As we have stressed, these constants are a matter of choice and do not affect the eventual SFT solution.

A note on notation
To prove the second BRST assumption, it will be useful to introduce more flexible notation than what we have used so far. In particular, in equation (64) we wrote the renormalized operator with its counterterms left implicit, as recalled in (98a). To be more specific, we could have written the counterterms out explicitly. Such notation will allow us to use a counterterm $g^D_{ab}$ whose parameters do not match the region of integration of $V$ exactly, as in (98c). Further, since the counterterms $g^D$, $g^L$ and $g^R$ will need to be modified independently, we will use a notation that lists the appropriate counterterms and (when necessary) their parameters.
To verify the second BRST condition, we must compute the expression in (99). From the first BRST condition, its second term is given by (100). In what follows, we need to know what happens when the BRST operator acts on operators renormalized using a different counterterm $\tilde g^D_{ab} = g^D_{ab} + \Delta^D_{ab}$ instead of $g^D_{ab}$. Recalling equation (69), we can write the result in the same form as before. If $\Delta^R_{ab} = \Delta^L_{ab}$, the first BRST condition has the same form and uses the same operators $O_{L/R}$ given in equation (96) for any counterterms $g^D$ and $g^{L/R}$. We will make use of this fact when $\tilde g$ is different from $g$ only because it uses values of $a$ and $b$ shifted by a small amount $\epsilon$. For example, we might have $\tilde g^R_{ab} = g^R_{a+\epsilon,b}$ and $\tilde g^L_{ab} = g^L_{a,b-\epsilon}$. Then, since we are taking $C_{L/R} = 0$ in this section, $\Delta^L_{ab} = \Delta^R_{ab} = \frac{1}{(b-a+\epsilon)^2} - \frac{1}{(b-a)^2}$, and we can use the results (95) and (96) without any changes.
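For the shifted counterterms used here, the difference is explicitly small; a one-line Taylor expansion shows it vanishes linearly with the regulator:

```latex
\Delta^L_{ab} = \Delta^R_{ab}
= \frac{1}{(b-a+\epsilon)^2} - \frac{1}{(b-a)^2}
= -\frac{2\epsilon}{(b-a)^3} + O(\epsilon^2)
\;\xrightarrow{\;\epsilon \to 0\;}\; 0 ,
```

so the modified scheme differs from the original only at order $\epsilon$, which is why (95) and (96) can be used without change.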
With these preliminaries out of the way, the main part of the proof of (4d) consists of calculating the first term in (99).
At this point, we introduce a small parameter $\epsilon$ which is implicitly taken to zero. Since the integrand is finite, we can modify the integration region. We make an $\epsilon$-sized modification to the integration region at $a$ to examine the divergence there and write the result, using notation (98c), in (103b). Using the fact that $Q_B(cV) = 0$, and then rewriting some operators in renormalized form with the understanding that the implicit counterterm regulator present in $[\;]_r$ is taken to zero before $\epsilon$, gives (103c). The BRST operator can now act on these renormalized operators using (102), since the $\epsilon$-regulator is holding the unintegrated insertion 'away', resulting in (103d). Rearranging and recombining some of the integrands into finite combinations, and in one place using the fact that $\int_{a+\epsilon}^{b} dt\, g^L_{ab}(t) = -1 + O(\epsilon)$, we get (103e). Where the integrands are finite, we can now remove the $\epsilon$-regulator on the integration region; the resulting expression is (103f), where, to simplify the last and most complicated term in equation (103e), we have examined the chain of equalities ending in (104c).²

² Consistent with our notation, we include a counterterm for $V(a)V(a+\epsilon)$ in ${}^\circ_\circ V(a)V(a+\epsilon)\cdots{}^\circ_\circ{}^{\tilde g_{ab}}$. The finite part of this counterterm is irrelevant since the ghost factor will suppress it.

These equalities, together with the observation that ${}^\circ_\circ\, cV(a)\, cV(a+\epsilon)\, V(a+\epsilon,b)^{n-2}\, {}^\circ_\circ{}^{\tilde g_{ab}}$ is of order $\epsilon$, imply that we can replace the parentheses in (103e) with $c\partial^2 c(a)$, together with the counterterm $\tilde g_{ab}$. Parenthetically, it is worth noting that we can use explicit third-order calculations to confirm that all of these steps are correct at that order; for example, at third order, (103c) and (103d) both match the explicitly computed third-order result (105). Finally, we add the two pieces (103f) and (100) together and sum the resulting expression over $n$. This proves that the second BRST assumption (4d) holds in this particular renormalization scheme at all orders.

Conclusions
In this section, we discuss the effect that our free parameters have on the corresponding SFT solution. We first discuss the effect of the free parameters already explicitly identified, and then consider the existence of other free parameters.
In sections 3 and 4, we discussed parameters such as $C_L$, $C_R$ and $C$ that affect the renormalization scheme in a relatively trivial way. They change only the explicit form of $O_R$ and $O_L$ but do not change the corresponding SFT solution. In contrast, the parameters $C_1$, $C_0$ and $C_0^{(3)}$ appear at first glance to affect both the renormalization scheme and the SFT solution. Let us examine these in some detail.
From equation (69), we see that our free parameters $C_0$ and $C_1$ produce a simple rescaling of the renormalized operator, shown in (108). To understand whether this implies a change in the SFT solution, we consider equations (3.11) and (3.12) of [1]. This pair of equations defines a string field $U$ from which the SFT solution of Kiermaier and Okawa is constructed. Following the details of the construction, we see that a rescaling of the renormalized operator by a $\lambda$-dependent factor changes $U$ and therefore has an impact on the SFT solution. This is because, in equation (109b), the interval on which $V$ is integrated is different at every order: $b - a = n - 1$. With a $\lambda$-dependent rescaling factor, in the resulting expression for $U$, the width of the integration interval will no longer match the power to which $V$ is raised, and the final expression for $U$ will be different. A numerical computation [14] indicates that $C_0$ and $C_1$ do indeed affect the SFT solution. We leave the question of whether SFT solutions given by different values of $C_1$ and $C_0$ are related by gauge transformations to future work, and offer only one more observation: introducing a nonzero $C_1$ is the same as replacing $V(t)$ with $V(t) - \lambda C_1$. Notice that the rescaling (108) is consistent with our comparison between equations (35) and (40): if $C_1$ is not zero, the $\frac{\partial}{\partial a}$ derivative in equation (35) has an additional term from the derivative acting on the rescaling factor $e^{-\lambda^2 C_1 (b-a)}$. The apparent non-primarity of the bcc operator seems to arise from this rescaling of the renormalized operator.
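The obstruction to absorbing this rescaling can be made schematic. Expanding the $\lambda$-dependent factor (this expansion is our illustration, not a formula from [1]):

```latex
e^{-\lambda^2 C_1 (b-a)} \, \big[ e^{\lambda V(a,b)} \big]_r
= \sum_{k=0}^{\infty} \frac{\big( -\lambda^2 C_1 (b-a) \big)^k}{k!}
  \; \sum_{m=0}^{\infty} \frac{\lambda^m}{m!} \, \big[ V(a,b)^m \big]_r \, ,
```

so at total order $\lambda^n$ one finds contributions with $m = n - 2k < n$ powers of $V$ integrated over an interval of width $b - a$. Once the construction fixes $b - a = n - 1$ order by order, the width no longer matches the power of $V$, and $U$ genuinely changes.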
Leaving now the confines of the renormalization scheme defined in section 5, we can ask what freedom remains at fourth order and beyond. This, however, is complicated. Not only are there more terms, but constructing the most general finite renormalization scheme at this order is nontrivial: recall that the naive guess in equation (59) turned out not to be finite (see Appendix A for details). We are not able to offer an analysis beyond third order here, but we briefly discuss a possible approach in the following subsection.

Renormalization operator
A good renormalization scheme must make the operator $e^{\lambda V(a,b)}$ finite and satisfy the conditions (4). In our analysis in sections 3 and 4, we saw that the conditions of factorization (4a) and replacement (4b) place strong constraints on the possible renormalization parameters. Since we have already identified the replacement condition as essentially a linearity condition, we could get this condition 'for free' by implementing our renormalization scheme as a linear operator. This approach requires the 'extended linearity' of (5), and so it produces restrictions such as $C_L = C_R = C_1$, which we will assume for this subsection.
Consider then an operator $L$ with the property that it correctly produces the counterterms at quadratic order. We could then ask whether, for any operator $A$ built out of integrated or fixed insertions of the marginal operator $V$, we should define the renormalization of $A$ directly through the exponential of this operator. The answer is no: this would be equivalent to using equation (59), which we know not to be finite. However, we might be able to 'patch up' this problem (and introduce more free parameters at the same time) by using a more general operator $L$. This gives us a parametrization of sorts of possible renormalization schemes at different orders: at quadratic order we recover the counterterms above, and at third order the free parameters uncovered in section 4 are shown, by an explicit calculation, to be reproduced. Since renormalization arises here through the action of an operator, it is naturally linear, so the replacement condition (4b) is automatically satisfied. If we want to satisfy the factorization condition (4a), we just need some strategically placed $\delta$-functions, as is explicit in equation (117).
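The combinatorics that an exponentiated pairwise counterterm operator must reproduce is that of Wick-style contractions: at order $n$ with $k$ subtractions there are $n!/(k!\, 2^k\, (n-2k)!)$ ways to choose the contracted pairs, the same combinatorial factor against which the $2k$ was cancelled in the first BRST proof. A small self-contained check (illustrative only; the function names are ours, not the paper's):

```python
from math import factorial

def pairing_count(n, k):
    # closed form: number of ways to choose k disjoint unordered pairs
    # out of n labeled insertions
    return factorial(n) // (factorial(k) * 2**k * factorial(n - 2 * k))

def count_by_recursion(n, k):
    # brute force: the first insertion is either left uncontracted,
    # or contracted with one of the later insertions; recurse on the rest
    def rec(free, k):
        if k == 0:
            return 1
        if len(free) < 2 * k:
            return 0
        first, rest = free[0], free[1:]
        unpaired = rec(rest, k)
        paired = sum(rec([x for x in rest if x != p], k - 1) for p in rest)
        return unpaired + paired
    return rec(list(range(n)), k)
```

For example, both functions give 3 full contractions for $n = 4$, $k = 2$, which is why a single exponential of a pairwise operator generates every subtraction exactly once.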
To extend this analysis to the next (quartic) order, we must account for the subleading divergences at fourth order that were uncovered in Appendix A. An explicit calculation gives additional divergent counterterms at fourth order. Finite terms are of course allowed as well, and will contribute additional free parameters. While we have not demonstrated that our scheme $[\;]^g_r$ is of this type, we believe this to be true.
With this approach, we could in principle write down the most general finite scheme at quartic order that satisfies conditions (4b) and (4a). Then, we would need to check that the BRST conditions do not impose any extra restriction on the free parameters. This would allow us to discover whether there are any free parameters at quartic order that affect the SFT solution in a nontrivial way, without analyzing all possible restrictions due to the replacement condition at this order.

A Comments on equation (59)
While the renormalization schemes (65) and (59) look very similar, they are not equivalent. The scheme we have been using, (65), is given at order $\lambda^n$ by (67), and (59) can be written out similarly. The critical difference between the two schemes lies in their regions of integration. We might try to argue that, since the integrand has no singularity where one of the $s_j$ approaches a $t_j$ or an $s_j$ belonging to another counterterm, the difference vanishes as $\epsilon \to 0$ and the difference between the integration regions shrinks. The flaw in this reasoning is that when, for example, $s_1$ is close to one of the $t_j$'s, the integrand does become large for $|s_2 - t_j| < \epsilon$, an integration region which is included in one case but not in the other. As a concrete example, the difference between the two schemes at third order can be computed explicitly; see (122). At fourth order, the problem becomes worse. Examining the difference between the two renormalization schemes at this order, we find a term that is not finite as $t_1$ approaches $t_2$, so the difference between the two renormalization schemes is not finite as $\epsilon$ approaches zero. Since $[V(a,b)^4]^g_r$ is demonstrably finite, it must be that the operator renormalized using the scheme (59) is not.

B Proof of equation (68)
We will explicitly write out the operator $[V(a,b)^n]^{\tilde g = g + \Delta}_r$ in order to compare it to the same operator renormalized with $g$. While the exponential form would automatically make the combinatorial factors 'work out', writing the operator out explicitly makes it easier to ensure that the integrand stays finite at every step, a crucial part of the proof.

C Proof of equation (92)
We wish to show that (92) holds for any function $f(\vec t\,)$ which is bounded on $(a,b)^n$. The difference between the integrals over the two regions can be written in terms of three other integrals. The first and second lines of the right-hand side both vanish independently, so we will compute them separately, starting with the first line.
Because the function $f$ is finite and is integrated over a region with area of order $\epsilon$, we notice that each of those integrals over $\vec t$ is $\epsilon$ times a finite function of one of the two remaining coordinates. Specifically, by defining $F(s)$ accordingly, the first line of (126) is
$$\epsilon \int_{\Gamma_{ab}} d^2 s \; \partial_{s_2} g^D_{ab}(s_1, s_2)\, c(s_2) \left( F(s_1) + F(s_2) \right) .$$
We will not need to know the precise form of $F(s)$ so long as it and its derivative are finite. With the full expression having an $\epsilon$ factor out front from the small area of the $t_i$ integral, we know that the finite part of $g^D_{ab}$ will not play any role, and we only need to consider the singular term. Integrating by parts, we obtain an expression which goes to zero in the $\epsilon \to 0$ limit. Turning now to the last line in (126), where $t_i$ is close to both $s_1$ and $s_2$, we define
$$F_2(s_1, s_2) = \frac{1}{\epsilon} \sum_{j=1}^{n} \int_{\Gamma_{ab} \,\cap\, \{|t_j - s_1| < \epsilon\} \,\cap\, \{|t_j - s_2| < \epsilon\}} d^n t \; f(\vec t\,) \, , \qquad F_3(s_1, s_2) = F_2(s_1, s_2)\, c(s_2) \, .$$
Both of these functions are finite for the same reasons as $F(s)$ above: they are finite functions integrated over a region with area proportional to $\epsilon$, and then divided by $\epsilon$. As with the other term, we will evaluate the term of interest by integrating by parts. For the terms with a factor of $\frac{1}{2}$ we will gather like denominators, shifting the integration variable when necessary to match intervals. For the other single integrals, the functions $F_3(s, a)$ and $F_3(s, b)$ can be Taylor expanded about the endpoints $a$ and $b$, and only the first term will contribute, with the rest of the Taylor series giving at most terms of order $O(\epsilon \ln \epsilon)$. For the double integrals, we will also Taylor expand $\partial_{s_2} F_2(s_1, s_2)\, c(s_2)$ in $s_2$ about $s_2 = s_1$, and again only the first term will contribute. In addition, the last two double integrals will not contribute at all because the $s_1$ integrals there provide extra suppression.
Here $\partial_2 F_2$ denotes the derivative with respect to the second parameter, and $\partial_1$ the derivative with respect to the first. Now we Taylor expand the numerators on the first line and evaluate an integral for everything else.
In order to remove the middle term, we would like to change $(\partial_1 - \partial_2)$ to $-(\partial_1 + \partial_2) = -\partial_s$ in the first term, which we can do by adding an extra $\partial_1$ piece. Looking back at the definition of $F_2(s_1, s_2)$, we see that it is a symmetric function of its two parameters, so that the two derivatives are equal when acting on the line $s_1 = s_2$. We thus obtain zero for all of (126).