Appendix A: Proof of Lemma 3.1
In this appendix we prove Lemma 3.1. The main line of the proof is based on the ideas in [6, 31]. For simplicity, we only consider a \(\delta \)-singularity in equation (1.2); hence \(u_0(x)=\delta (x)+f(x)\), where \(f(x)\) is sufficiently smooth and has compact support in the computational domain \(\Omega \).
A.1 The weight function
Let \(\varphi (x)\) be a positive bounded function, which will serve as a weight function. For any function \(q\in H^1_h,\) we define the weighted \(L^2\)-norm as
$$\begin{aligned} \Vert q\Vert _{\varphi ,D}=\left(\,\int _Dq^2\varphi dx\right)^{\frac{1}{2}} \end{aligned}$$
in the domain \(D.\) If \(\varphi =1\) or \(D=\Omega ,\) the corresponding subscript will be omitted.
In this paper, we will consider two weight functions, \(\varphi ^1(x,t)\) and \(\varphi ^{-1}(x,t)\), which determine the left-hand and right-hand boundaries, respectively, of the region \(\mathcal R _T\) such that, outside this region, we can recover the \((k+1)\)th order accuracy in the \(L^2\)-norm. Both weight functions are built from the cut-off exponential function \(\phi \in C^1(\mathbb R )\),
$$\begin{aligned} \phi (r)=\left\{ \begin{array}{l@{\quad }l}2-e^r,&r\le 0,\\ e^{-r},&r>0, \end{array}\right. \end{aligned}$$
and they are defined as the solutions of the linear hyperbolic problem,
$$\begin{aligned}&\varphi _t^a+\varphi _x^a=0,\end{aligned}$$
(7.1)
$$\begin{aligned}&\varphi ^a(x,0)=\phi \left(\frac{a(x-x_c)}{\gamma h^\sigma }\right), \end{aligned}$$
(7.2)
where \(\gamma >0,\,0<\sigma <1\) and \(x_c\) are three parameters which will be chosen later. We always assume \(\gamma h^{\sigma -1}\ge 1\) in this section.
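Note that the exact solution of (7.1)–(7.2) is simply the initial profile transported with unit speed,
$$\begin{aligned} \varphi ^a(x,t)=\phi \left(\frac{a(x-x_c-t)}{\gamma h^\sigma }\right), \end{aligned}$$
and that \(\phi \) is indeed \(C^1\) across \(r=0\), since \(\phi (0^-)=\phi (0^+)=1\) and \(\phi '(0^-)=\phi '(0^+)=-1\); moreover, \(\phi \) is positive, strictly decreasing and bounded by \(2\).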
In [31], the authors have listed several properties of the two weight functions. Here, we state those that will be used later.
Proposition 7.1
For each of the weight functions \(\varphi ^a(x,t)\), \(a=\pm 1\), the following properties hold:
$$\begin{aligned}&1\le \varphi ^a(x,t)\le 2,&\quad a(x-x_c-t)\le 0,\end{aligned}$$
(7.3)
$$\begin{aligned}&0<\varphi ^a(x,t)<h^s,&\quad a(x-x_c-t)>s \, \log (1/h)\gamma h^\sigma . \end{aligned}$$
(7.4)
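Both properties follow directly from the explicit formula \(\varphi ^a(x,t)=\phi (r)\) with \(r=a(x-x_c-t)/(\gamma h^\sigma )\): when \(a(x-x_c-t)\le 0\) we have \(r\le 0\) and \(\varphi ^a=2-e^r\in [1,2]\), while when \(a(x-x_c-t)>s\log (1/h)\gamma h^\sigma \) we have \(r>s\log (1/h)\) and hence \(\varphi ^a=e^{-r}<e^{-s\log (1/h)}=h^s\).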
Lemma 7.1
Let \(\mathbb V \) be a Gauss–Radau projection, either \(\mathbb P _-\) or \(\mathbb P _+\). For any sufficiently smooth function \(p(x)\) and any \(v_h\in V_h\), there exists a positive constant \(C\), independent of \(h\), \(p\) and \(v_h\), such that
$$\begin{aligned} \Vert \mathbb V ^\bot p\Vert _{\varphi ,D}&\le Ch^{k+1}\Vert \partial _x^{k+1}p\Vert _{\varphi ,D},\end{aligned}$$
(7.5)
$$\begin{aligned} \Vert \mathbb V ^\bot (\varphi v_h)\Vert _{\varphi ^{-1},D}&\le C\gamma ^{-1}h^{1-\sigma }\Vert v_h\Vert _{\varphi ,D},\end{aligned}$$
(7.6)
$$\begin{aligned} \Vert \mathbb V (\varphi v_h)\Vert _{\varphi ^{-1},D}&\le C\Vert v_h\Vert _{\varphi ,D}. \end{aligned}$$
(7.7)
Here \(D\) is either a single cell \(I_j\) or the whole computational domain \(\Omega \).
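Estimate (7.5) is the standard approximation property of the Gauss–Radau projections, with the weight simply bounded on each cell. A heuristic way to see the factor \(\gamma ^{-1}h^{1-\sigma }\) in (7.6) is the following: since \(\mathbb V \) reproduces polynomials of degree \(k\) on each cell, \(\mathbb V ^\bot (\varphi v_h)=\mathbb V ^\bot \left((\varphi -\bar{\varphi }_j)v_h\right)\) on \(I_j\), where \(\bar{\varphi }_j\) is the cell average of \(\varphi \); moreover, \(|\varphi -\bar{\varphi }_j|\le h\Vert \varphi _x\Vert _{L^\infty (I_j)}\le C\gamma ^{-1}h^{1-\sigma }\varphi \) on \(I_j\), because \(|\phi '(r)|\le \phi (r)\) and \(\varphi \) varies only slowly within a single cell, so that \(\Vert (\varphi -\bar{\varphi }_j)v_h\Vert _{\varphi ^{-1},I_j}\le C\gamma ^{-1}h^{1-\sigma }\Vert v_h\Vert _{\varphi ,I_j}\).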
Lemma 7.2
For any function \(v\in V_h\), the following identity holds:
$$\begin{aligned} \mathcal H (v,\varphi v)=-\frac{1}{2}\sum _j\varphi _{j+\frac{1}{2}}[v]^2_{j+\frac{1}{2}}+\frac{1}{2}(v,\varphi _x v). \end{aligned}$$
(7.8)
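This identity can be obtained by a cell-by-cell integration by parts in \(\mathcal H (v,\varphi v)\): with the upwind numerical flux, the interface contributions collect into the jump terms \(-\frac{1}{2}\sum _j\varphi _{j+\frac{1}{2}}[v]^2_{j+\frac{1}{2}}\), while moving the derivative onto the weight produces the volume term \(\frac{1}{2}(v,\varphi _xv)\). In particular, taking \(\varphi \equiv 1\) recovers the usual \(L^2\)-stability of the scheme, \(\mathcal H (v,v)=-\frac{1}{2}\sum _j[v]^2_{j+\frac{1}{2}}\le 0\).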
A.2 The smooth solution
We consider the following problem
$$\begin{aligned} v_t+v_x&= 0,\end{aligned}$$
(7.9)
$$\begin{aligned} v(x,0)&= v_0(x), \end{aligned}$$
(7.10)
where the initial condition \(v_0(x)\) is a sufficiently smooth function, modified from the original initial condition \(u_0(x)=\delta (x)+f(x)\), such that it agrees with \(u_0(x)\) for all \(x\in \Omega \backslash I_i\) and satisfies
$$\begin{aligned} |\partial _x^\alpha v_0(x)|\le Ch^{-\alpha -1},\quad x\in I_i, \end{aligned}$$
where \(I_i\) is the cell containing \(x=0\).
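The scaling \(h^{-\alpha -1}\) is the natural one here: on \(I_i\) the modified data must accommodate the \(\delta \)-singularity, whose \(L^2\)-projection onto polynomials of degree \(k\) on \(I_i\) has size \(O(h^{-1})\), and a function of size \(O(h^{-1})\) that varies over a cell of length \(h\) has \(\alpha \)th derivatives of size \(O(h^{-\alpha -1})\).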
A.3 Error representation and error equations
Denote the error by \(e=v-u_h\), where \(u_h\) is the numerical solution approximating equation (1.2) or equation (7.9). Clearly, \(e\) also satisfies the scheme (2.2) with \(g(x,t)=0\). We split the error as \(e=\eta -\xi \), where
$$\begin{aligned} \eta =v-\mathbb P _-v=\mathbb P _-^\bot v,\quad \mathrm{and}\quad \xi =u_h-\mathbb P _-v. \end{aligned}$$
Then, following [31] and using the identity (7.8) in the last step, we obtain
$$\begin{aligned} \frac{d\Vert \xi \Vert _\varphi ^2}{dt}&= 2\left(\xi _t,\mathbb P _+^\bot (\varphi \xi )\right)+2\left(\eta _t,\mathbb P _+(\varphi \xi )\right)+2\mathcal H (\xi ,\varphi \xi )-(\xi ,\varphi _x\xi )\\&= 2\Pi _1+2\Pi _2-\Pi _3, \end{aligned}$$
where
$$\begin{aligned} \Pi _1=\left(\xi _t,\mathbb P _+^\bot (\varphi \xi )\right),\quad \Pi _2=\left(\eta _t,\mathbb P _+(\varphi \xi )\right),\quad \Pi _3=\sum _j\varphi _{j+\frac{1}{2}}[\xi ]^2_{j+\frac{1}{2}}. \end{aligned}$$
First we estimate \(\Pi _1\). Denote \(w=\xi _t-\mathbb P _{k-1}\xi _t.\) From the scheme (2.3), we have
$$\begin{aligned} (\xi _t,w)_j=(\eta _t,w)_j-(e_t,w)_j=(\eta _t,w)_j-[\xi ]_{j-\frac{1}{2}}w^+_{j-\frac{1}{2}}. \end{aligned}$$
Since \(w\) and \(\mathbb P _+^\bot (\varphi \xi )\) are both orthogonal to polynomials of degree at most \(k-1\) on each cell, we may replace \(\xi _t\) by its component along \(w\) in \((\xi _t,\mathbb P _+^\bot (\varphi \xi ))_j\). Plugging the above into \(\Pi _1\) and defining \(\psi =\sqrt{\varphi }\), we obtain
$$\begin{aligned} (\xi _t,\mathbb P _+^\bot (\varphi \xi ))_j&= \left(\frac{(\xi _t,w)_j}{\Vert w\Vert _{I_j}^2}w,\mathbb P _+^\bot (\varphi \xi )\right)_j\\&= \left(\left((\eta _t,w)_j-[\xi ]_{j-1/2}w^+_{j-\frac{1}{2}}\right)\frac{w}{\Vert w\Vert _{I_j}^2},\mathbb P _+^\bot (\varphi \xi )\right)_j\\&\le \frac{C}{\Vert w\Vert _{I_j}}\left(\left|(\psi \eta _t,w)_j\right|+\left|[\psi \xi ]_{j-1/2}w^+_{j-\frac{1}{2}}\right|\right)\ \left\Vert\psi ^{-1}\mathbb P _+^\bot (\varphi \xi )\right\Vert_{I_j}\\&\le \frac{Ch^{1-\sigma }}{\gamma }\left(\Vert \eta _t\Vert _{\varphi ,I_j}^2+\Vert \xi \Vert _{\varphi ,I_j}^2\right) +\frac{Ch^{1/2-\sigma }}{\gamma }\left(\varphi _{j-1/2}[\xi ]_{j-1/2}^2+\Vert \xi \Vert _{\varphi ,I_j}^2\right). \end{aligned}$$
In the last step we used the inverse inequality \(|w^+_{j-\frac{1}{2}}|\le Ch^{-1/2}\Vert w\Vert _{I_j}\), the projection estimate (7.6), and Young's inequality. Summing up with respect to \(j\), we obtain
$$\begin{aligned} (\xi _t,\mathbb P _+^\bot (\varphi \xi ))\!\le \!\frac{Ch^{1-\sigma }}{\gamma }\left(\Vert \eta _t\Vert _\varphi ^2\!+\!\Vert \xi \Vert _\varphi ^2\right) \!+\!\frac{Ch^{1/2-\sigma }}{\gamma }\left(\sum _j\varphi _{j-1/2}[\xi ]_{j-1/2}^2\!+\!\Vert \xi \Vert _\varphi ^2\right). \end{aligned}$$
For \(\Pi _2\), the Cauchy–Schwarz inequality together with (7.7) gives
$$\begin{aligned} \Pi _2\le C\Vert \eta _t\Vert _\varphi \ \Vert \xi \Vert _\varphi \le C(\Vert \eta _t\Vert _\varphi ^2+\Vert \xi \Vert _\varphi ^2). \end{aligned}$$
Then, if we take \(\sigma =\frac{1}{2}\) and \(\gamma \) large enough, so that the jump terms arising from \(\Pi _1\) are absorbed by \(\Pi _3\), we have
$$\begin{aligned} 2\Pi _1+2\Pi _2-\Pi _3\le C\left(\Vert \eta _t\Vert _\varphi ^2+\Vert \xi \Vert _\varphi ^2\right). \end{aligned}$$
By Gronwall’s inequality,
$$\begin{aligned} \Vert \xi (T)\Vert _\varphi ^2\le C\int _0^T\Vert \eta _t\Vert _\varphi ^2dt+C\Vert \xi (0)\Vert _\varphi ^2. \end{aligned}$$
(7.11)
A.4 The final estimate
This part is almost the same as in [31]. We will only discuss the left-hand boundary of \(\mathcal R _T\) since the discussion for the right one is similar. Denote \(x_L(t)=t+x_c\) with
$$\begin{aligned} x_c=-2s\log (1/h)\gamma h^\sigma , \end{aligned}$$
where \(s\) and \(\gamma \) are sufficiently large and \(\sigma =1/2\). As we have mentioned before, the \(\delta \)-singularity in the initial datum is contained in the cell \(I_i\). Then, by property (7.4) of Proposition 7.1, we obtain \(0<\varphi ^1(x,0)<h^s\) for any \(x\in I_i\). We choose \(v_0\) to satisfy \(\mathbb P _kv_0=\mathbb P _ku_0=u_h(0)\); then
$$\begin{aligned} \Vert \xi (0)\Vert _\varphi \le \Vert \xi (0)\Vert _{\varphi ,\mathbb R \backslash I_i}+\Vert \xi (0)\Vert _{\varphi ,I_i}\le Ch^{k+1}\Vert f\Vert _{k+2}+Ch^{s-1/2}. \end{aligned}$$
If \(s\) is large enough, then \(\Vert \xi (0)\Vert _\varphi \le Ch^{k+1}.\)
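In the bound for \(\Vert \xi (0)\Vert _\varphi \), the first term comes from \(\Omega \backslash I_i\), where \(v_0=u_0\) is smooth and \(\varphi \le 2\), so the standard projection estimates apply; the second term comes from the single cell \(I_i\), where \(\Vert \xi (0)\Vert _{I_i}\le Ch^{-1/2}\), while the weight is bounded by a high power of \(h\) by (7.4), since \(x-x_c\approx 2s\log (1/h)\gamma h^\sigma \) there.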
Define the domain \(\mathcal R _T^+=(x_L(T),\infty )\); then
$$\begin{aligned} \Vert u_h-v\Vert _{\mathbb R \backslash \mathcal R _T^+}\le \Vert u_h-v\Vert _{\varphi ,\mathbb R \backslash \mathcal R _T^+}\le \Vert \eta \Vert _{\varphi ,\mathbb R \backslash \mathcal R _T^+}+\Vert \xi \Vert _\varphi \le Ch^{k+1}\Vert f\Vert _{k+1}+\Vert \xi \Vert _\varphi . \end{aligned}$$
To estimate the second term on the right-hand side, we need to use (7.11). Denote
$$\begin{aligned} w(t)=\max \left\{ x_{j+\frac{1}{2}}:x_{j-\frac{1}{2}}<t+\frac{1}{2}x_c\right\} , \end{aligned}$$
and \(\mathcal R _1(t)=(-\infty ,w(t)),\,\mathcal R _2(t)=\mathbb R \backslash \mathcal R _1(t)=(w(t),\infty )\). If \(\gamma h^{\sigma -1}\) is large enough, then \(\mathcal R _1(t)\) stays away from the bad interval \([t-h,t+h]\), where \(v(x,t)\ne u(x,t)\), and we have
$$\begin{aligned} \Vert \eta _t\Vert _{\varphi ,\mathcal R _1(t)}\le Ch^{k+1}\Vert f\Vert _{k+2}. \end{aligned}$$
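Here we used the fact that the projection \(\mathbb P _-\) is defined cell by cell, so that on \(\mathcal R _1(t)\) only the values of \(v\) in \(\mathcal R _1(t)\) enter; there \(v(x,t)=u(x,t)=f(x-t)\), and hence (7.5), applied to \(\eta _t=\mathbb P _-^\bot v_t=-\mathbb P _-^\bot v_x\) together with \(\varphi \le 2\), gives the bound in terms of \(\Vert f\Vert _{k+2}\).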
Now we proceed to estimate \(\Vert \eta _t\Vert _{\varphi ,\mathcal R _2(t)}\). Since \(\mathcal R _2(t)\) contains the whole bad region, we will use property (7.4) of the weight function, which gives \(\varphi \le h^s\) in this zone. Then we obtain
$$\begin{aligned} \Vert \eta _t\Vert _{\varphi ,{\mathcal{R }_{2}(t)}}&\le Ch^{s/2}\Vert \eta _t\Vert _{\mathcal{R }_{2}(t)}\le Ch^{s/2+k+1}\Vert \partial _x^{k+2}v\Vert _{\mathcal{R }_{2}(t)} \\&\le Ch^{(s-3)/2}+Ch^{s/2+k+1}\Vert f\Vert _{k+2,{\mathcal{R }_{2}(t)}}. \end{aligned}$$
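The first term above accounts for the cells of \(\mathcal R _2(t)\) covering the bad interval: there \(v(x,t)=v_0(x-t)\) satisfies \(|\partial _x^{k+2}v|\le Ch^{-k-3}\), so its \(L^2\)-norm over an interval of length \(O(h)\) is at most \(Ch^{-k-5/2}\), and \(h^{s/2+k+1}\cdot h^{-k-5/2}=h^{(s-3)/2}\).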
Similarly, we can estimate the right-hand side of the non-smooth region by using the weight function \(\varphi ^{-1}\). If we take \(s\) large enough, we have
$$\begin{aligned} \Vert u_h\!-\!u(x,T)\Vert _{\mathbb R \backslash \mathcal R _T^+}\!=\!\Vert u_h-v(x,T)\Vert _{\mathbb R \backslash \mathcal R _T^+}\!\le \! Ch^{k+1}\Vert f\Vert _{k+2}\!+\!Ch^{(s-3)/2}\!\le \! Ch^{k+1}. \end{aligned}$$