Assuming the following definition of $G[s]$, the context of this post is the convergence of $Re[G[s]]$ to $Re[\zeta[s]]$ on the boundary lines $Re[s]=1$ and $Re[s]=0$ of the critical strip.

**(1)** $\quad G[s]=\int_{1-\epsilon}^N\left(\left(\sum_{n=1}^\infty \delta[x-n]\right)-1\right)\,x^{-s}\,dx\,,\quad N\to\infty$

I’ve noticed $Re[G[s]]$ seems to approximate $Re[\zeta[s]]$ very closely for $Re[s]=1$ as $N\to\infty$.

The following two plots illustrate $Re[G[1+i\,t]]-Re[\zeta[1+i\,t]]$ for $N=100$ and $N=1000$ respectively. Note that the amplitude of the error oscillation is virtually constant across the entire range of $t$ in both plots, and that as $N$ increases by an order of magnitude from the first plot to the second, the amplitude of the error oscillation appears to decrease by an order of magnitude as well, suggesting it scales roughly like $\frac{1}{N}$.
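This scaling can be checked numerically. The following Python sketch (my own mpmath translation of the closed form given in (2) below, not the original Wolfram notebook) measures the largest Re-error over a small sample of ordinates $t$ for $N=100$ and $N=1000$:

```python
from mpmath import mp, mpc, zeta, power

mp.dps = 30
EPS = mp.mpf("1e-6")  # same epsilon as in the plots


def G(s, N):
    # Closed form of (1): the elementary integral term plus the partial zeta sum.
    s = mpc(s)
    tail = (power(N, 1 - s) - power(1 - EPS, 1 - s)) / (s - 1)
    return tail + sum(power(n, -s) for n in range(1, N + 1))


def max_err(sigma, N, ts):
    # Largest |Re(G - zeta)| over the sampled ordinates t.
    return max(abs((G(mpc(sigma, t), N) - zeta(mpc(sigma, t))).real) for t in ts)


e100 = max_err(1, 100, range(1, 11))
e1000 = max_err(1, 1000, range(1, 11))
```

On this sample the maximum error drops from about $5\times10^{-3}$ to about $5\times10^{-4}$, consistent with the oscillating $\frac{N^{-s}}{2}$ correction term of amplitude $\frac{1}{2N}$ that the Euler–Maclaurin expansion of $\zeta[s]$ suggests.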

I’ve been wondering whether the Prime Number Theorem predicts that $Re[G[s]]$ converges to $Re[\zeta[s]]$ for $Re[s]=1$ as $N\to\infty$.

I’ve also noticed $Re[G[s]]$ seems to approximate $Re[\zeta[s]]$ with an error bound of $\frac{1}{2}$ for $Re[s]=0$ as $N\to\infty$ and $Im[s]\to\infty$.

The following two plots illustrate $Re[G[i\,t]]-Re[\zeta[i\,t]]$ for $N=100$ and $N=1000$ respectively. Note that even though $N$ increases by an order of magnitude from the first plot to the second, there is no discernible decrease in the amplitude of the error oscillation.

I’ve been wondering whether the Riemann Hypothesis predicts that $Re[G[s]]$ approximates $Re[\zeta[s]]$ with an error bound of $\frac{1}{2}$ for $Re[s]=0$ as $N\to\infty$ and $Im[s]\to\infty$.

The formula I’m using to evaluate $G[s]$ is provided in (2) below to aid others who might be interested in exploring this relationship for themselves. All four plots above use the value $\epsilon=0.000001$ and the $Zeta[s]$ function provided by the Wolfram language as the reference for $\zeta[s]$.

**(2)** $\quad G[s]=\frac{N^{1-s}-(1-\epsilon)^{1-s}}{s-1}+\sum _{n=1}^N n^{-s}\,,\quad N\to\infty$
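For anyone without access to Mathematica, here is a Python/mpmath sketch of (2) (my own translation, so treat it as an assumption rather than the original code). It also illustrates the second observation: on $Re[s]=0$ the peak Re-error sits near $\frac{1}{2}$ for both $N=100$ and $N=1000$, i.e. increasing $N$ does not shrink it.

```python
from mpmath import mp, mpc, zeta, power

mp.dps = 25
EPS = mp.mpf("1e-6")  # same epsilon as in the plots


def G(s, N):
    # Formula (2): the integral over (1 - eps, N) in closed form
    # plus the partial sum of n^{-s} up to N.
    s = mpc(s)
    tail = (power(N, 1 - s) - power(1 - EPS, 1 - s)) / (s - 1)
    return tail + sum(power(n, -s) for n in range(1, N + 1))


# Peak |Re(G - zeta)| on the line Re[s] = 0 over a sample of ordinates t;
# the oscillation amplitude stays near 1/2 regardless of N.
peak = {
    N: max(abs((G(mpc(0, t), N) - zeta(mpc(0, t))).real) for t in (40, 45, 50, 55, 60))
    for N in (100, 1000)
}
```

The dictionary `peak` (a name I introduce here for illustration) holds values close to $\frac{1}{2}$ for both choices of $N$ on this sample.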