Modular Forms III: The Petersson Inner Product
Given all the discussion of the Hecke algebra, we finally let the Hecke operators act on modular forms! We write
\[ V_k = \{\text{all functions } f: \mathbb{H} \to \mathbb{C}\}.\]
This has a right action of $G = \mathrm{GL}_2(\mathbb{Q})^+$ given by
\[ g: f \mapsto f \underset{k}{|}g.\]
Then we have $M_k \subseteq V_k^\Gamma$. For $f \in V_k^\Gamma$, and $g \in G$, we again write
\[ \Gamma g \Gamma = \coprod \Gamma g_i,\]
and then we have
\[ f \underset{k}{|} [\Gamma g \Gamma] = \sum f\underset{k}{|}g_i \in V_k^\Gamma.\]
Recall that when we defined the slash operator, we included a determinant factor. This gives us
\[ f\underset{k}{|} R(n) = f\]
for all $n \geq 1$, so the $R(n)$ act trivially. We also define
\[ T_n = T_n^k: V_k^\Gamma \to V_k^\Gamma\]
by
\[ T_n f = n^{k/2 - 1} f \underset{k}{|} T(n).\]
Since $\mathcal{H}(G, \Gamma)$ is commutative, there is no confusion by writing $T_n$ on the left instead of the right.
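For example, if $p$ is prime, then $\Gamma \begin{pmatrix} p & 0\\ 0 & 1 \end{pmatrix} \Gamma = \coprod \Gamma g_i$ with the $p + 1$ representatives $g_i$ given by $\begin{pmatrix} p & 0\\ 0 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & b\\ 0 & p \end{pmatrix}$ for $0 \leq b < p$ (this is the set $\Pi_p$ appearing in the proof below), and unwinding the definitions gives the classical formula
\[ T_p f(z) = p^{k - 1} f(pz) + \frac{1}{p} \sum_{0 \leq b < p} f\left(\frac{z + b}{p}\right). \]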
Proposition
- $T_{mn}^k = T_m^k T_n^k$ if $(m, n) = 1$, and \[ T_{p^{r + 1}}^k = T_{p^r}^k T_p^k - p^{k - 1} T_{p^{r - 1}}^k.\]
- If $f \in M_k$, then $T_n f \in M_k$. Similarly, if $f \in S_k$, then $T_n f \in S_k$.
- We have
\[ a_n (T_m f) = \sum_{1 \leq d | (m, n)} d^{k - 1}a_{mn/d^2} (f).\]
In particular,
\[ a_0(T_m f) = \sigma_{k - 1}(m) a_0(f). \]
We only prove the third part. If $r \in \mathbb{Z}$, then
\[ q^r \underset{k}{|} T(m) = m^{k/2} \sum_{e | m, 0 \leq b < e} e^{-k}\exp\left(2\pi i \frac{mzr }{e^2} + 2\pi i \frac{br}{e}\right),\]
where we use the fact that the elements of $\Pi_m$ are those of the form
\[ \Pi_m = \left\{ \begin{pmatrix} a & b\\ 0 & e \end{pmatrix} : ae = m, 0 \leq b < e \right\}. \]
Now for each fixed $e$, the sum over $b$ vanishes when $\frac{r}{e} \not\in \mathbb{Z}$, and is $e$ otherwise. So we find
\[ q^r \underset{k}{|} T(m) = m^{k/2} \sum_{e | (m, r)} e^{1 - k} q^{mr/e^2}. \]
So we have
\[T_m(f) = \sum_{r \geq 0} a_r(f) \sum_{e | (m, r)} \left(\frac{m}{e}\right)^{k - 1} q^{mr/e^2} \]
\[= \sum_{1 \leq d | m} d^{k - 1} \sum_{s \geq 0} a_{ms/d} (f) q^{ds} \]
\[= \sum_{n \geq 0} \sum_{d | (m, n)} d^{k - 1} a_{mn/d^2}(f) q^n,\]
where in the first step we put $d = m/e$ and $r = es$, and in the second we put $n = ds$. $\blacksquare$
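For example, when $m = p$ is prime, the formula in the third part reads
\[ a_n(T_p f) = a_{np}(f) + p^{k - 1} a_{n/p}(f), \]
where the second term is to be read as $0$ if $p \nmid n$.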
So we actually have a rather concrete formula for what the action looks like. We can use this to derive some immediate corollaries.
Corollary Let $f \in M_k$ be such that
\[ T_m(f) = \lambda f\]
for some $m > 1$ and $\lambda \in \mathbb{C}$. Then
- For every $n$ with $(n, m) = 1$, we have \[ a_{mn}(f) = \lambda a_n(f).\]
- If $a_0(f) \not= 0$, then $\lambda = \sigma_{k - 1}(m)$.
This gives a close relationship between the eigenvalues of $T_m$ and the Fourier coefficients. In particular, if we have an $f$ that is an eigenvector for all $T_m$, then we have the following corollary:
Corollary Let $0 \not= f \in M_k$ with $k \geq 4$, and suppose $T_m f = \lambda_m f$ for all $m \geq 1$. Then
- If $f \in S_k$, then $a_1(f) \not= 0$ and \[ f = a_1(f) \sum_{n \geq 1} \lambda_n q^n. \]
- If $f \not \in S_k$, then \[ f = a_0 (f) E_k. \]
Proof:
- We apply the first part of the previous corollary with $n = 1$, which gives $a_m(f) = \lambda_m a_1(f)$ for all $m \geq 1$. If $a_1(f)$ were $0$, then $f$ would vanish identically.
- Since $a_0(f) \not= 0$, we know $\lambda_n = \sigma_{k - 1}(n)$ and hence $a_n(f) = \sigma_{k - 1}(n) a_1(f)$ for all $n \geq 1$, by (both parts of) the previous corollary. So we have \[f = a_0(f) + a_1(f) \sum_{n \geq 1} \sigma_{k - 1}(n) q^n = A + B E_k\] for some constants $A, B \in \mathbb{C}$. But $A = f - B E_k$ is a constant that is also a modular form of weight $k \not= 0$, so $A = 0$, and comparing constant terms then gives $B = a_0(f)$.
Definition Let $f \in S_k \setminus \{0\}$. Then $f$ is a Hecke eigenform if for all $n \geq 1$, we have
\[T_n f = \lambda_n f \]
for some $\lambda_n \in \mathbb{C}$. It is normalized if $a_1(f) = 1$.
Theorem There exists a basis for $S_k$ consisting of normalized Hecke eigenforms.
So this is actually a typical phenomenon!
Example: We take $k = 12$, and $\dim S_{12} = 1$. So everything in here is an eigenvector. In particular,
\[ \Delta(z) = \sum_{n \geq 1} \tau(n) q^n\]
is a normalized Hecke eigenform. So $\tau(n) = \lambda_n$. Thus, from properties of the $T_n$, we know that
\[ \tau(mn) = \tau(m) \tau(n) \]
\[ \tau(p^{r + 1}) = \tau(p) \tau(p^r) - p^{11}\tau(p^{r - 1}) \] whenever $(m, n) = 1$ and $r \geq 1$.
We can do this similarly for $k = 16, 18, 20, 22, 26$, because $\dim S_k = 1$, with Hecke eigenform $f = E_{k - 12} \Delta$.
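As a quick sanity check, here is a minimal Python sketch, assuming the familiar product formula $\Delta = q \prod_{n \geq 1} (1 - q^n)^{24}$, that computes the first few $\tau(n)$ and verifies the two identities above in small cases; the truncation bound is arbitrary.

```python
# Minimal sketch: compute tau(n) from Delta = q * prod_{n>=1} (1 - q^n)^24
# and check the multiplicativity and Hecke recursion stated above.

N = 30  # work with q-expansions truncated at q^N

# Coefficients of prod_{n>=1} (1 - q^n)^24 modulo q^(N+1),
# built by repeatedly multiplying by (1 - q^n).
coeffs = [1] + [0] * N
for n in range(1, N + 1):
    for _ in range(24):
        coeffs = [coeffs[i] - (coeffs[i - n] if i >= n else 0)
                  for i in range(N + 1)]

def tau(m):
    # Delta = q * prod(...), so tau(m) is the coefficient of q^(m-1) in the product
    return coeffs[m - 1]

assert tau(1) == 1                               # Delta is normalized
assert tau(6) == tau(2) * tau(3)                 # tau(mn) = tau(m) tau(n) for (2, 3) = 1
assert tau(4) == tau(2) ** 2 - 2 ** 11 * tau(1)  # Hecke recursion at p = 2, r = 1
print([tau(m) for m in range(1, 7)])             # [1, -24, 252, -1472, 4830, -6048]
```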
To show that there is a basis, note that the Hecke operators commute, so it suffices to define an inner product on the space of cusp forms for which the Hecke operators are self-adjoint: a commuting family of self-adjoint operators on a finite-dimensional complex inner product space can be simultaneously diagonalized (and the resulting eigenforms can be normalized, since $a_1(f) \not= 0$ by the corollary above).
We let $f, g \in S_k(\Gamma)$. Then the function $y^k f(z) \overline{g(z)}$ is $\Gamma$-invariant, and is bounded on $\mathbb{H}$, since $f$ and $g$ vanish at the cusps. Also, recall that $\frac{d x\; d y}{y^2}$ is a $\mathrm{GL}_2(\mathbb{R})^+$-invariant measure. So we can define
\[ \langle f, g\rangle = \frac{1}{v(\Gamma)} \int_{\Gamma \setminus \mathbb{H}} y^k f(z) \overline{g(z)} \frac{d x\; d y}{y^2} \in \mathbb{C}, \tag{$*$} \]
where $\int_{\Gamma \setminus \mathbb{H}}$ means we integrate over any fundamental domain, and $v(\Gamma)$ is the volume of a fundamental domain,
\[ v(\Gamma) = \int_{\Gamma \setminus \mathbb{H}} \frac{d x\; d y}{y^2} = (\overline{\Gamma(1)}:\bar{\Gamma}) \int_\mathcal{D} \frac{d x\;d y}{y^2}.\]
The advantage of this normalization is that if we replace $\Gamma$ by a subgroup $\Gamma'$ of finite index, then a fundamental domain for $\Gamma'$ is the union of $(\bar{\Gamma}: \bar{\Gamma}')$ many fundamental domains for $\Gamma$. So the expression $(*)$ is independent of $\Gamma$, as long as both $f, g \in S_k(\Gamma)$.
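Concretely, for $\Gamma = \Gamma(1)$ and the standard fundamental domain $\mathcal{D}$, we get
\[ v(\Gamma(1)) = \int_{-1/2}^{1/2} \int_{\sqrt{1 - x^2}}^{\infty} \frac{d y\; d x}{y^2} = \int_{-1/2}^{1/2} \frac{d x}{\sqrt{1 - x^2}} = \frac{\pi}{3}. \]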
This is called the Petersson inner product.
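To get a feel for the definition, here is a rough numerical sketch in Python that approximates $\langle \Delta, \Delta \rangle$ for $\Gamma = \Gamma(1)$ by a naive midpoint rule over the standard fundamental domain, assuming the truncated product formula for $\Delta$ and $v(\Gamma(1)) = \pi/3$; the truncation parameters are arbitrary and only illustrative.

```python
# Rough sketch: approximate <Delta, Delta> = (3/pi) * int_D y^12 |Delta(z)|^2 dx dy / y^2
# over the standard fundamental domain D = { |Re z| <= 1/2, |z| >= 1 }.
import cmath
import math

def delta(z, terms=40):
    """Truncation of Delta(z) = q * prod_{n >= 1} (1 - q^n)^24, with q = e^{2 pi i z}."""
    q = cmath.exp(2j * math.pi * z)
    prod = 1 + 0j
    for n in range(1, terms + 1):
        prod *= (1 - q ** n) ** 24
    return q * prod

def petersson_norm_delta(nx=60, ny=300, y_max=10.0, k=12):
    """Midpoint-rule approximation; the integrand decays like e^{-4 pi y},
    so a modest cutoff y_max suffices."""
    total = 0.0
    dx = 1.0 / nx
    for i in range(nx):
        x = -0.5 + (i + 0.5) * dx
        y_low = math.sqrt(1.0 - x * x)      # lower boundary of D is the arc |z| = 1
        dy = (y_max - y_low) / ny
        for j in range(ny):
            y = y_low + (j + 0.5) * dy
            total += y ** (k - 2) * abs(delta(complex(x, y))) ** 2 * dx * dy
    return total / (math.pi / 3)            # divide by v(Gamma(1)) = pi / 3

print(petersson_norm_delta())               # a small positive real number
```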
Proposition
- The inner product is a Hermitian inner product on $S_k(\Gamma)$ that is invariant under the action of $\mathrm{GL}_2(\mathbb{Q})^+$.
- If $f, g \in S_k(\Gamma(1))$, then \[ \langle T_n f, g\rangle = \langle f, T_n g\rangle. \]
We only prove the second part. Note that $T_n$ is a polynomial with integer coefficients in $\{T_p : p \mid n\}$. So it is enough to do it for $n = p$. We claim that
\[ \langle T_p f, g\rangle = p^{\frac{k}{2} - 1} (p + 1) \langle f\underset{k}{|} \delta, g\rangle, \]
where $\delta \in \mathrm{Mat}_2(\mathbb{Z})$ is any matrix with $\det(\delta) = p$.
Assuming this, we let
\[ \delta^a = p \delta^{-1} \in \mathrm{Mat}_2(\mathbb{Z}), \]
which also has determinant $p$. Now as
\[ g|_k \begin{pmatrix} p & 0\\ 0 & p \end{pmatrix} = g, \]
we know
\[ \langle T_p f, g\rangle = p^{\frac{k}{2} - 1}(p + 1) \langle f|_k\delta, g\rangle = p^{\frac{k}{2} - 1} (p + 1) \langle f, g|_k \delta^{-1}\rangle = p^{\frac{k}{2} - 1} (p + 1) \langle f, g|_k \delta^{a}\rangle = \langle f, T_p g\rangle, \]
where the second equality uses the $\mathrm{GL}_2(\mathbb{Q})^+$-invariance of the inner product, the third uses $g|_k \begin{pmatrix} p & 0\\ 0 & p \end{pmatrix} = g$, and the last is the claim applied to $g$ and $\delta^a$.
To prove the claim, we let
\[\Gamma(1) \begin{pmatrix} p & 0\\0 & 1 \end{pmatrix} \Gamma(1) =\coprod_{0 \leq j \leq p} \Gamma(1) \delta \gamma_j \]
for some $\gamma_i \in \Gamma(1)$. Then we have
\[ \langle T_p f, g\rangle = p^{\frac{k}{2} - 1}\left\langle \sum_j f |_k \delta \gamma_j, g\right\rangle\]
\[= p^{\frac{k}{2} - 1} \sum_j \langle f|_k \delta \gamma_j, g |_k \gamma_j\rangle = p^{\frac{k}{2} - 1} (p + 1) \langle f|_k \delta, g\rangle, \]
using the fact that $g|_k \gamma_j = g$ and the $\mathrm{GL}_2(\mathbb{Q})^+$-invariance of the inner product. $\blacksquare$