Commit 77b0f321 by Han-Miru Kim

### QM angular momentum, symmetries, harmonic oscillator

parent 8a7734ea
 \section{Formalism of Quantum Mechanics} In wave mechanics, we saw that particles were represented by wavefunctions that satisfy the \underline{linear} Schrödinger equation \begin{align*} i \hbar \del_t\Psi(x,t) = \left(- \frac{\hbar^{2}}{2m} \del_x^{2} + V(x,t)\right) \Psi(x,t) \end{align*} Since this equation is linear, the space of its solutions forms a $\C$-vector space. Moreover, for any fixed $t$, the vector space consists of $\C$-valued functions on $\R^{n}$. Letting $V = \Hom_{\textsf{Vec}-\C}(\R^{n},\C)$, we can say that such a wave-function corresponds to a function \begin{align*} \Psi: \R \to V, \quad t \mapsto \Psi(-,t) \end{align*} We will use upper-case letters $\Psi,\Phi,F,G$ for such functions, and lower-case letters $\psi,\phi,f,g$ for $\Psi(t),\Phi(t)$ etc. for fixed $t$. Moreover, we saw that the solutions should be square-integrable. So we also want \begin{align*} \Psi(-,t) \in L^{2}(\R^{n}) = \left\{f: \R^{n} \to \C : \int_{\R^{n}}\abs{f(x)}^{2} < \infty\right\} \end{align*} We also introduced an inner product \begin{align*} \braket{\psi|\phi} := \int dx\, \psi^{*}(x) \phi(x) \end{align*} which turns out to give us all the requirements of a \emph{Hilbert space}. \begin{dfn}[] A \textbf{Hilbert space} is a $\C$-vector space $\mathcal{H}$ with an inner product \begin{align*} \braket{-|-}: \mathcal{H} \times \mathcal{H} \to \C \end{align*} that is \emph{complete}\footnote{Every Cauchy sequence is convergent.} with respect to the norm $\|f\| := \sqrt{\braket{f|f}}$. Here, the inner product is a sesquilinear map that is \begin{itemize} \item linear in the second argument and semi-linear in the first argument:\footnote{Mathematicians often take the other convention that sesquilinear forms be linear in the first and semi-linear in the second. 
For our purposes, the physicist notation makes quite a few things nicer.} For all $\alpha \in \C, f,g,h \in \mathcal{H}$: \begin{align*} \braket{f|\alpha g + h} = \alpha \braket{f|g} + \braket{f|h}\\ \braket{\alpha f + g|h} = \alpha^{*} \braket{f|h} + \braket{g|h} \end{align*} \item positive definite: $\forall f \in \mathcal{H}$: \begin{align*} \braket{f|f} \geq 0, \quad \text{and} \quad \braket{f|f} = 0 \iff f = 0 \end{align*} \end{itemize} A Hilbert space is called \textbf{separable} if there exists a countable basis. This means that there exist $\psi_{1}, \psi_{2}, \ldots$ in $\mathcal{H}$ such that for all $\psi \in \mathcal{H}$, there exist $a_1,a_2,\ldots \in \C$ such that \begin{align*} \psi = \sum_{n \in \N} a_n \psi_n \end{align*} \end{dfn} \begin{xmp}[] The simplest example of a Hilbert space is $\C^{n}$ with the inner product \begin{align*} \braket{(v_i)_{1 \leq i \leq n}|(w_i)_{1 \leq i \leq n}} := \sum_{i=1}^{n} v_i^{\ast}w_i \end{align*} $L^{2}(\R^{n})$ is a separable Hilbert space with inner product \begin{align*} \braket{f|g} = \int dx\, f^{\ast}(x)g(x) \end{align*} \end{xmp} \begin{lem}[] (Assuming Zorn's Lemma:) Every separable Hilbert space has an orthonormal basis. \end{lem} \begin{proof} This follows from the Gram-Schmidt orthonormalisation process which you have seen in Linear Algebra II: By ``randomly choosing'' linearly independent $f_1,f_2,\ldots$ in $\mathcal{H}$, we set \begin{align*} h_1 &:= \frac{f_1}{\|f_1\|}\\ g_2 &:= f_2 - \braket{h_1|f_2}h_1, \quad h_2 := \frac{g_2}{\|g_2\|}\\ g_3 &:= f_3 - \braket{h_1|f_3}h_1 - \braket{h_2|f_3}h_2, \quad h_3 := \frac{g_3}{\|g_3\|} \end{align*} and so on. \end{proof} In the chapter about wave-functions, we saw that physical observables (such as position, momentum etc.) 
corresponded to \emph{eigenvalues of linear operators} \begin{align*} A: \mathcal{H} \to \mathcal{H} \end{align*} In particular, we saw the \textbf{momentum operator} \begin{align*} \hat{p}: L^{2}(\R) \to L^{2}(\R), \quad f(x) \mapsto (\hat{p}f)(x) := -i \hbar \del_x f(x) \end{align*} Of course, not all functions in $L^{2}(\R)$ are differentiable, but a result in functional analysis is that the differentiable functions form a \emph{dense} subset of $L^{2}(\R)$. So we have to broaden the definition of operators: operators are allowed to be defined only on dense subsets of $\mathcal{H}$. The \textbf{position operator} was defined as \begin{align*} \hat{x}: L^{2}(\R) \to L^{2}(\R), \quad f(x) \mapsto (\hat{x}f)(x) = x \cdot f(x) \end{align*} whose eigenvectors were the delta functions. \begin{dfn}[] Let $\mathcal{H}$ be a Hilbert space. An \textbf{operator} is a linear map \begin{align*} A: D(A) \subseteq \mathcal{H} \to \mathcal{H} \end{align*} where $D(A)$ is some dense subspace of $\mathcal{H}$. For a \emph{state} $\psi \in \mathcal{H}$, the \textbf{expectation value} of an operator $A$ is \begin{align*} \scal{A}_{\psi} := \scal{\psi| A\psi} \end{align*} The \textbf{adjoint} of an operator $A$ is the operator $A^{\dagger}$ uniquely determined as follows: $D(A^{\dagger})$ consists of those $f \in \mathcal{H}$ for which there exists $A^{\dagger}f \in \mathcal{H}$ with \begin{align*} \braket{f|A g} = \braket{A^{\dagger}f|g} \quad \forall g \in D(A) \end{align*} An operator $A$ is called \textbf{self-adjoint} if \begin{align*} A^{\dagger} = A \end{align*} \end{dfn} In the finite dimensional case $\mathcal{H} = \C^{n}$, the adjoint of a matrix $A \in \C^{n \times n}$ is the conjugate-transpose ${A^{\ast}}^{T}$. \begin{rem}[] There is a slightly weaker version of self-adjointness: an operator $A$ is called \textbf{symmetric} if for all $\psi \in D(A): \braket{\psi|A \psi} = \braket{A \psi|\psi}$. The difference is that if $A$ is symmetric, its actual adjoint could have domain $D(A^{\dagger}) \neq D(A)$. 
One can show that if $A$ is symmetric, one always has the inclusion $D(A) \subseteq D(A^{\dagger})$. \end{rem} \subsection{Dirac Notation} We call the vectors of $\mathcal{H}$ ``kets'' and the covectors of $\mathcal{H}^{\ast} = \Hom(\mathcal{H},\C)$ ``bras''. \begin{align*} \ket{\psi} \in \mathcal{H}, \quad \bra{\psi} \in \mathcal{H}^{\ast} \end{align*} where application is denoted by \begin{align*} \braket{\alpha|f} := \bra{\alpha}( \ket{f}) \in \C \end{align*} For a normalised $\ket{\alpha}$, the operator $\ket{\alpha}\bra{\alpha}$ is a projector, i.e.\ it satisfies the relation \begin{align*} \left(\ket{\alpha}\bra{\alpha}\right) \circ \left(\ket{\alpha}\bra{\alpha}\right) = \ket{\alpha}\bra{\alpha} \end{align*} \begin{thm}[Spectral theorem] Let $(\psi_n)_{n \in \N}$ be an ONB of a Hilbert space $\mathcal{H}$. Then \begin{align*} P_n = \ket{\psi_n} \bra{\psi_n}: \mathcal{H} &\to \mathcal{H}\\ \psi_m &\mapsto \ket{\psi_n} \cdot \underbrace{\braket{\psi_n|\psi_m}}_{\in \C} \end{align*} is a projector whose image is the subspace generated by $\ket{\psi_n}$. In particular, every self-adjoint operator $A: D(A) \to \mathcal{H}$ whose eigenvectors form an ONB $(\psi_n)_{n \in \N}$ with eigenvalues $\lambda_n$ can be decomposed into a linear combination of projectors: \begin{align*} A = \sum_{n \in \N} \lambda_n P_n \end{align*} \end{thm} As a result, one finds that the expectation value of $A$ for some state $\psi$ is given by \begin{align*} \scal{A}_{\psi} = \braket{\psi|A \psi} = \sum_{n \in \N} \lambda_n \braket{\psi|\left(\ket{\psi_n} \bra{\psi_n}\right)(\psi)} = \sum_{n \in \N} \lambda_n \braket{\psi|\psi_n} \braket{\psi_n|\psi} = \sum_{n \in \N}\lambda_n \abs{\braket{\psi|\psi_n}}^{2} \end{align*}
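In a finite-dimensional toy model, the spectral decomposition and the resulting formula for $\scal{A}_{\psi}$ can be checked numerically. The following Python sketch is illustrative only: the Hermitian matrix and the state are random stand-ins for an observable and a wave function, and we verify that summing $\lambda_n \abs{\braket{\psi_n|\psi}}^{2}$ reproduces $\braket{\psi|A\psi}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian "observable" on C^4 (finite-dimensional stand-in for H).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# Normalised state psi.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Spectral decomposition: eigh returns real eigenvalues and an ONB of eigenvectors.
lam, U = np.linalg.eigh(A)

# <A>_psi via the projector sum  sum_n lambda_n |<psi_n|psi>|^2 ...
via_projectors = sum(l * abs(U[:, n].conj() @ psi) ** 2 for n, l in enumerate(lam))

# ... equals the direct expectation value <psi|A psi>.
direct = (psi.conj() @ A @ psi).real

assert np.isclose(via_projectors, direct)
```

Here `np.linalg.eigh` plays the role of the spectral theorem for the finite-dimensional, self-adjoint case.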
 The interpretation of this formula tells us the following \begin{itemize} \item[(d')] The possible results after measuring $A$ are the eigenvalues $\lambda_n$ \item[(e')] The probability of getting the result $\lambda$ is \begin{align*} \mathbb{P}[A = \lambda] = \sum_{\lambda_n = \lambda} \abs{\braket{\psi_n|\psi}}^{2} \end{align*} \end{itemize} These are the generalisations of the postulates (d) and (e) from the previous section. The other postulates (a), (b), (c) from the beginning of this chapter can also be generalised. \begin{itemize} \item[(a')] For a given system, the set of states is a Hilbert space $\mathcal{H}$, on which there is a self-adjoint Hamilton operator $H$. Elements of $\mathcal{H}$ are rays $\psi(t)$, which are equivalence classes of normalised vectors $\chi$, where $\chi_1 \sim \chi_2$ if $\chi_1 = e^{i \alpha}\chi_2$. \item[(b')] The time evolution is described by the Schrödinger equation \begin{align*} i \hbar \frac{\del }{\del t} \psi(t) = H \psi(t) \end{align*} Note that the time evolution is well defined on the equivalence classes. \item[(c')] Observables are described by self-adjoint operators. \end{itemize} \subsection{Generalisation to $\infty$-dimensional Hilbert spaces} To generalise the above notions to the infinite dimensional case, we need some more maths. \begin{dfn}[] Let $A: \mathcal{H} \to \mathcal{H}$ be an operator. Its \textbf{spectrum} $\sigma(A) \subseteq \C$ consists of the values $\lambda \in \C$ such that \begin{align*} \forall \epsilon > 0\ \exists \psi \in \mathcal{H}, \|\psi\| = 1 \quad \text{such that} \quad \|(A - \lambda \id)\psi\| \leq \epsilon \end{align*} \end{dfn} In the finite dimensional case, the spectrum is exactly the set of eigenvalues. As we saw earlier, if $A$ is self-adjoint, then all its eigenvalues are real. One can also show that $\sigma(A) \subseteq \R$. With this, we can postulate that the set of possible measurement results of an observable $A$ is exactly the spectrum $\sigma(A)$. 
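For a finite-dimensional self-adjoint $A$, the defining condition of the spectrum can be tested directly: $\min_{\|\psi\|=1}\|(A - \lambda \id)\psi\|$ is the smallest singular value of $A - \lambda \id$, which vanishes exactly for $\lambda \in \sigma(A)$. A small Python sketch, using a hypothetical diagonal matrix as the observable:

```python
import numpy as np

A = np.diag([0.0, 1.0, 2.5])  # self-adjoint, spectrum {0, 1, 2.5}

def dist_to_spectrum(A, lam):
    # min over unit psi of ||(A - lam*I) psi|| is the smallest singular value
    s = np.linalg.svd(A - lam * np.eye(len(A)), compute_uv=False)
    return s.min()

# lam in sigma(A): witness states psi exist for every eps > 0
assert np.isclose(dist_to_spectrum(A, 1.0), 0.0)
# lam not in sigma(A): ||(A - lam)psi|| is bounded away from 0 (here by 0.7)
assert np.isclose(dist_to_spectrum(A, 1.7), 0.7)
```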
 \begin{xmp}[] The spectrum can be discrete, continuous or in parts both: \begin{itemize} \item For a free particle ($V = 0$), we have the following spectra: \begin{align*} \sigma(\hat{x}) = \sigma(\hat{p}) = \R, \quad \sigma(H) = \R^{+} \end{align*} \item For the particle in a finite well, we saw \begin{align*} \sigma(H) = \{E_{1}, \ldots, E_{n}\} \cup \R^{+} \end{align*} where the $E_i$ were the energies of the bound states, given as the intersection of a tangent curve and a circle. \item For the harmonic oscillator, we saw \begin{align*} \sigma(H) = \left\{\hbar \omega\left(n + \tfrac{1}{2}\right) \,\big\vert\, n = 0,1,2,\ldots\right\} \end{align*} \end{itemize} \end{xmp} In the finite dimensional case, let $A: \mathcal{H} \to \mathcal{H}$ be an operator with discrete spectrum and $f: \C \to \C$ any function. This induces an operator \begin{align*} f(A) := \sum_{\lambda \in \sigma(A)}f(\lambda) P_{\lambda} = \sum_{\lambda \in \sigma(A)} f(\lambda) \ket{\psi_{\lambda}} \bra{\psi_{\lambda}}: \quad \mathcal{H} \to \mathcal{H} \end{align*} This association satisfies \begin{align*} (\alpha_1 f_1 + \alpha_2 f_2)(A) &= \alpha_1f_1(A) + \alpha_2 f_2(A), \quad \forall \alpha_i \in \C\\ (f_1 f_2)(A) &= f_1(A) f_2(A)\\ \overline{f}(A) &= f(A)^{\dagger} \end{align*} In particular, if $f= \id_\C$, then $f(A) = A$, and if $f(x) = 1$, then $f(A) = \id_{\mathcal{H}}$. The spectral theorem tells us that even in the infinite dimensional case, there exists a unique association \begin{align*} f \mapsto f(A) \end{align*} that satisfies the above properties. With the spectral theorem, we can now generalise the probabilistic interpretation. If $A$ is self-adjoint, its spectrum is real. So let $I \subseteq \R$ be an interval and $P_{I}(x)$ its characteristic function. Then $P_I(A)$ is an orthogonal projector. 
 Moreover, for disjoint intervals $I,J$ we have \begin{align*} P_{I \sqcup J}(A) = P_I(A) + P_J(A) \end{align*} Then, for a given state $\psi$, we can define a probability measure on $\R$ given by \begin{align*} \mu_{\psi}(I) = \braket{\psi|P_I(A)|\psi} \end{align*} We interpret $\mu_{\psi}(I)$ to be the probability that in the state $\psi$, a measurement of $A$ gives a value $a \in I$. With this, we can define the expectation value of $A$ in the state $\psi$ to be \begin{empheq}[box=\bluebase]{align*} \braket{A}_{\psi} = \int_{\R} \lambda \, d\mu_{\psi}(\lambda) = \braket{\psi|A|\psi} \end{empheq}
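The projector-valued measure and its additivity on disjoint intervals can again be checked in a finite-dimensional model. The sketch below is purely illustrative (random symmetric matrix and random state); it builds $P_I(A)$ by applying the characteristic function of $I$ to the eigenvalues of $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = (M + M.T) / 2                        # real symmetric = self-adjoint
psi = rng.normal(size=5)
psi /= np.linalg.norm(psi)               # normalised state

def proj(A, indicator):
    """P_I(A): apply the characteristic function of I to the eigenvalues of A."""
    lam, U = np.linalg.eigh(A)
    P = np.zeros_like(A)
    for n in range(len(lam)):
        if indicator(lam[n]):
            P += np.outer(U[:, n], U[:, n])
    return P

def mu(lo, hi):
    """mu_psi(I) = <psi| P_I(A) |psi> for the interval I = (lo, hi]."""
    return psi @ proj(A, lambda l: lo < l <= hi) @ psi

# additivity on disjoint intervals: P_{I ⊔ J}(A) = P_I(A) + P_J(A)
assert np.isclose(mu(-10, 0) + mu(0, 10), mu(-10, 10))
# total probability 1 (the spectrum of this small A lies well inside (-10, 10])
assert np.isclose(mu(-10, 10), 1.0)
```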
 \section{Heisenberg's uncertainty relation} Many phenomena of quantum mechanics arise from the non-commutativity of operators. In particular, we saw that the commutator of the position and momentum operators is non-zero: \begin{align*} [\hat{x},\hat{p}] = i \hbar \end{align*} As we will see soon, this is the reason behind Heisenberg's uncertainty relation. \subsection{Non-commuting observables}
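The canonical commutation relation $[\hat{x},\hat{p}] = i\hbar$ can be verified symbolically by applying the commutator to an arbitrary test function; a short sympy sketch:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)

# (p f)(x) = -i*hbar*f'(x),  (x f)(x) = x*f(x)
p = lambda g: -sp.I * hbar * sp.diff(g, x)
xop = lambda g: x * g

# [x, p] f = x(p f) - p(x f); the product rule produces the extra i*hbar*f term
commutator = xop(p(f)) - p(xop(f))
assert sp.simplify(commutator - sp.I * hbar * f) == 0
```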
 \subsection{Heisenberg's uncertainty relation} \begin{thm}[] Let $A,B$ be self-adjoint operators. Then \begin{align*} \Delta A \Delta B \geq \frac{1}{2} \abs{\scal{i[A,B]}} \end{align*} \end{thm} \begin{proof} It's easy to show that if $A,B$ are self-adjoint, then $i[A,B]$ is self-adjoint as well. Therefore, the expectation value $\scal{i[A,B]}$ is real. So the inequality we want to prove is equivalent to \begin{align*} (\Delta A)^{2} (\Delta B)^{2} \geq \frac{1}{4} \scal{i[A,B]}^{2} \end{align*} Let $C = A - \scal{A}$ and $D = B - \scal{B}$. Since $A,B$ are self-adjoint, $C$ and $D$ are also self-adjoint. Moreover, we have \begin{align*} [C,D] = [A - \scal{A},B - \scal{B}] = [A,B] \end{align*} so it suffices to show \begin{align*} \scal{C^{2}}\scal{D^{2}} \geq \frac{1}{4} \scal{i[C,D]}^{2} \end{align*} If we denote the state of the system as $\psi$, and let $s \in \R$, then \begin{align*} 0 \leq \braket{(C + is D) \psi|(C + is D)\psi} = \braket{\psi|(C + isD)^{\dagger}(C + isD)\psi} = \scal{(C + isD)^{\dagger}(C + isD)} \end{align*} Since $C,D$ are self-adjoint, we have \begin{align*} 0 \leq \scal{(C - isD)(C + isD)} = \scal{C^{2}} + s\scal{i[C,D]} + s^{2} \scal{D^{2}} \end{align*} Now, we want to divide by $\scal{D^{2}}$, but have to check that it is non-zero. From \begin{align*} \scal{D^{2}} = \braket{\psi|D^{2}\psi} = \braket{D \psi| D \psi} \geq 0 \end{align*} it follows that $\scal{D^{2}} = 0 \implies D \psi = 0$, because the inner product is positive definite. In the case $D \psi = 0$, our inequality is trivial, as we have \begin{align*} \scal{[C,D]} = \braket{\psi|[C,D] \psi} = \underbrace{\braket{\psi|CD \psi}}_{=0} - \underbrace{\braket{D \psi|C \psi}}_{=0} = 0 \end{align*} In the other case, we can divide by $\scal{D^{2}} > 0$. 
 Setting \begin{align*} s = - \frac{1}{2} \frac{\scal{i[C,D]}}{\scal{D^{2}}} \end{align*} we obtain \begin{align*} 0 \leq \scal{C^{2}} - \frac{1}{2} \frac{\scal{i[C,D]}^{2}}{\scal{D^{2}}} + \frac{1}{4} \frac{\scal{i[C,D]}^{2}}{\scal{D^{2}}} = \scal{C^{2}} - \frac{1}{4} \frac{\scal{i[C,D]}^{2}}{\scal{D^{2}}} \end{align*} which, after multiplying by $\scal{D^{2}}$, results in our desired inequality \begin{align*} \frac{1}{4} \scal{i[C,D]}^{2} \leq \scal{C^{2}} \scal{D^{2}} \end{align*} \end{proof} For $A = \hat{x}, B = \hat{p}$, we obtain \begin{align*} \Delta \hat{x} \Delta \hat{p} \geq \frac{1}{2} \abs{\scal{i[\hat{x},\hat{p}]}} = \frac{\hbar}{2} \end{align*} In this case, the lower bound is independent of $\psi$, but in general, the lower bound can depend on $\psi$.
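As a sanity check, one can discretise a Gaussian wave packet on a grid and compute $\Delta \hat{x}\, \Delta \hat{p}$ numerically; Gaussians are exactly the states that saturate the bound $\hbar/2$. A Python sketch (grid size and the width $\sigma$ are arbitrary choices, and $\hbar = 1$ for convenience):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))            # Gaussian with Delta x = sigma
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalise in L^2

mean_x = np.sum(x * np.abs(psi)**2) * dx
var_x = np.sum((x - mean_x)**2 * np.abs(psi)**2) * dx

dpsi = np.gradient(psi, dx)
# <p> = 0 for a real wave function; <p^2> = hbar^2 * integral of |psi'|^2
var_p = hbar**2 * np.sum(np.abs(dpsi)**2) * dx

product = np.sqrt(var_x * var_p)
assert product >= hbar / 2 - 1e-6                # uncertainty relation holds
assert np.isclose(product, hbar / 2, atol=1e-3)  # Gaussians saturate the bound
```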
 \section{Angular Momentum}
 \subsection{Degenerate time-independent perturbation theory} Until now, we assumed that all the eigenvalues of the ONB of solutions $\phi_n$ to the equation $H_0 \phi_n = \mathcal{E}_n \phi_n$ were pairwise different. We used this assumption when we divided by $\mathcal{E}_n - \mathcal{E}_l$, which only made sense in the non-degenerate case. We first consider the case of a double degeneracy, that is, there are exactly two indices $n \neq m$ such that $E_0 = \mathcal{E}_n = \mathcal{E}_m$. It's clear that the equations \begin{align*} (H_0 - E_0) \Psi_0 &= 0\\ (H_0 - E_0) \Psi_s &= - H' \Psi_{s-1} + \sum_{j=1}^{s}E_j \Psi_{s-j} \end{align*} still hold. The only difference is that the kernel of $(H_0 - E_0)$ is two-dimensional, as it is spanned by $\phi_n,\phi_m$. So $\Psi_0$ is of the form \begin{align*} \Psi_0 = a_m \ket{m} + a_n \ket{n} \end{align*} where $a_m,a_n$ are some complex constants. To get the solution for order $s=1$, we apply the covectors $\bra{m},\bra{n}$ and find \begin{align*} \braket{m|H_0 - E_0|\Psi_1} &= \braket{m|E_1-H'|\Psi_0}\\ &= a_m \braket{m|E_1|m} + a_n \underbrace{\braket{m|E_1|n}}_{=0} - a_m \braket{m|H'|m} - a_n \braket{m|H'|n} \end{align*} and analogously for $\bra{n}$. 
 Both equations can be brought into the following form: \begin{align} \left( \braket{m|H'|m} - E_1 \right) a_m + \braket{m|H'|n}a_n &=0\\ \braket{n|H'|m}a_m + \left( \braket{n|H'|n} - E_1 \right) a_n &=0 \label{eq:projecting-m-n} \end{align} Now we define \begin{align*} \mathcal{E}_m' = \braket{m|H'|m}, \quad \mathcal{E}_n' = \braket{n|H'|n}, \quad \delta' = \braket{m|H'|n}, \quad {\delta'}^{\ast} = \braket{n|H'|m} \end{align*} With these, we can write Equation \ref{eq:projecting-m-n} as follows: \begin{align*} \begin{pmatrix} \mathcal{E}_m' - E_1 & \delta'\\ {\delta'}^{\ast} & \mathcal{E}_n' - E_1 \end{pmatrix} \begin{pmatrix} a_m\\ a_n \end{pmatrix} = 0 \end{align*} And a solution $\Psi_0 \neq 0$ exists if and only if the determinant vanishes, i.e. if \begin{align*} (\mathcal{E}_m' - E_1)(\mathcal{E}_n' - E_1) - \abs{\delta'}^{2} = 0 \end{align*} This allows us to calculate $E_1$, which gives \begin{align*} E_1^{\pm} = \frac{\mathcal{E}_m' + \mathcal{E}_n'}{2} \pm \sqrt{ \left( \frac{\mathcal{E}_m' - \mathcal{E}_n'}{2} \right)^{2} + \abs{\delta'}^{2} } \end{align*} \subsubsection{The Stark effect} Consider a hydrogen atom in an electrical field $\vec{\mathcal{E}} = (0,0,\mathcal{E})$. The perturbation to our Hamiltonian is then $H' = e \mathcal{E}z$. We consider the degeneracy in the niveau $n=2$ in the states \begin{align*} \ket{n,l,m} = \ket{2s_0}, \ket{2p_1}, \ket{2p_0}, \ket{2p_{-1}} \end{align*} where $\ket{2s_0}$ denotes the state $l=m=0$, and $\ket{2p_m}$ the states with $l=1$. There are $n^{2} = 4$ degenerate states. Since $[L_z,z] = 0$, the perturbation $H'$ only couples states with the same $m$; in particular, it mixes $\ket{2s_0}$ and $\ket{2p_0}$.
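The formula for $E_1^{\pm}$ is just the eigenvalue formula for the $2 \times 2$ matrix of $H'$ in the degenerate subspace, which is easy to confirm numerically. The matrix elements below are made-up illustrative values, not those of the Stark effect:

```python
import numpy as np

# Hypothetical matrix elements of H' in the degenerate subspace span{|m>, |n>}
Em, En = 0.3, -0.1                    # <m|H'|m>, <n|H'|n>
delta = 0.2 + 0.05j                   # <m|H'|n>

Hp = np.array([[Em, delta], [np.conj(delta), En]])
E1 = np.linalg.eigvalsh(Hp)           # first-order corrections (ascending)

# closed-form E_1^{±} = mean ± sqrt(((Em-En)/2)^2 + |delta|^2)
mean = (Em + En) / 2
split = np.sqrt(((Em - En) / 2) ** 2 + abs(delta) ** 2)
assert np.allclose(E1, [mean - split, mean + split])
```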
 \subsection{Exercise Class w06} \subsubsection*{Bloch sphere} For $\mathcal{H} = \C^{2}$ with ONB $\ket{1},\ket{2}$, the Bloch sphere is the set \begin{align*} S = \left\{ \cos(\tfrac{\theta}{2}) \ket{1} + e^{i \phi} \sin(\tfrac{\theta}{2}) \ket{2} \big\vert \theta \in [0,\pi], \phi \in [0,2 \pi) \right\} \end{align*}
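A tiny Python sketch confirming that every point of the Bloch sphere parametrises a normalised state in $\C^{2}$ (the sample angles are arbitrary):

```python
import numpy as np

def bloch_state(theta, phi):
    """State cos(theta/2)|1> + e^{i phi} sin(theta/2)|2> as a vector in C^2."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# every point (theta, phi) of the Bloch sphere gives a unit vector
for theta, phi in [(0.0, 0.0), (np.pi / 2, 1.0), (np.pi, 3.0), (0.7, 2.1)]:
    assert np.isclose(np.linalg.norm(bloch_state(theta, phi)), 1.0)
```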
 \section{Harmonic oscillator} The harmonic oscillator is a simple but really interesting example of a dynamic system in quantum mechanics. In classical mechanics, the Hamiltonian is \begin{align*} H = \frac{1}{2m}(p^{2} + m^{2} \omega^{2} q^{2}) = \frac{p^{2}}{2m} + \frac{f}{2}q^{2} \end{align*} And for our purposes, $p$ is replaced by the operator $\hat{p} = - i\hbar \frac{\del }{\del q}$. Introducing the dimensionless variable $x$ given by \begin{align*} x = \sqrt{\frac{m \omega}{\hbar}}q \end{align*} we can rewrite \begin{align*} q = \sqrt{\frac{\hbar}{m \omega}}x, \quad \text{and} \quad \frac{\del }{\del q} = \sqrt{\frac{m \omega}{\hbar}} \frac{\del }{\del x} \end{align*} So the Hamiltonian turns into \begin{align*} H = \frac{\hbar \omega}{2}(- \del_x^{2} + x^{2}) \end{align*} And we now turn to solving the TISE, which (with $\lambda = \frac{2E}{\hbar \omega}$) has the form \begin{align*} (\del_x^{2} + \lambda - x^{2})\psi(x) = 0 \end{align*} The bound solutions must satisfy $\lim_{\abs{x} \to \infty} \psi(x) = 0$. \subsection{The classic solution} An approximate solution is given by \begin{align*} \psi(x) = e^{-\tfrac{x^{2}}{2}} \end{align*} and in order to make it exact, we use the Ansatz $\psi(x) = H(x) e^{-x^{2}/2}$, which gives us the following differential equation for $H(x)$\footnote{The function $H(x)$ should not be confused with the Hamiltonian. 
 The usage of the letter $H$ is historic and refers (as we will see later) to the mathematician Charles Hermite.}: \begin{align*} H'' - 2x H' + (\lambda - 1)H = 0\tag{$\ast$} \end{align*} And to solve this one, we use the Fuchs Ansatz \begin{align*} H(x) = x^{s}\sum_{n=0}^{\infty}a_nx^{n}, \quad \text{where} \quad a_0 \neq 0, s \geq 0 \end{align*} By looking at the power series expansion, we see \begin{align*} H(x) &= \sum_{n=0}^{\infty}a_nx^{s+n} \\ 2xH'(x) &= \sum_{n=0}^{\infty}2(s+n)a_n x^{s+n}\\ H''(x) &= \sum_{n=0}^{\infty}(s+n)(s+n-1)a_nx^{s+n-2}\\ &= s(s-1)a_0x^{s-2} + (s+1)sa_1x^{s-1} \\ &+ \sum_{n=0}^{\infty}(s+n+2)(s+n+1)a_{n+2}x^{s+n} \end{align*} So in order to solve $(\ast)$, we see that the coefficients must satisfy: \begin{align*} s(s-1)a_0 = 0 , \quad (s+1)sa_1 &= 0\\ (s + n + 2)(s + n + 1)a_{n+2} - (2(s+n) + 1 - \lambda)a_n &= 0, \quad n \geq 0 \end{align*} Since $a_0 \neq 0$, the first equation requires that $s = 0$ or $s = 1$. So either $a_1$ must equal zero, or there is no requirement for $a_1$, which means there are multiple solutions. Either way, we can assume without loss of generality that $a_1 = 0$. So there are two types of solutions: \begin{itemize} \item $s = 0$: \quad $H(x) = a_0 + a_2x^{2} + \ldots$ is even in $x$. \item $s = 1$: \quad $H(x) = x(a_0 + a_2x^{2} + \ldots)$ is odd in $x$. \end{itemize} Here we see that because the potential is symmetric ($V(q) = V(-q)$), it suffices to find symmetric and anti-symmetric solutions. Now, let's consider what happens to the coefficients in the limit $n \to \infty$. 
 We see that \begin{align*} \tfrac{a_{n+2}}{a_n} = \frac{2n + \mathcal{O}(1)}{n^{2} + \mathcal{O}(n)} \sim \frac{2}{n} \quad \text{for large $n$} \end{align*} So unless the sequence $a_n$ terminates, the function $H(x)$ would grow like $e^{x^{2}}$, whose Taylor coefficients satisfy the same asymptotic ratio $\tfrac{a_{n+2}}{a_{n}} \sim \tfrac{2}{n}$. If that were the case, our Ansatz $\psi(x) = H(x) e^{-x^{2}/2}$ would not work: $\psi$ would behave like $e^{x^{2}/2}$ and could not satisfy the condition $\lim_{\abs{x} \to \infty}\psi(x) = 0$. We therefore conclude that the sequence $(a_n)_{n \in \N}$ must terminate at some point. This is only possible if the recurrence relation breaks off, i.e. if $\lambda = 2(s+n) + 1$ for some $n$; writing $n$ for the degree of the resulting polynomial, this means $\lambda = 2n + 1$. With $\lambda = \tfrac{2E}{\hbar \omega}$, this reproduces the energies $E_n = \hbar \omega (n + \tfrac{1}{2})$ from the spectrum above. The differential equation then becomes \begin{align*} H'' - 2xH' + 2nH = 0 \end{align*} for which the solution is an $n$-th order polynomial. We call these the \textbf{Hermite polynomials} $H_n(x)$, of which the first few are \begin{align*} H_0 = 1, \quad H_1 = 2x, \quad H_2 = 4x^{2} - 2 \end{align*} or more generally \begin{align*} H_n(x) = (-1)^{n} e^{x^{2}} \del_x^{n}e^{-x^{2}} \end{align*} We therefore get the solutions \begin{align*} \psi_n(x) = N_n H_n(x) e^{-x^{2}/2} \end{align*}
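The Rodrigues-type formula $H_n(x) = (-1)^{n} e^{x^{2}} \del_x^{n} e^{-x^{2}}$ can be checked against the first few Hermite polynomials and against the differential equation $H'' - 2xH' + 2nH = 0$ with sympy:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_rodrigues(n):
    """H_n via the Rodrigues-type formula (-1)^n e^{x^2} d^n/dx^n e^{-x^2}."""
    return sp.simplify((-1)**n * sp.exp(x**2) * sp.diff(sp.exp(-x**2), x, n))

# the first few polynomials from the notes
assert sp.expand(hermite_rodrigues(0) - 1) == 0
assert sp.expand(hermite_rodrigues(1) - 2*x) == 0
assert sp.expand(hermite_rodrigues(2) - (4*x**2 - 2)) == 0

# each H_n solves H'' - 2x H' + 2n H = 0
for n in range(5):
    H = hermite_rodrigues(n)
    ode = sp.diff(H, x, 2) - 2*x*sp.diff(H, x) + 2*n*H
    assert sp.simplify(ode) == 0
```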