String Theory: Chapter 4 Problem Set Solutions for Barton Zwiebach's "A First Course in String Theory"


Warning: These are my own solutions for the given problems. If you are a student, use these at your own risk. They have not been subjected to grading in a university setting. Your professor may require more explicit statements, or other details worked out in the calculations.




Quick Calculations

Quick Calculation 4.1: Prove \( \delta S \) for String Equation

Quick Calc 4.1: Solution

Quick Calculation 4.2: Prove \( \delta S \) for String Equation in \( \mathcal{P} \) Terms

Quick Calc 4.2: Solution

Quick Calculation 4.3: Show \( \delta S \) Equations for Strings are the Same in Both Forms

Quick Calc 4.3: Solution


Problems

Problem 4.1: Consistency of Small Transverse Oscillations

Problem 4.1: Solution (*)

Problem 4.2: Longitudinal Waves on Strings

Problem 4.2: Solution

Problem 4.3: A Configuration With Two Joined Strings (*)

Problem 4.3 (a): Solution

Problem 4.3 (b): Solution

Problem 4.3 (c): Solution

Problem 4.4: Evolving an Initial Open String Configuration

Problem 4.4 (a): Solution

Problem 4.4 (b): Solution

Problem 4.4 (c): Solution

Problem 4.4 (d): Solution

Problem 4.5: Closed String Motion (**)

Problem 4.5 (a): Solution

Problem 4.5 (b): Solution

Problem 4.5 (c): Solution

Problem 4.6: Stationary Action: \( \Delta S \) Minima and Saddles for Classical Harmonic Oscillator

Problem 4.6 (a): Solution

Problem 4.6 (b): Solution

Problem 4.6 (c): Solution

Problem 4.7: Variational Approximation of Lowest Frequency \( \omega_{0} \) for Classical Strings

Solution Heuristic: Variational Approximation of Ground State Energy \( E_{0} \) for Quantum Harmonic Oscillator

Problem 4.7 (a): Solution

Problem 4.7 (b): Solution

Problem 4.8: Deriving Euler-Lagrange Equations for Dynamical Variable q(t) and Dynamical Field \( \phi(t,\overrightarrow{x} ) \) \( ( \dagger ) \)

Problem 4.8 (a): Solution

Problem 4.8 (b): Solution

(*) Asterisk indicates solution may be fuzzy or lacking sufficient rigor.
(**) Double asterisk indicates solution may be significantly flawed.
\( (\dagger) \) Dagger indicates the problem is marked in the textbook as referenced again in later chapters.
(Unmarked indicates solution should be correct other than minor quibbles.)


Quick Calculations

Quick Calculation 4.1:

Problem Statement: Prove equation (4.37)

Equation (4.36): \( S = \int^{t_{f}}_{t_{i}} L(t) dt = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{1}{2} \mu_{0} (\frac{\partial y}{\partial t} )^{2} - \frac{1}{2} T_{0} (\frac{\partial y}{\partial x} )^{2} ] \)

Equation (4.37): \( \delta S = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mu_{0} \frac{\partial y}{\partial t} \frac{\partial (\delta y)}{\partial t} - T_{0} \frac{\partial y}{\partial x} \frac{\partial (\delta y)}{\partial x} ] \)

Solution:

In order to derive \( \delta S \), we subject the action \( S[y] \) to the variation \( y(t,x) \rightarrow y(t,x) + \delta y(t,x) \):

\( S[y] \rightarrow S[y + \delta y] \)

This is expanded simply by carrying out the algebra on the integrand:

\( S[y + \delta y] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{1}{2} \mu_{0} (\frac{\partial (y + \delta y)}{\partial t} )^{2} - \frac{1}{2} T_{0} (\frac{\partial (y + \delta y)}{\partial x} )^{2} ] \)
\( S[y + \delta y] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{1}{2} \mu_{0} (\frac{\partial y}{\partial t} + \frac{\partial (\delta y)}{\partial t} )^{2} - \frac{1}{2} T_{0} (\frac{\partial y}{\partial x} + \frac{\partial (\delta y)}{\partial x} )^{2} ] \)
\( S[y + \delta y] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{1}{2} \mu_{0} ( (\frac{\partial y}{\partial t})^{2} + 2 \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} + ( \frac{\partial (\delta y)}{\partial t} )^{2} ) - \frac{1}{2} T_{0} ( (\frac{\partial y}{\partial x})^{2} + 2 \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} + (\frac{\partial (\delta y)}{\partial x} )^{2} ) ] \)

Then the result can be separated into the \( S \), \( \delta S \), and \( \mathcal{O}((\delta y)^{2}) \) terms:

\( S[y + \delta y] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ ( \frac{1}{2} \mu_{0} (\frac{\partial y}{\partial t})^{2} - \frac{1}{2} T_{0} ( (\frac{\partial y}{\partial x})^{2} ) + ( \frac{1}{2} \mu_{0} ( 2 \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - \frac{1}{2} T_{0} ( 2 \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} ) ) + ( \frac{1}{2} \mu_{0} ( \frac{\partial (\delta y)}{\partial t} )^{2} - \frac{1}{2} T_{0} ( (\frac{\partial (\delta y)}{\partial x} )^{2} ) ) ) ] \)
\( S[y + \delta y] = S[y] + \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{1}{2} \mu_{0} ( 2 \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - \frac{1}{2} T_{0} ( 2 \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} ) ] + \mathcal{O}((\delta y)^{2}) \)
\( S[y + \delta y] = S[y] + \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mu_{0} (\frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - T_{0} ( \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} ) ] + \mathcal{O}((\delta y)^{2}) \)

The variation \( \delta S \) consists of the terms linear in \( \delta y \), so we discard the \( \mathcal{O}((\delta y)^{2}) \) terms and have:

\( S[y + \delta y] = S[y] + \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mu_{0} ( \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - T_{0} ( \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} ) ] \)
\( S[y] + \delta S[y] = S[y] + \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mu_{0} ( \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - T_{0} ( \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} ) ] \)

\( \Rightarrow \delta S = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mu_{0} ( \frac{\partial y}{\partial t}\frac{\partial ( \delta y )}{\partial t} ) - T_{0} ( \frac{\partial y}{\partial x}\frac{\partial ( \delta y )}{\partial x} )] \) (Q.E.D.)
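As a quick cross-check of (4.37), here is a minimal SymPy sketch (assuming SymPy is available) that expands the Lagrangian density at \( y + \epsilon \, \delta y \) and keeps only the term linear in \( \epsilon \):

import sympy as sp

t, x, eps, mu0, T0 = sp.symbols('t x epsilon mu_0 T_0', positive=True)
y = sp.Function('y')(t, x)
dy = sp.Function('delta_y')(t, x)

# Lagrangian density of the non-relativistic string
L = lambda f: sp.Rational(1, 2) * mu0 * sp.diff(f, t)**2 - sp.Rational(1, 2) * T0 * sp.diff(f, x)**2

# coefficient of the O(epsilon) term in L(y + eps*dy)
first_order = sp.diff(L(y + eps * dy), eps).subs(eps, 0)
expected = mu0 * sp.diff(y, t) * sp.diff(dy, t) - T0 * sp.diff(y, x) * sp.diff(dy, x)
print(sp.simplify(first_order - expected))   # prints 0, matching the integrand of (4.37)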


Quick Calculation 4.2:

Problem Statement: Derive equation (4.49)

Equation (4.48): \( \delta S = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{\partial \mathcal{L}}{\partial \dot{y}} \delta \dot{y} + \frac{\partial \mathcal{L}}{\partial y'} \delta y'] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \mathcal{P}^{t} \delta \dot{y} + \mathcal{P}^{x} \delta y' ] \)

Equation (4.49): \( \delta S = \int^{a}_{0} [ \mathcal{P}^{t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ \mathcal{P}^{x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \frac{\partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} ) \delta y \)

Solution:

First note what these symbols mean. \( \mathcal{P}^{t} \equiv \frac{\partial \mathcal{L}}{\partial \dot{y}} = \mu_{0} \frac{\partial y}{\partial t} \) and \( \mathcal{P}^{x} \equiv \frac{\partial \mathcal{L}}{\partial y'} = -T_{0} \frac{\partial y}{\partial x} \), where \( \mathcal{L} \) is the Lagrangian density for the non-relativistic string, \( \mu_{0} \) is its mass per unit length, and \( T_{0} \) is its tension. \( \dot{y} = \frac{\partial y}{\partial t} \) and \( y' = \frac{\partial y}{\partial x} \). \( \delta S \) is the variation of the action S built from this Lagrangian density, over the time interval from \( t_{i} \) to \( t_{f} \) and the x interval from 0 to a. The task is to show that Equation (4.48) equals Equation (4.49).

The question is how to manipulate Equation (4.48) so that (1) the variations appear as \( \delta y \) rather than its derivatives, and (2) the remaining terms involve partial derivatives of \( \mathcal{P} \). Since this amounts to moving a derivative off the \( \delta \) terms and onto the \( \mathcal{P} \) terms, we can get there by substituting into Equation (4.48) the product-rule identities for \( \mathcal{P} \delta y \):

\( \mathcal{P}^{t} \delta \dot{y} = \frac{\partial}{\partial t} ( \mathcal{P}^{t} \delta y ) - \frac{\partial \mathcal{P}^{t}}{\partial t} \delta y \)
\( \mathcal{P}^{x} \delta y' = \frac{\partial}{\partial x} ( \mathcal{P}^{x} \delta y ) - \frac{\partial \mathcal{P}^{x}}{\partial x} \delta y \)

Substituting the right-hand sides into Equation 4.48:

\( \delta S = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{\partial \mathcal{L}}{\partial \dot{y}} \delta \dot{y} + \frac{\partial \mathcal{L}}{\partial y'} \delta y'] = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{\partial}{\partial t} ( \mathcal{P}^{t} \delta y ) - \frac{\partial \mathcal{P}^{t}}{\partial t} \delta y + \frac{\partial}{\partial x} ( \mathcal{P}^{x} \delta y ) - \frac{\partial \mathcal{P}^{x}}{\partial x} \delta y ] \)

Each of these terms integrates separately; where a total derivative meets the corresponding integral, it simply evaluates at the end points:

\( \delta S = \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial}{\partial t} ( \mathcal{P}^{t} \delta y ) - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial \mathcal{P}^{t}}{\partial t} \delta y + \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial}{\partial x} ( \mathcal{P}^{x} \delta y ) - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial \mathcal{P}^{x}}{\partial x} \delta y \)
\( \delta S = \int^{a}_{0} dx [ \int^{t_{f}}_{t_{i}} \frac{\partial}{\partial t} ( \mathcal{P}^{t} \delta y ) dt ] - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial \mathcal{P}^{t}}{\partial t} \delta y + \int^{t_{f}}_{t_{i}} dt [ \int^{a}_{0} \frac{\partial}{\partial x} ( \mathcal{P}^{x} \delta y )dx ] - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx \frac{\partial \mathcal{P}^{x}}{\partial x} \delta y \)
\( \delta S = \int^{a}_{0} dx [ \int^{t_{f}}_{t_{i}} \frac{\partial}{\partial t} ( \mathcal{P}^{t} \delta y ) dt ] + \int^{t_{f}}_{t_{i}} dt [ \int^{a}_{0} \frac{\partial}{\partial x} ( \mathcal{P}^{x} \delta y )dx ] - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{\partial \mathcal{P}^{t}}{\partial t} \delta y + \frac{\partial \mathcal{P}^{x}}{\partial x} \delta y ] \)

\( \Rightarrow \delta S = \int^{a}_{0} [ \mathcal{P}^{t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ \mathcal{P}^{x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx [ \frac{\partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} ] \delta y \) (Q.E.D.)

Notice that for a stationary action, \( \delta S = 0 \) with the boundary terms vanishing, the last term requires \( \frac{ \partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} = 0 \), which is the equation of motion for the string.
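As a small sanity check (a sketch assuming SymPy), substituting the definitions of \( \mathcal{P}^{t} \) and \( \mathcal{P}^{x} \) into \( \frac{\partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} \) reproduces the wave-equation combination \( \mu_{0} \frac{\partial^{2}y}{\partial t^{2}} - T_{0} \frac{\partial^{2}y}{\partial x^{2}} \):

import sympy as sp

t, x, mu0, T0 = sp.symbols('t x mu_0 T_0', positive=True)
y = sp.Function('y')(t, x)

Pt = mu0 * sp.diff(y, t)    # P^t = dL/d(y-dot)
Px = -T0 * sp.diff(y, x)    # P^x = dL/dy'

print(sp.diff(Pt, t) + sp.diff(Px, x))   # mu_0*y_tt - T_0*y_xx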


Quick Calculation 4.3:

Problem Statement: Match in detail equations (4.49) and (4.39).

Equation (4.39): \( \delta S = \int^{a}_{0} [ \mu_{0} \frac{\partial y}{\partial t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ -T_{0} \frac{\partial y}{\partial x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \mu_{0} \frac{\partial^{2}y}{\partial t^{2}} - T_{0} \frac{\partial^{2} y}{\partial x^{2}} ) \delta y \)

Equation (4.49): \( \delta S = \int^{a}_{0} [ \mathcal{P}^{t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ \mathcal{P}^{x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \frac{\partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} ) \delta y \)

Solution:

As noted in Quick Calculation 4.2, \( \mathcal{P}^{t} \equiv \frac{\partial \mathcal{L}}{\partial \dot{y}} = \mu_{0} \frac{\partial y}{\partial t} \) and \( \mathcal{P}^{x} \equiv \frac{\partial \mathcal{L}}{\partial y'} = -T_{0} \frac{\partial y}{\partial x} \), which makes converting Equation (4.49) into (4.39) a simple and immediate substitution:

\( \delta S = \int^{a}_{0} [ \mathcal{P}^{t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ \mathcal{P}^{x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \frac{\partial \mathcal{P}^{t}}{\partial t} + \frac{\partial \mathcal{P}^{x}}{\partial x} ) \delta y \)
\( \delta S = \int^{a}_{0} [ \mu_{0} \frac{\partial y}{\partial t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ -T_{0} \frac{\partial y}{\partial x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \mu_{0} \frac{\partial }{\partial t} \frac{\partial y}{\partial t} - T_{0} \frac{\partial }{\partial x} \frac{\partial y}{\partial x} ) \delta y \)
\( \Rightarrow \delta S = \int^{a}_{0} [ \mu_{0} \frac{\partial y}{\partial t} \delta y ]^{t=t_{f}}_{t=t_{i}} dx + \int^{t_{f}}_{t_{i}} [ -T_{0} \frac{\partial y}{\partial x} \delta y ]^{x=a}_{x=0} dt - \int^{t_{f}}_{t_{i}} dt \int^{a}_{0} dx ( \mu_{0} \frac{\partial^{2}y}{\partial t^{2}} - T_{0} \frac{\partial^{2} y}{\partial x^{2}} ) \delta y \) (Q.E.D.)

Notice that for a stationary action, \( \delta S = 0 \) with the boundary terms vanishing, the last term requires \( \mu_{0} \frac{\partial^{2}y}{\partial t^{2}} - T_{0} \frac{\partial^{2} y}{\partial x^{2}} = 0 \), which is Equation (4.6), the equation of motion for the string.



Problems

Problem 4.1

Problem Statement: Consistency of small transverse oscillations.

Reconsider the analysis of transverse oscillations in Section 4.1. Calculate the horizontal force \( dF_{h} \) on the little piece of string shown in Figure 4.1. Show that for small oscillations this force is much smaller than the vertical force \( dF_{v} \) responsible for the transverse oscillations.

Solution:

(Warning: This solution might be slightly too fuzzy in the middle, but I believe its argument is essentially correct.)

The vertical force in Section 4.1 is given by the different slopes of the string at (x + dx) and (x), which is effectively a second derivative:

\( dF_{v} = T_{0} \frac{\partial y}{ \partial x} \rvert_{x + dx} - T_{0} \frac{\partial y}{ \partial x} \rvert_{x} \approx T_{0} \frac{\partial^{2} y}{\partial x^{2}} dx \)

The horizontal force is a little trickier to express. Consider the hypotenuse formed by dx and dy; using the linear approximation \( (1 + x)^{n} \approx (1 + nx) \), its length is:

\( \sqrt{dx^{2} + dy^{2}} = dx \sqrt{ 1 + ( \frac{\partial y}{\partial x} )^{2} } \approx dx ( 1 + \frac{1}{2} ( \frac{\partial y}{\partial x} )^{2} ) \)

The net horizontal force is the difference between the horizontal components of the tension, \( T_{0} [ 1 + (\frac{\partial y}{\partial x})^{2} ]^{-\frac{1}{2}} \), evaluated at x + dx and at x:

\( dF_{h} = T_{0} [ 1 + (\frac{\partial y }{ \partial x})^{2} ]^{-\frac{1}{2}}_{x + dx} - T_{0} [ 1 + (\frac{\partial y }{ \partial x})^{2} ]^{-\frac{1}{2}}_{x} \)

(Note: Since \( \frac{\partial y}{\partial x} << 1 \), we have a small angle approximation, where the horizontal \( cos(\theta) \approx 1 - \frac{1}{2} \theta^{2} \). The corrections are of order \( \theta^{2} \approx (\frac{\partial y }{ \partial x})^{2} \).)

Expanding this difference to leading order brings down a factor of the slope on top of the approximate second derivative: the derivative of \( [ 1 + (\frac{\partial y}{\partial x})^{2} ]^{-\frac{1}{2}} \) with respect to x is approximately \( - \frac{\partial y}{\partial x} \frac{\partial^{2} y}{\partial x^{2}} \) for small slope, so

\( dF_{h} \approx -T_{0} \frac{\partial y}{\partial x} \frac{\partial^{2} y}{\partial x^{2} } dx \)

Since \( \frac{\partial y}{\partial x} << 1 \), this implies \( | dF_{h} | << | dF_{v} | \). (Q.E.D.)
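A minimal numeric illustration (assuming NumPy, and an assumed sample profile \( y(x) = 0.01 \, a \, sin(\pi x / a) \)) of how much smaller the horizontal force density is than the vertical one:

import numpy as np

a, T0, eps = 1.0, 1.0, 0.01                          # assumed sample values
x = np.linspace(0.0, a, 401)
yp = eps * np.pi * np.cos(np.pi * x / a)             # dy/dx for y = eps*a*sin(pi x/a)
ypp = -eps * np.pi**2 / a * np.sin(np.pi * x / a)    # d2y/dx2

dFv = T0 * ypp          # vertical force per unit dx
dFh = -T0 * yp * ypp    # horizontal force per unit dx

print(np.max(np.abs(dFh)) / np.max(np.abs(dFv)))     # ~ 0.016, of order eps: |dF_h| << |dF_v|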


Problem 4.2

Problem Statement: Longitudinal waves on strings.

Consider a string with uniform mass density \( \mu_{0} \) stretched between x = 0 and x = a. Let the equilibrium tension be \(T_{0}\). Longitudinal waves are possible if the tension of the string varies as it stretches or compresses. For a piece of this string with equilibrium length L, a small change \( \Delta L\) of its length is accompanied by a small change \( \Delta T \) of the tension, where

\( \frac{1}{\tau_{0}} \equiv \frac{1}{L} \frac{\Delta L}{ \Delta T} \)

Here \( \tau_{0} \) is a tension coefficient with units of tension. Find the equation governing the small longitudinal oscillations of this string. Give the velocity of the waves.

Solution:

Longitudinal waves are fluctuations parallel to the direction of the wave propagation, so we will construct an equation of the form:

\( \Delta T = \tau_{0} \frac{\Delta L}{ L} \)

We used the variable y for the perpendicular direction of the transverse wave, so here we will use a variable z for the longitudinal displacement along x (i.e., z is the string vibrating parallel to the x-axis), assuming \( \frac{\partial z}{\partial x} << 1 \):

\( T(x + dx) - T(x) = \tau_{0} \frac{\partial z}{ \partial x} \rvert_{x + dx} - \tau_{0} \frac{\partial z}{ \partial x} \rvert_{x} \)

This is the same argument that was used for the vertical force of the transverse wave, except here it is the difference in tension along x, which is effectively a second derivative times dx:

\( T(x + dx) - T(x) \approx \tau_{0} \frac{\partial^{2} z}{\partial x^{2}} dx \)

Since the net force on the element is F = ma, and \( \mu_{0} \) is the uniform mass per unit length (so the element has mass \( \mu_{0} dx \)), the tension difference is also:

\( T(x + dx) - T(x) = \mu_{0} dx \frac{\partial^{2} z}{\partial t^{2}} \)

The equation of motion governing the string for small oscillations \( \frac{\partial z}{\partial x} << 1 \) is therefore:

\( \frac{\partial^{2} z}{\partial t^{2}} = \frac{\tau_{0}}{\mu_{0}} \frac{\partial^{2} z}{\partial x^{2}} \)

This is a wave equation of the form: \( \frac{\partial^{2} z}{\partial t^{2}} = v^{2} \frac{\partial^{2} z}{\partial x^{2}} \). Therefore the velocity is:

\( v = \sqrt{\frac{\tau_{0}}{\mu_{0}}} \) (Q.E.D.)
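A quick symbolic check (a sketch assuming SymPy) that a right-moving profile with speed \( \sqrt{\frac{\tau_{0}}{\mu_{0}}} \) solves the longitudinal wave equation derived above:

import sympy as sp

t, x, tau0, mu0, k = sp.symbols('t x tau_0 mu_0 k', positive=True)
v = sp.sqrt(tau0 / mu0)
z = sp.sin(k * (x - v * t))      # right-moving longitudinal wave z(t, x)

print(sp.simplify(mu0 * sp.diff(z, t, 2) - tau0 * sp.diff(z, x, 2)))   # 0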


Problem 4.3

Problem Statement: A configuration with two joined strings.

A string with tension \( T_{0} \) is stretched from x= 0 to x = 2a. The part of the string x \( \epsilon \) (0,a) has constant mass density \( \mu_{1} \), and the part of the string x \( \epsilon \) (a,2a) has constant mass density \( \mu_{2} \). Consider the differential equation (4.20) that determines the normal oscillations.

(a) What boundary conditions should be imposed on y(x) and \(\frac{dy}{dx}(x)\) at x = a?

(b) Write the conditions that determine the possible frequencies of oscillation.

(c) Calculate the lowest frequency of oscillation of this string when \( \mu_{1} = \mu_{0} \) and \( \mu_{2} = 2 \mu_{0} \).

Solution:

(Warning: I am not highly confident this problem was done to the intended rigor or if it reaches the intended point.)

Equation 4.20: \( \frac{\partial^{2} y}{\partial x^{2}} + \frac{\mu(x)}{T_{0}} \omega^{2} y(x) = 0\)

Problem 4.3 (a): What boundary conditions should be imposed on y(x) and \(\frac{dy}{dx}(x)\) at x = a?

The two parts of the string will act as two strings \( y_{1}, y_{2} \) with different velocities \( v_{1}, v_{2} \) because of their unequal mass densities \( \mu_{1}, \mu_{2} \). However, they are connected at x = a, which subjects them to Dirichlet and Neumann boundary conditions:

\( y_{1}(x=a) = y_{2}(x=a) \) (Dirichlet condition)
\( \frac{\partial y_{1}(x)}{\partial x}\rvert_{x=a} = \frac{\partial y_{2}(x) }{\partial x}\rvert_{x=a} \) (Neumann condition)

The Dirichlet condition holds because the two segments are joined, so they must sit at the same position y at x = a. The Neumann condition holds because the junction point carries no mass: if the slopes differed, the unbalanced transverse force would give a massless point an infinite acceleration. It is a statement of slope continuity.

Problem 4.3 (b): Write the conditions that determine the possible frequencies of oscillation.

The boundary condition is such that the x \( \epsilon \) (0,a) segment has the ordinary sinusoidal solutions, while x \( \epsilon \) (a,2a) is instead scaled by the ratio \( \sqrt{ \frac{\mu_{1}}{\mu_{2}}} \):

\( y_{1,n}(x) = A_{n} sin(\frac{n \pi x}{a} ) \), for x \( \epsilon \) (0,a).
\( y_{2,n}(x) = B_{n} sin(\sqrt{\frac{\mu_{1}}{\mu_{2}}} \frac{n \pi}{a} (2a - x) ) \), for x \( \epsilon \) (a,2a).

Considering that the frequencies are \( \omega_{1,n} = \sqrt{\frac{T_{0}}{\mu_{1}}} \frac{n \pi}{a} \) for x \( \epsilon \) (0,a) and \( \omega_{2,n} = \sqrt{\frac{T_{0}}{\mu_{2}}} \frac{n \pi}{a} \) for x \( \epsilon \) (a,2a), the ratio \( \sqrt{ \frac{\mu_{1}}{\mu_{2}} } \) scales the frequency and the velocity: when \( \mu_{2} > \mu_{1} \), the frequency and propagation velocity are lower and the wavelength is longer. The Dirichlet boundary condition requires:

\( A_{n} sin(n \pi ) = B_{n} sin(\sqrt{\frac{\mu_{1}}{\mu_{2}}} n \pi ) \)

The Neumann boundary condition is the partial derivative with respect to x:

\( A_{n} \frac{n \pi }{a} cos ( n \pi ) = - B_{n} \sqrt{\frac{\mu_{1}}{\mu_{2}}} \frac{n \pi }{a} cos ( \sqrt{\frac{\mu_{1}}{\mu_{2}}} n \pi ) \)

Problem 4.3 (c) Calculate the lowest frequency of oscillation of this string when \( \mu_{1} = \mu_{0} \) and \( \mu_{2} = 2 \mu_{0} \).

The two parts of the string have different frequencies, where lowest frequency is taken to mean n = 1:

\( \omega_{\mu 1} = \sqrt{\frac{T_{0}}{\mu_{0}}} \frac{\pi}{a} \)
\( \omega_{\mu 2} = \sqrt{\frac{T_{0}}{2\mu_{0}}} \frac{\pi}{a} = \frac{1}{\sqrt{2}} \omega_{\mu 1} \)


Problem 4.4

Problem Statement: Evolving an initial open string configuration.

A string with tension \( T_{0} \), mass density \( \mu_{0} \), and wave velocity \( v_{0} = \sqrt{\frac{T_{0}}{\mu_{0}}} \), is stretched from (x,y) = (0,0) to (x,y) = (a,0). The string endpoints are fixed, and the string can vibrate in the y direction.

(a) Write y(t,x) as in (4.11), and prove that the above Dirichlet boundary conditions imply

\( h_{+}(u) = -h_{-}(-u) \) and \( h_{+}(u) = h_{+}(u + 2a) \)

Here u \( \epsilon \) \( (-\infty, \infty ) \) is a dummy variable that stands for the argument of the functions \(h_{\pm}\).


Now consider an initial value problem for this string. At t = 0 the transverse displacement is identically zero, and the velocity is

\( \frac{\partial y}{\partial t} (0,x) = v_{0} \frac{x}{a} ( 1 - \frac{x}{a} )\), x \( \epsilon \) (0,a).

(b) Calculate \( h_{+}(u)\) for u \( \epsilon \) (-a,a). Does this define \( h_{+}(u) \) for all u?

(c) Calculate y(t,x) for x and \( v_{0} t\) in the domain D defined by the two conditions: \( D = \{ (x, v_{0} t ) \rvert 0 \leq x \pm v_{0} t < a \} \). Exhibit the domain D in a plane with axes x and \( v_{0} t\).

(d) At t = 0 the midpoint x = a/2 has the largest velocity of all points in the string. Show that the velocity of the midpoint reaches the value of zero at time \( t_{0} = a/(2 v_{0}) \) and that \( y(t_{0}, a/2) = a/12 \). This is the maximum vertical displacement of the string.


Solution:

Problem 4.4 (a): Write y(t,x) as in (4.11), and prove that the above Dirichlet boundary conditions imply \( h_{+}(u) = -h_{-}(-u) \) and \( h_{+}(u) = h_{+}(u + 2a) \).

The situation is a string that is bound at the end points x = 0 and x = a, more specifically (x,y)=(0,0) and (x,y)=(a,0), but otherwise free to move in the y direction. The Dirichlet boundary conditions are thus:

\( y(x = 0) = y(x = a) = 0 \)

The general solution to the wave equation is a superposition of two waves, \( h_{+} \) and \( h_{-}\), which move to the right and left respectively:

Equation (4.11): \( y(t,x) = h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) \)

Therefore consider the cases of x = 0, and x = a, where y(t,x) = 0 per the Dirichlet boundary conditions:

\( y(t,0) = h_{+} ( - v_{0} t) + h_{-} ( v_{0} t) = 0\)

\( y(t,a) = h_{+} (a - v_{0} t) + h_{-} (a + v_{0} t) \)

Step 1: x = 0 Dirichlet Boundary Condition

First, the x = 0 Dirichlet boundary condition proves \( h_{+}(u) = -h_{-}(-u) \), by letting \( u = - v_{0} t \):

\( y(t,0) = h_{+} ( - v_{0} t) + h_{-} ( v_{0} t) = 0\)
\( y(t,0) = h_{+} ( u ) + h_{-} ( - u ) = 0 \)
\( \Rightarrow h_{+} ( u ) = - h_{-} (- u) \) (Q.E.D.)

Step 2: x = a Dirichlet Boundary Condition

Then consider the u substitution for (x = a) Dirichlet boundary condition:

\( y(t,a) = h_{+} (a - v_{0} t) + h_{-} (a + v_{0} t) = 0 \)
\( y(t,a) = h_{+} (a + u) + h_{-} (a - u) = 0 \)

Substitute in the relation from the x = 0 Dirichlet boundary condition to change \( h_{-} \rightarrow -h_{+} \):

\( y(t,a) = h_{+} (a + u) - h_{+} (-a + u) = 0 \)
\( y(t,a) = h_{+} (u + a) - h_{+} ( u - a) = 0 \)
\( \Rightarrow h_{+} (u + a) = h_{+} ( u - a ) \)

This is actually the form we are looking for in disguise, because it is a translation of 2a. To illustrate this, choose a dummy variable \( j = (u - a) \):

\( h_{+} ((j + a) + a) = h_{+} (j) \)
\( h_{+} (j + 2a) = h_{+} (j) \)

These are just dummy variables for whatever the argument is of the h functions. We can say the (x = a) Dirichlet boundary condition yields the form: \( h_{+} (u) = h_{+} (u + 2a) \). (Q.E.D.)
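As an illustration (a sketch assuming SymPy, with an assumed example \( h_{+}(u) = sin(\pi u / a) \), which is 2a-periodic), the two relations derived above do enforce y = 0 at both endpoints:

import sympy as sp

t, x, a, v0 = sp.symbols('t x a v_0', positive=True)
hp = lambda u: sp.sin(sp.pi * u / a)   # example h_+, periodic with period 2a
hm = lambda v: -hp(-v)                 # h_-(v) = -h_+(-v), from the x = 0 condition

y = hp(x - v0 * t) + hm(x + v0 * t)
check0 = sp.simplify(sp.expand_trig(sp.expand(y.subs(x, 0))))
checka = sp.simplify(sp.expand_trig(sp.expand(y.subs(x, a))))
print(check0, checka)   # 0 0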


Problem 4.4 (b): Calculate \( h_{+}(u)\) for u \( \epsilon \) (-a,a). Does this define \( h_{+}(u) \) for all u?

This problem also involves a special case of an initial condition constraint, so first we have to consider the general solution.

The general solution for y(t,x) is:

Equation (4.11): \( y(t,x) = h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) \)

Step 1: The Initial Condition of the General Solution

The initial condition is t = 0, where (x,y) = (x,0), so Equation (4.11) becomes:

\( y(0,x) = h_{+} (x ) + h_{-} (x ) = 0 \)

The initial condition \( \frac{\partial y}{\partial t} (0,x) \) in terms of the general solution Equation (4.11) yields:

\( \frac{\partial y}{\partial t} (t,x) = -v_{0} h'_{+}(x - v_{0} t ) + v_{0} h'_{-}(x + v_{0} t) \)
\( \Rightarrow \frac{\partial y}{\partial t} (0,x) = -v_{0} h'_{+}(x ) + v_{0} h'_{-}(x ) \)

Step 2: The Initial Condition Value Constraint for the x ~ (0,a) Interval

This is true in general. The problem is then constrained by the special case of having a defined initial condition value:

\( \frac{\partial y}{\partial t} (0,x) = v_{0} \frac{x}{a} ( 1 - \frac{x}{a} )\), x \( \epsilon \) (0,a)

Substituting into the left-hand side of the above equation, for x \( \epsilon \) (0,a):

\( v_{0} \frac{x}{a} ( 1 - \frac{x}{a} ) = -v_{0} h'_{+}(x ) + v_{0} h'_{-}(x ) \)
\( \Rightarrow - h'_{+}(x ) + h'_{-}(x ) = \frac{x}{a} ( 1 - \frac{x}{a} ) \)

These derivatives \( h'_{\pm}(x) \) can be turned into \( h_{\pm}(x) \) by integrating this from 0 to x:

\( - \int^{x}_{0} h'_{+}(x )dx + \int^{x}_{0} h'_{-}(x ) dx = \int^{x}_{0} \frac{x}{a} ( 1 - \frac{x}{a} ) dx \)
\( - [h_{+}(x) - h_{+}(0)] + [ h_{-}(x) - h_{-}(0) ] = \int^{x}_{0} \frac{x}{a} ( 1 - \frac{x}{a} ) dx \)
\( - [h_{+}(x) - 0] + [ h_{-}(x) - 0 ] = \int^{x}_{0} \frac{x}{a} ( 1 - \frac{x}{a} ) dx \)
\( - h_{+}(x) + h_{-}(x) = \int^{x}_{0} \frac{x}{a} ( 1 - \frac{x}{a} ) dx \)

Carrying out the integral on the right-hand side (the constant c reflects the freedom to shift \( h_{+} \) and \( h_{-} \) by equal and opposite constants; it cancels out of y(t,x) in any case):

\( - h_{+}(x) + h_{-}(x) = \frac{x^{2}}{2a} - \frac{x^{3}}{3a^{2}} + c \)

We know from the t = 0 initial condition in Step 1 that \( h_{-}(x) = - h_{+}(x) \) for x \( \epsilon \) (0,a), so we can substitute \( h_{-} \rightarrow -h_{+} \):

\( - h_{+}(x) + h_{-}(x) = \frac{x^{2}}{2a} - \frac{x^{3}}{3a^{2}} + c \)
\( - h_{+}(x) - h_{+}(x) = \frac{x^{2}}{2a} - \frac{x^{3}}{3a^{2}} + c \)
\(- 2 h_{+}(x) = \frac{x^{2}}{2a} - \frac{x^{3}}{3a^{2}} + c \)
\(h_{+}(x) = - \frac{1}{2} ( \frac{x^{2}}{2a} - \frac{x^{3}}{3a^{2}} + c ) \)
\( \Rightarrow h_{+}(x) = \frac{1}{2} ( \frac{x^{3}}{3a^{2}} - \frac{x^{2}}{2a} - c ) \)

Writing the argument as a general dummy variable u, for u \( \epsilon \) (0,a):

\( h_{+}(u) = \frac{1}{2} ( \frac{u^{3}}{3a^{2}} - \frac{u^{2}}{2a} - c ) \) (Q.E.D.)
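A short SymPy check (a sketch; the constant c is dropped since it cancels) that this \( h_{+} \), together with \( h_{-} = -h_{+} \) on (0,a) from Step 1, reproduces the given initial velocity:

import sympy as sp

x, a, v0, u = sp.symbols('x a v_0 u', positive=True)
hp = sp.Rational(1, 2) * (u**3 / (3 * a**2) - u**2 / (2 * a))   # h_+(u) on (0, a), with c = 0
hm = -hp                                                        # h_-(u) = -h_+(u) on (0, a)

v_init = -v0 * sp.diff(hp, u).subs(u, x) + v0 * sp.diff(hm, u).subs(u, x)
print(sp.simplify(v_init - v0 * (x / a) * (1 - x / a)))   # 0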

Step 3: Extending \( h_{+}(u) \) to the u ~ (-a,0) Interval

The initial condition values were only defined for x \( \epsilon \) (0,a), but the x = 0 Dirichlet boundary condition from Problem 4.4 (a) holds for every argument:

\( h_{+}(u) = - h_{-}(-u) \)

For u \( \epsilon \) (-a,0) the argument -u lies in (0,a), where Step 1 gives \( h_{-}(-u) = - h_{+}(-u) \). Combining the two relations, \( h_{+}(u) = h_{+}(-u) \), so on this interval \( h_{+} \) is simply the reflection of the result from Step 2:

\( h_{+}(u) = h_{+}(-u) = \frac{1}{2} ( \frac{(-u)^{3}}{3a^{2}} - \frac{(-u)^{2}}{2a} - c ) \)

Therefore for a general dummy variable u as the argument of \( h_{+}(u) \) on u \( \epsilon \) (-a,0):

\( h_{+}(u) = \frac{1}{2} ( - \frac{u^{3}}{3a^{2}} - \frac{u^{2}}{2a} - c ) \) (Q.E.D.)

These two expressions define \( h_{+} \) on the whole interval u \( \epsilon \) (-a,a). Since we also have the periodic translation condition \( h_{+}(u) = h_{+}(u + 2a) \) for every u, this defines \( h_{+}(u) \) for all arguments u \( \epsilon \) \( ( -\infty, \infty )\). (Q.E.D.)


Problem 4.4 (c): Calculate y(t,x) for x and \( v_{0} t\) in the domain D defined by the two conditions: \( D = \{ (x, v_{0} t ) \rvert 0 \leq x \pm v_{0} t < a \} \). Exhibit the domain D in a plane with axes x and \( v_{0} t\).

This is a straightforward substitution into the expression for \( h_{+}(u) \) on the \( 0 \leq u \leq a \) interval obtained from the initial value conditions:

\( h_{+}(u) = \frac{1}{2} ( \frac{u^{3}}{3a^{2}} - \frac{u^{2}}{2a} - c ) \)

The general solution y(t,x) is:

\( y(t,x) = h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) \)

In the domain D, both arguments \( x - v_{0} t \) and \( x + v_{0} t \) lie in the interval [0,a). There, Step 1 of Problem 4.4 (b) gives \( h_{-}(v) = - h_{+}(v) \), so writing \( u = x - v_{0} t \) we can re-express y(t,x) as:

\( y(t,x) = h_{+} (u) - h_{+} (u + 2 v_{0} t) \)

Therefore substituting in the \(h_{+}\) for \( 0 \leq x \leq a \) from Problem 4.4 (b):

\( y(t,x) = \frac{1}{2} ( \frac{u^{3}}{3a^{2}} - \frac{u^{2}}{2a} - c ) - \frac{1}{2} ( \frac{(u + 2 v_{0}t)^{3}}{3a^{2}} - \frac{(u + 2 v_{0}t)^{2}}{2a} - c ) \)
\( y(t,x) = \frac{1}{2} ( \frac{(x - v_{0}t)^{3}}{3a^{2}} - \frac{(x - v_{0}t)^{2}}{2a} ) - \frac{1}{2} ( \frac{(x + v_{0}t)^{3}}{3a^{2}} - \frac{(x + v_{0}t)^{2}}{2a} ) \)

\( y(t,x) = \frac{1}{2} ( \frac{(-t^{3} v_{0}^{3} + 3t^{2}v_{0}^{2}x - 3tv_{0}x^{2} + x^{3} )}{3a^{2}} - \frac{x^{2} - 2 v_{0}tx + v_{0}^{2}t^{2}}{2a} ) - \frac{1}{2} ( \frac{( t^{3} v_{0}^{3} + 3t^{2}v_{0}^{2}x + 3tv_{0}x^{2} + x^{3} )}{3a^{2}} - \frac{x^{2} + 2 v_{0}tx + v_{0}^{2} t^{2}}{2a} ) \)
\( y(t,x) = \frac{1}{2} ( \frac{(-t^{3} v_{0}^{3} + 3t^{2}v_{0}^{2}x - 3tv_{0}x^{2} + x^{3} )}{3a^{2}} - \frac{( t^{3} v_{0}^{3} + 3t^{2}v_{0}^{2}x + 3tv_{0}x^{2} + x^{3} )}{3a^{2}} ) + \frac{1}{2} ( - \frac{x^{2} - 2 v_{0}tx + v_{0}^{2}t^{2}}{2a} + \frac{x^{2} + 2 v_{0}tx + v_{0}^{2} t^{2}}{2a} ) \)
\( y(t,x) = \frac{1}{2} ( \frac{(-t^{3} v_{0}^{3} - 3tv_{0}x^{2} )}{3a^{2}} - \frac{(t^{3} v_{0}^{3} + 3tv_{0}x^{2} )}{3a^{2}} ) + \frac{1}{2} (\frac{ 2 v_{0}tx}{2a} + \frac{ 2v_{0} t x}{2a} ) \)
\( y(t,x) = \frac{1}{2} ( \frac{(-2 t^{3} v_{0}^{3} - 6tv_{0}x^{2} )}{3a^{2}} ) + \frac{1}{2} (\frac{ 4 v_{0}t x}{2a} ) \)
\( y(t,x) = ( \frac{(-t^{3} v_{0}^{3} - 3tv_{0}x^{2} )}{3a^{2}} ) + (\frac{ v_{0}t x}{a} ) \)
\( y(t,x) = ( \frac{-t^{3} v_{0}^{3}}{3a^{2}} - \frac{3tv_{0}x^{2} }{3a^{2}} ) + (\frac{ v_{0}t x}{a} ) \)
\( y(t,x) = v_{0}t( \frac{-t^{2} v_{0}^{2}}{3a^{2}} - \frac{x^{2} }{a^{2}} + \frac{ x}{a} ) \)

\( \Rightarrow y(t,x) = v_{0} t ( \frac{x}{a} - \frac{x^{2}}{a^{2}} - \frac{v_{0}^{2} t^{2}}{3 a^{2}} )\) (Q.E.D.)
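A minimal SymPy verification (a sketch) that this expression satisfies the wave equation and both initial conditions in the domain D:

import sympy as sp

t, x, a, v0 = sp.symbols('t x a v_0', positive=True)
y = v0 * t * (x / a - x**2 / a**2 - v0**2 * t**2 / (3 * a**2))

print(sp.simplify(sp.diff(y, t, 2) - v0**2 * sp.diff(y, x, 2)))             # wave equation: 0
print(sp.simplify(y.subs(t, 0)))                                            # y(0, x) = 0
print(sp.simplify(sp.diff(y, t).subs(t, 0) - v0 * (x / a) * (1 - x / a)))   # initial velocity: 0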


Problem 4.4 (d): At t = 0 the midpoint x = a/2 has the largest velocity of all points in the string. Show that the velocity of the midpoint reaches the value of zero at time \( t_{0} = a/(2 v_{0}) \) and that \( y(t_{0}, a/2) = a/12 \). This is the maximum vertical displacement of the string.

The velocity given by \( \frac{\partial y}{\partial t} \) is a simple differentiation of the result of Problem 4.4 (c):

\( \frac{\partial y}{\partial t} = v_{0} ( \frac{x}{a} - \frac{x^{2}}{a^{2}} - \frac{v_{0}^{2} t^{2}}{a^{2}} ) \)

The quadratic in x can be rewritten by completing the square, which makes it clear that x = a/2 is the position of maximum velocity at t = 0:

\( \frac{\partial y}{\partial t} = v_{0} ([ \frac{1}{4} - (\frac{x}{a} - \frac{1}{2} )^{2}] - \frac{v_{0}^{2} t^{2}}{a^{2}} ) \)

At the midpoint x = a/2 this velocity equals its maximum, as the subtracted squared term vanishes:

\( \frac{\partial y}{\partial t} = v_{0} ( [ \frac{1}{4} - ( \frac{a}{2a} - \frac{1}{2} )^{2} ] - \frac{v_{0}^{2} t^{2}}{a^{2}} ) \)
\( \Rightarrow \frac{\partial y}{\partial t} = v_{0} ( \frac{1}{4} - \frac{v_{0}^{2} t^{2}}{a^{2}} ) \)

Therefore the velocity is highest at t = 0, and lower for any other value of t. The velocity of the midpoint at \( t_{0} = a/(2 v_{0}) \) is zero:

\( \frac{\partial y}{\partial t} = v_{0} ( \frac{1}{4} - \frac{v_{0}^{2} (\frac{a}{2 v_{0}})^{2}}{a^{2}} ) \)
\( \Rightarrow \frac{\partial y}{\partial t} = v_{0} ( \frac{1}{4} - \frac{1}{4} ) = 0 \)

Using the equation for y(t,x) derived in Problem 4.4 (c), we can find that the height is \( y(t_{0},a/2) = a/12 \) when the velocity of the midpoint is zero:

\( y(t,x) = v_{0} t ( \frac{x}{a} - \frac{x^{2}}{a^{2}} - \frac{v_{0}^{2} t^{2}}{3 a^{2}} )\)
\( y(t_{0},a/2) = v_{0} \frac{a}{2 v_{0}} ( \frac{\frac{a}{2}}{a} - \frac{(\frac{a}{2})^{2}}{a^{2}} - \frac{v_{0}^{2} (\frac{a}{2 v_{0}})^{2}}{3 a^{2}} )\)
\( y(t_{0},a/2) = \frac{a}{2} ( \frac{\frac{a}{2}}{a} - \frac{(\frac{a}{2})^{2}}{a^{2}} - \frac{ (\frac{a}{2})^{2}}{3 a^{2}} )\)
\( y(t_{0},a/2) = \frac{a}{2} ( \frac{1}{2} - \frac{1}{4} - \frac{1}{12} )\)
\( y(t_{0},a/2) = \frac{a}{2} ( \frac{2}{12} )\)

\( \Rightarrow y(t_{0},a/2) = \frac{a}{12} \) (Q.E.D.)
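The same expression evaluated at the midpoint and at \( t_{0} = a/(2 v_{0}) \) (a quick SymPy sketch) gives the two numbers just derived:

import sympy as sp

t, x, a, v0 = sp.symbols('t x a v_0', positive=True)
y = v0 * t * (x / a - x**2 / a**2 - v0**2 * t**2 / (3 * a**2))
midpoint = {t: a / (2 * v0), x: a / 2}

print(sp.simplify(y.subs(midpoint)))               # a/12
print(sp.simplify(sp.diff(y, t).subs(midpoint)))   # 0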

As an aside, note that y(t,x) is highest at the midpoint, from the same equation form manipulation:

\( y(t,x) = v_{0} t ( \frac{x}{a} - \frac{x^{2}}{a^{2}} - \frac{v_{0}^{2} t^{2}}{3 a^{2}} )\)
\( y(t,x) = v_{0} t ( [ \frac{1}{4} - ( \frac{x}{a} - \frac{1}{2} )^{2} ] - \frac{v_{0}^{2} t^{2}}{3 a^{2}} ) \)
\( y(t,a/2) = v_{0} t ( [ \frac{1}{4} - ( \frac{1}{2} - \frac{1}{2} )^{2} ] - \frac{v_{0}^{2} t^{2}}{3 a^{2}} ) \)

When the velocity \( \frac{\partial y}{\partial t} \) of the midpoint reaches zero, we have thus reached maximum vertical displacement, which is then: \( y(t_{0},a/2) = \frac{a}{12} \). Notice as an aside that at \( t = a / v_{0} \), exactly double this length of time, the midpoint of the string is at the negative of the maximum vertical displacement, \( y(t,a/2) = -\frac{a}{12}\).


Problem 4.5

Problem Statement: Closed string motion.

We can describe a nonrelativistic closed string fairly accurately by having the string wrapped around a cylinder of large circumference \( 2\pi R \) on which it is kept taut by the string tension \( T_{0} \). We assume that the string can move on the surface of the cylinder without experiencing any friction. Let x be a coordinate along the circumference of the cylinder x ~ \( x + 2\pi R \) and let y be a coordinate perpendicular to x, thus running parallel to the axis of the cylinder. As expected, the general solution for transverse motion is given by

\( y(t,x) = h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) \)

where \(h_{+}(u)\) and \(h_{-}(v)\) are arbitrary functions of the single variables u and v with \( - \infty \leq u, v \leq \infty \). The string has mass per unit length \( \mu_{0} \), and \( v_{0} = \sqrt{ \frac{T_{0}}{\mu_{0}} } \).

(a) State the periodicity condition that must be satisfied by y(x,t) on account of the identification that applies to the x coordinate. Show that the derivatives \(h'_{+}(u)\) and \(h'_{-}(v)\) are, respectively, periodic functions of u and v.

(b) Show that one can write \( h_{+}(u) = \alpha u + f(u), h_{-}(v) = \beta v + g(v) \), where f and g are periodic functions and \( \alpha \) and \( \beta \) are constants. Give the relation between \( \alpha \) and \( \beta \) that follows from (a).

(c) Calculate the total momentum carried by the string in the y direction. Is it conserved?

Solution:

(Warning: I suspect this derivation is defective and may not be the answers the problem is looking for.)

Problem 4.5 (a): State the periodicity condition that must be satisfied by y(x,t) on account of the identification that applies to the x coordinate. Show that the derivatives \(h'_{+}(u)\) and \(h'_{-}(v)\) are, respectively, periodic functions of u and v.

Since we know that x ~ \( x + 2 \pi R \), it follows that y(t,x) and \( \frac{\partial y}{\partial x} \) must take the same values at x and at \( x + 2 \pi R \), since these coordinates label the same point of the closed string:

\( y(t,x) = y(t, x + 2 \pi R) \)
\( \frac{\partial y(t,x)}{\partial x} = \frac{\partial y(t, x + 2 \pi R)}{\partial x}\)

The general solution to the wave equation is:

\( y(t,x) = h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) \)

Insert the first condition into the general solution:

\( y(t,x) = y(t, x + 2 \pi R) \)
\( h_{+} (x - v_{0} t) + h_{-} (x + v_{0} t) = h_{+} (x + 2 \pi R - v_{0} t) + h_{-} (x + 2 \pi R + v_{0} t) \)
\( h_{+} (x - v_{0} t) - h_{+} (x + 2 \pi R - v_{0} t) = h_{-} (x + 2 \pi R + v_{0} t) - h_{-} (x + v_{0} t) \)

The left-hand side depends only on the combination \( u = x - v_{0} t \), while the right-hand side depends only on \( v = x + v_{0} t \); since u and v can be varied independently, both sides must equal the same constant. Taking that constant to be zero gives:

\( h_{+} (x - v_{0}t) = h_{+} (x - v_{0}t + 2 \pi R) \)
\( h_{-} (x + v_{0}t) = h_{-} (x + v_{0}t + 2 \pi R) \)

This can be expressed as \( u = x - v_{0}t \) and \( v = x + v_{0}t \):

\( h_{+} (u) = h_{+} (u + 2 \pi R) \)
\( h_{-} (v) = h_{-} (v + 2 \pi R) \)

With the second boundary condition we have:

\( \frac{\partial y(t,x)}{\partial x} = h'_{+} (x - v_{0} t) + h'_{-} (x + v_{0} t) \)
\( \frac{\partial y(t,x + 2 \pi R)}{\partial x} = h'_{+} (x + 2 \pi R - v_{0} t) + h'_{-} (x + 2 \pi R + v_{0} t) \)

These equations must equal each other, since it is the same point on the closed string:

\( h'_{+} (x - v_{0} t) + h'_{-} (x + v_{0} t) = h'_{+} (x + 2 \pi R - v_{0} t) + h'_{-} (x + 2 \pi R + v_{0} t) \)
\( h'_{+} (x - v_{0} t) - h'_{+} (x + 2 \pi R - v_{0} t) = h'_{-} (x + 2 \pi R + v_{0} t) - h'_{-} (x + v_{0} t) \)

Differentiating the periodicity relations for \( h_{\pm} \) obtained above shows that both sides vanish (equivalently, each side again depends only on \( x - v_{0}t \) or \( x + v_{0}t \) alone, so each is constant, and that constant is zero):

\( h'_{+} (x - v_{0} t) = h'_{+} (x - v_{0} t + 2 \pi R ) \)
\( h'_{-} (x + v_{0} t) = h'_{-} (x + v_{0} t + 2 \pi R ) \)

Again using \( u = x - v_{0}t \) and \( v = x + v_{0}t \):

\( h'_{+} (u) = h'_{+} (u + 2 \pi R) \)
\( h'_{-} (v) = h'_{-} (v + 2 \pi R) \)

Equivalently, \( h'_{+} (u - \pi R) = h'_{+} (u + \pi R) \) and \( h'_{-} (v - \pi R) = h'_{-} (v + \pi R) \). Thus \( h'_{+}(u) \) and \( h'_{-}(v) \) are periodic functions of u and v, with period \( 2 \pi R \). (Q.E.D.)

Problem 4.5 (b): Show that one can write \( h_{+}(u) = \alpha u + f(u), h_{-}(v) = \beta v + g(v) \), where f and g are periodic functions and \( \alpha \) and \( \beta \) are constants. Give the relation between \( \alpha \) and \( \beta \) that follows from (a).

Since Problem 4.5 (a) establishes that the derivatives \( h'_{+}(u) \) and \( h'_{-}(v) \) are periodic, each derivative can be split into its average over one period (a constant) plus a periodic piece with zero average. Integrating then gives a linear term plus a function that is periodic in u or v:

\( h_{+}(u) = \alpha u + f(u) \)
\( h_{-}(v) = \beta v + g(v) \)

The relation between \( \alpha \) and \( \beta \) follows from imposing the x-periodicity of y itself, \( y(t, x + 2 \pi R) = y(t,x) \), on this decomposition: the periodic pieces f and g drop out, leaving \( 2 \pi R ( \alpha + \beta ) = 0 \), so \( \beta = - \alpha \).

Problem 4.5 (c): Calculate the total momentum carried by the string in the y direction. Is it conserved?

The total momentum in the y direction is

\( p_{y} = \int^{2 \pi R}_{0} \mu_{0} \frac{\partial y}{\partial t} dx = \mu_{0} v_{0} \int^{2 \pi R}_{0} [ - h'_{+}(x - v_{0}t) + h'_{-}(x + v_{0}t) ] dx = \mu_{0} v_{0} [ - h_{+}(x - v_{0}t) + h_{-}(x + v_{0}t) ]^{x = 2\pi R}_{x = 0} \)

Using the decomposition from part (b), the periodic pieces f and g contribute nothing over a full period, leaving

\( p_{y} = 2 \pi R \mu_{0} v_{0} ( \beta - \alpha ) \)

With \( \beta = -\alpha \) from part (b), this is \( p_{y} = - 4 \pi R \mu_{0} v_{0} \alpha \). It is independent of t, so the momentum is conserved.
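This result can be spot-checked numerically (a sketch assuming SymPy, with assumed sample values \( R = v_{0} = \mu_{0} = 1 \), \( \alpha = 0.3 \), \( \beta = -0.3 \), and assumed periodic pieces f(u) = sin(u), g(v) = sin(2v)); the integral comes out equal to \( 2 \pi R \mu_{0} v_{0} ( \beta - \alpha ) \) at every sampled time:

import sympy as sp

x, t = sp.symbols('x t')
R, v0, mu0 = 1, 1, 1                                  # assumed sample values
alpha, beta = sp.Rational(3, 10), -sp.Rational(3, 10)

u, v = x - v0 * t, x + v0 * t
y = alpha * u + sp.sin(u / R) + beta * v + sp.sin(2 * v / R)   # h_+ = alpha*u + f, h_- = beta*v + g

p_y = sp.integrate(mu0 * sp.diff(y, t), (x, 0, 2 * sp.pi * R))
print([float(p_y.subs(t, tv)) for tv in (0, 0.4, 1.3)])        # all ~ -3.77, independent of t
print(float(2 * sp.pi * R * mu0 * v0 * (beta - alpha)))        # 2*pi*(beta - alpha) ~ -3.77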


Problem 4.6

Problem Statement: Stationary action: minima and saddles.

A particle performing harmonic motion along the x axis can be used to show that classical solutions are not always minima of the action functional. The action for this particle is

\( S[x] = \int^{t_{f}}_{0} L dt = \int^{t_{f}}_{0} dt \frac{1}{2} m (\dot{x}^{2} - \bar{\omega}^{2} x^{2} ) \)

where m is the mass of the particle, \(\bar{\omega}\) is the frequency of oscillation, and the motion happens for t \( \epsilon [0,t_{f}] \). Consider a classical solution \( \bar{x}(t) \) and a variation \( \delta x(t) \) that vanishes for t = 0 and \( t = t_{f} \).

(a) Show that the variation of the action is exactly given by

\( \Delta S[\delta x] \equiv S[\bar{x} + \delta x] - S[\bar{x}] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( (\frac{d \delta x}{dt} )^{2} - \bar{\omega}^{2} \delta x^{2} )\).

It is noteworthy that \( \Delta S \) only depends on \( \delta x \); \( \bar{x} \) drops out from the answer.

(b) A complete set of variations that vanish at t = 0 and \( t = t_{f} \) takes the form

\( \delta_{n} x = sin \omega_{n} t \), with \( \omega_{n} = \frac{\pi n}{t_{f}} \) and n = 1, 2, ..., \( \infty \).

The general variation \( \delta x \) that vanishes at t = 0 and \( t = t_{f} \) is a linear superposition of variations \( \delta_{n} x \) with arbitrary coefficients \( b_{n} \). Calculate \( \Delta S[\delta_{n} x] \) (your answer should vanish for \( \omega_{n} = \bar{\omega} \)). Prove that

\( \Delta S [ \sum\limits_{n=1}^\infty b_{n} \delta_{n} x ] = \sum\limits_{n=1}^\infty \Delta S [ b_{n} \delta_{n} x ] \).

(c) Show that for \( t_{f} < \frac{\pi}{\bar{\omega}} \) one gets \( \Delta S[\delta_{n} x] > 0 \) for all \( n \geq 1 \). Explain why this guarantees that the classical solution is a minimum of the action. Show that for \( \frac{\pi}{\bar{\omega}} < t_{f} < \frac{2\pi}{\bar{\omega}} \) all variations \( \delta_{n} x \) lead to \( \Delta S > 0\), except for \( \delta_{1} x \), which leads to \( \Delta S < 0 \). In this case the classical solution is a saddle point: there are variations that increase the action and variations that decrease the action. As \( t_{f} \) increases, the number of variations \( \delta_{n} x \) that decrease the action increases.


Solution:

Problem 4.6 (a): Show that the variation of the action is exactly given by \( \Delta S[\delta x] \equiv S[\bar{x} + \delta x] - S[\bar{x}] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( (\frac{d \delta x}{dt} )^{2} - \bar{\omega}^{2} \delta x^{2} )\).

For some path x the action S[x] for the classical harmonic oscillator is given by:

\( S[x] = \int^{t_{f}}_{0} L dt = \int^{t_{f}}_{0} dt \frac{1}{2} m (\dot{x}^{2} - \bar{\omega}^{2} x^{2} ) \)
\( \Rightarrow S[x] = \int^{t_{f}}_{0} L dt = \int^{t_{f}}_{0} dt \frac{1}{2} m ((\frac{d}{dt} x )^{2} - \bar{\omega}^{2} x^{2} ) \)

Consider the case of a particular path \( \bar{x} \) and its deviations \( \delta x \) which vanish at t = 0 and \( t = t_{f} \):

\( S[\bar{x} + \delta x] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( (\frac{d}{dt} ( \bar{x} + \delta x ) )^{2} - \bar{\omega}^{2} ( \bar{x} + \delta x )^{2} )\)
\( S[\bar{x} + \delta x] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( ( \frac{d}{dt} \bar{x} + \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \bar{x} + \delta x )^{2} )\)
\( S[\bar{x} + \delta x] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( [ ( \frac{d}{dt} \bar{x})^{2} + 2 \frac{d}{dt} \bar{x} \frac{d}{dt} \delta x + ( \frac{d}{dt} \delta x )^{2} ] - \bar{\omega}^{2} [ (\bar{x})^{2} + 2 \bar{x} \delta{x} + ( \delta x )^{2} ] )\)
\( S[\bar{x} + \delta x] = \frac{1}{2} m \int^{t_{f}}_{0} dt ( [ ( \frac{d}{dt} \bar{x})^{2} - \bar{\omega}^{2} (\bar{x})^{2} ] + [ 2 \frac{d}{dt} \bar{x} \frac{d}{dt} \delta x - 2 \bar{\omega}^{2} \bar{x} \delta{x} ] + [ ( \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \delta x )^{2} ] ) \)

\( S[\bar{x} + \delta x] = \frac{1}{2} m ( \int^{t_{f}}_{0}dt [ ( \frac{d}{dt} \bar{x})^{2} - \bar{\omega}^{2} (\bar{x})^{2} ] + \int^{t_{f}}_{0}dt [ 2 \frac{d}{dt} \bar{x} \frac{d}{dt} \delta x - 2 \bar{\omega}^{2} \bar{x} \delta{x} ] + \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \delta x )^{2} ] ) \)

Then consider the definition \( \Delta S [\delta x] \equiv S[\bar{x} + \delta x] - S[\bar{x}] \):

\( \Delta S [\delta x] = S[\bar{x} + \delta x] - \int^{t_{f}}_{0} dt \frac{1}{2} m ((\frac{d}{dt} \bar{x} )^{2} - \bar{\omega}^{2} \bar{x}^{2} ) \)
\( \Delta S [\delta x] = \frac{1}{2} m ( \int^{t_{f}}_{0}dt [ 2 \frac{d}{dt} \bar{x} \frac{d}{dt} \delta x - 2 \bar{\omega}^{2} \bar{x} \delta{x} ] + \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \delta x )^{2} ] ) \)

The mixed terms vanish: integrating \( \frac{d \bar{x}}{dt} \frac{d (\delta x)}{dt} \) by parts (the boundary terms vanish because \( \delta x \) is zero at t = 0 and \( t = t_{f} \)) turns the cross terms into \( - m \int^{t_{f}}_{0} dt ( \frac{d^{2} \bar{x}}{dt^{2}} + \bar{\omega}^{2} \bar{x} ) \delta x \), which is zero because the classical path \( \bar{x} \) satisfies the equation of motion \( \frac{d^{2} \bar{x}}{dt^{2}} + \bar{\omega}^{2} \bar{x} = 0 \):

\( \Delta S [\delta x] = \frac{1}{2} m ( 0 + \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \delta x )^{2} ] ) \)

\( \Rightarrow \Delta S [\delta x] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \delta x )^{2} - \bar{\omega}^{2} ( \delta x )^{2} ] \) (Q.E.D.)
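A numeric spot-check (a sketch assuming NumPy, with assumed sample values m = \( \bar{\omega} \) = 1, \( t_{f} \) = 2, \( \bar{x}(t) = cos(\bar{\omega} t) \), \( \delta x(t) = 0.1 sin(\pi t / t_{f}) \)): the exact difference \( S[\bar{x} + \delta x] - S[\bar{x}] \) matches the purely quadratic expression above.

import numpy as np

m, wbar, tf = 1.0, 1.0, 2.0                     # assumed sample values
t = np.linspace(0.0, tf, 200001)
xbar = np.cos(wbar * t)                         # a classical solution of x'' = -wbar^2 x
dx = 0.1 * np.sin(np.pi * t / tf)               # a variation vanishing at t = 0 and t = tf

def trap(f):
    # simple trapezoidal quadrature on the grid t
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))

def action(path):
    v = np.gradient(path, t)
    return trap(0.5 * m * (v**2 - wbar**2 * path**2))

lhs = action(xbar + dx) - action(xbar)
rhs = trap(0.5 * m * (np.gradient(dx, t)**2 - wbar**2 * dx**2))
print(lhs, rhs)   # the two numbers agree up to discretization error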

Problem 4.6 (b): Calculate \( \Delta S[\delta_{n} x] \). Prove that \( \Delta S [ \sum\limits_{n=1}^\infty b_{n} \delta_{n} x ] = \sum\limits_{n=1}^\infty \Delta S [ b_{n} \delta_{n} x ] \).

The problem gives definitions for \( \delta_{n} x \) and \( \omega_{n} \) for the classical harmonic oscillator problem:

\( \delta_{n} x = sin \omega_{n} t \), with \( \omega_{n} = \frac{\pi n}{t_{f}} \) and n = 1, 2, ..., \( \infty \).

Step 1: Calculate \( \Delta S \) for this definition of \( \delta_{n} x \), and show that it vanishes if \( \omega_{n} = \bar{\omega} \).

\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} sin \omega_{n} t )^{2} - \bar{\omega}^{2} ( sin \omega_{n} t )^{2} ] \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ (\omega_{n} cos \omega_{n} t )^{2} - \bar{\omega}^{2} ( sin \omega_{n} t )^{2} ] \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ (\omega_{n}^{2} cos^{2} \omega_{n} t ) - ( \bar{\omega}^{2} sin^{2} \omega_{n} t ) ] \)

Let \( \omega_{n} = \bar{\omega} \) and use the trig identity \( cos^{2}(ax) - sin^{2}(ax) = cos(2ax) \):

\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ (\omega_{n}^{2} ( cos^{2} \omega_{n} t - sin^{2} \omega_{n} t ) ] \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m \int^{t_{f}}_{0} [ \omega_{n}^{2} ( cos( 2 \omega_{n} t) ]dt \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{2} m [ \frac{\omega_{n}^{2}}{2 \omega_{n}} (sin( 2 \omega_{n} t) ]\rvert^{t_{f}}_{0} \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{4} m [ \omega_{n} (sin( 2 \omega_{n} t) ]\rvert^{t_{f}}_{0} \)

Substitute in \( \omega_{n} = \frac{\pi n}{t_{f}} \):

\( \Delta S[sin \omega_{n} t] = \frac{1}{4} m [ \frac{\pi n}{t_{f}} (sin( 2 \frac{\pi n}{t_{f}} t) ]\rvert^{t_{f}}_{0} \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{4} m [ \frac{\pi n}{t_{f}} ( (sin( 2 \frac{\pi n}{t_{f}} t_{f}) - sin( 2 \frac{\pi n}{t_{f}} 0) )] \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{4} m [ \frac{\pi n}{t_{f}} ( (sin( 2 \pi n) - sin( 0) )] \)
\( \Delta S[sin \omega_{n} t] = \frac{1}{4} m [ 0 - 0] \)

\( \Rightarrow \Delta S[sin \omega_{n} t] = 0 \) (Q.E.D.)
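A compact SymPy check (a sketch) that \( \Delta S[\delta_{n} x] \) vanishes when \( \bar{\omega} = \omega_{n} = \frac{n \pi}{t_{f}} \), for integer n:

import sympy as sp

t, tf, m = sp.symbols('t t_f m', positive=True)
n = sp.symbols('n', positive=True, integer=True)

wn = n * sp.pi / tf
dx = sp.sin(wn * t)                                  # delta_n x
dS = sp.Rational(1, 2) * m * sp.integrate(sp.diff(dx, t)**2 - wn**2 * dx**2, (t, 0, tf))
print(sp.simplify(dS))   # 0, i.e. Delta S vanishes for omega_bar = omega_n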

Step 2: Prove that \( \Delta S [ \sum\limits_{n=1}^\infty b_{n} \delta_{n} x ] = \sum\limits_{n=1}^\infty \Delta S [ b_{n} \delta_{n} x ] \).

The substitution of an infinite sum \(\sum\limits_{n=1}^\infty b_{n} \delta_{n} x \) into \( \Delta S \) gives:

\( \Delta S [ \sum\limits_{n=1}^\infty b_{n} \delta_{n} x ] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \sum\limits_{n=1}^\infty b_{n} \delta_{n} x )^{2} - \bar{\omega}^{2} ( \sum\limits_{n=1}^\infty b_{n} \delta_{n} x )^{2} ] \)

Consider instead adding together \( \Delta S[ b_{1} \delta_{1} x ] \) and \( \Delta S[ b_{2} \delta_{2} x ] \):

\( \Delta S[ b_{1} \delta_{1} x ] + \Delta S[ b_{2} \delta_{2} x ] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} b_{1} \delta_{1} x )^{2} - \bar{\omega}^{2} ( b_{1} \delta_{1} x )^{2} ] + \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} b_{2} \delta_{2} x )^{2} - \bar{\omega}^{2} ( b_{2} \delta_{2} x )^{2} ] \)
\( \Delta S[ b_{1} \delta_{1} x ] + \Delta S[ b_{2} \delta_{2} x ] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} ( b_{1} \delta_{1} x + b_{2} \delta_{2} x) )^{2} - \bar{\omega}^{2} ( b_{1} \delta_{1} x + b_{2} \delta_{2} x )^{2} ] \)

The cross terms that appear when the squares on the right are expanded integrate to zero, because sines (and cosines) with distinct frequencies \( \omega_{n} = \frac{\pi n}{t_{f}} \) are orthogonal on \( [0, t_{f}] \): \( \int^{t_{f}}_{0} sin( \omega_{n} t) sin( \omega_{m} t) dt = \int^{t_{f}}_{0} cos( \omega_{n} t) cos( \omega_{m} t) dt = 0 \) for \( n \neq m \).

Extend this to N:

\( \Delta S[ b_{1} \delta_{1} x ] + \Delta S[ b_{2} \delta_{2} x ] + ... + \Delta S[ b_{N} \delta_{N} x ] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} ( b_{1} \delta_{1} x + b_{2} \delta_{2} x + ... + b_{N} \delta_{N} x ) )^{2} - \bar{\omega}^{2} ( b_{1} \delta_{1} x + b_{2} \delta_{2} x + ... + b_{N} \delta_{N} x )^{2} ] \)

Taking the limit \( N \rightarrow \infty \):

\( \sum\limits_{n=1}^\infty \Delta S [ b_{n} \delta_{n} x ] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ ( \frac{d}{dt} \sum\limits_{n=1}^\infty b_{n} \delta_{n} x )^{2} - \bar{\omega}^{2} ( \sum\limits_{n=1}^\infty b_{n} \delta_{n} x )^{2} ] \)

\( \Rightarrow \sum\limits_{n=1}^\infty \Delta S [ b_{n} \delta_{n} x ] = \Delta S [ \sum\limits_{n=1}^\infty b_{n} \delta_{n} x ] \) (Q.E.D.)

Problem 4.6 (c): Show that for \( t_{f} < \frac{\pi}{\bar{\omega}} \) one gets \( \Delta S[\delta_{n} x] > 0 \) for all \( n \geq 1 \). Explain why this guarantees that the classical solution is a minimum of the action. Show that for \( \frac{\pi}{\bar{\omega}} < t_{f} < \frac{2\pi}{\bar{\omega}} \) all variations \( \delta_{n} x \) lead to \( \Delta S > 0\), except for \( \delta_{1} x \), which leads to \( \Delta S < 0 \). In this case the classical solution is a saddle point: there are variations that increase the action and variations that decrease the action. As \( t_{f} \) increases, the number of variations \( \delta_{n} x \) that decrease the action increases.

This is addressing situations where \( \omega_{n} \neq \bar{\omega} \), because Problem 4.6 (b) shows \( \Delta S[\delta_{n}x] = 0 \) for \( \omega_{n} = \bar{\omega} \):

\( \Delta S[\delta_{n} x] = \frac{1}{2} m \int^{t_{f}}_{0} dt [ (\omega_{n}^{2} cos^{2} \omega_{n} t ) - ( \bar{\omega}^{2} sin^{2} \omega_{n} t ) ] \)
\( \Delta S[\delta_{n} x] = \frac{1}{2} m ( [ \frac{1}{4} \omega_{n} (2 \omega_{n} t + sin(2 \omega_{n} t)) ] + [ \frac{\bar{\omega}^{2}}{4 \omega_{n}} ( sin(2 \omega_{n} t) - 2 \omega_{n} t ) ] )\rvert^{t_{f}}_{0} \)
\( \Delta S[\delta_{n} x] = \frac{1}{2} m ( \frac{1}{4 \omega_{n}} [ 2 \omega_{n} t (\omega_{n}^{2} - \bar{\omega}^{2}) + (\omega_{n}^{2} + \bar{\omega}^{2})sin(2 \omega_{n} t) ] )\rvert^{t_{f}}_{0} \)

This reduces to \( \Delta S[\delta_{n} x] = \frac{1}{2} m [ \frac{1}{2} \omega_{n} sin( 2 \omega_{n} t) ]\rvert^{t_{f}}_{0} \) if \( \omega_{n} = \bar{\omega} \), in agreement with Step 1 of part (b). Otherwise, the sine term vanishes at both limits, since \( sin(2 \omega_{n} t_{f}) = sin(2 \pi n) = 0 \), leaving:

\( \Delta S[\delta_{n} x] = \frac{1}{2} m ( \frac{1}{4 \omega_{n}} [ 2 \omega_{n} t_{f} (\omega_{n}^{2} - \bar{\omega}^{2}) ] ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{2} m ( \frac{t_{f}}{4 \pi n} [ 2 \pi n ( (\frac{\pi n}{t_{f}} )^{2} - \bar{\omega}^{2}) ] ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{2} m ( \frac{t_{f}}{2} ( (\frac{\pi n}{t_{f}} )^{2} - \bar{\omega}^{2}) ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( (\frac{(\pi n)^{2}}{t_{f}} ) - t_{f} \bar{\omega}^{2}) ) \)

Consider the case of \( t_{f} = \frac{\pi}{\bar{\omega}} \):

\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \pi n^{2} \bar{\omega} - \pi \bar{\omega} ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{4} m \pi \bar{\omega} ( n^{2} - 1 ) \)
\( \Rightarrow \Delta S[\delta_{1} x] = 0 , \Delta S[\delta_{n} x] > 0 \) for \( n > 1 \)

Consider the case of \( t_{f} < \frac{\pi}{\bar{\omega}} \), and define \( \omega_{t} \equiv \frac{\pi}{t_{f}} \), so that the condition reads \( \omega_{t} > \bar{\omega} \):

\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \pi n^{2} \omega_{t} - \pi \frac{\bar{\omega}^{2}}{\omega_{t}} ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{4} m \pi \omega_{t} [ n^{2} - \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} ] \)

The term \( \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} < 1 \) because \( \omega_{t} > \bar{\omega} \). Therefore \( \Delta S[\delta_{n} x] > 0 \) for all \( n \geq 1 \). This guarantees the classical solution \(\bar{x}\) is a minimum of the action because there are no variations n that decrease the action (i.e. \( \Delta S[\delta_{n} x] < 0 \) does not exist for any value \( n \geq 1 \) ).

Consider the case of \( \frac{\pi}{\bar{\omega}} < t_{f} < \frac{2\pi}{\bar{\omega}} \). In terms of \( \omega_{t} = \frac{\pi}{t_{f}} \), this says \( \frac{1}{2} \bar{\omega} < \omega_{t} < \bar{\omega} \). For \( \omega_{t} \) slightly less than \( \bar{\omega} \) we have \( \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} > 1 \), which means \( \Delta S[\delta_{1} x] < 0 \). The floor on how much smaller \( \omega_{t} \) can get is set by \( \omega_{t} > \frac{1}{2} \bar{\omega} \), which implies \( \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} < 4 \). Substituting the limiting value \( \omega_{t} = \frac{1}{2} \bar{\omega} \):

\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \pi \omega_{t} [(n)^{2} - \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} ] ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \pi \omega_{t} [(n)^{2} - \frac{\bar{\omega}^{2}}{(\frac{1}{2} \bar{\omega})^{2}} ] ) \)
\( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \pi \omega_{t} [(n)^{2} - 4 ] ) \)

This means \( \Delta S[\delta_{2} x] = 0 \) only at the limiting value \( (\frac{\bar{\omega}}{\omega_{t}})^{2} = 4 \), and \( \Delta S[\delta_{2} x] > 0 \) for \( (\frac{\bar{\omega}}{\omega_{t}})^{2} < 4 \). For n = 1, \( \Delta S[\delta_{1} x] < 0 \) throughout this range, since \( \omega_{t} < \bar{\omega} \) makes \( 1 - \frac{\bar{\omega}^{2}}{\omega_{t}^{2}} \) negative (it reaches \( - \frac{3}{8} m \pi \bar{\omega} \) at \( \omega_{t} = \frac{1}{2} \bar{\omega} \)). For \( n \geq 2 \), \( \Delta S[\delta_{n} x] > 0 \). The classical solution is therefore a saddle point for \( \frac{\pi}{\bar{\omega}} < t_{f} < \frac{2\pi}{\bar{\omega}} \): there are variations that increase the action and a variation, \( \delta_{1} x \), that decreases it. As \( t_{f} \) increases further, more values of n give \( \Delta S[\delta_{n} x] < 0 \), so the number of variations that decrease the action grows with \( t_{f} \). (Q.E.D.)
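A short numeric illustration (assuming NumPy, with assumed sample values m = \( \bar{\omega} \) = 1) of the sign pattern of \( \Delta S[\delta_{n} x] = \frac{1}{4} m ( \frac{(\pi n)^{2}}{t_{f}} - t_{f} \bar{\omega}^{2} ) \) in the two regimes:

import numpy as np

m, wbar = 1.0, 1.0                                   # assumed sample values
for tf in (0.8 * np.pi / wbar, 1.5 * np.pi / wbar):  # t_f < pi/wbar, then pi/wbar < t_f < 2*pi/wbar
    dS = [0.25 * m * ((n * np.pi)**2 / tf - tf * wbar**2) for n in range(1, 5)]
    print(round(tf, 3), [round(s, 3) for s in dS])
# first line: all entries positive (minimum); second line: n = 1 negative, n >= 2 positive (saddle)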


Problem 4.7

Problem Statement: Variational problem for strings.

Consider a string stretched from x = 0 to x = a, with a tension \( T_{0} \) and a position-dependent mass density \( \mu(x) \). The string is fixed at the endpoints and can vibrate in the y direction. Equation (4.20) determines the oscillation frequencies \( \omega_{i} \) and associated profiles \( \psi_{i} (x) \) for this string.

(a) Set up a variational procedure that gives an upper bound on the lowest frequency of oscillation \( \omega_{0} \). (This can be done roughly as in quantum mechanics, where the ground state energy \( E_{0} \) of a system with Hamiltonian H satisfies \( E_{0} \leq (\psi , H \psi )/(\psi , \psi ) \).) As a useful first step consider the inner product

\( ( \psi_{i} , \psi_{j} ) = \int^{a}_{0} \mu (x) \psi_{i} (x) \psi_{j} (x) dx \)

and show that it vanishes for \( \omega_{i} \neq \omega_{j} \). Explain why your variational procedure works.

(b) Consider the case \( \mu (x) = \mu_{0} \frac{x}{a} \). Use your variational principle to find a simple bound on the lowest oscillation frequency. Compare with the answer \( \omega_{0}^{2} \approx (18.956) \frac{T_{0}}{\mu_{0} a^{2}} \) obtained by a direct numerical solution of the eigenvalue problem.

Solution Heuristic:

The textbook assumes prior knowledge of the variational method, so to understand how to approach this problem, we will first recap the variational approximation for the quantum harmonic oscillator. The idea of the variational theorem is to choose a trial wavefunction \( \psi \); then, assuming it is well-behaved and satisfies the boundary conditions, we can bound the ground state energy \( E_{0} \) of \( \hat{H} \) (the lowest eigenvalue of the Hamiltonian) as follows:

\( \frac{\int \psi^{*} \hat{H} \psi d x}{ \int \psi^{*} \psi d x} \geq E_{0} \)

This textbook's notation for that is \( (\psi , H \psi )/(\psi , \psi ) \geq E_{0} \). The ground state happens to be a Gaussian wavefunction, so using a Gaussian function for \( \psi \) will end up giving the exact answer for \( E_{0} \).

Let \( \psi(x) = e^{-\alpha x^{2}} \), where \( \alpha > 0 \) is the variational parameter. The denominator is the integral \( \int \psi^{*} \psi d x \) and the function \( \psi \) is real so the complex conjugate \( \psi^{*} \) is the same:

\( \int^{\infty}_{-\infty} \psi^{*} \psi d x = \int^{\infty}_{-\infty} e^{-\alpha x^{2}} e^{-\alpha x^{2}} dx \)
\( \int^{\infty}_{-\infty} \psi^{*} \psi d x = \int^{\infty}_{-\infty} e^{- 2\alpha x^{2}} dx \)
\( \int^{\infty}_{-\infty} \psi^{*} \psi d x = \sqrt{ \frac{\pi}{2 \alpha} } \)

The numerator is the integral \( \int \psi^{*} \hat{H} \psi d x \), where \( \hat{H} = \frac{- \hbar^{2}}{2m} \frac{d^{2}}{d x^{2}} + \frac{1}{2} m \omega^{2} x^{2} \):

\( \int^{\infty}_{-\infty} \psi^{*} \hat{H} \psi d x = \int^{\infty}_{-\infty} e^{-\alpha x^{2}} ( \frac{- \hbar^{2}}{2m} \frac{d^{2}}{d x^{2}} + \frac{1}{2} m \omega^{2} x^{2} ) e^{-\alpha x^{2}} dx \)
\( \int^{\infty}_{-\infty} \psi^{*} \hat{H} \psi d x = \frac{\alpha \hbar^{2}}{m} \sqrt{ \frac{\pi}{2 \alpha} } + ( \frac{1}{2} m \omega^{2} - \frac{2 \alpha^{2} \hbar^{2}}{m} ) \frac{1}{8} \sqrt{ \frac{2 \pi}{\alpha^{3}} } \)

Let the variational method be a function G:

\( G = \frac{\int \psi^{*} \hat{H} \psi d x}{ \int \psi^{*} \psi d x} \)

\( G = \frac{ \frac{\alpha \hbar^{2}}{m} \sqrt{ \frac{\pi}{2 \alpha} } + ( \frac{1}{2} m \omega^{2} - \frac{2 \alpha^{2} \hbar^{2}}{m} ) \frac{1}{8} \sqrt{ \frac{2 \pi}{\alpha^{3}} } }{ \sqrt{ \frac{\pi}{2 \alpha} } } \)
\( G = \frac{\alpha \hbar^{2}}{2m} + \frac{m \omega^{2}}{8 \alpha} \)

The point of the variational method is to take the minimum of G, which is then the approximation of the lowest eigenvalue of H:

\( \frac{dG}{d \alpha} = 0 \)
\( \frac{\hbar^{2}}{2m} - \frac{m \omega^{2}}{8 \alpha^{2}} = 0\)
\( \Rightarrow \alpha = \frac{m \omega}{2 \hbar} \) (taking the positive root, since \( \alpha > 0 \))

Substituting back gives \( G_{min} = \frac{\hbar \omega}{4} + \frac{\hbar \omega}{4} = \frac{\hbar \omega}{2} \), which is exactly the ground-state energy \( E_{0} \) of the harmonic oscillator, as expected since the Gaussian trial function has the form of the true ground state.

For the string, instead of a ground-state energy \( E_{0} \), we want a variational method stated in terms of the lowest frequency \( \omega_{0} \).
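
As a quick sanity check of this heuristic, here is a minimal Python sketch (using SciPy, in units \( \hbar = m = \omega = 1 \) so that the exact answer is \( E_{0} = \frac{1}{2} \)) that minimizes the G derived above numerically:

from scipy.optimize import minimize_scalar

# Units hbar = m = omega = 1, so the exact ground-state energy is E_0 = 0.5.
hbar = m = omega = 1.0

def G(alpha):
    # G(alpha) = alpha*hbar^2/(2m) + m*omega^2/(8*alpha), as derived above
    return alpha * hbar**2 / (2 * m) + m * omega**2 / (8 * alpha)

res = minimize_scalar(G, bounds=(1e-6, 10.0), method="bounded")
print(res.x)    # ~0.5, i.e. alpha = m*omega/(2*hbar)
print(res.fun)  # ~0.5, i.e. E_0 = hbar*omega/2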


Solution:

Equation 4.20: \( \frac{d^{2} y}{d x^{2}} + \frac{\mu(x)}{T_{0}} \omega^{2} y(x) = 0 \)

Problem 4.7 (a): Set up a variational procedure that gives an upper bound on the lowest frequency of oscillation \( \omega_{0} \). Show that \( ( \psi_{i} , \psi_{j} ) \) vanishes at \( \omega_{i} \neq \omega_{j} \). Explain why your variational procedure works.

Step 1: Show \( ( \psi_{i} , \psi_{j} ) \) vanishes at \( \omega_{i} \neq \omega_{j} \)

Instead of a single trial wavefunction integrated over all space with its complex conjugate, we now work with real string profiles fixed at the endpoints x = 0 and x = a:

\( ( \psi_{i} , \psi_{j} ) = \int^{a}_{0} \mu (x) \psi_{i} (x) \psi_{j} (x) dx \)

Let \( \psi_{i} = sin( \frac{i \pi x}{a} ) , \psi_{j} = sin( \frac{j \pi x}{a} ) \):

\( ( \psi_{i} , \psi_{j} ) = \int^{a}_{0} \mu (x) sin( \frac{i \pi x}{a} ) sin( \frac{j \pi x}{a} ) dx \)

Ignore the \( \mu(x) \) term and consider how \( g(x) = sin( \frac{i \pi x}{a} ) sin( \frac{j \pi x}{a} ) \) integrates:

\( \int^{a}_{0} g(x) dx = \int^{a}_{0} sin( \frac{i \pi x}{a} ) sin( \frac{j \pi x}{a} ) dx \)
\( \int^{a}_{0} g(x) dx = \frac{a}{2 \pi} ( \frac{sin( \frac{\pi x (j - i)}{a} )}{j - i} - \frac{sin(\frac{\pi x (j + i)}{a} )}{j + i} )\rvert^{a}_{0} \)
\( \int^{a}_{0} g(x) dx = \frac{a}{2 \pi} ( ( \frac{sin( \pi (j - i) )}{j - i} - \frac{sin( \pi (j + i) )}{j + i} ) - ( \frac{sin( 0 )}{j - i} - \frac{sin( 0 )}{j + i} ) )\)
\( \int^{a}_{0} g(x) dx = \frac{a}{2 \pi} ( \frac{sin( \pi (j - i) )}{j - i} - \frac{sin( \pi (j + i) )}{j + i} ) \)

This form is undefined for i = j, but for \( i \neq j \), every argument of sine is an integer multiple of \( \pi \): \( sin ((j \pm i) \pi) = 0 \)

So the sine profiles are orthogonal on their own, without the weight \( \mu(x) \). The inner product we actually need carries the weight \( \mu(x) \), and for a non-constant mass density the true profiles \( \psi_{i}(x) \) of Equation 4.20 are no longer pure sines, so the general statement should come from Equation 4.20 itself. The profiles \( \psi_{i} \) and \( \psi_{j} \) satisfy:

\( T_{0} \frac{d^{2} \psi_{i}}{d x^{2}} = - \omega_{i}^{2} \mu(x) \psi_{i} , \qquad T_{0} \frac{d^{2} \psi_{j}}{d x^{2}} = - \omega_{j}^{2} \mu(x) \psi_{j} \)

Multiply the first equation by \( \psi_{j} \), the second by \( \psi_{i} \), subtract, and integrate from 0 to a:

\( T_{0} \int^{a}_{0} ( \psi_{j} \frac{d^{2} \psi_{i}}{d x^{2}} - \psi_{i} \frac{d^{2} \psi_{j}}{d x^{2}} ) dx = ( \omega_{j}^{2} - \omega_{i}^{2} ) \int^{a}_{0} \mu(x) \psi_{i} \psi_{j} dx \)

The left-hand side is the integral of a total derivative, \( T_{0} [ \psi_{j} \frac{d \psi_{i}}{d x} - \psi_{i} \frac{d \psi_{j}}{d x} ] \rvert^{a}_{0} \), and it vanishes because the string is fixed at the endpoints: \( \psi_{i}(0) = \psi_{i}(a) = 0 \) for every i. Therefore

\( ( \omega_{j}^{2} - \omega_{i}^{2} ) ( \psi_{i} , \psi_{j} ) = 0 \)

so \( ( \psi_{i} , \psi_{j} ) = 0 \) whenever \( \omega_{i} \neq \omega_{j} \). (Q.E.D.)

Since this inner product will sit in the denominator of the variational equation, we must take i = j. With the sine trial profiles, that denominator is:

\( ( \psi_{i} , \psi_{i} ) = \int^{a}_{0} \mu (x) sin^{2}( \frac{ i \pi x}{a} )dx \)
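
As a numerical cross-check of this orthogonality (a minimal sketch, not part of the textbook problem), we can discretize Equation 4.20 with finite differences for the sample density \( \mu(x) = \mu_{0} \frac{x}{a} \) from part (b), in units where \( T_{0} = \mu_{0} = a = 1 \):

import numpy as np

# Finite-difference eigenproblem for Eq. 4.20 with mu(x) = mu_0*x/a, Dirichlet BCs.
# Units T_0 = mu_0 = a = 1, so eigenvalues are omega^2 in units of T_0/(mu_0 a^2).
N = 500                                  # interior grid points
x = np.linspace(0.0, 1.0, N + 2)[1:-1]   # exclude the fixed endpoints
h = 1.0 / (N + 1)
mu = x                                   # mu(x) = mu_0*x/a with mu_0 = a = 1

# -d^2/dx^2 as a tridiagonal matrix
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
A = -D2

# Eq. 4.20 becomes A psi = omega^2 * diag(mu) * psi; dividing rows by mu gives a
# standard (non-symmetric) eigenproblem, so the weighted orthogonality below is
# a genuine check rather than something imposed by the solver.
evals, evecs = np.linalg.eig(A / mu[:, None])
order = np.argsort(evals.real)
evals, evecs = evals.real[order], evecs.real[:, order]

print(evals[0])                                    # ~18.95, cf. the 18.956 quoted in part (b)
print(h * np.sum(mu * evecs[:, 0] * evecs[:, 1]))  # weighted inner product, ~0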


Step 2: A Digression on Non-Zero Denominator (Not Important)

If \( \mu (x) \) were a constant, we could appeal to the integral identity:

\( \mu_{0} \int^{\pi}_{-\pi} sin(i x) sin(j x) dx = \mu_{0} \pi \delta_{i j} \)

But for a general mass density \( \mu (x) \), the i = j inner product can be rewritten using \( sin^{2} \theta = \frac{1}{2}(1 - cos(2 \theta)) \):

\( ( \psi_{i} , \psi_{i} ) = \int^{a}_{0} \mu (x) \frac{1}{2} dx - \int^{a}_{0} \mu (x) \frac{1}{2} cos( \frac{ i 2 \pi x}{a} )dx \)
\( ( \psi_{i} , \psi_{i} ) = \frac{1}{2} ( \int^{a}_{0} \mu (x) dx - \int^{a}_{0} \mu (x) cos( \frac{ i 2 \pi x}{a} )dx ) \)

The cosine integral depends on the mass density, but since \( |cos| \leq 1 \) it can never fully cancel the first term for a strictly positive \( \mu (x) \), so \( ( \psi_{i} , \psi_{i} ) > 0 \). To see the structure of the cosine piece, integrate by parts with a generic mass density \( \mu (x) \):

Let \( h'(x) = cos( \frac{ i 2 \pi x}{a} ) , f(x) = \mu (x) \)

\( f(x)h(x)\rvert^{a}_{0} - \int^{a}_{0} f'(x) h(x) dx = \int^{a}_{0} f(x) h'(x) dx \)
\( \mu (x) ( \frac{a sin (\frac{i 2 \pi x}{a})} {i 2 \pi } )\rvert^{a}_{0} - \int^{a}_{0} \mu'(x) ( \frac{a sin (\frac{i 2 \pi x}{a})} {i 2 \pi } ) dx = \int^{a}_{0} \mu (x) cos( \frac{ i 2 \pi x}{a} )dx \)
\( - \int^{a}_{0} \mu ' (x) ( \frac{a sin (\frac{i 2 \pi x}{a})} {i 2 \pi } ) dx = \int^{a}_{0} \mu (x) cos( \frac{ i 2 \pi x}{a} )dx \)

The boundary term vanishes, leaving the cosine integral expressed as an integral of \( \mu '(x) \) against a sine, which is generically nonzero but smaller in magnitude than the first term. In any case, the full inner product \( ( \psi_{i} , \psi_{i} ) \) stays strictly positive: specific examples with \( sin^{2} \) integrands (such as the one worked out below) give nonzero results.
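
For example, a short symbolic check (a SymPy sketch, not required for the solution) confirms that with the part (b) density \( \mu(x) = \mu_{0} \frac{x}{a} \) the diagonal inner product equals \( \frac{\mu_{0} a}{4} \neq 0 \) for the first few i:

import sympy as sp

# Weighted norm of sin(i*pi*x/a) with the density mu(x) = mu_0*x/a from part (b).
x, a, mu0 = sp.symbols('x a mu_0', positive=True)

for i in (1, 2, 3):
    inner = sp.integrate(mu0 * (x / a) * sp.sin(i * sp.pi * x / a)**2, (x, 0, a))
    print(i, sp.simplify(inner))   # prints a*mu_0/4 (i.e. mu_0*a/4) for each i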


Step 3: Variational Method Defined

The observable \( \mathcal{O} \) we want is seemingly frequency \( \omega \) rather than energy, so we will try to define the variational principle in terms of \( \mathcal{O} = \omega \) first:

\( ( \psi , \mathcal{O} \psi) / ( \psi , \psi ) \geq \omega_{0} \)
\( ( \psi , \omega \psi) / ( \psi , \psi ) \geq \omega_{0} \)

\( \frac{\int \mu (x) \psi [ \mathcal{O} ] \psi d x}{ \int \mu (x) \psi \psi d x} \geq \omega_{0} \)
\( \frac{\int \mu (x) \psi [ \omega ] \psi d x}{ \int \mu (x) \psi \psi d x} \geq \omega_{0} \)
\( \frac{\int^{a}_{0} \mu (x) \psi_{i} [ \omega ] \psi_{i} d x}{ \int^{a}_{0} \mu (x) \psi_{i} \psi_{i} d x } \geq \omega_{0} \)
\( \frac{\int^{a}_{0} \mu (x) [ \omega ] sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu (x) sin^{2}( \alpha x )dx } \geq \omega_{0} \)

\( \frac{\int^{a}_{0} \mu (x) ( \sqrt{ \frac{T_{0}}{\mu (x)} } ( \frac{n \pi}{a} ) ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu (x) sin^{2}( \alpha x )dx } \geq \omega_{0} \)

This variational procedure should work because the cross terms with \( i \neq j \) drop out, leaving a positive, non-zero denominator, while the numerator averages the frequency over the same profile. Minimizing over the variational parameter \( \alpha \) should then bring us close to the lowest eigenvalue \( \omega_{0} \), since the trial family contains the exact sine solutions of the constant-density problem. When we use the lowest constant-density profile, \( \psi_{1} = sin( \frac{\pi x}{a} ) \), as the trial wavefunction (which already satisfies the boundary conditions), the calculation should land close to the numerical result without any need for \( \alpha \).

However, when we actually do part (b), we will also use a more general approximation that cancels out the sine terms, and show it gives the same answer. This will remove the variational parameter \( \alpha \) from the problem. We will also re-define the variational method to find \( \omega^{2} \) instead of \( \omega \) because of mathematical convenience with the specific case of \( \mu (x) = \mu_{0} \frac{x}{a} \).


Problem 4.7 (b): Consider the case \( \mu (x) = \mu_{0} \frac{x}{a} \). Use your variational principle to find a simple bound on the lowest oscillation frequency. Compare with the answer \( \omega_{0}^{2} \approx (18.956) \frac{T_{0}}{\mu_{0} a^{2}} \) obtained by a direct numerical solution of the eigenvalue problem.

Step 1: Variational Approximation using \( \mathcal{O} = \omega \)

We will initially try to use the variational method to find observable \( \mathcal{O} = \omega \) using a variational parameter \( \alpha \) (but this will become intractable):

Insert the case \( \mu (x) = \mu_{0} \frac{x}{a} \) into the variational equation G(x):

\( G(x) = \frac{\int^{a}_{0} \mu (x) ( \sqrt{ \frac{T_{0}}{\mu (x)} } ( \frac{n \pi}{a} ) ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu (x) sin^{2}( \alpha x )dx } \geq \omega_{0} \)

\( G(x) = \frac{\int^{a}_{0} \mu_{0} \frac{x}{a} ( \sqrt{ \frac{T_{0} a}{\mu_{0} x} } ( \frac{i \pi}{a} ) ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \)

Simplify these terms, assuming a and x are positive, and write \( \omega_{i} \equiv \sqrt{ \frac{T_{0}}{\mu_{0}} } \frac{i \pi}{a} \):

\( G(x) = \frac{\int^{a}_{0} \mu_{0} \frac{x}{a} \sqrt{\frac{a}{x}} ( \omega_{i} ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \)
\( G(x) = \frac{\int^{a}_{0} \mu_{0} \sqrt{\frac{x}{a}} ( \omega_{i} ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \)
\( G(x) = \sqrt{a} \omega_{i} \frac{\int^{a}_{0} \sqrt{x} sin^{2}( \alpha x ) d x}{ \int^{a}_{0} x sin^{2}( \alpha x )dx } \)

The numerator only seems to evaluate in terms of Fresnel functions, so it is unclear how to proceed to an exact solution when the observable is \( \omega \). We could try exponential trial wavefunctions, as in the quantum harmonic oscillator example, but those integrals do not combine any more cleanly and would involve error functions.

Step 2: Change the Observable to \( \mathcal{O} = \omega_{1}^{2} \)

Intuitively, a reasonably close result likely to arise from integrating sine functions would be \( \omega = (\sqrt{2} \pi) \sqrt{ \frac{ T_{0} }{\mu_{0} a^{2}} } > \omega_{0} \). This would yield \( \omega^{2} \approx (19.739) \frac{ T_{0} }{\mu_{0} a^{2}} > (18.956) \frac{T_{0}}{\mu_{0} a^{2}} \approx \omega_{0}^{2} \).

If we instead re-define G(x) to approximate \( \omega_{1}^{2}\) rather than \( \omega_{1} \) directly, the awkward x-dependence in the numerator cancels out because of the particular choice of \( \mu (x) \). The integrals are then exactly solvable in terms of trigonometric functions, but the result is a mess of sine and cosine terms, which makes solving \( \frac{d G}{d \alpha} = 0 \) for \( \alpha \) difficult. We cannot simply set \(sin(\alpha a) = 0 , cos(\alpha a) = 1 \) without implicitly fixing \( \alpha \), though arguably \( \alpha \) is already constrained, since the trial wavefunctions must satisfy the boundary conditions. What we are looking for is a pure number: the constant coefficient multiplying \( \frac{ T_{0} }{\mu_{0} a^{2}} \) in the bound on \( \omega_{0}^{2} \).

We can approach this in two ways. In the first method we approximate \( sin^{2}(\alpha x) \approx \alpha^{2} x^{2} \) for small \( \alpha x \) and divide the common polynomial factor out of the numerator and denominator integrands (a l'Hôpital-style simplification); \( \alpha \) then drops out entirely. In the second method we take the lowest constant-density profile \( \psi_{1} = sin( \frac{\pi x}{a} ) \) as the trial function, so that the sine and cosine terms evaluate to 0 and 1 at the boundary points and there is no \( \alpha \) left to solve for. Both methods yield the same answer:

Method 1: Approximating \( sin^{2}(\alpha x) \approx \alpha^{2} x^{2} \)

\( G(x) = \frac{\int^{a}_{0} \mu_{0} \frac{x}{a} ( \omega_{1}^{2} ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \geq \omega^{2}_{0} \)
\( G(x) = \frac{\int^{a}_{0} \mu_{0} \frac{x}{a} ( \frac{T_{0} a}{\mu_{0} x} ( \frac{\pi^{2}}{a^{2}} ) ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \)
\( G(x) = \frac{\int^{a}_{0} \pi^{2} ( \frac{T_{0} }{a^{2}} ) sin^{2}( \alpha x ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \alpha x )dx } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2}} ) \frac{\int^{a}_{0} d x}{ \int^{a}_{0} \frac{x}{a} dx } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) \frac{ a }{ \frac{a}{2} } \)

\( G(x) = 2 \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) \geq \omega^{2}_{0} \) (Q.E.D.)

Method 2: Assuming \( sin^{2}(\pi x / a) \) Terms

\( G(x) = \frac{\int^{a}_{0} \mu_{0} \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2}} ) sin^{2}( \frac{ \pi x}{a} ) d x}{ \int^{a}_{0} \mu_{0} \frac{x}{a} sin^{2}( \frac{ \pi x}{a} )dx } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) a \frac{\int^{a}_{0} sin^{2}( \frac{ \pi x}{a} ) d x}{ \int^{a}_{0} x sin^{2}( \frac{ \pi x}{a} )dx } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) a \frac{ [ \frac{x}{2} - \frac{a}{4 \pi} sin( \frac{2 \pi x}{a} ) ]\rvert^{a}_{0} }{ [ \frac{x^{2}}{4} - \frac{a x}{4 \pi} sin( \frac{2 \pi x}{a} ) - \frac{a^{2}}{8 \pi^{2}} cos( \frac{2 \pi x}{a} ) ]\rvert^{a}_{0} } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) a \frac{ \frac{a}{2} - 0 }{ ( \frac{a^{2}}{4} - 0 - \frac{a^{2}}{8 \pi^{2}} ) - ( 0 - 0 - \frac{a^{2}}{8 \pi^{2}} ) } \)
\( G(x) = \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) a \frac{ \frac{a}{2} }{ \frac{a^{2}}{4} } \)
\( G(x) = 2 \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) \geq \omega^{2}_{0} \) (Q.E.D.)

This is exactly the value guessed above by comparison with the numerical eigenvalue result: \( \omega_{1}^{2} = 2 \pi^{2} ( \frac{T_{0} }{\mu_{0} a^{2} } ) \approx (19.739) \frac{ T_{0} }{\mu_{0} a^{2}} \geq (18.956) \frac{T_{0}}{\mu_{0} a^{2}} \approx \omega_{0}^{2} \)

\( \Rightarrow \omega_{1} = (\sqrt{2} \pi) \sqrt{ \frac{ T_{0} }{\mu_{0} a^{2}} } > \omega_{0} \) (Q.E.D.)
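
The Method 2 integrals can also be verified symbolically (a short SymPy sketch, not part of the textbook solution):

import sympy as sp

# Symbolic check of the Method 2 ratio with trial profile sin(pi*x/a)
# and density mu(x) = mu_0*x/a.
x, a, mu0, T0 = sp.symbols('x a mu_0 T_0', positive=True)

num = sp.integrate(mu0 * sp.pi**2 * (T0 / (mu0 * a**2)) * sp.sin(sp.pi * x / a)**2, (x, 0, a))
den = sp.integrate(mu0 * (x / a) * sp.sin(sp.pi * x / a)**2, (x, 0, a))

print(sp.simplify(num / den))   # expect 2*pi**2*T_0/(a**2*mu_0)
print(float(2 * sp.pi**2))      # 19.739..., versus the numerical 18.956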


Problem 4.8

Problem Statement: Deriving Euler-Lagrange equations.

(a) Consider an action for a dynamical variable q(t)

\( S = \int dt L( q(t) , \dot{q}(t) ; t ). \)

Calculate the variation \( \delta S \) of the action under a variation \( \delta q(t) \) of the coordinate. Use the condition \( \delta S = 0 \) to find the equation of motion for the coordinate q(t) (the Euler-Lagrange equation).

(b) Consider an action for a dynamical field variable \( \phi(t, \overrightarrow{x}) \). As indicated, the field is a function of space and time, and is briefly written as the spacetime function \( \phi (x) \). The action is obtained by integrating the Lagrangian density \( \mathcal{L} \) over spacetime. The Lagrangian density is a function of the field and the spacetime derivative of the field

\( S = \int d^{D} x \mathcal{L} ( \phi(x) , \partial_{\mu} \phi (x) ) \)

Here \( d^{D} x = dt \, dx^{1} dx^{2} ... dx^{d} \), and \( \partial_{\mu} \phi = \partial \phi / \partial x^{\mu} \). Calculate the variation \( \delta S \) of the action under a variation \( \delta \phi(x) \) of the field. Use the condition \( \delta S = 0 \) to find the equation of motion for the field \( \phi(x) \) (the Euler-Lagrange equation).

Solution:

Problem 4.8 (a): Calculate the variation \( \delta S \) of the action under a variation \( \delta q(t) \) of the coordinate. Use the condition \( \delta S = 0 \) to find the equation of motion for the coordinate q(t) (the Euler-Lagrange equation).

\( S = \int dt L( q(t) , \dot{q}(t) ; t ) \)
\( \delta S = \delta \int dt L( q(t) , \dot{q}(t) ; t ) \)

Expand \( \delta L \) as a total differential, using the linear approximation \( \delta f = \sum\limits_{i=1}^{n} \frac{\partial f}{\partial x_{i}} \delta x_{i} \):

\( \delta S = \int dt ( \frac{\partial L}{\partial q} \delta q + \frac{\partial L}{\partial \dot{q}} \delta \dot{q} ) \)

We want to express the whole thing in terms of \( \delta q \) instead of \( \delta \dot{q} \), so we substitute the second term with its equivalent from derivative product rule:

Product Rule: \( \frac{d}{dt}( \frac{\partial L}{\partial \dot{q}} \delta q ) = (\frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q + \frac{\partial L}{\partial \dot{q}} \delta \dot{q} \)

\( \delta S = \int dt ( \frac{\partial L}{\partial q} \delta q + [ \frac{\partial L}{\partial \dot{q}} \delta \dot{q} ] ) \)
\( \delta S = \int dt ( \frac{\partial L}{\partial q} \delta q + [ \frac{d}{dt}( \frac{\partial L}{\partial \dot{q}} \delta q ) - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q] ) \)
\( \delta S = \int dt ( [ \frac{\partial L}{\partial q} \delta q - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q ] + \frac{d}{dt}( \frac{\partial L}{\partial \dot{q}} \delta q ) ) \)
\( \delta S = \int dt ( [ \frac{\partial L}{\partial q} \delta q - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q ] ) + \int^{t_{f}}_{t_{i}} \frac{d}{dt}( \frac{\partial L}{\partial \dot{q}} \delta q ) dt \)
\( \delta S = \int dt ( [ \frac{\partial L}{\partial q} \delta q - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q ] ) + ( \frac{\partial L}{\partial \dot{q}} \delta q ) \rvert^{t_{f}}_{t_{i}} \)

The boundary term vanishes because the variation is taken with fixed endpoints, \( \delta q(t_{i}) = \delta q(t_{f}) = 0 \):

\( \delta S = \int dt ( [ \frac{\partial L}{\partial q} \delta q - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) \delta q ] ) \)

\( \delta S = \int dt ( \frac{\partial L}{\partial q} - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) ) \delta q \)

\( \delta S = 0 \) when the action is stationary, therefore the Euler-Lagrange equation is: \( \frac{\partial L}{\partial q} - ( \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} ) = 0 \) (Q.E.D.)
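
We can check this result on a sample Lagrangian with SymPy's Euler-Lagrange helper (a minimal sketch; the harmonic-oscillator Lagrangian below is chosen purely for illustration and is not part of the problem):

import sympy as sp
from sympy.calculus.euler import euler_equations

# Sample check of dL/dq - d/dt(dL/dqdot) = 0 for L = (1/2) m qdot^2 - (1/2) k q^2.
t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')(t)

L = sp.Rational(1, 2) * m * q.diff(t)**2 - sp.Rational(1, 2) * k * q**2
print(euler_equations(L, q, t))
# -> [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)], i.e. m q'' + k q = 0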


Problem 4.8 (b): Calculate the variation \( \delta S \) of the action under a variation \( \delta \phi(x) \) of the field. Use the condition \( \delta S = 0 \) to find the equation of motion for the field \( \phi(x) \) (the Euler-Lagrange equation.)

Given a dynamical field variable \( \phi(t, \overrightarrow{x}) \), briefly written as \( \phi(x) \), \( S = \int d^{D} x \mathcal{L} ( \phi(x) , \partial_{\mu} \phi (x) ) \) where \( d^{D} x = dt \, dx^{1} dx^{2} ... dx^{d} \), and \( \partial_{\mu} \phi = \partial \phi / \partial x^{\mu} \).

\( S = \int d^{D} x \mathcal{L} ( \phi(x) , \partial_{\mu} \phi (x) ) \)
\( \delta S = \delta \int d^{D} x \mathcal{L} ( \phi(x) , \partial_{\mu} \phi (x) ) \)

We apply the total differential expansion \( \delta \mathcal{L} \) for the Lagrangian density \( \mathcal{L} \):

\( \delta S = \int d^{D} x [ \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta (\partial_{\mu} \phi) ] \)

This mirrors Problem 4.8 (a). Instead of the product rule substitution with \( \frac{d}{dt}\) (used to get the variation in terms of \( \delta \phi(x) \) only), we use the spacetime partial derivative \( \partial_{\mu} \):

Product Rule: \( \partial_{\mu} ( \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta \phi ) = \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta (\partial_{\mu} \phi) + ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )\delta \phi \)

\( \delta S = \int d^{D} x [ \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta (\partial_{\mu} \phi) ] \)
\( \Rightarrow \delta S = \int d^{D} x [ \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi + \partial_{\mu} ( \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta \phi ) - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )\delta \phi ] \)

Rearrange the terms:

\( \delta S = \int d^{D} x [ ( \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )\delta \phi ) + \partial_{\mu} ( \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta \phi ) ] \)

This splits into two integrals. The second is the integral of a total derivative, so it reduces to a boundary term that vanishes because \( \delta \phi = 0 \) on the boundary of the spacetime region, for the same reason that \( \delta q \) vanished at \( t_{i} \) and \( t_{f} \) in part (a):

\( \delta S = \int d^{D} x ( \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )\delta \phi ) + \int d^{D} x \, \partial_{\mu} ( \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} \delta \phi ) \)
\( \delta S = \int d^{D} x ( \frac{\partial \mathcal{L}}{\partial \phi} \delta \phi - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )\delta \phi ) + 0 \)

\( \Rightarrow \delta S = \int d^{D} x( \frac{\partial \mathcal{L}}{\partial \phi} - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} )) \delta \phi \)

Since \( \delta S = 0 \), it follows that the Euler-Lagrange equation (of motion) for the dynamical field \( \phi(x) \) is: \( \frac{\partial \mathcal{L}}{\partial \phi} - ( \partial_{\mu} \frac{\partial \mathcal{L}}{\partial \partial_{\mu} \phi} ) = 0 \). (Q.E.D.)
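
As in part (a), the field result can be checked on a sample Lagrangian density (a SymPy sketch for a free massless scalar field in 1+1 dimensions, chosen purely for illustration):

import sympy as sp
from sympy.calculus.euler import euler_equations

# Sample check of dL/dphi - d_mu dL/d(d_mu phi) = 0 for
# L = (1/2)(d_t phi)^2 - (1/2)(d_x phi)^2, which should give the wave equation.
t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

Ldensity = sp.Rational(1, 2) * phi.diff(t)**2 - sp.Rational(1, 2) * phi.diff(x)**2
print(euler_equations(Ldensity, phi, [t, x]))
# -> [Eq(-Derivative(phi(t, x), (t, 2)) + Derivative(phi(t, x), (x, 2)), 0)]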



Last updated: 12/21/2020