Time evolution of a Gaussian wave packet: why convert to k-space?














I'm trying to do a homework problem where I time-evolve a Gaussian wave packet under the Hamiltonian $\frac{p^{2}}{2m}$.

So if I have a Gaussian wave packet given by
$$ \Psi(x) = Ae^{-\alpha x^{2}}\,, $$
and I want to time evolve it, my first instinct would be to just tack on the time-evolution factor $e^{-\frac{iEt}{\hbar}}$.

However, the solution tells me that this is incorrect, and that I first need to convert the wave function into k-space with a Fourier transform because the Hamiltonian is $p^2/2m$. Can anyone tell me why I need to convert it to k-space first? In a finite-well example with the same Hamiltonian we can just multiply each term of the wave function by its time-evolution factor. Why can't we do that to a Gaussian wave packet?










  • Ask yourself this: why do you think you can tack on the time dependence? What reason do you have to think that's correct? – DanielSank, 3 hours ago






  • And, more importantly, what value of the energy would you choose? Is your state an eigenstate of the Hamiltonian, with a well-defined energy? – Emilio Pisanty, 3 hours ago
















quantum-mechanics homework-and-exercises






asked 4 hours ago by M-B (new contributor)
edited 3 hours ago by DanielSank
1 Answer
Tacking on a term $e^{-iEt/\hbar}$ is the correct interpretation of the Schrödinger equation $$i\hbar\,\partial_t |\Psi\rangle = \hat H |\Psi\rangle$$ only for those eigenstates for which $$\hat H |\Psi\rangle = E|\Psi\rangle,$$ as otherwise you do not know what value of $E$ should be substituted. Hypothetically you can still do it, but you pay the very painful cost that $E$ is then a full-fledged operator, and you therefore need to exponentiate an operator, which is nontrivial.



If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, so you can get intuition for similar systems by just using some matrices and vectors, for example by looking at $$i\hbar \begin{bmatrix} f'(t) \\ g'(t) \end{bmatrix} = \epsilon \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} f(t) \\ g(t) \end{bmatrix}.$$ One can in fact express the solution as $$\begin{bmatrix} f(t) \\ g(t) \end{bmatrix} = e^{-i\hat H t/\hbar} \begin{bmatrix} f_0 \\ g_0 \end{bmatrix},$$ but one has to exponentiate the matrix. That is not hard here, because the matrix squares to the identity, which collapses the power series into $$\begin{bmatrix} f(t) \\ g(t) \end{bmatrix} = \cos(\epsilon t/\hbar) \begin{bmatrix} f_0 \\ g_0 \end{bmatrix} - i \sin(\epsilon t/\hbar) \begin{bmatrix} g_0 \\ f_0 \end{bmatrix}.$$ One can confirm that this indeed satisfies the Schrödinger equation given above. One can also immediately see that it does not directly have the form $e^{-i\epsilon t/\hbar} [f_0;\, g_0]$, but how could it? That would correspond to a different Hamiltonian, $\hat H = \epsilon I$.
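As a sanity check, the closed form above can be verified numerically. The sketch below (plain Python; the values of $\hbar$, $\epsilon$, $t$, $f_0$, $g_0$ are arbitrary illustrative choices, not part of the problem) uses a central finite difference to confirm that the $\cos/\sin$ solution satisfies $i\hbar\,\partial_t v = \hat H v$:

```python
# Finite-difference check that the cos/sin closed form solves
# i*hbar * v'(t) = H v(t) with H = eps * [[0, 1], [1, 0]].
# hbar, eps, t, f0, g0 are arbitrary illustrative values.
import cmath

hbar, eps = 1.0, 0.7
f0, g0 = 0.4 + 0.1j, -0.8 + 0.0j

def v(t):
    """The cos/sin closed form: [f(t), g(t)]."""
    c = cmath.cos(eps * t / hbar)
    s = cmath.sin(eps * t / hbar)
    return [c * f0 - 1j * s * g0, c * g0 - 1j * s * f0]

def H_apply(vec):
    """H v = eps * [v[1], v[0]] for the swap matrix."""
    return [eps * vec[1], eps * vec[0]]

t, h = 2.3, 1e-6
dvdt = [(a - b) / (2 * h) for a, b in zip(v(t + h), v(t - h))]
lhs = [1j * hbar * d for d in dvdt]   # i*hbar * dv/dt
rhs = H_apply(v(t))                   # H v
assert all(abs(a - b) < 1e-6 for a, b in zip(lhs, rhs))
```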



But, with some creativity, one can see that those two remaining vectors would be parallel if $f_0 = g_0$ or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $\hat H$ as $$\begin{bmatrix} f(t) \\ g(t) \end{bmatrix} = e^{-i\epsilon t/\hbar}\, \alpha \begin{bmatrix} 1 \\ 1 \end{bmatrix} + e^{i\epsilon t/\hbar}\, \beta \begin{bmatrix} -1 \\ 1 \end{bmatrix}.$$ So the trick to finding general solutions more easily is to find the eigenvectors first and then form a general linear combination of them, each multiplied by its own time dependence. Then, for a given initial state, we need to find the coefficients $\alpha$ and $\beta$: in this case it is simple enough to look at $t = 0$, where $\alpha - \beta = f_0$ and $\alpha + \beta = g_0$.
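One can also confirm that this eigenvector expansion reproduces the $\cos/\sin$ solution exactly. In this sketch (again with arbitrary illustrative values and $\hbar = 1$), $\alpha$ and $\beta$ are solved from the $t = 0$ conditions and the two forms are compared at a later time:

```python
# Check that the eigenvector expansion
#   e^{-i eps t} * alpha * [1, 1] + e^{+i eps t} * beta * [-1, 1]
# with alpha - beta = f0, alpha + beta = g0, equals the cos/sin solution.
# hbar, eps, t, f0, g0 are arbitrary illustrative values.
import cmath

hbar, eps, t = 1.0, 0.7, 2.3
f0, g0 = 0.4 + 0.1j, -0.8 + 0.0j

a = (f0 + g0) / 2                       # alpha, coefficient of [1, 1]
b = (g0 - f0) / 2                       # beta, coefficient of [-1, 1]
em = cmath.exp(-1j * eps * t / hbar)    # phase of the +eps eigenvector
ep = cmath.exp(1j * eps * t / hbar)     # phase of the -eps eigenvector
eig = [em * a - ep * b, em * a + ep * b]

c, s = cmath.cos(eps * t / hbar), cmath.sin(eps * t / hbar)
closed = [c * f0 - 1j * s * g0, c * g0 - 1j * s * f0]
assert all(abs(p - q) < 1e-12 for p, q in zip(eig, closed))
```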



Similarly, for your Hamiltonian $\hat H = \hat p^2/(2m) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$, you know that the eigenvectors are plane waves, $$\phi_k(x) = e^{ikx}.$$ You know that you can then add time dependence to them in the obvious way, $$\Phi_k(x, t) = e^{i(k x - \omega_k t)},$$ where of course $$\hbar \omega_k = \frac{\hbar^2 k^2}{2m}.$$ So the eigenvector story is beautifully simple for you to carry out: all you need is the ability to take derivatives of exponentials.



The part of the story that is more complicated is assembling an arbitrary $\psi(x)$ as a sum of these exponentials. However, while it is complicated, it is not impossible: you know from Fourier's theorem that $$\psi(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\; e^{i k x} \int_{-\infty}^{\infty} d\xi\; e^{-i k \xi}\, \psi(\xi).$$ Let your eyes glaze over the second integral and see it for just what it is: some coefficient $\psi[k]$ in $k$-space. What we have here, then, is a sum (a continuous sum, but still a sum!) of coefficients times eigenfunctions: $$\psi(x) = \int_{-\infty}^{\infty}\frac{dk\, \psi[k]}{2\pi}\, \phi_k(x).$$
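For the Gaussian in the question, that inner integral is itself a Gaussian. A minimal numerical sketch (with $A = 1$; the values of $\alpha$ and the sample point $k$ are illustrative choices) checks a plain Riemann sum against the standard closed form $\int e^{-ikx} e^{-\alpha x^2}\,dx = \sqrt{\pi/\alpha}\, e^{-k^2/(4\alpha)}$:

```python
# Numerically verify the k-space coefficient of a Gaussian against the
# closed form: integral of e^{-ikx} e^{-alpha x^2} over all x
#            = sqrt(pi/alpha) * e^{-k^2 / (4 alpha)}.
# alpha and k are illustrative choices; A = 1.
import numpy as np

alpha, k = 0.8, 1.7
x = np.linspace(-20.0, 20.0, 40001)     # tails beyond |x|=20 are negligible
dx = x[1] - x[0]
numeric = np.sum(np.exp(-1j * k * x - alpha * x**2)) * dx
exact = np.sqrt(np.pi / alpha) * np.exp(-k**2 / (4 * alpha))
assert abs(numeric - exact) < 1e-6
```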



And we know how to Schrödinger-ize such a sum: we just attach an $e^{-i\omega_k t}$ factor to each eigenfunction, turning $\phi_k$ into $\Phi_k$. So we get
$$\Psi(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\; e^{i (k x - \omega_k t)}\, \psi[k].$$
You do not have to do it this way; you can instead try to evaluate some
$$\exp\left[i \frac{\hbar t}{2m} \frac{\partial^2}{\partial x^2}\right] e^{-\alpha x^2}$$
monstrosity, expanding the operator in a power series and then looking for patterns among the $n^{\text{th}}$ derivatives of Gaussians that you can use to simplify. But the operator-expansion way looks really quite difficult, while the eigenvector way is easy.



The reason it is easy is that both $\hat H$ and $i\hbar\, \partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate $$0 = \left(i\hbar \frac{\partial}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\right) \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, \psi[k]\, e^{i (k x - \omega_k t)}.$$ Notice that it holds with essentially no restriction on the form of $\psi[k]$, so you only need to choose the coefficients $\psi[k]$ such that $\Psi(x, 0) = \psi(x)$.
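The whole recipe can be run end to end with a discrete Fourier transform standing in for the continuous one: transform to k-space, multiply each mode by $e^{-i\omega_k t}$, and transform back. The sketch below (grid size, $\alpha$, and $t$ are illustrative choices, with $\hbar = m = 1$) then compares the packet's spread against the standard free-Gaussian result $\langle x^2\rangle(t) = \frac{1}{4\alpha}\bigl(1 + (2\hbar\alpha t/m)^2\bigr)$:

```python
# FFT stand-in for the k-space recipe: to k-space, attach e^{-i omega_k t},
# back to x-space, then compare norm and spread with the analytic
# free-Gaussian spreading formula.  Grid, alpha, t are illustrative.
import numpy as np

N, L = 2048, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
alpha, hbar, m, t = 1.0, 1.0, 1.0, 1.5

psi0 = (2 * alpha / np.pi) ** 0.25 * np.exp(-alpha * x**2)  # normalized
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
omega = hbar * k**2 / (2 * m)                               # dispersion

psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * omega * t))

norm = np.sum(np.abs(psi_t) ** 2) * dx
var = np.sum(x**2 * np.abs(psi_t) ** 2) * dx
var_exact = (1 / (4 * alpha)) * (1 + (2 * hbar * alpha * t / m) ** 2)

assert abs(norm - 1.0) < 1e-8      # unitary evolution preserves the norm
assert abs(var - var_exact) < 1e-6 # spread matches the analytic formula
```

Note that evolving in k-space costs two transforms and a pointwise multiplication, whereas there is no comparably simple operation acting directly on the x-space samples.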






share|cite|improve this answer











$endgroup$














    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "151"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });






    M-B is a new contributor. Be nice, and check out our Code of Conduct.










    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphysics.stackexchange.com%2fquestions%2f473865%2ftime-evolution-of-a-gaussian-wave-packet-why-convert-to-k-space%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    1 Answer
    1






    active

    oldest

    votes








    1 Answer
    1






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    3












    $begingroup$

    Tacking on a term $e^{-iEt/hbar}$ is the correct interpretation of the Schrödinger equation $$ihbar |partial_t Psirangle = hat H |Psirangle$$only for those eigenstates for which $$hat H |Psirangle = E|Psirangle,$$as otherwise you do not know what value of $E$ should be used to substitute. Hypothetically you can still do it, but you pay a very painful cost that the $E$ is in fact a full-fledged operator and you therefore need to exponentiate an operator, which is nontrivial.



    If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, and so you could get an intuition for similar systems by just using some matrices and vectors, for example looking at $$ihbar begin{bmatrix} f'(t) \ g'(t) end{bmatrix} = epsilon begin{bmatrix} 0&1\1&0end{bmatrix} begin{bmatrix} f(t) \ g(t)end{bmatrix}.$$One can in fact express this as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-ihat H t/hbar} begin{bmatrix} f_0\ g_0end{bmatrix},$$ but one has to exponentiate this matrix. That is not hard because it squares to the identity matrix, causing a simple expansion, $$begin{bmatrix}f(t)\g(t)end{bmatrix} = cos(epsilon t/hbar) begin{bmatrix} f_0\ g_0end{bmatrix} - i sin(epsilon t/hbar) begin{bmatrix} g_0\ f_0end{bmatrix}. $$ One can then confirm that indeed this satisfies the Schrödinger equation given above. One can also immediately see that this does not directly have the form $e^{-iepsilon t/hbar} [f_0; g_0],$ but how could it? That would be a different Hamiltonian $hat H = epsilon I.$



    But, with some creativity, one can see that if $f_0 = g_0$ those two remaining vectors would be parallel, or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $hat H$ as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-iepsilon t/hbar} alpha begin{bmatrix} 1\ 1end{bmatrix} + e^{iepsilon t/hbar} beta begin{bmatrix} -1\ 1end{bmatrix}. $$ So the trick to more easily finding general solutions is to find these eigenvectors first and then form a general linear combination of those eigenvectors once they have been multiplied individually by their time dependence. Then for a given initial state, we need to find the $alpha$ and $beta$ terms: in this case it is simple enough by looking at $t=0$ where $alpha - beta = f_0$ while $alpha + beta = g_0.$



    Similarly for your Hamiltonian $hat H = hat p^2/(2m) = -frac{hbar^2}{2m}frac{partial^2~}{partial x^2},$ you know that the eigenvectors are plane waves, $$phi_k(x) = e^{ikx}.$$You know that you can then add time dependence to them in the obvious way, $$Phi_k(x, t) = e^{i(k x - omega_k t)},$$ where of course $$hbar omega_k = frac{hbar^2k^2}{2m}.$$ So the eigenvector story is just beautifully simple for you to do, all you need is the ability to take derivatives of exponentials.



    The part of the story that is more complicated is assembling an arbitrary $psi(x)$ as a sum of these exponentials. However while it is complicated it is not impossible: you know from Fourier's theorem that $$psi(x) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i k x} int_{-infty}^infty dxi ~e^{-ikxi} ~psi(xi).$$ Let your eyes glaze over the second integral and see it as just what it is, some $psi[k]$ coefficent in $k$-space. What we have here then is a sum—a continuous sum, but still a sum!—of coefficients times eigenfunctions:$$psi(x) = int_{-infty}^{infty}frac{dk~psi[k]}{2pi}~phi_k(x).$$



    And we know how to Schrödinger-ize such a sum, we just add $e^{-iomega_k t}$ terms to each of the eigenfunctions, turning $phi_k$ into $Phi_k.$ So we get,
    $$Psi(x, t) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i (k x - omega_k t)} ~psi[k].$$
    You do not have to do it this way, you can try to do some sort of $$expleft[-i frac{hbar t}{2m} frac{partial^2~}{partial x^2}right] e^{-a x^2}$$
    monstrosity, expanding the operator in a power series and then seeing whether there are patterns you can use among the $n^text{th}$ derivatives of Gaussians to simplify. But the operator expansion way looks really pretty difficult, while the eigenvector way is really easy.



    The reason it is really easy is that both $hat H$ and $ihbar partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate this value $$0 = left(ihbar frac{partial~}{partial t} + frac{hbar^2}{2m}frac{partial^2~}{partial x^2}right) frac{1}{2pi} int_{-infty}^infty dk~psi[k] ~e^{i (k x - omega_k t)}.$$ Notice that it holds with pretty much no restriction on the actual form of $psi[k]$ so that you only need to choose coefficients $psi[k]$ such that $Psi(x, 0) = psi(x).$






    share|cite|improve this answer











    $endgroup$


















      3












      $begingroup$

      Tacking on a term $e^{-iEt/hbar}$ is the correct interpretation of the Schrödinger equation $$ihbar |partial_t Psirangle = hat H |Psirangle$$only for those eigenstates for which $$hat H |Psirangle = E|Psirangle,$$as otherwise you do not know what value of $E$ should be used to substitute. Hypothetically you can still do it, but you pay a very painful cost that the $E$ is in fact a full-fledged operator and you therefore need to exponentiate an operator, which is nontrivial.



      If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, and so you could get an intuition for similar systems by just using some matrices and vectors, for example looking at $$ihbar begin{bmatrix} f'(t) \ g'(t) end{bmatrix} = epsilon begin{bmatrix} 0&1\1&0end{bmatrix} begin{bmatrix} f(t) \ g(t)end{bmatrix}.$$One can in fact express this as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-ihat H t/hbar} begin{bmatrix} f_0\ g_0end{bmatrix},$$ but one has to exponentiate this matrix. That is not hard because it squares to the identity matrix, causing a simple expansion, $$begin{bmatrix}f(t)\g(t)end{bmatrix} = cos(epsilon t/hbar) begin{bmatrix} f_0\ g_0end{bmatrix} - i sin(epsilon t/hbar) begin{bmatrix} g_0\ f_0end{bmatrix}. $$ One can then confirm that indeed this satisfies the Schrödinger equation given above. One can also immediately see that this does not directly have the form $e^{-iepsilon t/hbar} [f_0; g_0],$ but how could it? That would be a different Hamiltonian $hat H = epsilon I.$



      But, with some creativity, one can see that if $f_0 = g_0$ those two remaining vectors would be parallel, or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $hat H$ as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-iepsilon t/hbar} alpha begin{bmatrix} 1\ 1end{bmatrix} + e^{iepsilon t/hbar} beta begin{bmatrix} -1\ 1end{bmatrix}. $$ So the trick to more easily finding general solutions is to find these eigenvectors first and then form a general linear combination of those eigenvectors once they have been multiplied individually by their time dependence. Then for a given initial state, we need to find the $alpha$ and $beta$ terms: in this case it is simple enough by looking at $t=0$ where $alpha - beta = f_0$ while $alpha + beta = g_0.$



      Similarly for your Hamiltonian $hat H = hat p^2/(2m) = -frac{hbar^2}{2m}frac{partial^2~}{partial x^2},$ you know that the eigenvectors are plane waves, $$phi_k(x) = e^{ikx}.$$You know that you can then add time dependence to them in the obvious way, $$Phi_k(x, t) = e^{i(k x - omega_k t)},$$ where of course $$hbar omega_k = frac{hbar^2k^2}{2m}.$$ So the eigenvector story is just beautifully simple for you to do, all you need is the ability to take derivatives of exponentials.



      The part of the story that is more complicated is assembling an arbitrary $psi(x)$ as a sum of these exponentials. However while it is complicated it is not impossible: you know from Fourier's theorem that $$psi(x) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i k x} int_{-infty}^infty dxi ~e^{-ikxi} ~psi(xi).$$ Let your eyes glaze over the second integral and see it as just what it is, some $psi[k]$ coefficent in $k$-space. What we have here then is a sum—a continuous sum, but still a sum!—of coefficients times eigenfunctions:$$psi(x) = int_{-infty}^{infty}frac{dk~psi[k]}{2pi}~phi_k(x).$$



      And we know how to Schrödinger-ize such a sum, we just add $e^{-iomega_k t}$ terms to each of the eigenfunctions, turning $phi_k$ into $Phi_k.$ So we get,
      $$Psi(x, t) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i (k x - omega_k t)} ~psi[k].$$
      You do not have to do it this way, you can try to do some sort of $$expleft[-i frac{hbar t}{2m} frac{partial^2~}{partial x^2}right] e^{-a x^2}$$
      monstrosity, expanding the operator in a power series and then seeing whether there are patterns you can use among the $n^text{th}$ derivatives of Gaussians to simplify. But the operator expansion way looks really pretty difficult, while the eigenvector way is really easy.



      The reason it is really easy is that both $hat H$ and $ihbar partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate this value $$0 = left(ihbar frac{partial~}{partial t} + frac{hbar^2}{2m}frac{partial^2~}{partial x^2}right) frac{1}{2pi} int_{-infty}^infty dk~psi[k] ~e^{i (k x - omega_k t)}.$$ Notice that it holds with pretty much no restriction on the actual form of $psi[k]$ so that you only need to choose coefficients $psi[k]$ such that $Psi(x, 0) = psi(x).$






      share|cite|improve this answer











      $endgroup$
















        3












        3








        3





        $begingroup$

        Tacking on a term $e^{-iEt/hbar}$ is the correct interpretation of the Schrödinger equation $$ihbar |partial_t Psirangle = hat H |Psirangle$$only for those eigenstates for which $$hat H |Psirangle = E|Psirangle,$$as otherwise you do not know what value of $E$ should be used to substitute. Hypothetically you can still do it, but you pay a very painful cost that the $E$ is in fact a full-fledged operator and you therefore need to exponentiate an operator, which is nontrivial.



        If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, and so you could get an intuition for similar systems by just using some matrices and vectors, for example looking at $$ihbar begin{bmatrix} f'(t) \ g'(t) end{bmatrix} = epsilon begin{bmatrix} 0&1\1&0end{bmatrix} begin{bmatrix} f(t) \ g(t)end{bmatrix}.$$One can in fact express this as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-ihat H t/hbar} begin{bmatrix} f_0\ g_0end{bmatrix},$$ but one has to exponentiate this matrix. That is not hard because it squares to the identity matrix, causing a simple expansion, $$begin{bmatrix}f(t)\g(t)end{bmatrix} = cos(epsilon t/hbar) begin{bmatrix} f_0\ g_0end{bmatrix} - i sin(epsilon t/hbar) begin{bmatrix} g_0\ f_0end{bmatrix}. $$ One can then confirm that indeed this satisfies the Schrödinger equation given above. One can also immediately see that this does not directly have the form $e^{-iepsilon t/hbar} [f_0; g_0],$ but how could it? That would be a different Hamiltonian $hat H = epsilon I.$



        But, with some creativity, one can see that if $f_0 = g_0$ those two remaining vectors would be parallel, or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $hat H$ as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-iepsilon t/hbar} alpha begin{bmatrix} 1\ 1end{bmatrix} + e^{iepsilon t/hbar} beta begin{bmatrix} -1\ 1end{bmatrix}. $$ So the trick to more easily finding general solutions is to find these eigenvectors first and then form a general linear combination of those eigenvectors once they have been multiplied individually by their time dependence. Then for a given initial state, we need to find the $alpha$ and $beta$ terms: in this case it is simple enough by looking at $t=0$ where $alpha - beta = f_0$ while $alpha + beta = g_0.$



        Similarly for your Hamiltonian $hat H = hat p^2/(2m) = -frac{hbar^2}{2m}frac{partial^2~}{partial x^2},$ you know that the eigenvectors are plane waves, $$phi_k(x) = e^{ikx}.$$You know that you can then add time dependence to them in the obvious way, $$Phi_k(x, t) = e^{i(k x - omega_k t)},$$ where of course $$hbar omega_k = frac{hbar^2k^2}{2m}.$$ So the eigenvector story is just beautifully simple for you to do, all you need is the ability to take derivatives of exponentials.



        The part of the story that is more complicated is assembling an arbitrary $psi(x)$ as a sum of these exponentials. However while it is complicated it is not impossible: you know from Fourier's theorem that $$psi(x) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i k x} int_{-infty}^infty dxi ~e^{-ikxi} ~psi(xi).$$ Let your eyes glaze over the second integral and see it as just what it is, some $psi[k]$ coefficent in $k$-space. What we have here then is a sum—a continuous sum, but still a sum!—of coefficients times eigenfunctions:$$psi(x) = int_{-infty}^{infty}frac{dk~psi[k]}{2pi}~phi_k(x).$$



        And we know how to Schrödinger-ize such a sum, we just add $e^{-iomega_k t}$ terms to each of the eigenfunctions, turning $phi_k$ into $Phi_k.$ So we get,
        $$Psi(x, t) = frac{1}{2pi}int_{-infty}^{infty} dk ~e^{i (k x - omega_k t)} ~psi[k].$$
        You do not have to do it this way, you can try to do some sort of $$expleft[-i frac{hbar t}{2m} frac{partial^2~}{partial x^2}right] e^{-a x^2}$$
        monstrosity, expanding the operator in a power series and then seeing whether there are patterns you can use among the $n^text{th}$ derivatives of Gaussians to simplify. But the operator expansion way looks really pretty difficult, while the eigenvector way is really easy.



        The reason it is really easy is that both $hat H$ and $ihbar partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate this value $$0 = left(ihbar frac{partial~}{partial t} + frac{hbar^2}{2m}frac{partial^2~}{partial x^2}right) frac{1}{2pi} int_{-infty}^infty dk~psi[k] ~e^{i (k x - omega_k t)}.$$ Notice that it holds with pretty much no restriction on the actual form of $psi[k]$ so that you only need to choose coefficients $psi[k]$ such that $Psi(x, 0) = psi(x).$






        share|cite|improve this answer











        $endgroup$



        Tacking on a term $e^{-iEt/hbar}$ is the correct interpretation of the Schrödinger equation $$ihbar |partial_t Psirangle = hat H |Psirangle$$only for those eigenstates for which $$hat H |Psirangle = E|Psirangle,$$as otherwise you do not know what value of $E$ should be used to substitute. Hypothetically you can still do it, but you pay a very painful cost that the $E$ is in fact a full-fledged operator and you therefore need to exponentiate an operator, which is nontrivial.



        If this is all sounding a bit complicated, please remember that QM is just linear algebra in funny hats, and so you could get an intuition for similar systems by just using some matrices and vectors, for example looking at $$ihbar begin{bmatrix} f'(t) \ g'(t) end{bmatrix} = epsilon begin{bmatrix} 0&1\1&0end{bmatrix} begin{bmatrix} f(t) \ g(t)end{bmatrix}.$$One can in fact express this as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-ihat H t/hbar} begin{bmatrix} f_0\ g_0end{bmatrix},$$ but one has to exponentiate this matrix. That is not hard because it squares to the identity matrix, causing a simple expansion, $$begin{bmatrix}f(t)\g(t)end{bmatrix} = cos(epsilon t/hbar) begin{bmatrix} f_0\ g_0end{bmatrix} - i sin(epsilon t/hbar) begin{bmatrix} g_0\ f_0end{bmatrix}. $$ One can then confirm that indeed this satisfies the Schrödinger equation given above. One can also immediately see that this does not directly have the form $e^{-iepsilon t/hbar} [f_0; g_0],$ but how could it? That would be a different Hamiltonian $hat H = epsilon I.$



        But, with some creativity, one can see that if $f_0 = g_0$ those two remaining vectors would be parallel, or if $f_0 = -g_0$, and one can indeed rewrite this solution in terms of those eigenvectors of the original $hat H$ as $$begin{bmatrix}f(t)\g(t)end{bmatrix} = e^{-iepsilon t/hbar} alpha begin{bmatrix} 1\ 1end{bmatrix} + e^{iepsilon t/hbar} beta begin{bmatrix} -1\ 1end{bmatrix}. $$ So the trick to more easily finding general solutions is to find these eigenvectors first and then form a general linear combination of those eigenvectors once they have been multiplied individually by their time dependence. Then for a given initial state, we need to find the $alpha$ and $beta$ terms: in this case it is simple enough by looking at $t=0$ where $alpha - beta = f_0$ while $alpha + beta = g_0.$



        Similarly for your Hamiltonian $hat H = hat p^2/(2m) = -frac{hbar^2}{2m}frac{partial^2~}{partial x^2},$ you know that the eigenvectors are plane waves, $$phi_k(x) = e^{ikx}.$$You know that you can then add time dependence to them in the obvious way, $$Phi_k(x, t) = e^{i(k x - omega_k t)},$$ where of course $$hbar omega_k = frac{hbar^2k^2}{2m}.$$ So the eigenvector story is just beautifully simple for you to do, all you need is the ability to take derivatives of exponentials.



        The part of the story that is more complicated is assembling an arbitrary $\psi(x)$ as a sum of these exponentials. However, while it is complicated, it is not impossible: you know from Fourier's theorem that $$\psi(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk ~e^{i k x} \int_{-\infty}^\infty d\xi ~e^{-ik\xi} ~\psi(\xi).$$ Let your eyes glaze over the second integral and see it as just what it is, some coefficient $\psi[k]$ in $k$-space. What we have here, then, is a sum (a continuous sum, but still a sum!) of coefficients times eigenfunctions: $$\psi(x) = \int_{-\infty}^{\infty}\frac{dk~\psi[k]}{2\pi}~\phi_k(x).$$
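        A discretized sketch of this round trip makes the "continuous sum" concrete (the width $a = 1$ and the grid are illustrative choices): the inner sum produces the coefficients $\psi[k]$, and the outer sum of plane waves rebuilds $\psi(x)$.

```python
import numpy as np

# Sample psi(x) = exp(-a x^2) on a uniform periodic grid.
a = 1.0
N = 512
x = np.linspace(-8, 8, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-a * x**2)

# Inner integral: psi[k] ~= sum_xi e^{-i k xi} psi(xi) dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)     # matching discrete k grid
psi_k = np.exp(-1j * np.outer(k, x)) @ psi * dx

# Outer integral: psi(x) ~= (1/2pi) sum_k e^{i k x} psi[k] dk
dk = 2 * np.pi / (N * dx)
psi_back = (np.exp(1j * np.outer(x, k)) @ psi_k) * dk / (2 * np.pi)

print(np.allclose(psi_back.real, psi))  # True: the plane waves rebuild psi
```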



        And we know how to Schrödinger-ize such a sum: we just attach an $e^{-i\omega_k t}$ factor to each of the eigenfunctions, turning $\phi_k$ into $\Phi_k.$ So we get
        $$\Psi(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk ~e^{i (k x - \omega_k t)} ~\psi[k].$$
        You do not have to do it this way; you can try to work out some sort of $$\exp\left[\frac{i\hbar t}{2m} \frac{\partial^2~}{\partial x^2}\right] e^{-a x^2}$$
        monstrosity, expanding the operator in a power series and then seeing whether there are patterns among the $n^\text{th}$ derivatives of Gaussians that you can use to simplify. But the operator-expansion way looks really pretty difficult, while the eigenvector way is really easy.
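        The easy eigenvector route can also be carried out numerically, letting an FFT do both transforms. A sketch (units $\hbar = m = 1$; $a = 1$, $t = 0.5$ are illustrative choices, and the comparison value $\langle x^2\rangle = \big(1 + (2at)^2\big)/(4a)$ is the standard free-Gaussian spreading result, not derived above):

```python
import numpy as np

# Evolve Psi(x,0) = exp(-a x^2) by attaching e^{-i w_k t} to each k-component.
a, t = 1.0, 0.5
N = 1024
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-a * x**2)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_k = np.fft.fft(psi0)                                   # coefficients psi[k]
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * psi_k)    # w_k = k^2/2

# Width of |Psi|^2 versus the analytic spreading law <x^2> = (1+(2at)^2)/(4a).
prob = np.abs(psi_t)**2
prob /= np.sum(prob) * dx
var = np.sum(x**2 * prob) * dx
var_exact = (1 + (2 * a * t)**2) / (4 * a)
print(np.isclose(var, var_exact))  # True: the packet spreads as predicted
```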



        The reason it is really easy is that both $\hat H$ and $i\hbar\,\partial_t$ are linear operators: they distribute over sums. So if you are still feeling queasy about this procedure, convince yourself by just writing it out: calculate the value $$0 = \left(i\hbar \frac{\partial~}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2~}{\partial x^2}\right) \frac{1}{2\pi} \int_{-\infty}^\infty dk~\psi[k] ~e^{i (k x - \omega_k t)}.$$ Notice that it holds with pretty much no restriction on the actual form of $\psi[k]$, so you only need to choose the coefficients $\psi[k]$ such that $\Psi(x, 0) = \psi(x).$
































        edited 18 mins ago

























        answered 2 hours ago









        CR Drost



























            M-B is a new contributor. Be nice, and check out our Code of Conduct.





































































            Thanks for contributing an answer to Physics Stack Exchange!

