Sum of two exponential series with equal means and variances





Assuming $A$ and $B$ are two non-negative real-valued random variables such that





1. $\mathrm{E}(A)=\mathrm{E}(B)$ (equal means)


2. $\mathrm{Var}(A)=\mathrm{Var}(B)<\epsilon$ (equal small variances)


is there a way to prove that
$\frac{1}{N}\sum_{j=1}^N e^{-a_{j}}$ and $\frac{1}{N}\sum_{j=1}^N e^{-b_{j}}$ are arbitrarily close to each other, where $a_j$ and $b_j$ are realizations taken from $A$ and $B$, respectively? ($N$ can be assumed to be large as well.)










variance mean exponential

asked 4 hours ago

nOp
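For concreteness, here is a minimal numerical sketch of the setup (the gamma and lognormal families below are only illustrative assumptions, matched to the same mean and the same small variance):

import numpy as np

rng = np.random.default_rng(0)

N = 100_000
mean, var = 1.0, 0.01  # equal means; equal small variances (epsilon = 0.01)

# A ~ Gamma(k, theta): mean = k * theta, variance = k * theta^2
theta = var / mean
a = rng.gamma(mean / theta, theta, size=N)

# B ~ lognormal, matched to the same mean and variance
sigma2 = np.log(1.0 + var / mean**2)
b = rng.lognormal(np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2), size=N)

avg_a = np.exp(-a).mean()
avg_b = np.exp(-b).mean()
print(avg_a, avg_b, abs(avg_a - avg_b))  # close when the variance is small

The residual gap tracks $|\mathrm{E}(e^{-A})-\mathrm{E}(e^{-B})|$ rather than vanishing with $N$, which is what the answers below make precise.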



















          2 Answers
          Although it is not explicitly specified, I will assume that you intend for all the realisations of these random variables to be independent (i.e., I will assume joint independence of all the random variables in both series). The difference between the two series is the random variable defined by the function:



$$D(N) = \frac{1}{N} \sum_{i=1}^N (e^{-A_i} - e^{-B_i}).$$



Since $e^{-a} \leqslant 1$ for all $a \geqslant 0$, it follows that $\mathbb{V}(e^{-A}) \leqslant \mathbb{E}(e^{-2A}) \leqslant 1$ for any non-negative random variable $A$. Thus, we have:



$$\begin{equation} \begin{aligned}
\mathbb{V}(D(N))
&= \frac{1}{N^2} \sum_{i=1}^N \Big( \mathbb{V}(e^{-A_i}) + \mathbb{V}(e^{-B_i}) \Big) \\[6pt]
&\leqslant \frac{1}{N^2} \sum_{i=1}^N \Big( 1 + 1 \Big) \\[6pt]
&= \frac{1}{N^2} \cdot 2N \\[6pt]
&= \frac{2}{N}. \\[6pt]
\end{aligned} \end{equation}$$



We therefore have $\lim_{N \rightarrow \infty} \mathbb{V}(D(N)) = 0$, so the variance of the difference converges to zero. We also have the constant mean difference:



$$\begin{equation} \begin{aligned}
\mathbb{E}(D(N))
&= \frac{1}{N} \sum_{i=1}^N \Big( \mathbb{E}(e^{-A_i}) - \mathbb{E}(e^{-B_i}) \Big) \\[6pt]
&= \mathbb{E}(e^{-A}) - \mathbb{E}(e^{-B}). \\[6pt]
\end{aligned} \end{equation}$$



          Combining these results we see that $D(N)$ converges in mean square to $mathbb{E}(e^{-A}) - mathbb{E}(e^{-B})$, which is a constant value. Since the variances of both random variables are small, this limiting value should be close to (but not necessarily equal to) zero. Thus, for large values of $N$ you would indeed expect the difference to converge towards a value near zero.






answered 2 hours ago, edited 21 mins ago

Ben
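As a sanity check on these results, here is a small Monte Carlo sketch (the gamma/lognormal pair is an assumed example with matched mean and small variance): the sample variance of $D(N)$ stays below the $2/N$ bound, while its mean hovers around the constant $\mathbb{E}(e^{-A}) - \mathbb{E}(e^{-B})$.

import numpy as np

rng = np.random.default_rng(1)
mean, var = 1.0, 0.01  # matched mean and small variance

def sample_D(N, reps=2000):
    # Draw `reps` independent copies of D(N).
    theta = var / mean
    a = rng.gamma(mean / theta, theta, size=(reps, N))  # the A_i
    sigma2 = np.log(1.0 + var / mean**2)
    b = rng.lognormal(np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2),
                      size=(reps, N))                   # the B_i
    return np.mean(np.exp(-a) - np.exp(-b), axis=1)

for N in (10, 100, 1000):
    d = sample_D(N)
    print(N, d.var(), 2.0 / N, d.mean())  # Var(D(N)) <= 2/N; mean roughly constant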






















Just apply the law of large numbers: since $a$ is positive and has finite variance itself, $e^{-a_j}$ has finite variance. In fact, the variance of $e^{-a_j}$ is smaller than the variance of $a$. This is elaborated in this question: https://mathoverflow.net/questions/330348/proof-of-variance-bounds-for-transformed-random-variables/330357#330357






answered 4 hours ago, edited 1 hour ago

Peter Mølgaard Pallesen
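A quick empirical illustration of that variance bound (the gamma distribution here is an arbitrary non-negative example): because $e^{-x}$ is 1-Lipschitz on $[0, \infty)$, the variance can only shrink under the transformation.

import numpy as np

rng = np.random.default_rng(2)

# e^{-x} is 1-Lipschitz on [0, inf), so Var(exp(-A)) <= Var(A)
# for any non-negative A with finite variance.
a = rng.gamma(2.0, 0.5, size=1_000_000)  # arbitrary non-negative example
print(np.exp(-a).var(), a.var())         # the first value is the smaller one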













