What are the actual Tesla M60 models used by AWS?



























Wikipedia says that the Tesla M60 has 2×8 GB of RAM (whatever that means) and a TDP of 225–300 W.



I am using a g3s.xlarge EC2 instance, which is supposed to have a Tesla M60, but the nvidia-smi command reports 8 GB of RAM and a maximum power limit of 150 W:



> sudo nvidia-smi
Tue Mar 12 00:13:10 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   43C    P0    37W / 150W |   7373MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6779      C   python                                      7362MiB |
+-----------------------------------------------------------------------------+


What does this mean? Do I get 'half' of the card? Is the Tesla M60 actually two cards stuck together, as the RAM specification (2×8 GB) suggests?
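For reference, the same information can be pulled out of nvidia-smi directly with its query options (a minimal sketch; the fields below are standard nvidia-smi query fields, nothing AWS-specific):

# Ask the driver for the GPU name, total memory, and enforced power limit.
# On this g3s.xlarge it reports a single device with ~8 GB and a 150 W cap.
> nvidia-smi --query-gpu=name,memory.total,power.limit --format=csv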










Yes, the Tesla M60 is two GPUs stuck together on a single board, and each g3s.xlarge or g3.4xlarge instance gets one of the two GPUs.
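If you want to verify this from inside the instance, listing the visible devices is a quick check (again just a sketch using a standard nvidia-smi flag; on the larger g3.8xlarge and g3.16xlarge sizes you would expect to see two and four entries respectively):

# List every GPU device the guest can see.
# A g3s.xlarge should show exactly one line, e.g. "GPU 0: Tesla M60 (UUID: GPU-...)".
> nvidia-smi -L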





