Replies: 13 comments 9 replies
-
Setting the device used in pipelines manually is not currently supported.
-
If we can define the device, we could use the new PyTorch optimized for M1 on Macs, as explained here: https://towardsdatascience.com/installing-pytorch-on-apple-m1-chip-with-gpu-acceleration-3351dc44d67c
-
Yes, it is pretty trivial. I have it working. Will make a PR later this week.
~Bach
-
Awesome! Thank you!
-
@mbui0327 Any news about all this?
-
This PR adds a new
-
Thank you! It works great, and it is much, much faster: 20 seconds vs. 3 minutes for a 40-second dialog. What an improvement! I just need to analyze the results, because for some reason they seem to be different... I'll be back with more info.
-
Yeah, for some reason I get garbled results using the mps device. Is the code below the correct way to use it?
With the CPU, which takes over 3 minutes to process, I get the following correct results:
With the GPU (using pipeline.to("mps")), which takes just 20 seconds, I get the following results:
Any ideas?
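For reference, the usage pattern the commenter describes might look like the sketch below. This is an illustrative assumption, not verified code: the model name, the `Pipeline.from_pretrained` call, and the `.to(device)` method follow pyannote.audio 2.x as discussed in this thread, and the imports are deferred so the sketch can be read without torch or pyannote installed.

```python
def diarize_on_mps(audio_path: str):
    """Hypothetical sketch: run speaker diarization on Apple's MPS GPU.

    Assumes pyannote.audio 2.x, where Pipeline gained a .to(device)
    method (the change discussed in this thread). All names here are
    assumptions to be checked against the pyannote.audio docs.
    """
    # Imports are deferred so this sketch parses without the libraries installed.
    import torch
    from pyannote.audio import Pipeline

    pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
    pipeline.to(torch.device("mps"))  # move the underlying models to the Apple GPU
    return pipeline(audio_path)
```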
-
Well, after some research, I think the problem was that I didn't have PyTorch Nightly installed on my Mac. However, after creating a new conda environment from scratch with PyTorch Nightly by running:
it seems to downgrade torchaudio and other packages, so when I execute the Python script above I get a bunch of errors:
Is pyannote compatible with PyTorch Nightly?
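The install command elided above was probably along the lines of PyTorch's own nightly conda instructions from that period. The exact channel, package set, and Python version below are assumptions; check pytorch.org for the current command before running anything.

```shell
# Assumed environment setup, modeled on PyTorch's documented
# conda nightly instructions (verify against pytorch.org):
conda create -n pyannote-mps python=3.9
conda activate pyannote-mps
conda install pytorch torchvision torchaudio -c pytorch-nightly
```

Pulling torchaudio from the nightly channel alongside a pyannote release that pins a stable torchaudio is exactly the kind of mismatch that produces the downgrade conflicts described above.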
-
Has anyone managed to run pyannote on mps? 🤔
-
Also wondering about this. I'm running into issues with diarization results when using torch.device('mps'). @fablau @adeviney Did either of you find a solution to your issues?
-
Isn't it this issue?
-
I have found this post that might help.
-
I could not find any documentation on running diarization on GPU or MPS. The earlier version (v1) could be loaded as a torch pipeline and assigned a device. Is there an equivalent in v2?
Thanks
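Given the wrong-results reports on MPS earlier in the thread, a common pattern is to isolate the device choice in a small helper that can fall back to CPU. This is a plain-Python sketch of that logic only: in real code the two availability flags would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()`; they are parameters here so the logic can be shown without torch installed.

```python
def pick_device(mps_ok: bool, cuda_ok: bool, trust_mps: bool = True) -> str:
    """Choose a torch device string, optionally distrusting MPS.

    trust_mps=False forces the CPU/CUDA fallback, which is useful while
    MPS results are suspect (as reported in this thread).
    """
    if mps_ok and trust_mps:
        return "mps"
    if cuda_ok:
        return "cuda"
    return "cpu"
```

The returned string can then be passed as `pipeline.to(torch.device(pick_device(...)))`, keeping the MPS on/off decision in one place.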