When I run the llm_inference sample on localhost, it can load a model file such as "gemma-2b-it-gpu-int4.bin" from the project folder. But when I run llm_inference on Firebase Hosting, it cannot access the model file on my device and shows "Not allowed to load local resource: file:///D:/model/gemma-2b-it-gpu-int4.bin".
When I looked this up, I learned that in standard HTML and JavaScript it is not possible to directly read a file at a specific path on the local machine, because of browser security restrictions designed to protect user privacy and prevent malicious websites from automatically accessing the local file system.
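For context, my setup follows the official llm_inference web sample and loads the model through baseOptions, roughly like this (the CDN path and option names come from the sample; the hard-coded file:/// path is what fails once the page is hosted):

```js
import { FilesetResolver, LlmInference } from '@mediapipe/tasks-genai';

// Resolve the WASM files as in the official llm_inference web sample.
const genai = await FilesetResolver.forGenAiTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm'
);

// Works on localhost: the model sits next to the page and is fetched over HTTP.
const llmInference = await LlmInference.createFromOptions(genai, {
  baseOptions: { modelAssetPath: 'gemma-2b-it-gpu-int4.bin' },
});

// Fails on Firebase Hosting: the browser refuses to load file:/// URLs.
// baseOptions: { modelAssetPath: 'file:///D:/model/gemma-2b-it-gpu-int4.bin' }
```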
But when I try your sample in MediaPipe Studio (https://mediapipe-studio.webapps.google.com/studio/demo/llm_inference), I can click 'Choose a model file', select a model file from my device, and it runs fine. How does it do that? Thank you!
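My guess is that the demo uses an `<input type="file">` element and feeds the chosen File to the task, something like the sketch below, but I'm not sure. Here the `modelFile` element id is just made up for the example, and I am assuming the task can fetch a blob: URL the same way it fetches an http(s) URL:

```js
import { FilesetResolver, LlmInference } from '@mediapipe/tasks-genai';

// Hypothetical <input type="file" id="modelFile"> in the page's HTML.
const input = document.getElementById('modelFile');

input.addEventListener('change', async () => {
  const file = input.files[0];
  if (!file) return;

  // The browser only hands over a File object, never the real path on disk,
  // so no file:/// URL is involved. A blob: URL makes the picked file
  // fetchable like any hosted resource.
  const modelUrl = URL.createObjectURL(file);

  const genai = await FilesetResolver.forGenAiTasks(
    'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm'
  );
  const llmInference = await LlmInference.createFromOptions(genai, {
    baseOptions: { modelAssetPath: modelUrl },
  });

  console.log(await llmInference.generateResponse('Hello!'));
});
```

Is this roughly how MediaPipe Studio does it, or does it use a different mechanism?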