Replies: 2 comments
-
If you are interested in following the TextGeneration part more closely, there is this app over in the Gen-AI repo. It's real basic at the moment, but more features will be added as the OnnxRuntime-GenAI project progresses.
-
Thanks for that :) Unfortunately I didn't manage to get the Phi-3-mini from Microsoft (https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) to work. I got a weird error that I don't really understand yet when loading the model (tried both the CPU and DirectML versions). Maybe this will help you for further updates, I don't know; I'm not posting this to seek help but rather to inform you. You must already have a ton of work to do and not really the time to deal with beginners like me, so I'm just going to keep diving into this to learn more, do testing and experiments, repeat, and wait for updates ;) Cheers :D

EDIT: I updated onnxruntime-genai to the latest build and it's working now with Phi-3, running well even on my 5600 CPU. I will try to implement DirectML in Genny just to see, and then try to get a Mistral model working with it (if I understand it well, a genai_config.json is required for the ONNX model to load and work properly). Unexpectedly, it all appears way less complicated than I imagined thanks to your awesome tools, but I'm surely at the first peak of the Dunning-Kruger curve right now and the worst is yet to come lol. Have a great day!
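In case it helps anyone else getting started: loading a model folder (which must contain both the .onnx file and its genai_config.json) with the onnxruntime-genai C# bindings looks roughly like this. This is a minimal sketch, assuming the Microsoft.ML.OnnxRuntimeGenAI package is referenced; the model path and the Phi-3 prompt template are illustrative, and method names may shift between pre-1.0 builds:

```csharp
using System;
using Microsoft.ML.OnnxRuntimeGenAI;

// Folder containing model.onnx + genai_config.json (path is a placeholder)
using var model = new Model("path/to/phi-3-mini-4k-instruct-onnx");
using var tokenizer = new Tokenizer(model);

// Phi-3 instruct models expect this chat template
var prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>";
var sequences = tokenizer.Encode(prompt);

using var generatorParams = new GeneratorParams(model);
generatorParams.SetSearchOption("max_length", 256);
generatorParams.SetInputSequences(sequences);

// Generate token by token until max_length or an end token is reached
using var generator = new Generator(model, generatorParams);
while (!generator.IsDone())
{
    generator.ComputeLogits();
    generator.GenerateNextToken();
}

Console.WriteLine(tokenizer.Decode(generator.GetSequence(0)));
```

The same folder works for CPU or DirectML builds; which execution provider is used is driven by the provider options inside genai_config.json rather than by this code.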
-
Hello,
I just wanted to come here and say a big thanks to you for all the work you provide in your repo. I'm fairly experienced in C# dev, and I've wanted to jump into AI for quite a while, but I have to say: I hate working with Python. Just my personal opinion, but I feel that language is well documented at the cost of poor performance, plus the millions of libs/frameworks to install to make anything work, resulting in a complete bloated mess.
Since yesterday I've been testing your very well made and straightforward text-to-image UI app with the SDXL Turbo ONNX model, and wow! I'm astonished by the model's performance, but even more by the execution speed of ONNX Runtime and its seamless compatibility with AMD GPUs (using a 7900 XTX).
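For the curious, the AMD compatibility comes from the DirectML execution provider in plain ONNX Runtime. A minimal sketch of wiring it up, assuming the Microsoft.ML.OnnxRuntime.DirectML package is referenced and `model.onnx` is a placeholder path:

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

var options = new SessionOptions();
// DirectML works best with memory pattern disabled
options.EnableMemoryPattern = false;
// Device 0 = first GPU; DirectML runs on AMD, NVIDIA and Intel GPUs on Windows
options.AppendExecutionProvider_DML(0);

// Any operators the provider cannot handle fall back to CPU automatically
using var session = new InferenceSession("model.onnx", options);
Console.WriteLine($"Model loaded with {session.InputMetadata.Count} input(s)");
```

No vendor-specific code path is needed, which is exactly why it "just works" on a 7900 XTX.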
I'm planning to create my own personal assistant in the future, capable of writing, coding, executing procedural functions, listening, talking, and also generating things (such as images). I'm thinking of using your libs to ease my work, but also to learn more easily how things work as an AI beginner. Thanks to you I will be able to create it in my favorite language, C#!
I saw you're already working on text generation, which would be fabulous if it can be integrated with your core libs; Mistral would be awesome for this!
I'm still at the learning stage when it comes to AI, and damn, those things are very complex, so please forgive me if I'm writing nonsense :'D
All that said, big big BIG thanks to you guys! You are amazing!