[Feature] own models #69
Comments
Yes, this makes sense. I'll put it on the task list.
Hackishly, you can use LocalAI (another project) to simulate OpenAI's API. Then, just override the OpenAI host when importing via Python.
You have to set the model as well. It works very well and only took me 20 minutes to figure out. Granted, it wasn't a clean way to do it: I replaced everything that mentioned "gpt-4" and other models with my model.
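As a rough sketch of the override described above: LocalAI exposes an OpenAI-compatible HTTP endpoint, so the client only needs a different base URL and model name. The host, port, and model name below are assumptions about a local LocalAI setup, not values from this thread.

```python
import json

# Hypothetical LocalAI endpoint and model name -- adjust to your setup.
LOCALAI_BASE = "http://localhost:8080/v1"
MODEL = "ggml-gpt4all-j"  # this is what replaces "gpt-4" in the client


def chat_request(prompt):
    """Build an OpenAI-style chat completion request aimed at LocalAI.

    The URL path and payload shape follow the OpenAI chat completions
    format, which is what LocalAI mimics.
    """
    url = f"{LOCALAI_BASE}/chat/completions"
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload)
```

Because the request format is unchanged, a tool like PentestGPT that speaks the OpenAI API does not need to know it is talking to LocalAI.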
Yeah, LocalAI is a really cool project. I would feel safer using this tool if my penetration-test session weren't going to OpenAI and instead hit a local model and API.
@LarsBingBong Unfortunately that is impossible for now. There is no existing model that can complete a pentest as well as GPT-4. If you have concerns, you may also use the OpenAI API directly. OpenAI claims that it won't collect information from it, and that the data will be deleted after 30 days.
Fair enough. I'll consider my options. Thank you very much for the response. At the same time, though, isn't running through LocalAI exactly what @absane did?
So, LocalAI just mimics the OpenAI API. Thus, as far as PentestGPT or any other tool that uses the OpenAI API is concerned, the requests and the format of the responses don't change. The cool thing about LocalAI is that you can hook it into your own local model OR forward requests to the OpenAI API. You can pick and choose, and even implement rules to decide what goes to OpenAI and what stays local. I prefer this approach because it provides more flexibility while someone, or even I, works on a local model that can do just as well as GPT-4 for our particular use case. One thing I have in mind is taking GPT4All, or whatever the flavor-of-the-day model is, and fine-tuning it on internal documentation like our past reports, notes, bash history logs, etc. OpenAI is fantastic, but I don't like the idea of feeding it endless streams of sensitive information, particularly our clients'.
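The local-vs-OpenAI routing rules mentioned above could be as simple as a keyword check on the prompt. This is a minimal sketch; the marker list and backend names are hypothetical examples, not anything LocalAI or PentestGPT ships with.

```python
# Hypothetical markers for content that should never leave the machine:
# client references, credentials, RFC 1918 address prefixes, etc.
SENSITIVE_MARKERS = ("client", "password", "10.", "192.168.")


def choose_backend(prompt: str) -> str:
    """Route prompts that look sensitive to the local model,
    everything else to the OpenAI API."""
    if any(marker in prompt.lower() for marker in SENSITIVE_MARKERS):
        return "local"
    return "openai"
```

A real deployment would likely use a proper classifier or allow-list, but even a rule this crude keeps obviously sensitive material off third-party servers.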
Hmmm, could this be closed as of v0.9? As an enhancement, we could probably add an installer menu for the most recent models on HF. The GPT4All API includes a function that automatically downloads a model if it is not present. Despite the dependency, all we would need to change is the hard-coded model string: replace it with a variable linked to the models available on the system (including ones available for automatic download) via the GPT4All API.
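The menu idea above might look something like this sketch. The model names are illustrative placeholders, and the actual download would be delegated to GPT4All's Python API, which fetches a model file automatically when it is not already on disk.

```python
# Illustrative model names -- a real menu would enumerate what the
# GPT4All API reports as available on the system or for download.
AVAILABLE_MODELS = [
    "ggml-gpt4all-j-v1.3-groovy",
    "ggml-mpt-7b-chat",
]


def select_model(choice: int) -> str:
    """Return the model name for a menu choice.

    Passing the returned name to GPT4All's constructor would trigger
    an automatic download if the model file is missing locally.
    """
    if not 0 <= choice < len(AVAILABLE_MODELS):
        raise ValueError("unknown model choice")
    return AVAILABLE_MODELS[choice]
```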
Can you show more steps on how to make PentestGPT work with LocalAI? It would be great to have a simple how-to that shows exactly what to change in order to make it work. Pretty please.
To get a free version: let the user choose their own model from the Hugging Face library, run it on a GPU to make it work, and then use this model instead of the OpenAI API.