
[Feature] own models #69

Open
KeparYTbcc opened this issue May 10, 2023 · 8 comments
Labels: enhancement (New feature or request)

@KeparYTbcc

To get a free version, let the user choose their own model from the Hugging Face library and run it on a GPU, then use this model instead of the OpenAI API.

@GreyDGL
Owner

GreyDGL commented May 10, 2023

Yes, this makes sense. I'll put it on the task list.
This will probably require some modular design of the current sessions. Feel free to propose good ideas with PRs :)

@GreyDGL GreyDGL self-assigned this May 10, 2023
@GreyDGL GreyDGL added the enhancement New feature or request label May 10, 2023
@absane

absane commented May 10, 2023

Hackishly, you can use LocalAI (another project) to simulate OpenAI's API. Then, just override the OpenAI host after importing the library in Python.

openai.api_base = "http://host.docker.internal:8080"

You have to set the model as well. In my case I set up ggml-gpt4all-j.

It works very well and only took me 20 minutes to figure out. Granted, it wasn't a clean way to do it... I replaced everything that mentioned "gpt-4" and other models with my model.
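
In case it helps others, here is a minimal sketch of the whole override (assuming LocalAI listens on host.docker.internal:8080 with a ggml-gpt4all-j model configured, and the pre-1.0 openai Python package; names are from my setup, adjust to yours):

```python
import openai

# Point the client at LocalAI instead of api.openai.com.
# Some LocalAI setups expect a "/v1" suffix on the base URL.
openai.api_base = "http://host.docker.internal:8080"
openai.api_key = "sk-local"  # LocalAI ignores the key, but the client requires one

# Use the model name configured in LocalAI wherever the code says "gpt-4".
response = openai.ChatCompletion.create(
    model="ggml-gpt4all-j",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response["choices"][0]["message"]["content"])
```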

@LarsBingBong

Yeah, LocalAI is a really cool project. And I would feel safer about using this tool if my penetration-test sessions weren't going to OpenAI but to a local model and API instead.

@GreyDGL
Owner

GreyDGL commented May 14, 2023

@LarsBingBong Unfortunately that is impossible for now. There is no existing model that can complete a pentest as well as GPT-4 can. If you have concerns, you may still use the OpenAI API: OpenAI claims that it won't collect information from API usage, and that the data is deleted after 30 days.

@LarsBingBong

LarsBingBong commented May 15, 2023

Fair enough. I'll consider my options. Thank you very much for the response. At the same time, though, isn't running through LocalAI exactly what @absane did?

@absane

absane commented May 24, 2023

> Fair enough. I'll consider my options. Thank you very much for the response. At the same time, though, isn't running through LocalAI exactly what @absane did?

So, LocalAI just mimics the OpenAI API. Thus, as far as PentestGPT or any other tool that uses the OpenAI API is concerned, the requests and the format of the responses don't change. The cool thing about LocalAI is that you can hook it into your own local model OR forward requests to the OpenAI API. You can pick and choose, and even implement rules to choose what goes to OpenAI and what stays local.

I prefer this approach because it provides more flexibility while someone (or even I) works on a local model that can do just as well as GPT-4 for our particular use case. One thing I have in mind is taking GPT4All, or whatever the flavor-of-the-day model is, and fine-tuning it on internal documentation like our past reports, notes, bash history logs, etc. OpenAI is fantastic, but I don't like the idea of feeding it endless streams of sensitive information, particularly our clients'.
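
The pick-and-choose part can even live client-side. A hypothetical sketch of such a routing rule (the is_sensitive heuristic, endpoints, and model names are made up for illustration):

```python
import openai

LOCAL_BASE = "http://host.docker.internal:8080"  # LocalAI endpoint
REMOTE_BASE = "https://api.openai.com/v1"        # real OpenAI endpoint

def is_sensitive(prompt: str) -> bool:
    # Toy rule: keep anything that looks like client data on the local model.
    return any(w in prompt.lower() for w in ("client", "internal", "confidential"))

def chat(prompt: str) -> str:
    # Route sensitive prompts to LocalAI, everything else to OpenAI.
    if is_sensitive(prompt):
        openai.api_base, model = LOCAL_BASE, "ggml-gpt4all-j"
    else:
        openai.api_base, model = REMOTE_BASE, "gpt-4"
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```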

@GreyDGL GreyDGL mentioned this issue May 26, 2023
@SATUNIX

SATUNIX commented Sep 16, 2023

Hmmm, could this be closed as of v0.9? As an enhancement we could probably add an installer menu for the most recent models on Hugging Face. The GPT4All API includes a function that automatically downloads a model if it is not present. Despite the extra dependency, all we would need to change is the hard-coded model string: replace it with a variable linked to the models available on the system (including ones available for automatic download) via the GPT4All API.
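
For reference, a rough sketch of that behaviour with the gpt4all Python package (the model filename is just an example; I'm assuming the package's list_models() helper and its download-on-first-use behaviour):

```python
from gpt4all import GPT4All

# The catalogue of downloadable models -- the raw material for an installer menu.
for entry in GPT4All.list_models():
    print(entry["filename"])

# Instantiating by name downloads the model file on first use if it is missing.
model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")
print(model.generate("Hello"))
```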

@Whoami451

> So, LocalAI just mimics the OpenAI API. Thus, as far as PentestGPT or any other tool that uses the OpenAI API is concerned, the requests and the format of the responses don't change. The cool thing about LocalAI is that you can hook it into your own local model OR forward requests to the OpenAI API. You can pick and choose, and even implement rules to choose what goes to OpenAI and what stays local.

Can you show more steps on how to make PentestGPT work with LocalAI? It would be great to have a simple how-to that shows exactly what to change in order to make it work. Pretty please.
