Server Configuration:
- Quantity: 1
- Configuration: 8 cores / 16GB memory / 500GB hard disk / GPU Machine
- Operating System: CentOS Linux release 7
- User: app (group: apps)
The single-node version provides two deployment methods, which can be selected based on your needs:
- Install FATE-LLM from PyPI with FATE
- Install FATE-LLM from PyPI with FATE, FATE-Flow, FATE-Client
The first method installs FATE-LLM together with FATE only; in this way, users can run tasks with Launcher, a convenient way to run quick experiments.
- Prepare and install a conda environment.
- Create a virtual environment:
# FATE-LLM requires Python >= 3.10
conda create -n fate_env python=3.10
conda activate fate_env
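Optionally, you can confirm that the new environment is active and meets the version requirement (a quick check; the exact patch version reported may differ):
# should report Python 3.10 or newer
python --version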
This section introduces how to install FATE-LLM from PyPI with FATE. Execute the following command to install FATE-LLM:
pip install fate_llm[fate]==2.2.0
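As an optional sanity check (not part of the official steps, and assuming the importable module name matches the distribution name fate_llm), you can confirm the package was installed and can be imported:
# show the installed package metadata
pip show fate_llm
# verify the package imports inside the fate_env environment
python -c "import fate_llm"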
After installing successfully, please refer to the tutorials to run tasks; all tasks described in the tutorials that run with Launcher are supported.
The second method installs FATE-LLM together with FATE, FATE-Flow, and FATE-Client; in this way, users can run tasks with Pipeline or Launcher.
For the Python environment, please refer to the conda setup steps in the first method above.
To install FATE, FATE-Flow, and FATE-Client, execute the following command:
pip install fate_client[fate,fate_flow,fate_client]==2.2.0
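Optionally, you can verify that the installation exposed the command-line tools used in the following steps (a quick check, assuming pip placed the console scripts on your PATH):
pip show fate_client
# both commands are used below and should be available after installation
fate_flow --help
pipeline --help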
After installation, initialize the FATE-Flow service and Pipeline:
mkdir fate_workspace
fate_flow init --ip 127.0.0.1 --port 9380 --home $(pwd)/fate_workspace
pipeline init --ip 127.0.0.1 --port 9380
- ip: The IP address where the service runs.
- port: The HTTP port the service runs on.
- home: The data storage directory, including data, models, logs, job configurations, and SQLite databases.
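If the default port is already occupied on your machine, the same init commands accept a different port; for example (9388 here is only an illustrative value):
# illustrative: use an alternative HTTP port if 9380 is taken
fate_flow init --ip 127.0.0.1 --port 9388 --home $(pwd)/fate_workspace
pipeline init --ip 127.0.0.1 --port 9388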
Start the FATE-Flow service and confirm it is running:
fate_flow start
fate_flow status # make sure fate_flow service is started
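As an additional optional check, you can confirm at the OS level that the HTTP port chosen during initialization (9380 in this guide) is listening:
# should show a LISTEN entry for the fate_flow HTTP port
ss -lnt | grep 9380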
FATE-Flow also provides other commands such as stop and restart; use them only if you need to stop or restart the fate_flow service.
# Warning: normal installing process does not need to execute stop/restart instructions.
fate_flow stop
fate_flow restart
Please refer to the tutorials for more usage guides; all tasks described in the tutorials that run with Pipeline or Launcher are supported.