This repository is an evolution of the original repository accompanying the paper "Generative Agents: Interactive Simulacra of Human Behavior."
Note: If you change the environment name from `simulacra`, you'll need to update the name in the upcoming bash scripts as well.
```sh
conda create -n simulacra python=3.9.12 pip
conda activate simulacra
pip install -r requirements.txt
```
Create a file called `openai_config.json` in the root directory.
OpenAI example:
```json
{
    "client": "openai",
    "model": "gpt-4o-mini",
    "model-key": "<API-KEY>",
    "model-costs": {
        "input": 0.5,
        "output": 1.5
    },
    "embeddings-client": "openai",
    "embeddings": "text-embedding-3-small",
    "embeddings-key": "<API-KEY>",
    "embeddings-costs": {
        "input": 0.02,
        "output": 0.0
    },
    "experiment-name": "simulacra-test",
    "cost-upperbound": 10
}
```
Azure example:
```json
{
    "client": "azure",
    "model": "gpt-35-turbo-0125",
    "model-key": "<API-KEY>",
    "model-endpoint": "<MODEL-ENDPOINT>",
    "model-api-version": "<API-VERSION>",
    "model-costs": {
        "input": 0.5,
        "output": 1.5
    },
    "embeddings-client": "azure",
    "embeddings": "text-embedding-3-small",
    "embeddings-key": "<API-KEY>",
    "embeddings-endpoint": "<EMBEDDING-MODEL-ENDPOINT>",
    "embeddings-api-version": "<API-VERSION>",
    "embeddings-costs": {
        "input": 0.02,
        "output": 0.0
    },
    "experiment-name": "simulacra-test",
    "cost-upperbound": 10
}
```
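As a quick sanity check before launching a simulation, the config file can be validated against the fields shown in the two examples above. This is an illustrative sketch, not part of the repo; `validate_config` is a hypothetical helper and the key names are taken from the examples:

```python
import json

# Keys common to both the OpenAI and Azure examples above.
REQUIRED_KEYS = {
    "client", "model", "model-key", "model-costs",
    "embeddings-client", "embeddings", "embeddings-key",
    "embeddings-costs", "experiment-name", "cost-upperbound",
}

def validate_config(config: dict) -> list:
    """Return a list of problems found in an openai_config.json dict."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - config.keys())]
    if config.get("client") not in ("openai", "azure"):
        problems.append("client must be 'openai' or 'azure'")
    if config.get("client") == "azure":
        # Azure additionally needs endpoint and API-version fields.
        for key in ("model-endpoint", "model-api-version"):
            if key not in config:
                problems.append(f"azure client requires: {key}")
    return problems

# Example usage:
# with open("openai_config.json") as f:
#     print(validate_config(json.load(f)))  # [] means no problems found
```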
Feel free to test other models as well (and adjust the input and output costs accordingly). Note that this repo uses OpenAI's Structured Outputs feature, which is currently only available for certain models, such as the GPT-4o series. Check the OpenAI docs for more info.
Be aware that the only supported clients are `azure` and `openai`.
The generation and embedding models are configured separately so that a different client can be used for each.
Also adjust `cost-upperbound` according to your needs (the cost computation is done using the `openai-cost-logger` package, and the costs are specified per million tokens).
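Since costs are specified per million tokens, the per-request cost works out as sketched below. This is a minimal illustration, not `openai-cost-logger`'s actual API; the helper names are made up here, and the prices are the ones from the example config:

```python
def request_cost(input_tokens: int, output_tokens: int, costs: dict) -> float:
    """USD cost of one request, with `costs` given per million tokens."""
    return (input_tokens * costs["input"] + output_tokens * costs["output"]) / 1_000_000

def check_upperbound(total_cost_usd: float, upperbound_usd: float) -> None:
    """Stop the run once the accumulated cost reaches the configured bound."""
    if total_cost_usd >= upperbound_usd:
        raise RuntimeError(f"cost upper bound of {upperbound_usd} USD reached")

# e.g. with the gpt-4o-mini prices from the example config:
# request_cost(2000, 500, {"input": 0.5, "output": 1.5}) == 0.00175
```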
Next, you will (for now) also need to set up the `utils.py` file as described in the original repo's README. After creating the file as described there, add these lines to it and change them as necessary:

```python
use_openai = True
# If you're not using OpenAI, define api_model
api_model = ""
```
All the following scripts automatically activate a conda environment called `simulacra` using a conda installation at the path `/home/${USER}/anaconda3/bin/activate`.
You may want to change this line in case you are using a different conda installation (such as miniconda) or a different environment name.
If you're running the backend in headless mode (see below), you can skip this step.
```sh
./run_frontend.sh <PORT-NUMBER>
```
Note: omit the port number to use the default 8000.
```sh
./run_backend.sh <ORIGIN> <TARGET>
```
Example:
```sh
./run_backend.sh base_the_ville_isabella_maria_klaus simulation-test
```
See the original README for commands to pass to the server when running it manually. In addition to the commands listed there, you can also use the command `headless` in place of `run` (i.e. `headless 360` rather than `run 360`) to run in headless mode.
The following script offers a range of enhanced features:
- Automatic Saving: The simulation automatically saves progress every 200 steps, ensuring you never lose data.
- Error Recovery: In the event of an error, the simulation automatically resumes by stepping back and restarting from the last successful point. This is crucial as the model relies on formatted answers, which can sometimes cause exceptions.
- Automatic Tab Opening: A new browser tab will automatically open when necessary.
- Headless Mode: The scripts support running simulations in headless mode, enabling execution on a server without a UI.
- Configurable Port Number: You can configure the port number as needed.
For more details, refer to: run_backend_automatic.sh and automatic_execution.py.
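The save-and-recover behaviour described above can be sketched as follows. This is a simplified model, not `automatic_execution.py`'s actual code; the function names here are illustrative:

```python
def run_with_recovery(step_fn, target_step, save_fn, checkpoint_every=200):
    """Advance step_fn() until target_step, checkpointing every
    `checkpoint_every` steps, and step back to the last checkpoint
    whenever a step raises (e.g. a badly formatted model answer)."""
    step = 0
    last_checkpoint = 0
    while step < target_step:
        try:
            step_fn(step)
            step += 1
            if step % checkpoint_every == 0:
                save_fn(step)          # automatic save
                last_checkpoint = step
        except Exception:
            step = last_checkpoint     # error recovery: resume from last save
    return step
```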
```sh
./run_backend_automatic.sh -o <ORIGIN> -t <TARGET> -s <STEP> --ui <True|None|False> -p <PORT> --browser_path <BROWSER-PATH>
```
Arguments taken by `run_backend_automatic.sh`:
- `-o <ORIGIN>`: The name of an existing simulation to use as the base for the new simulation.
- `-t <TARGET>`: The new simulation name (Note: you cannot have multiple simulations with the same name).
- `-s <STEP>`: The step number to end on (NOT necessarily the number of steps to run for!).
- `--ui <True|None|False>`: Whether to run the UI or run the simulation headless (no UI). There are two different headless modes: `None` runs in pure headless mode (no browser needed), whereas `False` runs in Chrome's built-in headless mode (requires headless Chrome to be installed). Prefer `None` over `False` in normal cases.
- `-p <PORT>`: The port to run the simulation on.
- `--browser_path <BROWSER-PATH>`: The path to the UI in the browser.
Example:
```sh
./run_backend_automatic.sh -o base_the_ville_isabella_maria_klaus -t test_1 -s 4 --ui None
```
- `http://localhost:8000/` - check if the server is running
- `http://localhost:8000/simulator_home` - watch the live simulation
- `http://localhost:8000/replay/<simulation-name>/<starting-time-step>` - replay a simulation
For a more detailed explanation see the original readme.
Cost tracking is done with the `openai-cost-logger` package. Given the potentially high cost of a simulation, you can set a cost upper bound in the config file; when it is reached, an exception is raised and execution stops.
See all the details of your expenses using the notebook `cost_viz.ipynb`.
- Model: "gpt-3.5-turbo-0125"
- Embeddings: "text-embedding-3-small"
- N. Agents: 3
- Steps: ~5000
- Final Cost: ~0.31 USD
- See the saved simulation: skip-morning-s-14
- Model: "gpt-3.5-turbo-0125"
- Embeddings: "text-embedding-3-small"
- N. Agents: 25
- Steps: ~3000 (until ~8 a.m.)
- Final Cost: ~1.3 USD
- Model: "gpt-3.5-turbo-0125"
- Embeddings: "text-embedding-3-small"
- N. Agents: 25
- Steps: ~8650 (full day)
- Final Cost: ~18.5 USD
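For rough budgeting, the runs above imply a per-step cost you can extrapolate from. Note that it is not constant: the full-day 25-agent run costs far more per step than its first ~3000 steps, plausibly because later steps involve longer accumulated memory contexts. The figures below are just the numbers reported above:

```python
# (agents, steps, total USD) taken from the runs reported above
runs = [(3, 5000, 0.31), (25, 3000, 1.3), (25, 8650, 18.5)]
for agents, steps, usd in runs:
    print(f"{agents:>2} agents, {steps:>4} steps: ~${usd / steps:.5f}/step")
```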