The official repository of "What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks" (https://arxiv.org/abs/2305.18365).
- [Dec 2023] We updated all the test datasets we used and added the prompts we used (in log format) to the folder of each task. Please see the data/ folder for more details!
- [Sep 2023] Our paper has been accepted to NeurIPS 2023 Datasets and Benchmarks Track!
- [Sep 2023] We released the second version (v2) of our paper, adding experiments with more LLMs (GPT-4, GPT-3.5, Davinci-003, Llama 2, Galactica), more baselines, and further investigations of SELFIES and label interpretation!
- [May 2023] We released the first version (v1) of our paper! We are glad to share our investigations and insights about LLMs in chemistry!
The following are the prompts used in the paper. It is easy to try your own prompt design: simply change the prompt in the Jupyter notebook of each task and rerun it to see the results and performance, as in the sketch below.
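As a minimal sketch of what that change looks like, here is how a single prompt can be sent to a chat model with the OpenAI Python client (version >= 1.0). The prompt template, SMILES strings, and model name below are illustrative placeholders, not the exact prompts or settings from the paper.

```python
# Minimal sketch: edit the prompt string, rerun, and inspect the output.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder prompt -- replace with your own design in the task notebook.
prompt = (
    "You are an expert chemist. Given the reactants SMILES, "
    "predict the product SMILES.\n"
    "Reactants: CC(=O)O.OCC\n"
    "Product:"
)

response = client.chat.completions.create(
    model="gpt-4",            # or any other model evaluated in the paper
    messages=[{"role": "user", "content": prompt}],
    temperature=0,            # deterministic output for benchmarking
)
print(response.choices[0].message.content)
```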
The datasets for some tasks are already included in this repository. Because of file size limits, please download the remaining datasets from the links below. After downloading, move each dataset to the corresponding folder; you can then run our Jupyter notebook for that task.
| Dataset | Link | Reference |
|---|---|---|
| USPTO_Mixed | download | https://github.com/MolecularAI/Chemformer |
| USPTO-50k | download | https://github.com/MolecularAI/Chemformer |
| ChEBI-20 | download | https://github.com/blender-nlp/MolT5 |
| Suzuki-Miyaura | download | https://github.com/seokhokang/reaction_yield_nn |
| Buchwald-Hartwig | download | https://github.com/seokhokang/reaction_yield_nn |
| BBBP, BACE, HIV, Tox21, ClinTox | download | https://github.com/hwwang55/MolR |
| PubChem | download | https://github.com/ChemFoundationModels/ChemLLMBench/blob/main/data/name_prediction/llm_test.csv |
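As a quick sanity check after moving a dataset into place, you can load it with pandas. This sketch uses the data/name_prediction/llm_test.csv path from the table above; the column layout of other datasets may differ.

```python
# Sketch: verify that a downloaded dataset landed in the expected folder.
import pandas as pd

df = pd.read_csv("data/name_prediction/llm_test.csv")  # path from the table above
print(df.shape)   # number of test examples and columns
print(df.head())  # peek at the first few rows
```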
@misc{guo2023gpt,
title={What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks},
author={Taicheng Guo and Kehan Guo and Bozhao Nan and Zhenwen Liang and Zhichun Guo and Nitesh V. Chawla and Olaf Wiest and Xiangliang Zhang},
year={2023},
eprint={2305.18365},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Taicheng Guo: [email protected]
Xiangliang Zhang: [email protected]