🚧 fix(wip): descriptions
mxchinegod committed Dec 23, 2023
1 parent 0fa1fa6 commit 882f6b4
Showing 3 changed files with 32 additions and 30 deletions.
59 changes: 30 additions & 29 deletions README.md
@@ -4,8 +4,10 @@

<h1 align="center">magnet</h1>

<p align="center">small, efficient embedding model toolkit</p>
<p align="center"><i>~ fine-tune SOTA LLMs on knowledge bases rapidly ~</i></p>
<h3 align="center"><a href="https://prismadic.github.io/magnet/">📖 docs</a> | 💻 <a href="https://github.com/Prismadic/magnet/tree/main/examples">examples</a></h3>

<p align="center">the small distributed language model toolkit</p>
<p align="center"><i>⚡️ fine-tune state-of-the-art LLMs anywhere, rapidly ⚡️</i></p>
<div align="center">
</p>

@@ -23,14 +25,12 @@

</div>


</p>

<img src='./divider.png' style="width:100%;height:5px;">

## 🧬 Installation


```bash
pip install llm-magnet
```
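The diff below shows a call of the form `await filings.process('./data/filings.parquet','clean','file', nlp=False)`. For orientation only, here is a minimal self-contained sketch of that async calling pattern; the `Processor` class and its signature are hypothetical stand-ins for illustration, not magnet's actual API (see the linked docs and examples for the real one).

```python
import asyncio

# Hypothetical stand-in for magnet's data processor. The real class,
# constructor, and method signatures live in the magnet docs/examples;
# this stub only demonstrates the awaitable calling convention.
class Processor:
    async def process(self, path, column, kind, nlp=False):
        # A real implementation would load and clean the file here;
        # the stub just echoes its inputs.
        return {"path": path, "column": column, "kind": kind, "nlp": nlp}

async def main():
    filings = Processor()
    # Mirrors the call shape shown in the diff hunks below.
    return await filings.process('./data/filings.parquet', 'clean', 'file', nlp=False)

result = asyncio.run(main())
print(result)
```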
@@ -62,36 +62,37 @@ await filings.process('./data/filings.parquet','clean','file', nlp=False)
<img src='./divider.png' style="width:100%;height:5px;">

## 🔮 features

<center>
<img src="./clustered_bidirectional.png" style="width:50%;transform: rotate(90deg);margin-top:200px;" align="right">
</center>

- ⚡️ **It's Fast**
- <small>fast on consumer hardware</small>
- <small>_very_ fast on Apple Silicon</small>
- <small>**extremely** fast on ROCm/CUDA</small>
- 🫵 **Automatic or your way**
- <small>rely on established transformer patterns to let `magnet` do the work</small>
- <small>keep your existing data processing functions, bring them to `magnet`!</small>
- 🛰️ **100% Distributed**
- <small>processing, embedding, storage, retrieval, querying, or inference from anywhere</small>
- <small>as much or as little compute as you need</small>
- 🧮 **Choose Inference Method**
- <small>HuggingFace</small>
- <small>vLLM node</small>
- <small>GPU</small>
- <small>mlx</small>
- 🌎 **Huge Volumes**
- <small>handle gigantic amounts of data inexpensively</small>
- <small>fault-tolerant by design</small>
- <small>decentralized workloads</small>
- 🔐 **Secure**
- <small>JWT</small>
- <small>Basic</small>
- 🪵 **World-Class Comprehension**
- <small>`magnet` optionally logs its own code as it's executed (yes, really)</small>
- <small>build a self-aware system and allow it to learn from itself</small>
- <small>emojis are the future</small>

<img src='./divider.png' style="width:100%;height:5px;">

@@ -104,4 +105,4 @@ await filings.process('./data/filings.parquet','clean','file', nlp=False)
- upload to S3
- ideal cyberpunk vision of LLM power users in vectorspace

<img src='./divider.png' style="width:100%;height:5px;">
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,7 +1,7 @@
[project]
name = "llm_magnet"
version = "0.0.9"
description = "An embedding model toolkit; fine-tune SOTA LLMs on knowledge bases rapidly."
description = "the small distributed language model toolkit. fine-tune state-of-the-art LLMs anywhere, rapidly."
readme = "dynamic"

[tool.setuptools.packages.find]
1 change: 1 addition & 0 deletions setup.py
@@ -3,6 +3,7 @@
setup(
name='llm_magnet',
version='0.0.9',
description="the small distributed language model toolkit. fine-tune state-of-the-art LLMs anywhere, rapidly.",
long_description=open('README.md').read(),
long_description_content_type='text/markdown',
install_requires=[
