v0.1.1

@klaasmeinke klaasmeinke released this 16 Aug 23:58
· 5 commits to main since this release
c2786b6

subwiz was crashing with out-of-memory errors when requesting a large number of predictions. we now run inference in batches of 500 sequences at a time. in the future we could:

  • make batch size configurable
  • make batch size set automatically from memory size
  • use quantization to decrease memory usage
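a minimal sketch of the batching fix, assuming a `run_inference` function that stands in for subwiz's actual model call (the name and signature are illustrative, not the real API): splitting the inputs into chunks of at most 500 bounds peak memory by the batch size instead of the total number of predictions.

```python
# batch size of 500 matches the value mentioned in the release notes;
# making it configurable is one of the listed future ideas.
BATCH_SIZE = 500

def run_inference(batch):
    # hypothetical stand-in for the model's forward pass
    return [seq.upper() for seq in batch]

def predict_in_batches(sequences, batch_size=BATCH_SIZE):
    # run inference one chunk at a time so only `batch_size`
    # sequences are in memory per forward pass
    results = []
    for start in range(0, len(sequences), batch_size):
        results.extend(run_inference(sequences[start:start + batch_size]))
    return results
```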