TheBloke committed on
Commit 6b3d216
1 Parent(s): 2462d72

Update README.md

Files changed (1): README.md +2 -1
README.md CHANGED
@@ -33,7 +33,8 @@ Two sets of models are provided:
  * Groupsize = 1024
  * Should work reliably in 24GB VRAM
  * Groupsize = 128
- * May require more than 24GB VRAM, depending on response length
+ * Optimal setting for highest inference quality
+ * But may require more than 24GB VRAM, depending on response length
  * In my testing it ran out of VRAM on a 24GB card around 1500 tokens returned.

  For each model, two versions are available:
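
For context on what the groupsize setting discussed above means in practice, here is a minimal sketch of loading the groupsize-128 variant with AutoGPTQ. The model path is a placeholder, and the loader choice is an assumption for illustration; it is not part of this commit.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Hypothetical local path to the groupsize-128 GPTQ checkpoint.
model_dir = "path/to/gptq-model-128g"

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)

# The group size is recorded in the quantized checkpoint's quantize_config.json,
# so loading only needs the directory. The 128g variant gives higher quality but
# can exceed 24GB VRAM on long responses, per the README note above.
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    use_safetensors=True,
)

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```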