altomek committed
Commit b1434bc
1 Parent(s): 92ace14

Update README.md

Files changed (1): README.md (+2 -3)
README.md CHANGED
@@ -164,7 +164,7 @@ Please note that all demo inferences are run on CodeRosa-70B-AB1-3.92bpw-EXL2.
 
 Setting from Midnight-Rose should work in SillyTavern. This is almost same what I use for testing.
 
-I use max_seq_len 8K with alpha_value 2.65.
+I use max_seq_len 8K with alpha_value 2.65. The model also works with an 11K context when alpha_value is set to 5.5; however, the best outputs come with a context of around 6K.
 
 ### Terms and Conditions of Use
 
@@ -181,8 +181,7 @@ Please remember that all CodeRosa-70B-AB1 models operate under the llama2 license.
 
 ### Quants
 
-[GGUF quants](https://huggingface.co/mradermacher/CodeRosa-70B-AB1-GGUF/discussions/1) for this model do not represent model full potential!
-
+- [GGUF quants](https://huggingface.co/altomek/CodeRosa-70B-AB1-GGUF)
 - [6bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-6bpw-EXL2)
 - [5bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-5bpw-EXL2)
 - [4.5bpw](https://huggingface.co/altomek/CodeRosa-70B-AB1-4.5bpw-EXL2)
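The `max_seq_len` / `alpha_value` pairs in the diff above follow the NTK-aware RoPE scaling convention used by EXL2-style loaders, where the rotary base is raised by a factor of `alpha ** (dim / (dim - 2))` to stretch the usable context. A minimal sketch of that arithmetic, assuming the Llama-2 defaults of base 10000 and head dimension 128 (values not stated in the README itself):

```python
def ntk_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """NTK-aware RoPE scaling: raise the rotary base so that positions
    beyond the trained context map onto familiar rotary frequencies.

    Assumes Llama-2 defaults (base 10000, head_dim 128); adjust for
    other architectures.
    """
    return base * alpha ** (head_dim / (head_dim - 2))

# The two alpha_value settings mentioned in the README:
# 2.65 for an 8K context, 5.5 for an 11K context.
for alpha in (2.65, 5.5):
    print(f"alpha_value={alpha}: adjusted rotary base = {ntk_rope_base(alpha):,.0f}")
```

This is only an illustration of why a larger `alpha_value` is needed for a longer context; the loader applies the scaling internally, so in practice you just set `max_seq_len` and `alpha_value` together as the README describes.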