Out of resource: shared memory

#16 by iszhaoxin

I got the following error message:

'triton.runtime.autotuner.OutOfResources: out of resource: shared memory, Required: 135200, Hardware limit: 101376. Reducing block sizes or num_stages may help.'

I tried it on both an RTX A6000 and an RTX 6000.
I guess it may be because the model was only trained and tested on specific types of GPUs, such as the A100?

Yes, in my experience as well, this model only works well on the GPUs listed as 'tested' in the documentation.

Microsoft org

The recommended modules to target are:

```json
"target_modules": [
    "o_proj",
    "qkv_proj"
]
```

@LeeStott how can we achieve this?

@LeeStott I'm running into this error as well. Can you show us how to adjust the layer?

If you are using PEFT, you can set this with the "target_modules" parameter in LoraConfig:
https://huggingface.co./docs/peft/package_reference/lora#peft.LoraConfig
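Not an official snippet, but a minimal sketch of how that might look, assuming you are fine-tuning a causal LM whose attention projections are named qkv_proj and o_proj; the model id and LoRA hyperparameters below are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder model id -- substitute the checkpoint you are actually fine-tuning.
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

lora_config = LoraConfig(
    r=16,                                   # example LoRA rank
    lora_alpha=32,                          # example scaling factor
    lora_dropout=0.05,
    target_modules=["o_proj", "qkv_proj"],  # restrict LoRA to the recommended modules
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```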
