Conversation

@vjanfaza (Contributor)

In this PR, we add support for meta-llama/Llama-Guard-4-12B, a dense model distilled from the Llama 4 Scout MoE model. The changes in pytorch_transforms.py can be applied to any dense model distilled from an MoE model whose architecture is supported in QEfficient.
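
For reference, a minimal sketch of how the newly supported model could be loaded and run through QEfficient's high-level API. This assumes Llama-Guard-4-12B is exposed through `QEFFAutoModelForCausalLM` (so the transforms from this PR are applied on load); the compile parameters and prompt below are illustrative assumptions, not values taken from this change:

```python
# Minimal sketch, assuming QEfficient's standard high-level API.
# The pytorch_transforms changes in this PR are applied automatically
# when the model is loaded through QEFFAutoModelForCausalLM.
from transformers import AutoTokenizer

from QEfficient import QEFFAutoModelForCausalLM

model_name = "meta-llama/Llama-Guard-4-12B"

# Load the dense checkpoint; QEfficient's pytorch transforms rewrite the
# modules for export and compilation.
model = QEFFAutoModelForCausalLM.from_pretrained(model_name)

# Compile for the target device (num_cores=16 is an illustrative value).
model.compile(num_cores=16)

# Run generation with the matching tokenizer (prompt is a placeholder).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.generate(prompts=["<sample prompt to classify>"], tokenizer=tokenizer)
```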

