
Conversation

vackosar

Generalize the line in TransformerSession that trims the cache so it also supports the LongLLaMA tensor layout, where each per-layer cache entry is a tuple of length 6 instead of the usual 2.
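
For illustration only, here is a minimal sketch of what a layout-agnostic trim could look like, assuming the cache is a tuple of per-layer tuples and the key/value tensors use the usual `[batch, heads, seq_len, head_dim]` layout. The helper name `trim_cache` and the rule of passing non-4D entries through untouched are my own assumptions, not the repo's current code; whether LongLLaMA's extra memory tensors should also be sliced depends on their semantics.

```python
import torch

def trim_cache(past_key_values, max_len):
    """Trim each layer's cached tensors along the sequence dimension.

    Handles both the standard (key, value) layout and longer per-layer
    tuples such as LongLLaMA's 6-element entries. Only 4D tensors
    (assumed [batch, heads, seq_len, head_dim]) are sliced; any other
    entry is returned unchanged. Hypothetical sketch, not the actual
    TransformerSession implementation.
    """
    trimmed = []
    for layer in past_key_values:
        trimmed.append(tuple(
            t[:, :, :max_len, :]
            if isinstance(t, torch.Tensor) and t.ndim == 4
            else t
            for t in layer
        ))
    return tuple(trimmed)
```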
