Ask HN: Llama-2-7B adapter merged with llama-2-7B-chat model?
2 points by dynamo_ | 0 comments on Hacker News.
I fine-tuned an adapter on the llama-2-7b base model and then realized I had merged that adapter into the llama-2-7b-chat model by mistake. But when I ran inference on the MedQA benchmark, the llama-2-7b-chat model merged with the llama-2-7b adapter outperformed the plain llama-2-7b-chat model. Why did this work? Was it a fluke, or are an adapter's weight updates compatible with both the general llama-2-7b model and the chat-tuned version? Thanks!
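One possible explanation (my own sketch, not from the post): llama-2-7b-chat was fine-tuned from llama-2-7b, so its weights are a relatively small perturbation of the base weights. A merged adapter typically adds a low-rank delta to the weights, and adding that delta to the chat weights instead of the base weights differs from the intended merge by exactly the base-to-chat gap. A toy numpy illustration, with random matrices standing in for the real 7B weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy dimension; real Llama-2 layers are much larger

# Stand-in weights: the chat model is a small perturbation of the base model.
W_base = rng.normal(size=(d, d))
W_chat = W_base + 0.01 * rng.normal(size=(d, d))

# A rank-2 adapter delta (B @ A), as in LoRA-style adapters, trained against W_base.
B = rng.normal(size=(d, 2))
A = rng.normal(size=(2, d))
delta = B @ A

W_merged_intended = W_base + delta  # the merge the poster meant to do
W_merged_actual = W_chat + delta    # the merge the poster actually did

# The discrepancy between the two merges is exactly the base-to-chat gap,
# which is small if chat fine-tuning didn't move the weights far.
merge_gap = np.linalg.norm(W_merged_actual - W_merged_intended)
chat_gap = np.linalg.norm(W_chat - W_base)
print(merge_gap, chat_gap)  # these two norms are identical
```

So if the task-specific delta transfers, the accidental merge behaves like a chat model plus the adapter's task knowledge, which could plausibly beat plain llama-2-7b-chat on MedQA rather than being a fluke. Whether the delta actually transfers depends on how far chat fine-tuning moved the relevant directions, which this toy model does not capture.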