DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models, releasing several versions of each.
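To give a flavor of what makes GRPO "group relative": instead of training a separate value network as a baseline (as in PPO), GRPO samples a group of responses per prompt and scores each response against the group's own reward statistics. The sketch below shows only that advantage-normalization step, with the function name and example rewards chosen here for illustration; it is a minimal sketch of the idea, not DeepSeek's implementation.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages for one group of sampled responses.

    Each response's advantage is its reward normalized by the mean and
    standard deviation of the rewards within its own group, replacing
    the learned value baseline used in PPO.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All responses scored identically: no relative learning signal.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Hypothetical example: four sampled responses to one prompt,
# scored by some reward function (e.g. answer correctness).
print(group_relative_advantages([1.0, 0.0, 0.5, 0.5]))
```

Responses that score above the group mean get positive advantages and are reinforced; below-average ones are pushed down, so the group itself serves as the baseline.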