Great post as usual. There's an interesting paper from Microsoft Research on Orca and LFMs (large foundation models): https://arxiv.org/pdf/2306.02707.pdf. Basically, Orca is a 13-billion-parameter model
that learns to imitate the reasoning process of LFMs. Orca learns from
rich signals from GPT-4, including explanation traces, step-by-step thought
processes, and other complex instructions, guided by teacher assistance from
ChatGPT.