Microsoft Research introduces Orca, a new AI model that learns by imitating large language models such as OpenAI's GPT-4. It aims to overcome the limitations of smaller models while requiring far fewer computing resources to run.
Orca, with its 13 billion parameters, learns from GPT-4's explanation traces, step-by-step reasoning, and complex instructions. This imitation of rich teaching signals is designed to promote progressive learning, and Orca has already outperformed Vicuna-13B on zero-shot reasoning benchmarks.
On AGIEval, Orca outperforms conventional instruction-tuned models such as Vicuna-13B by 42%, showcasing the effectiveness of its training approach. Despite its smaller size, Orca reaches parity with ChatGPT on reasoning benchmarks such as Big-Bench Hard (BBH). It even demonstrates competitive performance on academic exams like the SAT and LSAT.
The Microsoft Research team envisions Orca gaining improved skills and capabilities as it learns from step-by-step explanations, whether generated by humans or by more advanced language models. Exciting times ahead for AI development!
Stay tuned for more updates on the advancements in AI, as Microsoft and OpenAI continue to push the boundaries of language models and pave the way for innovative applications in various domains.