Researchers at UC Berkeley and Microsoft Research have developed Gorilla, a large language model that excels at generating accurate API calls. The LLaMA-based model outperforms state-of-the-art LLMs such as GPT-4 by reducing hallucinated API calls and adapting to changes in API documentation at test time. Gorilla is trained on large API datasets collected from Torch Hub, TensorFlow Hub, and Hugging Face.

Gorilla's code, model, data, and demo are now available on GitHub, with plans to add more domains such as Kubernetes, GCP, AWS, and OpenAPI.
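Adapting to documentation changes at test time means pairing the user's instruction with relevant, up-to-date API docs in the prompt, so the model grounds its output in what the documentation currently says rather than in memorized calls. The sketch below is a hypothetical illustration of that retrieval-aware prompting pattern, not Gorilla's actual code; the doc store, the token-overlap retriever, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of retrieval-aware prompting for API-call generation.
# Not Gorilla's implementation; the doc store and retriever are illustrative.

def tokenize(text: str) -> set[str]:
    """Crude whitespace tokenizer used for overlap scoring."""
    return set(text.lower().split())

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs with the greatest token overlap with the query."""
    scored = sorted(docs,
                    key=lambda d: len(tokenize(query) & tokenize(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(instruction: str, docs: list[str]) -> str:
    """Prepend retrieved API docs so the model's call reflects current documentation."""
    context = "\n".join(retrieve(instruction, docs))
    return f"API documentation:\n{context}\n\nTask: {instruction}\nAPI call:"

# Toy documentation snippets standing in for scraped hub listings.
docs = [
    "torchhub: ultralytics/yolov5 - object detection model",
    "huggingface: facebook/bart-large-cnn - text summarization pipeline",
]
prompt = build_prompt("text summarization of a news article", docs)
```

If the hub updates a model card, only the doc store changes; the same prompt-building step then surfaces the new documentation without retraining the model, which is the practical appeal of the test-time adaptation Gorilla advertises.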

Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.