Researchers at UC Berkeley and Microsoft Research have developed Gorilla, a large language model that excels at generating accurate API calls. Fine-tuned from LLaMA, Gorilla outperforms state-of-the-art LLMs such as GPT-4 on this task: it hallucinates APIs far less often, and, when paired with a document retriever, it can adapt to API documentation changes at test time. Gorilla is trained on a large corpus of API documentation collected from Torch Hub, TensorFlow Hub, and Hugging Face.
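To make the workflow concrete, here is a minimal sketch of querying a Gorilla-style model through the Hugging Face `transformers` library. The model ID and the plain-text prompt below are illustrative assumptions, not the project's documented interface; consult the GitHub repository for the exact model names and prompt format.

```python
# Minimal sketch: asking a Gorilla-style model to emit an API call for a
# natural-language task. The model ID and prompt are assumptions for
# illustration; see the project's repository for the supported usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gorilla-llm/gorilla-7b-hf-v1"  # hypothetical/illustrative ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A natural-language task. The model is expected to respond with a
# concrete, correctly parameterized API call (e.g. a Hugging Face
# pipeline invocation) rather than a hallucinated one.
prompt = "I want to translate English text to French."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In the retrieval-aware setting, the API documentation fetched at inference time would be appended to the prompt, which is what lets the model track documentation changes without retraining.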
Gorilla's code, model, data, and demo are now available on GitHub, with plans to add more domains such as Kubernetes, GCP, AWS, and OpenAPI.