Researchers at UC Berkeley and Microsoft Research have developed Gorilla, a large language model that excels at generating accurate API calls. The LLaMA-based model outperforms state-of-the-art LLMs such as GPT-4 on this task, mitigating hallucination and adapting to changes in API documentation at test time. Gorilla is trained on large datasets of APIs drawn from Torch Hub, TensorFlow Hub, and Hugging Face.

Gorilla's code, model, data, and demo are now available on GitHub, with plans to add more domains such as Kubernetes, GCP, AWS, and OpenAPI.
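
As a rough sketch of the kind of interaction Gorilla is built for (the checkpoint name and prompt below are illustrative assumptions, not the project's documented interface; see the GitHub repo for actual usage), one could load a released checkpoint with Hugging Face transformers and ask it for an API call:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint name; substitute whichever Gorilla release you actually use.
model_name = "gorilla-llm/gorilla-7b-hf-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Ask the model which API to call and how.
prompt = "I want to translate English text to German. Which Hugging Face API should I call, and with what arguments?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is that the completion should name a concrete, existing API rather than a hallucinated one.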

Movie extras in Hollywood are concerned that AI could replace them as productions use body scans to create digital replicas of actors, NPR reports. This tension is a central issue in the ongoing strike between actors and studios. Hollywood has used technology to enhance films for years, but advances in generative AI are taking the synthesis and digital cloning of actors to a whole new level.

Background actors fear they will be the first in the industry to be made obsolete by AI. SAG-AFTRA, the actors' union, wants to ensure that performers whose digital replicas are used are adequately compensated, and the two sides remain at odds over consent, compensation, and the future use of AI-created digital likenesses.

Wageningen University & Research's AGROS project aims to develop an autonomous greenhouse using AI algorithms and digital twin technology for remote control of cucumber cultivation.

As greenhouse cultivation faces challenges such as staff shortages and rising resource costs, the project uses non-invasive sensors to monitor crop characteristics and support management decisions for sustainable cultivation. Three greenhouse compartments pit digital twin technology, reinforcement learning, and human experts against one another to see which achieves the best crop yields. AGROS is a public-private partnership that includes Greenport West Holland, Hortilux, Ridder, and Philips.

Researchers from Johns Hopkins University have found a simple technique to reduce hallucinations in large language models (LLMs) and improve the accuracy of their answers. Adding grounding phrases such as "according to" to queries makes LLMs more likely to quote text they have actually observed and to provide factual information instead of fabricating answers.

A review of LLM responses using the QUIP score metric shows a 5-15% increase in the accuracy of cited information when grounding prompts such as "According to Wikipedia..." are used. While the technique works well across different LLMs, it is most effective with larger instruction-tuned models.
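
As a toy illustration (not the authors' implementation; the prompts, helper names, and n-gram size below are assumptions), grounding amounts to prepending an attribution phrase to the query, and its effect can be roughly gauged by checking how many of a response's n-grams appear verbatim in a source text, which is the spirit of the QUIP score:

def quoting_precision(response: str, source: str, n: int = 4) -> float:
    """Fraction of word n-grams in the response found verbatim in the source
    (a crude stand-in for the QUIP idea of measuring quoted text)."""
    words = response.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    normalized_source = " ".join(source.lower().split())
    return sum(g in normalized_source for g in grams) / len(grams)

question = "Why is the sky blue?"
plain_prompt = question
grounded_prompt = f"According to Wikipedia, {question}"  # grounding prompt

# ask_llm is a placeholder for whatever model client you use; compare the two
# prompts by scoring each response against the source article, e.g.:
# for p in (plain_prompt, grounded_prompt):
#     print(p, quoting_precision(ask_llm(p), wikipedia_article_text))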
