The Hong Kong branch of a multinational company lost HK$200 million (US$25.6 million) to deepfake fraudsters. Using publicly available video and audio footage, the fraudsters created convincing digital likenesses of the company's finance director and other employees, then instructed a finance employee to carry out transactions totaling HK$200 million. The employee made 15 wire transfers to five Hong Kong bank accounts before realizing it was a scam. Hong Kong police are investigating the case and warning of the growing use of deepfake technology for fraud. Senior Inspector Tyler Chan Chi-wing recommended verifying the authenticity of participants in video calls by asking them to move their heads or answer questions.
Meta has introduced "Prompt Engineering with Llama 2", an interactive Jupyter Notebook guide for developers, researchers, and enthusiasts working with large language models (LLMs). The guide covers prompt engineering best practices and showcases various prompting methods, including explicit instructions, stylization, formatting, restrictions, zero- and few-shot learning, role prompting, chain-of-thought, self-consistency, retrieval-augmented generation, and program-aided language models. It also demonstrates how to limit extraneous tokens in LLM outputs by combining roles, rules, explicit instructions, and examples. The resource aims to help users achieve better results with LLMs by effectively applying these techniques. The Jupyter notebook is available from the llama-recipes repository.
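Two of the listed techniques can be illustrated without any model call: few-shot learning (showing input/output examples before the query) and combining a role with explicit rules to suppress extraneous tokens. The following is a minimal sketch in plain Python; the helper names are my own, not from Meta's notebook.

```python
# Illustrative prompt builders -- helper names are hypothetical, not from the
# "Prompt Engineering with Llama 2" notebook itself.

def few_shot_prompt(instruction, examples, query):
    """Explicit instruction followed by input/output examples (few-shot learning)."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

def role_and_rules_prompt(role, rules, question):
    """Role prompting plus explicit rules to limit extraneous output tokens."""
    rule_text = "\n".join(f"- {r}" for r in rules)
    return f"You are {role}.\nRules:\n{rule_text}\n\n{question}"

prompt = few_shot_prompt(
    "Classify the sentiment of each sentence as positive or negative.",
    [("I loved the movie.", "positive"), ("The food was awful.", "negative")],
    "The service was excellent.",
)

terse = role_and_rules_prompt(
    "a concise assistant",
    ["Answer with a single word.", "Do not explain your answer."],
    "What is the sentiment of: 'The service was excellent.'?",
)
```

Ending the few-shot prompt with a bare "Output:" nudges the model to continue with just the label, which is the same idea the guide uses to trim extraneous tokens.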
Hugging Face has introduced a new Chat Assistant feature that lets users create custom AI chatbots in just two clicks. Similar to OpenAI's GPTs, a Hugging Face Chat Assistant is defined by its name, avatar, description, and underlying language model, such as Llama 2 or Mixtral. Custom system messages control the chatbot's behavior, and different message starters can be provided. The main advantages over OpenAI's GPTs are the choice of different open-source models, free inference provided by Hugging Face, and public sharing without a subscription. The feature is still in beta and lacks some capabilities of OpenAI's GPTs, such as RAG and web search; both are on the roadmap. Another open-source GPT alternative is OpenGPT.
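As an illustration of the pieces that make up such an assistant, the definition could be represented as a simple record; the dict schema below is my own sketch, not Hugging Face's actual configuration format, though the model identifier is a real open-source model on the Hub.

```python
# Hypothetical representation of a Chat Assistant definition -- the fields
# mirror what the feature asks for (name, description, model, system message,
# message starters), but this schema is illustrative only.
assistant = {
    "name": "Code Reviewer",
    "description": "Reviews code snippets and suggests improvements.",
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # one selectable open-source model
    "system_message": (
        "You are a meticulous code reviewer. "
        "Point out bugs first, then style issues. Keep answers short."
    ),
    "message_starters": [
        "Review this Python function for bugs.",
        "Is this loop idiomatic?",
    ],
}
```

The system message is the main lever for behavior: everything the assistant says is conditioned on it, which is how a single base model can be specialized into many differently behaving assistants.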
Russia and China have met in Beijing to discuss the military use of artificial intelligence. According to a statement from the Russian Foreign Ministry, there was "a detailed exchange of assessments" on the use of AI technology for military purposes. Both countries agreed to intensify cooperation within the framework of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS) under the Convention on Inhumane Weapons. The Russian statement emphasized the closeness of the Russian and Chinese positions on this issue and the need for further close cooperation. The Chinese statement made no mention of discussions on the military use of AI, but did mention consultations on "outer space security, biosecurity and artificial intelligence."
Adept recently introduced Fuyu-Heavy, a new multimodal AI model for digital agents. According to the company, Fuyu-Heavy is the third most capable multimodal model after GPT-4V and Gemini Ultra, and it excels in multimodal reasoning and UI understanding. It performs well on traditional multimodal benchmarks and matches or exceeds models in its performance class on standard text-based benchmarks: it scores similarly to Claude 2.0 on chat evaluations and slightly better than Gemini Pro on the MMMU benchmark. Fuyu-Heavy will soon power Adept's enterprise product, and lessons learned from its development have already been applied to its successor. The following video demonstrates the model's ability to understand a user interface.
Ambassadors from the EU's 27 member states have unanimously approved the world's first comprehensive set of rules for artificial intelligence, confirming a political agreement reached in December. The law regulates AI based on its potential for harm. Despite reservations from France, Germany, and Italy, which called for less stringent rules for high-performance AI models such as OpenAI's GPT-4, the final version of the law includes transparency requirements for all models and additional obligations for high-risk models. The internal market and civil liberties committees will adopt the AI legislation on February 13, followed by a plenary vote on April 10 and 11.