
NeurIPS 2023, one of the world's premier conferences on artificial intelligence, is underway and has announced this year's outstanding contributions to AI research. Here are the top papers.

The Conference on Neural Information Processing Systems (NeurIPS) has announced the award-winning papers for NeurIPS 2023. This year, the prestigious awards comprise the Test of Time Award and two Outstanding Paper Awards in each of three categories: Outstanding Main Track Papers, Outstanding Main Track Runner-Ups, and Outstanding Datasets and Benchmarks Track Papers.

Privacy in AI models and the mirage of emergent abilities

The two outstanding main track papers are "Privacy Auditing with One (1) Training Run" by Thomas Steinke, Milad Nasr and Matthew Jagielski and "Are Emergent Abilities of Large Language Models a Mirage?" by Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo.

"Privacy Auditing with One (1) Training Run" proposes a method to check the compliance of machine learning systems with privacy policies. The method requires only a single training run of the model, has minimal impact on the accuracy of the model, and is significantly more efficient than other methods.


Link to the paper.

"Are Emergent Abilities of Large Language Models a Mirage?" investigates so-called emergent abilities of large language models, i.e. abilities that are only expected to occur above a certain network size. In their work, the team shows that previously known emergent abilities are more a product of the metric used — with the right method, instead of a spontaneous jump in performance, there is a steady increase. The team therefore assumes that emergent abilities found to date are a mirage.

Link to the paper.

AI scaling with constrained data and an alternative to RLHF are runners-up

The two outstanding runners-up are "Scaling Data-Constrained Language Models" by Niklas Muennighoff and colleagues and "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" by Rafael Rafailov and colleagues.

"Scaling Data-Constrained Language Models " deals with the scaling of language models when the amount of data is limited. In an extensive series of experiments in which the team varied the frequency of data repetitions and the computational budget, they found that when training with repeated data for a fixed computational budget of up to four epochs, there is little change in the loss compared to one-time data. However, with more repetitions, the value of the additional computing power eventually drops to zero. From the results, the team derives "scaling laws" for models with a limited data regime.


Link to the paper.

"Direct Preference Optimization: Your Language Model is Secretly a Reward Model" presents a new approach for fine-tuning large language models to human preferences. Direct Preference Optimization (DPO) bypasses the traditional challenges of reinforcement learning with human feedback (RLHF) and enables more efficient and robust optimization. In tests, the team shows that DPO achieves similar or better results than RLHF for various tasks such as summaries or dialogs.

Link to the paper.

ClimSim and DecodingTrust win in the Datasets and Benchmarks categories

The paper "ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation" by Sungduk Yu and colleagues was awarded in the dataset category.


"ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation" presents ClimSim, the largest and most comprehensive dataset for hybrid ML physics-climate simulation research to date. The dataset includes multiscale climate simulations developed by a consortium of climate scientists and ML researchers.

Link to the paper.

In the Benchmarks category, the paper "DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models" by Boxin Wang and colleagues was awarded.

"DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models " provides a platform for assessing the trustworthiness of large-scale language models and provides a large-scale investigation of GPT-4 and GPT-3.5. The study identifies previously unpublished vulnerabilities and shows that GPT models can easily be tricked into generating toxic and biased outputs and leaking private information from training data and conversation histories.

According to the team, GPT-4 is more susceptible to such privacy breaches than GPT-3.5 because it follows the (misleading) instructions more closely.

Link to the paper.

Test of Time Award goes to word2vec paper

The Test of Time Award went to the NeurIPS 2013 paper "Distributed Representations of Words and Phrases and their Compositionality" by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean.

The paper, which has been cited over 40,000 times, introduced the groundbreaking word embedding technique word2vec, which represents words in a vector space so that words with similar meanings are close to each other. The technique pioneered the understanding of how machine learning can be used to extract meaning from text and marked the beginning of a new era in natural language processing.
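For readers who want to see the idea in practice, the widely used gensim library ships a Word2Vec implementation along with pretrained vectors; the classic demonstration is that vector arithmetic roughly captures analogies such as king - man + woman ≈ queen. The snippet below is a minimal sketch assuming the "word2vec-google-news-300" pretrained model available through gensim's downloader (a roughly 1.6 GB download); exact neighbours depend on the vectors used.

```python
import gensim.downloader as api

# Download pretrained word2vec vectors trained on Google News (~1.6 GB).
vectors = api.load("word2vec-google-news-300")

# Nearest neighbours in the embedding space share meaning.
print(vectors.most_similar("Paris", topn=3))

# The classic analogy: king - man + woman is closest to queen.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```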

Link to the paper.

Summary
  • The NeurIPS 2023 conference has announced the winners of this year's paper awards, including outstanding contributions in the areas of privacy in AI models and the emergent abilities of large language models.
  • "Privacy Auditing with One (1) Training Run" proposes an efficient method to verify the privacy guarantees of an algorithm, while "Are Emergent Abilities of Large Language Models a Mirage?" questions the existence of emergent abilities of large language models.
  • Other excellent papers cover scaling language models with limited data, direct preference optimization for fine-tuning large language models, the ClimSim climate simulation dataset, and assessing the trustworthiness of GPT models.