
Anthropic is expanding its bug bounty program to test its "next-generation system for AI safety mitigations." The program focuses on identifying and defending against "universal jailbreak attacks." Anthropic is prioritizing critical vulnerabilities in high-risk areas like chemical, biological, radiological and nuclear (CBRN) defense and cybersafety. Participants get early access to Anthropic's latest safety systems before public release. Their task is to find vulnerabilities or ways to bypass safety measures. Anthropic is offering rewards up to $15,000 for discovering new universal jailbreak attacks.


Social media users are sharing AI-generated images of TEDx speakers that look remarkably lifelike. The speaker photos are so convincing that many people initially thought they depicted real individuals. The AI model behind these images is Flux, developed by former members of the Stable Diffusion team, refined with a LoRA (a lightweight fine-tuning add-on) to enhance its photorealism. The viral images were generated with the Flux dev model and the LoRA in the ComfyUI tool, without any additional editing. To create similar images, users need the LoRA file and the ComfyUI workflow; ComfyUI can be installed directly or via apps like Pinokio.
