DEF CON Generative AI Hacking Challenge Explored Cutting Edge of Security Vulnerabilities

The Generative Red Team Challenge, organized by AI Village, SeedAI and Humane Intelligence, gives a clearer picture than ever before of how generative AI can be misused and what methods might need to be put in place to secure it. OpenAI, Google, Meta and other companies put their large language models to the test on the weekend of August 12 at the DEF CON hacker conference in Las Vegas. Data from the human vs. machine challenge could provide a framework for government and enterprise policies around generative AI, and the result is a new corpus of information shared with the White House Office of Science and Technology Policy and the Congressional AI Caucus.

The challenge was the largest event of its kind and one that allowed many students to get in on the ground floor of cutting-edge hacking. The contest was scored by a panel of independent judges, and the three winners each received an NVIDIA RTX A6000 GPU. On August 29, the challenge organizers announced the winners: Cody "cod圓" Ho, a student at Stanford University; Alex Gray of Berkeley, California; and Kumar of Seattle, who goes by the username "energy-ultracode" and preferred not to publish a last name.