Today, Sen. Mark Warner (D-VA), chairman of the Senate Intelligence Committee, sent a series of open letters to the CEOs of AI companies, including OpenAI, Google, Meta, Microsoft and Anthropic, calling on them to put security at the "forefront" of AI development.
"I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way," Warner wrote in each letter.
More broadly, the open letters articulate legislators' growing concerns over the security risks introduced by generative AI.
Security in focus
This comes just weeks after NSA cybersecurity director Rob Joyce warned that ChatGPT will make hackers who use AI "much more effective," and just over a month after the U.S. Chamber of Commerce called for regulation of AI technology to mitigate the "national security implications" of these solutions.
The top AI-specific issues Warner cited in the letter were integrity of the data supply chain (ensuring the origin, quality and accuracy of input data), tampering with training data (aka data-poisoning attacks), and adversarial examples (where users submit inputs to models that intentionally cause them to make mistakes).
Warner also called on AI companies to increase transparency around the security controls implemented within their environments, requesting a description of how each organization approaches security, how its systems are monitored and audited, and what security standards it adheres to, such as NIST's AI Risk Management Framework.