OpenAI, a leading artificial intelligence (AI) research lab, today announced the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.
The program — run in partnership with the crowdsourced cybersecurity company Bugcrowd — invites independent researchers to report vulnerabilities in OpenAI’s systems in exchange for financial rewards ranging from $200 to $20,000, depending on severity. OpenAI said the program is part of its “commitment to developing safe and advanced AI.”
Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the adoption of ChatGPT, according to AI cybersecurity firm Darktrace.
While OpenAI’s announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.
The program’s scope is limited to vulnerabilities that could directly affect OpenAI’s systems and partners. It does not appear to address broader concerns over the malicious use of such technologies, such as impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.
A bug bounty program with limited scope
The bug bounty program comes amid a spate of security concerns: GPT-4 jailbreaks have emerged that enable users to develop instructions on how to hack computers, and researchers have discovered workarounds that let “non-technical” users create malware and phishing emails.
It also comes after a security researcher known as Rez0 allegedly used an exploit to hack ChatGPT’s API and discover more than 80 secret plugins.
Given these controversies, launching a bug bounty platform gives OpenAI an opportunity to address vulnerabilities in its product ecosystem while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.
Unfortunately, OpenAI’s bug bounty program is very limited in the scope of threats it addresses. For instance, the program’s official page notes: “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service.”
Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to “say bad things,” getting the model to write malicious code, or getting the model to tell you how to do bad things.
In this sense, OpenAI’s bug bounty program may help the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 pose for society at large.