Over Two-Thirds Say an External Review of AI Implementations Is the Best Way to Identify AI Safety and Security Issues
HackerOne, a leader in human-powered security, released data showing that 48% of security professionals believe AI is the most significant security risk to their organization. Ahead of the launch of its annual Hacker-Powered Security Report, HackerOne shared early findings, which include data from a survey of 500 security professionals. When it comes to AI, respondents were most concerned about the leaking of training data (35%), unauthorized use of AI within their organizations (33%), and the hacking of AI models by outsiders (32%).
When asked about handling the challenges that AI safety and security issues present, 68% said that an external and unbiased review of AI implementations is the most effective way to identify them. AI red teaming provides this type of external review through the global security researcher community, which helps safeguard AI models against risks, biases, malicious exploits, and harmful outputs.
“While we’re still reaching industry consensus around AI security and safety best practices, there are some clear tactics where organizations have found success,” said Michiel Prins, co-founder at HackerOne. “Anthropic, Adobe, Snap, and other leading organizations all trust the global security researcher community to provide expert third-party perspective on their AI deployments.”
Further research from a HackerOne-sponsored SANS Institute report explored the impact of AI on cybersecurity and found that over half (58%) of respondents predict AI could contribute to an “arms race” between the tactics and techniques used by security teams and cybercriminals. The research also found optimism around the use of AI for security team productivity, with 71% reporting satisfaction after implementing AI to automate tedious tasks. However, respondents believed AI productivity gains have also benefited adversaries, and they were most concerned about AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).
“Security teams must find the best applications for AI to keep up with adversaries while also considering its current limitations, or they risk creating more work for themselves,” said Matt Bromiley, Analyst at The SANS Institute. “Our research suggests AI should be viewed as an enabler rather than a threat to jobs. Automating routine tasks empowers security teams to focus on more strategic activities.”