The stakes of something going wrong with AI are extremely high. Only 29% of organizations feel fully equipped to detect and prevent unauthorized tampering with AI[1]. With AI, emerging risks target different stages of the AI lifecycle, while responsibility lies with different owners, including developers, end users and vendors.
As AI becomes ubiquitous, enterprises will use and develop hundreds if not thousands of AI applications. Developers need AI security and safety guardrails that work for every application. In parallel, deployers and end users are rushing to adopt AI to improve productivity, potentially exposing their organization to data leakage or the poisoning of proprietary data. This adds to the growing risks as organizations move beyond public data to train models on their proprietary data.
So, how can we ensure the security of AI systems? How do we protect AI from unauthorized access and misuse? Or prevent data from leaking? Ensuring the secure and ethical use of AI systems has become a critical priority. The European Union has taken significant steps in this direction with the introduction of the EU AI Act.
This blog explores how the AI Act addresses security for AI systems and models, the importance of AI literacy among employees, and Cisco's approach to safeguarding AI through a holistic AI Defense vision.
The EU AI Act: A Framework for Secure AI
The EU AI Act represents a landmark effort by the EU to create a structured approach to AI governance. One of its components is its emphasis on cybersecurity requirements for high-risk AI systems. This includes mandating strong security protocols to prevent unauthorized access and misuse, ensuring that AI systems operate safely and predictably.
The Act promotes human oversight, recognizing that while AI can drive efficiencies, human judgment remains indispensable in preventing and mitigating risks. It also acknowledges the vital role of all employees in ensuring security, requiring both providers and deployers to take measures to ensure a sufficient level of AI literacy among their staff.
Identifying and clarifying roles and responsibilities in securing AI systems is complex. The AI Act's primary focus is on developers of AI systems and certain general-purpose AI model providers, although it rightly acknowledges the shared responsibility between developers and deployers, underscoring the complex nature of the AI value chain.
Cisco's Vision for Securing AI
In response to the growing need for AI security, Cisco has envisioned a comprehensive approach to protecting the development, deployment and use of AI applications. This vision builds on five key aspects of AI security, from securing access to AI applications, to detecting risks such as data leakage and sophisticated adversarial threats, all the way to training employees.
"When embracing AI, organizations shouldn't have to choose between speed and safety. In a dynamic landscape where competition is fierce, effectively securing technology throughout its lifecycle and without tradeoffs is how Cisco reimagines security for the age of AI."
- Automated Vulnerability Assessment: By using AI-driven techniques, organizations can automatically and continuously assess AI models and applications for vulnerabilities. This helps identify hundreds of potential safety and security risks, empowering security teams to address them proactively.
- Runtime Security: Implementing protections during the operation of AI systems helps defend against evolving threats like denial of service and sensitive data leakage, and ensures these systems run safely.
- User Protections and Data Loss Prevention: Organizations need tools that prevent data loss and monitor unsafe behaviors. Companies need to ensure AI applications are used in compliance with internal policies and regulatory requirements.
- Managing Shadow AI: It's critical to monitor and control unauthorized AI applications, known as shadow AI. Identifying third-party apps used by employees helps companies enforce policies that restrict access to unauthorized tools, protecting confidential information and ensuring compliance.
- Citizen and Employee Training: Alongside the right technological solutions, AI literacy among employees is crucial for the safe and effective use of AI. Increasing AI literacy helps build a workforce capable of responsibly managing AI tools, understanding their limitations, and recognizing potential risks. This, in turn, helps organizations comply with regulatory requirements and fosters a culture of AI security and ethical awareness.
"The EU AI Act underscores the importance of equipping employees with more than just technical knowledge. It's about implementing a holistic approach to AI literacy that also covers security and ethical considerations. This helps ensure that users are better prepared to handle AI safely and to harness the potential of this revolutionary technology."
This vision is embedded in Cisco's new technology solution, "AI Defense". In the multifaceted quest to secure AI technologies, regulations like the EU AI Act, alongside training for citizens and employees, and innovations like Cisco's AI Defense all play an important role.
As AI continues to transform every industry, these efforts are essential to ensuring that AI is used safely, ethically, and responsibly, ultimately safeguarding both organizations and users in the digital age.
[1] Cisco's 2024 AI Readiness Index