Frontier Model (LLM) Safety Frameworks: A Gemini 3 Pro Case Study
- tags
- #Safety #Gemini #LLM-Red-Teaming
- published
- reading time
- 1 minute
Abstract
This talk examines the safety protocols for frontier models that ensure the safe, secure, and governed adoption of Gen AI, using the Gemini 3 Pro Frontier Safety Framework report as a primary case study. We will explore risk assessments across critical domains, including CBRN, cybersecurity, and harmful manipulation, and discuss how Critical Capability Levels (CCLs) and advanced red-teaming strategies are applied to manage risks in real-world agentic systems.
Speaker Bio
Mohamed Elkhawaga is the Founder of Robost, a venture that ensures the safe, secure, and governed adoption of Gen AI. Specializing in LLM red-teaming and compliance-by-design, he draws on his background as a Solutions Architect and AI Consultant for large-scale deployments. Mohamed holds an MSc in Computing from Queen's University, where his research focused on RAG-based architectures.