Lead AI Security Engineer
Job Overview
Skills & Requirements
Python
Essential skill for this role
Teams
Essential skill for this role
GDPR
Essential skill for this role
Qualifications & Education
Qualifications
You have knowledge and experience with technologies including Kubernetes, Containers, CI/CD, and Cloud Service Providers You are familiar with function and purpose of key AI platform components such as AI gateways (Kong, Databricks Mosaic AI Gateway, custom API orchestration), Model Orchestration (Examples LangChain, LlamaIndex, etc.) You are familiar with key AI regulatory frameworks such as NIST AI RMF, MITRE ATLAS, GDPR, EU AI Act, etc You have information Security certifications (CISSP, SANS GIAC, CISA, etc.) Southern California Base Salary Range: $173,211-$277,138 San Antonio Base Salary Range: $142,394-$227,830 New York Base Salary Range: $183,613-$293,781
About the Role
Lead AI Security Engineer
Curated brief to help you tailor your application.
Role Overview
Start by showing recruiters you understand the team's mission and environment.
“I can succeed as a Lead AI Security Engineer at Capital Group”
As a Lead AI Security Engineer, you will be responsible for securing Capital Group’s enterprise AI Platforms. You will help enable Capital Group’s AI strategy by building and/or procuring solutions to protect a diverse set of enterprise AI platforms being built and deployed at Capital Group. You’ll collaborate with platform engineering, security engineering, and risk teams to ensure their solutions support scalable, secure adoption of AI.
Additionally, you’ll be expected to provide mentoring, advising diverse teams across the organization, and promoting AI Security principles across Capital Group.
AI Security Procurement Managements: You will procure and/or build technical solutions to reduce the risk of misconfiguration, exploitation, and other security issues for multiple enterprise AI platforms.
Embedding Security in the AI Platform Ecosystem: Working closely with platform teams to integrate security into every component of the AI Platform.
Implementing Security Controls & “Guardrails” for GenAI: Designing, deploying, and operating technical controls to prevent misuse of AI systems. Guardrails design includes content filtering systems, usage policies, and safety checks that mitigate issues like prompt injection attacks, unauthorized data extraction, model bias or hallucinations, and other misuse of generative AI platforms.
AI Runtime Security: Engineer continually tests and updates to the guardrails, replacing weaker controls with more robust solutions as threats evolve.
AI Governance: You will work cross functionally with architecture and platform teams to monitor alignment of solutions to AI Governance processes
Contribute to Standards and Policies: You will provide thought leadership for Information Security policies and standards for AI in collaboration with technology risk
AI/Agent SME: You will provide AI/Agent subject matter expertise for AI Incidents and Security Reviews, and help develop incident response playbooks for AI-related security incidents
Skills Snapshot
Double down on these tools and frameworks in your application.
Get Expert Application Review
Boost your chances of landing this role with a professional application review from our expert consultants.
Available exclusively to Skill Farm members