AI Security Engineer
Job Overview
We are looking for an AI Security Engineer to secure our AI-driven systems, including LLM-based applications, machine learning models, and AI-enabled automation tools.
This role will focus on identifying, assessing, and mitigating security risks across the AI lifecycle — from model development and training to deployment and runtime monitoring.
The ideal candidate combines strong security engineering experience with a deep understanding of machine learning systems and emerging AI-specific threats (e.g., prompt injection, model poisoning, data leakage, adversarial attacks).
Key Responsibilities:

AI/ML Security Architecture
- Design and implement security controls for AI/ML systems across development, training, and production.
- Secure LLM integrations, RAG pipelines, and AI APIs.
- Conduct threat modeling for AI systems and data pipelines.
- Define secure-by-design patterns for AI-powered features.

AI Threat Detection & Mitigation
- Identify and mitigate AI-specific threats, including prompt injection and jailbreak techniques, model poisoning and data contamination, adversarial attacks, training-data leakage, insecure model serialization, and excessive permissions in AI agents.
- Develop guardrails, content filters, and output validation mechanisms.
- Implement monitoring for anomalous AI behavior.

Secure Development & DevSecOps
- Integrate AI security checks into CI/CD pipelines.
- Perform security reviews of ML code and AI-related infrastructure.
- Secure model registries and artifact storage.
- Collaborate with other engineers and platform teams to enforce security standards.

Data Protection & Compliance
- Ensure AI systems comply with GDPR, data privacy regulations, and financial industry regulatory requirements.
- Implement controls for sensitive data used in training and inference.
- Perform AI risk assessments aligned with internal risk methodology.

Governance & Policy
- Contribute to AI security standards and internal policies.
- Define AI risk classification and control frameworks.
- Support security reviews for new AI initiatives and tools.
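To give candidates a concrete sense of the guardrail and output-validation work described above, here is a minimal sketch. All names (`INJECTION_PATTERNS`, `screen_user_input`, `validate_model_output`) are illustrative assumptions, not our production code; a real guardrail would layer classifiers and policy engines on top of simple pattern checks like these.

```python
import re

# Illustrative deny-list of phrases commonly seen in prompt-injection attempts.
# A production guardrail would combine this with classifier-based detection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

# Simple output check: flag digit runs that could be leaked card numbers.
PAN_PATTERN = re.compile(r"\b\d{13,16}\b")

def screen_user_input(text: str) -> bool:
    """Return True if the input passes the injection deny-list."""
    return not any(p.search(text) for p in INJECTION_PATTERNS)

def validate_model_output(text: str) -> str:
    """Redact digit runs that could be leaked sensitive numbers."""
    return PAN_PATTERN.sub("[REDACTED]", text)
```

For example, `screen_user_input("please ignore previous instructions")` returns `False`, while `validate_model_output` redacts a 16-digit number before it reaches the user.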
Required Qualifications:
- 3–5+ years in cybersecurity engineering or application security.
- Hands-on experience with ML/AI systems (LLMs, NLP models, or similar).
- Strong understanding of: OWASP Top 10, Secure SDLC, Cloud security (AWS/Azure/GCP).
- Experience with: Python, API security, Containerization (Docker, Kubernetes).
- Knowledge of AI-specific security risks and mitigations.
- Experience conducting threat modeling and risk assessments.
Preferred Qualifications:
- Experience securing LLM-based applications (OpenAI, Anthropic, Azure OpenAI, etc.).
- Familiarity with: RAG architectures, Vector databases, ML pipelines (MLflow, Kubeflow, SageMaker).
- Experience in fintech or regulated environments.
- Knowledge of AI governance frameworks (e.g., EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Experience with AI red teaming.
Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to translate technical risk into business impact.
- Able to explain AI security risks and mitigations to non-security teams.
- Cross-functional collaboration with ML, data, and product teams.
- Clear documentation and communication skills.
What you will get in return:
• Competitive Salary: We believe great work deserves great pay! Your skills and talents will be rewarded with a salary that makes you feel valued and motivated.
• Work-Life Harmony: Join a company that genuinely cares about you - because your life outside of work matters just as much as your time on the clock. #LI-Hybrid
• Generous Time Off: Need a breather? Our annual leave policy lets you recharge and enjoy life outside of work without a worry.
• Employee Referral Program: Love working here? Share the love! Bring your talented friends on board and get rewarded for growing our awesome team.
• Comprehensive Health & Pension Benefits: From medical insurance to pension plans, we’ve got your back. Plus, location-specific benefits and perks!
• Workation Wonderland: Live your digital nomad dreams with 30 extra days to work remotely from anywhere in the world (some restrictions apply). Adventure awaits!
• Volunteer Days: Make a difference! Take two additional paid days each year to support causes you care about and give back to the community.
Be a key player at the forefront of the digital assets movement, propelling your career to new heights! Join a dynamic and rapidly expanding company that values and rewards talent, initiative, and creativity. Work alongside one of the most brilliant teams in the industry.