Morgan Stanley is seeking multiple AI Security Engineers & AI DevOps candidates across various experience levels (Associate, Director, and Vice President) to join their Testing Execution team as AI Security Engineers in Security Testing and Risk. The role involves performing adversarial testing to evaluate the efficacy of model-level security controls and controls between the end-user and the LLM. Responsibilities include communicating regularly with product leads, operating a security testing platform for adversarial testing against AI/LLM systems, detecting and validating risks such as evasion, prompt injection, and data leakage, collaborating with AI/ML engineers and Gen AI application developers to integrate domain-specific security knowledge, and deploying and managing cloud resources and infrastructure components. Candidates should have 4+ years of experience in cybersecurity technologies or operations, strong programming and scripting skills (Python, Java, C, C++, Bash), basic Linux and Windows command-line knowledge, knowledge of cloud security best practices and compliance standards, familiarity with LLMs and techniques like prompt engineering, fine-tuning, and retrieval-augmented generation (RAG), understanding of threat actor tactics and MITRE ATLAS Framework and OWASP Top 10 for LLM Applications, background in AI threat modeling, attack mitigation or misuse detection, proficiency in cloud platforms such as AWS, Azure, and Google Cloud, and understanding of containerization and orchestration using Docker and Kubernetes. The company emphasizes a commitment to diversity, inclusion, and equal employment opportunity.
Learn more about this Employer on their Career Site