Security Researcher II
Microsoft
Irving, Texas, United States
7000 State Highway 161
Oct 31, 2025
Overview

Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Are you a red teamer looking to break into the AI field? Do you want to find AI failures in Microsoft's largest AI systems, impacting millions of users?

We are seeking a Security Researcher II to join Microsoft's AI Red Team, where you'll proactively hack high-impact GenAI technology pre-launch, informing mitigations with real examples of how you caused security, trust, and safety failures in Microsoft's biggest AI systems. You will be responsible for AI Security and Safety Research as a Red Teamer dedicated to improving AI security and helping our customers expand with our AI systems. Our team is an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Safety & Responsible AI experts, and software developers with the mission of proactively finding failures in Microsoft's big-bet AI systems. In this role, you will red team AI models and applications across Microsoft's AI portfolio, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot.
This work is sprint-based: working with AI Safety, Security, and Product Development teams, you will run operations that aim to find safety and security risks that inform key internal business decisions. This is a fast-moving team with multiple roles and responsibilities within the AI Security and Safety space; people who love to provide agile, practical insights and who enjoy jumping in to solve ambiguous problems excel in this role.

More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-s...

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

- Discover and exploit GenAI vulnerabilities end-to-end in order to assess the safety of systems
- Manage product group stakeholders as priority recipients and collaborators for operational sprints
- Drive clarity on communication and reporting for red teaming peers when working with product groups
- Develop methodologies, techniques, and research on emerging threats to scale and accelerate AI Red Teaming and AI Safety & Security across Microsoft
- Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations
- Embody our culture and values