Lasso Research

Explore novel threats, develop layered defenses, and share discoveries that help the AI community build safer systems.

View Publications
Core Values

Lasso Research Core Values

AI Pioneering


Intent Alignment

Understanding how to preserve authentic intent in the face of adversarial pressure is core to making AI trustworthy in high-stakes environments. This principle drives our investigation into detecting manipulation, distinguishing legitimate requests from malicious ones, and maintaining the integrity of human-AI communication.

Red Teaming

Validating security claims through systematic, reproducible evidence is essential to building trustworthy defenses. This principle grounds our work in measurable outcomes through comprehensive test suites, clear metrics, and transparent reporting of what works and what doesn't.

Publications

Join the Research team

Help uncover emerging AI threats and build the defenses that keep AI systems secure.
See Open Roles