Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

Large language models cannot replace human participants because they cannot portray identity groups

Taylor Swift deepfakes: new technologies have long been weaponised against women. The solution involves us all

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

Red-Teaming for Generative AI: Silver Bullet or Security Theater?

Study: Smart devices’ ambient light sensors pose imaging privacy risk

AI companies are merging or collaborating to close the gap in access to vital datasets

The Cadaver in the Machine: The Social Practices of Measurement and Validation in Motion Capture Technology

Concrete Problems in AI Safety, Revisited

Navigating algorithmic bias amid rapid AI development in Southeast Asia