New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
The MIT SuperCloud and Lincoln Laboratory Supercomputing Center provided the high-performance computing (HPC) resources used to produce the researchers’ results.
Read this story at MIT News
Story image: The “Faces in Things” dataset is a comprehensive, human-labeled collection of over 5,000 pareidolic images. The research team trained face-detection algorithms to see faces in these pictures, giving insight into how humans learned to recognize faces within their surroundings.
Credits: Image: Alex Shipps/MIT CSAIL
Related Publication:
Mark Hamilton, Simon Stent, Vasha DuTell, Anne Harrington, Jennifer Corbett, Ruth Rosenholtz, and William T. Freeman (2024), Seeing Faces in Things: A Model and Dataset for Pareidolia, arXiv: 2409.16143 [cs.CV]