About MindLens Lab

Why this exists

Most emotion-recognition tools are trained toward a single "correct" label per example. Real human emotion reading is plural: different viewers legitimately read different emotions from the same moment, for different reasons. This mismatch is particularly acute for autistic learners, whose alternative readings are too often treated as errors rather than as valid variance. MindLens Lab addresses that gap by collecting how people actually read emotion in naturalistic clips.

What we ask of you

You'll watch short clips and tell us what emotion you read, what cues you used, and how confident you are. After you commit your answer, you'll see how others read the same moment, and one AI interpretation. You can't change your answer once submitted — that's part of the research design.

What this is not

This is not a diagnosis tool. This is not a therapeutic tool. This is not an autism intervention product. Phase 1 is a participatory research experience, full stop.

AI's role

Our AI offers ONE interpretation of each clip, reviewed by a human curator before you see it. It is presented alongside your reading and the readings of other participants — never as the correct answer. One measurable research output is documenting where the AI's interpretation diverges from the plurality of human readings.

Who's behind it

Founder & researcher: Evelyn Kim

Interim advisor: Sean Kim

Built as a follow-up to Evelyn Kim, "Artificial Intelligence in Emotional Intelligence Training for Autism," Curieux Academic Journal, July 2025.