MindLens Lab

Research

Research toward better emotion-recognition support — for autistic learners and beyond.

MindLens Lab is a teen-led research project building toward future learning tools for people who find emotion difficult to read or exchange — including children and adults on the autism spectrum (ASD), people with selective mutism, social anxiety, and others for whom social-emotional cues feel illegible. Phase 1: collect plural human readings of the same moment as the foundational dataset, before any tool is built.

Thesis

Emotion reading is plural.

One face. One moment. Ten viewers — and six different readings. That's not an error in the data; it's the data. Real emotion recognition involves cultural context, lived experience, and ambiguity that a single “correct” label can't capture. We collect those plural human readings, and place them next to one human-reviewed AI interpretation, so we can study how — and how much — readings legitimately differ.

Origin

Where this started

In July 2025, I (Evelyn Kim, 11th grade, Singapore American School) published a paper in Curieux Academic Journal: “Artificial Intelligence in Emotional Intelligence Training for Autism.”

The paper surveyed how AI is being used in autism intervention tools, and identified a gap: most existing tools assume one “right” emotion label per situation, even though human raters disagree on those labels constantly. Without a dataset of plural human readings, training an AI to replicate one labeller's reading isn't teaching emotion — it's teaching one labeller's preferences.

MindLens Lab is the follow-up: a participatory research platform that collects plural readings as the foundational data, so that any tools we (or others) build later are grounded in how humans actually read emotion.

How we measure

Two axes per reading

Every clip in MindLens Lab is read along two axes: which emotion you see and which cues you use to read it. We don't score you against a “right answer” — instead we track how widely readers' answers spread, using Shannon entropy and Krippendorff's α. Higher entropy means a more plural reading of that clip.

9 emotions

  1. Happy / amused
  2. Sad / disappointed
  3. Angry / frustrated
  4. Anxious / nervous
  5. Embarrassed / awkward
  6. Surprised
  7. Confused / uncertain
  8. Neutral / hard to tell
  9. Mixed / more than one

9 cues

  1. Facial expression
  2. Tone of voice
  3. What they said
  4. Body language / posture
  5. The situation
  6. Timing / pauses
  7. How others reacted
  8. Something else
  9. I wasn't sure

Entropy ranges from 0 (all readers chose the same emotion) to log₂ 9 ≈ 3.17 (readings spread uniformly across all 9 emotions). For full operational definitions (modal share, divergence rate, reader-style ICC, outlier flag, and how each is reported), see the pre-registration document.
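For concreteness, here is a minimal Python sketch of the per-clip spread metrics. The function names and the example readings are illustrative only, not the lab's actual analysis code:

```python
import math
from collections import Counter

def shannon_entropy(labels: list[str]) -> float:
    """Entropy (in bits) of the emotion labels chosen for one clip.
    0 = every reader chose the same emotion; log2(9) ≈ 3.17 = readings
    spread uniformly across all 9 emotions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def modal_share(labels: list[str]) -> float:
    """Fraction of readers who chose the single most common emotion."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

# Hypothetical clip with 10 readings
readings = (["Embarrassed / awkward"] * 4 + ["Anxious / nervous"] * 3
            + ["Neutral / hard to tell"] * 2 + ["Surprised"])
print(f"entropy     = {shannon_entropy(readings):.2f} bits")  # ≈ 1.85
print(f"modal share = {modal_share(readings):.2f}")           # 0.40
```

Krippendorff's α and the reader-style ICC operate on the full reader-by-clip matrix rather than a single clip's labels, so their computation is left to the pre-registration document.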

Roadmap

A staged approach

We can't skip ahead. A wearable that helps an autistic child interpret a peer's expression in real time would be extraordinary — but with no dataset of how people actually read those expressions, we'd be guessing. So we start at the beginning.

Phase 1 (in progress)

Now — 2026

Plural human reading dataset

Collect short responses from teens and young adults across countries. For each curated 15–30 second clip: which emotion they read, which cues they used, how confident they are, and optionally a short note. Alongside each clip, add one AI interpretation reviewed by a human curator.
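One way a Phase 1 response record could be structured, as a sketch only (the field names, types, and the 1–5 confidence scale are assumptions, not the project's actual schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Reading:
    """One reader's response to one clip."""
    clip_id: str
    emotion: str                 # one of the 9 emotion labels
    cues: list[str]              # any of the 9 cue labels
    confidence: int              # assumed scale, e.g. 1 (guessing) to 5 (certain)
    note: Optional[str] = None   # optional short free-text note

@dataclass
class Clip:
    """One curated 15–30 second clip and everything collected for it."""
    clip_id: str
    readings: list[Reading] = field(default_factory=list)
    ai_interpretation: Optional[str] = None  # single AI reading, human-reviewed
```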

Phase 2

2026 — 2027

Validated emotion-recognition study materials

Use the plural-reading data to identify clips and scenarios where emotion reading varies most — and least. Develop classroom exercises and self-paced learning modules that explicitly teach the plurality of emotion. Co-design with educators serving neurodiverse learners.
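As a sketch of how the plural-reading data could drive that selection, ranking clips by entropy is one straightforward option (nothing here is the planned procedure):

```python
import math
from collections import Counter

def entropy_bits(labels: list[str]) -> float:
    """Shannon entropy of one clip's emotion labels, as in the earlier sketch."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def rank_clips_by_spread(labels_by_clip: dict[str, list[str]]) -> list[tuple[str, float]]:
    """Clip IDs ordered from most to least plural emotion reading."""
    scored = [(cid, entropy_bits(labels))
              for cid, labels in labels_by_clip.items() if labels]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Top of the ranking: ambiguous moments, candidates for "plurality of emotion"
# exercises. Bottom: clips where readings broadly converge, useful as baselines.
```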

Phase 3

2027+ — long-term aspiration

Adaptive tools for emotion-reading support

With a validated dataset and pedagogy, explore tools that adapt to an individual learner: interactive practice, possibly wearable cueing for real-time social settings, and integration with established support practices. Always grounded in the data, never assuming a single right answer.

Why this order

Data first. Tools later. Always.

It's tempting to start at Phase 3 — to build the wearable, the app, the assistive tool that could change a learner's life. But every assistive tool that misreads emotion in deployment is worse than no tool at all: it teaches the user a wrong model and undermines their trust in their own perception. So before we build, we measure. Before we measure, we collect. And we're honest about how long that takes.

Pre-registration

Read the full Phase 1 hypotheses (H1–H8) →

The plain-English version of what the dataset is designed to test, with directional predictions and effect-size thresholds locked in before analysis. For advisors, collaborators, or anyone who wants the methodology in detail.

Who this is for

Different reasons to participate

Teens and young adults

You’re the data. Each clip you read tells us something we can’t learn from any single labeller. 10–15 minutes of your time contributes to research that may inform learning tools used by people in different situations than yours.

Parents and guardians

If your child is under 18, your consent is required. Read the consent form carefully — it spells out what we collect (very little), what we don’t (no biometrics, no identity, no emotional self-disclosure), and how the data is protected.

Researchers and educators

The platform supports anonymized data exports for analysis. Findings will be published as peer-reviewed work. Reach out at contact@mindlenslab.org if you’re interested in collaborating or replicating the methodology.

Team & support

Who runs this

  • Founder & researcher: Evelyn Kim, 11th grade, Singapore American School. Author of the originating Curieux paper (2025).
  • Interim adult advisor: Sean Kim. Provides infrastructure and scope review during the high-school timeline.
  • Future: open to collaborators, faculty advisors, and educators interested in extending the methodology to additional populations and languages.

Contact: contact@mindlenslab.org

Ready to add your reading? It takes 10–15 minutes. No personal information beyond what's in the consent form is ever collected.