AI labs can generate images and video. But do they know what viewers actually see, understand, and need at each moment? I help research teams ask human questions about machine vision.
I’m a visual literacy, video, and image-analysis specialist with an MFA in Visual Arts, Film and Media Studies. I work at the level of visual structure and narrative, producing beat-level scene descriptions, visual narrative analysis, and multimodal evaluations for AI systems. My analytical writing articulates camera movement, blocking, spatial hierarchy, light, color theory, pacing, and visual tone, independent of dialogue. I bring over a decade of experience teaching visual literacy, visual arts, film, media, and design, and I support scene-structure analysis, visual reasoning, multimodal understanding, visual QA, and alignment tasks that require human judgment and perceptual precision.
AI lab and research inquiries: elcinpiajoyner@gmail.com
Perceptual Beat-by-Beat Analysis of Synthetic Video
Case study: Air Head (a Sora-generated film by Shy Kids, made with OpenAI’s model)
A diagnostic reading of how the system constructs narrative coherence moment by moment—and where material logic fails.
→ Read analysis here