Media Arts + Computer Vision Researcher
MFA Media Arts at ASU | Motion Capture, Deep Learning, IoT Hardware | Artist & Technologist exploring embodied AI, temporal perception, and computational media. Preparing applications to research-intensive doctoral programs.
Research interests & technical background
I'm an interdisciplinary researcher bridging computational media, computer vision, and artistic practice. My work explores how technology can expand human perception and embodiment through motion capture systems, deep learning, and sensor fusion.
Currently pursuing an MFA in Interdisciplinary Media Arts at Arizona State University, with deep technical training in computer vision, embedded systems, and GPU-accelerated machine learning. I'm preparing for PhD programs where I can deepen research in temporal perception models, embodied AI, and XR technologies.
My practice spans open-source hardware development (Mesquite MoCap), large-scale installations merging IoT and LLMs (To Wilt), and computational work on GPU clusters. I believe the future of media arts lies in rigorous technical depth combined with artistic vision.
Ranked 19th among ~54,000 candidates in National M.A. Entrance Exam (Iran, 2019) — full tuition waiver
Anderson Ranch Arts Center Residency (2024) — selective artist residency in Colorado
Instructor of Record for Photography & Digital Media; TA at Northlight Gallery (ASU)
Accepted speaker for IndiaFOSS 2025 — presenting open-source motion capture research
Technical depth meets artistic vision
Open-source real-time motion capture system using 15 wireless ESP32 IMU nodes for full-body capture at 32 FPS with 15ms latency.
IEEE Paper Under Review
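To give a flavor of the host side of a system like this: below is a minimal sketch of receiving and decoding wireless IMU packets over UDP. The packet layout (one node-id byte plus a quaternion as four little-endian float32 values) and the port number are illustrative assumptions, not the actual Mesquite MoCap wire format.

```python
import socket
import struct

# Hypothetical packet layout (NOT the actual Mesquite MoCap format):
# 1 byte node id, then 4 little-endian float32 quaternion components (w, x, y, z).
PACKET_FMT = "<B4f"
PACKET_SIZE = struct.calcsize(PACKET_FMT)  # 17 bytes

def decode_packet(data: bytes) -> tuple[int, tuple[float, float, float, float]]:
    """Decode one IMU packet into (node_id, quaternion)."""
    node_id, w, x, y, z = struct.unpack(PACKET_FMT, data[:PACKET_SIZE])
    return node_id, (w, x, y, z)

def receive_loop(port: int = 9000) -> None:
    """Listen on one UDP port for packets from all 15 ESP32 nodes."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(64)
        node_id, quat = decode_packet(data)
        print(node_id, quat)
```

A single shared UDP port keeps per-packet overhead low enough for 15 nodes at 32 FPS; the node id in each packet lets the host demultiplex the streams.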
Multi-view 3D capture system using synchronized Android phones for scalable, accessible human motion and XR asset creation.
In Development
MFA thesis: four-space installation exploring emotion & perception through LLM dialogue, sensor fusion, and temporal transformation.
Exhibiting Spring 2026
Image-to-avatar pipeline combining PIFuHD 3D digitization with Blender rigging for rapid XR character creation from single photos.
Active Development
Solar-powered IoT environmental sensor station with multi-sensor fusion, MongoDB logging, and real-time telemetry for climate monitoring.
Deployed
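As a rough sketch of the logging path in a station like this: the snippet below bundles one fused multi-sensor reading into a MongoDB-style document. The field names and station id are illustrative assumptions, and the actual pymongo insertion is shown only as a comment since the database connection is deployment-specific.

```python
import json
import time

def make_telemetry_doc(readings: dict[str, float]) -> dict:
    """Bundle one fused multi-sensor sample into a MongoDB-style document.

    Field names here are illustrative, not the station's actual schema.
    """
    return {
        "ts": time.time(),           # epoch seconds at sample time
        "station": "solar-node-1",   # hypothetical station id
        "readings": readings,        # fused sensor values, keyed by name
    }

doc = make_telemetry_doc({"temp_c": 31.4, "humidity_pct": 18.0})
print(json.dumps(doc))

# In a deployed station, the document would then be inserted with pymongo, e.g.:
#   from pymongo import MongoClient
#   MongoClient(uri).climate.telemetry.insert_one(doc)
```

Timestamping at sample time (rather than at insert time) keeps the telemetry accurate even when a solar-powered node buffers readings through a network outage.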
GPU-accelerated projects: Vision Transformers, GANs, diffusion models, custom architectures on NVIDIA A100 clusters via HPC.
EEE 598 Completed
Formal training & professional exhibitions
Opportunities for collaboration & discussion
I'm interested in discussing research opportunities, collaborative projects, and PhD program possibilities. Whether you're a researcher, admissions officer, curator, or fellow technologist — I'd love to connect.