Updated Abstract

Joe, I suggest using this new abstract and title, as they are more representative of the final presentation.

Modelling Human Hand-eye Coordination Using Machine Learning


Titus Mickley

Perform Laboratory

Advisors: Kamran Binaee, Rakshit Kothari, Gabriel J. Diaz


We use our visual system to collect information about an environment and then act within it, for example by catching an object. Machines do not yet match human ability to analyze an environment and perform efficiently within it. In this study, head, hand, and eye position and orientation were recorded in a controlled virtual environment as participants repeatedly attempted to catch a virtual ball with a real paddle. A simulation of the virtual environment was created to visualize the recorded data. A basic Reinforcement Learning algorithm was then implemented, allowing an agent to learn autonomously in a 3D environment. The results of this study could assist in integrating more advanced Artificial Intelligence (AI) algorithms, inspired by human performance, into modern robotics software capable of better performing actions in a 3D environment.
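To make the "basic Reinforcement Learning algorithm" concrete, the following is a minimal sketch of tabular Q-learning, the simplest form of the kind of trial-and-error learning described above. It is not the study's actual implementation: the 1-D "reach the target" task stands in for the 3D ball-catching environment, and all names and parameter values (learning rate, discount, exploration rate) are illustrative assumptions.

```python
import random

# Illustrative assumption: a 1-D world where the agent must step
# right to reach a "catch" position, standing in for ball catching.
N_STATES = 10          # discretized positions along a line
GOAL = N_STATES - 1    # the "catch" position
ACTIONS = [-1, 1]      # move left or move right

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed hyperparameters

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # Q[state][action index]: the agent's learned value estimates
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection (random tie-break)
            if rng.random() < EPSILON or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0  # reward only for the "catch"
            # Q-learning update toward the bootstrapped target
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # after training, the greedy policy should prefer stepping right
    policy = ["right" if row[1] > row[0] else "left" for row in q[:-1]]
    print(policy)
```

The agent starts with no knowledge, stumbles around until it happens to reach the goal, and then the reward propagates backward through the value table until every state prefers the action leading toward the catch, which is the same autonomous-learning loop the abstract describes at a high level.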
