Joe, I suggest using this new abstract and title, as they are more representative of the final presentation.

Modelling Human Hand-Eye Coordination Using Machine Learning

Titus Mickley, Perform Laboratory
Advisors: Kamran Binaee, Rakshit Kothari, Gabriel J. Diaz

We use our visual system to collect information about an environment and to perform actions, such as catching an object. Machines currently lack a human's ability to analyze an environment and act within it efficiently. We recorded participants' head, hand, and eye position and orientation in a controlled virtual environment as the subjects repeatedly attempted to catch a virtual ball with a real paddle. A simulation of the virtual environment was created to convey the subjects' data visually. A basic reinforcement learning algorithm was then created, allowing an agent to learn autonomously in a 3D ecosystem...
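The abstract does not specify which reinforcement learning algorithm was used, so the following is only an illustrative sketch of the general idea: tabular Q-learning on a toy one-dimensional "reach the target" task. The environment, reward, and hyperparameters here are assumptions for demonstration, not the actual ball-catching task from the study.

```python
import random

# Toy setup (illustrative assumptions, not the study's environment):
# positions 0..4 on a line; the goal is to reach state 4.
N_STATES = 5
ACTIONS = [-1, 1]            # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table: expected discounted return for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent, clamped to the line; reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(state):
    """Pick a best action, breaking ties randomly."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best next value
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) from every non-goal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

The agent starts with no knowledge of the task and, purely from trial-and-error reward signals, converges on the policy that reaches the target; the same learn-by-interaction loop underlies agents trained in richer 3D environments.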