In the Transformationally Invariant KInematics (TIKi) lab we integrate knowledge and methods from psychology and informatics, with particular emphasis on perception and computer graphics. Specifically, we combine low-level psychophysics with modern computer graphics to enable an ecological psychology approach to modeling and synthesizing behaviorally relevant, spatiotemporal information.
Although visual scenes contain an enormous amount of information, humans and computers have only limited resources to extract, represent, manipulate, and use that information. Psychology and computer science have each approached the question of how a system, whether synthetic or organic, might accomplish these tasks, but a synergistic fusion of the methods and knowledge of the two fields can provide innovative and efficient solutions. For example, knowledge about motion processing, stereopsis, and compensation for internal noise can provide insights into how computer graphics and computer vision might deal with similar problems. Likewise, knowledge about the dimensions and features that are important for higher-level cognitive abilities can inform the design of algorithms for similar domains (e.g., expert systems, visualization). Computer science techniques for solving these problems, in turn, can serve as potential models for human mechanisms.