Recent advances in cognitive computing, machine learning, natural language processing, and facial gesture analysis can now be used to detect and analyze full-body expression and to identify signals of mental health conditions. The possibilities for these technologies are broad, encompassing not only mental health but other kinds of healthcare support applications.
SimSensei is a virtual human platform specifically designed for healthcare support, built on more than ten years of virtual human research and development at the University of Southern California Institute for Creative Technologies (ICT). Its core technology, “Multisense”, automatically tracks and analyzes in real time facial expressions, body posture, acoustic features, linguistic patterns, and higher-level behavior descriptors (e.g., attention and fidgeting). From these signals and behaviors, Multisense infers indicators of psychological distress, which directly inform the SimSensei virtual human as it responds to the audio-visual signals captured in real time.
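To make the idea of inferring distress indicators from multimodal behavior descriptors concrete, the sketch below shows one simple way such a fusion step could look. This is a hypothetical illustration, not the actual Multisense implementation: the signal names, weights, and the linear combination are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of multimodal distress scoring, loosely inspired by the
# description of Multisense above. Signal names, weights, and thresholds are
# illustrative assumptions, not the real system's design.
from dataclasses import dataclass


@dataclass
class FrameSignals:
    """Per-frame behavior descriptors from separate sensing modules."""
    gaze_aversion: float    # 0..1, fraction of the frame spent looking away
    smile_intensity: float  # 0..1, facial-expression tracker output
    fidgeting: float        # 0..1, body-posture motion energy
    speech_pause: float     # 0..1, normalized pause length (acoustic feature)


def distress_score(s: FrameSignals) -> float:
    """Fuse modality cues into a single indicator in [0, 1].

    A simple weighted linear fusion: cues often associated with distress
    (gaze aversion, fidgeting, long speech pauses) raise the score, while
    smiling lowers it. The result is clipped to [0, 1].
    """
    score = (0.35 * s.gaze_aversion
             + 0.25 * s.fidgeting
             + 0.25 * s.speech_pause
             - 0.15 * s.smile_intensity)
    return max(0.0, min(1.0, score))


# A virtual human could poll this score to adapt its dialogue, e.g. slowing
# down or asking a follow-up question when the indicator is high.
calm = FrameSignals(gaze_aversion=0.1, smile_intensity=0.8,
                    fidgeting=0.05, speech_pause=0.1)
tense = FrameSignals(gaze_aversion=0.9, smile_intensity=0.0,
                     fidgeting=0.7, speech_pause=0.8)
print(distress_score(calm) < distress_score(tense))  # prints True
```

In practice a system like this would replace the hand-set weights with a trained model and smooth the per-frame scores over time, but the structure (per-modality descriptors in, one distress indicator out) is the same.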