Robotics researchers at Carnegie Mellon University have devised a method of tracking poses of multiple people in real time, which allows devices to monitor and interpret human body language.

The new technology, called OpenPose, has been released as open source on GitHub. It fills a gap in human-machine communication that text and speech interfaces leave open: the non-verbal channel.
Researchers say it will allow robots to better perceive what people around them are doing, their moods, and whether they can be interrupted.
OpenPose enables computers to understand the body poses of many people at once; interpreting a single person's pose isn't enough to read non-verbal communication within a group, especially a large one.
It can also detect hand movements and the arrangement of a person's individual fingers, a feature the researchers say will enable developers to devise new ways to interact with computers through gestures.
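For developers, the project ships Python bindings alongside the core C++ library. As a rough illustration only, the sketch below follows the pattern of the example scripts bundled with the OpenPose repository; the exact binding names and call signatures (pyopenpose, Datum, VectorDatum, poseKeypoints, handKeypoints) vary between releases, and the image path is a placeholder.

```python
# Illustrative sketch based on OpenPose's bundled Python examples.
# Binding names and signatures may differ between OpenPose versions.
import cv2
import pyopenpose as op

# Point OpenPose at its downloaded models and enable hand keypoint detection.
params = {"model_folder": "models/", "hand": True}

wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("group_photo.jpg")  # any multi-person image
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints: one row per detected person, 25 body keypoints,
# each keypoint stored as (x, y, confidence).
print("people detected:", datum.poseKeypoints.shape[0])

# handKeypoints: [left_hands, right_hands], each of shape (people, 21, 3),
# which is the data a gesture-based interface would build on.
left_hands, right_hands = datum.handKeypoints
```

The per-keypoint confidence scores let an application discard uncertain joints, which matters when people overlap or occlude one another in a crowded scene.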
Sample applications for OpenPose include self-driving cars, where it could monitor pedestrians and detect whether someone is about to step into the street so the vehicle can avoid them. It could also support the behavioural diagnosis and rehabilitation of conditions such as autism, dyslexia, and depression.
To develop OpenPose, the researchers used CMU's decade-old Panoptic Studio, a dome fitted with 500 video cameras and Microsoft Kinect motion sensors.
For imaging hand movements, the researchers used only 31 high-definition cameras, yet were still able to build a large data set from the captured imagery.
OpenPose currently interprets human shapes in two dimensions only, but the CMU researchers hope to move to three dimensions soon.