It’s getting easier and easier to add machine intelligence to your hacks, even to the point where you sometimes don’t have to install any special software. In this case [Dexter Industries] has given their EmpathyBot robot the ability to read human emotions by making use of Google Cloud Vision.
Press a button on the robot and it moves forward until it’s a certain distance from an object. It then takes a picture and sends it off to Google Cloud Vision along with a request to do face detection. The response Google returns is in JSON format and, if a face was found, includes the likelihood of the face showing joy, sorrow, anger, or surprise. The robot parses that response and gives an appropriate canned reply using the eSpeak text-to-speech software, e.g. “You seem happy! Tell me why you are so happy!”.
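For the curious, here’s a minimal sketch of that round trip in Python. This isn’t [Dexter]’s actual code (that’s on GitHub); it assumes the robot has already saved a photo to disk as photo.jpg, that you have a Cloud Vision API key to drop into the API_KEY placeholder, and that eSpeak and the Python requests library are installed. The canned lines are just illustrative.

```python
import base64
import subprocess

import requests

API_KEY = "YOUR_API_KEY"  # assumption: key-based access to Cloud Vision
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY


def detect_emotion(image_path):
    """Send one image to Cloud Vision and return its face annotation, if any."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("ascii")
    body = {
        "requests": [{
            "image": {"content": content},
            "features": [{"type": "FACE_DETECTION", "maxResults": 1}],
        }]
    }
    resp = requests.post(ENDPOINT, json=body).json()
    faces = resp["responses"][0].get("faceAnnotations", [])
    return faces[0] if faces else None


def speak(text):
    """Say a canned line out loud with eSpeak."""
    subprocess.call(["espeak", text])


face = detect_emotion("photo.jpg")  # hypothetical filename from the camera
if face is None:
    speak("I do not see a face.")
elif face["joyLikelihood"] in ("LIKELY", "VERY_LIKELY"):
    speak("You seem happy! Tell me why you are so happy!")
elif face["sorrowLikelihood"] in ("LIKELY", "VERY_LIKELY"):
    speak("You seem sad. I hope your day gets better.")
else:
    speak("I cannot quite read your expression.")
```

Note that Cloud Vision reports each emotion as a coarse likelihood bucket (VERY_UNLIKELY through VERY_LIKELY) rather than a numeric score, which is why the sketch just checks for LIKELY or better.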
[Dexter] has made the source code available on GitHub. It’s written in Python and easy to follow for anyone with even a little programming experience. The video after the break gives a number of demonstrations, including some with non-human subjects.
On their webpage, [Dexter Industries] offers some further analysis. For example, one subject’s facial hair gave Google trouble interpreting the emotion, but after some trimming, interpretation improved. It did have difficulty with a baby though, possibly due to chubby cheeks.
We’ve recently seen other examples of hacks using machine intelligence software. [Dexter] has used Google Cloud Vision before for sorting candy, and before that Google’s TensorFlow was used by a robot to recognize and speak the names of objects around a garage.
Filed under: robots hacks
from Hackaday http://ift.tt/2gktyJn via IFTTT