Here’s how the team describes it:
“The software captures sensor data from the activity center and tries to select a predefined text related to that data. We are extending the system to make it easier to associate certain patterns of sensor readings with a set of strings.
For example: when Yorin plays with mommy’s picture for over 3 minutes, a Twitter message will be posted saying ‘@mommy_yorin Yorin misses mommy and looks forward to playing with her this evening’, or when Yorin hits the doorbell button four times in a row, a Twitter message will be posted saying ‘Yorin is showing off his music skills with a new tune’. We even hope to support dynamic composition of new strings in the future.”
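The pattern-to-string mapping the team describes could be sketched roughly as follows. This is a minimal, hypothetical illustration, not their actual code: the `SensorEvent` fields, rule structure, and thresholds are all assumptions based on the two examples in the quote.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class SensorEvent:
    source: str       # which toy/sensor produced the event, e.g. "doorbell_button"
    duration_s: int   # seconds of continuous interaction
    count: int        # number of consecutive activations

@dataclass
class Rule:
    matches: Callable[[SensorEvent], bool]  # pattern over sensor readings
    message: str                            # predefined string to post

# Rules reconstructed from the two examples in the quote.
RULES: List[Rule] = [
    Rule(lambda e: e.source == "mommy_picture" and e.duration_s > 180,
         "@mommy_yorin Yorin misses mommy and looks forward to playing "
         "with her this evening"),
    Rule(lambda e: e.source == "doorbell_button" and e.count == 4,
         "Yorin is showing off his music skills with a new tune"),
]

def select_message(event: SensorEvent) -> Optional[str]:
    """Return the first predefined string whose pattern matches the event."""
    for rule in RULES:
        if rule.matches(event):
            return rule.message
    return None
```

Posting the returned string to Twitter would then be a separate step; keeping the pattern rules as plain data like this is also what would make the planned dynamic composition of new strings easier to bolt on later.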