Jabberwocky


Jabberwocky works by bringing current voice and visual recognition systems into productive interaction with presentation programs. The system analyzes the slides in a presentation and makes educated assumptions about their attributes, such as the words and graphics on them. Using a probabilistic approach based on Bayes' Law allows us to design a system that operates at a high level while having only a shallow understanding of the material.
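The Bayesian idea can be sketched as follows. This is a minimal, illustrative model, not Jabberwocky's actual implementation: each slide is scored by how likely it is to have generated the words just heard, using a naive-Bayes likelihood with add-one smoothing and a uniform prior over slides. All names and the vocabulary size are assumptions for the sketch.

```python
from collections import Counter

def score_slides(heard_words, slides, vocab_size):
    """Score each slide by P(heard words | slide), up to a constant.

    With a uniform prior over slides, this is proportional to
    P(slide | heard words) by Bayes' Law.  `slides` maps a slide id
    to the list of words appearing on that slide.
    """
    scores = {}
    for slide_id, slide_words in slides.items():
        counts = Counter(slide_words)
        total = len(slide_words)
        likelihood = 1.0
        for word in heard_words:
            # P(word | slide) with add-one (Laplace) smoothing, so
            # unseen words do not zero out the whole score
            likelihood *= (counts[word] + 1) / (total + vocab_size)
        scores[slide_id] = likelihood
    return scores

def best_slide(heard_words, slides, vocab_size):
    """Return the slide id with the highest posterior score."""
    scores = score_slides(heard_words, slides, vocab_size)
    return max(scores, key=scores.get)
```

Even with such a shallow model, a few content words usually suffice to separate one slide from another, which is the core of the "high level of operation from shallow understanding" claim.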

Current voice recognition systems operate at about 75% accuracy. Because Jabberwocky uses shallow recognition, that imperfection is accommodated: the system listens only for key words that appear on a slide, or for indicator words such as "table" or "graph." Other words are either analyzed for semantic and grammatical similarity to these key words or discarded as unimportant. Because Jabberwocky listens for words that pertain to specific slides, it operates smoothly even when the planned flow of a presentation is altered: the slide referred to by an audience question is pulled up, and the most applicable slide is shown during the talk, regardless of any preset order.
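The filtering step described above can be sketched in a few lines. This is a hypothetical simplification: the indicator-word list and function names are invented for illustration, and the real system also considers semantic and grammatical similarity, which is omitted here.

```python
# Illustrative list of structural indicator words; the actual set used
# by Jabberwocky is not specified here.
INDICATOR_WORDS = {"table", "graph", "figure", "chart"}

def filter_recognized(words, slide_keywords):
    """Keep only recognized words that are slide key words or structural
    indicator words; everything else is discarded as unimportant.

    Because misrecognized filler words are simply dropped, imperfect
    speech recognition degrades the input gracefully rather than
    breaking it.
    """
    kept = []
    for word in words:
        word = word.lower()
        if word in slide_keywords or word in INDICATOR_WORDS:
            kept.append(word)
    return kept
```

The point of the sketch is that a 25% word error rate matters little when most words are ignored anyway: only the key words that survive the filter influence slide selection.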

Jabberwocky places no special requirements on the user when designing a presentation. The system performs its analysis on its own and automatically provides extra assistance. For example, it inserts a "button" that appears on the screen when the slide is shown. The user can bypass the system and change slides manually, either by "pushing" this button or by calling for the "next slide." If the user does bypass the system, the system takes note. The next time that presentation is given, the system recognizes where in the lecture the speaker was when the slides were changed, and it can act on this without any further input from the user.
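The learning-from-override behavior can be sketched as a simple association update. This is an assumed mechanism for illustration only: when the speaker manually changes slides, the words heard just before the change are added to the target slide's keyword model, so a later run of the same talk can make that transition automatically.

```python
def record_override(slide_keywords, target_slide, recent_words):
    """Associate the words heard just before a manual slide change with
    the slide the speaker switched to.

    `slide_keywords` maps a slide id to the set of words that trigger
    it; on the next run of the presentation, hearing these words lets
    the system change slides without user intervention.
    """
    slide_keywords.setdefault(target_slide, set()).update(
        word.lower() for word in recent_words
    )
    return slide_keywords
```

A buffer of the last few recognized words would feed `recent_words`; the size of that buffer and any weighting of the learned words are design choices not specified by the description above.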

Project Papers

  • Jabberwocky: You don't have to be a rocket scientist to change slides for a hydrogen combustion lecture
  • Beyond “Next slide, please”: The use of content and speech in multi-modal control