Workshop on Intelligibility and Control in Pervasive Computing

There will be a Pervasive 2011 workshop on Intelligibility and Control in Pervasive Computing, co-organized by Jo Vermeulen, Brian Lim, and Fahim Kawsar, to be held on June 12. The Call for Papers is out, and more information on the workshop can be found at the workshop website.

Posted in Announcements

Intelligibility Hello World Tutorial

This tutorial describes how to make a context-aware application intelligible, such that it can explain what it did, why, and how it works. We will be using the components of the Intelligibility Toolkit, extending the HelloRoom example so that it provides explanations.

Posted in Tutorial

Intelligibility Primer

In this tutorial, we introduce the intelligibility components of the Context Toolkit. Context-aware applications should be intelligible, so that they can better explain to users how they work. The Intelligibility Toolkit supports the automatic generation of explanations, and provides components to help query for, simplify, and present them.

The Intelligibility Toolkit satisfies the following requirements:

  1. Lower the barrier to providing explanations
  2. Offer flexibility in using explanations
  3. Facilitate appropriate explanations automatically
  4. Support combining explanations
  5. Be extensible across:
    • Explanation types
    • Application (decision) models
    • Provision styles

Intelligibility Components and Architecture

The Intelligibility Toolkit is built on top of the Enactor framework and can currently generate 8 types of explanations (Inputs, Output, Certainty, What, Why, Why Not, How To, What If). For more information about the design principles, see Lim, B. Y. and Dey, A. K. 2010. Toolkit to support intelligibility in context-aware applications. Ubicomp 2010. The Intelligibility Toolkit consists of four main components: Query, Explainer, Reducer, and Presenter.

Figure: Intelligibility Toolkit architecture
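
As a rough illustration of how these four components fit together, the sketch below walks a single Why question through the Query → Explainer → Reducer → Presenter pipeline. All class and method names in it are assumptions made for this sketch, not the toolkit's actual API; the tutorials cover the real classes.

    // Conceptual sketch only: names below are assumptions illustrating the
    // pipeline, not the Intelligibility Toolkit's actual API.
    void explainWhy(Enactor roomEnactor) {
        Query whyQuery = new Query(QueryType.WHY, roomEnactor); // hypothetical Query for a "Why" explanation
        Explanation full = explainer.explain(whyQuery);         // hypothetical Explainer call
        Explanation reduced = reducer.reduce(full);             // hypothetical Reducer call to simplify it
        presenter.render(reduced);                              // hypothetical Presenter call to show it
    }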

Posted in Tutorial

Machine Learning: Decision Tree Enactor

In this tutorial we cover how to build a context-aware application that uses a trained decision tree to make inferences. In particular, we will be building an Instant Messaging (IM) application that predicts when a buddy is likely to respond to you. The decision tree is trained on real data collected by [Avrahami et al. 2006], which we have adapted for our application.

The Context Toolkit uses the WEKA machine learning toolkit to handle classifiers. This is not a tutorial about machine learning or WEKA; you may want to read the tutorials provided on the WEKA website.
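
For readers new to WEKA, here is a minimal standalone sketch of training a J48 decision tree and classifying an instance using WEKA's own API. The ARFF file name is a placeholder, and this is independent of how the Context Toolkit itself wraps classifiers.

    import weka.classifiers.Classifier;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ResponsePrediction {
        public static void main(String[] args) throws Exception {
            // Load a dataset in ARFF format (file name is a placeholder).
            Instances data = DataSource.read("im-responsiveness.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // Train a J48 decision tree (WEKA's C4.5 implementation).
            Classifier tree = new J48();
            tree.buildClassifier(data);

            // Classify one instance and map the numeric result back to its label.
            double prediction = tree.classifyInstance(data.instance(0));
            System.out.println("Predicted: " + data.classAttribute().value((int) prediction));
        }
    }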

Posted in Tutorial

Context Toolkit Components and Architecture

This article describes the important components (classes) in the Context Toolkit, and gives an architectural overview of how they interact with one another.

Not included: advanced classes, machine learning extensions, intelligibility

Posted in Uncategorized

Context Toolkit Primer – Part 2b: Enactors with XML

Enactors with XML

In this tutorial we explain how to create Enactors using XML. In particular, we will be defining RoomEnactor, as described in the original tutorial. Instead of extending the Enactor class, we can describe an enactor’s properties in an XML file, and use EnactorXmlParser to create an instance of the enactor.
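
A minimal sketch of the idea, assuming a definition file named room-enactor.xml and a simple static factory method on EnactorXmlParser (the exact method name and signature are assumptions; see the full tutorial for the real call):

    // Sketch only: the EnactorXmlParser method name and signature are
    // assumptions; the XML file name is a placeholder.
    Enactor roomEnactor = EnactorXmlParser.createEnactor("room-enactor.xml");
    // From here the enactor behaves like one written in Java: it subscribes
    // to its input widget and updates its output widget when its rules fire.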

Note that while Generators are similar to Enactors (and in fact are subclasses of Enactor), there is currently no way to define them in XML. This is because generators are meant to use a “black-box” approach (e.g., hard-coded values, extraction from a database, or loading from a web service) to update widgets, and this mechanistic behavior is best represented in code. See the original tutorial for how to build a generator, specifically RoomGenerator.

Posted in Tutorial

Context Toolkit Primer – Part 1b: Widgets with XML

Widgets with XML

In this tutorial we explain how to create Widgets using XML. In particular, we will be defining the widgets RoomWidget and LightWidget, as described in the original tutorial. Instead of extending the Widget class, we can describe a widget’s properties in an XML file, and use WidgetXmlParser to create an instance of the widget.
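
A minimal sketch of the idea, assuming a definition file named room-widget.xml and a simple static factory method on WidgetXmlParser (the exact method name and signature are assumptions; see the full tutorial for the real call):

    // Sketch only: the WidgetXmlParser method name and signature are
    // assumptions; the XML file name is a placeholder.
    Widget roomWidget = WidgetXmlParser.createWidget("room-widget.xml");
    // The resulting widget carries the attributes declared in the XML file,
    // just as if Widget had been subclassed in Java.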

Posted in Tutorial

Context Toolkit Primer – Part 4: Context-Aware Application

With the Widgets that model contexts defined (in Part 1), rules defined to trigger output contexts when input contexts change (via Enactors in Part 2), and behavior defined when contexts take certain values (Part 3), we are ready to piece these components together into a context-aware application.

Posted in Tutorial

Context Toolkit Primer – Part 3: Services

In Part 2 of this primer, we described how to model logic using Enactors. In this post, we will cover the third step of modeling behavior with Services attached to Widgets.

Now that we have modeled how our context-aware application makes decisions, we would like to model how it behaves (or what it does) after it decides. We do this using Services. Services are coupled to widgets so that they can be executed. They can be considered actuators of widgets, and allow behaviors such as actually turning the lamp on, rather than just indicating a state. Note that widgets do not need services if they just store context state and do not need any behavior functionality. Services can also be requested to be executed remotely, so the caller (usually an enactor) does not need to be on the same machine.
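
To make the actuator idea concrete, here is a conceptual sketch of a light-switching service written as plain Java. The class, interface, and method names are assumptions for illustration, not the toolkit's actual Service API, which the full tutorial covers.

    // Conceptual sketch only: not the Context Toolkit's Service API.
    // A service is the "actuator" coupled to a widget: it actually switches
    // the lamp, while the widget merely stores the lamp's state.
    interface LampDriver {            // hypothetical hardware wrapper
        void switchOn();
        void switchOff();
    }

    public class LightService {
        private final LampDriver lamp;

        public LightService(LampDriver lamp) {
            this.lamp = lamp;
        }

        // Called (possibly via a remote request) when an enactor asks for the behavior.
        public void execute(String command) {
            if ("On".equals(command)) {
                lamp.switchOn();
            } else {
                lamp.switchOff();
            }
        }
    }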

Posted in Tutorial

Context Toolkit Primer – Part 2: Enactors

In Part 1 of this primer, we described how to model contexts using Widgets. In this post, we will cover the second step of modeling logic with Enactors.

There is now another way to create Enactors using XML, which may be more convenient for developers familiar with XML.

Context-aware applications take context information sensed from the environment or users, and make decisions based on it. Once we have defined the widgets to model the sensed and actuated states that our application cares about, the next step is to model the decisions in the application. In the Context Toolkit, we model decisions using Enactors. Functionally, an enactor can be thought of as an in-out box (encapsulation) that decides on an output state based on an input state. It subscribes to a widget to track (some or all of) its attribute values; the attribute values of this input widget represent the input state. The enactor then makes a decision on that input state to derive an output state. Currently, enactors only make discrete (or nominal) decisions, i.e., output states are discrete rather than continuous (like functions). Such decision-making processes are also called classifications.
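
As an illustration of the kind of discrete decision an enactor encapsulates, here it is written as plain Java rather than the Enactor API; the attribute name and threshold are assumptions for a HelloRoom-style example.

    // Illustrative only: the discrete (nominal) decision an enactor makes,
    // mapping an input widget's attribute value to an output state.
    static String decideLight(int brightness) {
        // Threshold is an assumption for this sketch.
        return (brightness < 100) ? "On" : "Off";
    }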

Posted in Tutorial