Ubiquitous Meta User Interfaces (Ubi-MUI)

Intelligent Environments envision enhancing our everyday surroundings and our interaction with everyday objects through sensing, computing, and communication capabilities. Major characteristics of such environments are the increasing number of intelligent devices (ubiquity), their complexity, and their integration into the background (transparency): devices disappear or blend into the background and become invisible to the user.

However, because of this transparency, users fail to develop an adequate mental model for interacting with such environments. To overcome this challenge, new types of user interfaces are required that represent intelligent environments. Such representative user interfaces create an overall system image that helps users better understand the intelligent environment. In this sense, representative user interfaces are Ubiquitous Meta User Interfaces (Ubi-MUI) that can increase the transparency and predictability of the whole system by visualizing the environment's internal states, its perception and decision-making processes, the available services and devices, and its ongoing and upcoming adaptation plans. Using a Ubi-MUI, users can observe, analyze, understand, control, and customize the adaptive behavior and context-dependent interactions of their surroundings.
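The role of a Ubi-MUI described above — making an environment's perceptions and adaptation plans observable to the user — can be sketched as a thin observation layer over the environment's internal state. All names below (`Perception`, `AdaptationPlan`, `render_meta_ui`, and the example readings) are hypothetical illustrations, not an existing API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Perception:
    sensor: str   # which sensor produced the reading
    reading: str  # human-readable interpretation of the raw signal

@dataclass
class AdaptationPlan:
    trigger: str  # perceived situation that triggers the plan
    action: str   # what the environment intends to do

@dataclass
class EnvironmentState:
    """Internal state of a smart environment, exposed for user inspection."""
    perceptions: List[Perception] = field(default_factory=list)
    plans: List[AdaptationPlan] = field(default_factory=list)

def render_meta_ui(state: EnvironmentState) -> str:
    """Render the environment's internal state as text the user can inspect."""
    lines = ["Perceived:"]
    lines += [f"  {p.sensor}: {p.reading}" for p in state.perceptions]
    lines.append("Planned:")
    lines += [f"  on '{p.trigger}' -> {p.action}" for p in state.plans]
    return "\n".join(lines)

# Hypothetical living-room scenario
state = EnvironmentState(
    perceptions=[Perception("presence", "two people in living room")],
    plans=[AdaptationPlan("movie started", "dim lights to 20%")],
)
print(render_meta_ui(state))
```

In a real deployment the textual rendering would be replaced by the multimodal, game-based, or tangible presentations discussed below, but the underlying idea is the same: the meta user interface reads the environment's state and presents it, rather than adding new sensing of its own.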

Our research on Ubi-MUI includes:

  1. Representing Ambient Intelligence and Smart Environments
    1. Visualization and animation of sensing activities, decisions, and implicit interactions of intelligent systems, which allows users to understand and predict the behavior of the system.
    2. Creating awareness for implicit interaction concepts using multimedia artifacts
    3. Using avatars, storytelling, or gaming to introduce the functional capabilities and adaptive behavior of smart environments to novice users
  2. Concepts for Meta User Interfaces
    1. Multimodal, multimedia interfaces to control and modify implicit interactions and the environment's responsive activities
    2. User interfaces to shut down the perception features or responsive behaviors of intelligent environments
    3. Game-based interfaces to observe, control and modify system behavior
    4. Haptic, multi touch, or tangible ubiquitous meta user interfaces
    5. Supportive user interfaces that create a system face for smart environments
    6. Natural meta interaction concepts allowing users easy access to multimedia environments (search, exploration, manipulation, and control of media and devices), including touch- and gesture-based interfaces, 3D displays, and audio-immersive systems for augmented reality
    7. Metaphors and coordination algorithms for distributed conflict management
    8. Mechanisms that allow users explicit interaction with smart environments
  3. Implementation and Evaluation Aspects
    1. Methods to evaluate the added-value of multimedia assisted meta user interfaces
    2. User studies related to mental models for human-environment-interaction and meta user interfaces
    3. Novel approaches to effectively manage the complexity of development
    4. Real-time, decentralized media-processing architectures
    5. Middleware architectures for sensor and multimedia integration
    6. Activity analysis and domain observations related to phenomena such as loss of control and over-automation
    7. Smart multimedia sensors with the ability to capture, store, and process audio and video signals for situation recognition and implicit interaction purposes.
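One of the concepts listed under 2.2 above — user interfaces that let people shut down the perception features of an intelligent environment — can be illustrated with a minimal toggle model. This is a sketch under assumed names (`PerceptionControl`, the feature list), not an implementation from our systems:

```python
class PerceptionControl:
    """Meta-UI model letting users explicitly disable perception features."""

    def __init__(self, features):
        # Every feature starts enabled; users may opt out explicitly.
        self._enabled = {name: True for name in features}

    def shut_down(self, feature: str) -> None:
        """Disable a perception feature at the user's request."""
        if feature not in self._enabled:
            raise KeyError(f"unknown perception feature: {feature}")
        self._enabled[feature] = False

    def is_active(self, feature: str) -> bool:
        return self._enabled[feature]

ctl = PerceptionControl(["camera", "microphone", "presence"])
ctl.shut_down("microphone")          # user explicitly disables audio sensing
print(ctl.is_active("microphone"))   # False
print(ctl.is_active("camera"))       # True
```

The point of such a control surface is to counter the loss-of-control and over-automation phenomena mentioned in 3.6: the environment's implicit sensing remains transparent by default, but the user always holds an explicit off switch.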
