Current assistive smart homes have adopted a relatively rigid approach to modeling activities. The use of these activity models has introduced factors that block the adoption of smart home technology. To address this, goal-driven smart homes, based on more flexible activity structures, have been proposed. However, this goal-driven approach has a disadvantage: the flexibility of its activity modeling can make it difficult to provide illustrative guidance. To address this, a video analysis and nomination mechanism is required to provide suitable assistive clips for a given goal. This paper introduces a novel mechanism for nominating a suitable video clip from a pool of automatically generated metadata. This mechanism was then evaluated using a voice-based assistant application and a tool emulating assistance requests from a goal-driven smart home. The initial evaluation produced promising results.
Title of host publication: Ambient Assisted Living and Daily Activities
Publication status: Published - 11 Dec 2015
- Automated Speech Recognition
- Assistive Living
- Semantic Web
- Smart Environments
- Vocal Interaction