University of Arkansas, Fayetteville


In both the real world and 3D virtual worlds, people and avatars (representations of people) need to communicate with the things around them. Without guidance, however, people cannot use the language that those things understand. The goal of our research is to extend the 3D virtual world Second Life® to better model pervasive computing and overcome these communication barriers. This paper shows how to build a dynamic menu-based user interface that enables humans to communicate with modeled entities. The focus is the applicability of object-specific grammars associated with things (objects in the real and virtual worlds) and a GUI of cascaded menus that guides people in “talking to” things. We discuss the prototype of a new virtual controller that takes us closer to the ultimate goal: a system that extends the Second Life user interface so that people can task robots through a menu interface.
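To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of how an object-specific grammar could drive a cascaded menu. The grammar shape, the `ROBOT_GRAMMAR` example, and the command strings are hypothetical: each menu level is a mapping from labels to sub-menus, and a leaf holds the command the object itself understands.

```python
# Hypothetical object-specific grammar for a robot "thing".
# Inner dicts are cascaded sub-menus; string leaves are the
# commands the object actually accepts.
ROBOT_GRAMMAR = {
    "Move": {"Forward": "move fwd", "Back": "move back"},
    "Speak": {"Hello": "say hello", "Status": "say status"},
}

def menu_options(grammar, path):
    """Return the menu labels to display after following `path`,
    or None if `path` already ends at a command leaf."""
    node = grammar
    for choice in path:
        node = node[choice]
    return sorted(node) if isinstance(node, dict) else None

def resolve(grammar, path):
    """Map a completed sequence of menu choices to the object's command."""
    node = grammar
    for choice in path:
        node = node[choice]
    if isinstance(node, dict):
        raise ValueError("path is incomplete; more menu levels remain")
    return node
```

For example, `menu_options(ROBOT_GRAMMAR, [])` yields the top-level menu `["Move", "Speak"]`, and `resolve(ROBOT_GRAMMAR, ["Move", "Forward"])` returns `"move fwd"`, the string the object understands. Because the grammar travels with the thing, the GUI needs no object-specific code: it simply walks whatever grammar the selected object provides.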