Navigation is the process by which people control their movement in virtual environments and is a core functional requirement for all virtual environment (VE) applications. Users need to be able to move, controlling orientation, direction of movement and speed, in order to achieve a particular goal within a VE. Navigation is rarely an end in itself (the goal is typically interaction with the visual representations of data), but applications often place high demands on users' navigation skills, which in turn means that a high level of navigation support is required from the application. In non-immersive desktop systems, navigation is usually supported through the standard hardware devices of mouse and keyboard. Previous work by the authors shows that many users experience frustration when trying to perform even simple navigation tasks: users complain of getting lost, becoming disorientated and finding the interface 'difficult to use'. In this paper we report on work in progress on exploiting natural language processing (NLP) technology to support navigation in non-immersive virtual environments. A multi-modal system has been developed which supports a range of high-level spoken navigation commands, and early indications are that spoken dialogue interaction is an effective alternative to mouse and keyboard interaction for many tasks. We conclude that multi-modal interaction, combining technologies such as NLP with mouse and keyboard input, may offer the most effective interaction with VEs, and we identify a number of areas where further work is necessary.
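
To illustrate the kind of mapping such a system must perform, the sketch below translates a small set of high-level spoken commands into low-level navigation actions. This is a minimal, hypothetical example, not the authors' implementation: the command grammar, action names and parameters are all invented for illustration.

```python
# Illustrative sketch (not the authors' system): mapping recognised
# high-level spoken commands to low-level VE navigation actions.
import re

# Hypothetical command grammar: fixed phrases mapped to actions.
COMMANDS = {
    "move forward": ("translate", (0.0, 0.0, 1.0)),
    "move back": ("translate", (0.0, 0.0, -1.0)),
    "turn left": ("rotate", -90.0),
    "turn right": ("rotate", 90.0),
}

def interpret(utterance):
    """Map a recognised utterance to an (action, parameter) pair.

    Returns None when the utterance falls outside the command
    grammar; a multi-modal system could then fall back to mouse
    and keyboard interaction.
    """
    text = utterance.lower().strip()
    # Named-destination commands, e.g. "go to the door".
    m = re.match(r"go to (?:the )?(\w+)", text)
    if m:
        return ("goto", m.group(1))
    return COMMANDS.get(text)

print(interpret("Turn left"))       # a relative rotation command
print(interpret("go to the door"))  # navigation to a named landmark
```

A real spoken-dialogue front end would of course handle far more variation in phrasing and resolve references against the scene model; the point here is only the separation between high-level commands and the low-level movement they stand for.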