Mobile intelligent multimedia presentation systems are subject to various resource constraints, including mobile network characteristics, mobile device capabilities and user preferences. Presentation systems that incorporate remote multimedia content accessed over HTTP (Hypertext Transfer Protocol) or RTP (Real-time Transport Protocol) are particularly reliant on the capabilities of the connecting mobile network (i.e. minimum, average and maximum bandwidth) and, in particular, on the real-time constraints (i.e. currently available bandwidth, packet loss, bit error rate and latency) that prevail during actual content transmission. One approach to addressing this is to scale content, thus reducing its data-rate requirement, although this technique is inherently limited by the lowest acceptable quality of each media element. Alternatively, content can be converted from one modality to another with a lower resource requirement. TeleMorph, a cross-modality adaptation control platform, is detailed here. First, a brief introduction to Intelligent Multimedia and to Mobile Intelligent Multimedia is given, and key systems are discussed. The main premise of TeleMorph is that cross-modality adaptations in mobile presentation systems must be controlled in a manner that gives primary consideration to bandwidth fluctuations, alongside the constraints listed above. The current prototype of TeleMorph, which uses a fuzzy inference system to control cross-modality adaptations between video and audio, is described, with particular focus on the fuzzy inputs, fuzzy control rules and fuzzy outputs used in decision making. TeleTuras, a tourist information application implemented as a testbed for TeleMorph, gives promising evaluation results based on multimedia- and bandwidth-specific test scenarios. Finally, TeleMorph is related to other approaches in the area of Mobile Intelligent Multimedia Presentation Systems.
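To illustrate the kind of fuzzy inference described above, the following is a minimal sketch of a Mamdani-style controller that maps available bandwidth to a modality choice between video and audio. The membership functions, rule weights and thresholds here are invented for illustration only and are not taken from the actual TeleMorph system.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def choose_modality(bandwidth_kbps):
    # Fuzzy inputs: degree to which the available bandwidth is low, medium or high.
    # Breakpoints (in kbit/s) are illustrative assumptions, not TeleMorph's values.
    low = tri(bandwidth_kbps, -1, 0, 64)
    medium = tri(bandwidth_kbps, 32, 96, 160)
    high = tri(bandwidth_kbps, 128, 256, 10**6)

    # Fuzzy rules: low bandwidth favours audio (suitability 0.0),
    # medium bandwidth is neutral (0.5), high bandwidth favours video (1.0).
    # Defuzzify by weighted average to obtain a crisp "video suitability".
    weights = low + medium + high
    video_suitability = (low * 0.0 + medium * 0.5 + high * 1.0) / weights
    return "video" if video_suitability >= 0.5 else "audio"
```

A controller of this shape degrades a presentation gracefully: as measured bandwidth falls, the fuzzy output crosses the decision threshold and video content is replaced by its lower-rate audio counterpart rather than being dropped outright.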
TeleMorph differs from other approaches in that it focuses specifically on the challenges posed by controlling bandwidth-determined cross-modality adaptations in a mobile network environment. Future work on TeleMorph's output presentation composition will also incorporate images and text, allowing extended adaptation among video, audio, images and text, as well as multimodal combinations of these media elements.