TTS systems face a trade-off between size and speech quality: a larger acoustic inventory allows synthesis of more natural-sounding speech. The Asynchronous Interpolation Model (AIM) improves the quality-to-size ratio, enabling better compression of large acoustic inventories as well as better-quality speech from a small system. At maximum compression, our method represents most phonemes by a single frame of data. Coarticulation effects are specified as context-specific non-linear interpolation functions, and dividing the speech features into multiple data streams allows asynchronous interpolation. In this study, AIM was applied to LSF parameters; varying the number of streams provides a variable amount of compression. We used three different objective measures to investigate the effect of the number and partitioning of streams. The first few weight functions (and the last one) offer the most error reduction, and partitions separating the first 6 LSFs score well on all three measures.
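To make the core idea concrete, the following is a minimal sketch of asynchronous interpolation between two single-frame phoneme targets. The abstract does not specify the interpolation functions or the partitioning scheme, so everything here is an assumption for illustration: the sigmoid weight stands in for a context-specific non-linear interpolation function, the function and variable names are hypothetical, and the 4-coefficient frames are toy data rather than real LSFs.

```python
import math


def sigmoid_weight(t, steepness=8.0, midpoint=0.5):
    # Hypothetical non-linear interpolation weight: rises smoothly
    # from near 0 at t=0 to near 1 at t=1. A real system would use
    # context-specific functions learned or designed per transition.
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))


def interpolate_streams(frame_a, frame_b, partition, weights, t):
    """Interpolate between two single-frame phoneme targets.

    frame_a, frame_b: feature frames (e.g. LSF coefficients), one per phoneme.
    partition: list of index lists, one per stream.
    weights: one weight function per stream; because each stream has its
             own function, the streams transition asynchronously.
    t: normalized time in [0, 1] across the transition.
    """
    out = [0.0] * len(frame_a)
    for idx_list, w_fn in zip(partition, weights):
        w = w_fn(t)  # each stream evaluates its own transition timing
        for i in idx_list:
            out[i] = (1.0 - w) * frame_a[i] + w * frame_b[i]
    return out


# Toy example: two phoneme targets, each a single frame of 4 coefficients.
a = [0.1, 0.3, 0.5, 0.7]
b = [0.2, 0.4, 0.6, 0.8]

# Two streams: the low coefficients transition early (midpoint 0.3),
# the high coefficients transition late (midpoint 0.7).
partition = [[0, 1], [2, 3]]
weights = [lambda t: sigmoid_weight(t, midpoint=0.3),
           lambda t: sigmoid_weight(t, midpoint=0.7)]

frame = interpolate_streams(a, b, partition, weights, 0.5)
```

At the transition midpoint, the early stream has already moved most of the way toward the second target while the late stream has barely started, which is the asynchrony the model exploits: different feature groups change at different times across a phoneme boundary.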