We envision that next-generation spoken dialogue systems will be mixed-initiative. However, it is unclear exactly how a mixed-initiative strategy should be designed: under what circumstances the system should take the initiative, and under what circumstances it should let the user do so. The initiative strategies used in human-human conversation are a good starting point, because they are natural for the user to follow. Studying human-human conversation, however, gives only a descriptive account of human strategies. In this paper, we explore the use of computer simulation to better understand human conventions and to give an explanatory account. We have two software agents solve a collaborative task using three different initiative strategies: one derived from an analysis of human-human dialogues, and two alternatives based on proposals in the literature. Our simulation results show that the former is more efficient than the others. This supports the explanation that people use an initiative strategy that minimizes collaborative effort.