State-of-the-art approaches to robot navigation among humans are typically restricted to planar motion actions. This work addresses the question of whether interaction actions, such as speaking, touching, and gesturing, can be beneficial for robots navigating unstructured, crowded environments. To do so, we first identify scenarios that are challenging for traditional motion planning methods. Based on the hypothesis that the variation in modality across these scenarios calls for significantly different planning policies, we design dedicated navigation behaviors as interaction planners for actuated mobile robots. We further propose a high-level planning algorithm for multi-behavior navigation, named Interaction Actions for Navigation (IAN). Through both real-world and simulation experiments, we validate the selected behaviors and the high-level planning algorithm, and discuss the implications of our results for our stated assumptions.