Crowdvoting offers companies the opportunity to have business innovation ideas evaluated by a large number of contributors (the crowd), as an alternative to the currently prevalent practice of involving experts. In this paper we investigate whether the crowd's evaluation of a large number of complex ideas can match that of experts. The ideas used for this comparison were generated in an experiment involving students, in which eighty different business models served as the complex ideas. The results of the expert evaluation were compared with those of an online crowd engaged via a crowdvoting platform. Our results show that an anonymous online crowd is not as adept as experts at evaluating complex ideas such as business models. In contrast to previous studies of crowd evaluations of simple aesthetic tasks, our study provides first evidence of the limitations of crowd evaluation and warns against substituting the crowd for experts in the evaluation of more complex ideas.