The aggregation of individual evaluations into a group evaluation is a key issue in decision theory. Inspired by the idea of collaboration on Web 2.0, two group evaluation methods are presented in which the evaluators who rate the objects under evaluation also weight one another collaboratively to form the group decision. The most commonly applied approach to group evaluation is to average the scores each object receives and take this average as the collective evaluation; this ignores differences between individual evaluators. The proposed methods offset the effect of an evaluator's irregular ratings by accounting for the contribution of each individual evaluation to the collective evaluation. Such irregularity is sometimes subjective in origin: for example, an evaluator who is prejudiced for or against certain objects will subjectively give them very high or very low ratings. The two nonlinear group evaluation methods proposed in this paper are achieved through mutual restraint between individual evaluators and, on an actual example and a synthetic data set, prove more stable than the traditional group evaluation method.
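The abstract does not specify the two methods' formulas, but the mutual-restraint idea it describes can be illustrated with a minimal sketch: an evaluator's weight is updated according to how closely that evaluator's ratings agree with the current weighted collective score, so a prejudiced evaluator's very high or very low ratings are damped. The function name, the inverse-deviation weighting rule, and the iteration count below are all illustrative assumptions, not the paper's actual methods.

```python
import numpy as np

def collaborative_weights(scores, iterations=50):
    """Sketch of mutual-restraint weighting (illustrative, not the paper's method).

    scores: (n_evaluators, n_objects) rating matrix.
    Each round, the collective score is a weighted average of the evaluators'
    ratings, and each evaluator is reweighted inversely to their mean
    deviation from that collective score, then the weights are renormalised.
    """
    n_eval = scores.shape[0]
    w = np.full(n_eval, 1.0 / n_eval)   # start from plain averaging
    for _ in range(iterations):
        collective = w @ scores         # weighted collective score per object
        dev = np.abs(scores - collective).mean(axis=1)  # per-evaluator deviation
        w = 1.0 / (1.0 + dev)           # closer evaluators contribute more
        w /= w.sum()                    # keep weights summing to 1
    return w, w @ scores

# Hypothetical data: evaluators 0 and 1 broadly agree; evaluator 2 is an outlier.
ratings = np.array([[4.0, 5.0, 3.0],
                    [4.0, 4.0, 3.0],
                    [1.0, 1.0, 5.0]])
weights, group_scores = collaborative_weights(ratings)
```

Under plain averaging each evaluator contributes 1/3 regardless of behaviour; here the outlier receives the smallest weight, which is the stabilising effect the abstract attributes to mutual restraint.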