As artificial intelligence (AI) becomes more pervasive, humans will interact with autonomous agents more frequently and in deeper ways. While a significant body of work addresses the interface between a single human and a single AI agent, less is known about how individuals react to AI when they are part of human-agent hybrids, namely groups of multiple humans and potentially multiple AI agents. These hybrid forms are unique in that advice is often given simultaneously, i.e., a human decision maker evaluates advice from other humans and algorithms at the same time. This scenario presents a boundary condition on the extant literature, as it is unclear how a human decision maker will differentially appraise a human advisor compared to an algorithmic advisor when advice is simultaneous. This study presents the results of three experiments in which individuals estimated property rental prices with the support of both human and algorithmic advice. We test whether explicitly labeling an advisor as an algorithm, rather than a human, affects how individuals perceive both the algorithm and their other advisor. We also examine the role of conflicting advice during simultaneous evaluation. Based on the results from 904 participants, we find that labeling an advisor as an algorithm produces a significant algorithmic appreciation bias, even when an equivalent human advisor is present. Further, we show that uncertainty induced by conflicting information weakens this appreciation effect, while agreement among advisors elicits the strongest behavioral responses.