Due to online sharing, false and misleading information spreads at an alarming rate. In response, social media sites have begun appending credibility warnings to articles that human fact-checkers have deemed unreliable. However, information spreads faster than humans can fact-check it, which has led to a boom in computational fact-checking with artificial intelligence (AI). While computational assessments of credibility are becoming more accurate, two questions remain understudied: whether users perceive computational fact-checking as a trustworthy supplement to human fact-checking, and how it affects users' ability to correctly judge an article's credibility. We conducted a cross-sectional survey in which 204 respondents rated the credibility of four news articles, each randomly assigned a credibility warning (i.e., an assessment of the article's credibility attributed to either an AI agent or a human journalist, or no assessment at all). We found that AI warnings were at least as effective as journalist warnings at influencing participants' assessments of a news article's credibility, regardless of the warning's accuracy. Additionally, our results show that the magnitude of an article's sentiment and the user's understanding of AI both play a vital role in determining the effectiveness of AI warnings.