In the last few years, social media have been weaponized to spread false information and influence users' opinions. Bots represent one of the main actors behind manipulation campaigns and interference operations on social media. Uncovering the strategies behind bots' activities is of paramount importance for developing computational tools to detect them. In this paper, we propose to explore bots' behavior by inspecting the digital weapons (i.e., sharing activities) they utilized to spread content and interact with humans in the run-up to the 2018 US Midterm Election. We observe that bots strategically mimicked human temporal activity patterns (beginning several months before the election) and balanced their interactions between the human and bot populations. The human response to bots' deceptive activity is alarming: one in three retweets performed by humans shares bot-generated content.