User FreemanPontifex posted about scraping his leg after he slipped on black ice. Then, a couple of automated fact-pumping bots turned his post into a battle royale over who could post the most accurate definition of “black ice.”
“I feel like I’m in some weird kind of robotic dystopia,” human redditor annodomini wrote on the thread.
When someone asked “what is black ice,” a bot named facts_sphere showed up with a definition scraped from Freebase, which had in turn scraped its definition from Wikipedia. When annodomini pointed out how convoluted the bot’s process was, autowikibot pulled up the same definition, straight from the source, with hyperlinks and neater formatting.
The thread began to resemble an after-school conversation with AIM bot SmarterChild, circa 2001. This isn’t some kind of multilingual, emotive, 2014-ready C-3PO we’re talking about. Bots such as these complete rudimentary tasks, like copy-and-pasting facts from Wikipedia.
These little exchanges highlight our increasingly complicated relationship with bots – a relationship often cloaked in irony, despite our insistence that it remain fair and genuine. Bots are amusing, and we’re okay with poking fun at them as long as they remain rudimentary and non-threatening.
Automated programs that roam the internet can be forces for good or evil, and we’ve written about how these bots are disrupting efforts to monitor web traffic. A few companies believe that journalism itself can be almost entirely automated, though a recent study unsurprisingly found computer-generated news to be boring and unappealing.
Regardless, the bots roaming social media are mostly novelties there to be made fun of, and we’re certainly still a few years off from an algorithm winning a Pulitzer Prize for journalism. Meanwhile, we keep developing more and more sophisticated tests to determine who’s real and who’s not.