This article examines how the news-recommendation algorithms designed by Google and Facebook have monumentally failed at bringing readers trustworthy news. Both companies have poured large amounts of money and cutting-edge technology into machine-learning divisions to decipher what news users want to see, yet major flaws remain in the process: AI still cannot tell genuine reporting from trickery. A researcher at OpenAI says that even a machine that searches hundreds of sources cannot judge how truthful a claim may be. Machines are being benchmarked on fact-finding tasks, and while they are making strides in retrieving information, they still lack the innate human capability to assess truthfulness in written form. For fact-checking, there is simply no competition against a human.

One proposed workaround is to teach an algorithm not to trust information from certain sources (websites) at all. Truth is a fickle mistress, and while machines may be able to gather information by leaps and bounds, the interpretation of those facts is still best left to human eyes. Read the full article for more.
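The "distrust certain sources" idea could be sketched as a simple filter keyed on a claim's source domain. This is a minimal, purely illustrative sketch: the domain names, threshold, and function names below are assumptions, and a real system would learn reputation scores rather than hard-code a list.

```python
# Hypothetical sketch of teaching an algorithm to distrust certain sources:
# claims from known-untrusted domains are dropped before any further checking.
# The domains and threshold here are invented for illustration.

UNTRUSTED_DOMAINS = {"fake-news.example", "clickbait.example"}

def trust_score(source_domain: str) -> float:
    """Return a crude trust score: 0.0 for a known-bad domain, 1.0 otherwise."""
    return 0.0 if source_domain in UNTRUSTED_DOMAINS else 1.0

def filter_claims(claims: list[dict]) -> list[dict]:
    """Keep only claims whose source domain passes the trust threshold."""
    return [c for c in claims if trust_score(c["domain"]) > 0.5]

claims = [
    {"text": "Shocking miracle cure revealed", "domain": "clickbait.example"},
    {"text": "Budget bill passes committee", "domain": "trusted-wire.example"},
]
print(filter_claims(claims))  # only the trusted-wire.example claim survives
```

The obvious limitation, which the article's conclusion anticipates, is that such a filter only encodes where a claim came from, not whether it is true; judging the content itself still falls to human readers.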