NEW YORK - The fight against fake news is not just being waged by Google, Facebook and big media companies.
They are joined in the battle by academics and data scientists who started work on the subject years before bogus news stories were suspected of helping sway the 2016 presidential election.
Their work has yielded tools that help track how "alternative facts" spread, and others that let you identify fake stories or block them altogether.
Some of these are still baby steps, but they're a key, if largely unsung, part of the effort to tamp down the spread of fake stories.
And the researchers were there first.
For Giovanni Luca Ciampaglia, a research scientist at Indiana University, the phenomenon first caught his eye during the Ebola crisis in 2014.
"We started seeing a lot of content that was spreading, completely fabricated claims about importations of Ebola, (such as) entire towns in Texas being under quarantine," he said. "What caught our attention was that these claims were created using names of publications that sounded like newspapers. And they were getting a lot of traction on social media."
So he helped create a tool tracking how unsubstantiated claims spread online.
Deciphering Twitter rumors
Tanushree Mitra, a doctoral student at the Georgia Institute of Technology, began a project three years ago to see how misinformation and fake news spread through Twitter. At the time, she said, "companies like Facebook and Twitter were not paying much attention."
What attracted her to the project was the prevalence of fake news that spread online following natural disasters such as Superstorm Sandy in 2012. When she saw how much incorrect or misleading information people were sharing about those events, Mitra decided to track both big stories and smaller rumors, with the goal of creating an app that could help ordinary people sort fact from fiction and make decisions crucial to their wellbeing.
Mitra and her fellow researchers scanned 66 million tweets tied to nearly 1,400 real-world events to identify words and phrases associated with perceived levels of credibility. Looking at tweets surrounding news events in 2014 and 2015 - including the Ebola crisis, the Charlie Hebdo attack in Paris and the death of Eric Garner in a confrontation with police officers in New York City - they asked people to judge tweets based on how credible they thought the posts were.
Words such as "eager," ''terrific" and "undeniable" were linked to more credible posts, while words such as "ha," ''grins" and "suspects" were the opposite. A computer matched the humans' opinions 68 percent of the time. The next step, an app, could help people rate the credibility of tweets and other social media posts.
Tracking hoaxes
A group of researchers at Indiana University has created an online tool called Hoaxy that seeks to visualize "the spread of claims and related fact checking online." Although it's still a work in progress, Hoaxy can trace the origin of, for instance, the false claim that millions of votes in the 2016 presidential election were cast by "illegal aliens." Type in your search terms and Hoaxy will report back with stories that spread the claim, as well as fact-checking articles that debunked it.
In this instance, the claim goes back to a November article from Infowars.com that was shared 17,961 times on Twitter and 52,200 times on Facebook, according to Hoaxy. The site only tracks actual links people shared, so it misses anything that's paraphrased or posted without a link.
A data visualization tool shows the intertwined web of Twitter users who spread both the claims and the fact checks, and how they are connected to one another. The researchers focused on Twitter because the service makes more data available to the public than Facebook does, making it easier to use in data-tracking tools.
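As a rough illustration of that kind of diffusion tracking, the sketch below builds a tiny share graph with the networkx library and counts who spread claim links versus fact-check links. The accounts and edges are invented for the example; this is not Hoaxy's actual code or data.

```python
# Illustrative sketch only, not Hoaxy's code: build a small share graph and
# summarize who spread claim links versus fact-check links.
from collections import Counter

import networkx as nx

# Hypothetical (spreader, receiver, link type) records, e.g. from retweets.
shares = [
    ("user_a", "user_b", "claim"),
    ("user_a", "user_c", "claim"),
    ("user_c", "user_d", "claim"),
    ("factcheck_org", "user_b", "fact_check"),
    ("factcheck_org", "user_e", "fact_check"),
]

G = nx.DiGraph()
for src, dst, kind in shares:
    G.add_edge(src, dst, kind=kind)

# How many links of each type did every account push out?
spread_counts = Counter((u, d["kind"]) for u, _, d in G.edges(data=True))
for (user, kind), count in spread_counts.most_common():
    print(f"{user} spread {count} {kind} link(s)")

# Weakly connected components show how claim spreaders and fact-checkers
# end up intertwined in the same conversation.
print(list(nx.weakly_connected_components(G)))
```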
Lead a horse to water
Tools like Hoaxy or rumor-identification apps are only helpful if people use them. The same goes for another approach - using a web browser plug-in to identify or block fake-news stories. For instance, the Chrome extension "Fake News Alert," created last year, says it will tell you when you are visiting a site "known for spreading fake news."
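At its core, that kind of extension performs a simple lookup: compare the current page's hostname against a curated list of flagged domains. The Python sketch below shows the logic only; the real extension runs as JavaScript in the browser, and the domain list here is a hypothetical placeholder, not the list the extension actually uses.

```python
# Illustrative sketch of the core check a plug-in like "Fake News Alert"
# performs: compare a page's hostname against a curated domain list.
# The domains below are hypothetical placeholders.
from urllib.parse import urlparse

FLAGGED_DOMAINS = {"example-fakenews.com", "totally-real-news.net"}  # hypothetical

def is_flagged(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is on the list."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and each parent domain,
    # e.g. news.example-fakenews.com, then example-fakenews.com.
    return any(".".join(parts[i:]) in FLAGGED_DOMAINS for i in range(len(parts)))

print(is_flagged("http://news.example-fakenews.com/story"))  # True
print(is_flagged("https://apnews.com/article/123"))          # False
```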
But there are a few drawbacks. Many people aren't willing to go to the trouble of adding new extensions to their browser. And such extensions only work on the desktop version of Chrome, not its mobile counterpart.
"Fake News Alert" also uses a widely circulated but oft-criticized list of fake and misleading news sites assembled by a Merrimack College professor. The list casts a very broad net and includes some established, but highly partisan sites such as the right-wing Breitbart News and the left-wing Occupy Democrats.
A final obstacle: While fake news has been in the real news a lot, many people simply aren't that aware of it.
"A lot of consumers are not savvy about it," said Larry Chiagouris, a marketing professor at Pace University who follows the fake news phenomenon. "And of those that are - and it's a small number- not a lot of them add plug-ins to browsers."
Educate the people
Chiagouris believes we are at the "beginning of the beginning" when it comes to defining just what fake news is and how to combat it. But he and other experts say technological solutions like apps and plug-ins are unlikely to get to the root of the problem.
The real solution, he says, will start in school - "not college, grammar school."
The better educated and informed the public is, the more likely they are going to be "asking questions and exploring alternative sources of information," said Mike Posner, co-founder and co-director of the New York University Stern Center for Business and Human Rights. "What you really want is people saying they want to see different sides of an issue, looking at things by people who don't agree with me, so one (part of the solution) is public education."